TYBSC CS Cloud Computing
(Computer Science)
SEMESTER - VI ( CBCS)
CLOUD COMPUTING
Prof.(Dr.) D. T. Shirke
Offg. Vice-Chancellor,
University of Mumbai,
Published by : Director,
Institute of Distance and Open Learning,
University of Mumbai, Vidyanagari, Mumbai - 400 098.
DTP Composed : ipin Enterprises, Tantia Jogani Industrial Estate, Unit No. 2,
Ground Floor, Sitaram Mill Compound, J.R. Boricha Marg, Mumbai - 400 011
Printed by : Mumbai University Press, Vidyanagari, Santacruz (E), Mumbai - 400 098
CONTENTS
Unit No. Title
1. Cloud Computing
4. Virtualization
6. Open Stack
Course: TOPICS (Credits : 03 Lectures/Week: 03)
USCS602 Cloud Computing
Objectives:
To provide learners with comprehensive and in-depth knowledge of Cloud Computing concepts,
technologies, architecture, implementations, and applications. To expose learners to frontier areas of
Cloud Computing, while providing sufficient foundations to enable further study and research.
Expected Learning Outcomes:
After successful completion of this course, learners should be able to articulate the main concepts, key
technologies, strengths, and limitations of cloud computing, and the possible applications of
state-of-the-art cloud computing using open-source technology. Learners should be able to identify the
architecture and infrastructure of cloud computing, including SaaS, PaaS, IaaS, public cloud, private
cloud, hybrid cloud, etc. They should be able to explain the core issues of cloud computing, such as
security, privacy, and interoperability.
Additional Reference(s):
1) OpenStack Essentials, Dan Radez, PACKT Publishing, 2015
2) OpenStack Operations Guide, Tom Fifield, Diane Fleming, Anne Gentle, Lorin Hochstein,
Jonathan Proulx, Everett Toews, and Joe Topjian, O'Reilly Media, Inc., 2014
3) https://www.openstack.org
1
CLOUD COMPUTING
Unit Structure
1.0 Objectives
1.1 Introduction to Cloud Computing
1.2 Characteristics and benefits of Cloud Computing
1.3 Basic concepts of Distributed Systems
1.4 Web 2.0
1.5 Service-Oriented Computing
1.6 Utility-Oriented Computing
1.7 Let us Sum Up
1.8 List of References
1.9 Unit End Exercises
1.0 OBJECTIVE
● To understand the concept of cloud computing.
● To understand the enabling technologies: virtualization, grid computing, and utility computing.
Virtualization
Types of Virtualization
1. Hardware virtualization
2. Server virtualization
3. Storage virtualization
4. Operating system virtualization
5. Data Virtualization
Fig. 1. SOA
Grid Computing
Utility Computing
● Cloud computing promises to transform computing into a utility delivered
over the Internet.
● Enterprise architecture is a function within IT departments that has
developed over time, playing a high-value role in managing transitions
to new technologies, such as cloud computing.
● Characteristics
○ Resource Pooling
Cloud providers pool their computing resources to serve multiple
customers using a multi-tenant model.
○ On-Demand Self-Service
This is one of the key and most valuable features of cloud computing: the
user can provision resources and regularly monitor server uptime,
capabilities, and allotted network storage on demand.
○ Easy Maintenance
The servers are easy to maintain, and downtime is very low; in some
situations there is no downtime at all.
○ Availability
The capacity of the cloud can be altered as per use and extended
considerably. The cloud monitors storage usage and allows the user to
purchase extra storage, if needed, for a very small fee.
○ Automatic System
Cloud computing automatically analyzes the resources needed and
supports a metering capability at some level of service.
○ Economical
It is a one-time investment: the provider buys the storage, parts of which
can be provided to many companies, saving them from monthly or yearly
costs.
○ Security
Cloud security creates snapshots of the stored data so that the data is not
lost even if one of the servers is damaged.
○ Measured Service
Resource usage is monitored and measured, and the provider records it
for billing and reporting.
● The following milestones characterize the evolution of distributed systems toward cloud computing:
1. Mainframe Computing
● Mainframes were the first examples of large computational facilities
leveraging multiple processing units.
● Mainframes are powerful, highly reliable computers specialized for
large data movement and massive input/output operations.
● Mainframe computing is used by large organizations for huge
data-processing tasks such as online transactions, enterprise resource
planning, and other operations that require processing significant
amounts of data.
● One of the most highlighted features of mainframe computing was high
reliability: mainframes were “always on” and capable of tolerating
failures transparently.
● No system shutdown was required to replace failed components; the
system continued working without interruption.
● Batch processing was the main application of mainframes. Although
their popularity and deployments have declined, evolved versions of
such systems are still in use for transaction processing.
● Examples include online banking, airline ticket booking, supermarkets,
telcos, and government services.
2. Cluster computing
● Cluster computing started as a low-cost alternative to the use of
mainframes and supercomputers.
● The technology advancements that produced faster and more powerful
mainframes and supercomputers eventually generated, as a side effect,
an increased availability of cheap commodity machines.
● These machines, connected by a high-bandwidth network and
controlled by specific software tools, can be managed as a single
system.
● Cluster technology contributed to the evolution of tools and
frameworks for distributed computing, examples including Condor,
Parallel Virtual Machine (PVM), and Message Passing Interface (MPI).
● One of the attractive features of clusters was that the computational
power of commodity machines could be leveraged to solve problems
that were previously tractable only on expensive supercomputers.
● Moreover, clusters could be easily extended if more computational
power was needed.
3. Grid computing
● Grid computing, using an analogy with the power grid, proposed a new
approach to accessing large computational power, huge storage
facilities, and a variety of services.
● Users can consume resources in the same manner as they use other
utilities such as power, gas, and water.
● Grids initially developed as aggregations of geographically dispersed
clusters connected by means of Internet links.
● These clusters belonged to different organizations, and arrangements
were made among them to share the computational power.
● Unlike a large cluster, a computing grid was a dynamic aggregation of
heterogeneous computing nodes, and its scale was nationwide or even
worldwide.
● Several developments made the diffusion of computing grids possible.
● Virtually any piece of code that performs a task can be turned into a
service and expose its functionalities through a network-accessible
protocol.
● A service is supposed to be loosely coupled, reusable, programming-
language independent, and location transparent. Loose coupling
allows services to serve different scenarios more easily and makes
them reusable.
● Independence from a specific platform increases a service's
accessibility.
● Accordingly, a wider range of clients, which can look up services in
global registries and consume them in a location-transparent manner,
can be served.
● Services are composed and aggregated into a service-oriented
architecture (SOA), which is a logical way of organizing software
systems to provide end users or other entities distributed over the
network with services through published and discoverable interfaces.
● Service-oriented computing introduces and diffuses two important
concepts, which are also fundamental to cloud computing:
1. Quality of service (QoS)
2. Software-as-a-Service (SaaS)
○ Quality of service:
■ QoS identifies a set of functional and nonfunctional attributes that can
be used to evaluate the behavior of a service from different viewpoints.
■ These could be performance metrics such as response time, or security
attributes, transactional integrity, reliability, scalability, and
availability.
■ QoS requirements are established between the client and the provider
through a service-level agreement (SLA) that identifies the minimum
values of the QoS attributes that must be satisfied upon the service call.
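As an illustration of an SLA check on one QoS attribute, the sketch below times a service call and compares the measured response time against an assumed minimum level; the 0.5-second bound and all function names are invented for this example.

```python
# Illustrative sketch (not from the text): checking one QoS attribute,
# response time, against a bound agreed in a hypothetical SLA.
import time

SLA_MAX_RESPONSE_SECONDS = 0.5  # assumed value negotiated in the SLA

def call_with_sla_check(service, *args):
    """Invoke a service and report whether the SLA response time held."""
    start = time.perf_counter()
    result = service(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed <= SLA_MAX_RESPONSE_SECONDS

# A stand-in "service": any callable reachable by the client.
def echo_service(payload):
    return payload.upper()

result, met_sla = call_with_sla_check(echo_service, "hello")
print(result, met_sla)  # a fast local call easily meets the 0.5 s bound
```

In a real deployment the measurement would happen on the client side of a network call, and repeated violations would trigger the penalties defined in the SLA.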
○ Software-as-a-Service
■ The idea of Software-as-a-Service introduces a new delivery model for
applications.
■ The term has been inherited from the world of application service
providers (ASPs), which deliver software services-based solutions
across the wide area network from a central datacenter and make them
available on a rental basis.
■ The ASP is responsible for maintaining the infrastructure and making
the application available, and the client is freed from maintenance
costs and difficult upgrades.
■ This software delivery model is possible because economies of scale
are reached by means of time-sharing.
■ The SaaS approach reaches its full development with service-oriented
computing.
■ Loosely coupled software components allow the delivery of complex
business processes and transactions as a service while allowing
applications to be composed on the fly and services to be reused from
everywhere and by anyone.
● The capillary diffusion of the Internet and the Web provides the
technological means to realize utility computing on a worldwide scale
and through simple interfaces.
● Computing grids provided a planet-scale distributed computing
infrastructure that was accessible on demand. Computing grids
brought the concept of utility computing to a new level.
● With utility computing accessible on a wider scale, it became easier to
provide a trading infrastructure where grid products (storage,
computation, and services) are bid on or sold.
● E-commerce technology provided the infrastructure support for utility
computing. For example, significant interest in buying any kind of
good online spread to the wider public: food, clothes, multimedia
products, and online services such as storage space and Web hosting.
● Applications were not only distributed, they started to be composed as
a mesh of services provided by different entities.
● These services, accessible through the Internet, were made available
by charging according to usage.
● SOC broadened the concept of what could be delivered as a utility in a
computer system: not only computing power and storage but also
services and application components could be acquired and integrated
on demand.
1.8 LIST OF REFERENCES
● Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola,
S. Thamarai Selvi, Tata McGraw Hill Education Private Limited, 2013.
2
ELEMENTS OF PARALLEL COMPUTING
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Elements of Parallel Computing
2.3 Elements of Distributed Computing
2.4 Technologies for Distributed Computing
2.5 Summary
2.6 Reference for further reading
2.7 Unit End Exercises
2.0 OBJECTIVES
● To understand the concept of parallel and distributed computing.
● To study the elements of parallel and distributed computing.
● To study the different types of technologies for distributed
computing.
2.1 INTRODUCTION
● The simultaneous growth in the availability of big data and in the
number of users on the Internet places particular pressure on the need
to carry out computing tasks in parallel, or simultaneously.
● Parallel and distributed computing spans many different topic areas in
computer science, including algorithms, computer architecture,
networks, operating systems, and software engineering.
● During the early 21st century there was explosive growth in
multiprocessor design and other strategies for running complex
applications faster.
● Parallel and distributed computing builds on fundamental systems
concepts, such as concurrency, mutual exclusion, consistency in state
or memory manipulation, message passing, and shared-memory
models.
● The first steps in this direction led to the development of parallel
computing, which encompasses techniques, architectures, and systems
for performing multiple activities in parallel.
● The term parallel computing has blurred edges with the term
distributed computing and is often used in place of the latter.
● In this chapter, we associate it with its proper characterization, which
involves the introduction of parallelism within a single computer by
coordinating the activity of multiple processors together.
○ Hardware improvements in pipelining, superscalar execution, and the
like are non-scalable and require sophisticated compiler technology.
Developing such compiler technology is a hard task.
Levels of parallelism
● Levels of parallelism are identified based on the chunks of code (grain
size) that can be a potential candidate for parallelism. The table below
lists the categories of code granularity for parallelism.
● All these approaches have a common goal:
○ To boost processor efficiency by hiding latency.
○ To hide latency, there must be another thread ready to run whenever a
lengthy operation occurs.
The idea is to execute concurrently two or more single-threaded
applications, such as compiling, text formatting, database searching, and
device simulation.
● As shown in the table and depicted in figure 5, parallelism within an
application can be detected at several levels:
○ Large grain (or task level)
○ Medium grain (or control level)
○ Fine grain (data level)
○ Very fine grain (multiple-instruction issue)
Fig. 5: Levels of Parallelism
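As a minimal illustration of the coarse, task-level end of this spectrum, the sketch below runs two independent single-threaded jobs concurrently using Python's standard thread pool; the two jobs themselves are invented for the example.

```python
# Task-level (large-grain) parallelism: two independent single-threaded
# jobs run concurrently, each hiding the other's latency.
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    return len(text.split())

def char_count(text):
    return len(text)

document = "cloud computing delivers IT services on demand"

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(word_count, document)   # task 1
    f2 = pool.submit(char_count, document)   # task 2, runs concurrently
    print(f1.result(), f2.result())
```

Finer-grained levels (control and data level) would instead split the loops or instructions inside one such task across processing units.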
● The components of a distributed system communicate through some
form of message passing, a term that encompasses several
communication models.
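The request/reply flavor of message passing can be sketched with two threads exchanging messages through in-memory queues, which stand in here for network channels; all names are illustrative.

```python
# Components exchange messages through queues, which here stand in
# for the network channels between a client and a server.
import queue
import threading

requests, replies = queue.Queue(), queue.Queue()

def server():
    msg = requests.get()            # receive a request message
    replies.put(f"echo: {msg}")     # send back a reply message

worker = threading.Thread(target=server)
worker.start()
requests.put("ping")                # the client sends a request
reply = replies.get()               # ...and blocks until the reply arrives
worker.join()
print(reply)
```

Other models mentioned by the term (publish/subscribe, one-way notifications) differ mainly in whether and when a reply flows back.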
Architectural styles for distributed computing
● Architectural styles are mainly used to define the vocabulary of
components and connectors that are used as instances of the style,
together with a set of constraints on how they can be combined.
● Architectural styles for distributed systems are helpful in
understanding the different roles of components in the system and how
they are distributed across multiple machines.
● The architectural styles are organized into two major classes:
○ Software architectural styles
○ System architectural styles
● The first class relates to the logical organization of the software; the
second class contains all those styles that express the physical
organization of distributed software systems in terms of their major
components.
● Therefore, developing a system leveraging RPC for IPC involves the
following steps:
○ Design and implementation of the server procedures that will be
exposed for remote invocation.
○ Registration of the remote procedures with the RPC server on the node
where they will be made accessible.
○ Design and implementation of the client code that invokes the remote
procedures.
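The three steps above can be sketched with Python's built-in XML-RPC support; the `add` procedure, the use of port 0 (let the OS pick a free port), and running both sides in one process are our own illustrative choices.

```python
# A minimal RPC sketch following the three steps above, using Python's
# standard-library XML-RPC modules.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Step 1: implement the server procedure to be exposed remotely.
def add(a, b):
    return a + b

# Step 2: register the procedure with the RPC server on this node.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Step 3: the client invokes the remote procedure by name; arguments
# are marshalled over the network and the call executes server-side.
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)
print(result)
server.shutdown()
```

In a real deployment the client and server would run on different nodes, but the three-step structure is the same.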
.NET remoting
● .NET Remoting is the technology enabling IPC among .NET
applications.
● It provides developers with a uniform platform for retrieving remote
objects from within any application developed in any of the languages
supported by .NET.
Service-oriented computing
● Service-oriented computing organizes distributed systems in terms of
services, which represent the main abstraction for building systems.
● Service orientation expresses applications and software systems as
aggregations of services that are coordinated within a service-oriented
architecture (SOA).
● A service encapsulates a software component that provides a set of
coherent and related functionalities that can be reused and integrated
into larger and more complex applications. The term service is a
general abstraction that encompasses many different implementations
using different technologies and protocols.
● Four major characteristics identify a service:
1. Boundaries are explicit.
2. Services are autonomous.
3. Services share the schema and contracts, not class or interface
definitions.
4. Service compatibility is determined based on policy.
Service-oriented architecture
● SOA is an architectural style supporting service orientation.
● It arranges a software system into a collection of interacting services.
● SOA encloses a set of design principles that structure system
development and provide means for integrating components into a
coherent and decentralized system.
● SOA-based computing packages functionalities into a set of
interoperable services, which can be integrated into different software
systems belonging to individual business domains.
● The following guiding principles, which characterize SOA platforms,
are winning features within an enterprise context:
○ Standardized service contract.
○ Loose coupling
○ Abstraction.
○ Reusability.
○ Autonomy
○ Lack of state
○ Discoverability
○ Composability
2.5 LET US SUM UP
● Parallel and distributed computing emerged as a solution for solving
complex computational problems.
● Parallel computing introduces models and architectures for performing
multiple tasks within a single computing node or a set of tightly
coupled nodes with homogeneous hardware.
● Parallelism is achieved by leveraging hardware capable of processing
multiple instructions in parallel.
● Distributed systems constitute a large umbrella under which several
different software systems are classified.
3
CLOUD COMPUTING ARCHITECTURE
Unit Structure
3.0 Objective
3.1 Introduction
3.2 Cloud Computing Architecture
3.3 The cloud reference model
3.4 Cloud Computing Services: SAAS, PAAS, IAAS
3.5 Types of clouds.
3.6 Summary
3.7 Reference for further reading
3.8 Unit End Exercises
3.0 OBJECTIVE
● To understand the architecture of cloud computing.
● To understand the different types of cloud computing services.
● To understand the enterprise architecture used in cloud computing.
● To study the different types of clouds.
3.1 INTRODUCTION
● Cloud Computing can be defined as the practice of using a network of
remote servers hosted on the Internet to store, manage, and process
data, rather than a local server or a personal computer.
● Organizations offering such types of cloud computing services are
called cloud providers and charge for cloud computing services based
on their usage.
● Grids and clusters are the base for cloud computing.
● When we deliver a specific service to the end user, different layers
can be stacked on top of the virtual infrastructure: a virtual
machine manager, a development platform, or a specific application
middleware.
● The cloud computing paradigm emerged as a result of the
convergence of various existing models, technologies, and concepts
that changed the way we deliver and use IT services.
● A definition of Cloud computing:
“Cloud computing is a utility-oriented and Internet-centric way of
delivering IT services on demand. These services cover the entire
computing stack: from the hardware infrastructure packaged as a set of
virtual machines to software services such as development platforms and
distributed applications.”
management layer is frequently integrated with other IaaS solutions that
provide physical infrastructure and adds value.
● SaaS is a one-to-many software delivery model whereby an
application is shared across multiple users.
● Examples include CRM and ERP applications that serve the common
needs of almost all enterprises, from small to medium-sized and large
businesses.
● This scenario facilitates the development of software platforms that
provide a general set of features and support specialization and ease of
integration of new components.
● SaaS applications are naturally multitenant.
● The term SaaS was coined in 2001 by the Software & Information
Industry Association (SIIA).
● The analysis done by the SIIA was mainly oriented to cover
application service providers (ASPs) and all their variations, which
capture the concept of software applications consumed as a service in
a broad sense.
● Core characteristics of SaaS:
○ The product sold to customers is application access.
○ The application is centrally managed.
○ The service delivered is one-to-many.
○ The service delivered is an integrated solution delivered on the
contract, which means provided as promised by the contract.
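The one-to-many, centrally managed nature of SaaS can be sketched as a single application instance whose data is partitioned per tenant; the class, method, and tenant names below are invented for illustration.

```python
# Sketch of the one-to-many SaaS idea: one centrally managed instance
# serves many customers (tenants), each seeing only its own data.
class CrmApp:
    def __init__(self):
        self._contacts = {}          # tenant_id -> that tenant's records

    def add_contact(self, tenant_id, name):
        self._contacts.setdefault(tenant_id, []).append(name)

    def list_contacts(self, tenant_id):
        # Each tenant sees only its own partition of the shared instance.
        return list(self._contacts.get(tenant_id, []))

app = CrmApp()                       # the single shared instance
app.add_contact("acme", "Alice")
app.add_contact("globex", "Bob")
print(app.list_contacts("acme"))     # isolated from globex's data
```

Real SaaS platforms enforce this partitioning at every layer (database, cache, access control), but the economic point is the same: one deployment, many customers.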
Platform as a service
● Platform-as-a-Service (PaaS) provides a development and
deployment platform for running applications in the cloud.
● PaaS solutions constitute the middleware on top of which applications
are built.
● The following figure shows a general overview of the features
characterizing PaaS.
Infrastructure as a service or hardware as a service
● Infrastructure-as-a-Service (IaaS) is the most popular and developed
market segment of cloud computing.
● IaaS solutions deliver customizable infrastructure on demand.
● IaaS offerings range from single servers to entire infrastructures,
including network devices, load balancers, and database and Web
servers.
● The main technology used to deliver and implement these solutions is
hardware virtualization:
○ one or more virtual machines configured and interconnected
○ Virtual machines also constitute the atomic components that are
deployed and charged according to the specific features of the virtual
hardware:
■ memory
■ number of processors, and
■ disk storage
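A hypothetical sketch of this charging model: a VM's hourly price is derived from its virtual-hardware features. The rates and flavor definitions below are invented, not taken from any real provider.

```python
# Illustrative only: pricing a VM instance by its virtual hardware
# (memory, vCPUs, disk), as IaaS providers do. Rates are invented.
HOURLY_RATES = {"memory_gb": 0.01, "vcpus": 0.03, "disk_gb": 0.0005}

def hourly_charge(flavor):
    """Price a VM instance from its virtual-hardware configuration."""
    return sum(HOURLY_RATES[k] * v for k, v in flavor.items())

small = {"memory_gb": 2, "vcpus": 1, "disk_gb": 20}
large = {"memory_gb": 16, "vcpus": 8, "disk_gb": 160}
print(round(hourly_charge(small), 4), round(hourly_charge(large), 4))
```

Providers typically package such configurations as named "flavors" or "instance types" so that billing and scheduling can reason about discrete sizes.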
● IaaS brings all the benefits of hardware virtualization:
○ workload partitioning
■ application isolation
■ sandboxing, and
■ hardware tuning
● HaaS allows better utilization of the IT infrastructure and provides a
safer environment for executing third-party applications.
● Figure 2 shows an overall view of the components that make up an
Infrastructure-as-a-Service solution.
● It is possible to identify three principal layers:
○ the physical infrastructure
○ the software management infrastructure
○ the user interface
● At the top layer, the user interface provides access to the services
exposed by the software management infrastructure. This type of
interface is generally based on Web technologies:
○ Web services
○ RESTful APIs, and
○ mash-ups
● These technologies allow applications or end users to access the
services exposed by the underlying infrastructure.
● A central role is played by the scheduler, which is in charge of
allocating the execution of virtual machine instances. The scheduler
communicates with the other components, which perform a variety of
tasks.
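The scheduler's allocation role can be illustrated with a toy first-fit placement policy; the host names, capacities, and the policy itself are simplifications invented for this sketch.

```python
# A toy first-fit scheduler: allocate each requested VM to the first
# host with enough free capacity (measured here only in vCPUs).
def schedule(vm_requests, hosts):
    """Map each VM request (name, vcpus) to a host, or None if full."""
    placement = {}
    free = dict(hosts)                     # host -> free vCPUs
    for vm, vcpus in vm_requests:
        for host, capacity in free.items():
            if capacity >= vcpus:
                placement[vm] = host
                free[host] -= vcpus
                break
        else:
            placement[vm] = None           # no host can run this VM
    return placement

hosts = {"node1": 4, "node2": 8}
vm_requests = [("vm-a", 4), ("vm-b", 2), ("vm-c", 8)]
print(schedule(vm_requests, hosts))
```

Production schedulers (such as the one in OpenStack) weigh many more dimensions, including memory, disk, affinity, and host load, but the core job is the same: matching VM requests to physical capacity.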
● Public clouds.
○ The cloud is open to the wider public.
● Private clouds.
○ The cloud is implemented within the private premises of an institution
and generally made accessible to the members of the institution.
● Hybrid clouds.
○ The cloud is a combination of the two previous types and most likely
identifies a private cloud that has been augmented with services hosted
in a public cloud.
● Community clouds.
○ The cloud is characterized by a multi-administrative domain involving
different deployment models (public, private, and hybrid).
Public clouds
● Public clouds constitute the first expression of cloud computing.
● They are a realization of the canonical view of cloud computing, in
which the services offered are made available to anyone, from
anywhere, and at any time through the Internet.
● From a structural point of view, they are a distributed system, most
likely composed of one or more data centers connected together, on
top of which the specific services offered by the cloud are
implemented.
● Any customer can easily sign up with the cloud provider, entering a
username, password, and billing details.
● Public clouds offer a way to reduce IT infrastructure costs and serve as
a viable option for handling peak loads on the local infrastructure.
● They are useful for small enterprises, which can start their businesses
without large up-front investments by relying completely on public
infrastructure for their IT needs.
● A public cloud can offer any type of service: infrastructure, platform,
or applications. For example, Amazon EC2 is a public cloud that
delivers infrastructure as a service; Google AppEngine is a public
cloud that provides an application development platform as a service;
and Salesforce.com is a public cloud that provides software as a
service.
Private clouds
● Private clouds are similar to public clouds, but their resource-
provisioning model is restricted within the boundaries of an
organization.
● Private clouds have the advantage of keeping core business operations
in-house by relying on the existing IT infrastructure and reducing the
cost of maintaining it once the cloud has been set up.
● A private cloud can provide services to a diverse range of users within
the organization.
● A further advantage of private clouds is the possibility of testing
applications and systems at a comparatively lower cost than on public
clouds before deploying them on the public virtual infrastructure.
● The main advantages of a private cloud computing infrastructure are:
1. Customer information protection.
2. Infrastructure ensuring SLAs.
3. Compliance with standard procedures and operations.
Hybrid clouds
● A hybrid cloud is an attractive opportunity for taking advantage of the
best of both private and public clouds. This has driven the
development and diffusion of hybrid clouds.
● Hybrid clouds enable enterprises to exploit existing IT infrastructures,
maintain sensitive information within the premises, and naturally grow
and shrink by provisioning external resources and releasing them
when they are no longer needed.
● Figure 5 gives a general overview of a hybrid cloud:
○ It is a heterogeneous distributed system consisting of a private cloud
that integrates supplementary services or resources from one or more
public clouds.
○ For this reason they are also called heterogeneous clouds.
○ Hybrid clouds address scalability issues by leveraging external
resources for exceeding capacity demand.
Community clouds
● Community clouds are distributed systems created by integrating the
services of different clouds to address the specific needs of an
industry, a community, or a business sector.
● The National Institute of Standards and Technology (NIST)
characterizes community clouds as follows:
○ The infrastructure is shared by several organizations and supports a
specific community that has shared concerns.
○ It may be managed by the organizations or a third party and may
exist on premises or off premises.
○ Media industry
■ Companies are looking for low-cost, agile, and simple solutions to
improve the efficiency of content production.
■ Most media productions involve an extended ecosystem of partners.
■ Community clouds can provide a shared environment where services
can facilitate business-to-business collaboration and provide the
horsepower, in terms of aggregate bandwidth, CPU, and storage,
required to efficiently support media production.
○ Healthcare industry.
■ In the healthcare industry, there are different scenarios in which
community clouds are used.
■ Community clouds provide a global platform on which to share
information and knowledge without revealing sensitive data
maintained within the private infrastructure.
■ The naturally hybrid deployment model of community clouds supports
storing patient data in a private cloud while using the shared
infrastructure for noncritical services and for automating processes
within hospitals.
○ Public sector.
■ Restrictions in the public sector can limit the adoption of public cloud
offerings.
■ Governmental processes involve several institutions and agencies and
are aimed at providing strategic solutions at local, national, and
international administrative levels.
■ They involve business-to-administration, citizen-to-administration,
and possibly business-to-business processes.
■ Examples include invoice approval, infrastructure planning, and
public hearings.
○ Scientific research.
■ This is an interesting example of community clouds.
■ In this case, the common interest driving different organizations to
share a large distributed infrastructure is scientific computing.
● Community.
Because the community provides the resources and services, the
infrastructure turns out to be more scalable.
● Graceful failures.
Since there is no single provider or vendor in control of the infrastructure,
there is no single point of failure.
3.6 SUMMARY
● Three service models. Software-as-a-Service (SaaS), Platform-as-a-
Service (PaaS), and Infrastructure-as-a-Service (IaaS).
● Four deployment models. Public clouds, private clouds, community
clouds, and hybrid clouds.
● Although cloud computing has been rapidly adopted in industry, there
are several open research challenges in areas such as the management
of cloud computing systems, their security, and social and
organizational issues.
4
VIRTUALIZATION
Unit Structure
4.0 Objective
4.1 Introduction
4.2 Characteristics of Virtualized Environments
4.3 Taxonomy of Virtualization Techniques.
4.4 Summary
4.5 Reference for further reading
4.6 Unit End Exercises
4.0 OBJECTIVE
● To understand the fundamental components of cloud computing
● To study the application running on an execution environment using
virtualization.
● To understand the different virtualization techniques.
● To study the characteristics of Virtualized Environments.
4.1 INTRODUCTION
● Virtualization is a large universe of technologies and concepts that
provide an abstract environment, whether virtual hardware or an
operating system, in which to run applications.
● The word virtualization is often synonymous with hardware
virtualization, which plays a fundamental role in effectively delivering
Infrastructure-as-a-Service (IaaS) solutions for cloud computing.
● Virtualization technologies provide virtual environments at the
operating system level, the programming language level, and the
application level.
● Virtualization technologies provide a virtual environment for not only
executing applications but also for storage, memory, and networking.
1. Increased performance and computing capacity.
a. At present, the average end-user desktop computer is powerful enough
to meet almost all the requirements of everyday computing, with extra
capacity that is rarely used.
b. All these desktop computers have enough resources to host a virtual
machine manager and execute a virtual machine with acceptable
performance.
c. The same consideration applies to the high-end side of the PC market,
where supercomputers can provide immense compute power that can
accommodate the execution of hundreds or thousands of virtual
machines.
3. Lack of space
a. The ongoing need for additional capacity, whether storage or compute
power, makes data centers grow rapidly.
b. Companies such as Google and Microsoft expand their infrastructures
by building data centers as large as football fields that are able to host
thousands of nodes.
c. Although this is viable for IT giants, in most cases enterprises cannot
afford to build another data center to accommodate additional resource
capacity.
d. This condition, along with hardware underutilization, has led to the
diffusion of a technique called server consolidation, for which
virtualization technologies are fundamental.
4. Greening initiatives.
a. Nowadays, companies are increasingly looking for ways to reduce the
amount of energy they consume and to reduce their carbon footprint.
b. Data centers are among the major power consumers; they contribute
significantly to the impact that a company has on the environment.
c. Keeping a data center operational involves not only keeping servers on;
a great deal of energy is also consumed in keeping them cool.
d. Infrastructures for cooling have a significant impact on the carbon
footprint of a data center.
e. Hence, reducing the number of servers through server consolidation
will surely reduce the impact of cooling and power consumption of a
data center. Virtualization technologies can provide an efficient way of
consolidating servers.
5. Rise of administrative costs.
a. Nowadays, power consumption and cooling costs are higher than the
cost of the IT equipment itself.
b. The increased demand for extra capacity translates into more servers
in a data center,
c. which in turn is responsible for a significant increase in administrative
costs.
d. Common system administration duties consist of hardware
monitoring, defective hardware replacement, server setup and updates,
server resources monitoring, and backups.
e. These are labor-intensive operations, and the higher the number of
servers that have to be managed, the higher the administrative costs.
f. Virtualization helps to reduce the number of required servers for a
given workload, thus reducing the cost of the administrative
manpower.
1. Sharing.
a. Virtualization enables the formation of a separate computing
environment within the same host.
b. In this way it is possible to fully utilize the capabilities of a powerful
host, which would otherwise be underutilized.
c. Sharing is an important feature in virtualized data centers, where this
basic feature is used to reduce the number of active servers and limit
power consumption.
2. Aggregation.
a. Virtualization also enables aggregation, the opposite process.
b. A group of different hosts can be bound together and represented to
guests as a single virtual host.
c. This function is naturally implemented in middleware for distributed
computing, with a classical example represented by cluster
management software, which harnesses the physical resources of a
homogeneous group of machines and represents them as a single
resource.
3. Emulation.
a. Guest programs are executed inside an environment that is controlled
by the virtualization layer, which ultimately is a program.
b. This enables controlling and tuning the environment that is exposed to
guests. For example, a completely different environment from the host's
can be emulated, thus allowing the execution of guest programs that
require specific characteristics not available in the physical host.
c. This feature becomes very important for testing purposes, where a
specific guest has to be validated against different platforms or
architectures that are not easily attainable during development.
4. Isolation.
a. Virtualization provides guests, whether operating systems, applications,
or other entities, with a completely separate environment in which they
are executed.
b. The guest program accomplishes its activity by interacting with an
abstraction layer, which gives access to the underlying resources.
c. Isolation comes with several benefits; for example, it enables multiple
guests to run on the same host without interfering with each
other.
d. Second, it provides separation between the host and the guest.
e. The virtual machine can filter the activity of the guest and prevent
harmful operations against the host.
4.2.3 Portability
● The concept of portability applies in different ways according to the
specific type of virtualization considered.
● In the case of a hardware virtualization solution, the guest is packaged
into a virtual image that, in most cases, can be safely moved and
executed on top of different virtual machines.
● Virtual images are generally proprietary formats that need a specific
virtual machine manager to be executed. In programming-level
virtualization, as implemented by the JVM or the .NET runtime, the
binary code representing application components can be run without any
recompilation on any implementation of the corresponding virtual
machine.
● This makes the application development cycle more flexible and
application deployment very straightforward: One version of the
application, in most cases, is able to run on different platforms with no
updates.
● Last, portability allows your own system to always be with you and
ready to use as long as the required virtual machine manager is
available.
● This requirement is, in general, less rigorous than having all the
applications and services you need available to you anywhere you go.
4.3 TAXONOMY OF VIRTUALIZATION TECHNIQUES
● Within these two categories we can list various techniques that provide
the guest a different type of virtual computing environment: bare
hardware, operating system resources, low-level programming
language, and application libraries.
Execution virtualization
● Execution virtualization consists of all methods that aim to emulate an
execution environment that is different from the one hosting the
virtualization layer.
● All these techniques focus their interest on providing support for the
execution of programs, whether these are the operating system, a
binary specification of a program compiled against an abstract
machine model, or an application.
● Hence execution virtualization can be implemented directly on top of the
hardware by the OS, an application, or libraries dynamically or
statically linked to an application image.
2. Hardware-level virtualization
● Hardware-level virtualization is a technique that provides an abstract
execution environment in terms of computer hardware on top of which
a guest operating system can be run.
● In this model, the guest is represented by the operating system, the host
by the physical computer hardware, the virtual machine by its emulation,
and the virtual machine manager by the hypervisor, as shown in figure 5.
● The hypervisor is generally a program, or a combination of software
and hardware, that allows the abstraction of the underlying physical
hardware.
Hypervisors
● A fundamental element of hardware virtualization is the hypervisor. It
recreates a hardware environment in which guest operating systems are
installed.
● Types of hypervisor:
○ Type I hypervisors run directly on top of the hardware. Therefore, they
take the place of the OS and interact directly with the ISA
interface exposed by the underlying hardware, and they emulate this
interface in order to allow the management of guest operating
systems. This type of hypervisor is also called a native virtual machine.
○ Type II hypervisors require the support of an operating system to provide
virtualization services. This means they are programs managed by the OS,
which interact with it through the ABI and emulate the ISA of virtual
hardware for guest operating systems, as shown in figure 6.
○ This technique was initially introduced in the IBM System/370.
Examples are the extensions to the x86 64-bit architecture introduced
with Intel VT and AMD-V.
● Full virtualization.
○ Full virtualization is the ability to run a program, such as an operating
system, directly on top of a virtual machine and without any
modification, as though it were run on the raw hardware.
● Paravirtualization.
○ This is a non-transparent virtualization solution that allows implementing
thin virtual machine managers.
○ Paravirtualization methods expose a software interface to the virtual
machine that is slightly modified from the host's and, as a
consequence, guests need to be modified.
● Partial virtualization.
○ Partial virtualization provides a partial emulation of the underlying
hardware, thus not allowing the complete execution of the guest
operating system in complete isolation.
○ Partial virtualization allows many applications to run transparently,
but not all the features of the OS can be supported.
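Whether a CPU offers the hardware-assisted extensions mentioned above can be checked on Linux by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in `/proc/cpuinfo`. The snippet below is a small illustrative sketch that inspects a flags line; on a real host the line would be read from `/proc/cpuinfo` rather than supplied as a sample string.

```python
# Sketch: detecting hardware-assisted virtualization support from the
# "flags" line of /proc/cpuinfo (Intel VT -> "vmx", AMD-V -> "svm").

def virtualization_extension(cpu_flags):
    """Return which extension a CPU's flag list advertises, if any."""
    flags = set(cpu_flags.split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None  # only software techniques such as binary translation remain

# Shortened, made-up flags line for illustration:
sample = "fpu vme de pse tsc msr pae mce vmx sse2 ht"
print(virtualization_extension(sample))  # prints Intel VT-x
```

Without either flag, a Type I hypervisor cannot rely on hardware assistance and must fall back on software techniques such as the ones described in the following paragraphs.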
Operating system-level virtualization
● Operating system-level virtualization creates different and separated
execution environments for applications that are controlled and
handled concurrently.
● Operating systems supporting this type of virtualization are general-purpose,
time-shared operating systems with the capability to provide
stronger namespace and resource isolation.
3. Application-level virtualization
● Application-level virtualization allows applications to be executed in
runtime environments that do not natively support all the features required
by such applications.
4.4 SUMMARY
● The term virtualization is a large umbrella under which a variety of
technologies and concepts are classified.
● The common root of all forms of virtualization is the ability to provide
the illusion of a specific environment, whether a runtime environment,
a storage facility, a network connection, or a remote desktop, by using
some kind of emulation or abstraction layer.
5
VIRTUALIZATION & CLOUD
COMPUTING
Unit Structure
5.0 Objective
5.1 Introduction
5.2 Virtualization and Cloud Computing
5.3 Pros and Cons of Virtualization
5.4 Virtualization using KVM
5.5 Creating virtual machines
5.6 Virt - management tool for virtualization environment
5.7 Open challenges of Cloud Computing
5.8 Summary
5.9 Reference for further reading
5.10 Unit End Exercises
5.0 OBJECTIVE
● Understand the concept of virtualization and cloud computing.
● To study the pros and cons of virtualization.
● To study virtualization using KVM.
● To understand the open challenges of cloud computing.
5.1 INTRODUCTION
● Virtualization is the “creation of a virtual version of something, such
as a server, a desktop, a storage device, an operating system or
network resources”.
● Put another way, virtualization is a technique that allows sharing a
single physical instance of a resource or an application among multiple
customers and organizations.
● It does this by assigning a logical name to a physical resource and
providing a pointer to that physical resource when demanded.
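The logical-name-plus-pointer idea can be sketched in a few lines: consumers only ever refer to the logical name, so the physical backing can be remapped without their knowledge. The names and device paths below are invented for illustration.

```python
# Minimal sketch of the "logical name -> physical resource" indirection:
# consumers only see the logical name, so the physical backing can be
# remapped (e.g., moved to another disk) without the consumer noticing.

class VirtualStorage:
    def __init__(self):
        self._mapping = {}  # logical name -> physical location

    def attach(self, logical_name, physical_location):
        self._mapping[logical_name] = physical_location

    def resolve(self, logical_name):
        return self._mapping[logical_name]  # "pointer" handed out on demand

store = VirtualStorage()
store.attach("volume-a", "/dev/sdb1")   # hypothetical device path
print(store.resolve("volume-a"))        # prints /dev/sdb1
store.attach("volume-a", "/dev/sdc1")   # remapped transparently
print(store.resolve("volume-a"))        # prints /dev/sdc1
```

This indirection is what lets a virtualization layer migrate or replace the physical resource while clients keep using the same logical name.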
5.2 VIRTUALIZATION AND CLOUD COMPUTING
● Virtualization plays an important role in cloud computing since it
allows for the appropriate degree of customization, security, isolation,
and manageability that are fundamental for delivering IT services on
demand.
● Virtualization technologies are primarily used to offer configurable
computing environments and storage.
● Network virtualization is less popular and, in most cases, is a
complementary feature, which is naturally needed in building virtual
computing systems.
● Particularly important is the role of the virtual computing environment
and execution virtualization techniques. Among these, hardware and
programming language virtualization are the techniques adopted in
cloud computing systems.
● Hardware virtualization is an enabling factor for solutions in the
Infrastructure-as-a-Service (IaaS) market segment, while programming
language virtualization is a technology leveraged in Platform-as-a-
Service (PaaS) offerings.
● In both cases, the capability of offering a customizable and sandboxed
environment constituted an attractive business opportunity for
companies featuring a large computing infrastructure that was able to
sustain and process huge workloads.
● Moreover, virtualization also allows isolation and a finer control, thus
simplifying the leasing of services and their accountability on the
vendor side.
● Besides being an enabler for computation on demand, virtualization
also gives the opportunity to design more efficient computing systems
by means of consolidation, which is performed transparently to cloud
computing service users.
● Since virtualization allows us to create isolated and controllable
environments, it is possible to serve these environments with the same
resource without them interfering with each other.
● This opportunity is especially attractive when resources are not
effectively used, because it decreases the number of active resources
by aggregating virtual machines over a smaller number of resources
that become fully utilized.
● This exercise is also known as server consolidation, while the
movement of virtual machine instances is called virtual machine
migration (Figure 1).
● Portability is one more advantage of virtualization, especially for
execution virtualization techniques.
● Virtual machine instances are normally represented by one or more
files that can be easily moved, unlike physical systems.
● Portability and self-containment simplify their administration. Java
code is compiled once and runs everywhere. This needs the Java
virtual machine to be installed on the host.
● Portability and self-containment helps to reduce the costs of
maintenance.
● Multiple systems can securely coexist and share the resources of the
underlying host, without interfering with each other.
● This is essential for server consolidation, which allows adjusting the
number of active physical resources dynamically according to the
current load of the system, thus creating the opportunity to save in
terms of energy consumption and to be less impacting on the
environment.
● In hardware virtualization, the virtual machine can sometimes
simply provide a default graphic card that maps only a subset of the
features available in the host.
● In the case of programming-level virtual machines, some of the
features of the underlying operating system may become inaccessible
unless specific libraries are used.
● For example, in the first version of Java the support for graphics
programming was very limited and the look and feel of applications was
very poor compared to native applications.
● These problems have been resolved by providing a new framework
called Java Swing for designing the user interface, and further
development has been done by integrating support for the OpenGL
libraries in the software development kit.
Working of KVM
● KVM converts Linux into a type-1 hypervisor.
● All hypervisors require OS-level components such as a memory manager,
process scheduler, input/output (I/O) stack, device drivers, security
manager, and network stack to run VMs.
● KVM has all these components because it is part of the Linux
kernel.
● Every VM is implemented as a regular Linux process, scheduled by the
standard Linux scheduler, with dedicated virtual hardware like a network
card, graphics adapter, CPU(s), memory, and disks.
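Because each guest is an ordinary Linux process, the running VMs on a KVM host can be spotted in the process list: each guest typically appears as a `qemu-system-*` process carrying a `-name guest=...` argument. The snippet below is an illustrative sketch that parses sample `ps`-style output rather than a live process table; the guest names are invented.

```python
# Since each KVM guest is a regular Linux process (a qemu-system-*
# process with a "-name guest=..." argument), running VMs can be
# listed simply by scanning `ps -eo args` style output.

def guest_names(ps_lines):
    """Extract guest names from lines of process-listing output."""
    names = []
    for line in ps_lines:
        if "qemu" in line and "-name" in line:
            args = line.split()
            value = args[args.index("-name") + 1]  # e.g. guest=web01,...
            names.append(value.removeprefix("guest=").split(",")[0])
    return names

# Shortened, made-up sample of process listings for illustration:
sample = [
    "/usr/bin/qemu-system-x86_64 -name guest=web01,debug-threads=on -m 2048",
    "/usr/sbin/sshd -D",
    "/usr/bin/qemu-system-x86_64 -name guest=db01 -m 4096",
]
print(guest_names(sample))  # prints ['web01', 'db01']
```

This is also why ordinary Linux tools (top, kill, cgroups) work on KVM guests exactly as they do on any other process.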
KVM features
KVM is part of Linux, and Linux is part of KVM. Everything Linux has,
KVM has too. But certain features make KVM an
enterprise's preferred hypervisor.
● Security
KVM employs a combination of Security-Enhanced Linux (SELinux) and secure
virtualization (sVirt) for enhanced VM security and isolation.
● Storage
KVM is capable of using any storage supported by Linux, including
local disks and network-attached storage (NAS).
● Hardware support
KVM can use a broad variety of certified Linux-supported hardware
platforms.
● Memory management
KVM inherits the memory management features of Linux, including
non-uniform memory access (NUMA) and kernel same-page merging (KSM).
● Live migration
KVM supports live migration, which is the ability to move a running
VM between physical hosts with no service interruption.
● Lower latency and higher prioritization
The Linux kernel features real-time extensions that allow VM-based apps
to run at lower latency with better prioritization (compared to bare metal).
● Managing KVM
It’s possible to manually manage a handful of VMs on a single
workstation without a management tool.
Keep all of the default settings. You will be prompted to install several
Oracle components. Install all of them.
Start VirtualBox and click on 'New' in the menu. Enter the name of your
VM. This is how you will identify it in VirtualBox so name it something
meaningful to you. Select Type and Version. This depends on what OS you
are installing.
This depends on how much memory you have on your host computer.
Never allocate more than half of your available RAM. If you are creating
a Windows VM, I recommend at least 1-2 GB; if you are creating a Linux
VM, at least 512 MB.
If you already have an existing VM that you want to add, select "Use an
existing virtual hard drive file." Otherwise select "Create a virtual hard
drive now."
Select 'VDI.' This is usually the best option. The VM will be stored in a
single file on your computer with the .vdi extension.
Step 7: Set Up File Location and Size
Double click on your newly created VM (It will be on the left hand side
and will have the name you gave it in Step 2). Browse to your installation
media or .iso file. Finish installation.
5.6 OVIRT - MANAGEMENT TOOL FOR VIRTUALIZATION ENVIRONMENT
● oVirt is an open source data center virtualization platform developed
and supported by Red Hat. oVirt, which provides large-scale,
centralized management for server and desktop virtualization, was
designed as an open source alternative to VMware vCenter.
● oVirt provides Kernel-based Virtual Machine (KVM) management for
multi-node virtualization. KVM is a virtualization infrastructure that
turns the Linux kernel into a hypervisor.
● Features of oVirt
○ OVirt enables centralized management of VMs, networking
configurations, hosts, and compute and storage resources from the web
based front end.
○ oVirt also provides features for disaster recovery (DR) and
hyperconverged infrastructure deployments.
○ Features for the management of compute resources include:
■ CPU pinning,
■ same-page merging and
■ memory overcommitment.
○ VM management features include
■ live migrations,
■ live snapshots,
■ the creation of VM templates and VMs,
■ automated configuration
○ DR features include importing storage domains of different types.
○ oVirt utilizes both self-hosted and Gluster Storage domains for
centralized management.
Components of oVirt
1. oVirt engine
a. The oVirt engine acts as the control center for oVirt environments.
b. The engine enables admins to define hosts and networks, as well as to
add storage, create VMs and manage user permissions.
c. Included in the oVirt engine is a graphical user interface (GUI), which
manages oVirt infrastructure resources.
d. The oVirt engine can be installed on a stand-alone server or in a node
cluster in a VM.
2. oVirt node
a. The oVirt node is a server that runs on CentOS, Fedora or Red Hat
Enterprise Linux with a virtual desktop and server manager (VDSM)
daemon and KVM hypervisor.
b. The VDSM controls the resources available to the node, including
compute, networking and storage resources.
5.7 OPEN CHALLENGES OF CLOUD COMPUTING
1. Security
● The main concern when investing in cloud services is security.
This is because your data gets stored and processed by a third-party
vendor, out of your sight.
● We hear about broken authentication, compromised credentials,
account hijacking, data breaches, etc. in particular organizations, which
makes one a little more doubtful.
2. Password Security
● As large numbers of people access cloud accounts, it sometimes
becomes vulnerable. Anybody who knows the password or hacks into
the cloud will be able to access confidential information.
● Nowadays organizations should use multi-level authentication and
make sure that passwords remain secure. Also, passwords
should be updated regularly, especially when a particular employee
leaves the organization.
3. Cost Management
● Cloud computing allows access to application software over a fast
internet connection and lets businesses save on investing in costly
computer hardware, software, management, and maintenance. The
challenge, however, lies in monitoring and optimizing this
pay-as-you-go spending.
4. Lack of expertise
● With the increasing workload on cloud technologies and regularly
improving cloud tools, management has become very difficult.
● There has been consistent demand for a trained workforce who can
work with cloud computing tools and services.
● Firms need to train their IT staff to minimize this challenge.
5. Internet Connectivity
● Cloud services are mainly dependent on a high-speed internet
connection.
● Hence small businesses that face connectivity problems should
first invest in a good internet connection so that no downtime
happens.
● This is because internet downtime might incur significant business losses.
6. Control or Governance
● Another governance issue in cloud computing is maintaining proper
control over asset management and maintenance.
● There should be a dedicated team to make sure that the assets used to
implement cloud services are used according to agreed policies and
procedures.
7. Compliance
● Another major risk of cloud computing is maintaining compliance.
● By compliance we mean a set of rules about what data is permitted
to be moved and what should be kept in-house to maintain compliance.
● Organizations must hence follow and respect the compliance rules set by
various government bodies.
8. Multiple Cloud Management
● Companies have begun to invest in multiple public clouds, multiple
private clouds, or a combination of both, called the hybrid cloud.
● This has expanded rapidly in recent times.
● So it has become important to list the various challenges faced by such
types of organizations and find solutions to grow with the trend.
9. Creating a private cloud
● Implementing an internal cloud is beneficial. This is because all the
data remains secure in house.
● But the challenge here is that the IT team should build and fix
everything by themselves. Also, the team is required to ensure the
smooth working of the cloud.
● They are required to automate as many manual tasks as possible to be
dynamic, and tasks should execute in the proper order.
10. Performance
● When business applications migrate to a cloud or a third-party vendor,
the business performance starts to depend on the service provider as
well.
● Another major issue in cloud computing is investing in the right cloud
service provider.
11. Migration
● Migration is nothing but moving a new application or an existing
application to a cloud. In the case of a new application, the process is
fairly straightforward; migrating an existing one is much harder.
5.8 SUMMARY
● Virtualization has become very popular and extensively used,
especially in cloud computing.
● All these concepts play a fundamental role in building cloud
computing infrastructure and services in which hardware, IT
infrastructure, applications, and services are delivered on demand
through the Internet or, more generally, via a network connection.
● oVirt is an open source data center virtualization platform developed
and supported by Red Hat; it provides large-scale, centralized
management for server and desktop virtualization and was designed as
an open source alternative to VMware vCenter.
6
OPEN STACK
Unit Structure
6.1 Objectives
6.2 Introduction to Open Stack
6.3 OpenStack test-drive
6.4 Basic Open Stack operations
6.5 OpenStack CLI and APIs
6.6 Tenant model operations
6.7 Quotas, Private cloud building blocks
6.8 Controller deployment
6.9 Networking deployment
6.10 Block Storage deployment
6.11 Compute deployment
6.12 Deploying and utilizing OpenStack in production environments
6.13 Building a production environment
6.14 Application orchestration using OpenStack Heat
6.15 Summary
6.16 Questions
6.17 References
6.1 OBJECTIVES
At the end of this unit, the student will be able to
2. OpenStack consists of a set of interrelated components, each of which
provides a specific function in the cloud computing environment.
These components include:
7. Test OpenStack: To test OpenStack, you can create and deploy virtual
machines, test network connectivity, and simulate workload scenarios
to test the performance and scalability of the environment.
10. Deploy your own OpenStack environment: If you want to deploy your
own OpenStack environment, you can use tools such as DevStack or
Packstack. DevStack is a script that automates the installation of
OpenStack on a single machine, while Packstack is a similar tool that
can be used to deploy OpenStack on multiple machines. To deploy
OpenStack on your own, you will need to have a server or virtual
machine that meets the hardware and software requirements for
OpenStack.
11. Once you have a test environment set up, you can use the OpenStack
web interface (Horizon) or command-line interface (CLI) to create
virtual machines, networks, and storage resources. You can also
explore the different OpenStack components and their functionality,
such as the compute (Nova) and networking (Neutron) components.
To launch an instance using the CLI, you can use the openstack server
create command and specify the necessary parameters.
Create a network: You can create a new network for your instances by
selecting the Network tab in the Horizon dashboard and clicking on
"Create Network." You will be prompted to specify the network type,
subnet details, and other configuration options.
To create a network using the CLI, you can use the openstack network
create command and specify the necessary parameters.
To attach a volume using the CLI, you can use the openstack server
add volume command and specify the necessary parameters.
To manage security groups using the CLI, you can use the openstack
security group create and openstack security group rule create
commands.
To resize an instance using the CLI, you can use the openstack server
resize command and specify the necessary parameters.
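Most of the CLI operations above share the shape `openstack <service> <action> [options] <name>`. As an illustrative sketch (not part of the official client), the helper below assembles such an invocation as an argument vector that could be handed to `subprocess.run`; the flavor, image, and network names are invented placeholders.

```python
# Sketch: assembling an OpenStack CLI invocation such as
#   openstack server create --flavor m1.small --image cirros --network net1 vm1
# as an argument vector suitable for subprocess.run().

def openstack_cmd(service, action, name, **options):
    argv = ["openstack", service, action]
    for key, value in options.items():
        argv += ["--" + key.replace("_", "-"), str(value)]
    argv.append(name)
    return argv

cmd = openstack_cmd("server", "create", "vm1",
                    flavor="m1.small", image="cirros", network="net1")
print(" ".join(cmd))
# prints: openstack server create --flavor m1.small --image cirros --network net1 vm1
```

Actually running such a command requires an installed OpenStack client and credentials for a reachable cloud; here the point is only the common command structure.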
6.5 OPENSTACK CLI AND APIS
1. OpenStack provides a command-line interface (CLI) and APIs that
allow you to manage and automate your cloud resources.
3. The CLI uses the OpenStack API to communicate with the OpenStack
services. The CLI is available for all OpenStack services, including
Compute, Networking, Identity, Image, and Block Storage.
4. To use the OpenStack CLI, you need to install the OpenStack client on
your local machine. The client is available for Linux, macOS, and
Windows. Once you have installed the client, you can use the
openstack command to interact with OpenStack services.
5. The OpenStack APIs are a set of RESTful APIs that allow you to
programmatically interact with OpenStack services. The APIs provide
a standardized way of accessing OpenStack services and can be used
by developers to create custom applications and tools that interact with
OpenStack services.
8. Overall, the OpenStack CLI and APIs are powerful tools that allow
you to manage and automate your cloud resources. Whether you prefer
to use the CLI or APIs, OpenStack provides a flexible and extensible
platform for building and managing cloud infrastructure.
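The RESTful APIs mentioned above are plain HTTP: a client first obtains a token from the Identity service (Keystone) by POSTing a JSON body to its /v3/auth/tokens endpoint, then passes that token in the X-Auth-Token header on subsequent requests. The sketch below only builds the Keystone v3 password-authentication body; the user name, password, and project name are invented placeholders.

```python
# Sketch: building the JSON body for Keystone v3 password authentication,
# which would be POSTed to <keystone-endpoint>/v3/auth/tokens.

def password_auth_body(username, password, project, domain="Default"):
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scope the token to a project so it can be used with
            # project-level services such as Nova and Neutron.
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

body = password_auth_body("demo", "secret", "demo-project")
print(body["auth"]["identity"]["methods"])  # prints ['password']
```

Keystone returns the token in the X-Subject-Token response header; every later API call presents it in the X-Auth-Token request header.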
2. Here are some basic OpenStack tenant model operations:
Creating a tenant: You can create a new tenant using the OpenStack
CLI or APIs. When you create a tenant, you specify a name and an
optional description.
Creating users: Once you have created a tenant, you can create one or
more users within that tenant. Users are granted access to the resources
associated with their tenant.
Assigning roles: You can assign roles to users within a tenant. Roles
are used to define what actions a user can perform on a specific
resource. For example, you might assign a user the role of "admin" for
a particular project, giving them full access to all resources within that
project.
Managing user access to tenants: You can control which users have
access to a tenant's resources by assigning roles to those users. You
can assign multiple roles to a user, and you can assign roles to users at
the tenant level or the project level.
Managing resources: Once you have created a tenant, you can create
and manage resources within that tenant. You can create instances,
volumes, networks, and images, and you can assign those resources to
the tenant.
Volume quotas: This sets a limit on the amount of storage that a tenant
can allocate to volumes.
Floating IP quotas: This sets a limit on the number of floating IPs that
a tenant can allocate.
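Conceptually, each of these quotas is a per-tenant ceiling that every allocation request is checked against. The sketch below illustrates the bookkeeping only (it is not OpenStack's implementation); the quota values are invented.

```python
# Illustrative quota bookkeeping: each tenant has per-resource limits,
# and an allocation succeeds only if usage stays within the limit.

class TenantQuota:
    def __init__(self, **limits):          # e.g. volumes_gb=100, floating_ips=2
        self.limits = limits
        self.usage = {name: 0 for name in limits}

    def allocate(self, resource, amount=1):
        if self.usage[resource] + amount > self.limits[resource]:
            return False                   # over quota: reject the request
        self.usage[resource] += amount
        return True

q = TenantQuota(volumes_gb=100, floating_ips=2)
print(q.allocate("volumes_gb", 80))   # prints True
print(q.allocate("volumes_gb", 30))   # prints False  (80 + 30 > 100)
print(q.allocate("floating_ips"))     # prints True
```

In OpenStack, the corresponding limits are adjusted per project with the `openstack quota set` command or the relevant service API.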
3. Private cloud building blocks are the components that are used to build
a private cloud infrastructure. These components include the physical
hardware, such as servers and storage devices, as well as the software,
such as OpenStack, that is used to manage the cloud infrastructure.
Compute nodes: These are the physical servers that are used to host
virtual machines.
Storage nodes: These are the physical servers that are used to provide
storage for the cloud infrastructure.
Networking hardware: This includes switches and routers that are used
to connect the cloud infrastructure to the external network.
Virtualization software: This is the software that is used to create and
manage virtual machines.
Identity and Access Management (IAM): IAM is used to manage user
access to cloud resources. You can use IAM tools such as OpenStack
Keystone to authenticate and authorize users and assign roles and
permissions.
Monitoring and Management: Monitoring and management tools are
used to ensure that your private cloud infrastructure is running
smoothly. You can use tools such as Nagios or Zabbix to monitor
system performance and detect issues before they become problems.
10. Overall, these building blocks are the foundation of a private cloud
infrastructure, and they enable you to create a flexible, scalable, and
secure cloud environment that meets your organization's needs.
3. Here are some general steps for deploying the controller node:
Install the base operating system: The first step is to install the
operating system on the server that will become the controller node.
Many OpenStack distributions provide pre-configured images that you
can use.
Install OpenStack packages: Next, you need to install the OpenStack
packages on the controller node. This can be done using package
managers like yum or apt-get.
Verify the installation: After the services are configured, you can
verify that they are working properly by running various tests and
checks. For example, you can use the OpenStack CLI to check that
you can authenticate and access OpenStack services.
4. It's important to note that the controller node deployment process can
vary depending on the specific OpenStack distribution and version you
are using, as well as the requirements of your environment. It's always
a good idea to consult the documentation and follow best practices for
your particular deployment.
Prepare the controller node: The first step is to prepare the controller
node by installing the operating system and configuring the network
interfaces. You should also configure the hostname, domain name, and
time zone.
Configure the database: OpenStack uses a database to store
configuration information and metadata about resources. You need to
configure the database service (e.g., MySQL or MariaDB) and create
the necessary databases and users.
Start the services: After configuring the services, you can start them on
the controller node using the service manager of your operating system
(e.g., systemctl or service).
Verify the installation: Finally, you should verify that the OpenStack
services are running correctly by using the OpenStack CLI or API to
create and manage resources.
Install and configure the Neutron service: The first step is to install the
Neutron service on the controller node and configure it to provide
network connectivity. This involves configuring the Neutron server,
the Neutron API, and the Neutron plugin (e.g., ML2). You also need to
configure the Neutron database and the message queue service.
Configure the OVS agent: The next step is to configure the Open
vSwitch (OVS) agent, which provides virtual network connectivity to
instances. This involves configuring the OVS service, creating the
necessary bridges and ports, and configuring the OVS firewall.
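Creating an external bridge and attaching a physical interface to it can be sketched with `ovs-vsctl`; the bridge name `br-ex` is conventional and `eth1` is an example interface name:

```shell
ovs-vsctl add-br br-ex            # create the external bridge
ovs-vsctl add-port br-ex eth1     # attach the physical NIC to it
ovs-vsctl show                    # inspect bridges and ports
```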
Create networks, subnets, and routers: Once the Neutron service and
OVS agent are configured, you can create networks, subnets, and
routers. A network is a logical abstraction that provides connectivity
between instances, while a subnet is a range of IP addresses that can be
used by instances in a network. A router is a virtual device that
connects two or more networks.
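A self-service network, subnet, and router could be created as follows; all resource names, the subnet range, and the existing external network `provider` are examples:

```shell
openstack network create selfservice
openstack subnet create --network selfservice \
  --subnet-range 172.16.1.0/24 selfservice-subnet
openstack router create router1
openstack router add subnet router1 selfservice-subnet
# Connect the router to an existing external network:
openstack router set --external-gateway provider router1
```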
Launch instances: Finally, you can launch instances and attach them to
the networks and security groups you created. The instances should be
able to communicate with each other and with the external network
through the router.
Install and configure the Cinder service: The first step is to install the
Cinder service on the controller node and configure it to provide block
storage. This involves configuring the Cinder server, the Cinder API,
the Cinder scheduler, and the Cinder volume service.
Create volume types: A volume type is a way to define the
characteristics of a block volume, such as the storage backend,
performance, and availability. You need to create volume types that
reflect the different needs of your applications.
Create block volumes: Once the storage backends and volume types
are configured, you can create block volumes. A block volume is a
persistent block storage device that can be attached to an instance.
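Creating a volume type, a volume of that type, and attaching it to an instance might look like this; the type name, volume name, size, and instance name are all examples:

```shell
openstack volume type create standard
openstack volume create --type standard --size 10 data-vol   # 10 GB volume
openstack server add volume my-instance data-vol             # attach to an instance
```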
Create storage pools and volumes: Once the Cinder service and storage
backend are configured, you can create storage pools and volumes. A
storage pool is a group of storage devices that are used to create
volumes, while a volume is a block-level storage device that can be
attached to an instance.
6.11 COMPUTE DEPLOYMENT
Install and configure the Nova compute service: The first step is to
install and configure the Nova compute service on each compute node.
This involves installing the necessary packages, configuring the Nova
compute service, and setting up the networking.
Create and manage instances: Once the Nova compute service and
hypervisor are configured, you can create and manage instances. An
instance is a virtual machine that runs on the compute node and
provides computing resources to users. You can create instances using
the OpenStack dashboard, CLI, or API.
Install and configure the Nova services: The first step in deploying
Nova is to install and configure the Nova services on the controller
node. This involves configuring the Nova API, the Nova conductor,
and the Nova database.
Install and configure the Nova compute nodes: The next step is to
install and configure the Nova compute nodes. This involves
configuring the Nova compute service, setting up networking, and
configuring the hypervisor.
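A fragment of the Nova configuration on the controller might look like the following; the hostname and password placeholders are examples in the style of the official install guides:

```ini
# /etc/nova/nova.conf (fragment)
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```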
Create flavors: Flavors are predefined templates that define the size,
CPU, memory, and disk specifications of an instance. You can create
different flavors based on the requirements of your applications and
workloads.
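A flavor can be created with the OpenStack CLI; the name and resource values below are examples:

```shell
openstack flavor create --vcpus 1 --ram 2048 --disk 20 m1.small
openstack flavor list
```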
Create and manage instances: Once Nova is deployed, you can create
and manage instances using the Nova API or the OpenStack
dashboard. You can select the appropriate flavor for your instances,
attach storage volumes, and configure networking.
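Launching an instance with a chosen flavor, image, and network might look like this; the flavor, image, network, volume, and instance names are examples and must exist in your environment:

```shell
openstack server create --flavor m1.small --image cirros \
  --network selfservice --security-group default my-instance
openstack server add volume my-instance data-vol   # optionally attach a volume
openstack server list
```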
Operating System: Choose the operating system for the nodes, and
ensure that it is compatible with the OpenStack version.
Storage: Choose the storage solution for the environment, and ensure
that it is compatible with OpenStack. Consider using redundant storage
systems for high availability.
Configure OpenStack: After installation, configure OpenStack
according to the requirements of the production environment. This
includes configuring compute, networking, and storage.
Create a Heat template: A Heat template is a text file that defines the
resources required to deploy an application. The template is written in
YAML or JSON format and includes a description of the resources,
their dependencies, and their configuration.
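A minimal Heat template in the YAML (HOT) format might look like this; the image, flavor, and network names are examples:

```yaml
# server.yaml -- minimal HOT template launching one server
heat_template_version: 2018-08-31

description: Launch a single server

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.small
      networks:
        - network: selfservice
```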
Update the stack: If changes need to be made to the stack, the Heat
template can be modified and uploaded to Heat. Heat will then update
the stack by making the necessary changes to the resources.
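Creating and later updating a stack from a template file can be sketched as follows; the template filename and stack name are examples:

```shell
openstack stack create -t server.yaml my-stack     # create the stack
openstack stack update -t server.yaml my-stack     # apply template changes
openstack stack list
```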
6.15 SUMMARY
In this chapter we learned about OpenStack and its components,
covering controller, networking, block storage, and compute
deployment, as well as orchestration with Heat.
6.16 QUESTIONS
1. What is OpenStack?
2. Write a short note on
i. OpenStack test-drive
ii. Basic OpenStack operations
iii. OpenStack CLI and APIs
iv. Tenant model operations
v. Quotas, Private cloud building blocks
3. Explain the following concepts in detail.
i. Controller deployment
ii. Networking deployment
iii. Block Storage deployment
iv. Compute deployment
v. Deploying and utilizing OpenStack in production environments
6.17 REFERENCES
1. OpenStack Essentials, Dan Radez, PACKT Publishing, 2015
2. OpenStack Operations Guide, Tom Fifield, Diane Fleming, Anne Gentle, Lorin Hochstein, Jonathan Proulx, Everett Toews, and Joe Topjian, O'Reilly Media, Inc., 2014
3. https://www.openstack.org