UNIT 5
CLOUD ENVIRONMENTS
(PC 628 CS) –SARAAH GHORI
Google App Engine
Google App Engine is a cloud computing platform as a service for developing and
hosting web applications in Google-managed data centers. Applications are
sandboxed and run across multiple servers
Initial release: April 7, 2008.
Google App Engine primarily supports Go, PHP, Java, Python, Node.js, .NET,
and Ruby applications, although it can also support other languages via "custom
runtimes".
App Engine offers automatic scaling for web applications—as the number of
requests increases for an application, App Engine automatically allocates more
resources for the web application to handle the additional demand.
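The demand-driven scaling described above can be sketched as a simple calculation. This is a toy illustration, not App Engine's actual algorithm; the capacity of 50 requests per second per instance is an invented number for the example.

```python
# Toy sketch of request-driven autoscaling, as App Engine does automatically.
# REQUESTS_PER_INSTANCE is an illustrative assumption, not a real GAE setting.
import math

REQUESTS_PER_INSTANCE = 50  # assumed capacity of one instance

def instances_needed(requests_per_second: int) -> int:
    """Scale the instance count with demand; idle apps can scale to zero."""
    if requests_per_second == 0:
        return 0
    return math.ceil(requests_per_second / REQUESTS_PER_INSTANCE)

print(instances_needed(0))    # 0 -> no traffic, no instances
print(instances_needed(120))  # 3 -> 120 req/s needs three instances
```

As traffic grows, the computed instance count grows with it, which is the essence of the automatic allocation App Engine performs for you.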
The service is free up to a certain level of consumed resources, but only in the
standard environment, not in the flexible environment. Fees are charged for
additional storage, bandwidth, or instance hours required by the application. It
was first released as a preview version in April 2008 and came out of preview in
September 2011.
You can run your applications in App Engine using the flexible
environment or standard environment. You can also choose to simultaneously
use both environments for your application and allow your services to take
advantage of each environment's individual benefits.
Standard environment
Application instances run in a sandbox, using the runtime environment of a supported language listed below. The standard environment is optimal for applications that need to deal with rapid scaling and that have the following characteristics:
•Source code is written in specific versions of the supported programming languages:
•Python 2.7, Python 3.7, Python 3.8, Python 3.9
•Java 8, Java 11
•Node.js 8, Node.js 10, Node.js 12, and Node.js 14
•PHP 5.5, PHP 7.2, PHP 7.3, and PHP 7.4
•Ruby 2.5, Ruby 2.6, and Ruby 2.7
•Go 1.11, Go 1.12, Go 1.13, Go 1.14, Go 1.15
•Intended to run for free or at very low cost, where you pay only for what you need and when you need it. For example, your application can scale to 0 instances when there is no traffic.
•Experiences sudden and extreme spikes of traffic which require immediate scaling.

Flexible environment
Application instances run within Docker containers on Compute Engine virtual machines (VMs). The flexible environment is optimal for applications that receive consistent traffic, experience regular traffic fluctuations, or meet the parameters for scaling up and down gradually, and that have the following characteristics:
•Source code that is written in a version of any of the supported programming languages.
•Runs in a Docker container that includes a custom runtime or source code written in other programming languages.
•Accesses the resources or services of your Google Cloud project that reside in the Compute Engine network.

Feature comparison (standard vs. flexible):
•WebSockets: No in standard (Java 8, Python 2, and PHP 5 provide a proprietary Sockets API in beta, but the API is not available in newer standard runtimes); Yes in flexible.
•Installing third-party binaries: in standard, Yes for Java 8, Java 11, Node.js, Python 3, PHP 7, Ruby, Go 1.11, and Go 1.12+, and No for Python 2.7 and PHP 5.5; Yes in flexible.
Comparing the flexible environment to Compute
Engine
The App Engine flexible environment has the following differences from Compute Engine:
• Flexible environment VM instances are restarted on a weekly basis. During restarts, Google's management services apply any necessary operating system or security updates.
• You always have root access to Compute Engine VM instances. By default, SSH access to the VM instances in the flexible environment is disabled. If you choose, you can enable root access to your app's VM instances.
• Code deployments can take longer as container images are built by using the Cloud Build service.
• The geographical region of a flexible environment VM instance is determined by the location that you specify for the App Engine application of your Cloud project. Google's management services ensure that the VM instances are co-located for optimal performance.
GAE (in a minute)
https://www.youtube.com/watch?v=Xuf3J6SKVV0&list=PLIivdWyY5sqIQ4_5
PwyyXZVdsXr3wYhip
[Figure: the major building blocks of the Google cloud platform used to deliver the cloud services. A sandboxed app process runs in a scalable, secure request/response environment with a read-only filesystem and the Python standard library; it calls stateless APIs (urlfetch, mail, images) and stateful APIs (datastore, memcache).]
Scaling
Google has pioneered cloud development by leveraging the large number of data centers it operates.
Functional Modules of GAE
GFS is used for storing large amounts of data.
MapReduce is for use in application program development.
Chubby is used for distributed application lock services.
BigTable offers a storage for accessing structured data.
The GAE platform comprises the following five major components:
•datastore
•application runtime environment
•software development kit (SDK)
•administration console
•GAE web service infrastructure
AWS launched in 2006 from the internal infrastructure that Amazon.com built to handle
its online retail operations. AWS was one of the first companies to introduce a pay-as-
you-go cloud computing model that scales to provide users with compute, storage or
throughput as needed.
AWS offers many different tools and solutions for enterprises and software developers
that can be used in data centers in up to 190 countries. Groups such as government
agencies, educational institutions, nonprofits and private organizations can use AWS
services.
The AWS platform was originally launched in 2002 with only a few services. In 2003, AWS was re-envisioned to make Amazon's
compute infrastructure standardized, automated and web service focused. This re-envisioning included the thought of selling
access to virtual servers as a service platform. One year later, in 2004, the first publicly available AWS service -- Amazon SQS --
was launched.
In 2006, AWS was relaunched to include three services -- including Amazon S3 cloud storage, SQS, and EC2 -- officially making
AWS a suite of online core services. In 2009, S3 and EC2 were launched in Europe, and the Elastic Block Store and Amazon
CloudFront were released and added to AWS. In 2013, AWS started to offer a certification process for AWS services, and 2018
saw the release of an autoscaling service.
Over time, AWS has added plenty of services that helped make it a low-cost infrastructure platform that is highly available and
scalable. AWS now has a focus on the cloud, with data centers placed around the world, in places such as the United States,
Australia, Europe, Japan and Brazil.
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform,
offering over 200 fully featured services from data centers globally. Millions of customers—including
the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to
lower costs, become more agile, and innovate faster. Running web and application servers in the cloud
to help businesses scale and grow is one common use.
1. Region — A region is a geographical area. Each region consists of 2 (or more) availability
zones.
2. Availability Zone — One or more discrete data centers within a region.
3. Edge Location — They are CDN (Content Delivery Network) endpoints for CloudFront.
More than 100 services comprise the Amazon Web Services portfolio, including those for compute, databases, infrastructure management, application development and security. These services, by category, include:
• Compute
• Storage
• Databases
• Data management
• Migration
• Development tools
• Management
• Monitoring
• Security
• Governance
• Big data management
• Analytics
• Artificial intelligence (AI)
• Mobile development
Amazon Web Services provides services from dozens of data centers spread across availability zones (AZs) in
regions across the world. An AZ is a location that contains multiple physical data centers. A region is a collection of
AZs in geographic proximity connected by low-latency network links.
A business will choose one or multiple availability zones for a variety of reasons, such as compliance and proximity
to end customers. For example, an AWS customer can spin up virtual machines (VMs) and replicate data in different
AZs to achieve a highly reliable infrastructure that is resistant to failures of individual servers or an entire data center.
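The multi-AZ replication idea above can be sketched as placing each copy of an object in a different availability zone, so one zone failure never takes out every copy. This is a toy illustration; the zone names mirror AWS naming conventions but are used purely as labels, and the placement rule is invented for the example.

```python
# Toy sketch of cross-AZ replica placement for reliability.
# AZ names are labels only; the spreading rule is illustrative, not AWS's.
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]

def place_replicas(obj_id: str, copies: int = 2) -> list[str]:
    """Choose `copies` distinct AZs, spreading different objects around."""
    start = sum(obj_id.encode()) % len(AZS)  # deterministic starting zone
    return [AZS[(start + i) % len(AZS)] for i in range(copies)]

zones = place_replicas("customer-db-snapshot")
print(zones)  # two *different* zones, so one AZ failure leaves a copy
```

Because the replicas always land in distinct zones, a failure of an individual server, or even an entire data center, leaves at least one copy reachable.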
Amazon Elastic Compute Cloud (EC2) is a service that provides virtual servers -- called EC2 instances -- for
compute capacity. The EC2 service offers dozens of instance types with varying capacities and sizes, tailored to
specific workload types and applications, such as memory-intensive and accelerated-computing jobs. AWS also
provides an Auto Scaling tool to dynamically scale capacity to maintain instance health and performance.
Amazon Simple Storage Service (S3) provides scalable object storage for data backup, collection
and analytics. An IT professional stores data and files as S3 objects -- which can range up to 5
terabytes (TB) in size -- inside S3 buckets to keep them organized. A business can save money with S3
through its Infrequent Access storage tier or by using Amazon Glacier for long-term cold storage.
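The bucket/key organisation described above can be modelled in a few lines: objects are looked up by (bucket, key), not by a filesystem location. This is a minimal in-memory sketch of the idea, not the S3 API; the class and method names are invented for illustration.

```python
# Minimal in-memory model of bucket/key object storage (illustrative only).
class ObjectStore:
    def __init__(self):
        # bucket name -> {object key -> object bytes}
        self._buckets: dict[str, dict[str, bytes]] = {}

    def create_bucket(self, name: str) -> None:
        self._buckets.setdefault(name, {})

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self._buckets[bucket][key] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._buckets[bucket][key]

store = ObjectStore()
store.create_bucket("backups")
store.put_object("backups", "2024/db.dump", b"...snapshot bytes...")
print(store.get_object("backups", "2024/db.dump"))
```

Keys like "2024/db.dump" can contain slashes to keep objects organised, but there is no real directory tree underneath, which is what distinguishes object storage from file storage.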
Amazon Elastic Block Store provides block-level storage volumes for persistent data storage
when using EC2 instances. Amazon Elastic File System offers managed cloud-based file storage.
A business can also migrate data to the cloud via storage transport devices, such as AWS
Snowball and Snowmobile, or use AWS Storage Gateway to enable on-premises apps to access
cloud data.
The Amazon Relational Database Service -- which includes options for Oracle, SQL Server,
PostgreSQL, MySQL, MariaDB and a proprietary high-performance database called Amazon Aurora --
provides a relational database management system for AWS users. AWS also offers
managed NoSQL databases through Amazon DynamoDB.
An AWS customer can use Amazon ElastiCache and DynamoDB Accelerator as in-memory and real-
time data caches for applications. Amazon Redshift offers a data warehouse, which makes it easier for
data analysts to perform business intelligence (BI) tasks.
AWS includes various tools and services designed to help users migrate applications,
databases, servers and data onto its public cloud. The AWS Migration Hub provides a
location to monitor and manage migrations from on premises to the cloud. Once in the
cloud, EC2 Systems Manager helps an IT team configure on-premises servers and AWS
instances.
Amazon also has partnerships with several technology vendors that ease hybrid cloud
deployments. VMware Cloud on AWS brings software-defined data center technology
from VMware to the AWS cloud. Red Hat Enterprise Linux for Amazon EC2 is the product
of another partnership, extending Red Hat's operating system to the AWS cloud.
An Amazon Virtual Private Cloud (Amazon VPC) gives an administrator control over a virtual network to use an isolated
section of the AWS cloud. AWS automatically provisions new resources within a VPC for extra protection.
Admins can balance network traffic with the Elastic Load Balancing (ELB) service, which includes the Application Load
Balancer and Network Load Balancer. AWS also provides a domain name system called Amazon Route 53 that routes end
users to applications.
An IT professional can establish a dedicated connection from an on-premises data center to the AWS cloud via AWS Direct
Connect.
A developer can take advantage of AWS command-line tools and software development kits (SDKs) to deploy and manage applications and services:
• The AWS Command Line Interface, which is Amazon's proprietary code interface.
• A developer can use AWS Tools for Powershell to manage cloud services from Windows environments.
• Developers can use AWS Serverless Application Model to simulate an AWS environment to test Lambda
functions.
AWS SDKs are available for a variety of platforms and programming languages, including Java, PHP,
Python and others.
Amazon API Gateway enables a development team to create, manage and monitor custom application
program interfaces (APIs) that let applications access data or functionality from back-end services. API
Gateway is a fully managed service.
AWS also provides a packaged media transcoding service -- Amazon Elastic Transcoder -- and a service that visualizes
workflows for microservices-based applications -- AWS Step Functions.
A development team can also create continuous integration and continuous delivery pipelines with services like:
• AWS CodePipeline
• AWS CodeBuild
• AWS CodeDeploy
• AWS CodeStar
A developer can also store code in Git repositories with AWS CodeCommit and evaluate the performance of microservices-
based applications with AWS X-Ray.
An admin can manage and track cloud resource configuration via AWS Config and AWS Config
Rules. Those tools, along with AWS Trusted Advisor, can help an IT team avoid improperly
configured and needlessly expensive cloud resource deployments.
AWS provides several automation tools in its portfolio. An admin can automate infrastructure
provisioning via AWS CloudFormation templates, and also use AWS OpsWorks and Chef to
automate infrastructure and system configurations.
An AWS customer can monitor resource and application health with Amazon CloudWatch and the
AWS Personal Health Dashboard, as well as use AWS CloudTrail to retain user activity and API
calls for auditing.
AWS provides a range of services for cloud security, including AWS Identity and Access
Management, which allows admins to define and manage user access to resources. An admin can
also create a user directory with Amazon Cloud Directory, or connect cloud resources to an
existing Microsoft Active Directory with the AWS Directory Service. Additionally, the AWS
Organizations service enables a business to establish and manage policies for multiple AWS
accounts.
Amazon Web Services has also introduced tools that automatically assess potential security
risks. Amazon Inspector analyzes an AWS environment for vulnerabilities that might impact
security and compliance. Amazon Macie uses machine learning (ML) technology to protect
sensitive cloud data.
AWS also includes tools and services that provide software- and hardware-based encryption, protect
against DDoS attacks, provision Secure Sockets Layer (SSL) and Transport Layer Security (TLS)
certificates and filter potentially harmful traffic to web applications.
The AWS Management Console is a browser-based graphical user interface (GUI) for AWS. The
Management Console can be used to manage resources in cloud computing, cloud storage and security
credentials. The AWS Console interfaces with all AWS resources.
AWS includes a variety of big data analytics and application services. This
includes:
• Amazon Kinesis, which provides several tools to process and analyze streaming
data.
• AWS Glue, which is a service that handles extract, transform and load jobs.
AWS offers a range of AI model development and delivery platforms, as well as packaged AI-based
applications. The Amazon AI suite of tools includes Amazon Lex for conversational interfaces,
Amazon Polly for text-to-speech and Amazon Rekognition for image and video analysis.
AWS also provides technology for developers to build smart apps that rely on machine learning
technology and complex algorithms.
With AWS Deep Learning Amazon Machine Images (AMIs), developers can create and train custom AI
models with clusters of graphics processing units (GPUs) or compute-optimized instances. AWS also
includes deep learning development frameworks for MXNet and TensorFlow.
On the consumer side, AWS technologies power the Alexa Voice Services, and a developer can use
the Alexa Skills Kit to build voice-based apps for Echo devices.
Mobile development
AWS offers augmented reality (AR) and virtual reality (VR) development tools through the Amazon Sumerian service.
Amazon Sumerian allows users to create AR and VR applications without needing to know programming or create 3D
graphics. The service also enables users to test and publish applications in-browser. Amazon Sumerian can be used in:
• 3D web applications
• Marketing
• Online education
• Manufacturing
• Training simulations
• Gaming
Game development
AWS can also be used for game development. Large game development companies, such as
Ubisoft, use AWS services for their games, like For Honor. AWS can provide services for
each part of a game's lifecycle.
For example, AWS provides developers with back-end services, analytics and developer tools.
Developer tools help developers make their games, while back-end services can help with
building, deploying or scaling a developer's platform. Analytics can help developers better
understand their customers and how they play the game. Developers can also store data or
host game data on AWS servers.
AWS also has a variety of services that enable the internet of things (IoT)
deployments. The AWS IoT service provides a back-end platform to manage IoT
devices and data ingestion to other AWS storage and database services.
The AWS IoT Button provides hardware for limited IoT functionality and AWS
Greengrass brings AWS compute capabilities to IoT devices.
Amazon Web Services has a range of business productivity SaaS options, including:
• The Amazon Chime service enables online video meetings, calls and text-based chats
across devices.
AWS offers a pay-as-you-go model for its cloud services, either on a per-hour or per-second basis. There is also an option to
reserve a set amount of compute capacity at a discounted price for customers who prepay in whole, or who sign up for one- or
three-year usage commitments.
If potential customers can’t afford the costs, then AWS Free Tier is another possible avenue for using AWS services. AWS Free Tier
allows users to gain first-hand experience with AWS services for free; they can access up to 60 products and start building on the
AWS platform. Free Tier is offered in three different options: always free, 12 months free and trials.
AWS competes primarily with Microsoft Azure, Google and IBM in the public IaaS market.
https://www.youtube.com/watch?v=3XFODda6YXo
Technology has come a long way in transforming the industry. Cloud computing is one major
revolution in the process that has completely changed the way business functions.
And, we have been witnessing the series of emerging technologies powered by cloud computing
over the years.
Technology evolution has been happening around the cloud and toward more effective cloud
utilization. In the process, these technologies are not just changing the cloud computing
environment; they are transforming the world of computing as a whole.
1) Containers
Containers rose to fame at a point when speed of delivery and complexity
management had become very important for the IT industry. Unlike
traditional Virtual Machines (VMs) that carry a full guest OS, container
technology arrived as a lightweight software packaging method, where a
container package carries a piece of software and its bare essentials
(libraries and configuration files) so it can move across different computing
environments.
Docker and Kubernetes took the container popularity to the next level in
terms of adoption.
“According to the Rightscale State of the Cloud report 2019, 66 percent of
firms have already adopted containers and 60 percent have Kubernetes for
container management.”
2) Serverless Computing
That was a time when the IT industry was struggling with critical hardware maintenance and
software provisioning. Serverless computing answered these concerns by handling key
maintenance and scaling demands of firms, encouraging them to focus on other key
functions in their cloud-based systems.
With serverless computing, the trend of pay-as-you-go and pay-for-use computing
models picked up, addressing the majority of the software burden. This function-as-a-service
model made the cloud computing environment run faster and more efficiently.
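The function-as-a-service idea can be sketched as a registry of named handlers that are invoked on demand and "billed" per invocation, with no server to manage between calls. This is a toy model; the handler name and event shape are invented for illustration.

```python
# Toy function-as-a-service sketch: register handlers, invoke on demand,
# count invocations for pay-per-use accounting. Entirely illustrative.
handlers = {}
invocations = {}

def register(name):
    """Decorator that registers a function under a name, like deploying it."""
    def wrap(fn):
        handlers[name] = fn
        invocations[name] = 0
        return fn
    return wrap

def invoke(name, event):
    invocations[name] += 1          # pay-per-invocation accounting
    return handlers[name](event)

@register("thumbnail")
def make_thumbnail(event):
    return f"thumbnail of {event['image']}"

print(invoke("thumbnail", {"image": "cat.png"}))
print(invocations["thumbnail"])  # 1
```

The developer writes only the handler body; scaling, provisioning and metering live in the platform, which is the appeal of the serverless model.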
3) Microservices
Dealing with single large applications is old-fashioned. Componentization has been the trend to simplify the
software process. This process of breaking a larger application into small modules or components that can be
delivered faster is referred to as microservices.
A microservice architecture breaks monolithic apps into small, loosely coupled services or modules. This modular
approach makes it easy for the delivery of multiple modules by different small teams, independent of the actual
‘bulk’ application. This enables continuous delivery of fully-updated software and ultimately speeds up the app
delivery cycle.
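The monolith-to-microservices split described above can be sketched as independent modules behind a tiny router, where each "service" could in practice be a separately deployed process owned by a small team. The service names, routes and payloads below are invented for illustration.

```python
# Sketch of a microservice-style split: each function stands in for an
# independently deployable service; the router dispatches by path.
# All names and payloads are invented for the example.
def orders_service(request):
    return {"order_id": 42, "status": "created"}

def billing_service(request):
    return {"invoice": "INV-001", "amount": 9.99}

ROUTES = {  # in a real system each route would target a separate process
    "/orders": orders_service,
    "/billing": billing_service,
}

def handle(path, request=None):
    return ROUTES[path](request)

print(handle("/orders"))
```

Because each service sits behind its own route, one team can redeploy billing without touching orders, which is the continuous-delivery benefit the text describes.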
4) DevOps
This is another major trend that gave a due boost to the cloud computing environment. By bridging gaps, DevOps
culture brought together different teams with expertise in different areas, making them work for a single goal.
Developers create codes, Operations teams work on metrics. Together, they can create wonders in a software
environment giving a competitive edge for organizations. DevOps tools and resources, security integration like
DevSecOps and more make DevOps more special!
5) Internet of Things (IoT)
IoT has given a new shape to the technology trend. What we see around us are the results: fitness trackers that come as
wristwatches, smart homes, self-driving automobiles, and more. These processes involve enormous volumes of data.
How do you process this data? The answer many businesses have is through ‘Cloud’.
Cloud-based data analytics platforms, backed by hyper-scaling servers, facilitate effective data processing. Cloud also
offers a solution to another key concern about setting up IoT, which is expensive and complex to build from
scratch: major cloud platforms address this concern by offering IoT solutions as part of their portfolios.
6) Artificial Intelligence (AI)
Artificial Intelligence is the next-generation technology solution, set to show the technology world in a different
light. With solutions that exhibit machine intelligence independent of human assistance, AI is emerging to enjoy high
market dominance among existing tools.
However, building AI applications is complex for many businesses. This is where the cloud has a crucial role: such companies
are looking to the cloud for machine learning and other deep learning tools. Because of its wide computing and storage
options, cloud-based AI is emerging as the most sought-after solution for businesses of any size in realizing their AI efforts.
In Conclusion
These promising technologies show once again how crucial the Cloud Computing platform is to the IT industry, today and for the
future.
INTRODUCTION
Eucalyptus stands for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems.
Eucalyptus in cloud computing is an open-source software platform for implementing IaaS, or
Infrastructure-as-a-Service, in a hybrid cloud or private cloud computing environment.
It pools together existing virtualised infrastructure to create cloud resources for
storage as a service, network as a service and infrastructure as a service.
Eucalyptus CLIs can manage both Amazon Web Services and their own private instances. Clients can easily migrate instances from Eucalyptus
to Amazon Elastic Compute Cloud. Network, storage, and compute are managed by the virtualisation layer. Instances are isolated by hardware
virtualisation. The following terminology is used by the Eucalyptus architecture in cloud computing.
1. Images: Any software application, configuration, module software or framework software packaged and deployed in the Eucalyptus cloud is
known as a Eucalyptus Machine Image.
2. Instances: When we run an image and use it, it becomes an instance.
3. Networking: The Eucalyptus network is partitioned into three modes: Static mode, System mode, and Managed mode.
5. Eucalyptus elastic block storage: It provides block-level storage volumes to attach to an instance.
6. Auto-scaling and load balancing: It is used to create or destroy instances or services based on requirements.
1. Cluster Controller: It manages one or more Node Controllers and is responsible for deploying and managing
instances on them.
5. Node Controller: It is the basic component of a node. It maintains the life cycle of the instances
running on each node.
Numerous other tools can be used to interact with AWS and Eucalyptus in cloud computing, and they are
listed below.
1. Vagrant AWS Plugin: This tool provides config files to manage AWS instances and manage VMs on the
local system.
2. s3curl: This is a tool for interaction between AWS S3 and Eucalyptus Walrus.
3. s3fs: This is a FUSE file system, which can be used to mount a bucket from Walrus or S3 as a local
file system.
4. CloudBerry S3 Explorer: This Windows tool is for managing files between S3 and Walrus.
1. Eucalyptus can be used to run both a Eucalyptus private cloud and a Eucalyptus public
cloud.
2. Clients can run Amazon or Eucalyptus machine images as instances on both clouds.
3. It is not very popular in the market yet, but it is a strong competitor to CloudStack and OpenStack.
4. It has Application Programming Interface compatibility with Amazon Web Services such as EC2 and S3.
5. Eucalyptus can be used with DevOps tools like Chef and Puppet.
There are numerous Infrastructure-as-a-Service offerings available in the market, such as OpenNebula,
Eucalyptus, CloudStack and OpenStack, all being used as private and public Infrastructure-as-a-Service
offerings.
Of all the Infrastructure-as-a-Service offerings, OpenStack remains the most popular, most active
and largest open-source cloud computing project, while interest in OpenNebula, CloudStack and
Eucalyptus remains strong.
OpenNebula is used to build hybrid, public and private clouds. It can also turn your
own datacentre into a private cloud and lets you extend that functionality to
numerous other organisations.
CONCLUSION
OpenNebula is a powerful, but easy-to-use, open source platform to build and manage
Enterprise Clouds. OpenNebula provides unified management of IT infrastructure and
applications, avoiding vendor lock-in and reducing complexity, resource consumption and
operational costs.
https://www.youtube.com/watch?v=vx24uYpn3hw
OpenNebula brings a significant number of new edge computing features developed in the context of
the ONEedge innovation project to deploy on-demand distributed edge cloud environments. These new edge
computing features enable IT organizations to deploy true hybrid and multi-cloud environments that avoid vendor
lock-in, reducing operational costs, expanding service availability, and enabling new ultra-low-latency applications.
OpenNebula combines the agility, scalability and simplicity of the public cloud, with the greater levels of flexibility,
performance and security of the private cloud, and leverages a geo-distributed offering of cloud and edge locations.
It provides a single control panel with centralized operations and management that abstracts cloud functionality
and ensures portability across providers.
•Virtual Machine: An instantiated Template. A Virtual Machine represents one life-cycle, and several Virtual Machines can be created from a single
Template.
•Virtual Network: A group of IP leases that VMs can use to automatically obtain IP addresses. It allows the creation of Virtual Networks by
mapping over the physical ones. They will be available to the VMs through the corresponding bridges on hosts. A virtual network definition
includes, among other things, context attributes (e.g. netmask, DNS, gateway). OpenNebula also comes with a Virtual Router appliance to
provide networking services.
•Datastores
•A physical network
The master node, sometimes referred to as the front-end machine, executes all the OpenNebula services. This
is the actual machine where OpenNebula is installed. OpenNebula services on the front-end machine include
the management daemon (oned), scheduler (sched), the web interface server (Sunstone server), and other
advanced components. These services are responsible for queuing, scheduling, and submitting jobs to other
machines in the cluster. The master node also provides the mechanisms to manage the entire system. This
includes adding virtual machines, monitoring the status of virtual machines, hosting the repository, and
transferring virtual machines when necessary. Much of this is possible due to a monitoring subsystem which
gathers information such as host status, performance, and capacity use. The system is highly scalable and is
only limited by the performance of the actual server.
The worker nodes, or hypervisor enabled-hosts, provide the actual computing resources needed
for processing all jobs submitted by the master node. OpenNebula hypervisor enabled-hosts use a
virtualization hypervisor such as VMware, Xen, or KVM. The KVM hypervisor is natively supported
and used by default. Virtualization hosts are the physical machines that run the virtual machines,
and various platforms can be used with OpenNebula. A Virtualization Subsystem interacts with
the hypervisor to manage the life cycle of each virtual machine.
Storage
The datastores simply hold the base images of the Virtual Machines. The datastores must be
accessible to the front-end; this can be accomplished by using one of a variety of available
technologies such as NAS, SAN, or direct attached storage.
Three different datastore classes are included with OpenNebula, including system datastores, image
datastores, and file datastores. System datastores hold the images used for running the virtual
machines. The images can be complete copies of an original image, deltas, or symbolic links
depending on the storage technology used. The image datastores are used to store the disk image
repository. Images from the image datastores are moved to or from the system datastore when virtual
machines are deployed or manipulated. The file datastore is used for regular files and is often used for
kernels, ram disks, or context files
Physical networks are required to support the interconnection of storage servers and virtual
machines in remote locations. It is also essential that the front-end machine can connect to all the
worker nodes or hosts. At the very least two physical networks are required as OpenNebula requires
a service network and an instance network. The front-end machine uses the service network to
access hosts, manage and monitor hypervisors, and to move image files. The instance network
allows the virtual machines to connect across different hosts. The network subsystem of
OpenNebula is easily customizable to adapt to existing network environments.
Introduction
OpenStack allows users to install virtual machines that take care of different tasks for managing a cloud
environment on the go. OpenStack cloud computing makes horizontal scaling easy, which means
functions that benefit from running in parallel can serve more users by spinning up more instances. For
example, a mobile app that needs to communicate with a remote server can share the work of
communicating with each user across many instances, which scale up as the application gets more
users.
The most important aspect of OpenStack is that it is open-source software, which means any user
who wants to access the source code can make needed changes quickly and freely share them
with the community. This, in turn, is beneficial to the thousands of developers who are working together to
build the most secure, robust, and safe product they can.
• Nova
Nova is the main computing engine of OpenStack. It is used to deploy and manage large
numbers of virtual machines for handling computing tasks.
• Cinder
Cinder is the block storage component, which means the system can access
specific locations on a disk drive. This can come in handy in scenarios where data
access speed is the most important consideration.
• Swift
Swift acts as the OpenStack cloud storage for objects and files. Instead of the traditional idea of referring to a file
by its location, developers can refer to a unique identifier for the file or piece of information and allow OpenStack
to decide where to store it. This relieves the developer from worrying about the capacity of the system; the
system takes responsibility for backing up the data in case of a machine failure or network connection error.
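The identifier-based addressing just described can be sketched with a content-derived ID: callers store and fetch by identifier, never by physical location, so the system stays free to decide where the bytes live. This is a toy model, not the Swift API; deriving the identifier from a SHA-256 hash is an illustrative choice.

```python
# Sketch of object addressing by unique identifier rather than file path.
# Using a content hash as the identifier is an illustrative choice here,
# not how Swift itself assigns object names.
import hashlib

storage: dict[str, bytes] = {}  # stand-in for distributed physical storage

def put(data: bytes) -> str:
    object_id = hashlib.sha256(data).hexdigest()  # unique identifier
    storage[object_id] = data
    return object_id

def get(object_id: str) -> bytes:
    return storage[object_id]

oid = put(b"holiday-photo-bytes")
print(get(oid) == b"holiday-photo-bytes")  # True
```

Because callers only ever hold the identifier, the storage layer can move, replicate or rebalance the underlying bytes without breaking anyone's references.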
• Keystone
Keystone provides identity services for OpenStack. It is primarily a central list of all the users mapped to
the OpenStack services they have permission to use. It provides multiple means of access, so
developers can conveniently map existing user access methods to Keystone.
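The user-to-service permission mapping described above can be modelled as a simple lookup table. This is a toy model of the idea only, not Keystone's actual data model or API; the user names and service names are invented for the example.

```python
# Toy model of identity-service authorization: a catalogue mapping users
# to the services they may use. Names are invented for illustration.
permissions = {
    "alice": {"nova", "swift"},  # alice may use compute and object storage
    "bob": {"swift"},            # bob may only use object storage
}

def is_authorized(user: str, service: str) -> bool:
    """Unknown users get an empty permission set, so they are denied."""
    return service in permissions.get(user, set())

print(is_authorized("alice", "nova"))  # True
print(is_authorized("bob", "nova"))    # False
```

Every other OpenStack component can then delegate its "may this user do this?" question to a single shared identity service instead of keeping its own user list.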
• Neutron
Neutron provides the capacity of networking for OpenStack. This ensures that all the components of OpenStack that
are installed can communicate with one another quickly and efficiently.
• Horizon
Horizon is the dashboard of OpenStack and its only graphical interface. This is the first component visible to
users who want to try OpenStack. Developers can access all the components of OpenStack individually through an
API, but Horizon allows the system admin to take a look at what is happening in the cloud and manage it.
• Heat
Heat helps manage the infrastructure needed for a cloud service to run, by allowing developers to store a cloud
application's requirements in a file that defines the necessary resources for that application.
• Ceilometer
Ceilometer provides telemetry services, allowing the cloud to provide billing services to various users of the cloud. It also keeps
track of the usage of the system by each user. Ceilometer also tracks the use of all the components of OpenStack.
• Glance
Glance provides image services, where images refer to disk images (virtual copies of hard disks). Glance lets you use
these images as templates when deploying new virtual machines.
There are different models developed by vendors for deploying Openstack for customers. Some of them are mentioned
below.
• Openstack as a Service.
In this model, the vendor hosts OpenStack management software as a service without the hardware. Customers have to
sign up for the service and match with their internal network, storage, and server to get a fully functioning OpenStack
private cloud.
• OpenStack-based Public Cloud
In this type of model, the vendor provides an OpenStack-based public cloud computing system.
• OpenStack-based Private Cloud
In this model, the vendor provides an OpenStack-based private cloud, which includes the hardware and the OpenStack
software.
• On-Premises Distribution
In this type of model, the customer downloads and installs an OpenStack distribution within the internal network.
• Appliance-based OpenStack
A vendor called Nebula sold appliances that could be plugged into a network to generate an OpenStack
deployment.
This service can start actions based on rules against event or metric data collected by Ceilometer.
OpenStack's API is an integral part of the OpenStack cloud platform because it allows communication
between components of the cloud environment and provides a common standard. The distributed
architecture of OpenStack enables next-generation services. OpenStack focuses on both enterprises
and service providers.
To conclude, OpenStack has multiple advantages. It has a vibrant ecosystem, and it is open
source and free. Nowadays, more companies are beginning to adopt OpenStack as a part of
their cloud tool kit. Another advantage is that a large number of people can check the source
code. OpenStack is being used in many industry sectors, and more organizations are planning to
adopt it, considering its popularity and ease of use.
SYLLABUS COMPLETED