CS - 4012 Lab Manual
Udaipur
Cloud Computing
Laboratory Manual
Prepared by
Dr. Kamal Kant Hiran, Assistant Professor
Department of Computer Science and Engineering
Cloud technology is the delivery of different services through the Internet. These resources
include tools and applications such as data storage, servers, databases, networking and software.
This course will help students study, research and analyze the concepts of cloud technology,
cloud architecture, services, obstacles and vulnerabilities, cost management, legal issues
involved in the cloud, and migration to the cloud, together with Amazon, Google and
Microsoft cloud services and related case studies.
Outcomes
Upon completion of the Cloud Technology practical course, the student will be able to:
1. Chalk out the major differences between SaaS, PaaS & IaaS.
2. Know the details of various companies in the cloud business and the corresponding services provided by them.
3. Study various cases with regard to migration to cloud, cost management, legal issues, etc.
4. Understand how Amazon, Google and Microsoft cloud services work and how they differ from one another.
List of Experiments
1. Study the basic cloud architecture and represent it using a case study.
2. Enlist major differences between SaaS, PaaS & IaaS. Also submit research done on various companies in the cloud business and the corresponding services provided by them; tag them under SaaS, PaaS & IaaS.
3. Study and present a report on Jolicloud.
4. Present a report on obstacles and vulnerabilities in cloud computing at a generic level.
5. Present a report on Amazon cloud services.
6. Present a report on Microsoft cloud services.
7. Present a report on cost management in the cloud.
8. Enlist and explain legal issues involved in the cloud with the help of a case study.
9. Explain the process of migrating to the cloud with a case study.
10. Present a report on Google Cloud and cloud services.
1. Study the basic cloud architecture and represent it using a case study.
Cloud computing is the use of hardware and software to deliver a service over a network (typically the
Internet). Cloud computing architecture comprises many loosely coupled cloud components.
We can broadly divide the cloud architecture into two parts:
Front End - The front end refers to the client part of the cloud computing system. It consists of the interfaces and
applications required to access cloud computing platforms; for example, a web browser.
Back End - The back end refers to the cloud itself. It consists of all the resources required to provide cloud
computing services: large-scale data storage, virtual machines, security mechanisms, services,
deployment models, servers, etc.
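The two-part architecture can be sketched in code. This is a toy model, not a real cloud API; all class, method and resource names here are illustrative:

```python
# Minimal sketch of the front-end / back-end split described above.
# All names (BackEnd, FrontEnd, the "storage" resource) are illustrative.

class BackEnd:
    """The cloud itself: the resources that fulfil service requests."""
    def __init__(self):
        self.storage = {}  # stands in for large-scale data storage

    def handle(self, request):
        # A real back end would route to VMs, storage, security layers, etc.
        action, key, value = request
        if action == "put":
            self.storage[key] = value
            return "OK"
        if action == "get":
            return self.storage.get(key, "NOT FOUND")
        return "UNSUPPORTED"

class FrontEnd:
    """The client part: the interface through which users reach the cloud."""
    def __init__(self, backend):
        self.backend = backend  # in reality, reached over the Internet

    def put(self, key, value):
        return self.backend.handle(("put", key, value))

    def get(self, key):
        return self.backend.handle(("get", key, None))

cloud = BackEnd()
browser = FrontEnd(cloud)          # e.g. a web browser acting as front end
browser.put("report.txt", "lab 1 notes")
print(browser.get("report.txt"))   # -> lab 1 notes
```

The point of the sketch is the separation of concerns: the front end only knows how to issue requests, while every resource lives behind the back end.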
Case Study 1
'Pay by the Drink' Flexibility Creates Major Efficiencies and Revenue for Coca-Cola's International
Bottling Investments Group (BIG)
The Coca-Cola Company's sophisticated distribution model includes a partner network of franchise bottlers
that manufacture, package, merchandise and distribute branded beverages to their own customers and
vending partners, who then sell the products to consumers. All of these bottling partners work closely with
their customers (grocery stores, restaurants, street vendors, convenience stores, movie theaters and
amusement parks, etc.) to execute localized strategies developed in partnership with Coca Cola. This
network of bottlers sells Coca-Cola products to consumers at a rate of more than 1.9 billion servings a day.
Over a decade ago, Coca-Cola formed their Bottling Investments Group (BIG) to manage their company-
owned bottling assets. The mission of the group was to help bottlers operate at the same high standards that
Coca-Cola sets for all of its bottling franchisees around the world.
Today, BIG manages bottling operations in 18 markets including emerging markets such as India, Vietnam,
Sri Lanka, Nepal, Myanmar and Bangladesh and accounts for more than 25 percent of the total system
volume.
When Coca-Cola initially created BIG, each of the bottlers they brought in faced a different and distinct set
of business issues due to their unique markets. Despite these challenges, though, BIG succeeded in its vision
to become a model bottler by investing for the long-term in infrastructure and building the right culture to
ensure a sustainable healthy business.
"As we have grown through the years, our leadership stayed focused on implementing key strategic
initiatives in supply chain, sales, revenue and profit generation," said Javier Polit, former CIO, BIG.
"Additionally, we have worked to build leadership capability at all levels with a suite of world-class
development programs from front-line supervisor to senior executive."
This successful framework helps new bottlers joining BIG increase their efficiencies and revenues in less
time than they could do on their own through world class tool sets and proven processes. Eventually, many
bottlers transition out of BIG back into the franchise system and metrics show that these bottlers generally
continue to perform at high levels.
The Challenge
BIG's stated goal is to drive efficiencies, higher revenue, greater transparency and higher standards across
all of its bottlers. But the bottlers within BIG each faced unique challenges inherent to their business
and markets. Thus the challenge for the business was how to address the unique complexities and
requirements of a very diverse group of bottlers with an efficient infrastructure and standardized processes.
One key area of consideration was to reduce the complexity, rigidity and costs of running the mission-
critical applications that were common to each of the bottlers. Motivated by this and a desire to leave behind
its capital intensive, highly inflexible on-premises environments located in two outsourced data centers, BIG
began its foray into cloud computing in 2012.
The original solution involved outsourcing the hosting of these mission-critical applications, which included
the company's business-critical SAP systems. While this initial effort did begin to successfully move BIG
bottlers from a CapEx model to an OpEx model and provided some savings, the solution was not without
challenges. Despite these early moves to cloud, BIG's overall costs for running its mission-critical
applications were still quite heavy.
Reducing the cost to run these spinal cord applications represented a significant opportunity not only to
impact the company‘s bottom line, but also to add greater technological and financial flexibility into the
system.
The Solution
In spring of 2016, BIG began the process of transitioning to the Virtustream Enterprise Cloud. This complex
multi-system SAP migration transitioned seven of BIG‘s international bottlers over a six-month time period.
"This new model takes away the need to calculate the optimum service level for our cloud deployment by
working through complex pricing options and strong-arm negotiations, and instead, automatically and
dynamically optimizes service requirements to meet the demands of an individual IT environment or
application," explained Polit.
For BIG, this means that its bottlers can literally "pay by the drink," which not only provides significant cost
savings, but also offers transparency into consumption that can drive further efficiencies.
Virtustream's use of the latest Intel® Xeon® E7 v4 processors delivers cost-effective performance and
scalability, enabling these capabilities for BIG and their customers. Virtustream protects BIG's data by
leveraging key security features of the Intel® Xeon® processors, including Intel® AES-NI for data encryption
and Intel® TXT for added tamper-resistance through platform attestation.
These technologies also help to ensure that workloads are only moved to trusted servers and that all data is
protected both at rest and travelling between the company's data centers and Virtustream's,
meaning BIG can be confident that its intellectual property, customer and employee data and other sensitive
information are protected by some of the most advanced security technologies available.
2. Enlist major differences between SaaS, PaaS & IaaS.
IaaS
Infrastructure as a service (IaaS) is a cloud computing offering in which a vendor provides users access to
computing resources such as servers, storage and networking. Organizations use their own platforms and
applications within a service provider's infrastructure.
Key features
Instead of purchasing hardware outright, users pay for IaaS on demand.
Infrastructure is scalable depending on processing and storage needs.
Saves enterprises the costs of buying and maintaining their own hardware.
Because data is replicated across cloud infrastructure, the risk of a single point of failure is reduced.
Enables the virtualization of administrative tasks, freeing up time for other work.
PaaS
Platform as a service (PaaS) is a cloud computing offering that provides users with a cloud environment in
which they can develop, manage and deliver applications. In addition to storage and other computing
resources, users are able to use a suite of prebuilt tools to develop, customize and test their own applications.
Key features
PaaS provides a platform with tools to test, develop and host applications in the same environment.
Enables organizations to focus on development without having to worry about underlying
infrastructure.
Providers manage security, operating systems, server software and backups.
Facilitates collaborative work even if teams work remotely.
SaaS
Software as a service (SaaS) is a cloud computing offering that provides users with access to a vendor's
cloud-based software. Users do not install applications on their local devices. Instead, the applications reside
on a remote cloud network accessed through the web or an API. Through the application, users can store and
analyze data and collaborate on projects.
Key features
SaaS vendors provide users with software and applications via a subscription model.
Users do not have to manage, install or upgrade software; SaaS providers manage this.
Data is stored in the cloud, so local equipment failure does not result in loss of data.
Use of resources can be scaled depending on service needs.
Source - https://www.ibm.com/cloud/learn/iaas-paas-saas
Source - https://www.msigeek.com/7357/cloud-computing-service-models-benefits
Source - https://blog.crozdesk.com/tapping-saas-paas-iaas/
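The three service models can be summarized by tagging well-known offerings under them. A small illustrative sketch; the classifications below are common textbook examples, not an official list:

```python
# Illustrative tagging of example offerings under IaaS, PaaS and SaaS.
# These are widely cited textbook classifications, not a vendor's own list.

SERVICE_MODELS = {
    "IaaS": ["Amazon EC2", "Google Compute Engine", "Microsoft Azure VMs"],
    "PaaS": ["Google App Engine", "AWS Elastic Beanstalk", "Heroku"],
    "SaaS": ["Gmail", "Salesforce", "Microsoft Office 365"],
}

def model_of(service):
    """Return the service model a given offering is usually tagged under."""
    for model, examples in SERVICE_MODELS.items():
        if service in examples:
            return model
    return "unknown"

print(model_of("Amazon EC2"))   # -> IaaS
print(model_of("Gmail"))        # -> SaaS
```

The dividing line is who manages what: under IaaS you manage the OS and up, under PaaS only your application and data, and under SaaS only your usage of the software.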
3. Study and present a report on Jolicloud.
Jolicloud is a computing platform which makes the cloud simpler and more open. Jolicloud connects
you to all of your favorite online apps, social media, videos, photos and files from any device in the world.
Jolicloud is the creator of the Drive app, a new way to manage your storage online.
Jolicloud was a pioneer in cloud computing with the Jolibook, the first personal cloud computer, and Joli OS,
the first cloud OS designed for netbooks and recycled computers.
Application Manager
Perhaps the greatest thing about Jolicloud is its application manager. Hundreds of free apps are available,
and all can be installed with a single click.
Web Apps
A lot of the websites most people use every day – including Gmail and Google Calendar – are better thought
of as applications than as websites. Gmail, for example, is a complete email interface (and an
extremely powerful one at that). Such websites-as-applications are so common on today's Internet that we
even have a term for them: web apps.
Jolicloud offers hundreds of web apps in its application manager. Once installed, these web apps
run in their own window, separate from your browser.
Jolicloud was funded by two investors, Mangrove Capital Partners and Atomico.
Source- https://www.crunchbase.com/organization/jolicloud
4. Present a report on obstacles and vulnerabilities in cloud computing at a generic level.
The cloud user is responsible for application-level security. The cloud provider is responsible for physical
security, and likely for enforcing external firewall policies. Security for intermediate layers is shared
between the user and the operator.
Although the cloud can make external-facing security easier, it poses new problems related to internal security. Cloud
providers must guard against theft or denial-of-service attacks by users, and users need to be protected from one
another.
Data Transfer Bottlenecks: Transferring high volumes of data between two clouds might take from a few days to
even months, even over networks with high data rates.
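That claim is easy to sanity-check with simple arithmetic. The data volume, link speed and efficiency factor below are assumed example figures, chosen only to illustrate the scale of the problem:

```python
# Back-of-the-envelope check of the transfer-time claim above.
# Volume, link speed and efficiency are assumed example figures.

def transfer_days(volume_tb, link_gbps, efficiency=0.8):
    """Days needed to move volume_tb terabytes over a link_gbps link.

    efficiency accounts for protocol overhead and link contention.
    """
    bits = volume_tb * 1e12 * 8                    # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

print(round(transfer_days(100, 1), 1))    # 100 TB over 1 Gbps -> 11.6 days
print(round(transfer_days(1000, 1), 1))   # 1 PB  over 1 Gbps -> 115.7 days
```

Even a sustained gigabit link needs roughly four months for a petabyte, which is why providers offer physical shipping appliances for bulk transfers.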
Session Riding: Session riding happens when an attacker steals a user's cookie to use the application in the
name of the user. An attacker might also trick the user into sending authenticated requests to arbitrary
web sites (cross-site request forgery) to achieve various things.
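A standard defence against session riding is an anti-CSRF token bound to the session: the server only accepts state-changing requests that echo back a token a forged request cannot know. A minimal sketch using Python's standard library; the function names are illustrative, not a real framework API:

```python
# Sketch of a per-session anti-CSRF token. Function names are illustrative.
import hmac
import hashlib
import secrets

SERVER_KEY = secrets.token_bytes(32)   # kept secret on the server

def issue_token(session_id: str) -> str:
    """Derive an anti-CSRF token bound to this session."""
    return hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def request_is_genuine(session_id: str, presented_token: str) -> bool:
    """Reject forged requests: the presented token must match the session."""
    expected = issue_token(session_id)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, presented_token)

token = issue_token("session-42")
print(request_is_genuine("session-42", token))       # True: real user
print(request_is_genuine("session-42", "guessed"))   # False: forged request
```

Because the attacker's page cannot read the token out of the victim's session, a forged cross-site request fails the check even when it carries a valid stolen cookie.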
Virtual Machine Escape: In virtualized environments, the physical servers run multiple virtual machines
on top of hypervisors. An attacker can exploit a hypervisor remotely by using a vulnerability present in the
hypervisor itself – such vulnerabilities are quite rare, but they do exist. Additionally, a virtual machine can
escape from the virtualized sandbox environment and gain access to the hypervisor and consequently to all
the virtual machines running on it.
Reliability and Availability of Service: We expect our cloud services and applications to always be
available when we need them, which is one of the reasons for moving to the cloud. But this isn't always the
case, especially in bad weather with a lot of lightning, where power outages are common. CSPs have
uninterruptible power supplies, but even those can sometimes fail, so we can't rely on cloud services to be up
and running 100% of the time. We have to take a little downtime into consideration, but that's the same
when running our own private cloud.
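The point about 100% uptime can be quantified: even impressive-sounding availability percentages, of the kind commonly quoted in service-level agreements, permit real downtime each year. Pure arithmetic, assuming a 365-day year:

```python
# How much yearly downtime each availability level actually allows.

def downtime_hours_per_year(availability_pct):
    """Hours per year a service may be down at the given availability."""
    return (100 - availability_pct) / 100 * 365 * 24

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows {downtime_hours_per_year(pct):.2f} h/year of downtime")
# 99.0%  -> 87.60 h/year
# 99.9%  -> 8.76 h/year
# 99.99% -> 0.88 h/year
```

So "three nines" still leaves almost nine hours a year of permitted outage, which is the figure to weigh against the needs of 24/7 services.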
Data Protection and Portability: When choosing to switch to a cheaper cloud service provider,
we have to address the problem of data movement and deletion. The old CSP has to delete all the data we
stored in its data center so as not to leave the data lying around.
Alternatively, a CSP that goes out of business needs to provide the data to its customers so they can
move to an alternate CSP, after which the data needs to be deleted. What if the CSP goes out of business
without providing the data? In such cases, it's better to use a widely used CSP which has been around for a
while; but in any case, a data backup is still in order.
CSP Lock-in: We have to choose a cloud provider that will allow us to easily move to another provider
when needed. We don't want a CSP that forces us to use its own services, because sometimes
we would like to use one CSP for one thing and another CSP for something else.
Internet Dependency: By using cloud services, we're dependent upon the Internet connection, so if the
Internet temporarily fails due to a lightning strike or ISP maintenance, clients won't be able to connect to
the cloud services. The business will then lose money, because users won't be able to use
the service required for business operation, not to mention services that need to be available
24/7, like applications in a hospital, where human lives are at stake.
Source - https://www.cloudcomputing-news.net/news/2014/nov/21/top-cloud-computing-threats-and-vulnerabilities-enterprise-environment/
5. Present a report on Amazon cloud services.
Amazon Web Services offers a broad set of global cloud-based products including compute, storage,
databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise
applications: on-demand, available in seconds, with pay-as-you-go pricing. From data warehousing to
deployment tools, directories to content delivery, over 140 AWS services are available. New services can be
provisioned quickly, without the upfront capital expense. This allows enterprises, start-ups, small and
medium-sized businesses, and customers in the public sector to access the building blocks they need to
respond quickly to changing business requirements.
In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web
services—now commonly known as cloud computing. One of the key benefits of cloud computing is the
opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your
business. With the cloud, businesses no longer need to plan for and procure servers and other IT
infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of
servers in minutes and deliver results faster.
Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers
hundreds of thousands of businesses in 190 countries around the world.
The AWS Cloud spans 66 Availability Zones within 21 geographic regions around the world, with
announced plans for 12 more Availability Zones and four more Regions in Bahrain, Cape Town, Jakarta,
and Milan.
Source - https://aws.amazon.com/
Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database
storage, content delivery and other functionality to help businesses scale and grow.
In simple words, AWS allows you to do the following things:
1. Run web and application servers in the cloud to host dynamic websites.
2. Securely store all your files in the cloud so you can access them from anywhere.
3. Use managed databases like MySQL, PostgreSQL, Oracle or SQL Server to store information.
4. Deliver static and dynamic files quickly around the world using a Content Delivery Network (CDN).
5. Send bulk email to your customers.
Now that you know what you can do with AWS, let's have an overview of various AWS services.
Basic Terminologies
1. Region — A region is a geographical area. Each region consists of two (or more) availability zones.
2. Availability Zone — It is simply a data center.
3. Edge Location — They are CDN (Content Delivery Network) endpoints for CloudFront.
Compute
1. EC2 (Elastic Compute Cloud) — Virtual servers in the cloud on which you have OS-level
control and can run your own applications.
2. Lightsail — If you don't have any prior experience with AWS, this is for you. It automatically
deploys and manages the compute, storage and networking capabilities required to run your applications.
3. ECS (Elastic Container Service) — A highly scalable container service that allows you to run
Docker containers in the cloud.
4. EKS (Elastic Container Service for Kubernetes) — Allows you to use Kubernetes on
AWS without installing and managing your own Kubernetes control plane. It is a relatively new
service.
5. Lambda — AWS's serverless technology that allows you to run functions in the cloud. It's a huge
cost saver as you pay only when your functions execute.
6. Batch — Enables you to easily and efficiently run batch computing workloads of any scale on
AWS using Amazon EC2 and EC2 Spot Fleet.
7. Elastic Beanstalk — Allows automated deployment and provisioning of resources, like a highly
scalable production website.
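Lambda's "pay only when your functions execute" model can be made concrete with a little arithmetic. The per-request and per-GB-second rates below are assumed placeholder figures for illustration, not AWS's actual price list:

```python
# Sketch of the pay-per-execution billing model behind serverless compute.
# The rates are assumed placeholders, not AWS's published prices.

PRICE_PER_REQUEST = 0.0000002        # assumed $ per invocation
PRICE_PER_GB_SECOND = 0.0000166667   # assumed $ per GB-second of compute

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Cost for a month when you pay only for what actually runs."""
    request_cost = invocations * PRICE_PER_REQUEST
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# One million invocations of a 200 ms function using 512 MB of memory:
print(f"${monthly_cost(1_000_000, 0.2, 0.5):.2f}")
# A month with zero invocations costs nothing at all:
print(f"${monthly_cost(0, 0.2, 0.5):.2f}")   # $0.00
```

The zero-invocation case is the key contrast with an always-on server: idle time is free, which is what makes this model attractive for spiky workloads.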
Storage
1. S3 (Simple Storage Service) — The storage service of AWS, in which we can store objects like files,
folders, images, documents, songs, etc. It cannot be used to install software, games or an operating
system.
2. EFS (Elastic File System) — Provides file storage for use with your EC2 instances. It uses the NFSv4
protocol and can be used concurrently by thousands of instances.
3. Glacier — An extremely low-cost archival service to store files for a long time, like a few years
or even decades.
4. Storage Gateway — A virtual machine that you install on your on-premises servers. Your on-premises
data can be backed up to AWS, providing more durability.
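Choosing between the storage services above usually comes down to access patterns: frequently used objects fit S3, while rarely touched archives fit Glacier. A minimal decision sketch; the 90-day cutoff and the function itself are assumptions for this example, not AWS guidance:

```python
# Illustrative storage-class decision. The 90-day threshold is an
# assumption for this example, not an AWS rule.

def suggest_storage(days_since_last_access, needs_instant_access):
    """Suggest a storage service based on how the data is used."""
    if needs_instant_access or days_since_last_access < 90:
        return "S3"          # frequently accessed, instantly retrievable
    return "Glacier"         # cheap archival storage, slower retrieval

print(suggest_storage(3, needs_instant_access=True))     # -> S3
print(suggest_storage(400, needs_instant_access=False))  # -> Glacier
```

The trade-off the sketch encodes is cost versus retrieval latency: archival tiers are far cheaper per gigabyte but take longer to read back.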
Databases
1. RDS (Relational Database Service) — Allows you to run relational databases like MySQL,
MariaDB, PostgreSQL, Oracle or SQL Server. These databases are fully managed by AWS, which
handles administrative tasks such as patching.
2. DynamoDB — AWS's fully managed, highly scalable NoSQL database service.
3. ElastiCache — A way of caching data inside the cloud. It can be used to take load off your
database by caching the most frequent queries.
4. Neptune — Launched recently; a fast, reliable and scalable graph database service.
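The caching idea behind ElastiCache can be shown in miniature with a plain in-memory dictionary. The "database" below is a stub standing in for a service like RDS; all names are illustrative:

```python
# The ElastiCache idea in miniature: answer repeated queries from an
# in-memory cache so the database sees each query only once.

db_hits = 0

def query_database(sql):
    """Stub standing in for a slow relational database call."""
    global db_hits
    db_hits += 1
    return f"rows for: {sql}"

cache = {}

def cached_query(sql):
    if sql not in cache:              # cache miss -> go to the database
        cache[sql] = query_database(sql)
    return cache[sql]                 # cache hit -> served from memory

for _ in range(5):
    cached_query("SELECT * FROM orders")
print(db_hits)   # -> 1: the database was queried once, not five times
```

A real deployment adds expiry and invalidation so stale results are refreshed, but the load-shedding principle is exactly this.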
Migration
1. DMS (Database Migration Service) — Can be used to migrate on-site databases to AWS. It also
allows you to migrate from one type of database to another, e.g. from Oracle to MySQL.
2. SMS (Server Migration Service) — Allows you to migrate on-site servers to AWS easily and quickly.
3. Snowball — A briefcase-sized appliance that can be used to move terabytes of data into and
out of AWS.
Networking & Content Delivery
1. VPC (Virtual Private Cloud) — Simply a data center in the cloud in which you deploy all your
resources. It allows you to better isolate your resources and secure them.
2. CloudFront — AWS's Content Delivery Network (CDN), consisting of edge locations that
cache resources.
3. Route 53 — AWS's highly available DNS (Domain Name System) service. You can register
domain names through it.
4. Direct Connect — Lets you connect your data center to an Availability Zone using a high-speed
dedicated line.
5. API Gateway — Allows you to create, store and manage APIs at scale.
Besides these, Analytics tools; Security, Identity and Compliance; Application and Mobile services;
Desktop & App Streaming, etc. are also part of AWS.
Source-https://blog.usejournal.com/what-is-aws-and-what-can-you-do-with-it-395b585b03c
6. Present a report on Microsoft cloud services.
Azure was announced in October 2008, started with the codename "Project Red Dog", and was released on February
1, 2010, as "Windows Azure" before being renamed "Microsoft Azure" on March 25, 2014. Most users run
Linux on Azure, choosing from the many Linux distributions offered, including Microsoft's own Linux-based
Azure Sphere.
Source - Wikipedia
Azure is an ever-expanding set of cloud computing services to help your organisation meet its business
challenges. With Azure, your business or organisation has the freedom to build, manage and deploy
applications on a massive, global network using your preferred tools and frameworks.
Source - https://azure.microsoft.com/en-in/overview/what-is-azure/
7. Present a report on cost management in the cloud.
Cost Analytics
The first step toward complete visibility into the cloud services used is understanding actual usage patterns
and trends. No matter your cloud environment, in addition to tracking what you have spent, it is important to
project what you will be spending. You need consolidated and granular details in the form of interactive
graphical and tabular reports, across multiple dimensions and time frames, in a multi-cloud environment to
correlate data for analysis and reporting against business objectives.
Budgets
Define and allocate budgets for departments, cost centers and projects, and ensure approval mechanisms to
avoid cost overruns by sending out alerts when thresholds are breached. Use the showback report to
charge back departments for their cloud usage and limit the cost and use of resources. This alignment of cost
with value ensures the anticipated business benefit once the cloud resources are in production.
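The budget-alert mechanism described above can be sketched in a few lines. The departments, budget amounts and the 80% threshold are made-up example values:

```python
# Sketch of per-department budget alerts. Departments, budgets and the
# 80% threshold are made-up example values.

BUDGETS = {"engineering": 10_000, "marketing": 4_000}   # $ per month
ALERT_THRESHOLD = 0.8                                    # alert at 80% spent

def check_budgets(spend):
    """Return an alert message for every department over its threshold."""
    alerts = []
    for dept, budget in BUDGETS.items():
        used = spend.get(dept, 0) / budget
        if used >= ALERT_THRESHOLD:
            alerts.append(f"{dept}: {used:.0%} of budget used")
    return alerts

print(check_budgets({"engineering": 9_500, "marketing": 1_200}))
# -> ['engineering: 95% of budget used']
```

In practice this check would run against billing data on a schedule, feeding the showback report that departments are charged against.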
Source - https://dzone.com/articles/fundamentals-of-cloud-cost-management
8. Enlist and explain legal issues involved in the cloud with the help of a case study.
Service levels
It should go without saying that the starting point should be the business case and intended use of the
service, and not any legal document, such as a service level agreement. Understand what business problem
the service will be solving; the intended internal and external users; when, where and how the service will be
accessed; whether or not the service is business-critical; the practical consequences if the service is down or
degraded for any period of time; and how the use of the service may change over time. Then, ensure the
agreement reflects your needs.
Almost invariably, the agreement will address availability, planned outages, critical and noncritical outages,
service credits and termination rights. Typically, the sole remedy in case of a breach of the agreement is a
service credit, which is usually capped based on some percentage of fees paid during the previous 12-month
period. Customers should ask whether the credit is simply window dressing or actually a meaningful
economic remedy that would deter the vendor from breaching the agreement.
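A worked example makes the capped-credit question concrete. The credit schedule and the 30% cap below are hypothetical figures for illustration; real agreements vary widely:

```python
# Worked example of a capped service-credit remedy. The schedule and the
# 30% cap are hypothetical figures, not any vendor's actual terms.

CREDIT_SCHEDULE = [          # (minimum monthly uptime %, credit % of fee)
    (99.9, 0),               # SLA met: no credit owed
    (99.0, 10),
    (95.0, 25),
    (0.0, 30),               # floor: credit capped at 30% of monthly fees
]

def service_credit(monthly_fee, uptime_pct):
    """Credit owed for a month, per the hypothetical schedule above."""
    for min_uptime, credit_pct in CREDIT_SCHEDULE:
        if uptime_pct >= min_uptime:
            return monthly_fee * credit_pct / 100
    return 0.0

print(service_credit(10_000, 99.95))   # SLA met -> 0.0
print(service_credit(10_000, 98.5))    # -> 2500.0, a 25% credit
```

Comparing that maximum credit against the actual revenue lost during an outage is exactly how a customer tests whether the remedy is window dressing or a meaningful deterrent.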
Commercial/Other
The considerations above are a good starting point, but they are just the tip of the iceberg. Here are a few
more to consider: storage fees; if and when there are automatic upgrades; whether or not multiple
environments (e.g., development, test and production) are available to the customer; how customization works in a
cloud setting; how many data recoveries the vendor provides free of charge (and what the costs of
additional backups are); and how easy it is to move to another cloud and how the vendor will support the
transition.
Source - https://www.forbes.com/2010/04/12/cloud-computing-enterprise-technology-cio-network-legal.html#3faeedb02ebe
9. Explain the process of migrating to the cloud with a case study.
Source - https://media.amazonwebservices.com/CloudMigration-main.pdf
10. Present a report on Google cloud and cloud services.
Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing services that runs on the
same infrastructure that Google uses internally for its end-user products, such as Google
Search and YouTube. Alongside a set of management tools, it provides a series of modular cloud services
including computing, data storage, data analytics and machine learning. Registration requires a credit card or
bank account details.
Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless
computing environments.
In April 2008, Google announced App Engine, a platform for developing and hosting web applications in
Google-managed data centers, which was the first cloud computing service from the company. The service
became generally available in November 2011. Since the announcement of App Engine, Google added
multiple cloud services to the platform.
Google Cloud Platform is a part of Google Cloud, which includes the Google Cloud Platform public cloud
infrastructure, as well as G Suite, enterprise versions of Android and Chrome OS, and application
programming interfaces (APIs) for machine learning and enterprise mapping services.
Google Cloud Platform is a set of Computing, Networking, Storage, Big Data, Machine Learning and
Management services provided by Google that runs on the same Cloud infrastructure that Google uses
internally for its end-user products, such as Google Search, Gmail, Google Photos and YouTube.