Unit 1: Cloud Computing (Subject Code TCS 074)
Introduction
B.Tech 7th Semester, Department of CSE
Mr. Saurabh Gupta, M.Tech (CSE) from IIT
Dr. Shakuntala Misra National Rehabilitation University, Lucknow
Branch-wise Applications of Cloud Computing
Cloud service providers offer various applications in the fields of art, business, data storage and backup services, education, entertainment, management, social networking, etc.
The most widely used cloud computing applications are given below -
With traditional on-premises IT, an organization had to:
• Buy stacks of servers and other hardware components
• Maintain and upgrade the servers
• Recruit network professionals
The National Institute of Standards and Technology (NIST) has a more comprehensive
definition of cloud computing. It describes cloud computing as "a model for enabling
ubiquitous, convenient, on-demand network access to a shared pool of configurable
computing resources (e.g., networks, servers, storage, applications and services) that can be
rapidly provisioned and released with minimal management effort or service provider
interaction."
• Small as well as large IT companies follow the traditional methods to provide IT infrastructure. That means every IT company needs a server room; it is the basic need of an IT company.
• In that server room, there should be a database server, mail server, networking, firewalls, routers, modems, switches, sufficient QPS (queries per second, i.e., how much query load the server can handle), configurable systems, a high-speed network connection, and maintenance engineers.
Distributed Systems:
• The purpose of distributed systems is to share resources and to use them effectively and efficiently.
• In client/server computing, a server takes requests from client computers and shares its resources, applications, and/or data with the clients on the network.
• A client is a computing device that initiates contact with a server in order to make use of a shareable resource.
• A server may serve multiple clients at the same time, while a client typically contacts one server at a time.
• The client and server usually communicate via a computer network, but they may also reside on the same system.
• Client-server computing works on a request-and-response model: the client sends a request to the server, and the server responds with the desired information.
• The client and server must follow a common communication protocol so they can interact with each other. These protocols operate at the application layer (HTTP, for example).
• A server can only accommodate a limited number of client requests at a time, so it uses a priority-based system to respond to them.
• An example of a client-server system is a web server, which returns web pages to the clients that requested them.
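To make the request-and-response pattern concrete, here is a minimal sketch using only Python's standard library; the handler class, port, and message are illustrative choices, not part of the course material.

    # Minimal client/server sketch: an HTTP server that answers GET requests.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"Hello from the server"
            self.send_response(200)                        # respond to the client's request
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # A client would initiate contact, e.g.:
        #   urllib.request.urlopen("http://localhost:8000").read()
        HTTPServer(("localhost", 8000), HelloHandler).serve_forever()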
Parallel Computing vs. Distributed Computing:
• Parallel computing improves system performance; distributed computing improves system scalability, fault tolerance, and resource-sharing capabilities.
• In parallel computing, processors communicate with each other through a bus; in distributed computing, computers communicate with each other through message passing.
Mainframe computing:
• Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines.
• They are responsible for handling large volumes of data, such as massive input/output operations.
• Even today, they are used for bulk processing tasks such as online transactions.
Cluster Computing
A computer cluster refers to a network of computers of the same type whose goal is to work as a single unit. Such a network is used when a resource-hungry task requires high computing power or memory. Two or more computers of the same type are clubbed together to form a cluster and perform the task.
Grid Computing
Grid computing refers to a network of computers of the same or different types whose goal is to provide an environment in which a task can be performed by multiple computers together on a need basis. Each computer can also work independently.
Utility Computing
• Customers are charged on a pay-as-you-go basis without any upfront cost. The utility model maximizes the efficient use of resources while minimizing the associated cost.
• Utility computing has the advantage of a low initial cost to acquire computing resources.
• The customer can access a virtually unlimited amount of computing resources over the internet or a virtual private network, while the provider manages the backend infrastructure and computing resources.
• On the basis of the computing models above, the concept of cloud computing emerged and was later implemented.
• Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold like a utility, just like water or electricity.
• Applications were later delivered to enterprises over the Internet, and in this way the dream of computing sold as a utility came true.
• 2002- Amazon launched Amazon Web Services (AWS), providing services like storage, computation, and even human intelligence.
• 2006- Amazon launched Elastic Compute Cloud (EC2), which allows users to launch computing resources on a pay-per-use basis.
• 2008- Google launched its Google App Engine (GAE) Platform-as-a-Service (PaaS),
allowing developers to host web applications in its managed data centers.
• 2010- Microsoft entered the cloud market with the launch of its cloud computing platform, Azure.
• 2011- IBM launched SmartCloud, later renamed Bluemix and relaunched in 2014, offering PaaS services to organizations including academic institutions.
• 2011- IIT Delhi launched Baadal cloud for Infrastructure-as-a-Service (IaaS) services.
• 2013- C-DAC Chennai launched the Meghdoot cloud platform for IaaS services.
• On-demand self-service:
Cloud computing resources can be provisioned without human interaction from the
service provider. In other words, the user can provision additional computing resources
(storage space, virtual machine instances, database instances etc.) as needed without
going through the cloud service provider.
• Broad network access:
Cloud computing resources are available over the network and can be accessed by diverse customer platforms. In other words, cloud services are available over a network, ideally a high-bandwidth communication link such as the internet, or, in the case of a private cloud, a local area network (LAN).
• Resource pooling:
Resource pooling means that multiple customers are serviced from the same physical resources. The provider's resource pool should be large and flexible enough to service multiple client requirements and to provide economies of scale. When it comes to resource pooling, resource allocation must not impact the performance of critical applications.
• Rapid elasticity:
One of the great things about cloud computing is the ability to quickly provision resources in the cloud as users need them, and then to remove them when they are no longer needed. Cloud computing resources can scale up or down rapidly and, in some cases, automatically, in response to business demands. This is a key feature of cloud computing. Usage, capacity, and therefore cost can be scaled up or down with no additional contracts or penalties.
• Measured service:
Cloud computing resource usage is metered, and users pay accordingly for what they have used. Resource utilization can be optimized by leveraging charge-per-use capabilities. This
means that cloud resource usage—whether virtual server instances that are running or
storage in the cloud—gets monitored, measured and reported by the cloud service
provider. The cost model is based on “pay for what you use”—the payment is variable
based on the actual consumption by the user.
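As a rough illustration of the pay-for-what-you-use model, the sketch below computes a hypothetical monthly bill in Python; the hourly and per-GB rates are assumed placeholder figures, not real provider prices.

    # Hypothetical pay-per-use bill; the rates are illustrative only.
    INSTANCE_RATE = 0.0116   # assumed $ per instance-hour
    STORAGE_RATE = 0.023     # assumed $ per GB-month of storage

    def monthly_bill(instance_hours: float, storage_gb: float) -> float:
        # Metered usage: the bill varies with actual consumption.
        return instance_hours * INSTANCE_RATE + storage_gb * STORAGE_RATE

    # One server running the whole month (720 h) plus 100 GB of storage:
    print(round(monthly_bill(720, 100), 2))  # 10.65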
Elasticity vs. Scalability:
• Elasticity is used to meet sudden increases and decreases in the workload for a short period of time; scalability is used to meet a steady increase in the workload.
• Elasticity is used to meet dynamic changes, where the resources needed can increase or decrease; scalability is always used to address growth in an organization's workload.
• Elasticity is commonly used by small companies whose workload and demand increase only for a specific period of time; scalability is used by giant companies whose customer base grows persistently, in order to operate efficiently.
• Elasticity is short-term planning, adopted to deal with an unexpected increase in demand or seasonal demand; scalability is long-term planning, adopted to deal with an expected increase in demand.
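The difference can also be expressed in code. Below is a toy, threshold-based elasticity rule in Python; the CPU thresholds and the one-server-at-a-time step are assumptions for illustration.

    # Toy elasticity rule: scale out on a load spike, scale back in when it passes.
    def desired_servers(servers: int, avg_cpu: float,
                        low: float = 30.0, high: float = 80.0) -> int:
        if avg_cpu > high:
            return servers + 1        # short-term spike: add capacity
        if avg_cpu < low and servers > 1:
            return servers - 1        # spike over: release capacity
        return servers                # steady workload: no change

    print(desired_servers(servers=2, avg_cpu=91.0))  # 3
    print(desired_servers(servers=3, avg_cpu=12.0))  # 2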
1. Cost efficiency: The biggest reason behind companies shifting to cloud computing is that it costs considerably less than any on-premises technology. Companies no longer need to store data on their own disks, as the cloud offers enormous storage space, saving money and resources.
2. High speed: Cloud Computing lets us deploy the service quickly in fewer clicks. This
quick deployment lets us get the resources required for our system within minutes.
3. Excellent accessibility: Storing information in the cloud allows us to access it anywhere and anytime, regardless of the machine, making it a highly accessible and flexible technology of the present times.
4. Back-up and restore data: Once data is stored in the cloud, it is easier to back it up and recover it, which is quite a time-consuming process with on-premises technology.
5. Manageability: Cloud Computing eliminates the need for IT infrastructure updates and
maintenance since the service provider ensures timely, guaranteed, and seamless
delivery of our services and also takes care of all the maintenance and management of
our IT services according to the service-level agreement (SLA).
1. Vulnerability to attacks: Storing data in the cloud may pose serious challenges of information theft, since in the cloud all of a company's data is online. Security breaches are something that even the best organizations have suffered from, and they are a potential risk in the cloud as well. Although advanced security measures are deployed on the cloud, storing confidential data in the cloud can still be a risky affair.
2. Network connectivity dependency: Cloud Computing is entirely dependent on the
Internet. This direct tie-up with the Internet means that a company needs to have
reliable and consistent Internet service as well as a fast connection and bandwidth to
reap the benefits of Cloud Computing.
3. Downtime: Downtime is considered one of the biggest potential downsides of
using Cloud Computing. The cloud providers may sometimes face technical outages
that can happen due to various reasons, such as loss of power, low Internet
connectivity, data centers going out of service for maintenance, etc. This can lead to
a temporary downtime in the cloud service.
4. Vendor lock-in: When a company needs to migrate from one cloud platform to another, it might face serious challenges because of the differences between
vendor platforms. Hosting and running the applications of the current cloud platform
on some other platform may cause support issues, configuration complexities, and
additional expenses. The company data might also be left vulnerable to security
attacks due to compromises that might have been made during migrations.
5. Limited control: Cloud customers may face limited control over their deployments.
Cloud services run on remote servers that are completely owned and managed by
service providers, which makes it hard for the companies to have the level of control
that they would want over their back-end infrastructure.
Deployments can scale up to accommodate spikes in usage and down when demands
decrease. Customers are billed on a pay-per-use basis. When this model is used to
create a hybrid cloud environment, it is sometimes called “cloud bursting.”
Cloud provisioning has several benefits that are not available with traditional provisioning
approaches, such as:
Speed: An organization's developers can quickly spin up several workloads on demand, so companies no longer require IT administrators to provision and manage computing resources.
Like any other technology, cloud provisioning also presents several challenges,
including:
Complex management and monitoring: Organizations may need several provisioning
tools to customize their cloud resources. Many also deploy workloads on more than
one cloud platform, making viewing everything on a central console more challenging.
Resource and service dependencies: Cloud applications and workloads often tap into
basic infrastructure resources, such as computing, networking, and storage. But public
cloud service providers offer higher-level ancillary services like serverless functions
and machine learning (ML) and big data capabilities. Such services may carry
dependencies that can lead to unexpected overuse and surprise costs.
Policy enforcement: User-driven cloud provisioning helps streamline requests and manage resources, but it requires strict rules to ensure unnecessary resources are not provisioned. This is time-consuming, since different users require varying levels of access and frequency. Setting up rules for who can provision which resources, for how long, and with what budgetary controls can be difficult.
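A minimal sketch of such a policy check in Python; the roles and quota numbers are hypothetical, chosen only to show the idea.

    # Hypothetical provisioning policy: per-role instance quotas.
    QUOTAS = {"developer": 2, "team-lead": 5, "admin": 20}

    def may_provision(role: str, running: int, requested: int) -> bool:
        # Deny any request that would exceed the role's quota.
        return running + requested <= QUOTAS.get(role, 0)

    print(may_provision("developer", running=1, requested=1))  # True
    print(may_provision("developer", running=2, requested=1))  # False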
• An Amazon EC2 instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for
running applications on the Amazon Web Services (AWS) infrastructure.
Instances are created from Amazon Machine Images (AMIs). The machine images are like templates: they are configured with an operating system (OS) and other software, which determine the user's operating environment. Users can select an AMI provided by AWS, by the user community, or through the AWS Marketplace. Users can also create their own AMIs and share them.
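For example, an instance can be launched from an AMI programmatically with the AWS SDK for Python (boto3). This is a minimal sketch, assuming valid AWS credentials; the AMI ID is a placeholder.

    # Launch one EC2 instance from an AMI using boto3 (pip install boto3).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])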
• Operating system: EC2 supports many OSes, including Linux, Microsoft Windows
Server, CentOS and Debian.
• Persistent storage: Amazon's Elastic Block Storage (EBS) service enables block-level
storage volumes to be attached to EC2 instances and be used as hard drives. With
EBS, it is possible to increase or decrease the amount of storage available to an EC2
instance and attach EBS volumes to more than one instance at the same time.
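A sketch of creating and attaching an EBS volume with boto3; the availability zone and instance ID are placeholders, and attaching a volume to more than one instance additionally requires a Multi-Attach-capable volume type.

    # Create a 10 GiB EBS volume and attach it to an instance as a block device.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=10, VolumeType="gp3")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(
        VolumeId=vol["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
        Device="/dev/sdf",                 # device name exposed to the instance
    )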
Amazon CloudWatch: This web service allows for the monitoring of AWS cloud services and
the applications deployed on AWS. CloudWatch can be used to collect, store and analyze
historical and real-time performance data. It can also proactively monitor applications,
improve resource use, optimize costs and scale up or down based on changing workloads.
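A sketch of pulling historical performance data from CloudWatch with boto3; the instance ID is a placeholder.

    # Fetch the average CPU utilization of an EC2 instance over the last hour.
    import boto3
    from datetime import datetime, timedelta, timezone

    cw = boto3.client("cloudwatch", region_name="us-east-1")
    now = datetime.now(timezone.utc)
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,                 # 5-minute buckets
        Statistics=["Average"],
    )
    for point in stats["Datapoints"]:
        print(point["Timestamp"], point["Average"])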
Automated scaling: Amazon EC2 Auto Scaling automatically adds or removes capacity from
Amazon EC2 virtual servers in response to application demand. Auto Scaling provides more
capacity to handle temporary increases in traffic during a product launch or to increase or
decrease capacity based on whether use is above or below certain thresholds.
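As a sketch, the boto3 call below attaches a target-tracking policy that keeps average CPU near a target by adding or removing instances; the Auto Scaling group name is a placeholder and the group is assumed to already exist.

    # Attach a target-tracking scaling policy to an existing Auto Scaling group.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-web-asg",  # placeholder group name
        PolicyName="keep-cpu-near-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,            # add/remove capacity around 50% CPU
        },
    )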
Bare-metal instances: These instances give applications direct access to the hardware resources of the underlying server, such as the processor, storage, and network. They are not virtualized and do not run a hypervisor, reducing overhead, providing extra security, and increasing processing power.
• Amazon EC2 Fleet: This service enables the deployment and management of instances
as a single virtual server. The Fleet service makes it possible to launch, stop and
terminate EC2 instances across EC2 instance types with one action. Amazon EC2 Fleet
also provides programmatic access to fleet operations using an API. Fleet management
can be integrated into existing management tools. With EC2 Fleet, policies can be
scaled to automatically adjust the size of a fleet to match the workload.
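A sketch of that programmatic access with boto3; the launch template ID is a placeholder and a launch template is assumed to already exist.

    # Request a small EC2 Fleet with a single API call.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.create_fleet(
        Type="instant",  # one-time synchronous launch
        LaunchTemplateConfigs=[{
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",  # placeholder
                "Version": "$Latest",
            }
        }],
        TargetCapacitySpecification={
            "TotalTargetCapacity": 2,
            "DefaultTargetCapacityType": "on-demand",
        },
    )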
• Pause and resume instances: EC2 instances can be paused and resumed from the
same state later on. For example, if an application uses too many resources, it can be
paused without incurring charges for instance usage.
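In boto3 terms this maps to stopping and later starting the instance; hibernation preserves the in-memory state, assuming the instance was launched with hibernation enabled. The instance ID is a placeholder.

    # Pause an instance (hibernate) and resume it later.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)
    # ... later, resume from the saved state:
    ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])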
Instance types are grouped into families based on target application profiles. These
groups include the following:
• Compute optimized: Compute optimized instances are used to run big data applications that require large amounts of processing power and memory on the AWS cloud. These instances are designed and optimized for running computational and data-intensive applications that require fast network performance, extensive availability, and high input/output (I/O) operations per second (IOPS). Examples of such applications include scientific and financial modeling and simulation, machine learning, enterprise data warehousing, and business intelligence.
• Graphics processing unit (GPU): These instances provide a way to run graphics-
intensive applications faster than with the standard EC2 instances. Systems that
rely on GPUs include gaming and design work. For example, Linux distributions
often take advantage of GPUs for rendering graphical user interfaces, improving
compression speeds and speeding up database queries.
Cloud economics is not just about costs in actual monetary terms, but also about
the opportunity costs of the cloud and the peculiarities of managing costs in a
highly dynamic environment.
Cloud total cost of ownership (TCO): TCO is the total cost of adopting, operating, and
provisioning cloud infrastructure. TCO is helpful for understanding return on investment.
Businesses have always performed TCO analysis for traditional IT infrastructure.
However, performing TCO analysis for cloud computing can be challenging because the
environment is inherently more complex and dynamic than on-premises environments.
Getting an accurate TCO for cloud computing means capturing the purchase price of on-
premises vs. cloud solutions as well as the intangible costs of either solution. In practice,
this means:
• Calculating the cost of your current IT infrastructure
• Estimating the total cost of cloud adoption (including migration costs)
• Quantifying the intangible benefits of the cloud
The overall goal is to achieve a lower TCO compared to on-premises infrastructure, but it
can also be about justifying a higher TCO by listing the intangible benefits associated
with the cloud, such as agility and greater speed to market.
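A back-of-the-envelope TCO comparison in Python; every figure below is an assumed placeholder, not real pricing.

    # Rough 3-year TCO comparison; all numbers are illustrative assumptions.
    YEARS = 3
    onprem_tco = 120_000 + YEARS * 30_000     # hardware purchase + yearly ops cost
    cloud_tco = 15_000 + YEARS * 12 * 4_000   # one-time migration + monthly bill
    print("on-premises TCO:", onprem_tco)     # 210000
    print("cloud TCO:", cloud_tco)            # 159000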
With pay-as-you-go pricing, your business will have variable cloud computing bills that depend on the services you use and how they are consumed. While this model may save your business upfront capital expenditure, it can become a huge financial drain if resources are not managed properly.
Elasticity:
Cloud computing eliminates the need for over-provisioning because you pay only for
what you use. Cloud computing platforms, such as AWS, dynamically allocate
resources to projects and processes, ensuring that a business has the right amount
of resources it needs at any given time. This increases cost efficiency and allows
businesses to optimize resource usage.
This elasticity is one of the most appealing aspects of cloud computing and a major
selling point when making a case for switching to the cloud.
On-demand pricing:
With on-demand pricing, you pay for resources as you consume them, with no long-term commitment. This means cloud costs can quickly spiral out of control if you are not monitoring them regularly and making data-driven decisions.
QUESTIONS
Q1: Define cloud computing and its advantages and disadvantages.
Q5: List out some cloud computing vendors and the services they
provide.
Q6: Describe distributed computing and parallel computing in detail.