
UNIT V APP IMPLEMENTATION IN CLOUD

Cloud Providers Overview – Virtual Private Cloud – Scaling (Horizontal and


vertical) – Virtual Machines– Docker Container – Kubernetes.

CLOUD PROVIDERS OVERVIEW

 A cloud service provider is a third-party company offering a cloud-based platform,


infrastructure, application, or storage services.
 Much like a homeowner would pay for a utility such as electricity or gas,
companies typically have to pay only for the amount of cloud services they use, as
business demands require.
 Cloud service providers are companies that establish public clouds, manage
private clouds, or offer on-demand cloud computing components (also known as
cloud computing services) like Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS).
 Cloud services can reduce business process costs when compared to on-premise
IT.
 These clouds aren’t usually deployed as a standalone infrastructure solution, but
rather as part of a hybrid cloud.
Why use a cloud service provider?
Using a cloud service provider is a helpful way to access computing services that you
would otherwise have to provide on your own, such as:
Infrastructure: The foundation of every computing environment. This infrastructure
could include networks, database services, data management, data storage (known in this
context as cloud storage), servers (cloud is the basis for serverless computing), and
virtualization.
Platforms: The tools needed to create and deploy applications. These platforms could include operating systems like Linux®, middleware, and runtime environments.
Software: Ready-to-use applications. This software could be custom or standard
applications provided by independent service providers.
Benefits
1. Cost and flexibility. The pay-as-you-go model of cloud services enables
organizations to only pay for the resources they consume. Using a cloud service
provider also eliminates the need for IT-related capital equipment purchases.
Organizations should review the details of cloud pricing to accurately break down
cloud costs.
2. Scalability. Customer organizations can easily scale up or down the IT resources
they use based on business demands.
3. Mobility. Resources and services purchased from a cloud service provider can be
accessed from any physical location that has a working network connection.
4. Disaster recovery. Cloud computing services typically offer quick and reliable
disaster recovery.
Challenges
1. Hidden costs. Cloud use may incur expenses not factored into the initial return on
investment analysis. For example, unplanned data needs can force a customer to
exceed contracted amounts, leading to extra charges. To be cost-effective,
companies also must factor in additional staffing needs for monitoring and
managing cloud use. Terminating use of on-premises systems also has costs, such
as writing off assets and data cleanup.
2. Cloud migration. Moving data to and from the cloud can take time. Companies
might not have access to their critical data for weeks, or even months, while large
amounts of data are first transferred to the cloud.
3. Cloud security. When trusting a provider with critical data, organizations risk
security breaches, compromised credentials and other substantial security risks.
Also, providers may not always be transparent about security issues and practices.
Companies with specific security needs may rely on open source cloud security
tools, in addition to the provider's tools.
4. Performance and outages. Outages, downtime and technical issues on the
provider's end can render necessary data and resources inaccessible during critical
business events.
5. Complicated contract terms. Organizations contracting cloud service providers
must actively negotiate contracts and service-level agreements (SLAs). Failure to
do so can result in the provider charging high prices for the return of data, high
prices for early service termination and other penalties.
6. Vendor lock-in. High data transfer costs or use of proprietary cloud technologies
that are incompatible with competitor services can make it difficult for customers
to switch CSPs. To avoid vendor lock-in, companies should have a cloud exit
strategy before signing any contracts.

Top 5 Cloud Service Providers In 2023


1. Amazon Web Services: Best in Cloud Computing
2. Microsoft Azure: Best in Hybrid Cloud
3. Google Cloud Platform: Best in Application Deployment
4. IBM Cloud: Best in Cloud-based AI
5. Oracle: Best in Databases

AWS
Offered services: Amazon Elastic Compute Cloud, AWS Lambda, Amazon Simple Storage Service, Elastic Block Store, Amazon Virtual Private Cloud, Amazon Route 53
Key features: Scalability, cost-effectiveness and affordability, reliability, security, global reach of the services

Azure
Offered services: Azure Kubernetes Service, Azure SQL, Azure Machine Learning, Azure Backup, Azure Cosmos DB, Azure Active Directory
Key features: Flexibility, analytics support, strong IT support, scalability, affordability, reliability

Google Cloud
Offered services: Google Compute Engine, Google Kubernetes Engine, Google Cloud Spanner, Google Cloud Virtual Network
Key features: Affordability, user-friendliness, speed, advanced admin control capabilities, cloud-based data transfer

IBM
Offered services: IBM Cloud Code Engine, IBM Hyper Protect Virtual Servers, IBM Cloud Functions, IBM WebSphere Application Servers, IBM Power Systems Virtual Servers
Key features: High availability, cloud infrastructure administration, open-source technology integration, private, public, and hybrid cloud support, persistent data storage

Oracle
Offered services: Oracle Cloud Infrastructure, Oracle Big Data Cloud, Oracle Database Cloud Service, Oracle Autonomous Database
Key features: Built-in database optimization, reliability and security, cost-efficiency and affordability, high availability, scalability, flexibility
VIRTUAL PRIVATE CLOUD

 A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a
public cloud.
 VPC customers can run code, store data, host websites, and do anything else they
could do in an ordinary private cloud, but the private cloud is hosted remotely by a
public cloud provider. (Not all private clouds are hosted in this fashion.)
 VPCs combine the scalability and convenience of public cloud computing with
the data isolation of private cloud computing.
 Imagine a public cloud as a crowded restaurant, and a virtual private cloud as a
reserved table in that crowded restaurant. Even though the restaurant is full of
people, a table with a "Reserved" sign on it can only be accessed by the party who
made the reservation. Similarly, a public cloud is crowded with various cloud
customers accessing computing resources – but a VPC reserves some of those
resources for use by only one customer.
 A public cloud is shared cloud infrastructure. Multiple customers of the cloud
vendor access that same infrastructure, although their data is not shared – just like
every person in a restaurant orders from the same kitchen, but they get different
dishes.
 Public cloud service providers include AWS, Google Cloud Platform, and
Microsoft Azure, among others.
 The technical term for multiple separate customers accessing the same cloud
infrastructure is "multitenancy."
 A private cloud, however, is single-tenant. A private cloud is a cloud service that
is exclusively offered to one organization. A virtual private cloud (VPC) is a
private cloud within a public cloud; no one else shares the VPC with the VPC
customer.
How is a VPC isolated within a public cloud?
A VPC isolates computing resources from the other computing resources available in the
public cloud. The key technologies for isolating a VPC from the rest of the public cloud
are:
Subnets: A subnet is a range of IP addresses within a network that are reserved so that
they're not available to everyone within the network, essentially dividing part of the
network for private use. In a VPC these are private IP addresses that are not accessible
via the public Internet, unlike typical IP addresses, which are publicly visible.
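As a concrete illustration, Python's standard ipaddress module can show how a private address range is carved into subnets. This is only a sketch; the 10.0.0.0/16 block and the /24 split are hypothetical choices, not tied to any particular cloud provider:

```python
import ipaddress

# A hypothetical VPC assigned the private CIDR block 10.0.0.0/16.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the VPC into /24 subnets; each could be reserved for one
# tier of an application (for example web, app, and database tiers).
subnets = list(vpc.subnets(new_prefix=24))

print(vpc.is_private)   # True: 10.0.0.0/16 is RFC 1918 private space
print(len(subnets))     # 256 subnets of 256 addresses each
print(subnets[0])       # 10.0.0.0/24
```

Because the whole block is RFC 1918 private space, none of these addresses are routable on the public Internet, which is exactly the isolation property a VPC subnet relies on.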
VLAN: A LAN is a local area network, or a group of computing devices that are all
connected to each other without the use of the Internet. A VLAN is a virtual LAN. Like a
subnet, a VLAN is a way of partitioning a network, but the partitioning takes place at a
different layer within the OSI model (layer 2 instead of layer 3).
VPN: A virtual private network (VPN) uses encryption to create a private network over
the top of a public network. VPN traffic passes through publicly shared Internet
infrastructure – routers, switches, etc. – but the traffic is scrambled and not visible to
anyone.
A VPC will have a dedicated subnet and VLAN that are only accessible by the VPC
customer. This prevents anyone else within the public cloud from accessing computing
resources within the VPC – effectively placing the "Reserved" sign on the table. The
VPC customer connects via VPN to their VPC, so that data passing into and out of the
VPC is not visible to other public cloud users.
Some VPC providers offer additional customization with:
 Network Address Translation (NAT): This feature matches private IP addresses
to a public IP address for connections with the public Internet. With NAT, a
public-facing website or application could run in a VPC.
 BGP route configuration: Some providers allow customers to customize BGP
routing tables for connecting their VPC with their other infrastructure.
Advantages of using a VPC instead of a private cloud
Scalability: Because a VPC is hosted by a public cloud provider, customers can add
more computing resources on demand.
Easy hybrid cloud deployment: It's relatively simple to connect a VPC to a public
cloud or to on-premises infrastructure via the VPN.
Better performance: Cloud-hosted websites and applications typically perform better
than those hosted on local on-premises servers.
Better security: The public cloud providers that offer VPCs often have more resources
for updating and maintaining the infrastructure, especially for small and mid-market
businesses. For large enterprises or any companies that face extremely tight data security
regulations, this is less of an advantage.

SCALING IN CLOUD COMPUTING

 Scaling can be described as changing the size of something, for instance scaling a
business. It means the same thing in the context of computing resources and databases.
 Cloud scalability in cloud computing refers to increasing or decreasing IT
resources as needed to meet changing demand. Scalability is one of the hallmarks
of the cloud and the primary driver of its explosive popularity with businesses.
 Data storage capacity, processing power, and networking can all be increased by
using existing cloud computing infrastructure. Scaling can be done quickly and
easily, usually without any disruption or downtime.
 Third-party cloud providers already have the entire infrastructure in place. In the
past, when scaling up with on-premises physical infrastructure, the process could
take weeks or months and require exorbitant expenses.
 This is one of the most popular and beneficial features of cloud computing, as
businesses can scale up or down to meet demand depending on the season,
projects, growth, etc.
 By implementing cloud scalability, you enable your resources to grow as your
traffic or organization grows and vice versa.
There are a few main reasons to scale in the cloud:
 If our business needs more data storage capacity or processing power, we want a
system that scales easily and quickly.
 Cloud computing solutions can do just that, which is why the market has grown so
much. Using existing cloud infrastructure, third-party cloud vendors can scale with
minimal disruption.

Types of scaling
 Vertical scalability (scaling up)
 Horizontal scalability (scaling out)
 Diagonal scalability
Vertical Scaling
 To understand vertical scaling, imagine a 20-story hotel. There are innumerable
rooms inside this hotel from where the guests keep coming and going. Often there
are spaces available, as not all rooms are filled at once. People can move easily as
there is space for them. As long as the capacity of this hotel is not exceeded, no
problem. This is vertical scaling.
 With computing, you can add or subtract resources, including memory or storage,
within the server, as long as the resources do not exceed the capacity of the
machine. Although it has its limitations, it is a way to improve your server and
avoid latency and extra management. Like in the hotel example, resources can
come and go easily and quickly, as long as there is room for them.

Horizontal Scaling
 Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars
travel smoothly in each direction without major traffic problems. But then the area
around the highway develops - new buildings are built, and traffic increases. Very
soon, this two-lane highway is filled with cars, and accidents become common.
Two lanes are no longer enough. To avoid these issues, more lanes are added, and
an overpass is constructed. Although it takes a long time, it solves the problem.
 Horizontal scaling refers to adding more servers to your network, rather than
simply adding resources like with vertical scaling. This method tends to take more
time and is more complex, but it allows you to connect servers together, handle
traffic efficiently and execute concurrent workloads.
Diagonal Scaling
 It is a mixture of both horizontal and vertical scalability, where resources are
added both vertically and horizontally. Diagonal scaling allows you to experience
the most efficient infrastructure scaling: you grow within your existing server
until you hit its capacity, then clone that server as necessary and continue the
process, allowing you to handle a lot of requests and traffic concurrently.

 Example 1. Horizontal scaling:



Horizontal scaling means we scale by adding additional machines to our existing
bunch of resources.
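The idea can be sketched in a few lines of Python: a toy pool of identical servers where capacity grows by adding machines, with requests spread round-robin across them. The server names and routing logic here are purely illustrative, not any real load balancer's API:

```python
import itertools

class ServerPool:
    """Toy model of horizontal scaling: capacity grows by adding servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)

    def scale_out(self, server):
        # Horizontal scaling: add another machine to the pool.
        self.servers.append(server)
        self._cycle = itertools.cycle(self.servers)

    def route(self, request):
        # Round-robin: each request goes to the next server in turn.
        return next(self._cycle)

pool = ServerPool(["server-1", "server-2"])
pool.scale_out("server-3")
print([pool.route(r) for r in ("req-a", "req-b", "req-c")])
# ['server-1', 'server-2', 'server-3']
```

Note that adding a server never touches the existing machines, which is why horizontal scaling can be done with little or no downtime.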

Example 2. Vertical scaling:

Vertical scaling means we scale by adding more computing power, like CPU and RAM, to an existing machine.
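A matching Python sketch shows the vertical case: one machine is resized in place, and the approach works only until the hardware ceiling is hit. The 128 GB limit is a hypothetical number chosen for illustration:

```python
class Server:
    """Toy model of vertical scaling: one machine is resized in place."""

    MAX_RAM_GB = 128  # hypothetical hardware ceiling of the machine

    def __init__(self, ram_gb):
        self.ram_gb = ram_gb

    def scale_up(self, extra_gb):
        # Vertical scaling works only while the machine has headroom.
        if self.ram_gb + extra_gb > self.MAX_RAM_GB:
            raise ValueError("machine capacity exceeded")
        self.ram_gb += extra_gb

server = Server(ram_gb=32)
server.scale_up(32)       # fine: 64 GB fits within the machine
print(server.ram_gb)      # 64
```

Asking for more than the machine can hold fails, which mirrors the hotel analogy above: vertical scaling is simple right up to the point where the building is full.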
Benefits of cloud scalability
Key cloud scalability benefits driving cloud adoption for businesses large and small:
 Convenience: Often, with just a few clicks, IT administrators can easily add more
VMs that are available, and customized to an organization's exact needs, without
delay. Teams can focus on other tasks instead of setting up physical hardware for
hours and days. This saves the valuable time of the IT staff.
 Flexibility and speed: As business needs change and grow, including unexpected
demand spikes, cloud scalability allows IT to respond quickly. Companies are no
longer tied to obsolete equipment: they can update systems and easily increase
power and storage. Today, even small businesses have access to high-powered
resources that used to be cost-prohibitive.
 Cost Savings: Thanks to cloud scalability, businesses can avoid the upfront cost
of purchasing expensive equipment that can become obsolete in a few years.
Through cloud providers, they only pay for what they use and reduce waste.
 Disaster recovery: With scalable cloud computing, you can reduce disaster
recovery costs by eliminating the need to build and maintain secondary data
centers.
Horizontal scaling vs. vertical scaling

 Horizontal scaling is difficult to implement; vertical scaling is easy to implement.
 In horizontal scaling, the database at each node or site contains only part of the
data; vertical scaling means adding more computing power, like CPU and RAM,
to an existing machine.
 Horizontal scaling can be done with less downtime; vertical scaling involves more
downtime.
 Horizontal scaling offers more concurrency; vertical scaling offers less
concurrency.
 With horizontal scaling, data sharing is complex; with vertical scaling, data
sharing is easy.
 Horizontal scaling is more reliable; vertical scaling is less reliable.
 Horizontal scaling is very costly; vertical scaling is cheaper.
 Horizontal scaling is also known as scale-out; vertical scaling is also known as
scale-up.
 Horizontal scaling is easier to upgrade in the future; upgrading a vertically scaled
system is not so easy.
 Horizontal scaling consumes more power; vertical scaling consumes less power.
VIRTUAL MACHINES

 Virtual machines (VMs) are computers that run inside of other computers using a
process known as virtualization.
 A virtual machine (VM) is a software-based computer that exists within another
computer’s operating system, often used for the purposes of testing, backing up
data, or running SaaS applications.
 Virtualization makes it possible to create multiple virtual machines, each with
their own operating system (OS) and applications, on a single physical machine. A
VM cannot interact directly with a physical computer. Instead, it needs a
lightweight software layer called a hypervisor to coordinate between it and the
underlying physical hardware. The hypervisor allocates physical computing
resources—such as processors, memory, and storage—to each VM. It keeps each
VM separate from others so they don’t interfere with each other.

What are virtual machines used for?

Common use cases for virtual machines on single computers include:

 Testing - Software developers often want to test their applications in different


environments. They can use virtual machines to run their applications in
various OSes on one computer. This is simpler and more cost-effective than
testing on several different physical machines.

 Running software designed for other OSes - Although certain software


applications are only available for a single platform, a VM can run software
designed for a different OS. For example, a Mac user who wants to run
software designed for Windows can run a Windows VM on their Mac host.

 Running outdated software - Some pieces of older software can’t be run in


modern OSes. Users who want to run these applications can run an old OS on a
virtual machine.
 Browser isolation - Browser isolation is the practice of 'isolating' web browser
activity away from the rest of a computer's operating system to keep malware
from affecting the computer's other files and programs. Some browser isolation
tools use VMs to establish this isolation — though this approach can slow
down browsing activity.

How does cloud computing use virtual machines?

Several cloud providers offer virtual machines to their customers. These virtual machines
typically live on powerful servers that can act as a host to multiple VMs and can be used
for a variety of reasons that wouldn’t be practical with a locally-hosted VM. These
include:

 Running SaaS applications - Software-as-a-service, or SaaS for short, is a


cloud-based method of providing software to users, in which an application is
served to users over the Internet rather than running on their computers. Often,
it is virtual machines in the cloud that do the computation for SaaS applications
as well as delivering them to users. If the cloud provider has a geographically
distributed network edge, then the application will run closer to the user,
resulting in faster performance.

 Backing up data - Cloud-based VM services are popular for backing up data,


because the data can be accessed from anywhere. Plus, cloud VMs provide
better redundancy, require less maintenance, and generally scale better than
physical data centers. (For example, it’s relatively easy to buy an extra
gigabyte of storage space from a cloud VM provider, but much more difficult
to build a new local data server for that extra gigabyte of data.)

 Hosting services like email and access management - Hosting these services on
cloud VMs is generally faster and more cost-effective, and helps minimize
maintenance and offload security concerns as well.

 Browser isolation - Some browser isolation tools use cloud VMs to run web
browsing activity and deliver safe content to users via a secure Internet
connection.
 Cloud computing: For the last 10+ years, VMs have been the fundamental
unit of compute in the cloud, enabling dozens of different types of applications and
workloads to run and scale successfully.

 Supporting DevOps: VMs are a great way to support enterprise developers,


who can configure VM templates with the settings for their software
development and testing processes. They can create VMs for specific tasks
such as static software tests, including these steps in an automated
development workflow. This all helps streamline the DevOps toolchain.

 Testing a new operating system: A VM lets you test-drive a new operating


system on your desktop without affecting your primary OS.

 Investigate malware: VMs are useful for malware researchers who frequently
need fresh machines on which to test malicious programs.

Advantages of VMs

VMs offer several benefits over traditional physical hardware:

 Resource utilization and improved ROI: Because multiple VMs run on a single
physical computer, customers don’t have to buy a new server every time they
want to run another OS, and they can get more return from each piece of
hardware they already own.
 Scale: With cloud computing, it’s easy to deploy multiple copies of the same
virtual machine to better serve increases in load.
 Portability: VMs can be relocated as needed among the physical computers in a
network. This makes it possible to allocate workloads to servers that have
spare computing power. VMs can even move between on-premises and cloud
environments, making them useful for hybrid cloud scenarios in which you
share computing resources between your data center and a cloud service
provider.

 Flexibility: Creating a VM is faster and easier than installing an OS on a


physical server because you can clone a VM with the OS already installed.
Developers and software testers can create new environments on demand to
handle new tasks as they arise.
 Security: VMs improve security in several ways when compared to operating
systems running directly on hardware. A VM is a file that can be scanned for
malicious software by an external program. You can create an entire snapshot
of the VM at any point in time and then restore it to that state if it becomes
infected with malware, effectively taking the VM back in time. The fast, easy
creation of VMs also makes it possible to completely delete a compromised
VM and then recreate it quickly, hastening recovery from malware infections.

DOCKER CONTAINER

 Docker is an open platform for developing, shipping, and running applications.

 Docker enables us to separate our applications from our infrastructure so we can
deliver software quickly.

 With Docker, we can manage our infrastructure in the same ways we manage
our applications. By taking advantage of Docker’s methodologies for shipping,
testing, and deploying code quickly, we can significantly reduce the delay
between writing code and running it in production.

 Docker provides the ability to package and run an application in a loosely


isolated environment called a container. The isolation and security allows us to
run many containers simultaneously on a given host.

 Containers are lightweight and contain everything needed to run the


application, so we do not need to rely on what is currently installed on the host.

Docker provides tooling and a platform to manage the lifecycle of our containers:

 Develop our application and its supporting components using containers.

 The container becomes the unit for distributing and testing our application.

 When we’re ready, deploy our application into our production environment, as
a container or an orchestrated service. This works the same whether our
production environment is a local data center, a cloud provider, or a hybrid of
the two.
Docker architecture

Docker uses a client-server architecture. The Docker client talks to the Docker
daemon, which does the heavy lifting of building, running, and distributing our Docker
containers.

The Docker client and daemon can run on the same system, or you can connect a
Docker client to a remote Docker daemon.

The Docker client and daemon communicate using a REST API, over UNIX
sockets or a network interface.

Another Docker client is Docker Compose, which lets you work with applications
consisting of a set of containers.

The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker
objects such as images, containers, networks, and volumes. A daemon can also
communicate with other daemons to manage Docker services.

The Docker client

The Docker client (docker) is the primary way that many Docker users interact with
Docker. When you use commands such as docker run, the client sends these commands
to dockerd, which carries them out. The docker command uses the Docker API. The
Docker client can communicate with more than one daemon.

Docker Desktop

Docker Desktop is an easy-to-install application for your Mac, Windows or Linux


environment that enables you to build and share containerized applications and
microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client
(docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper.

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can
use, and Docker is configured to look for images on Docker Hub by default. You can
even run your own private registry.

When you use the docker pull or docker run commands, the required images are pulled
from your configured registry. When you use the docker push command, your image is
pushed to your configured registry.

Docker objects

When you use Docker, you are creating and using images, containers, networks, volumes,
plugins, and other objects. This section is a brief overview of some of those objects.
Docker Image

A Docker image can be compared to a template which is used to create Docker
containers. Images are the building blocks of a Docker container. Docker images are
created using the build command. These read-only templates are then used to create
containers with the run command.

Docker lets people (or companies) create and share software through Docker images.
Also, you don’t have to worry about whether your computer can run the software in a
Docker image — a Docker container can always run it.

You can either use a ready-made Docker image from Docker Hub or create a new image
as per your requirements.
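As a sketch of how such an image might be defined, here is a hypothetical Dockerfile for a small Python application. The file names app.py and requirements.txt are assumptions made for this example, not part of any standard:

```dockerfile
# Start from a read-only base image pulled from Docker Hub
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define the startup command
COPY . .
CMD ["python", "app.py"]
```

Running docker build -t myapp . would turn this file into an image, and docker run myapp would start a container from it.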

Containers

A container is a runnable instance of an image. You can create, start, stop, move, or
delete a container using the Docker API or CLI. You can connect a container to one or
more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host
machine. You can control how isolated a container’s network, storage, or other
underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it
when you create or start it. When a container is removed, any changes to its state that are
not stored in persistent storage disappear.

Example docker run command

The following command runs an ubuntu container, attaches interactively to your local
command-line session, and runs /bin/bash.

$ docker run -i -t ubuntu /bin/bash

When you run this command, the following happens (assuming you are using the default
registry configuration):

1. If you do not have the ubuntu image locally, Docker pulls it from your configured
registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container
create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This
allows a running container to create or modify files and directories in its local
filesystem.
4. Docker creates a network interface to connect the container to the default network,
since you did not specify any networking options. This includes assigning an IP
address to the container. By default, containers can connect to external networks
using the host machine’s network connection.
5. Docker starts the container and executes /bin/bash. Because the container is
running interactively and attached to your terminal (due to the -i and -t flags), you
can provide input using your keyboard while the output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is
not removed. You can start it again or remove it.
KUBERNETES

Kubernetes, also known as K8s, is an open-source system for automating
deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management
and discovery.

Kubernetes is software that automatically manages, scales, and maintains multi-container
workloads in desired states.

How does Kubernetes work?

1. When developers create a multi-container application, they plan out how all the
parts fit and work together, how many of each component should run, and roughly
what should happen when challenges (e.g., lots of users logging in at once) are
encountered.
2. They store their containerized application components in a container registry
(local or remote) and capture this thinking in one or several text files comprising
a configuration. To start the application, they “apply” the configuration to
Kubernetes.
3. Kubernetes' job is to evaluate and implement this configuration and maintain it
until told otherwise. It:
1. Analyzes the configuration, aligning its requirements with those of all the
other application configurations running on the system
2. Finds resources appropriate for running the new containers (e.g., some
containers might need resources like GPUs that aren’t present on every
host)
3. Grabs container images from the registry, starts up the new containers, and
helps them connect to one another and to system resources (e.g., persistent
storage), so the application works as a whole
4. Then Kubernetes monitors everything, and when real events diverge from desired
states, Kubernetes tries to fix things and adapt. For example, if a container crashes,
Kubernetes restarts it. If an underlying server fails, Kubernetes finds resources
elsewhere to run the containers that node was hosting. If traffic to an application
suddenly spikes, Kubernetes can scale out containers to handle the additional load,
in conformance to rules and limits stated in the configuration.
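The configuration described in the steps above is typically written in YAML. Here is a minimal sketch of a Deployment; the name web-app, the image reference, and the port are hypothetical values chosen for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                 # desired state: keep three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0  # pulled from a container registry
          ports:
            - containerPort: 8080
```

Applying it with kubectl apply -f deployment.yaml hands the desired state to Kubernetes, which then keeps three copies running until told otherwise.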

Why use Kubernetes?

One of the benefits of Kubernetes is that it makes building and running complex
applications much simpler. Here’s a handful of the many Kubernetes features:

1. Standard services, like local DNS and basic load balancing, that most applications
need and that are easy to use.
2. Standard behaviors (e.g., restart this container if it dies) that are easy to invoke,
and do most of the work of keeping applications running, available, and
performant.
3. A standard set of abstract “objects” (called things like “pods,” “replicasets,” and
“deployments”) that wrap around containers and make it easy to build
configurations around collections of containers.
4. A standard API that applications can call to easily enable more sophisticated
behaviors, making it much easier to create applications that manage other
applications.

The simple answer to “what is Kubernetes used for” is that it saves developers and
operators a great deal of time and effort, and lets them focus on building features for their
applications, instead of figuring out and implementing ways to keep their applications
running well, at scale.

By keeping applications running despite challenges (e.g., failed servers, crashed


containers, traffic spikes, etc.) Kubernetes also reduces business impacts, reduces the
need for fire drills to bring broken applications back online, and protects against other
liabilities, like the costs of failing to comply with Service Level Agreements (SLAs).

Where can I run Kubernetes?

Kubernetes also runs almost anywhere, on a wide range of Linux operating systems
(worker nodes can also run on Windows Server). A single Kubernetes cluster can span
hundreds of bare-metal or virtual machines in a datacenter, private, or any public cloud.
Kubernetes can also run on developer desktops, edge servers, microservers like
Raspberry Pis, or very small mobile and IoT devices and appliances.

With some forethought (and the right product and architectural choices) Kubernetes can
even provide a functionally-consistent platform across all these infrastructures. This
means that applications and configurations composed and initially tested on a desktop
Kubernetes can move seamlessly and quickly to more-formal testing, large-scale
production, edge, or IoT deployments. In principle, this means that enterprises and
organizations can build “hybrid” and “multi-clouds” across a range of platforms, quickly
and economically solving capacity problems without lock-in.
