Unit 5

Popular cloud providers, their major services, and key features:

AWS
Services: Amazon Elastic Compute Cloud, AWS Lambda, Amazon Simple Storage Service, Elastic Block Store, Amazon Virtual Private Cloud, Amazon Route 53
Features: Scalability, Cost-effectiveness and affordability, Reliability, Security, Global reach of the services

Azure
Services: Azure Kubernetes, Azure SQL, Azure Machine Learning, Azure Backup, Azure Cosmos DB, Azure Active Directory
Features: Flexibility, Analytics support, Strong IT support, Scalability, Affordability, Reliability

Google Cloud
Services: Google Compute Engine, Google Kubernetes Engine, Google Cloud Spanner, Google Cloud Virtual Network
Features: Affordability, User-friendliness, Speed, Advanced admin control capabilities, Cloud-based data transfer
VIRTUAL PRIVATE CLOUD
A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a
public cloud.
VPC customers can run code, store data, host websites, and do anything else they
could do in an ordinary private cloud, but the private cloud is hosted remotely by a
public cloud provider. (Not all private clouds are hosted in this fashion.)
VPCs combine the scalability and convenience of public cloud computing with
the data isolation of private cloud computing.
Imagine a public cloud as a crowded restaurant, and a virtual private cloud as a
reserved table in that crowded restaurant. Even though the restaurant is full of
people, a table with a "Reserved" sign on it can only be accessed by the party who
made the reservation. Similarly, a public cloud is crowded with various cloud
customers accessing computing resources – but a VPC reserves some of those
resources for use by only one customer.
A public cloud is shared cloud infrastructure. Multiple customers of the cloud
vendor access that same infrastructure, although their data is not shared – just like
every person in a restaurant orders from the same kitchen, but they get different
dishes.
Public cloud service providers include AWS, Google Cloud Platform, and
Microsoft Azure, among others.
The technical term for multiple separate customers accessing the same cloud
infrastructure is "multitenancy".
A private cloud, however, is single-tenant. A private cloud is a cloud service that
is exclusively offered to one organization. A virtual private cloud (VPC) is a
private cloud within a public cloud; no one else shares the VPC with the VPC
customer.
How is a VPC isolated within a public cloud?
A VPC isolates computing resources from the other computing resources available in the
public cloud. The key technologies for isolating a VPC from the rest of the public cloud
are:
Subnets: A subnet is a range of IP addresses within a network that are reserved so that
they're not available to everyone within the network, essentially dividing part of the
network for private use. In a VPC these are private IP addresses that are not accessible
via the public Internet, unlike typical IP addresses, which are publicly visible.
VLAN: A LAN is a local area network, or a group of computing devices that are all
connected to each other without the use of the Internet. A VLAN is a virtual LAN. Like a
subnet, a VLAN is a way of partitioning a network, but the partitioning takes place at a
different layer within the OSI model (layer 2 instead of layer 3).
VPN: A virtual private network (VPN) uses encryption to create a private network over
the top of a public network. VPN traffic passes through publicly shared Internet
infrastructure – routers, switches, etc. – but the traffic is scrambled and not visible to
anyone.
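The subnet idea above can be illustrated with Python's standard ipaddress module. The block 10.0.0.0/16 used below is just an example taken from the RFC 1918 private ranges, as often used for a VPC: it can be carved into smaller subnets, and every address in it is private, i.e. not routable on the public Internet.

```python
import ipaddress

# A hypothetical VPC address block from the RFC 1918 private range
vpc_block = ipaddress.ip_network("10.0.0.0/16")

# Divide the block into /24 subnets (256 addresses each),
# e.g. one per application tier or availability zone
subnets = list(vpc_block.subnets(new_prefix=24))
print(len(subnets))        # 256
print(subnets[0])          # 10.0.0.0/24

# Addresses in the block are private: not reachable from the public Internet
addr = ipaddress.ip_address("10.0.3.17")
print(addr.is_private)     # True
print(addr in subnets[3])  # True: 10.0.3.17 falls within 10.0.3.0/24
```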
A VPC will have a dedicated subnet and VLAN that are only accessible by the VPC
customer. This prevents anyone else within the public cloud from accessing computing
resources within the VPC – effectively placing the "Reserved" sign on the table. The
VPC customer connects via VPN to their VPC, so that data passing into and out of the
VPC is not visible to other public cloud users.
Some VPC providers offer additional customization with:
Network Address Translation (NAT): This feature matches private IP addresses
to a public IP address for connections with the public Internet. With NAT, a public-facing
website or application could run in a VPC.
BGP route configuration: Some providers allow customers to customize BGP
routing tables for connecting their VPC with their other infrastructure.
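As a toy conceptual sketch (not any provider's actual implementation), NAT can be pictured as a table that maps private addresses to ports on a single public address; all names, addresses, and port numbers below are invented for illustration:

```python
# Toy sketch of Network Address Translation: many private hosts share one
# public IP, distinguished by translated port numbers. The public address
# is from the RFC 5737 documentation range; everything here is illustrative.

PUBLIC_IP = "203.0.113.10"

class NatTable:
    def __init__(self):
        self.next_port = 40000
        self.mappings = {}  # (private_ip, private_port) -> public_port

    def translate(self, private_ip, private_port):
        """Return the (public_ip, public_port) a private host appears as."""
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return (PUBLIC_IP, self.mappings[key])

nat = NatTable()
print(nat.translate("10.0.1.5", 5000))  # ('203.0.113.10', 40000)
print(nat.translate("10.0.2.9", 5000))  # ('203.0.113.10', 40001)
print(nat.translate("10.0.1.5", 5000))  # mapping reused: ('203.0.113.10', 40000)
```

Replies arriving at the public port are looked up in the same table and forwarded back to the matching private address, which is how a public-facing service inside a VPC stays reachable.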
Advantages of using a VPC instead of a private cloud
Scalability: Because a VPC is hosted by a public cloud provider, customers can add
more computing resources on demand.
Easy hybrid cloud deployment: It's relatively simple to connect a VPC to a public cloud
or to on-premises infrastructure via the VPN.
Better performance: Cloud-hosted websites and applications typically perform better
than those hosted on local on-premises servers.
Better security: The public cloud providers that offer VPCs often have more resources
for updating and maintaining the infrastructure, especially for small and mid-market
businesses. For large enterprises or any companies that face extremely tight data security
regulations, this is less of an advantage.
SCALABILITY

Scalability can be described as changing the size of something, for instance scaling a
business. The same idea applies in the context of databases.
Cloud scalability in cloud computing refers to increasing or decreasing IT
resources as needed to meet changing demand. Scalability is one of the hallmarks
of the cloud and the primary driver of its explosive popularity with businesses.
Data storage capacity, processing power, and networking can all be increased by
using existing cloud computing infrastructure. Scaling can be done quickly and
easily, usually without any disruption or downtime.
Third-party cloud providers already have the entire infrastructure in place. In the
past, when scaling up with on-premises physical infrastructure, the process could
take weeks or months and require exorbitant expenses.
This is one of the most popular and beneficial features of cloud computing, as
businesses can scale up or down to meet demands depending on the season,
projects, development, etc.
By implementing cloud scalability, you enable your resources to grow as your
traffic or organization grows and vice versa.
There are a few main ways to scale in the cloud:
If our business needs more data storage capacity or processing power, we'll want a
system that scales easily and quickly.
Cloud computing solutions can do just that, which is why the market has grown so
much. Using existing cloud infrastructure, third-party cloud vendors can scale with
minimal disruption.
Types of scaling
Vertical scalability (scaling up)
Horizontal scalability (scaling out)
Diagonal scalability
Vertical Scaling
To understand vertical scaling, imagine a 20-story hotel. There are innumerable
rooms inside this hotel from where the guests keep coming and going. Often there
are spaces available, as not all rooms are filled at once. People can move easily as
there is space for them. As long as the capacity of this hotel is not exceeded, no
problem. This is vertical scaling.
With computing, you can add or subtract resources, including memory or storage,
within the server, as long as the resources do not exceed the capacity of the
machine. Although it has its limitations, it is a way to improve your server and
avoid latency and extra management. Like in the hotel example, resources can
come and go easily and quickly, as long as there is room for them.
Horizontal Scaling
Horizontal scaling is a bit different. This time, imagine a two-lane highway. Cars
travel smoothly in each direction without major traffic problems. But then the area
around the highway develops - new buildings are built, and traffic increases. Very
soon, this two-lane highway is filled with cars, and accidents become common.
Two lanes are no longer enough. To avoid these issues, more lanes are added, and
an overpass is constructed. Although it takes a long time, it solves the problem.
Horizontal scaling refers to adding more servers to your network, rather than
simply adding resources like with vertical scaling. This method tends to take more
time and is more complex, but it allows you to connect servers together, handle
traffic efficiently and execute concurrent workloads.
Diagonal Scaling
Diagonal scaling is a mixture of horizontal and vertical scalability, where resources are
added both vertically and horizontally. Combining the two gives the most efficient form
of infrastructure scaling: you grow within your existing server until you hit its capacity,
then clone that server as necessary and continue the process, allowing you to handle a
large number of requests and heavy traffic concurrently.
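The three approaches can be contrasted in a small illustrative sketch (function names and capacity numbers are invented for the example): vertical scaling grows each existing server, horizontal scaling adds servers, and diagonal scaling does the first until the machine's limit and then the second.

```python
# Illustrative comparison of scaling strategies.
# "Capacity" is an abstract unit; all numbers are made up for the example.

def vertical_scale(servers, extra_capacity):
    """Scale up: add resources (CPU/RAM) to each existing server."""
    return [cap + extra_capacity for cap in servers]

def horizontal_scale(servers, new_servers, capacity_each):
    """Scale out: add more servers of a given size to the pool."""
    return servers + [capacity_each] * new_servers

cluster = [4]                              # one server, 4 units of capacity
cluster = vertical_scale(cluster, 4)       # grow it in place
print(cluster)                             # [8]

cluster = horizontal_scale(cluster, 2, 8)  # clone the maxed-out server twice
print(cluster, sum(cluster))               # [8, 8, 8] 24

# Diagonal scaling is exactly this sequence: vertical until the machine's
# limit, then horizontal by cloning.
```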
Vertical scaling means we scale by adding more computing power, like CPU and RAM,
to an existing machine.
Benefits of cloud scalability
Key cloud scalability benefits driving cloud adoption for businesses large and small:
Convenience: Often, with just a few clicks, IT administrators can easily add more
VMs that are available and customized to an organization's exact needs, without
delay. Teams can focus on other tasks instead of spending hours or days setting up
physical hardware. This saves valuable IT staff time.
Flexibility and speed: As business needs change and grow, including unexpected
demand spikes, cloud scalability allows IT to respond quickly. Companies are no
longer tied to obsolete equipment; they can update systems and easily increase
power and storage. Today, even small businesses have access to high-powered
resources that used to be cost-prohibitive.
Cost Savings: Thanks to cloud scalability, businesses can avoid the upfront cost
of purchasing expensive equipment that can become obsolete in a few years.
Through cloud providers, they only pay for what they use and reduce waste.
Disaster recovery: With scalable cloud computing, you can reduce disaster
recovery costs by eliminating the need to build and maintain secondary data
centers.
Horizontal scaling: the database at each node or site contains only part of the data.
Vertical scaling: we scale by adding more computing power, like CPU and RAM, to an
existing machine.
VIRTUAL MACHINES
Virtual machines (VMs) are computers that run inside of other computers using a
process known as virtualization.
A virtual machine (VM) is a software-based computer that exists within another
computer’s operating system, often used for the purposes of testing, backing up
data, or running SaaS applications.
Virtualization makes it possible to create multiple virtual machines, each with
their own operating system (OS) and applications, on a single physical machine. A
VM cannot interact directly with a physical computer. Instead, it needs a
lightweight software layer called a hypervisor to coordinate between it and the
underlying physical hardware. The hypervisor allocates physical computing
resources—such as processors, memory, and storage—to each VM. It keeps each
VM separate from others so they don’t interfere with each other.
Several cloud providers offer virtual machines to their customers. These virtual machines
typically live on powerful servers that can act as a host to multiple VMs and can be used
for a variety of reasons that wouldn’t be practical with a locally-hosted VM. These
include:
Hosting services like email and access management - Hosting these services on
cloud VMs is generally faster and more cost-effective, and helps minimize
maintenance and offload security concerns as well.
Browser isolation - Some browser isolation tools use cloud VMs to run web
browsing activity and deliver safe content to users via a secure Internet
connection.
Cloud computing: For the last 10+ years, VMs have been the fundamental
unit of compute in the cloud, enabling dozens of different types of applications and
workloads to run and scale successfully.
Investigate malware: VMs are useful for malware researchers that frequently
need fresh machines on which to test malicious programs.
Advantages of VMs
Resource utilization and improved ROI: Because multiple VMs run on a single
physical computer, customers don’t have to buy a new server every time they
want to run another OS, and they can get more return from each piece of
hardware they already own.
Scale: With cloud computing, it’s easy to deploy multiple copies of the same
virtual machine to better serve increases in load.
Portability: VMs can be relocated as needed among the physical computers in a
network. This makes it possible to allocate workloads to servers that have
spare computing power. VMs can even move between on-premises and cloud
environments, making them useful for hybrid cloud scenarios in which you
share computing resources between your data center and a cloud service
provider.
DOCKER CONTAINER
With Docker, we can manage our infrastructure in the same ways we manage
our applications. By taking advantage of Docker’s methodologies for shipping,
testing, and deploying code quickly, we can significantly reduce the delay
between writing code and running it in production.
Docker provides tooling and a platform to manage the lifecycle of our containers:
The container becomes the unit for distributing and testing our application.
When we’re ready, deploy our application into our production environment, as
a container or an orchestrated service. This works the same whether our
production environment is a local data center, a cloud provider, or a hybrid of the two.
Docker uses a client-server architecture. The Docker client talks to the Docker
daemon, which does the heavy lifting of building, running, and distributing our Docker
containers.
The Docker client and daemon can run on the same system, or you can connect a
Docker client to a remote Docker daemon.
The Docker client and daemon communicate using a REST API, over UNIX
sockets or a network interface.
Another Docker client is Docker Compose, which lets you work with applications
consisting of a set of containers.
The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker
objects such as images, containers, networks, and volumes. A daemon can also
communicate with other daemons to manage Docker services.
The Docker client
The Docker client (docker) is the primary way that many Docker users interact with
Docker. When you use commands such as docker run, the client sends these commands
to dockerd, which carries them out. The docker command uses the Docker API. The
Docker client can communicate with more than one daemon.
Docker Desktop
Docker Desktop is an easy-to-install application for Mac, Windows, or Linux
environments that lets you build and share containerized applications. It includes the
Docker daemon (dockerd), the Docker client (docker), Docker Compose, and other tools.
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can
use, and Docker is configured to look for images on Docker Hub by default. You can
even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled
from your configured registry. When you use the docker push command, your image is
pushed to your configured registry.
Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes,
plugins, and other objects. This section is a brief overview of some of those objects.
Docker Image
Docker Image can be compared to a template which is used to create Docker Containers.
They are the building blocks of a Docker Container. These Docker Images are created
using the build command. These read-only templates are used for creating containers by
using the run command.
Docker lets people (or companies) create and share software through Docker images.
Also, you don’t have to worry about whether your computer can run the software in a
Docker image — a Docker container can always run it.
We can either use a ready-made Docker image from Docker Hub or create a new image
as per our requirements.
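For instance, a minimal hypothetical Dockerfile that builds a custom image on top of the official ubuntu base image might look like this (the package choice and tag are illustrative):

```dockerfile
# Start from the public ubuntu base image on Docker Hub
FROM ubuntu:22.04

# Install a package into the image; this becomes part of the
# read-only template that every container created from it shares
RUN apt-get update && apt-get install -y curl

# Default command a container runs when started from this image
CMD ["bash"]
```

Building it with `docker build -t myimage .` produces the read-only template; `docker run -it myimage` then creates a runnable container from that template.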
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or
delete a container using the Docker API or CLI. You can connect a container to one or
more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host
machine. You can control how isolated a container’s network, storage, or other
underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it
when you create or start it. When a container is removed, any changes to its state that are
not stored in persistent storage disappear.
The following command runs an ubuntu container, attaches interactively to your local
command-line session, and runs /bin/bash:

$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default
registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured
registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container
create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This
allows a running container to create or modify files and directories in its local
filesystem.
4. Docker creates a network interface to connect the container to the default network,
since you did not specify any networking options. This includes assigning an IP
address to the container. By default, containers can connect to external networks
using the host machine’s network connection.
5. Docker starts the container and executes /bin/bash. Because the container is
running interactively and attached to your terminal (due to the -i and -t flags), you
can provide input using your keyboard while the output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is
not removed. You can start it again or remove it.
KUBERNETES
Kubernetes is an open-source system for automating the deployment, scaling, and
management of containerized applications. It groups containers that make up an
application into logical units for easy management and discovery.
1. When developers create a multi-container application, they plan out how all the
parts fit and work together, how many of each component should run, and roughly
what should happen when challenges (e.g., lots of users logging in at once) are
encountered.
2. They store their containerized application components in a container registry
(local or remote) and capture this thinking in one or several text files comprising
a configuration. To start the application, they “apply” the configuration to
Kubernetes.
3. Kubernetes’ job is to evaluate and implement this configuration and maintain it
until told otherwise. It:
1. Analyzes the configuration, aligning its requirements with those of all the
other application configurations running on the system
2. Finds resources appropriate for running the new containers (e.g., some
containers might need resources like GPUs that aren’t present on every
host)
3. Grabs container images from the registry, starts up the new containers, and
helps them connect to one another and to system resources (e.g., persistent
storage), so the application works as a whole
4. Then Kubernetes monitors everything, and when real events diverge from desired
states, Kubernetes tries to fix things and adapt. For example, if a container crashes,
Kubernetes restarts it. If an underlying server fails, Kubernetes finds resources
elsewhere to run the containers that node was hosting. If traffic to an application
suddenly spikes, Kubernetes can scale out containers to handle the additional load,
in conformance to rules and limits stated in the configuration.
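The configuration described in the steps above is typically written as YAML. A minimal illustrative Deployment that asks Kubernetes to keep three copies of a container running might look like this (the names and image reference are placeholders, not from a real system):

```yaml
# Hypothetical example; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                 # desired state: keep 3 copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: registry.example.com/web-app:1.0  # pulled from a registry
        resources:
          limits:
            memory: "128Mi"   # a constraint Kubernetes respects when placing pods
```

Applying this with `kubectl apply -f deployment.yaml` hands the desired state to Kubernetes; if a container crashes or a node fails, Kubernetes restarts or reschedules pods so that three replicas keep running.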
One of the benefits of Kubernetes is that it makes building and running complex
applications much simpler. Here’s a handful of the many Kubernetes features:
1. Standard services like local DNS and basic load-balancing that most applications
need, and are easy to use.
2. Standard behaviors (e.g., restart this container if it dies) that are easy to invoke,
and do most of the work of keeping applications running, available, and
performant.
3. A standard set of abstract “objects” (called things like “pods,” “replicasets,” and
“deployments”) that wrap around containers and make it easy to build
configurations around collections of containers.
4. A standard API that applications can call to easily enable more sophisticated
behaviors, making it much easier to create applications that manage other
applications.
The simple answer to “what is Kubernetes used for” is that it saves developers and
operators a great deal of time and effort, and lets them focus on building features for their
applications, instead of figuring out and implementing ways to keep their applications
running well, at scale.
Kubernetes also runs almost anywhere, on a wide range of Linux operating systems
(worker nodes can also run on Windows Server). A single Kubernetes cluster can span
hundreds of bare-metal or virtual machines in a datacenter, private, or any public cloud.
Kubernetes can also run on developer desktops, edge servers, microservers like
Raspberry Pis, or very small mobile and IoT devices and appliances.
With some forethought (and the right product and architectural choices) Kubernetes can
even provide a functionally-consistent platform across all these infrastructures. This
means that applications and configurations composed and initially tested on a desktop
Kubernetes can move seamlessly and quickly to more-formal testing, large-scale
production, edge, or IoT deployments. In principle, this means that enterprises and
organizations can build “hybrid” and “multi-clouds” across a range of platforms, quickly
and economically solving capacity problems without lock-in.