CHP 3 Automation
Automation and Orchestration
What is Automation?
• Automation focuses on making tasks easier and faster for humans to
perform.
• Cloud automation mechanisms have been developed for three
categories of users:
• Individual customers
• Large cloud customers
• Cloud providers
• Cloud providers: Cloud providers have devised some of the most sophisticated
automation tools, and use them to manage cloud data centers. Tools are available that
accommodate requests from both individual customers and large organizational customers.
The Need For Automation In A Data Center
• After all the facilities have been installed and configured, operating a data center is
much more complex than operating IT facilities for a single organization. Four aspects
of a cloud data center stand out:
• Extreme scale
• Diverse services
• Constant change
• Human error
Zero Touch Provisioning (ZTP)
2. Device Boot-up: When the device is powered on for the first time at its deployment
location, it starts the ZTP process. The device typically boots up with a default or factory
configuration.
3. DHCP and Auto-Discovery: The device uses DHCP (Dynamic Host Configuration
Protocol) to obtain an IP address and network parameters from a DHCP server in the
local network. It may also use other methods like LLDP (Link Layer Discovery Protocol)
or DNS-based auto-discovery to identify the provisioning server.
4. Contacting the Provisioning Server: Once the device has an IP address and can
communicate on the network, it contacts a ZTP or provisioning server to fetch its
configuration files and other necessary resources.
5. Configuration and Validation: The provisioning server sends the device its complete
configuration file, scripts, and any required software updates. The device then applies this
configuration and validates its settings.
6. Operation: After successful provisioning, the device is ready to operate according to
its designated role in the network.
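The steps above can be sketched in code. The following is a minimal, purely illustrative Python model of the client-side ZTP flow: the DHCP exchange, the provisioning server, and all names (hostnames, fields) are simulated stand-ins, not a real device implementation.

```python
# Hypothetical sketch of the ZTP flow described above. Network operations
# (DHCP, fetching from the provisioning server) are simulated with plain
# functions; a real device would use DHCP, HTTP/TFTP, etc.

FACTORY_CONFIG = {"hostname": "factory-default", "provisioned": False}

def dhcp_discover():
    """Step 3: pretend to obtain an IP address and the provisioning server."""
    return {"ip": "10.0.0.42", "provisioning_server": "ztp.example.internal"}

def fetch_config(server):
    """Step 4: pretend to download the device's configuration from the server."""
    return {"hostname": "leaf-switch-17", "vlan": 100, "provisioned": True}

def validate(config):
    """Step 5: minimal sanity checks before the configuration is committed."""
    return "hostname" in config and config.get("provisioned") is True

def ztp_boot():
    config = dict(FACTORY_CONFIG)                # step 2: boot with factory defaults
    lease = dhcp_discover()                      # step 3: DHCP / auto-discovery
    new_config = fetch_config(lease["provisioning_server"])  # step 4
    if validate(new_config):                     # step 5: apply and validate
        config.update(new_config)
    return config                                # step 6: ready to operate

device = ztp_boot()
print(device["hostname"], device["provisioned"])  # -> leaf-switch-17 True
```

The point of the sketch is the control flow: the device drives the whole process itself, which is what makes the provisioning "zero touch".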
Benefits of Zero Touch Provisioning:
• Efficiency: ZTP automates the device setup process, reducing the time and effort
required for deployment.
• Consistency: Automated provisioning ensures that all devices are configured
consistently, reducing the risk of human errors.
• Scalability: ZTP is particularly useful in large-scale deployments where manually
configuring each device would be impractical or time-consuming.
• Flexibility: It allows for remote and unattended deployments, making it easier to
manage distributed or geographically dispersed networks.
Infrastructure as Code
• Infrastructure as Code (IaC) is an approach to managing and provisioning
computing infrastructure with machine-readable scripts or configuration files,
rather than using manual processes or interactive configuration tools.
• A process where a data center operator creates a specification and uses software
to read the specification and configure underlying systems. Two approaches have
been used: push and pull.
• The push version follows the traditional pattern of installing a configuration: a
tool reads a specification and performs the commands needed to configure the
underlying system.
• The pull version requires an entity to initiate configuration. For example, when a
new software system starts, the system can be designed to pull configuration
information from a server.
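The push/pull distinction can be made concrete with a small sketch. This is a hypothetical Python model, not a real IaC tool: the "specification" is a plain dict and "configuring a system" is simulated by updating a table.

```python
# Minimal sketch contrasting the push and pull IaC patterns.
# All names are illustrative; a real tool would read YAML/JSON and
# run commands or API calls against actual systems.

SPEC = {"web-1": {"port": 80}, "web-2": {"port": 8080}}

configured = {}   # stands in for the real systems being configured

def push_configure(spec):
    """Push: a central tool reads the spec and configures every system."""
    for host, settings in spec.items():
        configured[host] = dict(settings)        # e.g., run commands on host

def pull_configure(host, spec):
    """Pull: a newly started system fetches its own entry from the spec."""
    return spec.get(host, {})

push_configure(SPEC)
print(sorted(configured))                # -> ['web-1', 'web-2']
print(pull_configure("web-2", SPEC))     # -> {'port': 8080}
```

The design difference is where the initiative lies: in push, the tool drives configuration of all systems; in pull, each system asks for its own configuration when it starts.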
Automation tools
Orchestration: Automation with a larger scope.
• Early automation systems derived from manual processes led to separation of
functionality, with some tools helping deploy virtualized servers, others
handling network configuration, and so on.
• The move to containers made the question of unified, automated management
especially relevant for three reasons:
1. Rapid creation
2. Short lifetime
3. Replication
• Rapid creation: The low overhead of containers means that it takes significantly
less time to create a container than to create a VM. An automated system is
needed because a human would take an intolerably long time performing the
steps required to create a container.
• Short lifetime: Unlike a VM that remains in place semi-permanently once
created, a container is ephemeral. A container resembles an application process:
a typical container is created when needed, performs one application task, and
then exits.
• Replication: Replication is key for containers. When demand for a particular
service increases, multiple containers for the service can be created and run
simultaneously, analogous to creating multiple concurrent processes to handle
load. When demand for a service declines, unneeded container replicas can be
terminated.
• In addition to automated configuration and deployment, a container orchestrator
usually handles three key aspects of system management:
Dynamic scaling of services
Coordination across multiple servers
Resilience and automatic recovery
Dynamic scaling of services: An orchestrator starts one or more copies of a container
running, and then monitors demand. When demand increases, the orchestrator
automatically increases the number of simultaneous copies. When demand
decreases, the orchestrator reduces the number of copies, either by allowing
copies to exit without replacing them or by terminating idle copies.
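A scaling rule of the kind described can be sketched as a small function. The policy here (a fixed amount of demand per copy, with minimum and maximum bounds) is a hypothetical example, not how any particular orchestrator computes its target.

```python
# Illustrative scaling policy: keep roughly `capacity_per_copy` units of
# demand per running container copy, clamped to configured bounds.

import math

def desired_copies(demand, capacity_per_copy, min_copies=1, max_copies=10):
    """Return how many container copies should run for the given demand."""
    needed = math.ceil(demand / capacity_per_copy)
    return max(min_copies, min(needed, max_copies))

print(desired_copies(demand=450, capacity_per_copy=100))  # demand rises  -> 5
print(desired_copies(demand=40,  capacity_per_copy=100))  # demand falls -> 1
```

An orchestrator would evaluate such a rule periodically and start or terminate copies until the running count matches the desired count.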
Coordination across multiple servers: Although multiple containers can run on a
given physical server, performance suffers if too many containers execute on one
server. Therefore, to manage a large-scale service, an orchestrator deploys copies
of a container on multiple physical servers. The orchestrator monitors performance,
and balances the load by starting new copies on lightly-loaded servers.
• Resilience and automatic recovery: An orchestrator can monitor an
individual container or a group of containers that provide a service. If
a container fails or the containers providing a service become
unreachable, the orchestrator can either restart the failed containers
or switch over to a backup set, thereby guaranteeing that the service
remains available at all times.
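The monitor-and-restart behavior can also be sketched. This is a toy reconciliation loop with illustrative names; real orchestrators detect failures via health checks and replace containers through the container runtime.

```python
# Illustrative recovery loop: drop failed copies and start replacements
# until the target number of healthy copies is restored.

def reconcile(containers, target):
    """Return a healthy set of `target` copies, replacing failed ones."""
    healthy = [c for c in containers if c["healthy"]]
    while len(healthy) < target:
        healthy.append({"id": f"replacement-{len(healthy)}", "healthy": True})
    return healthy

running = [{"id": "c1", "healthy": True},
           {"id": "c2", "healthy": False},   # this copy has failed
           {"id": "c3", "healthy": True}]
print(len(reconcile(running, target=3)))     # -> 3  (back to full strength)
```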
Kubernetes: An Example Container Orchestration System
Kubernetes is a technology that was developed at Google and later moved to
open source. Popularly known as Kubernetes and abbreviated K8s, the
technology manages many aspects of running a service. The figure below lists
seven features of Kubernetes.
• Service naming and discovery: Kubernetes allows a service to be accessed
through a domain name or an IP address. Once a name or address has been
assigned, applications can use the name or address to reach the container that
runs the service. Typically, names and addresses are configured to be global,
allowing applications running outside the data center to access the service.
• Load balancing: Kubernetes does not limit a service to a single container.
Instead, if traffic is high, Kubernetes can automatically create multiple copies of
the container for a service, and use a load balancer to divide incoming requests
among the copies.
• Storage orchestration: Kubernetes allows an operator to mount remote storage
automatically when a container runs. The system can accommodate many types
of storage, including local storage and storage from a public cloud provider.
• Optimized container placement: When creating a service, an operator specifies a cluster of
servers (called nodes) that Kubernetes can use to run containers for the service. The operator
specifies the processor and memory (RAM) that each container will need. Kubernetes places
containers on nodes in the cluster in a way that optimizes the use of servers.
• Automated recovery: Kubernetes manages containers. After creating a container, Kubernetes
does not make the container available to clients until the container is running and ready to
provide service. Kubernetes automatically replaces a container that fails, and terminates a
container that stops responding to a user-defined health check.
• Management of configurations and secrets: Kubernetes separates management information
from container images, allowing users to change the information needed for configuration and
management without rebuilding container images. In addition to storing conventional
configuration information, such as network and storage configurations, Kubernetes allows one
to store sensitive information, such as passwords, authentication tokens, and encryption keys.
• Automated rollouts and rollbacks: Kubernetes allows an operator to roll out a new version of
a service at a specified rate. That is, a user can create a new version of a container image, and
tell Kubernetes to start replacing running containers with the new version (i.e., repeatedly
terminate an existing container and start a replacement container running the new image).
More importantly, Kubernetes allows each new container to inherit all the resources the old
container owned.
The Kubernetes Cluster Model
• Kubernetes provides container orchestration, which means it automates the
deployment and operation of a set of one or more containers to provide a
computation service.
• Kubernetes uses the term cluster to describe the set of containers plus the
associated support software used to create, operate, and access the containers.
• The number of containers in a cluster depends on demand, and Kubernetes can
increase or decrease the number as needed.
• Software in a cluster can be divided into two conceptual categories:
• one category contains software invoked by the owner of the cluster to create
and operate containers.
• The other category contains software invoked by users of the cluster to obtain
access to a container.
The figure below depicts the conceptual organization of a Kubernetes cluster, and shows
the roles of an owner and users. Although the terms “owner” and “user” seem to
refer to humans, the roles do not have to be filled by entering commands manually.
Each of the two categories of software provides multiple APIs.
Kubernetes Pods
• Kubernetes deploys a set of one or more containers. In fact, Kubernetes deploys
one or more running copies of a complete application program. Many
applications do indeed consist of a single container.
• Kubernetes uses the term pod to refer to an application. Thus, a pod can consist
of a single container or multiple containers.
• A pod defines the smallest unit of work that Kubernetes can deploy.
• When it deploys an instance of a pod, Kubernetes places all containers for the
pod on the same node.
• In terms of networking, Kubernetes assigns an IP address to each running pod.
• If a pod has multiple containers, all containers in the pod share the IP address.
• Communication among containers in the pod occurs over the local host network
interface, just as if the containers in the pod were processes running on a single
computer.
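The analogy above, containers in a pod talking over the loopback interface like processes on one machine, can be modeled with two threads exchanging a message over 127.0.0.1. This is only an analogy in code; the message text and port selection are arbitrary.

```python
# Two "containers" in a pod share an IP address and communicate over
# loopback. Here, two threads stand in for the two containers.

import socket
import threading

def serve(listener):
    """Container A: accept one connection and send a greeting."""
    conn, _ = listener.accept()
    with conn:
        conn.sendall(b"hello from container A")

listener = socket.create_server(("127.0.0.1", 0))   # pick any free port
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# Container B: connect over loopback, just as a process on the same host would.
with socket.create_connection(("127.0.0.1", port)) as client:
    reply = client.recv(1024)
print(reply.decode())            # -> hello from container A
```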
Pod Creation, Templates, And Binding Times
• Kubernetes uses a binding time approach in which a programmer creates a
template for the pod (sometimes called a pod manifest) that specifies items to
use when running the pod.
• A template assigns a name to the pod, specifies which container or containers to
run, lists the network ports the pod will use, and specifies the version of the
Kubernetes API to use with the pod.
• A template can use YAML or JSON format; the figure below shows an example.
• When it deploys a pod on a node, Kubernetes stores information from the
template with the running pod.
• Any changes to a template apply to any new pods that are created from the
template, but do not affect already running pods.
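A minimal template of the kind described above might look like the following. The names and image are illustrative; the field layout follows the standard Kubernetes Pod schema (apiVersion, kind, metadata, spec).

```yaml
# Minimal pod template (manifest); names and image are illustrative.
apiVersion: v1                 # version of the Kubernetes API to use
kind: Pod
metadata:
  name: example-pod            # name assigned to the pod
spec:
  containers:
    - name: web
      image: nginx:1.25        # container to run
      ports:
        - containerPort: 80    # network port the pod will use
```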
Kubernetes Terminology: Nodes And Control Plane
• When the control plane software runs on a node, the node is known as a master node.
• A node that Kubernetes uses to run containers is a Kubernetes node or worker node.
• We will use the terms master node and worker node to make the purpose of each node explicit.
Worker Node Software Components
• Each worker node runs software components that control and manage the pods
running on the node.
• A pod is merely an environment, so the software components manage the
containers that comprise each pod.
• The figure below lists the main software components.
• Service Proxy: Sometimes called the kube-proxy, the Service Proxy is responsible
for configuring network forwarding on the node to provide network connectivity
for the pods running on the node. Specifically, the Service Proxy configures the
Linux iptables facility.
• Kubelet : The Kubelet component provides the interface between the control
plane and the worker node.
• It contacts the API server, watches the set of pods bound to the node, and
handles the details of running the pods on the node.
• Kubelet sets up the environment to ensure that each pod is isolated from other
pods and interfaces with the Container Runtime system to run and monitor
containers.
• Kubelet also monitors the pods running on the node, and reports their status
back to the API server.
• Kubelet includes a copy of the cAdvisor software that collects and summarizes
statistics about pods. Kubelet then exports the summary through a Summary API,
making them available to monitoring software (e.g., Metrics Server).
• Container Runtime: Kubernetes does not include a Container Runtime system.
• Instead, it uses a conventional container technology and assumes that each node
runs conventional Container Runtime software.
• Although Kubernetes allows other container systems to be used, most
implementations use Docker Engine.
• When Kubernetes needs to deploy containers, Kubelet interacts with the
Container Runtime system to perform the required task.
• The figure below illustrates the software components that run on each worker node.