Kubernetes: The Ultimate Beginners Guide To Effectively Learn Kubernetes Step-by-Step
Publishing Factory (2020)
Mark Reed
© Copyright 2019 - All rights reserved.
No part of this document may be reproduced, duplicated, or transmitted in any form, whether by electronic means or in printed format. Recording of this publication is strictly prohibited, and storage of this document is not allowed without written permission from the publisher, except for the use of brief quotations in a book review.
Table of Contents
Introduction
Chapter One: A Kubernetes Overview
Chapter Two: Kubernetes Architecture
Chapter Three: Kubernetes API Server
Chapter Four: Kubernetes Load Balancing, Networking and Ingress
Chapter Five: Scheduler
Chapter Six: Monitoring Kubernetes
Final Words
Introduction
The history of computer science can be characterized by the
development of abstractions that aim at reducing complexity and
empowering people to create more sophisticated applications. However, the
development of scalable and reliable applications is still more challenging
than it should be. To reduce the complexity, containers and container
orchestration APIs such as Kubernetes have been introduced as crucial
abstractions that radically simplify the development of scalable and reliable
systems. Although orchestrators and containers are still being absorbed into the mainstream, they already do a great job of enabling developers to create and deploy applications with agility, reliability, and, above all, speed.
Kubernetes has become the de facto platform for deploying and managing cloud-native applications. As adoption has grown, Kubernetes is increasingly used for more complex and mission-critical applications, so enterprise operations teams need to be conversant with it to manage whatever challenges arise. Note that developer experience, operator experience, and multi-tenancy are the core challenges that Kubernetes users encounter. Though the complexity of using and operating Kubernetes can be a real concern, enterprises that overcome these challenges enjoy substantial benefits, such as increased release frequency, faster recovery from failures, quicker adoption of cloud technologies, and an improved customer experience that, in turn, yields a range of business advantages. The good news is that developers gain the freedom to innovate faster, while operations teams ensure that resources are used efficiently and compliance is upheld.
This book focuses on explaining the management and design of
Kubernetes clusters. As such, it covers in detail the capabilities and services
that are provided by Kubernetes for both developers and daily users. The
following chapters will take the reader through the deployment and
application of Kubernetes, while taking into consideration different use cases and environments. The reader will then gain extensive knowledge
about how Kubernetes is organized, when it is best to apply various
resources, and how clusters can be implemented and configured effectively.
In the following chapters, you will gain an in-depth understanding of the Kubernetes architecture and its clusters: how they are installed, how they operate, and how software can be deployed on them by applying best practices.
If you are new to Kubernetes, this book offers in-depth information that
helps in understanding Kubernetes, its benefits, and why you need it. For
instance, the following chapters give a detailed introduction to Kubernetes,
containers, and development of containerized applications. It will further
describe the Kubernetes cluster orchestrator and how tools and APIs are
used to improve the delivery, development, and maintenance of distributed
applications. You will understand how to move container applications into
production by applying the best practices, and you will also learn how
Kubernetes fits into your daily operations, ensuring that you prepare for
production-ready container application stacks. This book aims to help the reader understand the Kubernetes technology and to use the Kubernetes tooling efficiently and effectively in order to develop and deploy applications to Kubernetes clusters.
Chapter One:
A Kubernetes Overview
What is Kubernetes?
Kubernetes (from the Greek word for "helmsman" or "pilot") is an open-source container orchestration system that automates deployment, scaling, and management. Originally designed at Google, its first stable version was released in July 2015, when Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) and donated Kubernetes to it. Kubernetes makes it easier for a developer to package an application together with the various elements it needs so that it ships as one unit. For developers wanting to create more complex applications that involve multiple containers and machines, Kubernetes is an ideal solution. It can restart application elements and move them across machines as required. It serves as a basic framework that allows users to choose the frameworks, instruments, languages, and other tools they prefer. Even though Kubernetes is not a Platform as a Service (PaaS) tool, it still forms a good basis for building such platforms. Kubernetes is designed to help solve modern application infrastructure problems.
Its main unit of organization is called a pod. A pod is a group of one or more containers that are scheduled together on the same machine and can communicate with one another easily. Each pod has a unique IP address, so different applications can use the same ports without risk of conflict. Containers within a pod can reach one another over localhost. Pods are grouped into services by means of labels, the key-value metadata that Kubernetes attaches to objects; together, these pieces give a systematic and consistent way to drive the cluster with predefined instructions from the command line. A pod can also define volumes, such as a network disk, and expose them to the containers inside it. Pods are usually managed through a controller, which ensures they keep running properly.
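To make this concrete, here is a minimal, hypothetical pod manifest (the web-pod name, the app: web label, and the images are illustrative, not taken from this book): two containers that share the pod's IP address and a common volume.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod              # illustrative name
      labels:
        app: web                 # label later used by services and controllers
    spec:
      volumes:
        - name: shared-data      # an emptyDir volume shared by both containers
          emptyDir: {}
      containers:
        - name: web
          image: nginx:1.25      # example image
          ports:
            - containerPort: 80
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
        - name: helper
          image: busybox:1.36
          command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
          volumeMounts:
            - name: shared-data
              mountPath: /data

Because both containers sit behind the same pod IP, the helper could also reach the web container simply as localhost:80.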
Kubernetes does not limit the types of applications it supports; its objective is to accommodate an extremely wide variety of workloads. It is important to understand that Kubernetes does not deploy source code and does not build your workload; continuous integration and delivery workflows are left to the development team and its technical requirements. Kubernetes does not provide application-level logging, monitoring, or alerting solutions either, though it offers integration points and mechanisms for collecting and exporting metrics. Nor does it dictate a comprehensive configuration or management system; it is left to the developer to build on Kubernetes' self-healing primitives to protect their application. Kubernetes also removes the need for orchestration in the traditional sense of executing a fixed workflow (first do A, then B, then C); instead, it is made up of independent control processes that continuously drive the current state toward the desired state, without a central orchestration script. This makes Kubernetes an easy, efficient, powerful, and resilient system. The ecosystem around Kubernetes provides the building blocks needed to address the concerns of microservice architectures. This efficiency is why Kubernetes is so widely used by developers who want their applications and software tools to run reliably.
Kubernetes also has ReplicaSets, which maintain the desired number of copies of a pod. A selector is what lets a ReplicaSet do its job: it identifies the pods that belong to the set, so the controller knows which pods to add, remove, or keep. For multi-tier applications, Kubernetes offers service discovery by assigning stable IP addresses and DNS names to services; a service load-balances traffic across the pods whose labels match its selector. By default a service is exposed only inside the cluster, but it can also be exposed outside it. Storage is handled through volumes: because every container restart clears the data in the container's local file system, a pod can declare volumes whose contents live for the lifetime of the pod and can be shared among its containers. The pod configuration determines where each volume is mounted, and different containers may mount the same volume at different paths.
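As a hedged sketch of these pieces working together (names reuse the hypothetical app: web label from the earlier pod example), here is a ReplicaSet that keeps three replicas running and a service that load-balances across them:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: web-rs               # illustrative name
    spec:
      replicas: 3                # desired number of identical pods
      selector:
        matchLabels:
          app: web               # pods carrying this label belong to the set
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-service          # stable DNS name inside the cluster
    spec:
      type: ClusterIP            # default: reachable only from within the cluster
      selector:
        app: web                 # traffic is load-balanced to matching pods
      ports:
        - port: 80               # port on the service's stable cluster IP
          targetPort: 80         # port the containers listen on

Changing the type to NodePort or LoadBalancer is one way such a service could be exposed outside the cluster.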
Kubernetes uses namespaces to partition how different users and teams consume a cluster's resources. Namespaces can separate development, testing, and production, giving each group its own clean environment. ConfigMaps and Secrets are used to manage configuration settings, giving users a place to store and manage this information outside of the container images. The data they hold is copied onto the nodes that run the pods which use it, and when such a pod is deleted, the node's copy of that ConfigMap or Secret data is removed as well. Applications can consume this data through environment variables or through files in the container file system, which are visible only from within the pod and nowhere else in the system. This makes configuring and deploying applications much easier for developers, and careful use of this configuration machinery improves data protection. Kubernetes provides a good platform for cloud-native applications that require scaling and real-time data streaming, which is why so many people see it as an efficient and important tool in their work.
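A minimal sketch, assuming a hypothetical ConfigMap named app-config in a development namespace, of how configuration can be injected into a pod as an environment variable:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config           # illustrative name
      namespace: development     # namespaces keep environments such as dev and prod apart
    data:
      LOG_LEVEL: "debug"         # non-sensitive settings; sensitive values belong in a Secret
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-pod
      namespace: development
    spec:
      containers:
        - name: app
          image: busybox:1.36
          command: ["sh", "-c", "echo log level is $LOG_LEVEL && sleep 3600"]
          env:
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: app-config   # value is read from the ConfigMap above
                  key: LOG_LEVEL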
Why People Need Kubernetes
Developers look for good ways to package and run their applications with as little downtime as possible. Kubernetes saves the day by keeping your system up and running all the time. It offers good value, and many who have used it have found it beneficial.
1. Reproducibility
Kubernetes ensures that if, for example, you were to delete your entire production environment, you could recreate the same environment from its backed-up definitions. Kubernetes helps you during disaster recovery, so you do not need to redo the configuration from scratch. Provided you follow your cloud provider's instructions, you can even switch to a new account and still restore the environment you had backed up. In the past, teams using tools such as Terraform and Ansible spent considerable time and money writing and maintaining configuration code, and development often cost more than expected; with Kubernetes this becomes far simpler. You can describe everything in YAML (YAML Ain't Markup Language) files, and Kubernetes also manages the DNS records required to connect services together. It gives you the chance to stand up several copies of your infrastructure rather than just one: you can create a new demo setup from the same YAML files and deploy it into a different namespace on the cluster, as in the sketch below. To achieve this, build your containers so that their configuration can be changed easily, and write your YAML files expecting to have several separate environments. Your Continuous Integration (CI) system should be able to spin up such an environment easily, which is why the deployment process should be well integrated. Kubernetes makes it easy for you, and everyone else who uses it, to achieve these results.
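As a minimal sketch of that idea (the demo namespace name is purely illustrative), the same application manifests can be applied into a fresh namespace and torn down again when the demo is over:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: demo                 # a separate, disposable copy of the environment
    ---
    # The existing application YAML can then be deployed into it, for example:
    #   kubectl apply -f app/ --namespace demo
    # and the whole environment removed again with:
    #   kubectl delete namespace demo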
2. Immutability
Kubernetes embraces immutability: instead of updating something in place, you replace it. Traditionally, people upgraded their software whenever a new or better version appeared, and that hassle largely disappears with Kubernetes. If problems occurred during an update, developers could face a tough time finding a fix, wading through documents, troubleshooting, and searching for solutions online. Kubernetes offers a better way to handle the downtime involved in upgrading and updating: instead of updating your server, you install a new one.
Before bringing in the new server, you first make sure that it works properly and is compatible; only then do you put it into service. Keeping a spare server around is also useful when it comes time to roll out new infrastructure. Virtual machines and cloud providers make this style of upgrade easier, and because manually installing or updating servers is time-consuming, automated tools are needed to carry it out. Kubernetes makes this much simpler, because the concepts and the virtual machines used to set servers up have been streamlined. Users will find that the containers running on top of Kubernetes stay secure when the correct installation methods are used and containers are replaced rather than modified in place. Kubernetes also makes it possible, if there is a problem with a newer version, to switch back to the previous one. Keep in mind, however, that rolling back is not always an option: if the new container version has, for example, already migrated your database to a format the old version cannot read, rejoining the previous nodes can be difficult and you may not be able to return to the earlier version at all.
3. Easier deployment
Deploying to servers has, at times, been tricky to handle. Before Kubernetes, users had to place a new version of the application in a different directory on the server and then switch a symlink across all the other servers, starting with empty caches and repopulating databases before the servers could be put to work. When containers arrived, this process became simpler, because initialization is taken care of when the container is launched, and caches naturally purge themselves as containers are replaced. Kubernetes keeps the system well protected, so containers cannot be launched in a way that damages the Kubernetes cluster.
Faster and easier deployment also means a faster time to market. Breaking work down into smaller teams lets each team focus on a single role and perform it efficiently, while specialized experts can be assigned to support several of these teams at once. This helps your IT organization grow into handling large applications made up of many containers, because it is already organized around small, focused teams.
Tinder is one application that has used Kubernetes to support its growth. As usage soared, Tinder reported that it was struggling with scale and stability, went looking for a reliable container orchestration system, and found Kubernetes to be the answer. Since then, its operations have run smoothly.
4. Networking abstraction
Writing iptables scripts was one of the traditional ways people handled networking. iptables was error-prone: even after many attempts to fix a script, plenty could still go wrong, and it was hard for developers to change because of the specialized knowledge and permissions required. Site admins would not let outsiders handle change requests, so only internal staff could act on them. Kubernetes networking and network policies hide this complexity and make firewall-style rules easy to deploy. The instructions for how the application should run are stored in the git repository (or git repo), alongside the application's other configuration. Teams can maintain their own firewall rules and use the Kubernetes audit tools to ensure security, which works better than locking down firewall configuration purely to satisfy compliance policies.
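As a hedged sketch of such a declarative rule (labels and names are illustrative, and enforcement assumes the cluster's network plugin supports NetworkPolicy), here is a policy that only lets pods labeled app: frontend reach pods labeled app: web on port 80:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-web    # illustrative name
    spec:
      podSelector:
        matchLabels:
          app: web                   # the pods this policy protects
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend      # only traffic from these pods is admitted
          ports:
            - protocol: TCP
              port: 80               # and only on this port

Because the rule is just another YAML object, it can live in the same git repo as the rest of the application's configuration and be reviewed like any other change.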
5. Deploying and updating software at scale
Deployment is about speeding up the process of building, testing, and releasing software. Most tooling does not support doing this at scale, but Kubernetes does. Thanks to Kubernetes, you can manage and evolve your application's lifecycle. Its deployment controller turns complex tasks into simple ones: it can quickly identify completed, in-progress, and failed rollouts, and it saves time because a deployment can be paused and resumed later. If the upgraded version turns out to be unstable, the controller can roll the application back to the earlier version. Kubernetes also simplifies everyday deployment operations, including horizontal auto-scaling, in which the autoscaler adjusts the number of pods based on the resources they consume; rolling updates, which replace pods gradually while respecting predefined limits on how many pods may be unavailable and how many temporary spare pods may exist; and canary deployments, which run the new version alongside the old one for a subset of traffic before the upgrade is rolled out everywhere.
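A minimal, hypothetical Deployment sketch (the web-deploy name, labels, and image tag are illustrative) showing the rolling-update limits described above:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-deploy             # illustrative name
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1        # at most one pod may be down during the rollout
          maxSurge: 1              # at most one temporary spare pod may be created
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25    # changing this tag triggers a gradual rollout
              ports:
                - containerPort: 80

Pausing, resuming, and reverting such a rollout are exposed through commands like kubectl rollout pause, kubectl rollout resume, and kubectl rollout undo against the deployment.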
6. Great for multi-cloud adoption
A microservice architecture built on containers makes it easy to split your application into small components and run them in different environments and clouds. Kubernetes does not limit you and can be used anywhere: in public, private, or hybrid clouds. This lets you reach users wherever they are while keeping the setup secure. Many of today's businesses have grown to love this microservices approach, since it makes their systems easier to manage.
7. IT Cost Optimization
If your company is operating at massive scale, then Kubernetes has your budget covered. Its container-based infrastructure lets you pack different applications together on shared machines, helping you optimize how you use your cloud and hardware investments. Managing hardware and cloud spend was a hard nut to crack for people who did not use Kubernetes, but since its development, people have come to enjoy using it because it respects their budgets.
Spotify is one company that has benefited from Kubernetes in cost terms. According to Spotify, since they started using Kubernetes they have been able to improve their CPU utilization, leading to better IT cost optimization.
Importance of Kubernetes
1. Portability
Kubernetes offers an easier and faster way to deploy. Companies can take advantage of multiple clouds to grow their infrastructure even further; Kubernetes does not tie you down to one system, and its portability makes moving between them easier.
2. Scalability
Kubernetes can be deployed anywhere, across all cloud environments, and its containers can run on bare metal or in virtual machines. Thanks to recent advances in development and deployment practices, Kubernetes allows you to scale much faster than before. The orchestration system can scale applications automatically, which improves performance and optimizes infrastructure utilization, driven not just by standard metrics but by the resources actually being consumed, as sketched below.
A good example of a company that has benefited from the scalability of Kubernetes is LendingTree, whose platform is built from many microservices. Horizontal scaling ensures that clients can reach the application even during peak seasons, and Kubernetes keeps those applications deployed and running all the time so that they do not fail.
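A hedged sketch of such automatic horizontal scaling (the web-hpa name refers back to the hypothetical web-deploy Deployment shown earlier, and a metrics source such as metrics-server is assumed to be installed in the cluster):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa                  # illustrative name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web-deploy             # the Deployment sketched earlier
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70 # add pods when average CPU use stays above 70%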
3. Highly available
Kubernetes supports high availability at both the application and the infrastructure level. Reliable storage keeps workloads readily available, and multi-node replication, driven by the configuration, further enhances availability. This availability makes Kubernetes better and more efficient to use, which is why developers and companies prefer it over software that offers no availability or replication guarantees. The success of an application depends not only on its features but on its availability: if your application is down or broken when you need it most, you get nothing out of it. That is why developers turn to Kubernetes, which stays available and does not break down at crucial times, and it is the natural solution for companies that have struggled with application unavailability.
4. Open-source
Being open source gives developers the freedom to design their work around Kubernetes and its vast ecosystem. It does not limit you to Kubernetes-specific tools; the tools you choose can be used as long as they are compatible with the containers involved. This openness helps with automating, deploying, scaling, and managing your application, so you can run it without wasting time, and it makes developers and companies want to use Kubernetes all the more.
5. Market leader without competition
People have a lot of trust in Google, and because Kubernetes was originally designed by Google, that faith extends to Kubernetes itself. This gives it instant credibility, and it has ended up as the most widely used application deployment platform among developers. Others have tried to compete with Kubernetes but have failed to match its reach. Kubernetes addresses its security threats and keeps unnecessary traffic out of the network. As a result, it remains the market leader, with almost no competitors.
Many developers expect Kubernetes to be the big thing of the next few years, because they can see that it will continue to offer a broad infrastructure platform for developers. Research suggests that students graduating over the next four or five years are keen to use Kubernetes for their application deployments. Being a virtualization admin now means focusing on the containers that serve you best, since virtualization is relevant not only to infrastructure teams but to developers as well. If you use the wrong containers for your application development, nobody will enjoy the application you create; you may grow frustrated with it, give it up, or end up spending more than you planned. This is why Kubernetes is regarded as the best choice for containers and application development, and why it lacks real competitors and will continue to lead the market.