Kubernetes_Notes
In Kubernetes, there is a master node and multiple worker nodes; each worker node can handle multiple pods.
Pods are just a bunch of containers clustered together as a working unit. You can start designing your applications using pods.
Once your pods are ready, you specify the pod definitions to the master node, along with how many replicas you want to deploy. From this point, Kubernetes is in control.
It takes the pods and deploys them to the worker nodes. If a worker node goes down, Kubernetes starts new pods on a functioning worker node.
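To make that concrete, here is a minimal sketch using the official Kubernetes Python client (it assumes `pip install kubernetes`, a working kubeconfig, and placeholder names and images):

```python
# Minimal sketch: hand the control plane a pod template plus a replica count
# and let Kubernetes schedule the pods onto worker nodes.
# Assumes `pip install kubernetes` and a kubeconfig pointing at your cluster;
# the names and image below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod

container = client.V1Container(
    name="web",
    image="nginx:1.25",  # placeholder image
    ports=[client.V1ContainerPort(container_port=80)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # "how many you want to deploy"
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

From here on, Kubernetes keeps three replicas running; if a node dies, the replacement pods land on a healthy node.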
This makes the process of managing containers simple.
It also makes it easier to add features and improve the application, helping you attain higher customer satisfaction.
Finally, no matter what technology you're invested in, Kubernetes can help you.
#Containerization is the trend that is taking over the world, allowing firms to run all kinds of applications in a variety of environments. To keep track of all these containers, and to schedule, manage, and orchestrate them, we need an orchestration tool. Kubernetes does this exceptionally well.
Kubernetes has a master-slave type of architecture. It operates on master node and worker node principles.
What exactly do they do?
Master Node:
> The main machine that controls the nodes
> Main entry point for all administrative tasks
> It handles the orchestration of the worker nodes
Worker Node:
> It is a worker machine in Kubernetes (used to be known as a minion)
> This machine performs the requested tasks. The Master Node controls each Node
> Runs containers inside pods
> This is where the Docker engine runs and takes care of downloading images and starting
containers
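If you want to see the master/worker split from the API side, here is a small sketch with the Python client (assumes `pip install kubernetes` and a kubeconfig; the role labels follow the common convention `node-role.kubernetes.io/control-plane`, or `.../master` on older clusters, which may differ in your setup):

```python
# Sketch: list the nodes in a cluster and tell control-plane (master) nodes
# from workers by their role labels. The label names are the usual convention,
# not a guarantee for every distribution.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    labels = node.metadata.labels or {}
    is_control_plane = (
        "node-role.kubernetes.io/control-plane" in labels
        or "node-role.kubernetes.io/master" in labels
    )
    role = "control-plane" if is_control_plane else "worker"
    print(node.metadata.name, role, node.status.node_info.kubelet_version)
```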
Know in-depth concepts here in the original article: https://blog.risingstack.com/what-is-kubernetes-how-to-get-started/
While tools such as #Docker provide the actual containers, we also need tools to take care of things such as replication, failover, and orchestration, and that is where Kubernetes comes into play.
The Kubernetes API is a great tool for automating a deployment pipeline. Deployments are not only
more reliable, but also much faster, because we’re no longer dealing with VMs.
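As one hedged example of what "automating a deployment pipeline" through the API can look like, a CI job could point an existing Deployment at a freshly built image to trigger a rolling update (the Deployment name, namespace, and image tag below are placeholders):

```python
# Sketch of one pipeline step: patch a Deployment's image, which triggers a
# rolling update. Assumes a Deployment named "web" already exists in "default";
# your CI system would fill in the real image tag.
from kubernetes import client, config

config.load_kube_config()
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "image": "registry.example.com/web:1.2.3"}  # placeholder tag
]}}}}
client.AppsV1Api().patch_namespaced_deployment(
    name="web", namespace="default", body=patch
)
```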
When working with Kubernetes, you have to become accustomed to concepts and names like pods, services, and replication controllers. If you're not familiar with them yet, no worries: there are some excellent resources available to learn Kubernetes and get up to speed.
BTW, take a look at these tips, tricks, and lessons for taking containerized apps to k8s:
https://lnkd.in/eZtxx-Z
> Automatic bin packing: Kubernetes automatically places containers based on their resource requirements, limits, and other constraints, without compromising on availability (see the resource-requests sketch after this list).
> Service discovery and load balancing: In simple words, service discovery is the process of figuring out how to reach a service; Kubernetes gives a set of pods a stable name and address and balances traffic across them.
> Self-healing: Restarts containers that fail, and replaces and reschedules containers when nodes die.
> Automated rollouts and rollbacks: With this feature, Kubernetes rolls out changes progressively and ensures it doesn't kill all your instances at the same time.
> Storage orchestration: Automatically mounts the storage system of your choice, whether local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
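The bin-packing feature works off the per-container resource requests and limits you declare. A minimal sketch with the Python client, with purely illustrative numbers:

```python
# Sketch: the requests/limits the scheduler uses when bin-packing pods onto
# nodes. The image and values are placeholders; tune them for your workload.
from kubernetes import client

container = client.V1Container(
    name="web",
    image="nginx:1.25",  # placeholder
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "128Mi"},  # what the scheduler reserves
        limits={"cpu": "500m", "memory": "256Mi"},    # hard ceiling at runtime
    ),
)
```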
See more in the original article: https://lnkd.in/e8MzdeV
What else? Know more: https://lnkd.in/ejetevG
#Kubernetes Core Features.
1. Container runtime: Kubernetes uses the Container Runtime Interface (CRI) to transparently manage your containers without necessarily having to know (or deal with) the runtime used.
3. The volume plugin: A volume broadly refers to the storage that will be made available to the pod.
5. Cloud provider: Kubernetes can be deployed on almost any platform you can think of.
6. Identity provider: You can use your own identity provider system to authenticate your users to the cluster, as long as it supports OpenID Connect. Read this amazing article: https://lnkd.in/eySj5aG
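To illustrate the volume idea, here is a hedged sketch that mounts the simplest possible volume (an emptyDir scratch directory) into a pod; for real storage you would reference a PersistentVolumeClaim instead. Names and image are placeholders.

```python
# Sketch: "a volume is storage made available to the pod" - an emptyDir
# scratch volume mounted into a container. Assumes `pip install kubernetes`
# and a kubeconfig; for durable storage, swap empty_dir for a
# persistent_volume_claim source.
from kubernetes import client, config

config.load_kube_config()
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="volume-demo"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(name="scratch", empty_dir=client.V1EmptyDirVolumeSource())],
        containers=[client.V1Container(
            name="app",
            image="busybox:1.36",  # placeholder
            command=["sh", "-c", "sleep 3600"],
            volume_mounts=[client.V1VolumeMount(name="scratch", mount_path="/data")],
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```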
Kubernetes setup:
How much time can you devote to setting up #Kubernetes?
That's the question to ask yourself.
Kubernetes itself (meaning the plain, open-source version) does not have a built-in installer, nor does it offer much in the way of one-size-fits-all default configurations. You’ll likely need to tweak (or write from scratch) a lot of configuration files before you get your cluster up and running smoothly.
Thus, the process of installing and configuring Kubernetes can be a very daunting one that
consumes many days of work.
Some Kubernetes distributions offer interactive installer scripts that help automate much of the setup process. If you use one of these, setup and installation are easier to accomplish in a day or two. But it's still by no means a turnkey process.
#Kubernetes autoscaling. How does it work?
Scaling is an essential operational practice that, for a long time, was done manually for applications. With the introduction of tools like Kubernetes, things have changed dramatically in the software industry.
In the context of a Kubernetes cluster, there are typically two things you would like to scale as a user: pods and nodes.
There are three types of scaling:
> HorizontalPodAutoscaler
> VerticalPodAutoscaler, and
> Cluster Autoscaler.
With these techniques, Kubernetes can take intelligent scaling decisions automatically.
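On the pod side, a minimal sketch of a HorizontalPodAutoscaler (autoscaling/v1) with the Python client; it assumes a Deployment named "web" exists and that metrics-server is running, and all names and numbers are placeholders:

```python
# Sketch: an HPA that keeps average CPU around 70% by scaling a Deployment
# between 2 and 10 replicas. Requires `pip install kubernetes`, a kubeconfig,
# and metrics-server in the cluster.
from kubernetes import client, config

config.load_kube_config()
hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"  # placeholder target
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```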
Cluster Autoscaler (CA) scales the number of nodes in your cluster based on the number of pending pods. It checks whether there are any pending pods and increases the size of the cluster so that these pods can be scheduled.
Mastering autoscaling takes patience and persistent effort: you find out which technique suits your app's needs through trial and error.
Continuous learning and experimentation are the key :)
Kubernetes is devised as a highly available cluster of computers that are connected to work as a
single unit for more power and efficiency.
The cluster forms the heart of Kubernetes: It can schedule and run containers across a group of
machines, be they physical or virtual, on-premises or in the cloud.
Kubernetes is still a sophisticated technology with a steep learning curve; even after a couple of years working with it, you’ll still wonder if you have it all under control.
But when your company asks you to decide on using and implementing Kubernetes, one question you will have is how to decide on the Kubernetes clusters.
My friend Sander has written an amazing article on this, take a look - https://lnkd.in/eSC5vpa
Kubernetes security:
Keep your clusters updated with the latest #Kubernetes security patches.
Just like any application, Kubernetes continuously ships new features and security updates. Hence, it is imperative that the underlying nodes and Kubernetes cluster components are kept up to date as well.
Standard “zonal” Kubernetes Engine clusters have only one master node backing them, but you can create “regional” clusters that provide multi-zone, highly available masters. One crucial thing to remember here: while creating a cluster, be sure to select the “regional” option.
In 2018, a severe vulnerability in #Kubernetes (CVE-2018-1002105) was disclosed...
This vulnerability allowed an unauthorized and unauthenticated user to gain full admin privileges on
a cluster and perform privilege escalation.
In another incident, security firm RedLock reported that hackers accessed one of Tesla’s Amazon cloud accounts and used it to run cryptocurrency-mining malware. The initial point of entry for the Tesla cloud breach was an unsecured administrative console for Kubernetes.
So many scary stories!
Role-based access control (RBAC) is one of the key security best practices. To tighten security and ease the handling of a large number of accounts, RBAC makes use of an intermediate item called a binding. Via the role binding mechanism, you can create “roles,” which have a set of capabilities, and then assign each user one or more roles. For example, some users might just have permission to list pods, and other users may have permission to get, list, watch, create, update, patch, and delete pods. Writing an article on this; it will be out soon.
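In the meantime, a minimal sketch of the "pod-reader" idea with the Python client; the role and namespace names are placeholders, and attaching the role to a user or ServiceAccount would be done separately with a RoleBinding (create_namespaced_role_binding):

```python
# Sketch: a narrow RBAC Role that only allows reading pods in one namespace.
# Assumes `pip install kubernetes` and a kubeconfig with enough rights to
# create Roles; bind it to users/ServiceAccounts via a RoleBinding.
from kubernetes import client, config

config.load_kube_config()
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="default"),
    rules=[client.V1PolicyRule(
        api_groups=[""],            # "" = the core API group (pods live there)
        resources=["pods"],
        verbs=["get", "list", "watch"],
    )],
)
client.RbacAuthorizationV1Api().create_namespaced_role(namespace="default", body=role)
```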
As #microservices and container-based infrastructure change how software is built these days, new security and compliance challenges appear for regulated firms.
Here is an example where Artem Semenov from Align Technology shows us the basic requirements for making #K8S compliant with sensitive data handling regulations, along with possible technical solutions - https://lnkd.in/g2G-rFX
How does Domino’s deliver so many new solutions, features, and updates, while it’s hot? By
cultivating an experimental culture of cloud-native innovation within the company.
Domino’s intends to create new business value and speed their time to market by rewriting core applications to run as microservices.
Domino's teams are modernizing these core applications in-house with microservices, but each team uses a different platform.
The in-house teams chose a comprehensive, production-grade Kubernetes distribution with enterprise security features and full lifecycle management support. With Kubernetes, Domino’s is evaluating the feasibility and level of effort required to convert current systems and processes to a container-based architecture.
This is a summary of Melanie Cebula's talk at QCon London about the way her team wraps Kubernetes into easy-to-consume internal services for its development teams.
Instead of creating a set of dreaded YAML files per environment, development teams need only provide their project-specific, service-focused inputs and then run the internal service kube-gen (alias k gen).
This simple command takes care of generating all the required YAML files, ensuring their correctness, and finally applying them in the corresponding Kubernetes cluster(s).
The infrastructure team at Airbnb is saving hundreds, if not thousands, of hours for 1,000+ engineers
who can now use a much simpler abstraction that has been adapted to their needs, with a user
experience that's familiar to them.
The figure above shows how the kube-gen wrapper generates the needed configuration files per environment at Airbnb. Source: Melanie Cebula, Airbnb.
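kube-gen itself is internal to Airbnb, so here is a purely hypothetical sketch of the idea only: take a few service-focused inputs and render one Deployment manifest per environment. It requires PyYAML (`pip install pyyaml`), and every name, environment, and default below is invented for illustration.

```python
# Hypothetical sketch of a kube-gen-style generator (not Airbnb's tool):
# service-focused inputs in, one YAML manifest per environment out.
import yaml

SERVICE = {"name": "listing-search", "image": "registry.example.com/listing-search:1.4.0", "port": 8080}
ENVIRONMENTS = {"staging": {"replicas": 2}, "production": {"replicas": 10}}

def render(service, env_name, env):
    # Build a plain-dict Deployment manifest for one environment.
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": service["name"], "namespace": env_name},
        "spec": {
            "replicas": env["replicas"],
            "selector": {"matchLabels": {"app": service["name"]}},
            "template": {
                "metadata": {"labels": {"app": service["name"]}},
                "spec": {"containers": [{
                    "name": service["name"],
                    "image": service["image"],
                    "ports": [{"containerPort": service["port"]}],
                }]},
            },
        },
    }

for env_name, env in ENVIRONMENTS.items():
    with open(f"{SERVICE['name']}-{env_name}.yaml", "w") as f:
        yaml.safe_dump(render(SERVICE, env_name, env), f, sort_keys=False)
```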
Bose supports rapid development for millions of #IoT products with Kubernetes. How?
Bose has been a big player in IoT devices and audio systems for several years.
Bose's engineering leadership team always wanted to move to a microservices architecture.
When demand started growing, they had to look for a solution that could help their engineering platform team deploy services to production quickly and without hassle. They evaluated many alternative platforms but finally chose Kubernetes for their scaled IoT platform-as-a-service running on AWS, thanks in large part to its vibrant community.
They launched a revised platform, along with Prometheus, an open-source monitoring system, to serve more than 3 million connected IoT devices. Today, Bose has over 1,800 namespaces and 340 worker nodes in one of its live production clusters. More than 100 engineers already work on this platform, and it is helping them make 30,000 non-production deployments every year.
Read the original story: https://lnkd.in/eJgBRHN
Also, take a look at this video on CI/CD enablement for connected device products with OTA
capabilities: http://bit.ly/IoTCICD
Amadeus had two choices: either pour more concrete and extend the data center, or move the workload to the cloud, and that is what made them go with Google Cloud.
So within 18 months, Amadeus had lifted and shifted one of their most critical applications, 'Master Pricer,' to the Google Cloud Platform.
Now you know: the next step for them was to move to Kubernetes, since it made more sense with the Google Cloud Platform.
The aim was to go faster with Kubernetes, and the challenge was to instill a discipline of learning Kubernetes across the team.
So, the team at Amadeus started learning how to operate Kubernetes, how to monitor it, and how to do alerting.
http://bit.ly/AmadeusGCP
#Kubernetes has helped Adidas deploy faster and more safely, with higher quality, and scale quickly.
Read further...
Adidas understood the importance of Kubernetes over VMs, and now their tech stack is wholly powered by Kubernetes. Before, creating a VM would take days or even weeks, which impacted the developers' productivity and the business overall.
Deployments that used to take four to five days can now be deployed four to five times a day with
the help of Kubernetes. Currently, Adidas has over 4,000 pods running on Kubernetes, achieving the
velocity it needs to develop applications faster than ever.
Their lead infrastructure engineers say that it is easy to set up and configure the new tool, but the problem comes with scaling.
They also stressed that training is essential for engineers working on the platform.
Source credits: TechGenix
#CloudNative technologies.
Read how News UK utilized the power of Kubernetes to save itself in the cloud-native world.
The critical goal for News UK was to be better able to scale up its environment around breaking
news events & unpredictable reader volumes.
They wondered whether VMs could help them, but soon realized that VMs take too long to spin up; when there is a spike in traffic, they are not fast enough to bring new capacity into the AutoScalingGroup (that's what Marcin Cuber, a former cloud DevOps engineer at News UK, has to say).
Cuber also had some advice for any organization looking to adopt Docker and Kubernetes.
> make your Docker images as small as possible and focus on running stateless applications with Kubernetes
> run health checks for your applications and use YAML to deploy anything (see the probe sketch after this list)
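The health-check advice in code, as a hedged sketch with the Python client: a liveness probe restarts a stuck container, while a readiness probe keeps traffic away until the container can serve. The image, paths, port, and timings are placeholders for your own application.

```python
# Sketch: liveness and readiness probes on a container spec. Attach this
# container to a Deployment/Pod spec as in the earlier examples.
from kubernetes import client

container = client.V1Container(
    name="app",
    image="registry.example.com/app:1.0.0",  # placeholder
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,
        period_seconds=15,
    ),
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/ready", port=8080),
        period_seconds=5,
    ),
)
```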
News UK also wanted to cut cloud costs, so they paired EKS clusters with AWS spot instances, and they also used AWS Lambda to make this work efficiently.
A banking app's must-read story of running #Kubernetes in production.
A journey that affirms you don't have to be too big to use Kubernetes.
They started their #CloudNative journey by splitting the massive monolith application into smaller
microservices.
To spin up these microservices, they used Ansible, Terraform, and Jenkins to deploy them as a whole unit.
They chose Kubernetes as the abstraction layer on top of AWS so they wouldn't have to worry about where the containers were running; this is how they were able to manage microservices and unlock velocity. They also chose Kubernetes from a security perspective and for the ability to specify how the applications should run.
Now they run around 80+ microservices in production with the help of Kubernetes :)
Watch and learn how they did it in the video 'Running Kubernetes in production at Lunar Way' by Kasper Nissen - https://lnkd.in/eU9s3JX
Why did eBay choose #Kubernetes?
Daily, eBay handles 300 billion data queries & a massive amount of data that’s above 500 petabytes.
eBay has to move massive amounts of data & manage the traffic, keeping in mind a smooth user
experience while still ensuring a secure, stable environment that’s flexible enough to encourage
innovation.
90% of eBay's cloud technology was dependent on OpenStack, and they are on the move to ditch OpenStack altogether.
eBay is “re-platforming” itself with Kubernetes, Docker, and Apache Kafka, a stream-processing platform that increases data handling and decreases latency.
The goal is to improve the user experience, boost the productivity of their engineers and programmers, and completely revamp their data center infrastructure.
The other activities in this re-platforming include designing their own custom servers and rolling out a new, decentralized strategy for their data centers. Like Facebook and Microsoft, eBay is relying on open sourcing to design their custom servers.
Bloomberg is one of the first companies to adopt #Kubernetes.
They put Kubernetes into production in 2017.
The aim was to bring up new applications and services to users as fast as possible and free up
developers from operational tasks. After evaluating many offerings from different firms, they
selected Kubernetes as they thought it aligned exactly with what they were trying to solve.
One of the key aims at Bloomberg was to make better use of existing hardware investments using different features of Kubernetes. As a result, they were able to use the hardware very efficiently.
Nothing great comes easy; Kubernetes makes many things simpler only if you know how to use it.
As the developers initially found it challenging to use, the teams had many training programs around
Kubernetes at Bloomberg.
Shopify's #Kubernetes journey is just mind-blowing :)
Shopify was one of the pioneering large-scale users of #Docker in production. They ran 100% of their production traffic in hundreds of containers. The Shopify engineering team saw the real value of containerization and also aspired to introduce a real orchestration layer.
They started looking at orchestration solutions, and the technology behind Kubernetes fascinated
them.
It all started in 2016, when all the engineers were happy running services everywhere with a simple stack that included Chef, Docker, AWS, and Heroku. But just like any other company in a growth phase, Shopify encountered some challenges when this Canadian e-commerce company saw 80k+ requests per second during peak demand. Wohooo :)
The Shopify engineering team believed in three principles: 'paved road,' 'hide complexity,' and 'self-serve.'
All credits to Niko Kurtti, QCon & InfoQ.
Box’s #Kubernetes journey is one of the finest #CloudNative inspirations. Read…
A few years ago at Box, it was taking up to six months to build a new #microservice.
Fast forward to today: it takes only a couple of days.
How did they manage to speed up? Two key factors made it possible:
1. Kubernetes technology
2. DevOps practices
Founded in 2005, Box started as a monolithic PHP application that grew over time to millions of lines of code. The monolithic nature of the application led to very tightly coupled designs, and this tight coupling got in their way, resulting in them not being able to innovate as quickly as they wanted to.
See the full video talk by Kunal Parmar, Senior Engineering Manager at Box: https://lnkd.in/etnJTbE
I hope you all are #GoT fans here...
Let me tell you the #Kubernetes story at HBO!
The engineers started panicking, as they knew the unpredictable traffic for the much-anticipated Game of Thrones season seven premiere was going to be HUGE.
One of the challenges they found was under-utilization of the deployed resources.
Node.js code tends to use only a single CPU core, while the AWS EC2 instances with excellent networking capabilities tended to be based on dual-core CPUs.
As such, HBO was only using 50 percent of the deployed CPU capacity across its deployment. The
ability to spin up new instances on EC2 wasn't quite as fast as what HBO needed.
HBO also found that in times of peak demand for Game of Thrones, it was also running out of
available IP addresses to help deliver the content to viewers.
In the end, HBO chose Kubernetes over other alternatives, largely because of its vibrant and active community.
Credits: KubeCon 2017, eWEEK
A conventional bank running its real business on such a young technology?
Nope, I am not kidding. Italy's banking group, Intesa Sanpaolo, has made this transition.
These are banks that still run their ATM networks on 30-year-old mainframe technology, so embracing the hottest trend and tech is nearly unbelievable. Even though ING, the banking and financial corporation, changed the way banks were seen by upgrading itself with Kubernetes and #DevOps practices very early in the game, there was still a stigma around adopting Kubernetes in the banking industry.
The bank's engineering team came up with an initiative in 2018 to throw away the old way of thinking and started embracing technologies like microservices and container architecture, and migrating from monolithic to multi-tier applications. It was transforming itself into a software company. Unbelievable.
How was 'Pokemon Go' able to scale so efficiently?
The answer is #Kubernetes. Read the story...
500+ million downloads and 20+ million daily active users. That's HUGE.
Pokemon Go's engineers never thought their user base would increase exponentially, surpassing expectations within such a short time. Even the servers couldn't handle that much traffic.
The Challenge:
Horizontal scaling was one thing, but Pokemon Go also faced a severe challenge when it came to vertical scaling, because of the real-time activity of millions of users worldwide. Niantic was not prepared for this.
The Solution:
The magic of containers. The application logic for the game ran on Google Container Engine (GKE)
powered by the open-source Kubernetes project.
“Going viral” is not always easy to predict, but you can always have Kubernetes in your tech stack.
CI/CD with Kubernetes:
How can you quickly achieve CI/CD automation with #Kubernetes and roll it out across your
organization?
Step 1: Develop your microservice using dependencies from registries that are proxied in Artifactory.
The resulting App package can be a .war or .jar file.
Step 2: Create a Docker Framework using Tomcat and Java-8 on Ubuntu as a base image. Push this
image to a Docker registry in Artifactory, where it is also scanned by Xray to assure security and
license compliance.
Step 3: Create the Docker image for the microservice by adding the .war/.jar file to the Docker
Framework, and push the image to a Docker registry in Artifactory, where it is scanned by Xray.
Step 4: Create a Helm chart for the microservice, and push it to a Helm repository in Artifactory.
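As a rough sketch of what a CI job might run for Steps 2-4, here is a small Python script that shells out to the `docker` and `helm` CLIs. Both tools must be installed and already logged in to your registry; the registry URL, image tag, and paths are placeholders, and how the packaged chart gets uploaded to Artifactory depends on how that Helm repository is configured.

```python
# Sketch of a CI step: build and push the microservice image, then package
# the Helm chart. Placeholders throughout; Xray scanning happens server-side
# in Artifactory once the artifacts arrive.
import subprocess

REGISTRY = "mycompany.jfrog.io/docker-local"   # placeholder Artifactory Docker registry
IMAGE = f"{REGISTRY}/my-microservice:1.0.0"    # placeholder tag

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["docker", "build", "-t", IMAGE, "."])      # Step 3: framework image + .war/.jar
run(["docker", "push", IMAGE])                  # pushed to Artifactory, scanned by Xray
run(["helm", "package", "./chart"])             # Step 4: package the Helm chart
# Uploading the resulting .tgz to the Artifactory Helm repo is left out here,
# since it depends on your repository setup (e.g. the JFrog CLI or an HTTP upload).
```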
What else?
Here are 15 interesting takeaways from the #CNCF annual survey.
Here are some tips and tricks shared by Timothy Josefik on HackerNoon.
More and more companies are trying to use Kubernetes in production, and that's a good move.
Free resources:
All credits to Kubernetes for helping companies scale and win big time.