Learn DevOps: Kubernetes
Kubernetes
Why Kubernetes
• The salary of DevOps-related jobs is sky high, up to $146,207 on average in San Francisco
Kubernetes
• If you are in an Ops or DevOps role, you'll want to start using containers to deliver software more efficiently, more easily, and faster
• It gives you all the flexibility and cost savings that you always wanted within one framework
Who am I
• My name is Edward Viaene
• I’m a big advocate of Agile and DevOps techniques in all the projects I
work on
Online Training
• Online training on Udemy
Kubernetes
• What is included in this course?
• Cluster Setup lectures using minikube or the Docker client (for desktop usage) and production cluster setup on AWS using kops
• Lectures and demos on Kubeadm are also available for on-prem setups
Kubernetes
• What version am I using? (don’t worry too much about it)
• I update all the scripts and yaml definitions on Github to make sure they work with the latest version available
• If not, you can send me a message and I’ll update the script on Github
Course Overview
• Introduction: What is Kubernetes, Cloud / On-premise setup, Cluster Setup, Building Containers, Running your first app, Building Container Images
• Kubernetes Basics: Node Architecture, Scaling pods, Deployments, Services, Labels, Healthchecks, ReadinessProbe, Pod State & LifeCycle, Secrets, WebUI
• Advanced topics: Service auto-discovery, ConfigMap, Ingress, External DNS, Volumes, Pod Presets, StatefulSets, Daemon Sets, Monitoring, Autoscaling, Node Affinity, InterPod (Anti-)Affinity, Taints and Tolerations
• Administration: Master Services, Quotas and Limits, Namespaces, User Management, RBAC, Networking, Node Maintenance, High Availability, TLS on ELB
• Packaging: Introduction to Helm, Creating Helm Charts, Helm Repository, Building & Deploying
• Extras: kubeadm
• Setting up your Kubernetes cluster for the first time can be hard, but once you're past the initial lectures, it will get easier, and you'll deepen your knowledge by learning all the details of Kubernetes
• When you're finished with this course, you can continue with my 2 other (Advanced) Kubernetes courses; you'll get a coupon code in the last lecture (bonus lecture)
Kubernetes
Support and Downloads
Feedback and support
• To provide feedback or get support, use the discussion groups
• You can scan the following barcode or use the link in the next document
after this introduction movie
Procedure Document
• Use the next procedure document in the next lecture to download all the
resources for the course
What is Kubernetes
What is kubernetes
• Kubernetes is an open source orchestration system for Docker containers
What is kubernetes
• Instead of just running a few docker containers on one host manually,
Kubernetes is a platform that will manage the containers for you
• Kubernetes clusters can start with one node and grow to thousands of nodes
• Other orchestration systems exist, such as:
• Docker Swarm
• Mesos
Kubernetes Advantages
• You can run Kubernetes anywhere:
• Highly modular
• Open source
• Great community
• Backed by Google
Containers
Virtual Machines vs Containers
(Diagram: with virtual machines, each app runs with its bins/libs on its own Guest OS on top of a hypervisor on the host server; with containers, the apps share the Host OS and run with their bins/libs directly on the Docker Engine.)
Docker
• Docker is the most popular container software
• Docker Engine
• Docker Hub
Docker Benefits
• Isolation: you ship a binary with all the dependencies
• You can run the same docker image, unchanged, on laptops, data center
VMs, and Cloud providers.
Containerization
• But, there are more integrations for certain Cloud Providers, like AWS & GCE
• Things like Volumes and External Load Balancers work only with
supported Cloud Providers
• I will first use minikube to quickly spin up a local, single-machine Kubernetes cluster
• I’ll then show you how to spin up a cluster on AWS using kops
Kubernetes Setups
• Doing the labs yourself is possible (and highly recommended):
• Using the AWS Free tier (gives you 750 hours of t2.micro’s / month)
• http://aws.amazon.com
• Using DigitalOcean
Kubernetes Setup
Set up Kubernetes locally
Minikube Setup
• Minikube is a tool that makes it easy to run Kubernetes locally
• It's aimed at users who want to just test it out or use it for development
Minikube Setup
• It works on Windows, Linux, and MacOS
• To launch your cluster you just need to enter (in a shell / terminal /
powershell):
$ minikube start
Demo
Local kubernetes setup using minikube
Demo
Local kubernetes setup using the docker client
Kubernetes Setup
minikube vs docker client vs kops vs kubeadm
minikube / docker client / kops / kubeadm
• There are multiple tools to install a kubernetes cluster
• Minikube and docker client are great for local setups, but not for real
clusters
• For other installs, or if you can’t get kops to work, you can use kubeadm
• The kubeadm lectures can be found at the end of this course, and let
you spin up a cluster on DigitalOcean
Kubernetes Setup
Setting up Kubernetes on AWS
Cloud Setup
• To set up Kubernetes on AWS, you can use a tool called kops
Cloud Setup
• Kops only works on Mac / Linux
Demo
Kubernetes setup on AWS
Demo
DNS troubleshooting
Building Containers
Building your own app in Docker
Building containers
• To build containers, you can use Docker Engine
• Windows: https://docs.docker.com/engine/installation/windows/
• MacOS: https://docs.docker.com/engine/installation/mac/
• Linux: https://docs.docker.com/engine/installation/linux/
• Or you can use my vagrant devops-box which comes with docker installed
• In the demos I will always use an ubuntu-xenial box, set up with Vagrant
Dockerfile
• Dockerizing a simple NodeJS app only needs a few files:
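As a hedged illustration (the exact files are in the course's docker-demo repository; this sketch assumes a package.json with a start script and an index.js that listens on port 3000), a minimal Dockerfile could look like this:
# base image with NodeJS installed (the version tag is an assumption)
FROM node:4.6
# copy the app code into the image and install its dependencies
WORKDIR /app
ADD . /app
RUN npm install
# the demo app listens on port 3000
EXPOSE 3000
CMD npm start
Together with index.js (the web server code) and package.json (the dependencies), this is everything docker build needs.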
Dockerfile
• To build this project, docker build can be used
$ cd docker-demo
$ ls
Dockerfile index.js package.json
$ docker build .
[…]
$
• After the docker build process you have built an image that can run the
nodejs app
Demo
Your own docker image
Docker Registry
Push containers to Docker Hub
Dockerfile
• You can run the docker app by executing "docker run" locally
• Then you can push any locally built images to the Docker Registry (where docker images can be stored)
Dockerfile
• To push an image to Docker Hub:
$ docker login
$ docker tag imageid your-login/docker-demo
$ docker push your-login/docker-demo
$ cd docker-demo
$ ls
Dockerfile index.js package.json
$ docker build -t your-login/docker-demo .
[…]
$ docker push your-login/docker-demo
[…]
$
Docker remarks
• You can build and deploy any application you want using docker and kubernetes,
if you just take into account a few limitations:
• Don't try to create one giant docker image for your app, but split it up if necessary
• Data inside the container is not preserved: when a container stops, all the changes within the container are lost
• You can preserve data, using volumes, which is covered later in this course
• For more tips, check out the 12-factor app methodology at 12factor.net
Docker remarks
• Here are a few official images you might use for your app:
• https://hub.docker.com/_/nginx/ - webserver
• https://hub.docker.com/_/php/ - PHP
• https://hub.docker.com/_/node - NodeJS
• https://hub.docker.com/_/ruby/ - Ruby
• https://hub.docker.com/_/python/ - Python
• https://hub.docker.com/_/openjdk/ - Java
Demo
Pushing docker image to Docker Hub
Running first app
First app
• Let’s run our newly built application on the new Kubernetes cluster
• A pod can contain one or more tightly coupled containers that make up the app
• Those apps can easily communicate with each other using their local port numbers
Create a pod
• Create a file pod-helloworld.yml with the pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
Useful Commands
• This AWS Load Balancer will route the traffic to the correct pod in Kubernetes
• There are other solutions for other cloud providers that don’t have a Load
Balancer
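A hedged sketch of the commands used to run the first app behind a load balancer (the service name and ports are assumptions based on the pod definition above; on AWS, --type=LoadBalancer creates an ELB):
$ kubectl create -f pod-helloworld.yml
$ kubectl get pods
$ kubectl expose pod nodehelloworld.example.com --type=LoadBalancer --name=helloworld-service --port=80 --target-port=3000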
Demo
Running the first app behind a load balancer
Introduction to Kubernetes
Recap
First app
(Diagram: the container image is built on the workstation, pushed to Docker Hub, and pulled by the Kubernetes cluster, where it runs as the k8s-example pod; a busybox pod also appears in the recap diagrams.)
The pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000
The service definition with type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 80
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: LoadBalancer
The service definition with type NodePort:
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
Kubernetes Basics
Node Architecture
Architecture overview
(Diagram: each node (node 1 … node N) runs a kubelet, kube-proxy (managing iptables), and Docker; the pods (Pod 1 … Pod N) run as containers on those nodes, and a Service / Load Balancer routes traffic from the internet to the pods.)
An example pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
Scaling pods
Scaling
• If your application is stateless you can horizontally scale it
• Any files that need to be saved can't be saved locally on the container
Scaling
• Our example app is stateless: if the same app runs multiple times, it doesn't change state
• Later in this course I’ll explain how to use volumes to still run stateful apps
• Those stateful apps can’t horizontally scale, but you can run them in a
single container and vertically scale (allocate more CPU / Memory /
Disk)
Scaling
• Scaling in Kubernetes can be done using the Replication Controller
• The replication controller will ensure a specified number of pod replicas will run at all times
• Pods created with the replication controller will automatically be replaced if they fail, get deleted, or are terminated
• Using the replication controller is also recommended if you just want to make sure 1 pod is always running, even after reboots
Scaling
• To replicate our example app 2 times:
apiVersion: v1
kind: ReplicationController
metadata:
  name: helloworld-controller
spec:
  replicas: 2
  selector:
    app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - containerPort: 3000
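A minimal sketch of how this could be used, assuming the definition above is saved as helloworld-repcontroller.yml (the filename is an assumption):
$ kubectl create -f helloworld-repcontroller.yml
$ kubectl get rc
$ kubectl scale --replicas=4 rc/helloworld-controller
$ kubectl delete rc/helloworld-controller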
Demo
Horizontally scale a pod with the replication controller
Deployments
Replica Set
• The Replica Set is the next-generation Replication Controller
• The Replica Set, rather than the Replication Controller, is used by the Deployment object
Deployments
• A deployment declaration in Kubernetes allows you to do app
deployments and updates
• When using the deployment object, you define the state of your application
• Kubernetes will then make sure the cluster matches your desired state
Deployments
• With a deployment object you can:
Deployments
• This is an example of a deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - containerPort: 3000
Useful Commands
• kubectl get pods --show-labels: get pods, and also show labels attached to those pods
• kubectl rollout status deployment/helloworld-deployment: get the deployment status
• kubectl set image deployment/helloworld-deployment k8s-demo=k8s-demo:2: run k8s-demo with the image labeled version 2
• kubectl edit deployment/helloworld-deployment: edit the deployment object
Demo
A deployment
Services
Services
• Pods are very dynamic, they come and go on the Kubernetes cluster
• That's why Pods should never be accessed directly, but always through a Service
• A service is the logical bridge between the "mortal" pods and other services or end-users
Services
• When using the “kubectl expose” command earlier, you created a new
Service for your pod, so it could be accessed externally
Services
• The options just shown only allow you to create virtual IPs or ports
Services
• This is an example of a Service definition (also created using kubectl expose):
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
• Note: by default, services can only use node ports in the 30000-32767 range, but you can change this behavior by adding the --service-node-port-range= argument to the kube-apiserver (in the init scripts)
Demo
A new service
Labels
Labels
• Labels are key/value pairs that can be attached to objects
• Labels are like tags in AWS or other cloud providers, used to tag resources
• You can label your objects, for instance your pod, following an organizational structure
• In our previous examples I have already been using labels to tag pods:
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
Labels
• Labels are not unique and multiple labels can be added to one object
• Once labels are attached to an object, you can use filters to narrow down results
• Using Label Selectors, you can use matching expressions to match labels
• For instance, a particular pod can only run on a node labeled with
“environment” equals “development”
Node Labels
• You can also use labels to tag nodes
• Once nodes are tagged, you can use label selectors to let pods only run on specific nodes
Node Labels
• First step, add a label or multiple labels to your nodes:
$ kubectl label nodes node1 hardware=high-spec
$ kubectl label nodes node2 hardware=low-spec
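• The second step is to add a nodeSelector to the pod spec; a minimal sketch, reusing the k8s-demo pod from earlier:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
  nodeSelector:            # only schedule this pod on nodes labeled hardware=high-spec
    hardware: high-spec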
Demo
Node Selector using labels
Health Checks
Health checks
• If your application malfunctions, the pod and container can still be running, but
the application might not work anymore
• To detect and resolve problems with your application, you can run health
checks
• The typical production application behind a load balancer should always have
health checks implemented in some way to ensure availability and resiliency
of the app
Health checks
• This is what a health check looks like on our example container:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
    livenessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 15
      timeoutSeconds: 30
Demo
Performing health checks
Readiness Probe
Readiness Probe
• Besides livenessProbes, you can also use readinessProbes on a
container within a Pod
• If the check fails, the container will not be restarted, but the Pod’s IP
address will be removed from the Service, so it’ll not serve any
requests anymore
Readiness Probe
• The readiness test will make sure that at startup, the pod will only receive traffic when the test succeeds
• You can use these probes in conjunction, and you can configure different tests for them
• If your container always exits when something goes wrong, you don't need a livenessProbe
• In general, you configure both the livenessProbe and the readinessProbe
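A hedged sketch of what a readinessProbe could look like on the example container (assuming the same HTTP endpoint as the liveness check), added to the container spec next to the livenessProbe:
    readinessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 15
      timeoutSeconds: 30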
Demo
Performing health checks (readinessProbe)
Pod State
Pod State
• In this lecture I’ll walk you through the different statuses and states of a
Pod and Container:
• I’ll then show you the lifecycle of a pod in the next lecture
Pod State
• Pods have a status field, which you see when you do kubectl get pods:
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
dns-controller-7cc97fb976-4b9nt 1/1 Running 0 4h
etcd-server-events-ip-172-20-38-169.eu-west-1.compute.internal 1/1 Running 0 4h
etcd-server-ip-172-20-38-169.eu-west-1.compute.internal 1/1 Running 0 4h
kube-apiserver-ip-172-20-38-169.eu-west-1.compute.internal 1/1 Running 0 4h
Pod State
• Other valid statuses are:
• Failed: All containers within this pod have been Terminated, and at
least one container returned a failure code
• The failure code is the exit code of the process when a container
terminates
• A network error might have occurred (for example when the node where the pod is running is down)
Pod State
• You can get the pod conditions using kubectl describe pod PODNAME
$ kubectl describe pod kube-apiserver-ip-172-20-38-169.eu-west-1.compute.internal -n kube-system
[…]
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Pod State
• There are 5 different types of PodConditions:
Pod State
• There’s also a container state:
$ kubectl get pod kube-apiserver-ip-172-20-38-169.eu-west-1.compute.internal -n kube-system -o yaml
[…]
containerStatuses:
- containerID: docker://7399a5ffb84ac91bf64f54c2395ed632736ef284d11b784ec827fd9d0a56083f
image: gcr.io/google_containers/kube-apiserver:v1.9.8
imageID: docker-pullable://gcr.io/google_containers/kube-
apiserver@sha256:79d444e6cb940079285109aaa5f6a97e5c0a5568f6606e003ed279cd90bcf1ca
lastState: {}
name: kube-apiserver
ready: true
restartCount: 0
state:
running:
startedAt: 2018-08-06T08:12:14Z
[…]
Pod Lifecycle
Pod Lifecycle
(Diagram: the pod lifecycle over time. The pod starts with conditions Initialized=False, Ready=False, PodScheduled=True; once initialization completes, Initialized becomes True while Ready stays False. A post start hook runs, and after initialDelaySeconds the readiness probe and liveness probe start; when the readiness probe succeeds, the conditions become Initialized=True, Ready=True, PodScheduled=True. A pre stop hook runs before termination.)
Demo
Pod lifecycle
Secrets
Secrets
• Secrets provide a way in Kubernetes to distribute credentials, keys, passwords, or "secret" data to the pods
• You can also use the same mechanism to provide secrets to your
application
• There are still other ways your container can get its secrets if you don't want to use Secrets (e.g. using an external vault service in your app)
Secrets
• Secrets can be used in the following ways:
• For instance, they can be used for dotenv files, or your app can just read the file
Secrets
• To generate secrets using files:
$ echo -n "root" > ./username.txt
$ echo -n "password" > ./password.txt
$ kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
secret "db-user-pass" created
$
Secrets
• To generate secrets using yaml definitions:
secrets-db-secret.yml:
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: cm9vdA==
  password: cGFzc3dvcmQ=
• The base64 values can be generated on the command line:
$ echo -n "root" | base64
cm9vdA==
$ echo -n "password" | base64
cGFzc3dvcmQ=
• After creating the yml file, you can use kubectl create:
$ kubectl create -f secrets-db-secret.yml
secret "db-secret" created
$
Using secrets
• You can create a pod that exposes the secrets as environment variables
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: username
    - name: SECRET_PASSWORD
[…]
Using secrets
• Alternatively, you can provide the secrets in a file:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
    volumeMounts:
    - name: credvolume
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: credvolume
    secret:
      secretName: db-secrets
The secrets will be stored in:
/etc/creds/db-secrets/username
/etc/creds/db-secrets/password
Demo
Secrets
Demo
Wordpress
Web UI
Web UI
• Kubernetes comes with a Web UI you can use instead of the kubectl
commands
Web UI
• In general, you can access the Kubernetes Web UI at https://<kubernetes-master>/ui
Web UI
• If you are using minikube, you can use the following command to launch
the dashboard:
$ minikube dashboard
Service Discovery
• The DNS service can be used within pods to find other services running on the same cluster
• Multiple containers within 1 pod don't need this service, as they can contact each other directly
• A container in the same pod can connect to the port of the other container directly using localhost:port
DNS
• An example of how app 1 could reach app 2 using DNS:
(Diagram: Pod 1 (IP 10.0.0.1, Service: app1) and Pod 2 (IP 10.0.0.2, Service: app2) resolve each other through kube-dns, which runs in the kube-system namespace.)
$ host app1-service
app1-service has address 10.0.0.1
$ host app2-service
app2-service has address 10.0.0.2
$ host app2-service.default
app2-service.default has address 10.0.0.2
$ host app2-service.default.svc.cluster.local
app2-service.default.svc.cluster.local has address 10.0.0.2
• "default" stands for the default namespace; pods and services can be launched in different namespaces (to logically separate your cluster)
Demo
Service Discovery
ConfigMap
ConfigMap
• Configuration parameters that are not secret can be put in a ConfigMap
• The ConfigMap key-value pairs can then be read by the app using:
• Environment variables
• Using volumes
ConfigMap
• A ConfigMap can also contain full configuration files
• This way you can "inject" configuration settings into containers without changing the container itself
ConfigMap
• To generate a configmap using files:
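A hedged sketch of the command (the directory and file names are assumptions chosen to match the driver key used in the next examples):
$ kubectl create configmap app-config --from-file=configs/
where configs/ is a directory of configuration files (for example a file named driver); individual key-value pairs can also be added with --from-literal=driver=jdbc.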
• You can then create a pod that mounts the ConfigMap as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
The config values will be stored in files:
/etc/config/driver
/etc/config/param/with/hierarchy
Using ConfigMap
• You can create a pod that exposes the ConfigMap as environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000
    env:
    - name: DRIVER
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: driver
    - name: DATABASE
[…]
Demo
ConfigMap
Ingress
Ingress
• Ingress is a solution available since Kubernetes 1.1 that allows inbound
connections to the cluster
• With ingress you can run your own ingress controller (basically a
loadbalancer) within the Kubernetes cluster
• There are default ingress controllers available, or you can write your own ingress controller
Ingress
(Diagram: traffic from the internet arrives on port 80 (http) and port 443 (https) at the ingress controller, which applies ingress rules such as host-x.example.com => pod 1, host-y.example.com => pod 2, host-x.example.com/api/v2 => pod n and forwards the requests to Pod 1 (Application 1) and Pod 2 (Application 2).)
Ingress rules
• You can create ingress rules using the ingress object
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-rules
spec:
  rules:
  - host: helloworld-v1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: helloworld-v1
          servicePort: 80
  - host: helloworld-v2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: helloworld-v2
          servicePort: 80
Demo
Ingress Controller
External DNS
External DNS
• In the previous lecture I explained how to set up an ingress controller
• On public cloud providers, you can use the ingress controller to reduce the cost of your LoadBalancers
• You can use 1 LoadBalancer that captures all the external traffic and sends it to the ingress controller
• The ingress controller can be configured to route the different traffic to all your apps based on HTTP rules (host and prefixes)
External DNS
• One great tool to enable such an approach is External DNS
• This tool will automatically create the necessary DNS records in your external DNS server (like route53)
• For every hostname that you use in ingress, it'll create a new record to send traffic to your loadbalancer
• Other setups are also possible without ingress controllers (for example directly on hostPort - nodePort is still WIP, but will be out soon)
(Diagram: an AWS LoadBalancer forwards all external traffic to the nginx ingress controller service, which routes it to the application pods based on ingress rules such as host-x.example.com => pod 1, host-y.example.com => pod 2, host-x.example.com/api/v2 => pod n; External DNS creates the DNS records for these hostnames.)
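A hedged sketch of how External DNS might be deployed (the image tag, domain, and owner id are placeholders; the flags follow the kubernetes-sigs/external-dns project, and IAM permissions for route53 are needed separately):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.13.5  # image and tag are an assumption
        args:
        - --source=ingress               # watch ingress resources for hostnames
        - --provider=aws                 # create the records in route53
        - --domain-filter=example.com    # placeholder hosted zone
        - --registry=txt
        - --txt-owner-id=my-cluster      # placeholder owner id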
Demo
External DNS
Volumes
Running apps with state
Volumes
• Volumes in kubernetes allow you to store data outside the container
• That’s why up until now I’ve been using stateless apps: apps that don’t
keep a local state, but store their state in an external service
Kubernetes Volumes
• Volumes can be attached using different volume plugins:
Volumes
• If your node stops working, the pod can be rescheduled on another node, and
the volume can be attached to the new node
(Diagram: node 1 runs the myapp pod with an EBS volume attached from AWS EBS Storage; when node 1 fails, the pod is rescheduled on node 2 and the EBS volume is attached to that node instead.)
• Tip: the nodes where your pod is going to run on also need to be in the same availability zone as the volume
Volumes
• To use volumes, you need to create a pod with a volume definition
[…]
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    volumeMounts:
    - mountPath: /myvol
      name: myvolume
    ports:
    - containerPort: 3000
  volumes:
  - name: myvolume
    awsElasticBlockStore:
      volumeID: vol-055681138509322ee
Demo
Using volumes
Volumes
Provisioning
Volumes
• The kubernetes plugins have the capability to provision storage for you
• The AWS Plugin can for instance provision storage for you by creating
the volumes in AWS before attaching them to a node
• This is still in beta when writing this course, but will be stable soon
• I'll also keep my github repository up to date with the latest definitions
Volumes
• To use auto-provisioned volumes, you can create the following yaml file:
storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1
• This will allow you to create volume claims using the aws-ebs provisioner
• Kubernetes will provision volumes of the type gp2 for you (General
Purpose - SSD)
Volumes
• Next, you can create a volume claim and specify the size:
my-volume-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8G
Volumes
• Finally, you can launch a pod using a volume:
my-pod.yml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
Demo
Wordpress with Volumes
Pod Presets
Pod Presets
• Pod presets can inject information into pods at runtime
• Imagine you have 20 applications you want to deploy, and they all need to get a specific credential
• You can edit the 20 specifications and add the credential, or
• You can use presets to create 1 Preset object, which will inject an environment variable or config file to all matching pods
• When injecting Environment variables and VolumeMounts, the Pod Preset will apply the changes to all containers within the pod
Pod Presets
• This is an example of a Pod Preset
apiVersion: settings.k8s.io/v1alpha1 # you might have to change this after PodPresets become stable
kind: PodPreset
metadata:
  name: share-credential
spec:
  selector:
    matchLabels:
      app: myapp
  env:
  - name: MY_SECRET
    value: "123456"
  volumeMounts:
  - mountPath: /share
    name: share-volume
  volumes:
  - name: share-volume
    emptyDir: {}
Pod Presets
• You can use more than one PodPreset, they’ll all be applied to matching
Pods
• If there's a conflict, the PodPreset will not be applied to the pod
• It's possible that no pods are currently matching, but that matching pods will be launched at a later time
Demo
Pod Presets
StatefulSets
Stateful distributed apps on a Kubernetes cluster
StatefulSets
• Pet Sets was a new feature starting from Kubernetes 1.3, and got renamed to
StatefulSets which is stable since Kubernetes 1.9
• Your podname will have a sticky identity, using an index, e.g. podname-0
podname-1 and podname-2 (and when a pod gets rescheduled, it’ll keep that
identity)
• Statefulsets allow stateful apps stable storage with volumes based on their ordinal
number (podname-x)
• Deleting and/or scaling a StatefulSet down will not delete the volumes
associated with the StatefulSet (preserving data)
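A minimal hedged sketch of a StatefulSet (not the course's Cassandra example; the nginx image and sizes are placeholders), showing the headless service, the sticky pod names (web-0, web-1), and the volumeClaimTemplates that give each replica its own volume:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None          # headless service: gives the pods stable DNS names web-0.web, web-1.web, ...
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2              # pods are named web-0 and web-1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:    # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi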
StatefulSets
• A StatefulSet will allow your stateful app to use DNS to find other peers
• If you didn't use a StatefulSet, you would get a dynamic hostname, which you wouldn't be able to use in your configuration files, as the name can always change
StatefulSets
• A StatefulSet will also allow your stateful app to order the startup and
teardown:
• This is useful if you first need to drain the data from a node before it can be shut down
Demo
StatefulSets - Cassandra
Daemon Sets
Daemon Sets
• Daemon Sets ensure that every single node in the Kubernetes cluster
runs the same pod resource
Daemon Sets
• Typical use cases:
• Logging aggregators
• Monitoring
• Running a daemon that only needs one instance per physical instance
Daemon Sets
• This is an example Daemon Set specification:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: monitoring-agent
  labels:
    app: monitoring-agent
spec:
  template:
    metadata:
      labels:
        name: monitor-agent
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - name: nodejs-port
          containerPort: 3000
Resource Usage Monitoring
Resource Usage Monitoring
• Heapster enables Container Cluster Monitoring and Performance Analysis
• I'll use InfluxDB, but others like Google Cloud Monitoring/Logging and Kafka are also possible
• All these technologies (Heapster, InfluxDB, and Grafana) can be started in pods
• https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb
• After downloading the repository, the whole platform can be deployed using the addon system or by using kubectl create -f directory-with-yaml-files/
(Diagram: the kubelet's cAdvisor on every node collects container metrics; Heapster, InfluxDB, and Grafana run as pods on the cluster to aggregate, store, and visualize those metrics.)
• In Kubernetes 1.3 scaling based on CPU usage is possible out of the box
• With alpha support, application based metrics are also available (like
queries per second or average request latency)
• To enable this, the cluster has to be started with the env var
ENABLE_CUSTOM_METRICS to true
Autoscaling
• Autoscaling will periodically query the utilization for the targeted pods
• Autoscaling will use heapster, the monitoring tool, to gather its metrics
and make scaling decisions
Autoscaling
• An example:
• You run a deployment with a pod with a CPU resource request of 200m
Autoscaling
• This is a pod that you can use to test autoscaling:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests:
            cpu: 200m
Autoscaling
• This is an example autoscaling specification:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
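The same autoscaler can also be created imperatively; a hedged equivalent command:
$ kubectl autoscale deployment hpa-example --cpu-percent=50 --min=1 --max=10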
Demo
Autoscaling
Affinity and anti-affinity
Affinity and anti-affinity
• In a previous demo I showed you how to use nodeSelector to make sure pods get scheduled on specific nodes:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
[…]
      nodeSelector:
        hardware: high-spec
Affinity and anti-affinity
• The affinity/anti-affinity feature allows you to do more complex scheduling than the nodeSelector and also works on Pods
• You can create rules that are not hard requirements, but rather a preferred rule, meaning that the scheduler will still be able to schedule your pod, even if the rules cannot be met
• You can create rules that take other pod labels into account
• For example, a rule that makes sure 2 different pods will never be on the same node
• Pod affinity/anti-affinity allows you to create rules for how pods should be scheduled, taking into account other running pods
• I'll first cover node affinity and will then cover pod affinity/anti-affinity
Affinity and anti-affinity
• There are currently 2 types you can use for node affinity:
• 1) requiredDuringSchedulingIgnoredDuringExecution
• 2) preferredDuringSchedulingIgnoredDuringExecution
• The second type will try to enforce the rule, but it will not guarantee it
• Even if the rule is not met, the pod can still be scheduled; it's a soft requirement, a preference (see the sketch below)
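A hedged reconstruction of what such a node affinity section could look like in a pod spec (the hardware label reuses the earlier node labels; the zone key, values, and weight are illustrative assumptions):
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: hardware
            operator: In
            values:
            - high-spec
      preferredDuringSchedulingIgnoredDuringExecution:   # soft preference with a weight
      - weight: 1
        preference:
          matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - eu-west-1a
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo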
Af nity and anti-af nity
• I also supplied a weighting to the
preferredDuringSchedulingIgnoredDuringExecution statement
• The higher this weighting, the more weight is given to that rule
• When scheduling, Kubernetes will score every node by summarizing the weightings
per node
• If only the rule with weight 1 matches, then the score will only be 1
• The node that has the highest total score, that’s where the pod will be scheduled on
• kubernetes.io/hostname
• failure-domain.beta.kubernetes.io/zone
• failure-domain.beta.kubernetes.io/region
• beta.kubernetes.io/instance-type
• beta.kubernetes.io/os
• beta.kubernetes.io/arch
• Similar to node affinity, you have 2 types of pod affinity / anti-affinity:
• requiredDuringSchedulingIgnoredDuringExecution
• preferredDuringSchedulingIgnoredDuringExecution
• The required type creates rules that must be met for the pod to be scheduled; the preferred type is a "soft" type, and the rules may be met
Interpod Affinity and anti-affinity
• A good use case for pod affinity is co-located pods:
• You might want 1 pod to always be co-located on the same node as another pod
• For example, you have an app that uses redis as a cache, and you want to have the redis pod on the same node as the app itself
Interpod Affinity and anti-affinity
• When writing your pod affinity and anti-affinity rules, you need to specify a topology domain, called topologyKey in the rules
• If the affinity rule matches, the new pod will only be scheduled on nodes that have the same topologyKey value as the currently running pod
Interpod Affinity and anti-affinity
(Diagram: a new pod with label app=redis and the podAffinity rule below is scheduled on Node1, because Node1 already runs a pod with label app=myapp and the topologyKey kubernetes.io/hostname restricts co-location to the same node.)
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: "app"
        operator: In
        values:
        - myapp
    topologyKey: "kubernetes.io/hostname"
Interpod Affinity and anti-affinity
(Diagram: with topologyKey failure-domain.beta.kubernetes.io/zone, new pods with label app=db can be scheduled on Node1 or Node2, which are in zone eu-west-1a together with the pod labeled app=myapp, but not on Node3 in zone eu-west-1b.)
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: "app"
        operator: In
        values:
        - myapp
    topologyKey: "failure-domain.beta.kubernetes.io/zone"
Pod Affinity and anti-affinity
• Contrary to affinity, you might want to use pod anti-affinity
• You can use anti-affinity to make sure a pod is only scheduled once on a node
• For example, you have 3 nodes and you want to schedule 2 pods, but they shouldn't be scheduled on the same node
• Pod anti-affinity allows you to create a rule that says to not schedule on the same host if a pod label matches
(Diagram: a new pod with label app=db and the podAntiAffinity rule below will not be scheduled on Node2, which already runs a pod with label app=myapp; it is scheduled on Node1 instead.)
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: "app"
        operator: In
        values:
        - myapp
    topologyKey: "kubernetes.io/hostname"
Interpod Affinity and anti-affinity
• When writing pod affinity rules, you can use the following operators: In, NotIn, Exists, DoesNotExist
• Interpod affinity and anti-affinity require a substantial amount of processing, which can slow down scheduling in large clusters
• You might have to take this into account if you have a lot of rules and a larger cluster (e.g. 100+ nodes)
Interpod affinity
Demo
Pod Anti-Affinity
Demo
Taints and tolerations
Taints and tolerations
• In the previous lectures I explained the following concepts:
• This will make sure that no pods will be scheduled on node1, as long as
they don’t have a matching toleration
• If the taint is applied while there are already running pods, these will not
be evicted, unless the following taint type is used:
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoExecute"
  tolerationSeconds: 3600
• If you don't specify the tolerationSeconds, the toleration will match and the pod will keep running on the node
• In this example, the toleration will only match for 1 hour (3600 seconds); after that the pod will be evicted from the node
• If you have a few nodes with specific hardware (for example GPUs), you can taint them to avoid running non-specific applications on those nodes
• The TaintNodesByCondition feature will automatically taint nodes that have node problems, allowing you to add tolerations to time the eviction of pods from nodes
• It can be enabled with a feature gate on the kubelet:
spec:
  kubelet:
    featureGates:
      TaintNodesByCondition: "true"
• In the next slide I'll show you a few taints that can possibly be added.
Custom Resource Definitions
• Custom Resource Definitions let you extend the Kubernetes API
• Resources are the endpoints in the Kubernetes API that store collections
of API Objects
• For example, there is the built-in Deployment resource, that you can use
to deploy applications
• In the yaml files you describe the object, using the Deployment resource type
• Operators, explained in the next lecture, use these CRDs to extend the
Kubernetes API with their own functionality
Operators
Operators
• An Operator is a method of packaging, deploying, and managing a
Kubernetes Application (definition: https://coreos.com/operators/)
Operators
• Any third party can create operators that you can start using
• In the demo, I'll show you how to start using an Operator for PostgreSQL
• If you'd just deploy a PostgreSQL container, it'd only start the database
• If you're going to use this PostgreSQL operator, it'll allow you to also create replicas, initiate a failover, create backups, and scale
Operators
• Example yaml extract:
apiVersion: cr.client-go.k8s.io/v1
kind: Pgcluster
metadata:
  labels:
    archive: "false"
    archive-timeout: "60"
    crunchy_collect: "false"
    name: mycluster
    pg-cluster: mycluster
    primary: "true"
  name: mycluster
  namespace: default
Postgres-operator
Demo
Cron Job
Scheduling recurring jobs
Cron Jobs
• Still in alpha
Administration
Kubernetes Administration
Master Services
Architecture overview
(Diagram: kubectl communicates with the REST APIs on the master, which also runs authorization, scheduling, and the actuator, and manages node 1 … node n.)
Resource Quotas
Resource Quotas
• When a Kubernetes cluster is used by multiple people or teams,
resource management becomes more important
• You don’t want one person or team taking up all the resources (e.g.
CPU/Memory) of the cluster
• You can divide your cluster in namespaces (explained in next lecture) and
enable resource quotas on it
Resource Quotas
• Each container can specify request capacity and capacity limits
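A hedged sketch of how a container specifies these (the values are illustrative):
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    resources:
      requests:        # the capacity the scheduler reserves for the container
        cpu: 200m
        memory: 128Mi
      limits:          # a hard cap the container cannot exceed
        cpu: 400m
        memory: 512Mi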
Resource Quotas
• Example of resource quotas:
• You run a deployment with a pod with a CPU resource request of 200m
Resource Quotas
• If a capacity quota (e.g. mem / cpu) has been specified by the administrator, then each pod needs to specify capacity quota during creation
• The administrator can specify default request values for pods that don’t
specify any values for capacity
• If a resource is requested more than the allowed capacity, the server API
will give an error 403 FORBIDDEN - and kubectl will show an error
Resource Quotas
• The administrator can set the following resource limits within a namespace:
• requests.cpu: the sum of CPU requests of all pods cannot exceed this value
• requests.mem: the sum of MEM requests of all pods cannot exceed this value
• requests.storage: the sum of storage requests of all persistent volume claims cannot exceed this value
• limits.cpu: the sum of CPU limits of all pods cannot exceed this value
• limits.memory: the sum of MEM limits of all pods cannot exceed this value
• configmaps: the total number of configmaps that can exist in a namespace
• persistentvolumeclaims: the total number of persistent volume claims that can exist in a namespace
Namespaces
• The names of resources need to be unique within a namespace, but not across namespaces
• e.g. the marketing team can only use a maximum of 10 GiB of memory,
2 loadbalancers, 2 CPU cores
Namespaces
• First you need to create a new namespace
$ kubectl create namespace myspace
Namespaces
• You can then create resource limits within that namespace:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: myspace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
Namespaces
• You can also create object limits:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: myspace
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
Demo
Namespace quotas
User Management
User Management
• There are 2 types of users you can create
User Management
• There are multiple authentication strategies for normal users:
• Bearer Tokens
• Authentication Proxy
• OpenID
• Webhooks
User Management
• Service Users are using Service Account Tokens
User Management
• Independently from the authentication mechanism, normal users have the
following attributes:
• a UID
• Groups
User Management
• After a normal user authenticates, it will have access to everything
• AlwaysAllow / AlwaysDeny
User Management
• Authorization is still a work in progress
• This allows admins to dynamically configure permissions through the API
Demo
Adding users
RBAC
Authorization
• After authentication, authorization controls what the user can do, where
does the user have access to
• When an API request comes in (e.g. when you enter kubectl get nodes),
it will be checked to see whether you have access to execute this
command
Authorization
• There are multiple authorization modules available:
Authorization
• RBAC: role based access control
• You can parse the incoming payload (which is JSON) and reply with
access granted or access denied
RBAC
• To enable an authorization mode, you need to pass --authorization-mode= to the API server at startup
• Most tools now provision a cluster with RBAC enabled by default (like
kops and kubeadm)
RBAC
• If you’re using minikube, you can add a parameter when starting
minikube:
• You first describe them in yaml format, then apply them to the cluster
• First you define a role, then you can assign users/groups to that role
• You can create roles limited to a namespace, or you can create roles where the access applies to all namespaces
RBAC Role
• RBAC Role granting read access to pods and secrets within default
namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "secrets"]
  verbs: ["get", "watch", "list"]
RBAC Role
• Next step is to assign users to the newly created role
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
RBAC Role
• If you'd rather create a role that spans all namespaces, you can use a ClusterRole:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-clusterwide
rules:
- apiGroups: [""]
  resources: ["pods", "secrets"]
  verbs: ["get", "watch", "list"]
RBAC Role
• If you need to assign a user to a cluster-wide role, you need to use
ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader-clusterwide
  apiGroup: rbac.authorization.k8s.io
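A hedged sketch of applying these definitions and checking the result (the filenames are assumptions):
$ kubectl create -f pod-reader-role.yml
$ kubectl create -f read-pods-rolebinding.yml
$ kubectl auth can-i get pods --namespace=default --as=bob
yes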
Authorization demo
RBAC demo
Networking
Networking
• The approach to networking is quite different than in a default Docker setup
• Pod-To-Service communication
• External-To-Service communication
Networking
• In Kubernetes, the pod itself should always be routable
Networking
• On AWS: kubenet networking (kops default)
• Every pod can get an IP that is routable using the AWS Virtual Private Cloud (VPC)
• The kubernetes master allocates a /24 subnet to each node (254 IP addresses)
• There is a limit of 50 entries, which means you can’t have more than 50 nodes
in a single AWS cluster
• Although, AWS can raise this limit to 100, but it might have a performance
impact
Networking
• Not every cloud provider has VPC-technology (although GCE and Azure do as well)
• An Overlay Network (like Flannel, shown in the next diagram) can be used instead
Flannel
(Diagram: on node 1, the containers in a pod (IP 10.3.1.2) connect through veth0 and docker0 (10.3.1.1) to the flannel0 interface (10.3.x.x) on the host (192.168.0.2); flannel acts as a network gateway between nodes, encapsulating the packet (src 10.3.1.2, dst 10.3.2.2) in UDP with the host addresses (src 192.168.0.2, dst 192.168.0.3) and delivering it to the pod on node 2 (10.3.2.2) through flannel0 and docker0 (10.3.2.1).)
Node Maintenance
Node Maintenance
• It is the Node Controller that is responsible for managing the Node
objects
Node Maintenance
• When adding a new node, the kubelet will attempt to register itself
• It allows you to easily add more nodes to the cluster without making API
changes yourself
Node Maintenance
• When you want to decommission a node, you want to do it gracefully
• You drain a node before you shut it down or take it out of the cluster
• If the node runs pods not managed by a controller, but is just a single pod:
$ kubectl drain nodename --force
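A hedged sketch of the typical maintenance flow (nodename is a placeholder):
$ kubectl drain nodename       # evict the pods and mark the node unschedulable
$ kubectl uncordon nodename    # make the node schedulable again after maintenance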
Demo
Drain a node
High Availability
High Availability
• If you’re going to run your cluster in production, you’re going to want to
have all your master services in a high availability (HA) setup
• Only one of them will be the leader, the other ones are on stand-by
Architecture overview - HA
(Diagram: multiple masters (Master 1, Master 2, …) each run authorization, the APIs, scheduling/REST, and the actuator; in this picture etcd itself is not yet highly available.)
• If you’re going to use a production cluster on AWS, kops can do the heavy lifting for
you
• If you're running on another cloud platform, have a look at the kube deployment tools for that platform
• kubeadm is a tool that is in alpha that can set up a cluster for you
• In the next demo I'll show you how to modify the kops setup to run multiple master nodes
Demo
HA setup
Federation
Federation
• Federation allows you to manage multiple Kubernetes clusters
Federation
• The Setup
• etcd cluster
• federation-apiserver
• federation-controller-manager
TLS on AWS ELB
• In this lecture I'll go over the possible annotations for the AWS Elastic Load Balancer (ELB)
• service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval
• service.beta.kubernetes.io/aws-load-balancer-access-log-enabled
• service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name
• service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix
These annotations are used to enable access logs on the load balancer
TLS on AWS ELB
• service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: enables cross-AZ loadbalancing
Helm
• To start using helm, you first need to download the helm client
• You need to run "helm init" to initialize helm on the Kubernetes cluster
• After this, helm is ready for use, and you can start installing charts
Helm - charts
• Helm uses a packaging format called charts
Helm - charts
• Charts use templates that are typically developed by a package
maintainer
• You can think of templates as dynamic yaml files, which can contain logic and variables
Helm - charts
• This is an example of a template within a chart:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
myvalue: "Hello World"
drink: {{ .Values.favoriteDrink }}
• The favoriteDrink value can then be overridden by the user when running
helm install
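A hedged sketch using the Helm v2 syntax that the course's "helm init" implies (the chart path and value are placeholders):
$ helm install --name my-release ./mychart --set favoriteDrink=coffee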
Helm charts
demo
Helm
Create your own helm charts
Helm Charts
• You can create helm charts to deploy your own apps
Helm Charts
• To create the files necessary for a new chart, you can enter the command:
$ helm create mychart
• This creates the following structure:
mychart/
  Chart.yaml
  values.yaml
  templates/
    deployment.yaml
    service.yaml
• Chart.yaml contains the chart metadata:
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: mychart
version: 0.1.0
• values.yaml contains the default values (key: value pairs) used by the templates
Demo
helm create NAME
Demo
node-app-demo helm chart
Demo
Create a Chart repository using AWS S3
Demo
Build and Deploy a Chart using Jenkins
Serverless
Functions in Kubernetes
What is Serverless
• Public Cloud providers often provide Serverless capabilities in which you can
deploy functions, rather than instances or containers
• Azure Functions
• AWS Lambda
• With these products, you don’t need to manage the underlying infrastructure
• The functions are also not "always-on", unlike containers and instances, which can greatly reduce the cost of serverless if the function doesn't need to be executed a lot
What is Serverless
• Serverless in public cloud can reduce the complexity, operational costs, and engineering time to get code running
• A developer can "just push" the code and does not have to worry about many operational aspects
• Although “cold-starts”, the time for a function to start after it has not been
invoked for some time, can be an operational issue that needs to be taken
care of
What is Serverless
• This is an example of an AWS Lambda Function:
• For example, in AWS you would use the API Gateway to set up a URL that will invoke this function when visited
Serverless in Kubernetes
• Rather than using containers to start applications on Kubernetes, you can also use Functions
• OpenFaas
• Kubeless
• Fission
• OpenWhisk
• You can install and use any of the projects to let developers launch functions on your Kubernetes
cluster
• As an administrator, you’ll still need to manage the underlying infrastructure, but from a
developer standpoint, he/she will be able to quickly and easily deploy functions on Kubernetes
Serverless in Kubernetes
• All these projects are pretty new (as of September 2018), so their feature
set will still drastically change
• In this course, I’ll demo Kubeless, which is easy to setup and use
Kubeless
Kubeless
• Kubeless is a Kubernetes-native framework (source: https://github.com/
kubeless/kubeless/)
fi
fi
Kubeless
• With kubeless you deploy a function in your preferred language
• Python
• NodeJS
• Ruby
• PHP
• .NET
• Golang
• Others
Kubeless
• Once you've deployed your function, you'll need to determine how it'll be triggered
• HTTP functions
• Scheduled functions
Kubeless
• Once you've deployed your function, you need to determine how it'll be triggered
• AWS Kinesis
Kubeless
demo
Kubeless - PubSub
demo
Kubeless UI
demo
Microservices
Microservices
• Kubernetes makes it easy to deploy a lot of diverse applications
Microservices - Monoliths
(Diagram: a client reaches App 1 and App 2 through an AWS Load Balancer and an Ingress on the Kubernetes cluster; each app has its own database.)
Ingress
(Diagram: with only an Ingress in front of App 1 … App 4, service-to-service traffic has:)
• No encryption
• No retries, no failover
• No intelligent loadbalancing
• No routing decisions
• No metrics/logs/traces
• No access control
• …
Microservices
(Diagram: each app (App 1 … App 4) gets a sidecar proxy, all managed through a management interface; the sidecars add:)
• Routing decisions
• Metrics/logs/traces
• Access control
Istio
(Diagram: in Istio, each pod (App 1, App 2) runs an Envoy proxy sidecar next to the application container.)
Istio Install
demo
Istio enabled apps
demo
hello world app
(Diagram: a client (curl) reaches the hello world app through the istio-ingress.)
Traffic Routing
demo
hello world app - v2
(Diagram: the hello and world services each run in their own pod.)
Canary deployments
demo
hello world app - v2
(Diagram: the Envoy proxies route a weighted share of the traffic (10%) to v2 of the hello world app.)
Retries
demo
hello world app - v3
(Diagram: in v3 of the hello world app, a pod behind the istio-ingress responds with 5s latency, used to demonstrate retries.)
Security
Mutual TLS
Security
• The goals of Istio security are (source: https://istio.io/docs/concepts/security/#authentication)
Security
• Istio provides two types of authentication:
Security
• Mutual TLS in Istio:
• Attacks like impersonation by rerouting DNS records will fail, because a fake application can't prove its identity using the certificate mechanism
(Diagram: with mutual TLS, the Envoy proxies of the hello and world services authenticate and encrypt the traffic between them, so a party without a valid certificate cannot read or impersonate the services.)
Mutual TLS
demo
Security
RBAC with mutual TLS
RBAC
• Now that we’re using Mutual TLS, we have strong identities
• Based on those identities, we can start to doing Role Based Access Control
(RBAC)
• RBAC allows us to limit access between our services, and from end-user to
service
• Istio is able to verify the identity of a service by checking the identity of the x.509 certificate (which comes with enabling mutual TLS)
• Good to know: the identities capability in Istio is built using the SPIFFE standard (Secure Production Identity Framework For Everyone, another CNCF project)
RBAC
• RBAC in istio (source: https://istio.io/docs/concepts/security/)
RBAC
• RBAC is not enabled by default, so we have to enable it
• For example, in the demo, we'll only enable it for the "default" namespace
• We can then create a ServiceRole that specifies the rules and a ServiceRoleBinding to link a ServiceRole to a subject (for example a Kubernetes ServiceAccount)
apiVersion: "rbac.istio.io/v1alpha1"
kind: RbacConfig
metadata:
  name: default
spec:
  mode: 'ON_WITH_INCLUSION'
  inclusion:
    namespaces: ["default"]
RBAC in istio
demo
Security
End-user authentication
End-user authentication
• Up until now we’ve talked about service-to-service communication
• After having strong identities using the x.509 certificates that mutual TLS provides, I showed you how to use role based access control (RBAC)
End-user authentication
• JWT stands for JSON Web Token
• It’s an open standard for representing claims securely between two parties (see https://jwt.io/ for more information)
• In our implementation, we’ll receive a JWT token from an authentication server after logging in (still our hello world app)
• The data is not encrypted, but the token contains a signature, which can be verified to see whether it was really created by the server
• Only the server has the (private) key, so we can’t recreate or tamper with the token
End-user authentication
• This is an example of a token:
• eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
End-user authentication
• You can use jwt.io to decode the token:
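For reference, decoding the example token above (the well-known sample token from jwt.io) gives this header and payload:
Header:  {"alg": "HS256", "typ": "JWT"}
Payload: {"sub": "1234567890", "name": "John Doe", "iat": 1516239022}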
End-user authentication
• In webapps using authentication, the server can issue a JWT token when the user is authenticated
• In the JWT payload, data can be stored, like the username, groups, etc.
• This can then be used later by the app, when the user sends new requests
• If the signature in the token is valid, then the JWT is valid, and the data within the token can be used
End-user authentication
• With microservices, every app would have to be configured separately to validate the JWT
• Once validated, the service would need to check whether the user has access to this service (authorization)
• With Istio, this can be taken away from the app code and managed centrally
• You can configure the JWT token signature/properties you expect in Istio, and create policies to allow/disallow access to a service
• For example: the “hello” app can only be accessed if the user is authenticated
• The sidecar will verify the validity of the signature, to make sure the token is valid
[Diagram: the client logs in via /login on the hello-auth Pod behind the istio-ingress and receives a JWT token; requests to the hello Pod then carry the token, and the Envoy proxy retrieves jwks.json to validate it]
Policy (fragment):
- jwt:
    issuer: "[email protected]"
    jwksUri: "http://auth.kubernetes.newtech.academy/.well-known/jwks.json"
principalBinding: USE_ORIGIN
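A fuller version of that fragment, using the authentication.istio.io/v1alpha1 Policy API that was current when this course was recorded, could look roughly like the sketch below; the policy name and the “hello” target are assumptions:
apiVersion: "authentication.istio.io/v1alpha1"
kind: Policy
metadata:
  name: hello-jwt                 # hypothetical name
  namespace: default
spec:
  targets:
  - name: hello                   # only apply to the hello service
  origins:
  - jwt:
      issuer: "[email protected]"
      jwksUri: "http://auth.kubernetes.newtech.academy/.well-known/jwks.json"
  principalBinding: USE_ORIGIN    # use the end-user (JWT) identity as the principal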
end-user authentication
demo
Egress traffic
demo
Egress traffic
[Diagram: the hello Pod’s Envoy proxy inside the Kubernetes cluster blocks traffic to an external Service by default]
Distributed tracing
demo
Distributed tracing
[Diagram: requests entering through the istio-ingress are traced across the services in the Kubernetes cluster]
Admission Controllers
• An admission controller intercepts requests to the Kubernetes API server before the object is persisted
• For example, when you create a new pod, a request will be sent to the Kubernetes API server, and this can be intercepted by an admission controller
Admission Controllers
• Admission controllers can be enabled by the administrator
kube-apiserver --enable-admission-plugins=NamespaceLifecycle,...
• When using kops it can be configured using yaml (see the sketch below), or with minikube by passing an argument to minikube start
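For kops, the relevant part of the cluster spec could look roughly like the sketch below (field names follow the kops cluster spec; the plugin list is just an example, verify against your kops version):
# kops edit cluster <cluster-name>
spec:
  kubeAPIServer:
    enableAdmissionPlugins:
    - NamespaceLifecycle
    - LimitRanger
    - PodSecurityPolicy
# With minikube, an equivalent would be:
#   minikube start --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy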
Admission Controllers
• LimitRanger: using the “LimitRange” object type, you set the default and limit cpu/memory resources within a namespace; the LimitRanger admission controller will ensure these defaults and limits are applied
• NodeRestriction: makes sure that kubelets (which run on every node) can only modify their own Node/Pod objects (objects that run on that specific node)
• MutatingAdmissionWebhook: you can set up a webhook that can modify the object being sent to the kube-apiserver; the MutatingAdmissionWebhook ensures that matching objects will be sent to this webhook for modification
• ValidatingAdmissionWebhook: you can set up a webhook that can validate the objects being sent to the kube-apiserver; if the ValidatingAdmissionWebhook rejects the request, the request fails
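To illustrate the LimitRanger entry, a LimitRange object for a namespace could look like this sketch (the name and the resource values are arbitrary examples):
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits            # hypothetical name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:               # request applied when a container specifies none
      cpu: 100m
      memory: 128Mi
    default:                      # limit applied when a container specifies none
      cpu: 200m
      memory: 256Mi
    max:                          # upper bound a container may ask for
      cpu: "1"
      memory: 512Mi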
[Diagram: a request to the API server passes through the HTTP handler, authentication/authorization, mutating admission (including mutating webhooks), object schema validation, and validating admission (including validating webhooks) before the object is persisted to etcd]
• It can modify or accept/reject objects that are being created within Kubernetes
• One typical use case is to inject a sidecar into every pod that is being launched on a cluster:
• Run a sidecar proxy where all the traffic flows through first
• Use the sidecar to inject variables using shared storage, or ConfigMap/Secrets
Webhooks
• In the next demo I’ll show you how to configure a MutatingWebhook (a sketch of the configuration object follows below)
• Once set up, when we launch a pod in a specific namespace, the pod metadata will be altered by the MutatingWebhook to add a label
• In our example we’re adding a label, but we could really change anything, like injecting an init container or a sidecar
• The webhook that does the changes to the metadata is just another container (using a deployment object)
• The container exposes a REST API that responds to a POST request on a specific endpoint
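To give an idea of the wiring, a MutatingWebhookConfiguration for such a label-adding webhook could look roughly like the sketch below; the names, namespace, path, and label selector are hypothetical and not the exact values used in the demo:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: add-label-webhook                   # hypothetical name
webhooks:
- name: add-label.example.com               # hypothetical webhook name
  clientConfig:
    service:
      name: webhook-service                 # the Service in front of the webhook deployment
      namespace: mutatingwebhook
      path: /mutate                         # the POST endpoint exposed by the container
    caBundle: "<base64-encoded CA bundle>"
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  namespaceSelector:                        # only mutate pods in labelled namespaces
    matchLabels:
      mutatingwebhook: enabled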
Admission Controllers
[Diagram: the kube-apiserver, directed by a cluster-level MutatingWebhookConfiguration, sends an AdmissionReview request to the webhook Service (backed by a webhook deployment pod in the mutatingwebhook namespace); the AdmissionReview response is used to schedule the pod in the testmutatingwebhook namespace with a patched specification]
Demo
Mutatingwebhook
Pod Security Policies
Pod Security Policies
• Pod Security Policies enable you to control the security aspects of pod creation & updates:
• For example:
• Make sure containers only run within a UID / GID range, or make sure that containers can’t run as root
• It’ll determine whether the pod meets the pod security policy based on the security context defined within the pod specification
Pod Security Policies
• The Pod Security Policy admission controller is currently not enabled by
default (Kubernetes 1.16) - it probably will be in the future
• In the next demo I’ll show you how to enable it and create PodSecurityPolicies to implement some extra security controls for new pods that are created:
• One for the system processes, because some of them need to run privileged / as root
• One for the pods users want to schedule, which should be tighter than the system policy (for example, deny privileged pods); a sketch of such a policy follows below
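As a sketch of what the tighter user-facing policy could look like, using the policy/v1beta1 API available around Kubernetes 1.16 (PodSecurityPolicy was later deprecated and removed in 1.25, so verify against your cluster version; the name and exact rules are examples):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted                  # hypothetical name
spec:
  privileged: false                 # deny privileged pods
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot          # containers may not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                          # only allow a safe set of volume types
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim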
Skaffold
• With this workflow, Skaffold can monitor your application for changes while you are developing it
• Skaffold can also be used as a tool that can be incorporated into your CI/CD pipeline, as it has the build/push/deploy workflow built-in
• That way you can have the workflow locally to test your app, and have it handled by your CI/CD in the same way (a sketch of a skaffold.yaml follows below)
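A minimal skaffold.yaml describing that build/push/deploy workflow could look roughly like the sketch below; the schema version, image name, and manifest path are assumptions to adapt to your project:
apiVersion: skaffold/v2beta26       # pick the schema version your skaffold binary supports
kind: Config
build:
  artifacts:
  - image: myrepo/hello             # hypothetical image built from the local Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml                    # hypothetical path to your Kubernetes manifests
Running “skaffold dev” would then watch the source, rebuild the image, and redeploy on every change.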
Skaffold
• Skaffold is very pluggable:
• Builders other than Docker are also possible, like Bazel, buildpacks, or custom builds
[Diagram fragment: the Skaffold dev loop includes file sync and cleanup of images & resources]
etcd
• etcd is a distributed and reliable key-value store for the most critical data of a distributed system
• It’s meant to be simple, and it has a well-defined, user-facing API (gRPC)
• It’s secure, with automatic TLS and optional client certificate authorization
Source: https://github.com/etcd-io/etcd
etcd
• All Kubernetes objects that you create are persisted to the etcd backend
(typically running inside your cluster)
• If you have a 1-master Kops cluster or a minikube setup, you’ll typically have a
1-node etcd cluster
• The latency between your nodes should be low, as heartbeats are sent
between nodes
• If you have a cluster spanning multiple DCs you’ll need to tune your
heartbeat timeout in the etcd cluster
etcd
• A write to etcd can only be performed by the leader, which is elected by an election algorithm (part of the Raft algorithm)
• If a write goes to one of the other etcd nodes, the write will be routed through the leader (each node also knows who the leader node is)
• etcd will only persist the write if a quorum agrees on the write (for example, at least 2 out of 3 nodes in a 3-node cluster)
etcd
• All Kubernetes object data is stored within the etcd cluster, so you’ll want
to have a backup of this data when running a production cluster
• etcd supports snapshots to take a backup of your etcd cluster, which can store all data into a snapshot file
Raft consensus
Raft consensus algorithm
Backup & Restore in kops
Demo
Congratulations
AWS EKS
AWS EKS
• AWS EKS (Amazon Elastic Kubernetes Service) is a fully managed Kubernetes service
• Unlike kops, EKS will fully manage your master nodes (which includes the apiserver and etcd)
• You pay a fee for every cluster you spin up (to pay for the master nodes) and then you pay per EC2 worker that you attach to your cluster
• It’s a great alternative to kops if you want a fully managed cluster and don’t want to deal with the master nodes yourself
• Depending on your cluster setup, EKS might be more expensive than running a kops cluster, so you might still opt to use kops for cost reasons
AWS EKS
• EKS is a popular AWS service and supports lots of handy features:
• AWS created its own VPC CNI (Container Networking Interface) for EKS
• AWS can even manage your workers to ensure updates are applied to
your workers
• Service Accounts can be tied to IAM roles to use IAM roles on a pod-level
• Integrates with many other AWS services (like CloudWatch for logging)
AWS EKS
• There is a command line tool, eksctl, available to manage EKS clusters, which I’ll use in the demos
• You can find the documentation at eksctl.io, where you will also find the download instructions
• You can also pass a yaml-based configuration file if you want to set your own configuration, like VPC subnets (otherwise it’ll create a VPC and subnets for you); a sketch follows below
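A minimal eksctl configuration file could look like the sketch below (the cluster name, region, and node group settings are example values); you would pass it with “eksctl create cluster -f cluster.yaml”:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster              # hypothetical cluster name
  region: eu-west-1               # example region
nodeGroups:
- name: ng-1
  instanceType: t3.medium
  desiredCapacity: 2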
AWS EKS
Demo
IAM Roles for Service Accounts
IRSA
• EKS supports IAM Roles for Service Accounts (or IRSA)
• With this feature you can specify IAM policies at a pod level
• For example: one specific pod in the cluster can access an S3 bucket, but others cannot
• Previously, IAM policies would have to be set up on the worker level (using EC2 instance roles)
• IAM Roles for Service Accounts lets you hand out permissions at a more granular level
• One major caveat: the app running in the container must use a recent version of the AWS SDK to be able to work with these credentials
IRSA
• IAM Roles for Service Accounts uses the IAM OpenID Connect provider (OIDC) that
EKS exposes
• To link an IAM Role with a Service Account, you need to add an annotation to the
Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
• The EKS Pod Identity Webhook will then automatically inject environment variables into pods that have this ServiceAccount assigned (AWS_ROLE_ARN & AWS_WEB_IDENTITY_TOKEN_FILE)
• These environment variables will be picked up by the AWS SDK during authentication
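A pod then just references the annotated ServiceAccount; a minimal sketch (the pod name, ServiceAccount name, and image are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: s3-reader                    # hypothetical pod name
spec:
  serviceAccountName: my-irsa-sa     # the ServiceAccount carrying the eks.amazonaws.com/role-arn annotation
  containers:
  - name: app
    image: myrepo/s3-app             # hypothetical image that uses a recent AWS SDK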
IRSA
[Diagram: the ServiceAccount is annotated with the IAM role; a mutating admission webhook on the kube-apiserver injects the credential configuration into the Pod; the AWS SDK in the Pod assumes the IAM role to access an AWS service, e.g. S3]
Flux
• Flux can synchronise your version control (git) and your Kubernetes cluster
• With Flux, you can put manifest files (your Kubernetes yaml files) within your git repository
• Flux will monitor this repository and make sure that what’s in the manifest files is deployed to the cluster
• Flux also has interesting features where it can automatically upgrade your containers to the latest version available within your docker repository (it uses semantic versioning for that, e.g. “~1.0.0”); see the sketch below
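With Flux v1 (which matches the semver-based automation described above), that behaviour is driven by annotations on the workload; a sketch, where the deployment name, image, and semver range are examples:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello                              # hypothetical deployment
  annotations:
    fluxcd.io/automated: "true"            # let Flux bump the image automatically
    fluxcd.io/tag.hello: semver:~1.0       # only follow 1.0.x tags of the "hello" container
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: myrepo/hello:1.0.0          # hypothetical image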
Flux
• Flux has joined the CNCF - Cloud Native Computing Foundation
• You declaratively describe the entire desired state of your system in git
Flux