Docker & Kubernetes
Docker
1. To show running containers: docker container ls
2. When you use Docker Desktop, containers are not created in the host OS; they are created
inside a Linux VM.
3. The docker run command goes through 3 states (preparing - downloading the image, starting, running)
4. To stop container: docker container stop containerId
5. -p 8080:80 - maps TCP port 80 in the container to port 8080 on the Docker host
6. -p 192.168.1.100:8080:80 - maps TCP port 80 in the container to port 8080 on the
Docker host for connections to host IP 192.168.1.100
7. docker container run --publish 80:80 --detach nginx (the left number is the host port and can
be anything you want => localhost:80; the right number is the container port)
8. docker run --name mongo -d mongo
9. To show all created containers (each time you run an image, a new container is created,
even for the same image): docker container ls -a
10. To show logs in container by appname: docker container logs appname
11. To show logs in container by servicename: docker service logs servicename
12. Container vs Service:
● A Docker Container is a runtime instance of a Docker Image
● A Docker Service can run one Docker Image across multiple containers (tasks)
13. Bind mounts have been around since the early days of Docker. Bind mounts have limited
functionality compared to volumes. When you use a bind mount, a file or directory on the
host machine is mounted into a container. The file or directory is referenced by its absolute
path on the host machine. By contrast, when you use a volume, a new directory is created
within Docker’s storage directory on the host machine, and Docker manages that directory’s
contents. The file or directory does not need to exist on the Docker host already. It is
created on demand if it does not yet exist. Bind mounts are very performant, but they rely on
the host machine’s filesystem having a specific directory structure available. If you are
developing new Docker applications, consider using named volumes instead. You can’t use
Docker CLI commands to directly manage bind mounts.
14. To add a bind mount to a container: docker run -d -it --name devtest
--mount type=bind,source="$(pwd)"/target,target=/app nginx:latest
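For reference, the same bind mount can be declared in a compose file with the long volume
syntax; a hedged sketch (the service name and paths are illustrative):

services:
  devtest:
    image: nginx:latest
    volumes:
      - type: bind          # bind mount: uses a host path directly
        source: ./target    # path on the host, relative to the compose file
        target: /app        # path inside the container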
Docker Compose
1. To set up volumes/networks and start all containers: docker-compose up
2. To stop all containers and remove containers and networks (add -v to also remove volumes): docker-compose down
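A minimal docker-compose.yml that docker-compose up would bring up; a hedged sketch (the
service name, image, port, and volume name are illustrative):

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"                          # host port 8080 -> container port 80
    volumes:
      - web-data:/usr/share/nginx/html     # named volume created and managed by Docker
volumes:
  web-data: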
Routing Mesh
1. Routes ingress (incoming) packets for a service to the proper task.
2. Spans all nodes in swarm
3. Load balances Swarm services across their tasks
4. This is stateless load balancing and operates at OSI layer 4 (TCP), not layer 7 (e.g. HTTP)
5. If you need stateful (e.g. sticky-session) or layer 7 load balancing, you can put a proxy such as Nginx in front of the routing mesh
6. Two ways this works:
○ Container-to-container traffic in an overlay network. For example: a backend service
such as a database has two replicas. The frontend service does not talk to the
database replicas by their individual IP addresses; it talks to a VIP (a virtual,
private IP inside the virtual network) that Swarm puts in front of the service, and
the VIP distributes the requests among the service's tasks.
○ External traffic coming in to published ports (all nodes listen on them). A load
balancer runs on each node; every incoming request first hits a node, and that node's
load balancer forwards it to a task of the service, whether that task runs on the
same node or on another one.
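To make the above concrete, a hedged stack-file sketch of a service whose published port and
replicas go through the routing mesh (the service name, image, and ports are illustrative):

version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"        # published port: every node in the swarm listens on 8080
    deploy:
      replicas: 3        # the routing mesh load balances across these 3 tasks via a VIP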
Swarm Stacks
1. docker stack deploy rather than docker service create
2. Controls services, volumes, secrets, networks
3. Cannot use the build option (images must be pre-built and pushed to a registry)
4. Compose ignores deploy, swarm ignores build
5. docker-compose up - for development env and integration testing
6. docker stack deploy - for prod env
7. docker stack deploy -c "yml file" app_name (-c stands for --compose-file)
8. docker stack services app_name - shows the services with their replica counts
9. docker stack ps app_name - shows on which nodes the service tasks are running
10. To update a stack, just update the file and then run the deploy command again
11. If there are a docker-compose.yml and a docker-compose.override.yml, then when you run
docker-compose up, the base file is loaded first and the override file is applied on top of it.
12. To run docker-compose.test.yml for CI integration testing, write:
docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d
13. To produce the merged configuration for production from docker-compose.prod.yml, write:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config (config prints the merged file instead of running it)
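A hedged sketch of how a base file and an override file merge when you run docker-compose up
(the contents are illustrative; list options such as ports and volumes are concatenated, while
single-value options are replaced by the override):

# docker-compose.yml (base)
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"

# docker-compose.override.yml (applied on top for local development)
services:
  web:
    environment:
      - DEBUG=1                          # value added/overridden for development
    volumes:
      - ./site:/usr/share/nginx/html     # bind mount used only in development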
Swarm Secrets
1. Swarm Raft DB is encrypted on disk by default
2. Only stored on disk on manager nodes
3. Secrets are first stored in Swarm then assigned to service
4. Only containers in assigned services can see secrets
5. They look like files in the container but actually live on an in-memory filesystem
6. docker secret create secret_name secret_file.txt
7. docker service create --name psql --secret psql_user --secret psql_pass -e
POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass -e
POSTGRES_USER_FILE=/run/secrets/psql_user postgres
8. docker service update --secret-rm secret_name (removes the secret and redeploys the container)
9. external: true in a compose file means you have to create the secret with the CLI tool
beforehand, not from a file, because keeping the secret value in a file is not secure
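A hedged stack-file sketch showing both ways to declare secrets (the names and file are
illustrative):

version: "3.8"
services:
  psql:
    image: postgres
    secrets:
      - psql_user
      - psql_pass
    environment:
      POSTGRES_USER_FILE: /run/secrets/psql_user
      POSTGRES_PASSWORD_FILE: /run/secrets/psql_pass
secrets:
  psql_user:
    file: ./psql_user.txt    # read from a local file (convenient, but the value sits on disk)
  psql_pass:
    external: true           # must already exist, e.g. created with: docker secret create psql_pass -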
Docker Healthcheck
1. The Docker engine execs the command inside the container (e.g. curl localhost)
2. It expects exit 0 (OK) or exit 1 (Error)
3. Three container health states: starting (first 30 seconds by default), healthy, unhealthy
4. Healthcheck status shows up in docker container ls
5. Check last 5 health checks with docker container inspect
6. Stack services will replace tasks if they fail the health check
7. Stack service updates wait for the health check before continuing
8. docker container run --name p2 -d --health-cmd="pg_isready -U postgres || exit 1"
postgres - creates a single container with a health check status (it stays in the starting
state for a while before turning healthy, because the check runs on an interval)
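The same check can be declared in a compose/stack file; a hedged sketch (the service name and
intervals are illustrative):

services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres || exit 1"]
      interval: 30s       # how often to run the check
      timeout: 10s        # how long to wait before the check counts as failed
      retries: 3          # consecutive failures needed to mark the container unhealthy
      start_period: 30s   # grace period while the container is starting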
Kubernetes
1. Popular container orchestrator
2. Runs on top of a container runtime as a set of APIs in containers. Historically that runtime
was Docker, but it can also run on container runtimes that are not Docker
3. Many clouds provide it to you with a GUI
4. Many vendors make a distribution of it (their own Kubernetes, like a Linux distribution),
e.g. Rancher, and decide which Kubernetes version they ship
5. Servers + rate of change = benefit of orchestration
6. Swarm advantages:
● Swarm is easy to manage/deploy. But kubernetes has more features and
flexibility
● Swarm follows the 80/20 rule: roughly 20% of Kubernetes' features covering 80% of
the use cases for running apps
● Swarm runs anywhere docker does
● secure by default
● easier to troubleshoot
7. Kubernetes advantages:
● Clouds will manage and deploy kubernetes for you
● Infrastructure vendors are making their own distributions
● Widest adoption and community
● Flexible: covers widest set of use cases
8. Kubectl - CLI to configure kubernetes and manage apps
9. Node - single server in the kubernetes cluster
10. Kubelet - the Kubernetes agent running on each node. In Swarm there is no separate agent
because orchestration is built into the Docker engine
11. Kube-Proxy - controls the networking. Runs alongside the kubelet on each node
12. Control Plane - the set of containers that manage the cluster. Includes the API server,
scheduler, controller manager, CoreDNS, and etcd (a distributed key-value store, playing a
similar role to the Raft store in Swarm). It is sometimes called the master
13. Scheduler - decides how and where pods (containers) are placed
14. Controller Manager - watches the state of the cluster and acts to bring it to the desired state
15. Pod - one or more containers running together on one Node. Basic unit of deployment is
a pod. Containers are always in pods. You always deploy pods, not containers in
kubernetes.
16. Deployment - a specialized term in the context of Kubernetes. It doesn't necessarily refer
to the act of deploying applications or services; rather, a Deployment is an object (usually
written as a YAML file) that defines the desired state and characteristics of a set of pods.
Pods are lower level than ReplicaSets, which are lower level than Deployments.
17. Controller - for creating / updating pods and other objects. It has many types:
Deployment, ReplicaSet, StatefulSet, DaemonSet, Job, CronJob, etc.
18. Service - unlike in Swarm, it has nothing to do with deploying pods. It is a stable network
endpoint given to a set of pods. Everything else can access that set of pods using the
Service's DNS name and port.
19. Namespace - filtered group of objects in cluster. It is just like a filter, not a security
feature.
20. There are three ways to create pods from the kubectl CLI:
● kubectl run - being narrowed to single pod creation only, like docker run
● kubectl create - creates some resources via CLI or YAML, like docker create
● kubectl apply - creates/updates anything via YAML, like docker stack deploy
21. The fundamental question is how to get all of your K8s objects into the cluster.
There are several ways to do this job:
● Using generators (run, expose)
● Using the imperative way (create)
● Using the declarative way (apply)
- Each of these has a different purpose and level of simplicity. For instance, if you want
to quickly check whether a container works as desired, you might use generators.
- If you want to version control the K8s objects, it is better to use the declarative way,
which helps keep the objects' definitions accurate and reviewable (see the Deployment
sketch after the diagram below).
[Diagram: a ReplicaSet managing two Pods, each with a volume and a NIC, running an Nginx
container, on K8s over Docker]
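A minimal Deployment manifest matching the diagram above and the declarative kubectl apply
workflow; this is a hedged sketch (the name, labels, and replica count are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                  # the Deployment's ReplicaSet keeps two Nginx Pods running
  selector:
    matchLabels:
      app: nginx               # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest

Applied declaratively with: kubectl apply -f nginx-deployment.yml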
Annotations:
5. Labels are very simple, limited in size, and meant to identify/describe a resource; they are
not meant to hold complex, large, or non-identifying info, which is what annotations are for.
Label Selectors:
6. The "glue" telling Services and Deployments which pods are theirs, creating the link
between them.
7. Many resources use Label Selectors to link resource dependencies.
8. You will see these match up in the Service and Deployment YAML (as in the sketch below).
9. If no pod with the matching label is deployed, the Service has no endpoints, so requests to
it will not reach anything (the YAML itself still applies).
10. Use Labels and Selectors to control which pods go to which nodes.
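A hedged sketch of a Service whose selector matches the pod template labels of the Deployment
above (the name and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx          # the "glue": routes traffic to pods labeled app=nginx
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 80    # port the pods listen on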
Kubernetes Storage
1. Generally, containers in Kubernetes (as in any orchestrator) are stateless by default.
2. StatefulSets is a resource type that makes Pods more "sticky", used for persistent (stateful) workloads.
Volumes in Kubernetes:
1. There are 2 types of volumes: Volumes and PersistentVolumes
2. Volumes are tied to the lifecycle of a Pod. All containers in a single Pod can share them.
3. PersistentVolumes are created at the cluster level and outlive a Pod. They separate the
storage config from the Pod using it. Multiple pods can share them.
4. There is often a separate team responsible for configuring 3rd-party storage.
5. CSI plugins are the new way to connect to storage. In the past, 3rd-party storage drivers
from vendors such as AWS and Azure came built into the Kubernetes binary.
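A hedged sketch of a PersistentVolumeClaim and a Pod that mounts it (the names, size, and
access mode are illustrative; a StorageClass or pre-provisioned PersistentVolume must exist to
satisfy the claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi         # requested size, fulfilled by a PersistentVolume
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: nginx:latest
      volumeMounts:
        - name: data
          mountPath: /data       # where the claimed storage appears inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc      # binds this pod volume to the claim above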
Ingress
1. None of our Service types work at OSI layer 7 (HTTP).
2. Ingress Controllers (optional) are not built into Kubernetes; this is done with 3rd-party
proxies (e.g. Nginx, Traefik)
3. Ingress Controllers are not load balancers.
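A hedged sketch of a minimal Ingress resource routing HTTP (layer 7) traffic by host to the
Service above (the hostname, name, and ingress class are illustrative; an Ingress Controller
must be installed for it to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx            # which installed Ingress Controller handles this resource
  rules:
    - host: example.local            # layer 7 routing by hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service  # the Service defined earlier
                port:
                  number: 80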