Docker TA 2
Docker uses a layered architecture to build and manage container images efficiently. Each
layer corresponds to an instruction in the Dockerfile, forming a stack of read-only image
layers.
- Base Image Layer: the bottom layer, usually a minimal OS or language runtime image
  (like Ubuntu or Python).
- Intermediate Layers: created by Dockerfile instructions like RUN, COPY, ADD.
- Top Layer (Writable Layer): when a container runs, a writable layer is added on top to
  allow changes without modifying the read-only image layers below.
This architecture uses a Union File System to merge these layers into a single view, supporting:
- Efficient storage via layer reusability
- Faster builds due to layer caching
- Clear separation of concerns (base vs app-specific)
Dockerfile:
FROM python:3.9             # Use official Python 3.9 image as base
WORKDIR /app                # Set working directory in the container
COPY . .                    # Copy all local files to the container's /app directory
CMD ["python", "read.py"]   # Run the Python script when the container starts
Python file (read.py):
with open("sample.txt", "r") as file:
    print(file.read())
Explanation:
- FROM python:3.9: pulls a Python image with version 3.9.
- WORKDIR /app: sets the working directory to /app.
- COPY . .: copies all project files into the image.
- CMD [...]: defines the default command to run the script.
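To see the layered architecture in practice, the image can be built and its layers inspected. A minimal sketch, assuming the Dockerfile above sits in the current directory; the tag read-app is a placeholder name:
docker build -t read-app .    # each Dockerfile instruction produces an image layer
docker history read-app       # lists the layers and the instruction that created each one
docker build -t read-app .    # rebuilding without changes reuses cached layers (faster build)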
3) Define Docker Runtime and Docker Engine. How do they work together in the
container lifecycle?
Docker Engine:
- Core Docker platform for building, running, and managing containers.
- Consists of:
  - Docker Daemon (dockerd): runs in the background, manages images/containers.
  - Docker CLI: command-line tool to interact with Docker.
  - Docker REST API: allows remote interaction.
Docker Runtime:
- The component that actually runs the containers.
- Default runtime: runc (complies with OCI standards).
- Executes the container process in isolated environments using Linux features
  (namespaces, cgroups).
How They Work Together (traced with example commands below):
- The user runs a command: docker run myimage
- The Docker CLI sends the request to the Docker Daemon (via the REST API).
- Docker Engine:
  - Pulls the image (if not available locally).
  - Sets up the environment (network, storage).
- Docker Runtime (runc):
  - Launches the actual container.
  - Manages its lifecycle (start, stop, delete).
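A minimal sketch of that flow from the CLI, assuming an image named myimage exists locally (myimage and the container name demo are placeholders):
docker run -d --name demo myimage   # CLI -> REST API -> dockerd -> runtime (runc) starts the container
docker ps                           # dockerd reports the running containers
docker stop demo                    # dockerd asks the runtime to stop the container process
docker rm demo                      # removes the stopped container
docker info | grep -i runtime       # shows the configured runtimes (e.g., runc)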
4) Design a Kubernetes deployment YAML file that launches a pod running an NGINX
container with 3 replicas and exposes it via a NodePort service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
Explanation:
- Deployment:
  - replicas: 3: runs 3 pods.
  - nginx:latest: uses the latest NGINX image.
  - containerPort: 80: exposes the port inside the container.
- Service:
  - type: NodePort: exposes the app on a static port (30080) on all worker nodes.
  - targetPort: 80: forwards traffic to the container's port.
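To deploy, both manifests can be saved to one file and applied. A sketch assuming the file is named nginx.yaml and kubectl points at a running cluster:
kubectl apply -f nginx.yaml          # creates the Deployment and the Service
kubectl get pods -l app=nginx        # should list 3 running pods
kubectl get svc nginx-service        # shows the NodePort mapping (80:30080)
curl http://<node-ip>:30080          # reach NGINX through any worker node's IP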
5) Kubernetes has a Master-Worker architecture. The Control Plane runs core
components to manage the cluster, and Worker Nodes run containerized applications.
Control Plane Components:
1. API Server (brain of Kubernetes):
   - Acts as the frontend for the Kubernetes control plane.
   - Accepts kubectl commands (REST API), validates them, and updates etcd.
   - All communication (from users or internal components) goes through the API server.
2. etcd (cluster database):
   - A distributed key-value store that holds the entire state of the Kubernetes cluster.
   - Stores information like nodes, pods, config, secrets, and service discovery data.
   - Ensures consistency across the system.
3. Controller Manager:
   - Watches for changes in cluster state (via the API Server).
   - Ensures the desired state matches the current state by taking necessary actions.
   - Includes multiple controllers, such as the ReplicationController, NodeController,
     and EndpointsController.
4. Scheduler:
   - Assigns pods to available nodes based on:
     - Resource availability (CPU, memory)
     - Affinity/anti-affinity rules
     - Node taints/tolerations
   - Ensures efficient use of cluster resources.
Worker Node Components:
5. Kubelet:
   - Agent on each node.
   - Communicates with the API server.
   - Ensures that the containers described in the PodSpecs are running and healthy.
   - Reports node and pod status back to the control plane.
6. Kube Proxy:
   - Manages networking for pods.
   - Implements network rules to allow communication to and from pods and services.
   - Handles load balancing for services (like ClusterIP, NodePort).
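On a running cluster these components can be observed with kubectl. A sketch assuming a kubeadm-style cluster, where the control-plane components run as pods in the kube-system namespace; <node-name> is a placeholder:
kubectl get nodes -o wide          # lists control-plane and worker nodes
kubectl get pods -n kube-system    # shows kube-apiserver, etcd, scheduler, controller-manager, kube-proxy
kubectl describe node <node-name>  # shows kubelet status, capacity, and the pods on that node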
6) Defend the pre-requisites for getting up and running with Docker.
To start using Docker effectively, several pre-requisites must be fulfilled, both in terms of
system requirements and software dependencies:
1. Supported Operating System
2. Hardware Requirements
3. Virtualization Support
4. Docker Installation
5. Basic Networking and Linux Knowledge
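A quick way to check and satisfy these pre-requisites on a Linux host; a minimal sketch (the get.docker.com convenience script is only one of several supported install methods):
grep -Ec '(vmx|svm)' /proc/cpuinfo        # non-zero output means hardware virtualization is available
curl -fsSL https://get.docker.com | sh    # install Docker Engine via the convenience script
docker --version                          # confirm the installation
docker run hello-world                    # verify the daemon can pull and run containers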
7) Why is manual container management insufficient at scale, and how does
Kubernetes solve these problems?
Why Manual Container Management Fails at Scale:
- Lack of Automation: manually starting, stopping, and updating containers is
  time-consuming, and it is difficult to maintain uptime and consistency.
- No Self-Healing: if a container crashes, it won't restart automatically.
- Scaling is Hard: manually increasing containers during high traffic is inefficient.
- Poor Load Balancing: requires manual setup to distribute traffic across containers.
- Configuration Drift: managing multiple containers across environments leads to
  inconsistencies.
How Kubernetes Solves These Problems:
- Automated Deployment & Scaling: use Deployments to scale containers up/down
  automatically.
- Self-Healing: the kubelet ensures containers are always running and restarts failed
  containers automatically.
- Load Balancing: Services automatically distribute traffic to healthy pods.
- Declarative Configuration: use YAML files to define the desired state of the system.
- Rolling Updates & Rollbacks: Kubernetes supports zero-downtime deployments with the
  ability to roll back if needed (see the example commands below).
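As an illustration, the manual steps above collapse into single commands. A sketch assuming the nginx-deployment from question 4 is already running; <pod-name> is a placeholder:
kubectl scale deployment nginx-deployment --replicas=10   # scale out for high traffic
kubectl rollout undo deployment/nginx-deployment          # roll back a bad update
kubectl delete pod <pod-name>                             # the Deployment recreates it automatically (self-healing)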
8)
Pod:
- Smallest deployable unit in Kubernetes.
- Can hold one or more containers.
- Containers in a pod share the same storage, network, and namespace.
- Example: a pod running a single NGINX container (a sample manifest follows below).
Deployment:
- Manages ReplicaSets and ensures the desired number of pods are running.
- Handles rolling updates and rollbacks.
- Automatically scales pods based on demand.
- Example: deploy 3 replicas of a web application with the nginx:latest image.
Service:
- Exposes a set of pods to the network.
- Provides load balancing across multiple pods.
- Types: ClusterIP, NodePort, LoadBalancer.
- Example: a NodePort service exposing the NGINX deployment on port 30000.
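A minimal Pod manifest for the single-NGINX example; a sketch (the name nginx-pod is a placeholder):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80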
Docker Lifecycle:
- Create – define a Docker image using a Dockerfile
- Build – create an image from the Dockerfile
- Run – launch a container from the image
- Stop/Restart – manage the container state (pause, stop, start)
- Destroy – remove unused containers/images to free resources
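The lifecycle maps directly onto CLI commands; a sketch assuming a Dockerfile in the current directory and the image tag myapp (a placeholder):
docker build -t myapp .              # Build: create an image from the Dockerfile
docker run -d --name app myapp       # Run: launch a container from the image
docker stop app                      # Stop the running container
docker start app                     # Restart it
docker rm app && docker rmi myapp    # Destroy: remove the container and image to free resources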
Container Orchestration – Kubernetes (K8s) is an open-source platform to manage
containerized workloads and services. It supports declarative configuration and
automation.
Key Features:
- Self-Healing: restarts failed containers, replaces unresponsive nodes
- Auto-Scaling: dynamically adjusts the number of running containers
- Load Balancing: automatically distributes network traffic
- Rolling Updates: gradually update apps without downtime
- Secrets & Configs: secure handling of sensitive data
Core Concepts:
- Pod: smallest deployable unit, holds one or more containers
- Node: machine (VM or physical) running pods
- Cluster: set of nodes managed by a Kubernetes control plane
- Deployment: describes the desired state of pods and updates them automatically
- Service: abstract way to expose a set of pods as a network service
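A rolling update in practice; a sketch assuming the nginx-deployment from question 4 and that nginx:1.25 is the new image tag to roll out:
kubectl set image deployment/nginx-deployment nginx=nginx:1.25   # start a rolling update
kubectl rollout status deployment/nginx-deployment               # watch pods being replaced gradually
kubectl rollout history deployment/nginx-deployment              # list revisions available for rollback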
---Container Security and Monitoring ====Security: + Least privilege: Don’t run as root
Read-only filesystem + Seccomp, AppArmor profiles + Trusted registries Image
scanning tools: Trivy, Clair, Snyk + Use Kubernetes RBAC and NetworkPolicies
===Monitoring:+ Prometheus: Metrics collection + Grafana: Visualization + ELK Stack
(Elasticsearch, Logstash, Kibana): Log aggregation + Jaeger: Distributed tracing + Use
kubectl top, kubectl logs, and liveness/readiness probes
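Several of these practices can be expressed directly in a pod spec. A sketch; the pod name, image myapp:1.0, and the /healthz and /ready endpoints are placeholders, and the image is assumed not to require root:
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  containers:
  - name: app
    image: myapp:1.0
    securityContext:
      runAsNonRoot: true              # least privilege: refuse to start as root
      readOnlyRootFilesystem: true    # read-only filesystem
      allowPrivilegeEscalation: false
    livenessProbe:                    # kubelet restarts the container if this check fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:                   # traffic is sent only while this check passes
      httpGet:
        path: /ready
        port: 8080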
Kubernetes Cluster Setup and Configuration
Tools:
- kubectl: command-line client for Kubernetes
- kubeadm: bootstraps Kubernetes clusters
- minikube: lightweight cluster for local development
Cluster Setup Steps (sketched below):
1. Install Docker and the Kubernetes tools.
2. Run kubeadm init (on the master node).
3. Join worker nodes via kubeadm join.
4. Apply a network plugin (e.g., Calico, Flannel).
5. Deploy workloads.
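A condensed sketch of those steps on the master node; the CIDR, token, hash, and manifest file name are placeholders, and kubeadm init prints the exact join command to use:
kubeadm init --pod-network-cidr=192.168.0.0/16    # bootstrap the control plane
mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config   # give kubectl access
kubectl apply -f calico.yaml                      # apply a network plugin manifest (e.g., Calico)
# on each worker node, run the join command printed by kubeadm init:
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>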
3. Container Deployment and Management
Deployment Options:
- Docker CLI – used for single-host environments.
- Docker Compose – for managing multi-container local setups.
- Kubernetes – suitable for production-grade container orchestration.
Management Tasks:
- Start/stop containers: docker start <container> / docker stop <container>
- View container logs: docker logs <container>
- Monitor resource usage: docker stats
- Define health checks: use the HEALTHCHECK instruction in the Dockerfile (example below).
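A health check sketch for a web container, assuming the application answers on port 80 and curl is available in the image:
FROM nginx:latest
# assumes curl is present in the image; add a RUN step to install it if it is not
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1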
1. Creating a Container Image from Scratch
Creating a container without any base image (e.g., Ubuntu, Alpine) using FROM scratch.
Use Cases:
- Ideal for minimal, secure containers.
- Common for statically compiled binaries (e.g., Go, Rust).
- Reduces the attack surface by removing unnecessary packages.
Steps:
- Statically compile the application (e.g., using Go).
- Create a Dockerfile starting with FROM scratch.
- COPY the compiled binary into the image.
- Define the CMD or ENTRYPOINT to run the app.
Example Dockerfile:
FROM scratch
COPY myapp /myapp
CMD ["/myapp"]
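Building and running it; a sketch assuming a Go source file main.go is compiled into the static binary myapp, and scratch-app is a placeholder tag:
CGO_ENABLED=0 GOOS=linux go build -o myapp main.go   # static binary with no libc dependency
docker build -t scratch-app .                        # the image contains only the binary
docker run --rm scratch-app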
5. Container Networking and Storage
Networking:
- Bridge Network (default): containers get isolated IPs and can communicate through
  Docker's bridge.
- Host Network: the container shares the host's networking stack; used for performance or
  raw socket access.
- Overlay Network: used in multi-host scenarios like Swarm or Kubernetes.
- Macvlan: assigns a MAC address to containers.
- Port Mapping: docker run -p 8080:80 nginx
Storage:
- Volumes: managed by Docker; stored in /var/lib/docker/volumes/.
- Bind Mounts: direct access to the host filesystem; used in development.
- tmpfs: RAM-backed; used for sensitive data like secrets.
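Common commands for these options; a sketch in which mynet, mydata, and web are placeholder names:
docker network create mynet                            # user-defined bridge network
docker run -d --network mynet --name web nginx         # attach a container to it
docker run -d --network host nginx                     # host networking (Linux only)
docker volume create mydata                            # Docker-managed volume
docker run -d -v mydata:/usr/share/nginx/html nginx    # mount the volume
docker run -d -v "$(pwd)":/app nginx                   # bind mount the current directory
docker run -d --tmpfs /run/secrets nginx               # RAM-backed tmpfs mount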