
DEVOPS SHACK

50 Complex Kubernetes Scenario-Based Q&A

1. Scenario: Zero-Downtime Deployment for Multiple Services

Question: How do you ensure a zero-downtime deployment for multiple services in a production
environment?

Answer: Achieving zero-downtime deployments requires careful use of rolling updates, readiness
probes, and traffic routing. Here’s how you can implement it:

1. Rolling Update: Use rolling updates with a small increment in replicas to ensure that only a
portion of the pods are updated at a time, reducing the risk of downtime:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-service-deployment
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
2. Readiness Probes: Define readiness probes to ensure that the new pod version is ready to
serve traffic before it’s added to the load balancer:

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5

3. Traffic Routing: Use an Ingress or Istio Gateway to route traffic based on header/cookie
values for canary testing:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: multi-service-route
spec:
  hosts:
  - multi-service.example.com
  http:
  - route:
    - destination:
        host: multi-service
        subset: v2
      weight: 10
    - destination:
        host: multi-service
        subset: v1
      weight: 90

2. Scenario: Blue-Green Deployment with Rollback Option

Question: How do you implement a blue-green deployment strategy with an easy rollback option?

Answer: In a blue-green deployment, two identical environments (blue and green) are maintained.
Traffic is shifted between them without downtime, allowing easy rollbacks.

1. Create Two Deployments (blue and green): Define two separate deployments for the blue
and green environments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp-blue
  template:
    metadata:
      labels:
        app: myapp-blue
    spec:
      containers:
      - name: myapp-container
        image: myapp:v1

2. Update the Ingress: Modify the Ingress (or Istio Gateway) to point to the green deployment
when ready:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-green
            port:
              number: 80

3. Rolling Back: If the green deployment has issues, switch back to blue by modifying the
Ingress to point to myapp-blue again. This ensures that users always hit a stable
environment.
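One quick way to flip traffic back is to patch the Ingress backend in place. The patch below is a minimal sketch that assumes the myapp-ingress resource shown above:

kubectl patch ingress myapp-ingress --type=json \
  -p='[{"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/service/name", "value": "myapp-blue"}]'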

3. Scenario: Custom Metrics for Autoscaling

Question: How do you autoscale pods based on custom application metrics, such as requests per
second (RPS)?

Answer: To autoscale based on custom metrics, integrate a custom metrics API (such as Prometheus
Adapter) with the Horizontal Pod Autoscaler (HPA).

1. Expose Custom Metrics: Use a monitoring tool like Prometheus to export custom metrics
(e.g., RPS). Example metric:

http_requests_total{job="myapp"}

2. Create a Custom Metrics API Adapter: Deploy a custom metrics API adapter (e.g., k8s-prometheus-adapter). This adapter translates Prometheus metrics into a format Kubernetes understands.
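One common way to deploy the adapter is via its Helm chart; the prometheus-community repository and release name below are assumptions rather than part of the original answer:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus-adapter prometheus-community/prometheus-adapter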

3. Create an HPA for Custom Metrics: Define an HPA to autoscale based on the custom RPS
metric:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_total
      target:
        type: AverageValue
        averageValue: "100"

This setup ensures that Kubernetes scales the number of pods based on application RPS.

4. Scenario: Multi-Cluster Kubernetes Management

Question: How do you manage workloads in multiple Kubernetes clusters efficiently?

Answer: Managing workloads across multiple clusters requires the use of multi-cluster tools such as
KubeFed (Kubernetes Federation) or Rancher.

1. Install KubeFed: Install KubeFed to manage multiple clusters as a single entity:

kubectl apply -f https://github.com/kubernetes-sigs/kubefed/releases/download/v0.1.0/kubefed.yaml

2. Federated Resources: With KubeFed, you can create federated deployments that are
automatically deployed to all member clusters:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: myapp
spec:
  template:
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp-container
            image: myapp:v1

3. Rancher for Multi-Cluster Management: Alternatively, use Rancher to manage clusters through a centralized dashboard. Rancher integrates with CI/CD tools and provides RBAC controls across multiple clusters.

5. Scenario: Handling Node Failures

Question: What happens when a node fails, and how do you recover pods on that node?

Answer: Kubernetes automatically reschedules pods on healthy nodes when a node becomes
unavailable.

1. Pod Eviction: If a node fails, the Node Controller marks it as NotReady, and after the default eviction timeout of about 5 minutes the pods on it are marked as "Terminating" or "Unknown" and Kubernetes reschedules them on healthy nodes.

2. Faster Eviction with Tolerations: By default, pods tolerate the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints for 300 seconds before being evicted. Lower tolerationSeconds in the pod spec if you want faster eviction, as sketched below.
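A minimal pod-spec fragment that shortens the eviction delay; the 30-second value is illustrative:

tolerations:
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 30
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 30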

3. Pod Disruption Budgets: Use a PodDisruptionBudget (PDB) to ensure that critical workloads maintain minimum availability during voluntary disruptions such as node drains and upgrades:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 80%
  selector:
    matchLabels:
      app: myapp

6. Scenario: Handling Kubernetes Version Upgrades

Question: How do you safely upgrade a Kubernetes cluster without affecting running workloads?

Answer: The safest approach to upgrading a Kubernetes cluster involves the following steps:

1. Back Up Cluster Data: Use tools like etcd-backup or Velero to back up your etcd database
and resources:

velero backup create cluster-backup --include-cluster-resources

2. Drain Nodes: Drain and upgrade one node at a time:

kubectl drain <node-name> --ignore-daemonsets

3. Upgrade Control Plane: Upgrade the control plane first. If you are using a managed
Kubernetes service (like GKE, EKS, or AKS), this is usually automated. For self-hosted clusters,
you can use kubeadm:

kubeadm upgrade plan

kubeadm upgrade apply v1.21.1

4. Upgrade Worker Nodes: Once the control plane is upgraded, upgrade worker nodes. First,
drain a node, upgrade it, and uncordon it:

kubectl uncordon <node-name>
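A typical per-node sequence for a kubeadm-managed worker looks like the following; the package-upgrade step varies by operating system and is shown only as a comment:

kubectl drain <node-name> --ignore-daemonsets
# upgrade the kubeadm and kubelet packages on the node, then:
kubeadm upgrade node
sudo systemctl restart kubelet
kubectl uncordon <node-name>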

5. Monitor Cluster Health: Ensure that all nodes are running the upgraded version and that
workloads are stable by monitoring pod and node statuses.
7. Scenario: Logging and Monitoring in Kubernetes

Question: How do you centralize logs and monitor applications in a Kubernetes cluster?

Answer: Use the EFK (Elasticsearch, Fluentd, Kibana) stack or Prometheus + Grafana for centralized
logging and monitoring.

1. Centralized Logging with EFK:

o Fluentd collects logs from all nodes and forwards them to Elasticsearch.

o Kibana provides a UI to view and query logs.

2. Deploy Fluentd:

kubectl apply -f https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml

3. Monitoring with Prometheus: Use Prometheus to collect metrics from Kubernetes components (e.g., API server, nodes, etcd) and application metrics.

helm install prometheus stable/prometheus

4. Visualize Metrics with Grafana: Grafana integrates with Prometheus to visualize real-time
cluster metrics like CPU, memory, and request rates.

helm install grafana stable/grafana

8. Scenario: Network Policies for Pod Security

Question: How do you restrict network traffic between pods in Kubernetes?

Answer: Use NetworkPolicies to control pod-to-pod communication based on labels and namespaces.

1. Allow Traffic Only From Certain Pods: Define a NetworkPolicy that allows traffic only from
pods labeled as frontend to a backend service:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend

2. Default Deny Policy: To deny all ingress traffic except for explicitly allowed rules, create a
default deny policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress

9. Scenario: Resource Quotas in Multi-Tenant Clusters

Question: How do you enforce resource limits in a multi-tenant cluster?

Answer: Use ResourceQuota to limit the amount of CPU, memory, or storage that each namespace
can use.

1. Create a ResourceQuota:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 64Gi
    limits.cpu: "20"
    limits.memory: 128Gi

2. Enforce Quotas for Pods and PVCs: You can also apply quotas to the number of pods,
services, and persistent volume claims (PVCs):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-pvc-quota
spec:
  hard:
    pods: "100"
    persistentvolumeclaims: "20"

10. Scenario: Handling Cluster Autoscaling

Question: How do you automatically scale the number of nodes in a cluster based on resource
demands?

Answer: Use Cluster Autoscaler to dynamically add or remove nodes based on pending pod resource
requests.

1. Install Cluster Autoscaler: For a cloud provider (e.g., GKE), use the provider’s CLI to enable
autoscaling:

gcloud container clusters update my-cluster --enable-autoscaling --min-nodes=3 --max-nodes=10

For custom clusters, deploy the Cluster Autoscaler:

kubectl apply -f https://github.com/kubernetes/autoscaler/releases/download/cluster-autoscaler-1.21.0/cluster-autoscaler-autodiscover.yaml

2. Autoscaler Configuration: The autoscaler monitors pods' resource requests and increases or
decreases the node count accordingly.

11. Scenario: Securing Sensitive Data with Kubernetes Secrets

Question: How do you securely handle sensitive data such as API keys or passwords in Kubernetes?

Answer: Use Secrets to store sensitive data, and ensure it is encrypted at rest.

1. Create a Secret:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=   # Base64 encoded
  password: cGFzc3dvcmQ=

2. Use Secret in a Pod: Refer to the secret in the pod's environment variables:

env:
- name: DB_USERNAME
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: username
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: password

3. Encrypt Secrets at Rest: In Kubernetes, enable encryption at rest for secrets by setting up an
EncryptionConfiguration.
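A minimal EncryptionConfiguration sketch; the key name and base64 key value are placeholders, and the API server must be started with --encryption-provider-config pointing at this file:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <BASE64_ENCODED_32_BYTE_KEY>
  - identity: {}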

12. Scenario: Pod Priority and Preemption

Question: How do you ensure that critical workloads are not starved of resources in a Kubernetes
cluster?

Answer: Use Pod Priority and Preemption to ensure that high-priority pods can evict lower-priority
pods when resources are constrained.

1. Create a PriorityClass:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
preemptionPolicy: PreemptLowerPriority

2. Assign Priority to Pods: Assign the priority to critical pods:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-workload
spec:
  template:
    spec:
      priorityClassName: high-priority
      containers:
      - name: critical-container
        image: myapp:latest

Pods with higher priorities will preempt (evict) lower-priority pods if resources are insufficient.

13. Scenario: Continuous Deployment with GitOps

Question: How do you implement a GitOps workflow for continuous deployment in Kubernetes?

Answer: Use ArgoCD or Flux for GitOps-based continuous deployment, where the Git repository
serves as the single source of truth for cluster configuration.

1. Install ArgoCD:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

2. Configure Application in ArgoCD: Define an application in ArgoCD that syncs your Kubernetes manifests from a Git repository:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  source:
    repoURL: https://github.com/example/repo.git
    targetRevision: HEAD
    path: manifests

ArgoCD will automatically sync and apply changes whenever commits are pushed to the Git
repository.

14. Scenario: Kubernetes Multi-Tenancy with Namespaces

Question: How do you implement multi-tenancy in Kubernetes using namespaces?

Answer: Use Namespaces to isolate tenants within a shared Kubernetes cluster. Additionally, apply
RBAC (Role-Based Access Control) and NetworkPolicies to further enforce isolation.

1. Create Tenant-Specific Namespaces:

kubectl create namespace tenant-a

kubectl create namespace tenant-b

2. Create RBAC for Namespaces: Define roles and role bindings to limit access to specific
namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: tenant-a
  name: tenant-admin
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "create", "delete"]

3. Use NetworkPolicies for Isolation: Prevent pods in one namespace from communicating
with another:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-other-namespaces
  namespace: tenant-a
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant-a

15. Scenario: Implementing Service Mesh for Microservices

Question: How do you implement a service mesh for managing microservices communication in
Kubernetes?

Answer: Use Istio to implement a service mesh. Istio provides traffic management, security, and
observability for microservices.

1. Install Istio:

istioctl install --set profile=demo -y

2. Enable Automatic Sidecar Injection: Label your namespace to inject Istio’s Envoy sidecar
automatically:

kubectl label namespace default istio-injection=enabled

3. Configure Traffic Routing: Use VirtualService and DestinationRule resources to manage traffic between microservices:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.example.com
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10

4. Visualize Traffic with Kiali: Kiali provides a visual representation of traffic between services:

istioctl dashboard kiali

16. Scenario: Kubernetes Pod Security Policies (PSP)

Question: How do you enforce security policies to control pod permissions in Kubernetes?

Answer: Use Pod Security Policies (PSP) to restrict the actions that pods can perform (e.g., disallow
privileged containers).

1. Enable Pod Security Policies: Pod Security Policies must be enabled on the API server by
adding the following flags:

--enable-admission-plugins=PodSecurityPolicy

2. Define a Pod Security Policy: Create a PSP that restricts privilege escalation and limits
container capabilities:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: MustRunAs

3. Apply PSP with RBAC: Use RBAC to assign the PSP to specific roles:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: restricted-psp-role
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: my-namespace
roleRef:
  kind: Role
  name: restricted-psp
  apiGroup: rbac.authorization.k8s.io

17. Scenario: Managing Multiple Environments in Kubernetes

Question: How do you manage multiple environments (e.g., dev, staging, prod) in a single
Kubernetes cluster?

Answer: Use Namespaces to separate environments and define environment-specific configurations using ConfigMaps and Secrets.

1. Create Namespaces for Each Environment:

kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace prod

2. Use Environment-Specific ConfigMaps and Secrets: Define environment-specific settings using ConfigMaps:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: dev
data:
  DATABASE_URL: "postgres://dev-db"

3. Apply Environment-Specific RBAC: Ensure developers have access only to the appropriate
environment using RoleBindings:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-access
  namespace: dev
subjects:
- kind: User
  name: developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-role
  apiGroup: rbac.authorization.k8s.io

18. Scenario: Immutable Container Images in Kubernetes

Question: How do you ensure that containers running in Kubernetes are immutable and that no
changes are made post-deployment?

Answer: To ensure immutability, follow these practices:

1. Use Read-Only Root Filesystem: Set the readOnlyRootFilesystem field to true in your pod’s
security context:

securityContext:
  readOnlyRootFilesystem: true

2. Use Image SHA Digest for Deployments: Reference the immutable image by its SHA digest
instead of using a tag like latest:

containers:
- name: my-container
  image: myapp@sha256:dbbc1c5ff...

3. Prevent Privileged Escalation: Use Pod Security Policies or container runtime policies to
prevent privilege escalation:

securityContext:
  allowPrivilegeEscalation: false
19. Scenario: Pod Eviction Policies in Overcommitted Clusters

Question: How do you ensure critical pods are not evicted when a node is under resource pressure?

Answer: Use Pod Priority and Preemption to ensure that critical workloads are not evicted during
resource contention.

1. Assign Priority Classes: Create a PriorityClass for critical pods:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000

2. Assign Critical Workloads to High Priority: Assign the priority to the pod's specification:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-service
spec:
  template:
    spec:
      priorityClassName: high-priority

3. Preempt Low-Priority Pods: Pods with higher priorities will preempt lower-priority pods if
resources become constrained, ensuring that critical pods continue running.

20. Scenario: Kubernetes Cluster Expansion with Helm

Question: How do you scale Kubernetes resources using Helm?

Answer: Use Helm to package and deploy Kubernetes manifests. Helm supports versioning and
templating, making it easy to scale resources.

1. Install Helm Chart: Install a Helm chart with scalable resource configuration:

helm install myapp ./myapp-chart --set replicaCount=3

2. Modify Values for Scaling: Change the replicaCount in the values.yaml file:

replicaCount: 5

3. Upgrade Helm Release: Apply the scaling changes using helm upgrade:
helm upgrade myapp ./myapp-chart

Helm will manage the deployment, ensuring that the desired number of replicas is created.

21. Scenario: Helm Hooks for Pre/Post-Deployment Actions

Question: How do you execute custom commands or scripts before or after a Helm deployment?

Answer: Use Helm hooks to run custom actions before or after a Helm release. Helm hooks can trigger jobs, config updates, or custom resource creations during specific lifecycle events (e.g., pre-install, post-install).

1. Define a Helm Hook: You can define hooks in your Helm templates:

apiVersion: batch/v1

kind: Job

metadata:

name: my-pre-install-job

annotations:

"helm.sh/hook": "pre-install"

spec:

template:

spec:

containers:

- name: pre-install

image: busybox

command: ['sh', '-c', 'echo "Running pre-install job"']

2. Trigger the Hook: When you run the Helm install command, the pre-install job runs before
the actual resources are applied.
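For example, a normal install triggers the hook automatically (the chart name and path are illustrative):

helm install myapp ./myapp-chart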

22. Scenario: Implementing Multi-Tenancy with RBAC and PSP

Question: How do you implement multi-tenancy in Kubernetes using RBAC and Pod Security Policies
(PSP)?

Answer: For multi-tenancy, you can use Namespaces, RBAC, and Pod Security Policies to isolate
different tenants and control access.

1. Namespace Isolation: Each tenant should have their own namespace:


kubectl create namespace tenant1

2. RBAC for Access Control: Use RoleBindings to grant users access only to their tenant's
namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant1-admin
  namespace: tenant1
subjects:
- kind: User
  name: tenant1-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: admin
  apiGroup: rbac.authorization.k8s.io

3. Pod Security Policies: Use Pod Security Policies (PSP) to restrict which types of pods can be
deployed in each tenant's namespace:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot

23. Scenario: Scaling Stateful Workloads

Question: How do you scale a stateful application (e.g., a database) in Kubernetes while maintaining
data consistency?

Answer: Use StatefulSets to scale stateful applications like databases. StatefulSets provide stable
network identities and persistent storage for each pod.
1. Deploy a StatefulSet: Define a StatefulSet for a database (e.g., Cassandra):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: "cassandra"
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:latest
        ports:
        - containerPort: 9042
        volumeMounts:
        - name: cassandra-data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

2. Scaling a StatefulSet: Scale the StatefulSet by increasing the number of replicas:

kubectl scale statefulset cassandra --replicas=5

StatefulSets ensure that each pod gets a unique network identity and persistent volume.

24. Scenario: Canary Deployment with Istio and Kubernetes

Question: How do you implement a canary deployment strategy in Kubernetes using Istio?

Answer: Use Istio VirtualServices to route a small percentage of traffic to a new version of your
service while keeping most traffic on the stable version.

1. Define Two Service Versions: Deploy two versions of the service (v1 and v2):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  template:
    spec:
      containers:
      - name: myapp-container
        image: myapp:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  template:
    spec:
      containers:
      - name: myapp-container
        image: myapp:v2

2. Create an Istio VirtualService: Define an Istio VirtualService to split traffic between the two
versions:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.example.com
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10

3. Gradually Shift Traffic: As you gain confidence in v2, increase the traffic percentage to the
new version by adjusting the weights.

25. Scenario: Persistent Volumes in a Multi-Zone Kubernetes Cluster

Question: How do you ensure that persistent storage works correctly across multiple zones in a
Kubernetes cluster?

Answer: Use StorageClasses with zone-aware scheduling and dynamic provisioning to ensure that
persistent volumes are created in the same zone as the pods.

1. Create a StorageClass with Zone-Awareness: Use a cloud provider’s storage class that
supports zone-awareness (e.g., for AWS EBS):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: "us-east-1a,us-east-1b"

2. Define a PersistentVolumeClaim: Create a PVC that uses the zone-aware StorageClass:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-zone-aware
spec:
  storageClassName: fast-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

3. Ensure Pods and PVCs Are Scheduled in the Same Zone: Kubernetes will automatically
schedule the pod in the same zone as the dynamically provisioned volume, ensuring optimal
performance and availability.

26. Scenario: Kubernetes Backup and Disaster Recovery

Question: How do you back up and restore a Kubernetes cluster in case of disaster recovery?

Answer: Use tools like Velero to back up cluster resources and persistent volumes.

1. Install Velero: Install Velero to manage backups:

velero install \
  --provider aws \
  --bucket <BUCKET_NAME> \
  --backup-location-config region=<REGION> \
  --use-restic

2. Create a Backup: Backup all cluster resources and volumes:


velero backup create my-cluster-backup --include-cluster-resources

3. Restore from Backup: Restore the cluster from the backup:

velero restore create --from-backup my-cluster-backup

Velero can back up entire namespaces, workloads, and persistent volumes, making disaster recovery
easier.

27. Scenario: CI/CD Pipeline Integration with Kubernetes

Question: How do you integrate a CI/CD pipeline with Kubernetes to automatically deploy code
changes?

Answer: Use Jenkins or GitLab CI with Kubernetes plugins to automate deployments on code
changes.

1. Configure Jenkins with Kubernetes Plugin: Install the Kubernetes plugin in Jenkins and
configure it to deploy to your cluster:

kubectl apply -f jenkins-deployment.yaml

2. Define a Jenkins Pipeline: Create a pipeline script that builds the code, builds the container
image, and deploys the image to Kubernetes:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp:v1 .'
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl apply -f deployment.yaml'
            }
        }
    }
}

3. Automate Deployments: Set up webhooks from your Git repository to trigger the pipeline on
code changes.
28. Scenario: Enforcing Resource Limits on Pods

Question: How do you enforce strict CPU and memory limits on pods to prevent resource
exhaustion?

Answer: Use resource requests and limits to enforce CPU and memory constraints on pods.

1. Define CPU and Memory Requests and Limits: Set requests and limits in your pod spec to
ensure that no pod exceeds its resource allocation:

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

2. Resource Quotas: Use ResourceQuotas to enforce limits at the namespace level to prevent
one tenant from consuming too many cluster resources:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-mem-quota
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "10"
    requests.memory: "20Gi"
    limits.cpu: "20"
    limits.memory: "40Gi"

29. Scenario: Horizontal Pod Autoscaling with Custom Metrics

Question: How do you scale Kubernetes pods horizontally based on custom metrics, such as requests
per second (RPS)?

Answer: Use the Horizontal Pod Autoscaler (HPA) with custom metrics to scale based on application-
specific metrics.
1. Expose Custom Metrics: Use Prometheus to expose custom metrics (e.g., RPS):

http_requests_total{job="myapp"}

2. Install Custom Metrics Adapter: Install a custom metrics adapter (e.g., Prometheus Adapter)
to expose metrics to Kubernetes.

3. Configure HPA with Custom Metrics: Define an HPA that scales based on the custom RPS
metric:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_total
      target:
        type: AverageValue
        averageValue: "100"

Kubernetes will now scale the number of pods based on the number of HTTP requests per second.

30. Scenario: Monitoring and Alerting for Kubernetes Workloads

Question: How do you monitor Kubernetes workloads and set up alerts based on application
performance?

Answer: Use Prometheus and Grafana to monitor Kubernetes metrics and set up alerts based on
custom thresholds.
1. Install Prometheus: Deploy Prometheus in your Kubernetes cluster:

helm install prometheus stable/prometheus

2. Configure Grafana Dashboards: Use Grafana to visualize the metrics collected by Prometheus. Import dashboards for Kubernetes metrics such as CPU, memory usage, and pod health.

3. Set Up Alerts in Prometheus: Define alert rules to trigger notifications based on certain
conditions (e.g., high memory usage):

groups:
- name: example-alert
  rules:
  - alert: HighMemoryUsage
    expr: node_memory_MemAvailable_bytes < 500000000
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Node memory is running low"

4. Integrate with Alertmanager: Use Alertmanager to send alerts to Slack, PagerDuty, or email
when thresholds are breached.
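A minimal Alertmanager configuration sketch for routing alerts to Slack; the webhook URL and channel are placeholders:

route:
  receiver: slack-notifications
receivers:
- name: slack-notifications
  slack_configs:
  - api_url: https://hooks.slack.com/services/<WEBHOOK_PATH>
    channel: '#alerts'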

31. Scenario: RBAC for Fine-Grained Access Control

Question: How do you enforce fine-grained access control for different teams in a Kubernetes
cluster?

Answer: Use Role-Based Access Control (RBAC) to restrict access to resources based on user roles.

1. Create Roles and RoleBindings: Define roles for different teams and bind them to users or
groups:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-team
  name: developer-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "create", "delete"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: dev-team
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io

2. ClusterRole and ClusterRoleBindings: For cluster-wide access, use ClusterRoles and ClusterRoleBindings, as sketched below.
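A minimal ClusterRoleBinding sketch; the binding and subject names are illustrative, and cluster-admin is a built-in ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ops-cluster-admin
subjects:
- kind: User
  name: ops-lead
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io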

32. Scenario: Kubernetes Multi-Cluster Management with KubeFed

Question: How do you manage multiple Kubernetes clusters using a single control plane?

Answer: Use KubeFed (Kubernetes Federation) to manage multiple clusters as a single entity.

1. Install KubeFed: Install KubeFed to manage multiple clusters from a central control plane:

kubectl apply -f https://github.com/kubernetes-sigs/kubefed/releases/download/v0.1.0/kubefed.yaml

2. Join Clusters: Add clusters to the federation:

kubefedctl join cluster-name --host-cluster-context=<context>

3. Deploy Federated Resources: Use FederatedDeployments to manage workloads across multiple clusters:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: myapp
spec:
  template:
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp-container
            image: myapp:v1

Federated resources are automatically synchronized across all member clusters.

33. Scenario: Canary Release with Helm and Kubernetes

Question: How do you deploy a new version of an application using a canary release strategy with
Helm?

Answer: Use Helm to package and deploy new versions of your application and leverage Istio or
Kubernetes Ingress to route traffic.

1. Create Helm Chart for Application: Define the Helm chart with support for multiple versions
(e.g., v1 and v2):
replicas: 3
image:
  repository: myapp
  tag: v1

2. Deploy Canary Release: Use Helm to deploy the new version (v2) as a canary:

helm install myapp ./myapp-chart --set image.tag=v2

3. Route Canary Traffic: Use Istio or Kubernetes Ingress to route a small percentage of traffic to
the canary version:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.example.com
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 90
    - destination:
        host: myapp
        subset: v2
      weight: 10

Gradually increase the traffic to v2 and monitor performance.

34. Scenario: Handling Node Failures in a Multi-Zone Kubernetes Cluster

Question: How does Kubernetes handle node failures in a multi-zone cluster, and how do you ensure
pod rescheduling?

Answer: In a multi-zone cluster, Kubernetes ensures high availability by rescheduling pods to nodes
in healthy zones.
1. Enable Pod Anti-Affinity: Use podAntiAffinity to ensure that pods are spread across
different zones:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - myapp
    topologyKey: "failure-domain.beta.kubernetes.io/zone"

2. Check Node Health: When a node in one zone fails, Kubernetes automatically evicts the
pods and reschedules them on healthy nodes in other zones.

3. Ensure Persistent Storage with Zone-Aware Scheduling: Use zone-aware StorageClasses (e.g., for AWS EBS or GCP PD) to ensure that persistent volumes are provisioned in the same zone as the pods.

35. Scenario: Immutable Infrastructure with Kubernetes

Question: How do you implement immutable infrastructure principles in Kubernetes to ensure consistency across deployments?

Answer: To implement immutability, avoid making changes to running infrastructure. Instead, deploy
new versions of applications and recreate environments from scratch.

1. Use Immutable Container Images: Reference images by their SHA digest to ensure
consistency:

containers:
- name: myapp
  image: myapp@sha256:e12345abcde...

2. Prevent Configuration Changes: Use ConfigMaps and Secrets to externalize configurations, ensuring that the application remains immutable.

3. Read-Only File Systems: Set the root filesystem of your containers to be read-only:
securityContext:
  readOnlyRootFilesystem: true

This ensures that your infrastructure remains immutable and consistent across environments.

36. Scenario: Deploying Stateful Applications in Kubernetes

Question: How do you deploy and scale a stateful application like Kafka in Kubernetes?

Answer: Use StatefulSets to deploy and manage stateful applications. StatefulSets provide stable
network identities, persistent storage, and ordered scaling.

1. Deploy Kafka StatefulSet: Define a StatefulSet for Kafka:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: "kafka"
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka:latest
        ports:
        - containerPort: 9092
        volumeMounts:
        - name: kafka-data
          mountPath: /var/lib/kafka
  volumeClaimTemplates:
  - metadata:
      name: kafka-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi

2. Expose Kafka as a Headless Service: Use a headless service to manage stateful communication between Kafka brokers:

apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  clusterIP: None
  selector:
    app: kafka

3. Scaling Kafka: StatefulSets ensure that each broker has a unique identity and persistent
volume. Scale Kafka by increasing the number of replicas:

kubectl scale statefulset kafka --replicas=5

37. Scenario: Kubernetes RBAC for Fine-Grained Access Control

Question: How do you enforce fine-grained access control in Kubernetes to restrict access to certain
resources for specific users?

Answer: Use Role-Based Access Control (RBAC) to enforce fine-grained access controls, limiting
access to resources based on user roles and permissions.

1. Create Roles for Specific Resources: Define roles that grant specific permissions (e.g., access
to pods and services) within a namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-role
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "create", "delete"]

2. Bind Roles to Users: Use RoleBindings to associate users with specific roles:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: dev
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io

3. Cluster-Wide Roles: Use ClusterRoles for cluster-wide access across namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: admin-role
rules:
- apiGroups: [""]
  resources: ["nodes", "pods"]
  verbs: ["get", "list"]


38. Scenario: Multi-Cluster Kubernetes Management with Rancher

Question: How do you manage multiple Kubernetes clusters across different environments using
Rancher?

Answer: Use Rancher to manage multiple clusters across different cloud environments or on-premise
infrastructure.

1. Install Rancher: Deploy Rancher to manage clusters:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher rancher-latest/rancher --namespace cattle-system --create-namespace

2. Add Clusters to Rancher: Use Rancher’s UI to add clusters (e.g., GKE, EKS, on-prem) for
centralized management.

3. RBAC Across Clusters: Rancher allows you to define global RBAC policies that apply across
multiple clusters, ensuring consistent access control.

39. Scenario: Deploying Multi-Tier Applications in Kubernetes

Question: How do you deploy a multi-tier application (e.g., frontend, backend, database) in
Kubernetes with proper isolation?

Answer: Use Namespaces to isolate different tiers and NetworkPolicies to control traffic flow
between them.

1. Deploy Frontend, Backend, and Database: Define separate deployments for each tier (e.g.,
frontend, backend, and database):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: frontend
        image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: backend
        image: my-backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: database
        image: mysql

2. Use Namespaces for Isolation: Create separate namespaces for each tier (frontend, backend,
database):

kubectl create namespace frontend
kubectl create namespace backend
kubectl create namespace database

3. NetworkPolicies for Security: Use NetworkPolicies to restrict traffic between tiers. For
example, only allow traffic from the frontend to the backend:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend

40. Scenario: Kubernetes Ingress for Multi-Domain Applications

Question: How do you configure Ingress to route traffic to multiple applications using different
domains or subdomains?

Answer: Use Ingress resources to route traffic to different services based on hostnames or paths.

1. Create Ingress Rules: Define Ingress rules for multiple domains or subdomains:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-domain-ingress
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80

2. Use a TLS Certificate: Secure traffic to your applications by adding a TLS certificate for the
domains:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  tls:
  - hosts:
    - app1.example.com
    - app2.example.com
    secretName: tls-secret
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80

3. Automatic Certificate Management: Use cert-manager to automatically issue and renew TLS
certificates for your Ingress:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml

41. Scenario: Kubernetes Pod Priority and Preemption

Question: How do you ensure that critical workloads are not preempted or starved for resources in a
Kubernetes cluster under heavy load?

Answer: Use Pod Priority and Preemption to ensure that high-priority pods can evict lower-priority
pods when resources are scarce.

1. Define Priority Classes: Create a PriorityClass for critical workloads:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
preemptionPolicy: PreemptLowerPriority

2. Assign Priority to Pods: Assign the priority to the pod’s specification:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-app
spec:
  template:
    spec:
      priorityClassName: high-priority
      containers:
      - name: myapp-container
        image: myapp:latest

3. Preempt Low-Priority Pods: Kubernetes will preempt lower-priority pods to free up resources for high-priority pods during resource contention.

42. Scenario: Debugging and Troubleshooting Pods

Question: A pod is stuck in a CrashLoopBackOff state. How do you debug and troubleshoot the
issue?

Answer: Follow these steps to troubleshoot the issue:

1. Check Pod Logs: View the pod’s logs to understand why it’s crashing:

kubectl logs <pod-name>

2. Describe the Pod: Use the kubectl describe command to get detailed information about the
pod’s status, events, and failure reasons:

kubectl describe pod <pod-name>

3. Execute a Shell in the Pod: If the container starts but crashes quickly, you can use kubectl
exec to access the pod and debug:

kubectl exec -it <pod-name> -- /bin/bash

4. Check Resource Limits: Ensure that the pod isn’t being killed due to exceeding CPU or
memory limits.

5. Investigate Readiness and Liveness Probes: Misconfigured probes can cause a pod to be
restarted or marked as unhealthy.
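In addition to the steps above, a few commands often surface the root cause quickly; the event field selector and jsonpath expression are illustrative:

kubectl logs <pod-name> --previous
kubectl get events --field-selector involvedObject.name=<pod-name>
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState}'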

43. Scenario: Kubernetes Cluster Autoscaler

Question: How do you automatically scale the number of nodes in a Kubernetes cluster based on
resource demand?

Answer: Use the Cluster Autoscaler to dynamically adjust the number of nodes in the cluster based
on pod resource requests.

1. Install Cluster Autoscaler: Install the Cluster Autoscaler for your cloud provider (e.g., GKE,
AWS, or Azure). For GKE:
gcloud container clusters update my-cluster --enable-autoscaling --min-nodes=3 --max-nodes=10

2. Configure Autoscaler: The autoscaler monitors pending pods that cannot be scheduled due
to resource constraints and scales the cluster accordingly.

3. Ensure Pod Scheduling: Pods with unsatisfied resource requests trigger the autoscaler to add
new nodes.

44. Scenario: Managing Application Configuration Across Environments

Question: How do you manage application configuration for multiple environments (e.g., dev,
staging, prod) in Kubernetes?

Answer: Use ConfigMaps and Secrets to manage environment-specific configurations.

1. Create Environment-Specific ConfigMaps: Define a ConfigMap for each environment (e.g., dev, staging, prod):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: dev
data:
  DATABASE_URL: "postgres://dev-db"

2. Use the ConfigMap in Pods: Mount the ConfigMap as environment variables in the pod spec:

containers:
- name: myapp-container
  envFrom:
  - configMapRef:
      name: app-config

3. Separate Namespaces for Environments: Create separate namespaces for each environment
to isolate configurations and resources:

kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace prod


45. Scenario: High Availability for Kubernetes Control Plane

Question: How do you set up a highly available Kubernetes control plane?

Answer: To ensure high availability (HA) of the Kubernetes control plane, deploy multiple API server
instances and use etcd clusters with multiple members.

1. Deploy Multiple API Servers: For an HA setup, deploy multiple instances of the Kubernetes
API server across different nodes or zones.

2. Use a Load Balancer: Place a load balancer in front of the API servers to distribute traffic.

3. Deploy an HA etcd Cluster: Use an odd number of etcd members (e.g., 3 or 5) to form a
quorum-based consensus:

etcd --name infra0 --initial-advertise-peer-urls http://infra0:2380 \
  --listen-peer-urls http://infra0:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster infra0=http://infra0:2380,infra1=http://infra1:2380,infra2=http://infra2:2380 \
  --initial-cluster-state new

4. Ensure Redundancy for Controller Manager and Scheduler: Deploy multiple instances of the
controller manager and scheduler, with leader election enabled to ensure one active
instance.
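Leader election is controlled by the --leader-elect flag on both components (enabled by default in recent releases); a sketch of the relevant flags, with all other required flags omitted:

kube-controller-manager --leader-elect=true ...
kube-scheduler --leader-elect=true ...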

46. Scenario: CI/CD Pipeline for Kubernetes Using GitLab

Question: How do you set up a CI/CD pipeline in GitLab for deploying applications to Kubernetes?

Answer: Use GitLab CI’s Kubernetes integration to build, test, and deploy your applications
automatically.

1. Configure Kubernetes Integration: In GitLab, go to your project's settings and configure Kubernetes integration by providing the cluster credentials (API server URL, CA certificate, and token).

2. Define a GitLab CI Pipeline: Create a .gitlab-ci.yml file that builds the application, builds a
Docker image, and deploys it to Kubernetes:

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build -t registry.gitlab.com/myapp:v1 .
  tags:
    - docker

deploy:
  stage: deploy
  script:
    - kubectl apply -f deployment.yaml
  environment:
    name: production

3. Set Up Deployment Webhooks: GitLab CI automatically triggers the pipeline on code pushes,
builds the application, and deploys it to Kubernetes.

47. Scenario: Immutable Infrastructure in Kubernetes

Question: How do you ensure immutability for your Kubernetes workloads to prevent configuration
drift?

Answer: To ensure immutability, follow these practices:

1. Use Immutable Container Images: Reference images by their SHA digest to ensure
consistency across environments:

containers:
- name: myapp
  image: myapp@sha256:e12345abcde...

2. Read-Only File Systems: Set the root filesystem of your containers to read-only to prevent
accidental changes to the infrastructure:

securityContext:
  readOnlyRootFilesystem: true

3. Externalize Configuration: Use ConfigMaps and Secrets for configuration, and avoid
hardcoding environment-specific settings in your containers.

4. Use Declarative Infrastructure: Use tools like Helm, Kustomize, or Terraform to manage
infrastructure as code (IaC), ensuring that changes are applied in a controlled, repeatable
manner.
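For example, a minimal Kustomize sketch; the file names and image tag are illustrative, and it is applied with kubectl apply -k:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
images:
- name: myapp
  newTag: v2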

48. Scenario: Logging and Monitoring in Kubernetes

Question: How do you implement centralized logging and monitoring in a Kubernetes cluster?

Answer: Use the EFK (Elasticsearch, Fluentd, Kibana) stack for centralized logging and Prometheus
and Grafana for monitoring.

1. Deploy Fluentd for Log Aggregation: Fluentd collects logs from all nodes and forwards them
to Elasticsearch:
kubectl apply -f https://raw.githubusercontent.com/fluent/fluentd-kubernetes-daemonset/master/fluentd-daemonset-elasticsearch-rbac.yaml

2. Use Kibana for Log Analysis: Kibana provides a UI for searching and analyzing logs stored in
Elasticsearch.

3. Monitoring with Prometheus: Deploy Prometheus to collect metrics from Kubernetes components (e.g., API server, nodes, etcd) and application metrics:

helm install prometheus stable/prometheus

4. Visualize Metrics with Grafana: Grafana integrates with Prometheus to provide dashboards
for real-time metrics visualization.

49. Scenario: Kubernetes Cluster Autoscaler for Cloud Providers

Question: How do you automatically scale the number of nodes in a Kubernetes cluster based on
resource demand?

Answer: Use the Cluster Autoscaler to automatically scale the number of nodes in a cluster based on
pod resource requests.

1. Install Cluster Autoscaler: For cloud providers, such as GKE or EKS, enable autoscaling:

gcloud container clusters update my-cluster --enable-autoscaling --min-nodes=3 --max-nodes=10

2. Configure Autoscaler: The Cluster Autoscaler monitors pod resource requests and scales the
cluster by adding or removing nodes to meet demand.

3. Ensure Resource Requests for Pods: Ensure that pods have defined CPU and memory
requests, so the autoscaler knows when to trigger scaling actions.

50. Scenario: Kubernetes Multi-Cluster Deployment Using KubeFed

Question: How do you manage workloads across multiple Kubernetes clusters using Kubernetes
Federation (KubeFed)?

Answer: Use KubeFed to manage multiple clusters from a centralized control plane, ensuring
consistent deployment and configuration across clusters.

1. Install KubeFed: Install KubeFed to manage multiple clusters:

kubectl apply -f https://github.com/kubernetes-sigs/kubefed/releases/download/v0.1.0/kubefed.yaml

2. Join Clusters: Use kubefedctl to join member clusters:

kubefedctl join my-cluster --host-cluster-context=my-host-cluster

3. Deploy Federated Resources: Deploy federated resources across all member clusters:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: myapp
spec:
  template:
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp-container
            image: myapp:v1

KubeFed ensures that deployments are synchronized across all clusters in the federation.
