Kubernetes Scenario Based Questions
1. Scenario: Your application is receiving more traffic and you need to scale a Deployment. How do you do it?
Answer: Use the kubectl scale command to change the replica count. For example, to scale a Deployment named my-app to 5 replicas:
kubectl scale deployment my-app --replicas=5
Alternatively, you can edit the YAML file of the Deployment and change the replicas field:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
2. Scenario: A pod keeps failing or restarting. How do you troubleshoot it?
Answer:
1. Check the pod's logs: kubectl logs <pod-name>
2. Describe the pod to get detailed information about its state and events: kubectl describe pod <pod-name>
3. Look for any error messages or failed container states in the logs and events. Common issues include misconfigurations, missing files, or incorrect image versions.
4. Fix the issue based on the logs (e.g., fix environment variables, update the image, or modify the configuration) and then restart the pod, as shown in the sequence below.
3. Scenario: You need to expose an application running in your cluster to external traffic. How do you create a Service of type LoadBalancer?
Answer: Define a Service with type: LoadBalancer that selects the application's pods. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
Once the service is created, Kubernetes will provision a cloud load balancer (if
supported by the cloud provider) and assign an external IP address to access the
service.
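To find the assigned address, you can watch the service until the EXTERNAL-IP column changes from <pending> to a real IP (service name taken from the example above):
kubectl get service my-service --watch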
4. Scenario: You want to run a job in Kubernetes that executes once and
then completes. How do you create a job in Kubernetes?
Answer: To create a one-time job, define a Job resource. Here's an example of a
simple Kubernetes job definition:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: busybox
          command: ["echo", "Hello, World!"]
      restartPolicy: Never
This job will run the echo command once and then exit.
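A short sketch of creating and checking the job, assuming the manifest above is saved as my-job.yaml:
kubectl apply -f my-job.yaml
kubectl get jobs                 # COMPLETIONS shows 1/1 once the job finishes
kubectl logs job/my-job          # should print "Hello, World!"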
5. Scenario: You want to restrict which pods can communicate with your database pods. How do you use a NetworkPolicy?
Answer: Here is an example of a NetworkPolicy that allows traffic only to pods with the label role=db in the same namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: app
This policy allows only pods with the role=app label to communicate with pods labeled role=db.
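Note that NetworkPolicy is only enforced if the cluster's network plugin supports it (e.g., Calico or Cilium). One way to verify the policy, assuming hypothetical pod names, a database service listening on port 5432, and images that include nc:
kubectl exec app-pod -- nc -zv db-service 5432     # allowed: pod has role=app
kubectl exec other-pod -- nc -zv db-service 5432   # blocked: connection times out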
6. Scenario: You need to update your application to a new image version without downtime. How do you perform a rolling update?
Answer: Update the image in the Deployment's pod template. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:v2  # New version of the image
          ports:
            - containerPort: 80
Kubernetes will automatically perform a rolling update by replacing the old pods with
the new ones without downtime.
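The same update can be triggered and monitored imperatively; the names below come from the Deployment example above:
kubectl set image deployment/my-app my-app=my-app:v2   # update the container image
kubectl rollout status deployment/my-app               # watch the rolling update progress
kubectl rollout undo deployment/my-app                 # roll back if the new version misbehaves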
7. Scenario: You want to increase the memory limit of a pod. How do you
modify the pod's resource limits?
Answer: To modify the resource limits (CPU/memory) for a pod, you need to update the
resource requests and limits in the pod’s YAML definition.
Example modification:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Most Kubernetes versions do not allow changing resources on a running pod in place, so to apply the new limits you need to delete and recreate the pod, or trigger a rollout if the pod is managed by a Deployment. A minimal sketch follows.
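A minimal sketch, assuming a standalone pod whose manifest (with the new limits) is saved as my-pod.yaml:
kubectl delete pod my-pod
kubectl apply -f my-pod.yaml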
8. Scenario: You need to monitor the health of a pod. How do you
configure a liveness and readiness probe?
Answer: Liveness and readiness probes are configured within the pod's container
specification. These probes check the health of the application running inside the
container.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      livenessProbe:
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /readiness
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
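If the liveness probe fails, the kubelet restarts the container; if the readiness probe fails, the pod is removed from Service endpoints until it passes again. Probe failures appear in the pod's events:
kubectl describe pod my-pod      # look for "Liveness probe failed" / "Readiness probe failed" events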
9. Scenario: Your application needs storage that survives pod restarts. How do you configure persistent storage?
Answer: Create a PersistentVolume (PV), a PersistentVolumeClaim (PVC) that binds to it, and mount the claim in the pod.
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Pod that mounts the claim:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        - mountPath: /mnt/data
          name: my-storage
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
This configuration ensures that the pod has access to persistent storage.
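You can confirm that the claim bound to the volume (names from the manifests above):
kubectl get pv my-pv             # STATUS should be Bound
kubectl get pvc my-pvc           # shows which volume the claim is bound to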
10. Scenario: You want to limit the CPU and memory usage for a container.
How do you set resource requests and limits?
Answer: In the pod or container spec, you can define resource requests (minimum
resources) and limits (maximum resources) for CPU and memory.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Requests: Kubernetes uses these values to schedule the pod (ensures the node has enough allocatable resources).
Limits: Kubernetes enforces these caps at runtime; a container that exceeds its memory limit is OOM-killed, while CPU usage above the limit is throttled rather than killed.
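To compare actual usage against these values, assuming the metrics-server add-on is installed in the cluster:
kubectl top pod my-pod           # current CPU and memory usage
kubectl describe node <node-name>   # per-node allocated requests and limits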
11. Scenario: You have a Kubernetes cluster with multiple worker nodes.
One of the nodes becomes unresponsive and needs to be replaced. Explain
the steps you would take to replace the node without affecting the
availability of applications running on the cluster.
Answer:
1. Cordon the unresponsive node: Use the kubectl cordon command to mark the node as unschedulable, so no new pods land on it while it is being replaced. (kubectl drain cordons automatically, but cordoning first makes the intent explicit.)
2. Drain the node: Use the kubectl drain command to gracefully evict all the pods running on it. This ensures that the pods are rescheduled on other healthy nodes.
3. Remove the unresponsive node: Once all the pods are safely rescheduled, remove the node from the cluster and either repair it or provision a new node.
4. Uncordon the node: If the repaired node rejoins the cluster, use the kubectl uncordon command to mark it as schedulable again, allowing new pods to be placed on it. The commands are sketched after this list.
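A sketch of the corresponding commands, assuming the node is named node-1 (a hypothetical name):
kubectl cordon node-1                                             # step 1: mark unschedulable
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data   # step 2: evict pods
kubectl delete node node-1                                        # step 3: remove from the cluster
kubectl uncordon node-1                                           # step 4: only if the repaired node rejoins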
12. Scenario: You are running a stateful application and must ensure its data is retained across pod rescheduling and updates. Which Kubernetes features would you use?
Answer: To ensure data retention for a stateful application, I would use the following Kubernetes features:
1. Persistent Volumes (PVs) and Persistent Volume Claims (PVCs): I would create a Persistent Volume that represents the storage resource (e.g., a network-attached disk) and then create a Persistent Volume Claim that binds to the PV. This ensures that the same volume is attached to the pod when it's rescheduled or updated.
13. Scenario: A pod is stuck in the "Pending" state. What could be the reasons, and how do you investigate?
Answer: Possible reasons for a pod being stuck in the "Pending" state could include:
1. Insufficient resources: Check if the cluster has enough resources (CPU, memory, storage) to accommodate the pod. You can use the kubectl describe pod <pod-name> command to view detailed information about the pod, including any resource-related issues.
2. Unschedulable nodes: Check if all the nodes in the cluster are in the "Ready"
state and can schedule the pod. You can use the kubectl get nodes command to
see the node status.
3. Pod scheduling constraints: Verify if the pod has any scheduling constraints or
affinity/anti-affinity rules that are preventing it from being scheduled. Check
the pod's YAML or manifest file for any such specifications.
4. Persistent Volume (PV) availability: If the pod requires a Persistent Volume,
ensure that the required storage is available and accessible.
5. Network-related issues: Check if there are any network restrictions or
misconfigurations preventing the pod from being scheduled or communicating with
other resources.
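A few commands that cover most of these checks, assuming the pod is named my-pod:
kubectl describe pod my-pod                                # the Events section explains why scheduling failed
kubectl get nodes                                          # check that nodes are Ready
kubectl get events --sort-by=.metadata.creationTimestamp   # recent cluster-wide events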
14. Scenario: Pods in a Deployment keep failing their health checks. How do you identify the root cause and fix them?
Answer: To identify the root cause and fix failing health checks for pods in a Kubernetes Deployment:
1. Check the pod's logs: Use the kubectl logs <pod-name> command to retrieve the
logs of the failing pod. Inspect the logs for any error messages or exceptions
that could indicate the cause of the failure.
2. Verify health check configurations: Examine the readiness and liveness probe
configurations in the Deployment's YAML or manifest file. Ensure that the
endpoints being probed are correct, the expected response is received, and the
success criteria are appropriately defined.
3. Debug container startup: If the pods are failing to start, check the
container's startup commands, entrypoints, or initialization processes. Use the
kubectl describe pod <pod-name> command to get detailed information about the
pod, including any container-related errors.
4. Resource constraints: Inspect the resource requests and limits for the pods.
It's possible that the pods are exceeding the allocated resources, causing
failures. Adjust the resource specifications as necessary.
5. Image issues: Verify that the Docker image being used is correct and
accessible. Ensure that the image's version, registry, and repository details
are accurate.
6. Rollout issues: If the pods were recently deployed or updated, ensure that the rollout process completed successfully. Check the deployment's status using kubectl rollout status <deployment-name> and examine the rollout history with kubectl rollout history <deployment-name>.
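A typical inspection sequence, assuming a Deployment named my-app (a hypothetical name):
kubectl logs <pod-name>                      # application errors
kubectl describe pod <pod-name>              # probe failure events and container state
kubectl rollout status deployment/my-app     # whether the rollout completed
kubectl rollout history deployment/my-app    # previous revisions, for rollback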
15. Scenario: You need to change the number of replicas in a Deployment. How do you scale it?
Answer: Use the kubectl scale command, as shown in the sketch below. Alternatively, update the Deployment YAML: modify the replicas field in the Deployment's manifest to the desired number of replicas, then apply the changes using kubectl apply -f <path-to-deployment-yaml>.
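For example, assuming a Deployment named my-app:
kubectl scale deployment my-app --replicas=5   # imperative scaling
kubectl get deployment my-app                  # READY should converge to 5/5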