Part 2 - Kubernetes Interview Questions For DevOps Engineers
Written by Zayan Ahmed | 8 min read
43. How would you troubleshoot a Kubernetes Service not reaching its endpoints?
● Run kubectl describe service <service-name> to confirm the Service's ports and
selector are configured correctly.
● Check kubectl get endpoints <service-name>; an empty endpoint list usually means
the Service selector does not match any Pod labels.
● Verify the associated Pods are Running and passing their readiness probes.
● Ensure the networking layer is functioning properly and check kube-proxy logs.
● Test with kubectl port-forward to confirm the Pods themselves are reachable.
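A frequent root cause is a selector/label mismatch. As a sketch (all names here are illustrative), the Service below only gets endpoints if some Pods carry the exact label app: my-app:

```yaml
# Hypothetical Service: endpoints stay empty unless Pods exist
# whose labels match the selector below exactly.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # must match the Pods' labels exactly
  ports:
    - port: 80
      targetPort: 8080 # must match the port the container listens on
```

After applying, kubectl get endpoints my-app should list Pod IPs; if it is empty, compare the selector against kubectl get pods --show-labels.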
44. If a Pod is consuming excessive memory and causing node instability, how
would you resolve it?
● Investigate memory usage with kubectl top pod <pod-name> and analyze logs for
leaks or OOMKilled events.
● Set resource requests and limits so a misbehaving Pod is OOM-killed by the kubelet
instead of destabilizing the whole node.
● Scale the application using the Horizontal Pod Autoscaler (HPA) to distribute the
load across more replicas.
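Requests and limits are set per container. A minimal sketch (name, image, and figures are illustrative): with the spec below, the scheduler reserves the requested amounts, and the kubelet OOM-kills the container if it exceeds its memory limit, protecting the node:

```yaml
# Illustrative Pod: requests guide scheduling, limits cap usage.
apiVersion: v1
kind: Pod
metadata:
  name: memory-bounded
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"  # exceeding this triggers an OOM kill
          cpu: "500m"
```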
45. How would you handle a misconfigured Deployment that causes a failure in
your application?
● Roll back to the previous stable version with kubectl rollout undo
deployment/<name>.
● Review the Deployment manifest, events, and Pod logs to find the misconfiguration
and prevent a recurrence.
● Add readiness probes so only healthy Pods receive traffic during future rollouts.
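A readiness probe keeps a Pod out of the Service's endpoints until the app reports healthy. Below is a fragment of a Deployment's Pod template, a sketch in which the name, image, port, and /healthz path are assumptions about the application:

```yaml
# Fragment of a Deployment Pod template (container list only):
# the Pod receives Service traffic only after /healthz succeeds.
      containers:
        - name: app
          image: my-app:1.2.3       # hypothetical image tag
          readinessProbe:
            httpGet:
              path: /healthz        # assumed health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
```

With this in place, a rollout that ships a broken image stalls instead of replacing healthy Pods, giving you time to run kubectl rollout undo.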
46. What would you do if your Kubernetes cluster experiences network latency
issues?
● Use kubectl get pods --all-namespaces -o wide to analyze Pod
distribution and networking setup.
● Investigate bottlenecks or misconfigurations in kube-proxy, cluster DNS, or load
balancers.
● Use a service mesh such as Istio or Linkerd to get per-request latency metrics and
pinpoint slow services.
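For hands-on latency debugging it helps to run network tools from inside the cluster. One common approach (the image choice is an assumption, not a requirement) is a throwaway debug Pod:

```yaml
# Disposable troubleshooting Pod for running ping/dig/curl
# from inside the cluster network.
apiVersion: v1
kind: Pod
metadata:
  name: netdebug
spec:
  containers:
    - name: netshoot
      image: nicolaka/netshoot   # widely used network-debug image
      command: ["sleep", "3600"] # keep the Pod alive for a shell
```

Then, for example, kubectl exec -it netdebug -- dig kubernetes.default.svc.cluster.local measures in-cluster DNS latency directly.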
47. How would you configure Kubernetes for a multi-tenant environment?
● Set up Namespaces to isolate resources per tenant.
● Use RBAC to restrict access based on user roles.
● Implement Network Policies to control traffic and provide tenant-specific network
isolation.
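Namespace isolation is usually paired with a ResourceQuota so one tenant cannot starve the others. A sketch, with the tenant name and figures purely illustrative:

```yaml
# Tenant isolation sketch: a Namespace plus a ResourceQuota
# capping what workloads inside it may request in total.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"      # total CPU requests across the namespace
    requests.memory: 8Gi
    pods: "20"             # hard cap on Pod count
```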
48. What are some best practices for running Kubernetes in production?
1. Optimize Resource Management: Set appropriate CPU and memory requests and
limits for each Pod to prevent resource contention and optimize node utilization.
2. Implement Automated Scaling: Use Horizontal Pod Autoscaler (HPA) for scaling
Pods based on resource usage, and Cluster Autoscaler to manage node scaling
dynamically.
3. Leverage Rolling Updates and Rollbacks: Implement rolling updates for minimal
downtime and use kubectl rollout undo for fast rollback in case of deployment
issues.
4. Use Namespaces for Isolation: Separate environments or teams by using
Namespaces, enhancing security and resource management.
5. Enable Monitoring and Logging: Use Prometheus, Grafana, and Elasticsearch to
monitor cluster health and track logs, identifying and resolving issues proactively.
6. Secure Sensitive Data with Secrets Management: Use Kubernetes Secrets with
encryption, and consider tools like HashiCorp Vault or AWS Secrets Manager for
enhanced security.
7. Utilize Network Policies for Security: Implement Network Policies to define which
Pods can communicate with each other, reducing attack surface within the cluster.
8. Adopt Infrastructure as Code (IaC): Use Helm, Kustomize, or Terraform to manage
Kubernetes configurations, promoting repeatability and version control.
9. Employ Role-Based Access Control (RBAC): Configure RBAC to enforce the
principle of least privilege, ensuring that users and applications have only the
permissions necessary to perform their tasks.
10. Maintain Up-to-Date Cluster Versions: Regularly update Kubernetes to the latest
stable version to benefit from security patches and new features.
11. Plan for Backup and Disaster Recovery: Back up etcd data and create disaster
recovery plans for critical applications.
12. Utilize a Service Mesh for Advanced Communication Needs: If using
microservices, consider a service mesh like Istio or Linkerd for traffic management,
observability, and fault tolerance.
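The automated scaling in practice 2 above can be sketched with an autoscaling/v2 HPA; the Deployment name and thresholds are assumptions:

```yaml
# Minimal HPA sketch: scales the hypothetical Deployment my-app
# between 2 and 10 replicas, targeting ~70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note the HPA needs resource requests set on the target Pods (practice 1) and a running metrics-server to compute utilization.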
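For the Secrets management in practice 6, a minimal sketch (name and value are placeholders) looks like this; remember that base64 is encoding, not encryption, so encryption at rest and RBAC restrictions still matter:

```yaml
# Illustrative Secret; stringData lets you write plain text and
# have the API server store it base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: change-me   # placeholder value
```

A container then consumes it via an env entry with valueFrom.secretKeyRef rather than hard-coding the value in the manifest.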
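The Network Policies in practice 7 are label-driven. As a sketch (namespace, labels, and port are assumptions), the policy below restricts ingress to backend Pods so that only frontend Pods may reach them:

```yaml
# Only Pods labeled role: frontend may reach app: backend Pods
# on TCP 8080; all other ingress to them is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Network Policies only take effect if the cluster's CNI plugin (e.g. Calico, Cilium) enforces them.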
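For the IaC approach in practice 8, a minimal Kustomize sketch keeps base manifests in version control and applies per-environment overrides (file names here are hypothetical):

```yaml
# kustomization.yaml: base resources plus an environment-specific
# patch, applied with `kubectl apply -k .`
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
patches:
  - path: replica-patch.yaml   # hypothetical per-env override
```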
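The least-privilege RBAC from practice 9 can be sketched as a namespaced Role plus a RoleBinding; the namespace and user name are placeholders:

```yaml
# Least-privilege sketch: read-only access to Pods in one
# namespace, bound to a single user.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
  - apiGroups: [""]            # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane                 # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

You can verify the effect with kubectl auth can-i list pods --namespace team-a --as jane.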