Docker Qna
docker ps
If the container is running and has an IP, I verify that I mapped the correct port when I started
the container. If I forgot to expose the port, I restart it with:
docker run -d -p <host_port>:<container_port> <image_name>
If it's still not accessible, I check firewall rules (iptables on Linux) and whether another process is
using the port with:
sudo lsof -i :<port>
docker exec -it <container_id> sh
This allows me to explore logs, inspect configurations, or run commands inside the container.
If the logs don't reveal the issue, I inspect the exit code:
docker inspect <container_id> --format='{{.State.ExitCode}}'
A non-zero exit code often means an application error. If I suspect a missing dependency, I
restart the container interactively:
docker run -it <image_name> sh
4. How do I remove all stopped containers, unused images, and networks to free
up disk space?
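A cleanup sequence that covers each of those, assuming nothing listed is still needed:

```shell
# Remove all stopped containers
docker container prune -f
# Remove dangling (untagged) images
docker image prune -f
# Remove unused networks
docker network prune -f
# Or do all of the above (plus build cache) in a single step
docker system prune -f
```

`docker system prune -a` would additionally delete every image not used by a running container, so it is worth double-checking before adding `-a`.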
5. I need to share data between my container and the host. How do I do it?
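Two common options, sketched with placeholder paths and image names: a bind mount for host-visible files, and a named volume for Docker-managed storage.

```shell
# Bind mount: map a host directory directly into the container
docker run -d -v /host/data:/app/data myapp

# Named volume: let Docker manage where the data lives on disk
docker volume create app_data
docker run -d -v app_data:/app/data myapp
```

Bind mounts are convenient for configs and development; named volumes are generally preferred for data that should outlive any one container.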
I optimize my Dockerfile (a small base image, a .dockerignore file, and as few layers as possible).
I tag it correctly:
docker tag myapp username/myapp:v1
docker login
docker push username/myapp:v1
9. I need to deploy multiple containers that interact with each other. How do I do
it?
version: '3'
services:
  app:
    image: myapp
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
docker-compose up -d
12. I need to limit a container's CPU and memory usage. How do I do it?
docker run -d --memory=500m --cpus=0.5 myapp
This limits the container to 500MB RAM and half a CPU core.
13. I need to network multiple containers together. What’s the best approach?
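The usual answer is a user-defined bridge network, where Docker's embedded DNS lets containers reach each other by name; a sketch with placeholder names:

```shell
# Create a user-defined bridge network
docker network create app_net

# Containers on the same network can resolve each other by container name
docker run -d --name db --network app_net postgres
docker run -d --name app --network app_net myapp
# Inside "app", the database is now reachable at hostname "db"
```

The default bridge network does not provide name-based resolution, which is why a user-defined network is the standard recommendation.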
15. I need to run a scheduled task inside a container. What’s the best way?
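A few common patterns, sketched with placeholder names (which one fits depends on whether the schedule should live inside or outside the container):

```shell
# Option 1: bake cron into the image and run it as the container's main process
docker run -d --name cron-job myapp-with-cron

# Option 2: keep the container stateless and drive it from the host's crontab,
# e.g. a host crontab entry such as:
#   */5 * * * * docker exec my_container /app/run-task.sh

# Option 3: schedule one-shot containers from the host
docker run --rm myapp /app/run-task.sh
```

In orchestrated environments, a scheduler object (such as a Kubernetes CronJob) is usually preferable to any of these.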
Final Thoughts
These scenarios cover common Docker challenges you'll face in the real world. If you
understand them, you’ll be more than ready to answer any interview question confidently! Let
me know if you want even more advanced cases.
Here are 15 real-world Docker container scenarios with answers in the first
person to help you prepare for work-related questions.
docker run -d <image_name>
This runs the container in detached mode, meaning it won’t tie up my terminal.
3. A container is running but not responding to requests, how do you check what’s
wrong?
First, I’d verify if it’s running using docker ps. Then, I’d check logs with docker logs <container_name>
to see any errors. If networking is an issue, I’d inspect ports with docker inspect <container_name> or
docker port <container_name>. If necessary, I’d enter the container with docker exec -it
<container_name> bash and manually test connectivity.
4. How do you copy a file from your host machine into a running container?
docker cp myfile.txt <container_name>:/app/
This copies myfile.txt into the /app directory inside the container.
I’d use:
docker stop $(docker ps -q)
This stops all running containers by passing their IDs to docker stop.
docker stats
If the container is exceeding its limits, I’d update its memory constraints using --memory when
running it:
docker run -d --memory=512m <image_name>
or run
docker update --memory 512m --memory-swap 1g <container_name>
11. You need to bind a container to a specific port on the host, how do you do it?
docker run -d -p 8080:80 <image_name>
This binds port 80 inside the container to port 8080 on the host.
I’d use:
docker port <container_name>
This shows which ports are mapped between the host and the container.
13. You need to remove a running container forcefully, how do you do it?
I’d use:
docker rm -f <container_name>
I’d run:
docker inspect <container_name>
or
docker network inspect <network_name>
This helps me check IP addresses, DNS settings, and other network configurations.
This should get you fully prepared! Let me know if you want more practice questions.
Here are 15 real-world Docker image scenarios with responses in the first person, just like
you'd answer in an interview.
1. A teammate says, "I pulled an image, but it's not working. How do I check what's
inside it?"
Answer:
I would use docker inspect <image_name> to check the image metadata, including environment
variables, exposed ports, and entry points. If I need to explore the filesystem, I’d create a
container with docker run -it --rm <image_name> sh and navigate inside.
2. I need a specific version of an image. How do I pull it?
Answer:
I would use docker pull <image_name>:<tag>. For example, to pull Node.js version 18, I’d run docker
pull node:18. If I need to list available tags, I’d check Docker Hub or use docker search <image>.
3. I need to pull an image from a specific repository. How do I do it?
Answer:
I would run docker pull <repo_name>/<image_name>:<tag>. If authentication is required, I’d log in first
using docker login.
5. I need to create a custom image with extra software installed. What’s the best
approach?
Answer:
The best approach is to write a Dockerfile. I would use FROM <base_image>, add my RUN
commands to install software, and build it with docker build -t my_custom_image .. This ensures the
image is reproducible.
6. How do I list all the images stored on my machine?
Answer:
I would use docker images to see all locally stored images, including their repository, tag, size, and
ID.
7. I need to delete an image, but Docker says it's in use. What should I do?
Answer:
I would check running containers with docker ps -a, stop them using docker stop <container_id>,
remove them with docker rm <container_id>, and then delete the image using docker rmi <image_id>.
8. I accidentally created too many untagged images. How do I clean them up?
Answer:
I would use docker image prune to remove unused images. If I want to delete all unused images,
including those with tags, I’d use docker image prune -a.
Answer:
I would:
10. I want to see how an image was built. How do I check?
Answer:
I would use docker history <image_name> to see the list of commands that created the image, along
with the layer sizes.
11. Can I create an image from a running container?
Answer:
I would use docker commit <container_id> my_custom_image. However, this method is not
recommended because it's not reproducible. Instead, I would create a Dockerfile.
12. I need to use an image from a private registry. How do I authenticate?
Answer:
I would first log in using docker login <registry_url>, then pull the image using docker pull
<registry_url>/<image_name>:<tag>.
13. My container is running, but I want to check which image it came from. How?
Answer:
I would use docker ps to find the container ID, then run docker inspect <container_id> and check the
"Image" field.
14. I built an image, but I don’t see it when I run docker images. What’s wrong?
Answer:
I would check:
Whether I accidentally created a dangling image (<none> tag), using docker images -a.
Whether I built the image without tagging it (docker build -t my_image .).
Whether the build failed, by re-running docker build and checking its output.
15. How do I export an image and transfer it to another machine without Docker
Hub?
Answer:
I would save the image as a tar file with:
docker save -o myapp.tar myapp:latest
Then, on the other machine, I’d load it with:
docker load -i myapp.tar
These are practical, real-world scenarios that should help in any Docker interview!
Here are 15 real-world Dockerfile and image creation scenarios, with
answers in the first person, as if I were explaining my decisions in an interview or work setting.
1. A junior developer wrote a Dockerfile, but it's 1GB in size. How do you reduce it?
"I noticed the image was too large, so I optimized it in three ways: switching to a smaller base
image (such as an alpine variant), using a multi-stage build so build tools don't ship in the final
image, and cleaning up package caches in the same RUN layer that creates them.
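A sketch of those optimizations combined in a multi-stage build (base image, paths, and commands are placeholders for this project):

```dockerfile
# Build stage: compilers and dev dependencies stay here
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: only the production artifacts are copied over
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
# Install production deps only, and clean the cache in the same layer
RUN npm ci --omit=dev && npm cache clean --force
CMD ["node", "dist/index.js"]
```

The final image never contains the build toolchain, which is typically where most of the excess size comes from.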
2. Your Docker build is slow because dependencies keep reinstalling. How do you
speed it up?
"I optimized the build cache by copying package.json first before copying the full source code:
COPY package*.json ./
RUN npm install
COPY . .
This way, as long as package.json doesn't change, Docker caches the dependencies and only re-installs if necessary."
3. Your team is using latest tags in production images. Why is this a bad practice?
"Using latest makes deployments unpredictable because we don’t know which version will be
pulled. Instead, I enforce versioned tags like myapp:1.0.3 and maintain a myapp:stable tag for
controlled rollouts."
4. A teammate asks why we use WORKDIR instead of cd in RUN. How do you explain it?
"WORKDIR is persistent across layers, making the Dockerfile cleaner. If I used RUN cd /app, the
directory change wouldn’t persist in the next layer."
5. The app runs fine locally but crashes in Docker due to missing environment
variables. What do you do?
"I ensure that required environment variables are set using ENV in the Dockerfile:
ENV NODE_ENV=production
For sensitive variables, I use a .env file and pass it during runtime:
docker run --env-file .env myapp
6. A container crashes, and you need to debug. What’s your approach?
"I start by checking logs with:
docker logs <container_id>
7. You need to copy files into the container but exclude some. What’s the best
approach?
"I use a .dockerignore file to exclude unnecessary files like .git, node_modules, and local configs. This
keeps the image smaller and builds faster."
8. Shouldn’t we always use ADD instead of COPY?
"No, I only use ADD when I need to extract compressed files or download from URLs. Otherwise,
COPY is preferred because it's simpler and more predictable."
9. Your container runs as root, and security teams flag it. How do you fix it?
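A minimal sketch of the fix in the Dockerfile (user, group, and image names are placeholders):

```dockerfile
FROM node:18-alpine
# Create an unprivileged system user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
# Give the app user ownership of the application files
COPY --chown=appuser:appgroup . .
# Everything from here on, including the runtime process, runs unprivileged
USER appuser
CMD ["node", "index.js"]
```

On Debian-based images, `useradd --create-home appuser` plays the same role as the alpine `adduser` shown here.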
10. You need to update a running container without downtime. What do you do?
"I use a rolling update strategy with Docker Compose or Kubernetes. Instead of modifying a
running container, I build a new version, push it to the registry, and deploy it using a blue-green
or canary strategy."
11. Your team wants to share images without using Docker Hub. What are the
options?
"I'd run a self-hosted registry with the official image:
docker run -d -p 5000:5000 --name registry registry:2
and tag and push images to localhost:5000/<image>. Or use cloud-based registries like AWS ECR, GCP Artifact Registry, or GitHub Container
Registry."
12. The app inside a container can't connect to a database. How do you fix it?
"I first verify both containers are on the same network:
docker network ls
docker network inspect <network_name>
If they aren't, I attach them to a shared user-defined network so they can resolve each other by service name."
13. Someone added RUN apt-get install -y package but didn't clean up. What's the impact?
"Not cleaning up package lists increases image size. I modify it like this:
RUN apt-get update && apt-get install -y package && rm -rf /var/lib/apt/lists/*
14. Your Dockerfile builds but fails at runtime with "file not found". What’s wrong?
"I check if COPY or WORKDIR is set correctly. If a binary is missing, I ensure it's in the PATH. If it's a
permissions issue, I verify file ownership with ls -l inside the container."
15. The app needs a specific secret at runtime. How do you handle it securely?
"I avoid baking secrets into images. Instead, I use environment variables, secret management
tools like AWS Secrets Manager, or mount a secret file as a volume:
docker run -v /path/to/secret:/run/secrets/app_secret:ro myapp
These scenarios should prepare you for real-world Docker challenges. Do you want me to
simulate a mock interview for you next?
Here are 15 real-world Docker storage scenarios with answers in the first person:
I realized that container storage is ephemeral, so I started using Docker volumes to persist my
data. Now, I mount a named volume like this:
docker run -d -v mydata:/app/data myapp
2. I need to share data between multiple containers. What’s the best way?
I use a shared volume so that multiple containers can read and write to the same data location.
For example:
docker run -d --name writer -v shared_data:/data myapp
docker run -d --name reader -v shared_data:/data worker
3. My container writes logs inside itself, but they disappear after a restart. How do
I persist logs?
I mount a named volume or a bind mount to persist logs. I prefer volumes for better
management:
docker run -d -v app_logs:/var/log/app myapp
4. I need my containerized app to read from a config file on my host machine. How
do I do this?
I use a bind mount to map my host config file into the container:
docker run -d -v $(pwd)/config.yml:/app/config.yml myapp
5. I want a container to read shared data without being able to modify it. How do I do this?
I mount the volume with the :ro (read-only) flag:
docker run -d -v mydata:/data:ro myapp
This ensures my application can read but not modify the data.
6. I need high-speed temporary storage inside a container. What’s the best
solution?
I use tmpfs mounts, which store data in RAM for fast access:
docker run -d --tmpfs /app/cache myapp
This ensures fast storage but is lost on restart—perfect for caching or temporary data.
I check if I’m using the correct Windows path format and enable file sharing in Docker
Desktop.
Example Windows bind mount:
docker run -v C:\data:/data myapp
To back up a volume into a tar file:
docker run --rm -v my_volume:/data -v $(pwd):/backup ubuntu tar -cvf /backup/my_volume_backup.tar /data
To restore it into a volume:
docker run --rm -v my_volume:/data -v $(pwd):/backup ubuntu tar -xvf /backup/my_volume_backup.tar -C /data
10. My database container restarts and loses all data. How do I fix this?
I mount a persistent volume for the database storage:
docker run -d -v pgdata:/var/lib/postgresql/data postgres
docker volume prune
This deletes all volumes not used by any container, freeing up disk space.
12. My container is storing too much data and filling up my disk. How do I check
and free space?
docker system df
13. I need to migrate a Docker volume to another machine. What’s the process?
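One common process, assuming shell access to both machines (volume, file, and host names are placeholders):

```shell
# On the source machine: archive the volume's contents via a throwaway container
docker run --rm -v my_volume:/data -v $(pwd):/backup ubuntu \
  tar -cvf /backup/my_volume.tar -C /data .

# Copy the archive to the target machine
scp my_volume.tar user@target-host:~

# On the target machine: create the volume and unpack the archive into it
docker volume create my_volume
docker run --rm -v my_volume:/data -v $(pwd):/backup ubuntu \
  tar -xvf /backup/my_volume.tar -C /data
```

The throwaway container is needed because volumes live in Docker-managed storage and are not meant to be copied directly from the host filesystem.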
If the data was inside the container’s writable layer (not a volume), it's lost.
If I used a named volume, I can recover it by mounting it into a new container:
docker run -it -v my_volume:/data ubuntu ls /data
15. I need a multi-container app where multiple services share data. What’s the
best approach?
I use Docker Compose to manage multiple containers and share data through volumes.
Example docker-compose.yml:
version: "3.8"
services:
  app:
    image: myapp
    volumes:
      - shared_data:/app/data
  worker:
    image: worker
    volumes:
      - shared_data:/app/data
volumes:
  shared_data:
These scenarios should prepare you for any real-world Docker storage issue. Let me know if
you want more advanced use cases!
Here are 15 real-world work scenario questions about Docker networking, answered in first
person as if I were in the role:
If the issue persists, I verify the container’s internal DNS resolution using ping or nslookup. If it's a
firewall issue, I check iptables rules with:
iptables -L -n
I use the -p flag to bind the container’s port to the host’s port. For example, if my app runs on
port 80 inside the container, I expose it on port 8080 like this:
docker run -d -p 8080:80 myapp
Then, I check if it's accessible via curl http://localhost:8080 or from a remote machine.
3. You need containers to communicate across different hosts. How do you set that
up?
I set up an overlay network in Swarm mode:
docker network create -d overlay my_overlay
Now, when I deploy services, I attach them to this network so they can communicate across
multiple hosts.
I run the container with the host network driver:
docker run -d --network host myapp
This makes the container share the host’s network stack, allowing it to bind directly to host
ports.
6. You need a container to run in complete network isolation. What do you do?
I run it with the none network driver:
docker run -d --network none myapp
This ensures the container has no external network access, enhancing security.
7. A container's port is exposed, but it's not accessible from the host. What do you
check?
First, I verify if the container is running and listening on the correct port:
docker ps
docker port <container_name>
I also check for firewall rules blocking the port using iptables -L -n.
I use:
I run:
or
11. You need a container to appear as a separate network device on the LAN. What
do you do?
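Assuming the goal is a macvlan network (the subnet, gateway, and parent interface below must match your LAN):

```shell
# Create a macvlan network bound to the host's physical NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan_net

# The container gets its own MAC address and LAN IP, visible to other LAN devices
docker run -d --network lan_net --ip 192.168.1.50 myapp
```

One known caveat: by default the host itself cannot reach containers on a macvlan network directly, which is worth mentioning if the interviewer probes further.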
12. You need to troubleshoot a container’s connectivity. What steps do you take?
1. Check network settings: docker network inspect my_network
2. Check if the container has an IP: docker inspect my_container
3. Ping other containers: docker exec -it my_container ping container2
4. Check firewall rules: iptables -L -n
13. A service running in Swarm mode isn’t reachable. How do you debug it?
I use:
docker service ps <service_name>
or
docker service logs <service_name>
If it fails, I check /etc/resolv.conf inside the container to ensure the DNS settings are correct.
These 15 real-world scenarios cover troubleshooting, networking configurations, and
performance optimizations, ensuring you’re fully prepared for any Docker networking
interview.
Here are 15 real-world work scenarios with first-person answers to help you prepare for an
interview on Docker Compose.
1. A teammate asks: "Why should we use Docker Compose instead of running docker
run commands?"
👉 I’d explain that Docker Compose simplifies multi-container applications by defining everything
in a single docker-compose.yml file. Instead of manually running multiple docker run commands, I
can start all services with a single docker-compose up. It also makes configuration, networking, and
environment variable management much easier.
👉 I define all the required services in a docker-compose.yml file, specifying images, environment
variables, ports, and dependencies. To start the application, I run docker-compose up -d, which
launches all containers in detached mode. If I need to stop it, I run docker-compose down.
👉 First, I’d check the logs using docker-compose logs db to see if there are errors. If it's a MySQL or
PostgreSQL container, I’d check if the credentials are correct in environment variables. I’d also
confirm the volume is correctly mounted and the database service is healthy using docker-compose
ps.
👉 I’d edit docker-compose.yml and add a new service definition under services. I’d specify its image or
build context, set up ports, environment variables, and dependencies. After saving, I’d run docker-
compose up -d new-service to start only the new service without affecting the running ones.
5. The application works on my machine but fails on a colleague's system. How do I
troubleshoot?
👉 I’d first ask them to run docker-compose config to check for syntax errors. Then, I’d verify they are
using the correct .env file and have the necessary images pulled. If they still face issues, I’d
suggest running docker-compose up --build to force a rebuild of images.
6. My team needs to use different environment variables for dev and prod. How do
I manage this?
👉 I’d create two separate environment files: .env.dev and .env.prod. In docker-compose.yml, I’d use
env_file: to inject the appropriate variables into each service. When deploying, I’d select the file
explicitly with docker-compose --env-file .env.prod up -d.
7. A container takes too long to start because a dependent service is not ready.
How do I handle this?
👉 I’d use the depends_on keyword in docker-compose.yml, but since it doesn’t wait for readiness, I’d
add a healthcheck. For example, in a database service, I’d configure test: ["CMD", "mysqladmin", "ping",
"-h", "localhost"] to ensure it’s ready before dependent services start.
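A sketch of that pattern in docker-compose.yml (service names are placeholders; the `depends_on` condition form requires Compose file format 2.1 or the newer Docker Compose v2 CLI):

```yaml
services:
  db:
    image: mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    image: myapp
    depends_on:
      db:
        condition: service_healthy
```

With this in place, `app` is only started once the database healthcheck passes, instead of merely after the db container process launches.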
👉 I’d explain that running docker-compose down stops containers but keeps named volumes. To
remove unused volumes, I’d use docker volume prune. If I want to remove only the project-specific
volumes, I’d run docker-compose down -v.
9. My company wants to scale services dynamically. How can I achieve this with
Docker Compose?
👉 I’d use the --scale option in Docker Compose. For example, to run three instances of the web
service, I’d execute docker-compose up --scale web=3 -d. This ensures load balancing among multiple
containers of the same service.
👉 I’d use docker-compose config. This command checks for syntax errors and merges environment
variables, ensuring the final configuration is valid before starting containers.
12. The database container data is lost every time I restart. How do I fix this?
👉 I’d check if the database service is using a named volume. If it’s missing, I’d update docker-
compose.yml to include volumes: - db_data:/var/lib/mysql. Named volumes ensure data persists even
when containers stop or restart.
👉 If I only need to update the image without changing other configurations, I’d use docker-
compose pull service-name to get the latest version. Then, I’d restart the container with docker-
compose up -d service-name.
These scenarios cover real-world situations you might face when working with Docker
Compose. Mastering these will make you confident in interviews and on the job!
Here are 15 real-world Docker Registry & Image Distribution interview questions in work
scenario format, with answers in first-person:
"In an air-gapped environment, I would deploy a private registry using the registry:2 image. Since
there’s no internet, I’d transfer images manually using USB or offline storage. I’d use docker save
to export images and docker load to import them. For security, I’d set up TLS using self-signed
certificates and enforce basic authentication with an .htpasswd file."
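The save/load round-trip described above, sketched with placeholder image and registry names:

```shell
# On an internet-connected machine: export the image to a tar archive
docker pull myapp:1.0
docker save -o myapp-1.0.tar myapp:1.0
# ...transfer the tar file by USB or other offline media...

# On the air-gapped host: import and push to the internal registry
docker load -i myapp-1.0.tar
docker tag myapp:1.0 registry.internal:5000/myapp:1.0
docker push registry.internal:5000/myapp:1.0
```

`registry.internal:5000` stands in for whatever hostname the private `registry:2` deployment is served under.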
2. How do you ensure images are scanned for vulnerabilities before deployment?
"I integrate image scanning into the CI/CD pipeline. If using Docker Hub, I enable automated
scans. For private registries, I use Harbor or Trivy for vulnerability scanning. In AWS ECR, I turn
on built-in image scanning, and in Azure ACR, I use Microsoft Defender for Containers."
3. What would you do if pulling images from Docker Hub suddenly becomes slow?
"First, I’d check Docker Hub’s status and network latency using ping and traceroute. If it’s a rate
limit issue, I’d authenticate with Docker Hub to increase the pull limits. If the problem persists,
I’d consider setting up a private registry as a caching proxy using registry:2 with the proxy
configuration to mirror Docker Hub images locally."
4. Your team is new to Amazon ECR. How would you guide them on pushing and
pulling images?
"I’d start by ensuring everyone has AWS CLI installed and configured with IAM permissions.
Then, I’d walk them through authentication using aws ecr get-login-password. I’d explain how to
create a repository using aws ecr create-repository, tag an image with the correct ECR URL, and
push it using docker push. For pulling, I’d demonstrate docker pull after authentication."
"For access control, I use authentication and role-based access control (RBAC). In Harbor, I
configure RBAC roles and integrate LDAP. For AWS ECR, I manage IAM policies to restrict access.
In a self-hosted registry, I use Nginx as a reverse proxy with HTTP basic authentication and
token-based authentication when needed."
"I’d first verify the local image’s integrity using docker inspect before pushing again. If the issue
persists, I’d check the registry logs for errors. In cases of registry corruption, I’d restart the
service and, if necessary, rebuild it using backup data. If it’s an ECR or GCR issue, I’d check cloud
service logs and attempt a re-push."
7. Your company wants to switch from Docker Hub to Azure Container Registry
(ACR). How would you migrate images?
"I’d list all images and tags using docker images. Then, I’d authenticate to ACR using az acr login,
tag each image with the ACR URL, and push them using docker push. If migrating a large number
of images, I’d write a script using docker save and docker load to transfer them efficiently."
8. What would you do if unauthorized users gained access to your private registry?
"First, I’d revoke all credentials and generate new ones. I’d check logs to see which images were
accessed and if any were modified. Then, I’d enable two-factor authentication if supported,
enforce stricter IAM roles (in AWS ECR or ACR), and ensure transport security with TLS. Finally,
I’d scan all images for potential tampering."
"I’d set up an internal image cache using a private registry mirror. For example, I’d deploy
registry:2 with the proxy feature to cache frequently used images locally. In Kubernetes, I’d use
imagePullPolicy: IfNotPresent to avoid unnecessary pulls. I’d also optimize the network and use
smaller base images to reduce transfer size."
10. What steps do you take when an image push to Google Container Registry
(GCR) fails?
"I’d start by ensuring authentication with gcloud auth configure-docker. Then, I’d check permissions
in Google IAM. If the issue persists, I’d verify that my image is properly tagged with
gcr.io/PROJECT_ID/image-name. Finally, I’d check GCR service status and retry the push with debug
logging enabled."
"I enable image garbage collection to remove unused layers. In AWS ECR, I set up lifecycle
policies to delete old images automatically. In Harbor, I configure retention policies. I also use
lightweight base images (like Alpine) to minimize storage footprint."
12. A team member accidentally deleted an important image from your registry.
How do you recover it?
"If it’s a managed registry like ECR, I’d check if the image is cached in a Kubernetes node or a
local machine. If we have backups, I’d restore the image from there. If there’s no backup, I’d
rebuild the image from source code using the Dockerfile and dependencies."
13. What do you do if pulling an image results in a "manifest unknown" error?
"I’d first verify that the image exists in the registry using docker manifest inspect. If the tag is
incorrect, I’d check the correct tag in docker images or the registry UI. If it’s a multi-architecture
issue, I’d confirm that the correct platform is specified using --platform when pulling."
14. How would you handle multiple environments (dev, staging, production) with a
private registry?
"I’d use different repositories or prefixes for each environment (e.g., registry.company.com/dev/app,
registry.company.com/prod/app). I’d implement CI/CD automation to push to the appropriate
registry based on environment variables. I’d also set up access controls so that only production
services can pull from the production repository."
"I’d start by checking network latency using ping and traceroute. I’d also check CPU and disk I/O
on the registry server to rule out resource bottlenecks. If using AWS ECR or GCR, I’d check service
status and authentication latency. Finally, I’d optimize the image size using multi-stage builds
and reducing unnecessary layers."
These work scenarios cover real-world Docker registry issues and best practices, preparing you
for any technical interview question. Would you like me to refine any answers or add more?
Here are 15 real-world Docker security scenarios with answers in first-person
perspective, so you can confidently tackle any interview.
My Response:
I would first check the Dockerfile for any USER directives. If it's missing, I’d suggest modifying it to
create and switch to a non-root user:
RUN useradd --create-home appuser
USER appuser
Then, I’d educate the team on why running as root is risky—it increases the attack surface in
case of a container escape. Finally, I’d implement CI/CD checks to fail builds that run as root.
2. You need to scan images for vulnerabilities before deployment. What tools do
you use?
My Response:
I’d integrate Trivy into our CI/CD pipeline because it’s lightweight and fast:
trivy image myapp:latest
For deeper analysis, I’d use Clair in our registry or docker scan (powered by Snyk). If
vulnerabilities are found, I’d assess their severity and either patch the base image or update
dependencies.
3. A containerized app needs to store an API key. How do you manage secrets
securely?
My Response:
I’d advise against using environment variables since they can be exposed via docker inspect or
logs. Instead, I’d use Docker Secrets if we’re using Swarm:
echo "my-api-key" | docker secret create api_key -
docker service create --secret api_key myapp
For non-Swarm environments, I’d integrate HashiCorp Vault or AWS Secrets Manager, ensuring
secrets are fetched dynamically at runtime rather than baked into images.
4. Your team accidentally pushed a sensitive file into a Docker image. What do you
do?
First, I’d remove the compromised image from the registry and rebuild it without the sensitive
file (deleting it in a later layer does not remove it from the image history). Then, I’d rotate all
exposed credentials to prevent misuse. To avoid future issues, I’d enforce
.dockerignore files in all projects:
.env
secrets.json
I’d also integrate Snyk or Trivy into our pipeline to detect secrets in images before pushing
them.
My Response:
I’d enable Docker Content Trust (DCT) to allow only signed images:
export DOCKER_CONTENT_TRUST=1
I’d also enforce image source policies using tools like Notary or a private Docker registry with
image signing enabled. Finally, I’d conduct team training on avoiding unverified third-party
images.
Scenario: You need to prevent containers from communicating unless explicitly allowed.
My Response:
I’d create a user-defined bridge network and put only the containers that must talk to each other on it:
docker network create trusted_net
I’d also disable inter-container communication on the default bridge via /etc/docker/daemon.json:
{
  "icc": false
}
If stricter isolation is needed, I’d use --network=none for completely sandboxed containers.
My Response:
First, I’d isolate the container:
docker network disconnect bridge <container_id>
docker pause <container_id>
I’d also scan the container filesystem for unexpected modifications and use docker diff to
compare it with the base image.
Finally, I’d pull fresh images and implement security scanning to prevent future breaches.
My Response:
I’d immediately restrict access by enabling TLS authentication:
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376
I’d also configure a firewall to allow only trusted IPs to access the daemon. Lastly, I’d disable
the Docker socket for non-root users.
I’d cap each container’s resources at run time:
docker run -d --memory=512m --cpus=1 myapp
For stricter controls, I’d use cgroups to define limits in daemon.json. I’d also enable resource
monitoring using docker stats or Prometheus to detect anomalies early.
My Response:
I’d use the no-new-privileges flag to block processes from gaining extra privileges:
docker run -d --security-opt no-new-privileges myapp
Lastly, I’d enforce rootless Docker to ensure even if a process escalates inside the container, it
has no impact on the host.
My Response:
I’d use read-only mounts for sensitive paths:
docker run -d -v /etc/app/config:/etc/app/config:ro myapp
For extra protection, I can make the whole container filesystem read-only with --read-only.
12. A containerized app needs to bind to port 80, but you don’t want to run it as
root. How do you handle it?
My Response:
I’d run the app as a non-root user and use CAP_NET_BIND_SERVICE to allow binding to port 80:
docker run -d --user appuser --cap-add NET_BIND_SERVICE -p 80:8080 myapp
The app itself would listen on 8080 inside the container, avoiding the need for root access.
13. How do you safely allow a container to access the host filesystem?
My Response:
I’d use bind mounts with read-only access:
docker run -d -v /host/data:/data:ro myapp
For stricter security, I’d use tmpfs mounts instead of bind mounts when possible.
14. A containerized process needs system calls that are blocked by default. What
do you do?
My Response:
I’d create a custom seccomp profile allowing only the necessary syscalls.
15. How do you handle expired TLS certificates in your containerized apps?
My Response:
I’d automate certificate renewal using Certbot or AWS ACM and ensure containers reload them
dynamically without needing a restart.
Final Thoughts
This list covers practical, real-world Docker security scenarios you might face in a job. Let me
know if you want to go deeper into any topic!
If I notice imbalance, I inspect the constraints and redeploy with a placement strategy:
docker service update --constraint-add 'node.role==worker' my_service
2. A rolling update failed, and now users are experiencing downtime. What do I
do?
I immediately roll back the service to the last working version:
docker service rollback my_service
3. A Swarm node is down or unhealthy. How do I handle it?
I check the cluster state:
docker node ls
If the node cannot be recovered, I replace it with a new one by adding a new worker.
For an automated approach, I integrate an external monitoring tool like Prometheus to auto-
scale based on CPU/memory usage.
If I find network issues, I ensure the service is attached to the correct overlay network.
If the issue is a crash loop, I fix the underlying cause (e.g., missing dependencies).
8. How do I update a service with zero downtime?
docker service update --update-parallelism 2 --update-delay 10s --image myapp:v2 my_service
This updates two containers at a time with a 10-second delay to ensure smooth transitions.
version: '3'
services:
  web:
    image: nginx
    deploy:
      replicas: 3
  db:
    image: mysql
11. How do I ensure logs from all containers are collected centrally?
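One common answer is to configure a logging driver that ships logs to a central collector; a sketch using the syslog driver (the log host address is a placeholder, and drivers like fluentd or gelf work the same way):

```shell
# Ship a single container's logs to a central syslog server
docker run -d --log-driver syslog \
  --log-opt syslog-address=udp://logs.internal:514 nginx

# Or set the driver for a whole Swarm service
docker service create --name web --log-driver syslog \
  --log-opt syslog-address=udp://logs.internal:514 nginx
```

A default driver can also be set cluster-wide via "log-driver" in /etc/docker/daemon.json, so individual services don't need the flags.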
If an update fails, I roll back first:
docker service update --rollback <service_name>
Then, I analyze what went wrong and apply a fix before reattempting the update.
This hands-on approach ensures I can handle any Docker Swarm issue confidently in an
interview or real-world scenario. Let me know if you want me to dive deeper into any topic!
Here are 15 real-world work scenario questions related to Kubernetes and Docker Swarm,
along with answers in first-person perspective:
1. You need to deploy a highly available web application. Which orchestration tool
do you choose?
I would use Kubernetes because it provides built-in high availability through ReplicaSets and
self-healing features. It ensures that if a node or pod fails, the application remains available by
rescheduling the workload to another node.
2. Your team is working on a small internal project that requires quick container
deployment. What would you use?
For a small internal project, I’d go with Docker Swarm since it’s lightweight, simple to set up,
and doesn’t require the overhead of Kubernetes. Swarm’s native integration with Docker makes
it ideal for quick deployments.
3. You need to perform a rolling update without downtime. How would you do it?
If I’m using Kubernetes, I’d define a new image version in the Deployment YAML and apply it.
Kubernetes will gradually update the pods, ensuring zero downtime.
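That flow can be sketched with kubectl (deployment, container, and image names are placeholders):

```shell
# Point the deployment's container at the new image version
kubectl set image deployment/myapp web=myapp:1.2.0

# Watch the rolling update progress until it completes
kubectl rollout status deployment/myapp

# If the new version misbehaves, roll back to the previous revision
kubectl rollout undo deployment/myapp
```

The update strategy (maxSurge/maxUnavailable) in the Deployment spec controls how many pods are replaced at a time.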
In Kubernetes, I’d use Network Policies to control communication between pods and TLS
secrets for encryption. In Docker Swarm, I’d leverage overlay networks and Swarm Secrets.
6. You need to autoscale an application based on CPU load. How do you do it?
In Kubernetes, I’d use the Horizontal Pod Autoscaler:
kubectl autoscale deployment myapp --cpu-percent=80 --min=2 --max=10
Docker Swarm lacks built-in auto-scaling, so I’d need an external tool like Prometheus +
Alertmanager to scale services manually.
For Kubernetes, I’d use Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). For
Docker Swarm, I’d use Docker Volumes or NFS mounts.
Since Kubernetes automatically recreates pods using ReplicaSets, the application won’t be
affected. To prevent future issues, I’d enforce RBAC policies to restrict deletion permissions.
10. Your application must be deployed across multiple cloud providers. Which tool
do you choose?
I’d choose Kubernetes, as it supports multi-cloud deployments using services like Google
Anthos or Azure Arc, allowing seamless workload distribution.
I’d use a Service of type LoadBalancer or configure an Ingress Controller for advanced routing.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Kubernetes automatically detects the failure and reschedules affected pods on other healthy
nodes, ensuring high availability.
13. You need to ensure sensitive database credentials aren’t exposed in your
containers. How do you manage this?
In Kubernetes, I’d store them as Secrets and expose them to pods as environment variables or
mounted files. In Docker Swarm, I’d use Swarm Secrets and mount them as files inside containers.
I’d deploy a second version of the application with a different label, then update the service
selector to switch traffic to the new version. Alternatively, I could use Istio for gradual traffic
shifting.
Would you like more scenario-based questions, or do you want a mock interview to practice
live responses?
Here are 15 real-world Docker work scenarios with answers in the first person. These responses
are structured to show practical problem-solving skills in an interview setting.
"I enable Docker BuildKit by setting DOCKER_BUILDKIT=1. Then, I optimize the Dockerfile by ordering instructions so rarely-changing layers come first (dependencies before source code), using multi-stage builds, and trimming the build context with .dockerignore.
2. You need to deploy a Dockerized app to different CPU architectures. How do you
do it?
"I use Docker Buildx to build and push a multi-architecture image. I create a new builder
instance with:
Then, I build and push the image for both AMD64 and ARM64:
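A typical Buildx workflow looks like this (the builder name and image name are placeholders):
docker buildx create --name multiarch --use
docker buildx build --platform linux/amd64,linux/arm64 -t username/myapp:v1 --push .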
"I run docker stats to monitor live resource usage. Then, I check cgroups:
cat /sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes
(On cgroup v2 hosts, the equivalent file is memory.current under /sys/fs/cgroup/system.slice/docker-<container_id>.scope/.)
"I use Docker BuildKit secrets instead of ENV. First, I pass the secret at build time:
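A sketch of what that looks like, assuming the secret lives in ./mysecret (the id name mytoken is a placeholder):
DOCKER_BUILDKIT=1 docker build --secret id=mytoken,src=./mysecret -t myapp .
Then, in the Dockerfile, the secret is only available during the RUN step and never lands in an image layer:
RUN --mount=type=secret,id=mytoken cat /run/secrets/mytoken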
6. You need to run a database and an app in Docker. How do you set it up?
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
  app:
    image: myapp
    depends_on:
      - db
This starts the database container before the app. Note that depends_on only orders startup; it doesn't wait for Postgres to be ready, so I'd add a healthcheck or retry logic for that."
7. Your Docker container can't reach an external API. How do you fix it?
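A first-pass check, assuming the container has a shell and curl installed (the API URL is a placeholder), is to test DNS and connectivity from inside it:
docker exec -it <container_id> cat /etc/resolv.conf
docker exec -it <container_id> curl -v https://api.example.com
If DNS is the problem, restarting the container with an explicit resolver (--dns 8.8.8.8) often helps.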
8. Your build works locally but fails in the CI/CD pipeline. How do you debug it?
"I check the CI/CD logs for errors, then try to reproduce the issue locally.
If the error is permission denied, I make the entrypoint executable, e.g. in the Dockerfile:
RUN chmod +x entrypoint.sh
FROM node:18 AS builder   # builder stage assumed; the original snippet was truncated
WORKDIR /app
COPY . .
RUN npm run build

FROM node:18
WORKDIR /app
COPY --from=builder /app/dist .
CMD ["node", "dist/index.js"]
10. You need to monitor multiple containers' resource usage. What do you do?
"I use docker stats for real-time monitoring. For long-term tracking, I integrate with Prometheus
and Grafana:
"I check:
12. How do you roll out a zero-downtime update for a Docker app?
"I use Docker Swarm or Kubernetes rolling updates. In Docker Compose, I deploy with:
docker-compose up -d --scale app=2
For Kubernetes, I'd update the Deployment image (kubectl set image deployment/myapp myapp=myrepo/myapp:v2) and let the default RollingUpdate strategy replace pods one at a time."

13. How do you run a scheduled task in a container?
"In Kubernetes, I'd use a CronJob, which runs the container on a schedule:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: myapp
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: myapp
              image: myrepo/myapp
          restartPolicy: OnFailure
This runs myapp at the top of every hour."
14. How do you store persistent data in a Docker container?
"I use volumes instead of bind mounts. For example:
docker volume create mydata
docker run -v mydata:/data myapp
15. How do you automatically remove stopped containers and dangling images?
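The usual commands, keeping in mind that prune deletes data irreversibly:
docker container prune -f   # remove all stopped containers
docker image prune -f       # remove dangling images
docker system prune -f      # stopped containers, unused networks, and dangling images in one go
Adding --restart=on-failure or --rm at docker run time avoids accumulating stopped containers in the first place.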
Final Thoughts
These scenarios cover common challenges in real-world Docker environments. If you want to go
deeper on any topic, let me know!
Here are 15 real-world Docker in production scenarios with answers in the first person, so you
can confidently handle any interview.
1. A containerized application is running but logs show errors. How do you debug
it?
I first inspect the logs using docker logs <container_id>. If needed, I increase verbosity in the
application for more details. Next, I check if the container is healthy using docker ps or docker
inspect <container_id>. If the issue persists, I enter the container with docker exec -it <container_id>
/bin/sh to troubleshoot manually. I also check resource limits (docker stats) and network issues
(docker network inspect).
2. How do you set up centralized logging for multiple containers?
I use Fluentd, ELK (Elasticsearch, Logstash, Kibana), or Loki. I configure my containers to send
logs using a logging driver like:
logging:
  driver: "fluentd"
  options:
    fluentd-address: localhost:24224
For Kubernetes, I deploy Fluentd as a DaemonSet, ensuring all logs are forwarded to
Elasticsearch or Loki for storage and visualization.
3. A Docker image is too large, slowing deployments. How do you optimize it?
I start by using a smaller base image, like alpine, instead of ubuntu. I then apply multi-stage
builds to remove unnecessary files. For example:
# builder stage (not shown) is assumed to produce the /app/myapp binary
FROM alpine
COPY --from=builder /app/myapp /app
CMD ["/app"]
I also remove unnecessary layers and use .dockerignore to exclude files like node_modules.
I set up Prometheus to scrape container metrics and Grafana to visualize them. I configure
cadvisor or node-exporter for detailed container metrics. A simple prometheus.yml might look like:
scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['localhost:9323']
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1

env_file:
  - .env
I check usage with docker stats and docker top <container_id>. If needed, I limit resources in docker-
compose.yml:
deploy:
  resources:
    limits:
      cpus: "0.5"
      memory: "512M"
I maintain two environments (blue and green). The live version (blue) serves traffic, while I
deploy the new version in green. Once tested, I switch the load balancer to green, making it live.
With NGINX:
upstream backend {
    server green:5000;
}
First, I check logs (docker logs <container_id>) and restart policies (docker inspect). I also verify if a
health check is failing. If it's Kubernetes, I check the event logs:
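The Kubernetes checks look like this (the pod name is a placeholder):
kubectl describe pod <pod_name>       # events are listed at the bottom
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl logs <pod_name> --previous    # logs from the crashed instance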
If needed, I run the container interactively with docker run -it --entrypoint /bin/sh myapp to debug
manually.
For Kubernetes, I use ReplicaSets and horizontal pod autoscaling (HPA) to adjust instances based
on load.
I configure NGINX:
server {
    listen 80;
    location / {
        proxy_pass http://myapp:5000;
    }
}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-db-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Final Thoughts
These scenarios cover real-world Docker challenges. If you want a mock interview, I can
simulate an interviewer and challenge you with follow-ups. Would you like to try that?
15 Docker & Cloud Provider Work Scenarios (With Answers in 1st Person)
Q: How would you deploy a Docker container on AWS ECS using Fargate?
A: First, I create an ECS cluster using AWS Management Console or AWS CLI. Then, I define a
task definition, specifying the Docker image, CPU/memory, and networking settings. Next, I
create an ECS service, ensuring it runs the desired number of tasks and integrates with an ALB if
needed. Finally, I monitor the deployment using CloudWatch logs and ECS service events.
Q: If an application running on Azure AKS is experiencing high traffic, how do you scale it?
A: I use Horizontal Pod Autoscaler (HPA) to automatically scale pods based on CPU or memory
usage. I can configure this with a simple kubectl autoscale command or define it in a YAML file. If
node capacity is insufficient, I enable Cluster Autoscaler to provision more nodes dynamically.
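A declarative version of the HPA, as a sketch assuming a Deployment named myapp:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70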
Q: How would you migrate a monolithic app to microservices using Docker and AWS?
A: I start by breaking the monolith into smaller services and containerizing them. Each service
gets its own Docker container. I use AWS Fargate or EKS to run these microservices, and an API
Gateway for communication. I also implement service discovery with AWS Cloud Map and
monitoring with CloudWatch and Prometheus.
Q: How do you ensure security when deploying Docker containers on Azure AKS?
A: I enforce role-based access control (RBAC) to restrict permissions, enable Azure Policy for
compliance, and scan Docker images using Microsoft Defender for Containers. Additionally, I
use pod security policies and network policies to prevent unauthorized access.
Q: How do you store and pull private Docker images securely in AWS?
A: I push images to Amazon Elastic Container Registry (ECR), then configure authentication
with aws ecr get-login-password. I use IAM policies to restrict access and enable image scanning for
vulnerabilities.
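The login step looks like this (the account ID and region are placeholders):
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com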
Q: How do you reduce container hosting costs using AWS Spot Instances?
A: I configure EKS node groups with Spot Instances or use ECS Capacity Providers to mix Spot
and On-Demand instances for cost savings while maintaining availability.
These scenarios cover real-world Docker and cloud provider challenges you might face in an
interview or on the job. Would you like me to add more advanced scenarios?
Great! Here are 15 real-world Docker troubleshooting scenarios, each with a question and an
answer in the first person. These will help you think like an experienced DevOps engineer.
Answer:
This means the Docker daemon is either unresponsive or not running. First, I check if it's
running using:
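On a systemd host, that check (and a restart, if needed) is:
systemctl status docker
sudo systemctl restart docker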
If it’s running but unresponsive, I check for resource exhaustion (free -m for RAM, df -h for disk
space). If nothing else works, I check the Docker logs with journalctl -u docker --no-pager | tail -n 50.
Answer:
First, I check if my host has internet using ping google.com. If my host is fine, I run inside the
container:
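Typical in-container checks, assuming the image has a shell and ping installed:
docker exec -it <container_id> ping -c 3 8.8.8.8    # raw connectivity
docker exec -it <container_id> cat /etc/resolv.conf  # DNS configuration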
If that doesn’t help, I inspect the network using docker network inspect bridge to check for
misconfigurations.
4. Port Conflict – How Do You Resolve It?
Scenario: I try to start a container, but it says the port is already in use.
Answer:
I check which process is using the port with:
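On Linux, for example (port 8080 here is a placeholder):
sudo ss -ltnp | grep :8080
# or, equivalently:
sudo lsof -i :8080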
If a non-Docker process is using it, I either stop it or restart my container on a different port (-p
8081:80).
Answer:
Docker Hub limits unauthenticated pulls. I log in with:
docker login
If I need a workaround, I pull from another registry (e.g., AWS ECR, GitHub Container Registry).
Answer:
I first check disk usage:
df -h
docker system df
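If Docker data is the culprit, cleanup follows; note that -a and --volumes delete anything not currently in use by a running container:
docker system prune -a --volumes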
Answer:
I check usage with:
docker stats
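If one container is the offender, limits can be applied without recreating it (the container ID is a placeholder):
docker update --cpus 0.5 --memory 512m --memory-swap 512m <container_id>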
Answer:
I check if they’re on the same network using:
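The check itself, with placeholder names:
docker network ls
docker network inspect <network_name>   # both containers should appear under "Containers"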
docker-compose up -d db
docker-compose up -d web
Answer:
I check volume mounts:
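For example:
docker inspect <container_id> --format '{{json .Mounts}}'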
If the file is missing, I verify the source directory on my host. I also check permissions (temporarily loosening them with chmod 777 to rule out a permissions issue, then restoring something stricter once confirmed).
Answer:
This usually happens due to cached layers. I rebuild without cache:
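For example (the image name is a placeholder):
docker build --no-cache -t myapp .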
Answer:
I check if the path exists:
ls -ld /my/data
mkdir -p /my/data
chmod 777 /my/data
Answer:
I limit log size in daemon.json:
{
"log-driver": "json-file",
"log-opts": { "max-size": "10m", "max-file": "3" }
}
Answer:
I check the logs:
Answer:
I check if it has a restart policy:
restart: always
These are real-world scenarios that will prepare you for any Docker interview. Want me to
create some tricky follow-up questions?