Docker Qa 2 Ans Print
Docker Compose is a tool used for defining and running multi-container Docker
applications. With Compose, you can configure your application’s services, networks, and
volumes in a single YAML file, typically named docker-compose.yml. This makes it easier
to manage and orchestrate multiple containers that work together.
Basic Commands
1. docker-compose up
o Starts the services defined in the docker-compose.yml file. It creates
containers, networks, and volumes as needed. Use -d to run in detached
mode (in the background).
o Example: docker-compose up -d
2. docker-compose down
o Stops and removes the containers and networks created by docker-compose up
(pass -v to also remove volumes).
o Example: docker-compose down
3. docker-compose build
o Builds the images defined in the Compose file. Use this command when
you change any Dockerfile or dependencies.
o Example: docker-compose build
4. docker-compose start
o Starts existing containers for the services defined in the Compose file.
o Example: docker-compose start
5. docker-compose stop
o Stops running containers without removing them. You can start them
again later.
o Example: docker-compose stop
6. docker-compose restart
o Stops and then starts the services again.
o Example: docker-compose restart
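The commands above all operate on a Compose file. A minimal sketch of one, assuming a hypothetical web service built from a local Dockerfile plus a Redis dependency (service names, images, and ports are illustrative):

```yaml
version: "3.8"
services:
  web:
    build: .              # build the image from the Dockerfile in this directory
    ports:
      - "8080:80"         # publish container port 80 on host port 8080
    depends_on:
      - redis             # start redis before web
  redis:
    image: redis:7-alpine
```

With this file in place, docker-compose up -d starts both services and docker-compose down removes them again.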
Service Management
7. docker-compose logs
o Displays the logs from the services. You can specify a service name to view
logs for just that service.
o Example: docker-compose logs or docker-compose logs web
8. docker-compose exec
o Executes a command in a running container. Useful for running
commands like bash or a database client.
o Example: docker-compose exec web bash
9. docker-compose ps
o Lists the containers that are part of the Compose application, showing
their status.
o Example: docker-compose ps
Additional Options
10. docker-compose pull
o Pulls the images for the services defined in the Compose file from a
Docker registry.
o Example: docker-compose pull
11. docker-compose push
o Pushes the built images for the services to a Docker registry.
o Example: docker-compose push
12. docker-compose --version
o Displays the installed version of Docker Compose.
o Example: docker-compose --version
13. docker-compose help
o Lists all available commands and their usage.
o Example: docker-compose help
Docker comes with built-in network drivers that implement core networking functionality:
1. Bridge –
→ When a container is created without specifying a network driver, it is attached to the
bridge network, which is the default network.
→ Bridge networks create a software-based bridge between your host and the container.
→ Containers connected to the network can communicate with each other, but they’re
isolated from those outside the network.
→ Each container in the network is assigned its own IP address. Because the network’s
bridged to your host, containers are also able to communicate on the internet.
2. host -
→ Containers share the host's network stack directly, which removes network isolation
between the Docker host and the containers.
→ They aren’t allocated their own IP addresses, and port binds will be published directly to
your host’s network interface. This means a container process that listens on port 80 will
bind to <your_host_ip>:80.
3. null –
→ IP addresses won’t be assigned to containers. These containers are not accessible to us
from the outside or from any other container.
→ The null (none) network keeps the container in complete isolation; it is not connected to
any network or to other containers.
→ This network is generally used to run batch jobs which are scheduled programs that are
assigned to run without any further interaction.
4. overlay –
→ An overlay network enables connections between multiple Docker daemons and lets
different Docker Swarm services communicate with each other.
→ Overlay networks are distributed networks that span multiple Docker hosts.
→ The network allows all the containers running on any of the hosts to communicate with
each other without requiring OS-level routing support.
→ Overlay networks implement the networking for Docker Swarm clusters, but you can also
use them when you’re running two separate instances of Docker Engine with containers
that must directly contact each other. This allows you to build your own Swarm-like
environments.
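In a Swarm stack file, an overlay network can be declared once and attached to services; a sketch, assuming hypothetical service and network names:

```yaml
version: "3.8"
services:
  api:
    image: myorg/api:1.0   # hypothetical image
    networks:
      - backend
networks:
  backend:
    driver: overlay        # spans all nodes in the Swarm cluster
```

Deployed with docker stack deploy, replicas of api running on different hosts can reach each other over the backend network.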
CMD vs ENTRYPOINT:
Combination: You can use both together. If you use both ENTRYPOINT and CMD, CMD
provides default arguments to the ENTRYPOINT.
Behavior:
→ If you define only CMD, you can specify a command when you run the container, and
that command will be executed.
→ If you define only ENTRYPOINT, that command will always run, regardless of what
you specify at runtime.
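These rules can be sketched in a Dockerfile; the executable and arguments here are illustrative:

```dockerfile
FROM alpine:3.19
# ENTRYPOINT fixes the executable; CMD supplies default arguments to it.
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
# docker run <image>            -> runs: ping -c 3 localhost
# docker run <image> 8.8.8.8    -> runs: ping -c 3 8.8.8.8 (CMD is overridden)
```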
4) Explain multi-stage builds
Multi-stage builds in Docker are a technique that allows you to use multiple FROM
statements in a single Dockerfile. This approach enables you to separate the build
environment from the runtime environment, resulting in smaller and more efficient
Docker images.
# Stage 1: Build
FROM maven:3.8.1-openjdk-11 AS builder
WORKDIR /app
COPY . .
RUN mvn -q package
# Stage 2: Runtime (only the built artifact is copied over)
FROM openjdk:11-jre-slim
COPY --from=builder /app/target/*.jar /app.jar
Volumes in Docker are a mechanism for persisting and managing data generated and
used by Docker containers. They allow data to exist outside of the container's
filesystem (i.e., it is stored on the host), ensuring that data is not lost when containers are
stopped or removed. Volumes also allow data to be shared between containers by mounting
the same volume directory into other containers.
2. Bind Mounts:
→ Bind mounts are another way to give containers access to files and folders on your host.
→ They directly mount a host directory into your container. Any changes made to the
directory will be reflected on both sides of the mount, whether the modification
originates from the host or within the container.
→ Bind mounts are best used for ad-hoc storage on a short-term basis. They’re convenient
in development workflows.
For example: bind mounting your working directory into a container automatically synchronizes
your source code files, allowing you to immediately test changes without rebuilding your Docker
image. Volumes are a better solution when you’re providing permanent storage to operational
containers. Because they’re managed by Docker, you don’t need to manually maintain
directories on your host. There’s less chance of data being accidentally modified and no
dependency on a particular folder structure. Volume drivers also offer increased performance
and the possibility of writing changes directly to remote locations.
Persistent Storage: Volumes provide a way to store data that persists even after the
container is stopped or deleted. This is crucial for databases and applications that require
data retention.
Managed by Docker: Docker manages the lifecycle of volumes, which makes it easier to
back up and restore data.
1. Creating a Volume
docker volume create my_volume
2. Using a Volume in a Container
To use a volume when running a container, you can specify it with the -v or --mount option.
Using -v Option: docker run -v my_volume:/data your_image
Using --mount Option (more explicit and recommended for complex configurations):
docker run --mount source=my_volume,target=/data your_image
3. Listing Volumes
docker volume ls
4. Removing a Volume
docker volume rm my_volume
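The CLI commands above have a Compose equivalent: a named volume is declared once at the top level and mounted into services. A sketch, assuming a hypothetical Postgres service:

```yaml
version: "3.8"
services:
  db:
    image: postgres:16
    volumes:
      - my_volume:/var/lib/postgresql/data   # data survives container removal
volumes:
  my_volume:
```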
-it: Runs the container interactively with a terminal attached (useful for debugging).
-d: Runs the container in detached mode, allowing it to run in the background.
-itd: Combines both; the container runs interactively in the background.
10) While running a container, the container is not running and an error comes up. How do you
troubleshoot it? Or:
Your workload is deployed in Docker containers and you notice one container is
continuously restarting. What is your approach, which commands do you use, and how
will you troubleshoot?
Check the container's state and exit code with docker ps -a, read its output with docker logs
<container_id>, and review its configuration and restart policy with docker inspect
<container_id>. The logs usually show whether the application is crashing, misconfigured, or
failing a health check.
Dangling images are layers in Docker that are not associated with any tagged images. They
typically arise during the image build process when intermediate layers are created but not
tagged or when an image is updated and the previous version is left without a tag.
No Tag: They have a <none> tag and are not referenced by any running container or
application.
Space Consumption: They can consume disk space unnecessarily, as they represent
unused layers.
Common Cause: Dangling images are often a result of failed builds or repeated builds
that overwrite previous images.
The docker system prune command is a powerful tool for cleaning up unused Docker
resources. It helps to reclaim disk space by removing stopped containers, unused networks,
dangling images, and optionally, unused volumes.
What It Does:
1. Removes Stopped Containers: Any container that is not currently running will
be removed.
2. Removes Unused Networks: Networks that are not connected to any containers will be
deleted.
3. Cleans Up Dangling Images: It removes images that are not tagged or associated with
any containers.
4. Optional Volume Removal: By default, it does not remove unused volumes, but you can
include this by using the --volumes flag.
Command Syntax:
docker system prune
Cleans up unused data, including stopped containers, unused networks, dangling images, and build cache.
Options:
o -a, --all: Remove all unused images, not just dangling ones.
o -f, --force: Do not prompt for confirmation.
o --volumes: Remove all unused volumes.
2. Prune Containers
docker container prune
Removes all stopped containers.
Options:
o -f, --force: Do not prompt for confirmation.
3. Prune Images
docker image prune
Removes dangling images (images not tagged and not referenced by any container). Use -a to
remove all unused images.
4. Prune Networks
docker network prune
Removes all networks not used by at least one container.
Options:
o -f, --force: Do not prompt for confirmation.
5. Prune Volumes
docker volume prune
Removes all unused local volumes.
Options:
o -f, --force: Do not prompt for confirmation.
1. Image Builds: When you build an image, intermediate layers are created. If the build is
interrupted or a new build is done, old, untagged layers become dangling.
2. Image Updates: Pulling a new version of an image may leave behind layers that are no
longer tagged, turning them into dangling images.
3. Failed Builds: If a build fails after some layers are created, those layers can become
dangling because they aren’t part of a successful image.
4. Image Deletion: Removing a tagged image can leave behind layers that are no longer
referenced, causing them to become dangling.
Storage Location
Linux: /var/lib/docker
Windows: C:\ProgramData\Docker
macOS: Managed through Docker Desktop.
Cleanup: Use docker system prune to reclaim space in these directories rather than deleting
files under them manually.
Yes, you can download images without specifying a tag. If you pull an image without a tag,
Docker defaults to the latest tag.
Important Notes:
Using Tags: It’s generally best practice to specify a tag to ensure you get the exact
version you need (e.g., ubuntu:20.04), as using latest may lead to unexpected
changes if the image is updated.
No Tagged Version: If the image doesn’t have a latest tag or if you try to pull an
image that doesn’t exist, Docker will return an error.
15) What if you don't have a file named docker-compose.yaml — can you use docker-
service.yaml or docker-api-request.yaml instead?
If you don't have a file named docker-compose.yaml, you can still use Docker Compose
with alternative YAML file names, such as docker-service.yaml or docker-api-
request.yaml. However, you need to specify the file when running Docker Compose
commands.
Usage:
To use a different YAML file, you can use the -f option followed by the file name. For
example:
docker-compose -f docker-service.yaml up
This command tells Docker Compose to use docker-service.yaml instead of the default
docker-compose.yaml.
16) I have a Compose file with three dependent services: service 1 must spin up before
service 2, and service 2 before service 3. How?
To manage service dependencies in your Docker Compose file so that Service 1 starts before
Service 2, and Service 2 starts before Service 3, you can use the depends_on keyword in
your docker-compose.yaml file.
version: '3.8' # or the version you are using
services:
  service1:
    image: your_service1_image
    # Additional configuration for service1
  service2:
    image: your_service2_image
    depends_on:
      - service1 # Ensures service1 starts before service2
  service3:
    image: your_service3_image
    depends_on:
      - service2 # Ensures service2 starts before service3
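Note that depends_on only controls start order, not readiness: service2 starts as soon as service1's container is running, not when the application inside it is ready. Newer versions of Compose support a long-form syntax that waits for a health check; a sketch (the check command is illustrative):

```yaml
services:
  service1:
    image: your_service1_image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 10s
      retries: 5
  service2:
    image: your_service2_image
    depends_on:
      service1:
        condition: service_healthy   # wait until service1's healthcheck passes
```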
17) I have a workload on Docker and I don't want to hardcode secrets or show them in the
Dockerfile. How do I handle this?
Environment variables can be set in Docker in several ways. Here’s how to do it:
1. In docker-compose.yaml
You can define environment variables directly in your Docker Compose file:
version: '3.8'
services:
  myservice:
    image: your_image
    environment:
      - MY_ENV_VAR=value
      - ANOTHER_VAR=${ANOTHER_VAR} # Referencing an external variable
2. Using a .env File
You can create a .env file in the same directory as your docker-compose.yaml. This file
can store environment variables:
MY_ENV_VAR=value
ANOTHER_VAR=another_value
Then reference the variables (without values) in your docker-compose.yaml:
version: '3.8'
services:
  myservice:
    image: your_image
    environment:
      - MY_ENV_VAR
      - ANOTHER_VAR
3. Using the Command Line
You can also set environment variables when running a container with the docker run
command:
docker run -e MY_ENV_VAR=value your_image
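For genuinely sensitive values, Docker secrets keep the data out of the image, the Dockerfile, and the environment listing. A sketch using a file-based secret (the file name and service are illustrative):

```yaml
version: "3.8"
services:
  myservice:
    image: your_image
    secrets:
      - db_password          # mounted inside the container at /run/secrets/db_password
secrets:
  db_password:
    file: ./db_password.txt  # kept outside the image and out of source control
```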
Versioning in Docker is maintained through the use of tags and a few best practices. Here's how
it works:
1. Image Tags
Each Docker image can have one or more tags that identify its version.
Tags are assigned when you build or push an image. For example:
docker build -t myapp:1.4.2 . or docker tag myapp:1.4.2 myregistry/myapp:1.4.2
You can also have a latest tag for the most recent version, but it's important to use
specific version tags for clarity.
2. Semantic Versioning
Follow a versioning scheme like Semantic Versioning (SemVer), which uses the format
major.minor.patch:
o Major: Introduces breaking changes.
o Minor: Adds features in a backward-compatible manner.
o Patch: Includes backward-compatible bug fixes.
19) Why should we not run containers as the root user, what should we use instead, and
how is it done?
1. Security Risks: A root process inside a container has elevated privileges; combined with a
kernel exploit or an over-broad bind mount, a compromise of the container can more easily
become a compromise of the host.
2. Best Practices
Principle of Least Privilege: It's a best practice in security to run applications with the
least amount of privilege necessary to function. This reduces the potential impact of a
security breach.
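A minimal sketch of doing this in a Dockerfile, assuming an Alpine base (user and group names are illustrative):

```dockerfile
FROM alpine:3.19
# Create an unprivileged user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# All following instructions, and the container process, run as this user
USER appuser
CMD ["id"]
```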
20) What are the challenges of moving from a monolithic application to microservices?
1. Decomposition Complexity:
o Breaking the monolith into services is difficult and requires careful planning.
2. Data Management:
o Each service may need its own database, which can lead to data consistency
issues.
3. Communication Overhead:
o Services need to communicate over a network, which can introduce latency and
requires reliable APIs.
4. Deployment Challenges:
o Managing multiple services complicates deployment and often requires
orchestration tools like Kubernetes.
5. Monitoring and Logging:
o Tracking performance and logs across many services can be complex.
6. Cultural Resistance:
o Shifting to microservices often requires changes in team structure and practices,
which can meet resistance.
7. Testing Complexity:
o Testing interactions between services is more complicated than testing a single
application.
21) What is the basic difference between legacy monolithic and microservices
architectures, and what are their advantages and disadvantages?
The basic difference between legacy monolithic architecture and microservices architecture lies in how
applications are structured and deployed.
Monolithic Architecture
Definition: A monolithic application is built as a single, unified unit. All components, including the user
interface, business logic, and data access, are tightly coupled and deployed together.
Advantages:
1. Simplicity: Easier to develop, test, and deploy initially since everything is in one place.
2. Performance: Lower latency in inter-component communication since all components are in the same process.
3. Ease of Deployment: Only one artifact to manage, simplifying the deployment process.
Disadvantages:
1. Scalability Issues: Scaling requires duplicating the entire application, which can lead to inefficiencies.
2. Tight Coupling: Changes in one part can affect the entire system, making updates risky and time-consuming.
3. Technology Lock-in: Difficult to adopt new technologies since everything is integrated.
4. Maintenance Challenges: As the application grows, the codebase can become complex and harder to manage.
Microservices Architecture
Definition: Microservices architecture breaks applications into smaller, independent services that communicate
over APIs. Each service focuses on a specific business capability and can be developed, deployed, and scaled
independently.
Advantages:
1. Independent Scaling: Each service can be scaled on its own rather than duplicating the whole application.
2. Independent Deployment: Teams can release services separately, enabling faster iteration.
3. Technology Flexibility: Each service can use the language or datastore best suited to it.
4. Fault Isolation: A failure in one service is less likely to bring down the entire system.
Disadvantages:
1. Complexity: Increased complexity in managing multiple services, including deployment, monitoring, and inter-
service communication.
2. Data Management: Managing data consistency across services can be challenging, especially in distributed
systems.
3. Network Latency: Communication over the network introduces latency and can complicate debugging.
4. Operational Overhead: Requires sophisticated DevOps practices, including orchestration and management of
multiple services.
Organizations are moving to microservices over monolithic architectures for several reasons:
independent scaling, faster and safer deployments, technology flexibility, and better fault isolation.
Docker containers are called immutable because they are designed to be unchangeable once created. This
means:
1. Stateless: Changes made to a running container aren't saved back to the image.
2. Consistency: Every new container starts from the same base image, ensuring a consistent environment.
3. Deployment Ease: Updates require creating new images instead of modifying existing containers.
4. Scalability: Containers can be easily replicated without concerns about their state.
5. Rollback Simplicity: Reverting to a previous version is easy by redeploying an older image.
24) You are using Docker. How do you ensure your containers use immutable images,
and what setup ensures that?
To ensure your Docker containers use immutable images, follow these practices:
1. Versioned Images: Tag images with specific version numbers instead of using latest.
2. Use Dockerfile: Build images using a Dockerfile for consistency.
3. Immutable Infrastructure: Avoid modifying existing containers; deploy new images for updates.
4. CI/CD Pipelines: Implement automated CI/CD processes for building and deploying images.
5. Image Scanning: Use tools to scan images for vulnerabilities before deployment.
6. Container Orchestration: Use tools like Kubernetes or Docker Swarm to manage and enforce policies.
7. External Configuration: Keep configuration data outside the container.
8. Private Registry: Use a private Docker registry to manage and store approved images.
25) You have 30 to 40 containers in your Docker setup, and it is not practical to check
their status manually with docker ps -a. Is there something you can implement, such as
health checks for your containers?
1. Use Docker Health Checks: Add a HEALTHCHECK instruction in your Dockerfile to define health
checks.
HEALTHCHECK --interval=30s CMD curl -f http://localhost/ || exit 1
2. Check Status: Use docker ps to see the health status (e.g., healthy, unhealthy).
3. Container Orchestration: Use tools like Kubernetes (readinessProbe, livenessProbe) or Docker
Swarm for built-in health checks.
4. Monitoring Tools: Integrate solutions like Prometheus or Grafana for tracking and alerts.
5. Log Aggregation: Use tools like the ELK Stack for real-time log monitoring.
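The same health check can also live in the Compose file rather than in each Dockerfile, which is easier to manage across 30 to 40 services; a sketch (the endpoint is illustrative):

```yaml
services:
  web:
    image: your_image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3
```

docker ps then reports each container as healthy or unhealthy without entering it.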
27) I have Docker containers, and my instance has a certain memory limit. I notice
memory utilization is at 80-90 percent. How will you investigate what is happening?
1. Check Overall Usage: Use docker stats to see memory usage for all containers.
2. Inspect Individual Containers: Use docker inspect <container_id> for detailed info on specific
containers.
3. Check Logs: Review logs with docker logs <container_id> for errors or unusual behavior.
4. Analyze Processes: Enter the container using docker exec -it <container_id> /bin/bash and
check memory usage with tools like top or htop.
5. Review Application Code: Look for memory leaks or inefficiencies in the application.
6. Set Resource Limits: Consider adding memory limits to containers using --memory.
7. Monitor Swapping: Check swap usage with free -m to see if it's impacting performance.
8. Review Host Memory: Check the host's overall memory usage using free -m or top.
Scaling in Docker involves increasing or decreasing the number of running container instances
for a service. This can be achieved using Docker Compose or Docker Swarm.
Docker Compose: You can scale a service with docker-compose up --scale <service>=<n>,
or declare deploy.replicas in the Compose file when deploying to Swarm.
Docker Swarm: You can manage multiple Docker hosts as a single virtual host. Using
docker service scale, you can adjust the number of replicas for a service.
29) Is there a command for manual scaling, and is there a way to automate it?
Manual Scaling
Example (Compose): docker-compose up -d --scale web=3
Example (Swarm): docker service scale my_service=5
Automated Scaling
Using Orchestration Tools: Tools like Kubernetes or Docker Swarm can automate scaling
based on resource utilization, traffic load, or other metrics.
Monitoring Tools: Integrate monitoring solutions (like Prometheus, Grafana) with auto-
scaling policies to adjust the number of container replicas automatically based on
metrics.
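In a Swarm stack file, the desired replica count and the resource limits that scaling policies react to can be declared together; a sketch with illustrative values:

```yaml
services:
  web:
    image: your_image
    deploy:
      replicas: 3          # adjust at runtime with: docker service scale <stack>_web=5
      resources:
        limits:
          memory: 256M
```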
If you set a Docker container’s network to null, the container will not be connected to any
network. Here’s what happens:
Isolation: The container will be isolated from all other containers and will not have
access to the external network, including the host network.
Inaccessibility: Any services running within the container will not be accessible from
outside the container. You won't be able to communicate with it via IP addresses.
Default Network: If no network is specified at all, Docker connects the container to the
default bridge network; only explicitly passing --network none (the null driver) leaves the
container with no networking.
32) What are the batch and scheduled jobs that use the null network?
With a null network, you can run the following types of batch and scheduled jobs:
Batch Jobs
1. Data Processing Jobs: Analyze or transform data stored locally (e.g., CSV files).
2. Image Processing: Apply filters or transformations to local images.
3. Log Analysis: Process and analyze log files stored on the local filesystem.
4. File Backup: Create backups of local files without needing network access.
Scheduled Jobs
1. Cron-style local maintenance tasks (cleanup, rotation, report generation) that need no
network access.
33) How can two containers on the same network communicate?
1. Using Container Names: They can resolve each other's names automatically. For
example: curl http://containerB:port
2. Using IP Addresses: For example: curl http://172.18.0.2:port
34) What makes a container different from an EC2 instance with respect to data?
35) There are 2 containers. If you delete container 1 and stop container 2, what
happens to the containers, and how do you retrieve the data?
Effects
1. Container1 Deletion:
o If you delete Container1, its filesystem and any data stored within it (unless it was
using a Docker volume) are permanently lost.
2. Stopping Container2:
o Stopping Container2 simply halts its execution. The container can be restarted
later, and its filesystem will remain intact unless deleted.
Data Retrieval
Data can be retrieved only if it was stored in a volume or bind mount, which survives container
deletion; for a stopped (not deleted) container, files can also be copied out with
docker cp <container>:<path> <destination>.
Difference between docker stats and docker inspect:
docker stats: This command provides real-time statistics about the resource usage
of running containers, including CPU, memory, network I/O, and disk I/O.
docker inspect: This command returns detailed information about a container (or
image), such as its configuration, network settings, mounts, and state. It provides a
comprehensive view but not real-time resource usage.
39) How would you publish the same container with two ports?
Pass multiple -p flags to docker run, e.g. docker run -p 8080:80 -p 8443:443 your_image.
41) Suppose you have two containers, A and B, and you want to connect them securely.
How do you do it?
To securely connect two containers (A and B), you can follow these steps:
Create a user-defined bridge network. This ensures that the containers can communicate
securely and privately.
Use HTTPS or SSL/TLS: If your applications support it, configure them to use HTTPS
or SSL/TLS for secure communication between the containers.
1. Definition
Web Server: Serves static content (HTML, CSS, JavaScript, images) and handles HTTP
requests from clients (browsers).
Application Server: Hosts and executes dynamic web applications, managing business
logic and database interactions.
2. Content Served
Web Server: Static files and cached content.
Application Server: Dynamically generated content produced by application code.
3. Functionality
Web Server: Primarily focuses on HTTP requests and responses, often acting as a
reverse proxy to application servers.
Application Server: Provides additional services such as transaction management,
messaging, and security for running applications.
4. Examples
Web Server: Nginx, Apache HTTP Server.
Application Server: Apache Tomcat, JBoss/WildFly, WebLogic.
5. Communication
The web server typically sits in front, terminates HTTP(S) from clients, and forwards dynamic
requests to the application server over HTTP or a connector protocol such as AJP.
44) How are you going to connect a database and a web server securely in a Docker
network?
To securely connect a database and a web server in a Docker network, follow these steps:
Use a user-defined bridge network to allow secure communication between the containers.
Start your database container on the secure network. Use environment variables to pass
sensitive information.
Start your web server container on the same network, ensuring it can communicate with the
database.
Pass the database credentials to the web server as environment variables, keeping them out of
the source code.
If your database supports it, enable SSL/TLS for database connections to encrypt data in transit.
Consider using firewall rules or Docker’s built-in features to restrict access to the database
container only to the web server.
Regularly review your security configurations and update passwords and access controls.
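Putting the steps together, a hedged Compose sketch (the image names, network name, and variables are illustrative; DB_PASSWORD would come from a .env file, not from source code):

```yaml
version: "3.8"
services:
  web:
    image: your_web_image
    environment:
      - DB_HOST=db                    # resolved by name over the shared network
      - DB_PASSWORD=${DB_PASSWORD}    # injected, never hardcoded
    networks:
      - app_net
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    networks:
      - app_net                       # no ports: section, so the database is
                                      # reachable only from inside app_net
networks:
  app_net:
    driver: bridge
```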
45) Suppose there are traffic spikes and you are manually increasing containers. How
will you scale the containers?
Ans- Use Docker Swarm and set replicas (docker service scale <service>=<n>).
46) Suppose you have hosted 5 containers and some of them are failing. What is the
reason?
Ans- Check the logs (docker logs <container_id>) to find the cause of the failure.
Container Security Best Practices
1. Use Official Images: Start with trusted images from reliable sources.
2. Regularly Update Images: Keep images up to date with security patches.
3. Minimize Image Size: Use minimal base images to reduce the attack surface.
4. Implement User Namespaces: Enable user namespaces for added isolation.
5. Set Resource Limits: Use --memory and --cpus to limit resource usage.
6. Use Docker Secrets: Manage sensitive data securely.
7. Network Security: Isolate containers and restrict access with firewalls.
8. Scan for Vulnerabilities: Use tools like Trivy to identify vulnerabilities.
9. Enable Logging and Monitoring: Track container activity with logging solutions.
10. Secure the Docker Daemon: Limit access with TLS and authorized users.
11. Avoid Running as Root: Run applications as non-root users.
12. Use Security Benchmarks: Follow benchmarks like CIS Docker for best practices.
Limitations of Docker Compose
1. Limited to Local Development: Best for local testing, not ideal for production without orchestration
tools.
2. Complexity with Large Apps: Can become unwieldy for very large applications.
3. Single Host Limitation: Runs on a single host, not suitable for distributed systems.
4. Lacks Advanced Resource Management: No automatic scaling or load balancing features.
5. Dependency Management Issues: Handling complex service dependencies can be challenging.
6. Networking Constraints: May complicate integration with external services.
7. Limited Monitoring and Logging: Requires external tools for observability.
8. Performance Overhead: Running many services can impact performance.