Lab Assignment
Submitted By
Git merge is a command used to combine changes from one or more branches into the current branch. It integrates the histories of those branches so that all changes are included; when the same part of a file has been modified differently in both branches, Git reports a merge conflict that must be resolved before the merge can complete.
Syntax:
git merge <branch-name>
Example of a merge conflict
Step 1: Create a new directory and initialize Git. [Screenshot]
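A minimal command sketch of such a conflict (the directory, file, and branch names are placeholders; the default branch is assumed to be main, while older Git versions use master):

mkdir merge-demo && cd merge-demo            # Step 1: create a new directory
git init                                     # ...and initialize Git
echo "first version" > file.txt
git add file.txt && git commit -m "initial commit"
git checkout -b feature                      # create a branch and edit the same line
echo "change from feature" > file.txt
git commit -am "edit on feature"
git checkout main                            # edit the same line differently on main
echo "change from main" > file.txt
git commit -am "edit on main"
git merge feature                            # Git stops and reports a conflict in file.txt

Git then marks the conflicting region in file.txt with <<<<<<<, =======, and >>>>>>> markers, which must be edited and committed to complete the merge.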
Q.2 Explain the domain-specific keywords used in a Dockerfile.
1. FROM: Specifies the base image for the Docker image. Example: FROM ubuntu:20.04
2. RUN: Executes commands to install dependencies or configure the image. Example: RUN apt-get update && apt-get install -y python3
3. CMD: Specifies the default command to run when a container starts (can be overridden). Example: CMD ["python3", "app.py"]
4. ENTRYPOINT: Defines a command that always runs as the container starts (cannot be easily overridden). Example: ENTRYPOINT ["python3", "app.py"]
5. COPY: Copies files or directories from the host to the container's filesystem. Example: COPY app.py /app/
6. ADD: Similar to COPY but also supports remote URLs and unpacking archives (e.g., .tar files). Example: ADD app.tar.gz /app/
7. WORKDIR: Sets the working directory for the instructions that follow, such as RUN, CMD, or ENTRYPOINT. Example: WORKDIR /app
8. ENV: Sets environment variables in the container. Example: ENV APP_ENV=production
9. EXPOSE: Informs Docker that the container will listen on specific ports (for documentation purposes). Example: EXPOSE 8080
10. VOLUME: Creates a mount point for storing data outside the container. Example: VOLUME /data
11. LABEL: Adds metadata to the image, such as authorship or version info. Example: LABEL maintainer="[email protected]"
12. USER: Specifies the user to execute subsequent commands as, for security purposes. Example: USER nonrootuser
13. ARG: Defines build-time variables that can be passed during the docker build process. Example: ARG APP_VERSION
14. ONBUILD: Adds instructions that execute when the image is used as a base for another Dockerfile. Example: ONBUILD RUN apt-get update
15. STOPSIGNAL: Sets the system signal used to stop the container. Example: STOPSIGNAL SIGTERM
16. HEALTHCHECK: Defines a command to test whether the container is healthy. Example: HEALTHCHECK CMD curl http://localhost
17. SHELL: Specifies the default shell used for executing RUN commands (e.g., /bin/sh or powershell). Example: SHELL ["/bin/bash", "-c"]
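To show how several of these keywords combine, here is a minimal sketch of a Dockerfile assembled from the example column above (app.py stands in for a hypothetical Python application):

# Minimal example Dockerfile; app.py is a placeholder application
FROM ubuntu:20.04
LABEL maintainer="[email protected]"
RUN apt-get update && apt-get install -y python3
WORKDIR /app
COPY app.py /app/
ENV APP_ENV=production
EXPOSE 8080
CMD ["python3", "app.py"]

Such an image would typically be built with docker build -t myapp . and run with docker run -p 8080:8080 myapp, where myapp is an arbitrary image tag.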
Q.3 Create a shell script program to find the list of prime numbers between 1 and 100.
I have used the Raspberry Pi terminal to execute this shell script. The steps followed are listed below (the equivalent terminal commands are sketched after the list):
1. Create a script file "Prime_Number.sh"
2. Write the code in the editor and save it (Ctrl+X, Y, Enter)
3. Make the file executable using chmod
4. Run the script file
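A minimal sketch of those steps as terminal commands (nano is assumed as the editor, matching the Ctrl+X, Y, Enter save sequence):

nano Prime_Number.sh        # steps 1 and 2: create the file and write the code, then Ctrl+X, Y, Enter to save
chmod +x Prime_Number.sh    # step 3: make the file executable
./Prime_Number.sh           # step 4: run the script

The script itself begins with a helper function, is_prime, that tests whether a number is prime: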
#!/bin/bash
is_prime() {
    num=$1
    # Numbers <= 1 are not prime; otherwise check divisors up to the square root
    if [ "$num" -le 1 ]; then return 1; fi
    for ((i = 2; i * i <= num; i++)); do
        ((num % i == 0)) && return 1
    done
    return 0
}
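The remainder of the script loops from 1 to 100 and prints every number for which is_prime succeeds; a minimal sketch of that part:

# Print all primes between 1 and 100 using the helper above
for n in $(seq 1 100); do
    if is_prime "$n"; then
        echo "$n"
    fi
done

Running ./Prime_Number.sh then prints 2, 3, 5, 7, and so on up to 97, one number per line.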
The Master Node is the control plane responsible for managing the Kubernetes cluster. It houses the
API server, scheduler, and controller manager, which collectively handle the orchestration of
containerized applications.
The API Server serves as the primary interface for users and other components to interact with the
cluster, managing the state of the system and ensuring that the desired state of the applications is
maintained.
The Worker Nodes are the machines where the actual applications run. Each worker node contains a
container runtime (such as Docker), a kubelet, and a kube-proxy. The kubelet is an agent that
communicates with the master node, ensuring that containers are running as expected. Kube-proxy is
responsible for managing network routing and load balancing across the various services within the
cluster.
Another essential component is etcd, a distributed key-value store that acts as the database for the
Kubernetes cluster, storing all configuration data, state information, and metadata. This allows
Kubernetes to maintain a consistent view of the cluster’s state and facilitates recovery in case of
failures.
By leveraging this architecture, Kubernetes enables efficient resource management, automatic scaling,
and seamless updates for applications. Its open-source nature fosters a vibrant community,
continuously enhancing its functionality and capabilities to meet the evolving needs of modern
application development and deployment.
Core Components of Kubernetes
The architecture of Kubernetes is built on several core components that work together to deliver a
robust platform for managing containerized applications. Understanding these components is crucial
for leveraging Kubernetes effectively.
Master Node (Kubernetes Control Plane)
The Master Node, or Control Plane, is the brain of the Kubernetes cluster. It manages the cluster's
state and orchestrates the deployment and operation of applications. Key components within the
Master Node include:
API Server: This is the primary interface for users and components to interact with the
Kubernetes cluster. It processes REST requests and updates the state of the cluster.
Scheduler: Responsible for assigning newly created pods to worker nodes based on resource
availability, constraints, and policies.
Controller Manager: This component regulates the state of the cluster, ensuring that the
desired state (as defined by the user) matches the current state. It manages various controllers,
including the Replication Controller and Node Controller.
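All of these control-plane components are reached through the API Server; a minimal kubectl sketch of that interaction (assuming kubectl is installed and configured for the cluster):

kubectl cluster-info                 # the API Server endpoint the client talks to
kubectl get nodes                    # node objects maintained by the control plane
kubectl get pods -n kube-system      # system pods such as the scheduler and controller manager (in typical self-managed setups)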
Worker Nodes
Worker Nodes are the machines where application containers run. Each worker node hosts several
essential components:
Kubelet: An agent that ensures containers are running in a pod. It communicates with the
API Server to report the status of the containers.
Kube-proxy: This component manages network routing, allowing communication between
different services and ensuring load balancing across pods.
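To see what the kubelet on a worker node is reporting, the node objects can be inspected (the node name is a cluster-specific placeholder):

kubectl get nodes -o wide            # nodes registered by their kubelets, with versions and addresses
kubectl describe node <node-name>    # kubelet-reported conditions, capacity, and the pods running there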
Pods
A Pod is the smallest deployable unit in Kubernetes, representing a single instance of a running
process in a cluster. It can contain one or more closely related containers that share storage and
network resources.
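A minimal sketch of creating a single Pod directly (the name web-pod and the nginx image are arbitrary examples):

kubectl run web-pod --image=nginx --restart=Never   # create a Pod with one container
kubectl get pods                                    # check its status
kubectl logs web-pod                                # read the container's output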
ReplicaSets
ReplicaSets ensure that a specified number of pod replicas are running at any given time. If a pod fails
or is terminated, the ReplicaSet automatically creates a new instance to maintain the desired number
of replicas.
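ReplicaSets are most often created indirectly by a Deployment (see the next subsection). Assuming one already exists, the self-healing behaviour can be observed roughly like this (the pod name is generated by the cluster):

kubectl get replicaset                    # desired versus ready replica counts
kubectl delete pod <generated-pod-name>   # remove one replica
kubectl get pods                          # a replacement Pod is created automatically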
Deployments
Deployments provide a declarative way to manage applications by defining the desired state of your
pods and ReplicaSets. They facilitate updates and rollbacks, ensuring that the application remains
available during changes.
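A minimal sketch of this workflow (the name web, the nginx image, and the tag 1.25 are arbitrary examples):

kubectl create deployment web --image=nginx --replicas=3   # declare the desired state
kubectl get replicaset                                      # the ReplicaSet managed by the Deployment
kubectl set image deployment/web nginx=nginx:1.25           # rolling update to a new image
kubectl rollout undo deployment/web                         # roll back if the update misbehaves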
Services
Services enable communication between different pods and external clients. They provide stable IP
addresses and DNS names, facilitating load balancing and service discovery.
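Continuing the hypothetical web Deployment from above, it could be exposed as a Service like this:

kubectl expose deployment web --port=80 --target-port=80   # Service with a stable cluster IP, load-balanced across the pods
kubectl get service web                                    # shows the assigned ClusterIP and port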
etcd
As a distributed key-value store, etcd maintains the configuration data and state information for the
Kubernetes cluster. It ensures consistency and reliability, enabling the cluster to recover from failures.
Conclusion
Understanding Kubernetes architecture is of paramount importance for anyone involved in DevOps or
application development. As organizations increasingly migrate to containerized solutions, a deep
comprehension of how Kubernetes operates is essential for effectively managing these environments.
The architecture's design not only facilitates scalability and resilience but also enhances the overall
efficiency of application deployment and lifecycle management.
As Kubernetes continues to evolve, it is crucial for professionals to remain engaged in continuous
learning about its features and updates. Each new version of Kubernetes introduces enhancements and
optimizations that can significantly impact application performance and operational efficiency.
Staying informed about these changes allows teams to adopt best practices and leverage the full
potential of the platform.
Moreover, the dynamic nature of Kubernetes necessitates an agile approach to learning. The
community around Kubernetes is vibrant and active, often sharing insights and innovative strategies
that can help streamline workflows and improve the management of containerized applications.
Engaging with this community through forums, webinars, and open-source contributions not only
fosters knowledge but also encourages collaboration among peers.
In summary, a thorough understanding of Kubernetes architecture empowers DevOps professionals
and application developers to navigate the complexities of modern application deployment. By
committing to ongoing education and remaining adaptable to advancements in the platform, teams can
ensure they are well-equipped to meet the challenges of an ever-evolving technological landscape.