
Cloud And Devops: UCS 745

Lab Assignment

Submitted By

Radhika Jasra (102103815)


Q1) How is merging done, and how are merge conflicts resolved in GitHub?

Git merge is a command used to combine the changes from one or more branches into the current
branch. It integrates the history of these branches, ensuring that all changes are included and conflicts
are resolved.
Syntax:
git merge <branch-name>
Example of a merge conflict
Step 1: Create a new directory and initialize Git.
Step 2: Create a file f1.txt and add some data.
Step 3: Create a new branch called "feature-development".
Step 4: Modify Line 2 in f1.txt and commit on the new branch.
Step 5: Switch back to the main branch and modify the same line (Line 2) in f1.txt.
Step 6: Attempt to merge the changes from feature-development into main.
Result: Git detects a conflict because the same line (Line 2) was edited differently in both branches.
Resolving the conflict
Step 1: Open f1.txt.
Step 2: Resolve the conflict by editing the file to keep the changes you want, removing the <<<<<<<, =======, and >>>>>>> conflict markers.
Step 3: After resolving the conflict, add the file to the staging area and commit to complete the merge.
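The whole walkthrough can be reproduced with the commands below. This is a sketch: the branch name feature-development, the file contents, and the commit messages are assumptions from the steps above, and `git init -b main` requires Git 2.28 or newer.

```shell
# Reproduce the conflict (file contents and messages are illustrative).
mkdir merge-demo && cd merge-demo
git init -q -b main                    # -b needs Git >= 2.28
git config user.email "you@example.com" && git config user.name "Demo User"
printf 'Line 1\nLine 2\nLine 3\n' > f1.txt
git add f1.txt && git commit -qm "Add f1.txt"
git checkout -qb feature-development
printf 'Line 1\nLine 2 (feature edit)\nLine 3\n' > f1.txt
git commit -qam "Edit line 2 on feature-development"
git checkout -q main
printf 'Line 1\nLine 2 (main edit)\nLine 3\n' > f1.txt
git commit -qam "Edit line 2 on main"
git merge feature-development || true  # reports: CONFLICT (content): Merge conflict in f1.txt
# Resolve: rewrite f1.txt with the wanted content, then stage and commit.
printf 'Line 1\nLine 2 (resolved)\nLine 3\n' > f1.txt
git add f1.txt
git commit -qm "Resolve merge conflict in f1.txt"
```

After the final commit, `git log --graph --oneline` shows the merge commit joining both branch histories.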

Q2) Explain the domain-specific keywords used in a Dockerfile.
1. FROM — Specifies the base image for the Docker image. Example: FROM ubuntu:20.04
2. RUN — Executes commands to install dependencies or configure the image. Example: RUN apt-get update && apt-get install -y python3
3. CMD — Specifies the default command to run when a container starts (can be overridden). Example: CMD ["python3", "app.py"]
4. ENTRYPOINT — Defines a command that always runs when the container starts (cannot be easily overridden). Example: ENTRYPOINT ["python3", "app.py"]
5. COPY — Copies files or directories from the host to the container's filesystem. Example: COPY app.py /app/
6. ADD — Similar to COPY but also supports remote URLs and unpacking local archives (e.g., .tar files). Example: ADD app.tar.gz /app/
7. WORKDIR — Sets the working directory for the instructions that follow, such as RUN, CMD, or ENTRYPOINT. Example: WORKDIR /app
8. ENV — Sets environment variables in the container. Example: ENV APP_ENV=production
9. EXPOSE — Informs Docker that the container will listen on specific ports (documentation only; it does not publish the ports). Example: EXPOSE 8080
10. VOLUME — Creates a mount point for storing data outside the container's writable layer. Example: VOLUME /data
11. LABEL — Adds metadata to the image, such as authorship or version info. Example: LABEL maintainer="[email protected]"
12. USER — Specifies the user that subsequent instructions and the container run as, for security purposes. Example: USER nonrootuser
13. ARG — Defines build-time variables that can be passed during the docker build process. Example: ARG APP_VERSION
14. ONBUILD — Adds instructions that execute when the image is used as a base for another Dockerfile. Example: ONBUILD RUN apt-get update
15. STOPSIGNAL — Sets the system signal used to stop the container. Example: STOPSIGNAL SIGTERM
16. HEALTHCHECK — Defines a command to test whether the container is healthy. Example: HEALTHCHECK CMD curl http://localhost
17. SHELL — Specifies the default shell for executing RUN commands (e.g., /bin/sh or powershell). Example: SHELL ["/bin/bash", "-c"]
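Many of these keywords come together in a single Dockerfile. The sketch below is illustrative only: the app.py file, the port, and the image names are assumptions, not part of any real project.

```dockerfile
# Hypothetical Dockerfile for a small Python app (names are assumptions).
FROM ubuntu:20.04
LABEL maintainer="demo@example.com"
ENV APP_ENV=production
RUN apt-get update && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY app.py /app/
EXPOSE 8080
USER nobody
CMD ["python3", "app.py"]
```

It could be built and run with `docker build -t demo-app .` followed by `docker run -p 8080:8080 demo-app`.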

Q3) Create a shell script program to find the list of prime numbers between 1 and 100.

I used a Raspberry Pi terminal to execute this shell script. The steps followed are:
1. Create a script file "Prime_Number.sh" in nano.
2. Write the code and save it (Ctrl+X, Y, Enter).
3. Make the file executable using chmod.
4. Run the script file.

Code for reference


#!/bin/bash

# Returns 0 (success) if the argument is prime, 1 otherwise.
is_prime() {
    num=$1
    if [ "$num" -le 1 ]; then
        return 1
    fi
    for ((i = 2; i <= num / 2; i++)); do
        if ((num % i == 0)); then
            return 1
        fi
    done
    return 0
}

for ((num = 1; num <= 100; num++)); do
    if is_prime "$num"; then
        echo "$num"
    fi
done
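As a quick cross-check of the script's output, the `factor` utility from GNU coreutils (assumed to be installed) can list the same primes: a number is prime exactly when `factor` prints a single factor after the colon.

```shell
# Primes up to 100 via 'factor': keep lines with exactly one factor.
seq 2 100 | factor | awk 'NF==2 {print $2}'
```

Both approaches should print the same 25 primes, from 2 through 97.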

Q4) Explain the Kubernetes Architecture in detail.

Introduction to Kubernetes Architecture


Kubernetes is an open-source platform designed to automate the deployment, scaling, and
management of containerized applications. As organizations increasingly adopt microservices and
containerization, Kubernetes has emerged as a critical tool in simplifying the complexities associated
with these technologies. Its architecture is built around the concept of clusters, which are collections
of machines that work together to run applications in a resilient and scalable manner.
At the core of Kubernetes architecture are several primary components.

The Master Node is the control plane responsible for managing the Kubernetes cluster. It houses the
API server, scheduler, and controller manager, which collectively handle the orchestration of
containerized applications.
The API Server serves as the primary interface for users and other components to interact with the
cluster, managing the state of the system and ensuring that the desired state of the applications is
maintained.
The Worker Nodes are the machines where the actual applications run. Each worker node contains a
container runtime (such as Docker), a kubelet, and a kube-proxy. The kubelet is an agent that
communicates with the master node, ensuring that containers are running as expected. Kube-proxy is
responsible for managing network routing and load balancing across the various services within the
cluster.

Another essential component is etcd, a distributed key-value store that acts as the database for the
Kubernetes cluster, storing all configuration data, state information, and metadata. This allows
Kubernetes to maintain a consistent view of the cluster’s state and facilitates recovery in case of
failures.
By leveraging this architecture, Kubernetes enables efficient resource management, automatic scaling,
and seamless updates for applications. Its open-source nature fosters a vibrant community,
continuously enhancing its functionality and capabilities to meet the evolving needs of modern
application development and deployment.
Core Components of Kubernetes
The architecture of Kubernetes is built on several core components that work together to deliver a
robust platform for managing containerized applications. Understanding these components is crucial
for leveraging Kubernetes effectively.
Master Node (Kubernetes Control Plane)
The Master Node, or Control Plane, is the brain of the Kubernetes cluster. It manages the cluster's
state and orchestrates the deployment and operation of applications. Key components within the
Master Node include:
 API Server: This is the primary interface for users and components to interact with the
Kubernetes cluster. It processes REST requests and updates the state of the cluster.
 Scheduler: Responsible for assigning newly created pods to worker nodes based on resource
availability, constraints, and policies.
 Controller Manager: This component regulates the state of the cluster, ensuring that the
desired state (as defined by the user) matches the current state. It manages various controllers,
including the Replication Controller and Node Controller.
Worker Nodes
Worker Nodes are the machines where application containers run. Each worker node hosts several
essential components:
 Kubelet: An agent that ensures containers are running in a pod. It communicates with the
API Server to report the status of the containers.
 Kube-proxy: This component manages network routing, allowing communication between
different services and ensuring load balancing across pods.
Pods
A Pod is the smallest deployable unit in Kubernetes, representing a single instance of a running
process in a cluster. It can contain one or more closely related containers that share storage and
network resources.
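A minimal Pod manifest illustrates this. The name, image, and port below are assumptions for illustration only:

```yaml
# Hypothetical single-container Pod.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```

It could be created with `kubectl apply -f pod.yaml`.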
ReplicaSets
ReplicaSets ensure that a specified number of pod replicas are running at any given time. If a pod fails
or is terminated, the ReplicaSet automatically creates a new instance to maintain the desired number
of replicas.
Deployments
Deployments provide a declarative way to manage applications by defining the desired state of your
pods and ReplicaSets. They facilitate updates and rollbacks, ensuring that the application remains
available during changes.
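A sketch of a Deployment manifest, with an assumed name, label, and image, shows how the desired state (here, three replicas) is declared; the Deployment creates and manages the underlying ReplicaSet:

```yaml
# Hypothetical Deployment managing 3 replicas via a ReplicaSet.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25
```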
Services
Services enable communication between different pods and external clients. They provide stable IP
addresses and DNS names, facilitating load balancing and service discovery.
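A minimal ClusterIP Service manifest (names and labels assumed) shows how a stable endpoint is mapped onto a set of pods selected by label:

```yaml
# Hypothetical ClusterIP Service fronting pods labelled app: demo.
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
```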
etcd
As a distributed key-value store, etcd maintains the configuration data and state information for the
Kubernetes cluster. It ensures consistency and reliability, enabling the cluster to recover from failures.

Kubernetes Networking and Storage


Kubernetes employs a robust networking model that facilitates seamless communication between
pods, services, and external systems. At its core, Kubernetes adopts a flat networking structure,
allowing every pod in the cluster to communicate with each other without the need for Network
Address Translation (NAT). Each pod receives its own unique IP address, and all containers within a
pod share this address. This design simplifies the communication model significantly, as it eliminates
the complexity of port mappings.
Communication Between Pods and Services
Pods communicate via services, which act as stable endpoints for accessing a set of pods. Each service
is assigned a virtual IP address (ClusterIP) that remains constant, even if the underlying pods change.
Kubernetes uses a concept known as Service Discovery to facilitate this, where services are
automatically discovered through DNS. When a pod wants to communicate with another pod, it can
simply resolve the service name to its IP address, enabling direct communication.
In addition to internal communication, Kubernetes manages external access through Ingress and
LoadBalancer services. An Ingress resource provides HTTP and HTTPS routing to services based on
rules defined by the user, while LoadBalancer services provision an external load balancer that directs
traffic to the appropriate service.
Network Policies
To regulate the communication between pods, Kubernetes employs Network Policies. These policies
define rules that specify how groups of pods can communicate with each other and with external
entities. By default, all traffic is allowed, but Network Policies can enforce restrictions based on
namespaces, labels, and other selectors, enhancing the security posture of applications running in the
cluster.
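A sketch of such a policy, with assumed labels and port, restricts ingress to the selected pods so that only pods labelled role: frontend may reach them:

```yaml
# Hypothetical policy: only role: frontend pods may reach app: demo on port 80.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: demo
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - port: 80
```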
Storage Solutions in Kubernetes
Kubernetes also provides a flexible storage model designed to meet various application needs. Key
components in this model include Persistent Volumes (PVs) and Persistent Volume Claims
(PVCs).
Persistent Volumes (PVs) are storage resources in the cluster that are provisioned by an
administrator. They abstract the underlying storage infrastructure, whether it be local disks, NFS, or
cloud storage solutions.
Persistent Volume Claims (PVCs) are requests for storage by users. A PVC specifies size and access
modes, allowing users to claim storage resources without needing to know the details of the
underlying infrastructure. The Kubernetes scheduler then binds PVCs to suitable PVs that meet the
request criteria.
This decoupling of storage from pods allows for more resilient and scalable architecture, as data can
persist beyond the lifecycle of individual pods, ensuring that applications can maintain state across
updates and failures.
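The claim side of this model can be sketched as follows; the name, size, and access mode are assumptions:

```yaml
# Hypothetical claim for 1Gi of ReadWriteOnce storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Once bound to a matching PV, the claim can be mounted into a pod by name, without the pod knowing anything about the backing storage.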

Conclusion
Understanding Kubernetes architecture is of paramount importance for anyone involved in DevOps or
application development. As organizations increasingly migrate to containerized solutions, a deep
comprehension of how Kubernetes operates is essential for effectively managing these environments.
The architecture's design not only facilitates scalability and resilience but also enhances the overall
efficiency of application deployment and lifecycle management.
As Kubernetes continues to evolve, it is crucial for professionals to remain engaged in continuous
learning about its features and updates. Each new version of Kubernetes introduces enhancements and
optimizations that can significantly impact application performance and operational efficiency.
Staying informed about these changes allows teams to adopt best practices and leverage the full
potential of the platform.
Moreover, the dynamic nature of Kubernetes necessitates an agile approach to learning. The
community around Kubernetes is vibrant and active, often sharing insights and innovative strategies
that can help streamline workflows and improve the management of containerized applications.
Engaging with this community through forums, webinars, and open-source contributions not only
fosters knowledge but also encourages collaboration among peers.
In summary, a thorough understanding of Kubernetes architecture empowers DevOps professionals
and application developers to navigate the complexities of modern application deployment. By
committing to ongoing education and remaining adaptable to advancements in the platform, teams can
ensure they are well-equipped to meet the challenges of an ever-evolving technological landscape.
