Ca2 Int334
Dr. Varsha
School of Computer Science and Engineering,
Lovely Professional University, Phagwara, Punjab.
Date: 10/04/2025
Acknowledgement
We extend our sincere gratitude to all those who contributed to the successful
completion of this project, "Deployment of a Static Restaurant Website using
Docker and Kubernetes."
First and foremost, we would like to express our deepest appreciation to our
project supervisor/advisor Dr. Varsha, whose constant support, expert
guidance, and valuable feedback were instrumental throughout the duration
of this project. Their encouragement and direction helped us stay focused and
strive for excellence at every stage.
We are also thankful to the faculty and staff of Lovely Professional
University for providing the infrastructure, technical resources, and a
learning-conducive environment that made it possible to bring this project to
life. Their commitment to nurturing innovation and skill development has
played a vital role in our academic journey.
Our heartfelt thanks go to our fellow teammates, whose collaboration,
consistency, and shared vision were key to the successful execution of this
project. Each member contributed significantly—through brainstorming,
problem-solving, coding, and testing—to turn our ideas into reality.
We would also like to extend our appreciation to the open-source community
for their invaluable contributions in building and maintaining tools and
platforms such as Docker and Kubernetes. Their efforts in fostering free and
collaborative technology made the core of this project possible.
Finally, we acknowledge the unwavering support and patience of our families
and friends. Their encouragement, motivation, and understanding helped us
overcome challenges and kept us going during times of stress and fatigue.
This project would not have been possible without the combined efforts and
support of all the individuals and organizations mentioned above. We are
sincerely grateful for their contributions and involvement in helping us achieve
our goal.
Introduction
Project Overview:
The goal of this project, “Deployment of a Static Restaurant Website using
Docker and Kubernetes,” is to demonstrate how modern DevOps tools can be
utilized to deploy a lightweight web application in a scalable and reliable
manner. The website serves as a digital representation of a restaurant,
showcasing menus, location, contact information, and aesthetic design to
attract customers online. While the content itself is static, the deployment
architecture follows enterprise-grade practices using containerization and
orchestration tools.
In this project, the static website is first containerized using Docker,
encapsulating all necessary files and dependencies into a lightweight,
portable container image. This image is then pushed to Docker Hub, acting as
a central registry. The Kubernetes cluster—comprising master and worker
nodes—is then used to orchestrate the deployment. The website is deployed
using a Deployment object in Kubernetes, which ensures that the specified
number of replicas are running and that any failures are automatically
handled. A NodePort service is used to expose the application externally so
that it can be accessed via a browser using the public IP of the node.
This project simulates a real-world production setup where developers and
DevOps engineers work together to containerize applications and deploy
them at scale. It reflects a strong understanding of not just how to build web
applications, but how to deploy and manage them in a cloud-native,
production-ready environment.
Project Purpose and Relevance:
In today's digital-first world, an online presence is no longer optional—it is a
necessity, even for small-scale and local businesses like restaurants.
Customers rely heavily on the internet for discovering new places, checking
menus, booking tables, and reading reviews. Hence, even a simple static
website must be hosted in a manner that ensures maximum uptime, fast
accessibility, and smooth performance across all devices and regions.
Traditional hosting methods, such as shared hosting or single-server
deployments, often lack the flexibility, fault tolerance, and
scalability required in modern web infrastructure. These setups may work for
low-traffic sites but struggle when demand increases, leading to slow
performance or downtime. Additionally, managing updates, monitoring, and
failover mechanisms in such environments often involves manual
intervention, which can be both time-consuming and error-prone.
To address these challenges, this project adopts a DevOps-centric
approach using Docker and Kubernetes—two of the most widely used tools
in cloud-native application deployment. Docker packages the static
restaurant website into a self-contained, lightweight container, ensuring
consistency across different environments. This allows developers to avoid "it
works on my machine" issues and rapidly roll out updates with confidence.
Kubernetes, on the other hand, takes care of the orchestration and
management of these containers. It handles load balancing, automated
restarts, self-healing, and horizontal scaling of pods based on user
demand. This means the website can be scaled up during peak hours (like
weekends or holidays) and scaled down during off-peak times—ensuring
optimal resource utilization without sacrificing user experience.
Why Docker?
Docker enables developers to package applications along with all their
dependencies into a single, consistent container image. This ensures that the
application behaves the same way across different environments—whether
it’s a developer’s laptop, a testing server, or a production cloud server. In this
project, Docker allows the entire restaurant website, including HTML, CSS,
and image assets, to be bundled into a single container image that is easy to
share, version, and deploy. It eliminates the common “it works on my
machine” problem and streamlines the CI/CD pipeline.
Why Kubernetes?
While Docker handles the packaging and running of
containers, Kubernetes is responsible for managing them at scale.
Kubernetes automates the deployment, scaling, and management of
containerized applications across clusters of machines. It ensures that the
desired number of replicas (pods) are always running and handles failures
automatically through self-healing. Moreover, Kubernetes Service types such
as NodePort and LoadBalancer, together with the Ingress resource, simplify
external access and routing.
In the context of this project:
• Kubernetes ensures high availability by running multiple replicas of the
website container.
• It supports scalability—we can easily scale up or down based on user
demand.
• It provides robust monitoring and management tools to observe the
state of the application.
• With a declarative configuration approach (YAML), it
ensures infrastructure as code, improving maintainability and
reproducibility.
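As a concrete illustration of this declarative, on-demand scaling, the replica count can be changed with a single command. This is a sketch, assuming the deployment name used later in this report (project-site-deployment); it requires a configured kubectl and cluster:

```shell
# Scale up to 3 replicas for peak hours (e.g., weekends or holidays)
kubectl scale deployment project-site-deployment --replicas=3

# Scale back down to a single replica during off-peak times
kubectl scale deployment project-site-deployment --replicas=1
```

Kubernetes then converges the actual number of running pods to the declared count, creating or terminating pods as needed.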
By using Docker and Kubernetes together, this project not only serves as a
functional deployment of a static website but also exemplifies best practices
in containerized application delivery and cloud-native infrastructure
management.
Security Groups:
For the NodePort service to be reachable from the internet, the security
group (firewall rules) attached to the cluster nodes must allow inbound TCP
traffic on the chosen NodePort (30081 in this project).
Step 2: Containerizing the Static Restaurant Website with Docker
1. Create the Project Directory:
• Create a directory named project-site for the website source. Inside this
directory, add the HTML, CSS, images, and any other assets that make up
the static restaurant website. The main file in this project is project.html.
2. Add the HTML File for the Website:
• Create and edit the project.html file to contain the content of the
restaurant website (e.g., menu, contact details, location, etc.). The
Dockerfile copies this file into the container as index.html, so it serves as
the landing page when the site is accessed through a web browser.
• Ensure that all assets (CSS files, images, etc.) are correctly linked within
the HTML file so that they display properly in the browser.
3. Create a Dockerfile:
Write a Dockerfile in the root of the project-site directory. This file will define
the base image (e.g., nginx), copy the website files into the container, and
configure the container to serve the static website.
Image of Dockerfile:
Code of Dockerfile:
FROM nginx:alpine
COPY project.html /usr/share/nginx/html/index.html
4. Build the Docker Image:
• Run the docker build command to create a Docker image from
the Dockerfile:
docker build -t restaurant-website .
5. Push the Docker Image to Docker Hub:
• Tag the local image with the Docker Hub repository name, log in, and
push it:
docker tag restaurant-website teja2610/project-site:latest
docker login
docker push teja2610/project-site:latest
Image of building the Docker image and pushing it to Docker Hub:
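Before relying on the pushed image, it can be smoke-tested locally. A quick sketch, assuming Docker is installed on the local machine (the host port 8080 and container name site-test are arbitrary choices):

```shell
# Run the container locally, mapping host port 8080 to nginx's port 80
docker run -d --name site-test -p 8080:80 teja2610/project-site:latest

# Fetch the landing page to confirm nginx is serving it
curl http://localhost:8080/

# Clean up the test container
docker rm -f site-test
```

If the curl output shows the restaurant page's HTML, the image is ready to be deployed to the cluster.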
Code of service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: project-site-service
spec:
  selector:
    app: project-site
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30081
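The Deployment manifest applied alongside this Service is not reproduced in this report. Based on the details it references (the app: project-site selector label, the teja2610/project-site:latest image, container port 80, and the pod name prefix project-site-deployment), it would look roughly like the following sketch; the replica count shown is illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-site-deployment
spec:
  replicas: 2              # illustrative; adjust to the desired availability
  selector:
    matchLabels:
      app: project-site    # must match the Service selector above
  template:
    metadata:
      labels:
        app: project-site
    spec:
      containers:
        - name: project-site
          image: teja2610/project-site:latest
          ports:
            - containerPort: 80   # nginx serves the site on port 80
```

The matchLabels selector is what ties the Service's traffic routing to the pods created by this Deployment.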
Deploy the Resources to the Kubernetes Cluster:
• Apply the Deployment and Service YAML files using kubectl apply -f to
create the Kubernetes resources.
After deploying both resources with kubectl apply -f, it is important to verify
that everything is working as expected and to troubleshoot any potential
issues that may arise.
This command will display all the pods and their current status:
• The kubectl get pods command allows you to see the status of all pods
running in the Kubernetes cluster. It provides a list of pods along with their
current state
(e.g., Running, Pending, Succeeded, Failed, CrashLoopBackOff), which
helps identify whether the pods are functioning as expected or if there are
issues.
By using kubectl get pods, you can monitor not only the pod's running status but
also the RESTARTS column. If a pod restarts frequently, it could indicate an issue
with the containerized application, such as a crash or error during startup. This
helps in troubleshooting and pinpointing issues within the pods.
Image of verifying the deployment:
From the output, we can see that the majority of the pods in the cluster are in
the Running state, indicating that they are functioning as expected. Specifically,
the pods project-site-deployment-657c9b547d-hvdjl and static-site-manual are
both running without any issues, showing 1/1 in the READY column and 0 in
the RESTARTS column.
• By using kubectl get svc, you can check whether the services are properly
configured to allow external or internal traffic. For instance,
a NodePort service will have an external port listed (e.g., 30081), allowing
external traffic to reach the service via the node's IP address. This helps in
checking whether the service is accessible and whether any ports are
misconfigured.
The -o wide flag extends the output of the kubectl get pods command to show
more detailed information about each pod, such as the Node on which the pod is
running, the IP address of the pod, and the container image used. This additional
information helps you monitor the exact distribution of pods across nodes and
verify the correct image and resources are being used.
The output of the kubectl get pods -o wide command provides a detailed view of
the Kubernetes pods running in your cluster. Here's an explanation of each
column in the provided output:
1. NAME:
• project-site-deployment-657c9b547d-hvdjl: This is the name of the
pod. The name combines the deployment name, a hash (657c9b547d)
identifying the ReplicaSet created by the deployment, and a random
suffix (hvdjl) for the specific pod instance.
2. READY:
• 1/1: This indicates the pod has one container, and it is running and
ready to serve requests. The format is <ready-containers>/<total-
containers>. In this case, the pod has 1 container, and it is ready.
3. STATUS:
• Running: This means the pod is running as expected, and its
container is actively running.
4. RESTARTS:
• 0: This column shows the number of times the pod has restarted.
A 0 indicates that the pod hasn't experienced any restarts, meaning
there have been no crashes or issues with the pod.
5. AGE:
• 14h: This shows how long the pod has been running since it was
created. In this case, it has been running for 14 hours.
6. IP:
• 192.168.0.204: This is the internal IP address of the pod within the
Kubernetes cluster. This IP is used for communication between the
pod and other services or pods within the cluster.
7. NODE:
• ip-172-31-17-254: This shows the name of the node (a virtual or
physical machine) in the cluster where the pod is running. Here the
node name is a hostname derived from the node's private IP
address, 172.31.17.254.
8. NOMINATED NODE:
• <none>: This field is used to indicate if the pod has been nominated
to run on a specific node (e.g., during pod scheduling or preemption).
Since there is no nomination, it shows <none>.
9. READINESS GATES:
• <none>: This column shows the readiness gates, which are additional
conditions that must be met before a pod is considered ready. In this
case, no specific readiness gates have been defined, so it
shows <none>.
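Putting the columns above together, the corresponding row of the kubectl get pods -o wide output (reconstructed from the values discussed above) looks like:

```
NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE               NOMINATED NODE   READINESS GATES
project-site-deployment-657c9b547d-hvdjl   1/1     Running   0          14h   192.168.0.204   ip-172-31-17-254   <none>           <none>
```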
Website Access:
Once the service is exposed correctly, test the accessibility of the website. You
can do this by accessing the public IP address of the node and the port specified.
http://56.228.14.38:30081
1. 56.228.14.38:
• This is the IP address of one of the nodes in your Kubernetes cluster,
which is exposed to the external network. When you expose a
Kubernetes service using NodePort, the service is accessible via the IP
of any worker node along with the specified port number. In this
case, 56.228.14.38 is the external IP address of one of the nodes in
the cluster.
2. :30081:
This is the port number that the Kubernetes service is exposed on. Since you're
using NodePort to expose your application, the port 30081 tells Kubernetes to
route external traffic from that port to the application running inside the
cluster.
The specific service (for example, the static restaurant website) will be accessible
on port 30081 of the node’s public IP address.
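Beyond opening the URL in a browser, reachability can be confirmed from any machine with a simple HTTP request. A sketch using the node IP from this report (this address is specific to the cluster used here and will differ in other environments):

```shell
# Request the site through the NodePort; a successful deployment
# returns an HTTP 200 status followed by the page's HTML
curl -i http://56.228.14.38:30081/
```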
Output of the deployment:
Uploading the project to GitHub: GitHub project link
README File:
Learning Outcomes:
By the end of the project, we not only learned how to deploy a static website
using Kubernetes and Docker but also gained valuable insights into container
orchestration, automation, and scaling in a real-world cloud-native environment.