CI/CD

The document outlines a DevOps project focused on implementing a CI/CD pipeline system to address challenges associated with building and managing complex monolithic applications. It proposes solutions such as adopting microservices architecture, automated testing, containerization with Docker, and orchestration using Kubernetes, along with specific tools like Terraform, Jenkins, and Prometheus for infrastructure management and monitoring. Detailed setup instructions for each tool and configuration steps are provided to facilitate the implementation of the proposed solutions.

DEVOPS PROJECT

Requirements: CI/CD Pipeline System


Problem statement:
1. Building a complex monolithic application is difficult.
2. Testing the various components/modules of the project requires
manual effort.
3. Incremental builds are difficult to manage, test, and deploy.
4. Individual modules cannot be scaled independently.
5. Creating and configuring infrastructure manually is very
time-consuming.
6. Continuously monitoring the application by hand is challenging.

Mitigation:
1. Microservices Architecture – Instead of a monolithic structure,
break the application down into smaller, independent services. This
allows better scalability, independent deployment, and improved
maintainability.
2. Automated Testing & CI/CD Pipelines – Implement automated testing
(unit, integration, and end-to-end tests) with CI/CD pipelines to
reduce manual effort in testing and deployment. Tools like Jenkins
can help.
3. Containerization & Orchestration – Use Docker for containerization
and Kubernetes for orchestration to enable better modularization,
scalability, and ease of deployment.
4. Infrastructure as Code (IaC) – Automate infrastructure provisioning
with tools like Terraform to avoid manual configuration and speed up
deployment.
5. Monitoring & Observability – Use monitoring tools like Prometheus
and Grafana for real-time application monitoring, logging, and
performance tracking.

Tools implemented in this Project:


1. Infrastructure Provisioning & Management

• Tool: Terraform
• Use Case: Automate cloud infrastructure provisioning on AWS using
Infrastructure as Code (IaC).
• Benefits:
o Eliminates manual infrastructure setup.
o Ensures version-controlled, repeatable deployments.

2. Build Automation & Dependency Management

• Tool: Maven
• Use Case: Manage dependencies, automate builds, and package
Java applications.
• Benefits:
o Standardized build process for Java-based applications.
o Simplifies dependency management and project structure.
3. Continuous Integration & Deployment (CI/CD)

• Tool: Jenkins
• Use Case: Automate building, testing, and deploying applications
through pipelines.
• Benefits:
o Ensures frequent integration and fast feedback loops.
o Enables automated deployments with rollback capabilities.

4. Containerization & Deployment

• Tool: Docker
• Use Case: Containerize applications for consistent runtime
environments.
• Benefits:
o Removes environment dependency issues.
o Simplifies deployment across different environments.

5. Container Orchestration & Scaling

• Tool: Kubernetes
• Use Case: Deploy, manage, and scale containerized applications
using Kubernetes pods.
• Benefits:
o Enables microservices architecture for better scalability.
o Provides automated load balancing, self-healing, and
networking.
6. Monitoring & Observability

• Tools: Prometheus & Grafana


• Use Case:
o Prometheus collects and stores time-series monitoring data.
o Grafana visualizes metrics through interactive dashboards.
• Benefits:
o Real-time application monitoring and alerting.
o Improves visibility into system health and performance.

CI/CD Workflow with Tools Flow


1. Infrastructure Setup → Terraform

• Define infrastructure as code (IaC).


• Provision cloud resources like VMs, networking, databases, etc.

2. Version Control → Git & GitHub

• Developers push source code to GitHub repository.


• Git tracks changes and enables collaboration.

3. Build & Package → Maven

• Cleans project and compiles source code.


• Generates a .war file for the web application.

4. Jenkins Triggers the Pipeline

• The Jenkins Pipeline starts automatically when a new commit is


detected.
• Jenkins fetches the latest source code from GitHub.
5. Containerization → Docker

• Creates a Docker image using a Dockerfile.


• Ensures the application runs consistently across different
environments.
• Provides versioned, executable container images.
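The Dockerfile mentioned in this step is not reproduced in this document. A minimal sketch for a Maven-built .war is below; the Tomcat base image, Java version, and artifact name are assumptions, not confirmed by the source:

```dockerfile
# Hypothetical Dockerfile for the Maven-built web application.
# Base image and artifact path are assumptions.
FROM tomcat:9-jdk17
# Copy the Maven-built .war into Tomcat's webapps directory
COPY target/*.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["catalina.sh", "run"]
```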

6. Orchestration & Deployment → Kubernetes (K8s)

• Deploys containers as Pods in a Kubernetes cluster.


• Manages scalability, self-healing, and load balancing.

Terraform Cloud Resource Configuration :


• Install VS Code on your local machine
• Install the Terraform extension for Visual Studio Code
• Create a Terraform project and add a <filename>.tf file to hold the
resource configuration
• Enter the script below
• provider "aws" {
• region = "us-east-1"
• access_key = "xxxxxxxxxxxxxxxxxxx"
• secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
• }
• resource "aws_vpc" "terra" {
• cidr_block = "10.0.0.0/16"
• instance_tenancy = "default"

• tags = {
• Name = "terra"
• }
• }

• # Create Subnet

• resource "aws_subnet" "terrasub" {
• vpc_id = aws_vpc.terra.id
• cidr_block = "10.0.1.0/24"

• tags = {
• Name = "terrasub"
• }
• }

• # Internet Gateway

• resource "aws_internet_gateway" "terra_gw" {
• vpc_id = aws_vpc.terra.id

• tags = {
• Name = "terra_gw"
• }
• }

• # Route Table

• resource "aws_route_table" "myrt9" {
• vpc_id = aws_vpc.terra.id

• route {
• cidr_block = "0.0.0.0/0"
• gateway_id = aws_internet_gateway.terra_gw.id
• }

• tags = {
• Name = "myrt9"
• }
• }

• # Route Table Association

• resource "aws_route_table_association" "myrta9" {
• subnet_id = aws_subnet.terrasub.id
• route_table_id = aws_route_table.myrt9.id
• }

• # Security Groups

• resource "aws_security_group" "mysg9" {
• name = "mysg9"
• description = "Allow inbound traffic"
• vpc_id = aws_vpc.terra.id

• ingress {
• description = "HTTP"
• from_port = 80
• to_port = 80
• protocol = "tcp"
• cidr_blocks = ["0.0.0.0/0"]
• }

• ingress {
• description = "SSH"
• from_port = 22
• to_port = 22
• protocol = "tcp"
• cidr_blocks = ["0.0.0.0/0"]
• }

• egress {
• from_port = 0
• to_port = 0
• protocol = "-1"
• cidr_blocks = ["0.0.0.0/0"]
• ipv6_cidr_blocks = ["::/0"]
• }

• tags = {
• Name = "mysg9"
• }
• }

• # Create Instance

• resource "aws_instance" "terra_Ec2" {
• ami = "ami-0e1bed4f06a3b463d"
• instance_type = "t2.micro"
• associate_public_ip_address = true
• subnet_id = aws_subnet.terrasub.id
• vpc_security_group_ids = [aws_security_group.mysg9.id]
• key_name = "abc"
• count = 6

• tags = {
• Name = var.server_names[count.index]
• }
• }

• # Output for Public IP addresses
• # Output for details of all EC2 instances (Name, Public IP, Private IP)
• output "instance_details" {
• description = "Details of all EC2 instances with their names, public and
private IPs"
• value = [
• for instance in aws_instance.terra_Ec2 : {
• name = instance.tags["Name"]
• public_ip = instance.public_ip
• private_ip = instance.private_ip
• }
• ]
• }

• This script creates 6 instances with the supporting elements a
server needs (VPC, subnet, security group, etc.) and outputs the
details of the created servers with their IPs
• Run terraform init – to initialize Terraform within the folder
• Run terraform plan – for Terraform to show how many objects will be
added, changed, or destroyed
• Run terraform apply – to apply all changes required by the script
• Check in the AWS console that all resources were created according
to the configuration
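The instance block above references var.server_names, but its declaration is not shown in this document. A sketch of the missing variable is below; the example names are hypothetical, not taken from the project:

```hcl
# Hypothetical declaration of the server_names variable referenced by
# the aws_instance block; the actual names are not shown in the source.
variable "server_names" {
  type    = list(string)
  default = ["jenkins-master", "jenkins-worker-1", "jenkins-worker-2",
             "k8s-master", "k8s-worker-1", "k8s-worker-2"]
}
```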

Setup and Configuration of Jenkins:


# Prerequisites to install Jenkins
• sudo -i
• sudo apt update -y
• sudo apt install git -y
• sudo apt install openjdk-17-jre -y
• java -version
• sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
• echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
• sudo apt-get update
• sudo apt-get install jenkins -y
# Post-installation activity: run the below commands
• systemctl status jenkins
• systemctl stop jenkins
• systemctl start jenkins
• systemctl restart jenkins
• systemctl enable jenkins
# Open a web browser: http://<Public_IP_Address>:8080/
# (the initial admin password for the first login is printed by:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword)
# Once the master is configured, set up the Jenkins worker node, since
all builds will run on the worker
• sudo -i
• apt update -y
• sudo apt update -y
• sudo apt install openjdk-17-jre -y
• java -version
• sudo apt install git -y
• git --version
• sudo apt install maven -y
• mvn --version
• apt install docker.io -y
# Jenkins slave node configuration, to create an SSH connection from
the master node
• useradd devopsadmin -s /bin/bash -m -d /home/devopsadmin
• su - devopsadmin
• ssh-keygen -t ecdsa -b 521
• ls ~/.ssh
• id_ecdsa – private key
• id_ecdsa.pub – public key
• cat ~/.ssh/id_ecdsa.pub >> ~/.ssh/authorized_keys
• chmod 600 /home/devopsadmin/.ssh/*
• usermod -aG docker devopsadmin – run this command as the root user
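The key setup above can be sketched end to end. This version uses a scratch directory instead of /home/devopsadmin/.ssh so it can be tried safely on any machine:

```shell
#!/bin/sh
# Generate an ECDSA keypair and authorize it for key-based SSH login.
KEYDIR=$(mktemp -d)
ssh-keygen -t ecdsa -b 521 -f "$KEYDIR/id_ecdsa" -N "" -q  # empty passphrase
# Append (not overwrite) the public key to authorized_keys
cat "$KEYDIR/id_ecdsa.pub" >> "$KEYDIR/authorized_keys"
# Restrict permissions, as sshd requires for key files
chmod 600 "$KEYDIR/id_ecdsa" "$KEYDIR/authorized_keys"
ls "$KEYDIR"
```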
# Log in to Jenkins – Manage Jenkins – attach the slave node to the
Jenkins master
• Go to Manage Jenkins
• Select Nodes
• On the Nodes dashboard, click on New Node
• Give the node a name and choose Permanent Agent
Setting up a Jenkins master-slave architecture distributes workloads
across multiple nodes, improving performance and scalability.

Setup and Configuration of Kubernetes (K8s):


# Following Configuration is common for both master and worker
nodes
• sudo -i
• apt update -y
• sudo swapoff -a
• sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
• apt install docker.io -y
• sudo modprobe br_netfilter – load the br_netfilter module required
for networking
• To allow iptables to see bridged traffic, as required by Kubernetes,
set net.bridge.bridge-nf-call-iptables and
net.bridge.bridge-nf-call-ip6tables to 1 (e.g. in
/etc/sysctl.d/k8s.conf), then reload:
• sudo sysctl --system
• Install curl
• sudo apt install curl -y
• curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo
apt-key add -
• sudo add-apt-repository "deb [arch=amd64]
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
• sudo apt update -y
• sudo apt install -y containerd.io
• sudo mkdir -p /etc/containerd
• sudo containerd config default | sudo tee
/etc/containerd/config.toml
• sudo systemctl restart containerd
• sudo systemctl status containerd
# Install Kubernetes
• sudo apt-get update
• sudo apt-get install -y apt-transport-https ca-certificates curl gpg
• sudo mkdir -p -m 755 /etc/apt/keyrings
• curl -fsSL
https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
• echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-
keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' |
sudo tee /etc/apt/sources.list.d/kubernetes.list
• sudo apt-get update
• sudo apt-get install -y kubelet kubeadm kubectl
• sudo systemctl enable kubelet
# The following installation should be performed on the K8s master only
• sudo kubeadm config images pull
• sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-
preflight-errors=NumCPU --ignore-preflight-errors=Mem
• mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
• kubectl apply -f
https://github.com/coreos/flannel/raw/master/Documentation
/kube-flannel.yml
• Use the get nodes command to verify that the master node is ready
• kubectl get nodes
• Check whether all the default pods are running
• kubectl get pods --all-namespaces
• kubectl get nodes
• sudo kubeadm token create --print-join-command # to generate a new
token along with the kubeadm join command
• This token is used to join the worker nodes to the master
• On each worker node, run the join command from the master's output
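The join command printed by the master has the following general shape; the IP, token, and hash below are placeholders, not values from this project:

```shell
# General shape of the kubeadm join command (placeholder values only)
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```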
# Log in to Docker Hub and create an access token
# Configure a global credential in Jenkins, since we log in to Docker
Hub to push the image we build
# Use the snippet generator to create an "ssh publish" step to
transfer artifacts over SSH to the K8s server
# Now we are all set to create the pipeline, as all prerequisite
configuration and integration is done

Setup and Configuration of Grafana & Prometheus for
Monitoring & Observability
# Prometheus installation
• Go to https://prometheus.io/download/ # to get the download link
• wget
https://github.com/prometheus/prometheus/releases/downloa
d/v2.48.0-rc.0/prometheus-2.48.0-rc.0.linux-amd64.tar.gz
• tar -zxvf prometheus-2.48.0-rc.0.linux-amd64.tar.gz
• Create following file: sudo vi
/etc/systemd/system/prometheus.service
-------------------------------------------------
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target
[Service]
User=root
Restart=on-failure

ExecStart=/root/prometheus-2.48.0-rc.0.linux-amd64/prometheus --
config.file=/root/prometheus-2.48.0-rc.0.linux-
amd64/prometheus.yml

[Install]
WantedBy=multi-user.target
• sudo systemctl daemon-reload
• sudo systemctl start prometheus
• sudo systemctl enable prometheus
• sudo systemctl status prometheus
# grafana installation :
• wget https://dl.grafana.com/oss/release/grafana-9.1.2-
1.x86_64.rpm
• sudo yum install grafana-9.1.2-1.x86_64.rpm -y
• sudo /bin/systemctl enable grafana-server.service
• sudo /bin/systemctl start grafana-server.service
• sudo /bin/systemctl status grafana-server.service
# <grafana-external/public-ip>:3000 -- to access Grafana over the
internet
# node exporter installation
• To be installed on each node we wish to monitor
• wget
https://github.com/prometheus/node_exporter/releases/down
load/v1.4.0-rc.0/node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
• tar -zxvf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz ------to
unzip the package
• create the following file
sudo vi /etc/systemd/system/node_exporter.service
-----------------------------------------------------------------------------------------------
-------------------------------------------------
[Unit]
Description=Node Exporter
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target

[Service]
User=root
Restart=on-failure

ExecStart=/root/node_exporter-1.4.0-rc.0.linux-
amd64/node_exporter
[Install]
WantedBy=multi-user.target
• sudo systemctl daemon-reload
• sudo systemctl start node_exporter -- to start node exporter
• sudo systemctl enable node_exporter
• sudo systemctl status node_exporter
• Copy the IP address of the server you want to monitor
• Navigate to the Prometheus server and go to the installation path of
Prometheus
• vi prometheus.yml
• Add the target with the valid node_exporter port (9100 by default)
• sudo systemctl restart prometheus
• sudo systemctl status prometheus
• Go to the Prometheus server -- <prometheus-external-ip>:9090
• In the query field, type "up" and click Execute to see the list of
servers up for monitoring
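The target entry added to prometheus.yml has the following shape; the job name and target address are placeholders for this project's actual values:

```yaml
# Added under scrape_configs in prometheus.yml; job name and target
# address are placeholders.
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets: ["<worker-node-ip>:9100"]  # node_exporter default port
```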
# Grafana portal
• <grafana-external-ip>:3000
• Click on the settings button --> Data Sources --> Add data source
--> select Prometheus
• Paste the Prometheus URL
• See the Prometheus data source created
Build Deployment Through Jenkins CI/CD Pipeline:
# Navigate to Jenkins > New Item, give your project a name, and
choose Pipeline
Pipeline script:
pipeline {
    agent { label 'Jenkins_worker_2' }

    environment {
        DOCKERHUB_CREDENTIALS = credentials('capestone_docker')
    }

    stages {
        stage('SCM_Checkout') {
            steps {
                echo "Perform SCM Checkout"
                git 'https://github.com/Deepak1998226/star-agile-insurance-project.git'
            }
        }

        stage('Application Build') {
            steps {
                echo "Perform Application Build"
                sh 'mvn clean package'
            }
        }

        stage('Build Docker Image') {
            steps {
                sh 'docker version' // Check Docker version first to ensure Docker is installed
                sh "docker build -t deepak607/insurance-eta-app:${BUILD_NUMBER} ."
                sh 'docker image list' // List images to verify the image was built successfully
                // Tag the same image as :latest so the later push of
                // insurance-eta-app:latest succeeds (the original script
                // tagged healthcare-eta-app:latest here, which was never pushed)
                sh "docker tag deepak607/insurance-eta-app:${BUILD_NUMBER} deepak607/insurance-eta-app:latest"
            }
        }

        stage('Login2DockerHub') {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'capestone_docker',
                            usernameVariable: 'DOCKERHUB_CREDENTIALS_USR',
                            passwordVariable: 'DOCKERHUB_CREDENTIALS_PSW')]) {
                        sh 'echo $DOCKERHUB_CREDENTIALS_PSW | docker login -u $DOCKERHUB_CREDENTIALS_USR --password-stdin'
                    }
                }
            }
        }

        stage('Publish_to_Docker_Registry') {
            steps {
                sh "docker push deepak607/insurance-eta-app:${BUILD_NUMBER}"
                sh "docker push deepak607/insurance-eta-app:latest"
            }
        }

        stage('Deploy to Kubernetes Cluster') {
            steps {
                script {
                    sshPublisher(publishers: [
                        sshPublisherDesc(
                            configName: 'deepak',
                            transfers: [
                                sshTransfer(
                                    cleanRemote: false,
                                    excludes: '',
                                    execCommand: '''
                                        echo "Testing kubectl connection..."
                                        kubectl version
                                        if [ $? -ne 0 ]; then
                                            echo "kubectl is not working. Exiting."
                                            exit 1
                                        fi
                                        echo "Applying Kubernetes Deployment..."
                                        kubectl apply -f kubedeploy.yaml
                                    ''',
                                    execTimeout: 120000,
                                    flatten: false,
                                    makeEmptyDirs: false,
                                    noDefaultExcludes: false,
                                    patternSeparator: '[, ]+',
                                    remoteDirectory: '.',
                                    remoteDirectorySDF: false,
                                    removePrefix: '',
                                    sourceFiles: '*.yaml'
                                )
                            ],
                            usePromotionTimestamp: false,
                            useWorkspaceInPromotion: false,
                            verbose: true
                        )
                    ])
                }
            }
        }
    }
}
## Navigate back to the pipeline project page, click the Build Now
option, and verify that a build has been scheduled.
# A console output of "SUCCESS" means your pipeline executed properly.
# Now we can go to the K8s master server and check that the deployment
has been made, a NodePort service has been created to expose the
application to the web browser, and the replica set has been created
according to the YAML script provided in the GitHub repo.
# Now it is time to expose the application to the internet at
<public_ip>:<nodeport>; in my case the NodePort is 31001, as I created
it for the banking application deployment.
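The kubedeploy.yaml applied by the pipeline lives in the GitHub repo and is not reproduced here. A minimal sketch consistent with what is described (a Deployment with a replica set plus a NodePort service on 31001) is below; the resource names, labels, replica count, and container port are assumptions:

```yaml
# Hypothetical sketch of kubedeploy.yaml; names, labels, replica count,
# and container port are assumptions, not taken from the project repo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: insurance-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: insurance-app
  template:
    metadata:
      labels:
        app: insurance-app
    spec:
      containers:
        - name: insurance-app
          image: deepak607/insurance-eta-app:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: insurance-app-svc
spec:
  type: NodePort
  selector:
    app: insurance-app
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 31001   # matches the NodePort mentioned in the text
```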
# As a post-deployment activity we need to continuously monitor our
servers, since high CPU utilization or other problems can disrupt the
deployment server. For monitoring we use Prometheus and Grafana:
Prometheus collects metrics from the target servers, and Grafana
visualizes them and provides an alerting mechanism. Install Prometheus
on the node from which you want to monitor; in my case I monitor all
my K8s worker nodes from the master node, as it handles all deployment
builds.
Enter the PromQL queries for the metrics you want to monitor. In my
case I wish to monitor:
1. CPU utilization – 100 - (avg by (instance)
(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
2. Disk space utilization – (1 -
(node_filesystem_avail_bytes{fstype!="tmpfs", mountpoint="/"} /
node_filesystem_size_bytes{fstype!="tmpfs", mountpoint="/"})) * 100
3. Available memory (%) – node_memory_MemAvailable_bytes /
node_memory_MemTotal_bytes * 100
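The CPU query reads as follows: average each instance's idle-time fraction over the last 5 minutes, convert it to a percentage, and subtract from 100. A tiny shell sketch (not part of the project) of that arithmetic:

```shell
#!/bin/sh
# utilization = 100 - idle_fraction * 100
# e.g. a node that was idle 95% of the window is 5% utilized
idle_fraction=0.95
util=$(awk -v i="$idle_fraction" 'BEGIN { printf "%.2f", 100 - i * 100 }')
echo "$util"
```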
Then create a dashboard and combine these panels to visualize the
output better.
With that, the deployments run seamlessly, with all DevOps tools
integrated and implemented properly.
