DevOps Engineer - Lab Sessions - CI/CD Pipeline Scripts - Projects
Docker image – artifacts from JFrog – deployment into Kubernetes applications, automation of the CI/CD
pipeline
Build and push Docker images to the Docker Hub registry and a JFrog repository
Creating a Kubernetes manifest.yml for application deployment from the JFrog repository
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: <jfrog-repository>/<image-name>:latest  # image reference; placeholder matching the pipeline below
          ports:
            - containerPort: 8080
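To roll the manifest out and confirm the pods start, a minimal sketch (assuming the file is saved as manifest.yml):

kubectl apply -f manifest.yml      # create or update the Deployment
kubectl get pods -l app=my-app     # verify both replicas are Running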
kubectl Deployment Pipeline (Jenkins)
pipeline {
    agent any
    environment {
        DOCKER_IMAGE = "<jfrog-repository>/<image-name>"
        IMAGE_TAG = "latest"
    }
    stages {
        stage('Build Image') {
            steps {
                sh "docker build -t ${DOCKER_IMAGE}:${IMAGE_TAG} ."
            }
        }
        stage('Push Image') {
            steps {
                sh "docker push ${DOCKER_IMAGE}:${IMAGE_TAG}"
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                script {
                    sh "kubectl apply -f manifest.yml"
                }
            }
        }
    }
}
Kubernetes Architecture
Installation on EC2 instances, EKS cluster configuration, cluster ID, VPC, API, load balancing
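One way to stand up an EKS cluster from the CLI, a minimal sketch assuming eksctl is installed and AWS credentials are configured (cluster name and node count are placeholders):

eksctl create cluster --name my-cluster --region us-east-1 --nodes 2   # provisions control plane, VPC, and a node group
aws eks update-kubeconfig --name my-cluster --region us-east-1         # point kubectl at the new cluster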
Components:
API server, scheduler, controller manager, etcd (control plane); kubelet, kube-proxy, container runtime (worker nodes)
+--------------------------+
| Kubernetes Master Node   |
+--------------------------+
|                          |
|  +--------------------+  |
|  | API Server         |  |
|  +--------------------+  |
|  | Scheduler          |  |
|  +--------------------+  |
|  | Controller Manager |  |
|  +--------------------+  |
|  | ETCD               |  |
|  +--------------------+  |
+--------------------------+
+-----------------------------------+
| Kubernetes Worker Node 1 (Node) |
+-----------------------------------+
| |
| +---------------------------+ |
| | Kubelet | |
| +---------------------------+ |
| | Kube Proxy | |
| +---------------------------+ |
| | Container Runtime | |
| +---------------------------+ |
+-----------------------------------+
+-----------------------------------+
| Kubernetes Worker Node 2 (Node)   |
+-----------------------------------+
|                                   |
|  +---------------------------+    |
|  | Kubelet                   |    |
|  +---------------------------+    |
|  | Kube Proxy                |    |
|  +---------------------------+    |
|  | Container Runtime         |    |
|  +---------------------------+    |
+-----------------------------------+
Creation of the cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Application deployment
kubectl apply -f nginx-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
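Applying the Service and reading back the load balancer address, a short sketch (assuming the manifest is saved as nginx-service.yaml):

kubectl apply -f nginx-service.yaml
kubectl get svc nginx-service   # the EXTERNAL-IP column shows the load balancer endpoint once provisioned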
Launch the EC2 instances (master and worker nodes) along with the VPC and security groups
Installation of kubeadm, kubectl, and kubelet
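A minimal installation sketch for Ubuntu-based EC2 instances, assuming the pkgs.k8s.io package repository (the version pin is an example):

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl   # prevent unintended upgrades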
Initialize the control plane with kubeadm init, then copy the admin kubeconfig so kubectl can reach the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
When using an external etcd cluster, the API server is pointed at it with the flag:
- --etcd-servers=https://<etcd-server-ip>:2379
Inspecting pods, nodes, services, deployments, and the cluster:
Pod details
Cluster details
Deployment details
Services
Logs
Namespaces
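The matching kubectl commands, a quick reference sketch:

kubectl get pods -o wide      # pod details
kubectl cluster-info          # cluster details
kubectl get deployments       # deployment details
kubectl get services          # services
kubectl logs <pod-name>       # logs from a pod
kubectl get namespaces        # namespaces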
# user-service Dockerfile
FROM node:14
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]   # entrypoint filename is an assumption
# order-service Dockerfile
FROM node:14
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 4000
CMD ["node", "server.js"]   # entrypoint filename is an assumption
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: <your-registry>/user-service:v1
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: <your-registry>/order-service:v1
          ports:
            - containerPort: 4000
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  clusterIP: None
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 4000
  clusterIP: None
apiVersion: v1
kind: Service
metadata:
  name: user-service-lb
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
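Applying all of the microservice manifests in one pass, a short sketch (the filenames are placeholders for the manifests above):

kubectl apply -f user-service-deployment.yaml -f order-service-deployment.yaml
kubectl apply -f user-service-svc.yaml -f order-service-svc.yaml -f user-service-lb.yaml
kubectl get pods,svc   # confirm all replicas and services are up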
User service
Order service
// Example of the Order Service making an HTTP request to the User Service (Node.js)
const axios = require('axios');
axios.get('http://user-service:80/users')
  .then(response => {
    console.log(response.data);
  })
  .catch(error => {
    console.error('Request to user-service failed:', error.message);
  });
Updating microservices
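A rolling update can be triggered by pointing the Deployment at a new image tag, a minimal sketch (the v2 tag is a placeholder):

kubectl set image deployment/user-service user-service=<your-registry>/user-service:v2
kubectl rollout status deployment/user-service   # wait for the rollout to finish
kubectl rollout undo deployment/user-service     # roll back if something breaks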
Architecture
[CI/CD flow diagram: two components connected by a Webhook/Trigger, feeding a third component downstream; the box labels did not survive extraction]
HashiCorp Vault
AWS Secrets Manager
Azure Key Vault
CyberArk
Using HashiCorp Vault with Jenkins
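A minimal sketch of reading a secret from Vault inside a pipeline shell step, assuming the Vault CLI is installed and VAULT_ADDR/VAULT_TOKEN are provided (the path and key names are placeholders):

vault kv put secret/myapp db_password=example123              # seed a secret (one-time)
DB_PASSWORD=$(vault kv get -field=db_password secret/myapp)   # read it back in a Jenkins sh step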
AWS Integration
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['dynamic-ssh-key']) {
                    sh 'ssh -o StrictHostKeyChecking=no ec2-user@<ec2-host> ./deploy.sh'   // host and script are placeholders
                }
            }
        }
    }
}
Build and push the Docker image; push and pull from remote and local repositories
Deploy the application to staging and production environments on Kubernetes (manifest.yml) and Docker
pipeline {
    agent any
    environment {
        DOCKER_IMAGE = "<registry>/<image-name>"   // placeholder registry/image
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean compile'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
            }
        }
        stage('Build & Push Image') {
            steps {
                sh '''
                    docker build -t ${DOCKER_IMAGE}:${BUILD_NUMBER} .
                    docker push ${DOCKER_IMAGE}:${BUILD_NUMBER}
                '''
            }
        }
        stage('Deploy to Staging') {
            steps {
                sh 'kubectl apply -f k8s/staging/'   // staging manifests (placeholder path)
            }
        }
        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            steps {
                sh 'kubectl apply -f k8s/production/'   // production manifests (placeholder path)
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        success {
            echo 'Pipeline completed successfully'
        }
        failure {
            echo 'Pipeline failed'
        }
    }
}
pipeline {
    agent any
    stages {
        stage('Build Services') {
            parallel {
                stage('Service A') {
                    steps {
                        sh './build-service-a.sh'   // placeholder build script
                    }
                }
                stage('Service B') {
                    steps {
                        sh './build-service-b.sh'   // placeholder build script
                    }
                }
            }
        }
    }
}
Handling Multi-Environment Credentials
pipeline {
    agent any
    parameters {
        choice(name: 'ENV', choices: ['dev', 'stage', 'prod'])
    }
    environment {
        CRED_ID = "db-cred-${params.ENV}"
    }
    stages {
        stage('Use Credential') {
            steps {
                withCredentials([string(credentialsId: env.CRED_ID, variable: 'DB_PASS')]) {
                    sh 'echo "Using the ${ENV} database credential"'
                }
            }
        }
    }
}
Pipeline failure troubleshooting
Space allocation
GitHub webhook configuration
Docker
Architecture
+------------------+      +------------------+      +------------------+
|  Docker Client   | ---> |  Docker Daemon   | <--> |     Registry     |
|  (CLI/REST API)  |      |    (dockerd)     |      |   (Docker Hub)   |
+------------------+      +------------------+      +------------------+
                                   |
                                   v
                          +------------------+
                          |  Host OS Kernel  |
                          | (Linux/Windows)  |
                          +------------------+
Github, Gitlab
git status
git push
git pull
Merging branches
git log
Stashing changes
git stash
git commit
git diff
git tag v6
Undoing changes
Rebase, checkout
GitHub branches
Verifying remotes
git remote -v
Forking a Repository
Creating a pull request from the current branch to the main branch or another repository
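From the command line this can be done with the GitHub CLI, a short sketch (assuming gh is installed and authenticated; the branch name is a placeholder):

gh pr create --base main --head feature/<feature-name> --title "My change" --body "Description of the change"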
name: CI   # workflow name (placeholder)
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 14
      - run: npm install   # assumed build/test steps
      - run: npm test
Rebase Regularly:
git checkout feature/<feature-name>
git fetch origin
git rebase origin/main
Ensure all unit tests and integration tests pass before pushing.
git add .
Branch naming conventions:
feature/<feature-name>
release/<release-version>
hotfix/<hotfix-name>
Jenkins Integration
Webhook Integration
GitHub Integration:
Bitbucket Integration:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh './build.sh'
            }
        }
    }
}
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
    post {
        success {
            echo 'Build and tests passed'
        }
    }
}
Branch-Specific Logic
pipeline {
    agent any
    stages {
        stage('Build') {
            when {
                branch 'main'
            }
            steps {
                sh './build-production.sh'
            }
        }
        stage('Test') {
            when {
                not {
                    branch 'main'
                }
            }
            steps {
                sh './run-tests.sh'
            }
        }
    }
}
Docker Architecture
Dockerfile creation, building the Docker image, running and accessing the deployed application,
pushing the Docker image to a Docker registry,
application deployment to an EC2 instance, and deployment to Kubernetes
# Base image
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # install dependencies
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]   # entrypoint filename is an assumption
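Building and running the container, a quick sketch (the image tag matches the one used later in these notes):

docker build -t my-python-app:1.0 .
docker run -d -p 5000:5000 my-python-app:1.0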
Access the application at http://localhost:5000
docker login
Tag and push the Docker image for the containerized Python application
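A minimal sketch, assuming a placeholder registry name:

docker tag my-python-app:1.0 <your-registry>/my-python-app:1.0
docker push <your-registry>/my-python-app:1.0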
Deploy to Kubernetes
Environment Variables:
ENV APP_ENV=production
Verification of the container
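Typical verification commands, a short sketch (the container ID is a placeholder):

docker ps                          # confirm the container is running
docker logs <container-id>         # inspect application output
docker exec -it <container-id> sh  # open a shell inside the container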
Docker daemon dead
Space allocation, configuration, container conflicts, Docker services
Verification of disk space
df -h
top
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)
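If disk pressure is the cause, unused images, containers, networks, and volumes can be reclaimed (destructive; a sketch to adapt with care):

docker system prune -a --volumes   # removes all unused data, including volumes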
Docker version, start and stop, push and pull from a Docker registry, volumes, networks,
containers, the Docker daemon, importing and exporting containers, running Docker in a local
environment, and running multiple Docker containers through a YAML file (Docker Compose)
docker --version
docker version
docker ps
docker ps -a
docker stop <container-id>
docker rm <container-id>
docker info
docker system df
docker network ls
docker network create my-network
docker volume ls
docker volume create <volume-name>
docker run -v my-volume:/app/data my-python-app:1.0
docker login
version: '3'
services:
  app:
    image: my-python-app:1.0
    ports:
      - "5000:5000"
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
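Bringing the stack up and tearing it down (assuming the file is saved as docker-compose.yml):

docker-compose up -d    # start app and db in the background
docker-compose down     # stop and remove the containers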
Automation scripts
Deployment of EC2 instances: start, stop, run; security groups, IAM roles, Simple Storage
Service (S3)
Deployment and configuration of an EC2 instance
provider "aws" {
region = "us-east-1"
}
tags = {
Name = "WebServer"
}
}
terraform init
terraform plan
terraform apply
terraform destroy
Terraform variables
variable "region" {
default = "us-east-1"
}
provider "aws" {
region = var.region
}
Terraform state
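Common commands for inspecting state, a quick sketch (the resource address is a placeholder):

terraform state list                            # resources tracked in the state file
terraform state show aws_instance.web_server    # details for one resource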
Terraform modules
module "vpc" {
source = "./modules/vpc"
cidr = "10.0.0.0/16"
}
provider "aws" {
region = "us-east-1"
}
provider "azurerm" {
features = {}
}
Disaster Recovery
Remote backends
Storage of the state file in Simple Storage Service (S3) on AWS, or equivalents on Azure and Google Cloud
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "state/terraform.tfstate"
region = "us-east-1"
}
}
Creation of 5 instances
# Create a VPC
resource "aws_vpc" "main_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "MainVPC"
  }
}

# Create a subnet in the VPC (the CIDR is an assumption)
resource "aws_subnet" "main_subnet" {
  vpc_id     = aws_vpc.main_vpc.id
  cidr_block = "10.0.1.0/24"
}

# Create 5 EC2 instances (the AMI ID is a placeholder)
resource "aws_instance" "web" {
  count         = 5
  ami           = "<ami-id>"
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.main_subnet.id
  tags = {
    Name = "Instance-${count.index + 1}" # Unique name for each instance
  }
}
terraform init
terraform validate
terraform plan
terraform apply
terraform destroy
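The instance IDs below are printed by terraform apply; assuming an output named instance_ids is defined over aws_instance.web, they can also be read back at any time with:

terraform output instance_ids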
Outputs:
instance_ids = [
"i-0abcd1234efgh5678",
"i-1abcd1234efgh5678",
"i-2abcd1234efgh5678",
"i-3abcd1234efgh5678",
"i-4abcd1234efgh5678"
]
Terraform configuration