DevOps Engineer – Lab Sessions – CI/CD Pipeline Scripts – Projects


GitHub, Jenkins, Docker, Kubernetes, Terraform, Ansible, automated CI/CD pipeline

Docker image – artifacts from JFrog – deployment into Kubernetes applications, automation of the CI/CD
pipeline

Build and push Docker images to the Docker Hub registry and the JFrog repository

Creating a Kubernetes manifest (deployment.yaml) that pulls from the JFrog repository for application deployment

Automation of the CI/CD pipeline

Build and push Docker images into the JFrog repository

docker build -t <jfrog-repository>/<image-name>:<tag> .

docker push <jfrog-repository>/<image-name>:<tag>

Building the Kubernetes Deployment manifest (deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <jfrog-repository>/<image-name>:<tag> # Use the image from Artifactory
        ports:
        - containerPort: 8080

Deploying with kubectl

# Ensure kubectl is configured with your Kubernetes cluster

kubectl config use-context <context-name>

# Apply the Kubernetes deployment

kubectl apply -f deployment.yaml

Verification of the Kubernetes deployment

kubectl get pods

kubectl get services

kubectl logs <pod-name>

CI/CD Automation Pipeline

The following Jenkins pipeline builds the Docker image, pushes it to the JFrog repository, and deploys the application to Kubernetes.

pipeline {
    agent any
    environment {
        DOCKER_IMAGE = "<jfrog-repository>/<image-name>"
        IMAGE_TAG = "latest"
    }
    stages {
        stage('Build Docker Image') {
            steps {
                script {
                    sh "docker build -t ${DOCKER_IMAGE}:${IMAGE_TAG} ."
                }
            }
        }
        stage('Push Docker Image to Artifactory') {
            steps {
                script {
                    sh "docker push ${DOCKER_IMAGE}:${IMAGE_TAG}"
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                script {
                    sh "kubectl apply -f deployment.yaml"
                }
            }
        }
    }
}

Kubernetes Architecture

Containerized applications, scalable deployment of microservices

Master and worker nodes, pods, services

Installation on EC2 instances, EKS, cluster configuration, cluster ID, VPC, API, load balancing

Installation via sudo apt packages

kubeadm, kubectl, kubelet

Pods, ReplicaSet, Deployment, Services (svc), Namespace

Logs, scaling, replicas, deployments, monitoring using Grafana, Nagios, Prometheus

Components:

Kubernetes control plane:

API server, replication

Scheduler

etcd: key-value store for cluster state; backs external services and load balancing

Master and worker nodes

+--------------------------+
| Kubernetes Control Plane |
+--------------------------+
|                          |
|  +--------------------+  |
|  | API Server         |  |
|  +--------------------+  |
|  | Scheduler          |  |
|  +--------------------+  |
|  | Controller Manager |  |
|  +--------------------+  |
|  | etcd               |  |
|  +--------------------+  |
+--------------------------+

+-----------------------------------+
| Kubernetes Worker Node 1 (Node)   |
+-----------------------------------+
|                                   |
|  +---------------------------+    |
|  | Kubelet                   |    |
|  +---------------------------+    |
|  | Kube Proxy                |    |
|  +---------------------------+    |
|  | Container Runtime         |    |
|  +---------------------------+    |
+-----------------------------------+

+-----------------------------------+
| Kubernetes Worker Node 2 (Node)   |
+-----------------------------------+
|                                   |
|  +---------------------------+    |
|  | Kubelet                   |    |
|  +---------------------------+    |
|  | Kube Proxy                |    |
|  +---------------------------+    |
|  | Container Runtime         |    |
|  +---------------------------+    |
+-----------------------------------+

Creation of an EKS cluster

aws eks create-cluster --name <cluster-name> --role-arn <IAM-role-arn> --resources-vpc-config subnetIds=<subnet-id-1>,<subnet-id-2>,securityGroupIds=<sg-id>

Configuration of kubeconfig

aws eks --region <region> update-kubeconfig --name <cluster-name>
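
To confirm the cluster came up before pointing kubectl at it, the status can be checked (a quick sanity check; the --query filter simply extracts the status field, which should read ACTIVE):

aws eks describe-cluster --name <cluster-name> --query "cluster.status"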

Setting up EKS nodes on EC2 instances

aws eks create-nodegroup --cluster-name <cluster-name> --nodegroup-name <nodegroup-name> --subnets <subnet-id> --instance-types <instance-type> --scaling-config minSize=1,maxSize=3,desiredSize=2

Deploying applications with kubectl

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Applying the deployment
kubectl apply -f nginx-deployment.yaml

Service – LoadBalancer – Kubernetes

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Applying the service for the nginx server

kubectl apply -f nginx-service.yaml

Monitoring Kubernetes applications using Grafana and Nagios

Setting up a Kubernetes cluster on EC2 using the kubeadm tool

Launch the EC2 instances (master and worker nodes) along with VPC and security groups

Installation of Docker on both EC2 instances

Installation of Kubernetes components: kubeadm, kubectl, kubelet

Configuration of the Kubernetes master node

Configuration of kubectl on the master node

sudo apt-get update

sudo apt-get install -y docker.io

sudo systemctl enable docker

sudo systemctl start docker

Installation of kubeadm, kubectl, kubelet

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-add-repository "deb https://apt.kubernetes.io/ kubernetes-xenial main"

sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl

Configuration of the Kubernetes master node

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Configuration of kubectl on the master node

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Set up the pod network using Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Joining worker nodes to the cluster using kubeadm and the join token

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
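
If the original join token has expired, a fresh join command can be printed on the master node (a standard kubeadm helper):

sudo kubeadm token create --print-join-command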


Verification from the master node

kubectl get nodes

Configuration of etcd (key-value store)

Run etcd on a separate EC2 instance as an external server; the API server is then pointed at it:

- --etcd-servers=https://<etcd-server-ip>:2379
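
In a kubeadm setup, this flag lives in the API server's static pod manifest; a minimal sketch of the relevant excerpt (the path below assumes the default kubeadm layout):

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --etcd-servers=https://<etcd-server-ip>:2379
    # ...remaining flags unchanged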

kubectl command reference and scenarios

Pods, nodes, services, deployments, cluster

Kubernetes deployment rollout

Pod details

kubectl get pods --all-namespaces

kubectl get pods -n <namespace>

kubectl get pods

kubectl describe pod <pod-name>

kubectl logs <pod-name> # Logs for the default container

kubectl logs <pod-name> -c <container-name> # Logs for a specific container

kubectl logs -f <pod-name>

kubectl delete pod <pod-name>

Cluster details

kubectl config view

kubectl config current-context

kubectl get nodes

kubectl rollout undo deployment/<deployment-name>

Deployment details

kubectl get deployments --all-namespaces


kubectl get deployments -n <namespace>

kubectl create deployment <deployment-name> --image=<image-name>

kubectl set image deployment/<deployment-name> <container-name>=<new-image>

kubectl rollout status deployment/<deployment-name>

Services

kubectl get svc --all-namespaces

kubectl get svc -n <namespace>

kubectl describe svc <service-name>

kubectl expose pod <pod-name> --port=<port> --target-port=<target-port> --name=<service-name>

kubectl expose deployment <deployment-name> --port=<port> --target-port=<target-port> --name=<service-name>

kubectl delete svc <service-name>

Logs

kubectl logs -n kube-system <api-server-pod-name>

kubectl describe node <node-name>   # node-level details (kubectl logs targets pods, not nodes)

kubectl describe <resource> <name>

kubectl get events

kubectl top nodes

kubectl top pods
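
Note that kubectl top requires the metrics-server add-on in the cluster; if the commands above fail, it can typically be installed from the upstream manifest (version pinning omitted here):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml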

Namespaces

kubectl config set-context --current --namespace=<namespace-name>

kubectl create namespace <namespace-name>


Scaling & autoscaling a deployment

kubectl scale deployment <deployment-name> --replicas=<number-of-replicas>

kubectl autoscale deployment <deployment-name> --min=<min-replicas> --max=<max-replicas> --cpu-percent=<cpu-threshold>

Deploying Microservices on Kubernetes

Initialization of microservices: user service and order service

Dockerization of each microservice

Push the Docker images to ECR or the Docker Hub registry

Create Kubernetes deployment YAML for the microservices

Create a Kubernetes service for each microservice

Initiating an external service (load balancer)

Deploying the microservices to Kubernetes

Managing interconnection between microservices

Monitoring and scaling microservices

Initialization of microservices (user and order services)
and Dockerization of each microservice

User service Dockerfile

# Use a base image

FROM node:14

# Set the working directory

WORKDIR /app
# Install dependencies

COPY package*.json ./

RUN npm install

# Copy the application code

COPY . .

# Expose the necessary port

EXPOSE 3000

# Start the application

CMD ["node", "index.js"]

Order service Dockerfile

# Use a base image

FROM node:14

# Set the working directory

WORKDIR /app

# Install dependencies

COPY package*.json ./

RUN npm install

# Copy the application code

COPY . .

# Expose the necessary port

EXPOSE 4000

# Start the application


CMD ["node", "index.js"]

Build and push the user service image

docker build -t <your-registry>/user-service:v1 .

docker push <your-registry>/user-service:v1

Build and push the order service image

docker build -t <your-registry>/order-service:v1 .

docker push <your-registry>/order-service:v1

Creating Kubernetes deployment YAML for the microservices

User service deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: <your-registry>/user-service:v1
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

Order service deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: <your-registry>/order-service:v1
        ports:
        - containerPort: 4000
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

Creating Kubernetes services for the microservices

User service YAML (headless service)

apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  clusterIP: None
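
With clusterIP: None this is a headless service: cluster DNS resolves the service name directly to the pod IPs, which is what lets the order service reach the user service by name (http://user-service) in the interconnection example further below.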

Order service YAML (headless service)

apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 4000
  clusterIP: None

Initiating an external service (LoadBalancer)

apiVersion: v1
kind: Service
metadata:
  name: user-service-lb
spec:
  selector:
    app: user-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  type: LoadBalancer

Deploying the microservices to Kubernetes

User service

kubectl apply -f user-service-deployment.yaml

kubectl apply -f user-service-service.yaml

Order service

kubectl apply -f order-service-deployment.yaml

kubectl apply -f order-service-service.yaml

Load balancing service (external)


kubectl get svc user-service-lb

Pod interconnection between microservices

Order service calling the user service over HTTP:

// Example of Order Service making an HTTP request to User Service (Node.js example)

const axios = require('axios');

axios.get('http://user-service:80/users')

.then(response => {

console.log(response.data);

})

.catch(error => {

console.error("Error calling User Service:", error);

});

Monitoring & scaling the microservices

kubectl get pods

kubectl scale deployment user-service --replicas=5

kubectl scale deployment order-service --replicas=5

kubectl top pods

kubectl logs <pod-name>

Updating microservices

kubectl set image deployment/user-service user-service=<new-image-name>

kubectl set image deployment/order-service order-service=<new-image-name>


Jenkins

Architecture, webhooks, authentication, secret keys, Azure Key Vault, AWS Secrets Manager

Managing pipelines, plugins, CI/CD pipelines, Groovy scripted and declarative pipelines

Triggering the CI/CD pipeline using webhooks

Architecture

+-------------------+                    +-------------------+
|                   |                    |                   |
|     Developer     |------------------->|  Source Control   |
|  (Commits Code)   |                    |  (GitHub, etc.)   |
+-------------------+                    +-------------------+
                                                  |
                                           Webhook/Trigger
                                                  |
                                                  v
+-------------------+   Schedules Jobs   +-------------------+
|                   |<------------------>|                   |
|  Jenkins Master   |                    |  Jenkins Plugins  |
|   (Controller)    |------------------->| (e.g., Git,       |
|                   |    Assigns Jobs    |  Docker,          |
|                   |                    |  Kubernetes)      |
+-------------------+                    +-------------------+
          ^                                       |
          |                                       v
          |                             +-------------------+
          |                             |                   |
          +-----------------------------|  Jenkins Agents   |
                 (Results & Logs)       |  (Static/Dynamic) |
                                        +-------------------+

Real-Time Project Credential Management with Jenkins

Centralized Secret Management

 HashiCorp Vault
 AWS Secrets Manager
 Azure Key Vault
 CyberArk
Using HashiCorp Vault with Jenkins

1. Install the HashiCorp Vault Plugin in Jenkins.


2. Configure the Vault plugin in Manage Jenkins → Configure System:
o Vault Address
o Authentication Method (Token, AppRole, etc.)
3. Use the secrets in a pipeline step:

withVault([vaultSecrets: [[path: 'secret/data/myapp', secretValues: [[envVar: 'DB_USER', vaultKey: 'username'], [envVar: 'DB_PASS', vaultKey: 'password']]]]]) {
    echo "Using DB credentials: $DB_USER"
}

AWS Integration

1. Install Plugins: Install the AWS Credentials Plugin.


2. Add AWS Credentials:
   o Go to Manage Jenkins → Manage Credentials.
   o Add an AWS Access Key credential.
3. Use the credentials in a pipeline step:

withAWS(credentials: 'aws-credential-id', region: 'us-east-1') {
    sh 'aws s3 ls'
}

Dynamic Credential Injection

 Jenkins generates an SSH key during runtime.


 The public key is pushed to the target server.

 The private key is used for a temporary connection.

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['dynamic-ssh-key']) {
                    sh 'ssh user@server "ls -l"'
                }
            }
        }
    }
}

Plugins for the CI/CD pipeline

Jenkins installation and prerequisites for the CI/CD pipeline

Plugins and tools

 Git Plugin (or SCM-specific plugin)


 Pipeline Plugin
 Docker Pipeline Plugin (for Docker)
 Credentials Plugin
 Deployment-specific plugins (e.g., Kubernetes, AWS, Azure)

 Code Repository: Hosted in GitHub, GitLab, or Bitbucket.


 Build Tools: Maven, Gradle, npm, etc.
 Target Deployment Platform: Kubernetes, Docker, AWS, etc.

CI/CD pipeline workflow configuration

1. Source Code Management: Pull code from the repository.


2. Build Stage: Compile the code and generate artifacts.
3. Test Stage: Run automated unit tests, integration tests, and other checks.
4. Package/Artifact Storage: Store artifacts in a repository (e.g., Nexus, Artifactory, or
S3).
5. Deploy Stage: Deploy the application to staging or production.
Jenkins Configuration

Create a pipeline job

Configuration of the Jenkinsfile in the pipeline

Groovy pipeline: stages, steps, agents, when conditions, post

Docker login using credentials

Check-in & checkout for GitHub, cloning

Build the application with the Maven tool

Test the application: JUnit test cases, automation scripts

Creation of artifacts (.jar, .war) in JFrog Artifactory or Nexus in the package stage

Build & push the Docker image; push & pull from remote & local repositories

Deploy the application in staging & production environments to Kubernetes & Docker via YAML manifests

Post stage: pipeline monitoring, cleaning up resources

Triggering the Pipeline

 Manual Trigger: Run the pipeline manually in Jenkins.


 Automated Trigger: Set up triggers in Jenkins:
o Poll SCM: Jenkins polls the repository at regular intervals (see the sketch below).
o Webhook: Trigger the pipeline when a commit is pushed to the repository.
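
A minimal declarative sketch of the Poll SCM trigger (the cron spec 'H/5 * * * *' is illustrative; webhook triggers are configured on the job and GitHub side rather than in the Jenkinsfile):

pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *') // poll the repository roughly every 5 minutes
    }
    stages {
        stage('Build') {
            steps {
                echo 'Triggered by an SCM change'
            }
        }
    }
}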

pipeline {
    agent any
    environment {
        DOCKER_CREDENTIALS = credentials('docker-cred-id') // Use your Jenkins credentials ID
    }
    stages {
        stage('Checkout') {
            steps {
                echo 'Cloning the repository...'
                git branch: 'main', url: 'https://github.com/your-repo.git'
            }
        }
        stage('Build') {
            steps {
                echo 'Building the application...'
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                echo 'Running tests...'
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                echo 'Storing artifacts...'
                archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
            }
        }
        stage('Docker Build & Push') {
            steps {
                echo 'Building Docker image...'
                sh 'docker build -t my-app:${BUILD_NUMBER} .'
                echo 'Pushing Docker image to registry...'
                sh '''
                    docker login -u $DOCKER_CREDENTIALS_USR -p $DOCKER_CREDENTIALS_PSW
                    docker tag my-app:${BUILD_NUMBER} my-docker-repo/my-app:${BUILD_NUMBER}
                    docker push my-docker-repo/my-app:${BUILD_NUMBER}
                '''
            }
        }
        stage('Deploy to Staging') {
            steps {
                echo 'Deploying to staging environment...'
                sh 'kubectl apply -f k8s/deployment-staging.yaml'
            }
        }
        stage('Deploy to Production') {
            when {
                branch 'main'
            }
            steps {
                echo 'Deploying to production environment...'
                sh 'kubectl apply -f k8s/deployment-production.yaml'
            }
        }
    }
    post {
        always {
            echo 'Cleaning up workspace...'
            cleanWs()
        }
        success {
            echo 'Pipeline executed successfully.'
        }
        failure {
            echo 'Pipeline failed.'
        }
    }
}

Real-Time Scenario: CI/CD for Microservices

pipeline {
    agent any
    stages {
        stage('Build All Services') {
            parallel {
                stage('Service A') {
                    steps {
                        build job: 'Service-A-Pipeline'
                    }
                }
                stage('Service B') {
                    steps {
                        build job: 'Service-B-Pipeline'
                    }
                }
            }
        }
    }
}
Handling Multi-Environment Credentials

pipeline {
    agent any
    parameters {
        choice(name: 'ENV', choices: ['dev', 'prod'], description: 'Select Environment')
    }
    environment {
        CRED_ID = "db-cred-${params.ENV}"
    }
    stages {
        stage('Use Credential') {
            steps {
                withCredentials([usernamePassword(credentialsId: env.CRED_ID, usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh 'echo "Deploying with $USER"'
                }
            }
        }
    }
}
Pipeline failures: troubleshooting

Logs, isolating logs

Dependency & network configuration, timeouts

Configuration files in Jenkins

Authentication failures, branch issues, failed webhook triggers

Disk space allocation

GitHub webhook configuration
Docker

Architecture

+------------------+
|  Docker Client   | <---------------------+---------------------+
|  (CLI/REST API)  |                       |                     |
+------------------+                       |                     |
         |                                 |                     |
         v                                 v                     v
+------------------+            +------------------+  +-------------------+
|  Docker Daemon   | <--------> | Container Engine |  |  Docker Registry  |
|                  |            |                  |  |   (Docker Hub)    |
+------------------+            +------------------+  +-------------------+
         |
         v
+------------------+
|  Host OS Kernel  |
| (Linux/Windows)  |
+------------------+

GitHub, GitLab

GitHub branching strategies, list of commands, scenarios, projects

CI/CD pipeline, Jenkins, webhooks, authentication, Azure DevOps, AWS CodeDeploy

Clone, add files, commit, push, pull, stage, status, merge

git clone <repository-url>

git clone https://github.com/username/project.git

git status

git add <file>

git add .
git add index.html


git commit -m "Commit message"

git commit -m "Added login functionality"

git push

git push origin main

git pull

git pull origin main

git branch <branch-name>

git branch feature/login

Switching to another branch

git checkout <branch-name>

git branch -d <branch-name>

Merging branches

git checkout <target-branch>

git merge <source-branch>

git checkout main

git merge feature/login

Commit history and logs

git log

git log --oneline --graph

Stashing changes

Temporarily save changes

git stash

git stash apply
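
Related commands to inspect the saved stashes and to apply-and-drop in one step:

git stash list
git stash pop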


Merge conflicts

git add <file>

git commit

git diff

git tag <tag-name>

git tag v6

git push origin v6

Undoing changes

git reset HEAD~

git revert <commit-hash>

Rebase, checkout

git checkout feature/login

git rebase main

git rebase -i HEAD~<number-of-commits>

Overwriting remote history

git push origin <branch> --force
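
A safer alternative is git push --force-with-lease, which refuses to overwrite commits on the remote that you have not yet fetched.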

GitHub branches

Setting up a remote repository

git remote add origin <repository-url>

git remote add origin https://github.com/username/project.git

Verifying the remote
git remote -v

git push origin --delete <branch-name>

Forking a Repository

Making your own copy of another repository

git fetch upstream

git merge upstream/main

Creating a pull request from the current branch to the main branch or another repository

git push origin <branch-name>

Navigate to the GitHub repository and click on Pull Request

CI/CD pipeline integration with GitHub (GitHub Actions) and Jenkins

 Creation of the .github/workflows/main.yml file.



name: CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: 14
      - run: npm install
      - run: npm test

Dev, UAT, prod, feature and hotfix branches, main branches

Feature branch, GitHub workflow, configuration

 GitHub branching strategies

 Example: feature/user-authentication, feature/payment-gateway.

 Keep Branches Small:

 Limit feature branches to a single feature or task to reduce complexity.

 Rebase Regularly:

 Rebase the feature branch on the base branch to stay updated.

git checkout feature/<feature-name>
git fetch origin
git rebase origin/main

 Test Locally Before Push:

 Ensure all unit tests and integration tests pass before pushing.

 Run CI/CD Pipelines on PRs:

Run the pipeline locally on your machine before deploying.

Delete Merged Branches:

Clean up merged branches.

Adding a login feature

Creating a branch, committing code, pushing to remote

Creating a pull request, review and merge


git checkout -b feature/login

git add .

git commit -m "Login functionality"

git push origin feature/login

Create a pull request, review, and merge

Larger projects: dev, UAT, feature, prod branches

 feature/<feature-name>
 release/<release-version>
 hotfix/<hotfix-name>

Jenkins Integration

Webhook Integration

 Use webhooks to trigger Jenkins builds on branch commits or pull requests.

GitHub Integration:

o Install the GitHub plugin in Jenkins.


o Set up a webhook in GitHub to notify Jenkins on changes.

Bitbucket Integration:

o Install the Bitbucket plugin in Jenkins.


o Configure a webhook in Bitbucket to trigger builds.

Jenkins Integration with Branching Strategies

1. Multibranch Pipeline job: dev, testing, new feature, release, and prod branches

Jenkins → New Item → Multibranch Pipeline plugin

Configuration of the GitHub repository

CI/CD pipeline automation for each branch

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                echo "Building branch: ${env.BRANCH_NAME}"
                sh './build.sh'
            }
        }
    }
}

Pipeline for Pull Requests

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
    post {
        success {
            echo "Build and tests passed for PR"
        }
    }
}

Branch-Specific Logic

pipeline {
    agent any
    stages {
        stage('Build') {
            when {
                branch 'main'
            }
            steps {
                sh './build-production.sh'
            }
        }
        stage('Test') {
            when {
                not {
                    branch 'main'
                }
            }
            steps {
                sh './run-tests.sh'
            }
        }
    }
}

Docker Architecture

List of commands, scenarios, Docker daemon dead, status

Dockerfile creation, building a Docker image, running & accessing the deployed application,
pushing the Docker image to a Docker registry,
application deployment to an EC2 instance, deployment to Kubernetes

Docker push, pull, inspect, volume, logs, status

Creation of a Dockerfile

# Base image

FROM python:3.9-slim

# Set the working directory

WORKDIR /app

# Copy requirements file and install dependencies

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code

COPY . .

# Expose the application port

EXPOSE 5000

# Command to run the application

CMD ["python", "app.py"]

Build the Docker image


docker build -t python-app:1.0 .

Run the Docker application

docker run -d -p 5000:5000 python-app:1.0

Access the Python application

http://localhost:5000

Hello, Dockerized Python Application!

Push the Docker Image to a Registry

Log in to Docker Hub

docker login

Tag & push the Docker image & containerized Python application

docker tag python-app:1.0 <your-dockerhub-username>/python-app:1.0

docker push <your-dockerhub-username>/python-app:1.0

Deploy to a Server or Cloud

Deploy to an EC2 Instance

 SSH into the EC2 instance.

Pull the Docker image

docker pull <your-dockerhub-username>/python-app:1.0

Deploy to Kubernetes

Creation of the Kubernetes deployment file (YAML)


apiVersion: apps/v1
kind: Deployment
metadata:
name: python-app
spec:
replicas: 2
selector:
matchLabels:
app: python-app
template:
metadata:
labels:
app: python-app
spec:
containers:
- name: python-app
image: <your-dockerhub-username>/python-app:1.0
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: python-app-service
spec:
type: LoadBalancer
selector:
app: python-app
ports:
- protocol: TCP
port: 80
targetPort: 5000

kubectl apply -f deployment.yaml

 Use Lightweight Base Images:

 Use python:3.9-alpine instead of python:3.9-slim for smaller image sizes.

 Minimize Layers in Dockerfile:

 Combine RUN commands to reduce image layers (see the sketch after this list).

 Environment Variables:

 Use ENV in the Dockerfile for configurable settings.

ENV APP_ENV=production

 Verification of the container

 Add health checks for container management.

HEALTHCHECK CMD curl --fail http://localhost:5000 || exit 1
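
For the "minimize layers" point above, a minimal sketch of combining several RUN steps into one layer (the package names are illustrative):

# One RUN layer instead of three
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*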

Docker daemon dead

Space allocation, configuration, container conflicts, Docker services

Verification of the Docker service, start and restart

sudo systemctl status docker

sudo systemctl start docker


sudo systemctl enable docker
sudo systemctl restart docker

Verification of disk space

df -h

docker system prune -a

docker volume prune


At the OS level, stop and remove containers

top
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)

Verification of network configuration


docker network ls
sudo systemctl restart docker
Rebuild the Docker configuration if corrupted (warning: this deletes all images, containers, and volumes)
sudo rm -rf /var/lib/docker
sudo systemctl restart docker

Docker details, container logs, Docker services


docker info
docker logs <container-id>
docker inspect <container-id|image-id>
docker ps
docker volume ls
docker run hello-world

Docker version, start & stop, push & pull from a Docker registry, volumes, networks,
containers, Docker daemon, importing and exporting containers, running Docker in a local
environment, and multiple Docker containers through YAML (Compose)
docker --version
docker version

sudo systemctl start docker # Start Docker


sudo systemctl stop docker # Stop Docker
sudo systemctl restart docker # Restart Docker
sudo systemctl status docker # Check status of Docker

docker pull <image-name>:<tag>


docker pull nginx:latest

docker run <options> <image-name>

docker run -d -p 8080:80 nginx:latest

docker ps
docker ps -a
docker stop <container-id>
docker rm <container-id>


docker rmi <image-id>

docker info
docker system df

docker build -t <image-name>:<tag> <path-to-dockerfile>


docker build -t my-python-app:1.0 .

docker logs <container-id>

docker logs 123456abc

docker attach <container-id>

docker exec -it <container-id> <command>



docker system prune

docker system prune --volumes


docker inspect <container-id|image-id>

docker inspect 123456abc

docker tag <image-id> <repository>:<tag>

docker network ls
docker network create my-network

docker volume ls
docker volume create <volume-name>
docker run -v my-volume:/app/data my-python-app:1.0

docker login

docker push <repository>:<tag>


docker push my-dockerhub-user/my-python-app:1.0

docker pull <repository>:<tag>


docker pull nginx:latest

docker export <container-id> > <filename>.tar


docker import <filename>.tar <image-name>:<tag>
docker save <image-name>:<tag> > <filename>.tar
Running Docker for a local deployment
docker run -d -p 8080:80 nginx
Multi-Container Application

version: '3'
services:
app:
image: my-python-app:1.0
ports:
- "5000:5000"
db:
image: postgres:13
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
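
Assuming the file above is saved as docker-compose.yml, both containers can be started together:

docker compose up -d    # or docker-compose up -d on older installations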

docker volume create db-data


docker run -v db-data:/var/lib/postgresql/data postgres

docker exec -it <container-id> bash

Terraform, Infrastructure as Code

State files, count and for_each
List of commands, scenarios

Automation scripts
Deploying EC2 instances (start, stop, run), security groups, IAM roles, Simple Storage
Service (S3)
Deployment & configuration of an EC2 instance
provider "aws" {
region = "us-east-1"
}

resource "aws_instance" "web" {


ami = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI
instance_type = "t2.micro"

tags = {
Name = "WebServer"
}
}

terraform init
terraform plan
terraform apply
terraform destroy

Terraform variables
variable "region" {
default = "us-east-1"
}

provider "aws" {
region = var.region
}
State

 Terraform maintains the current state of your infrastructure in a .tfstate file.


 Example commands:
o View state: terraform show
o Refresh state: terraform refresh

Terraform modules
module "vpc" {
source = "./modules/vpc"
cidr = "10.0.0.0/16"
}
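
For the module call above to work, the module itself must declare the cidr variable; a minimal sketch of what ./modules/vpc/variables.tf might contain:

variable "cidr" {
  type        = string
  description = "CIDR block for the VPC"
}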

Multi-cloud deployment resources

provider "aws" {
region = "us-east-1"
}

provider "azurerm" {
features = {}
}

resource "aws_instance" "web" {


ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
resource "azurerm_resource_group" "example" {
name = "myResourceGroup"
location = "East US"
}

Separate configurations: dev, testing & prod environments

terraform workspace new staging


terraform workspace select staging
terraform apply

Disaster Recovery

CI/CD Pipeline Integration

Jenkins and GitHub along with the CI/CD automation pipeline; a Terraform stage can run inside the Jenkins pipeline, as sketched below.
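
A minimal sketch of such a Terraform stage in a Jenkins declarative pipeline (the stage name and plan-file name are illustrative):

stage('Terraform') {
    steps {
        sh 'terraform init'
        sh 'terraform plan -out=tfplan'
        sh 'terraform apply tfplan' // applying a saved plan runs without an interactive prompt
    }
}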

Remote backends
Storage of the state file in Simple Storage Service (S3); AWS, Azure and Google Cloud

terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "state/terraform.tfstate"
region = "us-east-1"
}
}

Dynamic blocks: AWS security groups


resource "aws_security_group" "example" {
dynamic "ingress" {
for_each = var.ingress_rules
content {
from_port = ingress.value.from_port
to_port = ingress.value.to_port
protocol = ingress.value.protocol
cidr_blocks = ingress.value.cidr_blocks
}
}
}
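
The dynamic block above assumes an ingress_rules variable shaped like this (a sketch; the default rule is illustrative):

variable "ingress_rules" {
  type = list(object({
    from_port   = number
    to_port     = number
    protocol    = string
    cidr_blocks = list(string)
  }))
  default = [
    { from_port = 80, to_port = 80, protocol = "tcp", cidr_blocks = ["0.0.0.0/0"] }
  ]
}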

Creation of 5 instances

Terraform configuration file: VPC, subnet, instances, count, for_each

# Specify the provider


provider "aws" {
region = "us-east-1" # Replace with your desired AWS region
}

# Create a VPC
resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "MainVPC"
}
}

# Create a subnet in the VPC


resource "aws_subnet" "main_subnet" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
availability_zone = "us-east-1a"
tags = {
Name = "MainSubnet"
}
}

# Create 5 EC2 Instances


resource "aws_instance" "example" {
count = 5 # Number of instances to create
ami = "ami-0c55b159cbfafe1f0" # Replace with a valid AMI ID for your region
instance_type = "t2.micro"

subnet_id = aws_subnet.main_subnet.id

tags = {
Name = "Instance-${count.index + 1}" # Unique name for each instance
}
}
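
For comparison, the same instances could be keyed by name with for_each instead of count (a sketch; the name set is illustrative):

resource "aws_instance" "by_name" {
  for_each      = toset(["web-1", "web-2"])
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.main_subnet.id

  tags = {
    Name = each.key
  }
}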

terraform init
terraform validate
terraform plan

aws ec2 describe-instances --filters Name=tag:Name,Values=Instance-*

terraform destroy
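
The sample output below assumes an output block like the following in the configuration (the output name is chosen to match the sample):

output "instance_ids" {
  value = aws_instance.example[*].id
}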

Outputs:

instance_ids = [
"i-0abcd1234efgh5678",
"i-1abcd1234efgh5678",
"i-2abcd1234efgh5678",
"i-3abcd1234efgh5678",
"i-4abcd1234efgh5678"
]

Terraform configuration

Using dynamic blocks, variables, AWS, VPC, subnet, instance


Using modules: VPC, Simple Storage Service (S3), instance
Reusability

Separate GitHub branching: dev, testing, prod

Separate state files, refresh


Secret keys, AWS Secrets Manager
Remote backend
Storage of files in Simple Storage Service (S3), .tf files
