DevSecOps - Disney+ Hotstar App in Kubernetes with Monitoring
Kubernetes & Docker — for seamless, containerized, and scalable application deployments
Jenkins — to enable reusable, standardized, and consistent CI/CD pipelines
SonarQube, Trivy, OWASP Dependency-Check — ensuring robust security, vulnerability scanning,
and code quality automation
Prometheus & Grafana — for comprehensive real-time monitoring and alerting
Gmail — email alerts and collaboration-driven notifications
Parameterized environment orchestration — for on-demand infrastructure setup and teardown
Now, let’s get started and dig deeper into each of these steps:
cd hotstar-kubernetes/scripts/
• Install the tools on the VM using the provided scripts. First, add executable permission to the shell scripts:
chmod +x *.sh
Install Tools
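Run the installation scripts one by one. The script names below are hypothetical placeholders; use whatever scripts actually exist in hotstar-kubernetes/scripts/:
./install_jenkins.sh   # installs Java and Jenkins (hypothetical name)
./install_docker.sh    # installs Docker and adds the current user to the docker group (hypothetical name)
./install_trivy.sh     # installs Trivy for vulnerability scanning (hypothetical name)
Once the scripts finish, Jenkins should be reachable at: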
http://<PUBLIC_IP>:8080
• Unlock Jenkins using the initial administrator password and install the suggested plugins.
Retrieve the initial admin password:
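Assuming Jenkins was installed as a system service by the install script, the password can be read from the Jenkins home directory:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword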
Install Plugins like JDK, SonarQube Scanner, NodeJs, OWASP Dependency Check
Go to Manage Jenkins → Plugins → Available Plugins and install the plugins below:
1. Eclipse Temurin Installer (install without restart)
2. SonarQube Scanner (install without restart)
3. NodeJS Plugin (install without restart) – 16.20.2
4. OWASP Dependency-Check
5. Stage View
6. JDK
Docker plugins:
7. Docker
8. Docker Commons
9. Docker Pipeline
10. Docker API
11. docker-build-step
• Set up the SonarQube server
Create a SonarQube container:
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
Log in with the default credentials (username: admin, password: admin), then set a new password. You will land on the SonarQube dashboard.
B. Create a SonarQube token to connect with Jenkins
Click on Administration → Security → Users → click on Tokens, give the token a name, and click Generate Token.
Now, go to Dashboard → Manage Jenkins → System and add the SonarQube server (using this token) as shown in the image below.
• In SonarQube, add a webhook pointing back to Jenkins: http://34.228.235.18:8080/sonarqube-webhook/
The post section below goes at the end of the Jenkinsfile, right after the closing brace of the stages block:
} // end of stages
post {
always {
script {
def buildStatus = currentBuild.currentResult
def buildUser = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')[0]?.userId ?: 'Github User'
emailext (
subject: "Pipeline ${buildStatus}: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
body: """
<p>This is a Jenkins HOTSTAR CICD pipeline status.</p>
<p>Project: ${env.JOB_NAME}</p>
<p>Build Number: ${env.BUILD_NUMBER}</p>
<p>Build Status: ${buildStatus}</p>
<p>Started by: ${buildUser}</p>
<p>Build URL: <a href="${env.BUILD_URL}">${env.BUILD_URL}</a></p>
""",
to: '[email protected]',
from: '[email protected]',
replyTo: '[email protected]',
mimeType: 'text/html',
attachmentsPattern: 'trivyfs.txt,trivyimage.txt'
)
} // end of script
} // end of always
} // end of post
} // end of pipeline
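The attachmentsPattern above picks up the Trivy reports produced earlier in the pipeline. As a minimal sketch of the shell steps that typically generate them (the image name follows the manifest used later and is an assumption):
# Filesystem scan of the checked-out workspace
trivy fs . > trivyfs.txt
# Image scan of the built Docker image
trivy image aseemakram19/hotstar:latest > trivyimage.txt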
http://<PUBLIC_IP>:3000
Our application is live with this output.
An email alert is sent by the post block once the pipeline completes successfully.
EKS cluster setup on AWS
How to create an EKS cluster using AWS Console | Create node group | Configure Kubernetes cluster
Create an IAM user (bingouser) and generate its access keys:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Then run aws configure on the VM.
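Run aws configure and paste the keys when prompted, for example:
aws configure
# AWS Access Key ID [None]:     <AWS_ACCESS_KEY_ID>
# AWS Secret Access Key [None]: <AWS_SECRET_ACCESS_KEY>
# Default region name [None]:   ap-south-1   (or us-east-1)
# Default output format [None]: json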
Option 1 for the Asian region (Mumbai):
eksctl create cluster --name cloudaseem-cluster4 --region ap-south-1 --node-type t2.medium --zones ap-south-1a,ap-south-1b
Option 2 for the US region (N. Virginia):
eksctl create cluster --name cloudaseem-cluster4 --region us-east-1 --node-type t2.medium --zones us-east-1a,us-east-1b
## Note: Cluster creation takes 5 to 10 minutes (we have to wait). After the cluster is created, we can check the nodes using the command below.
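eksctl writes the kubeconfig for the new cluster automatically; if you are checking from a different machine, update the kubeconfig first:
# Only needed when kubectl runs on a machine other than the one that created the cluster
aws eks update-kubeconfig --name cloudaseem-cluster4 --region ap-south-1
# List the worker nodes; they should show STATUS Ready
kubectl get nodes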
Go to the project directory:
cd hotstar-kubernetes/K8S/
The manifest.yml file already exists there; update it with your Docker image name and apply it (the kubectl commands follow the manifest below).
manifest.yml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hotstar-deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: hotstar
  template:
    metadata:
      labels:
        app: hotstar
    spec:
      containers:
        - name: hotstar-container
          image: aseemakram19/hotstar
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hotstar-service
spec:
  type: LoadBalancer
  selector:
    app: hotstar
  ports:
    - port: 80
      targetPort: 3000
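Apply the manifest and confirm that the LoadBalancer service gets an external hostname:
kubectl apply -f manifest.yml
kubectl get pods                  # both hotstar replicas should reach Running
kubectl get svc hotstar-service   # note the EXTERNAL-IP / load balancer hostname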
You have successfully deployed Hotstar on Kubernetes with a LoadBalancer and auto-healing enabled.
Add the load balancer hostname as a CNAME entry in Cloudflare to attach a domain and SSL so clients can access the application.
Next, we want to drive Terraform through build parameters so the monitoring infrastructure can be applied and destroyed from the Jenkins build itself.
Add the parameter inside the job as shown in the image below.
Note: Create a key pair to access the monitoring server.
Add the script:
pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
    }
    parameters {
        string(name: 'action', defaultValue: 'apply', description: 'Terraform action: apply or destroy')
    }
    stages {
        stage('Checkout from Git') {
            steps {
                git branch: 'main', credentialsId: 'github-token', url: 'https://github.com/Aseemakram19/hotstar-kubernetes.git'
            }
        }
        stage('Terraform version') {
            steps {
                sh 'terraform --version'
            }
        }
        stage('Terraform init') {
            steps {
                dir('Terraform') {
                    sh '''
                    terraform init \
                      -backend-config="access_key=$AWS_ACCESS_KEY_ID" \
                      -backend-config="secret_key=$AWS_SECRET_ACCESS_KEY"
                    '''
                }
            }
        }
        stage('Terraform validate') {
            steps {
                dir('Terraform') {
                    sh 'terraform validate'
                }
            }
        }
        stage('Terraform plan') {
            steps {
                dir('Terraform') {
                    sh '''
                    terraform plan \
                      -var="access_key=$AWS_ACCESS_KEY_ID" \
                      -var="secret_key=$AWS_SECRET_ACCESS_KEY"
                    '''
                }
            }
        }
        stage('Terraform apply/destroy') {
            steps {
                dir('Terraform') {
                    sh '''
                    terraform ${action} --auto-approve \
                      -var="access_key=$AWS_ACCESS_KEY_ID" \
                      -var="secret_key=$AWS_SECRET_ACCESS_KEY"
                    '''
                }
            }
        }
    }
    post {
        success {
            echo 'Terraform execution completed successfully!'
        }
        failure {
            echo 'Terraform execution failed! Check the logs.'
        }
    }
}
Verify the Monitoring server
1. Grafana installation
Access the monitoring server, create a grafana.sh script, add executable permission, and execute it:
#!/bin/bash
# Script to install Grafana on a Linux instance
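The remainder of grafana.sh is not shown above; a minimal sketch of what it could contain, assuming an Ubuntu-based instance and the official Grafana APT repository:
# Install prerequisites and register the Grafana APT repository
sudo apt-get update
sudo apt-get install -y apt-transport-https software-properties-common wget
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com/ stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
# Install and start Grafana (listens on port 3000 by default)
sudo apt-get update
sudo apt-get install -y grafana
sudo systemctl enable --now grafana-server
Next, add the following scrape jobs under scrape_configs in the Prometheus configuration file (prometheus.yml) on the monitoring server: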
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]  # Look for a HTTP 200 response.
    static_configs:
      - targets:
          - http://prometheus.io   # Target to probe with HTTP.
          - http://IP:3000          # Application endpoint to probe (replace IP with the app's public IP).
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 13.232.214.2:9115  # The blackbox exporter's real hostname:port.

  - job_name: node_exporter
    static_configs:
      - targets:
          - 'IP:9100'
Save the file (CTRL + X, then Y, and Enter).
Restart Prometheus so it picks up the new configuration:
./prometheus &
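If you want to validate the edited file and stop the old process cleanly first, a minimal sketch (assuming Prometheus runs as a plain binary from its extracted directory):
# Check the configuration for syntax errors (promtool ships with the Prometheus release)
./promtool check config prometheus.yml
# Stop any running Prometheus instance, then start it with the updated config
pkill prometheus || true
./prometheus --config.file=prometheus.yml &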
Add a Grafana dashboard for the Blackbox exporter:
Go to Dashboards → Import.
Enter Dashboard ID: 13659 (Prometheus Blackbox Exporter).
Click Load.
Select Prometheus as the Data Source.
Click Import.
## Step - 4: Cleanup
1. Delete the monitoring server by running the Jenkins pipeline with action set to destroy.
2. Delete the EKS cluster and any other AWS resources used, to avoid billing.
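As a sketch of the cleanup commands (cluster name and region taken from the examples above; adjust to whatever you actually created):
# Delete the EKS cluster and its node group
eksctl delete cluster --name cloudaseem-cluster4 --region ap-south-1
# Stop and remove the local SonarQube container on the Jenkins VM
docker stop sonar && docker rm sonar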