DevSecOps Pipeline: CI/CD Implementation Complete Guide
Introduction:
In this article, we will implement a complete CI/CD pipeline for a TypeScript application built using Vite and npm, following DevSecOps best practices. By integrating security into every phase of development and deployment, we ensure a secure, automated, and efficient workflow, making this a fundamental part of the software delivery process.
Overview:
1. What is DevSecOps?
2. Why DevSecOps?
3. Get Started
4. Containerization
5. DevSecOps Implementation
1. What is DevSecOps?
Many people limit DevSecOps to just security in CI/CD pipelines, whereas DevSecOps isn't only about scanning code for vulnerabilities in CI/CD. It is about taking security into every aspect of DevOps, which means integrating security at every stage of the Software Development Life Cycle (SDLC). In simple terms, it extends beyond CI/CD to ensure security in infrastructure provisioning, secrets management, and runtime protection.
Example:
Infrastructure as Code (IaC): Using HashiCorp Vault to store sensitive credentials instead of hardcoding them in Terraform scripts.
2. Why DevSecOps?
1. AI-generated Code
2. Cyber Attacks
Developers might push code with known CVEs (Common Vulnerabilities and Exposures), exposing the application to attacks. Unpatched third-party libraries can also lead to security flaws.
3. Get Started
Before running the code, check the README file to get an understanding of the project. In this file, we talk about the features, the technologies used, the project structure, the logic, and a getting-started guide. If you carefully check the src folder, most of the files are .tsx files, which means the code is written in TypeScript.
Three things we need to know about this TypeScript project:
1. TypeScript: a typed superset of JavaScript that must be compiled to plain JavaScript before it runs in the browser.
2. Vite: the build tool used for building and running the application locally.
3. npm: the package manager used to install dependencies and run the project scripts.
Steps:
npm install
This command installs all the dependencies listed in package.json.
npm run build
This command bundles everything and puts it into a newly created dist folder.
npm run dev
Browse to localhost on port 5173 (Vite's default dev-server port) to see the application running.
4. Containerization
We will implement a multi-stage Dockerfile. In the first stage, we create a working directory, copy the package files into it, install the dependencies, copy the rest of the code into the container, and finally run the build. This gives us a dist folder.
The reason for installing the dependencies first and copying the rest of the code later is that Docker uses layer caching: as long as the package files are unchanged, the dependency layer is reused instead of being rebuilt, which saves build time significantly.
# Build stage
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
In the second stage, we copy the dist folder generated in the first stage into the Nginx html directory, expose the port, and define the CMD.
# Production stage
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
PS: I have shown the stages separately to explain each one; in practice, both stages are written inside a single Dockerfile.
# Build stage
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Open your terminal again in the same location where your project exists.
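The build and run commands are not shown in the extracted text; a minimal sketch, where the image name devsecops-demo is a placeholder of my choosing:

# Build the image using the multi-stage Dockerfile
docker build -t devsecops-demo .

# Run the container, mapping host port 9099 to Nginx's port 80
docker run -d -p 9099:80 devsecops-demo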
Browse to localhost on port 9099 to see the application running.
5. DevSecOps Implementation
What's inside the pipeline? Stages Explained
Inside the pipeline there are multiple components, from declaring when the pipeline should trigger to the different stages, or jobs as they are called in GitHub Actions. Below we will explore the different stages to understand exactly what each job performs.
Stages/Jobs Explained
1. Unit Testing: This step ensures all the tests pass before proceeding.
2. Static Code Analysis: This job performs linting to ensure code follows best
practices.
3. Build: This job compiles the application and generates the necessary artifacts.
If you have worked with GitHub workflows before, you will find these YAML files largely self-explanatory. Even so, I will explain each stage separately for better understanding.
on:
  push:
    branches: [ main ]
    paths-ignore:
      - 'kubernetes/deployment.yaml'  # Ignore changes to this file to prevent loops
  pull_request:
    branches: [ main ]
jobs: A job is an individual series of steps that you want to execute. You can think of it as a stage; in this case, it is the unit-testing stage. You can have as many jobs and steps as you like in a single Actions file.
In the GitHub Actions file, the two most commonly used keywords are:
uses: predefined actions, reusing existing code from the GitHub Actions Marketplace.
run: commands executed directly in the runner's shell.
jobs:
  test:
    name: Unit Testing
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
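The remaining steps of this job are not shown above; a minimal sketch, assuming Node 20 and a test script defined in package.json:

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test  # assumes a "test" script exists in package.json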
The steps below do exactly the same as in the previous job; the only difference is the last step, where you run the lint command.
  lint:
    name: Static Code Analysis
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
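Again, the trailing steps here are a hedged sketch, assuming a lint script (e.g. ESLint) defined in package.json:

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint  # assumes a "lint" script exists in package.json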
3. Build Job
In this job, the needs parameter specifies that the build job should only run after both the test and lint jobs have completed. Later in the job, the artifact created by the build command is uploaded, and its path is specified.
  build:
    name: Build
    runs-on: ubuntu-latest
    needs: [test, lint]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
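The build and artifact-upload steps are not shown above; a minimal sketch, where the artifact name dist is an assumption:

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build project
        run: npm run build

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: dist   # artifact name is an assumption
          path: dist/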
4. Containerization
This job is responsible for multiple steps: building the image, scanning it, logging into the registry, extracting the tags, and pushing to GHCR. Let me break down each step of this docker job.
  docker:
    name: Docker Build and Push
    runs-on: ubuntu-latest
    needs: [build]
    env:
      REGISTRY: ghcr.io
      IMAGE_NAME: ${{ github.repository }}
    outputs:
      image_tag: ${{ steps.set_output.outputs.image_tag }}
It depends on the build job, which means it will execute only after the build stage has completed.
Environment variables are set, and an output variable is defined, which will be used later.
Steps Explained
Setting up Docker Buildx: similar to docker build, but with advanced features like multi-platform builds and a caching mechanism.
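This typically maps to the official Buildx action:

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3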
4️⃣ Login to Github Container Registry (GHCR)
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
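          # The credentials are not shown in the extracted text; a common pattern
          # for GHCR (an assumption, not confirmed by the source) is:
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}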
Extracts the first 7 characters of the commit SHA and saves this tag as an output variable.
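Neither this step nor the subsequent push is shown in the extracted text; a hedged sketch, where the step id set_output matches the outputs mapping defined above and the tag format is an assumption:

      - name: Extract image tag
        id: set_output
        run: echo "image_tag=$(echo ${GITHUB_SHA} | cut -c1-7)" >> $GITHUB_OUTPUT

      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.set_output.outputs.image_tag }}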
This job is responsible for modifying the Kubernetes deployment file with the latest Docker image tag and committing the change back to the repository.
  update-k8s:
    name: Update Kubernetes Deployment
    runs-on: ubuntu-latest
    needs: [docker]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
This job will only execute once the docker job has completed.
It will run only when a push event occurs on the main branch.
1️⃣ Checkout the code
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.TOKEN }}
Adds and commits the changes with the new image tag, using [skip ci] in the commit message to avoid triggering another CI/CD run. It then pushes the changes; if there are no changes, it prints "No changes to commit".
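The step itself is not shown in the extracted text; a minimal sketch, assuming the manifest lives at kubernetes/deployment.yaml and a simple sed replacement of the image line:

      - name: Update deployment manifest and push
        run: |
          # Replace the image reference in the manifest (sed pattern is an assumption)
          sed -i "s|image: .*|image: ghcr.io/${{ github.repository }}:${{ needs.docker.outputs.image_tag }}|g" kubernetes/deployment.yaml
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git add kubernetes/deployment.yaml
          git commit -m "Update image tag [skip ci]" || echo "No changes to commit"
          git push

Putting all the jobs together, the (abbreviated) workflow file is shown below.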
on:
  push:
    branches: [ main ]
    paths-ignore:
      - 'kubernetes/deployment.yaml'  # Ignore changes to this file to prevent loops
  pull_request:
    branches: [ main ]

jobs:
  test:
    name: Unit Testing
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

  lint:
    name: Static Code Analysis
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

  build:
    name: Build
    runs-on: ubuntu-latest
    needs: [test, lint]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

  docker:
    name: Docker Build and Push
    runs-on: ubuntu-latest
    needs: [build]
    env:
      REGISTRY: ghcr.io
      IMAGE_NAME: ${{ github.repository }}
    outputs:
      image_tag: ${{ steps.set_output.outputs.image_tag }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

  update-k8s:
    name: Update Kubernetes Deployment
    runs-on: ubuntu-latest
    needs: [docker]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.TOKEN }}
Make sure to add the PAT (Personal Access Token) as a repository secret before committing the code.
Now let us commit this code. As soon as you commit the file, the workflow will be triggered, as we have set the on: parameter to push. So every time you push a commit to the main branch of your repo, it will build and push a new Docker image to the registry.
Let us check the Actions tab to see the workflow running.
After the jobs are completed, the deployment file should have been updated with the new Docker image name. Let us pull the latest image and run it to see the changes.
Copy the image name that was updated in the previous step and try to run the container using the below command.
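The command is not included in the extracted text; a hedged example, with the image reference as a placeholder:

docker pull ghcr.io/<your-username>/<your-repo>:<new-tag>
docker run -d -p 9099:80 ghcr.io/<your-username>/<your-repo>:<new-tag>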
Perfect! The workflow triggered, and the name is updated in our application.
Prerequisites:
AWS Account
AWS CLI
Kubectl
Docker
Steps:
1. Launch a new EC2 instance from the AWS console.
2. Select Ubuntu as the OS and t2.medium as the instance type, keeping the rest as default.
The last step of the complete CI/CD implementation is to deploy the application on a Kubernetes cluster.
Steps:
1. Install kind by following the official installation guide:
https://kind.sigs.k8s.io/docs/user/quick-start/#installation
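The referenced commands are missing from the extracted text; standard equivalents from the kind and ArgoCD documentation would be (the cluster name is an assumption):

# Create a local Kubernetes cluster with kind
kind create cluster --name devsecops-cluster

# Install ArgoCD into its own namespace using the official manifests
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml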
The above commands will install all the components required for ArgoCD. We will check that all the resources are running, then proceed to the next step.
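The check can be done with:

kubectl get pods -n argocd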
All of them are in the Running state, so now we can proceed to the next step.
Now we will access the ArgoCD UI. For that, you need to enable port forwarding for the ArgoCD server. Run the below command to achieve this.
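The command is not shown in the extracted text; a hedged version, using port 9000 as mentioned below and binding to all interfaces so it is reachable via the EC2 public IP:

kubectl port-forward svc/argocd-server -n argocd 9000:443 --address 0.0.0.0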
Make sure to add port 9000 to the EC2 inbound security group.
Copy the public IP of the EC2 instance, go to your browser, and try to access the ArgoCD page, for example:
http://15.206.165.191:30900
Once the cluster is created, update the kubeconfig context using the below command.
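A hedged example, assuming kind's kind-<cluster-name> context naming:

kubectl config use-context kind-devsecops-cluster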
Go back to your terminal; the initial admin password can be retrieved using the below command.
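Given the "copy the password and exit" wording below, the original likely opened the secret in an editor; the initial admin password lives in the argocd-initial-admin-secret secret:

kubectl edit secret argocd-initial-admin-secret -n argocd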
Copy the password and exit. This password is base64 encoded, so decode it using the below command.
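A hedged example; replace the placeholder with the value copied from the secret:

echo <encoded-password> | base64 --decode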
This is the password for the ArgoCD server (the default username is admin). Paste it and log in to the server.
To deploy our project, we will create an application inside this ArgoCD server. Click on the Create Application button → give it a name → the project name will be default → select automatic sync → paste your GitHub repo URL in the repository URL field → set the path to kubernetes → the cluster URL will be default → and finally, the namespace will also be default.
You will notice that the application has failed to deploy. This is because the imagePullSecret has not been configured.
We have provided the parameter in the deployment manifest, but we still need to create the secret with the PAT.
Go to the terminal again and run the below command, which will create the secret. Edit the values accordingly.
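A hedged example; the secret name ghcr-secret is an assumption and must match the imagePullSecrets entry in your deployment manifest:

kubectl create secret docker-registry ghcr-secret \
  --docker-server=ghcr.io \
  --docker-username=<github-username> \
  --docker-password=<your-PAT> \
  --docker-email=<email>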
Check the pods again; this time they will be healthy. This shows our ArgoCD is in sync with the code.
Check the deployment file in the repository for the newly created image tag. Now check the ArgoCD server; it should have picked up the change and be in sync with the GitHub repository.
Congratulations 🎉
You have just successfully implemented a complete end-to-end DevSecOps pipeline.
Hope you have learnt something from this article!
Conclusion:
In this article, we implemented a complete DevSecOps pipeline from scratch. Whether you are a beginner or an experienced professional, this guide provides a practical, best-practice approach to achieving a fully automated, secure, and efficient software delivery process.
If you found this post useful, give it a like 👍
Repost ♻️
Follow @Bala Vignesh for more such posts 💯🚀