DevSecOps Viva Prep

1. What is Git?

o Git is a distributed version control system (DVCS) that tracks changes to files. It records the history of changes made to a file or a set of files and lets you return to any previous version of your project. It is most commonly used by software developers to manage source code, but it can manage any type of file.

2. What is Version Control?

o Version control is a system that keeps track of changes to files or projects over time so that you can revert to previous versions when needed. It is essential for projects where multiple people collaborate or where you want to keep a history of changes for future reference.

3. What is GitHub?

o GitHub is a platform built around Git that hosts Git repositories online. It provides additional features such as collaboration tools, issue tracking, and code reviews. With GitHub, you can sync your local repository (stored on your computer) with a remote repository (stored on GitHub's servers) so that others can collaborate on your project.

Experiment 1: Version Control with Git

Theory of Git:

Git is often used by developers to keep track of code and allow collaboration. It enables multiple
developers to work on the same project without overwriting each other's work. Git also keeps a history
of all changes, so you can revert to previous versions of files if something goes wrong.

Git Bash:

• Git Bash is a command-line interface (CLI) that allows users to interact with Git using Unix-
like commands in a Windows environment. It provides Git-specific commands and basic Unix
commands like ls and cd for navigating and managing files.

Basic Git Commands:

Here are the essential Git commands you'll use when working with version control.

1. git init:
o Initializes a new Git repository in your project folder. It creates a hidden .git folder
that Git uses to track changes.
o Example:

git init

2. git status:
o Tells you the status of files in your working directory, whether they have been
modified, are untracked, or are staged for a commit.
o Example:

git status
3. git add:
o Stages files (marks them for inclusion in the next commit). Files must be staged before
they can be committed.
o Example:

git add <filename>   # Add a single file
git add .            # Add all changes

4. git commit:
o Records the changes you’ve made to the repository. Each commit represents a version
of the project at that point in time. Every commit requires a message describing what
was changed.
o Example:

git commit -m "Commit message"

5. git log:
o Shows the history of commits. You can see who made changes, when, and what the
commit message was.
o Example:

git log

6. git clone:
o Copies an existing remote Git repository (like one on GitHub) to your local machine.
o Example:

git clone https://github.com/user/repo.git

7. git push:
o Sends your local commits to the remote repository (such as GitHub). This is how you
upload your local changes to GitHub for others to see and collaborate on.
o Example:

git push origin master

8. git pull:
o Fetches and merges changes from the remote repository into your local repository. This
ensures your local project is up-to-date with the latest changes on GitHub.
o Example:

git pull origin master

Understanding Git Workflow

In Git, files go through a series of stages:

1. Untracked: These are new files that have not yet been added to version control.
2. Modified: Files that have been changed but not yet staged.
3. Staged: Files that have been marked to be committed (using git add).
4. Committed: Changes that have been saved to the repository’s history.
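
To see these states in practice, the following sequence walks one file through them (a minimal sketch; notes.txt is a hypothetical file name used only for illustration):

echo "draft" > notes.txt     # notes.txt is Untracked
git add notes.txt            # notes.txt is Staged
git commit -m "Add notes"    # notes.txt is Committed (Unmodified)
echo "edit" >> notes.txt     # notes.txt is Modified
git status                   # shows the state of each file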
Cloning a Repository

• Cloning means making a copy of an existing Git repository. This could be from a remote
GitHub repository to your local machine or vice versa. For instance, when you clone a project
from GitHub, you create a full local copy of it on your machine.
o Example command:

git clone https://github.com/username/repository.git

Types of Cloning:

1. Remote cloning: Copying a repository from a remote server (like GitHub) to your local
machine.
o Example: git clone https://github.com/user/repo.git
2. Local cloning: Copying from one local directory to another.
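
For local cloning, git clone accepts a filesystem path instead of a URL (a brief sketch; both paths are hypothetical):

git clone /path/to/existing-repo /path/to/new-copy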

Git Status and States:

In Git, files can be in one of the following states:

• Untracked: Git is not tracking this file.
• Modified: The file has been changed but not yet staged.
• Staged: The file is ready to be committed.
• Unmodified: The file has not been changed since the last commit.

You can check the status of files with:

git status

Example Workflow

Here’s a step-by-step example of how to use Git for version control on your project:

1. Initialize the Git Repository:

git init
This will create a new repository in your project directory.

2. Check the Status:

git status
This will show which files are untracked or modified.

3. Add Files to Staging Area:

git add .
This stages all changes for the next commit.

4. Commit the Changes:

git commit -m "Initial commit"


5. Link to a Remote GitHub Repository:

git remote add origin https://github.com/username/repo.git


6. Push the Changes to GitHub:

git push -u origin master


This uploads your local commits to the GitHub repository.

Syncing Local Repositories with GitHub

When working with GitHub, your local repository and remote repository (on GitHub) need to stay in
sync. The following commands help with this:

1. Pulling Changes:

git pull origin master


This updates your local repository with the latest changes from GitHub.

2. Pushing Changes:

git push origin master


This uploads your local changes to the remote repository.

Conclusion for Experiment

In this experiment, you learned how to implement version control using Git and GitHub, including
initializing a Git repository, staging and committing changes, and syncing your local repository with a
remote repository on GitHub. You also learned the basic Git commands that are commonly used in day-
to-day development.

Additional Concepts for DevSecOps:

• DevSecOps: An approach that integrates security practices into the DevOps process. It aims to
automate security testing and embed security into the development pipeline, ensuring that
security is addressed early in the software development lifecycle.

Experiment 2: Creating a Branch and Performing Various Git Commands

AIM:

Create a Git branch and perform various commands.

THEORY:

In Git, branches are used to create separate lines of development. Each branch is essentially a pointer
to a particular commit in your repository. When you create a new branch, Git creates a reference to the
current commit. You can make changes and commit them to this new branch without affecting the main
branch (usually named main or master). This is useful for working on new features or bug fixes in
isolation. Once the work on a branch is done and tested, it can be merged back into the main branch,
keeping the project stable.

Git's branch implementation is lightweight compared to other version control systems. Instead of
duplicating files across directories, Git only points to a specific commit, making branches fast and
efficient.

The main advantages of branching are:

• Parallel development: Multiple branches allow multiple developers or teams to work on different features simultaneously without interfering with each other.
• Safe experimentation: Branches let you work on changes that may not be ready for production without disturbing the stability of the main branch.
• Collaborative work: Different team members can work on different branches and then merge their changes when they're ready.
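
Although the steps below stay on a single branch, the branch commands themselves are worth practicing (a minimal sketch; feature-login and quick-fix are hypothetical branch names, and git switch requires Git 2.23 or newer):

git branch feature-login      # create a branch pointing at the current commit
git switch feature-login      # switch to it (git checkout feature-login also works)
git switch -c quick-fix       # create and switch in one step
git branch                    # list branches; * marks the current one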

STEPS AND OUTPUT:

1. Create a Local Repository

• First, create a new folder called localrepo. You can do this in Visual Studio Code or by using the mkdir (make directory) command in Git Bash or any terminal.

mkdir localrepo

In this folder, you will initialize a Git repository.

2. Create Two Files: index.html and style.css

• Inside the localrepo folder, create two files:
o index.html
o style.css

You can create these files through Visual Studio Code or any text editor.

3. Initialize a Git Repository

• Navigate to the localrepo folder and initialize it as a Git repository using the git init
command.

cd localrepo
git init

• This initializes the folder as a Git repository, and a hidden .git folder is created.

4. Add Files to Staging Area


• Add the two files (index.html and style.css) to the Git staging area using the git add
command.

git add .

• The . indicates that all new or modified files in the current directory will be added to the staging
area.

5. Commit the Changes

• Now that the files are staged, commit them to the repository with a meaningful message using
git commit.

git commit -m "files inserting"

• This will create a commit with the message "files inserting" that represents the current state of
your files.

6. Check the Status

• Use the git status command to check the status of your repository. It will show which
files have been modified or added since the last commit.

git status

7. Create a GitHub Repository

• Go to your GitHub account and create a new repository. Name it localrepo and do not
initialize it with a README.md file.
o Note: The repository on GitHub will be used to store your code remotely.

8. Connect Local Repo to Remote Repo

• In GitBash, connect your local repository to the remote GitHub repository using the following
command:

git remote add origin https://github.com/yourusername/localrepo.git

• Replace yourusername with your actual GitHub username and localrepo with the name
of the repository you created on GitHub.

9. Verify Remote URL

• To verify the connection between the local and remote repositories, use the git remote -v command.

git remote -v

• This will show the URLs of the connected remote repositories (fetch and push URLs).

10. Check the Current Branch


• Use the git branch command to see which branch you are currently working on.

git branch

• This will show the current branch. If it shows main, you can skip the next step.

11. Rename master Branch to main (if required)

• If your branch is named master and you want to rename it to main, use the following
command:

git branch -M main

12. Push the Changes to GitHub

• Push your local commits to the remote repository using the git push command.

git push origin main

• Alternatively, you can use the -u flag to set the upstream branch, so next time you can simply
run git push without specifying the remote and branch:

git push -u origin main

13. Git Workflow

• The general workflow when using Git and GitHub is as follows:
o Create a GitHub repository.
o Clone the repository to your local machine.
o Make changes (edit files, add new files).
o Add the changes to the staging area (git add).
o Commit the changes (git commit).
o Push the changes to GitHub (git push).

CONCLUSION:

In this experiment, we learned how to create a branch and perform various Git commands. We
initialized a local repository, staged files, committed changes, connected the local repository to a remote
GitHub repository, and finally pushed the changes to GitHub. We also explored the concept of
branching, which allows for isolated development and collaboration in a project.

Experiment 3: Creating a Branch and Performing Various Git Commands

AIM:

Create a Git branch and perform various commands.


THEORY:

Git branches are used to create separate lines of development, allowing multiple versions of the project
to exist simultaneously. A branch is essentially a pointer to a particular commit in your repository.
When you create a new branch, Git creates a reference to the current commit, letting you work
independently from the main branch (commonly main or master).

Branches allow for:

• Parallel development: Teams can work on different features simultaneously.
• Safe experimentation: Work on new features or bug fixes in isolation.
• Collaboration: Multiple contributors can work on separate branches and merge their changes into the main project when ready.

Branches in Git are lightweight because they do not copy files but simply point to a commit. This makes
Git branches efficient and fast for development.
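
The steps below end at pushing main; when a feature branch is finished, merging it back looks like this (a minimal sketch; feature-nav is a hypothetical branch name):

git switch main             # move to the branch that receives the changes
git merge feature-nav       # merge the feature branch into main
git branch -d feature-nav   # delete the branch once it is merged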

STEPS AND OUTPUT:

1. Create a Local Repository: Use the following command to create a new folder called
localrepo and initialize it as a Git repository.

mkdir localrepo
cd localrepo
git init

2. Create Two Files: Inside the localrepo folder, create two files:
o index.html
o style.css
Use any editor to create these files.
3. Add Files to Staging: Add the newly created files to the Git staging area.

git add .

4. Commit Changes: Commit the changes with a meaningful message.

git commit -m "Initial commit with index.html and style.css"

5. Check the Status: Verify the current status of the repository.

git status

6. Create a GitHub Repository: Create a new repository on GitHub named localrepo without a README.md file.
7. Connect Local to Remote Repository: Connect the local repository to the remote GitHub
repository using the following command:

git remote add origin https://github.com/yourusername/localrepo.git

8. Verify Remote URL: Check if the remote URL has been set correctly.

git remote -v

9. Check the Current Branch: View the current working branch.

git branch

10. Rename Branch to Main (if necessary): If your branch is named master, rename it to main.

git branch -M main

11. Push Changes to GitHub: Push the changes to the remote repository on GitHub.

git push origin main

12. Set Upstream Branch: Set the upstream branch, so that future pushes will not require
specifying the branch.

git push -u origin main

13. Git Workflow: The general workflow for working with Git and GitHub is:

• Create a repository on GitHub.
• Clone the repository to your local machine.
• Make changes to the code.
• Add changes to the staging area (git add).
• Commit the changes (git commit).
• Push the changes to GitHub (git push).

CONCLUSION:

In this experiment, we successfully created a Git branch and performed various commands like
initializing a repository, adding and committing files, connecting the repository to GitHub, and pushing
changes. Branching is a powerful Git feature that allows developers to work on different features or
fixes without affecting the main branch, promoting collaborative and parallel development.

Experiment 4: Creating a Branch and Performing Various Git Commands

AIM:

Create a Git branch and perform various Git commands.


THEORY:

In Git, branches are used to manage separate lines of development, allowing multiple versions of a
project to exist simultaneously. Each branch is essentially a pointer to a specific commit in the
repository, enabling independent work without affecting the main branch (main or master).

Key benefits of branching include:

• Parallel development: Different features can be developed simultaneously without interference.
• Safe experimentation: Developers can experiment with new features or bug fixes in isolation.
• Collaboration: Multiple contributors can work on their respective branches and merge their changes when they are stable.

Git branches are lightweight because they only point to a commit, making them fast and efficient for
development.
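
As branches accumulate, the following commands help inspect and clean them up (a minimal sketch; old-feature is a hypothetical branch name):

git branch -a                          # list local and remote-tracking branches
git log --oneline --graph --all        # visualize how branches diverge and merge
git branch -d old-feature              # delete a merged local branch
git push origin --delete old-feature   # remove the corresponding remote branch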

STEPS AND OUTPUT:

1. Create a Local Repository: Create a new folder named localrepo and initialize it as a Git
repository.

mkdir localrepo
cd localrepo
git init

2. Create Two Files: Inside the localrepo folder, create two files:
o index.html
o style.css
Use any text editor to create these files.
3. Add Files to Staging: Add the newly created files to the Git staging area.

git add .

4. Commit Changes: Commit the changes with a descriptive message.

git commit -m "Initial commit with index.html and style.css"

5. Check the Status: Verify the current status of the repository.

git status

6. Create a GitHub Repository: Create a new repository on GitHub named localrepo without initializing it with a README.md file.
7. Connect Local to Remote Repository: Connect the local repository to the remote repository
on GitHub.

git remote add origin https://github.com/yourusername/localrepo.git

8. Verify Remote URL: Ensure the connection to the remote repository is properly set.

git remote -v

9. Check the Current Branch: Check which branch you are currently working on.

git branch

10. Rename Branch to Main (if required): If your branch is named master, rename it to main.

git branch -M main

11. Push Changes to GitHub: Push the local commits to the GitHub repository.

git push origin main

12. Set Upstream Branch: Set the upstream branch to make future pushes easier.

git push -u origin main

13. Git Workflow Summary: The general workflow with Git and GitHub involves:

• Creating a repository on GitHub.
• Cloning the repository to your local machine.
• Making changes to the code.
• Staging the changes using git add.
• Committing the changes with git commit.
• Pushing the changes to GitHub using git push.

CONCLUSION:

In this experiment, we successfully created a Git branch and performed various Git commands such as
initializing a repository, adding and committing files, connecting the local repository to a remote
GitHub repository, and pushing changes. Branching in Git allows for efficient and isolated
development, enabling developers to work on different features or fixes without disrupting the main
project.

Experiment 5: Running Containers of Different Applications and Operating Systems Using Docker

AIM:

To use Docker to run containers for different applications and operating systems.

THEORY:

Docker is a platform as a service (PaaS) product that uses operating system-level virtualization to deliver software in packages known as containers. Containers are isolated environments that bundle their own software, libraries, and configuration files. Despite being isolated, they share the host operating system's kernel, making them more lightweight than traditional virtual machines.

Key features of Docker containers:

• Isolation: Each container runs its application and dependencies in isolation.
• Lightweight: Containers share the host kernel, reducing overhead compared to virtual machines.
• Portability: Docker containers can run on any system that supports Docker, ensuring consistent environments.

Docker is widely used for creating reproducible environments, simplifying application deployment, and
improving scalability.
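
The steps below run Python applications, but the same mechanism runs different operating-system userlands pulled from Docker Hub (a brief sketch; the image tags are examples and may vary):

docker run -it ubuntu:22.04 bash               # interactive shell inside an Ubuntu container
docker run --rm alpine cat /etc/os-release     # run one command in an Alpine container, then remove it
docker ps -a                                   # list containers and their status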

STEPS AND OUTPUT:

Step 1: Install Docker Desktop

Download and install Docker Desktop from the official Docker website.

Step 2: Create a Folder for Your Docker Project

1. Create a folder on your desktop for the project (e.g., docker_project).
2. Open the folder using Visual Studio Code (VS Code).

Step 3: Create a Python File

1. Inside the project folder, create a Python file named devops.py.
2. Add the following code to the file to print "Hello, Docker!" using Python:

print("Hello, Docker!")

Step 4: Create a Dockerfile

1. In the same folder, create a file named Dockerfile (no file extension).
2. Add the following content to the Dockerfile to specify the instructions to build the Docker
image:

# Use a Python base image
FROM python:3.8-slim-buster

# Set the working directory inside the container
WORKDIR /app

# Copy the local Python file into the container
COPY devops.py .

# Command to run the Python file
CMD ["python", "devops.py"]

Step 5: Build and Run the Docker Container

1. Open the VS Code terminal.
2. Build the Docker image using the command:

docker build -t sample1 .

3. Run the Docker container using the following command:

docker run sample1

Expected Output:

Hello, Docker!

Step 6: Create a Numpy Program

1. Write a Python program that creates a simple numpy array. Save it as devops.py (replacing the previous contents), since the Dockerfile runs devops.py:

import numpy as np
array = np.array([1, 2, 3, 4, 5])
print("Numpy Array:", array)

2. Create a requirements.txt file to specify the required package (numpy):

numpy

Step 7: Build and Run the Numpy Program

1. Modify the Dockerfile to install the necessary dependencies from the requirements.txt file:

# Use a Python base image
FROM python:3.8-slim-buster

# Set the working directory inside the container
WORKDIR /app

# Copy the local files into the container
COPY devops.py .
COPY requirements.txt .

# Install required packages
RUN pip install -r requirements.txt

# Command to run the Python file
CMD ["python", "devops.py"]

2. Build the Docker image for the numpy program:

docker build -t sample2 .

3. Run the container with the numpy program:

docker run sample2

Expected Output:

Numpy Array: [1 2 3 4 5]

CONCLUSION:

In this experiment, we successfully installed Docker, created Docker containers for running Python
programs, and used Docker's containerization feature to run both a simple Python script and a program
with numpy dependencies. Docker allows efficient application deployment by isolating the execution
environment, making it a powerful tool for reproducible development and deployment.

Experiment 6: Installation of Ansible

AIM:

To install Ansible using Windows Subsystem for Linux (WSL) and set up a basic automation
environment.

THEORY:

Ansible is an open-source automation tool used for configuration management, application deployment,
and task automation. It allows IT administrators to automate repetitive tasks and manage large
environments with ease. Ansible uses a declarative approach, meaning users define the desired system
state rather than scripting specific steps.

Key Features:
• Declarative Language: Users specify the end state of systems rather than providing detailed
step-by-step commands.
• Playbooks: YAML files that list the series of tasks and configurations to be executed on managed systems (see the sketch after this list).
• Inventory: Hosts or groups of hosts that Ansible manages.
• Modules: Predefined units of work that execute specific tasks (e.g., install packages, copy
files).
• Tasks: The individual operations to be performed in a playbook.
• Roles: Reusable components that group related tasks, variables, and handlers.
• Handlers: Special tasks triggered by changes in system state (e.g., restart a service if a
configuration file is modified).
• Variables: Dynamic elements that allow customization in playbooks.
• Templates: Configurable files using Jinja2 templating for dynamic content.
• Facts: Collected system information used to make decisions in playbooks.
• Idempotency: Ensures that executing a task multiple times results in the same outcome without
side effects.
• Agentless Architecture: No need to install agent software on managed systems; uses SSH or
other communication protocols.
• Push-Based Model: Commands are executed from a central node and "pushed" to managed
hosts.
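
To make the Playbooks and Idempotency features concrete, here is a minimal sketch that creates a one-task playbook from the shell and runs it against localhost (ping.yml and its contents are illustrative additions, not part of the original procedure):

cat > ping.yml <<'EOF'
- hosts: localhost
  connection: local
  tasks:
    - name: Check that Ansible can reach the host
      ansible.builtin.ping:
EOF
ansible-playbook ping.yml

Running the playbook a second time yields the same result, illustrating idempotency.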

Applications:

• Configuration Management: Automatically configure systems and apply consistent setups across environments.
• Application Deployment: Streamline deployment of applications and dependencies.
• Infrastructure Provisioning: Automate creation of infrastructure components like virtual
machines, network configurations, and cloud resources.
• CI/CD Pipelines: Integrate into continuous integration/continuous deployment pipelines.
• Security and Compliance: Automate security updates, patch management, and policy
enforcement.

PROCEDURE:

Step 1: Install Ubuntu via WSL (Windows Subsystem for Linux)

1. Open Command Prompt as Administrator and install WSL and Ubuntu using the following
command:

wsl --install

This command installs WSL and sets Ubuntu as the default Linux distribution.

Step 2: Check for Python and Install It

1. Update the package list to ensure the latest versions of software are available:

sudo apt-get update

2. Install Python and pip, which are required for Ansible:

sudo apt-get install python3-pip

3. Verify the Python installation:

python3 --version

Step 3: Install Ansible

1. Open the Ubuntu session in WSL and install Ansible using pip:

python3 -m pip install ansible

2. If Ansible is already installed, you can re-run the command to update or reinstall:

python3 -m pip install --upgrade ansible
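
To confirm the installation, check the version from the same shell (a standard CLI check, not part of the original steps):

ansible --version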

CONCLUSION:

The experiment successfully demonstrated the installation of Ansible on Windows using the Windows
Subsystem for Linux (WSL). By following the steps to install Ubuntu, verify Python installation, and
install Ansible, we have set up an automated configuration management environment. This environment
enables users to leverage Ansible's powerful automation capabilities for various IT tasks, such as system
configuration, application deployment, and infrastructure provisioning.

Proper setup and verification of dependencies like Python are crucial for a functional Ansible
environment, ensuring efficient task automation and system management.

Experiment 7: Installation of Ansible

AIM:

To install Ansible using Windows Subsystem for Linux (WSL) and set up a basic automation
environment.

THEORY:

Ansible is an open-source automation tool designed for configuration management, application deployment, and task automation. It empowers IT administrators to automate repetitive tasks and manage large environments efficiently. Ansible operates on a declarative model, allowing users to specify the desired state of systems without scripting every step.

Key Features:
• Declarative Language: Users define the end state of systems rather than the steps to achieve
that state.
• Playbooks: YAML files that outline a series of tasks and configurations to execute on managed
systems.
• Inventory: Lists of hosts or groups of hosts managed by Ansible.
• Modules: Predefined units of work that perform specific tasks (e.g., installing packages,
copying files).
• Tasks: Individual operations executed within a playbook.
• Roles: Reusable components that organize related tasks, variables, and handlers.
• Handlers: Special tasks activated by changes in system state (e.g., restarting a service after a
configuration update).
• Variables: Dynamic elements for customizing playbooks.
• Templates: Configurable files that utilize the Jinja2 templating engine for dynamic content.
• Facts: Information gathered from managed systems to inform playbook decisions.
• Idempotency: Ensures that repeated execution of a task yields the same outcome without
unintended side effects.
• Agentless Architecture: No agent software is required on managed systems; communication
occurs through SSH or other protocols.
• Push-Based Model: Commands are executed from a central control node and pushed to target
hosts.
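
The push-based model can also be exercised without a playbook, using ad-hoc commands that run a single module against hosts (a brief sketch using built-in modules; run from the WSL shell after installation):

ansible localhost -m ansible.builtin.ping
ansible localhost -m ansible.builtin.setup -a "filter=ansible_distribution*"

The second command gathers facts about the machine, corresponding to the Facts feature above.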

Applications:

• Configuration Management: Automate the setup and maintenance of system configurations across multiple servers.
• Application Deployment: Streamline the deployment of applications and their dependencies.
• Infrastructure Provisioning: Automate the creation and management of virtual machines and
other infrastructure components.
• CI/CD Pipelines: Integrate into continuous integration and deployment workflows.
• Security and Compliance: Automate security updates, patch management, and compliance
enforcement.

PROCEDURE:

Step 1: Install Ubuntu via WSL (Windows Subsystem for Linux)

1. Open Command Prompt as Administrator.
2. Run the following command to install WSL and set Ubuntu as the default Linux distribution:

wsl --install

Step 2: Check for Python and Install It

1. Update the package list to ensure you have the latest versions of software:

sudo apt-get update

2. Install Python and pip, which are required for Ansible:

sudo apt-get install python3-pip


3. Verify the Python installation:

python3 --version

Step 3: Install Ansible

1. Open the Ubuntu session in WSL and install Ansible using pip:

python3 -m pip install ansible

2. If Ansible is already installed, you can update or reinstall it using:

python3 -m pip install --upgrade ansible

CONCLUSION:

The experiment successfully demonstrated the installation of Ansible on Windows using the Windows
Subsystem for Linux (WSL). By following the steps to install Ubuntu, verify the Python installation,
and install Ansible, a functional automated configuration management environment was set up.

This environment allows users to take advantage of Ansible's robust automation capabilities for a
variety of IT tasks, including system configuration, application deployment, and infrastructure
provisioning. Ensuring that dependencies like Python are correctly installed is essential for a smooth
Ansible setup, which ultimately facilitates effective task automation and system management.

Experiment 8: Testing Vulnerabilities Using Snyk and SonarQube

AIM:

1. To implement application and code security testing using Snyk.
2. To implement Static Application Security Testing (SAST) using SonarQube.

THEORY:

1. Snyk:

Snyk is a developer-first security tool that enables users to find and fix vulnerabilities in applications,
including open source dependencies and container images. It integrates seamlessly into the development
workflow, allowing developers to address security issues as part of their coding process.

• Key Features:
o Vulnerability Scanning: Automatically scans code and dependencies for known
vulnerabilities.
o Fix Recommendations: Provides actionable advice on how to resolve identified
vulnerabilities, including suggesting safe upgrades.
o Integration Support: Works with various CI/CD tools, enabling continuous security
checks.
2. SonarQube:

SonarQube is an open-source platform for continuous inspection of code quality. It provides Static
Application Security Testing (SAST) to identify security vulnerabilities and code quality issues early
in the development lifecycle.

• Key Features:
o Code Analysis: Analyzes source code for bugs, vulnerabilities, and code smells.
o Quality Gates: Allows users to define thresholds for code quality and security metrics,
ensuring code meets predefined standards before merging.
o Extensive Language Support: Supports multiple programming languages and
integrates with various development tools.

PROCEDURE:

1. Implementing Application and Code Security Testing Using Snyk:

1. Installation:
o Install Snyk CLI globally using npm:

npm install -g snyk

2. Authentication:
o Log in to Snyk using:

snyk auth

3. Run Snyk Test:


o Navigate to the project directory where your application code resides and run:

snyk test

o This command scans your project for known vulnerabilities in dependencies.


4. Fix Vulnerabilities:
o If vulnerabilities are detected, Snyk provides recommendations for remediation. Use
the wizard to apply fixes:

snyk wizard

o Follow the on-screen prompts to address the issues.
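
Beyond snyk test, two other commonly used Snyk CLI commands may be worth trying (optional extras; the image name is just an example, reusing the base image from Experiment 5):

snyk monitor                                 # upload a snapshot to snyk.io for continuous monitoring
snyk container test python:3.8-slim-buster   # scan a container image for known vulnerabilities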

2. Implementing Static Application Security Testing Using SonarQube:

1. Installation:
o Download and install SonarQube from the official website.
o Start the SonarQube server by navigating to the bin directory and running:

./sonar.sh start

o Access the SonarQube dashboard via http://localhost:9000.


2. Project Configuration:
o Create a new project in SonarQube.
o In the root directory of your project, create a sonar-project.properties file
with the following configuration:

sonar.projectKey=your_project_key
sonar.projectName=Your Project Name
sonar.projectVersion=1.0
sonar.sources=.

3. Run Analysis:
o Install SonarScanner, which is required to analyze the code. Run the scanner from your
project directory:

sonar-scanner

4. Review Results:
o Once the analysis is complete, navigate back to the SonarQube dashboard to review the
findings, including security vulnerabilities, code quality metrics, and suggestions for
improvement.
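
Note that the scanner needs the server location and credentials; these can be supplied on the command line (a hedged sketch; the token placeholder is generated from your SonarQube account, and older versions use sonar.login instead of sonar.token):

sonar-scanner -Dsonar.host.url=http://localhost:9000 -Dsonar.token=<your-token>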

CONCLUSION:

The experiment successfully demonstrated the use of Snyk for application and code security testing,
along with the implementation of Static Application Security Testing using SonarQube. By leveraging
these tools, developers can identify and mitigate vulnerabilities in their applications, enhancing overall
security and code quality.

Integrating security checks early in the development process ensures that potential issues are addressed
promptly, contributing to a more secure software development lifecycle. The findings from Snyk and
SonarQube provide valuable insights into improving code practices and maintaining a secure
application environment.

Experiment: Cloud and Infrastructure as Code

AIM:

1. To create and work with a virtual machine on a cloud platform (GCP/AWS).
2. To implement a Terraform script for deploying compute, storage, and network infrastructure on a public cloud platform.

THEORY:

1. Cloud Computing:

Cloud computing refers to the delivery of computing services—including servers, storage, databases,
networking, software, and analytics—over the internet ("the cloud"). It offers flexibility, scalability,
and cost-efficiency, enabling users to access and manage resources on-demand.
2. Infrastructure as Code (IaC):

Infrastructure as Code is a key DevOps practice that involves managing and provisioning computing
infrastructure through machine-readable definition files, rather than physical hardware configuration or
interactive configuration tools. This approach allows for automation, consistency, and version control
in infrastructure management.

3. Terraform:

Terraform is an open-source IaC tool developed by HashiCorp that enables users to define and provision
cloud infrastructure using declarative configuration files. It supports multiple cloud providers, including
AWS and GCP, allowing for a unified approach to managing resources across platforms.

• Key Features:
o Declarative Configuration: Users define the desired state of infrastructure, and
Terraform manages the underlying execution.
o Resource Graph: Automatically determines the order of resource creation based on
dependencies.
o State Management: Maintains a state file to track the infrastructure and manage
changes over time.

PROCEDURE:

1. Creating and Working with a Virtual Machine on AWS:

1. Login to AWS Management Console:
o Go to the AWS Management Console and log in with your credentials.
2. Launch EC2 Instance:
o Navigate to the EC2 dashboard.
o Click on "Launch Instance" to create a new virtual machine.
o Choose an Amazon Machine Image (AMI) and select an instance type (e.g., t2.micro
for free tier).
o Configure instance details, such as network settings and IAM role if necessary.
o Add storage and configure security groups to allow inbound traffic (e.g., SSH for
Linux).
o Review and launch the instance, creating a new key pair for SSH access.
3. Connect to the EC2 Instance:
o Once the instance is running, connect to it using SSH:

ssh -i /path/to/your-key.pem ec2-user@your-ec2-public-ip

2. Implementing a Terraform Script:

1. Install Terraform:
o Download and install Terraform from the official website.
2. Set Up Terraform Configuration:
o Create a directory for your Terraform project and navigate into it.
o Create a file named main.tf and define your infrastructure resources. Below is an
example Terraform script to create an EC2 instance in AWS:

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0" # Replace with your preferred AMI
  instance_type = "t2.micro"

  tags = {
    Name = "Terraform-Example"
  }
}

3. Initialize Terraform:
o Run the following command to initialize your Terraform working directory:

terraform init

4. Plan the Deployment:
o Execute the command to see the actions Terraform will take to achieve the desired state:

terraform plan

5. Apply the Configuration:
o Deploy the resources by running:

terraform apply

o Confirm the action by typing "yes" when prompted.

6. Manage Infrastructure:
o To view the current state of your infrastructure, you can use:

terraform show

o To remove the created resources, run:

terraform destroy
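
Two optional helper commands are worth running before plan and apply (standard Terraform CLI, not part of the original steps):

terraform validate   # check configuration files for syntax and internal consistency
terraform fmt        # rewrite configuration files in the canonical HCL style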

CONCLUSION:

The experiment successfully demonstrated the creation and management of a virtual machine on AWS
and the use of Terraform for infrastructure deployment. By leveraging cloud computing and
Infrastructure as Code principles, users can efficiently provision and manage resources, ensuring
consistency and reproducibility in their cloud environments.

Implementing Terraform scripts allows for scalable infrastructure management, reducing the
complexity and manual effort associated with traditional deployment methods. This approach not only
enhances productivity but also fosters collaboration within development and operations teams by
enabling version control and automation in infrastructure provisioning.
