
Ramrao Adik Institute of Technology

DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 1

EXPERIMENT TITLE To understand the benefits of Cloud Infrastructure and


Setup AWS Cloud9 IDE, Launch AWS Cloud9 IDE and
Perform Collaboration Demonstration.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 22/07/2021

SUBMISSION DATE 29/07/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak

EXPERIMENT NO:1
Practical name: Benefits of cloud infrastructure and setup AWS cloud9 IDE,
launch AWS cloud9 IDE and perform collaboration Demonstration.
Aim: To understand the benefits of cloud infrastructure and setup AWS cloud9
IDE, launch AWS cloud9 IDE and perform collaboration Demonstration.
Theory:
What is Cloud9?
- AWS Cloud9 is an integrated development environment (IDE) that runs in the cloud. It lets you write, run, and debug code from a web browser, and it includes a code editor, a debugger, and a terminal.
- Cloud9 comes with built-in support for many popular programming languages, such as PHP and JavaScript, so you do not need to install language tooling on your own machine before starting a project.

- As a cloud-based IDE, Cloud9 lets you work on your projects from anywhere, whether you are at home, in the office, or on-site, using any internet-connected device, and it offers a seamless experience for developing serverless applications.
- In addition to its programming language support, AWS Cloud9 lets a developer build, edit, and debug AWS Lambda functions. The environment comes preconfigured with the software development kits (SDKs), libraries, and plug-ins required to build serverless applications.
- Cloud9 can run on a managed Amazon EC2 instance, or on any SSH-
supported Linux server, whether it's on AWS, in a private data center or in
another cloud.
- A developer can access the Cloud9 terminal from anywhere with an internet
connection, and share code with other members of the development team.
- Cloud9 updates in real time, so a developer can see code entered, edited, or deleted by other members of the development team as it happens, and can also chat with other developers in the IDE.
Benefits of cloud9:

1. Flexible Browser Coding:


- AWS Cloud9 gives you the flexibility to run your development environment on a managed Amazon EC2 instance or on any SSH-accessible Linux server, so you can write, run, and debug applications from a browser without installing a local IDE.
- The Cloud9 code editor and debugger provide useful and efficient tools such as code hinting, code completion, and step-through debugging. The browser-based terminal lets you install additional software and run new commands.

2. Collaborative coding:
- Another benefit of Cloud9 is collaborative coding with your team. You can share the development environment in a few clicks and pair-program together. During a collaboration session, each team member can observe the others' commands and edits, and at the same time chat with them directly in the IDE.

3. Develop serverless applications:


- Cloud9 also makes it easy to write, run, and debug serverless applications. It provides an environment for testing AWS Lambda functions, and you can iterate on your code directly, saving time and improving code quality.

4. Access AWS services:
- Cloud9 provides a terminal with privileges on the Amazon EC2 instance that hosts the development environment, together with the AWS Command Line Interface, so you can run commands efficiently and access AWS services easily.

Steps to create AWS account:

1. Open the Amazon Web Services (AWS) home page.


2. Choose Create an AWS Account.
Note: If you signed in to AWS recently, choose Sign in to the Console.
If Create a new AWS account isn't visible, first choose Sign in to a different
account, and then choose Create a new AWS account.

3. Enter your account information, and then choose Continue. Be sure that you
enter your account information correctly, especially your email address. If
you enter your email address incorrectly, you can't access your account.

4. Choose Personal or Professional.
Note: Personal accounts and professional accounts have the same features
and functions.
5. Enter your company or personal information.
Important: For professional AWS accounts, it's a best practice to enter the
company phone number rather than a personal cell phone. Configuring a root
account with an individual email address or a personal phone number can
make your account insecure.
6. Read and accept the AWS Customer Agreement.
Note: Be sure that you read and understand the terms of the AWS Customer
Agreement.
7. Choose Create Account and Continue.

8. On the Payment Information page, enter the information about your payment


method, and then choose Verify and Add.

9. Choose your country or region code from the list. Enter a phone number where you can be reached in the next few minutes. Enter the code displayed in the CAPTCHA, and then submit. In a few moments, an automated system contacts you. Enter the PIN you receive, and then choose Verify code. Select the Free plan; you will then be directed to the AWS page, where you can click Sign in to the Console.
Steps to share an environment using Cloud9:

Step 1: Open AWS, click on Services, and go to Cloud9. Click on Create environment, give the environment a name, choose Next step, and then Create environment.
Step 2: Your AWS Cloud9 environment will be displayed. (All collaboration will take place here.)
Step 3: In another browser, go to the AWS Management Console. Search for the IAM service and open it. Go to Users and choose Add users. Give the username, select Custom password, set it, and click Next: Permissions.
Step 4: Click on Create group, give the group a name, and create the group. Click Next: Tags, then Next: Review, and create the user.
Step 5: Go to the Cloud9 environment and select File - New From Template - HTML file. Write your HTML code, select the Share option on the right-hand side, add the user you created, and invite them. A security warning box will appear; select OK.
Step 6: Open an incognito window, log in to AWS, search for Cloud9, click the menu icon on the left side, click Shared with you, and open the IDE. The user you created will now be able to access the file created by the root user.
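The same environment creation and sharing can also be scripted with the AWS CLI. A rough sketch (the environment name, instance type, account ID, and user name below are placeholder assumptions):

$ aws cloud9 create-environment-ec2 --name demo-env --instance-type t2.micro
# note the environmentId returned above, then grant the IAM user read-write access:
$ aws cloud9 create-environment-membership --environment-id <environment-id> \
      --user-arn arn:aws:iam::<account-id>:user/<iam-user> --permissions read-write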
Conclusion: Hence, we understood the benefits of cloud infrastructure, set up the AWS Cloud9 IDE, launched it, and performed a collaboration demonstration.

Ramrao Adik Institute of Technology


DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 2

EXPERIMENT TITLE To Build Your Application using AWS CodeBuild and


Deploy on S3 / SEBS using AWS CodePipeline, deploy
Sample Application on EC2 instance using AWS
CodeDeploy.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 29/07/2021, 05/08/2021

SUBMISSION DATE 12/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak

EXPERIMENT NO : 2
Aim: To Build Your Application using AWS CodeBuild and Deploy on S3 / SEBS
using AWS CodePipeline, deploy Sample Application on EC2 instance using AWS
CodeDeploy.
Theory :
What is CodeBuild –
AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles
your source code, runs unit tests, and produces artifacts that are ready to deploy.
CodeBuild eliminates the need to provision, manage, and scale your own build
servers. It provides prepackaged build environments for popular programming
languages and build tools such as Apache Maven, Gradle, and more. You can also
customize build environments in CodeBuild to use your own build tools.
CodeBuild scales automatically to meet peak build requests.
CodeBuild provides these benefits:
 Fully managed – CodeBuild eliminates the need to set up, patch, update, and
manage your own build servers.
 On demand – CodeBuild scales on demand to meet your build needs. You
pay only for the number of build minutes you consume.
 Out of the box – CodeBuild provides preconfigured build environments for
the most popular programming languages. All you need to do is point to your
build script to start your first build.
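For reference, CodeBuild reads its build commands from a buildspec.yml file at the root of the source. A minimal sketch (the Maven command and artifact path are assumptions for a Java project; adapt them to your own build tool):

$ cat > buildspec.yml <<'EOF'
version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn -B package      # assumes a Maven project
artifacts:
  files:
    - target/*.jar
EOF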
What is CodeDeploy –
CodeDeploy is a deployment service that automates application deployments to
Amazon EC2 instances, on-premises instances, serverless Lambda functions, or
Amazon ECS services.
You can deploy a nearly unlimited variety of application content, including:
 Code

 Serverless AWS Lambda functions


 Web and configuration files
 Executables
 Packages
 Scripts
 Multimedia files
CodeDeploy can deploy application content that runs on a server and is stored in
Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy
can also deploy a serverless Lambda function. You do not need to make changes to
your existing code before you can use CodeDeploy.
CodeDeploy makes it easier for you to:
 Rapidly release new features.

 Update AWS Lambda function versions.


 Avoid downtime during application deployment.
 Handle the complexity of updating your applications, without many of the
risks associated with error-prone manual deployments.
The service scales with your infrastructure so you can easily deploy to one instance
or thousands.
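For EC2/on-premises deployments, the application bundle must also contain an appspec.yml file at its root that tells the CodeDeploy agent where to copy files and which lifecycle scripts to run. A minimal sketch for a Linux instance (the file paths and script name are assumptions):

$ cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
EOF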
What is CodePipeline –
AWS CodePipeline is an Amazon Web Services product that automates the
software deployment process, allowing a developer to quickly model, visualize and
deliver code for new features and updates. This method is called continuous
delivery.
AWS CodePipeline automatically builds, tests and launches an application each
time the code is changed; a developer uses a graphical user interface to model
workflow configurations for the release process within the pipeline. A development
team can specify and run actions or a group of actions, which is called a stage. For
example, a developer would specify which tests CodePipeline will run and to which
pre-production environments it should deploy. The service can then run these
actions through the parallel execution process, in which multiple processors handle
computing tasks simultaneously to accelerate workflows.
AWS CodePipeline integrates with several Amazon services. It pulls source code
from Amazon Simple Storage Service and deploys to both AWS CodeDeploy and
AWS Elastic Beanstalk. A developer can also integrate AWS Lambda functions or
third-party DevOps tools, such as GitHub or Jenkins. AWS CodePipeline also
supports custom systems and actions through the AWS command line interface.
These custom actions include build, deploy, test and invoke, which facilitate unique
release processes. The developer must create a job worker to poll CodePipeline for
job requests, then run the action and return a status result.
An administrator grants permissions to AWS CodePipeline through AWS Identity
and Access Management (IAM). IAM roles control which end users can make
changes to the application release workflow.

Steps :
Step 1: Create an S3 bucket for your application
To create an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, enter a name for your bucket. In Region, choose the Region where you intend to create your pipeline, such as US West (Oregon), and then choose Create bucket.
4. After the bucket is created, a success banner displays. Choose Go to bucket
details.
5. On the Properties tab, choose Versioning. Choose Enable versioning, and
then choose Save.
When versioning is enabled, Amazon S3 saves every version of every object
in the bucket.
6. On the Permissions tab, leave the defaults. For more information about S3
bucket and object permissions, see Specifying Permissions in a Policy.
7. Next, download a sample application and save it into a folder or directory on your local computer.
1. Choose one of the following:
- If you want to deploy to Amazon Linux instances using CodeDeploy, download the sample application here: SampleApp_Linux.zip.
- If you want to deploy to Windows Server instances using CodeDeploy (as this tutorial does), download the sample application here: SampleApp_Windows.zip.
2. Download the compressed (zipped) file. Do not unzip the file.
8. In the Amazon S3 console, for your bucket, upload the file:
1. Choose Upload.
2. Drag and drop the file or choose Add files and browse for the file.
3. Choose Upload.
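The same bucket setup and upload can be done with the AWS CLI, roughly as sketched below (the bucket name is a placeholder and must be globally unique):

$ aws s3api create-bucket --bucket my-codepipeline-demo-bucket --region us-west-2 \
      --create-bucket-configuration LocationConstraint=us-west-2
$ aws s3api put-bucket-versioning --bucket my-codepipeline-demo-bucket \
      --versioning-configuration Status=Enabled
$ aws s3 cp SampleApp_Windows.zip s3://my-codepipeline-demo-bucket/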
Step 2: Create Amazon EC2 Windows instances and install the CodeDeploy
agent
To create an instance role
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. From the console dashboard, choose Roles.
3. Choose Create role.

4. Under Select type of trusted entity, select AWS service. Under Choose a


use case, select EC2, and then choose Next: Permissions.
5. Search for and select the policy
named AmazonEC2RoleforAWSCodeDeploy, and then choose Next:
Tags.

6. Choose Next: Review. Enter a name for the role (for example, EC2InstanceRole), and then choose Create role.


To launch instances
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. From the console dashboard, choose Launch instance, and select Launch
instance from the options that pop up.

3. On the Step 1: Choose an Amazon Machine Image (AMI) page, locate


the Microsoft Windows Server 2019 Base option, and then choose Select.
(This AMI is labeled "Free tier eligible" and can be found at the top of the
list.)
4. On the Step 2: Choose an Instance Type page, choose the free tier
eligible t2.micro type as the hardware configuration for your instance, and
then choose Next: Configure Instance Details.

5. On the Step 3: Configure Instance Details page, do the following:


 In Number of instances, enter 2.
 In Auto-assign Public IP, choose Enable.
 In IAM role, choose the IAM role you created in the previous
procedure (for example, EC2InstanceRole).

 Expand Advanced Details, and in User data, with As text selected,


enter the following:
<powershell>
New-Item -Path c:\temp -ItemType "directory" -Force
powershell.exe -Command Read-S3Object -BucketName bucket-name/latest -Key codedeploy-agent.msi -File c:\temp\codedeploy-agent.msi
Start-Process -Wait -FilePath c:\temp\codedeploy-agent.msi -WindowStyle Hidden
</powershell>

6. Leave the Step 4: Add Storage page unchanged, and then choose Next: Add


Tags.
7. On the Add Tags page, choose Add Tag. Enter Name in the Key field,
enter MyCodePipelineDemo in the Value field, and then choose Next:
Configure Security Group.

8. On the Configure Security Group page, allow port 80 communication so


you can access the public instance endpoint.
9. Choose Review and Launch.
10. On the Review Instance Launch page, choose Launch. When prompted for a key pair, choose Proceed without a key pair. When you are ready, select the acknowledgment check box, and then choose Launch Instances.
11. Choose View Instances to close the confirmation page and return to the console.
12. You can view the status of the launch on the Instances page. When you launch an instance, its initial state is pending. After the instance starts, its state changes to running, and it receives a public DNS name. (If the Public DNS column is not displayed, choose the Show/Hide icon, and then select Public DNS.)
13. It can take a few minutes for the instance to be ready for you to connect to it. Check that your instance has passed its status checks. You can view this information in the Status Checks column.
Step 3: Create an application in CodeDeploy
To create an application in CodeDeploy
1. Open the CodeDeploy console at https://console.aws.amazon.com/codedeploy.
2. If the Applications page does not appear, on the AWS CodeDeploy menu,
choose Applications.
3. Choose Create application.

4. In Application name, enter MyDemoApplication.


5. In Compute Platform, choose EC2/On-premises.
6. Choose Create application.

To create a deployment group in CodeDeploy


1. On the page that displays your application, choose Create deployment
group.

2. In Deployment group name, enter MyDemoDeploymentGroup.

3. In Service Role, choose a service role that trusts AWS CodeDeploy with, at
minimum, the trust and permissions described in Create a Service Role for
CodeDeploy. To get the service role ARN, see Get the Service Role ARN
(Console).
4. Under Deployment type, choose In-place.
5. Under Environment configuration, choose Amazon EC2 Instances.
Choose Name in the Key field, and in the Value field,
enter MyCodePipelineDemo.

6. Under Deployment configuration, choose CodeDeployDefault.OneAtATime.
7. Under Load Balancer, clear Enable load balancing. You do not need to set
up a load balancer or choose a target group for this example.
8. In the Advanced section, leave the defaults.
9. Choose Create deployment group.
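Roughly equivalent AWS CLI calls are sketched below (the account ID and service role name are placeholders):

$ aws deploy create-application --application-name MyDemoApplication --compute-platform Server
$ aws deploy create-deployment-group \
      --application-name MyDemoApplication \
      --deployment-group-name MyDemoDeploymentGroup \
      --deployment-config-name CodeDeployDefault.OneAtATime \
      --ec2-tag-filters Key=Name,Value=MyCodePipelineDemo,Type=KEY_AND_VALUE \
      --service-role-arn arn:aws:iam::<account-id>:role/<codedeploy-service-role>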

Step 4: Create your first pipeline in CodePipeline


To create a CodePipeline automated release process
1. Sign in to the AWS Management Console and open the CodePipeline console at https://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or the Pipelines page,
choose Create pipeline.

3. In Step 1: Choose pipeline settings, in Pipeline name,


enter MyFirstPipeline.
4. In Service role, do one of the following:
1. Choose New service role to allow CodePipeline to create a new
service role in IAM. In Role name, the role and policy name both
default to this format: AWSCodePipelineServiceRole-region-
pipeline_name. For example, this is the service role created for
this tutorial: AWSCodePipelineServiceRole-eu-west-2-
MyFirstPipeline.
2. Choose Existing service role to use a service role already created in
IAM. In Role name, choose your service role from the list.
5. Leave the settings under Advanced settings at their defaults, and then
choose Next.

6. In Step 2: Add source stage, in Source provider, choose Amazon S3.


In Bucket, enter the name of the S3 bucket you created in Step 1: Create an
S3 bucket for your application. In S3 object key, enter the object key with or
without a file path, and remember to include the file extension. For example,
for SampleApp_Windows.zip, enter the sample file name as shown in
this example:
SampleApp_Windows.zip
Choose Next step.
Under Change detection options, leave the defaults. This allows CodePipeline to use Amazon CloudWatch Events to detect changes in your source bucket.
Choose Next.
7. In Step 3: Add build stage, choose Skip build stage, and then accept the
warning message by choosing Skip again. Choose Next.
8. In Step 4: Add deploy stage, in Deploy provider, choose AWS
CodeDeploy. The Region field defaults to the same AWS Region as your
pipeline. In Application name, enter MyDemoApplication, or choose
the Refresh button, and then choose the application name from the list.
In Deployment group, enter MyDemoDeploymentGroup, or choose it
from the list, and then choose Next.
9. In Step 5: Review, review the information, and then choose Create pipeline.
10.The pipeline starts to run. You can view progress and success and failure
messages as the CodePipeline sample deploys a webpage to each of the
Amazon EC2 instances in the CodeDeploy deployment.
Congratulations! You just created a simple pipeline in CodePipeline. 

To verify your pipeline ran successfully


1. View the initial progress of the pipeline. The status of each stage changes
from No executions yet to In Progress, and then to
either Succeeded or Failed. The pipeline should complete the first run within
a few minutes.
2. After Succeeded is displayed for the action status, in the status area for
the Deploy stage, choose Details. This opens the AWS CodeDeploy console.
3. In the Deployment group tab, under Deployment lifecycle events, choose
an instance ID. This opens the EC2 console.
4. On the Description tab, in Public DNS, copy the address, and then paste it
into the address bar of your web browser. View the index page for the sample
application you uploaded to your S3 bucket.
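The same verification can also be done from the AWS CLI; a sketch:

$ aws codepipeline get-pipeline-state --name MyFirstPipeline
$ aws deploy list-deployments --application-name MyDemoApplication \
      --deployment-group-name MyDemoDeploymentGroup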

Conclusion: Hence, we built our application using AWS CodeBuild, deployed it on S3 / SEBS using AWS CodePipeline, and deployed a sample application on an EC2 instance using AWS CodeDeploy.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 3

EXPERIMENT TITLE To understand the Kubernetes Cluster Architecture,


install and Spin Up a Kubernetes Cluster on Linux
Machines/Cloud Platforms

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 12/09/2021, 19/09/2021

SUBMISSION DATE 26/08/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
EXPERIMENT NO. 3
Aim: To understand the Kubernetes Cluster Architecture, install and Spin Up a
Kubernetes Cluster on Linux Machines/Cloud Platforms.
Theory :
Kubernetes Architecture

Kubernetes Control Plane

(Figure: architectural overview of the Kubernetes control plane taxonomy)

The control plane is the system that maintains a record of all Kubernetes objects. It continuously manages object states, responding to changes in the cluster; it also works to make the actual state of system objects match the desired state. The API Server provides APIs to support lifecycle orchestration (scaling, updates, and so on) for different types of applications. Most resources contain metadata, such as labels and annotations, desired state (specification) and observed state (current status). Controllers work to drive the actual state toward the desired state. The Controller Manager is a daemon that runs the core control loops, watches the state of the cluster, and makes changes to drive status toward the desired state. The Cloud Controller Manager integrates into each public cloud for optimal support of availability zones, VM instances, storage services, and network services for DNS, routing and load balancing. The Scheduler is responsible for the scheduling of containers across the nodes in the cluster; it takes various constraints into account, such as resource limitations or guarantees, and affinity and anti-affinity specifications.

Cluster Nodes

(Figure: Kubernetes node taxonomy)

Cluster nodes are machines that run containers and are managed by the master nodes. The Kubelet is the primary and most important controller in Kubernetes. It is responsible for driving the container execution layer, typically Docker.

Pods and Services

Pods are one of the crucial concepts in Kubernetes, as they are the key construct that developers interact with. The concepts described earlier are infrastructure-focused and part of the internal architecture. A pod is a logical construct that packages up a single application, which can consist of multiple containers and storage volumes. Usually, a single container (sometimes with a helper program in an additional container) runs in this configuration, as shown in the figure below. Pods are managed by various workload controllers: ReplicaSet, Deployment, DaemonSet, StatefulSet, Job and CronJob.

(Figure: Kubernetes pod architecture)

Kubernetes Services
Services are the Kubernetes way of configuring a proxy to forward traffic to a set of pods. Instead of static IP address-based assignments, Services use selectors (labels) to define which pods belong to which Service. These dynamic assignments make releasing new versions or adding pods to a Service really easy. Any time a pod with the same labels as a Service is spun up, it is assigned to that Service.
Kubernetes Networking
Kubernetes has a distinctive networking model for cluster-wide, pod-to-pod networking. In most cases, the Container Network Interface (CNI) uses a simple overlay network (like Flannel) to obscure the underlying network from the pod by using traffic encapsulation (like VXLAN); it can also use a fully-routed solution like Calico. In both cases, pods communicate over a cluster-wide pod network, managed by a CNI provider like Flannel or Calico. Within a pod, containers can communicate without any restrictions: containers in a pod exist in the same network namespace and share an IP, which means they can communicate over localhost. Pods can communicate with each other using the pod IP address, which is reachable across the cluster. Moving from pods to services, or from external sources to services, requires going through kube-proxy.
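Once a cluster is running (see the installation steps below), this pod-to-pod connectivity can be spot-checked with a short sequence like the following sketch (the pod names are arbitrary examples; the pod IP comes from the kubectl get output):

kubernetes-master:~$ kubectl run web --image=nginx --restart=Never       # start a bare nginx pod
kubernetes-master:~$ kubectl get pod web -o wide                         # note the pod IP in the output
kubernetes-master:~$ kubectl run client --rm -it --image=busybox --restart=Never -- wget -qO- http://<pod-ip>
# the nginx welcome page printed by the client confirms cluster-wide pod-to-pod connectivity
kubernetes-master:~$ kubectl delete pod web                              # clean up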

Steps to Install Kubernetes on Ubuntu


Set up Docker
Step 1: Install Docker
Kubernetes requires an existing Docker installation. If you already have Docker
installed, skip ahead to Step 2.
If you do not have Docker installed, install it by following these steps:
1. Update the package list with the command:
on-master&slave$sudo apt-get update
Master :

Worker :

2. Next, install Docker with the command:


on-master&slave$sudo apt-get install docker.io
Master :

Worker :

3. Check the installation (and version) by entering the following:


on-master&slave$docker --version
Master :

Worker :

Step 2: Start and Enable Docker


1. Set Docker to launch at boot by entering the following:
on-master&slave$sudo systemctl enable docker
2. Verify Docker is running:
on-master&slave$sudo systemctl status docker
To start Docker if it’s not running:
on-master&slave$sudo systemctl start docker
Master :

Worker :

Install Kubernetes
Step 3: Add Kubernetes Signing Key
Since you are downloading Kubernetes from a non-standard repository, it is
essential to ensure that the software is authentic. This is done by adding a signing
key.
1. Enter the following to add a signing key:
on-master&slave$curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
If you get an error that curl is not installed, install it with:
on-master&slave$sudo apt-get install curl
Master :
Worker :

Step 4: Add Software Repositories


Kubernetes is not included in the default repositories. To add them, enter the following:
on-master&slave$sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
Master :

Worker :

Step 5: Kubernetes Installation Tools


Kubeadm (Kubernetes Admin) is a tool that helps initialize a cluster. It fast-tracks setup by using community-sourced best practices. Kubelet is the work package, which runs on every node and starts containers. Kubectl gives you command-line access to clusters.
1. Install Kubernetes tools with the command:
on-master&slave$sudo apt-get install kubeadm kubelet kubectl -y
on-master&slave$sudo apt-mark hold kubeadm kubelet kubectl
Allow the process to complete.
Master :
Worker :

2. Verify the installation with:


on-master&slave$kubeadm version
Master :

Worker :

Kubernetes Deployment
Step 6: Begin Kubernetes Deployment
Start by disabling the swap memory on each server:
on-master&slave$sudo swapoff -a
Step 7: Assign Unique Hostname for Each Server Node
Decide which server to set as the master node. Then enter the command:
on-master$sudo hostnamectl set-hostname master-node
Next, set a worker node hostname by entering the following on the worker server:
on-slave$sudo hostnamectl set-hostname worker01
Master :

Worker :
Step 8: Initialize Kubernetes on Master Node
Switch to the master server node, and enter the following:
on-master$sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Once this command finishes, it will display a kubeadm join message at the end.
Make a note of the whole entry. This will be used to join the worker nodes to the
cluster.
Master :

Next, enter the following to create a directory for the cluster:


kubernetes-master:~$ mkdir -p $HOME/.kube
kubernetes-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubernetes-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 9: Deploy Pod Network to Cluster


A Pod Network is a way to allow communication between different nodes in the
cluster. This tutorial uses the flannel virtual network.
Enter the following:
kubernetes-master:~$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Allow the process to complete.

Verify that everything is running and communicating:


kubernetes-master:~$ kubectl get pods --all-namespaces

Step 10: Join Worker Node to Cluster


As indicated in Step 8, you can enter the kubeadm join command on each worker node to connect it to the cluster.
Switch to the worker01 system and enter the command you noted from Step 8:
kubeadm join 172.31.30.35:6443 --token qkhdq4.7mp7gdk0bacuo67n --discovery-token-ca-cert-hash sha256:91e4e49fc787ee61d4f096ee717726d27470312af51f9f0fc139f01d600479c3
Worker :

Wait a few minutes; then you can check the status of the nodes.
Switch to the master server, and enter:
kubernetes-master:~$ kubectl get nodes
The system should display the worker nodes that you joined to the cluster.
Output
master    Ready    master   1d   v1.14.0
worker1   Ready    <none>   1d   v1.14.0
If all of your nodes have the value Ready for STATUS, it means that they’re part of
the cluster and ready to run workloads.

Now that your cluster is verified successfully, let’s schedule an example Nginx
application on the cluster.
Running An Application on the Cluster
You can now deploy any containerized application to your cluster. To keep things
familiar, let’s deploy Nginx using Deployments and Services to see how this
application can be deployed to the cluster. You can use the commands below for
other containerized applications as well, provided you change the Docker image
name and any relevant flags (such as ports and volumes).

Still within the master node, execute the following command to create a
deployment named nginx:
kubernetes-master:~$kubectl create deployment nginx --image=nginx

A deployment is a type of Kubernetes object that ensures there’s always a specified


number of pods running based on a defined template, even if the pod crashes during
the cluster’s lifetime. The above deployment will create a pod with one container
from the Docker registry’s Nginx Docker Image.
Next, run the following command to create a service named nginx that will expose
the app publicly. It will do so through a NodePort, a scheme that will make the pod
accessible through an arbitrary port opened on each node of the cluster:
kubernetes-master:~$kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

Services are another type of Kubernetes object that expose cluster internal services
to clients, both internal and external. They are also capable of load balancing
requests to multiple pods, and are an integral component in Kubernetes, frequently
interacting with other components.
Run the following command:
kubernetes-master:~$kubectl get services
From the third line of the above output, you can retrieve the port that Nginx is
running on. Kubernetes will assign a random port that is greater than 30000
automatically, while ensuring that the port is not already bound by another service.
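Once you know that port, you can verify the Service from outside the pod network, for example (a sketch; substitute a node's IP address and the NodePort reported above):

kubernetes-master:~$ curl http://<worker-node-ip>:<node-port>
# the default "Welcome to nginx!" page confirms the Service is reachable through the NodePort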

Conclusion: Understood the Kubernetes cluster architecture, and installed and spun up a Kubernetes cluster on Linux machines/cloud platforms.

Ramrao Adik Institute of Technology


DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 4

EXPERIMENT TITLE To install Kubectl and execute Kubectl commands to


manage the Kubernetes cluster and deploy Your First
Kubernetes Application.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 26/08/2021
02/09/2021
SUBMISSION DATE 09/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak

Experiment No: 4
Aim: To install Kubectl and execute Kubectl commands to manage the Kubernetes
cluster and deploy Your First Kubernetes Application.
Theory :
The Kubernetes command-line tool, kubectl, allows you to run commands against
Kubernetes clusters. You can use kubectl to deploy applications, inspect and
manage cluster resources, and view logs.
Pods and Container Introspection Commands

Lists all current pods: kubectl get pods
Describes a pod: kubectl describe pod <name>
Lists all replication controllers: kubectl get rc
Lists replication controllers in a namespace: kubectl get rc --namespace=<namespace>
Describes a replication controller: kubectl describe rc <name>
Lists services: kubectl get svc
Describes a service: kubectl describe svc <name>
Deletes a pod: kubectl delete pod <name>
Watches nodes continuously: kubectl get nodes -w

Debugging Commands

Executes a command in a container of a pod: kubectl exec <pod> <command> [-c <container>]
Follows logs from a pod (optionally a specific container): kubectl logs -f <name> [-c <container>]
Shows metrics for a node: kubectl top node
Shows metrics for a pod: kubectl top pod

Cluster Introspection Commands

To get version-related information: kubectl version
To get cluster-related information: kubectl cluster-info
To get configuration details: kubectl config view
To get information about a node: kubectl describe node <node>

Quick Commands

Launches a pod with a name and image: kubectl run <name> --image=<image-name>
Creates the resources described in <manifest.yaml>: kubectl create -f <manifest.yaml>
Scales a replication controller to <count> instances: kubectl scale --replicas=<count> rc <name>
Exposes a replication controller, mapping the external port to the internal port: kubectl expose rc <name> --port=<external> --target-port=<internal>
Drains all pods from node <n>: kubectl drain <n> --delete-local-data --force --ignore-daemonsets
Creates a namespace: kubectl create namespace <namespace>
Lets the master node run pods: kubectl taint nodes --all node-role.kubernetes.io/master-
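As an illustration of the kubectl create -f <manifest.yaml> entry above, the following sketch writes a minimal pod manifest and manages it with commands from the tables (the pod name and label are arbitrary examples):

kubernetes-master:~$ cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
kubernetes-master:~$ kubectl create -f nginx-pod.yaml      # create the pod from the manifest
kubernetes-master:~$ kubectl get pods                      # list current pods
kubernetes-master:~$ kubectl describe pod nginx-demo       # inspect the pod
kubernetes-master:~$ kubectl delete pod nginx-demo         # clean up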

Steps to deploy application on Kubernetes cluster:


Execute the following command to create a deployment named nginx:
kubernetes-master:~$kubectl create deployment nginx --image=nginx

Next, run the following command to create a service named nginx that will expose
the app publicly.
kubernetes-master:~$kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

Run the following command:


kubernetes-master:~$kubectl get services
From the third line of the above output, you can retrieve the port that Nginx is
running on. Kubernetes will assign a random port that is greater than 30000
automatically, while ensuring that the port is not already bound by another service.

To see the deployed container on worker node switch to worker01


on-slave#docker ps

If you want to scale up the replicas for a deployment (nginx in our case), then use the following command:
kubernetes-master:~$kubectl scale --current-replicas=1 --replicas=2 deployment/nginx

kubernetes-master:~$kubectl get pods

kubernetes-master:~$kubectl describe deployment/nginx


If you would like to remove the Nginx application, first delete the nginx service
from the master node:
kubernetes-master:~$kubectl delete service nginx

Run the following to ensure that the service has been deleted:
kubernetes-master:~$kubectl get services

Then delete the deployment:


kubernetes-master:~$kubectl delete deployment nginx

Run the following to confirm that this worked:


kubernetes-master:~$kubectl get deployments
How to gracefully remove a node from Kubernetes?
On the master node:
Find the node:
kubernetes-master:~$kubectl get nodes
Drain it:
kubernetes-master:~$kubectl drain nodetoberemoved
Delete it:
kubernetes-master:~$kubectl delete node nodetoberemoved

On the worker node (nodetoberemoved), remove the join/init settings from the node:

kubernetes-slave:~$kubeadm reset
Press y to proceed.

kubernetes-slave:~$docker ps

kubernetes-master:~$kubectl get nodes

Conclusion: Installed kubectl, executed kubectl commands to manage the Kubernetes cluster, and deployed our first Kubernetes application.

Ramrao Adik Institute of Technology


DEPARTMENT OF INFORMATION TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 5

EXPERIMENT TITLE To understand terraform lifecycle, core


concepts/terminologies and install it on a Linux
Machine.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 09/09/2021

SUBMISSION DATE 16/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak

Experiment No : 5
Aim: To understand terraform lifecycle, core concepts/terminologies and install it
on a Linux Machine.
Theory :
Lifecycle is a nested block that can appear within a resource block. The lifecycle
block and its contents are meta-arguments, available for all resource blocks
regardless of type.
The following arguments can be used within a lifecycle block:
1. create_before_destroy (bool) - By default, when Terraform must change a
resource argument that cannot be updated in-place due to remote API limitations,
Terraform will instead destroy the existing object and then create a new
replacement object with the new configured arguments.
The create_before_destroy meta-argument changes this behavior so that the new
replacement object is created first, and the prior object is destroyed after the
replacement is created.
This is an opt-in behavior because many remote object types have unique name
requirements or other constraints that must be accommodated for both a new and
an old object to exist concurrently.
2. prevent_destroy (bool) - This meta-argument, when set to true, will cause
Terraform to reject with an error any plan that would destroy the infrastructure
object associated with the resource, as long as the argument remains present in
the configuration.
This can be used as a measure of safety against the accidental replacement of
objects that may be costly to reproduce, such as database instances. However, it
will make certain configuration changes impossible to apply, and will prevent the
use of the terraform destroy command once such objects are created, and so this
option should be used sparingly.
3. ignore_changes (list of attribute names) - By default, Terraform detects any
difference in the current settings of a real infrastructure object and plans to
update the remote object to match configuration.
The ignore_changes feature is intended to be used when a resource is created
with references to data that may change in the future, but should not affect said
resource after its creation. In some rare cases, settings of a remote object are
modified by processes outside of Terraform, which Terraform would then
attempt to "fix" on the next run. In order to make Terraform share management
responsibilities of a single object with a separate process, the ignore_changes
meta-argument specifies resource attributes that Terraform should ignore when
planning updates to the associated remote object.
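A minimal sketch of how these meta-arguments appear inside a resource block (the resource and its arguments are illustrative assumptions, appended to main.tf from the shell):

$ cat >> main.tf <<'EOF'
resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
    prevent_destroy       = false
    ignore_changes        = [tags]
  }
}
EOF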
Core terminologies:
Main commands:
init          Prepare your working directory for other commands
validate      Check whether the configuration is valid
plan          Show changes required by the current configuration
apply         Create or update infrastructure
destroy       Destroy previously-created infrastructure

All other commands:

console       Try Terraform expressions at an interactive command prompt
fmt           Reformat your configuration in the standard style
force-unlock  Release a stuck lock on the current workspace
get           Install or upgrade remote Terraform modules
graph         Generate a Graphviz graph of the steps in an operation
import        Associate existing infrastructure with a Terraform resource
login         Obtain and save credentials for a remote host
logout        Remove locally-stored credentials for a remote host
output        Show output values from your root module
providers     Show the providers required for this configuration
refresh       Update the state to match remote systems
show          Show the current state or a saved plan
state         Advanced state management
taint         Mark a resource instance as not fully functional
test          Experimental support for module integration testing
untaint       Remove the 'tainted' state from a resource instance
version       Show the current Terraform version
workspace     Workspace management

Steps to install terraform on linux machine :


1. Ensure that your system is up to date, and you have the gnupg, software-
properties-common, and curl packages installed. You will use these
packages to verify HashiCorp's GPG signature, and install HashiCorp's
Debian package repository.
2. Add the HashiCorp GPG key.

3. Add the official HashiCorp Linux repository.


4. Update to add the repository, and install the Terraform CLI.

5. Verify that the installation worked by opening a new terminal session and
listing Terraform's available subcommands.
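On a Debian/Ubuntu machine, the numbered steps above correspond roughly to the following commands (a sketch based on HashiCorp's apt repository instructions; verify the current official steps before running):

$ sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
$ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
$ sudo apt-get update && sudo apt-get install terraform
$ terraform -help    # verify the installation by listing available subcommands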
Conclusion: Successfully understood the Terraform lifecycle and core concepts, and installed Terraform on a Linux machine.

Ramrao Adik Institute of Technology


DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 6

EXPERIMENT TITLE To Build, change, and destroy AWS / GCP /Microsoft


Azure/ DigitalOcean infrastructure Using Terraform.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 16/09/2021

SUBMISSION DATE 23/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak

Experiment No : 6
Aim: To Build, change, and destroy AWS / GCP / Microsoft Azure / DigitalOcean
infrastructure using Terraform.
Theory :
Plan
The terraform plan command evaluates a Terraform configuration to determine the
desired state of all the resources it declares, then compares that desired state to the
real infrastructure objects being managed with the current working directory and
workspace. It uses state data to determine which real objects correspond to which
declared resources, and checks the current state of each resource using the relevant
infrastructure provider's API.
Once it has determined the difference between the current state and the desired
state, terraform plan presents a description of the changes necessary to achieve the
desired state. It does not perform any actual changes to real world infrastructure
objects; it only presents a plan for making changes.
Plans are usually run to validate configuration changes and confirm that the
resulting actions are as expected. However, terraform plan can also save its plan as
a runnable artifact, which terraform apply can use to carry out those exact changes.
Apply
The terraform apply command performs a plan just like terraform plan does, but
then actually carries out the planned changes to each resource using the relevant
infrastructure provider's API. It asks for confirmation from the user before making
any changes, unless it was explicitly told to skip approval.
By default, terraform apply performs a fresh plan right before applying changes,
and displays the plan to the user when asking for confirmation. However, it can also
accept a plan file produced by terraform plan in lieu of running a new plan. You
can use this to reliably perform an exact set of pre-approved changes, even if the
configuration or the state of the real infrastructure has changed in the minutes since
the original plan was created.
Change infrastructure
Infrastructure is continuously evolving, and Terraform helps you manage that
change. As you change Terraform configurations, Terraform builds an execution
plan that only modifies what is necessary to reach your desired state.
Destroy infrastructure
The terraform destroy command destroys all of the resources being managed by the
current working directory and workspace, using state data to determine which real
world objects correspond to managed resources. Like terraform apply, it asks for
confirmation before proceeding.
A destroy behaves exactly like deleting every resource from the configuration and
then running an apply, except that it doesn't require editing the configuration. This
is more convenient if you intend to provision similar resources at a later date.
Steps to build an infrastructure :
1. Configure the AWS CLI from your terminal. Follow the prompts to input your
AWS Access Key ID and Secret Access Key.
$ aws configure

2. Write configuration
The set of files used to describe infrastructure in Terraform is known as a
Terraform configuration. You will write your first configuration to define a
single AWS EC2 instance.
Each Terraform configuration must be in its own working directory. Create a
directory for your configuration.
$ mkdir learn-terraform-aws-instance
3. Change into the directory.
$ cd learn-terraform-aws-instance

4. Create a file to define your infrastructure.


$ touch main.tf
Open main.tf in your text editor, paste in the configuration below, and save the
file.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}

5. Initialize the directory


When you create a new configuration — or check out an existing configuration
from version control — you need to initialize the directory with terraform init.
Initializing a configuration directory downloads and installs the providers
defined in the configuration, which in this case is the aws provider.
$ terraform init

6. Format and validate the configuration


Format your configuration. Terraform will print out the names of the files it
modified, if any. In this case, your configuration file was already formatted
correctly, so Terraform won't return any file names.
$ terraform fmt

Validate your configuration. The example configuration provided above is valid,


so Terraform will return a success message.
$ terraform validate

7. Create infrastructure
Apply the configuration now with the terraform apply command. Terraform will
print output similar to what is shown below
$ terraform apply
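After the apply completes, two optional checks (a sketch; the resource address follows the configuration above):

$ terraform state list    # should list aws_instance.app_server
$ terraform show          # prints the attributes of the created instance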
Steps to change an infrastructure :
1. Configuration
Now update the ami of your instance. Change the aws_instance.app_server
resource under the provider block in main.tf by replacing the current AMI ID
with a new one.
Replace "ami-830c94e3" with "ami-08d70e59c07c61a3a".

2. Apply Changes
After changing the configuration, run terraform apply again to see how
Terraform will apply this change to the existing resources.
$ terraform apply
Steps to destroy an infrastructure :
The terraform destroy command terminates resources managed by your Terraform
project. This command is the inverse of terraform apply in that it terminates all the
resources specified in your Terraform state. It does not destroy resources running
elsewhere that are not managed by the current Terraform project.
$ terraform destroy
Conclusion: Successfully built, changed, and destroyed an AWS infrastructure using Terraform.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 7

EXPERIMENT TITLE To understand Static Analysis SAST process and learn


to integrate Jenkins SAST to SonarQube/GitLab.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 23/09/2021

SUBMISSION DATE 30/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak
Experiment No: 7

Aim: To understand Static Analysis SAST process and learn to integrate Jenkins SAST to
SonarQube.
Theory :
What is SAST?
Static application security testing (SAST), or static analysis, is a testing methodology that
analyzes source code to find security vulnerabilities that make your organization’s applications
susceptible to attack. SAST scans an application before the code is compiled. It’s also known as
white box testing.
What is Jenkins?
Jenkins is an open-source automation tool written in Java with plugins built for Continuous Integration purposes. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
With Jenkins, organizations can accelerate the software development process through automation.
Jenkins integrates development life-cycle processes of all kinds, including build, document, test,
package, stage, deploy, static analysis, and much more.
Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of various DevOps stages. If you want to integrate a particular tool, you need to install the plugins for that tool, for example Git, Maven 2 project, Amazon EC2, HTML publisher, etc.
(Figure: Jenkins integrating the various DevOps stages.)

What is SonarQube?
SonarQube is a Code Quality Assurance tool that collects and analyzes source code, and provides reports on the code quality of your project. It combines static and dynamic analysis tools and enables quality to be measured continually over time. Everything from minor styling choices to design errors is inspected and evaluated by SonarQube. This provides users with a rich, searchable history of the code, so they can analyze where the code is going wrong and determine whether the problem is styling issues, code defects, code duplication, lack of test coverage, or excessively complex code. The software analyzes source code from different aspects and drills down into the code layer by layer, moving from the module level down to the class level, with each level producing metric values and statistics that should reveal problematic areas in the source code that need improvement.
SonarQube also ensures code reliability and application security, and reduces technical debt by keeping your code base clean and maintainable. SonarQube provides support for 27 different languages, including C, C++, Java, JavaScript, PHP, Go, Python, and much more. SonarQube also provides CI/CD integration and gives feedback during code review with branch analysis and pull request decoration.

Steps to install SonarQube on Windows:

Step 1. Download the Community Edition from https://www.sonarqube.org/downloads/
Step 2. Extract the .zip file and navigate to the bin folder.
Step 3. Based on your machine configuration, go to the appropriate folder: for a 32-bit OS move to windows-x86-32, and for a 64-bit OS move to windows-x86-64.
Step 4. Run StartSonar.bat; after a few minutes it will start your SonarQube server.

Step 5. Open a browser and go to http://localhost:9000/ (9000 is the default port); you will be taken to the SonarQube login page.
The default login and password are both admin.

Steps to integrate SonarQube with Jenkins:

Step 1: The first step to integrate the SonarQube installation with the Jenkins DevOps environment is to generate an access token.
Go to Administration > Security > Users > Tokens > Generate a token with some name > Copy the token.
This token will be used in Jenkins for Sonar authentication.
Step 2: Next, configure the SonarQube installation in Jenkins using the generated access token. The steps to follow are:
Go to Manage Jenkins > go to the SonarQube servers section > Add SonarQube > give it a proper name (your own choice) > set the server URL to http://localhost:9000 if the server is running on the same machine (or use the respective server URL if it is installed on a separate server or running on a different port) > click Add > select Secret text > add the generated token as the secret and save.
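Once the server and token are configured, a Jenkins job can run an analysis against a project; the underlying scanner invocation looks roughly like this sketch (the project key is an arbitrary example and the token is the one generated above):

$ sonar-scanner \
    -Dsonar.projectKey=sample-app \
    -Dsonar.sources=. \
    -Dsonar.host.url=http://localhost:9000 \
    -Dsonar.login=<generated-token>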
Conclusion: Hence, we understood the static analysis (SAST) process and successfully integrated Jenkins with SonarQube.

Ramrao Adik Institute of Technology


DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504
EXPERIMENT NO. 8

EXPERIMENT TITLE Create a Jenkins CICD Pipeline with SonarQube /


GitLab Integration to perform a static analysis of the
code to detect bugs, code smells, and security
vulnerabilities on a sample Web / Java / Python
application.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 23/09/2021

SUBMISSION DATE 30/09/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4    PRESENTATION MARKS: 4    UNDERSTANDING: 7    TOTAL: 15

NAME & SIGN. OF FACULTY: Prof. Sujata Oak

Experiment No: 8

Aim: To Create a Jenkins CICD Pipeline with SonarQube / GitLab Integration to perform a static
analysis of the code to detect bugs, code smells, and security vulnerabilities on a sample Web /
Java / Python application.
Theory :
What is SonarQube?
SonarQube is a Code Quality Assurance tool that collects and analyses source code, and provides reports on the code quality of your project. It combines static and dynamic analysis tools and enables quality to be measured continually over time. Everything from minor styling choices to design errors is inspected and evaluated by SonarQube. This provides users with a rich, searchable history of the code, so they can analyse where the code is going wrong and determine whether the problem is styling issues, code defects, code duplication, lack of test coverage, or excessively complex code. The software analyses source code from different aspects and drills down into the code layer by layer, moving from the module level down to the class level, with each level producing metric values and statistics that should reveal problematic areas in the source code that need improvement.
SonarQube also ensures code reliability and application security, and reduces technical debt by making your code base clean and maintainable. SonarQube also provides support for 27 different languages, including C, C++, Java, JavaScript, PHP, GO, Python, and much more. SonarQube also provides CI/CD integration, and gives feedback during code review with branch analysis and pull request decoration.
Why SonarQube Jenkins integration is important?
SonarQube is an open-source tool for continuous inspection of code quality. It performs static
analysis of code, thus detecting bugs, code smells and security vulnerabilities. In addition, it also
can report on the duplicate code, unit tests, code coverage and code complexities for multiple
programming languages. Hence, in order to achieve Continuous Integration with fully automated
code analysis, it is important to integrate SonarQube with CI tools such as Jenkins.
What is the role of Gitlab?
SonarQube's integration with GitLab Self-Managed and GitLab.com allows you to maintain code
quality and security in your GitLab projects.
With this integration, you'll be able to:
 Authenticate with GitLab - Sign in to SonarQube with your GitLab credentials.
 Import your GitLab projects - Import your GitLab Projects into SonarQube to easily set up
SonarQube projects.
 Analyse projects with GitLab CI/CD - Integrate analysis into your build pipeline. Starting
in Developer Edition, SonarScanners running in GitLab CI/CD jobs can automatically
detect branches or merge requests being built so you don't need to specifically pass them
as parameters to the scanner.
 Report your Quality Gate status to your merge requests - (starting in Developer Edition)
See your Quality Gate and code metric results right in GitLab so you know if it's safe to
merge your changes.
What is CI/CD pipeline?
A CI/CD pipeline is a series of steps that must be performed in order to deliver a new version of
software. Continuous integration/continuous delivery (CI/CD) pipelines are a practice focused on
improving software delivery using either a DevOps or site reliability engineering (SRE)
approach.
A CI/CD pipeline introduces monitoring and automation to improve the process of application
development, particularly at the integration and testing phases, as well as during delivery and
deployment. Although it is possible to manually execute each of the steps of a CI/CD pipeline,
the true value of CI/CD pipelines is realized through automation.
Elements of a CI/CD pipeline
The steps that form a CI/CD pipeline are distinct subsets of tasks grouped into what is
known as a pipeline stage (a minimal pipeline sketch follows the list below). Typical pipeline stages include:
 Build - The stage where the application is compiled.
 Test - The stage where code is tested. Automation here can save both time and effort.
 Release - The stage where the application is delivered to the repository.
 Deploy - In this stage code is deployed to production.
 Validation and compliance - The steps to validate a build are determined by the needs of
your organization. Image security scanning tools, like Clair, can ensure the quality of
images by comparing them to known vulnerabilities (CVEs).
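As a sketch of how these stages can be expressed for this experiment, the declarative Jenkins
pipeline below runs a build, a SonarQube analysis and a quality-gate check. The name
'SonarQubeServer' and the Maven commands are assumptions: the server name must match
whatever was entered under Manage Jenkins > SonarQube servers, and a non-Maven project
would use its own build and scanner commands.

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh 'mvn -B clean package' }          // compile and package the sample app
            }
            stage('SonarQube Analysis') {
                steps {
                    withSonarQubeEnv('SonarQubeServer') {    // injects the configured server URL and token
                        sh 'mvn sonar:sonar'
                    }
                }
            }
            stage('Quality Gate') {
                steps {
                    timeout(time: 5, unit: 'MINUTES') {
                        waitForQualityGate abortPipeline: true   // fail the build if the quality gate fails
                    }
                }
            }
        }
    }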

What is bug?
A software bug is an error, flaw or fault in a computer program or system that causes it to
produce an incorrect or unexpected result, or to behave in unintended ways. After a product is
released or during public beta testing, bugs are still apt to be discovered. When this occurs, users
have to either find a way to avoid using the "buggy" code or get a patch from the originators of
the code. Although bugs typically just cause annoying computer glitches, their impact can be
much more serious.
Most bugs arise from mistakes and errors made in either a program's design or its source code, or
in components and operating systems used by such programs. A few are caused by compilers
producing incorrect code. A program that contains many bugs, and/or bugs that seriously interfere
with its functionality, is said to be buggy (defective). Bugs can trigger errors that may have ripple
effects. Bugs may have subtle effects or cause the program to crash or freeze the computer. Other
bugs qualify as security bugs and might, for example, enable a malicious user to bypass access
controls in order to obtain unauthorized privileges.
What are code smells?
Code smells are not bugs or errors. Instead, they are violations of the fundamentals of
developing software that decrease the quality of code. Having code smells does not necessarily
mean that the software won't work; it would still give an output, but it may slow down
processing and increase the risk of failures and errors, while making the program vulnerable to
bugs in the future. Smelly code contributes to poor code quality and hence increases technical
debt. Code smells indicate a deeper problem, but as the name suggests, they are sniffable or
quick to spot. The best smell is something easy to find but which leads to an interesting problem,
like classes with data and no behaviour. Code smells can be easily detected with the help of tools.

Conclusion : Hence, understood why integrating Jenkins with SonarQube is important, and also
what bugs and code smells are.

Ramrao Adik Institute of Technology


DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 9

EXPERIMENT TITLE To Understand Continuous monitoring and Installation


and configuration of Nagios Core, Nagios Plugins and
NRPE (Nagios Remote Plugin Executor) on Linux
Machine. / Splunk

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 30/09/2021

SUBMISSION DATE 07/10/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4 | PRESENTATION: 4 | UNDERSTANDING: 7 | TOTAL MARKS: 15

NAME& SIGN. Prof. Sujata Oak


OF FACULTY

Experiment No: 9
Aim:
To understand Continuous Monitoring and the installation and configuration of
Nagios Core, Nagios Plugins and NRPE (Nagios Remote Plugin Executor) on a
Linux Machine / Splunk.

Theory :
What is continuous monitoring?
Continuous monitoring is a technology and process that IT organizations may
implement to enable rapid detection of compliance issues and security risks within
the IT infrastructure. Continuous monitoring is one of the most important tools
available for enterprise IT organizations, empowering SecOps teams with real-time
information from throughout public and hybrid cloud environments and supporting
critical security processes like threat intelligence, forensics, root cause analysis, and
incident response.
The goal of continuous monitoring and the reason that organizations implement
continuous monitoring software solutions is to increase the visibility and
transparency of network activity, especially suspicious network activity that could
indicate a security breach, and to mitigate the risk of cyber attacks with a timely
alert system that triggers rapid incident response.
What is nagios?
Nagios monitors your entire IT infrastructure to ensure systems, applications,
services, and business processes are functioning properly. In the event of a failure,
Nagios can alert technical staff of the problem, allowing them to begin remediation
processes before outages affect business processes, end-users, or customers. With
Nagios you’ll never be left having to explain why an unseen infrastructure outage
hurt your organization’s bottom line.
What is nagios plugins (NRPE)?
NRPE (Nagios Remote Plugin Executor) is the Nagios daemon that runs checks on
remote machines. It allows you to run Nagios plugins on other machines remotely,
so you can monitor remote machine metrics such as disk usage, CPU load, etc. It
can also check metrics of remote Windows machines through some Windows
agent addons.

What are Nagios Objects, Commands and Notifications?
One of the features of Nagios' object configuration format is that you can create
object definitions that inherit properties from other object definitions. An
explanation of how object inheritance works can be found here. I strongly suggest
that you familiarize yourself with object inheritance once you read over the
documentation presented below, as it will make the job of creating and maintaining
object definitions much easier than it otherwise would be. Also, read up on the
object tricks that offer shortcuts for otherwise tedious configuration tasks.
A command definition is just that. It defines a command. Commands that can be
defined include service checks, service notifications, service event handlers, host
checks, host notifications, and host event handlers. Command definitions can
contain macros, but you must make sure that you include only those macros that are
"valid" for the circumstances when the command will be used. More information
on what macros are available and when they are "valid" can be found here. The
different arguments to a command definition are outlined below.
Definition Format:
define command{
command_name command_name
command_line command_line
}
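For example, a concrete command definition for running remote checks through NRPE could
look roughly like the following (the plugin path is an assumption and may differ on your
installation; $HOSTADDRESS$ and $ARG1$ are standard Nagios macros):

    define command{
        command_name    check_nrpe
        command_line    /usr/lib64/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }

A service definition can then call this command as check_nrpe!check_disk to run the remote
check_disk plugin on the monitored host.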
You can have Nagios notify you of problems and recoveries pretty much any way
you want: pager, cellphone, email, instant message, audio alert, electric shocker,
etc. How notifications are sent depends on the notification commands that are
defined in your object definition files. Specific notification methods (paging, etc.)
are not directly incorporated into the Nagios code as it just doesn't make much
sense. The "core" of Nagios is not designed to be an all-in-one application. If
service checks were embedded in Nagios' core it would be very difficult for users to
add new check methods, modify existing checks, etc. Notifications work in a
similar manner. There are a thousand different ways to do notifications and there
are already a lot of packages out there that handle the dirty work, so why re-invent
the wheel and limit yourself to a bike tire? It's much easier to let an external entity
(i.e. a simple script or a full-blown messaging system) do the messy stuff. Some
messaging packages that can handle notifications for pagers and cellphones are
listed in the Nagios documentation's resource section.
What is server monitoring?
Nagios is recognized as the top solution to monitor servers in a variety of different
ways. Server monitoring is made easy in Nagios because of the flexibility to
monitor your servers with and without agents. With over 3500 different addons
available to monitor your servers, the community at the Nagios Exchange have left
no stone unturned. Nagios is fully capable of monitoring Windows servers, Linux
servers, Unix servers, Solaris, AIX, HP-UX, and Mac OS/X and more.
Monitoring different servers using nagios.
The following servers can be monitored using nagios.
 Windows Monitoring
 Linux Monitoring
 UNIX Monitoring
 AIX Monitoring
 HP-UX Monitoring
 Solaris Monitoring
What is service monitoring?
In the world of IT, service monitoring refers to a system used by hosting providers
to check on servers within a network. The system’s purpose is to ascertain whether
each server is online and working as it should be. It’s a great way to get an
overview of how effectively solutions are operating.
What is port monitoring?
Service port monitoring involves monitoring services running on different ports.
This can include services running on ports such as Telnet, TCP/IP port, etc. It helps
you to monitor TCP ports efficiently and updates the status based on a pre-defined
threshold.
Steps to install NAGIOS server and perform port and service monitoring :
1) Download the EPEL Repository to download and install packages required for
Nagios by the following command.
$ sudo amazon-linux-extras install epel
Enter y and continue, and it will install the EPEL repository
2) Now Install Nagios, nrpe, and Nagios-Plugins by using the below command.
$sudo yum install nagios nrpe nagios-plugins-all
3) Now run the below command to auto-start the Nagios service after a server restart.
$ chkconfig --level 3 nagios on

4) Install and start httpd service and run chkconfig on so as to keep httpd service
running after server restart.
$yum install httpd
$ service httpd start
$ chkconfig httpd on

5) Install php
$ yum install php
6) Now open the contacts.cfg file with the below command to change the contact
information such as contact_name, alias, and email address. Make sure to enter the
email address where you want to receive Nagios alerts.
$ vi /etc/nagios/objects/contacts.cfg
7) Now use the below command to check your Nagios configuration.
$ /usr/sbin/nagios -v /etc/nagios/nagios.cfg
Note : In the output of the above command you should see Total Warnings: 0 and
Total Errors: 0, which means you can now restart the Nagios service. If you get any
error, you need to resolve it first and then restart the Nagios service, else the Nagios
service will not start.
start.
8) Check the Nagios service status and, if it is stopped, start the service.
$ service nagios status
$ service nagios start
$ chkconfig nagios on

Note : Open port 80 in the EC2 instance's security group in AWS so that you can
access Nagios from your browser.

9) Now Open Nagios Monitoring Tool in your browser by entering the following
URL
http://{Public IP of Nagios Server}/nagios
Default Username and Password of Nagios are as follows:
Username: nagiosadmin
Password: nagiosadmin

11) Now you can see the status of Host and its services on the Nagios.
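To actually monitor ports and services, service definitions such as the following can be added
(for example to /etc/nagios/objects/localhost.cfg). This is a minimal sketch assuming the default
'local-service' template and the standard check_http / check_ssh plugin commands shipped with
nagios-plugins:

    define service{
        use                     local-service
        host_name               localhost
        service_description     HTTP (port 80)
        check_command           check_http
        }

    define service{
        use                     local-service
        host_name               localhost
        service_description     SSH (port 22)
        check_command           check_ssh
        }

After editing, re-run /usr/sbin/nagios -v /etc/nagios/nagios.cfg (as in step 7) and restart the
Nagios service so the new services show up on the dashboard.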
Conclusion : Understood Continuous monitoring and installed and configured
Nagios Core, Nagios Plugins and NRPE (Nagios Remote Plugin Executor) on
Linux Machine. Also performed port and service monitoring.

Ramrao Adik Institute of Technology


DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504
EXPERIMENT NO. 10

EXPERIMENT TITLE To understand AWS Lambda, its workflow, various


functions and create your first Lambda functions using
Python / Java / Nodejs.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 07/10/2021

SUBMISSION DATE 14/10/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4 | PRESENTATION: 4 | UNDERSTANDING: 7 | TOTAL MARKS: 15

NAME& SIGN. Prof. Sujata Oak


OF FACULTY

Experiment No: 10

Aim: To understand AWS Lambda, its workflow, various functions and create your first lambda
function using python / Java / Nodejs.

Theory:
What is AWS Lambda?

Lambda is a compute service that lets you run code without provisioning or managing servers.
Lambda runs your code on a high-availability compute infrastructure and performs all of the
administration of the compute resources, including server and operating system maintenance,
capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you
can run code for virtually any type of application or backend service.
You can invoke your Lambda functions using the Lambda API, or Lambda can run your
functions in response to events from other AWS services. For example, you can use Lambda to:
 Build data-processing triggers for AWS services such as Amazon Simple Storage Service
(Amazon S3) and Amazon DynamoDB.
 Process streaming data stored in Amazon Kinesis.
 Create your own backend that operates at AWS scale, performance, and security.
Lambda is a highly available service. 

Lambda features:
The following key features help you develop Lambda applications that are scalable, secure, and
easily extensible:
1. Concurrency and scaling controls:
Concurrency and scaling controls such as concurrency limits and provisioned
concurrency give you fine-grained control over the scaling and responsiveness of your
production applications.
2. Functions defined as container images:
Use your preferred container image tooling, workflows, and dependencies to build, test,
and deploy your Lambda functions.
3. Code signing:
Code signing for Lambda provides trust and integrity controls that let you verify that
only unaltered code that approved developers have published is deployed in your
Lambda functions.
4. Lambda extensions:
You can use Lambda extensions to augment your Lambda functions. For example, use
extensions to more easily integrate Lambda with your favorite tools for monitoring,
observability, security, and governance.
5. Function blueprints:
A function blueprint provides sample code that shows how to use Lambda with other
AWS services or third-party applications. Blueprints include sample code and function
configuration presets for Node.js and Python runtimes.
6. Database access:
A database proxy manages a pool of database connections and relays queries from a
function. This enables a function to reach high concurrency levels without exhausting
database connections.
7. File systems access:
You can configure a function to mount an Amazon Elastic File System (Amazon EFS)
file system to a local directory. With Amazon EFS, your function code can access and
modify shared resources safely and at high concurrency.

AWS Lambda: Creating function

Once you have an account, log in to AWS.


1. Create a Lambda
Once you’re at the console, you can start setting up your function. Click on the services
menu near the upper right-hand side of the page. Then, you’ll see an entry for  Lambda under
the Compute menu. Click the Lambda entry, and AWS will take you to your Lambda
console. Click the Create Function button.

2. Pick a Blueprint
Now it’s time to finish creating your function. You’ll use Python for this function
because you can enter the code right into the console. First, select the Use a Blueprint box in
the center of the Create Function page.

Then, type Hello in the search box. Press enter and AWS will search for blueprints with
Hello in the name. One of them will be hello-world-python. Select this and click  Configure.
3. Configure and Create Your Function
This will take you to a form where you will name your function, select a role, and edit the
Python code.

Enter a name, and leave the default role. The default role allows your lambda to send system
out logs to CloudWatch.
Let’s take a quick look at the Python code included in the blueprint.
import json

print('Loading function')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    return event['key1']  # Echo back the first key value
    # raise Exception('Something went wrong')

AWS will call the lambda_handler function each time an event is triggered. This function
prints the values associated with three JSON fields: “key1,” “key2,” and “key3.”
Click the Create Function button at the bottom of the form.

You’ve created a Lambda function! Now let’s make an edit using the web editor. Let’s make
a simple edit and uncomment the JSON dump on line 7. Scroll down and you’ll see the
editor. The Save button at the top right of the page should go from being grayed-out to
orange. Once you hit the Save button, you should see a banner at the top of the page
indicating that the function was updated. Now it’s time to test it. Fortunately, AWS makes
this very easy.
4. Test Your Lambda Function
Click the Test button that is next to the Save button. AWS will display a form that looks
similar to this:

This test will pass a simple JSON document to your function with the three keys it expects
set to “value1,” “value2,” and “value3.” That’s good enough for a start. Click
the Create button at the bottom.
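For reference, the test event is a small JSON document along these lines (the console's
hello-world template pre-fills an equivalent document):

    {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3"
    }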

AWS saves your test, and you can run it from the function page with the  Test button. This
makes it easy to set up different test cases and run them by name.
Click the test button. AWS will run your test and display a result box.
The test succeeded, and you can see your log result if you click the details disclosure icon.

Conclusion: Hence, we understood AWS Lambda, its workflow and various functions, and
created our first Lambda function using Python.
Ramrao Adik Institute of Technology
DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022

COURSE NAME : Advance DevOps Lab


COURSE CODE ITL504

EXPERIMENT NO. 11

EXPERIMENT TITLE To create a Lambda function which will log “An Image
has been added” once you add an object to a specific
bucket in S3.

NAME OF STUDENT Ayush Premjith


ROLL NO. 19IT2034
CLASS TE - IT
SEMESTER V
GIVEN DATE 14/10/2021

SUBMISSION DATE 21/10/2021

CORRECTION DATE

REMARK

TIMELY SUBMISSION: 4 | PRESENTATION: 4 | UNDERSTANDING: 7 | TOTAL MARKS: 15

NAME& SIGN. Prof. Sujata Oak


OF FACULTY
Experiment No: 11

Aim: To create a Lambda function which will log “An image has been added” once you add an
object to a specific bucket in S3.

Theory:
AWS Lambda is a serverless compute service that runs your code in response to events and
automatically manages the underlying compute resources for you. You can use AWS Lambda to
extend other AWS services with custom logic, or create your own back end services that operate
at AWS scale, performance, and security. AWS Lambda can automatically run code in response
to multiple events, such as HTTP requests via Amazon API Gateway, modifications to objects
in Amazon S3 buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step
Functions.

Lambda runs your code on high-availability compute infrastructure and performs all the
administration of the compute resources, including server and operating system maintenance,
capacity provisioning and automatic scaling, code and security patch deployment, and code
monitoring and logging. All you need to do is supply the code.

What is a Lambda function?

The code you run on AWS Lambda is called a “Lambda function.” After you create your Lambda
function, it is always ready to run as soon as it is triggered, similar to a formula in a spreadsheet.
Each function includes your code as well as some associated configuration information, including
the function name and resource requirements. Lambda functions are “stateless”, with no affinity
to the underlying infrastructure, so that Lambda can rapidly launch as many copies of the function
as needed to scale to the rate of incoming events.

After you upload your code to AWS Lambda, you can associate your function with specific AWS
resources, such as a particular Amazon S3 bucket, Amazon DynamoDB table, Amazon Kinesis
stream, or Amazon SNS notification. Then, when the resource changes, Lambda will execute
your function and manage the compute resources as needed to keep up with incoming requests.

What is Amazon S3 bucket?


An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services'
(AWS) Simple Storage Service (S3), an object storage offering. Amazon S3 buckets, which are
similar to file folders, store objects, which consist of data and its descriptive metadata.
S3 bucket features:
AWS offers several features for Amazon S3 buckets. An IT professional can enable versioning
for S3 buckets to preserve every version of an object when an operation is performed on it, such
as a copy or delete operation. This helps an IT team prevent accidental deletion of an object.
Likewise, upon bucket creation, a user can set up server access logs, object-level API logs, tags
and encryption. Also, S3 Transfer Acceleration helps execute fast, secure transfers from a client
to an S3 bucket via AWS edge locations.

What are triggers?


A trigger is a Lambda resource or a resource in another service that you configure to invoke your
function in response to lifecycle events, external requests, or on a schedule. Your function can
have multiple triggers. Each trigger acts as a client invoking your function independently. Each
event that Lambda passes to your function only has data from one client or trigger.

What are log files?

Log files are automatically created to store a record of all the events from your application.
Almost everything you use creates or adds to a log file. From the operating system your computer
runs to the apps on your phone, they all make log files. They record things that you don't
normally track in your own error messages, like specific database columns that are causing errors.

It keeps track of every event that happens in your application from the minute you start running it
to the second you stop it. Any calls you make to third party APIs or any scripts that run in the
background will have a record here. This is your source for finding everything that happens
behind the scenes of your application.

Why are log files important?

The reason we need logs is because they hold information that can't be found anywhere else.
An error will be recorded in the logs so that only someone with access to the server could see
those kinds of errors. Most of the time this is where you should look when you can't figure out
what's wrong with your code after hours of debugging. The answer might not always be here, but
it will give you another place to go check.
Once you start looking in the log files when you have weird errors, it becomes easier to find
ways to fix them. At the minimum, you will rule out another place to look. A lot of new developers
don't know about logs so it's important that we take the time to teach them so they can learn how
to better research bugs.

Create a bucket and upload a sample object:

Create an Amazon S3 bucket and upload a test file to your new bucket. Your Lambda function
retrieves information about this file when you test the function from the console.

To create an Amazon S3 bucket using the console

1. Open the Amazon S3 console.


2. Choose Create bucket.
3. Under General configuration, do the following:
a. For Bucket name, enter a unique name.
b. For AWS Region, choose a Region. Note that you must create your Lambda
function in the same Region.

4. Choose Create bucket.

After creating the bucket, Amazon S3 opens the Buckets page, which displays a list of all buckets
in your account in the current Region.

To upload a test object using the Amazon S3 console

1. On the Buckets page of the Amazon S3 console, choose the name of the bucket that you
created.
2. On the Objects tab, choose Upload.
3. Drag a test file from your local machine to the Upload page.
4. Choose Upload.

Create the Lambda function

Use a function blueprint to create the Lambda function. A blueprint provides a sample function
that demonstrates how to use Lambda with other AWS services. Also, a blueprint includes sample
code and function configuration presets for a certain runtime.

To create a Lambda function from a blueprint in the console

1. Open the Functions page on the Lambda console.


2. Choose Create function.
3. On the Create function page, choose Use a blueprint.
4. Under Blueprints, enter s3 in the search box.
5. In the search results, do the following:
 For a Python function, choose s3-get-object-python.
6. Choose Configure.
7. Under Basic information, do the following:
 For Function name, enter my-s3-function.
 For Execution role, choose Create a new role from AWS policy templates.
 For Role name, enter my-s3-function-role.

8. Under S3 trigger, choose the S3 bucket that you created previously.


When you configure an S3 trigger using the Lambda console, the console modifies your
function's resource-based policy to allow Amazon S3 to invoke the function.
9. Choose Create function.
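The blueprint's sample code can then be adapted so that the function writes the required log
line. A minimal sketch in Python is shown below; it assumes the standard S3 event structure
that the s3-get-object-python blueprint also uses, and simply logs the message plus the bucket
and object name to CloudWatch:

    import urllib.parse

    def lambda_handler(event, context):
        # Each record describes one object-created event delivered by the S3 trigger.
        for record in event['Records']:
            bucket = record['s3']['bucket']['name']
            key = urllib.parse.unquote_plus(record['s3']['object']['key'], encoding='utf-8')
            print("An Image has been added")   # the log line required by the aim
            print("Object '%s' was uploaded to bucket '%s'" % (key, bucket))
        return "An Image has been added"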

Test in the console:

Invoke the Lambda function manually using sample Amazon S3 event data.

To test the Lambda function using the console

1. On the Code tab, under Code source, choose the arrow next to Test, and then
choose Configure test events from the dropdown list.
2. In the Configure test event window, do the following:
a. Choose Create new test event.
b. For Event template, choose Amazon S3 Put (s3-put).
c. For Event name, enter a name for the test event. For example, mys3testevent.

d. Choose Create.
Test with the S3 trigger

Invoke your function when you upload a file to the Amazon S3 source bucket.

To test the Lambda function using the S3 trigger

1. On the Buckets page of the Amazon S3 console, choose the name of the source bucket
that you created earlier.
2. On the Upload page, upload a few .jpg or .png image files to the bucket.

3. Open the Functions page on the Lambda console.


4. Choose the name of your function (my-s3-function).
5. To verify that the function ran once for each file that you uploaded, choose the Monitor
tab. This page shows graphs for the metrics that Lambda sends to CloudWatch. The count
in the Invocations graph should match the number of files that you uploaded to the
Amazon S3 bucket.

Conclusion: Hence, we created a Lambda function which logged “An image has been added”
once we added an object to a specific bucket in S3.
ASSIGNMENT 1

CASE STUDY: ADVANCED DEVOPS IN INSTAGRAM

Instagram is a fantastic case study for Adv DevOps because


their software-engineering process shows a fundamental
understanding of Adv DevOps thinking and a focus on quality
attributes through automation assisted process. Recall, Adv
DevOps practitioners espouse a driven focus on quality
attributes to meet business needs, leveraging automated
processes to achieve consistency and efficiency. Today we all
know about Instagram and are fascinated by the number of
users using the application on a daily basis. If you aren't, see
the 2021 stats: Instagram has 1.386 billion users all over the
world who upload 95 million photos and videos a day. It is hard
to believe how well Instagram scaled when it started with just
two developers in 2010. Let's take a dive into their story and
try to understand how they learnt their lessons.
In 2010, just before the launch of Instagram, both developers
(founders Mike and Kevin) were wondering how many downloads
they would have on the first day. The number of downloads on
the first day was 25,000. But it didn't stop there. They got
100,000 users in their first week, and all they had as their
infrastructure was a server with less computing power than a
MacBook Pro. So they soon called up their hosting provider asking
for another server, only to learn that it would take around 2-4
days to provide one. Looking at the unpredictable growth of
Instagram in the very first week, they knew that asking for
servers with such a high turnaround time would not work.
This is when they decided to switch to Amazon Web Services
(AWS). With AWS they got the capability to get new servers as
and when load increased, and the perk was that whenever there
was less load they could stop servers and reduce the cost. In a
DevOps organization, leaders must ask:
What can we do to incentivize the organization to achieve the
outcomes we want? How can we change our organization to drive
ever-closer to our goals? To master DevOps and dramatically
improve outcomes in your organization, this is the type of thinking
you must encourage. Then in 2012 came the Android app for
Instagram, and it was the most anticipated one. Over a million new
people joined Instagram in the first 12 hours of the launch – it was
an incredible response. So Instagram was growing and making all the
noise until one day Instagram was DOWN. A quick check showed
that Amazon Web Services was down. All this was because a huge
storm had hit Virginia and half of the Instagram instances had lost
power. The next hours were very tedious, as they had to rebuild the
whole infrastructure almost from scratch, one server at a time. This
was when the team understood how important it was to automate
their infrastructure. Not only was it useful for saving time, but it
also helped them work more effectively as there would be less
manual intervention.

ASSIGNMENT 2
SELF LEARNING: AWS LAMBDA

What is AWS Lambda?


Lambda is a compute service that lets you run code without provisioning or
managing servers. Lambda runs your code on a high-availability compute
infrastructure and performs all of the administration of the compute
resources, including server and operating system maintenance, capacity
provisioning and automatic scaling, code monitoring and logging. With
Lambda, you can run code for virtually any type of application or backend
service. All you need to do is supply your code in one of the languages that
Lambda supports.

You organize your code into Lambda functions. Lambda runs your


function only when needed and scales automatically, from a few
requests per day to thousands per second. You pay only for the
compute time that you consume—there is no charge when your
code is not running.

You can invoke your Lambda functions using the Lambda API, or
Lambda can run your functions in response to events from other
AWS services. For example, you can use Lambda to:
 Build data-processing triggers for AWS services such as Amazon
Simple Storage Service (Amazon S3) and Amazon DynamoDB.
 Process streaming data stored in Amazon Kinesis.
 Create your own backend that operates at AWS scale, performance, and
security.

When should I use Lambda?


Lambda is an ideal compute service for many application
scenarios, as long as you can run your application code using the
Lambda standard runtime environment and within the resources
that Lambda provides.

When using Lambda, you are responsible only for your code.
Lambda manages the compute fleet that offers a balance of
memory, CPU, network, and other resources to run your code.
Because Lambda manages these resources, you cannot log in to
compute instances or customize the operating system on provided runtimes.
Lambda performs operational and administrative activities on
your behalf, including managing capacity, monitoring, and logging
your Lambda functions.

If you need to manage your own compute resources, AWS has other
compute services to meet your needs. For example:

 Amazon Elastic Compute Cloud (Amazon EC2) offers a wide range of EC2
instance types to choose from. It lets you customize operating systems, network
and security settings, and the entire software stack. You are responsible for
provisioning capacity, monitoring fleet health and performance, and using
Availability Zones for fault tolerance.
 AWS Elastic Beanstalk enables you to deploy and scale applications onto Amazon
EC2. You retain ownership and full control over the underlying EC2 instances.

Getting started with Lambda


To get started with Lambda, use the Lambda console to create a function.
In a few minutes, you can create a function, invoke it, and then view logs,
metrics, and trace data.
You can author functions in the Lambda console, or with an IDE toolkit,
command line tools, or the AWS SDKs. The Lambda console provides
a code editor for non-compiled languages that lets you modify and test
code quickly. The AWS Command Line Interface (AWS CLI) gives you
direct access to the Lambda API for advanced configuration and
automation use cases.

Accessing Lambda
You can create, invoke, and manage your Lambda functions using any of
the following interfaces:
AWS Management Console – Provides a web interface for you to access
your functions. For more information, see Lambda console.
AWS Command Line Interface (AWS CLI) – Provides commands for a broad
set of AWS services, including Lambda, and is supported on Windows,
macOS, and Linux (a sample invocation is shown after this list). For more
information, see Using Lambda with the AWS CLI.
AWS SDKs – Provide language-specific APIs and manage many of the
connection details, such as signature calculation, request retry handling,
and error handling. For more information, see AWS SDKs.
AWS CloudFormation – Enables you to create templates that define your
Lambda applications. For more information, see AWS Lambda applications.
AWS CloudFormation also supports the AWS Cloud Development Kit (CDK).
AWS Serverless Application Model (AWS SAM) – Provides templates and a
CLI to configure and manage AWS serverless applications. For more
information, see AWS SAM.
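For instance, a function can be invoked from the AWS CLI with a command along these lines
(the function name and payload are placeholders; the --cli-binary-format flag is needed with
AWS CLI v2 so the JSON payload is sent as-is):

    aws lambda invoke \
      --function-name my-function \
      --cli-binary-format raw-in-base64-out \
      --payload '{"key1": "value1", "key2": "value2", "key3": "value3"}' \
      response.json

The function's return value is written to response.json, and its print output goes to CloudWatch Logs.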

Lambda concepts
Lambda runs instances of your function to process events. You can invoke
your function directly using the Lambda API, or you can configure an AWS
service or resource to invoke your function.
Function
A function is a resource that you can invoke to run your code in Lambda. A
function has code to process the events that you pass into the function or
that other AWS services send to the function.

Trigger
A trigger is a resource or configuration that invokes a Lambda function.
Triggers include AWS services that you can configure to invoke a function
and event source mappings. An event source mapping is a resource in
Lambda that reads items from a stream or queue and invokes a function.
For more information, see Invoking AWS Lambda functions and Using AWS
Lambda with other services.

Event
An event is a JSON-formatted document that contains data for a Lambda
function to process. The runtime converts the event to an object and
passes it to your function code. When you invoke a function, you determine
the structure and contents of the event.

Example service event – Amazon SNS notification

"Records": [

"Sns": {

"Timestamp": "2019-01-02T12:45:07.000Z",

"Signature":
"tcc6faL2yUC6dgZdmrwh1Y4cGa/ebXEkAi6RibDsvpi+tE/1+82j...6
5r==",
"MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",

"Message": "Hello from SNS!",

...

Execution environment
An execution environment provides a secure and isolated runtime environment for your
Lambda function. An execution environment manages the processes and resources that
are required to run the function. The execution environment provides lifecycle support
for the function and for any extensions associated with your function.

Instruction set architecture


The instruction set architecture determines the type of computer processor that Lambda
uses to run the function. Lambda provides a choice of instruction set architectures:

 arm64 – 64-bit ARM architecture, for the AWS Graviton2 processor.


 x86_64 – 64-bit x86 architecture, for x86-based processors.

AWS Lambda applications


An AWS Lambda application is a combination of Lambda functions, event
sources, and other resources that work together to perform tasks. You can
use AWS CloudFormation and other tools to collect your application's
components into a single package that can be deployed and managed as
one resource. Applications make your Lambda projects portable and enable
you to integrate with additional developer tools, such as AWS CodePipeline,
AWS CodeBuild, and the AWS Serverless Application Model command line
interface (SAM CLI).
The AWS Serverless Application Repository provides a collection of
Lambda applications that you can deploy in your account with a few clicks.
The repository includes both ready-to-use applications and samples that
you can use as a starting point for your own projects. You can also submit
your own projects for inclusion.
AWS CloudFormation enables you to create a template that defines your
application's resources and lets you manage the application as a stack. You
can more safely add or modify resources in your application stack. If any
part of an update fails, AWS CloudFormation automatically rolls back to the
previous configuration. With AWS CloudFormation parameters, you can
create multiple environments for your application from the same
template. AWS SAM extends AWS CloudFormation with a simplified syntax
focused on Lambda application development.
The AWS CLI and SAM CLI are command line tools for managing Lambda
application stacks. In addition to commands for managing application
stacks with the AWS CloudFormation API, the AWS CLI supports higher-
level commands that simplify tasks like uploading deployment packages
and updating templates. The AWS SAM CLI provides additional
functionality, including validating templates and testing locally.
When creating an application, you can create its Git repository using either
CodeCommit or an AWS CodeStar connection to GitHub. CodeCommit
enables you to use the IAM console to manage SSH keys and HTTP
credentials for your users. AWS CodeStar connections enables you to
connect to your GitHub account. For more information about connections,
see What are connections? in the Developer Tools console User Guide.

Step 1 :- Login to your AWS Management Console, Search for IAM in


Services and press it
Step 2 :- Search for roles and click on it.

Step 3 :- Click on “Create role” Button, It will show like this

Step 4 :- Select the “Lambda” Use Case Option. And click


Next:Permissions
Step 5 :- Search for “AmazonDynamoDBFullAccess” in search bar
and select it . Click on Next: Tags and add tags (optional) and click
on Review Button

Step 6 :- Give role name and click on “Create Role” button.


Step 7 :- You will see the message as “Role has been created”

Step 8 :- Search for AWS lambda in services , Select AWS Lambda

Step 9 :- Click on “Create function” button.


Step 10 :- Give name to your Lambda Function

Step 11 :- Select python 3.6 for “Runtime” Option

Step 12 :- Click on “Change Default Execution role” and select “use


existing role” and select your previously created role from IAM.
Step 13 :- Click on “Create function” button you will see this
message in green color

Step 14 :- Now go to AWS S3 Service and Create Bucket


Step 15 :- Click on the “Create Bucket” button, then enter your
bucket name, uncheck the “Block all public access” option, leave
all other settings as they are, and click on the “Create Bucket”
button.
Step 16 :- After clicking on Create Bucket Button, You will see
this Message in Green Color

Step 17 :- Now go to the AWS Lambda service, click on the function
name which you have created, and paste the code (a sample sketch of
the code is given below).
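The exact code used in the lab is not reproduced here, but a minimal sketch of what could be
pasted is shown below. It assumes a DynamoDB table named 'my-image-table' whose primary
key attribute is 'filename' (use whatever table name and primary key you create in step 24); the
function logs "An Image has been added" and records the uploaded object in the table:

    import json
    import urllib.parse
    import boto3

    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('my-image-table')   # assumption: replace with your table name

    def lambda_handler(event, context):
        # The S3 trigger delivers one record per uploaded object.
        for record in event['Records']:
            bucket = record['s3']['bucket']['name']
            key = urllib.parse.unquote_plus(record['s3']['object']['key'], encoding='utf-8')
            print("An Image has been added")   # visible in the CloudWatch logs
            # Store the object name in DynamoDB (assumes the primary key is 'filename').
            table.put_item(Item={'filename': key, 'bucket': bucket})
        return {'statusCode': 200, 'body': json.dumps('Upload recorded')}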

Step 18 :- Scroll Up and Now click on “Add trigger” button.

Step 19 :- Now Search for S3 and Select it.


Step 20 :- Add the bucket name (that you created previously),
keep the default settings as they are, acknowledge the “Recursive
invocation” warning message, and then click on the “Add” button.

Step 21 :- Now you will see a message that the trigger has been
added to the function, as you can see in the image below.

Step 22 :- Now go to AWS DynamoDB Service

Step 23 :- Click on “Create table” button.


Step 24 :- Add your “table name and primary key value” and Click
on
“Create” button.

Step 25 :- Now you can see the Dashboard of your Created Table

Step 26 :- Now go to Amazon S3 bucket and select your bucket


which you have created and Upload any file by clicking on orange
upload button , then click on ADD FILE and Click on “Upload”
button

Step 27 :- After Uploading files you will see “Uploaded Successfully”


message in green color.
Step 28 :- Now go to DynamoDB Service and Click on your table
and go to items option.
