ADVANCED DEVOPS
DEPARTMENT OF INFORMATION
TECHNOLOGY
ACADEMIC YEAR: 2021-2022
EXPERIMENT NO. 1
Practical name: Benefits of cloud infrastructure; set up the AWS Cloud9 IDE, launch the AWS Cloud9 IDE and perform a collaboration demonstration.
Aim: To understand the benefits of cloud infrastructure, set up the AWS Cloud9 IDE, launch the AWS Cloud9 IDE and perform a collaboration demonstration.
Theory:
What is Cloud9?
- AWS Cloud9 is an IDE (integrated development environment) that runs in the cloud. It lets you write, run and debug code from a web browser. Cloud9 includes a code editor, a debugger and a terminal.
- Cloud9 comes preconfigured for some of the most widely used programming languages, such as PHP and JavaScript, so you do not need to install anything on your own device before starting a project.
- As a cloud-based IDE, Cloud9 lets you work on your projects from anywhere, whether you are at home, in the office or on-site, as long as you have an internet-connected device. Cloud9 also offers a seamless experience for developing serverless applications.
- In addition to its programming language support, AWS Cloud9 enables a developer to build, edit and debug AWS Lambda functions. The environment comes with preconfigured support for the software development kits (SDKs), libraries and plug-ins that are required to build serverless applications.
- Cloud9 can run on a managed Amazon EC2 instance, or on any SSH-
supported Linux server, whether it's on AWS, in a private data center or in
another cloud.
- A developer can access the Cloud9 terminal from anywhere with an internet
connection, and share code with other members of the development team.
- Cloud9 updates in real time, so a developer can see code entered, edited or deleted by another member of the development team as it happens, and can also chat with other developers in the IDE.
Benefits of Cloud9:
1. Collaborative coding:
- One benefit of using Cloud9 is that it allows collaborative coding with your team. While using Cloud9, it is easy to code and interact with your team. You can share the development environment with a few clicks and pair-program. During the collaboration, each team member can observe the commands run by the other members and, at the same time, chat with them in the IDE.
2. Access to AWS:
- Cloud9 provides a terminal that includes sudo privileges on the Amazon EC2 instance hosting the development environment, together with a preauthenticated AWS Command Line Interface, so you can run commands efficiently and access AWS services easily.
Steps to create an AWS account:
3. Enter your account information, and then choose Continue. Be sure that you enter your account information correctly, especially your email address. If you enter your email address incorrectly, you can't access your account.
4. Choose Personal or Professional.
Note: Personal accounts and professional accounts have the same features
and functions.
5. Enter your company or personal information.
Important: For professional AWS accounts, it's a best practice to enter the
company phone number rather than a personal cell phone. Configuring a root
account with an individual email address or a personal phone number can
make your account insecure.
6. Read and accept the AWS Customer Agreement.
Note: Be sure that you read and understand the terms of the AWS Customer
Agreement.
7. Choose Create Account and Continue.
9. Choose your country or region code from the list. Enter a phone number where you can be reached in the next few minutes. Enter the code displayed in the CAPTCHA, and then submit. In a few moments, an automated system contacts you. Enter the PIN you receive, and then choose Verify code. Select the Free plan; you will then be directed to the AWS page, where you can click Sign in to the Console.
Steps to share an environment using Cloud9:
Step 1: Open AWS, click on Services and go to Cloud9. Click on Create environment, give a name to the environment, click Next step and then Create environment.
Step 2: Your AWS Cloud9 environment will be displayed. (All the collaboration will take place here.)
Step 4: Open another browser and go to the AWS Management Console. Type IAM in the services search box and click on it. Go to Users and choose Add user. Give the username, select Custom password and set it, then click Next: Permissions.
Step 5: Click on Create group, give the group a name and create the group. Click Next: Tags, then Next: Review, and create the user.
Step 6: Go to the Cloud9 environment and select File - New From Template - HTML File. Write your HTML code, select the Share option on the right-hand side, add the user you have created and invite them. A security warning box will appear; select OK.
Step 7: Open an incognito window, log in to AWS, search for Cloud9, click on the menu icon on the left side, click on "Shared with you" and open the IDE. Now the user you created will be able to access the file created by the root user.
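Note: The same sharing flow can also be scripted with the AWS CLI instead of the console. This is only an illustrative sketch; the environment name, environment ID, account ID and username below are placeholders, and depending on your CLI version create-environment-ec2 may also require an --image-id argument:
$ aws cloud9 create-environment-ec2 --name demo-env --instance-type t2.micro
$ aws cloud9 create-environment-membership --environment-id <environment-id> --user-arn arn:aws:iam::<account-id>:user/<username> --permissions read-write
The second command grants the invited IAM user read-write access to the environment, which is what the Share dialog does behind the scenes.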
Conclusion: Hence, we understood the benefits of cloud infrastructure, set up the AWS Cloud9 IDE, launched the AWS Cloud9 IDE and performed a collaboration demonstration.
EXPERIMENT NO. 2
Aim: To build your application using AWS CodeBuild, deploy it on S3 / SEBS using AWS CodePipeline, and deploy a sample application on an EC2 instance using AWS CodeDeploy.
Theory :
What is CodeBuild –
AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles
your source code, runs unit tests, and produces artifacts that are ready to deploy.
CodeBuild eliminates the need to provision, manage, and scale your own build
servers. It provides prepackaged build environments for popular programming
languages and build tools such as Apache Maven, Gradle, and more. You can also
customize build environments in CodeBuild to use your own build tools.
CodeBuild scales automatically to meet peak build requests.
CodeBuild provides these benefits:
Fully managed – CodeBuild eliminates the need to set up, patch, update, and
manage your own build servers.
On demand – CodeBuild scales on demand to meet your build needs. You
pay only for the number of build minutes you consume.
Out of the box – CodeBuild provides preconfigured build environments for
the most popular programming languages. All you need to do is point to your
build script to start your first build.
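For illustration, assuming a CodeBuild project named my-demo-build has already been created (the project name here is only a placeholder), a build can be started and its builds listed from the AWS CLI:
$ aws codebuild start-build --project-name my-demo-build
$ aws codebuild list-builds-for-project --project-name my-demo-build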
What is CodeDeploy –
CodeDeploy is a deployment service that automates application deployments to
Amazon EC2 instances, on-premises instances, serverless Lambda functions, or
Amazon ECS services.
You can deploy a nearly unlimited variety of application content, including:
Code
Steps :
Step 1: Create an S3 bucket for your application
To create an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. In Bucket name, enter a name for your bucket. In Region, choose the Region where you intend to create your pipeline, such as US West (Oregon), and then choose Create bucket.
4. After the bucket is created, a success banner displays. Choose Go to bucket
details.
5. On the Properties tab, choose Versioning. Choose Enable versioning, and
then choose Save.
When versioning is enabled, Amazon S3 saves every version of every object
in the bucket.
6. On the Permissions tab, leave the defaults. For more information about S3
bucket and object permissions, see Specifying Permissions in a Policy.
7. Next, download a sample and save it into a folder or directory on your local
computer.
1. Choose one of the following, depending on the instances you want to deploy to:
- If you want to deploy to Amazon Linux instances using CodeDeploy, download the sample application here: SampleApp_Linux.zip.
- If you want to deploy to Windows Server instances using CodeDeploy (as this tutorial does), download the sample application here: SampleApp_Windows.zip.
2. Download the compressed (zipped) file. Do not unzip the file.
8. In the Amazon S3 console, for your bucket, upload the file:
1. Choose Upload.
2. Drag and drop the file or choose Add files and browse for the file.
3. Choose Upload.
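Note: The same bucket setup and upload can also be done from the AWS CLI; the bucket name below is only a placeholder and must be globally unique:
$ aws s3 mb s3://my-codepipeline-demo-bucket --region us-west-2
$ aws s3api put-bucket-versioning --bucket my-codepipeline-demo-bucket --versioning-configuration Status=Enabled
$ aws s3 cp SampleApp_Windows.zip s3://my-codepipeline-demo-bucket/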
Step 2: Create Amazon EC2 Windows instances and install the CodeDeploy
agent
To create an instance role
1. Open the IAM console at https://console.aws.amazon.com/iam/.
2. From the console dashboard, choose Roles.
3. Choose Create role.
To create a deployment group
3. In Service Role, choose a service role that trusts AWS CodeDeploy with, at minimum, the trust and permissions described in Create a Service Role for CodeDeploy. To get the service role ARN, see Get the Service Role ARN (Console).
4. Under Deployment type, choose In-place.
5. Under Environment configuration, choose Amazon EC2 Instances.
Choose Name in the Key field, and in the Value field,
enter MyCodePipelineDemo.
6. Under Deployment configuration, choose CodeDeployDefault.OneAtATime.
7. Under Load Balancer, clear Enable load balancing. You do not need to set
up a load balancer or choose a target group for this example.
8. In the Advanced section, leave the defaults.
9. Choose Create deployment group.
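For reference, an equivalent deployment group can be created with the AWS CLI. The application name, group name, account ID and service role ARN below are placeholders for the values created in the console steps above:
$ aws deploy create-deployment-group \
    --application-name MyDemoApplication \
    --deployment-group-name MyDemoDeploymentGroup \
    --deployment-config-name CodeDeployDefault.OneAtATime \
    --ec2-tag-filters Key=Name,Value=MyCodePipelineDemo,Type=KEY_AND_VALUE \
    --service-role-arn arn:aws:iam::<account-id>:role/<codedeploy-service-role>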
Conclusion: Hence, we have built our application using AWS CodeBuild and deployed it on S3 / SEBS using AWS CodePipeline. We also deployed a sample application on an EC2 instance using AWS CodeDeploy.
EXPERIMENT NO. 3
Kubernetes Control Plane
Architectural overview of Cluster Nodes
Kubernetes Services
Services are the Kubernetes way of configuring a proxy to forward traffic to a set of pods. Instead of static IP-address-based assignments, Services use selectors (labels) to define which pods belong to which Service. These dynamic assignments make releasing new versions or adding pods to a Service really easy. Any time a Pod with the same labels as a Service is spun up, it is assigned to that Service.
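For example, once the nginx Deployment and Service created later in this experiment exist, you can see which pods a Service has selected via its label selector and which pod IPs it forwards traffic to:
kubernetes-master:~$ kubectl get pods -l app=nginx -o wide
kubernetes-master:~$ kubectl get endpoints nginx
kubernetes-master:~$ kubectl describe service nginx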
Kubernetes Networking
Networking Kubernetes has a distinctive networking model for cluster-wide, podto-
pod networking. In most cases, the Container Network Interface (CNI) uses a
simple overlay network (like Flannel) to obscure the underlying network from the
pod by using traffic encapsulation (like VXLAN); it can also use a fully-routed
solution like Calico. In both cases, pods communicate over a cluster-wide pod
network, managed by a CNI provider like Flannel or Calico. Within a pod,
containers can communicate without any restrictions. Containers within a pod exist
within the same network namespace and share an IP. This means containers can
communicate over localhost. Pods can communicate with each other using the pod
IP address, which is reachable across the cluster. Moving from pods to services, or
from external sources to services, requires going through kube-proxy.
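A quick way to see this in practice, assuming at least two running pods (for example two nginx replicas) and a container image that ships a curl binary:
kubernetes-master:~$ kubectl get pods -o wide
kubernetes-master:~$ kubectl exec -it <pod-name> -- curl http://<other-pod-ip>
The first command shows the cluster-wide IP of each pod; the second sends pod-to-pod traffic over the pod network from inside a container.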
Install Kubernetes
Step 3: Add Kubernetes Signing Key
Since you are downloading Kubernetes from a non-standard repository, it is
essential to ensure that the software is authentic. This is done by adding a signing
key.
1. Enter the following to add a signing key:
on-master&slave$curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
If you get an error that curl is not installed, install it with:
on-master&slave$sudo apt-get install curl
Kubernetes Deployment
Step 6: Begin Kubernetes Deployment
Start by disabling the swap memory on each server:
on-master&slave$sudo swapoff -a
Step 7: Assign Unique Hostname for Each Server Node
Decide which server to set as the master node. Then enter the command:
on-master$sudo hostnamectl set-hostname master-node
Next, set a worker node hostname by entering the following on the worker server:
on-slave$sudo hostnamectl set-hostname worker01
Step 8: Initialize Kubernetes on Master Node
Switch to the master server node, and enter the following:
on-master$sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Once this command finishes, it will display a kubeadm join message at the end.
Make a note of the whole entry. This will be used to join the worker nodes to the
cluster.
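For reference, kubeadm init also prints instructions for copying the admin kubeconfig on the master, and the join command it prints looks roughly like the following. The IP address, token and hash below are placeholders, so always copy the exact command from your own output:
on-master$mkdir -p $HOME/.kube
on-master$sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
on-master$sudo chown $(id -u):$(id -g) $HOME/.kube/config
on-slave$sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>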
Wait a few minutes; then you can check the status of the nodes.
Switch to the master server, and enter:
kubernetes-master:~$ kubectl get nodes
The system should display the worker nodes that you joined to the cluster.
Output
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1d    v1.14.0
worker1   Ready    <none>   1d    v1.14.0
If all of your nodes have the value Ready for STATUS, it means that they’re part of
the cluster and ready to run workloads.
Now that your cluster is verified successfully, let’s schedule an example Nginx
application on the cluster.
Running An Application on the Cluster
You can now deploy any containerized application to your cluster. To keep things
familiar, let’s deploy Nginx using Deployments and Services to see how this
application can be deployed to the cluster. You can use the commands below for
other containerized applications as well, provided you change the Docker image
name and any relevant flags (such as ports and volumes).
Still within the master node, execute the following command to create a
deployment named nginx:
kubernetes-master:~$kubectl create deployment nginx --image=nginx
Services are another type of Kubernetes object that expose cluster internal services
to clients, both internal and external. They are also capable of load balancing
requests to multiple pods, and are an integral component in Kubernetes, frequently
interacting with other components.
Run the following command:
kubernetes-master:~$kubectl get services
From the third line of the above output, you can retrieve the port that Nginx is
running on. Kubernetes will assign a random port that is greater than 30000
automatically, while ensuring that the port is not already bound by another service.
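To confirm that Nginx is reachable, note the NodePort from the services output and request the page from any node's IP address (both values below are placeholders):
kubernetes-master:~$ curl http://<worker-node-ip>:<node-port>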
EXPERIMENT NO. 4
Aim: To install Kubectl and execute Kubectl commands to manage the Kubernetes
cluster and deploy Your First Kubernetes Application.
Theory:
The Kubernetes command-line tool, kubectl, allows you to run commands against
Kubernetes clusters. You can use kubectl to deploy applications, inspect and
manage cluster resources, and view logs.
Pods and Container Introspection Commands
Debugging Commands
Gets logs from a container in a pod: kubectl logs -f <name> [-c <container>]
Quick Commands
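A few commonly used kubectl commands for pod introspection and debugging are shown below; resource and container names are placeholders:
kubernetes-master:~$ kubectl get pods -o wide
kubernetes-master:~$ kubectl describe pod <pod-name>
kubernetes-master:~$ kubectl logs -f <pod-name> -c <container-name>
kubernetes-master:~$ kubectl exec -it <pod-name> -- /bin/sh
kubernetes-master:~$ kubectl get events --sort-by=.metadata.creationTimestamp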
Next, run the following command to create a service named nginx that will expose
the app publicly.
kubernetes-master:~$kubectl expose deploy nginx --port 80 --target-port 80 --type
NodePort
If you want to scale up the replicas for a deployment (nginx in our case), then use the
following command:
kubernetes-master:~$kubectl scale --current-replicas=1 --replicas=2
deployment/nginx
Run the following to ensure that the service has been deleted:
kubernetes-master:~$kubectl get services
kubernetes-slave:~$docker ps
EXPERIMENT NO. 5
Aim: To understand terraform lifecycle, core concepts/terminologies and install it
on a Linux Machine.
Theory :
Lifecycle is a nested block that can appear within a resource block. The lifecycle
block and its contents are meta-arguments, available for all resource blocks
regardless of type.
The following arguments can be used within a lifecycle block:
1. create_before_destroy (bool) - By default, when Terraform must change a
resource argument that cannot be updated in-place due to remote API limitations,
Terraform will instead destroy the existing object and then create a new
replacement object with the new configured arguments.
The create_before_destroy meta-argument changes this behavior so that the new
replacement object is created first, and the prior object is destroyed after the
replacement is created.
This is an opt-in behavior because many remote object types have unique name
requirements or other constraints that must be accommodated for both a new and
an old object to exist concurrently.
2. prevent_destroy (bool) - This meta-argument, when set to true, will cause
Terraform to reject with an error any plan that would destroy the infrastructure
object associated with the resource, as long as the argument remains present in
the configuration.
This can be used as a measure of safety against the accidental replacement of
objects that may be costly to reproduce, such as database instances. However, it
will make certain configuration changes impossible to apply, and will prevent the
use of the terraform destroy command once such objects are created, and so this
option should be used sparingly.
3. ignore_changes (list of attribute names) - By default, Terraform detects any
difference in the current settings of a real infrastructure object and plans to
update the remote object to match configuration.
The ignore_changes feature is intended to be used when a resource is created
with references to data that may change in the future, but should not affect said
resource after its creation. In some rare cases, settings of a remote object are
modified by processes outside of Terraform, which Terraform would then
attempt to "fix" on the next run. In order to make Terraform share management
responsibilities of a single object with a separate process, the ignore_changes
meta-argument specifies resource attributes that Terraform should ignore when
planning updates to the associated remote object.
Core terminologies:
Main commands:
init - Prepare your working directory for other commands
validate - Check whether the configuration is valid
plan - Show changes required by the current configuration
apply - Create or update infrastructure
destroy - Destroy previously-created infrastructure
5. Verify that the installation worked by opening a new terminal session and
listing Terraform's available subcommands.
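For example, the following commands should print the usage help and the installed version (exact output will vary with the version you installed):
$ terraform -help
$ terraform version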
Conclusion: Successfully installed terraform on Linux machine.
EXPERIMENT NO. 6
Aim: To build, change, and destroy AWS / GCP / Microsoft Azure / DigitalOcean infrastructure using Terraform.
Theory :
Plan
The terraform plan command evaluates a Terraform configuration to determine the
desired state of all the resources it declares, then compares that desired state to the
real infrastructure objects being managed with the current working directory and
workspace. It uses state data to determine which real objects correspond to which
declared resources, and checks the current state of each resource using the relevant
infrastructure provider's API.
Once it has determined the difference between the current state and the desired
state, terraform plan presents a description of the changes necessary to achieve the
desired state. It does not perform any actual changes to real world infrastructure
objects; it only presents a plan for making changes.
Plans are usually run to validate configuration changes and confirm that the
resulting actions are as expected. However, terraform plan can also save its plan as
a runnable artifact, which terraform apply can use to carry out those exact changes.
Apply
The terraform apply command performs a plan just like terraform plan does, but
then actually carries out the planned changes to each resource using the relevant
infrastructure provider's API. It asks for confirmation from the user before making
any changes, unless it was explicitly told to skip approval.
By default, terraform apply performs a fresh plan right before applying changes,
and displays the plan to the user when asking for confirmation. However, it can also
accept a plan file produced by terraform plan in lieu of running a new plan. You
can use this to reliably perform an exact set of pre-approved changes, even if the
configuration or the state of the real infrastructure has changed in the minutes since
the original plan was created.
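A typical saved-plan workflow therefore looks like the following; the plan file name tfplan is arbitrary:
$ terraform plan -out=tfplan
$ terraform apply tfplan
The second command applies exactly the changes recorded in the plan file, without computing a new plan.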
Change infrastructure
Infrastructure is continuously evolving, and Terraform helps you manage that
change. As you change Terraform configurations, Terraform builds an execution
plan that only modifies what is necessary to reach your desired state.
Destroy infrastructure
The terraform destroy command destroys all of the resources being managed by the
current working directory and workspace, using state data to determine which real
world objects correspond to managed resources. Like terraform apply, it asks for
confirmation before proceeding.
A destroy behaves exactly like deleting every resource from the configuration and
then running an apply, except that it doesn't require editing the configuration. This
is more convenient if you intend to provision similar resources at a later date.
Steps to build an infrastructure :
1. Configure the AWS CLI from your terminal. Follow the prompts to input your
AWS Access Key ID and Secret Access Key.
$ aws configure
2. Write configuration
The set of files used to describe infrastructure in Terraform is known as a
Terraform configuration. You will write your first configuration to define a
single AWS EC2 instance.
Each Terraform configuration must be in its own working directory. Create a
directory for your configuration.
$ mkdir learn-terraform-aws-instance
3. Change into the directory.
$ cd learn-terraform-aws-instance
provider "aws" {
profile = "default"
region = "us-west-2"
}
tags = {
Name = "ExampleAppServerInstance"
}
}
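Before the first apply, the working directory has to be initialized so that Terraform can download the AWS provider plugin; formatting and validating the configuration is also good practice:
$ terraform init
$ terraform fmt
$ terraform validate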
3. Create infrastructure
Apply the configuration now with the terraform apply command. Terraform will
print output similar to what is shown below
$ terraform apply
Steps to change an infrastructure :
1. Configuration
Now update the ami of your instance. Change the aws_instance.app_server
resource under the provider block in main.tf by replacing the current AMI ID
with a new one.
Replace "ami-830c94e3" with "ami-08d70e59c07c61a3a".
2. Apply Changes
After changing the configuration, run terraform apply again to see how
Terraform will apply this change to the existing resources.
$ terraform apply
Steps to destroy an infrastructure :
The terraform destroy command terminates resources managed by your Terraform
project. This command is the inverse of terraform apply in that it terminates all the
resources specified in your Terraform state. It does not destroy resources running
elsewhere that are not managed by the current Terraform project.
$ terraform destroy
Conclusion : Successfully built, changed and destroyed an AWS infrastructure
using terraform.
EXPERIMENT NO. 7
Aim: To understand Static Analysis SAST process and learn to integrate Jenkins SAST to
SonarQube.
Theory :
What is SAST?
Static application security testing (SAST), or static analysis, is a testing methodology that
analyzes source code to find security vulnerabilities that make your organization’s applications
susceptible to attack. SAST scans an application before the code is compiled. It’s also known as
white box testing.
What is Jenkins?
Jenkins is an open-source automation tool written in Java with plugins built for Continuous
Integration purposes. Jenkins is used to build and test your software projects continuously making
it easier for developers to integrate changes to the project, and making it easier for users to obtain
a fresh build. It also allows you to continuously deliver your software by integrating with a large
number of testing and deployment technologies.
With Jenkins, organizations can accelerate the software development process through automation.
Jenkins integrates development life-cycle processes of all kinds, including build, document, test,
package, stage, deploy, static analysis, and much more.
Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of
Various DevOps stages. If you want to integrate a particular tool, you need to install the plugins
for that tool. For example Git, Maven 2 project, Amazon EC2, HTML publisher etc.
The image below depicts that Jenkins is integrating various DevOps stages:
What is SonarQube?
SonarQube is a Code Quality Assurance tool that collects and analyzes source code, and provides
reports for the code quality of your project. It combines static and dynamic analysis tools and
enables quality to be measured continually over time. Everything from minor styling choices, to
design errors are inspected and evaluated by SonarQube. This provides users with a rich
searchable history of the code to analyze where the code is going wrong and determine whether it is styling issues, code defects, code duplication, lack of test coverage, or excessively complex code. The software analyzes source code from different aspects and drills down into the code layer by layer, moving from the module level down to the class level, with each level producing metric values and statistics that should reveal problematic areas in the source code that need improvement.
SonarQube also ensures code reliability and application security, and reduces technical debt by making your code base clean and maintainable. SonarQube also provides support for 27 different languages, including C, C++, Java, JavaScript, PHP, Go, Python, and much more. SonarQube also provides CI/CD integration, and gives feedback during code review with branch analysis and pull request decoration.
Step 5. Open a browser and go to http://localhost:9000/ (9000 is the default port); you will be navigated to the window shown below.
The default login and password are both admin.
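If the SonarQube server is not already installed, one quick alternative to the installation steps above is to run a local instance in Docker (this assumes Docker is available); a project can then be analysed with the sonar-scanner CLI, where the project key and token below are placeholders:
$ docker run -d --name sonarqube -p 9000:9000 sonarqube:lts
$ sonar-scanner -Dsonar.projectKey=sample-app -Dsonar.host.url=http://localhost:9000 -Dsonar.login=<token>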
Experiment No: 8
Aim: To Create a Jenkins CICD Pipeline with SonarQube / GitLab Integration to perform a static
analysis of the code to detect bugs, code smells, and security vulnerabilities on a sample Web /
Java / Python application.
Theory :
What is SonarQube?
SonarQube is a Code Quality Assurance tool that collects and analyses source code, and provides
reports for the code quality of your project. It combines static and dynamic analysis tools and
enables quality to be measured continually over time. Everything from minor styling choices, to
design errors are inspected and evaluated by SonarQube. This provides users with a rich
searchable history of the code to analyse where the code is going wrong and determine whether it is styling issues, code defects, code duplication, lack of test coverage, or excessively complex code. The software analyses source code from different aspects and drills down into the code layer by layer, moving from the module level down to the class level, with each level producing metric values and statistics that should reveal problematic areas in the source code that need improvement.
SonarQube also ensures code reliability and application security, and reduces technical debt by making your code base clean and maintainable. SonarQube also provides support for 27 different languages, including C, C++, Java, JavaScript, PHP, Go, Python, and much more. SonarQube also provides CI/CD integration, and gives feedback during code review with branch analysis and pull request decoration.
Why SonarQube Jenkins integration is important?
SonarQube is an open-source tool for continuous inspection of code quality. It performs static
analysis of code, thus detecting bugs, code smells and security vulnerabilities. In addition, it also
can report on the duplicate code, unit tests, code coverage and code complexities for multiple
programming languages. Hence, in order to achieve Continuous Integration with fully automated
code analysis, it is important to integrate SonarQube with CI tools such as Jenkins.
What is the role of Gitlab?
SonarQube's integration with GitLab Self-Managed and GitLab.com allows you to maintain code
quality and security in your GitLab projects.
With this integration, you'll be able to:
Authenticate with GitLab - Sign in to SonarQube with your GitLab credentials.
Import your GitLab projects - Import your GitLab Projects into SonarQube to easily set up
SonarQube projects.
Analyse projects with GitLab CI/CD - Integrate analysis into your build pipeline. Starting
in Developer Edition, SonarScanners running in GitLab CI/CD jobs can automatically
detect branches or merge requests being built so you don't need to specifically pass them
as parameters to the scanner.
Report your Quality Gate status to your merge requests - (starting in Developer Edition)
See your Quality Gate and code metric results right in GitLab so you know if it's safe to
merge your changes.
What is CI/CD pipeline?
A CI/CD pipeline is a series of steps that must be performed in order to deliver a new version of
software. Continuous integration/continuous delivery (CI/CD) pipelines are a practice focused on
improving software delivery using either a DevOps or site reliability engineering (SRE)
approach.
A CI/CD pipeline introduces monitoring and automation to improve the process of application
development, particularly at the integration and testing phases, as well as during delivery and
deployment. Although it is possible to manually execute each of the steps of a CI/CD pipeline,
the true value of CI/CD pipelines is realized through automation.
Elements of a CI/CD pipeline
The steps that form a CI/CD pipeline are distinct subsets of tasks grouped into what is
known as a pipeline stage. Typical pipeline stages include:
Build - The stage where the application is compiled.
Test - The stage where code is tested. Automation here can save both time and effort.
Release - The stage where the application is delivered to the repository.
Deploy - In this stage code is deployed to production.
Validation and compliance - The steps to validate a build are determined by the needs of
your organization. Image security scanning tools, like Clair, can ensure the quality of
images by comparing them to known vulnerabilities (CVEs).
What is a bug?
A software bug is an error, flaw or fault in a computer program or system that causes it to
produce an incorrect or unexpected result, or to behave in unintended ways. After a product is
released or during public beta testing, bugs are still apt to be discovered. When this occurs, users
have to either find a way to avoid using the "buggy" code or get a patch from the originators of
the code. Although bugs typically just cause annoying computer glitches, their impact can be
much more serious.
Most bugs arise from mistakes and errors made in either a program's design or its source code, or
in components and operating systems used by such programs. A few are caused by compilers
producing incorrect code. A program that contains many bugs, and/or bugs that seriously interfere
with its functionality, is said to be buggy (defective). Bugs can trigger errors that may have ripple
effects. Bugs may have subtle effects or cause the program to crash or freeze the computer. Other
bugs qualify as security bugs and might, for example, enable a malicious user to bypass access
controls in order to obtain unauthorized privileges.
What are code smells?
Code smells are not bugs or errors. Instead, these are absolute violations of the fundamentals of
developing software that decrease the quality of code. Having code smells does not necessarily mean that the software won't work; it would still give an output, but it may slow down processing and increase the risk of failures and errors, while making the program vulnerable to bugs in the future. Smelly code contributes to poor code quality and hence increases technical debt.
Code smells indicate a deeper problem, but as the name suggests, they are sniffable or quick to
spot. The best smell is something easy to find but will lead to an interesting problem, like classes
with data and no behaviour. Code smells can be easily detected with the help of tools.
EXPERIMENT NO. 9
Aim:
To understand continuous monitoring and perform installation and configuration of Nagios Core, Nagios Plugins and NRPE (Nagios Remote Plugin Executor) on a Linux machine / Splunk.
Theory :
What is continuous monitoring?
Continuous monitoring is a technology and process that IT organizations may
implement to enable rapid detection of compliance issues and security risks within
the IT infrastructure. Continuous monitoring is one of the most important tools
available for enterprise IT organizations, empowering SecOps teams with real-time
information from throughout public and hybrid cloud environments and supporting
critical security processes like threat intelligence, forensics, root cause analysis, and
incident response.
The goal of continuous monitoring and the reason that organizations implement
continuous monitoring software solutions is to increase the visibility and
transparency of network activity, especially suspicious network activity that could
indicate a security breach, and to mitigate the risk of cyber attacks with a timely
alert system that triggers rapid incident response.
What is nagios?
Nagios monitors your entire IT infrastructure to ensure systems, applications,
services, and business processes are functioning properly. In the event of a failure,
Nagios can alert technical staff of the problem, allowing them to begin remediation
processes before outages affect business processes, end-users, or customers. With
Nagios you’ll never be left having to explain why an unseen infrastructure outage
hurt your organization’s bottom line.
What is Nagios Plugins (NRPE)?
The Nagios daemon that runs checks on remote machines is NRPE (Nagios Remote Plugin Executor). It allows you to run Nagios plugins on other machines remotely. You can monitor remote machine metrics such as disk usage, CPU load, etc. It can also check metrics of remote Windows machines through some Windows agent addons.
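Once the NRPE agent is installed on a remote host, connectivity can be verified from the Nagios server with the check_nrpe plugin. The plugin path below is typical for a yum-based install and may differ on your system, and check_load must be defined in the remote host's nrpe.cfg:
$ /usr/lib64/nagios/plugins/check_nrpe -H <remote-host-ip>
$ /usr/lib64/nagios/plugins/check_nrpe -H <remote-host-ip> -c check_load
The first command should print the NRPE version if the agent is reachable, and the second runs a remote check.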
4) Install and start httpd service and run chkconfig on so as to keep httpd service
running after server restart.
$yum install httpd
$ service httpd start
$ chkconfig httpd on
5) Install php
$ yum install php
6) Now open contact.cfg/contacts.cfg file by below command to change the contact
information such as contact_name, alias, and email address. Make sure to enter the
email address where you want to receive Nagios alerts.
$ vi /etc/nagios/objects/contacts.cfg
7) Now use below command to check your nagios configuration.
$ /usr/sbin/nagios -v /etc/nagios/nagios.cfg
Note : In the above screenshot, you can see Total Warnings: 0 and Total Errors: 0
which means you can now restart the Nagios service. If you get any error, you need
to resolve the error first then restart the Nagios service else Nagios service will not
start.
8) Check the Nagios service status, and if it is stopped, start the service:
$ service nagios status
$ service nagios start
$ chkconfig nagios on
Note: Open port 80 for your public IP in the EC2 instance's security group in AWS so that you can access Nagios in your browser.
9) Now Open Nagios Monitoring Tool in your browser by entering the following
URL
http://{Public IP of Nagios Server}/nagios
Default Username and Password of Nagios are as follows:
Username: nagiosadmin
Password: nagiosadmin
11) Now you can see the status of the host and its services in Nagios.
Conclusion : Understood Continuous monitoring and installed and configured
Nagios Core, Nagios Plugins and NRPE (Nagios Remote Plugin Executor) on
Linux Machine. Also performed port and service monitoring.
Experiment No: 10
Aim: To understand AWS Lambda, its workflow, various functions and create your first lambda
function using python / Java / Nodejs.
Theory:
What is AWS Lambda?
Lambda is a compute service that lets you run code without provisioning or managing servers.
Lambda runs your code on a high-availability compute infrastructure and performs all of the
administration of the compute resources, including server and operating system maintenance,
capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you
can run code for virtually any type of application or backend service.
You can invoke your Lambda functions using the Lambda API, or Lambda can run your
functions in response to events from other AWS services. For example, you can use Lambda to:
Build data-processing triggers for AWS services such as Amazon Simple Storage Service
(Amazon S3) and Amazon DynamoDB.
Process streaming data stored in Amazon Kinesis.
Create your own backend that operates at AWS scale, performance, and security.
Lambda is a highly available service.
Lambda features:
The following key features help you develop Lambda applications that are scalable, secure, and
easily extensible:
1. Concurrency and scaling controls:
Concurrency and scaling controls such as concurrency limits and provisioned
concurrency give you fine-grained control over the scaling and responsiveness of your
production applications.
2. Functions defined as container images:
Use your preferred container image tooling, workflows, and dependencies to build, test,
and deploy your Lambda functions.
3. Code signing:
Code signing for Lambda provides trust and integrity controls that let you verify that
only unaltered code that approved developers have published is deployed in your
Lambda functions.
4. Lambda extensions:
You can use Lambda extensions to augment your Lambda functions. For example, use
extensions to more easily integrate Lambda with your favorite tools for monitoring,
observability, security, and governance.
5. Function blueprints:
A function blueprint provides sample code that shows how to use Lambda with other
AWS services or third-party applications. Blueprints include sample code and function
configuration presets for Node.js and Python runtimes.
6. Database access:
A database proxy manages a pool of database connections and relays queries from a
function. This enables a function to reach high concurrency levels without exhausting
database connections.
7. File systems access:
You can configure a function to mount an Amazon Elastic File System (Amazon EFS)
file system to a local directory. With Amazon EFS, your function code can access and
modify shared resources safely and at high concurrency.
2. Pick a Blueprint
Now it's time to finish creating your function. You'll use Python for this function because you can enter the code right into the console. First, select the Use a Blueprint box in the center of the Create Function page.
Then, type Hello in the search box. Press enter and AWS will search for blueprints with
Hello in the name. One of them will be hello-world-python. Select this and click Configure.
3. Configure and Create Your Function
This will take you to a form where you will name your function, select a role, and edit the
Python code.
Enter a name, and leave the default role. The default role allows your lambda to send system
out logs to CloudWatch.
Let’s take a quick look at the Python code included in the blueprint.
import json

print('Loading function')


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    return event['key1']  # Echo back the first key value
    # raise Exception('Something went wrong')
AWS will call the lambda_handler function each time an event is triggered. This function
prints the values associated with three JSON fields: “key1,” “key2,” and “key3.”
Click the Create Function button at the bottom of the form.
You’ve created a Lambda function! Now let’s make an edit using the web editor. Let’s make
a simple edit and uncomment the JSON dump on line 7. Scroll down and you’ll see the
editor. The Save button at the top right of the page should go from being grayed-out to
orange. Once you hit the Save button, you should see a banner at the top of the page
indicating that the function was updated. Now it’s time to test it. Fortunately, AWS makes
this very easy.
4. Test Your Lambda Function
Click the Test button that is next to the Save button. AWS will display a form that looks
similar to this:
This test will pass a simple JSON document to your function with the three keys it expects
set to “value1,” “value2,” and “value3.” That’s good enough for a start. Click
the Create button at the bottom.
AWS saves your test, and you can run it from the function page with the Test button. This
makes it easy to set up different test cases and run them by name.
Click the test button. AWS will run your test and display a result box.
The test succeeded, and you can see your log result if you click the details disclosure icon.
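The function can also be invoked from the AWS CLI; the function name is whatever you chose above, and with AWS CLI v2 the --cli-binary-format flag lets you pass the JSON payload inline:
$ aws lambda invoke --function-name <your-function-name> --cli-binary-format raw-in-base64-out --payload '{"key1":"value1","key2":"value2","key3":"value3"}' response.json
$ cat response.json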
Conclusion: Hence, we understood AWS Lambda, its workflow and various features, and created our first Lambda function using Python.
EXPERIMENT NO. 11
EXPERIMENT TITLE: To create a Lambda function which will log "An Image has been added" once you add an object to a specific bucket in S3.
Aim: To create a Lambda function which will log "An image has been added" once you add an object to a specific bucket in S3.
Theory:
AWS Lambda is a serverless compute service that runs your code in response to events and
automatically manages the underlying compute resources for you. You can use AWS Lambda to
extend other AWS services with custom logic, or create your own back end services that operate
at AWS scale, performance, and security. AWS Lambda can automatically run code in response
to multiple events, such as HTTP requests via Amazon API Gateway, modifications to objects
in Amazon S3 buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step
Functions.
Lambda runs your code on high-availability compute infrastructure and performs all the
administration of the compute resources, including server and operating system maintenance,
capacity provisioning and automatic scaling, code and security patch deployment, and code
monitoring and logging. All you need to do is supply the code.
The code you run on AWS Lambda is called a “Lambda function.” After you create your Lambda
function, it is always ready to run as soon as it is triggered, similar to a formula in a spreadsheet.
Each function includes your code as well as some associated configuration information, including
the function name and resource requirements. Lambda functions are “stateless”, with no affinity
to the underlying infrastructure, so that Lambda can rapidly launch as many copies of the function
as needed to scale to the rate of incoming events.
After you upload your code to AWS Lambda, you can associate your function with specific AWS
resources, such as a particular Amazon S3 bucket, Amazon DynamoDB table, Amazon Kinesis
stream, or Amazon SNS notification. Then, when the resource changes, Lambda will execute
your function and manage the compute resources as needed to keep up with incoming requests.
Log files are automatically created to store a record of all the events from your application.
Almost everything you use creates or adds to a log file. From the operating system your computer
runs to the apps on your phone, they all make log files. They record things that you don't
normally track in your own error messages, like specific database columns that are causing errors.
It keeps track of every event that happens in your application from the minute you start running it
to the second you stop it. Any calls you make to third party APIs or any scripts that run in the
background will have a record here. This is your source for finding everything that happens
behind the scenes of your application.
The reason we need logs is because they hold information that can't be found anywhere else.
An error will be recorded in the logs so that only someone with access to the server could see
those kinds of errors. Most of the time this is where you should look when you can't figure out
what's wrong with your code after hours of debugging. The answer might not always be here, but
it will give you another place to go check.
Once you start looking in the log files when you have weird errors, it becomes easier to find
ways to fix them. At the minimum, you will rule out another place to look. A lot of new developers
don't know about logs so it's important that we take the time to teach them so they can learn how
to better research bugs.
Create an Amazon S3 bucket and upload a test file to your new bucket. Your Lambda function
retrieves information about this file when you test the function from the console.
4. Choose Create bucket.
After creating the bucket, Amazon S3 opens the Buckets page, which displays a list of all buckets
in your account in the current Region.
1. On the Buckets page of the Amazon S3 console, choose the name of the bucket that you
created.
2. On the Objects tab, choose Upload.
3. Drag a test file from your local machine to the Upload page.
4. Choose Upload.
Use a function blueprint to create the Lambda function. A blueprint provides a sample function
that demonstrates how to use Lambda with other AWS services. Also, a blueprint includes sample
code and function configuration presets for a certain runtime.
Invoke the Lambda function manually using sample Amazon S3 event data.
1. On the Code tab, under Code source, choose the arrow next to Test, and then
choose Configure test events from the dropdown list.
2. In the Configure test event window, do the following:
a. Choose Create new test event.
b. For Event template, choose Amazon S3 Put (s3-put).
c. For Event name, enter a name for the test event. For example, mys3testevent.
d. Choose Create.
Test with the S3 trigger
Invoke your function when you upload a file to the Amazon S3 source bucket.
1. On the Buckets page of the Amazon S3 console, choose the name of the source bucket
that you created earlier.
2. On the Upload page, upload a few .jpg or .png image files to the bucket.
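You can also trigger the function and check its log output from the command line; the bucket and function names are placeholders, and aws logs tail requires AWS CLI v2:
$ aws s3 cp test-image.jpg s3://<your-source-bucket>/
$ aws logs tail /aws/lambda/<your-function-name> --since 5m
The log stream should contain the "An image has been added" message written by the function.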
Conclusion: Hence, we created a Lambda function which logged "An image has been added" once we added an object to a specific bucket in S3.
ASSIGNMENT 1
ASSIGNMENT 2
SELF LEARNING: AWS LAMBDA
You can invoke your Lambda functions using the Lambda API, or
Lambda can run your functions in response to events from other
AWS services. For example, you can use Lambda to:
Build data-processing triggers for AWS services such as Amazon
Simple Storage Service (Amazon S3) and Amazon DynamoDB.
Process streaming data stored in Amazon Kinesis.
Create your own backend that operates at AWS scale, performance, and
security.
When using Lambda, you are responsible only for your code.
Lambda manages the compute fleet that offers a balance of
memory, CPU, network, and other resources to run your code.
Because Lambda manages these resources, you cannot log in to
compute instances or customize the operating system on provided runtimes.
Lambda performs operational and administrative activities on
your behalf, including managing capacity, monitoring, and logging
your Lambda functions.
If you need to manage your own compute resources, AWS has other
compute services to meet your needs. For example:
Amazon Elastic Compute Cloud (Amazon EC2) offers a wide range of EC2
instance types to choose from. It lets you customize operating systems, network
and security settings, and the entire software stack. You are responsible for
provisioning capacity, monitoring fleet health and performance, and using
Availability Zones for fault tolerance.
AWS Elastic Beanstalk enables you to deploy and scale applications onto Amazon
EC2. You retain ownership and full control over the underlying EC2 instances.
Accessing Lambda
You can create, invoke, and manage your Lambda functions using any of
the following interfaces:
AWS Management Console – Provides a web interface for you to access
your functions. For more information, see Lambda console.
AWS Command Line Interface (AWS CLI) – Provides commands for a broad
set of AWS services, including Lambda, and is supported on Windows,
macOS, and Linux. For more information, see Using Lambda with the AWS
CLI.
AWS SDKs – Provide language-specific APIs and manage many of the
connection details, such as signature calculation, request retry handling,
and error handling. For more information, see AWS SDKs.
AWS CloudFormation – Enables you to create templates that define your
Lambda applications. For more information, see AWS Lambda applications.
AWS CloudFormation also supports the AWS Cloud Development Kit (CDK).
AWS Serverless Application Model (AWS SAM) – Provides templates and a
CLI to configure and manage AWS serverless applications. For more
information, see AWS SAM.
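For instance, once the AWS CLI is configured, you can list the functions in your account and inspect one of them; the function name below is a placeholder:
$ aws lambda list-functions --max-items 10
$ aws lambda get-function --function-name <your-function-name>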
Lambda concepts
Lambda runs instances of your function to process events. You can invoke
your function directly using the Lambda API, or you can configure an AWS
service or resource to invoke your function.
Function
A function is a resource that you can invoke to run your code in Lambda. A
function has code to process the events that you pass into the function or
that other AWS services send to the function.
Trigger
A trigger is a resource or configuration that invokes a Lambda function.
Triggers include AWS services that you can configure to invoke a function
and event source mappings. An event source mapping is a resource in
Lambda that reads items from a stream or queue and invokes a function.
For more information, see Invoking AWS Lambda functions and Using AWS
Lambda with other services.
Event
An event is a JSON-formatted document that contains data for a Lambda
function to process. The runtime converts the event to an object and
passes it to your function code. When you invoke a function, you determine
the structure and contents of the event.
"Records": [
"Sns": {
"Timestamp": "2019-01-02T12:45:07.000Z",
"Signature":
"tcc6faL2yUC6dgZdmrwh1Y4cGa/ebXEkAi6RibDsvpi+tE/1+82j...6
5r==",
"MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
...
Execution environment
An execution environment provides a secure and isolated runtime environment for your
Lambda function. An execution environment manages the processes and resources that
are required to run the function. The execution environment provides lifecycle support
for the function and for any extensions associated with your function.