Daily Notes DevOps
Vagrant-
A layer on top of VirtualBox; a CLI used for provisioning virtual machines.
Using the URL below, we can download virtual machine base boxes. A listing usually contains
a box name and a user name.
https://app.vagrantup.com/boxes/search
Vagrantfile- The Vagrantfile contains the configuration details that are required to provision
virtual machines, plus the VM provider info. Refer to the snippet below for reference.
Vagrant.configure("2") do |config|
# Box
config.vm.box = "centos/7"
end
After vagrant up and vagrant ssh, you are inside a shell on your VM.
There are a few features that are especially useful for developers, e.g. synced folders and port
forwarding, which can be configured in the Vagrantfile itself. Let's elaborate on those.
a) Synced folder- This syncs a folder on the host drive with a folder in the vagrant machine.
Below is the configuration line.
config.vm.synced_folder "src/", "/srv/website"
b) Port forwarding- using the Vagrantfile
config.vm.network "forwarded_port", guest: 80, host: 80
c) In the Vagrantfile, we can also set a private IP address.
config.vm.network "private_network", ip: "162.18.0.101"
d) Set memory for the virtual machine
config.vm.provider "virtualbox" do |vb|
vb.memory = "2000"
end
e) Set virtual CPUs- (check available CPUs on the host with cat /proc/cpuinfo)
config.vm.provider "virtualbox" do |vb|
vb.cpus = "6"
end
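Putting the pieces together, a full Vagrantfile could look like the sketch below (a hedged combination of the options covered above; the synced-folder paths and IP are the examples from these notes). The shell block writes it out so the content is easy to verify:

```shell
# Write a combined Vagrantfile using the options covered above
# (box, synced folder, port forwarding, private IP, memory, CPUs).
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.synced_folder "src/", "/srv/website"
  config.vm.network "forwarded_port", guest: 80, host: 80
  config.vm.network "private_network", ip: "162.18.0.101"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2000"
    vb.cpus = "6"
  end
end
EOF
cat Vagrantfile
```

Running vagrant up in the same directory would then provision the VM with all of these settings at once.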
--07/08/2020
ssh into CentOS
pwd //present working directory will be displayed
mkdir //creates directory
cd /dir/ //switches directory
ls //list view for particular directory
ls -al // lists hidden files as well
ll //to see permissions on the folder or directory
ctrl + l ==> clear screen shortcut
sudo yum install httpd -y
sudo service httpd status
netstat -tulnp
cd /etc/httpd/conf.d/
cd ../conf
ls
free -th //check total and free RAM on Linux
sudo vi httpd.conf --> this file contains configuration information and ports
Web servers conventionally run on port 80; a secure (HTTPS) server runs on 443
Note- to run the server using vagrant, add port 80 in the location below:
VirtualBox >> Settings >> Network >> Advanced >> Port Forwarding >> add host port and guest port as '80'
Enter "localhost" in a web browser (the server in the VirtualBox VM is then accessed from the host)
Queries-
1. Yesterday I tried to create a folder; the folder got created but I could not see it even as the root
user.
I'm using an AWS Linux instance.
2. I'm using AWS for the Linux server as I could not do vagrant up; it failed with error code 416.
Curious to know why that error occurred.
--10/08/2020
--11/08/2020
Creating users and giving them sudo access
sudo useradd username
sudo passwd username (prompts for a new password)
sudo vi /etc/ssh/sshd_config
Tip- to search for a word in a file opened in vi, type /word
Change PasswordAuthentication to yes in the sshd_config file
To log in to the terminal with the user created:
su - harini
exit //leaves the harini user's session
sudo visudo
add "username ALL=(ALL) NOPASSWD: ALL" below the root user entry
sudo reboot //reboots the server
ssh in with the user created and enter the password
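The manual steps above could be scripted roughly as follows (a hedged sketch: commands are prefixed with echo as a dry run, since useradd and sudoers edits need root; "harini" is the example user from these notes):

```shell
# Dry-run sketch of the user-creation steps; drop the "echo" prefixes
# to run for real as root (sudoers edits normally go through visudo).
user="harini"
echo sudo useradd "$user"
echo sudo passwd "$user"
# The sudoers entry that visudo would receive:
sudoers_line="$user ALL=(ALL) NOPASSWD: ALL"
echo "$sudoers_line"
```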
vim install.sh
sh install.sh //to execute the script
#!/bin/bash
sudo yum update -y
touch index.html
echo "Hi Harini welcome to first class" > index.html
sudo chown -R root:root index.html
sudo yum install httpd -y
sudo service httpd start
sudo cp index.html /var/www/html/
sudo service httpd restart
13/08/2020
Class- Learnt to create a script with which we can automate processes such as installing and
updating packages.
Script name created by me - Installscript.sh
#!/bin/bash
# process (install/uninstall) and package (httpd/all) are expected as arguments
process=$1
package=$2
echo "welcome to installation services"
if [ "$process" == "install" ]
then if [ "$package" == "httpd" ]
then if [ -f /usr/lib/systemd/system/httpd.service ]
then echo "process already exists"
else
sudo yum install httpd -y
sudo service httpd start
fi
elif [ "$package" == "all" ]
then sudo yum update -y
else
echo "invalid package"
fi
elif [ "$process" == "uninstall" ]
then if [ "$package" == "httpd" ]
then if [ -f /usr/lib/systemd/system/httpd.service ]
then
echo "package exists, removing the package"
sudo yum remove httpd -y
else
echo "package unavailable"
fi
elif [ "$package" == "all" ]
then sudo yum update -y
else
echo "invalid package"
fi
else
echo "invalid process"
fi
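The notes don't show where $process and $package come from; one plausible sketch (an assumption, not the original script) is positional arguments, so the script would be invoked as sh Installscript.sh install httpd:

```shell
# Hedged sketch: how Installscript.sh's $process and $package variables
# could be supplied as positional arguments with defaults for the demo
# (this part isn't shown in the original notes).
process=${1:-install}   # e.g. sh Installscript.sh install httpd
package=${2:-httpd}
echo "requested: $process $package"
```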
Task:
Write a script which creates a file and moves it to the directory /var/app/, which is to be created by
another sh script.
Script name created by me - bashcript.sh
#!/bin/bash
# updating the server
sudo yum update -y
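A minimal sketch of the task itself (hedged: /var/app is swapped for a temp directory here so the sketch runs without root; the real scripts would use /var/app):

```shell
# Sketch of the task: one script creates the target directory, the
# other creates a file and moves it there. /var/app is replaced by a
# temp dir for illustration so no root access is needed.
TARGET="$(mktemp -d)/app"
mkdir -p "$TARGET"            # second script's job: create the directory
touch myfile.txt              # first script: create a file
mv myfile.txt "$TARGET"/      # ...and move it into the directory
ls "$TARGET"
```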
14/08/2020
Learnt for and while loops.
Task assigned:
Write a shell script which does the following:
1) Create a directory named with your name, which is provided through user input
2) Create the sub directories dev, qa, stage and prod
3) Using conditional looping, check when the loop variable i is dev, qa, stage or prod; a file
should then be created inside the respective sub directory, with the file named after the sub
directory.
Script name created by me - loops.sh
#!/bin/bash
# read the directory name from user input
read -p "Enter your name: " dir
mkdir "$dir"
chmod 777 "$dir"
cd "$dir"
for i in dev qa stage prod
do
mkdir "$i"
cd "$i"
touch "${i}file1.txt"
cd ..
done
# example input array (assumed here; not shown in the original notes)
arr=(dev QA stage PROD)
i=0
# Loop over the array and count the elements that contain upper-case letters
for var in "${arr[@]}"
do
if [[ $var =~ [A-Z] ]]
then
echo "$var"
# Increment i by 1
((i+=1))
fi
done
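Since the class covered both for and while loops, the same array walk can be sketched with a while loop (index-based, which is the usual while-loop idiom in bash):

```shell
# While-loop counterpart of the for loop above: walk an array by index
# until the index reaches the array length.
arr=(dev qa stage prod)
i=0
while [ $i -lt ${#arr[@]} ]
do
  echo "index $i: ${arr[$i]}"
  ((i+=1))
done
echo "total: $i"
```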
17/08/2020-
GitLab-
#!/bin/bash
After executing the above script, execute the commands below to modify external_url to
"localhost"/the EC2 IP address in the gitlab.rb file.
sudo vim /etc/gitlab/gitlab.rb
sudo gitlab-ctl reconfigure
After the reconfiguration is done, users will be forced to use HTTPS when accessing the
GitLab site only if external_url is set to an https:// address.
18-08-2020-
Create a new project in GitLab.
Clone the project to local machine.
19-08-2020-
Created a new project projecttest-002
Create branches and push the code to new branch.
git clone url
cd projectfolder (shows master branch)
git checkout -b dev (creates a branch and switch to new branch)
//create a file here
git status (new file will appear as untracked file)
git add . (adding the file to branch)
git commit -m "msg"
git push --set-upstream origin dev (this will push the newly created branch to the git
repository with the changes made)
#!/bin/bash
cd "$dir"
Error- could not execute the sh script, but was able to create the branches manually.
What I did- I created the dev branch and committed the changes to it. While I
stayed on the dev branch, I created the qa branch (all the changes in the dev branch were
copied to the qa branch). Then I switched to the master branch and created the other branches,
stage and prod; they contain a copy of master.
Note to self- always create branches from the master branch during practice.
Push branches to git
Switch to master branch
git push --set-upstream origin qa
git push --set-upstream origin stage
git push --set-upstream origin prod
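The branch creation and pushes above could be automated roughly like this (a hedged sketch of what the failed script may have intended; shown as a dry run with echo, to be run for real inside a cloned repo without the echo prefixes):

```shell
# Dry-run sketch: create each branch from master and push it upstream.
# Drop the "echo" to execute for real inside a cloned repository.
branches="dev qa stage prod"
for b in $branches
do
  echo "git checkout -b $b master"
  echo "git push --set-upstream origin $b"
done
```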
20/08/2020-
Create merge requests
Protect the dev, qa, stage and prod branches: Settings >> Repository >> Protected branches; select
the branch, choose Maintainers for merge and No one for push.
git checkout dev
Modify/create a file and try to push; git says access is rejected.
Go to the UI, create a merge request, and select the source and destination branches. Fill in the
necessary details.
As I'm the admin, approve and merge the request.
git pull in git bash (pulls all the modifications to the qa branch on the local machine)
21/08/2020-
http://40.87.97.212/root/real_time-101
25/08/2020
Create a new vagrant machine
Update
Install java
username -admin
password-jenkins
Declarative Pipeline
http://13.233.65.13:8080/env-vars.html/
Ansible is a configuration management and deployment tool. We use playbooks, which are YAML
scripts, for configuring or deploying servers; other such tools are Chef and Puppet.
Ansible installation-
sudo yum install epel-release -y
sudo yum install ansible -y
sudo vi /etc/ansible/hosts
[test]
centos@ipaddress
From gitlab digital lync, clone ansible-playbooks-101 onto the ansible machine
cd ansible-playbooks-101
Create an update.yml file to update the machine (note: YAML keys are case-sensitive and lower case):
- hosts: test
  become: yes
  tasks:
    - name: Updating the machine
      yum:
        name: '*'
        state: latest
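As a sketch, the playbook above can be written out from the shell; it would then be run with ansible-playbook update.yml (not executed here, since that needs a configured inventory):

```shell
# Write the update playbook shown above to a file. Running it is then:
#   ansible-playbook update.yml
cat > update.yml <<'EOF'
- hosts: test
  become: yes
  tasks:
    - name: Updating the machine
      yum:
        name: '*'
        state: latest
EOF
cat update.yml
```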
Issue faced today- vagrant remained in the stopping state and I couldn't perform any
action on the machine.
Fix- cd C:\Program Files\Oracle\VirtualBox
VBoxManage.exe startvm <vm-uuid> --type emergencystop
Then restart the machine using vagrant up
03/09/2020
Ansible Galaxy lets playbooks be reused over and over without rewriting the
code; roles are written and shared on Ansible Galaxy.
08/09/2020
Integrating Jenkins with ansible to execute code on dev server
28/09/2020
Docker installation and running images
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
sudo usermod -aG docker centos
Jenkins image
docker pull jenkins
docker container run -d -p 81:8080 --name=jenkinsimg jenkins
docker exec -it jenkinsimg /bin/bash
29/09/2020
We have 2 modes to run an image: interactive terminal mode and daemon mode.
Using daemon mode, the container keeps running until we terminate or stop it.
Ex- docker container run -d -p 81:8080 --name=jenkinsimg jenkins
Using interactive terminal mode, the session is terminated when we exit the -it session.
Ex- docker container run -it -p 81:80 --name=testcentos centos
docker container run -it -p 80:80 --name=test ubuntu
docker container run -it -p 80:80 --name=testcentos centos
vim index.html
docker build . -t mywebserver:Vr1
docker image ls
docker run -it -p 80:80 mywebserver:Vr4
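The Dockerfile behind the mywebserver build isn't shown in the notes; a plausible minimal version (an assumed nginx-based sketch, matching the nginx Dockerfile that appears later in these notes) can be written out like this and built with docker build . -t mywebserver:Vr1:

```shell
# Assumed minimal Dockerfile for the mywebserver image above (the real
# one isn't shown in the notes); written to disk so it can be inspected.
cat > Dockerfile <<'EOF'
FROM nginx
COPY index.html /usr/share/nginx/html
EOF
echo "hello from mywebserver" > index.html
cat Dockerfile
```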
05/09/2020
1. Create 3 VMs
2. Install docker on all of them
3. docker swarm init on the manager will generate a join token, with which nodes can be added to the
manager. Add port 2377 to the security group before running join on the workers.
docker swarm join --token SWMTKN-1-5g3iv56cya1im4eiwkzuyose9uv00xn5wvveisbxucl0nl7e8p-9xidoh1jobuo68rsvfrp5kphc 172.31.8.173:2377
13/10/2020
Mithun Notes
15-02-2020 7:00 PM
==================
Docker
Containerization platform with which we can package our applications (code) and the
software required to run them, in the form of containers.
Components/Terminology:
=======================
Docker Image
Docker Container
Docker Registry(Repository)
Public --> hub.docker.com (publicly accessible from anywhere)
Private --> accessible only within a private network (Nexus, JFrog)
Dockerfile
ex:
# docker hub: dockerhandson is the username of my repo. Replace dockerhandson with your
user name.
docker build -t dockerhandson/java-web-app:1 .
ex:
#Private Repo
docker login -u <username> -p <password> <URL>
# Docker push
ex:
docker push dockerhandson/java-web-app:1
# Docker run
ex:
docker run -d -p 8080:8080 --name javawebappcontainer dockerhandson/java-web-app:1
16-02-2020 6:00 AM
==================
docker version
docker --version
docker info
Image Commands:
==============
Docker Image: It's a package which has all the required components, like the application
code & software, that are needed to run the application.
Custom Images: We can create our own image using a docker file on top of a base image;
it can contain application code + software.
FROM nginx
COPY index.html /usr/share/nginx/html
Ex:
docker build -t dockerhandson/testnginx .
ex:
docker login -u username -p password
ex:
docker login -u admin -p password 189.81.23.41:8081
ex:
docker push dockerhandson/testnginx
ex:
docker rmi dockerhandson/testnginx:latest
# It will delete all stopped containers, all unused images and unused networks.
docker system prune
How can we move an image from one system to another without a repo?
1) In the source server, save the image to a tar file: docker save -o <FileName>.tar <imageId/Name>
2) Copy the tar file to the destination server (e.g. with scp).
3) In the destination server, execute docker load to load the image from the tar file.
docker load -i <FileName>.tar
docker build
docker push
docker login
docker images or docker image ls
docker images -q
docker rmi <imageId/Name>
docker rmi -f <imageId/Name> <imageId/Name>
docker rmi -f $(docker images -q)
docker image prune
docker system prune
docker image inspect <imageId/Name>
docker inspect <imageId/Name>
docker history <imageId/Name>
docker search <imageName>
docker save
docker load
Container Commands
==================
# Create Container: it will create a container from the image but it will not start the container.
docker container create <imageId/Name>
# Run Container (create + start)
ex:
docker run -d -p 80:80 --name nginxcontainer dockerhandson/testnginx
# Inspect Container
docker container inspect <containerId/Name>
# Start Container
docker start <containerId/Name>
# Stop Container
docker stop <containerId/Name>
# Restart Container
docker restart <containerId/Name>
# Pause / Unpause Container
docker pause <containerId/Name>
docker unpause <containerId/Name>
# Container to system
docker cp <containerName>:/<ContainerFilePath> <SystemFileName>
# System to Container
docker cp <SourceFile/SystemFilePath> <containerName>:/<ContainerFilePath>
# Remove Containers
docker rm <containerId/Name>
docker rm -f <containerId/Name>
# How can we delete containers which are created from a specific image?
Ex:
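One possible answer (a hedged sketch: docker's "ancestor" filter selects containers by image; the image name here is just an example from these notes, and the command is printed as a dry run):

```shell
# Build the command that force-removes every container created from a
# given image, using docker's "ancestor" ps filter (dry run: printed,
# not executed).
img="dockerhandson/testnginx"
cmd="docker rm -f \$(docker ps -a -q --filter ancestor=$img)"
echo "$cmd"
```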
Flipkart(Example)
Features
--> SignUp
--> SignIn
--> CheckOut(AddToCart)
--> Payments
--> Orders
--> MyAccount
Disadvantages:
1) Maintenance
2) Scaling: We will end up scaling the entire application (all features) even though we want to
scale only a few features, since all features are part of the same code base and hence the same
build package.
Flipkart
Features
--> SignUp --> Git Repo --> singup.jar/war --> DockerImage --> Container
--> SignIn --> Git Repo --> signin.jar/war --> DockerImage --> Container
--> CheckOut(AddToCart) --> Git Repo --> checkout.jar/war --> DockerImage --> Container
--> Payments --> Git Repo --> payments.jar/war --> DockerImage --> Container
--> Orders --> Git Repo --> orders.jar/war --> DockerImage --> Container
--> MyAccount --> Git Repo --> myaccount.jar/war --> DockerImage --> Container
Dockerfile Keywords
==================
#Sample Dockerfile
FROM tomcat:openjdk-8
COPY target/*.war /usr/local/tomcat/webapps/java-web-app.war
FROM --> FROM indicates the base image from which you want to create your own image.
ex:
(Only OS no Softwares)
FROM ubuntu
FROM centos
FROM tomcat:openjdk-8 (OS + Java+Tomcat Sofwares)
FROM openjdk:8 (OS + Java8)
FROM nginx (OS + nginx)
FROM node (OS + node software)
FROM python(OS + python)
Note: In one docker file we can have more than one FROM keyword, but 99% of the time a
Dockerfile will use only one FROM <baseimageName>. The base image depends on
what type of application we want to build & run as part of docker.
MAINTAINER --> It's deprecated in recent versions. It's just info about who created/is
maintaining the docker file - the author of the Dockerfile.
ex:
MAINTAINER [email protected]
COPY --> It can copy files from the host/local system (where you are building the image) into the
image while creating the image.
ex:
COPY index.html /usr/share/nginx/html
ADD --> It can copy files from the host/local system into an image, and it can also download
files from remote http(s) locations.
Ex:
ADD http://mirrors.estointernet.in/apache/tomcat/tomcat-8/v8.5.51/bin/apache-tomcat-8.5.51.tar.gz /opt/tomcat
RUN --> RUN indicates a command that needs to be executed on the image. RUN keywords are
processed/executed while creating the image. We can have any number of RUN keywords in a
Dockerfile, and all of them will be executed.
EX:
RUN yum install httpd -y
CMD --> CMD indicates what command has to be executed while creating a
container. We can have more than one CMD in a docker file, but only the
latest/most recent CMD will be executed.
ENTRYPOINT --> ENTRYPOINT also gets executed while creating a container.
22-Feb-2020 7:00 PM
===================
A container should always run the same process (program) with default arguments, but I
should be able to change the arguments dynamically while creating a container.
All three instructions (RUN, CMD and ENTRYPOINT) can be specified in shell form or exec
form. Let’s get familiar with these forms first.
Shell form
<instruction> <command>
Examples:
ENTRYPOINT echo "Hello, $name"
When an instruction is executed in shell form it calls /bin/sh -c <command> under the hood
and normal shell processing happens. For example, with ENV name John, the snippet above in a
Dockerfile will, when the container runs as docker run -it <image>, produce the output
Hello, John
Exec form
<instruction> ["executable", "param1", "param2", ...]
This is the preferred form for CMD and ENTRYPOINT instructions.
Examples:
ENTRYPOINT ["/bin/echo", "Hello, $name"]
When an instruction is executed in exec form it calls the executable directly, and shell processing
does not happen. For example, the snippet above in a Dockerfile produces
Hello, $name
Note that the variable name is not substituted.
Note:
Whether you're using ENTRYPOINT or CMD (or both), the recommendation is to always
use the exec form so that it's obvious which command is running as PID 1 inside your
container.
• You may also run into problems with the shell form if you're building a minimal
image which doesn't even include a shell binary. When Docker is constructing the
command to be run it doesn't check whether the shell is available inside the container -- if
you don't have /bin/sh in your image, the container will simply fail to start.
• The shell form is also a problem if we need to send POSIX signals (SIGKILL, SIGTERM,
SIGSTOP) to the container, since /bin/sh won't forward signals to child processes.
https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/
https://www.ctl.io/developers/blog/post/dockerfile-entrypoint-vs-cmd/
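The substitution difference can be reproduced directly in a shell, mirroring what Docker does under the hood (a sketch: shell form wraps the command in /bin/sh -c, exec form calls the binary directly):

```shell
# Shell form: docker wraps the command in /bin/sh -c, so environment
# variables are substituted by the shell.
export name=John
shell_form=$(/bin/sh -c 'echo "Hello, $name"')
# Exec form: the executable is called directly; no shell processing,
# so the literal string is printed unchanged.
exec_form=$(/bin/echo 'Hello, $name')
echo "$shell_form"   # Hello, John
echo "$exec_form"    # Hello, $name
```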
23-Feb-2020 6:00 AM
===================
EXPOSE
The EXPOSE instruction does not actually publish the port. It functions as a type of
documentation between the person who builds the image and the person who runs the
container, about which ports are intended to be published.
EX:
EXPOSE <port>
EXPOSE 8080
EXPOSE 8081
EXPOSE 3306
WORKDIR --> We can set Working directory using WORKDIR key for image/container.
ex:
WORKDIR /usr/local/tomcat
LABEL --> We can set labels (metadata) for an image using the LABEL keyword.
Ex:
LABEL version="1.0"
VOLUME --> The VOLUME keyword mounts a container file system path onto the docker host file
system. We use volumes to take backups of a container's file system.
EX:
VOLUME <ContianerFolderPath>
VOLUME /var/lib/jenkins
VOLUME /data/db
USER --> It sets the USER for an image/container so that the container runs as the given
USER.
ex:
USER jenkins
USER nexus
ARG --> We define arguments so that we can refer to them anywhere in the Dockerfile, and we
can also pass argument values dynamically while creating an image.
ex:
ARG <key>=<value>
Docker Networks
One container can talk to another container if both containers are in the same docker
network.
# List Networks
docker network ls
1) bridge (default bridge)
2) host
3) none
While creating a container, if we don't mention a docker network name, the container will
be created in the default bridge network. If containers are in the default bridge network,
communication happens only via IP; containers can't access each other by container
name.
bridge
default bridge
custom bridge
none --> The container will be created in the none/null network. The container will not have
an IP and can't be accessed.
# Create Network
Syntax: docker network create -d <driver> <networkName>
Ex:
docker network create -d bridge flipkartnetwork
# Inspect network
docker network inspect <networkNameOrId>
Login to the tomcat container and ping the db container using name & IP to test connectivity.
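A sketch of the whole flow (hedged: container and image names are illustrative, following the Flipkart theme of these notes; commands are built and printed as a dry run):

```shell
# Dry-run sketch: create a custom bridge network and run two containers
# in it so they can reach each other by name. Remove the echo/printf
# indirection to execute for real.
net="flipkartnetwork"
create_cmd="docker network create -d bridge $net"
run_app="docker run -d --name tomcatcontainer --network $net tomcat"
run_db="docker run -d --name dbcontainer --network $net mongo"
printf '%s\n' "$create_cmd" "$run_app" "$run_db"
```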
23-Feb-2020 7:00 PM
==================
Volumes:
=======
1) Create a docker network using the command below (if it's not created already)
2) Create the Spring application container in the above network; it will talk to the mongo
database container
4) Access the Spring application & insert data; it will be inserted into mongo db. Delete and
recreate the mongo container:
whatever you inserted will no longer be available, as once we delete a container the data
in the container is also deleted.
7) Access the Spring application & insert data; it will be inserted into mongo db. Delete and
recreate the mongo container
with the same volume mapping. You can see the data back.
1) Create an IAM user with EC2 Full Access and note the access key & secret key for it.
Replace your access key & secret below.
Docker Compose:
It's a tool for defining and running multi-container applications, configured via a yml file.
Without it, we have to run long docker run commands to deploy multi-container applications.
With Compose
we define all the services (containers) in a compose file, and using that file we can
deploy multi-container applications.
version: "3.1"
services:
  springapp:
    image: dockerhandson/spring-boot-mongo
    ports:
      - 8080:8080
    networks:
      - flipkartbridge
    container_name: springappcontainer
  mongo:
    image: mongo
    container_name: mongo
    networks:
      - flipkartbridge
    volumes:
      - mongobkp:/data/db
volumes:
  mongobkp:
networks:
  flipkartbridge:
    driver: bridge
Example 2: (Volumes & networks will not be created by docker compose, as we set the
volumes and networks as external)
==========
version: "3.1"
services:
  springboot:
    image: dockerhandson/spring-boot-mongo
    container_name: springboot
    ports:
      - 8080:8080
  mongo:
    image: mongo
    container_name: mongo
    volumes:
      - mongobackup:/data/db
volumes:
  mongobackup:
    external: true
networks:
  default:
    external:
      name: springappnetwork
docker-compose config
docker-compose up -d
docker-compose down
Example 3: (Volumes & networks will be created by docker compose. We can pass
environment variables to containers while creating them, if required.)
==========
If using a custom file name, e.g.
docker-compose-wordpress.yml
version: '3.1'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - wordpressnetwork
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    networks:
      - wordpressnetwork
volumes:
  db_data:
networks:
  wordpressnetwork:
    driver: bridge
docker-compose -f docker-compose-wordpress.yml up -d
29-Feb-2020 6:00 AM
#!/bin/bash
sudo apt-get update
sudo apt-get install curl -y
curl -fsSL https://get.docker.com | sudo bash
sudo usermod -aG docker ubuntu
Note: Make Sure You Open Required/All Ports in AWS Security Groups.
======================================================================
# Initialize the docker swarm cluster by executing the command below on the docker server which you
want to make the Manager
======================================================================
docker swarm init
docker run imageName --> creates/deploys one application on a single machine; in a swarm we use
docker service create instead.
# List Services
docker service ls
# List Services process
docker service ps <servicenName>
# Scale Services
docker service scale javawebapp=3
# Stack Deploy: docker stack deploy -c <composeFile> <stackName>
version: '3.1'
services:
  springboot:
    image: dockerhandson/spring-boot-mongo:latest
    restart: always
    container_name: springboot
    ports:
      - 8182:8080
    working_dir: /opt/app
    depends_on:
      - mongo
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
  mongo:
    image: mongo
    container_name: mongo
    # ports: # for demo/debug purpose only
    #   - 27018:27017
    volumes:
      - data:/data/db
      - data-bkp:/data/bkp
    restart: always
volumes:
  data:
  data-bkp:
=================================================================
docker stack ls
version: '3.1'
services:
  springboot:
    image: dockerhandson/spring-boot-mongo:latest
    restart: always
    container_name: springboot
    ports:
      - 8182:8080
    working_dir: /opt/app
    depends_on:
      - mongo
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
  mongo:
    image: mongo
    container_name: mongo
    volumes:
      - data:/data/db
      - data-bkp:/data/bkp
    restart: always
volumes:
  data:
    external: true
  data-bkp:
    external: true
networks:
  default:
    external:
      name: flipkartoverlay
Docker volumes are persistent, as they live separately from the container life cycle.
Using docker volumes, data can be shared across multiple containers; sharing can also be
done between host and container.
When we create a container, a volume needs to be initialized. Volumes are not
garbage collected.
Types of volumes
1. Anonymous volumes-
o Not user friendly
o Hard to maintain
o These volumes are controlled by docker itself (can be accessed
using sudoer permissions)
o Command - docker run -dt -v /path_of_data <image>
o Ex- docker run -dt --name myserver -v /dir1 centos
o docker volume inspect <volume-id> to inspect where the
data volume is located.
2. Named volumes-
o These volumes are controlled by docker itself (can be accessed
using sudoer permissions)
o Easy to maintain, easy to mount on multiple containers, and
user friendly
o Command - docker run -dt -v vol-name:/path_of_data
<image>
o Ex- docker run -dt --name myserver -v dir-vol:/dir1 centos
3. Host volumes/bind mounts-
o Not controlled or managed by docker; these are custom and
must be maintained by the host itself.
o docker container inspect <container name> to inspect the
volumes
Docker Compose - Compose is used for defining and running multi-container applications.
With compose we use YAML files to configure the application's services.
Install docker compose
It is a 3-step process-
1. Define your app's environment with a Dockerfile
2. Define the services that make up your app in docker-compose.yml so they
can be run together in an isolated environment.
3. Run docker-compose up to start the whole app.
version: '3'
services:
  web:
    image: "httpd"
    ports:
      - "8090:80"
  app:
    image: "nginx"
    ports:
      - "9090:80"
  nodeapp:
    image: "ravi2krishna/nodeapp"
    ports:
      - "8080:80"
docker-compose up -d
Kubernetes
Containers help to reduce software running costs; they consume less CPU, RAM etc. However,
containers bring scalability challenges.
A Kubernetes cluster contains a master node and worker nodes.
Kubernetes is an evolution of the docker swarm idea, where users can host 2 applications in the
same cluster. Namespaces are used to differentiate between applications (kubectl get
namespaces). Managed Kubernetes services include EKS and AKS.
Kubernetes contains a lot of objects.
All containers in a pod have the same IP address; the IP address is assigned to the pod.
A DB, webserver, api etc. can be hosted in a single pod.
A container running in a pod is not exposed by default.
Replica set- manages the number of pods that are supposed to be up and running.
Pod- the smallest unit; it contains the container specification.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o
"awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
Kubectl installation
KOPS installation
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s
https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name
| cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
Nana-
Most basic configuration of deployment
kubectl create deployment nginx-depl --image=nginx
A Secret and ConfigMap must be created first, as we will reference them in a deployment file.
Generate the base64-encoded username and password on a linux box using the commands below:
echo -n "username" | base64
echo -n "password" | base64
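For example (a runnable sketch; the literal values are placeholders taken from the Jenkins credentials noted earlier, not real secrets):

```shell
# base64-encode placeholder credentials the way a Kubernetes Secret
# expects them; round-trip with "base64 -d" to verify.
user_b64=$(echo -n "admin" | base64)
pass_b64=$(echo -n "jenkins" | base64)
echo "$user_b64"   # YWRtaW4=
decoded=$(echo "$user_b64" | base64 -d)
echo "$decoded"    # admin
```

The -n flag matters: without it, echo appends a newline that gets encoded too, producing a different (and wrong) Secret value.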
22-oct-2020
Prometheus and Grafana
These are infrastructure monitoring tools.
Below are the installation steps for Prometheus
sudo mkdir -p /etc/prometheus
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries
[Install]
WantedBy=multi-user.target
Install Grafana
wget https://dl.grafana.com/oss/release/grafana-7.2.2-1.x86_64.rpm
sudo yum install grafana-7.2.2-1.x86_64.rpm
sudo service grafana-server start
Sonarqube installation steps:
Github- github.com/SonarSource/sonarqube
Install java
wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-6.7.7.zip
unzip sonarqube-6.7.7.zip
~/sonarqube-6.7.7/bin/linux-x86-64/sonar.sh status
~/sonarqube-6.7.7/bin/linux-x86-64/sonar.sh start
~/sonarqube-6.7.7/bin/linux-x86-64/sonar.sh stop
Sonarqube runs on port 9000
Generate token
Copy the maven sonar command
mvn sonar:sonar \
-Dsonar.host.url=http://13.232.51.121:9000 \
-Dsonar.login=8ed15c7cb3517c7c14944e77a8ed45ac108d0796
Run the command in the project folder; it will check for code leaks and vulnerabilities.
Nexus installation steps:
Binary repos are of 2 types-
1. Snapshots- These are mutable, changing from time to time as code changes are
made. These are basically unstable (development) versions.
Identified by a -SNAPSHOT suffix in the pom.xml <version> directive.
2. Releases- These are stable versions; each version has only a single release, and the
version carries no snapshot suffix.
Use the mvn deploy command to push the built artifacts to the repository.
sudo yum install wget unzip git java-1.8* -y
wget https://download.sonatype.com/nexus/oss/nexus-2.14.20-02-bundle.tar.gz
sudo tar -xvf nexus-2.14.20-02-bundle.tar.gz
~/nexus-2.14.20-02/bin/nexus status
~/nexus-2.14.20-02/bin/nexus start
http://65.1.86.133:8081/nexus/
http://localhost:8081/nexus/
Update pom.xml with the distributionManagement tag. Remove -SNAPSHOT from the version name
for releases.
Add 1.0.0-SNAPSHOT as the version for snapshot builds of jar files.
<distributionManagement>
<snapshotRepository>
<id>snapshots</id>
<url>http://13.126.139.209:8081/nexus/content/repositories/snapshots</url>
</snapshotRepository>
<repository>
<id>releases</id>
<url>http://13.126.139.209:8081/nexus/content/repositories/releases</url>
</repository>
</distributionManagement>
</project>
In Maven's settings.xml, add:
<servers>
<server>
<id>snapshots</id>
<username>admin</username>
<password>admin123</password>
</server>
<server>
<id>releases</id>
<username>admin</username>
<password>admin123</password>
</server>
</servers>
Tomcat webserver setup-
Install tomcat binary
wget https://downloads.apache.org/tomcat/tomcat-8/v8.5.63/bin/apache-tomcat-8.5.63.tar.gz
tar -xvf apache-tomcat-8.5.63.tar.gz
~/apache-tomcat-8.5.63/bin/startup.sh
LAMP stack-
Install a web server and serve an application on the server.
For serving php applications install php; similarly, install java for java applications and python
for python-built applications.
For HA, set up a load balancer and register the running instances on a classic load balancer; wait
till the instances are registered (InService status), then query the browser with the ELB DNS name.
If any of the servers is down, http requests won't be sent to that server.
Issue faced-
Registered 2 instances, each in a different AZ; one of the instances went OutOfService - the
healthy threshold check failed.
Configuring Database-
wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm