
Deployment of a Multicontainer Web App Using Docker and Jenkins on AWS Amazon Linux

This document outlines the steps to deploy a multicontainer web application using Docker and Jenkins on an Amazon Linux instance. It covers SSH access, system updates, Java and Jenkins installation, GitHub integration, and Docker setup, including memory management for optimal performance. The guide concludes with instructions on verifying installations and running Docker commands.

Step 1: SSH Into Your Instance


First things first, let’s SSH into the Amazon Linux instance
where we’re setting up Jenkins. Open up your terminal and
run:
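A minimal example, assuming your key pair file is named my-key.pem and you connect as the default ec2-user account (substitute your own key path and the instance's public IP):

ssh -i my-key.pem ec2-user@your_instance_public_ip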

Step 2: Update the System


It’s always a good practice to ensure that your system is up
to date. Open a terminal window and enter the following
command:

sudo dnf update


Step 3: Install Java
Jenkins is a Java application, so Java is a prerequisite. Install
Java with the following command:

sudo dnf install java-17-amazon-corretto -y

Verify the installation with:

java -version

Step 4: Add the Jenkins Repository


Next, add the Jenkins repository to your system with the
following commands:

sudo wget -O /etc/yum.repos.d/jenkins.repo \
    https://pkg.jenkins.io/redhat-stable/jenkins.repo

Import a key file from Jenkins-CI to enable installation from the package:

sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key


Step 5: Install Jenkins
Now, install Jenkins by running:

sudo dnf install jenkins -y

Step 6: Start and Enable Jenkins


Enable and start the Jenkins service using:

sudo systemctl enable jenkins


sudo systemctl start jenkins
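
To confirm the service came up, you can check its status; it should report active (running):

sudo systemctl status jenkins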

Step 7: Access Jenkins


Now, open your web browser and access Jenkins by
navigating to:

http://your_amazon_linux_instance_ip:8080

You will see a setup wizard and be prompted to enter the
administrator password.

Retrieve the password with the following command:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword


Step 8: Complete the Setup Wizard
Now for the fun part! After accessing Jenkins in your web
browser, you’ll be greeted by the Jenkins setup wizard. Let’s
walk through it together:

1. Install Plugins

2. Create the First Admin User

3. Finish the Setup


Install the suggested plugins, create the first admin user, enter the Jenkins URL, and hit “Save and Finish.” Boom, you’ve set up Jenkins!

Finally, Jenkins is ready. Click the Start using Jenkins button.
3.5 Connect GitHub and Jenkins

Create a new job by clicking New Item. Enter an item name, select Freestyle project, and add a description.

Select GitHub project and add the project URL.

Under Source Code Management, select Git and add the Repository URL. Click Add to add a key.

Generate an SSH key on the console using the following commands:

ssh-keygen
cd .ssh
ls
cat id_rsa.pub

Copy the public key. Go to GitHub and click Settings. Click SSH and GPG keys on the left pane, then click Add SSH key. Paste the public key and click Add SSH key.

Go back to Jenkins and select SSH Username with private key as the credential Kind. On the console, enter cat id_rsa and copy the private key. Paste the private key into the Jenkins wizard. Select ubuntu (this is for the GitHub and Jenkins integration) in Credentials.

Enter */master in Branch Specifier and click Save.
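
Optionally, you can confirm from the instance console that GitHub accepts the key you just added (this assumes the key pair generated above lives in ~/.ssh):

ssh -T git@github.com

GitHub should reply with a greeting that includes your username; it does not open a shell.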
3.6 Get the code into Jenkins

In Jenkins, click Build Now on the left pane. Then click build #1 and select Console Output to view the console.

To check whether the code arrived on the EC2 instance, go to the console and enter the following command:

sudo ls /var/lib/jenkins/workspace/todo-node-app

Install Git (if not already installed) before running the build:

1. Open the instance's command prompt or terminal.
2. Run a system update: before installing Git, it's a good practice to update the existing packages and refresh the system's repository cache. You can do this by running the following command:

sudo yum update

3. Install Git: you don't need to add any third-party repository to get Git on Amazon Linux 2. Simply use the default YUM package manager and you will have it on your cloud VM via the amzn2-core system repo. Here is the command to follow:

sudo yum install git
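
Once the installation finishes, a quick check confirms Git is available:

git --version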

Remount the /tmp directory with increased size:


sudo mount -o remount,size=5G /tmp

You can check if the size of the /tmp directory has been increased by running
the following command:

df -h /tmp

restart Jenkins
sudo systemctl restart jenkins

Install Node.js and the Angular CLI

1. Update the package manager: once connected to the EC2 instance, update the package manager by running the following command:

sudo yum update

2. Install Node.js and npm: install Node.js and npm by running the following command:

sudo yum install nodejs

3. Check the Node.js and npm versions: once the installation is complete, you can check the Node.js and npm versions available on your system by running the following commands:

node -v
npm -v

4. Install the Angular CLI: install the Angular CLI by running the following command:

sudo npm install -g @angular/cli

5. Check the Angular CLI version: once the installation is complete, you can check the Angular CLI version available on your system by running the following command:

ng --version

Run npm install in both the frontend and backend directories (see the sketch below).
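
A minimal sketch of that step, assuming the checked-out repository contains frontend and backend directories inside the Jenkins workspace (adjust the paths to match your project layout):

cd /var/lib/jenkins/workspace/todo-node-app/frontend
npm install    # install frontend dependencies

cd ../backend
npm install    # install backend dependencies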

Install Docker and Docker Compose:

1. Apply pending updates using the yum command:

sudo yum update

2. Search for the Docker package:

sudo yum search docker

3. Get version information:

sudo yum info docker


4. To install Docker, run

sudo yum install docker


5. Add group membership for the default ec2-user so you can
run all docker commands without using the sudo command:

sudo usermod -a -G docker ec2-user

6. Enable docker service at AMI boot time

sudo systemctl enable docker

7. Start the Docker service:

sudo systemctl start docker

8. To verify the docker service status on your AMI instance, run:

sudo systemctl status docker

9. To download and install Compose standalone, run:

sudo curl -SL https://github.com/docker/compose/releases/download/v2.17.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose

10. Apply executable permissions to the standalone binary in the target path for the installation:

sudo chmod +x /usr/local/bin/docker-compose

11. Verify both docker-compose and docker on your AWS Linux AMI:
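
For example:

docker --version
docker-compose --version

Both commands should print a version string. If docker-compose is not found, confirm that /usr/local/bin is on your PATH.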

For a better experience, so your build won't get stuck during docker compose up, increase the available memory by adding swap space:

sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=512


sudo /sbin/mkswap /var/swap.1
sudo chmod 600 /var/swap.1
sudo /sbin/swapon /var/swap.1
echo '/var/swap.1 swap swap defaults 0 0' | sudo tee -a /etc/fstab
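
You can confirm the swap file is active with:

sudo swapon --show
free -h

The 512 MB swap file created above should appear in the output of both commands.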

Swap space in Linux acts as an extension of your physical RAM, providing virtual memory to help maintain system stability and performance. When your RAM is fully utilized, swap space allows processes to continue running by moving inactive or less frequently used memory pages to a designated area on the hard drive.
Here’s what swap space does:
 Prevents Memory-Related Crashes: It helps avoid system crashes or
unresponsiveness when RAM is unavailable.
 Memory Overcommitment: Allows the system to allocate more memory to
processes than physically available.
 Supports Large Programs and Multitasking: Enables running large
applications and multiple tasks simultaneously.
 Improves Stability: By offloading data temporarily, it safeguards critical
processes.
 Flexibility: Users can create, resize, or remove swap files without disk
repartitioning.
 Virtual Memory Management: Manages the allocation of virtual memory
resources.
 Resource Allocation: Helps in distributing memory resources efficiently.

Now run the Docker commands to build and start the application containers.
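
A minimal sketch, assuming the repository in the Jenkins workspace contains a docker-compose.yml at its root (adjust the path and file name to your project):

cd /var/lib/jenkins/workspace/todo-node-app    # workspace path used earlier in this guide
sudo docker-compose up -d --build              # build the images and start the containers in the background
sudo docker ps                                 # verify the containers are running

Because ec2-user was added to the docker group earlier, you can drop sudo after logging out and back in.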
