Terraform End-End Document

The document provides a step-by-step guide for installing Terraform and creating various AWS resources using Terraform scripts. It covers setting up a custom network, creating EC2 instances, S3 buckets with versioning, and using data sources and provisioners. Additionally, it explains how to import existing resources into Terraform and manage configurations effectively.

Aws & devops by veera nareshit

Installation Terraform
DAY1
 Install Terraform on your local system

Step 1: Click the link – https://developer.hashicorp.com/terraform/install


Step 2: Select Windows > 386 > Download

Step 3: Extract all from the download

Step 4: After extracting, copy the full path


C:\Users\Asus\Downloads\terraform_1.7.3_windows_386

Step 5: Click on "Edit environment variables for your account"

Step 6: Click on Path and Edit


Step 7: Click on New > paste the path > OK


Step 8: Open cmd and check the version (terraform -version)


Terraform Codes

DAY 2
 Custom Network

1st block : provider.tf


provider "aws" {
  access_key = "<YOUR_ACCESS_KEY>"   # never commit real credentials
  secret_key = "<YOUR_SECRET_KEY>"
  region     = "us-east-1"
}

2nd block : Main.tf


#create vpc

resource "aws_vpc" "custnw" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "ankit_vpc"
  }
}

#Create Internet Gateway and attach to VPC

resource "aws_internet_gateway" "custnw" {
  vpc_id = aws_vpc.custnw.id
  tags = {
    Name = "Ankit Internet Gateway"
  }
}

#Create subnet & attach to vpc

resource "aws_subnet" "custnw" {
  vpc_id     = aws_vpc.custnw.id
  cidr_block = "10.0.0.0/24"
  tags = {
    Name = "Ankit subnet"
  }
}

#Create RT and attach to vpc

resource "aws_route_table" "custnw" {
  vpc_id = aws_vpc.custnw.id
  tags = {
    Name = "Ankit Rt"
  }

  #associate route table with internet gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.custnw.id
  }
}

#associate route table with subnet

resource "aws_route_table_association" "custnw" {
  route_table_id = aws_route_table.custnw.id
  subnet_id      = aws_subnet.custnw.id
}

#cust security group

resource "aws_security_group" "custnw_sg" {
  name        = "custnw_sg"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_vpc.custnw.id

  ingress {
    description = "TLS from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "TLS from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "TLS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

 Custom EC2 Instance


#Create custom ec2 instance

resource "aws_instance" "custnw" {
  ami                         = var.ami
  instance_type               = var.instance_type
  key_name                    = var.key_name
  subnet_id                   = aws_subnet.custnw.id
  associate_public_ip_address = true
  tags = {
    Name = "CustANKITec2"
  }
}

3rd block : Variable.tf


4th block : Terraform.tfvars
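
The original screenshots of these two blocks are not reproduced here. A minimal sketch consistent with the variables referenced in main.tf (var.ami, var.instance_type, var.key_name) might look like the following; the default values are illustrative, taken from other examples in this document:

```
# variable.tf
variable "ami" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "key_name" {
  type = string
}

# terraform.tfvars (illustrative values)
ami           = "ami-0440d3b780d96b29d"
instance_type = "t2.micro"
key_name      = "redhat"
```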

DAY 3
 S3 BUCKET CREATION WITH VERSIONING

2nd block : Main.tf


#Create S3 Bucket

resource "aws_s3_bucket" "devankit" {
  bucket = "terrabucketcreate"
}

#Enable versioning on the created s3 bucket

resource "aws_s3_bucket_versioning" "versioning_adhvikanand" {
  bucket = aws_s3_bucket.devankit.id

  versioning_configuration {
    status = "Enabled"
  }
}

 OUTPUT BLOCK CODES AND SENSITIVE CONCEPT

2nd block : Main.tf

#Create a fresh EC2 instance and print the public ip, public dns and
#private dns as outputs.
#Don't print the private ip output; mark it sensitive.
resource "aws_instance" "MrSingh" {
  ami           = var.ami
  instance_type = var.instance_type
  key_name      = var.key_name
  tags = {
    Name = "MrSinghec2"
  }
}

#to print the outputs, we write the code in output.tf


5th block : Output.tf
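
The Output.tf screenshot is not reproduced here. A minimal sketch consistent with the section above (print public ip and public dns, hide the private ip with sensitive) could be:

```
output "public_ip" {
  value = aws_instance.MrSingh.public_ip
}

output "public_dns" {
  value = aws_instance.MrSingh.public_dns
}

output "private_ip" {
  value     = aws_instance.MrSingh.private_ip
  sensitive = true  # value is masked in the CLI output
}
```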

DAY 4

 Backend.tf script
#We create one S3 bucket and observe the whole creation process inside
terraform.tfstate.


#terraform.tfstate will no longer appear locally once a backend is configured in
backend.tf. After terraform apply, the state file is still created and still records every
ongoing operation — any creation, deletion, or other change — but instead of being
located locally as above, it is stored in the backend configured in backend.tf.
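
A minimal backend.tf sketch, assuming an S3 backend as used later in this document (the bucket name here is illustrative):

```
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"  # illustrative bucket name
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}
```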

DAY 5
 IMPORT : import a resource into terraform
To make any further changes to an already created ec2 instance, we import (clone) it into
our local state and then control main.tf for further changes to the ec2 instance.

First we create an ec2 instance (in the console); then we create a matching resource block.


Now we will map the ec2 instance id to our local ec2 resource block:
terraform import aws_instance.importec2 i-0e5ffb92c68b388e7

Now we can fill in ami, instance_type and key_name by reference to the state file,
because the state file captured all details of that ec2 while importing it to our local setup.
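
The resource block itself is not reproduced in this capture. A sketch of the stub that the import command above maps onto (attribute values are then copied back from the state file; the ones shown are illustrative):

```
# stub resource block created before running terraform import;
# after the import, fill these attributes from the recorded state file
resource "aws_instance" "importec2" {
  ami           = "ami-0440d3b780d96b29d"  # copied from the state file
  instance_type = "t2.micro"
  key_name      = "redhat"
}
```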


We can refer to the details from the state file and code them into our main resource block.


Now suppose I want to make further changes: I will give it another key pair. Before the
import, the state file recorded redhat as the key_name, meaning the original ec2 had the
redhat key.

As I have now taken full control, let me modify it as I want.

Let me change the key name from redhat (old key_name) to Whitehat (new key_name),
and also tag it with the name "beautiful instance". We can do this because we now own
the instance, having taken control of it through the import command.


DAY 6

 DATA SOURCE
Here we reuse the custom network where we already have a vpc created, and inside the
vpc my subnet, internet gateway and RT are all configured; all of that vpc configuration
is attached to our public ec2 instance placed inside the public subnet.

But here we can create any instance at any time and call the same custom network
configuration where we already have our vpc details.

This can be done with the help of a data source.

Since we already have the custom network configured, we can copy the whole
configuration and paste it into our new folder.

#create vpc
resource "aws_vpc" "custnw" {
cidr_block = "10.0.0.0/16"
tags = {


Name = "ankit_vpc"
}
}
#create Internet Gateway and attach to VPC
resource "aws_internet_gateway" "custnw" {
vpc_id = aws_vpc.custnw.id
tags = {
Name = "Ankit Internet Gateway"
}
}
#create subnet attach to vpc

resource "aws_subnet" "custnw" {


vpc_id = aws_vpc.custnw.id
cidr_block = "10.0.0.0/24"
tags = {
Name = "Ankit subnet"
}
}

#create RT and attach to vpc


resource "aws_route_table" "custnw" {
vpc_id = aws_vpc.custnw.id
tags = {
Name = "Ankit Rt"
}
#associate route table with internetgateway
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.custnw.id
}
}
#associate route table with subnet
resource "aws_route_table_association" "custnw" {
route_table_id = aws_route_table.custnw.id
subnet_id = aws_subnet.custnw.id
}
#cust security group
resource "aws_security_group" "custnw_sg" {
name = "custnw_sg"
description = "Allow TLS inbound traffics"
vpc_id = aws_vpc.custnw.id

ingress {
description = "TLS from VPC"
from_port = 80
to_port = 80


protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "TLS from VPC"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "TLS from VPC"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]

}
}


Now we can create a fresh ec2 instance, and inside it we call only the vpc pieces which
we already took from the old configuration and pasted into our new dir.

Let's create a fresh ec2 instance.

Go to VPC > Subnet ID (from here, copy the subnet id and paste it inside the data source block)


Like above, we passed the value of the subnet into the fresh ec2 via a data source.
In the same way we can pass the Security Group as well:
Go to VPC > Security Group ID (from here, copy the SG id and paste it inside the data source block)
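
The data source blocks themselves appear only as screenshots. A sketch of the pattern described above, with placeholder ids where the console values are pasted (the resource and data source names here are illustrative):

```
# look up the existing subnet and security group by the ids copied from the VPC console
data "aws_subnet" "selected" {
  id = "subnet-xxxxxxxxxxxx"  # paste the real subnet id here
}

data "aws_security_group" "selected" {
  id = "sg-xxxxxxxxxxxx"  # paste the real SG id here
}

resource "aws_instance" "fresh" {
  ami                    = "ami-0440d3b780d96b29d"
  instance_type          = "t2.micro"
  subnet_id              = data.aws_subnet.selected.id
  vpc_security_group_ids = [data.aws_security_group.selected.id]
}
```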


DAY 7
 Provisioners


The sandbox access key and secret key will not work, so try with a personal account:

- Go to IAM, click on the account drop-down > Security credentials, and create both an access key and a secret key.

Copy and paste both into provider.tf

1. LOCAL EXECUTION PROCESS


(key_name : GoldenMrSingh)

#create a key pair: define the name of the key and the path of the local
#public key on the local system

resource "aws_key_pair" "AnkitGoldenKey" {
  key_name   = "GoldenMrSingh"
  public_key = file("~/.ssh/id_rsa.pub")
}


#Create Security group but before that create a VPC so that http port
can be enabled for apache web server

resource "aws_vpc" "Myownvpc" {


cidr_block = "10.0.0.0/16"
tags = {
Name = "MyVPC"
}

}
#create a fresh ec2 instance and pass the key_pair value here in key_name;
#make a connection to connect as ec2-user

resource "aws_instance" "ANKITEC2" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = aws_key_pair.AnkitGoldenKey.key_name

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  #provisioner local-exec runs a command on the local machine; here it
  #creates a new file named Mrsingh in the local working directory
  provisioner "local-exec" {
    command = "touch Mrsingh"
  }
}

2. REMOTE EXECUTION PROCESS


(key_name: SilverMrSingh)

provider "aws" {
  access_key = "<YOUR_ACCESS_KEY>"
  secret_key = "<YOUR_SECRET_KEY>"
  region     = "us-east-1"
}

#create a key pair: define the name of the key and the path of the local
#public key on the local system

resource "aws_key_pair" "AnkitGoldenKey" {
  key_name   = "SilverMrSingh"
  public_key = file("~/.ssh/id_rsa.pub")
}

#Create Security group but before that create a VPC so that http port
can be enabled for apache web server
resource "aws_vpc" "Myownvpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "MyVPC"
}

}
#create a fresh ec2 instance and pass the key_pair value here in key_name;
#make a connection to connect as ec2-user
resource "aws_instance" "ANKITEC2" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = aws_key_pair.AnkitGoldenKey.key_name

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  #remote execution process
  provisioner "remote-exec" {
    inline = [
      "touch AIRTICKET",
      "echo going to bhopal from delhi >> AIRTICKET",
    ]
  }
}


3. FILE PROVISIONER PROCESS


(key_name: BronzeMrSingh)

File provisioner to copy a file from local to the remote EC2 instance

# let me create one file named India and I will send it to ec2 from
# local

#create a key pair: define the name of the key and the path of the local
#public key on the local system


resource "aws_key_pair" "AnkitGoldenKey" {


key_name = "BronzeMrSingh"
public_key = file("~/.ssh/id_rsa.pub")
}

#Create Security group but before that create a VPC so that http port
can be enabled for apache web server
resource "aws_vpc" "Myownvpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "MyVPC"
}

}
#create a fresh ec2 instance and pass the key_pair value here in key_name;
#make a connection to connect as ec2-user
resource "aws_instance" "ANKITEC2" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = aws_key_pair.AnkitGoldenKey.key_name

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  #File provisioner to copy a file from local to the remote EC2 instance
  #let me create one file named India and send it to ec2 from local
  provisioner "file" {
    source      = "India"
    destination = "/home/ec2-user/India"
  }
}


So we have sent our file India to the ec2 instance.


4. USER DATA

We can compare user data with the file provisioner: both can put content onto the
instance, but user data is easier.
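
The user data example appears only as a screenshot. A sketch of the approach, using the same bootstrap content as the remote-exec example above (attribute values are illustrative):

```
resource "aws_instance" "ANKITEC2" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"

  # user_data runs once at first boot, replacing the need for a
  # file/remote-exec provisioner for simple bootstrap tasks
  user_data = <<-EOF
              #!/bin/bash
              echo "going to bhopal from delhi" >> /home/ec2-user/AIRTICKET
              EOF
}
```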


DAY 8
Creation of S3 and Dynamo DB Table
Day1 folder, or the Resource_for_s3_dynamo_table folder
<Creation of s3 bucket and dynamo db>

First you have to create the s3 bucket, then init and apply; once that
completes, create the DynamoDB table and again init and apply.
# create s3 bucket
provider "aws" {
  access_key = "<YOUR_ACCESS_KEY>"
  secret_key = "<YOUR_SECRET_KEY>"
  region     = "us-east-1"
}

resource "aws_s3_bucket" "newbuck" {
  bucket = "sonuxtyuaankit-s3"
}

#create dynamoDB
resource "aws_dynamodb_table" "round" {
name = "aws_dynamodbtable"


hash_key = "LockID"
read_capacity = 20
write_capacity = 20
attribute {
name = "LockID"
type = "S"
}
}

Day2 folder, or the backend_configuration folder

<Creation of backend.tf and ec2 instance>
On the second day, you create backend.tf, then init and apply. After that:
- You will see that terraform.tfstate no longer appears locally; it is
invisible there and gets stored in the backend configured in backend.tf.
Then create one ec2 instance, init, then apply.
- You will see it acquiring a state lock; once the operation is done, the
lock is released. With the help of DynamoDB, terraform.tfstate can
decide which request runs first when two requests arrive at the same
time: it locks the state for the first operation (the ec2 instance), and
only when that completes does the next one proceed.

In backend.tf
terraform {
backend "s3" {
encrypt = true
bucket = "sonuxtyuaankit-s3"
dynamodb_table = "aws_dynamodbtable"
key = "terraform.tfstate"
region = "us-east-1"
}
}


In main.tf
Create ec2 instance
#create ec2
provider "aws" {
  access_key = "<YOUR_ACCESS_KEY>"
  secret_key = "<YOUR_SECRET_KEY>"
  region     = "us-east-1"
}

resource "aws_instance" "AWSCOWORK" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = "redhat"
}

Terraform init


DAY 9

WORKSPACE :

As we can see, the workspace is default, where we have created one S3 bucket
named defaultworkspacebuckzez.


Now suppose we want to work in the same directory but in another workspace, and
want to change something in the existing bucket.
Let's create a new workspace named khwabsa for the same existing bucket.
 Create New Workspace


Our workspace is now switched from default to khwabsa. Now let's modify something
on the existing bucket in the same directory.


We have again created a new workspace, ambu.

Now if we make any changes to the existing bucket, the changes will be applied
inside my current workspace — the newly created ambu — inside terraform.tfstate.d.


terraform.tfstate.d: whenever we create a new workspace, a terraform.tfstate.d
directory is generated where the new workspaces' state is stored, alongside the
default workspace that was generated previously with terraform.tfstate.
 Switch to different workspace from existing Workspace

 Delete created Workspace
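
The workspace commands themselves appear only as screenshots. The steps above correspond to the standard Terraform workspace CLI (workspace names match this section's examples):

```
terraform workspace list       # lists workspaces; * marks the current one
terraform workspace new khwabsa      # create and switch to a new workspace
terraform workspace select ambu      # switch to a different workspace
terraform workspace select default
terraform workspace delete khwabsa   # must not be the currently selected workspace
```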

depends_on:
depends_on means that if s3 depends on ec2, then ec2 will be created
first and then s3.
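
A minimal sketch of that dependency (resource names and the bucket name are illustrative):

```
resource "aws_instance" "ec2first" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
}

resource "aws_s3_bucket" "s3second" {
  bucket     = "bucket-created-after-ec2"   # illustrative name
  depends_on = [aws_instance.ec2first]     # ec2 is created first, then s3
}
```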


Lifecycle:
Usually, when we modify an existing instance in a way that forces replacement,
Terraform destroys the previous instance and then applies the latest changes.
But with create_before_destroy, with the help of the lifecycle argument, we
can bypass the destroy-first step: Terraform first creates a fresh instance with
the new change alongside the old instance, and only then removes the old one.
provider "aws" {
  access_key = "<YOUR_ACCESS_KEY>"
  secret_key = "<YOUR_SECRET_KEY>"
  region     = "us-east-1"
}
#previous ec2 instance which we already created at the time of
#depends_on
#resource "aws_instance" "saradin" {
#ami = "ami-0440d3b780d96b29d"
#instance_type = "t2.micro"
#key_name = "abc"
#tags = {
#Name = "sanamreyec2"
#}

#}

#If we change the key name (previously abc, now xyz), the new instance
#is created before anything gets destroyed
resource "aws_instance" "saradin" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = "xyz"
  tags = {
    Name = "create_before_destroy"
  }
  lifecycle {
    create_before_destroy = true #creates the replacement ec2 with the
    #latest keypair first, then terminates the existing one with the old
    #keyname
  }
}


prevent_destroy: when it is set to false, Terraform behaves as usual —
destroying the old change first and then creating the new change on the
existing instance; when set to true, destroying the resource is prevented.


ignore_changes = [tags]: this lifecycle parameter tells Terraform to ignore
differences in the tags attribute. Meaning:
First, we give a tag to an instance manually, named manualtag.
Second, we run the terraform script (with ignore_changes = [tags]); on the
next apply, Terraform does not treat the manual tag as drift to fix, whereas
without ignore_changes the apply would set the tags back to whatever the
script declares.
resource "aws_instance" "saradin" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = "abc"
  tags = {
    Name = "create_before_destroy"
  }

  #lifecycle {
  #  create_before_destroy = true #this would create the same ec2 with the
  #  latest keypair first and terminate the existing one with the old keyname
  #}

  #lifecycle {
  #  prevent_destroy = true #means destroy stays prevented
  #  until it is set back to false
  #}

  lifecycle {
    ignore_changes = [ tags ]
    #if anyone changes the tags manually, terraform ignores the
    #difference and does not treat it as drift on the next apply
  }
}

See: the manually added tag was terminated and the old one came back
(that is the default behaviour without ignore_changes; with
ignore_changes = [tags] in place, Terraform leaves the manual tag alone).


DAY 10
Modules
It's a concept where a template can be called from anywhere to create an
instance.


 First create a directory /modules

 Create one ec2 instance template inside the modules directory, in the
main.tf and variable.tf files


 We can now init and apply, but we get an error because the
parameters we are passing contain empty values. So as a next step we
will create a root/ directory and copy all the files and content inside
modules/ into root/.


So inside /root we have moved all my parameter files.

Now we will try to access /root to use the template to create an ec2 instance; we can call
it from another folder. We will make one other folder.
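
The module call itself appears only as a screenshot. A sketch of what calling the template from another folder could look like — the source path and the variable names (ami, instance_type, key_name) are assumptions based on the earlier variable.tf examples:

```
module "ec2_from_template" {
  source        = "../modules"   # path to the template directory
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = "redhat"
}
```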


DAY 11
Count
Here we are creating a number of instances. count = 2 means two
instances, for example:


provider "aws" {
  access_key = "<YOUR_ACCESS_KEY>"
  secret_key = "<YOUR_SECRET_KEY>"
  region     = "us-east-1"
}

resource "aws_instance" "terraformcounts" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = "redhat"
  count         = 2
  tags = {
    Name = "raymonds-${count.index}"
  }
}

Now we will create a count of two instances with two different names –


provider "aws" {
  access_key = "<YOUR_ACCESS_KEY>"
  secret_key = "<YOUR_SECRET_KEY>"
  region     = "us-east-1"
}

resource "aws_instance" "terraformcounts" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  key_name      = "redhat"
  count         = length(var.tags)
  tags = {
    Name = var.tags[count.index]
  }
}

variable "tags" {
  type    = list(string)
  default = [ "ankit_ec2", "bholu_ec2" ]
}

PROBLEM with count: resources are addressed by position (0, 1, 2, ...). If an
element in the middle is removed — say bholu is deleted — the elements after
it shift into its position (ankit would move into bholu's place), which disturbs
the plan and can force unwanted recreation. To avoid this problem, for_each
comes into the picture.

For_each:

As specified in the count meta-argument, the default behaviour of a resource
is to create a single infrastructure object, which can be overridden by using
count. But there is one more flexible way of doing the same, by using the
for_each meta-argument.

The for_each meta-argument accepts a map or a set of strings. Terraform will
create one instance of that resource for each member of that map or set. To
identify each member of the for_each block, we have 2 objects:

each.key: The map key or set member corresponding to each member.
each.value: The map value corresponding to each member.

For example :
## Example for_each

# variables.tf
variable "ami" {
type = string
default = "ami-0440d3b780d96b29d"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "server" { # the set for_each iterates over; each member becomes each.value
  type    = set(string)
  default = ["Development", "Testing", "Production"]
}

# main.tf
resource "aws_instance" "sandbox" {


ami = var.ami
instance_type = var.instance_type
for_each = var.server
tags = {
Name = each.value # for a set, each.value and each.key is the same
}
}


DAY 12
CICD PROCESS with TERRAFORM, GIT and the
CICD TOOL JENKINS
First we will create a Jenkins instance with instance type t2.medium
(2 vCPU, 4 GiB memory) to start our automation process.


Now install


All 4 versions installed


Jenkins is now in active status.

Let's log in to Jenkins by copying the public IP; Jenkins by default runs
on port 8080.


Paste the password on administrator password



2 types of pipelines --
Declarative pipeline (Groovy script)
Scripted pipeline


Now go down to the script directly


Now we will try some sample pipeline scripts using Jenkins.


So this below is my stage 1 output


Now if you want to configure one more stage in the existing pipeline:


We can do the same for a terraform script or a git script.
For terraform that means: stage 1 terraform init, stage 2 apply, stage 3 destroy.
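
The pipeline script itself appears only as screenshots. A declarative pipeline sketch consistent with the stage names (checkout, init, plan, action), the repository URL, and the commands that appear in the build log later in this section:

```groovy
pipeline {
    agent any
    stages {
        stage('checkout') {
            steps {
                // clone the Terraform repo into the Jenkins workspace
                git branch: 'main', url: 'https://github.com/Ankitadi5746/Terraform-CICD.git'
            }
        }
        stage('init') {
            steps {
                sh 'terraform init'      // download provider plugins
            }
        }
        stage('plan') {
            steps {
                sh 'terraform plan'      // preview the changes
            }
        }
        stage('action') {
            steps {
                sh 'terraform apply --auto-approve'
            }
        }
    }
}
```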


But before that we have to install the plugins individually: for terraform the
terraform plugin, for git the git plugins, for maven the maven plugins. We
installed the suggested plugins at installation time, but these do not come
by default.


Suppose stage 1 fails — what will the issue be? If stage 1 fails, you go and
check git. The 2nd stage relates to the terraform init part, stage 3 to the
terraform apply part. If any task fails at any stage, we can go and check that
stage only, which is why we stage things here. We are not supposed to do it all
in a single stage, as that would make failures very hard to find.


In the search, type terraform and GitHub Integration, and install both
plugins.

Now we will go back to


Now let's clone from my GitHub repository


I have one resource block in my Terraform-CICD git repo, so we will
take this as a reference to clone via HTTP into Jenkins.


NOTE: before git clone "http link" we use sh. For example, if you make a
file locally you use touch file1, but here it is sh "touch file1". So sh comes
first, and the rest of the command goes inside " ".



So cloning into Terraform-CICD is done now.

Next, we will do terraform init, which downloads the provider plugins; we
are working to create one ec2 instance that is already kept inside the
Terraform-CICD repository.

Now give IAM Access


Then Apply and Save, then Build.


Started by user Ankit Kumar


[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/terraform
[Pipeline] {
[Pipeline] stage
[Pipeline] { (checkout)
[Pipeline] git
The recommended git tool is: NONE
No credentials specified
> git rev-parse --resolve-git-dir /var/lib/jenkins/workspace/terraform/.git # timeout=10
Fetching changes from the remote Git repository
> git config remote.origin.url https://github.com/Ankitadi5746/Terraform-CICD.git #
timeout=10
Fetching upstream changes from https://github.com/Ankitadi5746/Terraform-CICD.git
> git --version # timeout=10
> git --version # 'git version 2.40.1'
> git fetch --tags --force --progress -- https://github.com/Ankitadi5746/Terraform-
CICD.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git rev-parse refs/remotes/origin/main^{commit} # timeout=10
Checking out Revision 759808b160eb15f33030d2f85f2c4b899ff738c2
(refs/remotes/origin/main)
> git config core.sparsecheckout # timeout=10
> git checkout -f 759808b160eb15f33030d2f85f2c4b899ff738c2 # timeout=10
> git branch -a -v --no-abbrev # timeout=10
> git branch -D main # timeout=10
> git checkout -b main 759808b160eb15f33030d2f85f2c4b899ff738c2 # timeout=10
Commit message: "Create main.tf"
First time build. Skipping changelog.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (init)
[Pipeline] sh
+ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...


- Installing hashicorp/aws v5.39.0...
- Installed hashicorp/aws v5.39.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (plan)
[Pipeline] sh
+ terraform plan

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.dev will be created
  + resource "aws_instance" "dev" {
      + ami                                  = "ami-0440d3b780d96b29d"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle                   = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = (known after apply)
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = (known after apply)
      + source_dest_check                    = true
      + spot_instance_request_id             = (known after apply)
      + subnet_id                            = (known after apply)
      + tags                                 = {
          + "Name" = "dev-ec2"
        }
      + tags_all                             = {
          + "Name" = "dev-ec2"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
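The plan above would be produced by a minimal configuration along these lines (a sketch reconstructed from the values in the log — the AMI ID, instance type, and Name tag all appear above; the provider region is an assumption):

```hcl
# Sketch of the configuration behind the plan output above.
# Only ami, instance_type, and tags are taken from the log;
# the region is an assumption -- use whichever region hosts the AMI.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "dev" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"

  tags = {
    Name = "dev-ec2"
  }
}
```

All the "(known after apply)" attributes in the plan are computed by AWS at creation time, which is why only these three arguments need to be set in the configuration.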
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (action)
[Pipeline] sh
+ terraform apply --auto-approve

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.dev will be created
  + resource "aws_instance" "dev" {
      + ami                                  = "ami-0440d3b780d96b29d"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle                   = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = (known after apply)
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = (known after apply)
      + source_dest_check                    = true
      + spot_instance_request_id             = (known after apply)
      + subnet_id                            = (known after apply)
      + tags                                 = {
          + "Name" = "dev-ec2"
        }
      + tags_all                             = {
          + "Name" = "dev-ec2"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

aws_instance.dev: Creating...
aws_instance.dev: Still creating... [10s elapsed]
aws_instance.dev: Still creating... [20s elapsed]
aws_instance.dev: Still creating... [30s elapsed]
aws_instance.dev: Creation complete after 32s [id=i-082ed5054b2a1fa87]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
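The stage markers in this log (init, plan, action) could come from a declarative Jenkinsfile along these lines (a minimal sketch — the stage names mirror the log, but everything else, such as how AWS credentials reach the agent, is an assumption that varies per setup). Unlike the log's pipeline, this sketch saves the plan with -out and applies the saved file, which avoids the "didn't use the -out option" note and guarantees apply performs exactly the planned actions:

```groovy
// Minimal Jenkinsfile sketch matching the stages seen in the log.
// Assumes Terraform is on the agent's PATH and AWS credentials are
// already configured (e.g. via the instance profile or environment).
pipeline {
    agent any
    stages {
        stage('init') {
            steps {
                sh 'terraform init'
            }
        }
        stage('plan') {
            steps {
                // -out saves the plan so apply can execute exactly it
                sh 'terraform plan -out=tfplan'
            }
        }
        stage('action') {
            steps {
                // applying a saved plan file needs no interactive approval
                sh 'terraform apply -auto-approve tfplan'
            }
        }
    }
}
```

A teardown stage running `terraform destroy -auto-approve` is often added as a separate, manually triggered job so the EC2 instance is not left running.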
