Terraform End-to-End Document
Terraform Installation
DAY 1
Install Terraform on your local system
C:\Users\Asus\Downloads\terraform_1.7.3_windows_386
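A quick way to verify the install (after extracting terraform.exe from the zip above and adding its folder to the system PATH):
terraform -version
terraform -help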
Terraform Codes
DAY 2
Custom Network
#create vpc
resource "aws_vpc" "custnw" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "ankit_vpc"
  }
}
#create Internet Gateway and attach it to the VPC
resource "aws_internet_gateway" "custnw" {
  vpc_id = aws_vpc.custnw.id
  tags = {
    Name = "Ankit Internet Gateway"
  }
}
#create subnet inside the VPC
resource "aws_subnet" "custnw" {
  vpc_id     = aws_vpc.custnw.id
  cidr_block = "10.0.0.0/24"
  tags = {
    Name = "ankit_subnet" # placeholder; the tag value is not shown in the notes
  }
}
#create route table with a default route to the Internet Gateway
resource "aws_route_table" "custnw" {
  vpc_id = aws_vpc.custnw.id
  tags = {
    Name = "ankit_rt" # placeholder; the tag value is not shown in the notes
  }
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.custnw.id
  }
}
#associate the route table with the subnet
resource "aws_route_table_association" "custnw" {
  route_table_id = aws_route_table.custnw.id
  subnet_id      = aws_subnet.custnw.id
}
#create security group allowing HTTP, SSH and HTTPS in, and all traffic out
resource "aws_security_group" "custnw" {
  name   = "custnw_sg"
  vpc_id = aws_vpc.custnw.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
#create the EC2 instance with a public IP inside the public subnet
resource "aws_instance" "custnw" {
  ami                         = var.ami
  instance_type               = var.instance_type
  key_name                    = var.key_name
  subnet_id                   = aws_subnet.custnw.id
  associate_public_ip_address = true
  tags = {
    Name = "CustANKITec2"
  }
}
DAY 3
S3 BUCKET CREATION WITH VERSIONING
#create an S3 bucket with versioning enabled
resource "aws_s3_bucket" "devankit" {
  bucket = "terrabucketcreate"
}
resource "aws_s3_bucket_versioning" "devankit" {
  bucket = aws_s3_bucket.devankit.id
  versioning_configuration {
    status = "Enabled"
  }
}
#Create a fresh EC2 instance and print the public IP, public DNS and private IP as outputs.
#Don't print the output of the private IP directly; mark it as sensitive.
resource "aws_instance" "MrSingh" {
ami = var.ami
instance_type = var.instance_type
key_name = var.key_name
tags = {
Name = "MrSinghec2"
DAY 4
Backend.tf script
#We are creating one S3 bucket and trying to see the whole creation process inside terraform.tfstate.
#Once a backend.tf block is configured, terraform.tfstate can no longer be seen locally as above. After terraform apply, the state file is still created and it still captures every ongoing process, any creation or deletion, but it is not located locally as above; it is stored in the backend defined in backend.tf.
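The backend block itself is not captured in the notes; a minimal sketch of a backend.tf, assuming a bucket name (substitute the S3 bucket you actually created):
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # assumed name, replace with your bucket
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}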
DAY 5
IMPORT: import an existing resource into Terraform
To make any further changes to an already-created EC2 instance, we import (clone) it into our local system and control main.tf for further changes to that EC2 instance.
First we create a resource block; before that, we will have created the EC2 instance itself.
Now we will map the EC2 instance ID to our local EC2 resource block:
terraform import aws_instance.importec2 i-0e5ffb92c68b388e7
Now we can give all of the ami, instance_type and key_name values by reference to the state file, because the state file recorded all details of that EC2 instance while importing it to our local system. We can refer to the details from the state file and code them into our main resource block, as in the sketch below.
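A minimal sketch of how the resource block looks once the details are copied from the state file; the AMI ID and instance type here are placeholders, only the resource address matches the import command above:
resource "aws_instance" "importec2" {
  ami           = "ami-xxxxxxxxxxxxxxxxx" # placeholder, copy from terraform.tfstate
  instance_type = "t2.micro"              # placeholder, copy from terraform.tfstate
  key_name      = "redhat"                # key pair recorded in the state file
}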
Now suppose I want to make further changes to it: I will give it another key pair. Previously, before the import, the key_name in the state file was redhat, meaning the original EC2 instance used the redhat key pair.
DAY 6
DATA SOURCE
Here we can use the custom network where we already have a VPC created, and inside that VPC the subnet, internet gateway and route table are all configured, and all of that VPC configuration is attached to our public EC2 instance placed inside the public subnet.
But here we can create an instance at any time and call the same custom network configuration where we already have our VPC details.
Since we already have the custom network configured, we can copy the whole configuration and paste it into our new folder.
#create vpc
resource "aws_vpc" "custnw" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "ankit_vpc"
}
}
#create Internet Gateway and attach to VPC
resource "aws_internet_gateway" "custnw" {
vpc_id = aws_vpc.custnw.id
tags = {
Name = "Ankit Internet Gateway"
}
}
#create the subnet and route table and associate them with the VPC (same blocks as in DAY 2, omitted here)
#create the security group and attach it to the VPC
resource "aws_security_group" "custnw" {
  name   = "custnw_sg"
  vpc_id = aws_vpc.custnw.id
  ingress {
    description = "TLS from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "TLS from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "TLS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Now we can create a fresh EC2 instance, and inside it we only call the VPC pieces which we already took from the old configuration and pasted into our new directory.
Go to VPC
Subnet ID (copy the subnet ID from here and paste it inside the data source block)
Like above, we passed the value of the subnet into the fresh EC2 instance by creating a data source.
We can pass the Security Group the same way, as in the sketch below.
Go to VPC
Security Group ID (copy the SG ID from here and paste it inside the data source block)
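A minimal sketch of the data-source approach described above; the subnet and security group IDs are placeholders for the IDs copied from the VPC console, and the AMI and instance type are the values used elsewhere in these notes:
data "aws_subnet" "custnw" {
  id = "subnet-xxxxxxxxxxxxxxxxx" # paste the Subnet ID copied from the VPC console
}
data "aws_security_group" "custnw" {
  id = "sg-xxxxxxxxxxxxxxxxx" # paste the Security Group ID copied from the VPC console
}
resource "aws_instance" "freshec2" {
  ami                    = "ami-0440d3b780d96b29d"
  instance_type          = "t2.micro"
  subnet_id              = data.aws_subnet.custnw.id
  vpc_security_group_ids = [data.aws_security_group.custnw.id]
}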
DAY 7
Provisioners
The sandbox access key and secret key will not work, so try with a personal account.
- Go to IAM, click the Ankit dropdown, open Security credentials, and create both an access key and a secret key.
#create a key pair: define the name of the key and the path of the local public key from the local system
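The key pair block itself is not shown in the notes; a minimal sketch, using the aws_key_pair.AnkitGoldenKey name that the EC2 resources below refer to (the public key path is an assumption):
resource "aws_key_pair" "AnkitGoldenKey" {
  key_name   = "AnkitGoldenKey"
  public_key = file("~/.ssh/id_rsa.pub") # path of the local public key
}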
#Create a Security Group, but before that create a VPC so that the HTTP port can be enabled for the Apache web server
#Create a fresh EC2 instance and pass the key pair value here in key_name; make a connection to connect as ec2-user, as in the AWS console
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
}
#The local-exec provisioner will help to save all of this into one new file on the local machine; you can create that new file from here.
provisioner "local-exec" {
command = "touch Mrsingh"
}
provider "aws" {
access_key = "AKIAXWB4SSTJ5JKWVRO3"
secret_key = "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"
region = "us-east-1"
}
#create a key pair: define the name of the key and the path of the local public key from the local system
#Create a Security Group, but before that create a VPC so that the HTTP port can be enabled for the Apache web server
resource "aws_vpc" "Myownvpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "MyVPC"
}
}
#create a fresh EC2 instance and pass the key pair value here in key_name; make a connection to connect as ec2-user, as in the AWS console
resource "aws_instance" "ANKITEC2"{
ami = "ami-0440d3b780d96b29d"
instance_type = "t2.micro"
key_name = aws_key_pair.AnkitGoldenKey.key_name
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
}
}
File provisioner to copy a file from local to the remote EC2 instance
# Let me create one file named India; I will send it to the EC2 instance from local
#create a key pair: define the name of the key and the path of the local public key from the local system
#Create a Security Group, but before that create a VPC so that the HTTP port can be enabled for the Apache web server
resource "aws_vpc" "Myownvpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "MyVPC"
}
}
#create a fresh EC2 instance and pass the key pair value here in key_name; make a connection to connect as ec2-user, as in the AWS console
resource "aws_instance" "ANKITEC2"{
ami = "ami-0440d3b780d96b29d"
instance_type = "t2.micro"
key_name = aws_key_pair.AnkitGoldenKey.key_name
connection {
type = "ssh"
user = "ec2-user"
private_key = file("~/.ssh/id_rsa")
host = self.public_ip
}
#File provisioner to copy a file from local to the remote EC2 instance
# Let me create one file named India; I will send it to the EC2 instance from local
provisioner "file" {
source = "India"
destination = "/home/ec2-user/India"
}
4. USER DATA
We can compare user data and the file provisioner; both do a similar job, but user data is easier, as in the sketch below.
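A minimal sketch of the user_data alternative; the resource label and the script content (installing Apache, as hinted by the security group comment above) are assumptions:
resource "aws_instance" "userdata_demo" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  user_data     = <<-EOF
    #!/bin/bash
    yum install -y httpd
    systemctl enable --now httpd
  EOF
}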
DAY 8
Creation of S3 Bucket and DynamoDB Table
Under the Day1 folder or the Resource_for_s3_dynamo_table folder
<Creation of S3 bucket and DynamoDB>
First you have to create the S3 bucket, then run init and apply; once that is completed, create the DynamoDB table and run init and apply again.
# create s3 bucket
provider "aws" {
access_key = "AKIAXHZH6XUYQIXXAMWG"
secret_key = "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"
region = "us-east-1"
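The bucket resource itself is not captured in the notes; a minimal sketch, with the resource label assumed and the bucket name taken from the backend.tf below:
resource "aws_s3_bucket" "statebucket" {
  bucket = "sonuxtyuaankit-s3"
}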
#create dynamoDB
resource "aws_dynamodb_table" "round" {
name = "aws_dynamodbtable"
hash_key = "LockID"
read_capacity = 20
write_capacity = 20
attribute {
name = "LockID"
type = "S"
}
}
In backend.tf
terraform {
backend "s3" {
encrypt = true
bucket = "sonuxtyuaankit-s3"
dynamodb_table = "aws_dynamodbtable"
key = "terraform.tfstate"
region = "us-east-1"
}
}
In main.tf
Create ec2 instance
#create ec2
provider "aws" {
access_key = "AKIAXHZH6XUYQIXXAMWG"
secret_key = "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"
region="us-east-1"
Terraform init
DAY 9
WORKSPACE:
As we can see, the workspace is default, and there we have created one S3 bucket named defaultworkspacebuckzez.
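The standard workspace commands (not specific to this example):
terraform workspace list              # lists workspaces, * marks the current one
terraform workspace new dev           # creates and switches to a workspace named dev
terraform workspace select default    # switches back to the default workspace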
depends_on:
depends_on means that if S3 is dependent upon EC2, then EC2 will be created first and then S3, as in the sketch below.
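A minimal sketch of depends_on; the resource labels and bucket name are assumptions:
resource "aws_instance" "app" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
}
resource "aws_s3_bucket" "appdata" {
  bucket     = "dependson-demo-bucket" # assumed bucket name
  depends_on = [aws_instance.app]      # forces the EC2 instance to be created first
}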
Lifecycle:
As usual, when we modify an existing instance, Terraform destroys the previous one and applies the latest changes to a new instance.
But if we use create_before_destroy with the help of the lifecycle argument, we are able to bypass the destroy-first step: Terraform first creates a fresh instance with the new change alongside the old instance, and only then destroys the old one with the old parameters.
provider "aws" {
access_key = "AKIAWBU7VN4Q3UKVR4NK"
secret_key = "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"
region = "us-east-1"
}
#previous ec2 instance which we already created at the time of depends_on
#resource "aws_instance" "saradin" {
#ami = "ami-0440d3b780d96b29d"
#instance_type = "t2.micro"
#key_name = "abc"
#tags = {
#Name = "sanamreyec2"
#}
#}
#If we change the key name (previously it was abc, now xyz), the new instance is created before anything gets destroyed
resource "aws_instance" "saradin" {
ami = "ami-0440d3b780d96b29d"
instance_type = "t2.micro"
key_name = "xyz"
tags = {
Name = "create_before)destroy"
}
lifecycle {
create_before_destroy = true #this will create the same existing
ec3 with latest keypair and terminate the existing one with old
keyname.
#}
#lifecycle {
DAY 10
Modules
A module is a concept where a template can be called from anywhere to create an instance, as in the sketch below.
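A minimal sketch of a module call, assuming a local module folder named modules/ec2_module that exposes an instance_type variable:
# modules/ec2_module/main.tf (the reusable template)
variable "instance_type" {
  type    = string
  default = "t2.micro"
}
resource "aws_instance" "this" {
  ami           = "ami-0440d3b780d96b29d"
  instance_type = var.instance_type
}
# root main.tf (calling the template from anywhere)
module "web" {
  source        = "./modules/ec2_module"
  instance_type = "t2.micro"
}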
DAY 11
Count
Here we are creating a number of instances. count = 2 means two instances, for example (see the sketch after the provider block below):
"aws" {
access_key = "AKIAVWCE7CNU5LKFYXIT"
secret_key = "6ihDc5YtCFTdiWZ+LUwAxg7i+OPluDbBCplXDONn"
region = "us-east-1"
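The count resource itself is not shown in the notes; a minimal sketch of count = 2, with the resource label and tag assumed:
resource "aws_instance" "counted" {
  count         = 2
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  tags = {
    Name = "count_ec2_${count.index}" # count_ec2_0 and count_ec2_1
  }
}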
Now we will create a count of two instances with two different names:
provider "aws" {
access_key = "AKIAVWCE7CNU5LKFYXIT"
secret_key = "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"
region = "us-east-1"
variable "tags" {
type = list(string)
default = [ "ankit_ec2","bholu_ec2" ]
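The resource that consumes this variable is not shown in the notes; a minimal sketch using count with the tag list, resource label assumed:
resource "aws_instance" "named" {
  count         = length(var.tags)
  ami           = "ami-0440d3b780d96b29d"
  instance_type = "t2.micro"
  tags = {
    Name = var.tags[count.index] # ankit_ec2 for index 0, bholu_ec2 for index 1
  }
}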
The PROBLEM with count is that instances are addressed by position (0, 1, 2, ...). If any instance or element in the list is destroyed, the elements after it shift back one position; for example, if bholu is deleted from the middle of a longer list, the entries after it move up into its place, which disturbs the deployment. To avoid this problem, for_each comes into the picture.
For_each:
For example :
## Example for_each
# variables.tf
variable "ami" {
type = string
default = "ami-0440d3b780d96b29d"
}
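The example below also references var.instance_type and var.server, which are not shown in the notes; a minimal sketch of those variables, with the default values assumed:
variable "instance_type" {
  type    = string
  default = "t2.micro"
}
variable "server" {
  type    = set(string)
  default = ["ankit_ec2", "bholu_ec2"] # assumed set of server names
}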
# main.tf
resource "aws_instance" "sandbox" {
ami = var.ami
instance_type = var.instance_type
for_each = var.server
tags = {
Name = each.value # for a set, each.value and each.key is the same
}
}
DAY 12
CICD PROCESS with TERRAFORM and GIT
CICD TOOL: JENKINS
First we will create a Jenkins instance with instance type t2.medium (2 vCPU, 4 GiB memory) to start our automation process.
Now install Jenkins on it.
There are 2 types of pipelines:
Groovy-script declarative pipeline
Scripted pipeline
Suppose stage 1 fails; what will the issue be? If stage 1 fails, you go and check Git. The 2nd stage relates to the terraform init part, and stage 3 relates to the terraform apply part. If any task fails at any stage, we can go and check that stage only, which is why we use stages here. We are not supposed to do it all in a single stage, because then it would be very tough to find the failure.
NOTE: before git clone "http link" we use sh. For example, if you make a file you use touch file1, but here it is sh "touch file1"; sh comes first and the rest of the command goes inside the quotes, as in the sketch below.
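A minimal sketch of a declarative pipeline with those stages; the repository URL and stage names are assumptions, only the sh-wrapped commands follow the pattern described above:
pipeline {
    agent any
    stages {
        stage('git clone') {
            steps {
                sh 'git clone "https://github.com/example/terraform-repo.git"' // placeholder repository URL
            }
        }
        stage('terraform init') {
            steps {
                sh 'cd terraform-repo && terraform init'
            }
        }
        stage('action') {
            steps {
                sh 'cd terraform-repo && terraform plan'
                sh 'cd terraform-repo && terraform apply --auto-approve'
            }
        }
    }
}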
Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (action)
[Pipeline] sh
+ terraform apply --auto-approve