Terraform
Manual provisioning:
1. Time-consuming
2. Repetitive manual work
3. Prone to mistakes
Automate -- > Terraform -- > code -- > HCL (HashiCorp Configuration Language)
HOW IT WORKS:
WRITE
PLAN
APPLY
ADVANTAGES:
1. Reusable
2. Time saving
3. Automation
4. Avoiding mistakes
5. Dry run (terraform plan previews changes before apply)
CLOUD ALTERNATIVES:
CloudFormation (CFT) = AWS
ARM templates = AZURE
Deployment Manager = GOOGLE
PULUMI
ANSIBLE
CHEF
PUPPET
OpenTofu
TERRAFORM VS ANSIBLE:
Terraform provisions (creates) the servers, while Ansible is mainly used to configure the software on them.
While Terraform is known for being cloud-agnostic and supporting public clouds such as AWS, Azure,
GCP, it can also be used for on-prem infrastructure including VMware vSphere and OpenStack.
INSTALLING TERRAFORM:
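Installation steps, assuming a yum-based machine such as Amazon Linux or RHEL (the notes do not state the OS, so adjust for your distribution):
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform
terraform version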
blocks: Terraform code is written as blocks.
arguments: each block contains arguments (key = value pairs).
Configuration files: based on these inputs Terraform creates the real-world resources.
The file extension is .tf
mkdir terraform
cd terraform
vim main.tf
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "one" {
  ami           = "ami-03eb6185d756497f8"
  instance_type = "t2.micro"
}
I : INIT
P : PLAN
A : APPLY
D : DESTROY
TERRAFORM COMMANDS:
plan : takes the inputs given by the user and plans the resource creation; if we haven't given inputs for some fields, it takes the default values.
apply : creates the resources in the real world as per the inputs given in the configuration file.
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "one" {
  count         = 5
  ami           = "ami-03eb6185d756497f8"
  instance_type = "t2.micro"
}
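The standard workflow commands, run from the directory that holds the file above:
terraform init      : downloads the provider plugins and initializes the directory
terraform plan      : dry run, shows what will be created
terraform apply     : creates the resources
terraform destroy   : deletes the resources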
STATE FILE: used to store the information about the resources created by Terraform (terraform.tfstate).
Command: terraform state list : lists the resources tracked in the state file.
TERRAFORM VARIABLES:
In real time we keep all the variables in variable.tf so that they are easy to maintain.
main.tf
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "one" {
  count         = var.instance_count
  ami           = "ami-0b41f7055516b991a"
  instance_type = var.instance_type
}
variable.tf
variable "instance_type" {
  description = "*"
  type        = string
  default     = "t2.micro"
}
variable "instance_count" {
  description = "*"
  type        = number
  default     = 2
}
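To check how these variables resolve before applying, terraform console (run after terraform init) evaluates expressions interactively; with the defaults above it should print something like:
terraform console
> var.instance_type
"t2.micro"
> var.instance_count
2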
=================================================================
Terraform tfvars:
At execution time, pass a .tfvars file to the command and Terraform will apply the values from that file.
cat main.tf
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "one" {
  count         = var.instance_count
  ami           = "ami-0e001c9271cf7f3b9"
  instance_type = var.instance_type
  tags = {
    Name = var.instance_name
  }
}
cat variable.tf
variable "instance_count" {}
variable "instance_type" {}
variable "instance_name" {}
cat dev.tfvars
instance_count = 1
instance_type = "t2.micro"
instance_name = "dev-server"
cat test.tfvars
instance_count = 2
instance_type = "t2.medium"
instance_name = "test-server"
cat prod.tfvars
instance_count = 3
instance_type = "t2.large"
instance_name = "prod-server"
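At execution time, pass the file that matches the environment:
terraform apply -var-file=dev.tfvars
terraform apply -var-file=test.tfvars
terraform apply -var-file=prod.tfvars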
TERRAFORM CLI:
cat main.tf
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-00b8917ae86a424c9"
  instance_type = var.instance_type
  tags = {
    Name = "raham-server"
  }
}
cat variable.tf
variable "instance_type" {}
METHOD-1: pass a single variable from the CLI with -var.
METHOD-2: pass multiple variables from the CLI with a .tfvars file and -var-file.
NOTE: If you want to pass a single variable from the CLI use -var; if you want to pass multiple
variables from the CLI, create .tfvars files and use -var-file.
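A METHOD-1 example for the variable above (METHOD-2 with -var-file is shown in the tfvars section); variables can also be supplied through TF_VAR_ environment variables, as shown below:
terraform apply -var="instance_type=t2.micro"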
export TF_VAR_instance_count=1
export TF_VAR_instance_name="dummy"
export TF_VAR_instance_type="t2.micro"
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "one" {
  ami           = var.ami
  instance_type = "t2.micro"
  tags = {
    Name = "raham"
  }
}
cat variable.tf
variable "ami" {
  default = ""
}
TERRAFORM OUTPUTS:
Whenever we create a resource with Terraform, if we want to print any attribute of that resource we can
use the output block; it prints the specific output as per our requirement.
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-00b8917ae86a424c9"
  instance_type = "t2.micro"
  tags = {
    Name = "raham-server"
  }
}
output "raham" {
  value = aws_instance.one
}
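After apply, the outputs can be read back at any time with:
terraform output
terraform output raham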
Note: when we change only the output block, Terraform executes only that block; the
remaining blocks are not executed because there are no changes in them.
Why: Terraform compares the configuration with the state file and acts only on the blocks that changed; for example, adding the bucket below creates only the bucket and leaves the existing instance untouched.
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "one" {
  ami           = "ami-0195204d5dce06d99"
  instance_type = "t2.micro"
  tags = {
    Name = "raham"
  }
}
resource "aws_s3_bucket" "two" {   # bucket resource block reconstructed; name "two" assumed
  bucket = "rahamshaik8e3huirfh9uf2f"
}
TERRAFORM REPLACE:
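A typical usage, assuming the resource address from the examples above; it forces Terraform to destroy and recreate just that resource:
terraform apply -replace="aws_instance.one"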
============================================================================
LOCALS: once you define a value in a locals block, you can reuse it multiple times in the configuration.
provider "aws" {
}
locals {
  env = "prod"
}
resource "aws_vpc" "one" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "${local.env}-vpc"
  }
}
resource "aws_subnet" "two" {
  vpc_id     = aws_vpc.one.id
  cidr_block = "10.0.0.0/24"
  tags = {
    Name = "${local.env}-subnet"
  }
}
resource "aws_instance" "three" {   # instance resource name assumed
  subnet_id     = aws_subnet.two.id
  ami           = "ami-00b8917ae86a424c9"
  instance_type = "t2.micro"
  key_name      = "jrb"
  tags = {
    Name = "${local.env}-server"
  }
}
WORKSPACES:
All the resources we create with Terraform are, by default, tracked in the default workspace.
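The workspace subcommands used to manage them:
terraform workspace list        : lists all workspaces
terraform workspace new dev     : creates and switches to a new workspace
terraform workspace select dev  : switches to an existing workspace
terraform workspace show        : shows the current workspace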
NOTE: each workspace maintains its own state file, so the same configuration can be applied separately in each workspace.
EXECUTION:
cat main.tf
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "one" {
  count         = var.instance_count
  ami           = "ami-03eb6185d756497f8"
  instance_type = var.instance_type
  tags = {
    Name = var.instance_name
  }
}
cat variable.tf
variable "instance_count" {}
variable "instance_type" {}
variable "instance_name" {}
cat dev.tfvars
instance_count = 1
instance_type = "t2.micro"
instance_name = "dev-server"
cat test.tfvars
instance_count = 2
instance_type = "t2.medium"
instance_name = "test-server"
cat prod.tfvars
instance_count = 3
instance_type = "t2.large"
instance_name = "prod-server"
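A typical execution, one workspace per environment, applying the matching tfvars file:
terraform workspace new dev
terraform apply -var-file=dev.tfvars
terraform workspace new test
terraform apply -var-file=test.tfvars
terraform workspace new prod
terraform apply -var-file=prod.tfvars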
EXAMPLES (backends where the state file can be stored):
S3
K8S
CONSUL
AZURE
TERRAFORM CLOUD
The backup file (terraform.tfstate.backup) is a backup of the terraform.tfstate file. Terraform automatically creates a backup of the
state file before making any changes to it. This ensures that you can recover from a
corrupted or lost state file.
terraform state mv aws_subnet.two aws_subnet.three : to move state info from one to another
CODE:
provider "aws" {
  region = "us-east-1"
}
terraform {
  backend "s3" {
    bucket = "terrastatebyucket007"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}
locals {
  env = "${terraform.workspace}"
}
resource "aws_vpc" "one" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "${local.env}-vpc"
  }
}
resource "aws_subnet" "two" {
  vpc_id     = aws_vpc.one.id
  cidr_block = "10.0.0.0/24"
  tags = {
    Name = "${local.env}-subnet"
  }
}
resource "aws_instance" "three" {   # instance resource name assumed
  subnet_id     = aws_subnet.two.id
  ami           = "ami-0e001c9271cf7f3b9"
  instance_type = "t2.micro"
  tags = {
    Name = "${local.env}-server"
  }
}
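After adding the backend block, the directory has to be re-initialized; the -migrate-state option copies the existing local state to the new S3 backend:
terraform init -migrate-state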
========================================================================
META ARGUMENTS:
DEPENDS_ON:
provider "aws" {
  region = "us-east-1"
}
resource "aws_instance" "two" {
  ami           = "ami-00b8917ae86a424c9"
  instance_type = "t2.micro"
  tags = {
    Name = "raham-server"
  }
}
resource "aws_s3_bucket" "three" {   # bucket resource name assumed
  bucket     = "dummyawsbuckeet0088ndehd"
  depends_on = [aws_instance.two]
}
COUNT:
provider "aws" {
}
resource "aws_instance" "one" {
  count         = 3
  ami           = "ami-00b8917ae86a424c9"
  instance_type = "t2.medium"
  tags = { Name = "server-${count.index}" }   # Name value assumed
}
provider "aws" {
}
resource "aws_instance" "one" {
  count         = length(var.instance_type)
  ami           = "ami-00b8917ae86a424c9"
  instance_type = var.instance_type[count.index]
  tags = { Name = var.instance_name[count.index] }
}
variable "instance_type" {
  default = ["t2.micro", "t2.medium", "t2.large"]   # example list, assumed
}
variable "instance_name" {
  default = ["dev-server", "test-server", "prod-server"]   # example list, assumed
}
FOR_EACH:
resource "aws_instance" "one" {
  for_each      = toset(["dev-server", "test-server"])   # example set, assumed
  ami           = "ami-00b8917ae86a424c9"
  instance_type = "t2.micro"
  tags = {
    Name = "${each.key}"
  }
}
PREVENT_DESTROY: blocks terraform destroy for this resource.
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-0d7a109bf30624c99"
  instance_type = "t2.nano"
  tags = {
    Name = "lucky-server"
  }
  lifecycle {
    prevent_destroy = true
  }
}
CREATE_BEFORE_DESTROY: by default, if Terraform has to recreate an object, it destroys the existing object
first and then creates the new one. With create_before_destroy, the replacement object is created first
and the existing resource is destroyed afterwards.
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-0d7a109bf30624c99"
  instance_type = "t2.nano"
  tags = {
    Name = "lucky-server"
  }
  lifecycle {
    create_before_destroy = true
  }
}
IGNORE CHANGES: whenever changes are made to the infrastructure manually, running terraform
plan or terraform apply would pick those values up against the state and revert them. If we want Terraform
to ignore the manual changes made to the infrastructure, we use ignore_changes.
NOTE: it is mainly used to ignore manual changes applied to the infrastructure; if you apply any
change to the existing infrastructure manually, Terraform will completely ignore those attributes during the run.
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-0d7a109bf30624c99"
  instance_type = "t2.nano"
  tags = {
    Name = "lucky-server"
  }
  lifecycle {
    ignore_changes = all
  }
}
======================================================================
Providers:
Terraform supports thousands of providers; in real time we use only the specific providers we need,
and many of them are maintained by the community rather than HashiCorp.
GITHUB:
provider "github" {
  token = "***********************"
}
resource "github_repository" "one" {
  name = "example-repo"
}
LOCAL:
provider "local" {}
resource "local_file" "one" {
  filename = "abc.txt"
  content  = "hello"   # content argument is required; example value
}
NOTE: For every provider in Terraform we need to download the plugins by running terraform init.
VERSION CONSTRAINTS:
Whenever the cloud provider (for example the AWS console) introduces new changes, the old provider plugin
might not support them, so we download the newer provider plugin for the new code; in real time
we update the plugin versions based on our requirement.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.41.0"
    }
  }
}
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "2.2.2"
    }
  }
}
NOTE: in a single file we can write the provider requirements for multiple providers.
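When a version constraint is changed, the matching plugin has to be downloaded again; terraform init -upgrade updates the installed provider plugins to the newest versions allowed by the constraints:
terraform init -upgrade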
terraform import: with the import block, if a resource was created manually we can bring it under
Terraform management.
import {
  to = aws_instance.example
  id = var.instance_id
}
terraform plan -generate-config-out=ec2.tf
terraform apply
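The older single-command form imports one resource directly into state (the instance ID here is only a placeholder):
terraform import aws_instance.example i-0123456789abcdef0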
TERRAFORM REFRESH:
The state file stores the values of the resources as Terraform created them; when the real-world
infrastructure is modified outside Terraform, the change is not replicated to the state file automatically.
So we run the command terraform refresh: it refreshes the state file by comparing the original values with the
real infrastructure, and if the original values were modified or changed, the changes are replicated
to the state file.
terraform refresh
DISADVANTAGE: sometimes it can lead to the existing infrastructure being deleted because of some small
changes, so in real time we never run this command manually.
TERRAFORM MODULES:
A module that has been called by another module is often referred to as a child module.
We can publish modules for others to use, and we can use modules that others have published.
These modules are free to use, and Terraform can download them automatically if you specify the
appropriate source and version in the module block.
cat main.tf
provider "aws" {
}
module "my_instance" {
  source = "./modules/instances"
}
module "s3_module" {
  source = "./modules/buckets"
}
mkdir -p modules/instances
mkdir -p modules/buckets
cat modules/buckets/main.tf
resource "aws_s3_bucket" "one" {
  bucket = "devopsherahamshaik0099889977"
}
cat modules/instances/main.tf
resource "aws_instance" "one" {
  count         = 2
  ami           = "ami-00b8917ae86a424c9"
  instance_type = "t2.medium"
  key_name      = "yterraform"
  tags = {
    Name = "n.virginia-server"
  }
}
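After adding or changing module blocks, the root directory has to be initialized again so Terraform loads the module code:
terraform init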
terraform fmt -recursive : used to apply format for files on all folders
===========================================================================
DYNAMIC BLOCK: used to reduce the length of the code and to reuse a block of code in a loop.
provider "aws" {
}
locals {
  ingress_rules = [
    { port = 443 },
    { port = 80 },
    { port = 8080, description = "Ingress rules for port 8080" }
  ]
}
resource "aws_instance" "one" {
  ami                    = "ami-0c02fb55956c7d316"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.main.id]
  tags = {
    Name = "raham-server"   # Name value assumed
  }
}
resource "aws_security_group" "main" {
  egress = [
    {
      cidr_blocks      = ["0.0.0.0/0"]
      description      = "*"
      from_port        = 0
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "-1"
      security_groups  = []
      self             = false
      to_port          = 0
    }
  ]
  dynamic "ingress" {
    for_each = local.ingress_rules
    content {
      description = "*"
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
  tags = {
    Name = "main-sg"   # Name value assumed
  }
}
PROVISIONERS: used to execute commands or scripts on Terraform-managed resources, both locally
and remotely.
LOCAL-EXEC: used to execute a command or script on the local machine (where Terraform is installed).
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-04823729c75214919"
  instance_type = "t2.micro"
  tags = {
    Name = "rahaminstance"
  }
  provisioner "local-exec" {
    command = "echo instance ${self.public_ip} created >> status.txt"   # example command, assumed
  }
}
REMOTE-EXEC: once the server is created, it executes commands and scripts on that server, for installing
software, configuring it, and even for deployments.
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-04823729c75214919"
  instance_type = "t2.micro"
  key_name      = "yterraform"
  tags = {
    Name = "rahaminstance"   # Name value assumed
  }
  provisioner "remote-exec" {
    inline = [
      "touch file1"
    ]
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }
}
TERRAFORM CLOUD SETUP:
1. create an account
2. create an organization
3. create a workspace
4. add VCS -- > GitHub -- > username & password -- > select the repo
TERRAFORM VAULT: used to produce dynamic secrets.
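A minimal sketch of reading a secret through the Vault provider; the address and secret path are assumptions for illustration, and the Vault token is expected via the VAULT_TOKEN environment variable:
provider "vault" {
  address = "http://127.0.0.1:8200"   # assumed Vault server address
}
data "vault_generic_secret" "creds" {
  path = "secret/aws"                 # hypothetical secret path
}
output "db_password" {
  value     = data.vault_generic_secret.creds.data["password"]
  sensitive = true
}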
create account
create organization
new run
fail
new run
TERRAFORM MAP:
IT IS A VARIABLE TYPE USED TO ASSIGN KEY & VALUE PAIRS FOR A RESOURCE.
provider "aws" {
}
resource "aws_instance" "one" {
  ami           = "ami-0c02fb55956c7d316"
  instance_type = "t2.micro"
  tags          = var.instance_tags
}
variable "instance_tags" {
  type = map(any)
  default = {
    Name   = "app-server"
    Env    = "dev"
    Client = "swiggy"
  }
}
=================================================
SONARQUBE:
https://github.com/RAHAMSHAIK007/all-setups.git
port: 9000
1. generate a token
2. configure tools
dashboard -- > manage Jenkins -- > system -- > SonarQube -- > name: sonarqube & url: ----- & add
secret text.
dashboard -- > tools -- > maven -- > name: maven -- > save
CODE:
node {
    def mavenCMD = "mvn"   // assumed: Maven is available on the agent PATH; adjust if using a Jenkins tool installation
    stage('checkout') {
        git 'https://github.com/devopsbyraham/jenkins-java-project.git'
    }
    stage('build') {
        sh 'mvn compile'
    }
    stage('test') {
        sh 'mvn test'
    }
    stage('artifact') {
        sh 'mvn package'
    }
    stage('code quality') {
        withSonarQubeEnv('sonarqube') {
            sh "${mavenCMD} sonar:sonar"
        }
    }
}
K8SGPT:
CONFIGURE:
k8sgpt generate
generate a token
k8sgpt analyze
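Before analyze can use an AI backend, the generated token has to be registered; with the OpenAI backend (assumed here) that is done with:
k8sgpt auth add --backend openai
k8sgpt analyze --explain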