Notes Terraform

Here are the key steps to get started with Terraform:
1. Install Terraform on your local machine. You can download the binaries from the Terraform website.
2. Create your first Terraform configuration file with the .tf extension (e.g. main.tf). Define the infrastructure you want to provision using Terraform's declarative language.
3. Initialize Terraform with the "terraform init" command in the directory with your .tf files. This downloads any required providers.
4. Validate your configuration with "terraform plan". This will show the infrastructure changes required without making them.
5. Apply your infrastructure changes with "terraform apply".

Uploaded by

Juan Pinto

PXE (Preboot Execution Environment) boot

PXE boot is a method used to boot and configure operating systems on computers over the network, rather
than from local storage media such as a hard drive or USB flash drive.

In Terraform, PXE boot can be used as part of an automated provisioning strategy to configure and
launch virtual machine instances in the cloud. This can be especially useful in situations where a large
number of instances need to be deployed and configured quickly and efficiently.

By using PXE boot in Terraform, a custom boot image can be specified for the instance, which will be
automatically loaded into system memory and executed to begin the operating system installation
process. Other network and operating system configuration options can also be specified during the
provisioning process.

In summary, PXE boot in Terraform enables the automation of the virtual machine provisioning
process, allowing for rapid and scalable infrastructure deployment in the cloud.

So, what can you use Terraform for?


● Multi-tier applications

● Self-service infrastructure

● Production, development, and testing environments

● Continuous delivery

● Managing your management tools

How to install Terraform on Windows?


1. Download the Windows executable from the Terraform website.
https://developer.hashicorp.com/terraform/downloads
2. Unzip the executable file in a folder where you want to have it (recommended in the
local C: drive).
3. Go to system environment variables, click on PATH, then click Edit, then New, and
paste the path to the location where the Terraform file is stored. Click OK until all
windows are closed.
4. Open PowerShell or a command prompt and run the "terraform" command to confirm
that the commands are running smoothly.
TIP: To get detailed help on a specific command, run it with the -help
flag, for example: terraform plan -help.

What’s an “override” file?


In Terraform, an override file is a way to modify or override certain attributes of a
block in a Terraform configuration without editing the original configuration file.

Override files are ordinary configuration files whose names are exactly override.tf or end in
_override.tf (or the JSON equivalents, override.tf.json and _override.tf.json). Terraform loads
them last and merges their contents into matching blocks in the rest of the configuration,
instead of treating them as duplicate definitions. This can be useful in situations where you
need to make specific, often temporary, changes to a resource block for a particular
environment, such as a development, staging, or production environment.

Here's an example of an override file that modifies the tags attribute of an AWS
EC2 instance resource:

# main.tf - original EC2 instance resource block
resource "aws_instance" "my_instance" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "my-instance"
  }
}

# override.tf - merged on top of the block above
resource "aws_instance" "my_instance" {
  tags = {
    Name        = "my-instance-dev"
    Environment = "dev"
  }
}

In this example, the original EC2 instance resource block creates an instance with the Name
tag set to "my-instance". The override file replaces that tags argument, renaming the instance
to "my-instance-dev" and adding an Environment tag with a value of "dev", while ami and
instance_type are left untouched. No special flag is required: override files are merged
automatically whenever Terraform loads the directory. HashiCorp recommends using them
sparingly, because the effective configuration is no longer visible in a single file.

Terraform configuration files are normal text files.


They are suffixed with either .tf or .tf.json. Files suffixed with .tf are in Terraform’s native file
format, and .tf.json files are JSON-formatted.

The two configuration file formats are for two different types of audiences:
● Humans.

● Machines.
Humans
The .tf format, also called the HashiCorp Configuration Language or
HCL, is broadly human-readable, allows inline comments, and is generally
recommended if humans are crafting your configuration.
Machines
The .tf.json format is pure JSON. The .tf.json format is meant for
machine interactions, where a machine is building your configuration files.
You can use JSON if you’d prefer, but the HCL file format is definitely
easier to consume and we recommend using it primarily.
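As an illustrative sketch (the resource name and AMI ID are hypothetical), here is the same resource in both formats:

```hcl
# main.tf (HCL): human-friendly, supports inline comments
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
```

```json
{
  "resource": {
    "aws_instance": {
      "web": {
        "ami": "ami-0c55b159cbfafe1f0",
        "instance_type": "t2.micro"
      }
    }
  }
}
```

Note that the .tf.json form cannot carry inline comments, which is one reason HCL is preferred when humans maintain the files.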

Indicators
+: A resource that will be added.
-: A resource that will be destroyed.
-/+: A resource that will be destroyed and then added again.
~: A resource that will be changed in place.
How does Terraform work?
Terraform creates and manages resources on cloud platforms and other services through their
application programming interfaces (APIs). Providers enable Terraform to work with virtually
any platform or service with an accessible API.

HashiCorp and the Terraform community have already written thousands of providers to
manage many different types of resources and services. You can find all publicly available
providers on the Terraform Registry, including Amazon Web Services (AWS), Azure, Google
Cloud Platform (GCP), Kubernetes, Helm, GitHub, Splunk, DataDog, and many more.

The core Terraform workflow consists of three stages:

● Write: You define resources, which may be across multiple cloud providers and
services. For example, you might create a configuration to deploy an application on
virtual machines in a Virtual Private Cloud (VPC) network with security groups and a
load balancer.
● Plan: Terraform creates an execution plan describing the infrastructure it will create,
update, or destroy based on the existing infrastructure and your configuration.
● Apply: On approval, Terraform performs the proposed operations in the correct order,
respecting any resource dependencies. For example, if you update the properties of a
VPC and change the number of virtual machines in that VPC, Terraform will recreate
the VPC before scaling the virtual machines.
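The three stages above map directly onto CLI commands; a typical session (assuming configuration files already exist in the current directory) might look roughly like:

```shell
terraform init    # one-time setup: download the required providers
terraform plan    # preview the create/update/destroy actions
terraform apply   # perform the planned actions after approval
```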
1 Understand infrastructure as code (IaC) concepts
1a) Explain what IaC is
Infrastructure as Code is essentially a hub that can be used for collaboration across the IT
organization to improve infrastructure deployments, increase our ability to scale quickly, and
improve the application development process. Infrastructure as Code allows us to do all this
consistently and proficiently. By using Infrastructure as Code for both our on-premises
infrastructure and the public cloud, our organization can provide dynamic infrastructure to
our internal team members and ensure our customers have an excellent experience.

Infrastructure as Code (IaC) is an approach to infrastructure management that uses code to
describe and automate the deployment, configuration, and management of infrastructure
resources such as servers, networks, and storage. Terraform is a popular IaC tool that allows
you to define and manage infrastructure as code.

Using Terraform, you can define your infrastructure in code, version control your
infrastructure configurations, and automate the creation and management of your
infrastructure resources. Terraform also provides tools for testing and validating your
infrastructure configurations, making it a powerful tool for managing infrastructure at scale.

1b) Describe advantages of IaC patterns


Benefits of IaC
While there are many benefits of Infrastructure as Code, a few key benefits include simplifying
cloud adoption, allowing us to adopt cloud-based services and offerings to improve our
capabilities quickly. Infrastructure as Code allows us to remove many of the manual steps
required today for infrastructure requests, giving us the ability to automate approved
requests without worrying about tickets sitting in a queue. We can also use Infrastructure as
Code to provide capacity-on-demand by offering a library of services for our developers. We
can publish a self-service capability where developers and application owners can be
empowered to request and provision infrastructure that better matches their requirements.
Again, all of this is possible while driving standardization and consistency throughout the
organization, which can drive efficiencies and reduce errors or deviations from established
norms.
Here are some of the advantages of using IaC patterns in Terraform:

Reproducibility: With IaC, the infrastructure can be reproduced identically multiple times.
This is particularly useful in development, testing, staging, and production environments,
where the same infrastructure is required to be deployed in multiple locations with the same
configuration.

Consistency: IaC patterns in Terraform provide a consistent way to provision infrastructure
resources. This ensures that the same infrastructure resources are provisioned in the same
way, regardless of the environment or who is provisioning the resources.

Version control: Infrastructure code can be version-controlled like any other software code.
This enables teams to manage changes to infrastructure resources, track who made the
changes, and roll back to a previous version if required.

Collaboration: IaC patterns in Terraform enable collaboration between development,
operations, and security teams. Infrastructure resources can be defined and provisioned
collaboratively, which helps to reduce silos and bottlenecks.

Cost optimization: Terraform provides the ability to specify and manage infrastructure
resources at a granular level. This enables teams to optimize costs by provisioning only the
resources that are required and shutting them down when they are not needed.

Agility: IaC patterns in Terraform enable teams to quickly provision and de-provision
infrastructure resources as required. This agility allows teams to respond to changing
requirements and customer needs quickly.

Automation: IaC patterns in Terraform allow you to automate the creation and management
of your infrastructure resources. This means you can deploy and manage infrastructure
resources at scale with less manual intervention, freeing up your team's time for other tasks.

Security and compliance: Terraform allows you to define and enforce security and compliance
policies for your infrastructure resources. This means you can ensure that your infrastructure
resources are compliant with company and regulatory standards, and reduce the risk of
security breaches.

2 Understand the purpose of Terraform (vs other IaC)


Terraform is an Infrastructure as Code (IaC) tool that allows you to define and manage your
cloud infrastructure using declarative configuration files. Terraform is designed to work with a
wide range of cloud providers, including AWS, Google Cloud Platform, and Microsoft Azure,
and can be used to manage a variety of resources, such as virtual machines, networks, and
storage.

Compared to other IaC tools, Terraform has a number of unique features:


Multi-Cloud Support: Terraform can be used to manage infrastructure across multiple cloud
providers, making it easier to manage your infrastructure in a hybrid or multi-cloud
environment.

Declarative Configuration: Terraform allows you to define your infrastructure in a declarative
way, which means that you specify what you want your infrastructure to look like, rather than
how to create it. This makes it easier to maintain and update your infrastructure over time.

Plan and Apply: Terraform has a built-in planning and applying feature that allows you to
preview changes to your infrastructure before applying them. This helps prevent accidental
changes and allows you to ensure that your infrastructure is in the desired state.

State Management: Terraform stores the state of your infrastructure in a file, which allows it
to keep track of changes to your infrastructure over time. This makes it easier to manage your
infrastructure as it grows and changes.

Terraform Goals
• Unify the view of resources using infrastructure as code
• Support the modern data center (IaaS, PaaS, SaaS)
• Expose a way for individuals and teams to safely and predictably change infrastructure
• Provide a workflow that is technology agnostic
• Manage anything with an API

Terraform Benefits
• Provides a high-level abstraction of infrastructure (IaC)
• Allows for composition and combination
• Supports parallel management of resources (graph, fast)
• Separates planning from execution (dry-run)

2a) Explain multi-cloud and provider-agnostic benefits


Multi-cloud refers to the use of more than one cloud provider for hosting different parts of an
organization's infrastructure. Multi-cloud can help organizations avoid vendor lock-in, achieve
better performance, reliability and security, and enable more efficient disaster recovery and
business continuity strategies.

Terraform's multi-cloud support allows organizations to use the same IaC tool across multiple
cloud providers. This means that infrastructure can be defined and managed consistently
across different cloud platforms, which simplifies the management and deployment of
applications that run on multi-cloud environments. With Terraform, you can define resources
and their dependencies in a single configuration file, and then apply it to multiple cloud
providers, which reduces the time and effort required to manage infrastructure.
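As a sketch of this idea (provider sources, project ID, and bucket names are illustrative assumptions), one configuration can manage resources in two clouds with the same workflow:

```hcl
terraform {
  required_providers {
    aws    = { source = "hashicorp/aws" }
    google = { source = "hashicorp/google" }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "my-example-project"   # hypothetical GCP project ID
  region  = "us-central1"
}

# One storage bucket per cloud, defined side by side
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"
}

resource "google_storage_bucket" "logs" {
  name     = "example-logs-bucket"
  location = "US"
}
```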

Provider-agnostic refers to the ability of an IaC tool to work with multiple cloud providers
without being tied to a specific vendor. This is important because it allows organizations to
choose the cloud provider that best suits their needs, without having to learn a new IaC tool
for each provider.

Terraform's provider-agnostic architecture allows you to define and manage resources across
multiple cloud providers, which gives you the flexibility to choose the best cloud platform for
each part of your infrastructure. This also means that you can use the same IaC tool across
different cloud providers, which reduces the complexity and costs associated with managing
infrastructure on multiple cloud platforms.

2b) Explain the benefits of state

Terraform State
In order to properly and correctly manage your infrastructure resources, Terraform stores the
state of your managed infrastructure. Terraform uses this state on each execution to plan and
make changes to your infrastructure. This state must be stored and maintained on each
execution so future operations can perform correctly.

Benefits of state
During execution, Terraform will examine the state of the currently running infrastructure,
determine what differences exist between the current state and the revised desired state, and
indicate the necessary changes that must be applied. When approved to proceed, only the
necessary changes will be applied, leaving existing, valid infrastructure untouched.
Terraform state
After creating our resource, terraform has saved the current state of our infrastructure into a
file called terraform.tfstate in our base directory. This is called a state file. The state file
contains a map of resources and their data to resource IDs. The state is the canonical record
of what Terraform is managing for you. This file is important because it is canonical. If you
delete the file Terraform will not know what resources you are managing, and it will attempt
to apply all configuration from scratch. This is bad. You should ensure you preserve this file.

Terraform also creates a backup file of our state from the most recent previous execution in a
file called terraform.tfstate.backup.

Some Terraform documentation recommends putting this file into version control. We do not.
The state file contains everything in your configuration, including any secrets you might have
defined in them. We recommend instead adding this file to your .gitignore configuration.
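A minimal .gitignore for a Terraform project, following that recommendation, might look like:

```
# Local state and its backup - may contain secrets
terraform.tfstate
terraform.tfstate.backup
*.tfstate
*.tfstate.*

# Provider plugins and modules downloaded by terraform init
.terraform/
```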

Commands of state:
to view the resources created: terraform show
to list all of the items in Terraform’s managed state: terraform state list

desired state
In the context of infrastructure management, the desired state refers to the configuration or
definition of the infrastructure resources that you want to achieve or maintain.

In other words, the desired state represents the ideal state of the infrastructure that you want
Terraform (or another infrastructure management tool) to create or maintain. This includes
the configuration of resources such as servers, load balancers, databases, and other
components, as well as their dependencies and relationships.

When you use an infrastructure management tool like Terraform, you define the desired state
of your infrastructure in configuration files, often written in a domain-specific language. The
tool then uses these configuration files to create, modify, or delete the necessary resources to
achieve the desired state.

One of the benefits of using a desired state approach to infrastructure management is that it
allows you to define your infrastructure as code, which can be version-controlled, tested, and
shared like any other code. It also enables you to automate the provisioning and management
of your infrastructure, making it more scalable, consistent, and reliable.
The current state
refers to the actual state of the infrastructure resources that are currently deployed and
running in your environment. It reflects the configuration and status of each resource at a
particular point in time.

When you use an infrastructure management tool like Terraform, it tracks the current state of
your infrastructure by reading information from the resources themselves or from APIs
provided by the infrastructure providers. This information is then compared against the
desired state defined in your configuration files to determine what changes, if any, need to be
made.

For example, if you have a desired state of two EC2 instances running in a particular region
with specific configuration options, Terraform will check the current state of your
infrastructure to see if the two instances exist, if they are running in the correct region, and if
their configurations match what is defined in the configuration files. If any discrepancies are
found between the desired state and the current state, terraform will apply the necessary
changes to bring the infrastructure back into the desired state.

By tracking the current state of your infrastructure, Terraform and other infrastructure
management tools enable you to manage your resources in a consistent and repeatable way,
ensuring that your infrastructure remains up-to-date and in the desired state.
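The two-instance example above could be written as a desired state like this (region, AMI ID, and instance type are placeholders):

```hcl
provider "aws" {
  region = "us-east-1"
}

# Desired state: two identical EC2 instances
resource "aws_instance" "web" {
  count         = 2                         # desired number of instances
  ami           = "ami-0c55b159cbfafe1f0"   # placeholder AMI ID
  instance_type = "t2.micro"
}
```

On every plan, Terraform compares this desired state against the current state recorded in terraform.tfstate and proposes only the changes needed to close the gap.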

3 Understand Terraform basics


All interactions with Terraform occur via the CLI. Terraform is a local tool (it runs on the
current machine). The Terraform ecosystem also includes providers for many cloud services
and a module registry. HashiCorp also has products to help teams manage Terraform:
Terraform Cloud and Terraform Enterprise.

There are a handful of basic terraform commands, including:


• terraform init:
`terraform init` is a command in Terraform that initializes a new or existing Terraform working
directory. The `init` command sets up everything needed for Terraform to manage your
infrastructure, including downloading and installing the required provider plugins, configuring
the backend, and setting up the working directory.
• terraform validate:
is used to validate the syntax and configuration of your Terraform code. When you run this
command, Terraform checks your configuration files to ensure that they are valid and can be
parsed correctly. It checks for syntax errors, unsupported attributes, missing required
arguments, and more.
• terraform plan
terraform plan is a command in Terraform that generates an execution plan for applying
changes to your infrastructure. The plan command shows you what changes Terraform will
make to your infrastructure resources when you apply your configuration files.
When you run terraform plan, Terraform will compare your current infrastructure state to the
desired state specified in your configuration files. It will then generate a list of actions that it
will take to bring your infrastructure into compliance with your desired state.
• terraform apply
terraform apply is a command in Terraform that applies the changes specified in your
configuration files to your infrastructure. The apply command executes the changes that were
previewed in the terraform plan command and modifies your infrastructure resources to
match your desired state.
• terraform destroy
terraform destroy is a command in Terraform that is used to destroy the infrastructure
resources managed by Terraform. The destroy command reverses the changes made by the
apply command and removes all the resources created by Terraform.

It's important to note that terraform destroy can have significant consequences for your
infrastructure, as it will delete all resources created by Terraform. Therefore, it's important to
review the resources that will be destroyed carefully before executing the destroy command.

It does not delete your configuration file(s), main.tf, etc. It destroys the resources built from
your Terraform code.
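Put together, the commands above form the basic lifecycle of a Terraform-managed environment:

```shell
terraform init       # download providers, set up the working directory
terraform validate   # check syntax and internal consistency
terraform plan       # preview the changes
terraform apply      # make the changes (prompts for approval)
terraform destroy    # tear the resources down again (prompts for approval)
```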

terraform state: This command manages the state of the infrastructure and can perform
actions such as importing existing resources.

terraform output: This command displays the output values of the Terraform configuration,
which can be used in other tools.

terraform validate: This command validates the syntax and configuration of the Terraform
configuration files.
terraform graph: This command generates a visual graph of the infrastructure and its
dependencies.

terraform refresh: This command updates the state of the infrastructure in Terraform.

terraform taint: This command marks a resource as tainted, forcing its recreation on the next
apply.

There are several commonly used command FLAGS in Terraform:

-target flag
allows you to limit an operation to specific resources. Example: terraform destroy -target=aws_instance.myec2

-var or -var-file: These flags allow you to specify variable values for your Terraform
configuration. The -var flag can be used to specify a single variable value, while the -var-file
flag can be used to specify a file containing multiple variable values.
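As a sketch (the variable name is hypothetical), these flags pair with a variable declared in your configuration:

```hcl
# variables.tf - declares a variable the flags can set
variable "instance_type" {
  type    = string
  default = "t2.micro"
}
```

```shell
# Set a single value on the command line
terraform plan -var="instance_type=t2.large"

# Or load many values from a file such as dev.tfvars
terraform plan -var-file="dev.tfvars"
```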

-target: This flag allows you to target a specific resource or set of resources in your Terraform
configuration. This can be useful if you want to apply changes to a specific part of your
infrastructure without affecting other resources.

-auto-approve: This flag allows you to automatically approve and apply changes without
having to manually confirm each change. This can be useful for automated deployments or for
making quick changes without having to interact with the command line.

-state: This flag allows you to specify the location of the Terraform state file. This can be
useful if you want to store the state file in a specific location or if you want to use a state file
from a different Terraform configuration.

-input: This flag controls whether Terraform should prompt for input values during plan or
apply. The default behavior is to prompt for input, but you can use this flag to disable input
prompts for automated deployments.

-force: This flag forced terraform destroy to proceed without confirmation in older Terraform
versions; it has since been replaced by -auto-approve. Use with caution.
-parallelism: This flag controls the number of concurrent operations that Terraform should
perform. The default is 10, but you can use this flag to increase or decrease the level of
parallelism based on your infrastructure and performance requirements.

The terraform plan -out=FILE command generates an execution plan for your Terraform
configuration and saves it to the file specified by the -out flag.
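A saved plan can then be applied exactly as previewed, which is common in CI pipelines:

```shell
terraform plan -out=tfplan   # save the execution plan to a file
terraform apply tfplan       # apply exactly that plan, with no re-planning
```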

HashiCorp Configuration Language (HCL)


Terraform is written in HCL (HashiCorp Configuration Language) and is designed to be both
human and machine readable.

block label
is a string that identifies a particular block within a Terraform configuration file. The label is
used to indicate the type of block and to provide a unique name for that block within the
configuration file.

For example, consider the following resource block that creates an AWS EC2 instance:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  key_name      = "my-key-pair"
}

In this block, the label is `"aws_instance"` and the name of the block is `"example"`. The label
indicates that this is a resource block that creates an EC2 instance, and the name `"example"`
provides a unique name for this particular instance within the configuration file.

Labels are used throughout Terraform configuration files to identify different types of blocks,
including resources, data sources, providers, and variables. By using labels, Terraform can
distinguish between different types of blocks and ensure that each block is properly defined
and configured within the configuration file.
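The labels also form the address used to refer to a block elsewhere in the configuration. A sketch (the exported attribute is illustrative):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

# Reference the block by its labels: <TYPE>.<NAME>.<ATTRIBUTE>
output "example_public_ip" {
  value = aws_instance.example.public_ip
}
```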

Block types
Terraform Code Configuration block types include:
• Terraform Settings Block
• Terraform Provider Block
• Terraform Resource Block
• Terraform Data Block
• Terraform Input Variables Block
• Terraform Local Variables Block
• Terraform Output Values Block
• Terraform Modules Block

Terraform Settings Block


In Terraform, settings are configured in the top-level `terraform` block, often called the
settings block (there is no block literally named `settings`). It is used to specify options such
as the required Terraform version, the required providers, and the backend configuration.

Here's an example of a settings block that specifies the required Terraform version and the
backend configuration:

terraform {
  required_version = ">= 0.14.0"

  backend "s3" {
    bucket = "my-terraform-state"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}

In this example, the `required_version` argument specifies that the configuration requires a
minimum Terraform version of 0.14.0. The `backend` block specifies that Terraform should
use an S3 bucket named `my-terraform-state` in the `us-west-2` region to store the state file.

Note that only a fixed set of arguments is valid inside the `terraform` block, such as
`required_version`, `required_providers`, and `backend`. General CLI behavior, like the
location of the Terraform binary, is controlled outside the configuration (for example via your
PATH or environment variables), not in this block.

Overall, the `terraform` settings block provides a way to configure version, provider, and
backend requirements, and allows you to pin your Terraform configuration to your
specific needs.

Configuration Block
Terraform relies on plugins called “providers” to interact with remote systems and expand
functionality. Terraform configurations must declare which providers they require so that
Terraform can install and use them. This is performed within a Terraform configuration block.
terraform {
  # Block body
  <ARGUMENT> = <VALUE>
}
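For example, a configuration block declaring a required provider (the version constraint is illustrative):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # provider address on the Terraform Registry
      version = "~> 4.0"          # illustrative version constraint
    }
  }
}
```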

Terraform Provider Block


Providers connect Terraform to the infrastructure you want to manage—for example, AWS,
Microsoft Azure, or a variety of other Cloud, network, storage, and SaaS services. They
provide configuration like connection details and authentication credentials. You can think
about them as a wrapper around the services whose infrastructure we wish to manage.
Example:

provider "aws" {
  access_key = "abc123"
  secret_key = "abc123"
  region     = "us-east-1"
}

(Hard-coding credentials like this is for illustration only; in practice the AWS provider can
read credentials from environment variables or a shared credentials file.)

You can specify multiple providers in a Terraform configuration to manage resources from
multiple services or from multiple regions or parts of a service.
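Multiple configurations of the same provider are distinguished with an alias; a sketch (region choices and AMI ID are illustrative):

```hcl
# Default AWS provider in one region...
provider "aws" {
  region = "us-east-1"
}

# ...and a second, aliased configuration in another
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Resources select the alias with the provider meta-argument
resource "aws_instance" "west_server" {
  provider      = aws.west
  ami           = "ami-0d729a60"   # illustrative AMI ID
  instance_type = "t2.micro"
}
```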

Terraform Resource Block


Our second block is a resource. Resources are the bread and butter of Terraform. They
represent the infrastructure components you want to manage: hosts, networks, firewalls, DNS
entries, etc. The resource object is constructed of a type, name, and a block containing the
configuration of the resource.

resource "aws_instance" "base" {
  ami           = "ami-0d729a60"
  instance_type = "t2.micro"
}

Resource definition

There can be only one aws_instance named base in your configuration. If you specify
more than one resource of the same type with the same name, you'll see an error like so:
* aws_instance.base: resource repeated multiple times

Your configuration is defined as the scope of what configuration Terraform loads when it runs.
You can have a resource with a duplicate name in another configuration—for example,
another directory of Terraform files.

Terraform Resource Blocks


Terraform uses resource blocks to manage infrastructure, such as virtual networks, compute instances,
or higher-level components such as DNS records. Resource blocks represent one or more
infrastructure objects in your Terraform configuration. Most Terraform providers have a number of
different resources that map to the appropriate APIs to manage that particular infrastructure type.

# Template
<BLOCK TYPE> "<BLOCK LABEL>" "<BLOCK LABEL>" {
  # Block body
  <IDENTIFIER> = <EXPRESSION> # Argument
}

When working with a specific provider, like AWS, Azure, or GCP, the resources are defined in the
provider documentation. Each resource is fully documented with regard to its valid and required
arguments. For example, the aws_key_pair resource has a “Required” argument of public_key but
optional arguments like key_name and tags. You’ll need to look at the provider documentation to
understand which resources are supported and how to define them in your Terraform configuration.
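Following that documentation example, an aws_key_pair might be declared like this (the key material and names are placeholders):

```hcl
resource "aws_key_pair" "deployer" {
  # Required argument
  public_key = "ssh-rsa AAAAB3Nza... user@example.com"   # placeholder public key

  # Optional arguments
  key_name = "deployer-key"
  tags = {
    Team = "ops"
  }
}
```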
Important - Without resource blocks, Terraform is not going to create resources. All of the other block
types, such as variable, provider, terraform, output, etc. are essentially supporting block types for the
resource block.

Terraform Data Block


In Terraform, a `data` block is used to define a data source that can be used to retrieve
information from an external system and make it available to other parts of your Terraform
configuration.

The `data` block is similar to a `resource` block, but instead of creating a new resource in your
infrastructure, it fetches data from an external system, such as an API endpoint or a database.

Here's an example of a `data` block that retrieves information about an AWS VPC:

data "aws_vpc" "example" {
  id = "vpc-1234567890abcdef0"
}

resource "aws_subnet" "example" {
  vpc_id     = data.aws_vpc.example.id
  cidr_block = "10.0.1.0/24"
}

In this example, the `data` block fetches information about an AWS VPC with the ID
`vpc-1234567890abcdef0`. The result of this query is stored under the data source's name,
`example`, and can be referenced elsewhere in the configuration using the syntax
`data.<DATA_SOURCE_TYPE>.<NAME>.<ATTRIBUTE>`.

In the example above, the `id` attribute of the `aws_vpc.example` data source is referenced in
the `vpc_id` parameter of the `aws_subnet.example` resource block. This ensures that the
new subnet is created in the specified VPC.

Overall, the `data` block provides a powerful way to fetch and use data from external systems
in your Terraform configuration. It allows you to easily reference and reuse information from
other parts of your infrastructure, without the need to manually manage and update that
information.

Terraform Data Block


Data sources are used in Terraform to load or query data from APIs or other Terraform
workspaces. You can use this data to make your project’s configuration more flexible, and to
connect workspaces that manage different parts of your infrastructure. You can also use data
sources to connect and share data between workspaces in Terraform Cloud and Terraform
Enterprise.

To use a data source, you declare it using a data block in your Terraform configuration.
Terraform will perform the query and store the returned data. You can then use that data
throughout your Terraform configuration file where it makes sense.

Data Blocks within Terraform HCL are comprised of the following components:
• Data Block - "data" is the top-level keyword that declares a data source.
• Data Type - The next value is the type of the data source. Data source types are always prefixed
with their provider (aws in this case). There can be multiple data sources of the same type in a
Terraform configuration.
• Data Local Name - The next value is the local name of the data source. The data source type and
name together form the data source identifier, for example data.aws_ami.example. The identifier
must be unique for a given configuration, even if multiple files are used.
• Data Arguments - Most of the arguments within the body of a data block are specific to
the selected data source type. The data source type's documentation lists which arguments are
available and how their values should be formatted.

Example: A data block requests that Terraform read from a given data source ("aws_ami") and
export the result under the given local name ("example"). The name is used to refer to this
data source from elsewhere in the same Terraform module.

data "<DATA TYPE>" "<DATA LOCAL NAME>" {
  # Block body
  <IDENTIFIER> = <EXPRESSION> # Argument
}
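
A concrete sketch of the aws_ami data source described above (the owner and filter values are illustrative assumptions):

```
data "aws_ami" "example" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
```

The resulting image ID can then be referenced elsewhere in the module as data.aws_ami.example.id.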

variables
are used to parameterize the configuration and allow values to be passed in at runtime.
Variables are used to make configurations more flexible and reusable, and to enable the same
configuration to be deployed across multiple environments with different values.

Variables can be defined in a number of different ways in Terraform, depending on the


context in which they are used. Some common ways to define variables include:

In a variables.tf file: Variables can be defined in a separate file called variables.tf, which is
typically located in the same directory as the Terraform configuration files. This file defines
the names and types of the variables, but does not assign them any values. For example:

variable "region" {
type = string
}

In a terraform.tfvars file: Variables can be assigned values in a separate file called


terraform.tfvars, which is typically located in the same directory as the Terraform
configuration files. This file assigns values to the variables defined in the variables.tf file. For
example:

region = "us-west-2"

On the command line: Variables can be passed in as command line arguments when running
Terraform commands. For example:

terraform apply -var="region=us-west-2"


In a variable block within a configuration: Variables can be defined within the configuration
files using a variable block. For example:

variable "instance_type" {
type = string
default = "t2.micro"
}

Once variables are defined, they can be referenced in the configuration using the var.<NAME>
syntax, like this:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type
  subnet_id     = "subnet-abc123"
}

In this example, the value of the "instance_type" attribute for the "aws_instance" resource is
set to the value of the "instance_type" variable.

Overall, variables are a key feature in Terraform that allow for greater flexibility and
reusability in defining and managing infrastructure as code. By using variables, configurations
can be easily customized for different environments or use cases, and can be made more
modular and easier to maintain over time.

variables can have several data types


● String: a sequence of characters, such as "hello world".

● Number: a numeric value, such as 42 or -3.14.

● Boolean: a value that is either true or false.

● List: an ordered collection of values, such as ["apple", "banana", "orange"].

● Map: an unordered collection of key-value pairs, such as {"name" = "Alice", "age" = 30}.

● Object: a complex data type that can contain multiple attributes of different data types.

Terraform Variables Block


The value of a Terraform variable can be set multiple ways, including setting a default value,
interactively passing a value when executing a terraform plan and apply, using an
environment variable, or setting the value in a .tfvars file. Each of these different options
follows a strict order of precedence that Terraform uses to set the value of a variable. From
lowest to highest precedence, the sources are: default values, TF_VAR_ environment variables,
the terraform.tfvars file, the terraform.tfvars.json file, any *.auto.tfvars or
*.auto.tfvars.json files (processed in lexical order of their filenames), and finally -var and
-var-file options on the command line (in the order they are provided).

Terraform Input Variables Block


In Terraform, input variables are declared using `variable` blocks (there is no separate
`input` block type) and are used to parameterize your configuration. Input variables pass
values into a Terraform module or configuration at runtime, and can be used to make your
configuration more flexible and reusable.

Here's an example that defines two input variables, `instance_count` and
`instance_type`:

variable "instance_count" {
description = "The number of instances to launch"
type = number
default = 1
}

variable "instance_type" {
description = "The type of instance to launch"
type = string
default = "t2.micro"
}

In this example, the `instance_count` variable is defined as a number type, with a default
value of 1. The `instance_type` variable is defined as a string type, with a default value of
"t2.micro". Both variables also include a `description` attribute, which can be used to provide
additional context or documentation about the variable.

Input variables can be used throughout your configuration to reference dynamic values that
may change over time, such as the number of instances to launch or the type of instance to
use. By using input variables, you can create more flexible and reusable configurations that
can be easily customized to meet your specific needs.

To pass values into these variables at runtime, you can use a number of different approaches,
such as passing them in via the command line, specifying them in a separate input variable
file, or using an environment variable.
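
The three approaches above can be sketched as shell commands (the variable names and values are illustrative):

```
# 1. Pass values on the command line
terraform apply -var="instance_count=3" -var="instance_type=t3.small"

# 2. Load values from a variable definitions file
terraform apply -var-file="prod.tfvars"

# 3. Set an environment variable (prefix the variable name with TF_VAR_)
export TF_VAR_instance_count=3
terraform apply
```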

locals
is a block that allows you to define named values that can be reused throughout your code.
locals is used to store values that are derived from other values or that are calculated based
on a complex expression.

Here's an example of using locals to define a value:

locals {
subnet_cidr_block = "10.0.1.0/24"
}

In this example, the locals block defines a subnet_cidr_block value that can be referenced
later in the Terraform code.

You can also define a locals block that references other values, as shown in this example:

locals {
  subnet_cidr_block = "10.0.1.0/24"
  subnet_prefix     = cidrsubnet(local.subnet_cidr_block, 8, 1)
}
In this example, the subnet_prefix value is derived from the subnet_cidr_block value using the
cidrsubnet function. The cidrsubnet function takes three arguments: the base CIDR block, the
number of additional bits to extend the prefix by, and the network number. In this case, the
prefix is extended by 8 bits (turning the /24 into a /32), and the network number is 1
(selecting the second address in that expanded space, 10.0.1.1/32).

locals can also be used to define values that are calculated based on complex expressions, like
this:

locals {
  web_server_count     = length(var.web_server_instance_types)
  db_server_count      = length(var.db_server_instance_types)
  total_instance_count = local.web_server_count + local.db_server_count
}
In this example, the total_instance_count value is calculated based on the lengths of two lists
(web_server_instance_types and db_server_instance_types). The length function returns the
number of elements in a list.

By using locals to define reusable values, you can make your Terraform code more readable
and easier to maintain.

Terraform Locals Block


Locals blocks (often referred to as locals) are defined values in Terraform that are used to
reduce repetitive references to expressions or values. Locals are very similar to traditional
input variables and can be referred to throughout your Terraform configuration. Locals are
often used to give a name to the result of an expression to simplify your code and make it
easier to read. Locals are not set directly by the user/machine executing the Terraform
configuration, and the values don’t change between or during the Terraform workflow (init,
plan, apply). Locals are defined in a locals block (plural) and include named local variables with
their defined values. Each locals block can contain one or more local variables. Locals are then
referenced in your configuration using local.<name> (note local and not locals). The syntax of a
locals block is as follows:

locals {
# Block body
local_variable_name = <EXPRESSION OR VALUE>
local_variable_name = <EXPRESSION OR VALUE>
}
Terraform Output Values Block

In Terraform, an `output` block is used to define output values that are produced by a
Terraform module or configuration. Output values are used to return information from a
Terraform configuration, and can be used to communicate information to other parts of your
infrastructure or to external systems.

Here's an example of an `output` block that defines an output value for an AWS instance ID:

output "instance_id" {
value = aws_instance.example.id
}
In this example, the `output` block defines a single output value called `instance_id`, which is
derived from the `id` attribute of an AWS instance resource called `aws_instance.example`.
This output value can then be referenced by other parts of your configuration or passed to
external systems.

Output values can be used to provide visibility into the state of your infrastructure and to
communicate information to other parts of your system. They can also be used to enable
integration with external systems or to make it easier to work with Terraform in a larger team
environment.

To retrieve the value of an output variable, you can use the `terraform output` command,
which displays the current value of all defined output variables. Output variables can also be
accessed programmatically using the Terraform API, allowing you to automate the integration
of your Terraform configuration with other systems and processes.
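
Assuming the instance_id output defined above, retrieving output values from the CLI might look like this:

```
# Show all outputs recorded in the current state
terraform output

# Show a single output value by name
terraform output instance_id

# Emit all outputs as JSON for consumption by other tools
terraform output -json
```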

Output Block
Terraform output values allow you to export structured data about your resources. You can
use this data to configure other parts of your infrastructure with automation tools, or as a
data source for another Terraform workspace. Outputs are also necessary to share data from
a child module to your root module.
As with all other blocks in HashiCorp Configuration Language (HCL), the output block has a
particular syntax that needs to be followed when creating output blocks. Each output name
should be unique. The syntax looks like this:

output "<NAME>" {
  # Block body
  value = <EXPRESSION> # Argument
}
Terraform Modules Block
In Terraform, a `module` block is used to define a reusable collection of resources that can be
shared across multiple Terraform configurations. Modules allow you to encapsulate your
infrastructure logic into reusable components, making it easier to maintain and update your
infrastructure over time.

Here's an example of a `module` block that defines a reusable AWS instance module:

module "web_server" {
source = "terraform-aws-modules/ec2-instance/aws"
instance_type = "t2.micro"
ami = "ami-0c55b159cbfafe1f0"
subnet_id = "subnet-12345678"
security_groups = ["sg-12345678"]
}

In this example, the `module` block defines a reusable module called `web_server`, which is
based on the `terraform-aws-modules/ec2-instance/aws` module from the Terraform registry.
The module is configured to launch an AWS EC2 instance with the specified `instance_type`,
`ami`, `subnet_id`, and `security_groups`.

Modules can be used to abstract away the details of infrastructure components, making it
easier to reuse and maintain your infrastructure code. Modules can also be versioned and
shared, allowing you to easily collaborate with others and benefit from the work of the wider
Terraform community.

To use a module in your Terraform configuration, you can reference it using its `source`
attribute, which specifies the module location. You can also pass variables into the module
using its input variables, and retrieve outputs from the module using its output values.

Module Block
A module is used to combine resources that are frequently used together into a reusable
container. Individual modules can be used to construct a holistic solution required to deploy
applications. The goal is to develop modules that can be reused in a variety of different ways,
therefore reducing the amount of code that needs to be developed. Modules are called by a
parent or root module, and any modules called by the parent module are known as child
modules.
Modules can be sourced from a number of different locations, including remote, such as the
Terraform module registry, or locally within a folder. While not required, local modules are
commonly saved in a folder named modules, and each module is named for its respective
function inside that folder. The syntax of a module block is as follows:

module "<MODULE_NAME>" {
  # Block body
  source = <MODULE_SOURCE>
  <INPUT_NAME> = <INPUT_VALUE> # Inputs
  <INPUT_NAME> = <INPUT_VALUE> # Inputs
}

Commenting Terraform Code


To make our code easier to understand for others who might want to contribute, we may want
to add comments explaining what a resource or a particular code block is doing. The
Terraform language supports three different syntaxes for comments:

# begins a single-line comment, ending at the end of the line.

// also begins a single-line comment, as an alternative to #.

/* and */ are start and end delimiters for a comment that might span over
multiple lines.
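
All three comment syntaxes shown together in one sketch (the resource values are reused from earlier examples for illustration):

```
# A single-line comment introducing the resource
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" // an end-of-line comment
  instance_type = "t2.micro"

  /*
    A multi-line comment that can span
    several lines, e.g. to explain a block in detail.
  */
}
```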

3a) Install and version Terraform providers


Install Terraform AWS Provider
Terraform Providers are plugins that implement resource types for particular clouds, platforms and
generally speaking any remote system with an API. Terraform configurations must declare which
providers they require, so that Terraform can install and use them. Popular Terraform Providers
include: AWS, Azure, Google Cloud, VMware, Kubernetes and Oracle.
In the next step we will install the Terraform AWS provider, and set the provider version in a way that
is very similar to how you did for Terraform. To begin you need to let Terraform know to use the
provider through a required_providers block in the terraform.tf file as seen below.

terraform {
required_version = ">= 1.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}

terraform version: Run this command to check the Terraform version.

To install and version Terraform providers, follow these steps:

1. Declare the required provider in your configuration using a required_providers block inside
the terraform block (e.g. source = "hashicorp/aws").

2. If you want to use a specific version of the provider, specify it using the `version`
argument (e.g. `version = "3.5.0"`), or use a version constraint such as `~> 3.0`.

3. Initialize the configuration by running `terraform init` in your project directory.
Terraform downloads and installs the declared providers and records the selected versions in
the dependency lock file (.terraform.lock.hcl).

4. Once you have the provider installed and configured, you can start defining resources and
applying changes to your infrastructure.

5. To update the provider to a newer version, adjust the version constraint if necessary and
run `terraform init -upgrade`. Terraform then selects the newest versions allowed by the
constraints and updates the lock file.

Note that Terraform automatically downloads and installs the required provider plugins when
you run `terraform init`. If you change a provider's version constraint, run
`terraform init -upgrade` so that Terraform can select a version other than the one recorded
in the lock file.
Also, it's important to periodically check for updates to the providers you're using, as new
versions may include bug fixes, security updates, and new features.

Install the Terraform AWS Provider


To install the Terraform AWS provider, set the provider version in a way that is very
similar to how you set the required Terraform version. To begin, you need to let Terraform
know to use the provider through a required_providers block in the terraform.tf file, as seen
below.
terraform.tf

terraform {
required_version = ">= 1.0.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
random = {
source = "hashicorp/random"
version = "3.1.0"
}
}
}
Run a terraform init to install the providers specified in the configuration

terraform init

View installed and required providers


If you ever would like to know which providers are installed in your working directory and those
required by the configuration, you can issue a terraform version and terraform providers
command.

terraform version

terraform providers

resource from random provider


a "random provider" is a provider that generates random values. The random provider can be
used to generate random strings, numbers, Booleans, and more.

The "resource" from the random provider is a Terraform resource that generates a specific
type of random value. For example, the random_string resource from the random provider
generates a random string of a specified length, while the random_integer resource generates
a random integer within a specified range.

These resources are useful for generating unique identifiers, random passwords, or other
values that need to be unpredictable or difficult to guess. They can also be used to simulate
random events or conditions in a test environment. To use these resources, you need to
define them in your Terraform configuration file and specify any required parameters, such as
the length of the random string or the range of the random integer.
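
A sketch of the random_string resource described above (the argument choices and the bucket-name usage are illustrative assumptions):

```
resource "random_string" "suffix" {
  length  = 8      # required: how many characters to generate
  special = false  # optional: exclude special characters
  upper   = false  # optional: exclude uppercase letters
}

# The generated value is exposed via the result attribute,
# e.g. to build a globally unique name
output "bucket_name" {
  value = "my-app-${random_string.suffix.result}"
}
```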

3b) Describe plugin-based architecture


Terraform relies on plugins called “providers” to interact with remote systems and expand
functionality. Terraform configurations must declare which providers they require so that
Terraform can install and use them. This is performed within a Terraform configuration block.


Terraform has a plug-in based architecture, which means that it is designed to work with a
variety of plug-ins, each of which provides support for a specific infrastructure provider. This
architecture allows Terraform to support a wide range of cloud providers, as well as other
infrastructure services.

Each plug-in is responsible for translating Terraform configuration files into the API calls
necessary to create and manage resources in the target infrastructure provider. Plug-ins are
written in Go, and each one is compiled into its own binary that Terraform downloads
separately and runs as a child process alongside the Terraform executable.

When Terraform runs, it dynamically loads the plug-ins that are required for the specific
configuration files being used. This means that you only need to install the plug-ins that you
are actually using, which can help reduce the size of your Terraform installation and improve
performance.

The plug-in architecture also makes it easy to extend Terraform with new features or support
for new providers. Anyone can write a plug-in for Terraform, and there is a growing ecosystem
of third-party plug-ins that provide support for a wide range of providers and services. This
makes Terraform a highly flexible and extensible tool for managing cloud infrastructure.

3c) Write Terraform configuration using multiple providers


A Terraform configuration that uses multiple providers is a configuration that manages
infrastructure across multiple cloud providers or services, using more than one provider block
in the configuration.

provider "aws" {
access_key = "ACCESS_KEY"
secret_key = "SECRET_KEY"
region = "us-east-1"
}

provider "azurerm" {
features {}
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = "subnet-12345678"
}

resource "azurerm_resource_group" "example" {
  name     = "example-resource-group"
  location = "East US"
}

output "aws_instance_id" {
value = aws_instance.example.id
}

output "azurerm_resource_group_id" {
value = azurerm_resource_group.example.id
}
In this example, we are using two providers: aws and azurerm. The aws provider is used to
create an EC2 instance in the us-east-1 region, while the azurerm provider is used to create an
Azure resource group in the East US region.

Note that the provider blocks at the beginning of the configuration specify the authentication
information for each provider, including access keys and secrets for the aws provider
(hardcoded here for illustration; in practice, prefer environment variables or shared
credentials files), and feature flags for the azurerm provider.

Each resource block specifies which provider to use by prefixing the resource type with the
provider name, such as aws_instance and azurerm_resource_group.

Finally, the output blocks at the end of the configuration display the IDs of the created
resources, using the id attribute of each resource.
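
Multiple configurations of the same provider are also possible; a sketch using a provider alias to deploy into two AWS regions (region names are illustrative, and a real configuration would use a region-specific AMI for each):

```
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Uses the default (us-east-1) provider configuration
resource "aws_instance" "east_app" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

# Explicitly selects the aliased (us-west-2) configuration
resource "aws_instance" "west_app" {
  provider      = aws.west
  ami           = "ami-0123456789abcdef0" # placeholder for a us-west-2 AMI
  instance_type = "t2.micro"
}
```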

3d) Describe how Terraform finds and fetches providers


Terraform uses a provider system to interact with various infrastructure platforms and
services, such as cloud providers, databases, and monitoring tools. Providers are essentially
plugins that extend the functionality of Terraform, allowing it to manage resources across
multiple infrastructure platforms.

When you define a provider in your Terraform configuration file, Terraform needs to find and
fetch the provider's binary or plugin to execute the operations. Here's how Terraform finds
and fetches providers:

1. Provider Configuration: You specify which provider to use in your Terraform configuration
file using the `provider` block. The block specifies the name of the provider and any required
authentication details, such as access keys, secrets, or credentials.

2. Provider Registry: Terraform maintains a public registry of providers (registry.terraform.io)
that it can download and install from automatically. You can browse and search for providers in
the registry; the `terraform init` command then downloads the ones your configuration declares.

3. Provider Installation: When you run `terraform init`, Terraform downloads the necessary
providers from the registry and installs them in a hidden directory called `.terraform`
(in current Terraform versions, under `.terraform/providers`). Providers can also be installed
manually from a local filesystem mirror, such as `~/.terraform.d/plugins`.

4. Provider Caching: Once a provider is installed, Terraform can cache it locally to avoid
downloading it again in future runs, by configuring a plugin cache directory via the
`plugin_cache_dir` CLI setting or the TF_PLUGIN_CACHE_DIR environment variable.

5. Provider Upgrades: Terraform checks for newer provider versions when you run
`terraform init -upgrade`. If a newer version is allowed by your version constraints, Terraform
installs it and updates the dependency lock file; otherwise, the currently recorded version is
kept.

Overall, Terraform's provider system is designed to be flexible and extensible, allowing users
to easily integrate with different infrastructure platforms and services, and enabling provider
developers to maintain and distribute their providers through a central registry.

Fetch, Version and Upgrade Terraform Providers


Terraform relies on plugins called "providers" to interact with remote systems and expand
functionality. Terraform providers can be versioned inside a Terraform configuration block. To
prevent external changes from causing unintentional changes, it's highly recommended that
configurations specify the provider versions they are tied to. Depending on the level of
acceptable risk and the management effort of tracking version updates, a provider can either be
hard locked to a particular version, constrained to stay below the next major release, or
allowed to track bug-fix releases using the pessimistic (~>) operator.
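
The pinning options above, sketched as required_providers version constraints:

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # version = "3.46.0"    # hard lock to one exact version
      # version = "~> 3.0"    # any 3.x release, but not 4.0
      version   = "~> 3.1.0"  # track bug-fix releases only: >= 3.1.0, < 3.2.0
    }
  }
}
```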

How to upgrade provider versions


To upgrade provider versions in Terraform, you can use the `terraform init` command with the
`-upgrade` option. Here are the steps:

1. Review your version constraints: Make sure the constraints in your required_providers block
allow the version you want to move to (for example, loosen version = "3.45.0" to
version = "~> 3.0").

2. Run the upgrade: Run `terraform init` with the `-upgrade` option. Terraform selects the
newest version of each provider allowed by your constraints, downloads it, and updates the
dependency lock file (.terraform.lock.hcl).

$ terraform init -upgrade

3. Review the result: The init output reports which provider versions were installed, for
example:

- Finding hashicorp/aws versions matching "~> 3.0"...
- Installing hashicorp/aws v3.46.0...
- Installed hashicorp/aws v3.46.0

Review the changes to .terraform.lock.hcl and commit them to version control.

4. Apply the changes: After the new provider versions are installed, verify that your
configuration still plans cleanly and then apply it to your infrastructure.

$ terraform plan
$ terraform apply

Note that upgrading provider versions may introduce breaking changes, so it's important to
review the provider release notes and test your infrastructure thoroughly after the upgrades.
Additionally, upgrading providers may require you to update your Terraform configuration file
with new or changed provider-specific arguments.

Terraform Provisioners
Provisioners can be used to model specific actions on the local machine or on a remote
machine in order to prepare servers or other infrastructure objects for service.
To this point, the EC2 web server we have created is useless: we created a server with no
running code and no useful services on it.
We will utilize Terraform provisioners to deploy a webapp onto the instance we've created. In
order to run these steps, Terraform needs a connection block, along with our generated SSH key
from the previous labs, in order to authenticate into our instance. Terraform can utilize both
the local-exec provisioner to run commands on our local workstation, and the remote-exec
provisioner to install security updates along with our web application.
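
A sketch of the pattern described above; the key path, user, and install commands are illustrative assumptions, not values from the labs:

```
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  key_name      = "my-key" # assumed key pair created in earlier labs

  # Connection details Terraform uses to authenticate for remote-exec
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("~/.ssh/my-key.pem")
    host        = self.public_ip
  }

  # Runs on the remote instance after it is created
  provisioner "remote-exec" {
    inline = [
      "sudo yum -y update",
      "sudo yum -y install httpd",
      "sudo systemctl enable --now httpd",
    ]
  }

  # Runs on the local workstation
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> provisioned_ips.txt"
  }
}
```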

4 Use Terraform outside of core workflow


Terraform is a powerful tool for managing infrastructure as code, but it can also be used
outside of the core workflow to automate other tasks or integrate with other systems. Here
are some examples:

1. Data processing and analysis: Terraform can be used to automate data processing and
analysis tasks by provisioning and configuring computing resources like EC2 instances, running
scripts, and performing other tasks needed for the analysis.
2. Continuous integration and deployment (CI/CD): Terraform can be integrated with CI/CD
tools like Jenkins or CircleCI to automate the creation and destruction of infrastructure for
testing and deploying code changes.

3. Disaster recovery: Terraform can be used to automate disaster recovery processes by
provisioning and configuring resources in a secondary region or cloud provider in case of a
primary region or provider outage.

4. Resource tagging: Terraform can be used to automatically apply resource tags to
infrastructure resources based on specific criteria, such as resource type, environment, or cost
center.

5. Compliance and security: Terraform can be used to enforce security policies and
compliance regulations by automatically provisioning and configuring infrastructure resources
with the necessary security measures and compliance standards.

When using Terraform outside of the core workflow, it's important to ensure that the
Terraform code is properly version controlled and tested, just like any other codebase.
Additionally, be aware of any potential security or compliance risks and take steps to mitigate
them.

Terraform Taint and Replace


`terraform taint` and the `-replace` planning option are two mechanisms in Terraform for
forcing a specific resource to be destroyed and recreated.

- `terraform taint` is used to mark a resource as "tainted" in the Terraform state. This means
that Terraform will consider the resource as having been modified or degraded outside of
Terraform and will plan to destroy and recreate it on the next `terraform apply` run. You can
use `terraform taint` when you need to recreate a resource because of changes that have
happened outside of Terraform, such as manual updates or configuration changes.

Here's an example of using `terraform taint`:

```
$ terraform taint aws_instance.example
Resource instance aws_instance.example has been marked as tainted.
```
- Since Terraform v0.15.2, the recommended alternative is the `-replace` option of
`terraform plan` and `terraform apply`. Instead of modifying the state ahead of time,
`-replace` tells Terraform to plan the replacement of the given resource as part of a normal
run, so you can review the proposed change before applying it.

Here's an example of using `-replace`:

```
$ terraform apply -replace="aws_instance.example"
```

Note that there is no standalone `terraform replace` command; if you want to move or rename a
resource within the state without recreating it, use `terraform state mv` instead. Also, be
careful when using `terraform taint` and `-replace`, as they cause resources to be recreated,
which may result in downtime or data loss if not managed carefully.

4a) Describe when to use terraform import to import existing infrastructure into your
Terraform state

`terraform import` is a command in Terraform that allows you to import existing
infrastructure resources into your Terraform state. This is useful when you have resources
that were created outside of Terraform and you want to manage them using Terraform.

Here are some scenarios where you may want to use `terraform import`:

1. You have existing resources that you want to manage using Terraform: If you have
infrastructure resources that were created outside of Terraform and you want to manage
them using Terraform, you can use `terraform import` to import them into your Terraform
state. This will allow you to manage those resources using Terraform, apply configuration
changes, and use Terraform to plan infrastructure updates.

2. You have made manual changes to resources that were created using Terraform: If you
have made manual changes to resources that were originally created using Terraform, you can
use `terraform import` to bring those resources back under Terraform management. This will
allow you to manage those resources using Terraform going forward and keep track of
changes made to them.

3. You are migrating from a different infrastructure-as-code tool: If you are migrating from a
different infrastructure-as-code tool to Terraform, you may want to use `terraform import` to
bring your existing resources into your Terraform state. This will allow you to continue
managing those resources using Terraform going forward.

It's important to note that `terraform import` can be a complex and error-prone process, and
requires careful attention to detail. You must provide Terraform with the necessary resource
identifiers and ensure that the resource configuration in your Terraform code matches the
existing resource. Additionally, `terraform import` does not create new resources, it only
imports existing ones into the Terraform state, so you will need to make sure that any
dependencies or associated resources are also managed by Terraform.
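
For example, to bring an existing EC2 instance under Terraform management, you would first write a resource block whose configuration matches the instance, then run `terraform import` with the resource address and the instance ID. The AMI, instance ID, and resource name below are placeholders, not real values:

```
resource "aws_instance" "example" {
  # Arguments should match the real instance's current settings
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
}
```

```
$ terraform import aws_instance.example i-1234567890abcdef0
```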

Terraform Import
We’ve already seen many benefits of using Terraform to build out our cloud infrastructure.
But what if there are existing resources that we’d also like to manage with Terraform?

Enter terraform import.

With minimal coding and effort, we can add our resources to our configuration and bring them into
state

Terraform Workspaces – OSS


Those who adopt Terraform typically want to leverage the principles of DRY (Don’t Repeat Yourself)
development practices. One way to adopt this principle with respect to IaC is to utilize the same
code base for different environments (development, quality, production, etc.)
Workspaces is a Terraform feature that allows us to organize infrastructure by environments and
variables in a single directory. Terraform is based on a stateful architecture and therefore stores
state about your managed infrastructure and configuration. This state is used by Terraform to map
real world resources to your configuration, keep track of metadata, and to improve performance for
large infrastructures. The persistent data stored in the state belongs to a Terraform workspace.
Initially the backend has only one workspace, called “default”, and thus there is only one Terraform
state associated with that configuration.

4b) Use terraform state to view Terraform state


`terraform state` is a command in Terraform that allows you to view and manage the state of
your infrastructure resources. Here are some examples of how to use `terraform state`:
1. View the current state of your resources: You can use `terraform state list` to list all
resources currently managed by Terraform, and then use `terraform state show <resource-name>`
to view the details of a specific resource. This can be useful for checking the current
state of your infrastructure resources, including any changes that have been made outside of
Terraform.

2. Update the provider for resources in state: You can use `terraform state replace-provider` to
replace the provider recorded for resources in your state file. This can be useful when a
provider has changed its source address, for example after moving to a new registry namespace.

3. Remove a resource from the Terraform state: If you want to remove a resource from the
Terraform state file, you can use `terraform state rm <resource-name>`. This will remove the
resource from the state file, but will not destroy the actual resource.

4. Inspect or import state: You can use `terraform state pull` to download and output the
current state from the configured backend, or use the separate `terraform import` command to
import an existing resource into the Terraform state file. This can be useful when you want to
start managing an existing resource using Terraform.

5. Clean up your state file: If you have unused resources in your Terraform state file, you can
use `terraform state rm` to remove them. Additionally, you can use `terraform state mv` to
move resources from one module to another, or rename resources in your state file.

It's important to use `terraform state` with care, as any changes made to the state file can
have a significant impact on your infrastructure resources. Always back up your state file
before making any changes, and use caution when removing or modifying resources in your
state file.
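
A typical state inspection session using these subcommands might look like the following sketch (the resource addresses shown are hypothetical):

```
$ terraform state list
aws_instance.example
aws_security_group.example_sg

$ terraform state show aws_instance.example

$ terraform state rm aws_security_group.example_sg
```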

Terraform State Command


The terraform state command is used for advanced state management. As your Terraform
usage becomes more advanced, there are some cases where you may need to modify the
Terraform state. Rather than modify the state directly, the terraform state commands can be
used in many cases instead.

4c) Describe when to enable verbose logging and what the outcome/value is
Verbose logging can be enabled in Terraform to get more detailed information about what
Terraform is doing behind the scenes. This can be useful in several scenarios:
1. Debugging Terraform errors: When Terraform encounters an error, enabling verbose
logging can provide additional information that can help you troubleshoot and resolve the
error. The verbose output will show the exact steps Terraform is taking and which resource is
causing the error.

2. Analyzing Terraform performance: Enabling verbose logging can also help you analyze
Terraform performance and identify any slow or problematic resources. The verbose output
will show how much time Terraform is spending on each resource and which resources are
taking the longest to create or modify.

3. Understanding Terraform behavior: Verbose logging can provide a more detailed
understanding of how Terraform works and what it is doing. This can be useful for learning
how to use Terraform more effectively or for gaining a deeper understanding of the
underlying infrastructure resources.

The outcome/value of enabling verbose logging is that it provides a more detailed picture of
what Terraform is doing behind the scenes. This can be helpful in identifying errors, analyzing
performance, and understanding Terraform behavior. However, verbose logging can produce
a lot of output, so it is important to use it judiciously and only when needed. Additionally,
verbose logging can slow down Terraform performance, so it is recommended to disable it
after you have identified and resolved the issue you are investigating. To enable verbose
logging in Terraform, set the `TF_LOG` environment variable to a level such as `DEBUG` or
`TRACE` before running Terraform commands.

Enable Logging
Terraform has detailed logs which can be enabled by setting the TF_LOG environment variable to
any value. This will cause detailed logs to appear on stderr.
You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or ERROR to change the
verbosity of the logs, with TRACE being the most verbose.

Linux
export TF_LOG=TRACE

PowerShell
$env:TF_LOG="TRACE"

Enable Logging Path


To persist logged output you can set TF_LOG_PATH in order to force the log to always be
appended to a specific file when logging is enabled. Note that even when TF_LOG_PATH is set,
TF_LOG must be set in order for any logging to be enabled.
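
For example, to capture the most verbose logs and append them to a file, you can set both variables together (the log file path here is an arbitrary choice):

```shell
# Enable the most verbose log level and persist output to a file
export TF_LOG=TRACE
export TF_LOG_PATH=./terraform.log
```
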
Disable Logging
By default, Terraform produces only its normal console output; detailed logging is off unless
the TF_LOG environment variable is set. In some cases, you may want to make sure detailed
logging stays disabled, to reduce the amount of output or to prevent sensitive information
from being logged.

To disable detailed logging in Terraform, unset the `TF_LOG` environment variable (or set it
to an empty value). This will prevent Terraform from writing trace/debug log output to the
console or log file.

Here are the steps to disable logging in Terraform:

1. Open a terminal window or command prompt.

2. Unset the `TF_LOG` environment variable using the command:

unset TF_LOG

This command removes the `TF_LOG` environment variable for the current terminal session. If
you are using Windows PowerShell, the equivalent is `$env:TF_LOG=""`.

3. Run your Terraform command as usual.

With `TF_LOG` unset, Terraform will not produce detailed logging output to the console or
log file. If you want to re-enable logging, you can set `TF_LOG` to a log level such as
`INFO`, `DEBUG`, or `TRACE`.

It's important to note that disabling logging can make it harder to troubleshoot errors or
diagnose issues with your infrastructure. It is recommended to use logging when possible and
to only disable it when necessary.

5 Interact with Terraform modules


In Terraform, modules are reusable packages of Terraform configuration that can be used to
manage infrastructure resources. Interacting with Terraform modules involves using the
module in your Terraform configuration, passing input variables to the module, and retrieving
output values from the module.

Here are the steps to interact with Terraform modules:


1. Define the module in your Terraform configuration. To use a module in your Terraform
configuration, you first need to define the module block, which specifies the location of the
module and any input variables that the module requires. For example:

module "example" {
  source = "github.com/example/modules//module_name"
  var1   = "value1"
  var2   = "value2"
}

This block defines a module named "example" that is located in a GitHub repository. The
module requires two input variables, `var1` and `var2`, which are set to the values "value1"
and "value2", respectively.

2. Run `terraform init`. After defining the module in your Terraform configuration, you need to
run `terraform init` to download the module and any dependencies.

3. Use the module in your Terraform configuration. Once you have defined the module and
run `terraform init`, you can use the module in your Terraform configuration by referencing
the module name and any output values that you want to use. For example:

resource "aws_instance" "example_instance" {
  ami           = module.example.ami_id
  instance_type = module.example.instance_type
  subnet_id     = module.example.subnet_id
}

This block creates an AWS EC2 instance using values from the "example" module. The AMI ID,
instance type, and subnet ID are retrieved from the output values of the module.

4. Pass input variables to the module. Input variables are used to customize the behavior of a
module. To pass input variables to a module, you can specify the variable values in the
module block. For example:

module "example" {
  source = "github.com/example/modules//module_name"
  var1   = "value1"
  var2   = "value2"
  var3   = "value3"
}

This block sets three input variables for the "example" module: `var1`, `var2`, and `var3`.

5. Retrieve output values from the module. Output values are used to provide information
from a module to the rest of your Terraform configuration. To retrieve output values from a
module, you can reference the module name and the output variable name. For example:

resource "aws_security_group_rule" "example_rule" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = module.example.security_group_id
}

This block creates an AWS security group rule that allows inbound traffic on port 80. The
security group ID is retrieved from the output value `security_group_id` of the "example"
module.

These are the basic steps to interact with Terraform modules. By defining modules, passing
input variables, and retrieving output values, you can easily reuse and share Terraform code
across multiple projects and teams.

Terraform Modules
Terraform configuration can be separated out into modules to better organize your
configuration. This makes your code easier to read and reusable across your organization. A
Terraform module is very simple: any set of Terraform configuration files in a folder is a
module. Modules are the key ingredient to writing reusable and maintainable Terraform
code. Complex configurations, team projects, and multi-repository codebases will benefit
from modules. Get into the habit of using them wherever it makes sense.

Terraform Module Sources


Modules can be sourced from a number of different locations, including both local and remote
sources. The Terraform Module Registry, HTTP urls and S3 buckets are examples of remote sources,
while folders and subfolders are examples of local sources. Support for various module sources
allow you to include Terraform configuration from a variety of locations while still providing proper
organization of code.
5a) Contrast and use different module source options
including the public Terraform Module Registry
In Terraform, there are different ways to specify the source of a module that you want to use
in your Terraform configuration. The most common module sources are:

1. Local path: You can specify the path to a local directory that contains the module. This is
useful for developing modules that are specific to your organization or project.

Example:

module "example" {
  source = "./modules/example"
}

2. Git repository: You can specify the URL of a Git repository that contains the module. This is
useful for sharing modules across different projects or organizations.

Example:

module "example" {
  source = "git::https://github.com/example/modules.git//example"
}

3. Terraform Registry: You can specify the name of a module in the public Terraform Module
Registry. This is useful for discovering and using modules that are created and maintained by
the community.

Example:

module "example" {
  source = "terraform-aws-modules/vpc/aws"
}

Using the Terraform Module Registry involves a few extra steps:

- Make sure you have the latest version of the Terraform CLI installed.
- Run `terraform init` to initialize your Terraform configuration and download the necessary
providers and modules.
- Public registry modules can be downloaded without authentication; only modules in a private
registry (for example, in Terraform Cloud) require you to log in or configure an API token.
When you use a module from the Terraform Module Registry, Terraform will automatically
download the module and any required dependencies. You can also specify the version of the
module that you want to use, or use a dynamic version constraint to always use the latest
compatible version.

Example:

module "example" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.0.0"
}

In summary, the different module source options allow you to reuse and share Terraform
code across different projects and organizations. You can use local paths for development, Git
repositories for sharing across projects, and the public Terraform Module Registry for
discovering and using community-maintained modules.

5b) Interact with module inputs and outputs


When you use a module in your Terraform configuration, you can interact with its inputs and
outputs. Inputs are values that you pass to the module when you use it, while outputs are
values that the module returns after it has been applied.

To use a module's inputs, you can declare a variable in your configuration and pass it to the
module as an argument inside the `module` block:

variable "region" {
  type    = string
  default = "us-west-1"
}

module "example" {
  source = "./modules/example"
  region = var.region
}

In this example, we declare a variable named `region` with a default value of `us-west-1`. We
then pass this variable to the module using the `region` input.
To use a module's outputs, you can reference them using the `module` block:

module "example" {
  source = "./modules/example"
  region = var.region
}

output "instance_id" {
  value = module.example.instance_id
}

In this example, we declare an output named `instance_id` that references the `instance_id`
output of the `example` module. When you run `terraform apply`, Terraform will first apply
the `example` module, and then output its `instance_id`.

You can also use module outputs as inputs to other modules. For example:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.0.0"
  # ...
}

module "example" {
  source = "./modules/example"
  vpc_id = module.vpc.vpc_id
}

In this example, we use the `vpc_id` output of the `vpc` module as an input to the `example`
module. This allows the `example` module to create resources that are attached to the VPC
created by the `vpc` module.

In summary, to interact with a module's inputs and outputs, you can declare variables and
outputs in your configuration, and reference them in the `module` block. You can also use
module outputs as inputs to other modules.
Terraform Modules Inputs and Outputs
To make a Terraform module configurable you can add input parameters to the module. These are
defined within the module using input variables. A module can also return values to the
configuration that called the module. These module returns or outputs are defined using terraform
output blocks.
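
Inside the module itself, these inputs and outputs could be declared as follows. This is a minimal sketch: the variable and output names, and the `aws_instance.this` resource the output refers to, are illustrative assumptions rather than anything defined in these notes:

```
# modules/example/variables.tf
variable "region" {
  type        = string
  description = "Region in which the module creates resources"
}

# modules/example/outputs.tf
output "instance_id" {
  description = "ID of the instance created by this module"
  value       = aws_instance.this.id # assumes the module defines this resource
}
```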

5c) Describe variable scope within modules/child modules


In Terraform, variables have a scope that determines where they can be accessed and used.
When working with modules and child modules, the scope of a variable is determined by its
definition location and visibility.

Variables declared within a module are only visible within that module and cannot be
accessed outside of it. This is known as module-local scope. However, a module can expose
some of its variables to be used by other modules or the root configuration using outputs.

Child modules can access variables declared in their parent module, but not vice versa. This is
known as child-module scope. The variables that are declared in the parent module are
passed as inputs to the child module when it is instantiated. The child module can then access
these variables within its own scope using the `var` syntax.

For example, consider the following module structure:

```
module "parent" {
  source    = "./modules/parent"
  variable1 = "foo"
}

module "child" {
  source           = "./modules/child"
  variable2        = "bar"
  parent_variable1 = module.parent.variable1
}
```

In this example, the `parent` module is called with an input named `variable1`. The `child`
module is called with an input named `variable2` and also receives `parent_variable1` via the
expression `module.parent.variable1`; note that this reference only works if the `parent`
module exposes `variable1` as an output.
It's important to note that when working with complex module structures that include
multiple levels of nesting, it can become difficult to manage the scope of variables. To avoid
potential naming conflicts, it's a good practice to use unique and descriptive names for
variables, as well as to avoid using global variables. Additionally, it's important to carefully
define the inputs and outputs of each module to ensure that they are used correctly and to
avoid unexpected errors.
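
For the example above to work, the child module must declare each value it receives as an input variable. A sketch of what `./modules/child/variables.tf` might contain (the names simply mirror the example):

```
variable "variable2" {
  type = string
}

variable "parent_variable1" {
  type = string
}
```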

Resources within Child Modules


In principle any combination of resources and other constructs can be factored out into a
module, but over-using modules can make your overall Terraform configuration harder to
understand and maintain, so we recommend moderation. A good module should raise the
level of abstraction by describing a new concept in your architecture that is constructed from
resource types offered by providers.
Let’s take a closer look at the auto scaling group module that we are calling to further
understand which resources are used to construct the module. Inside our main.tf we can see
that the autoscaling module we are calling is being sourced from the Terraform Public Module
registry, and we are passing 10 inputs into the module.

Terraform Module Scope


Deciding what infrastructure to include is the one of the most challenging aspects about
creating a new Terraform module.
Modules should be opinionated and designed to do one thing well. If a module’s function or
purpose is hard to explain, the module is probably too complex. When initially scoping your
module, aim for small and simple to start.

Terraform Modules - Public Module Registry Hashicorp maintains a public registry that helps
you to consume Terraform modules from others. The Terraform Public Registry is an index of
modules shared publicly and is the easiest way to get started with Terraform and find
modules created by others in the community. It includes support
for module versioning and searchable and filterable list of available modules for quickly
deploying common infrastructure configurations.
Modules on the public Terraform Registry can be sourced using a registry source address of
the form <NAMESPACE>/<NAME>/<PROVIDER>, with each module’s information page on the
registry site including the exact address to use.
Notes about building Terraform Modules

When building a module, consider three areas:

• Encapsulation: Group infrastructure that is always deployed together. Including more
infrastructure in a module makes it easier for an end user to deploy that infrastructure but
makes the module’s purpose and requirements harder to understand
• Privileges: Restrict modules to privilege boundaries. If infrastructure in the module is the
responsibility of more than one group, using that module could accidentally violate
segregation of duties. Only group resources within privilege boundaries to increase
infrastructure segregation and secure your infrastructure
• Volatility: Separate long-lived infrastructure from short-lived. For example, database
infrastructure is relatively static while teams could deploy application servers multiple times a
day.
Managing database infrastructure in the same module as application servers exposes
infrastructure that stores state to unnecessary churn and risk.

A simple way to get start with creating modules is to:

• Always aim to deliver a module that works for at least 80% of use cases.
• Never code for edge cases in modules. An edge case is rare. A module should be a reusable
block of code.
• A module should have a narrow scope and should not do multiple things.
• The module should only expose the most commonly modified arguments as variables.
Initially, the module should only support variables that you are most likely to need.

5d) Set module version


In Terraform, you can specify the version of a module using the `version` argument in the
`module` block.

There are different ways to specify the module version:

1. **Version constraint strings:** You can specify a version range or a specific version using a
constraint string in the `version` argument (supported for registry modules). For example, to use
version `1.2.3`, you can set `version = "1.2.3"`. To use any version in the `1.2.x` range, you
can set `version = "~> 1.2.0"`.

2. **Git sources with a `ref` parameter:** For modules sourced from a Git repository, you can pin
a specific version by appending `?ref=<tag>` to the source URL. For example, to use the tag
`v1.2.3` of a module hosted on GitHub, you can set:

module "example" {
  source = "git::https://github.com/example/modules.git//example?ref=v1.2.3"
}

Note that the `source` argument (and the `version` argument) must be a literal string: Terraform
does not allow variables or other expressions there, so a module version cannot be parameterized
through an input variable.

It's important to keep in mind that specifying a version for a module is important for
maintaining reproducibility and stability in your infrastructure. By using a specific version, you
can ensure that your infrastructure always uses the same version of the module, which can
help prevent unexpected changes or issues in your environment.

Terraform Module Versions


Modules, like any piece of code, are never complete. There will always be new module
requirements and changes.
Each distinct module address has associated with it a set of versions, each of which has an
associated version number. Terraform assumes version numbers follow the Semantic
Versioning 2.0 convention. Each module block may select a distinct version of a module, even
if multiple blocks have the same source address.
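
Because each module block selects its own version, the same source address can be pinned differently in two places. The module names and version numbers below are illustrative:

```
module "network_stable" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"
}

module "network_next" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.0.0"
}
```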

6 Use the core Terraform workflow


The core Terraform workflow consists of the following steps:

1. **Initialize**: This step initializes a new or existing Terraform working directory. It
downloads the required providers and modules and prepares the backend to store the state
file. To initialize the working directory, you can run the `terraform init` command.

2. **Configuration**: This step involves defining the desired state of your infrastructure using
Terraform configuration files. This involves specifying resources, variables, and other settings
that define your infrastructure. You can use variables to define your configuration in a more
flexible and reusable way.

3. **Planning**: In this step, Terraform creates an execution plan that describes what actions
it will take to achieve the desired state of your infrastructure. The plan includes the creation,
modification, or destruction of resources. To generate a plan, you can run the `terraform plan`
command.

4. **Execution**: This step involves applying the changes described in the execution plan.
Terraform will create, modify, or destroy resources as necessary to bring the infrastructure to
the desired state. To apply the changes, you can run the `terraform apply` command.

5. **Review**: After applying the changes, you should review the state of your infrastructure
to ensure that it is in the expected state. You can use the `terraform show` command to
display the current state.

6. **Destroy**: If you want to remove the resources created by Terraform, you can use the
`terraform destroy` command. This step should be used with caution as it will remove all
resources created by Terraform and cannot be undone.

This workflow allows you to define and manage your infrastructure as code, which makes it
easier to maintain, reproduce, and scale. It also provides version control and auditability for
your infrastructure, which can help ensure that it meets compliance and security
requirements.
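
Put together, the workflow maps to a short command sequence (run from a directory containing your configuration files; output omitted):

```
$ terraform init      # download providers/modules, prepare the backend
$ terraform plan      # preview the execution plan
$ terraform apply     # apply the planned changes
$ terraform show      # inspect the resulting state
$ terraform destroy   # remove everything (use with caution)
```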

6a) Describe Terraform workflow ( Write -> Plan -> Create )

Terraform Workflow

The core Terraform workflow has three steps:


1. Write - Author infrastructure as code.
2. Plan - Preview changes before applying.
3. Apply - Provision reproducible infrastructure.

The command line interface to Terraform is via the terraform command, which accepts a
variety of subcommands such as terraform init or terraform plan. The Terraform command
line tool is available
for MacOS, FreeBSD, OpenBSD, Windows, Solaris and Linux.
• Task 1: Verify Terraform installation
• Task 2: Using the Terraform CLI
• Task 3: Initializing a Terraform Workspace
• Task 4: Generating a Terraform Plan
• Task 5: Applying a Terraform Plan
• Task 6: Terraform Destroy

6b) Initialize a Terraform working directory (terraform init)
The `terraform init` command initializes a new or existing Terraform working directory. During
initialization, Terraform downloads the required provider plugins and modules and prepares
the backend to store the state file. Here's how you can use `terraform init` to initialize a
working directory:

1. Open a terminal or command prompt window and navigate to the directory where your
Terraform configuration files are located.

2. Run the `terraform init` command. This will download any provider plugins and modules
specified in your configuration files and create the state file in the backend. If you are using
remote state storage, such as with AWS S3 or HashiCorp Consul, you will need to configure
the backend before running `terraform init`.

$ terraform init

3. Once `terraform init` completes, you should see output indicating which provider plugins
and modules were downloaded and where they were installed. You may also see warnings or
errors if any dependencies are missing or if there are issues with the backend configuration.

Terraform has been successfully initialized!
...
After running `terraform init`, you are ready to begin configuring and managing your
infrastructure with Terraform. It's a good practice to run `terraform init` whenever you add or
remove provider plugins or modules, or if you switch to a different backend configuration.
This ensures that your working directory is up-to-date and ready to use with the latest
configuration changes.

terraform init
The terraform init command is used to initialize a working directory containing Terraform
configuration files. This is the first command that should be run after writing a new Terraform
configuration or cloning an existing one from version control.

• Task 1: Initialize a Terraform working directory
• Task 2: Re-initialize after adding a new provider
• Task 3: Re-initialize after adding a new module
• Task 4: Re-initialize after modifying a Terraform backend
• Task 5: Other initialization steps/considerations

6c) Validate a Terraform configuration (terraform validate)
The `terraform validate` command is used to check the syntax and validity of a Terraform
configuration file. It verifies that the configuration file is properly formatted and does not
contain any syntax errors or incorrect settings. Here's how you can use `terraform validate` to
validate a Terraform configuration:

1. Open a terminal or command prompt window and navigate to the directory where your
Terraform configuration files are located.

2. Run the `terraform validate` command. Note that `terraform validate` operates on all of the
configuration files in the working directory; it does not take an individual file name such as
`main.tf` as an argument:

$ terraform validate

3. If there are no syntax errors or validation errors in your configuration file(s), `terraform
validate` will return a message indicating that the configuration is valid. If there are errors,
`terraform validate` will return an error message describing the issue(s).

Success! The configuration is valid.

By using `terraform validate`, you can quickly catch errors and ensure that your Terraform
configuration files are properly formatted and valid before running `terraform apply` or
`terraform plan`. This can help prevent issues with your infrastructure and save you time and
effort in the long run.

terraform validate
The terraform validate command validates the configuration files in a directory, referring only
to the Terraform configuration files. Validate runs checks that verify whether a configuration
is syntactically valid and internally consistent.

• Task 1: Validate Terraform configuration
• Task 2: Terraform Validate False Positives
• Task 3: JSON Validation Output

6d) Generate and review an execution plan for Terraform (terraform plan)
The `terraform plan` command is used to generate an execution plan for Terraform. The plan
shows what actions Terraform will take to create, modify, or delete resources based on the
current state and the desired state described in the configuration files. Here's how you can
use `terraform plan` to generate and review an execution plan:

1. Open a terminal or command prompt window and navigate to the directory where your
Terraform configuration files are located.

2. Run the `terraform plan` command to generate an execution plan. This command will
examine your configuration files and compare the desired state with the current state of your
infrastructure. Terraform will then generate a plan that shows what actions it will take to
make the desired state match the current state.

```
$ terraform plan
```

3. Review the execution plan that Terraform generates. The plan will list the actions that
Terraform will take, including creating new resources, modifying existing resources, or
deleting resources. The plan will also list any changes to resource attributes or settings.
Review the plan carefully to ensure that it matches your expectations.

4. If the plan looks correct, you can apply the changes by running `terraform apply`. If you
need to make changes to your configuration files, you can edit them and run `terraform plan`
again to generate a new plan.

By using `terraform plan`, you can review the actions that Terraform will take before actually
making any changes to your infrastructure. This can help prevent unexpected changes and
ensure that your infrastructure remains in a consistent and predictable state.

terraform plan
The terraform plan command performs a dry-run of executing your terraform configuration
and checks whether the proposed changes match what you expect before you apply the
changes or share your changes with your team for broader review.

• Task 1: Generate and Review a plan
• Task 2: Save a Terraform plan
• Task 3: No Change Plans
• Task 4: Refresh Only plans
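
The workflows behind Tasks 2 and 4 can be sketched as follows (the plan file name `tfplan` is an arbitrary choice):

```
$ terraform plan -out=tfplan     # save the plan to a file
$ terraform show tfplan          # review the saved plan
$ terraform apply tfplan         # apply exactly the saved plan

$ terraform plan -refresh-only   # only reconcile state with real infrastructure
```

Applying a saved plan file skips the interactive confirmation prompt, because the exact set of actions has already been reviewed.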

6e) Execute changes to infrastructure with Terraform (terraform apply)

In Terraform, `terraform apply` is a command that is used to apply the changes to the
infrastructure that you have specified in your Terraform configuration files.
When you run `terraform apply`, Terraform reads your configuration files, creates a plan for
how to make the desired changes to your infrastructure, and then asks for your confirmation
to proceed. Once you confirm, Terraform executes the plan by creating, updating, or deleting
the necessary resources in your infrastructure based on the changes you specified in your
configuration files.

`terraform apply` is a powerful command, as it can create, update, or delete entire
infrastructure components or individual resources, depending on what you have specified in
your configuration files. Therefore, it's important to review the changes that will be made
before running `terraform apply` to ensure that you understand and agree with the changes
that will be made to your infrastructure.
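
A typical run, plus the non-interactive variant (skipping the confirmation prompt with `-auto-approve` should be reserved for automation where the plan has been reviewed by other means):

```
$ terraform apply                 # shows the plan, then asks for confirmation
$ terraform apply -auto-approve   # applies without the confirmation prompt
```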

6f) Destroy Terraform managed infrastructure (terraform destroy)

The terraform destroy command is a convenient way to destroy all remote objects managed
by a particular Terraform configuration.

In Terraform, terraform destroy is a command that is used to destroy all the resources that
were created by a particular Terraform configuration.

When you run terraform destroy, Terraform reads the state of the infrastructure that was
created by a previous terraform apply, and creates a plan to destroy all the resources that
were created by that apply. The command then asks for confirmation before executing the
plan. Once you confirm, Terraform proceeds to destroy all the resources that were created by
the previous terraform apply, effectively tearing down the infrastructure.

It's important to note that terraform destroy is a powerful command, as it can destroy all the
resources created by a particular Terraform configuration. Therefore, it's important to review
the resources that will be destroyed before running terraform destroy to ensure that you
understand and agree with the changes that will be made to your infrastructure. Additionally,
it's always a good practice to have backups or disaster recovery plans in place in case anything
goes wrong during the destruction process.
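
A cautious destroy workflow previews the destruction first; both commands below are standard Terraform CLI:

```
$ terraform plan -destroy   # preview which resources would be destroyed
$ terraform destroy         # destroy them, after an interactive confirmation
```
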
6g) Apply formatting and style adjustments to a configuration (terraform fmt)

Auto Formatting Terraform Code


Terraform provides a useful subcommand, `fmt`, that can be used to easily format all of your
code. It rewrites your Terraform code to a canonical format and style based on a subset of the
Terraform language style conventions. To format the files in the current directory, run the fmt
subcommand:

terraform fmt

In Terraform, terraform fmt is a command that is used to format Terraform configuration files
according to a standard style.

When you run terraform fmt, Terraform reads the specified configuration files and reformats
them to adhere to a standard style defined by Terraform. This includes indentation, line
breaks, and other formatting conventions. The formatting changes made by terraform fmt are
purely cosmetic and do not affect the functionality of the configuration files.

Using terraform fmt can help ensure that your Terraform configuration files are consistent
and easy to read. It also helps to avoid common mistakes, such as syntax errors and incorrect
indentation, that can arise when working with manually formatted configuration files.

It's a good practice to run terraform fmt on your configuration files before committing them
to version control or sharing them with other team members, as it can help ensure that
everyone is working with files that have a consistent style.
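
A few commonly used `fmt` flags, all part of the standard Terraform CLI:

```
$ terraform fmt              # format .tf files in the current directory
$ terraform fmt -recursive   # also format files in subdirectories
$ terraform fmt -check       # exit non-zero if any file is not formatted
$ terraform fmt -diff        # display the formatting changes as a diff
```

The `-check` flag is handy in CI, where an unformatted file should fail the build rather than be rewritten.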

7 Implement and maintain state

In Terraform, implementing and maintaining state involves managing the file that records the
current state of your infrastructure, which is used by Terraform to determine what changes
need to be made to the infrastructure during subsequent runs of terraform apply.

Here are some best practices for implementing and maintaining state in Terraform:

Use a backend: Terraform supports several backends that can be used to store and manage
state, such as S3, Azure Blob Storage, and Consul. Using a backend provides several benefits,
including improved collaboration and easier sharing of state between team members.
Enable state locking: As discussed earlier, state locking ensures that only one user or process
can modify the state file at a time, preventing conflicts that could arise if multiple users or
processes attempted to modify the state simultaneously.

Regularly backup state: While using a backend provides improved durability and availability of
state, it's still important to regularly backup the state file to protect against accidental or
malicious deletion. Consider automating this process and storing backups in a secure location.

Version the state: Many remote backends (such as S3 buckets with versioning enabled) can keep
versions of the state file, letting you track changes to it over time and revert to a previous
version if needed. Note that committing raw state files to ordinary version control is generally
discouraged, since state can contain secrets in plain text.

Follow security best practices: Since the state file contains sensitive information about your
infrastructure, it's important to follow security best practices, such as encrypting the state file
and restricting access to it only to authorized users.

By following these best practices, you can effectively implement and maintain state in
Terraform, improving collaboration, reducing errors, and ensuring the security of your
infrastructure.

Terraform state file


Terraform state file is a file that keeps track of the resources that Terraform manages for a
particular infrastructure. It contains information about the current state of the resources,
including their configuration, metadata, and other details.

When terraform applies changes to an infrastructure, it updates the state file to reflect the
new state of the resources. Terraform uses the state file to plan and apply changes to the
infrastructure in a consistent and repeatable way.

The state file can also contain sensitive data, such as passwords or access keys, that Terraform
records while managing the resources. Note that Terraform does not encrypt the state file by
default; state is stored as plain JSON, so this sensitive information must be protected by other
means, such as an encrypting backend and restricted access.

It's important to note that the state file is unique to each Terraform project and should be
treated as a critical piece of the project's infrastructure. Losing the state file or having it
become corrupted can lead to inconsistencies and errors in the infrastructure. It's
recommended to store the state file in a secure and centralized location, such as a remote
backend backed by a cloud storage service, and to use Terraform's state management features
to manage it.
7a) Describe default local backend

In Terraform, a backend is responsible for storing the state of the infrastructure being
managed by Terraform. The local backend is a default backend that stores the state file on the
local disk of the machine running the Terraform command.

When you run terraform init without specifying a backend, Terraform automatically uses the
local backend. The state file is stored in a file named terraform.tfstate in the current working
directory.

The local backend has some limitations and potential risks. Since the state file is stored on the
local disk, it can be lost if the machine running the Terraform command is damaged or lost.
Additionally, the local backend is not suitable for collaboration between team members, since
each member will have their own copy of the state file, which can lead to conflicts.

Despite these limitations, the local backend is useful for small, non-critical projects or for
testing and experimentation. It's important to note that the local backend is not
recommended for production use, and other backends such as S3, Azure Blob Storage, or
Consul should be used instead for more robust and collaborative deployments.
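
The local backend is implicit, but it can also be declared explicitly, for example to change where the state file is written (the `path` below is an illustrative choice):

```
terraform {
  backend "local" {
    path = "state/terraform.tfstate"
  }
}
```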

7b) Describe state locking

Terraform uses persistent state data to keep track of the resources it manages. Since it needs
the state in order to know which real-world infrastructure objects correspond to the
resources in a configuration, everyone working with a given collection of infrastructure
resources must be able to access the same state data. Terraform’s local state is stored on disk
as JSON, and that file must always be up to date before a person or process runs Terraform. If
the state is out of sync, the wrong operation might occur, causing unexpected results. If
supported, the state backend will “lock” to prevent concurrent modifications which could
cause corruption.

In Terraform, state locking is a mechanism that ensures that only one user or process can
modify the state file at a time, preventing conflicts that could occur if multiple users or
processes attempted to modify the state simultaneously.

When Terraform applies changes to infrastructure, it updates the state file to reflect the
current state of the infrastructure. This state file can be used by other Terraform commands,
such as terraform plan and terraform destroy, to determine the current state of the
infrastructure and to plan and execute changes to it. However, if multiple users or processes
attempt to modify the state file simultaneously, conflicts can arise, leading to inconsistent
state and potentially damaging consequences.

State locking helps prevent these conflicts by ensuring that only one user or process can
modify the state file at a time. When a user or process initiates a change to the infrastructure,
Terraform creates a lock file to prevent other users or processes from modifying the state file
while the change is in progress. Once the change is complete, Terraform releases the lock,
allowing other users or processes to modify the state file.

There are several mechanisms that can be used for state locking in Terraform, including file-
based locking, Consul-based locking, and DynamoDB-based locking. The specific mechanism
used depends on the infrastructure being managed and the requirements of the organization
using Terraform.
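
As a sketch of DynamoDB-based locking, the S3 backend accepts a `dynamodb_table` argument naming a table (with a `LockID` primary key) used for lock coordination; the bucket and table names below are placeholders:

```
terraform {
  backend "s3" {
    bucket         = "example-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"   # table used for state locking
  }
}
```

If a crashed run leaves a stale lock behind, `terraform force-unlock <LOCK_ID>` releases it manually; use it only when you are certain no other operation is still running.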

7c) Handle backend and cloud integration authentication methods

Backend Configuration: Authentication


Some backends allow us to provide access credentials directly as part of the configuration.
However, in normal use we do not recommend including access credentials as part of the
backend configuration. Instead, leave those arguments completely unset and provide
credentials via the credentials files or environment variables that are conventional for the
target system, as described in the documentation for each backend.

In Terraform, different backends require different authentication methods for integration with
cloud providers. Here are some common authentication methods used with different
backends:

1. S3 backend: When using the S3 backend, authentication can be done by providing access
and secret keys or by using an instance profile on an EC2 instance. The access and secret keys
can be set as environment variables or in the AWS CLI credentials file.

2. Azure Blob Storage backend: When using the Azure Blob Storage backend, authentication
can be done by providing an Azure storage account name and access key or by using a
managed identity. The storage account name and access key can be set as environment
variables or in the Terraform configuration file.
3. Consul backend: When using the Consul backend, authentication can be done by providing
a Consul token or by using an ACL token. The token can be set as an environment variable or
in the Terraform configuration file.

In addition to backend authentication, Terraform also requires authentication to interact with
cloud providers, such as AWS, Azure, or GCP. Here are some common authentication methods
used with cloud providers:

1. Access and secret keys: This method involves providing access and secret keys for the cloud
provider, which can be set as environment variables or in the Terraform configuration file.

2. Instance profiles: This method involves assigning an IAM role to an EC2 instance, which can
then be used to authenticate Terraform to the cloud provider.

3. Managed identities: This method involves creating a managed identity in Azure, which can
be used to authenticate Terraform to Azure services.

4. Service accounts: This method involves creating a service account in GCP, which can be
used to authenticate Terraform to GCP services.

It's important to follow best practices for securing authentication credentials, such as using
environment variables or credential files with restricted permissions, and rotating credentials
regularly to minimize the risk of exposure.
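
For example, the S3 backend approach from point 1 can pick up AWS credentials from the standard environment variables, keeping them out of the configuration entirely (the values below are obviously placeholders):

```
$ export AWS_ACCESS_KEY_ID="AKIA...EXAMPLE"
$ export AWS_SECRET_ACCESS_KEY="example-secret-key"
$ terraform init
```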

7d) Differentiate remote state backend options

In Terraform, there are several options for remote state backends, each with its own strengths
and limitations. Here are some of the most commonly used remote state backends and their
key features:

Amazon S3: This backend stores state in an S3 bucket and provides a durable and scalable
option for remote state storage. It also supports versioning of the state file, which can be
useful for auditing and disaster recovery. S3 can be configured with server-side encryption for
data at rest.

Azure Blob Storage: This backend stores state in an Azure Blob Storage container and provides
similar benefits to S3, including scalability and durability. It also supports versioning and
encryption of the state file.
Google Cloud Storage: This backend stores state in a Google Cloud Storage bucket and offers
the same features as S3 and Azure Blob Storage. It also supports versioning and encryption of
the state file.

HashiCorp Consul: This backend stores state in a Consul cluster and provides a highly available
option for remote state storage. It also offers support for locking to prevent multiple users
from modifying the state file simultaneously.

HashiCorp Vault: Vault is not a state backend itself; it is a secrets management tool that
integrates with Terraform to keep credentials out of configuration and state. For a highly
available, HashiCorp-native place to store state, Consul or Terraform Cloud are the usual
choices.

Terraform Cloud: This backend is a fully managed service by HashiCorp that provides remote
state storage, collaboration, and other features such as workspace management and policy
enforcement. It supports multiple users and provides a user-friendly web interface for
managing infrastructure.

When choosing a remote state backend, it's important to consider factors such as scalability,
durability, security, and collaboration requirements. Additionally, the cost of the backend
should also be taken into account as some options may be more expensive than others,
particularly for larger infrastructure deployments.

7e) Manage resource drift and Terraform state


Resource drift occurs when the actual state of a resource in the cloud infrastructure differs
from the state recorded in the Terraform state file. This can happen if changes are made
outside of Terraform, such as manually modifying resources or using a different tool to
manage them. To manage resource drift and ensure that the state recorded in the Terraform
state file matches the actual state of the resources, there are several strategies you can use:

1. Regularly inspect the state of resources in the cloud provider's console: This can help
identify any discrepancies between the actual state of resources and the state recorded in the
Terraform state file.

2. Use the `terraform refresh` command: This command updates the state file with the
current state of resources in the cloud provider. (In recent Terraform versions, `terraform
refresh` is deprecated in favor of `terraform plan -refresh-only` and `terraform apply
-refresh-only`, which let you review the detected drift before accepting it into state.)
3. Use the `terraform plan` command: This command shows a preview of changes that will be
applied to the infrastructure. By comparing the plan with the current state of resources in the
cloud provider, you can identify any discrepancies and adjust the plan accordingly.

4. Enable state file locking: This prevents multiple users from modifying the Terraform state
file simultaneously and helps prevent resource drift caused by conflicting changes.

5. Use a version control system: By storing the Terraform configuration files in a
version control system such as Git, you can track changes and revert to previous versions if
necessary. (The state file itself is better kept in a remote backend than in Git.)

6. Use a remote state backend: By using a remote state backend, such as AWS S3 or HashiCorp
Consul, you can store the Terraform state file in a centralized location, making it easier to
manage and track changes.

It's important to regularly inspect and manage resource drift to ensure the infrastructure
remains in the desired state. By using these strategies, you can help prevent resource drift
and ensure that changes made to the infrastructure are managed consistently and reliably
using Terraform.

7f) Describe backend block and cloud integration in configuration

In Terraform, the backend block is used to define the backend configuration for storing the
Terraform state. The backend is responsible for storing the state file and providing locking
mechanisms to prevent concurrent changes. The backend can be either a local or a remote
storage location.

Cloud integration can be achieved through the use of remote state backends, which provide
integration with cloud providers' storage services such as AWS S3, Google Cloud Storage, and
Azure Storage. To configure a remote state backend in Terraform, you need to provide the
necessary information in the backend block. The configuration options for a remote state
backend may vary depending on the backend provider.
Here's an example of a backend block configuration for storing the Terraform state file in an
S3 bucket:

```
terraform {
  backend "s3" {
    bucket = "example-bucket"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}
```

In this configuration, the backend block specifies the type of backend as s3, and the required
parameters such as the bucket name, key, and region. These values are used by Terraform to
store the state file in the specified S3 bucket.

Once the backend is configured, you can use the terraform init command to initialize the
backend and download any necessary plugins. This command sets up the state storage and
prepares Terraform to manage the infrastructure.

By using a remote state backend, you can centralize and secure the state file, making it easier
to manage and collaborate on infrastructure changes across multiple teams. Additionally,
integrating with cloud providers' storage services provides scalability and durability for the
state file.
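
For Terraform Cloud specifically, recent Terraform versions (1.1+) offer a dedicated `cloud` block instead of a backend block; the organization and workspace names here are placeholders:

```
terraform {
  cloud {
    organization = "example-org"

    workspaces {
      name = "example-workspace"
    }
  }
}
```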

7g) Understand secret management in state files

Secrets such as passwords, access keys, and tokens are often required for authentication and
authorization when working with cloud infrastructure resources. However, storing secrets in
plain text in Terraform configuration files or state files can pose a security risk.

To manage secrets in state files, Terraform provides several options:


Use environment variables: You can use environment variables to supply secrets instead of
hardcoding them in the Terraform configuration files. Terraform automatically reads
environment variables of the form TF_VAR_name to populate the input variable `name`, which
you then reference in configuration as `var.name`.

Use Vault or other third-party tools: Vault is a popular secret management tool that can be
integrated with Terraform to securely store and manage secrets. Other third-party tools that
can be used for secret management include HashiCorp's Consul and AWS Secrets Manager.

Use Sensitive Data Handling in Terraform: Input variables and outputs can be marked with the
`sensitive = true` argument. When a value is marked sensitive, Terraform redacts it from
command output and logs; note, however, that sensitive values are still recorded in plain text
in the state file.

Use backends with encryption: Backends like AWS S3 and Google Cloud Storage support
server-side encryption to encrypt data at rest. By configuring Terraform to use a backend that
supports encryption, you can store secrets in a more secure manner.

Use Workspaces for separate environments: Workspaces are a feature in Terraform that
allows you to manage multiple environments (e.g. production, staging, development) using a
single Terraform configuration. By separating each environment into a separate workspace,
you can ensure that secrets for each environment are kept separate.
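
The workspace commands that support this separation are part of the standard CLI; the environment names below are illustrative:

```
$ terraform workspace new staging        # create and switch to a new workspace
$ terraform workspace select production  # switch to an existing workspace
$ terraform workspace list               # show all workspaces, marking the current one
```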

It's important to carefully manage secrets when working with Terraform, as any breach in
security can have serious consequences. By using these options, you can manage secrets in a
secure manner while still being able to manage cloud infrastructure with Terraform.

8 Read, generate, and modify configuration


In Terraform, the configuration is defined using HashiCorp Configuration Language (HCL),
which is a declarative language that is used to describe the desired state of infrastructure
resources. HCL is easy to read and write, and is designed to be human-friendly.

To read a configuration, you can simply open the file using a text editor or an IDE that
supports HCL syntax highlighting. You can then review the configuration and make any
necessary changes.
To generate infrastructure from a configuration, you write the provider and resource blocks
yourself and then use Terraform's built-in terraform init and terraform apply commands:
init downloads the required providers, and apply creates the resources you described and
records them in the state file.

For example, if you wanted to create an AWS EC2 instance, you would specify the aws
provider and an aws_instance resource in your configuration file. Then, when you run
terraform apply, Terraform will create the instance in your AWS account and track it in the
state file.

To modify a configuration, you can simply edit the existing configuration file and make any
necessary changes. Once you have made your changes, you can run terraform apply again to
apply the changes to the infrastructure resources.

It's important to note that when making changes to the configuration, Terraform will
automatically detect the changes and create a plan that outlines the changes that will be
made to the infrastructure. This plan can be reviewed before applying the changes, allowing
you to verify that the changes are correct before they are applied.

Overall, reading, generating, and modifying configuration in Terraform is a straightforward
process that can be done using a text editor or IDE, and can be managed using Terraform's
built-in commands.

8a) Demonstrate use of variables and outputs


In Terraform, variables are used to parameterize the configuration, allowing you to reuse the
same configuration for multiple environments or instances. Outputs allow you to extract
values from the Terraform state file and make them available to other configurations or
scripts.

Here's an example of how to use variables and outputs in Terraform:

```
variable "instance_count" {
  default = 1
}

variable "instance_type" {
  default = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type
  count         = var.instance_count
}

output "instance_ips" {
  value = aws_instance.example.*.public_ip
}
```
In this example, we define two variables: `instance_count` and `instance_type`. The
`instance_count` variable has a default value of `1`, while the `instance_type` variable has a
default value of `t2.micro`. These variables are then used in the `aws_instance` resource block
to create EC2 instances.

We also define an output called `instance_ips` that extracts the public IP addresses of the
instances created by the `aws_instance` resource block using the splat (`*`) syntax. This output
can be used in other Terraform configurations or scripts to retrieve the IP addresses of the
instances.

To use this configuration, we can create a terraform.tfvars file with the following contents:

```
instance_count = 2
instance_type  = "t2.large"
```

When we run `terraform apply`, Terraform will use the values specified in the terraform.tfvars
file for the `instance_count` and `instance_type` variables, rather than the default values
specified in the configuration file.

After the instances are created, we can retrieve their public IP addresses using the `terraform
output instance_ips` command. This will output a list of the public IP addresses of the
instances, which can be used in other scripts or configurations.

Overall, using variables and outputs in Terraform allows for greater flexibility and reusability
in configurations, making it easier to manage infrastructure resources.
8b) Describe secure secret injection best practice

Do Not Store Secrets in Plain Text

Never put secret values, like passwords or access tokens, in .tf files or other files that are
checked into source control. If you store secrets in plain text, you are giving bad actors
countless ways to access sensitive data. Ramifications of placing secrets in plain text include:
• Anyone who has access to the version control system has access to that secret.
• Every computer that has access to the version control system keeps a copy of that secret.
• Every piece of software you run has access to that secret.
• There is no way to audit or revoke access to that secret.

Mark Variables as Sensitive

The first line of defense here is to mark the variable as sensitive so Terraform won't output
the value in the Terraform CLI. Remember that this value will still show up in the Terraform
state file.

In your variables.tf file, add the following code:

```
variable "phone_number" {
  type      = string
  sensitive = true
  default   = "867-5309"
}

output "phone_number" {
  value     = var.phone_number
  sensitive = true
}
```

Environment Variables

Another way to protect secrets is to simply keep plain text secrets out of your code by taking
advantage of Terraform's native support for reading environment variables. By setting the
TF_VAR_<name> environment variable, Terraform will use that value rather than having to add
it directly to your code.

In your variables.tf file, modify the phone_number variable and remove the default value so
the sensitive value is no longer in cleartext:

```
variable "phone_number" {
  type      = string
  sensitive = true
}
```

In your terminal, export the following environment variable and set the value:

```
export TF_VAR_phone_number="867-5309"
```

Note: If you are still using Terraform Cloud as your remote backend, you will need to set this
environment variable in your Terraform Cloud workspace instead.
Now, run a terraform apply and see that the plan runs just the same, since Terraform picked
up the value of the sensitive variable using the environment variable. This strategy prevents
us from having to add the value directly in our Terraform files and likely being committed to a
code repository.

Inject Secrets into Terraform using HashiCorp Vault

Another way to protect your secrets is to store them in a secrets management solution, like
HashiCorp Vault.
By storing them in Vault, you can use the Terraform Vault provider to quickly retrieve values
from Vault and use them in your Terraform code.
Download HashiCorp Vault for your operating system at vaultproject.io. Make sure the binary
is moved to your $PATH so it can be executed from any directory. For help, check out
https://www.vaultproject.io/docs/install. Alternatively, you can use Homebrew (MacOS) or
Chocolatey (Windows). There are also RPMs available for Linux. Validate you have Vault
installed by running: vault version

You should get back the version of Vault you have downloaded and installed.
In your terminal, run vault server -dev to start a Vault dev server. This will launch Vault in a
pre-configured state so we can easily use it for this lab. Note that you should never run Vault
in a production deployment by starting it this way.
Open a second terminal, and set the VAULT_ADDR environment variable. By default, this is set
to HTTPS, but since we’re using a dev server, TLS is not supported.
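
For a dev server, the conventional address (it is printed in the dev server's startup output) is plain HTTP on localhost:

```
$ export VAULT_ADDR='http://127.0.0.1:8200'
$ vault status   # confirm the client can reach the dev server
```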

Secure secret injection is a critical aspect of managing infrastructure resources in Terraform.
Here are some best practices for implementing secure secret injection in your Terraform
configurations:

1. Use a secure secret storage solution: Storing secrets in plain text files is not secure, as these
files can be easily accessed by unauthorized users. Instead, use a secure secret storage
solution such as HashiCorp Vault or AWS Secrets Manager to store and manage secrets.

2. Limit access to secrets: Only grant access to secrets to the people or applications that need
them. Use role-based access control (RBAC) to enforce access controls and restrict access to
secrets to only authorized users.

3. Use environment variables or Terraform variables: Instead of hard-coding secrets in your
configuration files, use environment variables or Terraform variables to inject secrets at
runtime. This way, secrets are not hard-coded in plain text files and can be more easily
managed and rotated (note that values used by resources may still appear in the state file).

4. Use encrypted communication channels: When injecting secrets into Terraform
configurations, ensure that the communication channels used to transmit the secrets are
encrypted. Use HTTPS or other secure protocols to transmit secrets over the network.

5. Rotate secrets regularly: Regularly rotate secrets to ensure that they remain secure. Use a
secret management solution that allows for automated secret rotation to simplify the process.

6. Use auditing and logging: Implement auditing and logging capabilities to track who accessed
secrets, when they were accessed, and what actions were taken using the secrets. This can
help detect and investigate potential security breaches.

By following these best practices, you can ensure that secrets are managed securely and
reduce the risk of unauthorized access to sensitive information.

8c) Understand the use of collection and structural types

Terraform Collections and Structure Types

As you continue to work with Terraform, you’re going to need a way to organize and structure
data. This data could be input variables that you are giving to Terraform, or it could be the
result of resource creation, like having Terraform create a fleet of web servers or other
resources. Either way, you’ll find that data needs to be organized yet accessible so it is
referenceable throughout your configuration.
The Terraform language uses the following types for values:
• string: a sequence of Unicode characters representing some text, like “hello”.
• number: a numeric value. The number type can represent both whole numbers like 15 and
fractional values like 6.283185.
• bool: a boolean value, either true or false. bool values can be used in conditional logic.
• list (or tuple): a sequence of values, like [“us-west-1a”, “us-west-1c”]. Elements in a list or tuple are identified by consecutive whole numbers, starting with zero.
• map (or object): a group of values identified by named labels, like {name = “Mabel”, age = 52}. Maps are used to store key/value pairs.
Strings, numbers, and bools are sometimes called primitive types. Lists/tuples and
maps/objects are sometimes called complex types, structural types, or collection types. Up
until this point, we’ve primarily worked with string, number, or bool, although there have
been some instances where we’ve provided a collection by way of input variables. In this lab,
we will learn how to use the different collections and structure types available to us.
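
The five types listed above can be sketched as input variable declarations; the variable names and defaults below are illustrative, not from any particular lab:

```
variable "greeting" {
  type    = string
  default = "hello"
}

variable "tau" {
  type    = number
  default = 6.283185
}

variable "enable_logging" {
  type    = bool
  default = true
}

variable "availability_zones" {
  type    = list(string)
  default = ["us-west-1a", "us-west-1c"]
}

variable "person" {
  type    = map(string)
  default = { name = "Mabel", age = "52" }
}
```

Note that `map(string)` requires every value to be a string, which is why `age` is quoted here; an `object` type would let `age` stay a number.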

In Terraform, collection and structural types are used to store and manipulate data in
configurations. Here's a brief overview of these types and how they are used:

1. Collection types: Collection types are used to store lists, maps, and sets of values. They
include:

- Lists: An ordered collection of values, represented by square brackets ([ ]). For example:
`["foo", "bar", "baz"]`.

- Maps: An unordered collection of key-value pairs, represented by curly braces ({ }). For
example: `{ "key1" = "value1", "key2" = "value2" }`.

- Sets: An unordered collection of unique values. Terraform has no dedicated literal syntax for sets; they are typically created with the `toset()` function. For example: `toset(["foo", "bar", "baz"])`.

2. Structural types: Structural types group several values, potentially of different types, into a single value with a fixed schema. They include:

- Objects: A collection of named attributes that each have their own type, written with a type constraint like `object({ name = string, age = number })`. For example: `{ name = "Mabel", age = 52 }`.

- Tuples: A sequence of elements where each position can have a distinct type, written with a type constraint like `tuple([string, number, bool])`. For example: `["a", 15, true]`.

Unlike the collection types above, whose elements must all be of the same type, structural types allow each attribute or element to have its own type.

Collection and structural types are often used together to define complex infrastructure
configurations. For example, you might use a list of maps to define multiple resources of the
same type, or use a module block to define a reusable set of resources.

By understanding the use of collection and structural types in Terraform, you can write more
powerful and flexible configurations that can be easily maintained and scaled.
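
As a sketch of the "list of maps" pattern mentioned above, a variable holding a list of objects can drive `for_each` to create multiple similar resources (all names and values here are illustrative):

```
variable "servers" {
  type = list(object({
    name = string
    size = string
  }))
  default = [
    { name = "web-1", size = "t2.micro" },
    { name = "web-2", size = "t2.small" },
  ]
}

resource "aws_instance" "web" {
  # for_each needs a map or set, so convert the list into a map keyed by name
  for_each = { for s in var.servers : s.name => s }

  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = each.value.size

  tags = {
    Name = each.value.name
  }
}
```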

8d) Create and differentiate resource and data configuration

In Terraform, resources and data are both used to represent infrastructure objects, but they
have different purposes and configurations. Here's a brief overview of the differences
between resource and data configurations:

1. Resource configurations: Resource configurations define infrastructure objects that Terraform can manage and provision. Resources are typically created, updated, and destroyed as part of the infrastructure lifecycle. Resource configurations are defined using the `resource` block, followed by the resource type and a unique name. For example:

```
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  key_name      = "example-key"
  subnet_id     = "subnet-abc123"
}
```

In this example, the `aws_instance` resource type is used to define an EC2 instance in AWS.
The resource is given a unique name of "example", and the configuration parameters define
the instance's AMI, instance type, key name, and subnet ID.
2. Data configurations: Data configurations are used to retrieve and reference information
from infrastructure objects that Terraform does not manage directly. Data configurations are
defined using the `data` block, followed by the data source type and any required
configuration parameters. For example:

```
data "aws_ami" "example" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}
```

In this example, the `aws_ami` data source type is used to retrieve the most recent Amazon
Linux 2 AMI from AWS. The data source is given a unique name of "example", and the
configuration parameters define the filter criteria to search for the AMI.

Data configurations are used when you need to reference information about infrastructure
objects that Terraform does not manage directly, such as information about security groups,
VPCs, or other resources that are created outside of Terraform. By understanding the
differences between resource and data configurations, you can write more effective
Terraform configurations that can manage both managed and unmanaged resources.
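
Tying the two examples above together, a resource can consume a data source's result by referencing its attributes; this is a sketch reusing the AMI lookup shown earlier:

```
data "aws_ami" "example" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm*"]
  }
}

resource "aws_instance" "example" {
  # The data source is read first; its id feeds the managed resource
  ami           = data.aws_ami.example.id
  instance_type = "t2.micro"
}
```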

8e) Use resource addressing and resource parameters to connect resources together

In Terraform, resource addressing and resource parameters are used to connect resources
together and define relationships between them. Here's a brief overview of how resource
addressing and resource parameters can be used to connect resources together:

1. Resource addressing: Resource addressing is used to specify the location of a resource in Terraform. Resource addressing is based on the type of the resource and the name of the resource, which are both defined in the resource configuration block. Resource addressing is typically used to connect resources together by referencing one resource from another. For example:

```
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  key_name      = "example-key"
  subnet_id     = "subnet-abc123"
}

resource "aws_security_group" "web" {
  name_prefix = "web-"
  vpc_id      = "vpc-abc123"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  depends_on = [aws_instance.web]
}
```

In this example, the `aws_security_group` resource depends on the `aws_instance` resource through the `depends_on` parameter. This ensures that the security group is created only after the instance exists. Note that `depends_on` declares an explicit ordering; Terraform also infers dependencies automatically whenever one resource references another resource's attributes.

2. Resource parameters: Resource parameters are used to define relationships between resources and specify how resources should be connected together. Resource parameters are defined in the resource configuration block and can include references to other resources, output values, and other parameters that define the relationship between resources. For example:

```
resource "aws_security_group" "web" {
  name_prefix = "web-"
  vpc_id      = "vpc-abc123"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  tags = {
    Name = "web-sg"
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-0c55b159cbfafe1f0"
  instance_type          = "t2.micro"
  key_name               = "example-key"
  subnet_id              = "subnet-abc123"
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

In this example, the `aws_instance` resource's `vpc_security_group_ids` parameter references the security group by its attribute, `aws_security_group.web.id`. This attribute reference both attaches the security group to the instance and creates an implicit dependency: Terraform knows it must create the security group before the instance, so no explicit `depends_on` is needed. This creates a relationship between the security group and the instance.

8f) Use HCL and Terraform functions to write configuration


HCL (HashiCorp Configuration Language) is the language used to write configuration files in
Terraform. HCL is designed to be easy to read and write, and allows you to define resources,
variables, and other settings in a simple and declarative way.

In addition to HCL, Terraform provides a number of built-in functions that you can use to
manipulate and transform data in your configuration files. Here are some examples of how to
use HCL and Terraform functions to write configuration:

1. Define variables: Variables can be defined using the `variable` block in your configuration
file. Variables can be used to store values that are reused throughout your configuration, or to
provide inputs to modules or other resources. For example:

```
variable "region" {
  type    = string
  default = "us-east-1"
}
```

2. Use expressions: Expressions can be used to perform calculations or transform data in your configuration. For example, you can use expressions to concatenate strings, perform arithmetic operations, or filter lists. Here's an example that uses string interpolation to insert a variable's value into a tag:

```
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  key_name      = "example-key"
  subnet_id     = "subnet-abc123"

  tags = {
    Name = "Example Instance"
    Env  = "${var.environment}"
  }
}
```

3. Use functions: Terraform provides a number of built-in functions that you can use to
manipulate and transform data in your configuration. For example, you can use the
`cidrsubnet` function to calculate a subnet IP address. Here's an example:

```
resource "aws_subnet" "example" {
  vpc_id     = aws_vpc.example.id
  cidr_block = cidrsubnet(var.vpc_cidr_block, 8, 1)
}
```

In this example, the `cidrsubnet` function is used to calculate the CIDR block for the `aws_subnet` resource. The `cidrsubnet` function takes three arguments: the base CIDR prefix, the number of additional bits to add to the prefix length, and the network number to select within the resulting range.
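
You can evaluate `cidrsubnet` interactively with `terraform console` to see how the arguments combine (the base block here is an arbitrary example):

```
$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"
> cidrsubnet("10.0.0.0/16", 8, 2)
"10.0.2.0/24"
```

Adding 8 bits to the /16 prefix yields /24 subnets, and the third argument selects which /24 within the /16 to return.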

Overall, HCL and Terraform functions provide a powerful and flexible way to write
configuration files that are easy to read and maintain, while also allowing you to perform
complex calculations and transformations. By using these tools effectively, you can create
Terraform configurations that are robust, reliable, and easy to manage.

8g) Describe built-in dependency management (order of execution based)

Terraform Graph: Terraform’s interpolation syntax is very human-friendly, but under the hood it builds a very powerful resource graph. When resources are created they expose a number of relevant properties, and Terraform’s resource graph allows it to determine dependency management and order of execution for resource buildouts. Because of its resource graph, Terraform can manage resources in parallel, optimizing the speed of deployments.
Terraform Resource Lifecycles: The resource graph dictates the order in which Terraform creates and destroys resources, and this order is typically appropriate. There are, however, situations where we wish to change the default lifecycle behavior that Terraform uses. To provide control over these cases, Terraform has a lifecycle block, whose directives control the order in which Terraform creates and destroys resources.
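
A lifecycle block sketch with the common directives (values are illustrative):

```
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true    # build the replacement before destroying the old resource
    prevent_destroy       = false   # set to true to make any destroy of this resource an error
    ignore_changes        = [tags]  # ignore out-of-band edits to tags
  }
}
```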

9 Understand Terraform Cloud capabilities


Terraform Cloud - Getting Started

Terraform Cloud is HashiCorp’s managed service offering that eliminates the need for unnecessary tooling and documentation to use Terraform in production. Terraform Cloud helps you to provision infrastructure securely and reliably in the cloud with free remote state storage. Terraform Cloud and its self-hosted counterpart, Terraform Enterprise, offer Workspaces, a Private Module Registry, and Team Governance, along with Policy as Code (Sentinel), as a few of their benefits.

Terraform Remote State - Enhanced Backend

Enhanced backends can both store state and perform operations. There are only two
enhanced backends: local and remote. The local backend is the default backend used by
Terraform which we worked with in previous labs. The remote backend stores Terraform state
and may be used to run operations in Terraform Cloud. When using full remote operations,
operations like terraform plan or terraform apply can be executed in Terraform Cloud’s run
environment, with log output streaming to the local terminal. Remote plans and applies use
variable values from the associated Terraform Cloud workspace.
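
A minimal remote backend configuration might look like the following sketch; the organization and workspace names are placeholders:

```
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      name = "my-workspace"
    }
  }
}
```

After adding this block, `terraform init` migrates state to the remote backend.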

Terraform's remote state feature allows teams to store and manage Terraform state files in a centralized location, which enables better collaboration and reduces the risk of state file corruption. Standard backends such as S3, Azure Blob Storage, and Google Cloud Storage can only store state, while the remote backend is an enhanced backend: it can both store state and run operations in Terraform Cloud.

The enhanced remote backend addresses some of the limitations of working with purely local state, such as the lack of shared locking and versioning. Here are some of its features:

1. Centralized storage: State is stored in a Terraform Cloud workspace, giving the whole team a single source of truth for state data.

2. Locking support: The remote backend locks the state during operations, which prevents concurrent modifications to the state file and avoids the risk of state file corruption.

3. Versioning support: Terraform Cloud keeps a history of state versions, which allows you to track changes to the state file over time and recover earlier versions.

4. Encryption support: State is encrypted at rest and in transit, which ensures the security of sensitive data in the state file.

5. Remote operations: Plans and applies can run in Terraform Cloud's execution environment, with log output streamed back to the local terminal.

To use the remote backend, configure a `backend "remote"` block (or, in newer Terraform versions, a `cloud` block) in your configuration and authenticate to Terraform Cloud. The remote backend is recommended for team and enterprise-scale infrastructure management, where locking, versioning, and collaboration support are critical requirements.

Terraform Cloud Workspaces

A Terraform workspace is a managed unit of infrastructure. Workspaces are the workhorse of Terraform Cloud and build on the Terraform CLI workspace construct. Each uses the same
Terraform code to deploy infrastructure and each keeps separate state data for each
workspace. Terraform Cloud simply adds more functionality. On your local workstation, the
terraform workspace is simply a directory full of terraform code and variables. This code is
also ideally stored in a git repository. Terraform Cloud workspaces take on some extra roles. In Terraform Cloud your workspace stores state data, has its own set of variable values and environment variables, and allows for remote operations and logging. Terraform Cloud workspaces also provide access controls, version control integration, API access, and policy management.

Terraform Cloud Workspaces are a feature of Terraform Cloud that allow teams to organize
and manage their Terraform configurations in a more structured and scalable way.
Workspaces enable teams to create multiple isolated environments for managing
infrastructure resources, allowing for better collaboration, version control, and resource
management.

Here are some key features of Terraform Cloud Workspaces:


1. Isolated environments: Each workspace in Terraform Cloud is an isolated environment with
its own set of resources and state files. This enables teams to manage infrastructure resources
independently and reduces the risk of conflicts or interference between different
environments.

2. Version control: Terraform Cloud Workspaces integrate with version control systems such
as GitHub, allowing teams to track changes to their Terraform configurations and manage
different versions of their infrastructure resources.

3. Collaboration: Terraform Cloud Workspaces enable teams to collaborate more effectively by providing a centralized platform for managing Terraform configurations and related processes.

4. Resource management: Terraform Cloud Workspaces enable teams to manage their infrastructure resources more efficiently by providing a clear view of all resources associated with a workspace, including their status, configuration, and dependencies.

5. Access control: Terraform Cloud Workspaces provide granular access controls, enabling
teams to control who can access and modify their infrastructure resources.

6. API integration: Terraform Cloud Workspaces integrate with various APIs, enabling teams to
automate and streamline their infrastructure management workflows.

Overall, Terraform Cloud Workspaces provide a powerful and flexible way for teams to
manage their infrastructure resources more efficiently and collaboratively. They offer a
scalable, version-controlled, and secure way to manage infrastructure as code, helping teams
to accelerate their infrastructure delivery and improve their overall infrastructure
management processes.

Terraform Cloud Secure Variables

Terraform Cloud has built in support for encryption and storage of variables used within your
Terraform configuration. This allows you to centrally manage variables per workspace or
organization as well as store sensitive items (such as cloud credentials, passwords, etc.)
securely during the provisioning process without exposing them in plaintext or storing them
on someone’s laptop.
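
On the configuration side, this pairs with marking input variables as sensitive; a minimal sketch (the variable name is illustrative):

```
variable "db_password" {
  type      = string
  sensitive = true  # Terraform redacts this value in plan and apply output
}
```

The value itself can then be supplied as a sensitive workspace variable in Terraform Cloud rather than stored in code.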

Terraform Cloud - Version Control

In order for different teams and individuals to be able to work on the same Terraform code, you need to use a Version Control System (VCS). Terraform Cloud can integrate with the most popular VCS systems, including GitHub, GitLab, Bitbucket, and Azure DevOps.

Terraform Cloud offers integration with version control systems such as GitHub, GitLab, and
Bitbucket, which enables teams to version and manage their Terraform configurations more
effectively. This integration provides a number of benefits, including:

1. Version control: Version control systems allow teams to track changes to their Terraform
configurations and manage different versions of their infrastructure resources. This enables
teams to work collaboratively on infrastructure as code and manage changes to infrastructure
resources in a structured and controlled way.

2. Code review: Version control systems provide tools for code review, enabling teams to
review changes to their Terraform configurations and ensure that code changes are validated
before they are merged into the master branch.

3. Audit trail: Version control systems provide an audit trail of all changes to the Terraform
configurations, including who made the change, when the change was made, and what was
changed. This provides a clear record of changes to the infrastructure resources and helps
teams to track and troubleshoot issues.

4. Collaboration: Integration with version control systems enables teams to work collaboratively on infrastructure as code, reducing the risk of conflicts or interference between different team members.

5. Security: Version control systems provide security controls such as access control,
authentication, and encryption, ensuring that only authorized users have access to the
Terraform configurations and related resources.

6. Continuous integration and delivery: Integration with version control systems enables
teams to automate their infrastructure delivery pipelines, using continuous integration and
delivery (CI/CD) tools to automate testing, build, and deployment processes.
Overall, integration with version control systems provides a powerful and flexible way for
teams to manage their Terraform configurations more effectively. It enables teams to version,
collaborate, and manage their infrastructure as code in a structured and controlled way,
helping to improve the overall quality, security, and efficiency of their infrastructure
management processes.

Terraform Cloud - Private Module Registry

Terraform Cloud’s Private Module Registry allows you to store and version Terraform
modules which are re-usable snippets of Terraform code. It is very similar to the Terraform
Public Module registry including support for module versioning along with a searchable and
filterable list of available modules for quickly deploying common infrastructure configurations.

Terraform Cloud provides a private module registry that enables teams to publish and share
Terraform modules within their organization. A module is a reusable set of Terraform code
that defines a specific infrastructure resource, such as an AWS S3 bucket or an Azure Virtual
Machine.

The private module registry provides a number of benefits, including:

1. Centralized module management: The private module registry provides a centralized location for teams to manage and share Terraform modules, making it easier for teams to discover and reuse modules across their organization.

2. Improved collaboration: The private module registry enables teams to collaborate more
effectively on infrastructure as code by sharing Terraform modules and managing module
dependencies in a controlled way.

3. Security and compliance: The private module registry provides security controls such as
access control, authentication, and encryption, ensuring that only authorized users have
access to the Terraform modules and related resources. This helps teams to meet compliance
requirements and reduce security risks.

4. Version control: The private module registry enables teams to version their Terraform
modules, providing an audit trail of changes and ensuring that different teams are using the
same version of the module.
5. Automation: The private module registry enables teams to automate their module delivery
pipelines, using continuous integration and delivery (CI/CD) tools to test, build, and deploy
modules.

Overall, the private module registry provides a powerful way for teams to manage and share
Terraform modules within their organization. It enables teams to collaborate more effectively,
improve security and compliance, and automate their infrastructure delivery processes.

Terraform Cloud - Sentinel Policy

Sentinel is the Policy-as-Code product from HashiCorp that automatically enforces logic-based
policy decisions across all HashiCorp Enterprise products. It allows users to implement policy-
as-code in a similar way to how Terraform implements infrastructure-as-code. If enabled,
Sentinel is run between the terraform plan and apply stages of the workflow.

Terraform Cloud includes a feature called Sentinel, which provides a policy-as-code framework for managing and enforcing policy across your infrastructure as code. Sentinel policies are written in the Sentinel language, which is a simple, yet powerful, domain-specific language (DSL) that is designed for policy enforcement.

Sentinel policies can be used to define rules and constraints that ensure that infrastructure is
provisioned in a secure, compliant, and consistent way. Some examples of policy rules that
can be enforced using Sentinel include:

- Enforcing resource naming conventions
- Enforcing security requirements, such as encryption or network security groups
- Enforcing compliance requirements, such as regulatory compliance or company policies
- Limiting the use of specific cloud services or resources

Sentinel policies can be applied to Terraform Cloud workspaces, which enables you to enforce
policies across all your infrastructure as code deployments. When a policy violation occurs,
Terraform Cloud will notify the relevant stakeholders and provide details of the policy
violation, enabling teams to quickly address any issues and maintain compliance.

Sentinel policies can also be integrated into your CI/CD pipelines, enabling you to enforce
policies as part of your infrastructure delivery processes. This helps to ensure that policies are
enforced at every stage of the development lifecycle, from development through to
production.
Overall, Sentinel policies provide a powerful way to manage and enforce policy across your
infrastructure as code. By using Sentinel policies, teams can ensure that infrastructure is
provisioned in a secure, compliant, and consistent way, while reducing the risk of policy
violations and ensuring that infrastructure is deployed in a reliable and repeatable way.

Terraform Cloud - Version Control Workflow

Once multiple people are collaborating on Terraform configuration, new steps must be added
to the core Terraform workflow (Write, Plan, Apply) to ensure everyone is working together
smoothly. In order for different teams and individuals to be able to work on the same
Terraform code, you need to use a Version Control System (VCS). The Terraform Cloud VCS or
version control system workflow includes the most common steps necessary to work in a
collaborative nature, but it also requires that you host the Terraform code in a VCS repository.
Events on the repository will trigger workflows on Terraform Cloud. For instance, a commit to
the default branch could kick off a plan and apply workflow in Terraform Cloud.

Terraform Cloud provides a version control workflow that enables teams to manage and
collaborate on infrastructure as code using version control systems such as Git. This workflow
provides a number of benefits, including:

- Collaboration: Multiple team members can work on the same infrastructure as code project
at the same time, without stepping on each other's toes.

- Change management: All changes to the infrastructure as code are tracked in version
control, providing a history of changes that can be audited and reviewed.

- Code reviews: Changes to the infrastructure as code can be reviewed by other team
members before they are applied, ensuring that changes are high-quality and meet
organizational standards.

- Rollback: If a change causes issues, it can be easily rolled back to a previous version in
version control.

The version control workflow in Terraform Cloud works as follows:


1. Connect your version control system (such as Git) to your Terraform Cloud workspace. This
is done by configuring a webhook in your version control system that triggers a Terraform
Cloud run whenever changes are made to your infrastructure as code.

2. Create a branch in your version control system for each change you want to make to your
infrastructure as code. This helps to isolate changes and enables you to review and test
changes before merging them into the main branch.

3. Make changes to the infrastructure as code in your version control system using your
favorite editor or IDE.

4. Create a pull request (PR) in your version control system to merge the changes from your
branch into the main branch. The PR should include a description of the changes, as well as
any relevant documentation or tests.

5. Review the PR with your team members, making any necessary changes or fixes before
approving it.

6. Once the PR is approved, merge the changes into the main branch. This will trigger a
Terraform Cloud run that will apply the changes to your infrastructure.

7. Monitor the Terraform Cloud run to ensure that the changes are applied successfully. If
there are any issues, roll back the changes to a previous version in version control.

Overall, the version control workflow in Terraform Cloud provides a powerful way to manage
and collaborate on infrastructure as code, while ensuring that changes are audited, reviewed,
and tested before they are applied.

Terraform Cloud is a web-based SaaS platform offered by HashiCorp that provides a centralized location for managing and collaborating on Terraform configurations. It offers a range of features that help teams manage and automate their infrastructure, including:

1. Remote state management: Terraform Cloud provides a secure and centralized location to store your Terraform state files, which allows for better collaboration and, through state locking, prevents conflicting concurrent changes.

2. Team management and collaboration: Terraform Cloud provides a centralized location for
managing your teams and their permissions. This allows teams to collaborate on
infrastructure management more effectively.

3. Run management: Terraform Cloud provides a web-based interface for managing and
monitoring Terraform runs, which allows you to see the status of your infrastructure and
troubleshoot issues more easily.

4. Policy enforcement: Terraform Cloud offers policy enforcement capabilities to help teams
enforce security and compliance standards, such as preventing the use of insecure resources
or limiting access to sensitive data.

5. Integration with other tools: Terraform Cloud integrates with various third-party tools such
as version control systems, chat systems, and notification systems to enable better
automation and collaboration.

6. Remote operations: Terraform Cloud provides remote operations capabilities, such as plan
and apply, without requiring you to install and manage the Terraform CLI on your local
machine.

7. Cost estimation: Terraform Cloud offers cost estimation features to help teams understand
the cost implications of their infrastructure changes.

Overall, Terraform Cloud provides a range of capabilities that can help teams manage and
automate their infrastructure more effectively. By providing a centralized location for
managing Terraform configurations and related processes, Terraform Cloud can improve
collaboration, security, compliance, and automation.

9a) Explain how Terraform Cloud helps to manage infrastructure

Terraform Cloud is a web-based SaaS platform that helps manage infrastructure by providing a
centralized location for storing and managing Terraform configurations and related processes.
Here are some ways in which Terraform Cloud can help manage infrastructure:
1. Collaborative workflow: Terraform Cloud allows teams to collaborate more effectively on
infrastructure management. Team members can share and review configurations, manage
access and permissions, and see what changes have been made and by whom.

2. Remote state management: Terraform Cloud provides a secure and centralized location for storing Terraform state files, which are used to track the current state of infrastructure resources. Built-in state locking prevents conflicting concurrent changes and allows for better collaboration.

3. Automation and remote execution: Terraform Cloud provides remote execution capabilities, such as plan and apply, without requiring you to install and manage the Terraform CLI on your local machine. This enables more automated and scalable infrastructure management.

4. Run management: Terraform Cloud provides a web-based interface for managing and
monitoring Terraform runs, which allows you to see the status of your infrastructure and
troubleshoot issues more easily.

5. Policy enforcement: Terraform Cloud offers policy enforcement capabilities to help teams
enforce security and compliance standards, such as preventing the use of insecure resources
or limiting access to sensitive data.

6. Integration with other tools: Terraform Cloud integrates with various third-party tools such
as version control systems, chat systems, and notification systems to enable better
automation and collaboration.

7. Cost estimation: Terraform Cloud offers cost estimation features to help teams understand
the cost implications of their infrastructure changes.

Overall, Terraform Cloud helps manage infrastructure by providing a centralized location for
managing Terraform configurations and related processes, enabling more effective
collaboration, automation, and policy enforcement.

9b) Describe how Terraform Cloud enables collaboration and governance

Terraform Cloud enables collaboration and governance by providing a centralized platform for
managing Terraform configurations and related processes. Here are some ways in which
Terraform Cloud enables collaboration and governance:

1. Team management and permissions: Terraform Cloud allows you to manage teams and
their permissions, enabling you to control who can access and make changes to your
infrastructure.

2. Role-based access control: Terraform Cloud allows you to set up role-based access control
(RBAC), which enables you to define different levels of permissions for different team
members based on their roles.

3. Collaboration tools: Terraform Cloud integrates with various collaboration tools, such as
version control systems and chat systems, to enable teams to collaborate more effectively.

4. Code review: Terraform Cloud provides a web-based interface for reviewing Terraform code
changes, enabling teams to review and approve changes before they are applied.

5. Remote execution: Terraform Cloud enables remote execution of Terraform plans and
applies, allowing teams to apply changes from a central location without needing to install
and manage the Terraform CLI on their local machines.

6. State management: Terraform Cloud provides a secure and centralized location for
managing Terraform state files, which enables teams to collaborate more effectively on
infrastructure management.

7. Policy enforcement: Terraform Cloud enables you to enforce policies and standards across
your infrastructure, helping to ensure compliance with security and governance requirements.

Overall, Terraform Cloud enables collaboration and governance by providing a centralized


platform for managing Terraform configurations and related processes, enabling teams to
collaborate more effectively, enforce policies, and ensure compliance with governance
requirements.
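As a concrete illustration of how a configuration is connected to Terraform Cloud, the terraform block can include a cloud block naming the organization and workspace. A minimal sketch (the organization and workspace names below are placeholders, not values from these notes):

```hcl
terraform {
  cloud {
    # Placeholder organization and workspace names
    organization = "example-org"

    workspaces {
      name = "example-workspace"
    }
  }
}
```

After adding this block, running terraform login and then terraform init connects the working directory to the Terraform Cloud workspace, so subsequent plans and applies run remotely.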

ZEAL BORA NOTES


dependency lock file
is a file that captures the specific provider versions that were selected during the last
successful "terraform init". This file is automatically generated when Terraform installs or
updates providers for a configuration.

The purpose of the dependency lock file is to ensure that the same provider versions are used
consistently across different Terraform runs, especially when working in teams or across
different environments. By capturing the exact versions selected in a previous run, the
dependency lock file helps prevent unintentional changes to the configuration that might be
caused by newer provider releases.

The dependency lock file is named ".terraform.lock.hcl" and is stored in the same
directory as the main Terraform configuration files. It is a human-readable file written in
HashiCorp Configuration Language (HCL) and records the version, version constraints, and
checksums of each provider used in the configuration.

You can use the "terraform init" command to initialize a configuration with the dependencies
listed in the lock file. To intentionally move to newer versions, run "terraform init -upgrade",
which re-selects versions within the configured constraints and updates the lock file.
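For illustration, a provider entry in the lock file looks roughly like this (the version, constraint, and hash values below are made up for the example):

```hcl
provider "registry.terraform.io/hashicorp/aws" {
  version     = "4.67.0"   # illustrative version
  constraints = "~> 4.0"
  hashes = [
    # Checksums used to verify the downloaded provider package (illustrative)
    "h1:EXAMPLEhashEXAMPLEhashEXAMPLEhash=",
  ]
}
```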

Attributes
are properties of a resource that can be used to define its configuration and behavior.
Attributes are used to specify the desired state of a resource and to manage changes to that
resource over time.

Each resource type in Terraform has its own set of attributes that are specific to that resource.
For example, the "aws_instance" resource type in the AWS provider has attributes such as
"ami", "instance_type", "subnet_id", and "tags". These attributes are used to specify the
desired configuration of an EC2 instance in AWS.
Attributes can be set directly in the resource block using a key-value syntax, like this:

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  subnet_id     = "subnet-abc123"

  tags = {
    Name        = "web-server"
    Environment = "production"
  }
}

In this example, the attributes "ami", "instance_type", "subnet_id", and "tags" are all set for
the "aws_instance" resource named "web".

Attributes can also be referenced and used in other parts of the configuration using
expression references. For example, the value of the "tags" attribute could be used in another
resource's configuration by referencing it as aws_instance.web.tags (or as
"${aws_instance.web.tags}" in the legacy interpolation style).

Overall, attributes are a fundamental concept in Terraform that allow resources to be


configured and managed in a flexible and granular way, making it possible to define
infrastructure as code and automate the management of complex infrastructure.

Outputs
are a way to export values from a module or a set of resources so that they can be reused or
shared with other parts of the configuration or with external systems.

Outputs are defined in a module using the "output" block, which specifies the name of the
output and the value that should be associated with it. For example:

output "instance_ip" {
  value = aws_instance.web.private_ip
}

In this example, the output "instance_ip" is defined to have the value of the private IP address
of the "aws_instance" resource named "web".

Outputs can be used in other parts of the configuration or in other modules using
interpolation syntax, like this:
resource "aws_security_group_rule" "allow_ssh" {
  type        = "ingress"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["${module.web_server.instance_ip}/32"]
}

In this example, the "instance_ip" output from the "web_server" module is referenced to
allow SSH traffic from that IP address in a security group rule.

Outputs can also be displayed in the Terraform CLI output by running the "terraform output"
command. This command displays the values of all the outputs defined in the current
configuration.

Overall, outputs are a powerful feature in Terraform that allow for greater flexibility and
reusability in defining and managing infrastructure as code.

The count parameter
is used to create multiple instances of a resource. It allows you to define the number of
instances you want to create using a single block of code, instead of copying and pasting the
same code multiple times.

When you use count in a resource block, Terraform will create a separate instance of that
resource for each value in the count parameter. The value of count must be an integer,
and it can be a static value or a variable.

Here's an example of how count works in Terraform:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  count         = 3
}

In this example, Terraform will create three instances of an AWS EC2 instance using the same
configuration defined in the block. Each instance will be assigned a unique identifier based on
the index of the instance in the list, starting from 0. For example, the first instance will have
the identifier aws_instance.example[0], the second instance will have the identifier
aws_instance.example[1], and so on.

Using count can be a powerful way to create multiple instances of resources with similar
configurations in a concise and efficient way. However, it's important to ensure that the
resources you create with count are consistent and have unique identifiers to avoid
conflicts.

Count Index
In Terraform, the count parameter creates multiple instances of a resource based on the value
assigned to it. The count.index object is a zero-based index that represents the current
instance being created during the iteration.

When you use count in a resource block, Terraform creates a separate instance of that
resource for each value in the count parameter. The count.index object is automatically
set by Terraform and can be used to refer to the current instance being created.

For example, let's say you have the following resource block that uses count:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  count         = 3
}

In this example, count.index will take on the values of 0, 1, and 2 during the creation of
the three instances. You can use count.index to create unique identifiers for each
instance, like this:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  count         = 3

  tags = {
    Name = "example-${count.index}"
  }
}

In this example, the tags block is using count.index to create a unique identifier for each
instance by appending the index to the end of the name. The first instance will have a Name
tag of "example-0", the second instance will have a Name tag of "example-1", and so on.

Using count.index can be a powerful way to create unique identifiers for instances or
resources that are created dynamically based on the value of the count parameter.

conditional expressions
can be used to conditionally include or exclude resources or blocks of code based on a
boolean value or an expression that evaluates to a boolean value. The conditional expressions
are used in conjunction with the if keyword to create conditional logic.

condition ? true_val : false_val

Here's an example of how to use a conditional expression in Terraform:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  count = var.create_instances ? 2 : 0
}

In this example, the count parameter for the aws_instance resource is set using a conditional
expression. The expression evaluates the value of the create_instances variable and returns
either 2 (if create_instances is true) or 0 (if create_instances is false).

Another example of using conditional expressions is in a locals block to create a dynamic value
based on the value of another variable:

locals {
  instance_type = var.environment == "prod" ? "t2.large" : "t2.micro"
}

In this example, the instance_type variable is set using a conditional expression that checks
the value of the environment variable. If environment is "prod", instance_type will be set to
"t2.large". Otherwise, it will be set to "t2.micro".

Conditional expressions can also be used with the for_each parameter to dynamically create
resources based on the elements of a list or map. For example:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  for_each = var.create_instances ? toset(["server1", "server2"]) : toset([])

  tags = {
    Name = each.value
  }
}

In this example, the for_each parameter for the aws_instance resource is set using a
conditional expression. If create_instances is true, Terraform will create two instances of the
aws_instance resource with the names "server1" and "server2". If create_instances is false,
Terraform will not create any instances.

Functions
in Terraform are built-in operations that can be used to manipulate data, transform values, or
perform calculations in your code. Terraform provides a wide range of functions that can be
used to work with various data types, including strings, numbers, lists, maps, and more.

Here are some examples of commonly used functions in Terraform:

● element(list, index): Returns the element of the list at the specified index. For
example, element(["a", "b", "c"], 1) would return "b".
● format(format_string, ...args): Returns a formatted string using the specified format
string and arguments. For example, format("Hello, %s!", "world") would return "Hello,
world!".
● join(separator, list): Joins the elements of a list into a single string, separated by the
specified separator. For example, join(", ", ["a", "b", "c"]) would return "a, b, c".
● tomap(object): Converts its argument to a map. (The older map(key, value, ...) function
was removed in Terraform 0.12; use a map literal or tomap instead.) For example,
tomap({ key1 = "value1", key2 = "value2" }) would return a map with two keys, "key1"
and "key2", each with their corresponding values.
● max(numbers...): Returns the largest of its number arguments. Note that it takes
individual numbers rather than a list; to pass a list, expand it with the ... symbol, as in
max(var.numbers...). For example, max(3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5) would return 9.
● min(numbers...): Returns the smallest of its number arguments, with the same calling
convention as max. For example, min(3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5) would return 1.
● replace(string, search, replace): Returns a new string with all occurrences of the
search string replaced with the replace string. For example, replace("hello world",
"world", "Terraform") would return "hello Terraform".
● substr(string, start, length): Returns a substring of the specified length, starting at the
specified index. For example, substr("Terraform", 2, 4) would return "rraf".
These are just a few examples of the many functions available in Terraform. To see a full list of
functions and their documentation, you can visit the Terraform function reference at
https://www.terraform.io/docs/language/functions/index.html.
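To see several of these functions together, the following locals block (which can be explored interactively with terraform console) shows each expression with its result in a comment:

```hcl
locals {
  second  = element(["a", "b", "c"], 1)                  # "b"
  greet   = format("Hello, %s!", "world")                # "Hello, world!"
  joined  = join(", ", ["a", "b", "c"])                  # "a, b, c"
  renamed = replace("hello world", "world", "Terraform") # "hello Terraform"
  part    = substr("Terraform", 2, 4)                    # "rraf"
}
```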

Data source
is a way to retrieve data from an external system or provider and use it within your Terraform
configuration. Data sources allow you to obtain information about existing resources, such as
a list of EC2 instances or a DNS record, and use that information in your Terraform
configuration to inform decisions about what resources to create, modify or delete.

Data sources are declared using the data keyword in a Terraform configuration file, followed
by the type of data source and any required configuration parameters. For example, to
retrieve information about an AWS EC2 instance, you would use the aws_instance data source
and specify the instance ID:

data "aws_instance" "web" {
  instance_id = "i-0123456789abcdef"
}

Once a data source has been declared, you can reference it in your configuration using the
data.<type>.<name>.<attribute> syntax. For example, to reference the public IP address of
the web instance declared above, you would use data.aws_instance.web.public_ip.

Data sources can be used to retrieve a wide variety of information from external systems and
providers, such as DNS records, database instances, and cloud resources. They provide a
powerful way to incorporate external data into your Terraform configuration and automate
the management of your infrastructure.

validation
is the process of verifying whether a set of configurations meet the specified criteria and
constraints. Terraform provides several built-in validation mechanisms that can be used to
validate the configurations.

Some of the validation mechanisms provided by Terraform include:

Syntax validation: Terraform can validate the syntax of the configuration files using its built-in
parser. This helps to ensure that the configurations are properly structured and formatted.

Variable validation: Terraform allows you to specify constraints on variables used in your
configuration files. For example, you can specify that a variable must be of a certain data type,
or that it must have a minimum or maximum value.
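As a sketch of variable validation, a validation block inside a variable declaration specifies a condition and an error message (the allowed values below are chosen for the example):

```hcl
variable "instance_type" {
  type = string

  validation {
    # Reject any value outside the allowed list
    condition     = contains(["t2.micro", "t2.small"], var.instance_type)
    error_message = "The instance_type must be t2.micro or t2.small."
  }
}
```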

Resource validation: Terraform can validate the resources defined in your configuration files
to ensure that they are properly configured and meet the requirements of the provider.

Plan validation: Terraform can validate the execution plan before actually applying the
changes. This helps to catch any potential errors or issues before they are deployed.

To implement validation in Terraform, you can use the various built-in validation mechanisms
and tools provided by the platform. You can also use third-party plugins and modules to
extend the validation capabilities of Terraform.

load order and semantics
refer to the order in which resources are created and managed, and how they relate to each
other.

Load order refers to the order in which resources are loaded and managed by Terraform.
Terraform loads resources in the order in which they are declared in the configuration files.
Resources that have dependencies on other resources are loaded after their dependencies are
loaded. This ensures that resources are loaded in the correct order and that dependencies are
satisfied.

Semantics refers to the meaning and behavior of the resources and their relationships to each
other. Terraform uses a declarative syntax to define the desired state of the infrastructure.
Each resource is defined with its own set of attributes and properties, and Terraform manages
the relationships between resources automatically based on their dependencies.
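For example, an implicit dependency is created whenever one resource references an attribute of another; Terraform uses the reference to order the operations (the resource names below are illustrative):

```hcl
resource "aws_security_group" "web" {
  name = "web-sg"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # Referencing the security group's id creates an implicit dependency,
  # so Terraform creates the security group before the instance.
  vpc_security_group_ids = [aws_security_group.web.id]
}
```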

Terraform manages resources using a graph-based approach. When resources are loaded,
Terraform builds a dependency graph based on the relationships between resources. This
graph is used to determine the load order and semantics of the resources.
Terraform also supports modules, which are self-contained groups of resources that can be
reused across different projects. Modules can be loaded and managed independently, and
their dependencies are managed automatically by Terraform.

In summary, load order and semantics are important concepts in Terraform that help to
ensure that resources are loaded in the correct order and that their relationships are properly
managed. These concepts are fundamental to the way that Terraform manages infrastructure
and ensure that changes are made safely and predictably.

dynamic block
is a way to create a dynamic set of nested configuration blocks within a resource or module
block. This allows you to generate complex configurations dynamically based on input
variables, instead of having to manually define each configuration block.

Dynamic blocks are useful when you need to generate a variable number of nested
configuration blocks, such as multiple ingress rules in a security group or repeated settings
blocks within a single resource. They allow you to write more flexible and reusable
configurations that can adapt to different scenarios.

The syntax for a dynamic block is as follows:

dynamic "block_type" {
  for_each = var.some_list_or_map

  content {
    # Configuration for each generated block
  }
}

In this example, block_type is the name of the nested configuration block that you want to
create dynamically. for_each is an expression that evaluates to a list or a map, and specifies
how many times to create the dynamic block. The content block contains the configuration for
each dynamic block.

For example, if you have a list of ingress rules, you could use a dynamic block to generate an
ingress block inside a security group for each rule:

variable "ingress_rules" {
  type = list(object({
    port        = number
    cidr_blocks = list(string)
  }))
}

resource "aws_security_group" "example" {
  name = "example"

  dynamic "ingress" {
    for_each = var.ingress_rules

    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}

In this example, the aws_security_group resource block contains a dynamic block that
generates an ingress block for each item in the ingress_rules list. The content block specifies
the configuration for each generated ingress block, including the port range, protocol, and
CIDR blocks. Note that a dynamic block generates nested blocks within a single resource; to
create multiple resource instances (such as several subnets), use count or for_each on the
resource itself.

Dynamic blocks are a powerful feature of Terraform that allow you to create complex
configurations dynamically based on input variables. They can help you write more flexible
and reusable configurations that can adapt to different scenarios.

iterators
are used to generate a sequence of values based on a set of input variables. They allow you to
dynamically generate configurations that can adapt to different scenarios, based on the values
of input variables.

Terraform provides several built-in iterators that can be used to generate sequences of values.
These include count, for_each, and for.

The count iterator is used to create a fixed number of resource instances, based on the value
of a count variable. For example, if you set count = 3, Terraform will create three instances of
the resource.

The for_each iterator is used to create a variable number of resource instances, based on the
values of a map or set variable. For example, if you set for_each = { "app-1" = "10.0.0.1",
"app-2" = "10.0.0.2" }, Terraform will create two instances of the resource, one for each
key-value pair in the map.

The for iterator is used to generate a sequence of values based on an expression. It can be
used to generate a list of values, or to iterate over a set of resources. For example, you can
use the for iterator to iterate over a list of subnets and create a corresponding set of EC2
instances.

Here's an example of using the for_each iterator to create multiple instances of an AWS S3
bucket:

variable "buckets" {
  type = map(object({
    acl        = string
    versioning = bool
  }))
  default = {
    "bucket-1" = {
      acl        = "private"
      versioning = true
    }
    "bucket-2" = {
      acl        = "public-read"
      versioning = false
    }
  }
}

resource "aws_s3_bucket" "bucket" {
  for_each = var.buckets

  bucket = each.key
  acl    = each.value.acl

  versioning {
    enabled = each.value.versioning
  }
}

In this example, the aws_s3_bucket resource block contains a for_each iterator that creates a
separate S3 bucket for each key-value pair in the buckets map variable. The configuration for
each bucket is defined in the each.value block, which contains the acl and versioning
attributes.

Iterators are a powerful feature of Terraform that allow you to create flexible and reusable
configurations that can adapt to different scenarios. By using iterators, you can create a
dynamic set of resources and configurations that can scale to meet your needs.

tainting
is a mechanism to force a resource to be recreated on the next apply. When a resource is
tainted, Terraform marks it as "tainted" in the state file. When you run terraform apply again,
Terraform will destroy and recreate the tainted resource, rather than simply updating it.

You can taint a resource using the terraform taint command:

terraform taint <resource_address>

Where <resource_address> is the address of the resource you want to taint, in the format
resource_type.resource_name. For example:

terraform taint aws_instance.web

In this example, the aws_instance.web resource will be marked as tainted in the state file.
Note that terraform taint is deprecated in Terraform v0.15.2 and later; the recommended
alternative is to plan a replacement with terraform apply -replace="<resource_address>".

Tainting is useful in situations where you need to force a resource to be recreated, for
example when you need to change the configuration of a resource that cannot be updated in
place. Tainting can also be used to recover from failed updates, by forcing Terraform to start
again from a known state.

It's important to note that tainting should be used with caution, as it can result in downtime
and data loss if not used correctly. Tainting a resource will cause it to be destroyed and
recreated, which can result in the loss of any data or state associated with the resource.
Therefore, it's important to ensure that you have a backup plan in place before tainting a
resource.

In general, tainting should be used as a last resort, when all other options have been
exhausted. It's recommended to try to update a resource in place before resorting to tainting,
as updating a resource in place is usually faster and safer than recreating it.

splat expression
is a shorthand notation that allows you to reference a subset of elements in a list or set, or a
subset of attributes in a map.

The syntax for a splat expression is <list>[*].<attribute> (the full splat operator), or the older
legacy form <resource>.*.<attribute>. For example, if you have a resource that creates
multiple EC2 instances, you can reference all of their instance IDs like this:

aws_instance.example.*.id
In this example, aws_instance.example is a resource that creates multiple EC2 instances.
The .* syntax tells Terraform to include all instances created by this resource, and the .id
attribute specifies that you want to reference the instance ID of each instance. In current
Terraform versions, the equivalent full splat form is aws_instance.example[*].id.

Here's another example that uses a splat expression to reference a subset of attributes in a
map:

variable "tags" {
  type = map(string)
  default = {
    "Name"        = "my-instance"
    "Environment" = "dev"
    "Owner"       = "me"
  }
}

output "tags" {
  value = { for k, v in var.tags : k => v if k != "Owner" }
}

output "owner_tag" {
  value = var.tags["Owner"]
}

In this example, the output "tags" block uses a for expression to iterate over the tags map
variable and create a new map that includes all tags except the "Owner" tag. The output
"owner_tag" block uses a regular map access expression to reference the "Owner" tag
directly.

Splat expressions are a useful feature of Terraform that allow you to reference specific
elements or attributes in complex data structures. By using splat expressions, you can write
more concise and readable code, and avoid the need for manual iteration or filtering.

terraform graph
is a command in the Terraform CLI (Command Line Interface) that generates a visual
representation of the Terraform resource dependency graph. The resource dependency graph
is a directed acyclic graph (DAG) that shows the relationships between resources in a
Terraform configuration.

The terraform graph command generates a DOT file that can be used with Graphviz to create
a visualization of the dependency graph. The DOT file can be redirected to a file and then
passed to the dot command, which is part of Graphviz, to generate a PNG, PDF, or other
image format file.

The terraform graph command can be useful for understanding the relationships between
resources in a complex Terraform configuration, and can be used to identify potential issues,
such as circular dependencies, before applying changes to the infrastructure
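For example, assuming Graphviz is installed, the graph can be generated and rendered like this:

```shell
# Write the dependency graph in DOT format to a file
terraform graph > graph.dot

# Render the DOT file to a PNG image using Graphviz
dot -Tpng graph.dot -o graph.png
```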

Saving a Terraform plan to a file
involves using the -out option when running the terraform plan command. The -out option
allows you to specify a file name where the plan will be saved.

Here is an example of how to save a Terraform plan to a file named terraform_plan.out:

terraform plan -out=terraform_plan.out

When you run this command, Terraform will create a binary file containing the execution plan
for your infrastructure changes. This file can be used later to apply the changes, and it can also
be shared with other team members or stored in version control as a record of planned
changes.

To apply the plan saved in the file, you can use the terraform apply command with the -
input=false option and the path to the plan file:
terraform apply -input=false terraform_plan.out

This will apply the infrastructure changes defined in the plan file without prompting for
confirmation.

terraform output
is a command in the Terraform CLI (Command Line Interface) that displays the values of
output variables defined in a Terraform configuration. Output variables are values that are
computed by Terraform after it applies the configuration and they are useful for extracting
information about the resources created by a Terraform configuration.

The terraform output command can be used to view the values of output variables in the
terminal or to redirect the output to a file. Here is an example of how to use the terraform
output command:

terraform output
This command will display a list of all output variables defined in the configuration and their
values.

You can also use the -json option to output the variables in JSON format, which can be useful
for scripting and automation. Here is an example:

terraform output -json

This command will display the output variables in JSON format.

You can also specify a specific output variable to display by providing its name as an argument
to the terraform output command. Here is an example:

terraform output my_output_variable

This command will display the value of the my_output_variable output variable.

Terraform settings
can be configured using several methods, including environment variables, command-line
options, and Terraform configuration files.

Here are some of the common ways to configure Terraform settings:

Environment Variables: Terraform uses several environment variables to configure settings
such as AWS credentials, Terraform state storage, and log levels. Some examples of
environment variables are AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and TF_LOG.

Command-Line Options: Terraform CLI commands support a variety of command-line options
that can be used to configure settings such as the backend configuration, the state file
location, and the log level. For example, the terraform apply command supports the -state
option to specify the location of the state file.

Terraform Configuration Files: Terraform configuration files allow you to define provider
configuration, backend configuration, variable definitions, and many other settings.
Configuration files are written in HashiCorp Configuration Language (HCL) or JSON format. The
configuration files can be organized into modules and combined to form a complete
infrastructure configuration.
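For example, a terraform settings block commonly pins the Terraform version and declares required providers (the version constraints below are illustrative):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"   # illustrative constraint
    }
  }
}
```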

Terraform Modules: Terraform modules can be used to define reusable infrastructure
configurations that can be shared across multiple Terraform projects. Modules can include
provider configuration, backend configuration, and variable definitions, among other settings.

Overall, Terraform settings can be configured in a flexible and customizable way to meet the
needs of different use cases and environments. It is important to understand the different
configuration options and to choose the appropriate method based on the requirements of
your infrastructure.

Dealing with large infrastructure
in Terraform can be a complex task, but there are a number of best practices and strategies
that can help make the process more manageable. Here are some key considerations:

Modularize your code: As your infrastructure grows, it's important to break your Terraform
code into smaller, more manageable pieces. This allows you to reuse code and makes it easier
to maintain and update your infrastructure over time.

Use remote state: When working with large infrastructure, it's important to use remote state
storage, such as Amazon S3 or Terraform Cloud, to store your Terraform state files. This
allows multiple team members to work on the same infrastructure and makes it easier to
manage changes to your infrastructure over time.
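As a sketch, remote state on Amazon S3 is configured with a backend block (the bucket name and key below are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder bucket name
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
```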

Separate production and non-production environments: To minimize risk and ensure
stability, it's important to separate your production and non-production environments. This
allows you to test changes in a controlled environment before deploying them to production.

Use Terraform modules: Terraform modules allow you to package infrastructure resources
and configurations into reusable, shareable modules. This can help reduce complexity and
make it easier to manage large infrastructure.

Automate testing and deployment: To ensure that your infrastructure changes are successful
and don't cause unintended consequences, it's important to automate testing and
deployment. This can include running automated tests, using continuous integration and
deployment tools, and using infrastructure-as-code pipelines to manage your infrastructure
changes.

By following these best practices and strategies, you can effectively manage large
infrastructure in Terraform and ensure that your infrastructure remains stable and secure
over time.

The zipmap function
is a built-in function that takes two lists and returns a map where the values in the first list are
the keys and the values in the second list are the corresponding values.
Here's an example usage of zipmap:

locals {
  regions           = ["us-west-1", "us-west-2", "us-east-1"]
  instance_types    = ["t2.micro", "t2.small", "t2.medium"]
  instance_type_map = zipmap(local.regions, local.instance_types)
}

In this example, zipmap is used to create a map called instance_type_map where the keys are
the regions in the regions list and the values are the instance types in the instance_types list.

The resulting instance_type_map would look like this:

{
  "us-west-1" = "t2.micro"
  "us-west-2" = "t2.small"
  "us-east-1" = "t2.medium"
}

This function can be particularly useful when you need to create a map where the keys and
values are based on the elements in two different lists. It can also be used in combination with
other functions, such as flatten, to transform data structures in more complex ways.

Data-type set
is a collection data type that represents an unordered set of unique values. A set can be
created from a list with the toset function, and sets can be combined or filtered with built-in
functions such as setunion, setintersection, and setsubtract.

Here's an example usage of a set in Terraform:

locals {
  tags                = ["web", "app", "database", "dev"]
  selected_tags       = ["web", "database", "testing"]
  valid_tags          = toset(local.tags)
  selected_valid_tags = setintersection(toset(local.selected_tags), local.valid_tags)
}

In this example, we define a set called valid_tags that contains all the valid tags. We then
create another set called selected_valid_tags that contains only the tags that are both valid
and selected.
The resulting selected_valid_tags set would look like this:

{
"web",
"database"
}
One important thing to note about sets is that they are unordered and cannot contain
duplicate elements. If you need to maintain a specific order or allow duplicates, you may need
to use a different data type, such as a list or a map.

The for_each meta-argument
in Terraform is used to create multiple instances of a resource or module based on a set or
map of values. It allows you to create multiple resources with a single block of code and can
be used to simplify your configuration and make it more maintainable.

Here's an example usage of for_each in Terraform:

locals {
  instances = {
    "web-1" = {
      instance_type = "t2.micro"
      subnet_id     = "subnet-1234"
    }
    "web-2" = {
      instance_type = "t2.small"
      subnet_id     = "subnet-5678"
    }
  }
}

resource "aws_instance" "web" {
  for_each = local.instances

  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = each.value.instance_type
  subnet_id     = each.value.subnet_id

  tags = {
    Name = each.key
  }
}
In this example, we define a map called instances that contains the instance types and subnet
IDs for two web instances. We then use the for_each meta-argument to create two
aws_instance resources based on the values in the instances map.

The resulting aws_instance resources would have the following names and attributes:

aws_instance.web["web-1"]:
instance_type = "t2.micro"
subnet_id = "subnet-1234"
tags.Name = "web-1"

aws_instance.web["web-2"]:
instance_type = "t2.small"
subnet_id = "subnet-5678"
tags.Name = "web-2"

Using for_each can be particularly useful when you need to create multiple instances of a
resource or module with different configurations, but want to avoid duplicating the code.
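for_each also accepts a set of strings, in which case each.key and each.value are identical; a minimal sketch (bucket names are illustrative):

```hcl
resource "aws_s3_bucket" "logs" {
  for_each = toset(["alpha", "beta"])

  bucket = "example-logs-${each.key}" # creates aws_s3_bucket.logs["alpha"] and ["beta"]
}
```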

DRY (Don't Repeat Yourself)
is a programming principle that emphasizes the importance of avoiding duplication in code.
The idea behind DRY is that every piece of knowledge or logic should have a single,
unambiguous representation within a system. In other words, code should be organized in a
way that minimizes duplication and promotes reusability.

The benefits of following the DRY principle include:

Reduced maintenance: When code is duplicated, it can be difficult to make changes without
introducing errors or inconsistencies. By minimizing duplication, you can reduce the amount
of code that needs to be maintained, which can make it easier to add new features or fix bugs.

Improved readability: Code that follows the DRY principle is typically easier to read and
understand because it is more concise and focused. This can make it easier for developers to
understand how the system works and how different components interact with each other.

Increased efficiency: When code is organized according to DRY principles, it is often more
efficient because it can be reused in different parts of the system. This can help to reduce
development time and increase productivity.

To apply DRY principles in practice, developers should look for opportunities to reuse code
and minimize duplication. This can involve creating reusable functions, modules, or libraries
that can be used across different parts of the system. It can also involve organizing code in a
way that promotes modularity and encapsulation, so that different parts of the system can be
developed and tested independently.

Overall, the DRY principle is an important best practice in software development that can help
to improve code quality, reduce development time, and enhance maintainability.

centralized structure
refers to a configuration setup where multiple teams or projects share a single Terraform
state file or a set of state files that are stored in a central location. This approach is often used
in large organizations or environments with many teams that are responsible for different
parts of the infrastructure.

The benefits of using a centralized structure in Terraform include:

Improved collaboration: With a centralized Terraform state, teams can more easily
collaborate and coordinate their infrastructure changes. Instead of managing separate state
files and worrying about conflicts, teams can work together on a shared infrastructure plan.

Greater visibility: A centralized Terraform state can provide greater visibility into the overall
infrastructure and help teams identify dependencies and potential conflicts before they occur.
This can be especially valuable in complex environments with many interrelated components.

Enhanced consistency: By using a centralized Terraform configuration, teams can ensure that
infrastructure resources are consistently provisioned and configured across the entire
organization. This can help to prevent inconsistencies and ensure that infrastructure is
compliant with organizational standards and policies.

To implement a centralized structure in Terraform, you can use backends such as Terraform
Cloud, Amazon S3 (with DynamoDB for locking), or HashiCorp Consul to store and manage your
Terraform state files. You can also use role-based access control (RBAC) and other security
measures to ensure that only authorized users can access and modify the Terraform state.
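As a sketch of centralized state on AWS (bucket and table names are illustrative), each team's configuration points at the shared location through a backend block:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-central-tfstate"                 # shared, versioned S3 bucket
    key            = "teams/networking/terraform.tfstate" # per-team state path
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                    # enables state locking
    encrypt        = true
  }
}
```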

However, it's important to note that a centralized structure can also introduce some
challenges, such as increased complexity and potential security risks. Therefore, it's important
to carefully consider the benefits and drawbacks of this approach before implementing it in
your organization.

module
is a reusable configuration unit that encapsulates resources, input variables, output values,
and other Terraform constructs. Modules enable you to abstract and reuse complex
infrastructure configurations, making it easier to manage and maintain your infrastructure as
code.

A module typically consists of a set of Terraform files that define resources and other
constructs, as well as input and output variables that allow the module to be customized and
integrated with other modules or configurations.

To use a module in Terraform, you can call it from another configuration using the module
block, passing any necessary input variables. Here's an example:

module "my_module" {
  source = "github.com/myuser/my-module"

  variable1 = "value1"
  variable2 = "value2"
}

In this example, we're calling a module named my_module, which is located in a GitHub
repository. We're passing two input variables to the module, variable1 and variable2, and
providing their values.

Once you have called a module, you can reference its output values using the
module.<name>.<output> syntax. For example:

resource "aws_instance" "my_instance" {
  ami           = module.my_module.ami_id
  instance_type = module.my_module.instance_type
}

In this example, we're using output values from the my_module module to provision an AWS
instance. We're referencing the ami_id and instance_type outputs using the
module.my_module prefix.

Modules can be published and shared through the Terraform Registry, a public repository of
modules that can be used by anyone. You can also create and use private modules within your
organization.
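A registry module is referenced by its registry address, usually with a version constraint; for example, using the widely used terraform-aws-modules/vpc module (input values shown are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "example-vpc"
  cidr = "10.0.0.0/16"
}
```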
By using modules in Terraform, you can simplify the management and configuration of
complex infrastructure, promote code reuse and consistency, and reduce errors and
duplication.

variable and terraform module
In Terraform, variables are used to define input values for a configuration or a module. When
you define a variable, you are creating a placeholder for a value that can be specified when
you apply the configuration or module. Variables can be used to make a configuration or
module more flexible and reusable, as they allow you to parameterize values that might
change based on different environments or use cases.

When you create a module in Terraform, you can define input variables that are used to
configure the module. For example, you might create a module that provisions an AWS EC2
instance, and define variables for the instance type, the AMI ID, the security group, and other
parameters. You can then use these variables within the module to provision the resources.

Here's an example of a module that provisions an AWS EC2 instance, using input variables for
the instance type and AMI ID:

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "ami_id" {
  type    = string
  default = "ami-0c55b159cbfafe1f0"
}

resource "aws_instance" "my_instance" {
  ami           = var.ami_id
  instance_type = var.instance_type

  # other resource configurations...
}

In this example, we've defined two input variables, instance_type and ami_id, with default
values. These variables are used within the aws_instance resource block to provision the EC2
instance.
To use a module with input variables, you can call the module and pass values for the
variables:

module "my_module" {
  source = "./modules/my_module"

  instance_type = "t2.large"
  ami_id        = "ami-0123456789abcdef0"
}

In this example, we're calling a module named my_module and passing values for the
instance_type and ami_id variables.

Using variables and modules together in Terraform allows you to create reusable and flexible
infrastructure configurations, which can be customized for different environments or use
cases. It also promotes consistency and reduces duplication of code.
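Input variables can also carry validation blocks that reject bad values at plan time; a minimal sketch building on the instance_type variable above:

```hcl
variable "instance_type" {
  type    = string
  default = "t2.micro"

  validation {
    condition     = can(regex("^t2\\.", var.instance_type))
    error_message = "Only t2 instance types are allowed in this example."
  }
}
```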

Using Locals with Modules
locals are used to define intermediate values that are derived from other values within a
configuration or a module. When you define a local value, you are creating a reusable and
maintainable expression that can be used throughout the configuration or module.

When using modules in Terraform, you can define local values within the module and
reference them in the module's resources or outputs. This can help to simplify the module
and make it more maintainable, as you can define complex expressions once and reuse them
throughout the module.

Here's an example of a module that provisions an AWS EC2 instance, using local values to
define the instance name and tags:

variable "instance_name" {
  type    = string
  default = "my-instance"
}

variable "tags" {
  type    = map(string)
  default = {}
}

locals {
  instance_name = var.instance_name
  tags = merge(var.tags, {
    Name        = var.instance_name
    Environment = "prod"
  })
}

resource "aws_instance" "my_instance" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  tags          = local.tags

  # other resource configurations...
}

In this example, we've defined two input variables, instance_name and tags, which are used
to configure the EC2 instance. We've also defined two local values, instance_name and tags,
which are derived from the input variables (note that a variable's default cannot reference
another variable, so the combination happens in locals).

The instance_name local value simply references the var.instance_name input variable. The
tags local value merges the var.tags input variable with additional Name and Environment
tags.

By defining the local values, we can reference them throughout the module without repeating
the expressions. For example, we can use the instance_name local value to name the EC2
instance (aws_instance has no name argument, so the name is conventionally set through the
Name tag):

resource "aws_instance" "my_instance" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = merge(local.tags, {
    Name = local.instance_name
  })

  # other resource configurations...
}
We can also reference the local values in module outputs, if needed:

output "instance_name" {
  value = local.instance_name
}

output "tags" {
  value = local.tags
}

By using locals with modules in Terraform, you can simplify and modularize your
infrastructure configurations, making them easier to maintain and reuse.

module outputs
are used to define values that can be accessed by other configurations or modules. When you
define an output in a module, you are creating a way to expose a value from the module's
resources or locals, so that it can be used by other parts of the Terraform configuration.

Module outputs can be used to communicate information between different parts of the
infrastructure, or to pass information between different stages of a pipeline. They can also be
used to create reusable modules that can be customized for different use cases.

Here's an example of a module that provisions an AWS EC2 instance, using an output to
expose the instance's public IP address:

resource "aws_instance" "my_instance" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # other resource configurations...
}

output "instance_id" {
  value = aws_instance.my_instance.id
}

output "public_ip" {
  value = aws_instance.my_instance.public_ip
}

In this example, we've defined an output named public_ip that references the public_ip
attribute of the aws_instance.my_instance resource, and an output named instance_id that
exposes the instance ID. These outputs can be accessed by other parts of the configuration or
by other modules.

To use an output, you can call the module and reference the output value:

module "my_module" {
  source = "./modules/my_module"

  # other input configurations...
}

resource "aws_eip" "my_eip" {
  vpc      = true
  instance = module.my_module.instance_id

  # other resource configurations...
}

In this example, we're calling a module named my_module and referencing the instance_id
output value. We're using it to configure an Elastic IP that is associated with the EC2
instance (the aws_eip instance argument expects an instance ID, not an IP address).

By using outputs in Terraform modules, you can create reusable and flexible modules that can
be customized and composed in different ways. Outputs can also help to simplify the
configuration and promote consistency, by providing a standard way to access and use
module values.

Terraform workspace
is a feature of Terraform, an open-source infrastructure as code (IAC) tool. It allows users to
manage multiple sets of infrastructure resources within a single Terraform configuration. A
workspace is a named container for a specific state of the infrastructure managed by
Terraform. It allows you to create, manage, and switch between multiple instances of the
same infrastructure stack.

Workspaces enable you to keep multiple environments separate, such as development, staging,
and production. Each workspace has its own state file, which means that you can make changes
to the infrastructure in one workspace without affecting other environments. For example, you
could create separate workspaces for testing and production environments, each with its own
state data, variable values, and resources.

With Terraform workspace, you can create, delete, and switch between workspaces using
simple commands, such as terraform workspace new, terraform workspace select, and
terraform workspace delete. This makes it easy to manage multiple environments and keep
them organized within a single Terraform configuration.
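Inside a configuration, the current workspace name is exposed as terraform.workspace, which is handy for per-environment sizing and naming; a sketch:

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = terraform.workspace == "prod" ? "t2.large" : "t2.micro"

  tags = {
    Name = "app-${terraform.workspace}" # e.g. "app-dev" in the dev workspace
  }
}
```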

Terraform backend
is a configuration setting that determines where the state file is stored when you run
Terraform apply. The state file is used to store information about the resources that
Terraform creates and manages in your cloud infrastructure.
The backend can be a local file, a remote file, or a service. Terraform supports various
backends, including Amazon S3, Azure Blob Storage, Google Cloud Storage, and HashiCorp
Consul.

Using a remote backend can provide several benefits, including improved collaboration and
increased resilience. With a remote backend, multiple users can work together on the same
infrastructure project, and changes made by one user are visible to everyone else.
Additionally, using a remote backend can help protect against data loss or corruption because
the state file is stored separately from the local machine running Terraform.
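As a sketch, the backend is declared inside the terraform block; here with Azure Blob Storage (all names are illustrative):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstateaccount"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate" # blob name for this state
  }
}
```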

file locking
is a mechanism to prevent multiple instances of Terraform from attempting to modify the
same state file at the same time. This is important because Terraform state files are used to
store the current state of the infrastructure being managed by Terraform, and modifying it
concurrently can lead to conflicts and data corruption.

Terraform uses a file locking mechanism to prevent multiple instances of Terraform from
modifying the state file at the same time. When Terraform begins to modify the state file, it
creates a lock file in the same directory with the name ".terraform.tfstate.lock.info". This lock
file contains information about the Terraform process that has acquired the lock, such as the
process ID and a timestamp.

If another instance of Terraform attempts to modify the state file while the lock file exists, it
will be unable to acquire the lock and will fail with an error message. Once the Terraform
process that holds the lock completes its modifications to the state file, it releases the lock
and removes the lock file.

File locking in Terraform is important for ensuring the integrity of the state file and preventing
conflicts between multiple Terraform processes that may be running concurrently. It is also
important to ensure that the state file is stored in a location that is accessible by all Terraform
processes that need to access it, such as a shared file system or a remote object store.

Force unlocking the Terraform state
should be used as a last resort, as it can lead to data corruption and other issues. However, if
you are unable to release a lock on the state file due to a failed process or other issues, you
may need to force unlock the state.
To force unlock the Terraform state, you can use the terraform force-unlock command with
the -force flag. This command takes the lock ID as an argument and releases the lock without
performing any checks to ensure that the lock is not currently held by another process.

Here's an example of how to force unlock the state:

terraform force-unlock -force LOCK_ID

Where LOCK_ID is the ID of the lock that you want to release. The lock ID is shown in the
error message that Terraform prints when it fails to acquire the lock.

It is important to note that forcing the unlock of the state file can lead to data corruption if
the state file is modified concurrently by multiple processes. Therefore, it should only be used
as a last resort and after ensuring that no other processes are modifying the state file. If you
are unsure about the consequences of force unlocking the state file, you should reach out to
the Terraform community or seek professional support.

Terraform state management
is a critical aspect of managing infrastructure with Terraform. The Terraform state file is used
to store the current state of the infrastructure being managed by Terraform, including the
resources that have been created, their configuration, and their dependencies.

There are several important aspects to consider when managing Terraform state:

State storage: The Terraform state file should be stored in a location that is accessible to all
instances of Terraform that need to access it. This can be a local file system or a remote object
store, such as Amazon S3 or Google Cloud Storage.

Locking: Terraform uses file locking to prevent multiple instances of Terraform from modifying
the state file at the same time. This is important to prevent conflicts and data corruption.

State backups: It is important to back up the Terraform state file regularly to ensure that it
can be recovered in the event of data loss or corruption.

State migration: If you change where or how the state is stored, you may need to migrate the
state. Terraform upgrades the state file format automatically when you run a newer Terraform
version, and backend changes can be migrated with the terraform init -migrate-state
command.

Remote state: When working with remote state storage, you can configure Terraform to
encrypt and decrypt the state file to provide additional security.
Overall, managing Terraform state is a critical aspect of using Terraform to manage
infrastructure. By following best practices for state storage, locking, backups, migration, and
security, you can ensure the reliability and integrity of your infrastructure deployments.

Terraform state modification
is an important aspect of managing infrastructure with Terraform. The Terraform state file
stores the current state of the infrastructure being managed by Terraform, and modifications
to this file can impact the infrastructure in a significant way. Here are some important aspects
to consider when modifying Terraform state:

Use the terraform state command: The terraform state command is the recommended way
to modify the Terraform state file. This command provides a set of subcommands that allow
you to modify individual resources in the state file, such as adding or removing a resource,
updating its configuration, or modifying its dependencies.

Use Terraform modules: Terraform modules provide a way to encapsulate infrastructure
resources and configurations into reusable and shareable components. When modifying the
Terraform state, it's best to modify the configuration of a module and apply the changes,
rather than modifying the state file directly.

Backup the state file: Before making any modifications to the Terraform state file, it's
important to back it up. This ensures that you can recover the previous state in the event of
data loss or corruption.

Use version control: It's important to keep the Terraform configuration and state files under
version control, such as with Git. This provides a history of changes and allows you to revert to
a previous version if necessary.

Be cautious with state file modifications: Modifying the Terraform state file can impact the
infrastructure in significant ways. It's important to review and test any modifications
thoroughly before applying them to production environments.

Overall, modifying Terraform state should be done with caution and using the recommended
tools and best practices. By following these guidelines, you can ensure the reliability and
consistency of your infrastructure deployments.

There are multiple sub-commands that can be used with terraform state; these include:

State Sub Command    Description

list                 List resources within the Terraform state file.

mv                   Move an item within the Terraform state.

pull                 Manually download and output the state from remote state.

push                 Manually upload a local state file to remote state.

rm                   Remove items from the Terraform state.

show                 Show the attributes of a single resource in the state.

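The sub-commands above are invoked like this (resource addresses are illustrative):

```
terraform state list
terraform state show aws_instance.web
terraform state mv aws_instance.web aws_instance.web_server
terraform state rm aws_instance.web_server
```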
Terraform remote state
is a feature that allows you to store the Terraform state file in a remote location, such as a
cloud-based object store or a version control system. By using remote state, you can share the
state file between multiple users or teams, and ensure that it is always accessible and up-to-
date.

The remote state feature is particularly useful in scenarios where multiple users or teams are
working on the same infrastructure, or when the infrastructure is managed by a CI/CD
pipeline. By storing the state file in a remote location, you can ensure that all users and
processes have access to the same state information, and that there are no conflicts or data
inconsistencies.

There are several benefits to using remote state:

Collaboration: Remote state allows multiple users or teams to collaborate on the same
infrastructure, without the risk of conflicts or data inconsistencies.

Security: Remote state can be encrypted to ensure that the state file is secure and protected
from unauthorized access.

Availability: By storing the state file in a remote location, you can ensure that it is always
accessible, even if the local machine or storage device fails.

Versioning: Many remote storage systems provide versioning features, which allow you to
track changes to the state file over time.
To use remote state in Terraform, you need to configure a backend that specifies the remote
storage location and credentials. Terraform supports several backend types, including Amazon
S3, Google Cloud Storage, Azure Storage, and HashiCorp's own Terraform Cloud.

Overall, Terraform remote state is a powerful feature that provides a way to share and
collaborate on infrastructure state information in a secure and reliable way.
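Consumers read another configuration's remote state outputs through the terraform_remote_state data source; a sketch assuming an S3 backend and a vpc_id output published by a networking configuration (names are illustrative):

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-central-tfstate"
    key    = "networking/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_subnet" "app" {
  vpc_id     = data.terraform_remote_state.network.outputs.vpc_id
  cidr_block = "10.0.1.0/24"
}
```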

Terraform import
is a command that allows you to import an existing infrastructure resource into Terraform
state. This can be useful if you have existing resources that were not created using Terraform,
but you want to start managing them with Terraform.

Here are the basic steps to use the terraform import command:

Define the resource in Terraform: First, you need to define the resource in Terraform
configuration, using the same resource type and name as the existing resource.

Identify the resource ID: Next, you need to identify the unique identifier of the existing
resource, such as the resource ID, ARN, or other identifier that is specific to the resource type.

Run the terraform import command: Finally, you can run the terraform import command,
specifying the Terraform resource type, name, and the identifier of the existing resource.
Terraform will then import the existing resource into its state file, so that you can manage it
using Terraform.

Here's an example command to import an AWS S3 bucket into Terraform state:

terraform import aws_s3_bucket.example_bucket my-existing-bucket

In this example, aws_s3_bucket.example_bucket is the Terraform resource type and name,
and my-existing-bucket is the identifier of the existing S3 bucket.
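The matching resource block for such an import might look like this (a sketch; after importing, run terraform plan and reconcile any differences between the configuration and the real bucket):

```hcl
resource "aws_s3_bucket" "example_bucket" {
  bucket = "my-existing-bucket"
}
```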

It's important to note that the terraform import command only imports the resource into
Terraform state, and does not create or modify the resource itself. After importing the
resource, you can modify its configuration in Terraform and apply the changes to update the
resource as needed.

Overall, terraform import can be a useful command for incorporating existing infrastructure
resources into Terraform state, and enabling you to manage them using Terraform's
infrastructure-as-code approach.
How to Handle Access & Secret Keys the Right Way in
Providers in Terraform
When working with Terraform providers that require access and secret keys, it's important to
handle these credentials securely to avoid any unauthorized access or data breaches. Here are
some best practices for handling access and secret keys in Terraform:

Use environment variables: Instead of hardcoding your access and secret keys in your
Terraform configuration files, you can use environment variables to pass them to Terraform.
This way, you can keep your credentials separate from your code and avoid accidentally
committing them to source control. You can set environment variables in your shell or use a
tool like Vault or AWS Secrets Manager to manage them.
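For the AWS provider specifically, the standard variables are AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, which the provider reads automatically when no keys appear in the configuration (the values shown are placeholders):

```
export AWS_ACCESS_KEY_ID="AKIA...EXAMPLE"       # placeholder
export AWS_SECRET_ACCESS_KEY="wJalr...EXAMPLE"  # placeholder
terraform plan    # the provider block needs no access_key/secret_key arguments
```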

Use Terraform variables: If you don't want to use environment variables, you can use
Terraform variables to store your access and secret keys. This way, you can pass your
credentials to your Terraform configuration files at runtime. You can define variables in a
separate file or pass them as command-line arguments.

Use a credential file: Some providers allow you to store your access and secret keys in a
credential file. This file can be encrypted or stored in a secure location to prevent
unauthorized access. You can then reference the credential file in your Terraform
configuration files.

Use IAM roles: If you're using AWS, you can use IAM roles to grant access to your resources
instead of using access and secret keys. This way, you can avoid managing and storing
credentials in your Terraform configuration files.

Use least privilege: When granting access to your resources, always use the principle of least
privilege. This means granting only the permissions that are necessary for the resources to
function properly. You can use IAM policies or other access control mechanisms to achieve
this.

Rotate your credentials regularly: It's important to rotate your access and secret keys
regularly to reduce the risk of them being compromised. You can use tools like AWS Key
Management Service or Vault to manage key rotation.

By following these best practices, you can ensure that your access and secret keys are handled
securely in Terraform and reduce the risk of unauthorized access or data breaches.
Terraform Provider UseCase - Resources in Multiple
Regions
One common use case for Terraform providers is managing resources across multiple regions.
Let's say you have a web application that needs to be deployed in multiple regions to provide
low latency access to users. Each region may have its own set of resources, such as virtual
machines, load balancers, databases, and storage accounts.

To manage these resources with Terraform, you can use a provider that supports multiple
regions, such as the AWS or Azure providers. Here's how you can use Terraform to manage
resources across multiple regions:

Define the provider: First, you need to define the provider in your Terraform configuration
file. For example, if you're using AWS, you can define the provider as follows:

provider "aws" {
  region = "us-east-1"
}

This sets the default region to US East (N. Virginia). To target additional regions from the
same configuration, you define extra provider blocks with an alias, and select one per
resource with the provider meta-argument.

Define the resources: Next, you can define the resources that you want to create or manage
in each region, selecting the appropriate provider for each one. For example, to create an EC2
instance in two regions:

provider "aws" {
  alias  = "us_west_2"
  region = "us-west-2"
}

resource "aws_instance" "web_east" {
  # Uses the default provider (us-east-1)
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  tags = {
    Name = "Web Server"
  }
}

resource "aws_instance" "web_west" {
  provider      = aws.us_west_2
  ami           = "ami-0123456789abcdef0" # AMI IDs are region-specific
  instance_type = "t2.micro"

  tags = {
    Name = "Web Server"
  }
}

This creates two EC2 instances, one in US East (N. Virginia) and one in US West (Oregon), using
the specified AMI and instance type.

Apply the configuration: Finally, you can apply the Terraform configuration to create or
manage the resources in each region. For example, you can run the following command to
apply the configuration:

terraform apply

This will create or update the resources in each region according to the specified
configuration.

By using Terraform to manage resources across multiple regions, you can ensure consistency
and repeatability across your deployments. You can also easily modify or remove resources as
needed, and track changes to your infrastructure over time.

Handling Multiple AWS Profiles with Terraform Providers
When working with AWS and Terraform, it's common to have multiple AWS profiles to
manage different environments, such as development, staging, and production. Each profile
may have its own set of AWS access and secret keys, regions, and other settings.

Here's how you can handle multiple AWS profiles with Terraform providers:

Define the AWS provider: In your Terraform configuration file, you can define the AWS
provider and specify the profile to use. For example:

provider "aws" {
profile = "default"
region = "us-west-2"
}

This sets the default AWS profile to "default" and the default region to "us-west-2". If you
have multiple profiles, you can specify a different profile for each environment.
Configure the AWS CLI: To switch between AWS profiles, you can use the AWS CLI to configure
your credentials and settings. You can run the following command to list your configured
profiles:

aws configure list-profiles

This will show a list of your configured profiles, such as "default", "dev", "staging", and "prod".
To switch to a different profile, you can run the following command:

export AWS_PROFILE=dev

This sets the current profile to "dev" for the current shell session.

Use environment variables: Another option is to use environment variables to pass your AWS
access and secret keys to Terraform. For example:

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = "us-west-2"
}
You can set the AWS access and secret keys as environment variables for each profile, and
reference them in your Terraform configuration files using variables.

Use a configuration file: You can also use a separate configuration file for each profile to store
your AWS access and secret keys and other settings. For example, you can create a file named
"dev.tfvars" with the following content:

aws_access_key = "YOUR_ACCESS_KEY"
aws_secret_key = "YOUR_SECRET_KEY"
region = "us-west-2"

You can then reference this file in your Terraform configuration file using the "-var-file"
option:

terraform apply -var-file=dev.tfvars

This will use the settings from the "dev.tfvars" file for the current Terraform run.

By using these methods, you can manage multiple AWS profiles and configurations with
Terraform, and ensure that your AWS access and secret keys are handled securely.

sensitive parameter
is used to mask sensitive or confidential information, such as passwords, API keys, and
tokens. When a parameter is marked as sensitive, Terraform will redact its value in the
console output and logs to prevent accidental exposure; note that the value is still stored in
plain text in the state file, so the state itself must also be protected.

Here's how you can mark a parameter as sensitive in Terraform:

Define the parameter: First, you need to define the parameter in your Terraform
configuration file. For example:

variable "password" {
  type        = string
  description = "The password for the database"
  sensitive   = true
}

This defines a variable named "password" with the type "string", a description, and the
"sensitive" attribute set to true.

Use the parameter: Next, you can use the parameter in your Terraform resources, data
sources, or modules. For example:

resource "aws_db_instance" "example" {
  # ...
  master_password = var.password
  # ...
}
This sets the master password for the AWS RDS instance to the value of the "password"
variable.

Protect the parameter: To protect the sensitive parameter, you need to take extra
precautions to ensure that it is not accidentally exposed. For example:
Avoid displaying the value of the parameter in console output or logs.
Use a secure mechanism to store and retrieve the parameter value, such as a secrets
management system or environment variables.
Restrict access to the Terraform state file and configuration files to authorized users only.
By using sensitive parameters in Terraform, you can ensure that your sensitive information is
protected and not accidentally exposed.
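Note that sensitivity propagates: an output derived from a sensitive value must itself be marked sensitive, or Terraform will refuse to produce the plan. A minimal sketch:

```hcl
# Outputs derived from sensitive values must be marked sensitive as well;
# the value is then redacted in the human-readable CLI output.
output "db_password" {
  value     = var.password
  sensitive = true
}
```

The value is only masked in the default output; it can still be retrieved explicitly (for example with `terraform output -raw db_password`), and it is stored in plain text in the state file, which is why restricting access to state remains important.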

HashiCorp Vault
is a popular open-source tool used for securely storing and accessing secrets, such as
passwords, API keys, and tokens. It provides a centralized place to store and manage secrets,
and allows fine-grained access control to prevent unauthorized access.

Vault offers several key features:

Secure storage: Vault provides a secure storage mechanism for secrets, using strong
encryption and access controls to protect sensitive data.

Dynamic secrets: Vault can generate and manage dynamic secrets for cloud platforms and
databases, reducing the risk of credential theft and misuse.

Auditing and logging: Vault maintains an audit trail of all secret access and changes, and
provides detailed logging and monitoring capabilities.

Fine-grained access control: Vault allows fine-grained access control to secrets, using policies
and roles to restrict access to specific secrets and actions.

Integration with other tools: Vault integrates with popular cloud platforms, databases, and
tools, making it easy to manage secrets across different environments.

Vault supports several authentication methods, including LDAP, GitHub, and Kubernetes, and
can be deployed on-premises or in the cloud. It also provides a comprehensive API and CLI,
making it easy to automate secret management tasks.

Overall, Vault is a powerful tool for managing secrets in modern cloud environments,
providing strong security and access controls for sensitive data.

terraform and vault integration


Terraform can integrate with HashiCorp Vault to securely manage secrets used in your
infrastructure code. By using Vault as a centralized secrets management system, you can
avoid hard-coding sensitive information in your Terraform configuration files or environment
variables, which can reduce the risk of accidental exposure.

Here are the general steps to integrate Terraform with Vault:

Configure Vault: First, you need to configure Vault to create a policy and an associated token
that Terraform can use to access secrets. You can create a policy that grants Terraform read
access to specific secrets, and generate a token with that policy.
Use Vault provider in Terraform: Once Vault is configured, you can use the vault provider in
your Terraform configuration to access secrets. You will need to specify the Vault address, the
authentication method, and the Vault path to the secret. For example:

provider "vault" {
  address = "https://vault.example.com"
  token   = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}

data "vault_generic_secret" "database_creds" {
  path = "secret/database"
}

This defines a Vault provider with the address and token, and retrieves a secret from the path
secret/database using the vault_generic_secret data source.

Use secrets in Terraform resources: Finally, you can use the secrets retrieved from Vault in
your Terraform resources. For example:

resource "aws_db_instance" "example" {
  # ...
  master_username = data.vault_generic_secret.database_creds.data["username"]
  master_password = data.vault_generic_secret.database_creds.data["password"]
  # ...
}

This sets the master username and password for the AWS RDS instance to the values retrieved
from Vault.

By integrating Terraform with Vault, you can manage secrets more securely and efficiently,
reducing the risk of accidental exposure and improving your overall infrastructure security
posture.

dependency lock file
is a file generated by a package or dependency manager to ensure that the exact versions of all dependencies are installed when an application is built or deployed. NPM and Yarn use lock files for JavaScript packages; Terraform has its own, .terraform.lock.hcl, which terraform init creates to pin the exact provider versions (and their checksums) selected for a configuration, and which should be committed to version control.

When a developer installs a package, the package manager creates a dependency tree, which
shows all of the packages that the installed package relies on, and all of the packages that
those packages rely on, and so on. The dependency lock file records the exact versions of all
packages in this tree, along with any transitive dependencies, to ensure that the exact same
versions are installed every time the application is built or deployed.

This is important because different versions of packages can have different functionality or
APIs, and can even contain security vulnerabilities. By using a dependency lock file, developers
can ensure that their applications are built and deployed with the exact same set of
dependencies every time, which can help prevent errors and security issues caused by version
mismatches.
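As a concrete illustration, an entry in Terraform's .terraform.lock.hcl looks roughly like this (the version and hash values are invented for the sketch):

```hcl
provider "registry.terraform.io/hashicorp/aws" {
  version     = "4.67.0"          # exact version selected by terraform init
  constraints = "~> 4.0"          # the constraint declared in the configuration
  hashes = [
    "h1:EXAMPLEHASHVALUE=",       # checksums used to verify the downloaded provider
  ]
}
```

Committing this file ensures every collaborator and CI run installs the same provider builds; `terraform init -upgrade` updates the recorded versions within the declared constraints.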

Terraform Cloud
is a cloud-based service that provides a collaborative environment for infrastructure as code
(IaC) workflows using HashiCorp's Terraform. It allows teams to manage infrastructure
resources across multiple cloud platforms, including AWS, Azure, and Google Cloud, from a
single interface.

Terraform Cloud provides several features, including:

Collaboration: Terraform Cloud allows teams to work collaboratively on infrastructure projects, enabling multiple users to access and modify code simultaneously.

Remote state management: It provides a centralized location for storing infrastructure state
data, which allows teams to easily manage and version their infrastructure.

Secure storage: Terraform Cloud provides secure storage for sensitive data like credentials
and API keys.

Integration with version control: Terraform Cloud integrates with popular version control
systems like GitHub and Bitbucket, enabling teams to manage infrastructure code alongside
application code.

Automation: It automates infrastructure deployments and provides a simple way to version infrastructure configurations, ensuring that changes are applied consistently and reliably.

Role-based access control: Terraform Cloud provides role-based access control (RBAC) to
ensure that users have access to only the resources they need.

Overall, Terraform Cloud is a powerful tool for managing infrastructure as code, enabling
teams to collaborate effectively and efficiently, automate infrastructure deployments, and
maintain the integrity of their infrastructure.
Sentinel
is a policy as code framework developed by HashiCorp that allows teams to enforce policies
on infrastructure as code (IaC) deployments. It integrates with HashiCorp's Terraform, Consul,
Nomad, and Vault to provide policy enforcement capabilities for these tools.

Sentinel allows teams to define policies as code, which can be versioned, tested, and deployed
alongside infrastructure code. Policies can be defined to ensure that infrastructure is deployed
according to best practices, compliance requirements, and organizational policies.

Sentinel works by intercepting requests made by Terraform, Consul, Nomad, or Vault and
checking them against predefined policies. If the request violates a policy, Sentinel can block it
or trigger a notification to the appropriate stakeholders.

Sentinel provides a flexible and customizable framework for defining policies. Policies can be
defined using the Sentinel language, which is a high-level, easy-to-read language specifically
designed for policy definition. Sentinel also provides a policy testing framework that allows
policies to be tested in isolation before they are deployed.

Overall, Sentinel provides a powerful and flexible way to enforce policies on infrastructure as
code deployments. It helps teams ensure that their infrastructure is deployed securely,
reliably, and in compliance with organizational policies and regulations.

remote backend
is a storage location where the state of the infrastructure is stored. Terraform stores the state
of the infrastructure in a file called the state file, which contains information about the
resources that have been created, modified, or destroyed.

By default, Terraform stores the state file on the local file system. However, as the number of
resources and team members involved in a project grows, it becomes more challenging to
manage the state file manually. Remote backends offer an alternative solution to this
problem.

Remote backends allow teams to store the state file in a centralized location, accessible to all
members of the team. This means that team members can work collaboratively and share the
state file, reducing the risk of conflicts or errors.

Terraform supports several types of remote backends, including Amazon S3, Azure Blob
Storage, Google Cloud Storage, HashiCorp Consul, and HashiCorp Terraform Cloud. Each of
these backends provides its own benefits, such as high availability, versioning, access control,
and encryption.

Using a remote backend can improve the reliability and scalability of infrastructure
deployments, as well as simplify the management of the state file. However, it's important to
ensure that the remote backend is secured and appropriately configured to prevent
unauthorized access or modifications to the state file.

Implementing a remote backend involves a few steps:


Choose a remote backend provider: Terraform supports several types of remote backends,
including Amazon S3, Azure Blob Storage, Google Cloud Storage, HashiCorp Consul, and
HashiCorp Terraform Cloud. Choose a provider that meets your needs and that you are
familiar with.

Configure the provider: Each provider has its own configuration settings that must be
specified in the Terraform configuration file. For example, if you choose Amazon S3, you'll
need to specify the S3 bucket name and access credentials.

Initialize the backend: Once the provider is configured, initialize the backend by running the terraform init command with the appropriate backend configuration. For example, if you're using Amazon S3, you'll run terraform init -backend-config="bucket=<bucket_name>" -backend-config="key=<path_to_state_file>".

Migrate the state file: If you're migrating from a local state file to a remote backend, run terraform init again after adding the backend configuration; Terraform detects the existing local state and offers to copy it to the new backend (terraform init -migrate-state forces this prompt). Alternatively, you can upload a local state file manually with the terraform state push command.

Verify the backend: Verify that the backend is working correctly by running Terraform
commands like plan and apply. Terraform should read and write the state file to the remote
backend instead of the local file system.

Secure the backend: Ensure that the remote backend is properly secured and configured to
prevent unauthorized access. This may involve setting up access controls, encryption, and
other security measures.

By following these steps, you can implement a remote backend in Terraform to improve the
reliability, scalability, and management of your infrastructure deployments.
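As a sketch of the configuration step, an S3 backend block might look like this (the bucket, key, and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # placeholder bucket name
    key            = "prod/terraform.tfstate"  # path of the state file within the bucket
    region         = "us-east-1"
    encrypt        = true                      # encrypt state at rest
    dynamodb_table = "terraform-locks"         # optional: DynamoDB table for state locking
  }
}
```

After adding this block, run terraform init to initialize the new backend (and migrate any existing local state when prompted).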

air-gapped environment
is a secure computing environment that is physically isolated from unsecured networks and
the internet. It is typically used in situations where high levels of security are required, such as
in military, government, or financial institutions.

In an air-gapped environment, there is no connection between the secure environment and any unsecured network or system, including the internet. This ensures that data cannot be transferred in or out of the secure environment without explicit physical access.

Air-gapped environments can be created in a variety of ways, such as using physically separate
computers, networks, or storage media that are not connected to external networks or the
internet. Access to the environment is usually restricted to authorized personnel, and data
transfer in and out of the environment is carefully controlled and monitored.

Air-gapped environments are used to protect sensitive data and systems from external threats
such as cyber-attacks, malware, and unauthorized access. However, they can also present
challenges for organizations that need to manage and update software or deploy
infrastructure changes within the environment. Special procedures and tools may be required
to transfer data and updates into the air-gapped environment while maintaining the security
of the environment.
Environment Variables
Terraform refers to a number of environment variables to customize various aspects of its
behavior. None of these environment variables are required when using Terraform, but they
can be used to change some of Terraform's default behaviors in unusual situations, or to
increase output verbosity for debugging.

TF_LOG
Enables detailed logs to appear on stderr which is useful for debugging. For example:

export TF_LOG=trace

To disable, either unset it, or set it to off. For example:

export TF_LOG=off

TF_LOG_PATH
This specifies where the log should persist its output to. Note that even when
TF_LOG_PATH is set, TF_LOG must be set in order for any logging to be enabled. For
example, to always write the log to the directory you're currently running terraform
from:

export TF_LOG_PATH=./terraform.log
TF_INPUT
If set to "false" or "0", causes terraform commands to behave as if the -input=false
flag was specified. This is used when you want to disable prompts for variables that
haven't had their values specified. For example:

export TF_INPUT=0

TF_VAR_name
Environment variables can be used to set variables. The environment variables must
be in the format TF_VAR_name and this will be checked last for a value. For example:

export TF_VAR_region=us-west-1

export TF_VAR_ami=ami-049d8641

export TF_VAR_alist='[1,2,3]'

export TF_VAR_amap='{ foo = "bar", baz = "qux" }'
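These environment variables only take effect for variables that are actually declared in the configuration. Matching declarations for the examples above might look like:

```hcl
variable "region" { type = string }
variable "ami"    { type = string }
variable "alist"  { type = list(number) }  # populated from TF_VAR_alist='[1,2,3]'
variable "amap"   { type = map(string) }   # populated from TF_VAR_amap='{ foo = "bar", baz = "qux" }'
```

An undeclared TF_VAR_name variable is silently ignored, which makes typos in the variable name easy to miss.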

TF_CLI_ARGS and TF_CLI_ARGS_name
The value of TF_CLI_ARGS will specify additional arguments to the command-line. This allows easier automation in CI environments as well as modifying default behavior of Terraform on your own system.

These arguments are inserted directly after the subcommand (such as plan) and
before any flags specified directly on the command-line. This behavior ensures that
flags on the command-line take precedence over environment variables.

For example, the following command:

TF_CLI_ARGS="-input=false" terraform apply -force

is equivalent to manually typing:

terraform apply -input=false -force

The flag TF_CLI_ARGS affects all Terraform commands. If you specify a named
command in the form of TF_CLI_ARGS_name then it will only affect that command. As
an example, to specify that only plans never refresh, you can set
TF_CLI_ARGS_plan="-refresh=false".

The value of the flag is parsed as if you typed it directly to the shell. Double and single
quotes are allowed to capture strings and arguments will be separated by spaces
otherwise.

TF_DATA_DIR
TF_DATA_DIR changes the location where Terraform keeps its per-working-directory
data, such as the current backend configuration.

By default this data is written into a .terraform subdirectory of the current directory,
but the path given in TF_DATA_DIR will be used instead if non-empty.

In most cases it should not be necessary to set this variable, but it may be useful to do
so if e.g. the working directory is not writable.

The data directory is used to retain data that must persist from one command to the
next, so it's important to have this variable set consistently throughout all of the
Terraform workflow commands (starting with terraform init) or else Terraform
may be unable to find providers, modules, and other artifacts.

TF_WORKSPACE
For multi-environment deployments, instead of running terraform workspace select your_workspace to select a workspace, you can set this environment variable. TF_WORKSPACE overrides the current workspace selection.

export TF_WORKSPACE=your_workspace

Using this environment variable is recommended only for non-interactive usage, since
in a local shell environment it can be easy to forget the variable is set and apply
changes to the wrong state.

TF_IN_AUTOMATION
If TF_IN_AUTOMATION is set to any non-empty value, Terraform adjusts its output to
avoid suggesting specific commands to run next. This can make the output more
consistent and less confusing in workflows where users don't directly execute
Terraform commands, like in CI systems or other wrapping applications.

This is a purely cosmetic change to Terraform's human-readable output, and the exact
output differences can change between minor Terraform versions.

TF_REGISTRY_DISCOVERY_RETRY
Set TF_REGISTRY_DISCOVERY_RETRY to configure the max number of request
retries the remote registry client will attempt for client connection errors or 500-range
responses that are safe to retry.

TF_REGISTRY_CLIENT_TIMEOUT
The default client timeout for requests to the remote registry is 10s. TF_REGISTRY_CLIENT_TIMEOUT can be set to a higher value in exceptional circumstances, such as a slow network path to the registry.

export TF_REGISTRY_CLIENT_TIMEOUT=15

TF_IGNORE
If TF_IGNORE is set to "trace", Terraform will output debug messages to display
ignored files and folders. This is useful when debugging large repositories with
.terraformignore files.

export TF_IGNORE=trace
Terraform Cloud CLI Integration
The CLI integration with Terraform Cloud lets you use Terraform Cloud and Terraform
Enterprise on the command line. The integration requires including a cloud block in your
Terraform configuration. You can define its arguments directly in your configuration file or
supply them through environment variables, which can be useful for non-interactive
workflows like Continuous Integration (CI).

Terraform provisioners

The use of Terraform provisioners should generally be minimized and used judiciously. While
provisioners can be useful in certain scenarios, it's important to understand their limitations
and consider alternative approaches whenever possible.

Here are a few reasons why it is recommended to use Terraform provisioners minimally:

1. **Imperative vs. Declarative**: Terraform follows a declarative approach, where you define
the desired state of your infrastructure. Provisioners, on the other hand, introduce imperative
actions, which can make the infrastructure less predictable and harder to manage over time.
It's generally preferable to use Terraform to declare the desired state and rely on other tools
or processes for configuration management or post-provisioning tasks.

2. **Maintainability**: Provisioners can introduce complexity and dependencies on external tools or scripts. This can make the Terraform code harder to maintain and troubleshoot, especially as the infrastructure grows or changes. It's generally better to keep the Terraform code focused on resource creation and management and leverage separate configuration management tools for more advanced provisioning tasks.

3. **Idempotency**: Terraform is designed to be idempotent, meaning that applying the same configuration multiple times should result in the same desired state. However, provisioners often execute actions that may not be idempotent, making it challenging to ensure consistent and predictable results. This can lead to issues with reproducibility and reliability of the infrastructure.

That being said, there are cases where provisioners can be useful, such as when you need to
perform specific actions that are not easily achievable through other means. For example, you
might use provisioners to initialize databases, bootstrap instances, or integrate with external
systems. In such cases, it's important to carefully consider the implications and test
thoroughly to ensure the provisioners behave as expected.

In general, it's recommended to explore alternative approaches whenever possible, such as
leveraging infrastructure-as-code principles, configuration management tools, or cloud-native
services for more advanced provisioning requirements. This helps to maintain the desired
properties of declarative infrastructure management and keep the Terraform codebase
focused on resource provisioning and management.

remote-exec Provisioner
https://developer.hashicorp.com/terraform/language/resources/provisioners/remote-exec

The remote-exec provisioner invokes a script on a remote resource after it is created. This can be used to run a configuration management tool, bootstrap into a cluster, etc. To invoke a local process, see the local-exec provisioner instead. The remote-exec provisioner requires a connection and supports both ssh and winrm.

resource "aws_instance" "web" {
  # ...

  # Establishes connection to be used by all
  # generic remote provisioners (i.e. file/remote-exec)
  connection {
    type     = "ssh"
    user     = "root"
    password = var.root_password
    host     = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "puppet apply",
      "consul join ${aws_instance.web.private_ip}",
    ]
  }
}
Argument Reference
The following arguments are supported:

● inline - This is a list of command strings. The provisioner uses a default shell
unless you specify a shell as the first command (eg., #!/bin/bash). You
cannot provide this with script or scripts.
● script - This is a path (relative or absolute) to a local script that will be copied
to the remote resource and then executed. This cannot be provided with
inline or scripts.
● scripts - This is a list of paths (relative or absolute) to local scripts that will be
copied to the remote resource and then executed. They are executed in the
order they are provided. This cannot be provided with inline or script.

Script Arguments
You cannot pass any arguments to scripts using the script or scripts arguments
to this provisioner. If you want to specify arguments, upload the script with the file
provisioner and then use inline to call it. Example:

resource "aws_instance" "web" {
  # ...

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh args",
    ]
  }
}

What's the difference between Terraform Cloud and Terraform Enterprise?
Terraform Enterprise is offered as a private installation. It is designed to suit the needs
of organizations including more features (audit logging, SSO/SAML), more
customization (private networking), better performance (job scaling), and higher levels
of support. Terraform Cloud is offered as a multi-tenant SaaS platform. It offers a free
tier for getting started, and can accommodate both small businesses and large
organizations.
Which of the features are unique to Terraform Cloud Business Plan?

1. Audit Logging
2. Clustering Functionality
3. Private Network Connectivity

Terraform Core is a statically-compiled binary written in the Go programming language. The compiled binary is the command line tool (CLI) terraform, the entrypoint for anyone using Terraform. The code is open source and hosted at github.com/hashicorp/terraform.

For local state, Terraform stores the workspace states in a directory called
terraform.tfstate.d. This directory should be treated similarly to local-only
terraform.tfstate; some teams commit these files to version control, although using a
remote backend instead is recommended when there are multiple collaborators

alias: Multiple Provider Configurations
You can optionally define multiple configurations for the same provider, and select which one to use on a per-resource or per-module basis. The primary reason for this is to support multiple regions for a cloud platform; other examples include targeting multiple Docker hosts, multiple Consul hosts, etc.

To create multiple configurations for a given provider, include multiple provider blocks with the same provider name. For each additional non-default configuration, use the alias meta-argument to provide an extra name segment. For example:

# The default provider configuration; resources that begin with `aws_` will use
# it as the default, and it can be referenced as `aws`.
provider "aws" {
  region = "us-east-1"
}

# Additional provider configuration for west coast region; resources can
# reference this as `aws.west`.
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}
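A resource then opts into the non-default configuration with the provider meta-argument; resources without it use the default configuration. For example (the AMI IDs are placeholders):

```hcl
# Uses the default (us-east-1) provider configuration.
resource "aws_instance" "east_server" {
  ami           = "ami-049d8641"   # placeholder AMI ID
  instance_type = "t3.micro"
}

# Uses the aliased west-coast configuration.
resource "aws_instance" "west_server" {
  provider      = aws.west
  ami           = "ami-0735c191"   # placeholder AMI ID
  instance_type = "t3.micro"
}
```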

Command: state mv
The main function of Terraform state is to track the bindings between resource
instance addresses in your configuration and the remote objects they represent.
Normally Terraform automatically updates the state in response to actions taken when applying a plan, such as removing the binding for a remote object that has since been deleted.

You can use terraform state mv in the less common situation where you wish to
retain an existing remote object but track it as a different resource instance address in
Terraform, such as if you have renamed a resource block or you have moved it into a
different module in your configuration.
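Typical invocations look like this (the resource and module names here are hypothetical):

```shell
# Rename a resource without destroying and recreating the remote object
terraform state mv aws_instance.web aws_instance.frontend

# Move a resource into a child module
terraform state mv aws_instance.web module.web_app.aws_instance.web
```

After moving a state entry, update the configuration to match the new address, or the next plan will propose destroying the old address and creating the new one.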

zipmap Function
zipmap constructs a map from a list of keys and a corresponding list of values.

zipmap(keyslist, valueslist)

Both keyslist and valueslist must be of the same length. keyslist must be a list of strings, while valueslist can be a list of any type.

Each pair of elements with the same index from the two lists will be used as the key and value of an element in the resulting map. If the same value appears multiple times in keyslist then the value with the highest index is used in the resulting map.

> zipmap(["a", "b"], [1, 2])
{
  "a" = 1
  "b" = 2
}

Terraform starts with a single workspace named "default". This workspace is special
both because it is the default and also because it cannot ever be deleted.

Command: import
Hands-on: Try the Import Terraform Configuration tutorial.

The terraform import command imports existing resources into Terraform.

Usage
Usage: terraform import [options] ADDRESS ID

Import will find the existing resource from ID and import it into your Terraform state at
the given ADDRESS.

ADDRESS must be a valid resource address. Because any resource address is valid,
the import command can import resources into modules as well as directly into the root
of your state.

ID is dependent on the resource type being imported. For example, for AWS EC2 instances it is the instance ID (i-abcd1234), but for AWS Route53 zones it is the zone ID (Z12ABC4UGMOZ2N). Please reference the provider documentation for details on the ID format. If you're unsure, feel free to just try an ID; if the ID is invalid, you'll just receive an error message.
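For example, importing an EC2 instance into the root module and a Route53 zone into a child module (the IDs and the module name are placeholders):

```shell
# Import an EC2 instance into the root module
terraform import aws_instance.web i-abcd1234

# Import a Route53 zone into a child module
terraform import module.dns.aws_route53_zone.primary Z12ABC4UGMOZ2N
```

Note that a resource block matching the target address must already exist in the configuration before running terraform import.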
