Interview Terra
Yes, you can create on-premises infrastructure using Terraform, although it's typically associated with cloud
environments like AWS, Azure, or Google Cloud. Terraform has a set of providers that allow you to manage
on-premises infrastructure as well.
For example, with the vSphere provider you can manage VMs in an on-premises VMware environment (excerpt; the resource label is illustrative):
resource "vsphere_virtual_machine" "vm" {
  num_cpus = 2
  memory   = 4096
  guest_id = "otherGuest"
  network_interface {
    network_id   = data.vsphere_network.network.id
    adapter_type = "vmxnet3"
  }
}
4. How can you store your Terraform state file remotely, and apart from remote backends, where else can we store the state file?
To store your Terraform state file remotely, you can use remote backends, which help manage and secure your
state files when working in a team environment or managing large infrastructures. Here's a breakdown of how
you can store the state file remotely and other storage options:
1. Storing Terraform State Remotely:
a. Amazon S3 (with DynamoDB for locking):
You can store your Terraform state file in an S3 bucket, which is commonly used for remote state storage,
especially in AWS environments. You can also use DynamoDB for state locking to prevent race conditions
when multiple team members are applying changes simultaneously.
Example configuration:
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "path/to/my/terraform.tfstate"
region = "us-west-2"
dynamodb_table = "my-lock-table"
encrypt = true
}
}
In this example:
bucket: The name of the S3 bucket.
key: The path inside the bucket where the state file will be stored.
dynamodb_table: The DynamoDB table for state locking.
encrypt: Whether to encrypt the state file in S3.
b. Azure Blob Storage:
In an Azure environment, you can use Azure Blob Storage to store the state file remotely.
Example configuration:
terraform {
backend "azurerm" {
resource_group_name = "my-resource-group"
storage_account_name = "mystorageaccount"
container_name = "terraform-state"
key = "terraform.tfstate"
}
}
c. Google Cloud Storage (GCS):
For Google Cloud environments, GCS can be used to store the state file.
Example configuration:
terraform {
backend "gcs" {
bucket = "my-terraform-state-bucket"
prefix = "terraform/state"
}
}
"In my previous roles, I’ve worked extensively with Terraform to manage and automate infrastructure across
multiple cloud platforms and on-premises environments. Some of the key tasks I’ve performed include:
Through all of this, I’ve developed a strong understanding of Terraform best practices, including the importance
of modularity, remote state management, and automating infrastructure provisioning to ensure consistency and
reliability in deployments."
6. If I want to launch an EC2 instance using Terraform, what steps will you take?
By following these steps, you can easily launch and manage an EC2 instance using Terraform!
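The step-by-step walkthrough for this answer is missing above; a minimal sketch of the flow, with a placeholder AMI ID (run terraform init, then terraform plan, then terraform apply), might look like this:

```hcl
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder - replace with a valid AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "MyEC2Instance"
  }
}
```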
7. How many years of experience do you have working with Terraform?
8. In Terraform, can you write a sample resource block to provision a Virtual Machine in Azure?
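No answer is recorded for this question; a minimal sketch using azurerm_linux_virtual_machine (all names, and the network interface it references, are assumptions) could be:

```hcl
resource "azurerm_linux_virtual_machine" "example" {
  name                  = "example-vm"
  resource_group_name   = "my-resource-group"
  location              = "East US"
  size                  = "Standard_DS1_v2"
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.example.id] # assumed to exist elsewhere

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```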
9. If you want to create 5 Virtual Machines in the same Terraform configuration, how would you do it?
resource "aws_instance" "example" {
  count         = 5
  ami           = "ami-0c55b159cbfafe1f0" # Replace with your AMI ID
  instance_type = "t2.micro"
  tags = {
    Name = "MyEC2Instance-${count.index}" # Each VM gets a unique name like 'MyEC2Instance-0', 'MyEC2Instance-1', etc.
  }
}
resource "aws_security_group" "allow_ssh" {
  name = "allow-ssh" # illustrative name
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
10. How will you provide different configurations for each of the 5 instances you have created, such as size/name/location? What can you do in that case?
To provide different configurations (such as size, name, and location) for each of the 5 instances in
Terraform, you can use the for_each feature instead of count. The for_each feature allows you to create
resources with unique configurations by iterating over a map or set of values. This way, each resource can
have its own distinct configuration, such as a different instance type, name, or AMI ID.
Here's how you can handle different configurations for each of the 5 EC2 instances:
provider "aws" {
region = "us-west-2" # Adjust the region as needed
}
variable "instance_config" {
  type = map(object({
    ami               = string
    instance_type     = string
    name              = string
    availability_zone = string
  }))
  default = {
"instance_1" = {
ami = "ami-0c55b159cbfafe1f0" # Replace with your AMI ID
instance_type = "t2.micro"
name = "Instance-1"
availability_zone = "us-west-2a"
},
"instance_2" = {
ami = "ami-0c55b159cbfafe1f0" # Replace with your AMI ID
instance_type = "t2.small"
name = "Instance-2"
availability_zone = "us-west-2b"
},
"instance_3" = {
ami = "ami-0c55b159cbfafe1f0" # Replace with your AMI ID
instance_type = "t2.medium"
name = "Instance-3"
availability_zone = "us-west-2c"
},
"instance_4" = {
ami = "ami-0c55b159cbfafe1f0" # Replace with your AMI ID
instance_type = "t2.large"
name = "Instance-4"
availability_zone = "us-west-2a"
},
"instance_5" = {
ami = "ami-0c55b159cbfafe1f0" # Replace with your AMI ID
instance_type = "t2.xlarge"
name = "Instance-5"
availability_zone = "us-west-2b"
}
}
}
resource "aws_instance" "example" {
  for_each = var.instance_config
  ami = each.value.ami
instance_type = each.value.instance_type
availability_zone = each.value.availability_zone
key_name = "my-key-pair" # Replace with your key pair name
tags = {
Name = each.value.name
}
}
resource "aws_security_group" "allow_ssh" {
  name = "allow-ssh" # illustrative name
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
11. instance_config = {
  instance1 = { instance_size = "abc", disk = "xyz" }
  instance2 = { instance_size = "xyz", disk = "abc" }
}
Based on the above input, you need to create a resource.
Whatever instances/VMs are mentioned in this input should be created, and if this input is updated, your resources should also be updated: if I remove an input, that resource should be removed; if I add new instance data here, a new instance should be created. So it's dynamic, basically. What will you use, and how will you use it in the Terraform script?
To achieve this dynamic behavior based on the given input, you should use for_each along with Terraform
variables to create resources dynamically. This approach will ensure that resources are created, updated, or
removed as per the input data (map) you provide.
In this case, you’ll define a variable (instance_config) which contains a map of instance configurations, and
use the for_each feature to create resources dynamically based on that map. If you update the map (by adding
or removing entries), Terraform will automatically detect the changes and adjust the resources accordingly.
Steps:
main.tf:
# Define the input variable for instance configurations
variable "instance_config" {
type = map(object({
instance_size = string
disk = string
}))
default = {
instance1 = {
instance_size = "t2.micro"
disk = "20GB"
},
instance2 = {
instance_size = "t2.small"
disk = "30GB"
}
}
}
provider "aws" {
region = "us-west-2" # Adjust the region as needed
}
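The resource block that consumes var.instance_config appears to be missing at this point; a minimal sketch (the resource label and AMI are assumptions, and the disk attribute would map onto a root_block_device) could be:

```hcl
resource "aws_instance" "example" {
  for_each = var.instance_config

  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI
  instance_type = each.value.instance_size

  root_block_device {
    volume_size = 20 # each.value.disk ("20GB") would be parsed into a number here
  }

  tags = {
    Name = each.key
  }
}
```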
resource "aws_security_group" "allow_ssh" {
  name = "allow-ssh" # illustrative name
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
12. If you have to use Terraform resources with multiple providers, what will you configure, and how would you write that in a Terraform script?
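No answer is recorded here; the usual approach is one provider block per platform, or provider aliases for multiple configurations of the same platform, with each resource selecting its provider explicitly. A sketch, with placeholder regions and AMI:

```hcl
provider "aws" {
  region = "us-west-2" # default aws provider configuration
}

provider "aws" {
  alias  = "east"
  region = "us-east-1" # second configuration of the same provider
}

resource "aws_instance" "west_vm" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder
  instance_type = "t2.micro"
}

resource "aws_instance" "east_vm" {
  provider      = aws.east # explicitly pick the aliased provider
  ami           = "ami-0c55b159cbfafe1f0" # placeholder
  instance_type = "t2.micro"
}
```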
13. Let's say you have some resource managed by Terraform, and later on another team member has modified that resource multiple times outside of Terraform. Now you don't want to manage it with Terraform going forward, so you have to remove it from Terraform, but the resource should not be deleted. What would you do?
Import the resource into Terraform state if it was not previously managed by Terraform.
Use terraform state rm to remove the resource from Terraform’s state management while
leaving the actual resource untouched in the infrastructure.
Terraform will no longer manage or attempt to modify the resource after it has been removed from
the state.
You can leave the resource configuration in your .tf files or remove it, depending on whether you
want to reintroduce management of the resource in the future.
This approach gives you full control over removing a resource from Terraform management while
ensuring that the resource remains in place.
In a situation where you have infrastructure both managed by Terraform and created manually, and Terraform is
facing issues maintaining those resources (such as changes made outside of Terraform), but you don't want to
delete any existing resources, the best approach is to import the manually created resources into
Terraform's state. This ensures that Terraform can manage those resources going forward without deleting
them.
Terraform is not able to maintain resources because it doesn't know about them (if they were created
manually).
Terraform will show discrepancies or errors because the infrastructure has drifted from what is defined in the
Terraform configuration.
To resolve this, you'll need to import the manually created resources back into the Terraform state. This allows
Terraform to begin managing those resources without deleting them.
If you have an AWS EC2 instance that was created manually, you can import it into Terraform as follows:
1. Define a matching resource block (e.g., resource "aws_instance" "example" {}) in your .tf files so the import has a target.
2. Use the terraform import command to import the resource into Terraform's state:
terraform import aws_instance.example i-1234567890abcdef0
aws_instance.example: This is the type and name of the resource in your Terraform configuration.
i-1234567890abcdef0: The instance ID of the manually created EC2 instance.
3. After importing the resource, you can run terraform plan to see if there are any configuration differences (drift) between the manually created infrastructure and the Terraform configuration.
terraform plan
Terraform will compare the current state of the resource (now imported) with the configuration in your .tf
files.
If there are any differences, Terraform will attempt to reconcile them, but it will not delete resources unless you
explicitly ask it to do so.
4. Fix the Configuration:
Ensure that the resource configurations in your .tf files match the current state of the infrastructure, as it is
now managed by Terraform. For example:
If you imported an EC2 instance, make sure that the instance size, AMI, and other parameters match the actual
resource settings in your AWS account.
Once you're confident that Terraform is correctly managing the resource, you can run:
terraform apply
This will apply any changes needed to bring the infrastructure into compliance with the configuration, but it
will not delete existing resources unless explicitly instructed.
If there are resources that you no longer want to manage with Terraform (and you don't want to delete them),
you can remove them from the Terraform state using terraform state rm:
terraform state rm aws_instance.example
This removes the specified resource from Terraform’s state file, and Terraform will stop managing that resource.
Important: This action does not delete the resource. It just removes it from Terraform’s management, so
Terraform will no longer track it in future operations.
Important Considerations:
Manual Changes: If manual changes were made to resources, Terraform might try to reconcile those changes
based on its configuration. Make sure your .tf files reflect the current state of resources.
Resource Drift: If there’s any drift (differences between the actual resource state and the state defined in
Terraform), Terraform will try to fix it on the next terraform apply, unless you specifically disable or
configure drift handling.
Sensitive Data: If there are sensitive values that have been manually configured (like passwords, keys, etc.),
make sure those are captured or reconfigured appropriately when you import the resource into Terraform.
No Deletion: By using terraform import and avoiding terraform destroy or state rm without proper
consideration, you can avoid the risk of accidentally deleting existing resources.
variable "acr" {
  type = list(object({
    name     = string
    location = string
  }))
  default = [
    { name = "acr1", location = "eastus" },
    { name = "acr2", location = "eastus" },
    { name = "acr3", location = "eastus" }
  ]
}
We need to declare this variable file in the main file, and it should fetch all three names.
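Assuming a variable var.acr declared as a list of objects with name and location attributes (the variable and resource labels here are assumptions), main.tf can key the list by name for for_each and fetch all three names like this:

```hcl
resource "azurerm_container_registry" "acr" {
  for_each = { for a in var.acr : a.name => a } # key the list by name, since for_each needs a map or set

  name                = each.value.name
  location            = each.value.location
  resource_group_name = "myResourceGroup" # placeholder
  sku                 = "Basic"
}

output "acr_names" {
  value = [for a in var.acr : a.name] # fetches all three names
}
```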
23. 3) The above variable pattern should be used for the resource group also:
variable "resource_group" {
  type = list(object({
    name     = string
    location = string
  }))
  default = [
    { name = "rg1", location = "eastus" },
    { name = "rg2", location = "eastus" },
    { name = "rg3", location = "eastus" }
  ]
}
24. Apart from a Storage Account or Blob Storage, where will you store your Terraform state file?
Apart from Storage Account or Blob Storage (e.g., in Azure), there are several other locations where you can
store your Terraform state file remotely. Remote state storage is essential for collaboration, state sharing, and
disaster recovery. Terraform supports multiple backend options for remote state storage. Here are some
common alternatives:
1. Amazon S3 (Backend Type: s3)
Amazon S3 is commonly used for storing Terraform state in AWS environments. You can also combine it with DynamoDB for state locking and consistency.
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "path/to/my/terraform.tfstate"
region = "us-west-2"
encrypt = true # Enable server-side encryption
}
}
Advantages:
2. HashiCorp Consul
Example configuration:
terraform {
backend "consul" {
address = "consul-server:8500"
path = "terraform/state"
scheme = "http"
}
}
Advantages:
3. Google Cloud Storage (GCS)
Example configuration:
terraform {
backend "gcs" {
bucket = "my-terraform-state"
prefix = "terraform/state"
}
}
Advantages:
Easily integrated into Google Cloud environments.
Supports versioning and encryption.
4. Azure Blob Storage
Example configuration:
terraform {
backend "azurerm" {
resource_group_name = "my-resource-group"
storage_account_name = "mystorageaccount"
container_name = "mycontainer"
key = "terraform.tfstate"
}
}
Advantages:
5. Terraform Cloud
Example configuration:
terraform {
backend "remote" {
organization = "my-org"
workspaces {
name = "my-workspace"
}
}
}
Advantages:
In Terraform, defining a provider is not strictly mandatory in all cases, but it is required if you want to
interact with any remote infrastructure (e.g., AWS, Azure, Google Cloud, etc.) or any external services. The
provider block tells Terraform which cloud platform or service you will be managing resources on, and it also
specifies any necessary credentials or configuration for that provider.
1. When You Are Using Cloud Providers (e.g., AWS, Azure, Google Cloud): If you are provisioning
resources on a cloud platform, you need to define the corresponding provider. For example, to create
AWS resources, you need the aws provider defined.
provider "aws" {
region = "us-west-2"
}
Without defining the provider, Terraform will not know how to interact with AWS and will raise an
error when you try to run terraform apply.
2. When You Are Using Services that Require API Integration: For resources like databases, third-party tools (e.g., Datadog, Kubernetes), or any other API-driven service, you need a provider to define how to interact with those services.
provider "google" {
credentials = file("<YOUR-CREDENTIALS-FILE>.json")
project = "my-project-id"
region = "us-central1"
}
3. When Using Multiple Providers: If you are managing resources from multiple providers (e.g., AWS
and Azure in the same Terraform configuration), you need to define a provider for each platform and
specify which provider each resource should use.
provider "aws" {
region = "us-west-2"
}
provider "azurerm" {
features {}
}
In this case, Terraform needs to know which provider to use for each resource type (e.g., aws_instance,
azurerm_virtual_machine).
Cases where defining a provider is not required:
1. For Local Resources: If you're working with local resources or modules that don't require a specific provider (e.g., local file systems, local resources like local_file), then you don't need to define a provider.
resource "local_file" "example" {
content = "Hello, Terraform!"
filename = "${path.module}/example.txt"
}
2. If You're Using Terraform Cloud or a Remote Backend with Preconfigured Providers: In some cases, especially with Terraform Cloud or if you are using a remote backend, providers can be preconfigured within the workspace, meaning you don't need to explicitly define them in your Terraform configuration files. However, this is specific to the environment in which you are working.
Key Takeaways:
Mandatory: If you are provisioning or managing remote infrastructure (e.g., cloud resources), you must
define a provider.
Optional: If you're managing local resources or resources that don’t require a specific provider, you
don't need to define one.
Multiple Providers: If you're using multiple providers, you need to define each provider separately.
27. Do you know anything about Terraform state locking? Why does terraform apply acquire a state lock?
28. Destroy lifecycles
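No answer is recorded for this question; as a sketch, the lifecycle meta-argument controls destroy/replace behavior — prevent_destroy makes Terraform error out instead of deleting, and create_before_destroy changes replacement ordering (the bucket name is a placeholder):

```hcl
resource "aws_s3_bucket" "state" {
  bucket = "my-terraform-state" # placeholder

  lifecycle {
    prevent_destroy       = true # terraform destroy (or a destructive change) errors instead of deleting
    create_before_destroy = true # on replacement, create the new resource before destroying the old
  }
}
```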
instance_config = {
  instance1 = { instance_type = "t2.micro", ami = "ami-12345" }
  instance2 = { instance_type = "t2.medium", ami = "ami-67890" }
}
29. We need to define this variable file inside the main.tf file.
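A sketch of declaring that input in main.tf and consuming it with for_each (the resource label is an assumption):

```hcl
variable "instance_config" {
  type = map(object({
    instance_type = string
    ami           = string
  }))
  default = {
    instance1 = { instance_type = "t2.micro", ami = "ami-12345" }
    instance2 = { instance_type = "t2.medium", ami = "ami-67890" }
  }
}

resource "aws_instance" "example" {
  for_each      = var.instance_config
  ami           = each.value.ami
  instance_type = each.value.instance_type

  tags = {
    Name = each.key # instance1, instance2, ...
  }
}
```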
30. Difference between terraform plan and terraform apply
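No answer is recorded; in short, terraform plan computes and previews the changes without touching infrastructure, while terraform apply executes them. A common pattern is to save the reviewed plan and apply exactly that:

```shell
terraform plan -out=tfplan   # preview the proposed changes and save them
terraform apply tfplan       # apply exactly the plan that was reviewed
```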
31. Terraform remote execution
32. Terraform backend
33. Difference between a data block and a resource block
In Terraform, both the data block and the resource block are used to define and manage infrastructure, but
they serve different purposes. Here’s a detailed comparison between the two:
1. resource Block:
Purpose:
The resource block is used to create and manage resources that Terraform will provision, configure,
and maintain.
Resources can be anything you want to create and manage, such as virtual machines, networks,
databases, etc.
When you apply the Terraform configuration, Terraform will create, update, or delete resources based
on your configuration.
Key Points:
Example:
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
In this example, Terraform will create an EC2 instance (aws_instance.example) with the specified
AMI and instance type.
Terraform will track the state of this instance and will manage it, ensuring it matches the configuration.
2. data Block:
Purpose:
The data block is used to read or query data about existing infrastructure or external resources that
are not managed by Terraform.
The data block doesn’t create or modify resources, but it allows Terraform to fetch information from
other resources that could be used in your configuration.
Key Points:
Terraform does not manage or create the data resource; it only reads the data.
Data resources are useful for fetching information about resources that were created outside of
Terraform or that are already provisioned.
Terraform does not track the state of data resources, since they are read-only and not subject to creation
or destruction.
Example:
Here’s an example of a data block in Terraform to get information about an existing AWS AMI:
data "aws_ami" "latest" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
In this example, Terraform is querying the most recent Amazon Linux 2 AMI from AWS.
It does not create or modify anything, but it will fetch the required data (in this case, the latest AMI ID)
that can be used later in your configuration (for example, to launch an EC2 instance).
Summary:
resource block: Used to create and manage infrastructure. Terraform will manage and maintain the
state of these resources, ensuring they remain as defined in the configuration.
data block: Used to fetch information about existing infrastructure or services. Terraform doesn't
manage the state of the data, and it only reads the resource’s current state but doesn’t modify or create it.
34. I have 3 ACRs with different names, and I have called them in the variable; now how can I call them in the main file?
To call multiple Azure Container Registries (ACRs) with different names in Terraform, and to pass them as
variables, you would define the variables for the ACR names, and then reference them in your main Terraform
file. I'll guide you through how to structure this.
Steps:
1. Define the Variables:
You first need to define the variables in a variables.tf file (or directly in your main .tf file).
# variables.tf
variable "acr_names" {
  description = "List of ACR names"
  type        = list(string)
  default     = ["acr1", "acr2", "acr3"] # You can also pass these values dynamically
}
Here, the variable acr_names is a list of strings that contains the names of the 3 ACRs. You can also set
these values when running the Terraform plan or apply using -var or by setting them in a
terraform.tfvars file.
# main.tf
provider "azurerm" {
features {}
}
resource "azurerm_container_registry" "acr" {
  for_each = toset(var.acr_names) # for_each requires a set or map, so convert the list

  name = each.value
location = "East US"
resource_group_name = "myResourceGroup"
sku = "Basic"
tags = {
environment = "dev"
}
}
Explanation:
The for_each loops through the ACR names passed in the variable acr_names (a plain list is converted with toset(), since for_each requires a set or map).
each.value refers to the individual item, so it loops through each ACR name (like acr1, acr2, and acr3) and creates an individual resource for each one.
The name of the azurerm_container_registry resource is set dynamically using each.value.
1. Ensure the Storage Account is Managed by Terraform: If the storage account was not created by
Terraform, but you want to manage it now, you'll need to import the existing resource into Terraform
state.
terraform import azurerm_storage_account.example /subscriptions/{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft.Storage/storageAccounts/{storage_account_name}
Replace the placeholder values with the actual subscription ID, resource group name, and storage
account name.
2. Define the Provider Configuration: First, you need to define the Azure provider in your Terraform
configuration file (main.tf).
Example:
provider "azurerm" {
features {}
}
3. Define the Storage Account Resource Block: You can define the resource block for the Azure
Storage Account (azurerm_storage_account) and modify its configuration. For example, if you want
to change the sku or location, you can do so by defining them in the Terraform configuration.
Example:
resource "azurerm_storage_account" "example" {
  name                     = "examplestoraccnt" # The name of your storage account (should be unique)
  resource_group_name      = "my-resource-group"
  location                 = "East US"
  account_tier             = "Standard" # This could be "Standard" or "Premium"
  account_replication_type = "LRS"      # This could be "LRS", "GRS", "ZRS", etc.
  tags = {
    environment = "production"
  }
}
In this example, the storage account's tier, replication type, and tags are declared so that Terraform can manage them.
4. Run terraform plan to preview the changes:
terraform plan
This will show you what changes Terraform intends to make to your infrastructure. If the storage
account's configuration differs from what Terraform sees in the state file, Terraform will propose those
changes.
5. Apply the Changes: If the terraform plan output looks good and reflects the modifications you want
to make, run terraform apply to apply the changes.
terraform apply
Terraform will make the necessary changes to the storage account based on the configuration in your
.tf files.
Let’s say you want to modify the replication type and account tier of an existing Azure Storage Account.
After importing the existing resource and defining the modified configuration, Terraform will make the
necessary updates.
For example, change the account tier from Standard to Premium and update the replication type from LRS to
GRS.
resource "azurerm_storage_account" "example" {
  name                     = "examplestoraccnt" # The same storage account name
  resource_group_name      = "my-resource-group"
  location                 = "East US"
  account_tier             = "Premium" # Updated account tier
  account_replication_type = "GRS"     # Updated replication type
  tags = {
    environment = "production"
  }
}
When you run terraform plan, Terraform will recognize that these properties are different from the current
state and will propose an update. Running terraform apply will update the storage account to reflect the new
configuration.
Notes:
Terraform will not rename resources or change immutable properties of certain resources (like the
name of an Azure Storage Account) once they are created. In such cases, you might need to destroy and
recreate the resource, which could result in data loss. Always review changes carefully before applying
them, especially for critical resources.
If you want to manage multiple environments (e.g., dev, prod), consider using workspaces or variable-based configurations to make the configuration more flexible and reusable.
Conclusion:
38. Can you name the Terraform stages, and what are the effects of skipping the stages?
39. Skipping Each Stage:
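The answer content for these two questions is missing; as a rough sketch, the usual workflow stages and what skipping each one costs:

```shell
terraform init      # skipped: providers and backend are never initialized; later commands fail
terraform validate  # skipped: syntax and type errors only surface later, at plan time
terraform plan      # skipped: no preview; apply may make unreviewed, unexpected changes
terraform apply     # skipped: nothing is actually provisioned or changed
terraform destroy   # skipped: resources keep running (and keep incurring cost)
```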
43. You have created the AKS cluster using an IaC tool; now you need to add another node. How will you do this?
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks-cluster"
  location            = "East US"
  resource_group_name = "my-resource-group"
  dns_prefix          = "exampleaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    environment = "production"
  }
}
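The configuration for the additional node pool mentioned in the following lines isn't shown; a sketch using the separate node-pool resource (the labels are assumptions, and note that AKS pool names allow only short lowercase alphanumerics, so "additional-pool" would need adjusting) could be:

```hcl
resource "azurerm_kubernetes_cluster_node_pool" "additional" {
  name                  = "additionalpool" # AKS node pool names: lowercase alphanumeric only
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS3_v2"
  node_count            = 2
}
```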
In this case:
An additional node pool is created with the name additional-pool and a node_count of 2.
The VM size for this new node pool is Standard_DS3_v2.
Important Considerations:
Scaling Node Pools: Terraform will only scale the node pool as per the node_count specified. If you
are modifying the node_count for an existing node pool, Terraform will update the number of nodes in
the cluster.
Rolling Updates: When modifying the node pool or scaling the cluster, Azure performs a rolling update
of the nodes to ensure availability.
Resource Limits: Ensure that you stay within the quota limits for virtual machines or compute
resources in your subscription.
Conclusion:
1. Modify the node_count of the existing node pool in your Terraform configuration.
2. Run terraform plan and terraform apply to apply the changes.
3. Verify that the new node has been added by using tools like kubectl.
Alternatively, you can also create an entirely new node pool if required, and manage it through Terraform in the
same way.
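Verification from step 3 can be as simple as:

```shell
kubectl get nodes   # newly added nodes should appear and reach the Ready state
```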