Own Interview Question V - 1.0
Q1- What are Terraform providers?
Ans- Terraform providers are plugins that enable Terraform to interact with external APIs and services. Providers are responsible for managing the lifecycle of a resource: create, read, update, and delete. They allow Terraform to work with cloud platforms, SaaS products, and other services.
Example:
provider "azurerm" {
  features {}
}

# Create a resource group
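A minimal sketch completing the example (the resource group name and location are placeholders):
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}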
Q2- What is Terraform state?
Ans- Terraform state is a critical aspect of how Terraform operates. It records the state of the infrastructure managed by Terraform. The state is stored in a file, commonly called the state file (terraform.tfstate). It serves several purposes:
1. Mapping Real Resources to Configuration: Terraform uses the state file to map the resources in your
configuration to real-world resources.
2. Metadata Storage: The state file stores metadata for resources, which helps Terraform improve
performance.
3. Dependency Graph: Terraform uses the state to determine the order of operations and relationships
between resources.
4. Concurrency Control: The state file helps manage concurrent updates to your infrastructure.
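For example, you can inspect what Terraform is tracking with the built-in state commands (the resource address shown is a placeholder):
terraform state list
terraform state show azurerm_resource_group.example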
Terraform configuration files define your infrastructure. They use the HashiCorp Configuration Language (HCL) or JSON, typically have a .tf extension, and describe the desired state of your infrastructure, which Terraform uses to plan and apply changes. Common files include:
• main.tf- This file typically contains the main configuration for your infrastructure. For example:
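A minimal main.tf sketch (AWS-flavored to match the variable and output examples below; the AMI value is a placeholder):
provider "aws" {
  region = var.region
}

resource "aws_instance" "example" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t2.micro"
}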
• variables.tf- Declares input variables. For example:
variable "region" {
  description = "The AWS region to create resources in"
  type        = string
  default     = "us-west-2"
}
• outputs.tf- Declares output values. For example:
output "instance_id" {
description = "The ID of the EC2 instance"
value = aws_instance.example.id
}
Q3- What are Terraform modules?
Ans- Terraform modules are a way to group multiple resources together and reuse that configuration across different parts of your infrastructure. A typical layout:
terraform-azure-example/
├── main.tf
├── outputs.tf
├── variables.tf
└── modules/
└── network/
├── main.tf
├── outputs.tf
└── variables.tf
A child module is called from the root module by providing the path to the module:
module "network" {
source = "./modules/network"
resource_group_name = var.resource_group_name
location = var.location
vnet_name = var.vnet_name
address_space = var.address_space
subnet_name = var.subnet_name
subnet_prefixes = var.subnet_prefixes
}
• Root Module: The main entry point for Terraform, located in the directory where terraform apply is run.
• Child Module: Modules called by another module, usually stored in a subdirectory or sourced remotely.
• Remote Module: Modules sourced from remote locations such as Git repositories, Terraform Registry, or
other remote storage.
Example:
module "network" {
  source = "git::https://github.com/username/repo.git//modules/network?ref=v1.0.0"
}
Q4- Why do we need Terraform?
Ans: Terraform is valuable for its reusability and efficiency, and because it is open source there is no licensing cost involved.
Q5- What is the difference between the imperative and declarative approach?
Ans:
Imperative Approach - The imperative approach involves explicitly describing the steps required to achieve a desired outcome. It focuses on how to perform tasks, detailing the sequence of commands or operations.
Declarative Approach - The declarative approach involves specifying the desired state of the system
without describing the steps to achieve that state. It focuses on what the end state should be, letting the
underlying system figure out the necessary operations.
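For illustration, the same resource group created both ways (names are placeholders):
# Imperative: you issue explicit commands in order (Azure CLI)
az group create --name example-rg --location westeurope

# Declarative: you describe the end state and Terraform works out the steps
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "West Europe"
}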
Q6- What is the difference between an implicit and an explicit dependency?
Ans:-
• Implicit Dependency: Terraform automatically figures out the order in which resources need to be
created or updated based on how they're connected in your code. You don't have to explicitly tell
Terraform about these relationships; it understands them by looking at your configuration.
• Explicit Dependency: With explicit dependency, you directly tell Terraform which resources depend on
others by using specific syntax. You explicitly declare the relationships between resources in your code,
leaving no room for ambiguity.
Q7- How do you handle explicit dependencies in Terraform?
Ans: You can handle explicit dependencies between resources using the depends_on attribute. This
attribute allows you to specify a list of resources that a particular resource depends on, ensuring that
Terraform creates or updates the dependent resources before processing the resource with the
dependency
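A short sketch (assumes an azurerm_virtual_network.example is defined elsewhere; names are placeholders):
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = azurerm_resource_group.example.name   # implicit dependency via reference
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  # explicit dependency: wait for the network even though no attribute of it is referenced
  depends_on = [azurerm_virtual_network.example]
}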
Q8- How do you manage sensitive data, such as credentials, in Terraform?
Ans: Avoid hard-coding credentials; declare them as variables (marking secrets as sensitive) and supply the values at runtime:
provider "azurerm" {
  features {}

  client_id       = var.client_id
  client_secret   = var.client_secret
  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
}

variable "client_id" {}
variable "client_secret" {
  sensitive = true
}
variable "subscription_id" {}
variable "tenant_id" {}
Azure Key Vault is a secure way to store and manage sensitive information. You can integrate Azure Key Vault with Terraform using the azurerm provider's Key Vault data sources.
Example:
provider "azurerm" {
  features {}
}

data "azurerm_key_vault" "example" {
  name                = "myKeyVault"
  resource_group_name = "myResourceGroup"
}

data "azurerm_key_vault_secret" "example" {
  name         = "mySecret"
  key_vault_id = data.azurerm_key_vault.example.id
}

resource "azurerm_example_resource" "example" {
  secret_value = data.azurerm_key_vault_secret.example.value
}
Q9- What is terraform resource graph?
Ans: The Terraform resource graph is a conceptual and visual representation of the resources defined in a
Terraform configuration and their relationships to one another. It helps you understand how resources are
interdependent and the order in which Terraform will create, update, or destroy them during the
execution of a plan or apply operation.
1. Generate the Graph: Run the following command to generate the resource graph in DOT format:
terraform graph > graph.dot
2. Visualize the Graph: Use Graphviz or an online DOT file viewer to visualize the graph. For example, with Graphviz installed:
dot -Tpng graph.dot -o graph.png
Q10- What is terraform refresh?
Ans: terraform refresh is a Terraform command used to reconcile the Terraform state with the actual state of the resources in the infrastructure. It updates the state file to reflect the current reality of those resources.
terraform refresh
Q11- What is a null_resource in Terraform?
Ans:- A null_resource in Terraform is a way to execute scripts or commands and manage dependencies without creating any actual infrastructure. This is useful for running custom scripts after certain resources are created.
For example, suppose you want to run a script called configure.sh after a VM is created.
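A sketch, assuming a VM resource named azurerm_linux_virtual_machine.example with a public IP address and a local configure.sh script:
resource "null_resource" "configure" {
  # Re-run the provisioners if the VM is replaced
  triggers = {
    vm_id = azurerm_linux_virtual_machine.example.id
  }

  connection {
    type        = "ssh"
    user        = "adminuser"
    private_key = file("~/.ssh/id_rsa")
    host        = azurerm_linux_virtual_machine.example.public_ip_address
  }

  # Copy the script to the VM, then execute it
  provisioner "file" {
    source      = "configure.sh"
    destination = "/tmp/configure.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/configure.sh",
      "/tmp/configure.sh"
    ]
  }
}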
Q12- Someone deleted resources manually in portal how can you recover that through
terraform???
1. Identify Deleted Resources
Identify which resources have been deleted. This information can be gathered from the Azure portal or resource logs.
2. Verify the Configuration
Ensure that your Terraform configuration files (.tf files) still describe the desired state of your infrastructure, including the deleted resources.
3. Refresh the State
Run terraform refresh to update the Terraform state file with the current state of your infrastructure. This will reflect the deletion of resources in the state file.
terraform refresh
4. Review the Plan
Run terraform plan to see what changes Terraform will make to align the actual state with the desired state described in your configuration files. The plan should indicate that the deleted resources will be recreated.
terraform plan
5. Apply Changes
Run terraform apply to execute the plan and recreate the deleted resources.
terraform apply
Q13- What are Terraform workspaces?
Ans: In Terraform, workspaces are a feature that allows you to manage multiple instances of the same
infrastructure configuration within a single directory. Each workspace maintains its own state file, allowing
you to manage different environments (such as development, staging, and production) or different
configurations (such as different regions or configurations with different variables) independently
Common Commands for Terraform Workspaces
• Create a Workspace:
terraform workspace new <workspace-name>
• Select a Workspace:
terraform workspace select <workspace-name>
• List Workspaces:
terraform workspace list
• Delete a Workspace:
terraform workspace delete <workspace-name>
Suppose you have a Terraform configuration for deploying a web application to different environments:
├── main.tf
├── variables.tf
└── terraform.tfvars
• You can create different workspaces for development, staging, and production:
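For example (assuming per-environment .tfvars files):
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod
terraform workspace select dev
terraform apply -var-file="dev.tfvars"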
Q14- What is the difference between terraform init, terraform refresh, and terraform state?
Ans:-
terraform init: Initializes a Terraform working directory by downloading plugins and modules.
terraform refresh: Updates the state file with the current real-world state of resources.
terraform state: Manages Terraform state files, allowing inspection and modification.
Q15- What are provisioners in Terraform?
Ans: Provisioners execute scripts or commands on local or remote resources after they are provisioned by Terraform, handling tasks not managed directly by Terraform's resource declarations. Example:
resource "azurerm_linux_virtual_machine" "example" {
  name                  = "example-vm"
  resource_group_name   = azurerm_resource_group.example.name
  location              = azurerm_resource_group.example.location
  size                  = "Standard_B1s"
  admin_username        = "adminuser"
  network_interface_ids = [azurerm_network_interface.example.id] # assumes a NIC defined elsewhere

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  # Connection details used by the provisioners below
  connection {
    type        = "ssh"
    user        = "adminuser"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip_address
  }

  # Remote-exec provisioner example: install nginx after creation
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
      "sudo systemctl start nginx"
    ]
  }

  # File provisioner example: copy a local file to the VM
  provisioner "file" {
    source      = "path/to/local/file.txt"
    destination = "/path/on/remote/file.txt"
  }
}
Q16- What is terraform taint ??
Terraform taint is a command used to mark a specific resource managed by Terraform as "tainted,"
forcing it to be destroyed and recreated during the next terraform apply. This is useful for handling
situations where you want to recreate a resource due to configuration changes or troubleshooting issues.
Key Points:
• Selective Recreate: Allows you to recreate specific resources without affecting others.
• State Management: Updates Terraform's state file to reflect the tainted status for the resource.
• Use Case: Useful for troubleshooting resource configuration issues or applying specific changes
that require recreation rather than modification.
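Example usage (the resource address is a placeholder):
terraform taint azurerm_virtual_machine.example
# On newer Terraform versions (v0.15.2+), taint is deprecated in favor of:
terraform apply -replace="azurerm_virtual_machine.example"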
Q17- What is terraform import?
Ans: terraform import brings an existing resource, created outside Terraform, under Terraform management by writing it into the state file. Syntax:
terraform import <resource_address> <resource_id>
<resource_address>: The address of the resource in your configuration (e.g., azurerm_resource_group.example).
<resource_id>: The unique identifier of the existing resource in the provider's format.
Q18- What is the purpose of the lockfile in Terraform?
Ans: The purpose of the lockfile in Terraform is to ensure exclusive access and prevent concurrent
modifications to the state file (terraform.tfstate), maintaining consistency and integrity during
infrastructure operations.
Prevents Concurrent Modifications: Terraform uses a lockfile to prevent concurrent modifications to
the state file (terraform.tfstate). This ensures that only one Terraform operation (like terraform apply or
terraform destroy) can modify the state at any given time.
Exclusive Access: When one user or process is modifying infrastructure with Terraform, other users or
processes are prevented from simultaneously applying changes that could conflict or overwrite state data.
Q19- When does state locking occur in Terraform?
Ans: In Terraform, state locking occurs when a command that modifies the Terraform state (terraform apply,
terraform destroy, etc.) is executed.
Q20- How can we deploy to multiple subscriptions using the same Terraform code?
Ans
To deploy multiple subscriptions using the same Terraform code, you can use the concept of workspaces or modules
and parameterize your Terraform configuration. Here's how you can do it:
1. Using Workspaces
Workspaces allow you to manage multiple environments or configurations within the same Terraform configuration.
Here's a general approach:
1. Initialize Terraform:
terraform init
2. Create a Workspace per Subscription:
terraform workspace new dev
terraform workspace new prod
3. Declare a Subscription Variable:
variable "subscription_id" {
  description = "The subscription ID"
}
4. Parameterize the Provider:
provider "azurerm" {
  subscription_id = var.subscription_id
  features {}
}
5. Configure Variables for Workspaces: Create terraform.tfvars files for each workspace, e.g., dev.tfvars,
prod.tfvars, containing the subscription-specific values:
# dev.tfvars
subscription_id = "your-dev-subscription-id"
# prod.tfvars
subscription_id = "your-prod-subscription-id"
6. Apply Configuration: Select the appropriate workspace and apply the configuration:
terraform workspace select dev
terraform apply -var-file="dev.tfvars"
2. Using Modules
You can also use modules and different variable files for different subscriptions. Here's a general approach:
1. Create a Module: Define the resources in a reusable module.
# modules/your_module/main.tf
resource "azurerm_resource_group" "example" {
name = var.resource_group_name
location = var.location
}
# modules/your_module/variables.tf
variable "resource_group_name" {}
variable "location" {}
variable "subscription_id" {}
2. Create a Root Module: In your root module, call the module and pass the necessary variables.
# main.tf
module "your_module" {
source = "./modules/your_module"
resource_group_name = var.resource_group_name
location = var.location
subscription_id = var.subscription_id
}
3. Declare Root Variables:
# variables.tf
variable "resource_group_name" {}
variable "location" {}
variable "subscription_id" {}
4. Create Variable Files: Create variable files for each subscription, e.g., dev.tfvars, prod.tfvars.
# dev.tfvars
resource_group_name = "dev-rg"
location = "West Europe"
subscription_id = "your-dev-subscription-id"
# prod.tfvars
resource_group_name = "prod-rg"
location = "East US"
subscription_id = "your-prod-subscription-id"
5. Apply Configuration: Apply the configuration using the appropriate variable file:
terraform apply -var-file="dev.tfvars"
Q21- How do you reference existing resources in Terraform?
Ans: To reference existing resources in Terraform, you use a data block. The data block allows you to query existing resources and use their attributes in your Terraform configuration.
In your Terraform configuration, define a data block to query the existing resource. For example, to get details of an existing Azure resource group:
provider "azurerm" {
features {}
}
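For example (the resource group name is a placeholder):
data "azurerm_resource_group" "existing" {
  name = "myExistingResourceGroup"
}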
You can use the attributes of the data source in your resource definitions. For example, you might want to use the
location of the existing resource group when creating a new resource:
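For instance, a sketch placing a new storage account in the existing group (names are placeholders):
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = data.azurerm_resource_group.existing.name
  location                 = data.azurerm_resource_group.existing.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}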
Q22- When does Terraform download provider plugins?
Ans: When you run terraform init for the first time in a new configuration directory, Terraform downloads the necessary provider plugins based on the providers specified in your configuration files.
Q23- What is the difference between terraform refresh and terraform plan?
Ans
terraform refresh: Updates the state file to reflect the actual state of resources without proposing any
changes.
terraform plan: Generates an execution plan showing the proposed changes to reach the desired
state defined in the configuration.
Q24- What types of automation tools have you used other than Terraform?
• Azure CLI
• PowerShell
• ARM Template
Q25- Which types of resources have you created in Azure via Terraform?
I have created most of the common services used for IaaS and PaaS, including:
Resource Groups
Virtual Networks
Subnets
Virtual Machines
For example, a Linux virtual machine:
resource "azurerm_virtual_machine" "example" {
  name                  = "example-vm"
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.name
  network_interface_ids = [azurerm_network_interface.example.id] # assumes a NIC defined elsewhere
  vm_size               = "Standard_DS1_v2"

  storage_os_disk {
    name              = "example-os-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "adminuser"
    admin_password = "Password1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}
Q26- Suppose a resource is created using Terraform and someone made changes manually in the portal. How do you sync the changes with the Terraform state file?
Ans:- When changes are made manually in the Azure portal to resources managed by Terraform, you'll need to sync
those changes back into your Terraform state file to maintain consistency and manage them effectively.
• Refresh State: Run terraform refresh to update the state file with the current real-world values of the modified resources:
terraform refresh
• Import Resources: For resources that were created manually outside Terraform, use the terraform import command to bring them under Terraform's management. For example:
terraform import azurerm_resource_group.example <resource_id>
Q27- What are locals in Terraform?
Ans:- In Terraform, locals allow you to define reusable values within a module without creating additional resources. They're useful for avoiding repetitive expressions and enhancing readability.
Example in Azure:
locals {
resource_group_name = "myResourceGroup"
location = "West Europe"
}
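The locals are then referenced with the local. prefix:
resource "azurerm_resource_group" "example" {
  name     = local.resource_group_name
  location = local.location
}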
Q28- What are Landing Zones in Azure and how can we deploy them via Terraform?
Ans:
Azure Landing Zones are architectural blueprints that facilitate the deployment of well-organized, compliant, and secure Azure environments. They typically include a management group and subscription hierarchy, Azure Policy assignments, identity and access management (RBAC), shared networking (such as hub-and-spoke virtual networks), and centralized logging and monitoring.
Deploying via Terraform: Use Terraform to define and provision Azure resources such as management groups, VNets, policies, and more, ensuring consistent deployment and management of Azure landing zones.
Q29- What happens when you run terraform init, terraform plan, and terraform apply?
Ans:
Initialization (terraform init): Initializes the working directory and downloads necessary providers and modules.
Planning (terraform plan): Generates an execution plan showing what Terraform will do to reach the desired state
defined in configuration files.
Application (terraform apply): Applies the changes defined in Terraform configuration files to provision, modify,
or delete infrastructure resources.
Q30- What does terraform destroy do?
Ans:
Plan Destruction: Generate a destruction plan (terraform plan -destroy) to show which resources will be deleted.
Resource Deletion: Delete all resources managed by Terraform in the current configuration.
State Management: Update the Terraform state file (terraform.tfstate) to reflect the removal of resources.
Q31- When is the Terraform state file created and updated?
Ans:
1. Creation: The state file is initially created when you run terraform apply for the first time after
defining infrastructure resources in Terraform configuration files.
2. Resource Tracking: It tracks the current state of deployed resources, including IDs, metadata, and
dependencies.
3. Update Operations: The state file is updated automatically whenever you apply changes (terraform apply)
or destroy resources (terraform destroy), reflecting the current state of your infrastructure.
Q32- You have a requirement to create a scalable web app infrastructure using Terraform; the app consists of a load balancer and a database. What would your design be, how would you structure the Terraform configuration, and how would you deploy the infrastructure?
Ans:
Frontend: The user interface (UI) that users interact with through their browsers.
Backend: The server-side logic that processes requests, interacts with databases (like the DB mentioned earlier),
and performs computations.
Database Integration: Integration with databases for storing and retrieving data.
Scalability Considerations: Configuration to handle varying loads and traffic spikes efficiently.
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "myResourceGroup"
location = "West Europe"
}
resource "azurerm_public_ip" "lb_public_ip" {
name = "myPublicIP"
location = azurerm_resource_group.example.location
allocation_method = "Dynamic"
sku = "Standard"
}
resource "azurerm_lb" "example" {
  name                = "myLoadBalancer"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku                 = "Standard"

  frontend_ip_configuration {
    name                 = "PublicIPAddress"
    public_ip_address_id = azurerm_public_ip.lb_public_ip.id
  }
}

resource "azurerm_lb_backend_address_pool" "example" {
  name            = "myBackendPool"
  loadbalancer_id = azurerm_lb.example.id
}
resource "azurerm_mysql_server" "example" {
name = "myMySQLServer"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
sku_name = "GP_Gen5_2"
storage_profile {
storage_mb = 5120
}
administrator_login = "mysqladmin"
administrator_login_password = "P@ssw0rd!"
}
Q33- What is the purpose of the depends_on attribute?
Ans: The depends_on attribute establishes explicit dependencies between resources in Terraform, ensuring that certain
resources are created or updated before others, effectively controlling the order of resource provisioning and
ensuring correct resource dependencies.
Q34- What is the difference between count and for_each?
Ans: Both count and for_each are used to create multiple instances of a resource in Terraform. count is based on a numeric index and creates a fixed number of resource instances, while for_each iterates over a map or set of values and allows resource instances to be created dynamically from variable input.
Example:
variable "storage_accounts" {
type = map(object({
location = string
sku = string
}))
}
resource "azurerm_storage_account" "example" {
for_each = var.storage_accounts
location = each.value.location
sku = each.value.sku
# Other settings...
}
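For contrast, a count-based sketch that creates a fixed number of instances (all names are placeholders):
resource "azurerm_managed_disk" "example" {
  count                = 3
  name                 = "disk-${count.index}"
  location             = azurerm_resource_group.example.location
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 32
}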
Q35- How do variables (`var`) and outputs (`output`) differ in Terraform?
Ans:-
Variables (var):
• Purpose: Variables in Terraform are used to define and parameterize configurations, allowing for dynamic
inputs that can be customized during deployment.
• Usage: They are declared in .tf files (e.g., variables.tf) and can be set via command-line flags (-var) or
terraform.tfvars files.
variable "region" {
description = "AWS region where resources will be deployed"
type = string
default = "us-east-1"
}
Outputs (output):
• Purpose: Outputs in Terraform are used to expose information about resources after deployment, such as IP
addresses or resource IDs.
• Usage: They are defined in .tf files (e.g., outputs.tf) and accessed using terraform output to fetch values from
the Terraform state.
output "instance_id" {
value = aws_instance.example.id
}
Q36- What types of variables do we use in Terraform?
Ans- Terraform supports several primitive and collection types:
• String: Used for defining text values.
variable "region" {
type = string
default = "us-west-2"
}
• Number: Used for defining numeric values, which can be integers or floating-point numbers.
variable "instance_count" {
type = number
default = 3
}
variable "enable_monitoring" {
type = bool
default = true
}
variable "availability_zones" {
type = list(string)
default = ["us-west-1a", "us-west-1b"]
}
variable "tags" {
type = map(string)
default = {
Environment = "Production"
Owner = "DevOps Team"
}
}
Q37- What is the difference between static and dynamic blocks in Terraform?
Ans:
Static Block:- Static blocks in Terraform are used to define fixed configurations within resource blocks, where the
configuration is predefined and does not change based on input variables.
Dynamic Block: - Dynamic blocks in Terraform allow for flexible configurations based on dynamically-generated
values or lists, where the configuration can vary based on input variables or conditions.
variable "vm_sizes" {
type = list(string)
default = ["Standard_DS1_v2", "Standard_DS2_v2", "Standard_DS3_v2"]
}
resource "azurerm_virtual_machine" "example" {
for_each = { for idx, size in var.vm_sizes : idx => size }
name = "vm-${each.key}"
location = "West Europe"
resource_group_name = azurerm_resource_group.example.name
vm_size = each.value
os_profile {
computer_name = "vm-${each.key}"
admin_username = "adminuser"
admin_password = "P@ssw0rd!"
}
}
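Note that the example above uses for_each to create multiple resources; a dynamic block, by contrast, generates repeated nested blocks inside a single resource. A sketch with hypothetical rule values:
variable "nsg_rules" {
  type = list(object({
    name     = string
    priority = number
    port     = string
  }))
  default = [
    { name = "allow-http",  priority = 100, port = "80" },
    { name = "allow-https", priority = 110, port = "443" }
  ]
}

resource "azurerm_network_security_group" "example" {
  name                = "example-nsg"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  # One security_rule block is generated per entry in var.nsg_rules
  dynamic "security_rule" {
    for_each = var.nsg_rules
    content {
      name                       = security_rule.value.name
      priority                   = security_rule.value.priority
      direction                  = "Inbound"
      access                     = "Allow"
      protocol                   = "Tcp"
      source_port_range          = "*"
      destination_port_range     = security_rule.value.port
      source_address_prefix      = "*"
      destination_address_prefix = "*"
    }
  }
}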
DOCKER INTERVIEW QUESTION
1- How do you build a Docker image from a Dockerfile?
Ans - Use the docker build command. For example:
docker build -t <image-name> .
10- Your Dockerized web application is experiencing increased traffic, and you need to scale it
horizontally. How would you achieve this using Docker?
Ans- Docker Compose: Update the docker-compose.yml file to scale the web service
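For example, Docker Compose can scale a service directly from the CLI (assuming the service is named web):
docker compose up -d --scale web=5
In Swarm mode, the equivalent is docker service scale web=5.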
11- You need to run a MySQL database in a Docker container, but the data should persist even
if the container is removed. How would you set this up?
Ans - Create a Docker volume:
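A minimal sketch (volume and container names, password, and image tag are placeholders):
docker volume create mysql_data
docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql_data:/var/lib/mysql \
  mysql:8.0
Because the data lives in the mysql_data volume, it survives docker rm of the container.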
12- A container is running but your application is not behaving as expected. How would you troubleshoot it?
Ans - Inspect container: Use the docker inspect command to get detailed information about the container.
Execute commands: Use the docker exec command to run commands inside the running
container.
Check Docker daemon logs: If needed, check the Docker daemon logs on the host system for
additional information.
13- Your containers need to communicate with each other, but they are unable to do so. How
would you troubleshoot and resolve this issue?
Ans - Check network configuration: Ensure that the containers are on the same Docker network
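For example, attach both containers to a user-defined bridge network (names are placeholders):
docker network create app-net
docker network connect app-net web
docker network connect app-net db
Containers on the same user-defined network can reach each other by container name.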
14- Your application requires sensitive information such as API keys and passwords. How would you
securely manage these secrets in Docker?
Ans- Use Docker secrets to store sensitive data securely
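A sketch using Swarm-mode secrets (secret and service names are placeholders):
echo "my-api-key" | docker secret create api_key -
docker service create --name web --secret api_key myapp:latest
The secret is mounted inside the container at /run/secrets/api_key.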
15- You need to update your running application without any downtime. How would you
achieve this using Docker?
Ans - Use Docker Swarm for rolling updates.
Define update configurations in the docker-compose.yml for Swarm
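For example, a rolling update of a Swarm service (names and values are placeholders):
docker service update --image myapp:2.0 --update-parallelism 1 --update-delay 10s web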
KUBERNETES INTERVIEW QUESTION
1- What are the main components of the Kubernetes control plane (master node)?
Ans-
1. API Server: The API Server is the entry point for any Kubernetes task. When developers run kubectl commands (for example, to create pods), the request goes to the API Server, which validates it and forwards it to the component that will carry it out.
2. Etcd: etcd is the cluster's database. It stores all information about the master and worker nodes (the entire cluster), such as pod IPs, nodes, and networking configs. etcd stores data as key-value pairs; the data comes from the API Server.
3. Controller Manager: The Controller Manager watches the cluster state through the API Server, compares it with the desired state, and decides what to do by sending instructions back to the API Server.
4. Scheduler: The Scheduler assigns newly created pods to suitable worker nodes. When the API Server reports unscheduled pods (for example, after a scale-up), the Scheduler selects a node for each pod based on resource requirements and constraints.
8- What is kubelet?
Ans- kubelet is an agent that runs on each Node in the Kubernetes cluster. It ensures that
containers are running in Pods as expected and communicates with the Kubernetes API
server.
9- What is a kube-proxy?
Ans- kube-proxy is a network proxy that runs on each Node in the cluster. It maintains
network rules and handles traffic routing to ensure that services can communicate with each
other.
10- What is the difference between a liveness probe and a readiness probe?
Ans- A liveness probe checks whether a container is still healthy; if it fails, the kubelet restarts the container. A readiness probe checks whether a container is ready to accept traffic; if it fails, the pod is removed from the Service's endpoints but is not restarted.
11- Scenario: You notice a pod consistently hitting high CPU usage.
Ans- First, identify the pod using kubectl get pods. Then, use kubectl describe pod
<pod_name> to check resource requests and limits. You can also use kubectl top pods to see
CPU usage. Analyze the pod's workload and optimize the code or adjust resource requests/limits. You can also configure a Horizontal Pod Autoscaler (HPA) to automatically scale the Deployment based on CPU utilization or other metrics, as shown below.
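A sketch of such an HPA manifest (assumes a Deployment named web and a configured metrics server):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70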
This HPA configuration will automatically scale the Deployment up or down based on CPU
utilization.
16- How would you perform a canary deployment in Kubernetes?
Ans- Create a second Deployment running the new (canary) version alongside the stable Deployment, using labels that match the same Service selector.
Update the Service to route traffic to both the stable and canary deployments.
Gradually increase the replicas in the canary deployment while monitoring its performance.
17- How would you deploy an application to multiple environments (e.g., dev, staging,
prod) using Kubernetes?
Ans- Use Kubernetes namespaces to separate environments and customize configurations
using ConfigMaps and Secrets for each environment.
This allows you to deploy the same application to different environments with environment-
specific configurations.
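For example (the manifest file name is a placeholder):
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace prod
kubectl apply -f app.yaml -n dev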
19- How do you specify resource requests and limits for containers?
Ans- You can specify resource requests and limits in the container specification within your Pod or Deployment YAML file.
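A sketch (pod name, image, and values are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "250m"
        memory: "64Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"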
In this example, the container requests 250m of CPU and 64Mi of memory, and is limited to 500m of CPU and 128Mi of memory.
20- A pod is stuck in a Pending state. How would you troubleshoot this issue?
Ans- Check Pod Description: (kubectl describe pod <pod-name>)
Check Node Resources: (kubectl describe nodes)
Check for Node Affinity/Anti-Affinity: Verify if there are any node affinity or anti-affinity rules
that are preventing the pod from being scheduled.
Check Taints and Tolerations: Ensure that the pod has the necessary tolerations for any
taints on the nodes.
kubectl describe nodes | grep -i taints
Check Resource Quotas: Verify if there are any resource quotas in the namespace that are
preventing the pod from being scheduled.
kubectl get resourcequotas
21- A pod is continuously crashing and restarting. What steps would you take to diagnose
the problem?
Ans - Check Pod Logs: Get logs from the crashing pod to see any application errors.
kubectl logs <pod-name>
For previous logs: kubectl logs <pod-name> --previous
Describe the Pod: Check the events and status for any clues.
kubectl describe pod <pod-name>
Check Container Exit Code: Look at the exit code of the container to understand why it is crashing.
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'
Check Readiness and Liveness Probes: Misconfigured probes can cause containers to be killed and restarted.
22- A node is marked as Not Ready. How would you troubleshoot this issue?
Ans- Describe the Node: Check for events and conditions.
kubectl describe node <node-name>
Check Kubelet Logs: Inspect the kubelet logs on the node for errors.
ssh <node-name>
journalctl -u kubelet
Check Node Status: Look for taints and other status conditions.
kubectl get nodes <node-name> -o jsonpath='{.status.conditions}'
23- A pod cannot communicate with another pod in a different namespace. How would you
troubleshoot this issue?
Ans- Check Network Policies: Ensure there are no Network Policies blocking the traffic.
kubectl get networkpolicies --all-namespaces
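You can also test DNS resolution from inside the source pod and confirm the target Service has endpoints:
kubectl exec -it <pod-name> -- nslookup <service-name>.<namespace>.svc.cluster.local
kubectl get endpoints <service-name> -n <namespace>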
24- You are unable to delete a namespace. What could be the possible reasons and how
would you resolve it?
Ans - Check for Finalizers: The namespace might have finalizers that prevent deletion.
kubectl get namespace <namespace-name> -o jsonpath='{.spec.finalizers}'
Check for Resources: Ensure all resources within the namespace are deleted.
kubectl get all -n <namespace-name>
Check for Stuck Resources: Sometimes, resources can get stuck in terminating state.
kubectl get pods -n <namespace-name> | grep Terminating
25- ConfigMap changes are not reflected in the running pods. What could be the issue?
Ans- Check ConfigMap: Ensure the ConfigMap has been updated correctly.
Check How It Is Consumed: Environment variables populated from a ConfigMap are read only at container start, so pods must be restarted (for example, kubectl rollout restart deployment <deployment-name>) to pick up changes. Volume-mounted ConfigMaps are refreshed automatically after a short delay.
Check Volume Mounts: If using volume mounts, ensure they are correctly configured to use the ConfigMap.