Interview Questions for DevOps

Self Introduction

I am Raj. I am from Mysuru, Karnataka. I have around 4 years of experience as an AWS &
DevOps engineer, and I am currently working with Infosys. In this time I have worked on different
DevOps tools like Git, Maven, Jenkins, GitHub Actions, Docker, ECS/Kubernetes, and Terraform, and I
implemented all of this on top of the AWS cloud. I also have experience with AWS services
like VPC, EC2, Auto Scaling, Load Balancer, S3, CloudFront, ECR, ECS, Route 53, SNS, CloudWatch,
Lambda, and some other services. I used Python and shell scripts for automation and Groovy to
write CI/CD pipelines. I used Jira and Notion as ticketing tools.

Optional: Our current project is in healthcare; give a brief about the project.

When it comes to my daily roles & responsibilities: every day we have a stand-up call where we discuss
the overall status of the project, any upcoming release versions, and whether any tickets are pending.
If any application-related issue occurs, we raise a Jira ticket, assign it to a specific team member,
coordinate with the developer team members, and fix the issue as soon as possible depending on the
job priority (application priority).

Additional:

-> Checking alert mails and fixing any issues found.

-> Taking care of any changes required on the infrastructure side.

-> Debugging and fixing any issues found.

-> Adding anything that needs to be added to the CI/CD pipeline as per requirements.

-> When a new project is onboarded, creating the pipeline from scratch and provisioning the
infrastructure with Terraform.

================================================================

AWS Questions:

Q) What is VPC ?

VPC stands for Virtual Private Cloud. It is a virtual network dedicated to our AWS (Amazon Web
Services) account.

-> In a VPC we create subnets and categorise them as public subnets (given internet access through
an internet gateway) and private subnets (given outbound internet access through a NAT gateway).
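A hedged AWS CLI sketch of that setup (all CIDR blocks and resource IDs below are placeholders, not real values):

# Create a VPC and two subnets
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1234567890abcdef0 --cidr-block 10.0.1.0/24   # public subnet
aws ec2 create-subnet --vpc-id vpc-1234567890abcdef0 --cidr-block 10.0.2.0/24   # private subnet

# Public subnet: attach an internet gateway and route 0.0.0.0/0 through it
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-1234567890abcdef0 --vpc-id vpc-1234567890abcdef0
aws ec2 create-route --route-table-id rtb-public1111111111 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-1234567890abcdef0

# Private subnet: route internet-bound traffic through a NAT gateway (which sits in the public subnet)
aws ec2 create-nat-gateway --subnet-id subnet-public111111111 --allocation-id eipalloc-1234567890abcdef0
aws ec2 create-route --route-table-id rtb-private111111111 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-1234567890abcdef0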

Q) What is VPC PEERING?

In companies we usually have multiple VPCs; establishing a connection between one VPC and another
VPC is called VPC peering. It can be within the same account, across different accounts, or across
different regions. (While creating a VPC peering connection, AWS asks whether it is for the same
account or another account.)
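A hedged CLI sketch of the peering flow (VPC IDs, account ID, and region are placeholders; --peer-owner-id is only needed for a different account and --peer-region for a different region):

# Request the peering connection from the requester VPC
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-aaaa1111 \
    --peer-vpc-id vpc-bbbb2222 \
    --peer-owner-id 111122223333 \
    --peer-region us-west-2

# Accept the request on the accepter side
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-1234567890abcdef0

# Add a route in each VPC's route table so traffic flows over the peering connection
aws ec2 create-route --route-table-id rtb-aaaa1111 \
    --destination-cidr-block 10.2.0.0/16 \
    --vpc-peering-connection-id pcx-1234567890abcdef0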

EC2 (Not Important)

Instance types:
On-demand: You can launch, start, stop, and terminate instances anytime as per your demand (that is
why they are called on-demand instances), and you pay on a pay-as-you-go model. To launch, go to
EC2 -> Instances -> Launch instance.

Reserved Instance: You reserve the instance for 1 year or 3 years (only those two options exist). This
saves roughly half the cost compared to On-Demand; the exact saving can depend on the region. Go
with this only if you are sure you will use the instance for at least a year. (First we launch an On-
Demand instance and then purchase a reservation that applies to it.) (If you select "Convertible"
instead of "Standard", you can change the instance type later, e.g. t2.medium, c5.xlarge.)

To launch, go to EC2 -> Instances -> Reserved Instances.

Spot Request/Instance: Think of a data centre (in any region) with 1000 machines where only 300 are
in use. AWS keeps around 200 as a buffer and puts the remaining ~500 unused machines up for
bidding, offering discounts of around 60 to 70% on these instances.

Scheduled Instance: If you run a script/job every day at, say, 4 AM and need an instance only for 20
minutes or an hour at that time, you can use a scheduled instance (available only in some regions).

Dedicated Hosts/Instance: Normally when you ask for a machine (say 2 GB RAM, 1 CPU), AWS places it
on shared hardware. If that physical hardware is dedicated to you alone, it is called a dedicated host
(even if you asked for 2 GB and the host has 8 GB, the hardware is reserved for you only). Because the
instance stays on the same dedicated hardware, it does not move to different hardware when you
stop and restart it, unlike shared tenancy where the instance may come back up on other hardware.
We use dedicated hosts mainly for security, licensing, and compliance purposes.

Q) What is AUTO SCALING?

Auto Scaling automatically scales up / scales down, i.e. it automatically increases or decreases the
number of instances. -> We use horizontal scaling: it automatically launches one more identical copy
of the running instance when the threshold value (e.g. 70%/60% CPU) is exceeded, and removes
instances when the load drops.
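A hedged CLI sketch of that setup (the launch template, subnet IDs, group sizes, and the 70% target are placeholders):

# Create an Auto Scaling group from an existing launch template
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-template LaunchTemplateName=web-template,Version='$Latest' \
    --min-size 2 --max-size 6 --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

# Target-tracking policy: add/remove identical instances to keep average CPU around 70%
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name cpu-target-70 \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":70.0}'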

Q) What is “Load balancer” ?

Suppose our application runs on 10 machines (EC2) as per our scale. If we put a load balancer on top
of them, it distributes the load across all the machines; basically, it balances the load. If 10 requests
come in, it sends each request to a different machine. (We map our IP address/domain name to the
load balancer.) It also does health checks: if any one machine is not working as expected, it notifies
us so we can go and fix that one. (A minimal CLI sketch follows the list of load balancer types below.)

There are three main types of load balancer (plus the deprecated Classic Load Balancer):

Application LB' : Choose an Application Load Balancer when you need a flexible feature set for your
applications with HTTP and HTTPS traffic. Operating at the request level, Application Load Balancers
provide advanced routing and visibility features targeted at application architectures, including
microservices and containers.

Network LB': We can go with Network Load Balancer when you need ultra-high performance,
operating at the connection level, Network Load Balancers are capable of handling millions of requests
per second securely while maintaining ultra-low latencies.
Gateway LB': We can go with Gateway Load Balancer when you need to deploy and manage a fleet of
third-party virtual appliances, These appliances enable you to improve security, compliance and policy
controls.

Classic LB': Deprecated
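A hedged CLI sketch of putting an Application Load Balancer in front of EC2 instances (all names, IDs, ARNs, and the /health path are placeholders):

# Create an ALB in two public subnets
aws elbv2 create-load-balancer \
    --name web-alb --type application \
    --subnets subnet-aaaa1111 subnet-bbbb2222 \
    --security-groups sg-1234567890abcdef0

# Target group with a health check; unhealthy instances stop receiving traffic
aws elbv2 create-target-group \
    --name web-tg --protocol HTTP --port 80 \
    --vpc-id vpc-1234567890abcdef0 \
    --health-check-path /health

# Register the EC2 instances and listen on port 80
aws elbv2 register-targets --target-group-arn <web-tg-arn> --targets Id=i-0aaa11112222bbbb3 Id=i-0ccc33334444dddd5
aws elbv2 create-listener --load-balancer-arn <web-alb-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<web-tg-arn>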

Q) Why we use S3 Bucket?

-> We store application or database backups, env files, and logs, and we can host our front-end
application on S3. -> You can upload objects up to 5 TB each, and total storage is unlimited.
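A hedged CLI sketch of both use cases (bucket and file names are placeholders):

# Store a database backup / env file / logs in S3
aws s3 mb s3://my-app-backups
aws s3 cp backup-2024-01-01.sql.gz s3://my-app-backups/mysql/

# Host a static front-end build from S3
aws s3 website s3://my-frontend-bucket --index-document index.html --error-document error.html
aws s3 sync ./dist s3://my-frontend-bucket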

Q) What is Route 53?

-> This service is all about the domain name system (DNS).

-> Other domain name registrars include GoDaddy, Crazy Domains, and many more.

-> Here we can purchase a domain name and map it to our resources; if the domain was purchased
from another registrar, we can still map it here.
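A hedged CLI sketch of mapping a record in a hosted zone (the hosted zone ID, record name, and target are placeholders):

# Create/update a CNAME record pointing app.example.com at a load balancer DNS name
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1234567890ABC \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "my-alb-123456.us-east-1.elb.amazonaws.com"}]
        }
      }]
    }'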

Q) Brief about “IAM”?

IAM, or Identity and Access Management, is a service provided by AWS that helps you securely control
access to AWS resources. In IAM we manage users, groups, and roles to control who is authenticated
(signed in) and authorized (has permissions) to use resources in our AWS account.

Key features and concepts of AWS IAM include:

Users:

IAM users represent an individual or application that interacts with AWS. Each user has a unique
name and security credentials (username and password or access keys) that are used to sign in
securely.

Groups:

Groups are collections of IAM users. You can assign permissions to a group, and all users in that
group inherit those permissions. This helps in managing access at scale, as permissions are assigned
to groups rather than individual users.

Roles:

IAM roles are similar to users, but roles are used to grant permissions to AWS services/resources (one
service accessing another) or for cross-account access.

Policies:

IAM policies are JSON documents used to grant the required permissions to users, groups, and roles.

Permissions and Access Control:

IAM allows fine-grained control over access to AWS resources. Permissions can be defined based on
specific actions (e.g., ec2:StartInstances, s3:GetObject), resources (e.g., an S3 bucket or an EC2
instance), and conditions.
Multi-Factor Authentication (MFA):

IAM supports Multi-Factor Authentication to add an extra layer of security. Users can be required to
provide a second form of authentication (such as a time-based one-time password from a hardware
or virtual MFA device) in addition to their password.

Access Keys:

IAM users can have access keys (access key ID and secret access key) for programmatic access to AWS
services. Access keys are commonly used with the AWS Command Line Interface (CLI), SDKs, and
other developer tools.

IAM Policies Types:

Managed Policies: Pre-built policies managed by AWS that you can attach to users, groups, or roles.

Inline Policies: Policies that you create and manage directly within a user, group, or role.
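A hedged CLI sketch tying these pieces together: creating a customer-managed policy (a JSON document like the one below) and attaching it to a group. The policy name, group name, bucket, and account ID are placeholders:

# Create a policy that allows s3:GetObject on one bucket (placeholder names/ARNs)
aws iam create-policy --policy-name app-read-s3 --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::my-app-bucket/*"
  }]
}'

# Attach it to a group so every user in the group inherits the permission
aws iam create-group --group-name app-readers
aws iam attach-group-policy --group-name app-readers \
    --policy-arn arn:aws:iam::111122223333:policy/app-read-s3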

Q) Why you used “Lambda”?

-> I used Lambda to automate a few things.

Like:

- I created a Lambda function to export logs from a CloudWatch log group to an S3 bucket after
the retention period of the log group. Compared to S3 storage, CloudWatch log storage is costly, so
we keep logs in the log group for a maximum of 30 days and after that export them to S3 for cost saving.
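Under the hood that export maps to the CloudWatch Logs create-export-task API. A hedged CLI equivalent of what the function does (the log group name, bucket name, and 30-day window are placeholders):

# Export the last 30 days of a log group to S3 (the Lambda does the same via the SDK)
FROM_MS=$(( ($(date +%s) - 30*24*3600) * 1000 ))
TO_MS=$(( $(date +%s) * 1000 ))

aws logs create-export-task \
    --log-group-name /app/prod/service \
    --from "$FROM_MS" --to "$TO_MS" \
    --destination my-log-archive-bucket \
    --destination-prefix exported-logs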

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Devops Question
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Q) What are all the tools you used in Devops?

-> Git, GitHub, Jenkins (CI/CD), GitHub Actions (CI/CD), Docker, Kubernetes, Terraform.

I also have knowledge of Ansible.

Git Commands :

git config --global user.name "testuser"

git config --global user.email “[email protected]

* Tell Git who you are

* Configure the author’s name and email address to be used with your commits.

Note that Git strips some characters (for example trailing periods) from user.name

$git init => Create a new local git repository


$git clone /path/to/repository => Create a working copy of a local repository:

$git clone username@host:/path/to/repository => For a remote server, use.

$git add <filename> => Add one file to staging (index)

$git add * => Add one or more files to staging (index).

$git add *.html /* To add specific File Like .html/ .java/ .py/.sh */

$git add . /* To add all files to staging area */

$git commit -m "Commit message" => Commit changes to head (but not yet to the remote
repository)

$git commit -a => Commit any files you've added with git add, and also commit any files you've
changed since then.

$git push origin branch_name => Push your committed changes on branch_name to the remote repository.

$git status => List the files you've changed and those you still need to add or commit.

$git remote add origin <server> =>Connect to a remote repository, If you haven't connected your
local repository to a remote server, add the server to be able to push to it.

$git remote -v => List all currently configured remote repositories.

$git checkout <branchname> => Switch from one branch to another.

$git checkout -b <branchname> => Create a new branch and switch to it.

$git branch -a => List all the branches in your repo, and also tell you what branch you're currently
in.

$git branch -d <branchname> => Delete the feature branch.

$git branch -M main => Rename the current branch to main.

$git config --global init.defaultBranch main -> To set default branch

$git push origin <branchname> => Push the branch to your remote repository, so others can use it.

$git push --all origin => Push all branches to your remote repository.

$git pull => Fetch and merge changes from the remote server into your working directory.

Q) How ‘merge conflict will happen & how you will solve ?

-> A merge conflict happens when two developers work on the same repo and the same file. For
example, Dev-1 cuts a branch from the main branch and works on feature.html but has not yet
committed and pushed; at the same time Dev-2 clones the main branch, works on the same file, and
commits and pushes to main before Dev-1. When Dev-1 then tries to merge into the main branch, a
merge conflict occurs.

Solve:
-> If we open the conflicted file in VS Code and try to merge, it shows three options: 1 -> Accept
Current Change (use our branch's code), 2 -> Accept Incoming Change (use the other branch's code),
3 -> Accept Both Changes (keep both). We choose whichever we want, then run #git add . and
#git commit, and then merge (e.g. #git merge dev-2 into dev-1).
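A short command-line view of the same flow (branch and file names are placeholders):

# Dev-1 merges main into their feature branch and hits a conflict
git checkout feature-branch
git merge main
# Git reports something like: CONFLICT (content): Merge conflict in feature.html

# Inside feature.html, Git marks both versions:
#   <<<<<<< HEAD      (our branch's change)
#   =======           (separator)
#   >>>>>>> main      (incoming change)

# Edit the file to keep the current change, the incoming change, or both, then:
git add .
git commit -m "Resolve merge conflict in feature.html"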

-------------------

JENKINS:

Q) Tell me what are all the steps involved in your Jenkins pipeline jobs
A: We have a Git checkout stage where we clone the latest code from GitHub, a Sonar stage to check
code quality, a Maven build stage to build the Java application, unit tests, a Docker build stage, a Trivy
vulnerability scan stage, a Docker image push stage, and finally a deploy to Kubernetes or ECS stage.

Q) What are all the plugins you worked on?

-> I have worked with these plugins: Git, GitHub, Maven Integration, SonarQube Scanner, JUnit, Docker
Pipeline, Amazon ECR, Docker security scanning (Trivy), Kubernetes Continuous Deploy (required for
deploying to Kubernetes using Helm in a Jenkins pipeline), Kubernetes CLI (required for interacting
with Kubernetes clusters), AWS CLI, and Email Extension (required for sending email notifications
from the post block).

// nexus artifact uploader plugin, deploy to container for tomcat, ansible & some more.

Q) Difference b/w freestyle & pipeline job?

-> Freestyle job: You configure everything through the web UI; based on the plugins selected you get
options, and all of it is configured through the UI.

-> Pipeline: We use a Groovy script for pipeline jobs, writing stages like Git checkout, Maven build,
Docker build & push, and deploy to the target server.

Q) What is “master-slave” architecture? What is the use of it?

-> We use it so that we do not put all the burden on the master. We don't want to store all the source
code and run all the builds on the master machine where Jenkins is installed, so we divide the workload
across different slave (agent) machines. If you run parallel jobs, say 2 or 3 jobs at the same time on the
same Jenkins machine, there is a chance Jenkins might crash; so instead of running 3/4 projects at a
time on the master, we take slave machines and run the jobs on them.

-> Also, sometimes you need specific dependencies, tools, or configurations for different projects or
jobs; separate agents help with that and prevent resource contention and conflicts.

Jenkins pipeline example to learn how to write (structure):

pipeline {

    agent any

    environment {
        AWS_DEFAULT_REGION = 'your-eks-region'
        EKS_CLUSTER_NAME   = 'your-eks-cluster-name'
        HELM_CHART_NAME    = 'your-helm-chart-name'
    }

    stages {

        stage('Checkout') {
            steps {
                script {
                    // Clone the latest code from GitHub
                    git 'https://github.com/git-repo-name/webapp-demo.git'
                }
            }
        }

        stage('Build') {
            steps {
                script {
                    sh 'mvn clean install'
                }
            }
        }

        stage('Docker Build') {
            steps {
                script {
                    // Build Docker image and push to ECR (Elastic Container Registry)
                    // (in practice the image is tagged with the full ECR registry URI before pushing)
                    sh 'docker build -t ecr-repository-name:latest .'
                    sh 'aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin your-ecr-repository'
                    sh 'docker push ecr-repository-name:latest'
                }
            }
        }

        stage('Deploy to EKS') {
            steps {
                script {
                    // Configure kubectl with EKS credentials
                    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                                      credentialsId: 'your-aws-credentials-id',
                                      accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                                      secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
                        sh "aws eks --region $AWS_DEFAULT_REGION update-kubeconfig --name $EKS_CLUSTER_NAME"
                        // Deploy Helm chart to EKS
                        sh "helm upgrade --install $HELM_CHART_NAME your-helm-chart-directory"
                    }
                }
            }
        }
    }

    // Post block
    post {
        // Send the build job status by email if the status is failure, unstable, aborted or success
        always {
            echo "Build finished with status: ${currentBuild.currentResult}"
        }
    }
}
============================================================

DOCKER:

GENERAL COMMANDS:
apt install docker.io : To install Docker on Ubuntu
docker ps : To check the status of running containers
docker ps -a : To check exited/stopped containers as well
service docker status : To check the Docker service status
docker image ls : To list images

docker pull tomcat (or image link) : To pull the tomcat image from Docker Hub
docker build . : To build from a Dockerfile
docker build -t project-name . : To build from a Dockerfile with a tag name, or
docker build -t repo-name:1.0 . : "1.0" is the tag name
docker build . --tag tomcat:rajudev : The repo name is tomcat and the tag is rajudev
docker run -d --name container_name -p 8081:8080 tomcat(image-name or image-id):latest
/* To create a container; add --hostname host-name if you want to set a hostname */ or
docker container run -d --name container_name -p 8081:8080 image-name:latest
docker run -d --env-file=dev-env.env -p 3880:3880 image-id : To create a container when you
are using .env (environment) files
docker exec -it <container-id> /bin/bash : To log in to a container (-it means interactive
terminal); type exit or press Ctrl+D to come out of the container
useradd username : To create a user
passwd username : To set the user's password
cat /etc/group : To see groups
usermod -aG docker(group-name) dockeradmin(user-name) : To add the user to the docker group
id dockeradmin(username) : To check a user's groups and information
ip addr : To check the private IP
vi /etc/ssh/sshd_config : To check the SSH config file
service sshd reload : To reload the SSH service
cd /home/dockeradmin(username) : Home path of the user
docker rm container-id/name : To remove a container
docker stop container-id/name : To stop a particular container
docker stop $(docker ps -a -q) : To stop all containers
docker rm $(docker ps -a -q) : To remove all containers
docker container prune : To remove all stopped/exited containers
docker rmi docker-image-name-or-id : To remove an image
docker rmi $(docker images -a -q) : To delete all images
docker image prune -a : To remove all unused images
docker system prune -a : To clean up stopped containers, unused images, and networks
(add --volumes to also remove unused volumes)
docker network create network-name : To create a network
docker network connect network_name container-id : To connect a container to a network
docker network disconnect network_name container-id : To disconnect a container from a network
# To check which containers are connected to a network:
docker network inspect network-name --format='{{range $container, $config :=
.Containers}}{{printf "%s\n" $container}}{{end}}'
docker volume create volume-name : To create a volume
/var/lib/docker/volumes : Default storage location of Docker volumes
docker logs -f container-id : To follow the live logs of a container
docker logs container-name/ID : To check container logs
/var/lib/docker/containers/container-id/ : Directory holding a particular container's logs
/var/lib/docker : Where all Docker-related data lives: containers, images,
volumes, logs, networking info, everything is here
docker-compose -v : To check the Docker Compose version
docker-compose up -d --build : To build the images and start the Compose stack in detached mode
ip addr show : To see IP addresses
#apt-get remove docker docker-engine docker.io containerd runc /* To remove an existing
Docker setup / engine */
#systemctl status docker : To check Docker status on any Linux machine
#docker exec -it <container_name_or_id> ps aux : To check running processes in a container

--------------------------
=> Difference between containerization & virtualization
Ans: Containers are lightweight; for virtualization each VM needs its own dedicated guest OS, RAM,
and storage. A container does not need a dedicated OS or fixed RAM/storage allocation: we just
package the application and its dependencies together as an image and deploy it as a container-based
application.
-> In containerization all containers share the host's kernel and share the host OS's resources, while
each VM runs its own guest OS on a hypervisor with its own allocated memory and CPU.
Docker file for JAVA APP
=====================

Single stage docker file:

Build the artifact (.jar) from Jenkins using Maven, then build the image using the Dockerfile below.

# For Java 8, try this

# FROM openjdk:8-jdk-alpine

# For Java 11, try this

FROM adoptopenjdk/openjdk11:alpine-jre

# Refer to Maven build -> finalName

ARG JAR_FILE=target/spring-boot-web.jar

# cd /opt/app

WORKDIR /opt/app

# cp target/spring-boot-web.jar /opt/app/spring-boot-web.jar

COPY ${JAR_FILE} spring-boot-web.jar

# java -jar /opt/app/app.jar

ENTRYPOINT ["java","-jar","spring-boot-web.jar"]

## sudo docker run -p 8080:8080 -t docker-spring-boot:1.0


## sudo docker run -p 80:8080 -t docker-spring-boot:1.0

## sudo docker run -p 443:8443 -t docker-spring-boot:1.0

=======================
Multi-stage Dockerfile :

Deploy Java app to tomcat

FROM openjdk:11 AS base
# Making this image the "base" stage because it's a multi-stage build

WORKDIR /app
# Set the working directory

COPY . /app
# Copy all files from the host into the working directory

RUN ./gradlew build
# Build the application (assumes a Gradle wrapper that produces build/libs/webapp.war)

FROM tomcat:9

WORKDIR /usr/local/tomcat/webapps
# Tomcat's webapps directory, where WAR files are auto-deployed

COPY --from=base /app/build/libs/webapp.war ./webapp.war
# Copy the WAR built in the base stage into Tomcat

KUBERNETES COMPONENTS:
1] MASTER: It basically manages the nodes & pods in a cluster, so whenever a node fails it takes care
of moving the workload of the failed node to some other node.

Master has 4 components: (API Server, Scheduler, Etcd, Controller manager)

* API Server: The API server is our communication point: when you interact with your Kubernetes
cluster using the kubectl command, you are actually communicating with the master's API server. All
the components communicate with this API server. All sorts of information go through the API server;
whether we want to create a new microservice, terminate, launch, or auto-scale, for everything we
communicate with the API server, and we do so from kubectl.

* Scheduler: The scheduler schedules pods across multiple nodes. For example, if one microservice
should run with a replication factor of 3, the scheduler takes care of deciding which 3 machines to
schedule it on. The scheduler gets all its information from the configuration files and, indirectly, from
the etcd data store.

* etcd: It is a kind of database; it stores all the objects, all the service information, all the pod
information, and where each particular pod is running. [It is a distributed, consistent key-value store
used for configuration management (e.g. if one microservice runs on 3 machines, which machines it
is running on), service discovery (how components connect to each other), and coordinating
distributed work.]

* Controller manager: Across the cluster there are different types of controllers, like the replication
controller, endpoints controller, and namespace controller. All these controllers are managed by the
controller manager.
2] NODE / SLAVE: It can be a physical machine or a VM. Its components are the kubelet, the container
engine, the service/kube-proxy, and iptables.

* Kubelet: The kubelet is the slave-side component that is in direct sync with our API server. From the
API server we tell the kubelet to run a particular microservice on a particular machine: which image to
run, how many copies (1/2/3), deploying the application, checking the node and its health, and which
services are running; all of this goes through the kubelet. So the kubelet is the component in direct
sync with the API server.

*Container Engine (Docker): If you want to run containers, you need a container engine such as
Docker. It runs on all the machines.

*Service / kube-proxy: If an end user wants to connect to the application, we need the service proxy.

(kube-proxy is a core networking component on a node, and it can also interact with the external
world. It is the component or agent responsible for maintaining the network configuration and rules,
so kube-proxy is the networking component in Kubernetes. All nodes run a daemon called kube-proxy,
which watches the API server on the master node; it communicates with the master node through the
API server and gets all the information about the addition or removal of services and endpoints. That
is all about kube-proxy.)

*iptables: For network establishment and connectivity we use iptables (kube-proxy programs the
iptables rules).

K8S COMMANDS
kubectl create ns namespace-name : To create namespace

kubectl get namespaces( or ns) : To check all namespaces created on cluster

kubectl get pods -n namespace-name or

kubectl get pods --namespace namespace-name : To check pods running on particular namespace

kubectl get services -n namespace-name: To check services running on particular namespace

Helm commands:

helm create chart-name : To create helm chart

helm template chart-name : Renders the templates locally so you can check the actual values.

helm lint chart-name : Highlights any problems in your Helm chart code; it's like a compile/lint step.

helm install release-name --debug --dry-run chart-name : Verifies everything before actually running
the helm install command; if there is any configuration mistake or error it highlights it, otherwise it
simply renders the .yaml files.

helm install release-name chart-name: To install helm chart to deploy application

helm upgrade release-name chart-name: Upgrade new version of application

kubectl get all: To check info like Pod, services, replicas

helm list -a: Verify the helm install ( To see all the releases)
helm rollback release-name 2(revision): To rollback previous release

helm delete release-name: Delete Helm release

helm dependency list

helm dependency update

#kubectl describe quota quota-name -n name-space: It will show how much resources it used.

Practice writing the YAML files below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp3-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp3
  template:
    metadata:           # Dictionary
      name: myapp3-pod
      labels:           # Dictionary
        app: myapp3
    spec:
      containers:       # List
        - name: myapp3-container
          image: nginx
          ports:
            - containerPort: 80

===============================
apiVersion: v1
kind: Service
metadata:
  name: deployment-nodeport-service
spec:
  type: NodePort
  selector:
    app: myapp3
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 31233

+++++++++++++++++++++++++++++++++++++++++++++++
TERRAFORM

+++++++++++++++++++++++++++++++++++++++++++++++
Q) Explain the core Terraform end-to-end workflow to deploy & delete resources in Azure or AWS cloud?

1. Write: First we write the Terraform configuration files based on the requirement.

2. Init: terraform init initializes the provider plugins (e.g. AWS), and a .terraform directory is created
containing two things: the provider files and the module files.

(By default, it initializes the latest version of the AWS provider unless a version is pinned.)

3. Validate: Then we validate the Terraform configuration files with the #terraform validate command,
which checks them for syntax errors and internal consistency. (If there is any syntax error it throws it,
and it makes sure the configuration is internally consistent.)

4. Plan: Plan is a preview of the changes before applying, i.e. what exactly our configuration is going
to create in the cloud. If you want a preview before applying, run the #terraform plan command.

+ indicates a resource will be created

- indicates a resource will be destroyed

~ indicates a resource will be updated in place

-/+ indicates a resource will be destroyed & re-created

5. Apply: Whatever we planned so far, if you want to apply those changes on the cloud provider, run
the command #terraform apply.

6. Destroy: If you want to delete/destroy the Terraform-managed infrastructure, run #terraform
destroy. (A minimal end-to-end command sequence is sketched below.)
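A minimal end-to-end sequence, run from the directory containing the .tf files:

terraform init        # Download provider plugins, create the .terraform directory
terraform validate    # Check syntax and internal consistency
terraform plan        # Preview: + create, - destroy, ~ update in place, -/+ destroy & re-create
terraform apply       # Apply the planned changes on the cloud provider
terraform destroy     # Tear the infrastructure back down when it is no longer needed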

Common commands:

$ terraform destroy -target RESOURCE_TYPE.NAME #To delete single resource


$ terraform destroy -target RESOURCE_TYPE.NAME -target RESOURCE_TYPE2.NAME #Multiple

#terraform state rm resource_name : Removes the resource only from the Terraform state file
(terraform.tfstate); the real resource is not destroyed.

#terraform taint resource_name : Marks a resource as degraded or damaged so that it is replaced on
the next apply.

#terraform apply -replace="resource_name" : Replace a resource.

#terraform state list : To list all resources in the state.

#terraform destroy : Deletes the resources that Terraform created on AWS.

#terraform import resource_name

#terraform refresh

#terraform plan -var-file=demo.tfvars : Use this if your 'terraform.tfvars' file has a different name, like
project.tfvars; alternatively, name the file 'filename.auto.tfvars' so it is picked up automatically.

#terraform plan/apply -var="instancetype=t2.large" -var="image=ami-id" #To pass variables in cmd line

terraform state mv <old-name> <new-name>: Moves a resource within the state file.

terraform state rm <resource-name>: Removes a resource from the state. Use with caution.

terraform output: Displays output values defined in the configuration.

terraform get: Downloads and installs modules from the source specified in the configuration.

terraform init -upgrade: Upgrades modules to the latest versions defined in the configuration.

terraform import <resource-type>.<resource-name> <resource-id>: Imports an existing resource into
the Terraform state.

Ex: terraform import aws_instance.instance_name i-0123456789abcdef0

terraform providers: Lists the providers used in the configuration and their versions.

terraform init -upgrade: Upgrades providers to the latest versions defined in the configuration.

Q) How can I upgrade plugins in terraform?

Ans:- Running #terraform init -upgrade command.

TERRAFORM FILE
terraform {
  required_providers {                          # Provider
    aws = {
      source  = "hashicorp/aws"
      version = "4.13.0"
    }
  }
}

# Necessary
provider "aws" {
  region     = var.aws_region
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_instance" "base" {                # EC2
  ami           = "ami-cgcgwcw7587cwc7"
  instance_type = "t3.micro"
  count         = 3                             # If you want to launch 3 copies
  key_name      = aws_key_pair.tf-key.key_name
  vpc_security_group_ids = [aws_security_group.My_SG.id]   # Security group defined elsewhere

  tags = {
    Name = "tf-instance"
  }
}

resource "aws_key_pair" "tf-key" {              # Key pair
  key_name   = "tf-Key"
  public_key = "vigcwgcowicgiowcgwocgo"
}

resource "aws_eip" "my_eip" {                   # Elastic IP
  vpc      = true
  instance = aws_instance.base[0].id            # With count = 3, reference one instance by index
}

# Create VPC                                    # VPC / Network
resource "aws_vpc" "main-vpc" {
  cidr_block = "10.1.0.0/16"

  tags = {
    Name = "default-vpc"
  }
}

# Private subnet
resource "aws_subnet" "private-subnet" {        # Subnet
  vpc_id     = aws_vpc.main-vpc.id
  cidr_block = "10.1.1.0/24"

  tags = {
    Name = "Private-subnet"
  }
}

# Public subnet
resource "aws_subnet" "public-subnet" {
  vpc_id     = aws_vpc.main-vpc.id
  cidr_block = "10.1.2.0/24"

  tags = {
    Name = "Public-subnet"
  }
}

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Python and Shell Script

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Q) Why you use “Python and Shell Script”?

I used them for automation. For example:

-> Dumping a 'MySQL' database backup to an S3 bucket: we have MySQL set up on a DigitalOcean
cloud server, so we need to take the backup from there. For this I used a shell/Python script and run
it automatically every day at 1:00 AM. (A hedged sketch of such a script follows below.)

-> Restarting the bots in our application from the bots server (DigitalOcean): we run bots in our
application for load testing, and doing that manually requires many settings, so I automated all of this
with a Python or shell script.
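A hedged sketch of the nightly backup script (host, credentials, database, and bucket names are placeholders; it assumes the MySQL password is available in an environment variable and the script is scheduled with cron, e.g. 0 1 * * *):

#!/bin/bash
# Dump the MySQL database, compress it, and push it to S3
set -euo pipefail

TIMESTAMP=$(date +%F)
DUMP_FILE="/tmp/mydb-${TIMESTAMP}.sql.gz"

mysqldump -h db.example.com -u backup_user -p"${MYSQL_PASSWORD}" mydb | gzip > "$DUMP_FILE"
aws s3 cp "$DUMP_FILE" "s3://my-db-backups/mysql/mydb-${TIMESTAMP}.sql.gz"
rm -f "$DUMP_FILE"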

Q) Python script to print unique number in a list, take input from users

def get_unique_numbers():
    try:
        numbers = input("Enter a list of numbers separated by spaces: ").split()
        numbers = [int(num) for num in numbers]
        unique_numbers = set(numbers)
        print("Unique Numbers:", sorted(unique_numbers))
    except ValueError:
        print("Invalid input. Please enter valid numbers.")

if __name__ == "__main__":
    get_unique_numbers()

====================================================

Q) Write a shell script to meet the following requirements:

First, prompt for the folder path. If the folder exists, indicate that the folder exists. If the folder
doesn't exist, create the folder, then download and install a package in the newly created folder.

#!/bin/bash

# Prompt the user for a folder path

read -p "Enter the folder path: " folder_path

# Check if the folder already exists

if [ -d "$folder_path" ]; then

echo "Folder already exists."

else

# If the folder doesn't exist, create it

mkdir -p "$folder_path"

# Move into the new folder

cd "$folder_path"
    # Download and install a package (e.g., 'example_package' for demonstration; placeholder URL)

    wget https://example.com/example_package.tar.gz

    tar -xzf example_package.tar.gz

fi

You might also like