DevOps Technical Interview Questions
Characteristics of DevOps:
• Basic premise: A collaboration of development and operations teams; it is more of a cultural shift
• Related to: Agile methodology
• Priorities: Resource management, communication, and teamwork
• Benefits: Speed, functionality, stability, and innovation
A DevOps engineer is responsible for bridging the gap between the development and
operations teams by facilitating the delivery of high-quality software products. They
use automation tools and techniques to streamline the software development
lifecycle, monitor and optimize system performance, and ensure continuous
deployment and delivery.
Moreover, they ensure that everything in the development and operations process
runs smoothly.
HTTP or Hypertext Transfer Protocol works in a client–server model like most other
protocols. HTTP provides a way to interact with web resources by transmitting
hypertext messages between clients and servers.
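As an illustration, a minimal request and response (the host and resource are only examples) look like this:

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html

<html>...</html>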
• Development
• Version Control
• Testing
• Integration
• Deployment
• Delivery
• Configuration
• Monitoring
• Feedback
6. What are some technical and business benefits of
DevOps?
Technical benefits:
• Continuous software delivery
• Less complexity to manage
• Faster resolution of problems

Business benefits:
• Faster delivery of features
• More stable operating environments
• More time available to add value
7. Which are some of the most popular DevOps tools?
• Git
• Maven
• Selenium
• Jenkins
• Docker
• Puppet
• Chef
• Ansible
• Nagios
8. What are the core principles of DevOps?
The core principles of DevOps include a culture of shared responsibility between development and operations, automation of the delivery pipeline, continuous integration and delivery, and continuous monitoring. Teams collaborate closely to ensure everyone is on the same page and continuously work to improve the software through monitoring and feedback loops.
To evaluate the success of a DevOps implementation, we can use key indicators such
as the frequency of changes, the speed of implementation, error recovery time, and
the incidents of issues arising from changes. These metrics enable us to assess the
efficiency and effectiveness of our software development process. We can also ask
for feedback from team members and clients to measure the satisfaction level with
the software and its functionality.
This approach helps teams achieve consistency, reduce errors, and increase speed
and efficiency. IaC also enables teams to version control their infrastructure code,
making it easier to track changes and collaborate.
15. Is DevOps a tool?
No. DevOps is not a tool; it is a culture and a set of practices that bring development and operations teams together. Tools such as Git, Jenkins, Docker, and Ansible simply support those practices.
There are several actions you can take to ensure that a DevOps pipeline is scalable and can cope with rising demand.
The core operations of DevOps are application development, version control, unit
testing, deployment with infrastructure, monitoring, configuration, and
orchestration.
It also helps to ensure that the software meets the needs of its users, resulting in
better customer satisfaction and higher business value. Continuous feedback is a key
element of the DevOps culture and promotes a mindset of continuous learning and
improvement.
There are several steps you can take to ensure that infrastructure is secure in a DevOps environment.
Security is a key element integrated into the entire software development lifecycle in
a DevOps context. Security ensures the software is created to comply with rules and
defend against any security risks.
DevOps teams should include professionals who are knowledgeable about the most
current security standards, can spot security threats, and can put the best security
practices into action. This ensures that the software is secure from its creation and is continuously monitored throughout the deployment and maintenance phases.
Once the container images are deployed, access control and network
segmentation are used to limit their exposure to potential threats. Regular security
scanning, patching, monitoring, and logging are essential to maintaining container
security.
In a DevOps environment, monitoring and logging are vital because they give insight
into the system’s functionality, effectiveness, and security. While logging enables
analysis of system events and actions, monitoring assists in identifying any issues
before they become critical.
Identifying the root cause of problems simplifies the process of resolution and helps
prevent their recurrence. Additionally, monitoring and logging offer insights into user
behavior and usage patterns, facilitating better optimization and decision-making.
These tools can be integrated into the build process to test for security issues and
provide feedback to developers automatically. Additionally, it is essential to involve
security experts early in the development process to identify potential security risks
and ensure that security is integrated into every pipeline stage.
Virtual machines (VMs) and containers are two different approaches to running software. Containers are portable, lightweight environments that share the host system's kernel and resources, allowing applications to execute in a consistent environment across several systems. Virtual machines, by contrast, each run a complete guest operating system on virtualized hardware, which makes them heavier but more strongly isolated.
Automation testing has several advantages, including quicker and more effective
testing, expanded coverage, and higher test accuracy. It can save time and money in
the long run because automated testing can be repeated without human
intervention.
Continuous testing is a critical component of DevOps that involves testing early and
often throughout the software development process. It provides continuous
feedback to developers, ensuring that code changes are tested thoroughly and
defects are detected early. Continuous testing helps improve software quality,
reduce costs and time-to-market, and increase customer satisfaction.
Cloud computing plays a vital role within the realm of DevOps, as it offers a versatile
and scalable infrastructure for software development and deployment. Its provision
of computing resources on demand, which are easily provisioned and managed, is
instrumental in empowering DevOps teams. By leveraging cloud services, these
teams are able to automate the deployment process, collaborate effectively, and
seamlessly integrate various DevOps practices.
DevOps can increase customer satisfaction and drive business growth by providing
better software faster. DevOps teams can offer features that satisfy customers’
expectations quickly by concentrating on collaboration, continuous improvement,
and customer feedback. It can result in more loyal consumers and, ultimately, the
company’s growth.
One common misconception about DevOps is that it is solely focused on tools and
automation. In reality, it is a cultural and organizational shift that involves
collaboration between teams and breaking down barriers.
Another misconception is that DevOps is only for startups or tech companies, when
it can be beneficial for any organization looking to improve its software development
and delivery processes.
A third misconception is that DevOps is solely the responsibility of the IT department; in reality, it requires buy-in and involvement from all levels of the organization.
42. How can DevOps affect culture change to improve
the business?
43. Our team has some ideas and wants to turn those
ideas into a software application. Now, as a manager, I
am confused about whether I should follow the Agile
work culture or DevOps. Can you tell me why I should
follow DevOps over Agile?
According to the current market trend, instead of releasing big sets of features in an application, companies are launching small sets of features with better product quality and quick customer feedback, for high customer satisfaction. To keep up with this, we have to:
• Increase the deployment frequency
• Lower the failure rate of new releases
• Shorten the lead time between fixes
• Improve the mean time to recovery
DevOps fulfills all these requirements for fast and reliable development and
deployment of software. Companies like Amazon and Google have adopted DevOps
and are launching thousands of code deployments per day. But Agile, on the other
hand, only focuses on software development.
44. Can one consider DevOps as an Agile methodology?
DevOps can be considered complementary to the Agile methodology but not entirely
similar.
DevOps and Agile are two methodologies that aim to improve software development
processes. Agile focuses on delivering features iteratively and incrementally, while
DevOps focuses on creating a collaborative culture between development and
operations teams to achieve faster delivery and more reliable software.
DevOps also emphasizes the use of automation, continuous integration and delivery,
and continuous feedback to improve the software development process. While Agile
is primarily focused on the development phase, DevOps is focused on the entire
software development lifecycle, from planning and coding to testing and
deployment.
AWS in DevOps works as a cloud provider, and it has the following roles:
• Flexible services: AWS provides us with ready-to-use resources for
implementation.
• Scaling purpose: We can deploy thousands of machines on AWS, depending on the requirement (see the sketch after this list).
• Automation: AWS helps us automate tasks using various services.
• Security: Using its security options (IAM), we can secure our deployments
and builds.
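As a minimal sketch of the scaling point, the AWS CLI can launch many identical instances in one call (the AMI ID is a hypothetical placeholder):

aws ec2 run-instances --image-id ami-0abcdef1234567890 --count 10 --instance-type t3.micro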
Teams can also create a culture of collaboration, automate crucial procedures, and use a continuous improvement strategy. It is also crucial to choose the right tools and technologies based on the particular requirements and objectives of the project.
Several branching strategies are used in version control systems, including trunk-
based development, feature branching, release branching, and git-flow. Trunk-based
development involves committing changes directly to the main branch, while feature
branching involves creating a new branch for each new feature.
Release branching involves creating a separate branch for each release, and git-flow
combines feature and release branches to create a more structured branching
strategy. Each strategy has its advantages and disadvantages, and the choice of
strategy depends on the specific needs of the project and the team.
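As a brief illustration, the first two strategies map directly onto everyday Git commands (branch names are examples):

git checkout -b feature/login        # feature branching: isolate one feature
git checkout -b release/1.2 main     # release branching: stabilize a release
git checkout main && git merge feature/login   # trunk-based teams merge back quickly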
• It helps improve the collaborative work culture: Here, team members are
allowed to work freely on any file at any time. The version control system
allows us to merge all changes into a common version.
• It keeps different versions of code files securely: All the previous versions
and variants of code files are neatly packed up inside the version control
system.
• It helps us understand what happened: Every time we save a new version of our project, the version control system asks us to provide a short description of what was changed. More than that, it allows us to see what changes were made in the file’s content, as well as who made those changes.
• It keeps a backup: A distributed version control system like Git allows all
team members to access the complete history of the project file so that
in case there is a breakdown in the central server, they can use any of
their teammate’s local Git repository.
The command ‘git pull’ pulls any new commits from a branch from the central
repository and then updates the target branch in the local repository.
But, ‘git fetch’ is a slightly different form of ‘git pull’. Unlike ‘git pull’, it pulls all new commits from the desired branch and stores them in a remote-tracking branch in the local repository, without changing your working files.
In order to reflect these changes in your target branch, ‘git fetch’ must be followed
with a ‘git merge’. The target branch will only be updated after merging with the
fetched branch (where we performed ‘git fetch’). We can also interpret the whole
thing with an equation like this:
git pull = git fetch + git merge
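For example, assuming a remote named origin and a local branch main, the two-step form looks like this:

git fetch origin
git merge origin/main

Run from the main branch, this pair is equivalent to ‘git pull origin main’.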
54. How do you handle merge conflicts in Git?
When Git cannot automatically reconcile the changes from two branches, it pauses the merge and marks the conflicting sections in the affected files. Run ‘git status’ to list the conflicted files, edit each region between the ‘<<<<<<<’, ‘=======’, and ‘>>>>>>>’ markers to the desired content, stage the resolved files with ‘git add’, and complete the merge with ‘git commit’. If you want to start over, ‘git merge --abort’ restores the pre-merge state.
Both ‘git rebase’ and ‘git merge’ commands are designed to integrate changes from one branch into another branch; they just do it in different ways.
When we perform rebase of a feature branch onto the master branch, we move the
base of the feature branch to the master branch’s ending point.
By performing a merge, we take the contents of the feature branch and integrate
them with the master branch. As a result, only the master branch is changed, but the
feature branch history remains the same. Merging adds a new commit to your
history.
Rebasing a branch that others are also working on can leave repositories in an inconsistent state, so rebasing makes the most sense for an individual’s local work. To see the history exactly the way it happened, we should use merge: merge preserves history, whereas rebase rewrites it.
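As a brief sketch (branch names are illustrative):

git checkout feature
git rebase master        # replay feature commits on top of master, rewriting history
git checkout master
git merge feature        # create a merge commit, preserving both histories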
57. I just made a bad git commit and made it public, and
I need to revert the commit. Can you suggest how to do
that?
You can use ‘git revert’, which creates a new commit that undoes the changes introduced by the bad commit; you only need to supply the commit ID:
git revert <commit ID>
In Selenium WebDriver, a browser is launched by instantiating the corresponding driver class.
For Firefox:
WebDriver driver = new FirefoxDriver();
For Chrome:
WebDriver driver = new ChromeDriver();
It can be used to execute the same or different test scripts on multiple platforms and
browsers, concurrently, in order to achieve distributed test execution. It allows
testing under different environments, remarkably saving execution time.
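For instance, with the Selenium 3 standalone server JAR (the file name varies by version), a hub and a node can be started like this:

java -jar selenium-server-standalone.jar -role hub
java -jar selenium-server-standalone.jar -role node -hub http://localhost:4444/grid/register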
The driver.close command closes the focused browser window. But, the driver.quit
command calls the driver.dispose method which closes all browser windows and also
ends the WebDriver session.
65. I want to move or copy Jenkins from one server to
another. Is it possible? If yes, how?
I would suggest copying the Jenkins jobs directory from the old server to the new
one. We can just move a job from one installation of Jenkins to another by copying
the corresponding job directory.
Or, we can also make a copy of an existing Jenkins job by making a clone of that job
directory in a different name.
Another way is that we can rename an existing job by renaming the directory. But, if
you change the name of a job, you will need to change any other job that tries to call
the renamed job.
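As a minimal sketch, assuming the default Jenkins home of /var/lib/jenkins and a job named my-job (both are illustrative):

rsync -a /var/lib/jenkins/jobs/my-job/ newserver:/var/lib/jenkins/jobs/my-job/

After copying, reload the configuration from Manage Jenkins so the new installation picks the job up.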
Yes, it is. With the help of a Jenkins plugin, we can build DevOps projects one after
the other. If one parent job is carried out, then automatically other jobs are also run.
We also have the option to use Jenkins Pipeline jobs for the same.
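For reference, a minimal declarative Pipeline that chains stages might look like this (stage names and shell steps are illustrative):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn package' }
        }
        stage('Test') {
            steps { sh 'mvn test' }
        }
    }
}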
Every Puppet Node or Puppet Agent has got its configuration details in Puppet
Master, written in the native Puppet language. These details are written in a language
that Puppet can understand and are termed Puppet Manifests. These manifests
are composed of Puppet codes, and their filenames use the .pp extension.
For instance, we can write a manifest in Puppet Master that creates a file and installs
Apache on all Puppet Agents or slaves that are connected to the Puppet Master.
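A minimal sketch of such a manifest in the native Puppet language (assuming a node where the Apache package is named httpd; the file path is illustrative):

package { 'httpd':
  ensure => installed,
}

file { '/tmp/status.txt':
  ensure  => file,
  content => "Managed by Puppet\n",
}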
A Puppet Module is nothing but a collection of manifests and data (e.g., facts, files,
and templates). Puppet Modules have a specific directory structure. They are useful
for organizing the Puppet code because, with Puppet Modules, we can split the
Puppet code into multiple manifests. It is considered best practice to use Puppet
Modules to organize almost all of your Puppet Manifests.
Puppet Modules are different from Puppet Manifests. Manifests are nothing but
Puppet programs, composed of the Puppet code. File names of Puppet Manifests
use the .pp extension.
It is the main directory for code and data in Puppet. It consists of environments
(containing manifests and modules), a global modules directory for all the
environments, and your Hiera data.
Unix/Linux Systems:
/etc/puppetlabs/code
Windows:
%PROGRAMDATA%\PuppetLabs\code (usually,
C:\ProgramData\PuppetLabs\code)
Non-root users:
~/.puppetlabs/etc/code
Ansible has two types of components:
• Controlling machines
• Nodes
Ansible is installed on the controlling machine, and the nodes are managed from it over SSH. The locations of the nodes are specified in the controlling machine’s inventory.
Ansible can handle a lot of nodes from a single system over an SSH connection with
the help of Ansible Playbooks. Playbooks are capable of performing multiple tasks,
and they are in the YAML file format.
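A minimal playbook sketch (the host group, file name, and package are illustrative):

---
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      package:
        name: nginx
        state: present

It would be run with ‘ansible-playbook -i inventory playbook.yml’.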
Ad-hoc commands are used to do something quickly, and they are mostly for one-time use, whereas an Ansible Playbook is used to perform repeated actions. There are scenarios where we want to use ad-hoc commands to perform a non-repetitive activity.
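For example (the host group is illustrative):

ansible webservers -m ping                 # ad-hoc: quick, one-time check
ansible-playbook -i inventory site.yml     # playbook: repeatable, versioned automation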
• Configuration Management
• Application Deployment
• Task Automation
Handlers in Ansible are just like regular tasks inside an Ansible Playbook, but they run only when notified: a handler is triggered when another task that contains a ‘notify’ directive reports a change.
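As a sketch of the mechanism (file and service names are illustrative), the handler below runs only when the copy task actually changes the file:

tasks:
  - name: Deploy nginx configuration
    copy:
      src: nginx.conf
      dest: /etc/nginx/nginx.conf
    notify: Restart nginx

handlers:
  - name: Restart nginx
    service:
      name: nginx
      state: restarted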
Yes, I have. Ansible Galaxy refers to the ‘Galaxy website’ by Ansible, where users
share Ansible roles. It is used to install, create, and manage Ansible roles.
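Typical commands look like this (the role names are placeholders):

ansible-galaxy init my_role                # scaffold a new role skeleton locally
ansible-galaxy install author.role_name    # install a role shared on Galaxy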
• Automating tasks
• Managing configurations
• Deploying applications
• Efficiency
81. What are the prerequisites to install Ansible 2.8 on Linux?
Ansible 2.8 requires Python (version 2.7, or 3.5 and higher) on the control machine, along with SSH access to the nodes it will manage.
Docker Swarm is Docker’s native clustering tool: IT developers and administrators use it to create and manage a cluster of swarm nodes within the Docker platform and to schedule containers across those nodes. A swarm consists of manager nodes and worker nodes.
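A swarm is typically created like this (the join token and manager IP are placeholders that the first command prints):

docker swarm init                                      # run on the manager node
docker swarm join --token <token> <manager-ip>:2377    # run on each worker node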
Below are the differences in multiple criteria that show why Docker has advantages
over virtual machines.
Memory Space – In terms of memory, Docker occupies less space than a virtual
machine.
Boot-up Time – Docker has a shorter boot-up time than a virtual machine.
Performance – Docker containers show better performance as they are hosted in a
single Docker engine, whereas performance is unstable if multiple virtual machines
are run.
Scaling – Docker is easy to scale up compared to virtual machines.
Efficiency – The efficiency of Docker is higher, which is an advantage over virtual machines.
Portability – Docker doesn’t have the same cross-platform compatibility issues with
porting as virtual machines do.
Space Allocation – Data volumes can be shared and used repeatedly across multiple
containers in Docker, unlike virtual machines that cannot share data volumes.
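As a short sketch of the space-allocation point (container and volume names are illustrative):

docker volume create shared-data
docker run -d --name app1 -v shared-data:/data alpine sleep 86400
docker run -d --name app2 -v shared-data:/data alpine sleep 86400

Both containers now read and write the same /data volume.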
Sudo is a program for Unix/Linux-based systems that provides the ability to allow
specific users to use specific system commands at the system’s root level. It is an
abbreviation of ‘superuser do’, where ‘super user’ means the ‘root user’.
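For example, a single entry in /etc/sudoers (edited with visudo; the user name and command path are illustrative) can grant one user one privileged command:

deploy ALL=(root) NOPASSWD: /bin/systemctl restart nginx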
SSH is nothing but a secure shell that allows users to log in with a secure and
encrypted mechanism into remote computers. It is used for encrypted
communications between two hosts on an unsafe network. It supports tunneling, TCP port forwarding, and file transfer.
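A few representative commands (host names and ports are illustrative):

ssh user@remote-host                        # encrypted interactive login
ssh -L 8080:localhost:80 user@remote-host   # forward local port 8080 to port 80 on the remote host
scp report.txt user@remote-host:/tmp/       # transfer a file over the encrypted channel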
NRPE stands for ‘Nagios Remote Plugin Executor’. As the name suggests, it allows you
to execute Nagios plugins remotely on other Linux or Unix machines.
It can help monitor remote machine performance metrics such as disk usage, CPU
load, etc. It can communicate with some of the Windows agent add-ons. We can
execute scripts and check metrics on remote Windows machines as well.
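For instance, the Nagios server can query a remote agent like this (the address is illustrative, and check_disk must be defined in the agent’s nrpe.cfg):

/usr/local/nagios/libexec/check_nrpe -H 192.168.1.10 -c check_disk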
88. Can you tell me why I should use Nagios?
Nagios Log Server simplifies the process of searching the log data. Nagios Log Server
is the best choice to perform tasks such as setting up alerts, notifying when potential
threats arise, simply querying the log data, and quickly auditing any system. With
Nagios Log Server, we can get all of our log data in one location.
90. Can you tell me why I should use Nagios for HTTP
monitoring?
Nagios can provide us with a complete monitoring service for our HTTP servers and protocols, checking, for example, that a server is up, that it responds within an acceptable time, and that specific pages return the expected content.
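For example, the standard check_http plugin can verify that a server answers and that a given page is reachable (host and URI are illustrative):

/usr/local/nagios/libexec/check_http -H www.example.com -u /index.html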
In Maven, we define all dependencies inside pom.xml so that all the dependencies
will be downloaded and can be used within the project.
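For instance, a dependency is declared in pom.xml like this (JUnit is just one common example):

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
</dependency>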
Maven also supports modular project structures, allowing teams to develop and test
individual components separately, and provides consistent and reproducible builds
across development, testing, and production environments. These benefits enable
faster and more efficient delivery of software applications in a DevOps environment.
97. Why are SSL certificates used in Chef?
SSL certificates are used in Chef to establish secure and encrypted communication
channels between Chef components and nodes. These certificates verify the
authenticity of Chef servers and nodes, ensuring secure data transmission. By
encrypting communication, SSL certificates protect sensitive information, such as
authentication credentials and configuration data, from unauthorized access or
tampering. This enhances the overall security of the Chef infrastructure and helps
maintain the integrity and confidentiality of the data being exchanged.
Chef differs from other configuration management tools like Puppet and Ansible in its approach to infrastructure automation. While Puppet relies on a declarative, model-driven language, Chef uses a procedural Ruby DSL: users write recipes that describe, step by step, how to bring nodes to the desired state, and the Chef Client then converges each node to that state on every run. Additionally, Chef has a strong focus on testing and compliance, making it a popular choice in enterprise environments with strict security and compliance requirements.
The key components of a Chef deployment include the Chef Server, which acts as the
central hub for storing configuration data and Chef code; Chef Client, which runs on
each node and applies the configurations defined by the Chef code; and the Chef
Workstation, which is used by developers and administrators to write and test the
Chef code before pushing it to the Chef Server for deployment. Other important
components include cookbooks, recipes, and resources, which define the desired
state of the infrastructure and the actions needed to achieve it.
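As a minimal sketch (the package and service names are illustrative), a recipe declares resources and their desired state:

package 'nginx'

service 'nginx' do
  action [:enable, :start]
end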
100. What are some common integration challenges that
can arise when using multiple DevOps tools?
When using multiple DevOps tools, integration challenges can arise, such as data
incompatibility, conflicting configurations, and a lack of communication between
tools. For example, a deployment tool may not be compatible with a monitoring tool,
or a configuration management tool may have different settings from the testing
tool.