DevOps UNIT 5
Software testing is the process of analyzing an application's functionality against the customer's requirements.
If we want to ensure that our software is stable and as bug-free as possible, we must perform the various types of software testing,
because testing is the primary way to find and remove defects from an application.
Manual Testing
Testing any software or application according to the client's needs without using any automation tool is known
as manual testing.
In other words, it is a procedure of verification and validation. Manual testing is used to verify the behavior of
an application or software against the requirement specification.
Classification of Manual Testing
In software testing, manual testing can be further classified into three different types of testing, which are as
follows:
• White Box Testing
• Grey Box Testing
• Black Box Testing
White box testing is also known as open box testing, glass box testing, structural testing, clear box testing, and
transparent box testing.
In other words, we can say that if a single person or team performs both white-box and black-box testing, it is
considered grey-box testing.
Functional Testing
In functional testing, the test engineer systematically checks all the components against the requirement
specification. Functional testing is also known as component testing.
In functional testing, all the components are tested by giving input values, defining the expected output, and
validating the actual output against the expected value.
Functional testing is a part of black-box testing, as its emphasis is on the application requirements rather than the
actual code. The test engineer tests only the program's behavior, not the internal structure of the system.
Types of Functional Testing
Just like the other types of testing, functional testing is also classified into various categories. The different types
of functional testing include the following:
• Unit Testing
• Integration Testing
• System Testing
1. Unit Testing
Unit testing is the first level of functional testing performed on any software. In it, the test engineer tests each
module of an application independently, covering all of the module's functionality. The primary objective of
unit testing is to confirm that each unit component behaves as expected. Here, a unit is defined as a single
testable function of the software or application, and it is verified throughout the specified application
development phase.
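As an illustration of a unit test, the following minimal sketch uses the Jest test framework with TypeScript; the add function and the file names are hypothetical and only stand in for a real module under test.

// math.ts - a hypothetical unit under test
export function add(a: number, b: number): number {
  return a + b;
}

// math.test.ts - a unit test written with Jest (assumes Jest and ts-jest are installed)
import { add } from './math';

test('add returns the sum of two numbers', () => {
  expect(add(2, 3)).toBe(5); // actual output validated against the expected value
});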
2. Integration Testing
Once unit testing has been completed successfully, we move on to integration testing. It is the second level of
functional testing, where we test the data flow between dependent modules or the interface between two features.
The purpose of integration testing is to test the accuracy of the communication between the modules.
Types of Integration Testing
Integration testing is also further divided into the following parts:
• Incremental Testing
• Non-Incremental Testing
Incremental Integration Testing
Whenever there is a clear relationship between modules, we go for incremental integration testing. Suppose we
take two modules and analyze the data flow between them to check whether they are working correctly.
If these modules are working fine, then we can add one more module and test again. We can continue with
the same process to get better results.
In other words, incrementally adding modules and testing the data flow between them is known as incremental
integration testing.
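As a small, hypothetical sketch of this idea (Jest with TypeScript, and made-up userRepository and userService modules), the test below checks the data flow between two modules that have already passed unit testing; a third module could then be added and the combined flow tested again.

// userRepository.ts - hypothetical module that owns the data
const users: Record<string, string> = { alice: 'alice@example.com' };
export function findEmail(name: string): string | undefined {
  return users[name];
}

// userService.ts - hypothetical module that depends on userRepository
import { findEmail } from './userRepository';
export function greet(name: string): string {
  const email = findEmail(name);
  return email ? `Hello ${name} <${email}>` : `Unknown user ${name}`;
}

// integration.test.ts - verifies the data flow between the two modules
import { greet } from './userService';

test('userService pulls data from userRepository correctly', () => {
  expect(greet('alice')).toBe('Hello alice <alice@example.com>');
  expect(greet('bob')).toBe('Unknown user bob');
});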
Types of Incremental Integration Testing
Incremental integration testing can be further classified into two parts, which are as follows:
a) Top-down Incremental Integration Testing
b) Bottom-up Incremental Integration Testing
Let's see a brief introduction of these types of integration testing:
a) Top-down Incremental Integration Testing
In this approach, we add the modules step by step, or incrementally, and test the data flow between them. We
have to ensure that each module we add is a child of the previously added ones.
b) Bottom-up Incremental Integration Testing
In the bottom-up approach, we add the modules incrementally and check the data flow between them, ensuring
that each module we add is a parent of the previously added ones.
Non-Incremental Integration Testing/ Big Bang Method
Whenever the data flow is complex and it is very difficult to identify parent and child modules, we go for the non-
incremental integration approach. The non-incremental method is also known as the Big Bang method.
3. System Testing
Whenever we are done with unit and integration testing, we can proceed to system testing. In system
testing, the test environment is parallel to the production environment. It is also known as end-to-end testing.
In this type of testing, we go through every attribute of the software and test whether the end features work
according to the business requirements, analyzing the software product as a complete system.
Non-Functional Testing
The next part of black-box testing is non-functional testing. It provides detailed information on the performance
of the software product and the technologies it uses.
Non-functional testing helps us minimize production risk and the related costs of the software. It is a
combination of performance, load, stress, usability, and compatibility testing.
Types of Non-functional Testing
Non-functional testing is categorized into different types of testing, which we discuss below:
• Performance Testing
• Usability Testing
• Compatibility Testing
1. Performance Testing
In performance testing, the test engineer will test the working of an application by applying some load.
In this type of non-functional testing, the test engineer focuses on aspects such as the response time, load,
scalability, and stability of the software or application.
Classification of Performance Testing
Performance testing includes the various types of testing, which are as follows:
• Load Testing
• Stress Testing
• Scalability Testing
• Stability Testing
Load Testing: Applying some load on a particular application while executing performance testing, in order to
check the application's performance, is known as load testing. Here, the load can be less than or equal to the
desired load.
It helps us detect the highest operating volume of the software and any bottlenecks (a short code sketch after this
classification illustrates the idea of applying load).
Stress Testing: It is used to analyze the robustness and error handling of the software beyond its normal
functional limits.
Primarily, stress testing is used for critical software, but it can also be used for all types of software applications.
Scalability Testing: Analyzing the application's performance by increasing or reducing the load in particular
increments is known as scalability testing.
In scalability testing, we can also check the ability of the system, processes, or database to meet growing demand.
In this testing, the test cases are designed and implemented efficiently.
Stability Testing: Stability testing is a procedure in which we evaluate the application's performance by applying a
load for a precise period of time.
It mainly checks the stability problems of the application and the efficiency of the developed product. In this type
of testing, we can quickly find defects in the system even under stressful conditions.
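As a rough, hypothetical illustration of applying load (not a substitute for a dedicated load-testing tool), the following TypeScript sketch assumes Node.js 18+ with its built-in fetch and a made-up health endpoint; it fires a batch of concurrent requests and reports response times, which is the basic idea behind load testing.

// loadTest.ts - a minimal sketch of applying load, assuming Node.js 18+ (global fetch)
const TARGET_URL = 'http://localhost:8080/health'; // hypothetical endpoint under test
const CONCURRENT_USERS = 50;                        // simulated simultaneous users

async function timedRequest(): Promise<number> {
  const start = Date.now();
  await fetch(TARGET_URL);    // one simulated user request
  return Date.now() - start;  // response time in milliseconds
}

async function main() {
  // Fire all requests at once to simulate concurrent load.
  const times = await Promise.all(
    Array.from({ length: CONCURRENT_USERS }, () => timedRequest())
  );
  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  console.log(`average response time: ${avg.toFixed(1)} ms, slowest: ${Math.max(...times)} ms`);
}

main().catch(console.error);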
2. Usability Testing
Another type of non-functional testing is usability testing. In usability testing, we analyze the user-friendliness of
an application and detect bugs in the software's end-user interface.
Here, the term user-friendliness covers the following aspects of an application:
• The application should be easy to understand, which means that all the features must be visible to end
users.
• The application's look and feel should be good, which means the application should be pleasant looking and
should make end users want to use it.
3. Compatibility Testing
In compatibility testing, we check the functionality of an application in specific hardware and software
environments. Only once the application is functionally stable do we go for compatibility testing.
Here, software means that we test the application on different operating systems and browsers,
and hardware means that we test the application on hardware of different sizes.
Automation Testing
The most significant part of software testing is automation testing. It uses specific tools to execute manually
designed test cases automatically, without any human interference.
Automation testing is the best way to enhance the efficiency, productivity, and coverage of software testing.
It is used to re-run, quickly and repeatedly, test scenarios that were previously executed manually.
In other words, whenever we test an application using tools, it is known as automation testing.
We go for automation testing when the application goes through many releases or several regression cycles.
We cannot write test scripts or perform automation testing without understanding a programming language.
Selenium
Selenium is one of the most widely used open-source Web UI (User Interface) automation testing suites. It was
originally developed by Jason Huggins in 2004 as an internal tool at ThoughtWorks. Selenium supports
automation across different browsers, platforms, and programming languages.
Selenium can be easily deployed on platforms such as Windows, Linux, Solaris, and Macintosh. Moreover, it
supports mobile operating systems such as iOS, Windows Mobile, and Android.
Selenium supports a variety of programming languages through the use of drivers specific to each Language.
Languages supported by Selenium include C#, Java, Perl, PHP, Python and Ruby.
Currently, Selenium WebDriver is most popular with Java and C#. Selenium test scripts can be coded in any of
the supported programming languages and can be run directly in most modern web browsers. Browsers supported
by Selenium include Internet Explorer, Mozilla Firefox, Google Chrome, and Safari.
Selenium can be used to automate functional tests and can be integrated with automation test tools such
as Maven, Jenkins, & Docker to achieve continuous testing. It can also be integrated with tools such as TestNG,
& JUnit for managing test cases and generating reports.
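As a quick illustration, the sketch below uses Selenium's JavaScript/TypeScript bindings (the selenium-webdriver package, installed separately); the target URL and the page-title expectation are only assumptions made for the example.

// example.ts - a minimal Selenium WebDriver sketch (npm install selenium-webdriver)
import { Builder, By, until } from 'selenium-webdriver';

async function run() {
  // Launch a Chrome browser session (a matching driver must be available on the machine).
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://www.example.com');             // open the page under test (assumed URL)
    const heading = await driver.findElement(By.css('h1'));  // locate an element on the page
    console.log('Heading text:', await heading.getText());
    await driver.wait(until.titleContains('Example'), 5000); // simple functional check (assumed title)
  } finally {
    await driver.quit(); // always close the browser session
  }
}

run().catch(console.error);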
Selenium Features:
• Selenium is an open source and portable Web testing Framework.
• Selenium IDE provides a record-and-playback feature for authoring tests without the need to learn a test
scripting language.
• Selenium IDE helps testers to record their actions and export them as a reusable script, with a simple-to-
understand and easy-to-use interface.
• Selenium supports various operating systems, browsers and programming languages. Following is the list:
• Programming Languages: C#, Java, Python, PHP, Ruby, Perl, and JavaScript
• Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris.
• Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Opera, Safari, etc.
• It also supports parallel test execution which reduces time and increases the efficiency of tests.
• Selenium can be integrated with frameworks like Ant and Maven for source code compilation.
• Selenium can also be integrated with testing frameworks like TestNG for application testing and
generating reports.
• Selenium requires fewer resources as compared to other automation test tools.
• The WebDriver API has been incorporated into Selenium, which is one of the most important changes made to
Selenium.
• Selenium WebDriver does not require a server installation; test scripts interact directly with the browser.
• Selenium commands are categorized into different classes, which makes them easier to understand and
implement.
JavaScript testing
JavaScript testing is a crucial part of the software development process that helps ensure the quality and reliability
of code. The following are the key components of JavaScript testing:
Test frameworks: A test framework provides a structure for writing and organizing tests. Some popular JavaScript
test frameworks include Jest, Mocha, and Jasmine.
Assertion libraries: An assertion library provides a set of functions that allow developers to write assertions about
the expected behavior of the code. For example, an assertion might check that a certain function returns the
expected result.
Test suites: A test suite is a collection of related tests that are grouped together. The purpose of a test suite is to
test a specific aspect of the code in isolation.
Test cases: A test case is a single test that verifies a specific aspect of the code. For example, a test case might
check that a function behaves correctly when given a certain input.
Test runners: A test runner is a tool that runs the tests and provides feedback on the results. Test runners typically
provide a report on which tests passed and which tests failed.
Continuous Integration (CI): CI is a software development practice where developers integrate code into a shared
repository frequently. By using CI, developers can catch issues early and avoid integration problems.
The goal of JavaScript testing is to catch bugs and defects early in the development cycle, before they become
bigger problems and impact the quality of the software. Testing also helps to ensure that the code behaves as
expected, even when changes are made in the future.
There are different types of tests that can be performed in JavaScript, including unit tests, integration tests, and
end-to-end tests. The choice of which tests to write depends on the specific requirements and goals of the project.
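Putting these components together, here is a minimal hypothetical sketch using Jest: the describe block is the test suite, each test call is a test case, expect supplies the assertions, and the jest command acts as the test runner. The multiply function is a made-up unit under test.

// multiply.test.js - a minimal Jest sketch (npm install --save-dev jest, then run: npx jest)
function multiply(a, b) {
  return a * b;                       // hypothetical code under test
}

describe('multiply', () => {          // test suite: groups related test cases
  test('multiplies two positive numbers', () => {
    expect(multiply(2, 4)).toBe(8);   // assertion about the expected behavior
  });

  test('handles zero', () => {
    expect(multiply(7, 0)).toBe(0);
  });
});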
2. Functional Testing
Functional Testing is the process of validating that the transactions and operations made by the end-users meet
the requirements.
Types of Functional Testing: The following are the different types of functional testing:
a) Black Box Testing:
• Black Box Testing is the process of checking the functionality of the database integration.
• This testing is carried out at an early stage of development, and hence it is very helpful in reducing errors.
• It consists of various techniques such as boundary value analysis, equivalence partitioning, and cause-effect
graphing (see the sketch below).
• These techniques are helpful in checking the functionality of the database.
• The best example is a user login page: if the entered username and password are correct, it will let the
user in and redirect to the next page.
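For instance, boundary value analysis picks inputs at the edges of the allowed range. The following hypothetical sketch (Jest with TypeScript, and a made-up rule that passwords must be 8 to 20 characters) shows boundary-value test cases for such a login validation.

// passwordPolicy.test.ts - boundary value analysis sketch for a hypothetical 8-20 character rule
function isValidPassword(pw: string): boolean {
  return pw.length >= 8 && pw.length <= 20; // hypothetical validation under test
}

describe('password length boundaries', () => {
  test('rejects just below the lower boundary (7 chars)', () => {
    expect(isValidPassword('a'.repeat(7))).toBe(false);
  });
  test('accepts the lower boundary (8 chars)', () => {
    expect(isValidPassword('a'.repeat(8))).toBe(true);
  });
  test('accepts the upper boundary (20 chars)', () => {
    expect(isValidPassword('a'.repeat(20))).toBe(true);
  });
  test('rejects just above the upper boundary (21 chars)', () => {
    expect(isValidPassword('a'.repeat(21))).toBe(false);
  });
});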
b) White Box Testing:
• White Box Testing is the process of validating the internal structure of the database.
• Here, the internal details are hidden from the end user but known to the tester.
• The database triggers, functions, views, queries, and cursors are checked in this testing.
• It validates the database schema, database tables, etc.
• Here, coding errors in the triggers can be easily found.
• Errors in the queries can also be handled in white box testing, and hence internal errors are easily
eliminated.
3. Non-Functional Testing
Non-functional testing is the process of performing load testing and stress testing and checking that the minimum
system requirements needed to meet the business requirements are satisfied. It also detects risks and errors and
optimizes the performance of the database.
a) Load Testing:
• Load testing involves testing the performance and scalability of the database.
• It determines how the software behaves when it is being used by many users simultaneously.
• It focuses on good load management.
For example, if the web application is accessed by multiple users at the same time and this does not create any
traffic problems, then the load testing has completed successfully.
b) Stress Testing:
• Stress Testing is also known as endurance testing. Stress testing is performed to identify the breakpoint of
the system.
• In this testing, the application is loaded until the system fails.
• This point is known as the breakpoint of the database system.
• It evaluates and analyzes the behavior of the software after system failure. If errors are detected, it will
display the error messages.
For example, if a user enters wrong login information, it will throw an error message.
Backend Testing Process
1. Set up the Test Environment: When the coding process is done for the application, set up the test environment
by choosing a proper testing tool for back-end testing. It includes choosing the right team to test the entire back-
end environment with a proper schedule. Record all the testing processes in the documents or update them in
software to keep track of all the processes.
2. Generate the Test Cases: Once the tool and the team are ready for the testing process, generate the test cases as
per the business requirements. The automation tool itself will analyze the code and generate all possible test cases
for developed code. If the process is manual then the tester will have to write the possible test cases in the testing
tool to ensure the correctness of the code.
3. Execution of Test Cases: Once the test cases are generated, the tester or Quality Analyst needs to execute those
test cases in the developed code. If the tool is automated, it will generate and execute the test cases by itself.
Otherwise, the tester needs to write and execute those test cases. The tool will then highlight whether the test
cases executed successfully or not.
4. Analyzing the Test Cases: After the execution of test cases, it highlights the result of all the test cases whether
it has been executed successfully or not. If an error occurs in the test cases, it will highlight where the particular
error is formed or raised, and in some cases, the automation tool will give hints regarding the issues to solve the
error. The tester or Quality Analyst should analyze the code again and fix the issues if an error occurred.
5. Submission of Test Reports: This is the last stage in the testing process. Here, all the details are recorded, such
as who is responsible for testing, the tool used in the testing process, the number of test cases generated, the
number of test cases executed successfully or unsuccessfully, the time taken to execute each test case, the number
of times test cases failed, and the number of times errors occurred. These details are either documented or updated
in the software, and the report is submitted to the respective team.
Advantages of Backend Testing
The following are some of the benefits of backend testing:
• Errors are easily detectable at an early stage.
• It avoids deadlock creation on the server-side.
• Web load management is easily achieved.
• The functionality of the database is maintained properly.
• It reduces data loss.
• Enhances the functioning of the system.
• It ensures the security and protection of the system.
• While doing backend testing, errors in the UI parts can also be detected and corrected.
• Coverage of all possible test cases.
Disadvantages of Backend Testing
The following are some of the disadvantages of backend testing:
• Good domain knowledge is required.
• Providing test cases for testing requires special attention.
• Investment in Organizational costs is higher.
• It takes more time to test.
• If many tests fail, it can in some cases lead to a crash on the server side.
• Errors or unexpected results from one test case scenario can also affect the results of other system tests.
Test-driven development
Test Driven Development (TDD) is a software development approach in which test cases are developed to specify
and validate what the code will do. In simple terms, test cases for each piece of functionality are created and run
first, and if a test fails, new code is written to make the test pass, keeping the code simple and bug-free.
Test-Driven Development starts with designing and developing tests for every small piece of functionality of an
application. The TDD approach instructs developers to write new code only if an automated test has failed. This
avoids duplication of code.
The simple concept of TDD is to write and correct the failing tests before writing new code (before development).
This helps to avoid duplication of code, as we write a small amount of code at a time in order to pass tests. (Tests
are nothing but requirement conditions that we need to fulfill.)
Test-Driven Development is a process of developing and running automated tests before the actual development
of the application. Hence, TDD is sometimes also called Test-First Development.
TDD is usually described as a sequence of events, as follows:
Implement the test: As the name implies, you start out by writing the test and write the code afterwards. One way
to see it is that you implement the interface specifications of the code to be developed and then progress by writing
the code. To be able to write the test, the developer must find all relevant requirement specifications, use cases,
and user stories.
The shift in focus from coding to understanding the requirements can be beneficial for implementing them
correctly.
Verify that the new test fails: The newly added test should fail because there is nothing to implement the behavior
properly yet, only the stubs and interfaces needed to write the test. Run the test and verify that it fails.
Write code that implements the tested feature: The code we write doesn't yet have to be particularly elegant or
efficient. Initially, we just want to make the new test pass.
Verify that the new test passes together with the old tests: When the new test passes, we know that we have
implemented the new feature correctly. Since the old tests also pass, we haven't broken existing functionality.
Refactor the code: The word "refactor" has mathematical roots. In programming, it means cleaning up the code
and, among other things, making it easier to understand and maintain. We need to refactor since we cheated a bit
earlier in the development.
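To make this sequence concrete, here is a small hypothetical sketch (Jest with TypeScript, and a made-up fizzbuzz feature): the test is written first and fails while only a stub exists; a minimal implementation then makes it pass, after which the code can be refactored while the tests keep passing.

// Step 1: write the test first - fizzbuzz is a hypothetical feature, not from any real project
describe('fizzbuzz', () => {
  test('returns Fizz for multiples of 3', () => {
    expect(fizzbuzz(9)).toBe('Fizz');
  });
  test('returns the number itself otherwise', () => {
    expect(fizzbuzz(7)).toBe('7');
  });
});

// Step 2: initially only a stub exists, so the test fails (red):
// function fizzbuzz(n: number): string { throw new Error('not implemented'); }

// Step 3: write just enough code to make the tests pass (green), then refactor as needed
function fizzbuzz(n: number): string {
  return n % 3 === 0 ? 'Fizz' : String(n);
}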
TDD is a style of development that fits well with DevOps, but it's not necessarily the only one. The primary
benefit is that you get good test suites that can be used in Continuous Integration tests.
REPL-driven development
REPL-driven development (Read-Eval-Print Loop) is an interactive programming approach that allows
developers to execute code snippets and see their results immediately. This enables developers to test their code
quickly and iteratively, and helps them to understand the behavior of their code as they work.
In a REPL environment, developers can type in code snippets, and the environment will immediately evaluate the
code and return the results. This allows developers to test small bits of code and quickly see the results, without
having to create a full-fledged application.
REPL-driven development is commonly used in dynamic programming languages such as Python, JavaScript,
and Ruby. Some popular REPL environments include the Python REPL, Node.js REPL, and IRB (Interactive
Ruby).
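For example, a short interactive session in the Node.js REPL (started by typing node with no arguments) might look like this; each expression is evaluated immediately and its result is printed back:

$ node
> const greet = (name) => `Hello, ${name}!`;
undefined
> greet('DevOps');
'Hello, DevOps!'
> [1, 2, 3].map((n) => n * n);
[ 1, 4, 9 ]
> .exit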
Benefits of REPL-driven development include:
Increased efficiency: The immediate feedback provided by a REPL environment allows developers to test and
modify their code quickly, without having to run a full-fledged application.
Improved understanding: By being able to see the results of code snippets immediately, developers can better
understand how the code works and identify any issues early on.
Increased collaboration: REPL-driven development makes it easy for developers to share code snippets and
collaborate on projects, as they can demonstrate the behavior of the code quickly and easily.
Overall, REPL-driven development is a useful tool for developers looking to improve their workflow and increase
their understanding of their code. By providing an interactive environment for testing and exploring code, REPL-
driven development can help developers to be more productive and efficient.
Virtualization stacks
In DevOps, virtualization refers to the creation of virtual machines, containers, or environments that allow
multiple operating systems to run on a single physical machine. The following are some of the commonly used
virtualization stacks in DevOps:
Docker: An open-source platform for automating the deployment, scaling, and management of containerized
applications.
Kubernetes: An open-source platform for automating the deployment, scaling, and management of containerized
applications, commonly used in conjunction with Docker.
VirtualBox: An open-source virtualization software that allows multiple operating systems to run on a single
physical machine.
VMware: A commercial virtualization software that provides a comprehensive suite of tools for virtualization,
cloud computing, and network and security management.
Hyper-V: Microsoft's hypervisor technology that enables virtualization on Windows-based systems.
These virtualization stacks play a crucial role in DevOps by allowing developers to build, test, and deploy
applications in isolated, consistent environments, while reducing the costs and complexities associated with
physical infrastructure.
In Puppet, another widely used configuration management tool, the client is referred to as a Puppet agent/slave/node, and the server is referred to as the Puppet master.
Ansible
Ansible is a simple open source IT automation engine which automates application deployment, intra-service
orchestration, cloud provisioning, and many other IT tasks.
Ansible is easy to deploy because it does not use any agents or custom security infrastructure.
Ansible uses playbooks to describe automation jobs, and playbooks use a very simple language, YAML (a
human-readable data serialization language commonly used for configuration files, but usable in many
applications where data is being stored), which is very easy for humans to understand, read, and write. The
advantage is that even IT infrastructure support staff can read and understand a playbook and debug it if needed.
Ansible is designed for multi-tier deployment. Ansible does not manage one system at a time; it models IT
infrastructure by describing how all of your systems are interrelated. Ansible is completely agentless, which means
Ansible works by connecting to your nodes through SSH (by default). If you want another method of connection,
such as Kerberos, Ansible gives you that option.
After connecting to your nodes, Ansible pushes small programs called "Ansible modules". Ansible runs these
modules on your nodes and removes them when finished. Ansible manages your inventory in simple text files
(the hosts file). In the hosts file you can group hosts and control the actions on a specific group in your playbooks.
Ansible Workflow
Ansible works by connecting to your nodes and pushing out small programs, called Ansible modules, to them.
Ansible then executes these modules and removes them when finished. The library of modules can reside on any
machine, and there are no daemons, servers, or databases required.
In the Ansible workflow, the Management Node is the controlling node that controls the entire execution of the
playbook. The inventory file provides the list of hosts where the Ansible modules need to be run.
The Management Node makes an SSH connection, executes the small modules on the host machines, and installs
the software.
Ansible removes the modules once they have finished their work: it connects to the host machine, executes the
instructions, and, if the run is successful, removes the code that was copied to the host machine.
Ansible Architecture
The Ansible orchestration engine interacts with the user who writes the Ansible playbook, executes the
orchestration, and interacts with the services of a private or public cloud and with a configuration management
database (CMDB). Its components are as follows:
Inventory: The inventory is a list of the nodes or hosts (with their IP addresses), databases, servers, etc. that need to be
managed.
APIs: The Ansible APIs work as the transport for the public or private cloud services.
Modules: Ansible connects to the nodes and pushes out the Ansible module programs. Ansible executes the modules and
removes them when finished. These modules can reside on any machine; no database or servers are required. You can work
with your chosen text editor, a terminal, or a version control system to keep track of changes in the content.
Plugins: A plugin is a piece of code that extends the core functionality of Ansible. There are many useful plugins, and you
can also write your own.
Playbooks: Playbooks contain your written automation code, in YAML format, which describes the tasks to be executed
through Ansible. You can also launch tasks synchronously and asynchronously with playbooks.
Hosts: In the Ansible architecture, hosts are the node systems that are automated by Ansible; they can be any machine, such
as RedHat, Linux, Windows, etc.
Networking: Ansible is used to automate different networks. It uses a simple, secure, and powerful agentless automation
framework for IT operations and development. It uses a data model that is separated from the Ansible automation engine
and therefore spans different hardware quite easily.
Cloud: A cloud is a network of remote servers on which you can store, manage, and process data. These servers are hosted
on the internet, and the data is stored remotely rather than on a local server. Ansible can launch resources and instances on
the cloud and connect them to your servers, letting you operate your tasks remotely.
CMDB: CMDB is a type of repository which acts as a data warehouse for the IT installations.
Deployment tools
Chef
Chef is an open source technology developed by Opscode. Adam Jacob, co-founder of Opscode, is known as the
founder of Chef. This technology uses Ruby to develop basic building blocks like recipes and cookbooks.
Chef is used in infrastructure automation and helps in reducing manual and repetitive tasks for infrastructure
management.
Chef has its own conventions for the different building blocks that are required to manage and automate
infrastructure.
Why Chef?
Chef is a configuration management technology used to automate infrastructure provisioning. It is developed
on the basis of a Ruby DSL. It is used to streamline the tasks of configuring and managing a company's
servers. It has the capability to integrate with any cloud technology.
In DevOps, we use Chef to deploy and manage servers and applications in-house and on the cloud.
Features of Chef
Following are the most prominent features of Chef −
• Chef uses the popular Ruby language to create a domain-specific language.
• Chef does not make assumptions about the current status of a node. It uses its own mechanisms to get the
current status of a machine.
• Chef is ideal for deploying and managing the cloud server, storage, and software.
Advantages of Chef
Chef offers the following advantages −
• Lower barrier for entry − As Chef uses the native Ruby language for configuration, a standard configuration
language, it can easily be picked up by anyone with some development experience.
• Excellent integration with cloud − Using the knife utility, it can be easily integrated with any of the cloud
technologies. It is the best tool for an organization that wishes to distribute its infrastructure on multi-cloud
environment.
Disadvantages of Chef
Some of the major drawbacks of Chef are as follows −
• One of the big disadvantages of Chef is the way cookbooks are controlled. They need constant attention so
that people who are working do not interfere with each other's cookbooks.
• Only Chef solo is available.
• In the current situation, it is only a good fit for AWS cloud.
• It is not very easy to learn if the person is not familiar with Ruby.
• Documentation is still lacking.
Chef - Architecture
Chef works on a three-tier client-server model wherein the working units, such as cookbooks, are developed on the
Chef workstation. Using command-line utilities such as knife, they are uploaded to the Chef server, and all the
nodes present in the architecture are registered with the Chef server.
In order to get a working Chef infrastructure in place, we need to set up multiple things in sequence. The setup
has the following components:
• Chef Workstation − This is the location where all the configurations are developed. The Chef workstation is
installed on the local machine.
• Chef Server − This works as the centralized working unit of the Chef setup, where all the configuration files
are uploaded after development. There are different kinds of Chef server: some are hosted Chef servers,
whereas some are built on premise.
• Chef Nodes − These are the actual machines that are going to be managed by the Chef server. All the nodes
can have different kinds of setup as per requirement. The Chef client is the key component of all the nodes;
it helps in setting up the communication between the Chef server and the Chef node. Another component of
a Chef node is Ohai, which helps in getting the current state of any node at a given point in time.
Salt Stack
Salt Stack is an open-source configuration management software and remote execution engine. Salt is a command-
line tool. While written in Python, SaltStack configuration management is language-agnostic and simple. The Salt
platform uses a push model for executing commands, and commands can also be run over the SSH protocol. The
default configuration formats are YAML and Jinja templates. Salt primarily competes with Puppet, Chef, and Ansible.
Salt provides many features when compared to other competing tools. Some of these important features are listed
below.
• Fault tolerance − Salt minions can connect to multiple masters at one time by configuring the master
configuration parameter as a YAML list of all the available masters. Any master can direct commands to
the Salt infrastructure.
• Flexible − The entire management approach of Salt is very flexible. It can be implemented to follow the
most popular systems management models such as Agent and Server, Agent-only, Server-only or all of the
above in the same environment.
• Scalable Configuration Management − SaltStack is designed to handle ten thousand minions per master.
• Parallel Execution model − Salt enables commands to be executed on remote systems in parallel.
• Python API − Salt provides a simple programming interface and it was designed to be modular and easily
extensible, to make it easy to mold to diverse applications.
• Easy to Setup − Salt is easy to setup and provides a single remote execution architecture that can manage
the diverse requirements of any number of servers.
• Language Agnostic − Salt state configuration files, the templating engine, and file types can support any
type of language.
Benefits of SaltStack
Being simple as well as a feature-rich system, Salt provides many benefits and they can be summarized as below
−
• Robust − Salt is a powerful and robust configuration management framework that works across tens of
thousands of systems.
• Authentication − Salt manages simple SSH key pairs for authentication.
• Secure − Salt manages secure data using an encrypted protocol.
• Fast − Salt uses a very fast, lightweight communication bus to provide the foundation for a remote execution
engine.
• Virtual Machine Automation − The Salt Virt Cloud Controller capability is used for automation.
• Infrastructure as data, not code − Salt provides a simple deployment, model driven configuration
management and command execution framework.
SaltStack – Architecture
The architecture of SaltStack is designed to work with any number of servers, from local network systems to other
deployments across different data centers. Architecture is a simple server/client model with the needed
functionality built into a single set of daemons.
The different components of the SaltStack architecture are described below.
• SaltMaster − SaltMaster is the master daemon. A SaltMaster is used to send commands and configurations
to the Salt minions (slaves). A single master can manage multiple minions.
• SaltMinions − SaltMinion is the slave daemon. A Salt minion receives commands and configuration from
the SaltMaster.
• Execution − Modules and ad hoc commands are executed from the command line against one or more minions.
It performs real-time monitoring.
• Formulas − Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can
be used for tasks such as installing a package, configuring and starting a service, setting up users or
permissions and many other common tasks.
• Grains − Grains is an interface that provides information specific to a minion. The information available
through the grains interface is static. Grains get loaded when the Salt minion starts. This means that the
information in grains is unchanging. Therefore, grains information could be about the running kernel or the
operating system. It is case insensitive.
• Pillar − A pillar is an interface that generates and stores highly sensitive data specific to a particular minion,
such as cryptographic keys and passwords. It stores data in a key/value pair and the data is managed in a
similar way as the Salt State Tree.
• Top File − Matches Salt states and pillar data to Salt minions.
• Runners − It is a module located inside the SaltMaster and performs tasks such as job status, connection
status, read data from external APIs, query connected salt minions and more.
• Returners − Returns data from Salt minions to another system.
• Reactor − It is responsible for triggering reactions when events occur in your SaltStack environment.
• SaltCloud − Salt Cloud provides a powerful interface to interact with cloud hosts.
• SaltSSH − Run Salt commands over SSH on systems without using Salt minion.
Docker
Docker is a container management service. The keywords of Docker are develop, ship and run anywhere. The
whole idea of Docker is for developers to easily develop applications, ship them into containers which can then
be deployed anywhere.
The initial release of Docker was in March 2013, and since then it has become the buzzword for modern-world
development, especially in the face of Agile-based projects.
Features of Docker
• Docker has the ability to reduce the size of development by providing a smaller footprint of the operating
system via containers.
• With containers, it becomes easier for teams across different units, such as development, QA and Operations
to work seamlessly across applications.
• You can deploy Docker containers anywhere, on any physical and virtual machines and even on the cloud.
• Since Docker containers are pretty lightweight, they are very easily scalable.
Docker architecture
Docker uses client-server architecture. The Docker client talks to the Docker daemon, which does the heavy
lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on
the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon
communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker
Compose, which lets you work with applications consisting of a set of containers.
The Docker daemon: The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects
such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to
manage Docker services.
The Docker client: The Docker client (docker) is the primary way that many Docker users interact with Docker.
When you use commands such as docker run, the client sends these commands to dockerd, which carries them out.
The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
Docker Desktop: Docker Desktop is an easy-to-install application for your Mac, Windows or Linux environment
that enables you to build and share containerized applications and microservices. Docker Desktop includes the
Docker daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and
Credential Helper.
Docker registries: A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use,
and Docker is configured to look for images on Docker Hub by default. You can even run your own private
registry. When you use the docker pull or docker run commands, the required images are pulled from your
configured registry. When you use the docker push command, your image is pushed to your configured registry.
Docker objects: When you use Docker, you are creating and using images, containers, networks, volumes,
plugins, and other objects. This section is a brief overview of some of those objects.
Images: An image is a read-only template with instructions for creating a Docker container. Often, an image
is based on another image, with some additional customization. For example, to build your own image, you create
a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in
a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those
layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast when
compared to other virtualization technologies.
Containers: A container is a runnable instance of an image. You can create, start, stop, move, or delete a container
using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even
create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how
isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host
machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start
it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
Example docker run command
The following command runs an Ubuntu container, attaches interactively to your local command-line session,
and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you
had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command manually.
3. Docker allocates a read-write file system to the container, as its final layer. This allows a running container
to create or modify files and directories in its local file system.
4. Docker creates a network interface to connect the container to the default network, since you did not
specify any networking options. This includes assigning an IP address to the container. By default,
containers can connect to external networks using the host machine’s network connection.
5. Docker starts the container and executes /bin/bash. Because the container is running interactively and
attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while the
output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can
start it again or remove it.
The underlying technology
Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to
deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace called
the container. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its
access is limited to that namespace.