Automation Testing
Some
types of manual testing, such as discovery testing and usability testing, are
invaluable. You can do other kinds of testing—like regression testing and
functional testing—manually, but it’s a fairly wasteful practice for humans to keep
doing the same thing over and over again. It’s these kinds of repetitive tests that
lend themselves to test automation.
Test automation is the practice of running tests automatically, managing test data,
and utilizing results to improve software quality. It’s primarily a quality
assurance measure, but its activities involve the commitment of the entire software
production team. From business analysts to developers and DevOps engineers, getting
the most out of test automation requires everyone's involvement.
This post will give you a high-level understanding of what test automation is all
about. There are all kinds of tests, but not all should be automated; therefore,
let’s start with general criteria for test automation.
Repeatable
The test must be repeatable. There's no sense in automating a test that can only be
run once. A repeatable test has three steps, covered in more detail at the end of
this post: prepare the state and test data, take the action, and report the results.
Deterministic
When a function is deterministic, its outcome is the same every time it's run with
the same input. The same is true of tests that can be automated. For example, say we
want to test an addition function. We know that 1 + 1 = 2 and that
394.19 + 5.81 = 400.00. Addition is a deterministic function.
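As a minimal sketch, a deterministic function like addition can be tested with fixed
inputs and expected outputs. The add function below is invented for illustration, and
the test runs with pytest (which picks up plain assert statements):

```python
# A deterministic function: the same inputs always produce the same output.
def add(a, b):
    return a + b


def test_add_is_deterministic():
    # Fixed inputs from the examples above; the result never varies between runs.
    assert add(1, 1) == 2
    # Round to two decimals to sidestep floating-point representation noise.
    assert round(add(394.19, 5.81), 2) == 400.00
```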
Software, on the other hand, may take so many variable inputs that it's difficult to
get the same result over time. Some inputs may even be random, which can make the
specific outcome hard to predict. Software design can compensate for this by
allowing test inputs to be injected through a test harness.
Other features of an application may be additive; for example, creating a new user
would add to the number of users. At least when we add a user we know that the
number of users should only grow by one. However, running tests in parallel may
cause unexpected results. Isolation can prevent this kind of false positive.
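As a hedged sketch of the isolation idea, each test below gets its own fresh,
in-memory user store (the UserStore class is hypothetical, invented only for this
example), so a parallel or earlier test can't change the count it asserts on:

```python
import pytest


class UserStore:
    """Hypothetical in-memory user store used only for this illustration."""

    def __init__(self):
        self.users = []

    def add_user(self, name):
        self.users.append(name)


@pytest.fixture
def store():
    # Each test receives its own fresh store, so tests can't interfere
    # with one another even when they run in parallel.
    return UserStore()


def test_adding_a_user_grows_count_by_one(store):
    before = len(store.users)
    store.add_user("alice")
    assert len(store.users) == before + 1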
Unopinionated
You cannot automate matters of opinion. This is where usability testing, beta
testing, and so forth really shine. User feedback is important, but it just can’t
be automated … sorry!
Code Analysis
There are actually many different types of code analysis tools, including static
analysis and dynamic analysis. Some of these tests look for security flaws, others
check for style and form. These tests run when a developer checks in code. Other
than configuring rules and keeping the tools up to date, there isn’t much test
writing to do with these automated tests.
Unit Tests
You can also automate a unit test suite. Unit tests are designed to test a single
function, or unit, of operation in isolation. They typically run on a build server.
These tests don’t depend on databases, external APIs, or file storage. They need to
be fast and are designed to test the code only, not the external dependencies.
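As a sketch, a unit test exercises one function in isolation, with no database, API,
or file in sight. The discount_price function and its rules are made up for the
example:

```python
import pytest


# Unit under test: a single, pure function with no external dependencies.
def discount_price(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_discount_price_applies_percentage():
    # Fast, isolated, and dependent only on the code itself.
    assert discount_price(200.00, 25) == 150.00


def test_discount_price_rejects_invalid_percent():
    with pytest.raises(ValueError):
        discount_price(100.00, 150)
```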
Integration Tests
Integration tests are a different kind of animal when it comes to automation. Since
integration tests—sometimes called end-to-end tests—need to interact with external
dependencies, they're more complicated to set up. Often, it's best to create fake
external resources, especially when dealing with resources beyond your control.
If you, for example, have a logistics app that depends on a web service from a
vendor, your test may fail unexpectedly if the vendor’s service is down. Does this
mean your app is broken? It might, but you should have enough control over the
entire test environment to create each scenario explicitly. Never depend on an
external factor to determine the outcome of your test scenario.
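One common way to avoid depending on an external factor is to stand in a fake for
the vendor's service. The sketch below assumes a hypothetical get_shipping_quote
function that calls a vendor web service, and replaces the HTTP call with a stub
using unittest.mock:

```python
from unittest import mock

import requests


# Hypothetical code under test: asks a vendor's web service for a shipping quote.
def get_shipping_quote(weight_kg):
    response = requests.get(
        "https://vendor.example.com/quote", params={"weight": weight_kg}, timeout=5
    )
    response.raise_for_status()
    return response.json()["price"]


def test_get_shipping_quote_uses_vendor_price():
    fake_response = mock.Mock(status_code=200)
    fake_response.json.return_value = {"price": 12.50}
    fake_response.raise_for_status.return_value = None

    # Replace the real HTTP call so the test never depends on the vendor being up.
    with mock.patch("requests.get", return_value=fake_response):
        assert get_shipping_quote(weight_kg=3) == 12.50
```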
Acceptance Tests
In the end, the automated acceptance test (AAT) runs to determine whether the
feature delivers what's been agreed upon. Therefore, it's critical for developers,
the business, and QA to write these tests together. They serve as regression tests
in the future, and they ensure that the feature holds up to what's expected.
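As a hedged sketch of the idea, an automated acceptance test reads like the
agreed-upon behavior. The scenario, the ShoppingCart class, and the free-shipping
rule below are all invented for illustration:

```python
# Hypothetical feature agreed on with the business:
# "Orders of $50 or more ship for free."
class ShoppingCart:
    def __init__(self):
        self.total = 0.0

    def add_item(self, price):
        self.total += price

    def shipping_cost(self):
        return 0.0 if self.total >= 50.0 else 5.0


def test_orders_of_fifty_dollars_or_more_ship_free():
    # Given a cart worth at least $50
    cart = ShoppingCart()
    cart.add_item(30.00)
    cart.add_item(25.00)
    # When shipping is calculated
    cost = cart.shipping_cost()
    # Then shipping is free
    assert cost == 0.0
```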
Regression Tests
Without AATs in place, you have to write regression tests after the fact. While
both are forms of functional tests, how they're written, when they're written, and
who writes them are vastly different. Like AATs, they can be driven through an API
by code or through a UI. Tools exist to write these tests using a GUI.
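A regression test driven through an API by code might look like the sketch below;
the staging URL, endpoint, and payload are assumptions rather than a real service:

```python
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical environment


def test_login_still_returns_a_token():
    # Guards a previously working behavior: a valid login returns a token.
    response = requests.post(
        f"{BASE_URL}/login",
        json={"username": "test-user", "password": "test-pass"},
        timeout=10,
    )
    assert response.status_code == 200
    assert "token" in response.json()
```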
Performance Tests
Many kinds of performance tests exist, but they all test some aspect of an
application’s performance. Will it hold up under extreme pressure? Are we testing
the system under high stress? Is it simple response time under load we’re after?
How about scalability?
Sometimes these tests require emulating a massive number of users. In this case,
it’s important to have an environment that’s capable of performing such a feat.
Cloud resources are available to help with this kind of testing, but it’s possible
to use on-premises resources as well.
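A very small load-test sketch using only the standard library and requests appears
below; the URL, user count, and threshold are placeholders, and real performance
testing usually relies on dedicated tooling and environments:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://staging.example.com/health"  # placeholder endpoint


def timed_request(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start


def test_response_time_under_light_load():
    # Emulate 50 concurrent users and check the average response time.
    with ThreadPoolExecutor(max_workers=50) as pool:
        durations = list(pool.map(timed_request, range(50)))
    average = sum(durations) / len(durations)
    assert average < 0.5  # placeholder threshold, in seconds
```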
Smoke Tests
What’s a smoke test? It’s a basic test that’s usually performed after a deployment
or maintenance window. The purpose of a smoke test is to ensure that all services
and dependencies are up and running. A smoke test isn’t meant to be an all-out
functional test. It can be run as part of an automated deployment or triggered
through a manual step.
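A smoke test can be as simple as checking that each service answers after a
deployment. The service list below is hypothetical; a script like this can run as a
pipeline step or be triggered by hand:

```python
import requests

# Hypothetical services brought up by the deployment.
SERVICES = {
    "web": "https://staging.example.com/health",
    "api": "https://api.staging.example.com/health",
}


def test_all_services_are_up():
    for name, url in SERVICES.items():
        response = requests.get(url, timeout=5)
        assert response.status_code == 200, f"{name} is not responding"
```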
Prepare
First, we need to prepare the state, the test data, and the environment where tests
take place. As we’ve seen, most tests require the environment to be in a certain
state before an action takes place. In a typical scenario, this requires some
setup. Either the data will need to be manipulated, the application will need to be
put into a specific state, or both!
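In pytest-style tests, this setup step often lives in a fixture. The sketch below
assumes a hypothetical in-memory database helper and seeds it with known test data
before each test, then cleans up afterward:

```python
import pytest


class FakeDatabase:
    """Hypothetical in-memory database used only to illustrate test setup."""

    def __init__(self):
        self.orders = []


@pytest.fixture
def seeded_db():
    # Prepare: put the application state and test data in a known condition.
    db = FakeDatabase()
    db.orders.append({"id": 1, "status": "shipped"})
    yield db
    # Clean up afterwards so the next test starts from a known state, too.
    db.orders.clear()


def test_shipped_orders_are_present(seeded_db):
    assert any(order["status"] == "shipped" for order in seeded_db.orders)
```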
Take Action
Once the state and/or environment is in the predefined state, it’s time to take
action! The test driver will run the test, either through calling an application’s
API or user interface or by running the code directly. The test driver is
responsible for “driving” the tests, but the test management system takes on the
responsibility of coordinating everything, including reporting results.
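As a small sketch of a driver kicking off a run, pytest can be invoked from code and
returns an exit code that the surrounding tooling can act on; the test directory
here is a placeholder:

```python
import pytest


def run_suite():
    # The driver runs the tests; a non-zero exit code signals failures
    # that the surrounding test management system can report on.
    return pytest.main(["-q", "tests/"])  # placeholder test directory


if __name__ == "__main__":
    raise SystemExit(run_suite())
```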
Report Results
A test automation system will record and report results. These results may come in
a number of different formats and may even create problem tickets or bugs in a work
tracking system. The basic result, however, is pass or fail. Usually, each test
scenario gets a green or red indicator to show whether it passed or failed.
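Many runners can emit results as JUnit XML (pytest does with --junitxml=report.xml),
which downstream tooling can read to produce pass/fail summaries or open tickets. A
minimal sketch of reading such a report with the standard library, assuming the
report file already exists:

```python
import xml.etree.ElementTree as ET


def summarize(report_path="report.xml"):
    # JUnit XML reports carry counts as attributes on the <testsuite> element;
    # some runners wrap it in a <testsuites> root, so handle both shapes.
    root = ET.parse(report_path).getroot()
    suite = root if root.tag == "testsuite" else root.find("testsuite")
    failures = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    total = int(suite.get("tests", 0))
    print(f"{total - failures}/{total} tests passed")
    return failures == 0
```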
Sometimes, tests are inconclusive or don’t run for some reason. When this happens,
the automation system will have a full log of the output for developers to review.
This log helps them track down the issue. Ideally, they’ll be able to replay the
scenario once they’ve put a fix in place.