ISTQB-CTFL-AT Syllabus v1.0
Continuous integration requires the use of tools, including tools for testing, tools for automating the build process, and tools for version control.

1.2.5 Release and Iteration Planning

As mentioned in the Foundation Level syllabus [ISTQB_FL_SYL], planning is an on-going activity, and this is the case in Agile lifecycles as well. For Agile lifecycles, two kinds of planning occur, release planning and iteration planning.

Release planning looks ahead to the release of a product, often a few months ahead of the start of a project. Release planning defines and re-defines the product backlog, and may involve refining larger user stories into a collection of smaller stories. Release planning provides the basis for a test approach and test plan spanning all iterations. Release plans are high-level.

In release planning, business representatives establish and prioritize the user stories for the release, in collaboration with the team (see Section 1.2.2). Based on these user stories, project and quality risks are identified and a high-level effort estimation is performed (see Section 3.2).

Testers are involved in release planning and especially add value in the following activities:
• Defining testable user stories, including acceptance criteria
• Participating in project and quality risk analyses
• Estimating testing effort associated with the user stories
• Defining the necessary test levels
• Planning the testing for the release

In iteration planning, the team selects user stories from the prioritized release backlog, elaborates the user stories, performs a risk analysis for the user stories, and estimates the work needed for each user story. If a user story is too vague and attempts to clarify it have failed, the team can refuse to accept it and use the next user story based on priority. The business representatives must answer the team's questions about each story so the team can understand what they should implement and how to test each story.

The number of stories selected is based on established team velocity and the estimated size of the selected user stories. After the contents of the iteration are finalized, the user stories are broken into tasks, which will be carried out by the appropriate team members.

Testers are involved in iteration planning and especially add value in the following activities:
• Participating in the detailed risk analysis of user stories
• Determining the testability of the user stories
• Creating acceptance tests for the user stories
• Breaking down user stories into tasks (particularly testing tasks)
• Estimating testing effort for all testing tasks
• Identifying functional and non-functional aspects of the system to be tested
• Supporting and participating in test automation at multiple levels of testing

Release plans may change as the project proceeds, including changes to individual user stories in the product backlog. These changes may be triggered by internal or external factors. Internal factors include delivery capabilities, velocity, and technical issues. External factors include the discovery of
new markets and opportunities, new competitors, or business threats that may change release objectives and/or target dates. In addition, iteration plans may change during an iteration. For example, a particular user story that was considered relatively simple during estimation might prove more complex than expected.

These changes can be challenging for testers. Testers must understand the big picture of the release for test planning purposes, and they must have an adequate test basis and test oracle in each iteration for test development purposes as discussed in the Foundation Level syllabus [ISTQB_FL_SYL], Section 1.4. The required information must be available to the tester early, and yet change must be embraced according to Agile principles. This dilemma requires careful decisions about test strategies and test documentation. For more on Agile testing challenges, see [Black09], Chapter 12.

Release and iteration planning should address test planning as well as planning for development activities. Particular test-related issues to address include:
• The scope of testing, the extent of testing for those areas in scope, the test goals, and the reasons for these decisions.
• The team members who will carry out the test activities.
• The test environment and test data needed, when they are needed, and whether any additions or changes to the test environment and/or data will occur prior to or during the project.
• The timing, sequencing, dependencies, and prerequisites for the functional and non-functional test activities (e.g., how frequently to run regression tests, which features depend on other features or test data, etc.), including how the test activities relate to and depend on development activities.
• The project and quality risks to be addressed (see Section 3.2.1).

In addition, the larger team estimation effort should include consideration of the time and effort needed to complete the required testing activities.

2. Fundamental Agile Testing Principles, Practices, and Processes – 105 mins.

Keywords
build verification test, configuration item, configuration management

Learning Objectives for Fundamental Agile Testing Principles, Practices, and Processes

2.1 The Differences between Testing in Traditional and Agile Approaches
FA-2.1.1 (K2) Describe the differences between testing activities in Agile projects and non-Agile projects
FA-2.1.2 (K2) Describe how development and testing activities are integrated in Agile projects
FA-2.1.3 (K2) Describe the role of independent testing in Agile projects

2.2 Status of Testing in Agile Projects
FA-2.2.1 (K2) Describe the tools and techniques used to communicate the status of testing in an Agile project, including test progress and product quality
FA-2.2.2 (K2) Describe the process of evolving tests across multiple iterations and explain why test automation is important to manage regression risk

2.3 Role and Skills of a Tester in an Agile Team
FA-2.3.1 (K2) Understand the skills (people, domain, and testing) of a tester in an Agile team
FA-2.3.2 (K2) Understand the role of a tester within an Agile team

2.1 The Differences between Testing in Traditional and Agile Approaches

As described in the Foundation Level syllabus [ISTQB_FL_SYL] and in [Black09], test activities are related to development activities, and thus testing varies in different lifecycles. Testers must understand the differences between testing in traditional lifecycle models (e.g., sequential such as the
V-model or iterative such as RUP) and Agile lifecycles in order to work effectively and efficiently. The Agile models differ in terms of the way testing and development activities are integrated, the project work products, the names, entry and exit criteria used for various levels of testing, the use of tools, and how independent testing can be effectively utilized.

Testers should remember that organizations vary considerably in their implementation of lifecycles. Deviation from the ideals of Agile lifecycles (see Section 1.1) may represent intelligent customization and adaptation of the practices. The ability to adapt to the context of a given project, including the software development practices actually followed, is a key success factor for testers.

2.1.1 Testing and Development Activities

One of the main differences between traditional lifecycles and Agile lifecycles is the idea of very short iterations, each iteration resulting in working software that delivers features of value to business stakeholders. At the beginning of the project, there is a release planning period. This is followed by a sequence of iterations. At the beginning of each iteration, there is an iteration planning period. Once iteration scope is established, the selected user stories are developed, integrated with the system, and tested. These iterations are highly dynamic, with development, integration, and testing activities taking place throughout each iteration, and with considerable parallelism and overlap. Testing activities occur throughout the iteration, not as a final activity.

Testers, developers, and business stakeholders all have a role in testing, as with traditional lifecycles. Developers perform unit tests as they develop features from the user stories. Testers then test those features. Business stakeholders also test the stories during implementation. Business stakeholders might use written test cases, but they also might simply experiment with and use the feature in order to provide fast feedback to the development team.

In some cases, hardening or stabilization iterations occur periodically to resolve any lingering defects and other forms of technical debt. However, the best practice is that no feature is considered done until it has been integrated and tested with the system [Goucher09]. Another good practice is to address defects remaining from the previous iteration at the beginning of the next iteration, as part of the backlog for that iteration (referred to as "fix bugs first"). However, some complain that this practice results in a situation where the total work to be done in the iteration is unknown and it will be more difficult to estimate when the remaining features can be done. At the end of the sequence of iterations, there can be a set of release activities to get the software ready for delivery, though in some cases delivery occurs at the end of each iteration.

When risk-based testing is used as one of the test strategies, a high-level risk analysis occurs during release planning, with testers often driving that analysis. However, the specific quality risks associated with each iteration are identified and assessed in iteration planning. This risk analysis can influence the sequence of development as well as the priority and depth of testing for the features. It also influences the estimation of the test effort required for each feature (see Section 3.2).

In some Agile practices (e.g., Extreme Programming), pairing is used. Pairing can involve testers working together in twos to test a feature. Pairing can also involve a tester working collaboratively with a developer to develop and test a feature. Pairing can be difficult when the test team is distributed, but processes and tools can help enable distributed pairing. For more information on distributed work, see [ISTQB_ALTM_SYL], Section 2.8.

Testers may also serve as testing and quality coaches within the team, sharing testing knowledge and supporting quality assurance work within the team. This promotes a sense of collective ownership of quality of the product.
Test automation at all levels of testing occurs in many Agile teams, and this can mean that testers spend time creating, executing, monitoring, and maintaining automated tests and results. Because of the heavy use of test automation, a higher percentage of the manual testing on Agile projects tends to be done using experience-based and defect-based techniques such as software attacks, exploratory testing, and error guessing (see [ISTQB_ALTA_SYL], Sections 3.3 and 3.4 and [ISTQB_FL_SYL], Section 4.5). While developers will focus on creating unit tests, testers should focus on creating automated integration, system, and system integration tests. This leads to a tendency for Agile teams to favor testers with a strong technical and test automation background.

One core Agile principle is that change may occur throughout the project. Therefore, lightweight work product documentation is favored in Agile projects. Changes to existing features have testing implications, especially regression testing implications. The use of automated testing is one way of managing the amount of test effort associated with change. However, it's important that the rate of change not exceed the project team's ability to deal with the risks associated with those changes.

2.1.2 Project Work Products

Project work products of immediate interest to Agile testers typically fall into three categories:
1. Business-oriented work products that describe what is needed (e.g., requirements specifications) and how to use it (e.g., user documentation)
2. Development work products that describe how the system is built (e.g., database entity-relationship diagrams), that actually implement the system (e.g., code), or that evaluate individual pieces of code (e.g., automated unit tests)
3. Test work products that describe how the system is tested (e.g., test strategies and plans), that actually test the system (e.g., manual and automated tests), or that present test results (e.g., test dashboards as discussed in Section 2.2.1)

In a typical Agile project, it is a common practice to avoid producing vast amounts of documentation. Instead, focus is more on having working software, together with automated tests that demonstrate conformance to requirements. This encouragement to reduce documentation applies only to documentation that does not deliver value to the customer. In a successful Agile project, a balance is struck between increasing efficiency by reducing documentation and providing sufficient documentation to support business, testing, development, and maintenance activities. The team must make a decision during release planning about which work products are required and what level of work product documentation is needed.

Typical business-oriented work products on Agile projects include user stories and acceptance criteria. User stories are the Agile form of requirements specifications, and should explain how the system should behave with respect to a single, coherent feature or function. A user story should define a feature small enough to be completed in a single iteration. Larger collections of related features, or a collection of sub-features that make up a single complex feature, may be referred to as "epics". Epics may include user stories for different development teams. For example, one user story can describe what is required at the API level (middleware) while another story describes what is needed at the UI level (application). These collections may be developed over a series of sprints. Each epic and its user stories should have associated acceptance criteria.

Typical developer work products on Agile projects include code. Agile developers also often create automated unit tests. These tests might be created after the development of code. In some cases, though, developers create tests incrementally, before each portion of the code is written, in order to provide a way of verifying, once that portion of code is written, whether it works as
expected. While this approach is referred to as test first or test-driven development, in reality the tests are more a form of executable low-level design specifications rather than tests [Beck02].

Typical tester work products on Agile projects include automated tests, as well as documents such as test plans, quality risk catalogs, manual tests, defect reports, and test results logs. The documents are captured in as lightweight a fashion as possible, which is often also true of these documents in traditional lifecycles. Testers will also produce test metrics from defect reports and test results logs, and again there is an emphasis on a lightweight approach.

In some Agile implementations, especially regulated, safety critical, distributed, or highly complex projects and products, further formalization of these work products is required. For example, some teams transform user stories and acceptance criteria into more formal requirements specifications. Vertical and horizontal traceability reports may be prepared to satisfy auditors, regulations, and other requirements.

2.1.3 Test Levels

Test levels are test activities that are logically related, often by the maturity or completeness of the item under test.

In sequential lifecycle models, the test levels are often defined such that the exit criteria of one level are part of the entry criteria for the next level. In some iterative models, this rule does not apply. Test levels overlap. Requirement specification, design specification, and development activities may overlap with test levels.

In some Agile lifecycles, overlap occurs because changes to requirements, design, and code can happen at any point in an iteration. While Scrum, in theory, does not allow changes to the user stories after iteration planning, in practice such changes sometimes occur. During an iteration, any given user story will typically progress sequentially through the following test activities:
• Unit testing, typically done by the developer
• Feature acceptance testing, which is sometimes broken into two activities:
  • Feature verification testing, which is often automated, may be done by developers or testers, and involves testing against the user story's acceptance criteria (illustrated in the sketch after this list)
  • Feature validation testing, which is usually manual and can involve developers, testers, and business stakeholders working collaboratively to determine whether the feature is fit for use, to improve visibility of the progress made, and to receive real feedback from the business stakeholders
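The following sketch illustrates what an automated feature verification test of this kind might look like. It is only an illustration: it assumes a hypothetical user story ("a customer receives a 10% discount on orders of 100 or more"), an invented apply_discount function, and pytest-style tests. Real acceptance criteria, names, and values come from the team's own user stories.

    # Hypothetical user story for illustration: "As a customer, I receive a
    # 10% discount on orders of 100 or more." (Invented acceptance criterion.)

    def apply_discount(order_total):
        """Return the payable amount after any discount has been applied."""
        if order_total >= 100:
            return round(order_total * 0.9, 2)
        return order_total

    # Feature verification tests check the story's acceptance criterion directly.
    def test_discount_applied_at_threshold():
        assert apply_discount(100) == 90.0

    def test_no_discount_below_threshold():
        assert apply_discount(99.99) == 99.99

Tests like these are typically added to the continuous integration run so that they also serve as regression tests in later iterations.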
In addition, there is often a parallel process of regression testing occurring throughout the iteration. This involves re-running the automated unit tests and feature verification tests from the current iteration and previous iterations, usually via a continuous integration framework.

In some Agile projects, there may be a system test level, which starts once the first user story is ready for such testing. This can involve executing functional tests, as well as non-functional tests for performance, reliability, usability, and other relevant test types.

Agile teams can employ various forms of acceptance testing (using the term as explained in the Foundation Level syllabus [ISTQB_FL_SYL]). Internal alpha tests and external beta tests may occur, either at the close of each iteration, after the completion of each iteration, or after a series of iterations. User acceptance tests, operational acceptance tests, regulatory acceptance tests, and contract acceptance tests also may occur, either at the close of each iteration, after the completion of each iteration, or after a series of iterations.
2.1.4 Testing and Configuration Management

…

2.2.1 Communicating Test Status, Progress, and Product Quality

Agile teams progress by having working software at the end of each iteration. To determine when the team will have working software, they need to monitor the progress of all work items in the iteration and release. Testers in Agile teams utilize various methods to record test progress and status, including test automation results, progression of test tasks and stories on the Agile task board, and burndown charts showing the team's progress. These can then be communicated to the rest of the team using media such as wiki dashboards and dashboard-style emails, as well as verbally during stand-up meetings. Agile teams may use tools that automatically generate status reports based on test results and task progress, which in turn update wiki-style dashboards and emails. This method of communication also gathers metrics from the testing process, which can be used in process improvement. Communicating test status in such an automated manner also frees testers' time to focus on designing and executing more test cases.

Teams may use burndown charts to track progress across the entire release and within each iteration. A burndown chart [Crispin08] represents the amount of work left to be done against time allocated to the release or iteration.

To provide an instant, detailed visual representation of the whole team's current …

… [Jones11]. Automated tests at the integration and system levels are also required.

Testing tasks on the task board relate to the acceptance criteria defined for the user stories. As test automation scripts, manual tests, and exploratory tests for a test task achieve a passing status, the task moves into the done column of the task board. The whole team reviews the status of the task board regularly, often during the daily stand-up meetings, to ensure tasks are moving across the board at an acceptable rate. If any tasks (including testing tasks) are not moving or are moving too slowly, the team reviews and addresses any issues that may be blocking the progress of those tasks.

The daily stand-up meeting includes all members of the Agile team including testers. At this meeting, they communicate their current status. The agenda for each member is [Agile Alliance Guide]:
• What have you completed since the last meeting?
• What do you plan to complete by the next meeting?
• What is getting in your way?
Any issues that may block test progress are communicated during the daily stand-up meetings, so the whole team is aware of the issues and can resolve them accordingly.

To improve the overall product quality, many Agile teams perform customer satisfaction surveys to receive feedback on whether the product meets customer expectations. Teams may use other metrics similar to those captured in traditional development methodologies, such as test pass/fail rates,
defect discovery rates, confirmation and regression test results, defect density, defects found and fixed, requirements coverage, risk coverage, code coverage, and code churn to improve the product quality.

As with any lifecycle, the metrics captured and reported should be relevant and aid decision-making. Metrics should not be used to reward, punish, or isolate any team members.

2.2.2 Managing Regression Risk with Evolving Manual and Automated Test Cases

In an Agile project, as each iteration completes, the product grows. Therefore, the scope of testing also increases. Along with testing the code changes made in the current iteration, testers also need to verify no regression has been introduced on features that were developed and tested in previous iterations. The risk of introducing regression in Agile development is high due to extensive code churn (lines of code added, modified, or deleted from one version to another). Since responding to change is a key Agile principle, changes can also be made to previously delivered features to meet business needs. In order to maintain velocity without incurring a large amount of technical debt, it is critical that teams invest in test automation at all test levels as early as possible. It is also critical that all test assets such as automated tests, manual test cases, test data, and other testing artifacts are kept up-to-date with each iteration. It is highly recommended that all test assets be maintained in a configuration management tool in order to enable version control, to ensure ease of access by all team members, and to support making changes as required due to changing functionality while still preserving the historic information of the test assets.

Because complete repetition of all tests is seldom possible, especially in tight-timeline Agile projects, testers need to allocate time in each iteration to review manual and automated test cases from previous and current iterations to select test cases that may be candidates for the regression test suite, and to retire test cases that are no longer relevant. Tests written in earlier iterations to verify specific features may have little value in later iterations due to feature changes or new features which alter the way those earlier features behave.

While reviewing test cases, testers should consider suitability for automation. The team needs to automate as many tests as possible from previous and current iterations. This allows automated regression tests to reduce regression risk with less effort than manual regression testing would require. This reduced regression test effort frees the testers to more thoroughly test new features and functions in the current iteration.

It is critical that testers have the ability to quickly identify and update test cases from previous iterations and/or releases that are affected by the changes made in the current iteration. Defining how the team designs, writes, and stores test cases should occur during release planning. Good practices for test design and implementation need to be adopted early and applied consistently. The shorter timeframes for testing and the constant change in each iteration will increase the impact of poor test design and implementation practices.

Use of test automation, at all test levels, allows Agile teams to provide rapid feedback on product quality. Well-written automated tests provide a living document of system functionality [Crispin08]. By checking the automated tests and their corresponding test results into the configuration management system, aligned with the versioning of the product builds, Agile teams can review the functionality tested and the test results for any given build at any given point in time.

Automated unit tests are run before source code is checked into the mainline of the configuration management system to ensure the code changes do not break the software build. To reduce build breaks, which can slow down the progress of the whole team, code should not be checked in unless all automated unit tests pass. Automated unit
test results provide immediate feedback on code and build quality, but not on product quality.

Automated acceptance tests are run regularly as part of the continuous integration full system build. These tests are run against a complete system build at least daily, but are generally not run with each code check-in as they take longer to run than automated unit tests and could slow down code check-ins. The test results from automated acceptance tests provide feedback on product quality with respect to regression since the last build, but they do not provide status of overall product quality.

Automated tests can be run continuously against the system. An initial subset of automated tests to cover critical system functionality and integration points should be created immediately after a new build is deployed into the test environment. These tests are commonly known as build verification tests. Results from the build verification tests will provide instant feedback on the software after deployment, so teams don't waste time testing an unstable build.

Automated tests contained in the regression test set are generally run as part of the daily main build in the continuous integration environment, and again when a new build is deployed into the test environment. As soon as an automated regression test fails, the team stops and investigates the reasons for the failing test. The test may have failed due to legitimate functional changes in the current iteration, in which case the test and/or user story may need to be updated to reflect the new acceptance criteria. Alternatively, the test may need to be retired if another test has been built to cover the changes. However, if the test failed due to a defect, it is a good practice for the team to fix the defect prior to progressing with new features.

In addition to test automation, the following testing tasks may also be automated:
• Test data generation
• Loading test data into systems
• Deployment of builds into the test environments
• Restoration of a test environment (e.g., the database or website data files) to a baseline
• Comparison of data outputs
Automation of these tasks reduces the overhead and allows the team to spend time developing and testing new features.
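As a sketch of how one of these supporting tasks might be automated, the example below compares a produced data output against a stored baseline. The file names, CSV format, and comparison rules are assumptions made purely for this illustration; a team would substitute its own output artifacts and rules.

    import csv
    from pathlib import Path

    def read_rows(path):
        """Load a CSV file into a list of rows for comparison."""
        with Path(path).open(newline="") as f:
            return list(csv.reader(f))

    def compare_outputs(actual_path, baseline_path):
        """Return a list of differences between an actual output file and its baseline."""
        actual, baseline = read_rows(actual_path), read_rows(baseline_path)
        diffs = []
        for i, (a, b) in enumerate(zip(actual, baseline), start=1):
            if a != b:
                diffs.append(f"row {i}: expected {b}, got {a}")
        if len(actual) != len(baseline):
            diffs.append(f"row count: expected {len(baseline)}, got {len(actual)}")
        return diffs

    if __name__ == "__main__":
        # Self-contained demonstration using two small, invented files.
        Path("baseline.csv").write_text("id,total\n1,100\n2,250\n")
        Path("actual.csv").write_text("id,total\n1,100\n2,240\n")
        for problem in compare_outputs("actual.csv", "baseline.csv"):
            print(problem)  # prints: row 3: expected ['2', '250'], got ['2', '240']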
2.3 Role and Skills of a Tester in an Agile Team

In an Agile team, testers must closely collaborate with all other team members and with business stakeholders. This has a number of implications in terms of the skills a tester must have and the activities they perform within an Agile team.

2.3.1 Agile Tester Skills

Agile testers should have all the skills mentioned in the Foundation Level syllabus [ISTQB_FL_SYL]. In addition to these skills, a tester in an Agile team should be competent in test automation, test-driven development, acceptance test-driven development, white-box, black-box, and experience-based testing.

As Agile methodologies depend heavily on collaboration, communication, and interaction between the team members as well as stakeholders outside the team, testers in an Agile team should have good interpersonal skills. Testers in Agile teams should:
• Be positive and solution-oriented with team members and stakeholders
• Display critical, quality-oriented, skeptical thinking about the product
• Actively acquire information from stakeholders (rather than relying entirely on written specifications)
• Accurately evaluate and report test results, test progress, and product quality
• Work effectively to define testable user stories, especially acceptance criteria, with customer representatives and stakeholders
• Collaborate within the team, working in pairs with programmers and other team members
• Respond to change quickly, including changing, adding, or improving test cases
• Plan and organize their own work

Continuous skills growth, including interpersonal skills growth, is essential for all testers, including those on Agile teams.

2.3.2 The Role of a Tester in an Agile Team

The role of a tester in an Agile team includes activities that generate and provide feedback not only on test status, test progress, and product quality, but also on process quality. In addition to the activities described elsewhere in this syllabus, these activities include:
• Understanding, implementing, and updating the test strategy
• Measuring and reporting test coverage across all applicable coverage dimensions
• Ensuring proper use of testing tools
• Configuring, using, and managing test environments and test data
• Reporting defects and working with the team to resolve them
• Coaching other team members in relevant aspects of testing
• Ensuring the appropriate testing tasks are scheduled during release and iteration planning
• Actively collaborating with developers and business stakeholders to clarify requirements, especially in terms of testability, consistency, and completeness
• Participating proactively in team retrospectives, suggesting and implementing improvements

Within an Agile team, each team member is responsible for product quality and plays a role in performing test-related tasks.

Agile organizations may encounter some test-related organizational risks:
• Testers work so closely with developers that they lose the appropriate tester mindset
• Testers become tolerant of or silent about inefficient, ineffective, or low-quality practices within the team
• Testers cannot keep pace with the incoming changes in time-constrained iterations
To mitigate these risks, organizations may consider variations for preserving independence discussed in Section 2.1.5.

3. Agile Testing Methods, Techniques, and Tools – … mins.

Keywords
acceptance criteria, exploratory testing, performance testing, product risk, quality risk, regression testing, test approach, test charter, test estimation, test execution automation, test strategy, test-driven development, unit test framework

Learning Objectives for Agile Testing Methods, Techniques, and Tools

3.1 Agile Testing Methods
FA-3.1.1 (K1) Recall the concepts of test-driven development, acceptance test-driven development, and behavior-driven development
FA-3.1.2 (K1) Recall the concepts of the test pyramid
FA-3.1.3 (K2) Summarize the testing quadrants and their relationships with test levels and testing types
FA-3.1.4 (K3) For a given Agile project, practice the role of a tester in a Scrum team

3.2 Assessing Quality Risks and Estimating Test Effort
FA-3.2.1 (K3) Assess quality risks within an Agile project
FA-3.2.2 (K3) Estimate testing effort based on iteration content and quality risks

3.3 Techniques in Agile Projects
FA-3.3.1 (K3) Interpret relevant information to support testing activities
FA-3.3.2 (K2) Explain to business stakeholders how to define testable acceptance criteria
FA-3.3.3 (K3) Given a user story, write acceptance test-driven development test cases
FA-3.3.4 (K3) For both functional and non-functional behavior, write test cases using black box test design techniques based on given user stories
FA-3.3.5 (K3) Perform exploratory testing to support the testing of an Agile project

3.4 Tools in Agile Projects
FA-3.4.1 (K1) Recall different tools available to testers according to their purpose and to activities in Agile projects

3.1 Agile Testing Methods

There are certain testing practices that can be followed in every development project (agile or not) to produce quality products. These include writing tests in advance to express proper behavior, focusing on early defect prevention, detection, and removal, and ensuring that the right test types are run at the right time and as part of the right test level. Agile practitioners aim to introduce these practices early. Testers in Agile projects play a key role in guiding the use of these testing practices throughout the lifecycle.

3.1.1 Test-Driven Development, Acceptance Test-Driven Development, and Behavior-Driven Development

Test-driven development, acceptance test-driven development, and behavior-driven development are three complementary techniques in use among Agile teams to carry out testing across the various test levels. Each technique is an example of a fundamental principle of testing, the benefit of early testing and QA activities, since the tests are defined before the code is written.

Test-Driven Development

Test-driven development (TDD) is used to develop code guided by automated test cases. The process for test-driven development is:
• Add a test that captures the programmer's concept of the desired functioning of a small piece of code
• Run the test, which should fail since the code doesn't exist
• Write the code and run the test in a tight loop until the test passes
• Refactor the code after the test is passed, re-running the test to ensure it continues to pass against the refactored code
• Repeat this process for the next small piece of code, running the previous tests as well as the added tests
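A minimal sketch of one pass through this cycle, written as a Python unit test in pytest style, is shown below. The function, its name, and the expected behavior are invented for illustration only.

    # 1. Add a test that captures the desired behavior of a small piece of code.
    #    Run it first: it fails, because initials() does not exist yet.
    def test_initials_for_a_two_part_name():
        assert initials("Ada Lovelace") == "AL"

    # 2. Write just enough code to make the test pass, re-running it until it does.
    def initials(full_name):
        """Return the upper-case initials of a space-separated name."""
        return "".join(part[0].upper() for part in full_name.split())

    # 3. Refactor if needed, re-running this test and all previously added tests
    #    to confirm they still pass, then repeat for the next small piece of code.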
The tests written are primarily unit level and are code-focused, though tests may also be written at the integration or system levels. Test-driven development gained its popularity through Extreme Programming [Beck02], but is also used in other Agile methodologies and sometimes in sequential lifecycles. It helps developers focus on clearly-defined expected results. The tests are automated and are used in continuous integration.

Acceptance Test-Driven Development

Acceptance test-driven development [Adzic09] defines acceptance criteria and tests during the creation of user stories (see Section 1.2.2). Acceptance test-driven development is a collaborative approach that allows every stakeholder to understand how the software component has to behave and what the developers, testers, and business representatives need to ensure this behavior. The process of acceptance test-driven development is explained in Section 3.3.2.

Acceptance test-driven development creates reusable tests for regression testing. Specific tools support creation and execution of such tests, often within the continuous integration process. These tools can connect to data and service layers of the application, which allows tests to be executed at the system or acceptance level. Acceptance test-driven development allows quick resolution of defects and validation of feature behavior. It helps determine if the acceptance criteria are met for the feature.

Behavior-Driven Development

Behavior-driven development [Chelimsky10] allows a developer to focus on testing the code based on the expected behavior of the software. Because the tests are based on the exhibited behavior of the software, the tests
are generally easier for other team members and stakeholders to understand.
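As an illustration, the sketch below phrases a test around an invented scenario in the Given/When/Then structure that behavior-driven development encourages; dedicated BDD tools (e.g., Cucumber) support this structure with their own syntax, whereas here it is written as a plain Python test. The scenario, the Account class, and the rule of three failed attempts are assumptions made for the example.

    # Invented scenario: a standard user is locked out after three failed
    # login attempts.

    class Account:
        def __init__(self):
            self.failed_attempts = 0
            self.locked = False

        def login(self, password_ok):
            """Record a login attempt; lock the account after three failures."""
            if self.locked:
                return False
            if not password_ok:
                self.failed_attempts += 1
                if self.failed_attempts >= 3:
                    self.locked = True
                return False
            return True

    def test_account_locks_after_three_failed_attempts():
        # Given a standard account that is not locked
        account = Account()
        # When the wrong password is entered three times
        for _ in range(3):
            account.login(password_ok=False)
        # Then the account is locked and further logins are rejected
        assert account.locked
        assert account.login(password_ok=True) is False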
As mentioned earlier, an iteration starts with iteration planning, which culminates in estimated tasks on a task board. These tasks can be prioritized in part based on the level of quality risk associated with them. Tasks associated with higher risks should start earlier and involve more testing effort. Tasks associated with lower risks should start later and involve less testing effort.

An example of how the quality risk analysis process in an Agile project may be carried out during iteration planning is outlined in the following steps:
1. Gather the Agile team members together, including the tester(s)
2. List all the backlog items for the current iteration (e.g., on a task board)
3. Identify the quality risks associated with each item, considering all relevant quality characteristics
4. Assess each identified risk, which includes two activities: categorizing the risk and determining its level of risk based on the impact and the likelihood of defects

… including developers, testers, and business representatives.

Quality risks can also be mitigated before test execution starts. For example, if problems with the user stories are found during risk identification, the project team can thoroughly review user stories as a mitigating strategy.

3.2.2 Estimating Testing Effort Based on Content and Risk

During release planning, the Agile team estimates the effort required to complete the release. The estimate addresses the testing effort as well. A common estimation technique used in Agile projects is planning poker, a consensus-based technique. The product owner or customer reads a user story to the estimators. Each estimator has a deck of cards with values similar to the Fibonacci sequence (i.e., 0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …) or any other progression of choice (e.g., shirt sizes ranging from extra-small to extra-extra-large). The values represent the number of story points, effort days, or other units in which the team estimates. The Fibonacci sequence is recommended because the numbers in the sequence reflect that uncertainty grows proportionally with the size of the story. A high estimate usually
means that the story is not well understood or should be broken down into multiple smaller stories.

The estimators discuss the feature, and ask questions of the product owner as needed. Aspects such as development and testing effort, complexity of the story, and scope of testing play a role in the estimation. Therefore, it is advisable to include the risk level of a backlog item, in addition to the priority specified by the product owner, before the planning poker session is initiated. When the feature has been fully discussed, each estimator privately selects one card to represent his or her estimate. All cards are then revealed at the same time. If all estimators selected the same value, that becomes the estimate. If not, the estimators discuss the differences in estimates after which the poker round is repeated until agreement is reached, either by consensus or by applying rules (e.g., use the median, use the highest score) to limit the number of poker rounds. These discussions ensure a reliable estimate of the effort needed to complete product backlog items requested by the product owner and help improve collective knowledge of what has to be done [Cohn04].
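As a small illustration of the stopping rules mentioned above (consensus, or an agreed fall-back such as the median), the sketch below shows how the outcome of a single poker round might be resolved. The function and the example values are invented; they are not part of the planning poker technique itself.

    from statistics import median

    def resolve_round(estimates, last_round=False):
        """Return the agreed estimate for one poker round, or None if the
        estimators should discuss the differences and vote again."""
        if len(set(estimates)) == 1:
            return estimates[0]        # consensus: everyone chose the same card
        if last_round:
            return median(estimates)   # agreed fall-back rule to limit rounds
        return None                    # discuss outliers, then repeat the round

    print(resolve_round([5, 5, 5, 5, 5]))                    # 5 (consensus)
    print(resolve_round([3, 5, 5, 8, 13]))                   # None (discuss and re-vote)
    print(resolve_round([3, 5, 5, 8, 13], last_round=True))  # 5 (median applied)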
3.3 Techniques in Agile Projects

Many of the test techniques and testing levels that apply to traditional projects can also be applied to Agile projects. However, for Agile projects, there are some specific considerations and variances in test techniques, terminologies, and documentation that should be considered.

3.3.1 Acceptance Criteria, Adequate Coverage, and Other Information for Testing

Agile projects outline initial requirements as user stories in a prioritized backlog at the start of the project. Initial requirements are short and usually follow a predefined format (see Section 1.2.2). Non-functional requirements, such as usability and performance, are also important and can be specified as unique user stories or connected to other functional user stories. Non-functional requirements may follow a predefined format or standard, such as [ISO25000], or an industry specific standard.

The user stories serve as an important test basis. Other possible test bases include:
• Experience from previous projects
• Existing functions, features, and quality characteristics of the system
• Code, architecture, and design
• User profiles (context, system configurations, and user behavior)
• Information on defects from existing and previous projects
• A categorization of defects in a defect taxonomy
• Applicable standards (e.g., [DO-178B] for avionics software)
• Quality risks (see Section 3.2.1)

During each iteration, developers create code which implements the functions and features described in the user stories, with the relevant quality characteristics, and this code is verified and validated via acceptance testing. To be testable, acceptance criteria should address the following topics where relevant [Wiegers13]:
• Functional behavior: The externally observable behavior with user actions as input operating under certain configurations.
• Quality characteristics: How the system performs the specified behavior. The characteristics may also be referred to as quality attributes or non-functional requirements. Common quality characteristics are performance, reliability, usability, etc.
• Scenarios (use cases): A sequence of actions between an external actor (often a user) and the system, in order to accomplish a specific goal or business task.
• Business rules: Activities that can only be performed in the system under certain conditions defined by outside procedures and constraints (e.g., the procedures used by an insurance company to handle insurance claims).
• External interfaces: Descriptions of the connections between the system to be developed and the outside world. External interfaces can be divided into different types (user interface, interface to other systems, etc.).
• Constraints: Any design and implementation constraint that will restrict the options for the developer. Devices with embedded software must often respect physical constraints such as size, weight, and interface connections.
• Data definitions: The customer may describe the format, data type, allowed values, and default values for a data item in the composition of a complex business data structure (e.g., the ZIP code in a U.S. mail address).

In addition to the user stories and their associated acceptance criteria, other information is relevant for the tester, including:
• How the system is supposed to work and be used
• The system interfaces that can be used/accessed to test the system
• Whether current tool support is sufficient
• Whether the tester has enough knowledge and skill to perform the necessary tests

Testers will often discover the need for additional information (e.g., code coverage) throughout the iterations and should work collaboratively with the rest of the Agile team members to obtain that information. Relevant information plays a part in determining whether a particular activity can be considered done. This concept of the definition of done is critical in Agile projects and applies in a number of different ways as discussed in the following sub-subsections.

Test Levels

Each test level has its own definition of done. The following list gives examples that may be relevant for the different test levels.
• Unit testing
  • 100% decision coverage where possible, with careful reviews of any infeasible paths
  • Static analysis performed on all code
  • No unresolved major defects (ranked based on priority and severity)
  • No known unacceptable technical debt remaining in the design and the code [Jones11]
  • All code, unit tests, and unit test results reviewed
  • All unit tests automated
  • Important characteristics are within agreed limits (e.g., performance)
• Integration testing
  • All functional requirements tested, including both positive and negative tests, with the number of tests based on size, complexity, and risks
  • All interfaces between units tested
  • All quality risks covered according to the agreed extent of testing
  • No unresolved major defects (prioritized according to risk and importance)
  • All defects found are reported
  • All regression tests automated, where possible, with all automated tests stored in a common repository
• System testing
  • End-to-end tests of user stories, features, and functions
  • All user personas covered
  • The most important quality characteristics of the system covered (e.g., performance, robustness, reliability)
  • Testing done in a production-like environment(s), including all hardware and software for all supported configurations, to the extent possible
  • All quality risks covered according to the agreed extent of testing
  • All regression tests automated, where possible, with all automated tests stored in a common repository
  • All defects found are reported and possibly fixed
  • No unresolved major defects (prioritized according to risk and importance)

User Story

The definition of done for user stories may be determined by the following criteria:
• The user stories selected for the iteration are complete, understood by the team, and have detailed, testable acceptance criteria
• All the elements of the user story are specified and reviewed, including the user story acceptance tests, have been completed
• Tasks necessary to implement and test the selected user stories have been identified and estimated by the team

Feature

The definition of done for features, which may span multiple user stories or epics, may include:
• All constituent user stories, with acceptance criteria, are defined and approved by the customer
• The design is complete, with no known technical debt
• The code is complete, with no known technical debt or unfinished refactoring
• Unit tests have been performed and have achieved the defined level of coverage
• Integration tests and system tests for the feature have been performed according to the defined coverage criteria
• No major defects remain to be corrected
• Feature documentation is complete, which may include release notes, user manuals, and online help functions

Iteration

The definition of done for the iteration may include the following:
• All features for the iteration are ready and individually tested according to the feature level criteria
• Any non-critical defects that cannot be fixed within the constraints of the iteration added to the product backlog and prioritized
• Integration of all features for the iteration completed and tested
• Documentation written, reviewed, and approved

At this point, the software is potentially releasable because the iteration has been successfully completed, but not all iterations result in a release.

Release

The definition of done for a release, which may span multiple iterations, may include the following areas:
• Coverage: All relevant test basis elements for all contents of the release have been covered by testing. The adequacy of the coverage is determined by what is new or changed, its complexity and size, and the associated risks of failure.
• Quality: The defect intensity (e.g., how many defects are found per day or per transaction), the defect density (e.g., the number of defects found compared to the number of user stories, effort, and/or quality attributes), and the estimated number of remaining defects are within acceptable limits; the consequences of unresolved and remaining defects (e.g., the severity and priority) are understood and acceptable; and the residual level of risk associated with each identified quality risk is understood and acceptable.
• Time: If the pre-determined delivery date has been reached, the business considerations associated with releasing and not releasing need to be considered.
• Cost: The estimated lifecycle cost should be used to calculate the return on investment for the delivered system (i.e., the calculated development and maintenance cost should be considerably lower than the expected total sales of the product). The main part of the lifecycle cost often comes from maintenance after the product has been released, due to the number of defects escaping to production.

3.3.2 Applying Acceptance Test-Driven Development

Acceptance test-driven development is a test-first approach. Test cases are created prior to implementing the user story. The test cases are created by the Agile team, including the developer, the tester, and the business representatives [Adzic09] and may be manual or automated. The first step is a specification workshop where the user story is analyzed, discussed, and written by developers, testers, and business representatives. Any incompleteness, ambiguities, or errors in the user story are fixed during this process.

The next step is to create the tests. This can be done by the team together or by the tester individually. In any case, an independent person such as a business representative validates the tests. The tests are examples that describe the specific characteristics of the user story. These examples will help the team implement the user story correctly. Since examples and tests are the same, these terms are often used interchangeably. The work starts with basic examples and open questions.

Typically, the first tests are the positive tests, confirming the correct behavior without exception or error conditions, comprising the sequence of activities executed if everything goes as expected. After the positive path tests are done, the team should write negative path tests and cover non-functional attributes as well (e.g., performance, usability). Tests are expressed in a way that every stakeholder is able to understand, containing sentences in natural language involving the necessary preconditions, if any, the inputs, and the related outputs.

The examples must cover all the characteristics of the user story and should not add to the story. This means that an example should not exist which describes an aspect of the user story not documented in the story itself. In addition, no two examples should describe the same characteristics of the user story.
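The sketch below shows how such examples could be captured as automated acceptance tests, starting with the positive path and then adding negative path tests, as described above. The user story ("a registered customer can redeem a valid voucher code exactly once"), the VoucherService class, and all names and values are invented for illustration.

    # Invented user story: a registered customer can redeem a valid voucher
    # code exactly once; invalid or already-used codes are rejected.

    class VoucherService:
        def __init__(self, valid_codes):
            self._unused = set(valid_codes)

        def redeem(self, code):
            """Redeem a voucher code; return True on success, False otherwise."""
            if code in self._unused:
                self._unused.remove(code)
                return True
            return False

    # Positive example: the expected sequence when everything goes as planned.
    def test_valid_code_is_redeemed():
        service = VoucherService(valid_codes={"SPRING10"})
        assert service.redeem("SPRING10") is True

    # Negative examples: error conditions derived from the same story.
    def test_unknown_code_is_rejected():
        service = VoucherService(valid_codes={"SPRING10"})
        assert service.redeem("WINTER99") is False

    def test_code_cannot_be_redeemed_twice():
        service = VoucherService(valid_codes={"SPRING10"})
        service.redeem("SPRING10")
        assert service.redeem("SPRING10") is False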
3.3.3 Functional and Non-Functional Black Box Test Design

In Agile testing, many tests are created by testers concurrently with the developers' programming activities. Just as the developers are programming based on the user stories and acceptance criteria, so are the testers creating tests based on user stories and their acceptance criteria. (Some tests, such as exploratory tests and some other experience-based tests, are created later, during test execution, as explained in Section 3.3.4.) Testers can apply traditional black box test design techniques such as equivalence partitioning, boundary value analysis, decision tables, and state transition testing to create these tests. For example, boundary value analysis could be used to select test values when a customer is limited in the number of items they may select for purchase (see the sketch at the end of this section).

In many situations, non-functional requirements can be documented as user stories. Black box test design techniques (such as boundary value analysis) can also be used to create tests for non-functional quality characteristics. The user story might contain performance or reliability requirements. For example, a given execution cannot exceed a time limit or a number of operations may fail less than a certain number of times.

For more information about the use of black box test design techniques, see the Foundation Level syllabus [ISTQB_FL_SYL] and the Advanced Level Test Analyst syllabus [ISTQB_ALTA_SYL].
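To make the purchase-limit example from Section 3.3.3 concrete: if a store allows between 1 and 10 items per purchase, boundary value analysis selects test values on and immediately beside those limits. The function and the limit of 10 are assumptions made for this sketch only.

    MAX_ITEMS_PER_PURCHASE = 10  # assumed business rule for the example

    def quantity_is_valid(quantity):
        """Accept a purchase quantity between 1 and the allowed maximum."""
        return 1 <= quantity <= MAX_ITEMS_PER_PURCHASE

    # Boundary values for the valid partition [1, 10] and its neighbors.
    def test_boundary_values_for_purchase_quantity():
        assert quantity_is_valid(0) is False   # just below the lower boundary
        assert quantity_is_valid(1) is True    # lower boundary
        assert quantity_is_valid(10) is True   # upper boundary
        assert quantity_is_valid(11) is False  # just above the upper boundary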
3.3.4 Exploratory Testing and Agile Testing

Exploratory testing is important in Agile projects due to the limited time available for test analysis and the limited details of the user stories. In order to achieve the best results, exploratory testing should be combined with other experience-based techniques as part of a reactive testing strategy, blended with other testing strategies such as analytical risk-based testing, analytical requirements-based testing, model-based testing, and regression-averse testing. Test strategies and test strategy blending are discussed in the Foundation Level syllabus [ISTQB_FL_SYL].

In exploratory testing, test design and test execution occur at the same time, guided by a prepared test charter. A test charter provides the test conditions to cover during a time-boxed testing session. During exploratory testing, the results of the most recent tests guide the next test. The same white box and black box techniques can be used to design the tests as when performing pre-designed testing.

A test charter may include the following information:
• Actor: intended user of the system
• Purpose: the theme of the charter including what particular objective the actor wants to achieve, i.e., the test conditions
• Setup: what needs to be in place in order to start the test execution
• Priority: relative importance of this charter, based on the priority of the associated user story or the risk level
• Reference: specifications (e.g., user story), risks, or other information sources
• Data: whatever data is needed to carry out the charter
• Activities: a list of ideas of what the actor may want to do with the system (e.g., "Log on to the system as a super user") and what would be interesting to test (both positive and negative tests)
• Oracle notes: how to evaluate the product to determine correct results (e.g., to capture what happens on the screen and compare to what is written in the user's manual)
• Variations: alternative actions and evaluations to complement the ideas described under activities

To manage exploratory testing, a method called session-based test management can be used. A session is defined as an uninterrupted period of testing which could last from 60 to 120 minutes. Test sessions include the following:
• Survey session (to learn how it works)
• Analysis session (evaluation of the functionality or characteristics)
• Deep coverage (corner cases, scenarios, interactions)

The quality of the tests depends on the testers' ability to ask relevant questions about what to test. Examples include the following:
• What is most important to find out about the system?
• In what way may the system fail?
• What happens if...?
• What should happen when...?
• Are customer needs, requirements, and expectations fulfilled?
• Is the system possible to install (and remove if necessary) in all supported upgrade paths?

During test execution, the tester uses creativity, intuition, cognition, and skill to find possible problems with the product. The tester also needs to have good knowledge and understanding of the software under test, the business domain, how the software is used, and how to determine when the system fails.

A set of heuristics can be applied when testing. A heuristic can guide the tester in how to perform the testing and to evaluate the results [Hendrickson]. Examples include:
• Boundaries
• CRUD (Create, Read, Update, Delete)
• Configuration variations are used the same way and some tools have
• Interruptions (e.g., log off, shut down, more relevance for Agile projects than they
or reboot) have in traditional projects. For example,

It is important for the tester to document the process as much as possible. Otherwise, it would be difficult to go back and see how a problem in the system was discovered. The following list provides examples of information that may be useful to document:
• Test coverage: what input data have been used, how much has been covered, and how much remains to be tested
• Evaluation notes: observations made during testing, such as whether the system and feature under test seem stable, whether any defects were found, what is planned as the next step based on the current observations, and any other ideas
• Risk/strategy list: which risks have been covered and which ones remain among the most important ones, whether the initial strategy will be followed, and whether it needs any changes
• Issues, questions, and anomalies: any unexpected behavior, any questions regarding the efficiency of the approach, any concerns about the ideas/test attempts, test environment, test data, misunderstanding of the function, test script, or the system under test
• Actual behavior: recording of actual behavior of the system that needs to be saved (e.g., video, screen captures, output data files)

The information logged should be captured and/or summarized in some form of status management tool (e.g., test management tools, task management tools, the task board), in a way that makes it easy for stakeholders to understand the current status of all the testing that was performed.
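
Teams that want this information in a consistent, shareable form sometimes capture it as structured session notes. The sketch below shows one possible, team-defined format; the field names simply mirror the list above and are not prescribed by the syllabus.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionNotes:
    """Structured notes for one exploratory test session (illustrative format)."""
    charter: str
    test_coverage: List[str] = field(default_factory=list)      # data used, areas covered/remaining
    evaluation_notes: List[str] = field(default_factory=list)   # stability observations, defects, next steps
    risk_strategy: List[str] = field(default_factory=list)      # risks covered/remaining, strategy changes
    issues_questions: List[str] = field(default_factory=list)   # anomalies, open questions, environment concerns
    actual_behavior: List[str] = field(default_factory=list)    # links to videos, screen captures, output files

    def summary(self) -> str:
        """One-line status suitable for a task board or test management tool."""
        return (f"{self.charter}: {len(self.evaluation_notes)} observations, "
                f"{len(self.issues_questions)} open issues")

notes = SessionNotes(charter="Log on as a super user and explore account settings")
notes.evaluation_notes.append("Settings page stable; save button slow on large profiles")
notes.issues_questions.append("Unclear what should happen when the session times out mid-edit")
print(notes.summary())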

3.4 Tools in Agile Projects
Tools described in the Foundation Level syllabus [ISTQB_FL_SYL] are relevant and used by testers on Agile teams. Not all tools are used the same way and some tools have more relevance for Agile projects than they have in traditional projects. For example, although the test management tools, requirements management tools, and incident management tools (defect tracking tools) can be used by Agile teams, some Agile teams opt for an all-inclusive tool (e.g., application lifecycle management or task management) that provides features relevant to Agile development, such as task boards, burndown charts, and user stories. Configuration management tools are important to testers in Agile teams due to the high number of automated tests at all levels and the need to store and manage the associated automated test artifacts.

In addition to the tools described in the Foundation Level syllabus [ISTQB_FL_SYL], testers on Agile projects may also utilize the tools described in the following subsections. These tools are used by the whole team to ensure team collaboration and information sharing, which are key to Agile practices.

3.4.1 Task Management and Tracking Tools
In some cases, Agile teams use physical story/task boards (e.g., whiteboard, corkboard) to manage and track user stories, tests, and other tasks throughout each sprint. Other teams will use application lifecycle management and task management software, including electronic task boards. These tools serve the following purposes:
• Record stories and their relevant development and test tasks, to ensure that nothing gets lost during a sprint
• Capture team members’ estimates on their tasks and automatically calculate the effort required to implement a story, to support efficient iteration planning sessions
• Associate development tasks and test tasks with the same story, to provide a complete picture of the team’s effort required to implement the story
• Aggregate developer and tester updates to the task status as they complete their work, automatically providing a current calculated snapshot of the status of each story, the iteration, and the overall release (a simple sketch of this kind of roll-up follows this list)
• Provide a visual representation (via metrics, charts, and dashboards) of the current state of each user story, the iteration, and the release, allowing all stakeholders, including people on geographically distributed teams, to quickly check status
• Integrate with configuration management tools, which can allow automated recording of code check-ins and builds against tasks and, in some cases, automated status updates for tasks
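
The estimate and status roll-up mentioned above comes down to a very simple calculation. The Python sketch below is illustrative only; the Task and Story types are invented for the example, and real tools track far more detail, but the principle of aggregating task data up to story level is the same.

from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    estimate_hours: float
    done: bool = False

@dataclass
class Story:
    title: str
    tasks: List[Task]

    def remaining_effort(self) -> float:
        # Sum of estimates for tasks that are not yet done.
        return sum(t.estimate_hours for t in self.tasks if not t.done)

    def status(self) -> str:
        done = sum(1 for t in self.tasks if t.done)
        return f"{self.title}: {done}/{len(self.tasks)} tasks done, {self.remaining_effort()}h remaining"

story = Story("Password reset", [
    Task("Design reset flow", 4, done=True),
    Task("Implement reset endpoint", 6, done=True),
    Task("Write acceptance tests", 3),
    Task("Exploratory testing of edge cases", 2),
])
print(story.status())   # e.g., "Password reset: 2/4 tasks done, 5.0h remaining"

Summing the remaining effort across all stories in an iteration, day by day, is essentially what a burndown chart plots.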

3.4.2 Communication and Information Sharing Tools
In addition to e-mail, documents, and verbal communication, Agile teams often use three additional types of tools to support communication and information sharing: wikis, instant messaging, and desktop sharing.

Wikis allow teams to build and share an online knowledge base on various aspects of the project, including the following:
• Product feature diagrams, feature discussions, prototype diagrams, photos of whiteboard discussions, and other information
• Tools and/or techniques for developing and testing found to be useful by other members of the team
• Metrics, charts, and dashboards on product status, which are especially useful when the wiki is integrated with other tools such as the build server and task management system, since the tool can update product status automatically
• Conversations between team members, similar to instant messaging and e-mail, but in a way that is shared with everyone else on the team

Instant messaging, audio teleconferencing, and video chat tools provide the following benefits:
• Allow real-time direct communication between team members, especially in distributed teams
• Involve distributed teams in stand-up meetings
• Reduce telephone bills by use of voice-over-IP technology, removing cost constraints that could reduce team member communication in distributed settings

Desktop sharing and capturing tools provide the following benefits:
• Enable product demonstrations, code reviews, and even pairing in distributed teams
• Allow product demonstrations at the end of each iteration to be captured and posted to the team’s wiki

These tools should be used to complement and extend, not replace, face-to-face communication in Agile teams.

3.4.3 Software Build and Distribution Tools
As discussed earlier in this syllabus, daily build and deployment of software is a key practice in Agile teams. This requires the use of continuous integration tools and build distribution tools. The uses, benefits, and risks of these tools were described earlier in Section 1.2.4.

3.4.4 Configuration Management Tools
On Agile teams, configuration management tools may be used to store not only source code and automated tests, but also manual tests and other test work products, which are often kept in the same repository as the product source code. This provides traceability between which versions of the software were tested with which particular versions of the tests, and allows for rapid change without losing historical information. The main types of version control systems include centralized source control systems and distributed version control systems. The team size, structure, location, and requirements to integrate with other tools will determine which version control system is right for a particular Agile project.
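
Because tests and product code live in the same repository, a single version identifier can describe both. As a small, hedged example, assuming a Git repository and pytest (pytest_report_header is a standard pytest hook; the rest is illustrative), a conftest.py can record which code and test version a test run used:

# conftest.py (illustrative): record the repository version in every pytest run,
# so test results can be traced back to the exact code and test versions used.
import subprocess

def _repo_version() -> str:
    try:
        # 'git describe' reports the nearest tag plus commit information.
        return subprocess.run(
            ["git", "describe", "--tags", "--always", "--dirty"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"

def pytest_report_header(config):
    # Standard pytest hook: adds a line to the test report header.
    return f"product/tests version: {_repo_version()}"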

3.4.5 Test Design, Implementation, and Execution Tools
Some tools are useful to Agile testers at specific points in the software testing process. While most of these tools are not new or specific to Agile, they provide important capabilities given the rapid change of Agile projects.
• Test design tools: Use of tools such as mind maps has become more popular to quickly design and define tests for a new feature.
• Test case management tools: The type of test case management tool used in Agile may be part of the whole team’s application lifecycle management or task management tool.
• Test data preparation and generation tools: Tools that generate data to populate an application’s database are very beneficial when a lot of data and combinations of data are necessary to test the application. These tools can also help re-define the database structure as the product undergoes changes during an Agile project and refactor the scripts to generate the data. This allows quick updating of test data as changes occur. Some test data preparation tools use production data sources as raw material and use scripts to remove or anonymize sensitive data. Other test data preparation tools can help with validating large data inputs or outputs. (A brief sketch of data generation and anonymization follows this list.)
• Test data load tools: After data has been generated for testing, it needs to be loaded into the application. Manual data entry is often time consuming and error prone, but data load tools are available to make the process reliable and efficient. In fact, many of the data generator tools include an integrated data load component. In other cases, bulk-loading using the database management system is also possible.
• Automated test execution tools: There are test execution tools which are more aligned to Agile testing. Specific tools are available via both commercial and open source avenues to support test-first approaches, such as behavior-driven development, test-driven development, and acceptance test-driven development. These tools allow testers and business staff to express the expected system behavior in tables or natural language using keywords.
• Exploratory test tools: Tools that capture and log activities performed on an application during an exploratory test session are beneficial to the tester and developer, as they record the actions taken. This is useful when a defect is found, as the actions taken before the failure occurred have been captured and can be used to report the defect to the developers. Logging the steps performed in an exploratory test session may also prove beneficial if the test is ultimately included in the automated regression test suite.
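
As a small illustration of the test data preparation ideas above (the field names and masking rules are invented for the example; real tools and privacy requirements will differ), production-like records can be anonymized and combined into test inputs:

import hashlib
import itertools

def anonymize(record: dict) -> dict:
    """Replace sensitive fields with stable pseudonyms so data stays realistic but safe."""
    digest = hashlib.sha256(record["email"].encode()).hexdigest()[:8]
    return {**record, "name": f"user_{digest}", "email": f"user_{digest}@example.test"}

production_sample = [
    {"name": "Alice Example", "email": "alice@example.com", "country": "SE"},
    {"name": "Bob Example", "email": "bob@example.com", "country": "DE"},
]
test_records = [anonymize(r) for r in production_sample]

# Generate combinations of input data for broader coverage.
currencies = ["EUR", "SEK", "USD"]
order_sizes = [1, 10, 1000]   # include a large value as a boundary-style case
test_cases = [
    {**record, "currency": currency, "quantity": quantity}
    for record, currency, quantity in itertools.product(test_records, currencies, order_sizes)
]
print(f"{len(test_cases)} generated test cases, e.g. {test_cases[0]}")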

3.4.6 Cloud Computing and Virtualization Tools
Virtualization allows a single physical resource (server) to operate as many separate, smaller resources. When virtual machines or cloud instances are used, teams have a greater number of servers available to them for development and testing. This can help to avoid delays associated with waiting for physical servers. Provisioning a new server or restoring a server is more efficient with snapshot capabilities built into most virtualization tools. Some test management tools now utilize virtualization technologies to snapshot servers at the point when a fault is detected, allowing testers to share the snapshot with the developers investigating the fault.
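
One way such an integration can look in practice is sketched below. It assumes a libvirt-managed virtual machine and the virsh command line tool, and the hook wiring shown here is illustrative rather than a feature of any particular test management tool.

# conftest.py (illustrative): snapshot a virtualized test server when a test fails,
# so the failed state can be shared with developers. Assumes 'virsh' is installed
# and TEST_VM names an existing virtual machine; both are assumptions for this sketch.
import subprocess
import pytest

TEST_VM = "agile-test-server"   # hypothetical VM name

def snapshot_vm(snapshot_name: str) -> None:
    subprocess.run(
        ["virsh", "snapshot-create-as", TEST_VM, snapshot_name,
         f"snapshot taken after failing test {snapshot_name}"],
        check=False,   # do not fail the test run if snapshotting is unavailable
    )

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        snapshot_vm(item.name)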