ISTQB CTFL Syllabus-V4.0
Foundation Level
QC is a product-oriented, corrective approach that focuses on those activities supporting the achievement
of appropriate levels of quality. Testing is a major form of quality control, while others include formal
methods (model checking and proof of correctness), simulation and prototyping.
QA is a process-oriented, preventive approach that focuses on the implementation and improvement of
processes. It works on the basis that if a good process is followed correctly, then it will generate a good
product. QA applies to both the development and testing processes, and is the responsibility of
everyone on a project.
Test results are used by QA and QC. In QC they are used to fix defects, while in QA they
provide feedback on how well the development and test processes are performing.
illustration of the Pareto principle. Predicted defect clusters, and actual defect clusters observed during
testing or in operation, are an important input for risk-based testing (see section 5.2).
5. Tests wear out. If the same tests are repeated many times, they become increasingly ineffective in
detecting new defects (Beizer 1990). To overcome this effect, existing tests and test data may need to be
modified, and new tests may need to be written. However, in some cases, repeating the same tests can
have a beneficial outcome, e.g., in automated regression testing (see section 2.2.3).
6. Testing is context dependent. There is no single universally applicable approach to testing. Testing is
done differently in different contexts (Kaner 2011).
7. Absence-of-defects fallacy. It is a fallacy (i.e., a misconception) to expect that software verification
will ensure the success of a system. Thoroughly testing all the specified requirements and fixing all the
defects found could still produce a system that does not fulfill the users’ needs and expectations, that
does not help in achieving the customer’s business goals, and that is inferior compared to other
competing systems. In addition to verification, validation should also be carried out (Boehm 1981).
also includes defining the test data requirements, designing the test environment and identifying
any other required infrastructure and tools. Test design answers the question “how to test?”.
Test implementation includes creating or acquiring the testware necessary for test execution (e.g., test
data). Test cases can be organized into test procedures and are often assembled into test suites. Manual
and automated test scripts are created. Test procedures are prioritized and arranged within a test
execution schedule for efficient test execution (see section 5.1.5). The test environment is built and
verified to be set up correctly.
Test execution includes running the tests in accordance with the test execution schedule (test runs).
Test execution may be manual or automated. Test execution can take many forms, including continuous
testing or pair testing sessions. Actual test results are compared with the expected results. The test
results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to
report the anomalies based on the failures observed (see section 5.5).
Test completion activities usually occur at project milestones (e.g., release, end of iteration, test level
completion). For any unresolved defects, change requests or product backlog items are created. Any testware
that may be useful in the future is identified and archived or handed over to the appropriate teams. The
test environment is shut down to an agreed state. The test activities are analyzed to identify lessons
learned and improvements for future iterations, releases, or projects (see section 2.1.6). A test
completion report is created and communicated to the stakeholders.
1.4.3. Testware
Testware is created as output work products from the test activities described in section 1.4.1. There is a
significant variation in how different organizations produce, shape, name, organize and manage their
work products. Proper configuration management (see section 5.4) ensures consistency and integrity
of work products. The following list of work products is not exhaustive:
• Test planning work products include: test plan, test schedule, risk register, and entry and exit
criteria (see section 5.1). A risk register is a list of risks together with risk likelihood, risk impact
and information about risk mitigation (see section 5.2). The test schedule, risk register, and entry and
exit criteria are often part of the test plan.
• Test monitoring and control work products include: test progress reports (see section 5.3.2),
documentation of control directives (see section 5.3) and risk information (see section 5.2).
• Test analysis work products include: (prioritized) test conditions (e.g., acceptance criteria,
see section 4.5.2), and defect reports regarding defects in the test basis (if not fixed directly).
• Test design work products include: (prioritized) test cases, test charters, coverage items, test
data requirements and test environment requirements.
• Test implementation work products include: test procedures, automated test scripts, test
suites, test data, test execution schedule, and test environment elements. Examples of test
environment elements include: stubs, drivers, simulators, and service virtualizations.
• Test execution work products include: test logs, and defect reports (see section 5.5).
• Test completion work products include: test completion report (see section 5.3.2), action
items for improvement of subsequent projects or iterations, documented lessons learned, and
change requests (e.g., as product backlog items).
• Testers are involved in reviewing work products as soon as drafts of this documentation are
available, so that this earlier testing and defect detection can support the shift-left strategy (see
section 2.1.5)
• Automation through a delivery pipeline reduces the need for repetitive manual testing
• The risk in regression is minimized due to the scale and range of automated regression tests
DevOps is not without its risks and challenges, which include:
• The DevOps delivery pipeline must be defined and established
• CI / CD tools must be introduced and maintained
• Test automation requires additional resources and may be difficult to establish and maintain
Although DevOps comes with a high level of automated testing, manual testing – especially from the
user's perspective – will still be needed.
systems is also possible. System testing may be performed by an independent test team, and
is related to specifications for the system.
• System integration testing focuses on testing the interfaces between the system under test and
other systems and external services. System integration testing requires suitable test
environments, preferably similar to the operational environment.
• Acceptance testing focuses on validation and on demonstrating readiness for deployment,
which means that the system fulfills the user’s business needs. Ideally, acceptance testing
should be performed by the intended users. The main forms of acceptance testing are: user
acceptance testing (UAT), operational acceptance testing, contractual and regulatory acceptance
testing, alpha testing and beta testing.
Test levels are distinguished by the following non-exhaustive list of attributes, to avoid overlapping of test
activities:
• Test object
• Test objectives
• Test basis
• Defects and failures
• Approach and responsibilities
they use the same functional tests, but check that while performing the function, a non-functional
constraint is satisfied (e.g., checking that a function performs within a specified time, or a function can be
ported to a new platform). The late discovery of non-functional defects can pose a serious threat to the
success of a project. Non-functional testing sometimes needs a very specific test environment, such as a
usability lab for usability testing.
Black-box testing (see section 4.2) is specification-based and derives tests from documentation
external to the test object. The main objective of black-box testing is checking the system's behavior
against its specifications.
White-box testing (see section 4.3) is structure-based and derives tests from the system's
implementation or internal structure (e.g., code, architecture, work flows, and data flows). The main
objective of white-box testing is to cover the underlying structure with tests to an acceptable level.
All four of the above-mentioned test types can be applied to all test levels, although the focus will be
different at each level. Different test techniques can be used to derive test conditions and test cases
for all the mentioned test types.
Code defects can be detected using static analysis more efficiently than in dynamic testing,
usually resulting in both fewer code defects and a lower overall development effort.
• Static testing finds defects directly, while dynamic testing causes failures from which
the associated defects are determined through subsequent analysis
• Static testing may more easily detect defects that lie on paths through the code that are
rarely executed or hard to reach using dynamic testing
• Static testing can be applied to non-executable work products, while dynamic testing can only be
applied to executable work products
• Static testing can be used to measure quality characteristics that are not dependent on executing
code (e.g., maintainability), while dynamic testing can be used to measure quality characteristics
that are dependent on executing code (e.g., performance efficiency)
Typical defects that are easier and/or cheaper to find through static testing include:
• Defects in requirements (e.g., inconsistencies, ambiguities, contradictions, omissions,
inaccuracies, duplications)
• Design defects (e.g., inefficient database structures, poor modularization)
• Certain types of coding defects (e.g., variables with undefined values, undeclared
variables, unreachable or duplicated code, excessive code complexity); see the sketch after this list
• Deviations from standards (e.g., lack of adherence to naming conventions in coding standards)
• Incorrect interface specifications (e.g., mismatched number, type or order of parameters)
• Specific types of security vulnerabilities (e.g., buffer overflows)
• Gaps or inaccuracies in test basis coverage (e.g., missing tests for an acceptance criterion)
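To illustrate the coding-defect examples referenced in the list above, the following minimal sketch uses hypothetical code (the function and variable names are illustrative, not from the syllabus) containing two anomalies that a typical static analysis tool can report without executing the program:

```python
# Illustrative sketch (hypothetical code): coding defects that static
# analysis can flag without executing the program.
def apply_discount(price):
    if price > 100:
        return price * 0.9
        print("discount applied")  # unreachable code: statement after return
    total = price + discount       # 'discount' is an undeclared (undefined) name
    return total
```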
Frequent stakeholder feedback throughout the SDLC can prevent misunderstandings about
requirements and ensure that changes to requirements are understood and implemented earlier. This
helps the development team to improve their understanding of what they are building. It allows them to
focus on those features that deliver the most value to the stakeholders and that have the most positive
impact on identified risks.
• Moderator (also known as the facilitator) – ensures the effective running of review meetings,
including mediation, time management, and a safe review environment in which everyone can
speak freely
• Scribe (also known as recorder) – collates anomalies from reviewers and records review
information, such as decisions and new anomalies found during the review meeting
• Defining clear objectives and measurable exit criteria. Evaluation of participants should never
be an objective
• Choosing the appropriate review type to achieve the given objectives, and to suit the type of
work product, the review participants, the project needs and context
• Conducting reviews on small chunks, so that reviewers do not lose concentration during an
individual review and/or the review meeting (when held)
• Providing feedback from reviews to stakeholders and authors so they can improve the product
and their activities (see section 3.2.1)
• Providing adequate time to participants to prepare for the review
• Support from management for the review process
• Making reviews part of the organization’s culture, to promote learning and process improvement
• Providing adequate training for all participants so they know how to fulfil their role
• Facilitating meetings
A partition containing valid values is called a valid partition. A partition containing invalid values is called
an invalid partition. The definitions of valid and invalid values may vary among teams and organizations.
For example, valid values may be interpreted as those that should be processed by the test object or as
those for which the specification defines their processing. Invalid values may be interpreted as those
that should be ignored or rejected by the test object or as those for which no processing is defined in the
test object specification.
In EP, the coverage items are the equivalence partitions. To achieve 100% coverage with this technique,
test cases must exercise all identified partitions (including invalid partitions) by covering each partition at
least once. Coverage is measured as the number of partitions exercised by at least one test case,
divided by the total number of identified partitions, and is expressed as a percentage.
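As a minimal sketch of this coverage calculation, assume a hypothetical input field with one valid and two invalid partitions (the partition names, value ranges and test inputs are illustrative, not taken from the syllabus):

```python
# Minimal sketch (illustrative data): EP coverage for a hypothetical "age"
# input with one valid and two invalid partitions.
partitions = {
    "valid_18_to_65": range(18, 66),     # valid partition
    "invalid_below_18": range(0, 18),    # invalid partition
    "invalid_above_65": range(66, 130),  # invalid partition
}

test_inputs = [25, 10]  # inputs exercised by the executed test cases

covered = {
    name for name, values in partitions.items()
    if any(value in values for value in test_inputs)
}

# Coverage = exercised partitions / identified partitions, as a percentage
coverage = len(covered) / len(partitions) * 100
print(f"EP coverage: {coverage:.0f}%")  # 2 of 3 partitions covered -> 67%
```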
Many test objects include multiple sets of partitions (e.g., test objects with more than one input
parameter), which means that a test case will cover partitions from different sets of partitions. The
simplest coverage criterion in the case of multiple sets of partitions is called Each Choice coverage
(Ammann 2016). Each Choice coverage requires test cases to exercise each partition from each set of
partitions at least once. Each Choice coverage does not take into account combinations of partitions.
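A minimal sketch of Each Choice coverage, assuming two hypothetical sets of partitions (payment method and customer type; the names and test cases are illustrative only):

```python
# Minimal sketch (illustrative names): Each Choice coverage for two sets of
# partitions. Every partition of every set must be exercised at least once;
# combinations of partitions are not required.
payment_methods = {"card", "voucher"}         # first set of partitions
customer_types = {"new", "returning", "vip"}  # second set of partitions

# Three test cases suffice here, because the largest set has three partitions.
test_cases = [("card", "new"), ("voucher", "returning"), ("card", "vip")]

each_choice_achieved = (
    payment_methods <= {method for method, _ in test_cases}
    and customer_types <= {customer for _, customer in test_cases}
)
print(each_choice_achieved)  # True: each partition of each set is covered
```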
In all states coverage, the coverage items are the states. To achieve 100% all states coverage, test
cases must ensure that all the states are visited. Coverage is measured as the number of visited
states divided by the total number of states, and is expressed as a percentage.
In valid transitions coverage (also called 0-switch coverage), the coverage items are single valid
transitions. To achieve 100% valid transitions coverage, test cases must exercise all the valid transitions.
Coverage is measured as the number of exercised valid transitions divided by the total number of valid
transitions, and is expressed as a percentage.
In all transitions coverage, the coverage items are all the transitions shown in a state table. To achieve
100% all transitions coverage, test cases must exercise all the valid transitions and attempt to execute
invalid transitions. Testing only one invalid transition in a single test case helps to avoid fault masking,
i.e., a situation in which one defect prevents the detection of another. Coverage is measured as the
number of valid and invalid transitions exercised or attempted to be covered by executed test cases,
divided by the total number of valid and invalid transitions, and is expressed as a percentage.
All states coverage is weaker than valid transitions coverage, because it can typically be achieved without
exercising all the transitions. Valid transitions coverage is the most widely used coverage criterion.
Achieving full valid transitions coverage guarantees full all states coverage. Achieving full all transitions
coverage guarantees both full all states coverage and full valid transitions coverage and should be a
minimum requirement for mission and safety-critical software.
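These coverage measures can be sketched with a small, hypothetical state model (the states, events and exercised transitions below are illustrative, not from the syllabus):

```python
# Minimal sketch (hypothetical state model): measuring all states coverage
# and valid transitions (0-switch) coverage from the transitions exercised
# by executed test cases.
initial_state = "idle"
states = {"idle", "active", "locked"}
valid_transitions = {            # (from_state, event) -> to_state
    ("idle", "start"): "active",
    ("active", "stop"): "idle",
    ("active", "lock"): "locked",
    ("locked", "unlock"): "active",
}

# Transitions exercised by the executed test cases
exercised = {("idle", "start"), ("active", "stop"), ("active", "lock")}

visited_states = {initial_state} | {valid_transitions[t] for t in exercised}
all_states_coverage = len(visited_states) / len(states) * 100
valid_transitions_coverage = len(exercised) / len(valid_transitions) * 100

print(f"All states coverage: {all_states_coverage:.0f}%")                # 100%
print(f"Valid transitions coverage: {valid_transitions_coverage:.0f}%")  # 75%
```

In this sketch, all states coverage already reaches 100% although only three of the four valid transitions are exercised, which illustrates why all states coverage is the weaker criterion.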
• The types of errors the developers tend to make and the types of defects that result from these
errors
• The types of failures that have occurred in other, similar applications
In general, errors, defects and failures may be related to: input (e.g., correct input not accepted,
parameters wrong or missing), output (e.g., wrong format, wrong result), logic (e.g., missing cases, wrong
operator), computation (e.g., incorrect operand, wrong computation), interfaces (e.g., parameter
mismatch, incompatible types), or data (e.g., incorrect initialization, wrong type).
Fault attacks are a methodical approach to the implementation of error guessing. This technique requires the
tester to create or acquire a list of possible errors, defects and failures, and to design tests that will identify
defects associated with the errors, expose the defects, or cause the failures. These lists can be built based on
experience, defect and failure data, or from common knowledge about why software fails.
See (Whittaker 2002, Whittaker 2003, Andrews 2006) for more information on error guessing and fault
attacks.
Some checklist entries may gradually become less effective over time because the developers will
learn to avoid making the same errors. New entries may also need to be added to reflect newly found
high severity defects. Therefore, checklists should be regularly updated based on defect analysis.
However, care should be taken to avoid letting the checklist become too long (Gawande 2009).
In the absence of detailed test cases, checklist-based testing can provide guidelines and some degree of
consistency for the testing. If the checklists are high-level, some variability in the actual testing is likely to
occur, resulting in potentially greater coverage but less repeatability.
5.3 Test Monitoring, Test Control and Test Completion
FL-5.3.1 (K1) Recall metrics used for testing
FL-5.3.2 (K2) Summarize the purposes, content, and audiences for test reports
FL-5.3.3 (K2) Exemplify how to communicate the status of testing
software development. In Planning Poker, estimates are usually made using cards with numbers
that represent the effort size.
Three-point estimation. In this expert-based technique, three estimations are made by the experts: the
most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The
final estimate (E) is their weighted arithmetic mean. In the most popular version of this technique, the
estimate is calculated as E = (a + 4*m + b) / 6. The advantage of this technique is that it allows the
experts to calculate the measurement error: SD = (b – a) / 6. For example, if the estimates (in person-
hours) are: a=6, m=9 and b=18, then the final estimation is 10±2 person-hours (i.e., between 8 and 12
person-hours), because E = (6 + 4*9 + 18) / 6 = 10 and SD = (18 – 6) / 6 = 2.
See (Kan 2003, Koomen 2006, Westfall 2009) for these and many other test estimation techniques.
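The worked example above can be reproduced directly, for instance in Python:

```python
# Worked example from the text: E = (a + 4*m + b) / 6 and SD = (b - a) / 6,
# with a=6, m=9 and b=18 person-hours.
a, m, b = 6, 9, 18       # most optimistic, most likely, most pessimistic

E = (a + 4 * m + b) / 6  # weighted arithmetic mean
SD = (b - a) / 6         # standard deviation (measurement error)

print(f"Estimate: {E:.0f} ± {SD:.0f} person-hours")  # 10 ± 2 person-hours
```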
(component) tests, integration (component integration) tests, and end-to-end tests. Other test levels (see
section 2.2.1) can also be used.
These two factors express the risk level, which is a measure of the risk. The higher the risk level, the
more important its treatment is.
Test completion collects data from completed test activities to consolidate experience, testware, and any
other relevant information. Test completion activities occur at project milestones such as when a test level
is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is
released, or a maintenance release is completed.
A test completion report is prepared during test completion, when a project, test level, or test type is
complete and when, ideally, its exit criteria have been met. This report uses test progress reports
and other data. Typical test completion reports include:
• Test summary
• Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit
criteria)
• Deviations from the test plan (e.g., differences from the planned schedule, duration, and effort)
• Testing impediments and workarounds
• Test metrics based on test progress reports
• Unmitigated risks, defects not fixed
• Lessons learned that are relevant to the testing
Different audiences require different information in the reports, and influence the degree of formality and the
frequency of reporting. Reporting on test progress to others in the same team is often frequent and informal,
while reporting on testing for a completed project follows a set template and occurs only once.
The ISO/IEC/IEEE 29119-3 standard includes templates and examples for test progress reports
(called test status reports) and test completion reports.
For a complex configuration item (e.g., a test environment), CM records the items it consists of, their
relationships, and versions. If the configuration item is approved for testing, it becomes a baseline and
can only be changed through a formal change control process.
Configuration management keeps a record of changed configuration items when a new baseline
is created. It is possible to revert to a previous baseline to reproduce previous test results.
To properly support testing, CM ensures the following:
• All configuration items, including test items (individual parts of the test object), are uniquely
identified, version controlled, tracked for changes, and related to other configuration items so that
traceability can be maintained throughout the test process
• All identified documentation and software items are referenced unambiguously in
test documentation
Continuous integration, continuous delivery, continuous deployment and the associated testing are
typically implemented as part of an automated DevOps pipeline (see section 2.1.4), in which automated
CM is normally included.
• Description of the failure to enable reproduction and resolution including the steps that detected
the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
• Expected results and actual results
• Severity of the defect (degree of impact) on the interests of stakeholders or requirements
• Priority to fix
• Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting
confirmation testing, re-opened, closed, rejected)
• References (e.g., to the test case)
Some of this data may be automatically included when using defect management tools (e.g., identifier,
date, author and initial status). Document templates for a defect report and example defect reports can be
found in the ISO/IEC/IEEE 29119-3 standard, which refers to defect reports as incident reports.
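As a minimal sketch, the defect report content listed above could be represented as a simple data structure; the field names and example values below are hypothetical and are not prescribed by the syllabus or by ISO/IEC/IEEE 29119-3:

```python
# Minimal sketch (hypothetical field names and values): a defect report
# structure reflecting the content listed above. Real defect management
# tools and ISO/IEC/IEEE 29119-3 templates define their own fields.
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str                # often generated by the tool
    description: str               # steps to reproduce, logs, screenshots
    expected_result: str
    actual_result: str
    severity: str                  # degree of impact on stakeholders
    priority: str                  # urgency to fix
    status: str = "open"           # e.g., open, deferred, closed, rejected
    references: list[str] = field(default_factory=list)  # e.g., test case IDs

report = DefectReport(
    identifier="DEF-123",
    description="Login fails after entering a 64-character password",
    expected_result="User is logged in",
    actual_result="An HTTP 500 error page is shown",
    severity="high",
    priority="high",
    references=["TC-45"],
)
print(report.status)  # "open"
```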
7. References
Standards
ISO/IEC/IEEE 29119-1 (2022) Software and systems engineering – Software testing – Part 1:
General Concepts
ISO/IEC/IEEE 29119-2 (2021) Software and systems engineering – Software testing – Part 2:
Test processes
ISO/IEC/IEEE 29119-3 (2021) Software and systems engineering – Software testing – Part 3:
Test documentation
ISO/IEC/IEEE 29119-4 (2021) Software and systems engineering – Software testing – Part 4:
Test techniques
ISO/IEC 25010 (2011) Systems and software engineering – Systems and software Quality
Requirements and Evaluation (SQuaRE) – System and software quality models
ISO/IEC 20246 (2017) Software and systems engineering – Work product reviews
ISO/IEC/IEEE 14764 (2022) Software engineering – Software life cycle processes – Maintenance
ISO 31000 (2018) Risk management – Principles and guidelines
Books
Adzic, G. (2009) Bridging the Communication Gap: Specification by Example and Agile Acceptance
Testing, Neuri Limited
Ammann, P. and Offutt, J. (2016) Introduction to Software Testing (2e), Cambridge University Press
Andrews, M. and Whittaker, J. (2006) How to Break Web Software: Functional and Security Testing
of Web Applications and Web Services, Addison-Wesley Professional
Beck, K. (2003) Test Driven Development: By Example, Addison-Wesley
Beizer, B. (1990) Software Testing Techniques (2e), Van Nostrand Reinhold: Boston MA
Boehm, B. (1981) Software Engineering Economics, Prentice Hall, Englewood Cliffs, NJ
Buxton, J.N. and Randell B., eds (1970), Software Engineering Techniques. Report on a conference
sponsored by the NATO Science Committee, Rome, Italy, 27–31 October 1969, p. 16
Chelimsky, D. et al. (2010) The Rspec Book: Behaviour Driven Development with Rspec, Cucumber, and
Friends, The Pragmatic Bookshelf: Raleigh, NC
Cohn, M. (2009) Succeeding with Agile: Software Development Using Scrum, Addison-Wesley
Copeland, L. (2004) A Practitioner’s Guide to Software Test Design, Artech House: Norwood MA
Craig, R. and Jaskiel, S. (2002) Systematic Software Testing, Artech House: Norwood MA
Crispin, L. and Gregory, J. (2008) Agile Testing: A Practical Guide for Testers and Agile Teams, Pearson
Education: Boston MA
Forgács, I., and Kovács, A. (2019) Practical Test Design: Selection of traditional and automated
test design techniques, BCS, The Chartered Institute for IT
Gawande A. (2009) The Checklist Manifesto: How to Get Things Right, New York, NY:
Metropolitan Books
Gärtner, M. (2011), ATDD by Example: A Practical Guide to Acceptance Test-Driven Development,
Pearson Education: Boston MA
Gilb, T., Graham, D. (1993) Software Inspection, Addison Wesley
Hendrickson, E. (2013) Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing, The
Pragmatic Programmers
Hetzel, B. (1988) The Complete Guide to Software Testing, 2nd ed., John Wiley and Sons
Jeffries, R., Anderson, A., Hendrickson, C. (2000) Extreme Programming Installed, Addison-
Wesley Professional
Jorgensen, P. (2014) Software Testing, A Craftsman’s Approach (4e), CRC Press: Boca Raton FL
Kan, S. (2003) Metrics and Models in Software Quality Engineering, 2nd ed., Addison-Wesley
Kaner, C., Falk, J., and Nguyen, H.Q. (1999) Testing Computer Software, 2nd ed., Wiley
Kaner, C., Bach, J., and Pettichord, B. (2011) Lessons Learned in Software Testing: A Context-
Driven Approach, 1st ed., Wiley
Kim, G., Humble, J., Debois, P. and Willis, J. (2016) The DevOps Handbook, Portland, OR
Koomen, T., van der Aalst, L., Broekman, B. and Vroon, M. (2006) TMap Next for result-driven testing,
UTN Publishers, The Netherlands
Myers, G. (2011) The Art of Software Testing, (3e), John Wiley & Sons: New York NY
O’Regan, G. (2019) Concise Guide to Software Testing, Springer Nature Switzerland
Pressman, R.S. (2019) Software Engineering. A Practitioner’s Approach, 9th ed., McGraw Hill
Roman, A. (2018) Thinking-Driven Testing. The Most Reasonable Approach to Quality Control, Springer
Nature Switzerland
Van Veenendaal, E (ed.) (2012) Practical Risk-Based Testing, The PRISMA Approach, UTN Publishers:
The Netherlands
Watson, A.H., Wallace, D.R. and McCabe, T.J. (1996) Structured Testing: A Testing Methodology
Using the Cyclomatic Complexity Metric, U.S. Dept. of Commerce, Technology Administration, NIST
Westfall, L. (2009) The Certified Software Quality Engineer Handbook, ASQ Quality Press
Whittaker, J. (2002) How to Break Software: A Practical Guide to Testing, Pearson
Whittaker, J. (2009) Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test
Design, Addison Wesley
Whittaker, J. and Thompson, H. (2003) How to Break Software Security, Addison Wesley
Wiegers, K. (2001) Peer Reviews in Software: A Practical Guide, Addison-Wesley Professional
Articles and Web Pages
Enders, A. (1975) “An Analysis of Errors and Their Causes in System Programs,” IEEE Transactions on
Software Engineering 1(2), pp. 140-149
Manna, Z., Waldinger, R. (1978) “The logic of computer programming,” IEEE Transactions on Software
Engineering 4(3), pp. 199-229
Marick, B. (2003) Exploration through Example, http://www.exampler.com/old-
blog/2003/08/21.1.html#agile-testing-project-1
Nielsen, J. (1994) “Enhancing the explanatory power of usability heuristics,” Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems: Celebrating Interdependence, ACM
Press, pp. 152–158
Salman, I. (2016) “Cognitive biases in software quality and testing,” Proceedings of the 38th International
Conference on Software Engineering Companion (ICSE '16), ACM, pp. 823-826.
Wake, B. (2003) “INVEST in Good Stories, and SMART Tasks,” https://xp123.com/articles/invest-in-good-
stories-and-smart-tasks/
Level 2: Understand (K2) – the candidate can select the reasons or explanations for statements related
to the topic, and can summarize, compare, classify and give examples for the testing concept.
Action verbs: classify, compare, contrast, differentiate, distinguish, exemplify, explain, give
examples, interpret, summarize.
Examples:
• “Classify the different options for writing acceptance criteria.”
• “Compare the different roles in testing” (look for similarities, differences or both).
• “Distinguish between project risks and product risks” (allows concepts to be differentiated).
• “Exemplify the purpose and content of a test plan.”
• “Explain the impact of context on the test process.”
• “Summarize the activities of the review process.”
Level 3: Apply (K3) – the candidate can carry out a procedure when confronted with a familiar task, or
select the correct procedure and apply it to a given context.
Action verbs: apply, implement, prepare, use.
Examples:
• “Apply test case prioritization” (should refer to a procedure, technique, process, algorithm etc.).
• “Prepare a defect report.”
• “Use boundary value analysis to derive test cases.”
References for the cognitive levels of learning objectives:
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching
Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon
Business Outcomes: Foundation Level
BO1 Understand what testing is and why it is beneficial 6
BO2 Understand fundamental concepts of software testing 22
BO3 Identify the test approach and activities to be implemented depending on the context of testing 6
BO4 Assess and improve the quality of documentation 9
BO5 Increase the effectiveness and efficiency of testing 20
BO6 Align the test process with the software development lifecycle 6
BO7 Understand test management principles 6
BO8 Write and communicate clear and understandable defect reports 1
BO9 Understand the factors that influence the priorities and efforts related to testing 7
BO10 Work as part of a cross-functional team 8
BO11 Know risks and benefits related to test automation 1
BO12 Identify essential skills required for testing 5
BO13 Understand the impact of risk on testing 4
BO14 Effectively report on test progress and quality 4
Chapter/section/subsection   Learning objective   K-level   Business Outcomes (BO1-FL to BO14-FL)
Chapter 1 Fundamentals of Testing
2.1.1 Explain the impact of the chosen software development lifecycle on testing K2 X
2.1.2 Recall good testing practices that apply to all software development lifecycles K1 X
2.1.6 Explain how retrospectives can be used as a mechanism for process improvement K2 X X
3.1.1 Recognize types of products that can be examined by the different static test techniques K1 X X
3.1.2 Explain the value of static testing K2 X X X
4.5.1 Explain how to write user stories in collaboration with developers and business representatives K2 X X
4.5.2 Classify the different options for writing acceptance criteria K2 X
5.1.2 Recognize how a tester adds value to iteration and release planning K1 X X X
5.1.7 Summarize the testing quadrants and their relationships with test levels and test types K2 X X
5.2.1 Identify risk level by using risk likelihood and risk impact K1 X X
5.2.3 Explain how product risk analysis may influence thoroughness and scope of testing K2 X X X
5.2.4 Explain what measures can be taken in response to analyzed product risks K2 X X X
5.3.2 Summarize the purposes, content, and audiences for test reports K2 X X X
o More focus on practices like: test-first approach (K1), shift-left (K2), retrospectives (K2)
o New section on testing in the context of DevOps (K2)
o Integration testing level split into two separate test levels: component integration
testing and system integration testing
• Major changes in chapter 3 (Static Testing)
o Section on review techniques, together with the K3 LO (apply a review
technique) removed
• Major changes in chapter 4 (Test Analysis and Design)
o Use case testing removed (but still present in the Advanced Test Analyst syllabus)
o More focus on collaboration-based approach to testing: new K3 LO about using ATDD to
derive test cases and two new K2 LOs about user stories and acceptance criteria
o Decision testing and coverage replaced with branch testing and coverage (first, branch
coverage is more commonly used in practice; second, different standards define “decision”
differently, as opposed to “branch”; third, this resolves a subtle but serious flaw in the old
FL2018, which claims that “100% decision coverage implies 100% statement coverage” –
this statement is not true for programs with no decisions)
o Section on the value of white-box testing improved
• Major changes in chapter 5 (Managing the Test Activities)
o Section on test strategies/approaches removed
o New K3 LO on estimation techniques for estimating the test effort
o More focus on the well-known Agile-related concepts and tools in test management:
iteration and release planning (K1), test pyramid (K1), and testing quadrants (K2)
o Section on risk management better structured by describing four main activities:
risk identification, risk assessment, risk mitigation and risk monitoring
• Major changes in chapter 6 (Test Tools)
o Content on some test automation issues reduced as being too advanced for the
foundation level – section on tools selection, performing pilot projects and
introducing tools into organization removed
11. Index