ISTQB Foundation Answers
False-fail result
A test result in which a defect is reported although no such defect actually exists in the test
object.
Error - Mistake
Human action produces an incorrect result
Defect (bug, fault)
A flaw in a component or system that can cause the component or system to fail to perform
its required function.
Failure
Deviation of the component or system from its expected delivery, service or result.
Quality
The degree to which a component, system or process meets specified requirements and/or
user/customer needs and expectations.
Risk
A factor that could result in future negative consequences; usually expressed as impact and
likelihood.
False-pass result
A test result which fails to identify the presence of a defect that is actually present in the
test object.
Testing
Also known as evaluation. The process consisting of all lifecycle activities, both static and
dynamic, concerned with planning, preparation, and evaluation of software products and
related work products to determine that they are fit for purpose and to detect defects.
Requirement
A condition or capability needed by a user to solve a problem or achieve an objective that
must be met or possessed by a system or system component to satisfy a contract, standard,
specification, or other formally imposed document.
Review
An evaluation of a product or project status to ascertain discrepancies from planned results
and to recommend improvements. Examples include management review, informal review,
technical review, inspection, and walkthrough.
Debugging
The process of finding, analyzing, and removing the causes of failures in software.
Confirmation testing (re-testing)
Testing that runs test cases that failed the last time they were run, in order to verify the
success of corrective actions.
Test Strategy
A high-level description of the test levels to be performed and the testing within those levels
for an organization or program (one or more projects).
Test Execution
The process of running a test on the component or system under test, producing actual
results
Test approach
The implementation of the test strategy for a specific project.
Test Plan
A document describing the scope, approach, resources, and schedule of intended test
activities. It identifies among others test items, the features to be tested, the testing tasks,
who will do each task, degree of test independence, the test environment, the test design
techniques, and entry and exit criteria to be used, the rationale for their choice, and any
risks requiring contingency planning. It is a record of the test planning process.
Test monitoring
A test management task that deals with the activities related to periodically checking the
status of a test project. Reports are prepared that compare the actuals to that which was
planned.
Test condition
An item or event of a component or system that could be verified by one or more test cases,
e.g. a function, transaction, feature, quality attribute, or structural element.
Test basis
All documents from which the requirements of a component or system can be inferred, i.e.
the documentation on which the test cases are based. If a document can be amended only
by way of a formal amendment procedure, the test basis is called a frozen test basis.
Test Data
Data that exists before a test is executed, and that affects or is affected by the component
or system under test. Example: data held in a database.
Coverage (test coverage)
The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite.
Test procedure specification (test procedure, test script, manual test script)
A document specifying a sequence of actions for the execution of a test.
Test Suite
A set of several test cases for a component or system under test, where the postcondition
of one test is often used as the precondition for the next one.
Incident
Also known as deviation. Any event occurring that requires investigation.
Testware
Artifacts produced during the test process required to plan, design, and execute tests, such
as documentation, scripts, inputs, expected results, set-up, and clear-up procedures, files,
databases, environment, and any additional software or utilities used in testing.
Regression testing
Testing of a previously tested program following modification to ensure that defects have
not been introduced or uncovered in unchanged areas of the software, as a result of the
changes made. It is performed when the software or its environment is changed.
Exit Criteria
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting
a process to be officially completed. The purpose of exit criteria is to prevent a task from
being considered completed when there are still outstanding parts of the task that have not
been finished. Exit criteria are used to report against and to plan when to stop testing.
Test log
A chronological record of relevant details about the execution of tests.
Test summary report
A document summarizing testing activities and results. It also contains an evaluation of the
corresponding test items against exit criteria.
Error Guessing
A test design technique where the experience of the tester is used to anticipate what defects
might be present in the component or system under test as a result of errors made, and to
design tests specifically to expose them.
Independence of testing
Separation of responsibilities, which encourages the accomplishment of objective testing
Test policy
A high level document describing the principles, approach, and major objectives regarding
testing.
Verification
Confirmation by examination and through provision of objective evidence that specified
requirements have been fulfilled.
Validation
Confirmation by examination and through provision of objective evidence that the
requirements for a specific intended use or application have been fulfilled.
V-model
A framework to describe the software development lifecycle activities from requirements
specification to maintenance. The V-model illustrates how testing activities can be
integrated into each phase of the software development lifecycle.
Test level
A group of test activities that are organized and managed together. A test level is linked to
the responsibilities in a project. Examples of test levels are component test, integration test,
system test, and acceptance test.
Integration
The process of combining components or systems into larger assemblies.
Off-the-shelf software (commercial off-the-shelf software, COTS)
A software product that is developed for the general market, i.e. for a large number of
customers, and that is delivered to many customers in identical format.
Performance
The degree to which a system or component accomplishes its designated functions within
given constraints regarding processing time and throughput rate.
Incremental development model
A development lifecycle where a project is broken into a series of increments, each of which
delivers a portion of the functionality in the overall project requirements. The requirements
are prioritized and delivered in priority order in the appropriate increment. In some but not
all versions of this lifecycle model, each subproject follows a "mini V-model" with its own
design, coding, and testing phases.
Iterative development model
A development lifecycle where a project is broken into a usually large number of iterations.
An iteration is a complete development loop resulting in a release (internal or external) of
an executable product, a subset of the final product under development, which grows from
iteration to iteration to become the final product.
Agile software development
A group of software development methodologies based on iterative incremental
development, where requirements and solutions evolve through collaboration between
self-organizing cross-functional teams.
Agile manifesto
A statement on the values that underpin agile software development. The values are:
individuals and interactions over processes and tools; working software over comprehensive
documentation; customer collaboration over contract negotiation; responding to change
over following a plan.
Efficiency testing
The process of testing to determine the efficiency of a software product.
Component testing (unit testing, module testing)
The testing of individual software components. Synonym to program testing.
Stub
A skeletal or special-purpose implementation of a software component, used to develop or
test a component that calls or is otherwise dependent on it. It replaces a called component.
Driver (test driver)
A software component or test tool that replaces a component that takes care of the control
and/or the calling of a component or system.
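The stub/driver pairing above can be sketched in Python. This is a minimal illustration, not ISTQB reference code; the names (`checkout`, `PaymentGatewayStub`) are hypothetical.

```python
# Hypothetical component under test: checkout() depends on a payment gateway.
def checkout(cart_total, gateway):
    """Charge the cart total; return True on success."""
    return gateway.charge(cart_total)

# Stub: replaces the *called* component (the real gateway) with a skeletal
# implementation returning canned results instead of making network calls.
class PaymentGatewayStub:
    def charge(self, amount):
        return amount > 0

# Driver: replaces the *calling* component, taking care of invoking
# checkout() directly so it can be tested in isolation.
def driver():
    stub = PaymentGatewayStub()
    assert checkout(10.0, stub) is True
    assert checkout(0.0, stub) is False
    return "all driver checks passed"

print(driver())
```

The stub sits below the component under test and the driver sits above it, which is why both are needed for component testing in isolation.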
Robustness testing
Testing to determine the robustness of the software product.
Test-driven development
A way of developing software where the test cases are developed, and often automated,
before the software is developed to run those test cases.
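The test-first cycle can be sketched in a few lines of Python. The example function (`is_leap`) is hypothetical; the point is the ordering, with the test written before the code it exercises.

```python
# Red: the test is written first, before is_leap() exists.
def test_is_leap():
    assert is_leap(2000)        # divisible by 400 -> leap
    assert not is_leap(1900)    # divisible by 100 but not 400 -> not leap
    assert is_leap(2024)
    assert not is_leap(2023)

# Green: the minimal implementation that makes the test pass.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_is_leap()  # all assertions pass; refactoring can now proceed safely
```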
Integration testing
Testing performed to expose defects in the interfaces and in the interactions between
integrated components or systems.
System testing
The process of testing an integrated system to verify that it meets specified requirements.
Functional requirement
A requirement that specifies a function that a component or system must perform.
Non-functional requirement
A requirement that does not relate to functionality, but to attributes such as reliability,
efficiency, usability, maintainability, and portability.
Test environment (test bed)
An environment containing hardware, instrumentation, simulators, software tools, and
other support elements needed to conduct a test.
Acceptance testing (acceptance, user acceptance testing)
Formal testing with respect to user needs, requirements, and business processes conducted
to determine whether or not a system satisfies the acceptance criteria and to enable the
user, customers, or other authorized entity to determine whether or not to accept the
system.
Maintenance
Modification of a software product after delivery to correct defects, to improve
performance or other attributes or to adapt the product to a modified environment.
Alpha testing
Simulated or actual operational testing by potential users/customers or an independent test
team at the developers' site, but outside the development organization. Alpha testing is
often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing (field testing)
Operational testing by potential and/or existing users/customers at an external site not
otherwise involved with the developers, to determine whether or not a component or
system satisfies the user/customer needs and fits within the business processes. Beta
testing is often employed as a form of external testing for off-the-shelf software in order to
acquire feedback from the market.
Test Type
A group of test activities aimed at testing a component or system, focused on a specific test
objective, i.e. functional test, usability test, regression test, etc. A test type may take place
on one or more test levels or test phases.
Functional testing
Testing based on an analysis of the specification of the functionality of a component or
system.
Black-box testing (specification based testing)
Testing, either functional or non-functional, without reference to the internal structure of
the component or system.
Functionality testing
The process of testing to determine the functionality of a software product.
Interoperability testing
Also known as compatibility testing. The process of testing to determine the interoperability
of a software product.
Security
Attributes of software products that bear on its ability to prevent unauthorized access,
whether accidental or deliberate, to programs and data.
Security testing
Testing to determine the security of the software product.
Performance testing
The process of testing to determine the performance of a software product.
Load testing
A type of performance testing conducted to evaluate the behavior of a component or
system with increasing load, e.g. numbers of parallel users and/or numbers of transactions,
to determine what load can be handled by the component or system.
Stress testing
A type of performance testing conducted to evaluate a system or component at or beyond
the limits of its anticipated or specified workloads, or with reduced availability of resources
such as access to memory or servers.
Usability testing
Testing to determine the extent to which the software product is understood, easy to learn,
easy to operate, and attractive to the users under specified conditions.
Maintainability testing
The process of testing to determine the maintainability of a software product.
Reliability testing
The process of testing to determine the reliability of a software product.
Portability testing
Also known as Configuration testing. The process of testing to determine the portability of a
software product.
Functionality
The capability of the software product to provide functions which meet stated and implied
needs when the software is used under specified conditions.
Reliability
The ability of the software product to perform its required functions under stated conditions
for a specified number of operations.
Robustness
The degree to which a component or system can function correctly in the presence of
invalid inputs or stressful environmental conditions.
Usability
The capability of the software to be understood, learned, used and attractive to the user
when used under specified conditions.
Efficiency
The capability of the software product to provide appropriate performance, relative to the
amount of resources used under stated conditions.
Maintainability
The ease with which a software product can be modified to correct defects, modified to
meet new requirements, modified to make future maintenance easier, or adapted to a
changed environment.
Portability
The ease with which the software product can be transferred from one hardware or
software environment to another.
Black-box (specification-based) test design technique
Procedure to derive and/or select test cases based on an analysis of the specification, either
functional or non-functional, of a component or system without reference to its internal
structure.
White-box testing (structure-based testing)
Testing based on an analysis of the internal structure of the component or system.
Code coverage
An analysis method that determines which parts of the software have been executed
(covered) by the test suite and which parts have not been executed, e.g. statement
coverage, decision coverage, or condition coverage.
White-box (structural-based) test design technique
Procedure to derive and/or select test cases based on an analysis of the internal structure of
a component or system.
Maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an
operational system.
Impact Analysis
The assessment of change to the layers of development documentation, test
documentation, and components, in order to implement a given change to specified
requirements.
Static testing
Testing of a component or system at specification or implementation level without
execution of that software, e.g. reviews or static analysis.
Dynamic testing
Testing that involves the execution of the software of a component or system.
Informal review
Also known as Adhoc review. Review not based on a formal (documented) procedure.
Formal review
A review characterized by documented procedures and requirements, e.g. inspection.
Moderator (inspection leader)
The leader and main person responsible for an inspection or other review process.
Entry criteria
The set of generic and specific conditions for permitting a process to go forward with a
defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting
which would entail more (wasted) effort needed to remove the failed entry criteria.
Metric
A measurement scale and the method used for measurement.
Technical review
A peer group discussion activity that focuses on achieving consensus on the technical
approach to be taken.
Peer review
A review of a software work product by colleagues of the producer of the product for the
purpose of identifying defects and improvements. Examples are inspection, technical review
and walkthrough
Inspection
A type of peer review that relies on visual examination of documents to detect defects, e.g.
violations of development standards and non-conformance to higher level documentation.
The most formal review technique and therefore always based on a documented procedure.
Static analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of
these software development artifacts. Static analysis is usually carried out by means of a
supporting tool.
Compiler
A software tool that translates programs expressed in a high order language into their
machine language equivalents.
Test case specification
A document specifying a set of test cases (objective, inputs, test actions, expected results,
and execution preconditions) for a test item.
Test design technique
Procedure used to derive and/or select test cases.
Traceability
The ability to identify related items in documentation and software, such as requirements
with associated tests. See also horizontal traceability, vertical traceability.
Horizontal traceability
The tracing of requirements for a test level through the layers of test documentation (e.g.
test plan, test design specification, test case specification, and test procedure specification
or test script.)
Vertical traceability
The tracing of requirements through the layers of development documentation to
components.
Test script
Commonly used to refer to a test procedure specification, especially an automated one.
Test execution schedule
A scheme for the execution of test procedures. The test procedures are included in the test
execution schedule in their context and in the order in which they are to be executed.
Experience-based test design technique
Procedure to derive and/or select test cases based on the tester's experience, knowledge,
and intuition.
Equivalence partitioning
A black box test design technique in which test cases are designed to execute
representatives from equivalence partitions. In principle test cases are designed to cover
each partition at least once.
Equivalence Partition
A portion of an input or output domain for which the behavior of a component or system is
assumed to be the same based on the specification.
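A minimal Python sketch of the two entries above, using a hypothetical specification (ages 18..65 inclusive are eligible): the input domain splits into three partitions, and one representative value is assumed to stand for each whole partition.

```python
# Hypothetical specification: ages 18..65 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 65

# Three equivalence partitions; one representative value per partition
# covers each partition at least once.
partitions = {"below_range": 10, "in_range": 40, "above_range": 70}
results = {name: is_eligible(age) for name, age in partitions.items()}
assert results == {"below_range": False, "in_range": True, "above_range": False}
```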
Boundary value analysis
Also known as boundary value testing. A black box test design technique in which test cases
are designed based on boundary values.
Boundary value
An input value or output value which is on the edge of an equivalence partition or at the
smallest incremental distance on either side of an edge, for example the minimum or
maximum value of a range.
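Boundary value analysis can be sketched against a hypothetical 18..65 eligibility rule: each edge of the valid range is tested together with the nearest value just outside it.

```python
# Hypothetical specification: ages 18..65 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 65

# Boundary values: each edge of the partition plus the value at the
# smallest incremental distance outside it.
boundary_cases = [(17, False), (18, True), (65, True), (66, False)]
for age, expected in boundary_cases:
    assert is_eligible(age) == expected
```

Off-by-one defects (e.g. writing `<` instead of `<=`) are exactly what these four cases are designed to expose.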
Decision table testing
A black box test design technique in which test cases are designed to execute the
combinations of inputs and/or stimuli (causes) shown in a decision table.
Decision table
Also known as Cause-effect decision table. A table showing combinations of inputs and/or
stimuli (causes) with their associated outputs and/or actions (effects), which can be used to
design test cases.
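A small Python sketch of decision table testing, using hypothetical discount rules: the table maps every combination of causes to its effect, and the tests exercise each combination.

```python
# Hypothetical discount rules expressed as a decision table:
# causes (is_member, order_over_100) -> effect (discount percentage).
decision_table = {
    (True,  True):  15,
    (True,  False): 5,
    (False, True):  10,
    (False, False): 0,
}

def discount(is_member, order_over_100):
    return decision_table[(is_member, order_over_100)]

# Decision table testing: one test case per combination of causes.
for causes, expected in decision_table.items():
    assert discount(*causes) == expected
```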
State transition testing
A black box test design technique in which test cases are designed to execute valid and
invalid state transitions.
State diagram
A diagram that depicts the states that a component or system can assume, and shows the
events or circumstances that cause and/or result from a change from one state to another.
State table
A grid showing the resulting transitions for each state combined with each possible event,
showing both valid and invalid transitions.
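The state table and state transition testing entries can be sketched together in Python with a hypothetical door model; entries missing from the table are the invalid transitions.

```python
# Hypothetical state table for a door: (state, event) -> next state.
# Combinations absent from the table are invalid transitions.
state_table = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def transition(state, event):
    if (state, event) not in state_table:
        raise ValueError(f"invalid transition: '{event}' in state '{state}'")
    return state_table[(state, event)]

assert transition("closed", "open") == "opened"  # valid transition
try:
    transition("locked", "open")                 # invalid transition
except ValueError as err:
    print(err)
```

State transition testing designs cases for both halves: valid transitions must produce the expected next state, and invalid ones must be rejected rather than silently accepted.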
Use case testing
A black box test design technique in which test cases are designed to execute scenarios of
use cases.
Statement coverage
The percentage of executable statements that have been exercised by a test suite.
Decision coverage
The percentage of decision outcomes that have been exercised by a test suite. 100%
decision coverage implies both 100% branch coverage and 100% statement coverage.
Branch coverage
The percentage of branches that have been exercised by a test suite. 100% branch coverage
implies both 100% decision coverage and 100% statement coverage.
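The difference between statement and decision/branch coverage can be shown on a tiny hypothetical function with a single decision:

```python
# One decision (x < 0) and three executable statements.
def absolute(x):
    if x < 0:
        x = -x
    return x

# absolute(-5) alone executes every statement: 100% statement coverage,
# but only the True outcome of the decision: 50% decision/branch coverage.
suite_a = [absolute(-5)]

# Adding a non-negative input exercises the False outcome as well,
# bringing decision and branch coverage to 100%.
suite_b = [absolute(-5), absolute(3)]
assert suite_b == [5, 3]
```

This is why 100% decision coverage is the stronger criterion: it subsumes statement coverage, while the reverse does not hold.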
Fault attack (attack)
Directed and focused attempt to evaluate the quality, especially reliability, of a test object
by attempting to force specific failures to occur.
Exploratory testing
An informal test design technique where the tester actively controls the design of the tests
as those test are performed and uses information gained while testing to design new and
better tests.
Test management
The planning, estimating, monitoring, and control of test activities, typically carried out by a
test manager.
Tester
A skilled professional who is involved in the testing of a component or system.
Test manager (test leader)
The person responsible for project management of testing activities and resources, and
evaluation of a test object. The individual who directs, controls, administers, plans, and
regulates the evaluation of a test object.
Configuration management
A discipline applying technical and administrative directions and surveillance to: identify and
document the functional and physical characteristics of a configuration item, control
changes to those characteristics, record and report change processing and implementation
status, and verify compliance with specified requirements.
Configuration control (version control)
An element of configuration management, consisting of the evaluation, co-ordination,
approval, or disapproval, and implementation of changes to configuration items after formal
establishment of their configuration identification.
Product risk
A risk directly related to the test object.
Risk-based testing
An approach to testing to reduce the level of product risks and inform stakeholders of their
status, starting in the initial stages of a project. It involves the identification of product risks
and the use of risk levels to guide the test process.
Project risk
A risk related to management and control of the (test) project, e.g. lack of staffing, strict
deadlines, changing requirements, etc.
Incident management
The process of recognizing, investigating, taking action, and disposing of incidents. It
involves logging incidents, classifying them, and identifying the impact.
Incident logging
Recording the details of any incident that occurred, e.g. during testing.
Defect report (bug report, problem report)
A document reporting on any flaw in a component or system that can cause the component
or system to fail to perform its required function.
Defect detection percentage (DDP)
The number of defects found by a test phase, divided by the number found by that test
phase and any other means afterwards.
Incident report
Also known as deviation report. A document reporting on any event that occurred, e.g.
during testing, which requires investigation.
Priority
The level of (business) importance assigned to an item, e.g. defect.
Severity
The degree of impact that a defect has on the development or operation of a component or
system.
Root cause
A source of a defect such that if it is removed, the occurrence of the defect type is
decreased or removed.
Test framework
1) Reusable and extensible testing libraries that can be used to build testing tools (which are
also called test harnesses); 2) A type of design of test automation (e.g. data-driven and
keyword-driven); and 3) An overall process of execution of testing.
Probe effect
The effect on the component or system by the measurement instrument when the
component or system is being measured, e.g. by a performance testing tool or monitor. For
example performance may be slightly worse when performance testing tools are being
used.
Test management tool
A tool that provides support to the test management and control part of a test process. It
often has several capabilities, such as testware management, scheduling of tests, the
logging of results, progress tracking, incident management and test reporting.
Requirements management tool
A tool that supports the recording of requirements, requirements attributes (e.g. priority,
knowledge responsible) and annotation, and facilitates traceability through layers of
requirements and requirements change management. Some requirements management
tools also provide facilities for static analysis such as consistency checking and violations to
pre-defined requirements rules.
Incident management tool (defect management tool)
A tool that facilitates the recording and status tracking of incidents. They often have
workflow-oriented facilities to track and control the allocation, correction, and re-testing of
incidents and provide reporting facilities.
Configuration management tool
A tool that provides support for the identification and control of configuration items, their
status over changes and versions, and the release of baselines consisting of configuration
items.
Review tool
A tool that provides support to the review process. Typical features include review planning
and tracking support, communication support, collaborative reviews, and a repository for
collection and reporting of metrics.
Static analysis tool (static analyzer)
A tool that carries out static analysis.
Static code analyzer
A tool that carries out static code analysis. The tool checks source code for certain
properties such as conformance to coding standards, quality metrics or data flow anomalies.
Static code analysis
Analysis of source code carried out without execution of that software.
Modeling tool
A tool that supports the creation, amendment, and verification of models of the software or
system.
Test design tool
A tool that supports the test design activity by generating test inputs from a specification
that may be held in a CASE tool repository, e.g. requirements management tool, from
specified test conditions held in the tool itself, or from code.
Test data preparation tool
A type of test tool that enables data to be selected from existing databases or created,
generated, manipulated and edited for use in testing.
Test execution tool
A type of test tool that is able to execute other software using an automated test script, e.g.
capture/playback.
Capture/playback tool (capture/replay tool)
A type of test execution tool where inputs are recorded during manual testing in order to
generate automated test scripts that can be executed later (i.e. replayed). These tools are
often used to support automated regression testing.
Unit test framework tool
A tool that provides an environment for unit or component testing in which a component
can be tested in isolation with suitable stubs and drivers. It also provides other support for
the developer, such as debugging capabilities.
Test comparator
Also known as comparator. A test tool to perform automated test comparison of actual
results with expected results.
Test comparison
The process of identifying differences between the actual results produced by the
component or system under test and the expected results for a test. Test comparison can be
performed during test execution (dynamic comparison) or after test execution.
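A minimal comparator sketch in Python, purely illustrative: it reports every field where the actual result differs from the expected result, as (actual, expected) pairs.

```python
# Hypothetical automated test comparison: return the fields where the
# actual result deviates from the expected result.
def compare(actual, expected):
    return {key: (actual.get(key), want)
            for key, want in expected.items()
            if actual.get(key) != want}

actual   = {"status": 200, "count": 3}
expected = {"status": 200, "count": 4}
assert compare(actual, expected) == {"count": (3, 4)}
```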
Coverage tool
Also known as coverage measurement tool. A tool that provides objective measures of which
structural elements, e.g. statements or branches, have been exercised by a test suite.
Security testing tool
A tool that provides support for testing security characteristics and vulnerabilities.
Security tool
A tool that supports operational security.
Dynamic analysis tool
A tool that provides run-time information on the state of the software code. The tools are
most commonly used to identify unassigned pointers, check pointer arithmetic, and to
monitor the allocation, use and de-allocation of memory and to flag memory leaks.
Performance-testing tool (load-testing tool)
A tool to support performance testing that usually has two main facilities: load generation
and test transaction measurement. Load generation can simulate either multiple users or
high volumes of input data. During execution, response time measurements are taken from
selected transactions and these are logged. Performance testing tools normally provide
reports based on test logs and graphs of load against response times.
Volume testing
Testing where the system is subjected to large volumes of data.
Stress testing tool
A tool that supports stress testing.
Monitor (monitoring tool)
A software tool or hardware device that runs concurrently with the component or system
under test and supervises, records, and/or analyzes the behavior of the component or
system.
Debugging tool
Also known as debugger. A tool used by programmers to reproduce failures, investigate the
state of programs and find the corresponding defect. Debuggers enable programmers to
execute programs step by step, to halt a program at any program statement and to set and
examine program variables.
Scripting language
A programming language in which executable test scripts are written, used by a test
execution tool (e.g. a capture/playback tool).
Data-driven
A scripting technique that stores test input and expected results in a table or spreadsheet,
so that a single control script can execute all of the tests in the table. Data driven testing is
often used to support the application of test execution tools such as capture/playback tools.
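The data-driven technique can be sketched in Python; the table is an inline CSV here (a spreadsheet or database in practice), and `add` is a hypothetical function under test.

```python
import csv
import io

# Data-driven setup: test inputs and expected results live in a table;
# a single control script runs every row against the function under test.
table = io.StringIO("a,b,expected\n2,3,5\n-1,1,0\n10,-4,6\n")

def add(a, b):  # hypothetical function under test
    return a + b

failures = []
for row in csv.DictReader(table):
    actual = add(int(row["a"]), int(row["b"]))
    if actual != int(row["expected"]):
        failures.append(row)

assert not failures  # new cases are added by editing the table, not the script
```

The design choice is the separation of data from logic: the control script stays fixed while the table grows, which is what makes the technique attractive with capture/playback tools.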
High level test case
Also known as Abstract Test Case. A test case without concrete (implementation level)
values for input data and expected results. Logical operators are used; instances of the
actual values are not yet defined and/or available.
Acceptance testing
Also known as Acceptance. Formal testing with respect to user needs, requirements and
business processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the users, customers, or other authorized entity to
determine whether or not to accept the system.
Accessibility testing
Testing to determine the ease with which users with disabilities can use a component or
system.
Accuracy
The capability of the software product to provide the right or agreed results or effects with
the needed degree of precision.
Accuracy testing
The process of testing to determine the accuracy of a software product.
Acting (IDEAL)
The phase within the IDEAL model where the improvements are developed, put into practice,
and deployed across the organization. The acting phase consists of the activities: create
solution, pilot/test solution, refine solution and implement solution.
Actual result
Also known as Actual outcome. The behavior produced/observed when a component or
system is tested.
Adhoc testing
Testing carried out informally; no formal test preparation takes place, no recognized test
design technique is used, there are no expectations for results and arbitrariness guides the
test execution activity.
Adaptability
The capability of the software product to be adapted for different specified environments
without applying actions or means other than those provided for this purpose for the
software considered.
Agile testing
Testing practice for a project using agile methodologies such as extreme programming (XP),
treating development as the customer of testing and emphasizing the test-first design
paradigm.
Branch testing
Also known as Algorithm test or Arc testing. A white box test design technique in which test
cases are designed to execute branches.
Analyzability
The capability of the software product to be diagnosed for deficiencies or causes of failures
in the software, or for the parts to be modified to be identified.
Static Analyzer
Also known as Analyzer. A tool that carries out static analysis.
Anomaly
Any condition that deviates from expectation based on requirement specifications, design
documents, user documents, standards, etc. or from someone's perception or experience.
Anomalies may be found during, but are not limited to, reviewing, testing, analysis,
compilation, or use of software products or applicable documentation.
Assessment report
A document summarizing the assessment results, e.g. conclusions, recommendations and
findings.
Assessor
A person who conducts an assessment; any member of an assessment team.
Attack
Directed and focused attempt to evaluate the quality, especially reliability of a test object by
attempting to force specific failures to occur.
Attractiveness
The capability of the software product to be attractive to the user.
Audit
An independent evaluation of software products or processes to ascertain compliance to
standards, guidelines, specifications, and/or procedures based on objective criteria,
including documents that specify: 1. Form or content of products to be produced; 2. The
process by which the products shall be produced; 3. How compliance to standards or
guidelines shall be measured.
Audit trail
A path by which the original input to a process (e.g. data) can be traced back through the
process, taking the process output as a starting point. This facilitates defect analysis and
allows a process audit to be carried out.
Automated testware
Testware used in automated testing, such as tool scripts.
Availability
The degree to which a component or system is operational and accessible when required for
use. Often expressed as a percentage.
Back-to-back testing
Testing in which two or more variants of a component or system are executed with the
same inputs, the outputs compared, and analyzed in cases of discrepancies.
Balanced scorecard
A strategic performance management tool for measuring whether the operational activities
of a company are aligned with its objectives in terms of business vision and strategy.
Baseline
A specification or software product that has been formally reviewed or agreed upon that
thereafter serves as the basis for further development, and that can be changed only
through a formal change control process.
Basic block
A sequence of one or more consecutive executable statements containing no branches.
Basis test set
A set of test cases derived from the internal structure of a component or specification to
ensure that 100% of a specified coverage criterion will be achieved.
Fault Seeding
Also known as Bebugging. The process of intentionally adding known defects to those
already in the component or system for the purpose of monitoring the rate of detection and
removal and estimating the number of remaining defects.
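The defect-estimation idea in this definition can be illustrated with a small capture-recapture style calculation; the function name and figures below are illustrative, not part of the glossary:

```python
def estimate_remaining_defects(seeded_total, seeded_found, real_found):
    # Assumption: testing finds the same fraction of real defects
    # as it finds of the deliberately seeded ones.
    if seeded_found == 0:
        raise ValueError("no seeded defects found; estimate is undefined")
    estimated_total_real = real_found * seeded_total / seeded_found
    return estimated_total_real - real_found  # defects estimated to remain

# Finding 20 of 25 seeded defects (80%) alongside 40 real defects
# suggests ~50 real defects in total, so about 10 still remain.
```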
Behavior
The response of a component or system to a set of input values and preconditions.
Benchmark test
1. A standard against which measurements or comparisons can be made. 2. A test that is
used to compare components or systems to each other or to a standard as in 1.
Bespoke software
Also known as custom software. Software developed specifically for a set of users or
customers. The opposite is off-the-shelf software.
Best practice
A superior method or innovative practice that contributes to the improved performance of
an organization in a given context, usually recognized as 'best' by peer organizations.
Big-bang testing
A type of integration testing in which software elements, hardware elements, or both are
combined all at once into a component or an overall system, rather than in stages.
Black box test design technique
Also known as black box technique. Procedure to derive and/or select test cases based on
an analysis of the specification, either functional or non-functional, of a component or
system without reference to its internal structure.
Blocked test case
A test case that cannot be executed because the precondition for its execution are not
fulfilled.
Bottom-up testing
An incremental approach to integration testing where the lowest level components are
tested first, and then used to facilitate the testing of higher level components. This process
is repeated until the component at the top of the hierarchy is tested.
Boundary value coverage
The percentage of boundary values that have been exercised by a test suite.
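As a rough sketch of how this percentage is computed (the range and the boundary values are invented for illustration):

```python
def boundary_value_coverage(boundary_values, exercised_inputs):
    # Percentage of identified boundary values that the test suite's
    # inputs actually exercised.
    hit = boundary_values & exercised_inputs
    return 100.0 * len(hit) / len(boundary_values)

# A field valid for 1..100; boundaries taken as 0, 1, 100, 101.
# A suite using inputs 1, 50 and 100 exercises 2 of the 4 boundaries.
coverage = boundary_value_coverage({0, 1, 100, 101}, {1, 50, 100})  # 50.0
```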
Branch
A basic block that can be selected for execution based on a program construct in which one
of two or more alternative program paths is available, e.g. case, jump, go to, if-then-else.
Condition
Also known as Branch condition or Test condition. A logical expression that can be evaluated
as true or false.
Multiple condition coverage
Also known as Branch condition combination coverage or Condition combination testing.
The percentage of combinations of all single condition outcomes within one statement that
have been exercised by a test suite. 100% multiple condition coverage implies 100%
condition determination coverage.
Multiple condition testing
Also known as Branch condition combination testing or Condition combination testing. A
white box test design technique in which test cases are designed to execute combinations of
single condition outcomes (within one statement).
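For a statement with n single conditions this technique yields 2^n combinations; a minimal sketch (the decision used in the comment is only an example):

```python
from itertools import product

def multiple_condition_cases(condition_count):
    # Every combination of True/False outcomes for the single
    # conditions in one statement: 2**condition_count cases.
    return list(product([True, False], repeat=condition_count))

# A decision such as 'A > B and C > 1000' has two single conditions,
# so 100% multiple condition coverage needs all four combinations:
cases = multiple_condition_cases(2)
# [(True, True), (True, False), (False, True), (False, False)]
```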
Buffer
A device or storage area used to store data temporarily for differences in rates of data flow,
time, or occurrence of events, or amounts of data that can be handled by the devices or
processes involved in the transfer or use of the data.
Buffer overflow
A memory access failure due to the attempt by a process to store data beyond the
boundaries of a fixed length buffer, resulting in overwriting of adjacent memory areas or the
raising of an overflow exception.
Defect
Also known as a Bug. A flaw in a component or system that can cause the component or
system to fail to perform its required function, e.g. an incorrect statement or data
definition. A defect, if encountered during execution, may cause a failure of the component
or system.
Defect taxonomy
Also known as Bug taxonomy. A system of (hierarchical) categories designed to be a useful aid
for reproducibly classifying defects.
Defect management tool
Also known as Bug tracking tool or defect tracking tool. A tool that facilitates the recording
and status tracking of defects and changes. They often have workflow-oriented facilities to
track and control the allocation, correction, and re-testing of defects and provide reporting
facilities.
Business process-based testing
An approach to testing in which test cases are designed based on descriptions and/or
knowledge of business processes.
Call graph
An abstract representation of calling relationships between subroutines in a program.
Capability Maturity Model (CMM)
A five level staged framework that describes the key elements of an effective software
process. The Capability Maturity Model covers best-practices for planning, engineering, and
managing software development and maintenance.
Capability Maturity Model integration (CMMI)
A framework that describes the key elements of an effective product development and
maintenance process. The Capability Maturity Model Integration covers best-practices for
planning, engineering, and managing product development and maintenance. CMMI is the
designated successor of the CMM.
Capture/playback tool
Also known as Capture/replay tool. A type of test execution tool where inputs are recorded
during manual testing in order to generate automated test scripts that can be executed later
(i.e. replayed). These tools are often used to support automated regression testing.
CASE
Acronym for Computer Aided Software Engineering.
CAST
Acronym for Computer Aided Software Testing.
Causal analysis
The analysis of defects to determine their root cause.
Cause-effect graphing
Also known as Cause-effect analysis. A black box test design technique in which test cases
are designed from cause-effect graphs.
Cause-effect diagram
A graphical representation used to organize and display the interrelationships of various
possible root causes of a problem. Possible causes of a real or potential defect or failure are
organized in categories and subcategories in a horizontal tree-structure, with the (potential)
defect or failure as the root node.
Cause-effect graph
A graphical representation of inputs and/or stimuli (causes) with their associated outputs
(effects), which can be used to design test cases.
Certification
The process of confirming that a component, system or person complies with its specified
requirements, e.g. by passing an exam.
Configuration control
Also known as Change control. An element of configuration
management consisting of the evaluation, co-ordination, approval or disapproval, and
implementation of changes to configuration items after formal establishment of their
configuration identification.
Change management
1. A structured approach to transitioning individuals, teams, and organizations from a
current state to a desired future state. 2. Controlled way to effect a change, or a proposed
change, to a product or service.
Changeability
The capacity of the software product to enable specified modifications to be implemented.
Test charter
Also known as Charter. A statement of test objectives, and possibly test ideas about how to
test. Test charters are used in exploratory testing.
Reviewer
Also known as Checker. The person involved in the review that identifies and describes
anomalies in the product or project under review. Reviewers can be chosen to represent
different viewpoints and roles in the review process.
Checklist-based testing
An experience-based test design technique whereby the experienced tester uses a high-
level list of items to be noted, checked, or remembered, or a set of rules or criteria against
which a product has to be verified.
N-switch coverage
Also known as Chow's coverage metrics. The percentage of sequences of N+1 transitions
that have been exercised by a test suite.
Classification tree
A tree showing equivalence partitions hierarchically ordered, which is used to design test
cases in the classification tree method.
Classification tree method
A black box test design technique in which test cases, described by means of a classification
tree, are designed to execute combinations of representatives of input and/or output
domains.
White-box testing
Also known as clear-box testing or code based testing. Testing based on an analysis of the
internal structure of the component or system.
Code
Computer instruction and data definitions expressed in a programming language or in a
form of output by an assembler, compiler, or other translator.
Static code analyzer
Also known as code analyzer. A tool that carries out static code analysis. The tool checks
source code for certain properties such as conformance to coding standards, quality
metrics, or data flow anomalies.
Codependent behavior
Excessive emotional or psychological dependence on another person, specifically in trying to
change that person's current (undesirable) behavior while supporting them in continuing
that behavior. For example, in software testing, complaining about late delivery to test and
yet enjoying the necessary "heroism" working additional hours to make-up time when
delivery is running late, therefore reinforcing the lateness.
Co-existence
The capability of the software product to co-exist with other independent software in a
common environment sharing common resources.
Exhaustive testing
Also known as complete testing. A test approach in which the test suite comprises all
combinations of input values and preconditions.
Exit criteria
Also known as completion criteria. The set of generic and specific conditions agreed upon
with the stakeholders, for permitting a process to be officially completed. The purpose of
exit criteria is to prevent a task from being considered completed when there are still
outstanding parts of the task which have not been finished. Exit criteria are used to report
against and to plan when to stop testing.
Complexity
The degree to which a component or system has a design and /or internal structure that is
difficult to understand, maintain, and verify.
Compliance
The capability of the software product to adhere to standards, conventions, or regulations in
laws and similar prescriptions.
Compliance testing
Also known as Conformance testing. The process of testing to determine the compliance of
the component or system.
Component
A minimal software item that can be tested in isolation.
Component integration testing
Testing performed to expose defects in the interfaces and interaction between integrated
components.
Component specification
A description of a component's function in terms of its output values for specified input
values under specified conditions, and required non-functional behavior (e.g. resource-
utilization).
Component testing
The testing of individual software components.
Compound condition
Two or more single conditions joined by means of a logical operator (AND, OR or XOR) e.g.
'A>B and C>1000'.
Low level test case
Also known as concrete test case. A test case with concrete (implementation level) values
for input data and expected results. Logical operators from high level test cases are replaced
by actual values that correspond to the objectives of the logical operators.
Concurrency testing
Testing to determine how the occurrence of two or more activities within the same interval
of time, achieved either by interleaving the activities or by simultaneous execution is
handled by the component or system.
Condition coverage
The percentage of condition outcomes that have been exercised by a test suite. 100%
condition coverage requires each single condition in every decision statement to be tested
as True and False.
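The difference between condition coverage and the stronger combination-based criteria in this glossary can be sketched numerically; the helper below is illustrative, not a standard tool:

```python
def condition_coverage(executed_cases):
    # executed_cases: list of tuples, one True/False outcome per single
    # condition in the decision, for each test run.
    n = len(executed_cases[0])
    seen = [set() for _ in range(n)]
    for case in executed_cases:
        for i, outcome in enumerate(case):
            seen[i].add(outcome)
    return 100.0 * sum(len(s) for s in seen) / (2 * n)

# For 'a and b', two runs reach 100% condition coverage even though
# only 2 of the 4 outcome combinations were executed:
print(condition_coverage([(True, True), (False, False)]))  # 100.0
```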
Condition determination coverage
The percentage of single condition outcomes that independently affect a decision outcome
that have been exercised by a test suite. 100% condition determination coverage
implies 100% decision condition coverage.
Condition determination testing
A white box test design technique in which test cases are designed to execute single
condition outcomes that independently affect a decision outcome.
Condition outcome
The evaluation of a condition to True or False.
Condition testing
A white box test design technique in which test cases are designed to execute condition
outcomes.
Smoke test
Also known as a confidence test. A subset of all defined/planned test cases that cover the
main functionality of a component or system, to ascertain that the most crucial
functions of a program work, but not bothering with finer details. A daily build and smoke
test is among industry best practices.
Configuration
The composition of a component or system as defined by the number, nature, and
interconnections of its constituent parts.
Configuration auditing
The function to check on the contents of libraries of configuration items, e.g. for standards
compliance.
Configuration control board (CCB)
A group of people responsible for evaluating and approving or disapproving proposed
changes to configuration items and for ensuring implementation of approved changes.
Configuration identification
An element of configuration management, consisting of selecting the configuration items for
a system and recording their functional and physical characteristics in technical
documentation.
Configuration item
An aggregation of hardware, software, or both, that is designated for configuration
management and treated as a single entity in the configuration management process.
Re-testing
Also known as Confirmation testing. Testing that runs test cases that failed the last time
they were run, in order to verify the success of corrective actions.
Consistency
The degree of uniformity, standardization, and freedom from contradiction among the
documents or parts of a component or system.
Content-based model
A process model providing a detailed description of good engineering practices, e.g. test
practices.
Continuous representation
A capability maturity model structure wherein capability levels provide a recommended
order for approaching process improvement within specified process areas.
Control flow
A sequence of events (paths) in the execution through a component or system.
Control flow analysis
A form of static analysis based on a representation of unique paths (sequences of events) in
the execution through a component or system. Control flow analysis evaluates the integrity
of control flow structures, looking for possible control flow anomalies such as closed loops
or logically unreachable process steps.
Control flow graph
An abstract representation of all possible sequences of events (paths) in the execution
through a component or system.
Path
Also known as Control flow path. A sequence of events e.g. executable statements of a
component or system from an entry point to an exit point.
Conversion testing
Testing of software used to convert data from existing systems for use in replacement
systems.
Corporate dashboard
A dashboard-style representation of the status of corporate performance data.
Cost of quality
The total costs incurred on quality activities and issues and often split into prevention costs,
appraisal costs, internal failure costs and external failure costs.
Coverage
The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test suite.
Coverage analysis
Measurement of achieved coverage to a specified coverage item during test execution
referring to predetermined criteria to determine whether additional testing is required and
if so, which test cases are needed.
Coverage item
An entity or property used as a basis for test coverage, e.g. equivalence partitions or code
statements.
Critical success factor
An element which is necessary for an organization or project to achieve its mission. They are
the critical factors or activities required for ensuring success.
Critical testing processes (CTP)
A content-based model for test process improvement built around twelve critical
processes. These include highly visible processes, by which peers and management judge
competence and mission-critical processes in which performance affects the company's
profits and reputation.
Cyclomatic complexity
Also known as Cyclomatic number. The number of independent paths through a program.
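For single-entry, single-exit code containing only binary decisions, this number equals the count of decisions plus one; a small sketch (the shipping function is invented for illustration):

```python
def cyclomatic_complexity(decision_count):
    # McCabe's simplification for structured code with binary
    # decisions: V(G) = number of decisions + 1.
    return decision_count + 1

def shipping_cost(weight, express):
    cost = 5
    if weight > 10:      # decision 1
        cost += 3
    if express:          # decision 2
        cost *= 2
    return cost

# Two decisions -> V(G) = 3, i.e. three independent paths to test.
```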
Daily build
A development activity where a complete system is compiled and linked every day (usually
overnight) so that a consistent system is available at any time including all latest changes.
Dashboard
A representation of dynamic measurements of operational performance for some
organization or activity, using metrics represented via metaphors such as visual "dials",
"counters", and other devices resembling those on the dashboard of an automobile, so that
the effects or events or activities can be easily understood and related to operational goals.
Data definition
An executable statement where a variable is assigned a value.
Data driven testing
A scripting technique that stores test input and expected results in a table or
spreadsheet, so that a single control script can execute all of the tests in the table. Data
driven testing is often used to support the application of test execution tools such as
capture/playback tools.
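A minimal sketch of the technique (the function under test and the table rows are invented for illustration):

```python
# One control loop drives every row of the table; adding a test is
# just adding a row, with no new script logic.
test_table = [
    (1, 2, 3),    # (input_a, input_b, expected_result)
    (-1, 1, 0),
    (0, 0, 0),
]

def add(a, b):    # stand-in for the system under test
    return a + b

def run_data_driven_tests(table):
    failures = []
    for a, b, expected in table:
        actual = add(a, b)
        if actual != expected:
            failures.append((a, b, expected, actual))
    return failures  # an empty list means every row passed
```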
Data flow
An abstract representation of the sequence and possible changes of the state of data
objects, where the state of an object is any of: creation, usage, or destruction.
Data flow analysis
A form of static analysis based on the definition and usage of variables.
Data flow coverage
The percentage of definition-use pairs that have been exercised by a test suite.
Data flow testing
A white box test design technique in which test cases are designed to execute definition and
use pairs of variables.
Database integrity testing
Also known as data integrity testing. Testing the methods and processes used to access and
manage the data(base), to ensure access methods, processes and data rules function as
expected and that during access to the database, data is not corrupted or unexpectedly
deleted, updated, or created.
dd-path
A path of execution (usually through a graph representing a program, such as a flow-chart)
that does not include any conditional nodes, such as the path of execution between two
decisions.
Unreachable code
Also known as dead code. Code that cannot be reached and is therefore impossible to
execute.
Decision
A program point at which the control flow has two or more alternative routes. A node with two
or more links to separate branches.
Decision condition coverage
The percentage of all condition outcomes and decision outcomes that have been exercised
by a test suite. 100% decision condition coverage implies both 100% condition coverage and
100% decision coverage.
Decision condition testing
A white box test design technique in which test cases are designed to execute condition
outcomes and decision outcomes.
Decision outcome
The result of a decision (which therefore determines the branches to be taken).
Decision testing
A white box testing technique in which test cases are designed to execute decision
outcomes.
Defect based test design technique
Also known as defect based technique. A procedure to derive and/or select test cases
targeted at one or more defect categories, with tests being developed from what is known
about the specific defect category.
Defect density
The number of defects identified in a component or system divided by the size of the
component or system (expressed in standard measurement terms, e.g. lines-of-code,
number of classes or function points).
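Expressed per thousand lines of code (KLOC), the calculation is straightforward; the figures below are invented:

```python
def defect_density(defects_found, size_in_loc, per=1000):
    # Defects per 'per' lines of code; per=1000 gives defects/KLOC.
    return defects_found * per / size_in_loc

# 30 defects found in a 12,000-LOC component -> 2.5 defects per KLOC.
density = defect_density(30, 12_000)
```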
Defect masking
An occurrence in which one defect prevents the detection of another.
Defect report
A document reporting on any flaw in a component or system that can cause the component
or system to fail to perform its required function.
Definition-use pair
The association of the definition of a variable with the use of that variable. Variable uses
include computational use (e.g. multiplication) and predicate use (to direct the execution of
a path).
Deliverable
Any (work) product that must be delivered to someone other than the (work) product's
author.
Deming cycle
An iterative four-step problem-solving process (plan-do-check-act), typically used in
process improvement.
Design-based testing
An approach to testing in which test cases are designed based on the architecture and/or
detailed design of a component or system (e.g. tests of interfaces between components or
systems).
Desk checking
Testing of software or a specification by manual simulation of its execution.
Development testing
Formal or informal testing conducted during the implementation of a component or system,
usually in the development environment by developers.
Diagnosing (IDEAL)
The phase within the IDEAL model where it is determined where one is, relative to where
one wants to be. The diagnosing phase consists of the activities: characterize current and
desired states and develop recommendations.
Negative testing
Also known as dirty testing. Test aimed at showing that a component or system does not
work. Negative testing is related to the tester's attitude rather than a specific test approach
or test design technique, e.g. testing with invalid input values or exceptions.
Documentation testing
Testing the quality of the documentation, e.g. user guide or installation guide.
Domain
The set from which valid input and/or output values can be selected.
Driver
A software component or test tool that replaces a component that takes care of the control
and/or the calling of a component or system.
Dynamic analysis
The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or
component during execution.
Dynamic comparison
Comparison of actual and expected results, performed while the software is being executed,
for example by a test execution tool.
EFQM (European Foundation for Quality Management) excellence model
A non-prescriptive framework for an organization's quality management system, defined
and owned by the European Foundation for Quality Management, based on five 'Enabling'
criteria (covering what an organization does) , and four 'Results' criteria (covering what an
organization achieves).
Elementary comparison testing
A black box test design technique in which test cases are designed to execute combinations
of inputs using the concept of condition determination coverage.
Emotional intelligence
The ability, capacity, and skill to identify, assess, and manage the emotions of one's self, of
others, and of groups.
Emulator
A device, computer program, or system that accepts the same inputs and produces the
same outputs as a given system.
Entry point
An executable statement or process step which defines a point at which a given process is
intended to begin.
Equivalence partition
Also known as equivalence class. A portion of an input or output domain for which the
behavior of a component or system is assumed to be the same, based on the specification.
Equivalence partition coverage
The percentage of equivalence partitions that have been exercised by a test suite.
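A sketch combining the two definitions above (the age field and its three partitions are invented for illustration):

```python
def partition_for_age(age):
    # Three partitions for a field that accepts ages 18..65.
    if age < 18:
        return "invalid: too young"
    if age <= 65:
        return "valid"
    return "invalid: too old"

ALL_PARTITIONS = {"invalid: too young", "valid", "invalid: too old"}

def equivalence_partition_coverage(exercised):
    return 100.0 * len(ALL_PARTITIONS & exercised) / len(ALL_PARTITIONS)

# Test inputs 30 and 70 hit 2 of the 3 partitions -> ~66.7% coverage.
exercised = {partition_for_age(a) for a in (30, 70)}
```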
Error
A human action that produces an incorrect result.
Error guessing
A test design technique where the experience of the tester is used to anticipate what
defects might be present in the component or system under test as a result of errors made,
and to design tests specifically to expose them.
Fault seeding
Also known as Error seeding. The process of intentionally adding known defects to those
already in the component or system for the purpose of monitoring the rate of detection and
removal, estimating the number of remaining defects.
Fault seeding tool
Also known as error seeding tool. A tool for seeding (i.e. intentionally inserting) faults in a
component or system.
Error tolerance
The ability of a system or component to continue normal operation despite the presence of
erroneous inputs.
Establishing (IDEAL)
The phase within the IDEAL model where the specifics of how an organization will reach its
destination are planned. The establishing phase consists of the activities: set priorities,
develop approach and plan actions.
Exception handling
Behavior of a component or system in response to erroneous input, from either a human
user or from another component or system, or to an internal failure.
Executable statement
A statement which, when compiled, is translated into object code, and which will be
executed procedurally when the program is running and may perform an action on data.
Exercised
A program element is said to be exercised by a test case when the input values cause the
execution of that element, such as a statement, decision, or other structural element.
Exit point
An executable statement or process step which defines a point at which a given process is
intended to cease.
New Questions
What is the test basis?
a. The point during software development when testing should start
b. The body of knowledge used for test analysis and design
c. The source to determine the actual results from a set of tests
d. The method used to systematically devise test conditions
B
(B is correct. Per the glossary, the test basis is "a source to determine expected results to
compare with the actual result of the system under test". A is not correct because this is
usually defined as the moment of involvement for testing. C is not correct because this is
the test oracle. D is not correct because the test basis is not a method)
When the tester verifies the test basis while designing tests early in the lifecycle, which
common test
objective is being achieved?
a. Gaining confidence
b. Finding defects
c. Preventing defects
d. Providing information for decision making
C
(C is correct per the syllabus. The other three are achieved primarily by doing dynamic
testing. This is a bit tricky because you are very likely to find defects while doing this analysis
and this may lead to either gaining or destroying confidence and needing to supply
information to the decision makers. However, the wording of the question matches the
wording in the syllabus that defines preventing defects.)
When following the fundamental test process, when should the test control activity take
place?
a. During the planning activities
b. During the implementation and execution activities
c. During the monitoring activities
d. During all the activities
D
(D is correct. Control occurs throughout the
project to ensure that it is staying on track based
on the plan and to take any corrective steps that
may be necessary. The monitoring information is
used to determine if control actions are needed.)
Which of the following is a correct statement?
a. A developer makes a mistake which causes a defect that may be seen as a failure during
dynamic testing
b. A developer makes an error which results in a failure that may be seen as a fault when
the software is executed
c. A developer has introduced a failure which results in a defect that may be seen as a
mistake during dynamic testing
d. A developer makes a mistake which causes a bug that may be seen as a defect when the
software is executed
A
(A is correct. The developer makes a mistake/error which causes a defect/fault/bug which
may cause a failure when the code is dynamically tested or executed. B is incorrect because
fault and failure are reversed. C is incorrect because failure and mistake are reversed. D is
incorrect because it's a failure that's seen during execution, not the defect itself. The failure
is a symptom of the defect.)
Which of the following is an example of debugging?
a. A tester finds a defect and reports it
b. A tester retests a fix from the developer and finds a regression
c. A developer finds and fixes a defect
d. A developer performs unit testing
C
(C is correct. Debugging is what the developer
does to identify the cause of the defect, analyze
it and fix it. D may involve debugging, if the
developer finds a defect, but the act of unit
testing is not the same as debugging.)
Which of the following is a true statement about exhaustive testing?
a. It is a form of stress testing
b. It is not feasible except in the case of trivial software
c. It is commonly done with test automation
d. It is normally the responsibility of the developer during unit testing
B
(B is correct. Exhaustive testing, all combinations
of inputs and preconditions, is not feasible unless
the software is trivially simple. Otherwise it would
take too long and might not even be possible.)
A new retail product was released to production by your company. Shortly after the release
it was
apparent that there were numerous problems with the point of sale application. This
resulted in a
number of customer complaints and negative postings on social media encouraging people
to take their
business to your competitor. You have investigated the problems and have discovered that
the
production point of sale equipment is a later model than the model used in testing. The
software
functions correctly on the old version, but fails on the later model.
Given this scenario, what is the root cause and what is the effect?
a. The root cause is the old equipment and the effect is the new equipment
b. The root cause is the customer complaints and the effect is the social media postings
c. The root cause is conducting the testing on the wrong version of the equipment and the
effect
is the customer complaints and postings
d. The root cause is the software failing on the later model and the effect is the customer
complaints
C
(C is correct. The root cause is that the testing,
and maybe the development, were conducted on
the wrong version of the POS equipment. The
effect of the problem is the customer complaints
and the social media postings.)
If you need to provide a report showing test case execution coverage of the requirements,
what do you
need to track?
a. Traceability between the test cases and the requirements
b. Coverage of the risk items by test case
c. Traceability between the requirements and the risk items
d. Coverage of the requirements by the test cases that have been designed
A
(A is correct. In order to show the test execution
coverage of the requirements you will need
traceability between the requirements and the
test cases. As the test cases are executed this
traceability can be used to record tests executed
against the requirements. B is not correct
because it's looking for requirements coverage,
not risk coverage. C is not correct because it's
looking for test execution, not risk items. D is not
correct because it's looking for test cases that
have been executed, not just designed.)
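The idea behind answer A can be sketched in code. This is a minimal illustration (the requirement and test case IDs below are invented) of how a traceability matrix supports an execution-coverage report:

```python
# Hypothetical traceability matrix (IDs invented): requirement -> test cases.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": ["TC-4"],
}
executed = {"TC-1", "TC-3"}    # tests run so far

def execution_coverage(matrix, executed):
    """Fraction of requirements touched by at least one executed test."""
    covered = [req for req, tcs in matrix.items()
               if any(tc in executed for tc in tcs)]
    return len(covered) / len(matrix)

print(f"{execution_coverage(traceability, executed):.0%}")   # 67%
```

As tests are executed, the same matrix answers both "which requirements are covered by designed tests" (option D) and "which are covered by executed tests" (the report actually asked for).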
Which of the following is most correct regarding the test level at which functional tests may be executed?
a. Unit and integration
b. Integration and system
c. System and acceptance
d. All levels
D
(D is correct. Functional testing should be
conducted at all levels)
Which of the following is a true statement regarding the V-model lifecycle?
a. Testing involvement starts when the code is complete
b. The test process is integrated with the development process
c. The software is built in increments and each increment has activities for requirements, design, build and test
d. All activities for development and test are completed sequentially
B
(B is correct. In the V-Model, testing activities are
paired with each development activity. A and D
are not correct. These behaviors are typical of a
waterfall model. C is not correct. This is typical of
an incremental model.)
Usability testing is an example of which type of testing?
a. Functional
b. Non-functional
c. Structural
d. Change-related
B
(B is correct. Usability is one of the non-functional
test types according to ISO 25010)
What type of testing is normally conducted to verify that a product meets a particular regulatory requirement?
a. Unit testing
b. Integration testing
c. System testing
d. Acceptance testing
D
(D is correct per the syllabus. Regulatory
acceptance is a form of acceptance testing. The
other types of testing should be conducted as
well, but the focus on the compliance with the
regulatory requirements should occur during
acceptance testing. It is a good practice to
conduct this testing as early as possible, but
formal acceptance by a regulatory agency is
normally done during acceptance testing.)
You have been receiving daily builds from the developers. Even though they are documenting the fixes they are including in each build, you are finding that the fixes either aren't in the build or are not working. What type of testing is best suited for finding these issues?
a. Unit testing
b. System testing
c. Confirmation testing
d. Regression testing
C
(C is correct. Confirmation testing will determine if
a fix is present in a build and if it actually fixes
the defect it is supposed to fix. A is not correct
because this would be conducted by the
developer as they fix the issues. While it might
catch a fix that doesn't work, it's not likely to
catch the check-in/build process that is excluding
the fix from the build. B is not correct because
system testing will take longer to pinpoint this
problem and may result in more troubleshooting
time when the problem is discovered again. D is
not correct because this is the testing that is
done to see if there have been any unintended
changes in the software's behavior.)
In a formal review, which role is normally responsible for documenting all the open issues?
a. The facilitator
b. The author
c. The scribe
d. The manager
C
(C is correct. The scribe is normally responsible
for documenting all issues, problems and open points. The author may take notes as well,
but that is not their primary role.)
Which testing technique would be most effective in determining and improving the
maintainability of the code (assuming developers fix what is found)?
a. Peer reviews
b. Static analysis
c. Dynamic testing
d. Unit testing
B
(B is correct. Static analysis with tools will give
the best results for improving maintainability and
ensuring adherence to good coding practices. A
may help, but depending on the peer, it may just
reinforce bad habits. C is unlikely to help
because only failures will be observed, not the
underlying code. D may help, but since it's
usually done by the developer who wrote the
code, it's unlikely to highlight maintenance
issues.)
For a formal review, at what point in the process are the entry and exit criteria defined?
a. Planning
b. Review initiation
c. Individual review
d. Fixing and reporting
A
(A is correct. The entry and exit criteria should be
defined during the planning step in the review
process. These should be evaluated at each step
to ensure the product is ready for the next step in
the process. B, C and D are not correct because
the criteria should already be defined by this
point.)
If the author of the code is leading a code review for other developers and testers, what type of review is it?
a. An informal development review
b. A walkthrough
c. An inspection
d. An audit
B
(B is correct. When the author of the document being reviewed is leading the review, this is typically a walkthrough. A is not correct because there is a group of people conversing in the review rather than a more formal review meeting. C is a formal review type and the facilitator (moderator) would lead the review. D is not correct because an audit is normally conducted by a third party external to the team.)
You are participating in a role-based review session. Your assigned role is that of a senior citizen. The product is an online banking application that is targeted for use on smart phones. You are currently reviewing the user interface of the product with a prototype that works on iPhones. Which of the following is an area that you should review?
a. The speed of response from the banking backend
b. The attractiveness of the application
c. The size and clarity of the instruction text
d. The reliability of the application when the connection is dropped
C
(C is correct. As a senior citizen, you should be
checking that the size of the instruction text is
clearly readable and you should verify that the
instructions will make sense to a senior citizen. A
is not correct because this is not particular to
your role as senior citizens are generally not as
time compressed as younger users. B is not
correct. Although it is nice, attractiveness tends
to be very subjective and is difficult to evaluate at
a role level. D is not correct because the
reliability will be assumed by the senior citizen.
This should be reviewed by the technical users
and people on the go who are likely to move
between covered and not covered zones.)
Which of the following is an extension of equivalence partitioning?
a. Decision tables
b. Decision testing
c. Boundary value analysis
d. State transition testing
C
(C is correct. BVA is an extension of EP, looking
at the boundaries on the edges of the partitions
(or classes) of values.)
If test cases are derived from looking at the code, what type of test design technique is being used?
a. Black-box
b. White-box
c. Specification-based
d. Behavior-based
B
(B is correct. A, C and D are all black-box and
use the specifications or requirements for the
test design)
Which of the following is a good reason to use experience-based testing?
a. You can find defects that might be missed by more formal techniques
b. You can test for defects that only experienced users would encounter
c. You can target the developer's efforts to the areas that users will be more likely to use
d. It is supported by strong tools and can be automated
A
(A is correct. Experience-based testing is often used to fill in the gaps left by the more formal testing techniques. B is not correct because the technique depends on the experience of the testers, not the experience level of the users. C is not correct because it is a test technique, not a development technique. D is not correct because there is little tool support for these techniques, and automation is not usually a goal since the effectiveness depends on the experience of the tester)
If you are using error guessing to target your testing, which type of testing are you doing?
a. Specification-based
b. Structure-based
c. Experience-based
d. Reference-based
C
(C is correct. This is an experience-based
technique.)
If you are testing a module of code, how do you determine the level of decision coverage you have achieved?
a. By taking the number of decisions you have tested and dividing that by the total number of executable statements in the module
b. By taking the number of decisions you have tested and dividing that by the total number of decisions in the module
c. By taking the number of decisions you have tested and dividing that by the total lines of code in the module
d. By taking the number of decision outcomes you have tested and dividing that by the total number of decision outcomes in the module
D
(D is correct. Decision coverage looks at the
number of decision outcomes, not just the
decision statements)
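Answer d can be made concrete with a small sketch (the decision names and tallies below are invented): decision coverage divides the decision outcomes exercised by the total number of decision outcomes.

```python
# Hypothetical tally: for each decision in the module, record which
# outcomes (True/False) the executed tests have exercised.
exercised = {
    "if score >= 50": {True, False},   # both outcomes hit
    "if retries < 3": {True},          # False outcome never hit
}

total_outcomes = 2 * len(exercised)             # each decision has 2 outcomes
hit = sum(len(v) for v in exercised.values())
coverage = hit / total_outcomes
print(f"decision coverage: {coverage:.0%}")     # 3 of 4 outcomes -> 75%
```

Note that counting decisions instead of decision outcomes (options a-c) would report 100% here even though the False branch of the second decision was never taken.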
Which of the following best describes the behaviors defined in a use case that should be covered by tests?
a. Positive path and negative path
b. Basic, exception and error
c. Normal, error, data, and integration
d. Control flow, data flow and decision paths
B
(B is correct. The basic, exception and error
behaviors should be covered with tests for a use
case. A good use case should define all of these.
A is not correct. Positive path equals the basic
behavior and negative path equals the error
behavior but neither handles the exception
paths. C and D are not correct as they are
investigating areas that would not be covered in
a use case)
You are testing a machine that scores exam papers and assigns grades. Based on the score achieved, the grades are as follows: 1-49 = F, 50-59 = D-, 60-69 = D, 70-79 = C, 80-89 = B, 90-100 = A
If you apply equivalence partitioning, how many test cases will you need to achieve minimum test coverage?
a. 6
b. 8
c. 10
d. 12
B
(B is correct. You need a test for the invalid too
low (0 or less), one for each valid partition, and
one for invalid too high (>100). It may not allow
you to enter a value > 100, but it should be
tested to make sure it provides a proper error)
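The partitioning can be sketched as follows (the `grade` function is a hypothetical implementation of the stated bands): one representative score per valid partition plus one per invalid partition gives the 8 tests.

```python
def grade(score):
    """Hypothetical implementation of the stated grading bands."""
    if not 1 <= score <= 100:
        raise ValueError("score out of range")
    for upper, letter in [(49, "F"), (59, "D-"), (69, "D"),
                          (79, "C"), (89, "B"), (100, "A")]:
        if score <= upper:
            return letter

# One representative value per valid partition (6 tests) ...
for score, expected in {25: "F", 55: "D-", 65: "D",
                        75: "C", 85: "B", 95: "A"}.items():
    assert grade(score) == expected

# ... plus one per invalid partition (too low, too high) = 8 tests total.
for invalid in (0, 101):
    try:
        grade(invalid)
        raise AssertionError("expected the score to be rejected")
    except ValueError:
        pass
```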
You are testing a machine that scores exam papers and assigns grades. Based on the score achieved, the grades are as follows: 1-49 = F, 50-59 = D-, 60-69 = D, 70-79 = C, 80-89 = B, 90-100 = A
If you apply two-value boundary value analysis, how many test cases will you need to achieve minimum test coverage?
a. 8
b. 10
c. 12
d. 14
D
(D is correct. You need the following test cases:
0, 1, 49, 50, 59, 60, 69, 70, 79, 80, 89, 90, 100,
101)
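The 14 values can also be derived mechanically: for each partition, two-value BVA takes each boundary value plus its nearest neighbour in the adjacent partition, and duplicates collapse. A minimal sketch:

```python
# Partition edges from the grading bands; two-value BVA takes each boundary
# value plus its nearest neighbour outside the partition.
partitions = [(1, 49), (50, 59), (60, 69), (70, 79), (80, 89), (90, 100)]

values = set()
for lo, hi in partitions:
    values.update({lo - 1, lo, hi, hi + 1})

print(sorted(values))   # duplicates collapse to the 14 listed values
print(len(values))      # 14
```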
You have been given the following conditions and results from those condition combinations. Given this information, using the decision table technique, what is the minimum number of test cases you would need to test these conditions?
Conditions:
Valid Cash
Valid Credit Card
Valid Debit Card
Valid Pin
Bank Accepts
Valid Selection
Item in Stock
Results:
Reject Cash
Reject Card
Error Message
Return Cash
Refund Card
Sell Item
a. 7
b. 13
c. 15
d. 18
C
(C is correct.)
You are testing a thermostat for a heating/air conditioning system. You have been given the following requirements:
• When the temperature is below 70 degrees, turn on the heating system
• When the temperature is above 75 degrees, turn on the air conditioning system
• When the temperature is between 70 and 75 degrees, inclusive, turn on fan only
Which of the following is the minimum set of test temperature values to achieve 100% two-value boundary value analysis coverage?
a. 70, 75
b. 65, 72, 80
c. 69, 70, 75, 76
d. 70, 71, 74, 75, 76
C
(C is correct. For the heating system, the values to test are 69, 70. For the air conditioning
system, the values are 75, 76
For the fan only, the values are 69, 70, 75, 76
<-- 69 | 70 - 75 | 76 ->
The proper test set combines all these values,
69, 70, 75, 76)
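A hypothetical thermostat function makes the four boundary checks concrete (the mode names below are invented; the logic follows the stated requirements):

```python
def thermostat(temp):
    """Hypothetical mode selection per the stated requirements."""
    if temp < 70:
        return "heating"
    if temp > 75:
        return "air conditioning"
    return "fan only"            # 70-75 inclusive

# Two-value BVA: each boundary value and its nearest neighbour.
for temp, mode in [(69, "heating"), (70, "fan only"),
                   (75, "fan only"), (76, "air conditioning")]:
    assert thermostat(temp) == mode
```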
Who is normally responsible for the creation and update of a test plan for a project?
a. The project manager
b. The test manager
c. The tester
d. The product owner
B
(B is correct per the syllabus. A is not correct. The
PM is usually responsible for the overall project
plan. C is not correct. The tester may contribute
to the plan but is generally not responsible for
writing it. D is not correct. The PO may contribute
to the plan and review the plan, but they do not
normally write the plan)
You have been given the following requirement:
A user must log in to the system with a valid username and password. If they fail to enter the correct combination three times, they will receive an error and will have to wait 10 minutes before trying again. The test terminates when the user successfully logs in.
How many test cases are needed to provide 100% state transition coverage?
a. 1
b. 2
c. 4
d. 5
B
(B is correct. One test case covers an immediate successful login. A second covers three failed attempts, the error and 10-minute wait, and the subsequent successful login. Together these exercise all the states and transitions described in the requirement.)
Which of the following is a project risk?
a. A module that performs incorrect calculations due to a defect in a formula
b. A failed performance test
c. An issue with the interface between the system under test and a peripheral device
d. A problem with the development manager which is resulting in his rejecting all defect
reports
D
(D is a project risk. The other three are product
risks.)
Which of the following is a benefit of test independence?
a. Testers have different biases than developers
b. Testers are isolated from the development team
c. Testers lack information about the test object
d. Testers will accept responsibility for quality
A
(A is correct. Testers bring different biases than
the developers have, so they may be able to see
different types of failures and check for
assumptions the developers may have made. B
and C are disadvantages. D is definitely a
disadvantage and is sometimes seen in the
developers losing their sense of responsibility for
quality.)
You are working in a team of testers who are all writing test cases. You have noticed that there is a significant inconsistency with the length and amount of detail in the different test cases. Where should the test case guidelines have been documented?
a. The test plan
b. The test approach
c. The test case template
d. The project plan
A
(A is correct. The level of detail and structure for
the test documentation should be included in the
test plan.)
You have received the following description section in a defect report:
The report executed per the attached steps, but the data was incorrect. For example, the information in column 1 was wrong. See the attached screenshot. This report is critical to the users and they will be unable to do their jobs without this information.
What is the biggest problem with this defect report?
a. The developer won't know how important the problem is
b. The developer won't know how to repeat the test
c. The developer won't be able to see what the tester is saying is wrong
d. The developer won't know what the tester expected to see
D
(D is correct. From this information, the developer
only knows the tester thinks the information is
wrong, but it's not clear what was expected. A is
incorrect because, although vague, the incident
report seems to indicate this is an important
problem. B is incorrect because the steps are attached (or so the report says). C is incorrect because the attached screenshot should show what is wrong in column 1.)
Which of the following is an example of a good exit criterion from system testing?
a. All tests should be completed
b. The project budget should be spent
c. All defects should be fixed
d. All severity 1 defects must be resolved
D
(D is correct. This is measurable and clear. A is
not correct because completed is not a clear
term and this might not be a reasonable goal. B
is not correct because spending the budget is
generally not the goal and you wouldn't expect
the budget to be spent when system testing is
done because that leaves no money for
acceptance testing or roll out. C is not correct because it is phrased as a "should" rather than a "must" and is probably not realistic anyway)
A metric that tracks the number of test cases executed is gathered during which activity in the test process?
a. Planning
b. Implementation
c. Execution
d. Reporting
C
(C is correct. Test execution metrics are gathered
during the Test Execution activity. These metrics
are used in reporting.)
Which of the following variances should be explained in the Test Summary Report?
a. The variances between the weekly status reports and the test exit criteria
b. The variances between the defects found and the defects fixed
c. The variances between what was planned for testing and what was actually tested
d. The variances between the test cases executed and the total number of test cases
C
(C is correct. The variances or deviations
between the test plan and the testing that was
actually done must be explained in the test
summary report. A is not correct because if the
weekly status reports have been tracking
incorrectly to the test exit criteria, something is
wrong and should have been caught a lot earlier.
B is not correct because this information should
be included in the test summary report, but a
variance is expected. D is not correct because
this should be tracked in the metrics section of
the report rather than as a variance.)
You have been given the following set of test cases to run. You have been instructed to run them in order by risk and to accomplish the testing as quickly as possible to provide feedback to the developers as soon as possible. Given this information, what is the best order in which to run these tests?
ID  Duration    Risk    Dependency
1   30 minutes  Low     6
2   10 minutes  Medium  none
3   45 minutes  High    1
4   30 minutes  High    2
5   10 minutes  Medium  4
6   15 minutes  Low     2
a. 2, 4, 5, 6, 1, 3
b. 4, 3, 2, 5, 6, 1
c. 2, 5, 6, 4, 1, 3
d. 6, 1, 3, 2, 4, 5
A
(A is correct because it addresses the highest risk
and fastest tests first. It runs a fast medium test
before a slow and more dependent high risk test
because this will give feedback to the developers
more quickly.)
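The rationale behind answer A can be sketched as a greedy scheduler: among the tests whose dependencies have already run, pick the highest risk first and break ties by shortest duration for fast feedback. This is one possible interpretation of the instructions, not the only valid algorithm, but under these assumptions it reproduces order A:

```python
# Greedy sketch: among tests whose dependencies have run, pick the highest
# risk first, breaking ties by shortest duration for fast feedback.
RISK = {"High": 2, "Medium": 1, "Low": 0}
tests = {  # id: (duration in minutes, risk, dependency or None)
    1: (30, "Low", 6),  2: (10, "Medium", None), 3: (45, "High", 1),
    4: (30, "High", 2), 5: (10, "Medium", 4),    6: (15, "Low", 2),
}

order, done = [], set()
while len(order) < len(tests):
    ready = [t for t, (_, _, dep) in tests.items()
             if t not in done and (dep is None or dep in done)]
    nxt = max(ready, key=lambda t: (RISK[tests[t][1]], -tests[t][0]))
    order.append(nxt)
    done.add(nxt)

print(order)   # [2, 4, 5, 6, 1, 3] -- the order given in answer a
```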
Why is it important to define usage guidelines for a new tool?
a. Because this is a proven success factor in tool deployment
b. Because this will ensure the licensing restrictions are enforced
c. Because management needs to understand the details of the tool usage
d. Because this will provide the information needed for the cost/benefit analysis
A
(A is correct. This is one of the success factors in
tool deployment. B is not correct because the
usage guidelines are for the actual users, not the
overall organization which is where the licensing
requirements might be a concern. C is not
correct because management is not focusing on
the details. D is not correct because the
cost/benefit information needs to be gathered
long before the tool is procured.)
Which of the following is an example of a tool that supports static testing?
a. A tool that assists with tracking the results of reviews
b. A defect tracking tool
c. A test automation tool
d. A tool that helps design test cases for security testing
A
(A is correct. Reviews are a form of static testing
and a tool that supports reviews is an example of
a tool that supports static testing. B is an
example of a management tool used for defect
management. C is an example of a test
execution tool. D is an example of a test design
tool.)
Which one of the following answers describes a test condition?
a) An attribute of a component or system specified or implied by requirements
documentation.
b) An aspect of the test basis that is relevant to achieve specific test objectives.
c) The degree to which a software product provides functions which meet stated and
implied
needs when the software is used under specified conditions.
d) The percentage of all single condition outcomes that independently affect a decision
outcome that have been exercised by a test suite.
b
(a) Is not correct: Definition of feature according to glossary.
b) Is correct: From glossary.
c) Is not correct: Definition of functionality suitability according to glossary.
d) Is not correct: Definition of modified condition decision coverage according to glossary.)
Which of the following statements is a valid objective for testing?
a) The test should start as late as possible so that development had enough time to create a
good product.
b) To find as many failures as possible so that defects can be identified and corrected.
c) To prove that all possible defects are identified.
d) To prove that any remaining defects will not cause any failures
b
(a) Is not correct: Contradiction to principle 3: "Early testing saves time and money".
b) Is correct: This is one objective of testing (syllabus chapter 1.1.1).
c) Is not correct: Principle #2 states that exhaustive testing is impossible, so one can never
prove that all defects were identified (syllabus chapter 1.3).
d) Is not correct: To make an assessment whether a defect will cause a failure or not, one
has to detect the defect first. Saying that no remaining defect will cause a failure implicitly
means that all defects were found. This again contradicts principle #2. (syllabus chapter
1.3).)
Which of the following statements correctly describes the difference between testing and
debugging?
a) Testing identifies the source of defects; debugging analyzes the defects and proposes
prevention activities.
b) Dynamic testing shows failures caused by defects; debugging finds, analyzes, and removes the causes of failures in the software.
c) Testing removes defects; debugging identifies the causes of failures.
d) Dynamic testing prevents the causes of failures; debugging removes the failures
b
(a) Is not correct: Testing does not identify the source of defects; debugging identifies, analyzes, and removes the defects (syllabus chapter 1.1.2). b) Is correct: Dynamic testing can show failures that are caused by defects in the software, while debugging finds, analyzes, and removes the defects that are the source of those failures (syllabus 1.1.2). c) Is not correct: Testing does not remove defects; debugging removes the defects that cause the failures (syllabus chapter 1.1.2). d) Is not correct: Dynamic testing does not directly prevent the causes of failures (defects), but detects the presence of defects (syllabus chapter 1.3, 1. principle).)
Which one of the statements below describes the most common situation for a failure discovered during testing or in production?
a) The product crashed when the user selected an option in a dialog box.
b) The wrong version of a compiled source code file was included in the build.
c) The computation algorithm used the wrong input variables.
d) The developer misinterpreted the requirement for the algorithm.
a
(a) Is correct: A crash is clearly noticeable by the user (syllabus chapter 1.2.3).
b) Is not correct: this is a defect, not a failure, since there is something wrong in the code. It
may not result in a visible or noticeable failure, for example if the changes in the source
code file are only in comments (syllabus chapter 1.2.3).
c) Is not correct: The use of wrong input variables may not result in a visible or noticeable
failure, for example if nobody uses this particular algorithm; or if the wrong input variable
has a similar value to the correct input variable; or if the FALSE result of the algorithm is not
used (syllabus chapter 1.2.3).
d) Is not correct: This type of fault will not necessarily lead to a failure; for example, if no
one uses this special algorithm (syllabus chapter 1.2.3).)
Mr. Test has been testing software applications on mobile devices for a period of 5 years. He has a wealth of experience in testing mobile applications and achieves better results in a shorter time than others. Over several months Mr. Test did not modify the existing automated test cases and did not create any new test cases. This leads to fewer and fewer defects being found by executing the tests. What principle of testing did Mr. Test not observe?
a) Testing depends on the environment.
b) Exhaustive testing is not possible.
c) Repeating the same tests will not find new defects.
d) Defects cluster together.
c
(a) Is not correct: Testing is context dependent, regardless of it being manual or automated
(syllabus chapter 1.3, 6. principle), but does not result in detecting a decreasing number of
faults as described above.
b) Is not correct: Exhaustive testing is impossible, regardless of the amount of effort put into
testing (syllabus chapter 1.3, 2. principle).
c) Is correct: Syllabus 1.3: principle #5 says "If the same tests are repeated over and over
again, eventually these tests no longer find any new defects. To detect new defects, existing
tests and test data may need changing, and new tests may need to be written." Automated
regression testing of the same test cases will not bring new findings.
d) Is not correct: "Defects cluster together" (syllabus chapter 1.3, 4. principle). A small number of modules usually contain most of the defects, but this does not mean that fewer and fewer defects will be found.)
In what way can testing be part of Quality Assurance?
a) It ensures that requirements are detailed enough.
b) It contributes to the achievement of quality in a variety of ways.
c) It ensures that standards in the organization are followed.
d) It measures the quality of software in terms of number of executed test cases.
b
(a) Is not correct: This is quality assurance but not testing
(syllabus chapter 1.2.2).
b) Is correct: Syllabus 1.2.2. Testing contributes to the
achievement of quality in a variety of ways, e.g. such as
reducing the risk of inadequate software quality (syllabus chapter 1.1.1).
c) Is not correct: This is quality assurance but not testing
(syllabus chapter 1.2.2).
d) Is not correct: The quality cannot be measured by counting the number of executed test
cases without knowing the outcome (syllabus chapter 1.2.2).)
Which of the following activities is part of the main activity "test analysis" in the test process?
a) Identifying any required infrastructure and tools.
b) Creating test suites from test scripts.
c) Analyzing lessons learned for process improvement.
d) Evaluating the test basis for testability.
d
(a) Is not correct: This activity is performed during the test design activity (syllabus chapter
1.4.2, test design).
b) Is not correct: This activity is performed during the test implementation activity (syllabus chapter 1.4.2, test implementation).
c) Is not correct: This activity is performed during the test completion activity (syllabus
chapter 1.4.2, test completion).
d) Is correct: This activity is performed during the test analysis activity (syllabus chapter
1.4.2, test analysis).)
Match the following test work products (1-4) with the right description (A-D).
1. Test suite.
2. Test case.
3. Test script.
4. Test charter.
A. A group of test scripts with a sequence of instructions.
B. A set of instructions for the execution of a test.
C. Contains expected results.
D. An instruction of test goals and possible test ideas on how to test.
a) 1A, 2C, 3B, 4D.
b) 1D, 2B, 3A, 4C.
c) 1A, 2C, 3D, 4B.
d) 1D, 2C, 3B, 4A.
a
(Test suite: Glossary 3.2.1: "A set of test cases or test procedures to be executed in a specific test cycle" (1A).
Test case: Glossary "A set of preconditions, inputs, actions (where applicable), expected
results and post conditions, developed based on test conditions" (2C).
Test script: Glossary "A sequence of instructions for the
execution of a test" (3B).
Test charter: Glossary "A statement of test objectives, and possibly test ideas about how to
test. Documentation of test activities in session-based exploratory testing" (4D).
Thus:
a) Is correct
b) Is not correct
c) Is not correct
d) Is not correct)
How can white-box testing be applied during acceptance testing?
a) To check if large volumes of data can be transferred between integrated systems.
b) To check if all code statements and code decision paths have been executed.
c) To check if all work process flows have been covered.
d) To cover all web page navigations.
c
(a) Is not correct: Relevant for integration testing (syllabus chapter 2.2.2).
b) Is not correct: Relevant for component testing (syllabus chapter 2.2.1).
c) Is correct: syllabus chapter 2.3.5: For acceptance testing, tests are designed to cover all
supported financial data file structures and value ranges for bank-to-bank transfers.
d) Is not correct: Relevant for system testing (syllabus chapter 2.2.3).)
Which of the following statements comparing component testing and system testing is TRUE?
a) Component testing verifies the functionality of software modules, program objects, and
classes that are separately testable, whereas system testing verifies interfaces between
components and interactions between different parts of the system.
b) Test cases for component testing are usually derived from component specifications,
design specifications, or data models, whereas test cases for system testing are usually
derived from requirement specifications or use cases.
c) Component testing only focuses on functional characteristics, whereas system testing
focuses on functional and non-functional characteristics.
d) Component testing is the responsibility of the testers, whereas system testing typically is
the responsibility of the users of the system.
b
(a) Is not correct: System testing does not test interfaces between components and
interactions between different parts of the system; this is a target of integration tests
(syllabus chapter
2.2.2).
b) Is correct: Syllabus 2.2.1: Examples of work products that can be used as a test basis for
component testing include detailed design, code, data model, component specifications.
Syllabus
2.2.3: Examples of work products for system testing include system and software
requirement specifications (functional and non-functional) use cases.
c) Is not correct: Component testing does not ONLY focus on functional characteristics.
d) Is not correct: Component tests are also executed by
developers, whereas system testing typically is the
responsibility of testers (syllabus chapter 2.2).)
Which one of the following is TRUE?
a) The purpose of regression testing is to check if the correction has been successfully
implemented, while the purpose of confirmation testing is to confirm that the correction
has no side effects.
b) The purpose of regression testing is to detect unintended side effects, while the purpose
of confirmation testing is to check if the system is still working in a new environment.
c) The purpose of regression testing is to detect unintended side effects, while the purpose
of confirmation testing is to check if the original defect has been fixed.
d) The purpose of regression testing is to check if the new functionality is working, while the
purpose of confirmation testing is to check if the original defect has been fixed.
c
(a) Is not correct: Regression testing does not check successful implementation of
corrections and confirmation testing does not check for side effects (syllabus chapter 2.4).
b) Is not correct: The statement about confirmation testing should be about regression
testing (syllabus chapter 2.4).
c) Is correct: Syllabus chapter 2.3.4.
d) Is not correct: Testing new functionality is not regression testing (syllabus chapter 2.4).)
Which one of the following is the BEST definition of an incremental development model?
a) Defining requirements, designing software and testing are done in phases where in each phase a piece of the system is added.
b) A phase in the development process should begin when the previous phase is complete.
c) Testing is viewed as a separate phase which takes place after development has been
completed.
d) Testing is added to development as an increment.
a
(a) Is correct: Syllabus chapter 2.1.1: incremental development involves establishing
requirements, designing, building, and testing a system in pieces.
b) Is not correct: This is a sequential model (syllabus chapter 2.1.1).
c) Is not correct: This describes the waterfall model (syllabus chapter 2.1.1).
d) Is not correct: Testing alone is not an increment/additional step in the development
(syllabus chapter 2.1.1).)
Which of the following should NOT be a trigger for maintenance testing?
a) Decision to test the maintainability of the software.
b) Decision to test the system after migration to a new operating platform.
c) Decision to test if archived data is possible to be retrieved.
d) Decision to test after "hot fixes".
a
(a) Is correct: This is maintainability testing, not maintenance testing.
b) Is not correct: This is a trigger for maintenance testing, see the syllabus chapter 2.4.1:
Operational tests of the new environment as well as of the changed software.
c) Is not correct: This is the trigger for maintenance testing, see the syllabus chapter 2.4.1:
testing restore/retrieve procedures after archiving for long retention periods.
d) Is not correct: This is the trigger for maintenance testing, see the syllabus chapter 2.4.1:
Reactive modification of a delivered software product to correct emergency defects that
have caused actual failures)
Which of the following options are roles in a formal review?
a) Developer, Moderator, Review leader, Tester.
b) Author, Moderator, Manager, Developer.
c) Author, Manager, Review leader, Designer.
d) Author, Moderator, Review leader, Scribe.
d
(a) Is not correct: Tester and developer are NOT roles in a formal review as per syllabus
chapter 3.2.2.
b) Is not correct: Developer is NOT a role in a formal review as per syllabus chapter 3.2.2.
c) Is not correct: Designer is NOT a role in a formal review as per syllabus chapter 3.2.2.
d) Is correct: See syllabus chapter 3.2.2.)
Which activities are carried out within the planning of a formal review?
a) Collection of metrics for the evaluation of the effectiveness of the review.
b) Answer any questions the participants may have.
c) Verification of input criteria for the review.
d) Evaluation of the review findings against the exit criteria.
c
(a) Is not correct: 'Collection of metrics' belongs to the main
activity "Fixing and Reporting" (syllabus chapter 3.2.1).
b) Is not correct: 'Answer any question.' belongs to the main activity "Initiate Review"
(syllabus chapter 3.2.1).
c) Is correct: According to syllabus chapter 3.2.1: The checking of entry criteria takes place in
the planning of a formal review.
d) Is not correct: The evaluation of the review findings against the exit criteria belongs to the
main activity "Issue communication and analysis" (syllabus chapter 3.2.1).)
Which of the review types below is the BEST option to choose when the review must follow
a formal process based on rules and checklists?
a) Informal Review.
b) Technical Review.
c) Inspection.
d) Walkthrough.
c
(a) Is not correct: Informal review does not use a formal process (syllabus chapter 3.2.3).
b) Is not correct: Use of checklists is optional (syllabus chapter 3.2.3).
c) Is correct: As per syllabus 3.2.3: inspection is a formal process based on rules and
checklists.
d) Is not correct: Does not explicitly require a formal process and the use of checklists is
optional (syllabus chapter 3.2.3).)
Which TWO of the following statements about static testing are MOST true?
a) Static testing is a cheap way to detect and remove defects.
b) Static testing makes dynamic testing less challenging.
c) Static testing allows early validation of user requirements.
d) Static testing makes it possible to find run-time problems early in the lifecycle.
e) When testing safety-critical system, static testing has less value because dynamic testing
finds the defects better.
Select TWO options
a,c
(a) Is correct: Syllabus chapter 3.1.2: defects found early are often much cheaper to remove
than defects detected later in the lifecycle.
b) Is not correct: Dynamic testing still has its challenging objectives (syllabus chapter 3.1.2).
c) Is correct: Syllabus chapter 3.1.2: preventing defects in design or coding by uncovering
omissions, inaccuracies, inconsistencies, ambiguities, and redundancies in requirements.
d) Is not correct: This is dynamic testing (see glossary V.3.2).
e) Is not correct: Static testing is important for safety-critical computer systems (syllabus
chapter 3.1).)
You will be invited to a review. The work product to be reviewed is a description of the
in-house document creation process. The aim of the description is to present the work
distribution between the different roles involved in the process in a way that can be clearly
understood by everyone.
You will be invited to a checklist-based review. The checklist will also be sent to you. It
includes the following points:
i. Is the person who performs the activity clearly identified for each activity?
ii. Are the entry criteria clearly defined for each activity?
iii. Are the exit criteria clearly defined for each activity?
iv. Are the supporting roles and their scope of work clearly defined for each activity?
In the following we show an excerpt of the work result to be reviewed, for which you should
use the checklist above:
"After checking the customer documentation for completeness and correctness, the
software architect creates the system specification. Once the software architect has
completed the system specification, he invites testers and verifiers to the review. A checklist
describes the scope of the review. Each invited reviewer creates review comments - if
necessary - and concludes the review with an official review done-comment."
Which of the following statements about your review is correct?
a) Point ii) of the checklist has been violated because it is not clear which condition must be
fulfilled in order to invite to the review.
b) You notice that in addition to the tester and the verifier, the validator must also be
invited. Since this item is not part of your checklist, you do not create a corresponding
comment.
c) Point iii) of the checklist has been violated as it is not clear what marks the review as
completed.
d) Point i) of the checklist has been violated because it is not clear who is providing the
checklist for the invitation to the review.
d
(a) Is not correct: It is described that the software architect must have completed the
system specification before inviting to the review.
b) Is not correct: In syllabus chapter 3.2.4 'checklist-based', last sentence, it is documented
that you should also look for defects outside the checklist.
c) Is not correct: It is described that each reviewer concludes the review with an official
review done-comment.
d) Is correct: It is described that a checklist is available, but not who provides it.)
What is checklist-based testing?
a) A test technique in which tests are derived based on the tester's knowledge of past faults,
or general knowledge of failures.
b) Procedure to derive and/or select test cases based on an analysis of the specification,
either functional or non-functional, of a component or system without reference to its
internal structure.
c) An experience-based test technique whereby the experienced tester uses a list of items to
be noted, checked, or remembered, or a set of rules or criteria against which a product has
to be verified.
d) An approach to testing where the testers dynamically design and execute tests based on
their knowledge, exploration of the test item and the results of previous tests.
c
(a) Is not correct: This is error guessing, defined in glossary V.3.2.
b) Is not correct: This is black-box test technique, defined in glossary V.3.2.
c) Is correct: Defined in glossary V.3.2.
d) Is not correct: This is exploratory testing, defined in Glossary V.3.2.)
Which one of the following options is categorized as a black-box test technique?
a) A technique based on analysis of the architecture.
b) A technique checking that the test object is working according to the technical design.
c) A technique based on the knowledge of past faults, or general knowledge of failures.
d) A technique based on formal requirements.
d
(a) Is not correct: This is a white-box test technique (syllabus chapter 2.2.2 and 4.1.2).
b) Is not correct: This is a white-box test technique (syllabus chapter 4.1.2).
c) Is not correct: This is an experience-based test technique (syllabus chapter 4.4).
d) Is correct: Syllabus 4.1.2: Black-box test techniques are based on an analysis of the
appropriate test basis (e.g. formal requirements documents, specifications, use cases, user
stories).)
The following statement refers to decision coverage:
"When the code contains only a single 'if' statement and no loops or CASE statements, and
its execution is not nested within the test, any single test case we run will result in 50%
decision coverage."
Which of the following statement is correct?
a) The statement is true. Any single test case provides 100% statement coverage and
therefore 50% decision coverage.
b) The statement is true. Any single test case would cause the outcome of the "if" statement
to be either true or false.
c) The statement is false. A single test case can only guarantee 25% decision coverage in this
case.
d) The statement is false. The statement is too broad. It may be correct or not, depending
on the tested software.
b
(a) Is not correct: While the given statement is true, the explanation is not. The relationship
between statement and decision coverage is misrepresented (syllabus chapter 4.3).
b) Is correct: Since any test case will cause the outcome of the "if" statement to be either
TRUE or FALSE, by definition we achieved 50% decision coverage (syllabus chapter 4.3).
c) Is not correct: A single test case can give more than 25% decision coverage; according to
the statement above it always gives 50% decision coverage (syllabus chapter 4.3).
d) Is not correct: The statement is specific and always true, because each test case achieves
50% decision coverage (syllabus chapter 4.3).)
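The reasoning can be illustrated with a minimal sketch; the function and values are hypothetical, not from the syllabus. A single `if` has exactly two outcomes (TRUE and FALSE), and any single test case exercises exactly one of them.

```python
# Hypothetical function with a single, non-nested 'if' and no loops.
def apply_discount(price: float) -> float:
    if price > 100:          # the only decision: outcomes TRUE and FALSE
        price = price * 0.9  # executed only on the TRUE outcome
    return price

# One test case covers one of the two outcomes -> 1/2 = 50% decision coverage.
assert apply_discount(200) == 180.0   # exercises the TRUE outcome only
# A second test case covering the FALSE outcome raises this to 100%.
assert apply_discount(50) == 50
```

Each test case on its own thus yields exactly 50% decision coverage, as option b) states.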
Which one of the following is the description of statement coverage?
a) It is a metric, which is the percentage of test cases that have been executed.
b) It is a metric, which is the percentage of statements in the source code that have been
executed.
c) It is a metric, which is the number of statements in the source code that have been
executed by test cases that are passed.
d) It is a metric, that gives a true/false confirmation if all statements are covered or not.
b
(a) Is not correct: Statement coverage measures the percentage of statements exercised by
test cases.
b) Is correct: Syllabus 4.3.1: statement testing exercises the executable statements in the
code. Statement coverage is measured as the number of statements executed by the tests
divided by the total number of executable statements in the test object, normally expressed
as a percentage.
c) Is not correct: The coverage does not measure pass/fail.
d) Is not correct: It is a metric and does not provide true/false statements.)
Which statement about the relationship between statement coverage and decision
coverage is true?
a) 100% decision coverage also guarantees 100% statement coverage.
b) 100% statement coverage also guarantees 100% decision coverage.
c) 50% decision coverage also guarantees 50% statement coverage.
d) Decision coverage can never reach 100%.
a
(a) Is correct: The statement is true. Achieving 100% decision coverage guarantees 100%
statement coverage (syllabus chapter 4.3.3 third paragraph).
b) Is not correct: The statement is false because achieving 100 % statement coverage does
not in any case mean that the decision coverage is 100% (syllabus chapter 4.3.3 third
paragraph).
c) Is not correct: The statement is false; the guarantee only holds at 100% coverage, so 50%
decision coverage does not guarantee 50% statement coverage (syllabus chapter 4.3.3 third
paragraph).
d) Is not correct: The statement is false (syllabus chapter 4.3.3).)
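A short sketch of why the subsumption works in only one direction; the function is hypothetical. An `if` without an `else` lets one test case execute every statement while exercising only one of the two decision outcomes.

```python
# Hypothetical function: an 'if' with no 'else' branch.
def clamp_to_zero(x: int) -> int:
    if x < 0:    # decision with TRUE and FALSE outcomes
        x = 0    # the only statement inside the 'if'
    return x

# This single test case executes every statement (100% statement coverage)
# but only the TRUE outcome of the decision (50% decision coverage).
assert clamp_to_zero(-5) == 0
# 100% decision coverage would force both outcomes, and thereby execute
# every statement, which is why answer a) holds but b) does not.
```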
For which of the following situations is exploratory testing suitable?
a) When time pressure requires speeding up the execution of tests already specified.
b) When the system is developed incrementally and no test charter is available.
c) When testers are available who have sufficient knowledge of similar applications and
technologies.
d) When an advanced knowledge of the system already exists and evidence is to be provided
that it should be tested intensively.
c
(a) Is not correct: Exploratory testing is not suitable for speeding up tests that are already
specified. It is most useful when there are few or inadequate specifications or significant
time pressure on testing (syllabus chapter 4.4.2).
b) Is not correct: The absence of a test charter, which may have been derived from the test
analysis, is a poor precondition for the use of exploratory testing (syllabus chapter 1.4.3 and
4.4.2).
c) Is correct: Exploratory tests should be performed by experienced testers with knowledge
of similar applications and technologies (syllabus chapter 4.4 and 1.4.2).
d) Is not correct: Exploratory testing alone is not suitable to provide evidence that the test
was very intensive; instead, the evidence is provided in combination with other test
techniques (syllabus chapter 4.4.2).)
An employee's bonus is to be calculated. It cannot be negative, but it can be calculated
down to zero. The bonus is based on the length of employment:
less than or equal to 2 years,
more than 2 years but less than 5 years,
5 to 10 years inclusive,
longer than 10 years.
What is the minimum number of test cases required to cover all valid equivalence partitions
for calculating the bonus?
a) 3.
b) 5.
c) 2.
d) 4.
d
(a) Is not correct: one too few (see the four correct partitions in d).
b) Is not correct: one too many (see the four correct partitions in d).
c) Is not correct: two too few (see the four correct partitions in d).
d) Is correct: The 4 equivalence partitions correspond to the description in the question; i.e.
at least one test case must be created for each equivalence partition
1. Equivalence partition: 0 ≤ employment time ≤ 2.
2. Equivalence partition: 2 < employment time < 5.
3. Equivalence partition: 5 ≤ employment time ≤ 10.
4. Equivalence partition: 10 < employment time)
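The four partitions can be sketched as a small Python check (the function is hypothetical); one representative value per partition gives the minimum of four test cases.

```python
def bonus_partition(years: float) -> int:
    """Map employment time to one of the four valid equivalence partitions."""
    if years <= 2:
        return 1   # partition 1: 0 <= years <= 2
    elif years < 5:
        return 2   # partition 2: 2 < years < 5
    elif years <= 10:
        return 3   # partition 3: 5 <= years <= 10
    else:
        return 4   # partition 4: years > 10

# One representative test value per partition -> minimum of 4 test cases.
representatives = [1, 3, 7, 12]
assert [bonus_partition(y) for y in representatives] == [1, 2, 3, 4]
```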
A company's employees are paid bonuses if they work more than a year in the company and
achieve a target which was individually agreed in advance.
These facts can be shown in a decision table:
Condition                             T1   T2   T3   T4
C1 Employment for more than a year    YES  NO   NO   YES
C2 Agreed Target                      NO   NO   YES  YES
C3 Achieved Target                    NO   NO   YES  YES
Action: Bonus Payment                 NO   NO   NO   YES
Which of the following test cases represents a situation that can happen in real life, and is
missing in the above decision table?
a) Condition1 = YES, Condition2 = NO, Condition3 = YES, Action= NO
b) Condition1 = YES, Condition2 = YES, Condition3 = NO, Action= YES
c) Condition1 = NO, Condition2 = NO, Condition3 = YES, Action= NO
d) Condition1 = NO, Condition2 = YES, Condition3 = NO, Action= NO
d
(a) Is not correct: If there was no agreement on targets, it is impossible to achieve the
targets. Since this situation can't occur, it is not a scenario that happens in reality.
b) Is not correct: The test case is objectively wrong, since under these conditions no bonus
is paid because the agreed target was not achieved.
c) Is not correct: There was no agreement on targets, so it is impossible to achieve the
targets. Since this situation can't occur, it is not a scenario that happens in reality.
d) Is correct: The test case describes the situation where the too-short period of
employment and the non-fulfilment of the agreed target lead to non-payment of the bonus.
This situation can occur in practice, but is missing in the decision table.)
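The business rule behind the decision table can be sketched in a few lines; this is a hypothetical implementation assuming a bonus is paid only when all three conditions hold.

```python
def bonus_paid(employed_over_year: bool, target_agreed: bool,
               target_achieved: bool) -> bool:
    """Bonus is paid only if all three conditions hold."""
    return employed_over_year and target_agreed and target_achieved

# The missing, realistic test case from answer d):
# short employment, target agreed but not achieved -> no bonus.
assert bonus_paid(False, True, False) is False
# The combination in answers a) and c) (target achieved without being
# agreed) cannot occur in reality, so it has no place in the table.
```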
A video application has the following requirement: The application shall allow playing a
video on the following display resolutions:
1. 640x480.
2. 1280x720.
3. 1600x1200.
4. 1920x1080.
Which of the following lists of test cases is a result of applying the equivalence partitioning
test technique to test this requirement?
a) Verify that the application can play a video on a display of size 1920x1080 (1 test case).
b) Verify that the application can play a video on a display of size 640x480 and 1920x1080
(2 test cases).
c) Verify that the application can play a video on each of the display sizes in the requirement
(4 test cases).
d) Verify that the application can play a video on any one of the display sizes in the
requirement (1 test case).
c
(a) Is not correct: See correct answer c).
b) Is not correct: See correct answer c).
c) Is correct: This is a case where the requirement gives an enumeration of discrete values.
Each enumeration value is an equivalence class by itself; therefore, each will be tested when
using equivalence partitioning test technique.
d) Is not correct: See correct answer c).)
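Because each enumerated resolution is its own partition, the technique yields one test case per value. A hypothetical sketch (the check itself is invented for illustration):

```python
# Each enumerated resolution is a discrete equivalence partition by itself.
SUPPORTED = ["640x480", "1280x720", "1600x1200", "1920x1080"]

def can_play(resolution: str) -> bool:
    # Hypothetical stand-in for the real playback check.
    return resolution in SUPPORTED

# Equivalence partitioning -> one test case per enumeration value (4 in total).
test_cases = [(r, True) for r in SUPPORTED]
for resolution, expected in test_cases:
    assert can_play(resolution) == expected
```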
Which of the following statements BEST describes how tasks are divided between the test
manager and the tester?
a) The test manager plans testing activities and chooses the standards to be followed, while
the tester chooses the tools and sets the tool usage guidelines.
b) The test manager plans and controls the testing activities, while the tester specifies the
tests and decides on the test automation framework.
c) The test manager plans, monitors, and controls the testing activities, while the tester
designs tests and decides on the release of the test object.
d) The test manager plans and organizes the testing and specifies the test cases, while the
tester prioritizes and executes the tests.
b
(a) Is not correct: Selection of tools is a test manager task (syllabus 5.1.2, 11th bullet).
b) Is correct: See syllabus 5.1.2 (test manager: 2nd, 4th and 8th bullets; tester: 5th and 6th
bullets).
c) Is not correct: The tester does not decide on the release of the test object (syllabus
chapter 5.1.2)
d) Is not correct: The tester specifies the test cases, the test manager does the prioritization
(syllabus chapter 5.1.2).)
Which TWO of the following can affect and be part of the (initial) test planning?
a) Budget limitations.
b) Test objectives.
c) Test log.
d) Failure rate.
e) Use cases.
Select TWO options.
a,b
(a) Is correct: According to syllabus chapter 5.2.1, budgeting (7th bullet) and making
decisions about what to test (4th bullet) are documented in the test plan. This means that
when you are planning the test and there are budget limitations, prioritizing is needed:
what should be tested and what should be omitted.
b) Is correct: See syllabus 5.2.1.
c) Is not correct: See syllabus 1.4.2, test monitoring and control.
d) Is not correct: See syllabus chapter 5.3.1, common test metrics, 4th bullet.
e) Is not correct: It is a part of test analysis (syllabus chapter 1.4.2).)
Which of the following metrics would be MOST useful to monitor during test execution?
a) Percentage of executed test cases.
b) Average number of testers involved in the test execution.
c) Coverage of requirements by source code.
d) Percentage of test cases already created and reviewed.
a
(a) Is correct: Syllabus chapter 5.3.1: test case execution (e.g. number of test cases run/not
run, and test cases passed/failed).
b) Is not correct: This metric can be measured, but its value is low. The number of testers
does not give any information about the quality of the test object or test progress.
c) Is not correct: the coverage of requirements by source code is not measured during test
execution. At most, the TEST(!) coverage of the code or requirements is measured.
d) Is not correct: This metric is part of test preparation and not test execution.)
Which one of the following is NOT included in a test summary report?
a) Defining pass/fail criteria and objectives of testing.
b) Deviations from the test approach.
c) Measurements of actual progress against exit criteria.
d) Evaluation of the quality of the test item.
a
(a) Is correct: This information has been defined earlier in the test project.
b) Is not correct: This information is included in a test report; see the syllabus chapter 5.3.2:
information on what occurred during a test period.
c) Is not correct: This information is included in a test report; see syllabus chapter 5.3.2:
Information and metrics to support recommendations and decisions about future actions,
such as an assessment of defects remaining, the economic benefit of continued testing,
outstanding risks, and the level of confidence in the tested software.
d) Is not correct: This information is included in a test report; see syllabus chapter 5.3.2:
Information and metrics to support recommendations and decisions about future actions,
such as an assessment of defects remaining, the economic benefit of continued testing,
outstanding risks, and the level of confidence in the tested software.)
Which of the following lists contains only typical exit criteria from testing?
a) Reliability measures, test coverage, test cost, schedule and status about fixing errors and
remaining risks.
b) Reliability measures, test coverage, degree of tester's independence and product
completeness.
c) Reliability measures, test coverage, test cost, availability of test environment, time to
market and product completeness.
d) Time to market, remaining defects, tester qualification, availability of testable use cases,
test coverage and test cost.
a
(a) Is correct: See syllabus chapter 5.2.3 (all 5 dots).
b) Is not correct: The "degree of tester's independence" does not play a role in exit criteria
(syllabus chapter 5.2.3).
c) Is not correct: "Availability of test environment" is an entry criterion (syllabus 5.2.3, 3rd
bullet).
d) Is not correct: "The availability of testable requirements" is an entry criterion (syllabus
chapter 5.2.3).)
The project develops a "smart" heating thermostat. The control algorithms of the
thermostat were modeled as Matlab/Simulink models and run on an internet-connected
server. The thermostat uses the specifications of the server to trigger the heating valves.
The test manager has defined the following test strategy/approach in the test plan:
1. The acceptance test for the whole system is executed as an experience-based test.
2. The control algorithms on the server are tested during implementation using
continuous integration.
3. The functional test of the thermostat is performed as risk-based testing.
4. The security tests of data / communication via the internet are executed together with
external security experts.
What four common types of test strategies/approaches did the test manager implement in
the test plan?
a) methodical, analytical, reactive and regression-averse.
b) analytical, model-based, consultative and reactive.
c) model-based, methodical, analytical and consultative.
d) regression-averse, consultative, reactive and methodical.
b
(The mapping of points 1 to 4 to approaches according to syllabus chapter 5.2.2 is only
correct for option b). The mappings can be justified as follows:
1. See syllabus chapter 5.2.2, 7th bullet, last sentence: exploratory testing is a common
technique employed in reactive strategies, which places it in the experience-based testing
category.
2. The control algorithms are modeled on the server, so they are tested with a model-based
strategy (see syllabus chapter 5.2.2, 2nd bullet).
3. See syllabus chapter 5.2.2, 1st bullet, second sentence: "Risk-based testing is an example
of an analytical approach, where tests are designed and prioritized based on the level of
risk".
4. See syllabus chapter 5.2.2, 5th bullet: "This type of test strategy is driven primarily by the
advice, guidance, or instructions of stakeholders, business domain experts, or technology
experts, who may be outside the test team or outside the organization itself."
Thus:
a) Is not correct
b) Is correct
c) Is not correct
d) Is not correct)
A speed control and reporting system has the following characteristics:
If you drive 50 km/h or less, nothing will happen.
If you drive faster than 50 km/h, but no more than 55 km/h, you will be warned.
If you drive faster than 55 km/h but not more than 60 km/h, you will be fined.
If you drive faster than 60 km/h, your driving license will be suspended.
The speed in km/h is available to the system as an integer value.
Which would be the most likely set of values (km/h) identified by applying the boundary
value analysis, where only the boundary values on the boundaries of the equivalence
classes are relevant?
a) 0, 49, 50, 54, 59, 60.
b) 50, 55, 60.
c) 49, 50, 54, 55, 60, 62.
d) 50, 51, 55, 56, 60, 61.
d
(d) Is correct: With integer speeds the equivalence partitions are ≤ 50, 51-55, 56-60 and
> 60. Two-value boundary value analysis tests each boundary value and its closest
neighbour in the adjacent partition, giving 50, 51, 55, 56, 60 and 61.)
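The rules can be sketched as a hypothetical implementation to show why two-value boundary analysis yields exactly these six values (function name and return strings are invented for illustration):

```python
def speed_action(speed: int) -> str:
    """Hypothetical implementation of the described rules (speed in km/h)."""
    if speed <= 50:
        return "nothing"
    elif speed <= 55:
        return "warning"
    elif speed <= 60:
        return "fine"
    else:
        return "license suspended"

# Two-value boundary value analysis: for each partition boundary, test the
# value on the boundary and its closest neighbour in the next partition.
boundary_values = [50, 51, 55, 56, 60, 61]   # answer d)
expected = ["nothing", "warning", "warning", "fine", "fine",
            "license suspended"]
assert [speed_action(v) for v in boundary_values] == expected
```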
Which one of the following is the characteristic of a metrics-based approach for test
estimation?
a) Budget which was used by a previous similar test project.
b) Overall experience collected in interviews with test managers.
c) Estimation of effort for test automation agreed in the test team.
d) Average of calculations collected from business experts.
a
(a) Is correct: See syllabus chapter 5.2.6: the metrics-based approach: estimating the testing
effort based on metrics of former similar projects or based on typical values.
b) Is not correct: This is expert-based approach: estimating the tasks based on estimates
made by the owners of the tasks or by experts (syllabus chapter 5.2.6).
c) Is not correct: This is expert-based approach: estimating the tasks based on estimates
made by the responsible team of the tasks or by experts (syllabus chapter 5.2.6).
d) Is not correct: This is expert-based approach: estimating the tasks based on estimates
made by the owners of the tasks or by experts (syllabus chapter 5.2.6).)
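The metrics-based approach can be illustrated with a small worked example; all figures below are invented for illustration, not taken from the syllabus.

```python
# Metrics-based estimation sketch: extrapolate from measured effort on a
# former, similar project (hypothetical numbers).
past_test_cases = 400
past_effort_hours = 800                                  # measured previously
hours_per_test_case = past_effort_hours / past_test_cases  # 2.0 h per case

new_test_cases = 550
estimated_effort = new_test_cases * hours_per_test_case
assert estimated_effort == 1100.0   # estimated hours for the new project
```

An expert-based estimate, by contrast, would come from interviewing the task owners rather than from such historical metrics.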
You are testing a new version of software for a coffee machine. The machine can prepare
different types of coffee based on four categories, i.e., coffee size, sugar, milk, and syrup.
The criteria are as follows:
Coffee size (small, medium, large),
Sugar (none, 1 unit, 2 units, 3 units, 4 units),
Milk (yes or no),
Coffee flavor syrup (no syrup, caramel, hazelnut, vanilla).
Now you are writing a defect report with the following information:
Title: Low coffee temperature.
Short summary: When you select coffee with milk, the time for preparing coffee is too long
and the temperature of the beverage is too low (less than 40 °C).
Expected result: The temperature of coffee should be standard (about 75 °C).
Degree of risk: Medium
Priority: Normal
What valuable information was omitted in the above defect report?
a) The actual test result.
b) Data identifying the tested coffee machine.
c) Status of the defect.
d) Ideas for improving the test case
b
(a) Is not correct: The test result is given in the short summary.
b) Is correct: When testing different versions of software, identifying information is
necessary (syllabus chapter 5.6, paragraph: "A defect report...." 4. dot).
c) Is not correct: You are just writing the defect report; hence, the status is automatically
open.
d) Is not correct: This information is useful for the tester but does not need to be included in
the defect report.)
Which one of the following is MOST likely to be a benefit of test execution tools?
a) It is easy to create regression tests.
b) It is easy to maintain version control of test assets.
c) It is easy to design tests for security testing.
d) It is easy to run regression tests
d
(a) Is not correct: The benefit lies not in creating regression tests but in executing them.
b) Is not correct: This is done by configuration management tools.
c) Is not correct: This needs specialized tools.
d) Is correct: Syllabus chapter 6.1.2: Reduction in repetitive manual work (e.g. running
regression tests, environment set up/tear down tasks, re-entering the same test data, and
checking against coding standards), thus saving time)
Which test tool (A-D) is characterized by the classification (1-4) below?
1. Tool support for management of testing and testware.
2. Tool support for static testing.
3. Tool support for test execution and logging.
4. Tool support for performance measurement and dynamic analysis.
A. Coverage tools.
B. Configuration management tools.
C. Review tools.
D. Monitoring tools.
a) 1A, 2B, 3D, 4C.
b) 1B, 2C, 3D, 4A.
c) 1A, 2C, 3D, 4B.
d) 1B, 2C, 3A, 4D.
d
(Tool support for management of testing and testware, syllabus chapter 6.1.1, configuration
management tools (1B).
Tool support for static testing, syllabus chapter 6.1.1, tools that support reviews (2C).
Tool support for test execution and logging, syllabus chapter 6.1.1, coverage tools (3A).
Tool support for performance measurement and dynamic analysis, syllabus chapter 6.1.1,
performance testing tools/monitoring tools/dynamic analysis tools (4D).
Thus:
a) Is not correct
b) Is not correct
c) Is not correct
d) Is correct)
Which of the following provides the BEST description of a test case?
a) A document specifying a sequence of actions for the execution of a test. Also known as
test script or manual test script.
b) A set of input values and expected results, with execution preconditions and execution
postconditions, developed for a particular test condition.
c) An attribute of a system specified by requirements documentation (for example
reliability, usability or design constraints) that is executed in a test.
d) An item or event of a system that could be verified by one or more test conditions, e.g., a
function, transaction, feature, quality attribute, or structural element.
b
(a) Is not correct: Based on definition of a test procedure specification.
b) Correct: Based on definition from Glossary.
c) Is not correct: Based on Glossary definition of feature.
d) Is not correct: Based on definition of test condition but replaced the term test case with
test condition.)
Which of the following is a major objective of testing?
a) To prevent defects.
b) To validate the project plan works as required.
c) To gain confidence in the development team.
d) To make release decisions for the system under test.
a
(a) Correct: One of the major objectives of testing from the syllabus (1.1.1).
b) Is not correct: Validation of the project plan would be a project management activity.
c) Is not correct: Gaining confidence in the development team would be achieved through
observation and experience.
d) Is not correct: One of the main objectives during acceptance testing may be to give
information to stakeholders about the risk of releasing the system at a given time - so
testing provides information for stakeholders to make decisions, it does not provide the
release decision)
Which of the following is an example of a failure in a car cruise control system?
a) The developer of the system forgot to rename variables after a cut-and-paste operation.
b) Unnecessary code that sounds an alarm when reversing was included in the system.
c) The system stops maintaining a set speed when the radio volume is increased or
decreased.
d) The design specification for the system wrongly states speeds in km/h.
c
(a) Is not correct: This is an example of a mistake made by the developer.
b) Is not correct: This is an example of a defect (something wrong in the code that may
cause a failure).
c) Correct: This is a deviation from the expected functionality - a cruise control system
should not be affected by the radio.
d) Is not correct: This is an example of a defect (something wrong in a specification that may
cause a failure if subsequently implemented).)
Which of the following is a defect rather than a root cause in a fitness tracker?
a) Because he was unfamiliar with the domain of fitness training, the author of the
requirements wrongly assumed that users wanted heartbeat in beats per hour.
b) The tester of the smartphone interface had not been trained in state transition testing, so
missed a major defect.
c) An incorrect configuration variable implemented for the GPS function could cause
location problems during daylight saving times.
d) Because she had never worked on wearable devices before, the designer of the user
interface misunderstood the effects of reflected sunlight.
c
(a) Is not correct: The lack of familiarity of the requirements author with the fitness domain
is a root cause.
b) Is not correct: The lack of training of the tester in state transition testing was one of the
root causes of the defect (the developer presumably created the defect, as well).
c) Correct: The incorrect configuration data represents faulty software in the fitness tracker
(a defect), that may cause failures.
d) Is not correct: The lack of experience in designing user interfaces for wearable devices is a
typical example of a root cause of a defect.)
As a result of risk analysis, more testing is being directed to those areas of the system under
test where initial testing found more defects than average.
Which of the following testing principles is being applied?
a) Beware of the pesticide paradox.
b) Testing is context dependent.
c) Absence-of-errors is a fallacy.
d) Defects cluster together
d
(a) Is not correct: 'Beware of the pesticide paradox' is concerned with the decreasing
defect-finding effectiveness of re-running the same tests.
b) Is not correct: This testing principle is concerned with performing testing differently
based on the context (e.g. games vs safety-critical).
c) Is not correct: This testing principle is concerned with the difference between a tested
and fixed system and a system that is fit for use. The absence of 'errors' does not mean the
system meets users' needs.
d) Correct: If clusters of defects are identified (areas of the system containing more defects
than average), then testing effort should be focused on these areas.)
Given the following test activities and tasks:
A. Test design
B. Test implementation
C. Test execution
D. Test completion
1. Entering change requests for open defect reports
2. Identifying test data to support the test cases
3. Prioritizing test procedures and creating test data
4. Analyzing discrepancies to determine their cause
Which of the following BEST matches the activities with the tasks?
a) A-2, B-3, C-4, D-1
b) A-2, B-1, C-3, D-4
c) A-3, B-2, C-4, D-1
d) A-3, B-2, C-1, D-4
a
(The correct pairing of test activities and tasks, according to the syllabus (1.4.2) is:
A. Test design - (2) Identifying test data to support the test cases
B. Test implementation - (3) Prioritizing test procedures and creating test data
C. Test execution - (4) Analyzing discrepancies to determine their cause
D. Test completion - (1) Entering change requests for open defect reports
Thus, option A is correct.)
Which of the following BEST describes how value is added by maintaining traceability
between the test
basis and test artifacts?
a) Maintenance testing can be fully automated based on changes to the initial requirements.
b) It is possible to determine if a new test case has increased coverage of the requirements.
c) Test managers can identify which testers found the highest severity defects.
d) Areas that may be impacted by side-effects of a change can be targeted by confirmation
testing
b
(a) Is not correct: Traceability will allow existing test cases to be linked with updated and
deleted requirements (although there is no support for new requirements), but it will not
help with the automation of maintenance testing.
b) Correct: If all test cases are linked with requirements, then whenever a new test case
(with traceability) is added, it is possible to see if any previously-uncovered requirements
are covered by the new test case.
c) Is not correct: Traceability between the test basis and test artifacts will not provide
information on which testers found high-severity defects, and, even if this information could
be determined, it would be of limited value.
d) Is not correct: Traceability can help with identifying test cases affected by changes,
however areas impacted by side-effects would be the focus of regression testing.)
Which of the following qualities is MORE likely to be found in a tester's mindset rather than
in a
developer's?
a) Experience on which to base their efforts.
b) Ability to see what might go wrong.
c) Good communication with team members.
d) Attention to detail.
b
(a) Is not correct: Both developers and testers gain from experience.
b) Correct: Developers are often more interested in designing and building solutions than in
contemplating what might be wrong with those solutions.
c) Is not correct: Both developers and testers should be able to communicate well.
d) Is not correct: Both developers and testers need to pay attention to detail.)
Given the following statements about the relationships between software development
activities and
test activities in the software development lifecycle:
1. Each development activity should have a corresponding testing activity.
2. Reviewing should start as soon as final versions of documents become available.
3. The design and implementation of tests should start during the corresponding
development activity.
4. Testing activities should start in the early stages of the software development lifecycle.
Which of the following CORRECTLY shows which are true and false?
a) True - 1, 2; False - 3, 4
b) True - 2, 3; False - 1, 4
c) True - 1, 2, 4; False - 3
d) True - 1, 4; False - 2, 3
d
(Considering each statement:
1. Each development activity should have a corresponding testing activity. TRUE - as
described in the syllabus (2.1.1).
2. Reviewing should start as soon as final versions of documents become available. FALSE - it
should start as soon as drafts are available, as per syllabus (2.1.1).
3. The design and implementation of tests should start during the corresponding
development activity. FALSE - the analysis and design of tests should start during the
corresponding development activity, not the implementation, as per syllabus (2.1.1).
4. Testing activities should start in the early stages of the software development lifecycle.
TRUE - as described in the syllabus (2.1.1).
Thus, option D is correct.)
Given that the testing being performed has the following attributes:
based on interface specifications;
focused on finding failures in communication;
the test approach uses both functional and structural test types.
Which of the following test levels is MOST likely being performed?
a) Component integration testing.
b) Acceptance testing.
c) System testing.
d) Component testing.
A
(Considering the scenario and the syllabus (2.2):
1. 'testing is based on interface specifications' - the test basis for component integration
testing includes interface specifications (along with communication protocol specification),
while these are not included for any of the other test levels
2. 'testing is focused on finding failures in communication' - failures in the communication
between tested components is included as a typical failure for component integration
testing, but failures in communication is not included for any of the other test levels
3. 'the test approach uses both functional and structural test types' - functional and
structural test types are both included as possible approaches for component integration
testing, and would also be appropriate for any of the other test levels, although they are
only otherwise explicitly mentioned in the syllabus for system testing
Thus, option A is correct.)
Which of the following statements about test types and test levels is CORRECT?
a) Functional and non-functional testing can be performed at system and acceptance test
levels, while white-box testing is restricted to component and integration testing.
b) Functional testing can be performed at any test level, while white-box testing is restricted
to component testing.
c) It is possible to perform functional, non-functional and white-box testing at any test level.
d) Functional and non-functional testing can be performed at any test level, while white-box
testing is restricted to component and integration testing.
c
(a) Is not correct: It is possible to perform any of the test types (functional, non-functional,
white-box) at any test level - so, although it is correct that functional and non-functional
testing can be performed at system and acceptance test levels, it is incorrect to state that
white-box testing is restricted to component and integration testing.
b) Is not correct: It is possible to perform any of the test types (functional, non-functional,
white-box) at any test level - so, it is incorrect to state that white-box testing is restricted to
component testing.
c) Correct: It is possible to perform any of the test types (functional, non-functional,
white-box) at any test level.
d) Is not correct: It is possible to perform any of the test types (functional, non-functional,
white-box) at any test level - so, it is incorrect to state that white-box testing is
restricted to component and integration testing.)
Which of the following statements BEST compares the purposes of confirmation testing and
regression
testing?
a) The purpose of regression testing is to ensure that all previously run tests still work
correctly,
while the purpose of confirmation testing is to ensure that any fixes made to one part of the
system have not adversely affected other parts.
b) The purpose of confirmation testing is to check that a previously found defect has been
fixed,
while the purpose of regression testing is to ensure that no other parts of the system have
been
adversely affected by the fix.
c) The purpose of regression testing is to ensure that any changes to one part of the system
have
not caused another part to fail, while the purpose of confirmation testing is to check that all
previously run tests still provide the same results as before.
d) The purpose of confirmation testing is to confirm that changes to the system were made
successfully, while the purpose of regression testing is to run tests that previously failed to
ensure that they now work correctly.
b
(a) Is not correct: Although the description of regression testing is largely correct, the
description of confirmation testing (which should be testing a defect has been fixed) is not
correct.
b) Correct: The descriptions of both confirmation and regression testing match the intent of
those in the syllabus.
c) Is not correct: Although the description of regression testing is largely correct, the
description of confirmation testing (re-running all previously run tests to get the same
results) is not correct, as the purpose of confirmation testing is to check that tests that
previously failed now pass (the fix worked).
d) Is not correct: Although the description of confirmation testing is largely correct, the
description of regression testing (re-running tests that previously failed) is not correct (this
is a more detailed description of confirmation testing).)
Which of the following statements CORRECTLY describes a role of impact analysis in
Maintenance
Testing?
a) Impact analysis is used when deciding if a fix to a maintained system is worthwhile.
b) Impact analysis is used to identify how data should be migrated into the maintained
system.
c) Impact analysis is used to decide which hot fixes are of most value to the user.
d) Impact analysis is used to determine the effectiveness of new maintenance test cases.
a
(a) Correct: Impact analysis may be used to identify those areas of the system that will be
affected by the fix, and so the extent of the impact (e.g. necessary regression testing) can be
used when deciding if the change is worthwhile, as per syllabus (2.4.2).
b) Is not correct: Although testing migrated data is part of maintenance testing (see
conversion testing), impact analysis does not identify how this is done.
c) Is not correct: Impact analysis shows which parts of a system are affected by a change, so
it can show the difference between different hot fixes in terms of the impact on the system,
however it does not give any indication of the value of the changes to the user.
d) Is not correct: Impact analysis shows which parts of a system are affected by a change, it
cannot provide an indication of the effectiveness of test cases.)
Which of the following statements CORRECTLY reflects the value of static testing?
a) By introducing reviews, we have found that both the quality of specifications and the time
required for development and testing have increased.
b) Using static testing means we have better control and cheaper defect management due
to the ease of removing defects later in the lifecycle.
c) Now that we require the use of static analysis, missed requirements have decreased and
communication between testers and developers has improved.
d) Since we started using static analysis, we find coding defects that might not have been
found by performing only dynamic testing.
d
(a) Is not correct: Reviews should increase the quality of specifications, however the time
required for development and testing should decrease, as per syllabus (3.1.2).
b) Is not correct: Removing defects is generally easier earlier in the lifecycle, as per syllabus
(3.1.2).
c) Is not correct: Reviews will result in fewer missed requirements and better
communication between testers and developers, however this is not true for static analysis,
as per syllabus (3.1.2).
d) Correct: This is a benefit of static analysis, as per syllabus (3.1.2).)
Which of the following sequences BEST shows the main activities of the work product
review process?
a) Initiate review - Reviewer selection - Individual review - Issue communication and analysis
-
Rework
b) Planning & preparation - Overview meeting - Individual review - Fix - Report
c) Preparation - Issue Detection - Issue communication and analysis - Rework - Report
d) Plan - Initiate review - Individual review - Issue communication and analysis - Fix defects
&
report
d
(a) Is not correct: Reviewer selection is not one of the main activities for the work product
review process in the syllabus (3.2.1).
b) Is not correct: This is a possible set of activities for a work product review process, but it is
missing the 'Issue communication and analysis' activity, and it does not match the main
activities for the work product review process in the syllabus (3.2.1).
c) Is not correct: This is a possible set of activities for a work product review process, but it is
missing the 'initiate review' activity, and it does not match the main activities for the work
product review process in the syllabus (3.2.1).
d) Correct: This is the order of the activities as provided in the syllabus (3.2.1).)
Which of the following CORRECTLY matches the roles and responsibilities in a formal
review?
a) Manager - Decides on the execution of reviews
b) Review Leader - Ensures effective running of review meetings
c) Scribe - Fixes defects in the work product under review
d) Moderator - Monitors ongoing cost-effectiveness
a
(a) Correct: As stated in the syllabus (3.2.2).
b) Is not correct: The moderator should ensure the effective running of review meetings, as
per syllabus (3.2.2).
c) Is not correct: The author fixes the work product under review, as per syllabus (3.2.2).
d) Is not correct: The manager monitors ongoing cost effectiveness, as per syllabus (3.2.2).)
The reviews being used in your organization have the following attributes:
There is a role of a scribe
The purpose is to detect potential defects
The review meeting is led by the author
Reviewers find potential defects by individual review
A review report is produced
Which of the following review types is MOST likely being used?
a) Informal Review
b) Walkthrough
c) Technical Review
d) Inspection
b
(Considering the attributes and the syllabus (3.2.3):
There is a role of a scribe - specified for walkthroughs, technical reviews and inspections;
thus, the reviews being performed cannot be informal reviews.
The purpose is to detect potential defects - the purpose of detecting potential defects is
specified for all types of review.
The review meeting is led by the author - this is not allowed for inspections, and the leader
is typically not the author for technical reviews, but author-led meetings are part of
walkthroughs and are allowed for informal reviews.
Reviewers find potential defects by individual review - all types of review can include
individual review (even informal reviews).
A review report is produced - all types of reviews can produce a review report, although it
would be less likely for an informal review.
Thus, option B is correct.)
You have been asked to take part in a checklist-based review of the following excerpt from
the
requirements specification for a library system:
Librarians can:
1. Register new borrowers.
2. Return books from borrowers.
3. Accept fines from borrowers.
4. Add new books to the system with their ISBN, author and title.
5. Remove books from the system.
6. Get system responses within 5 seconds.
Borrowers can:
7. Borrow a maximum of 3 books at one time.
8. View the history of books they have borrowed/reserved.
9. Be fined for failing to return a book within 3 weeks.
10. Get system responses within 3 seconds.
11. Borrow a book at no cost for a maximum of 4 weeks.
12. Reserve books (if they are on-loan).
All users (librarians and borrowers):
13. Can search for books by ISBN, author, or title.
14. Can browse the system catalogue.
15. The system shall respond to user requests within 3 seconds.
16. The user interface shall be easy-to-use.
You have been assigned the checklist entry that requires you to review the specification for
inconsistencies between individual requirements (i.e. conflicts between requirements).
Which of the following CORRECTLY identifies inconsistencies between pairs of
requirements?
a) 6-10, 6-15, 7-12
b) 6-15, 9-11
c) 6-10, 6-15, 9-11
d) 6-15, 7-12
b
(Considering the potential inconsistencies:
6-10 - If librarians should get system responses within 5 seconds, it is NOT inconsistent for
borrowers to get system responses within 3 seconds.
6-15 - If librarians should get system responses within 5 seconds, it is inconsistent for all
users to get system responses within 3 seconds.
7-12 - If borrowers can borrow a maximum of 3 books at one time it is NOT inconsistent for
them to also reserve books (if they are on-loan).
9-11 - If a borrower can be fined for failing to return a book within 3 weeks, it is
inconsistent for them also to be allowed to borrow a book at no cost for a maximum of 4
weeks, as the maximum loan periods differ.
Thus, of the potential inconsistencies, 6-15 and 9-11 are valid inconsistencies, and so option
B is correct.)
Which of the following provides the BEST description of exploratory testing?
a) A testing practice in which an in-depth investigation of the background of the test object
is used
to identify potential weaknesses that are examined by test cases.
b) An approach to testing whereby the testers dynamically design and execute tests based
on
their knowledge, exploration of the test item and the results of previous tests.
c) An approach to test design in which test activities are planned as uninterrupted sessions
of test
analysis and design, often used in conjunction with checklist-based testing.
d) Testing based on the tester's experience, knowledge and intuition.
b
(a) Is not correct: Exploratory testing is often carried out when timescales are short, so
making in-depth investigations of the background of the test object is unlikely.
b) Correct: Glossary definition.
c) Is not correct: Based on the Glossary definition of session based testing, but with test
execution replaced by test analysis.
d) Is not correct: Glossary definition of experience-based testing)
Which of the following BEST matches the descriptions with the different categories of test
techniques?
1. Coverage is measured based on a selected structure of the test object.
2. The processing within the test object is checked.
3. Tests are based on defects' likelihood and their distribution.
4. Deviations from the requirements are checked.
5. User stories are used as the test basis.
Black - Black-box test techniques
White - White-box test techniques
Experience - Experience-based test techniques
a) Black - 4, 5 White - 1, 2 Experience - 3
b) Black - 3 White - 1, 2 Experience - 4, 5
c) Black - 4 White - 1, 2 Experience - 3, 5
d) Black - 1, 3, 5 White - 2 Experience - 4
A
(The correct pairing of descriptions with the different categories of test techniques,
according to the syllabus (4.1.1) is:
Black-box test techniques Deviations from the requirements are checked (4) User stories are
used as the test basis (5)
White-box test techniques Coverage is measured based on a selected structure of the test
object (1) The processing within the test object is checked (2)
Experience-based test techniques Tests are based on defects' likelihood and their
distribution (3)
Thus, option A is correct.)
A fitness app measures the number of steps that are walked each day and provides
feedback to
encourage the user to keep fit.
The feedback for different numbers of steps should be:
Up to 1000 - Couch Potato!
Above 1000, up to 2000 - Lazy Bones!
Above 2000, up to 4000 - Getting There!
Above 4000, up to 6000 - Not Bad!
Above 6000 - Way to Go!
Which of the following sets of test inputs would achieve the highest equivalence partition
coverage?
a) 0, 1000, 2000, 3000, 4000
b) 1000, 2001, 4000, 4001, 6000
c) 123, 2345, 3456, 4567, 5678
d) 666, 999, 2222, 5555, 6666
d
(The following valid equivalence partitions can be identified:
1) Up to 1000 - Couch Potato!
2) Above 1000, up to 2000 - Lazy Bones!
3) Above 2000, up to 4000 - Getting There!
4) Above 4000, up to 6000 - Not Bad!
5) Above 6000 - Way to Go!
The sets of test inputs therefore cover the following partitions:
a) 0 (1), 1000 (1), 2000 (2), 3000 (3), 4000 (3) - 3 partitions (out of 5)
b) 1000 (1), 2001 (3), 4000 (3), 4001 (4), 6000 (4) - 3 partitions (out of 5)
c) 123 (1), 2345 (3), 3456 (3), 4567 (4), 5678 (4) - 3 partitions (out of 5)
d) 666 (1), 999 (1), 2222 (3), 5555 (4), 6666 (5) - 4 partitions (out of 5)
Thus, option D is correct.)
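The partition counting above can be checked mechanically. The sketch below is illustrative only (the function names are invented for this example); it maps each step count to its feedback message from the specification and uses the distinct messages as partition identifiers:

```python
def feedback(steps):
    """Return the feedback message for a daily step count, per the spec above."""
    if steps <= 1000:
        return "Couch Potato!"
    elif steps <= 2000:
        return "Lazy Bones!"
    elif steps <= 4000:
        return "Getting There!"
    elif steps <= 6000:
        return "Not Bad!"
    else:
        return "Way to Go!"

def partitions_covered(inputs):
    """Count distinct equivalence partitions hit, using the message as the partition id."""
    return len({feedback(n) for n in inputs})

# Option d) covers 4 of the 5 partitions; option a) covers only 3:
print(partitions_covered([666, 999, 2222, 5555, 6666]))   # 4
print(partitions_covered([0, 1000, 2000, 3000, 4000]))    # 3
```

Note that 1000 and 2000 sit inside partitions 1 and 2 respectively, because each range is "up to" its upper limit.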
A daily radiation recorder for plants produces a sunshine score based on a combination of
the number
of hours a plant is exposed to the sun (below 3 hours, 3 to 6 hours or above 6 hours) and the
average
intensity of the sunshine (very low, low, medium, high).
Given the following test cases:
T Hours Intensity Score
T1 1.5 vlow 10
T2 7.0 medium 60
T3 0.5 vlow 10
What is the minimum number of additional test cases that are needed to ensure full
coverage of all valid
INPUT equivalence partitions?
a) 1
b) 2
c) 3
d) 4
b
(The following valid input equivalence partitions can be identified:
Hours
1. below 3 hours
2. 3 to 6 hours
3. above 6 hours
Intensity
4. very low
5. low
6. medium
7. high
The given test cases cover the following valid input equivalence partitions:
T1 1.5 (1) Very low (4)
T2 7.0 (3) Medium (6)
T3 0.5 (1) Very low (4)
Thus, the missing valid input equivalence partitions are: (2), (5) and (7). These can be
covered by two test cases, as (2) can be combined with either (5) or (7). Thus, option B is
correct.)
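The combining step in the explanation can be sketched as a small calculation (illustrative only; the partition labels are invented for this example). Because each test case supplies one hours-partition and one intensity-partition, missing partitions of the two inputs can be combined, and the larger missing set dictates the minimum number of additional tests:

```python
# Valid input equivalence partitions for each input (labels are illustrative).
hours_parts = {"below3", "3to6", "above6"}
intensity_parts = {"vlow", "low", "medium", "high"}

covered_hours = {"below3", "above6"}       # hit by T1/T3 and T2
covered_intensity = {"vlow", "medium"}     # hit by T1/T3 and T2

missing_h = hours_parts - covered_hours           # {"3to6"}
missing_i = intensity_parts - covered_intensity   # {"low", "high"}

# One new test can cover one missing partition of each input at once,
# so the larger missing set determines the minimum number of extra tests:
extra_tests_needed = max(len(missing_h), len(missing_i))
print(extra_tests_needed)  # 2
```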
A smart home app measures the average temperature in the house over the previous week
and
provides feedback to the occupants on their environmental-friendliness based on this
temperature.
The feedback for different average temperature ranges (to the nearest °C) should be:
Up to 10°C - Icy Cool!
11°C to 15°C - Chilled Out!
16°C to 19°C - Cool Man!
20°C to 22°C - Too Warm!
Above 22°C - Hot & Sweaty!
Using two-point BVA, which of the following sets of test inputs provides the highest level of
boundary
coverage?
a) 0°C, 11°C, 20°C, 22°C, 23°C
b) 9°C, 15°C, 19°C, 23°C, 100°C
c) 10°C, 16°C, 19°C, 22°C, 23°C
d) 14°C, 15°C, 18°C, 19°C, 21°C, 22°C
C
(The partitions (to the nearest °C) are: up to 10 | 11-15 | 16-19 | 20-22 | above 22, so the
two-point boundary values are 10, 11, 15, 16, 19, 20, 22 and 23.
The number of boundary values covered by each set of test inputs is therefore:
a) 0°C, 11°C, 20°C, 22°C, 23°C → 4 (11, 20, 22 and 23)
b) 9°C, 15°C, 19°C, 23°C, 100°C → 3 (15, 19 and 23)
c) 10°C, 16°C, 19°C, 22°C, 23°C → 5 (10, 16, 19, 22 and 23)
d) 14°C, 15°C, 18°C, 19°C, 21°C, 22°C → 3 (15, 19 and 22)
Thus, option C is correct.)
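A minimal sketch of the counting, assuming integer °C inputs (the set of boundary values is derived by hand from the partitions in the question; the names are invented for this example):

```python
# Two-point BVA for the partitions ≤10 | 11–15 | 16–19 | 20–22 | ≥23:
# each partition edge plus its nearest neighbour in the adjacent partition.
BOUNDARIES = {10, 11, 15, 16, 19, 20, 22, 23}

def boundary_coverage(inputs):
    """Number of two-point boundary values exercised by the given test inputs."""
    return len(BOUNDARIES & set(inputs))

print(boundary_coverage([10, 16, 19, 22, 23]))    # option c) -> 5
print(boundary_coverage([0, 11, 20, 22, 23]))     # option a) -> 4
print(boundary_coverage([9, 15, 19, 23, 100]))    # option b) -> 3
```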
Which of the following statements BEST describes how test cases are derived from a use
case?
a) Test cases are created to exercise defined basic, exceptional and error behaviors
performed by the system under test in collaboration with actors.
b) Test cases are derived by identifying the components included in the use case and
creating integration tests that exercise the interactions of these components.
c) Test cases are generated by analyzing the interactions of the actors with the system to
ensure the user interfaces are easy to use.
d) Test cases are derived to exercise each of the decision points in the business process
flows of the use case, to achieve 100% decision coverage of these flows.
A
(a) Correct: The syllabus (4.2.5) explains that each use case specifies some behavior that a
subject can perform in collaboration with one or more actors. It also (later) explains that
tests are designed to exercise the defined behaviors (basic, exceptional and errors).
b) Is not correct: Use cases normally specify requirements, and so do not 'include' the
components that will implement them.
c) Is not correct: Tests based on use cases do exercise interactions between the actor and
the system, but they are focused on the functionality and do not consider the ease of use of
user interfaces.
d) Is not correct: Tests do cover the use case paths through the use case, but there is no
concept of decision coverage of these paths, and certainly not of business process flows.)
Which of the following descriptions of statement coverage is CORRECT?
a) Statement coverage is a measure of the number of lines of source code (minus
comments) exercised by tests.
b) Statement coverage is a measure of the proportion of executable statements in the
source code exercised by tests.
c) Statement coverage is a measure of the percentage of lines of source code exercised by
tests.
d) Statement coverage is a measure of the number of executable statements in the source
code exercised by tests.
B
(a) Is not correct: Statement coverage is a measure of the proportion of executable
statements exercised. The number of executable statements is often close to the number of
lines of code minus the comments, but this option only talks about the number of lines of
code exercised and not the proportion exercised.
b) Correct: Statement coverage is a measure of the proportion of executable statements
exercised (normally presented as a percentage), per syllabus (4.3.1).
c) Is not correct: Statement coverage is a measure of the percentage of executable
statements exercised, however many of the lines of source code are not executable (e.g.
comments).
d) Is not correct: Statement coverage is a measure of the proportion of executable
statements exercised. This option only talks about the number of executable statements
exercised and not the proportion (or percentage) exercised.)
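The distinction between "number of lines" and "proportion of executable statements" can be illustrated with a toy function. This is a hand-annotated sketch, not a real coverage tool; the function and its statement counts are invented for this example:

```python
def grade(score):
    result = "fail"      # statement 1
    if score >= 50:      # statement 2
        result = "pass"  # statement 3 (only runs when score >= 50)
    return result        # statement 4

# Statement coverage = executed statements / executable statements.
executable = 4
coverage_pass = 4 / executable  # a test with score=60 runs statements 1-4: 100%
coverage_fail = 3 / executable  # a test with score=40 skips statement 3:   75%
print(coverage_pass, coverage_fail)  # 1.0 0.75
```

Comments and blank lines would inflate a raw line count but are not executable statements, which is why the proportion, not the line count, is the defined measure.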
Which of the following descriptions of decision coverage is CORRECT?
a) Decision coverage is a measure of the percentage of possible paths through the source
code exercised by tests.
b) Decision coverage is a measure of the percentage of business flows through the
component exercised by tests.
c) Decision coverage is a measure of the 'if' statements in the code that are exercised with
both the true and false outcomes.
d) Decision coverage is a measure of the proportion of decision outcomes in the source code
exercised by tests.
D
(a) Is not correct: A path through source code is one potential route through the code from
the entry point to the exit point that could exercise a range of decision outcomes. Two
different paths may exercise all but one of the same decision outcomes, and by just
changing a single decision outcome a new path is followed. Test cases that would achieve
decision coverage are typically a tiny subset of the test cases that would achieve path
coverage. In practice, most non-trivial programs (and all programs with unconstrained
loops, such as 'while' loops) have a potentially infinite number of possible paths through
them and so measuring the percentage covered is practically infeasible.
b) Is not correct: Coverage of business flows can be a focus of use case testing, but use cases
rarely cover a single component. It may be possible to cover the decisions within business
flows, but only if they were specified in enough detail, however this option only suggests
coverage of "business flows" as a whole.
c) Is not correct: Achieving full decision coverage does require all 'if' statements to be
exercised with both true and false outcomes, however, there are typically several other
decision points in the code (e.g. 'case' statements and the code controlling loops) that also
need to be taken into consideration when measuring decision coverage.
d) Correct: Decision coverage is a measure of the proportion of decision outcomes exercised
(normally presented as a percentage), as per syllabus (4.3.2).)
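A sketch showing why decision coverage involves more than 'if' statements: loop conditions also contribute decision outcomes. The example and its outcome counts are worked out by hand, not measured by a tool, and the function is invented for this illustration:

```python
def classify(n):
    if n < 0:          # decision 1: two outcomes (True / False)
        n = -n
    while n >= 10:     # decision 2: the loop condition also has two outcomes
        n //= 10
    return n           # returns the leading digit of |n|

# classify(-123) exercises: decision 1 True, decision 2 True and False
#   -> 3 of 4 decision outcomes (75%).
# Adding classify(5) exercises: decision 1 False, decision 2 False
#   -> all 4 of 4 outcomes (100%).
print(classify(-123), classify(5))  # 1 5
```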
Which of the following BEST describes the concept behind error guessing?
a) Error guessing requires you to imagine you are the user of the test object and guess
mistakes the user could make interacting with it.
b) Error guessing involves using your personal experience of development and the mistakes
you made as a developer.
c) Error guessing involves using your knowledge and experience of defects found in the past
and typical mistakes made by developers.
d) Error guessing requires you to rapidly duplicate the development task to identify the sort
of mistakes a developer might make.
C
(a) Is not correct: error guessing is not a usability technique for guessing how users may fail
to interact with the test object.
b) Is not correct: Although a tester who used to be a developer may use their personal
experience to help them when performing error guessing, the technique is not based on
prior knowledge of development.
c) Correct: The basic concept behind error guessing is that the tester tries to guess what
mistakes may have been made by the developer and what defects may be in the test object
based on past experience (and sometimes checklists).
d) Is not correct: Duplicating the development task has several flaws that make it
impractical, such as the requirement for the tester to have equivalent skills to the developer
and the time involved in performing the development. It is not error guessing.)
Which of the following BEST explains a benefit of independent testing?
a) The use of an independent test team allows project management to assign responsibility
for the quality of the final deliverable to the test team, so ensuring everyone is aware that
quality is the test team's overall responsibility.
b) If a test team external to the organization can be afforded, then there are distinct
benefits in terms of this external team not being so easily swayed by the delivery concerns
of project management and the need to meet strict delivery deadlines.
c) An independent test team can work totally separately from the developers, need not be
distracted with changing project requirements, and can restrict communication with the
developers to defect reporting through the defect management system.
d) When specifications contain ambiguities and inconsistencies, assumptions are made on
their interpretation, and an independent tester can be useful in questioning those
assumptions and the interpretation made by the developer.
D
(a) Is not correct: Quality should be the responsibility of everyone working on the project
and not the sole responsibility of the test team.
b) Is not correct: First, it is not a benefit if an external test team does not meet delivery
deadlines, and second, there is no reason to believe that external test teams will feel they
do not have to meet strict delivery deadlines.
c) Is not correct: It is bad practice for the test team to work in complete isolation, and we
would expect an external test team to be concerned with changing project requirements
and communicate well with developers.
d) Correct: Specifications are never perfect, meaning that assumptions will have to be made
by the developer. An independent tester is useful in that they can challenge and verify the
assumptions and subsequent interpretation made by the developer)
Which of the following tasks is MOST LIKELY to be performed by the test manager?
a) Write test summary reports based on the information gathered during testing.
b) Review tests developed by others.
c) Create the detailed test execution schedule.
d) Analyze, review, and assess requirements, specifications and models for testability
A
(a) Correct: One of the typical tasks of a test manager from the syllabus (5.1.2).
b) Is not correct: One of the typical tasks of a tester from the syllabus (5.1.2).
c) Is not correct: One of the typical tasks of a tester from the syllabus (5.1.2).
d) Is not correct: One of the typical tasks of a tester from the syllabus (5.1.2).)
Given the following examples of entry and exit criteria:
1. The original testing budget of $30,000 plus contingency of $7,000 has been spent.
2. 96% of planned tests for the drawing package have been executed and the remaining
tests are now out of scope.
3. The trading performance test environment has been designed, set-up and verified.
4. Current status is no outstanding critical defects and two high-priority ones.
5. The autopilot design specifications have been reviewed and reworked.
6. The tax rate calculation component has passed unit testing.
Which of the following BEST categorizes them as entry and exit criteria:
a) Entry criteria - 5, 6 Exit criteria - 1, 2, 3, 4
b) Entry criteria - 2, 3, 6 Exit criteria - 1, 4, 5
c) Entry criteria - 1, 3 Exit criteria - 2, 4, 5, 6
d) Entry criteria - 3, 5, 6 Exit criteria - 1, 2, 4
D
(The correct pairings of examples to entry and exit criteria are:
Entry criteria
o (3) The trading performance test environment has been designed, set-up and verified -
example of the need for a test environment to be ready before testing can begin.
o (5) The autopilot design specifications have been reviewed and reworked - example of the
need for the test basis to be available before testing can begin.
o (6) The tax rate calculation component has passed unit testing - example of the need for a
test object to have met the exit criteria for a prior level of testing before testing can begin.
Exit criteria
o (1) The original testing budget of $30,000 plus contingency of $7,000 has been spent -
example of spending the testing budget being a signal to stop testing.
o (2) 96% of planned tests for the drawing package have been executed and the remaining
tests are now out of scope - example of all the planned tests being run being a signal to stop
testing (normally used alongside the exit criteria on outstanding defects remaining).
o (4) Current status is no outstanding critical defects and two high-priority ones - example of
the number of outstanding defects achieving a planned limit being a signal to stop testing
(normally used alongside the exit criteria on planned tests being run).
Thus, option D is correct.)
Given the following priorities and dependencies for these test cases:
Test Case  Priority  Technical Dependency  Logical Dependency
TC1 High TC4 -
TC2 Low - -
TC3 High - TC4
TC4 Medium - -
TC5 Low - TC2
TC6 Medium TC5 -
Which of the following test execution schedules BEST considers the priorities and technical
and logical dependencies?
a) TC1 - TC3 - TC4 - TC6 - TC2 - TC5
b) TC4 - TC3 - TC1 - TC2 - TC5 - TC6
c) TC4 - TC1 - TC3 - TC5 - TC6 - TC2
d) TC4 - TC2 - TC5 - TC1 - TC3 - TC6
B
(The test cases should be scheduled in priority order, but the schedule must also take
account of the dependencies.
The two highest priority test cases (TC1 and TC3) are both dependent on TC4, so the first
three test cases should be scheduled as either TC4 - TC1 - TC3 or TC4 - TC3 - TC1 (we have
no way to discriminate between TC1 and TC3).
Next, we need to consider the remaining medium-priority test case, TC6. TC6 is dependent
on TC5, but TC5 is dependent on TC2, so the next three test cases must be scheduled as TC2
- TC5 - TC6. This means there are two possible optimal schedules:
TC4 - TC1 - TC3 - TC2 - TC5 - TC6 or
TC4 - TC3 - TC1 - TC2 - TC5 - TC6
Thus, option B is correct.)
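The scheduling rule applied above (run the highest-priority test case whose dependencies have already been executed) can be sketched in code. This is an illustrative sketch only; the data structures and function names are invented, and a purely greedy scheme like this can delay a high-priority test whose prerequisite has low priority, though here it happens to produce one of the two optimal schedules:

```python
# Priority order: lower number = schedule earlier.
PRIORITY = {"High": 0, "Medium": 1, "Low": 2}

# Test case -> (priority, test cases it depends on), from the table above.
TEST_CASES = {
    "TC1": ("High",   {"TC4"}),
    "TC2": ("Low",    set()),
    "TC3": ("High",   {"TC4"}),
    "TC4": ("Medium", set()),
    "TC5": ("Low",    {"TC2"}),
    "TC6": ("Medium", {"TC5"}),
}

def schedule(cases):
    done, order = set(), []
    while len(order) < len(cases):
        # Runnable = not yet executed and all dependencies already executed.
        runnable = [tc for tc, (_, deps) in cases.items()
                    if tc not in done and deps <= done]
        if not runnable:
            raise ValueError("circular dependency among test cases")
        # Pick the highest-priority runnable test; break ties by name.
        nxt = min(runnable, key=lambda tc: (PRIORITY[cases[tc][0]], tc))
        order.append(nxt)
        done.add(nxt)
    return order

print(schedule(TEST_CASES))  # ['TC4', 'TC1', 'TC3', 'TC2', 'TC5', 'TC6']
```

This yields TC4 - TC1 - TC3 - TC2 - TC5 - TC6, the first of the two optimal schedules identified above.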
Which of the following statements about test estimation approaches is CORRECT?
a) With the metrics-based approach, the estimate is based on test measures from the
project and so this estimate is only available after the testing starts.
b) With the expert-based approach, a group of expert users identified by the client
recommends the necessary testing budget.
c) With the expert-based approach, the test managers responsible for the different testing
activities predict the expected testing effort.
d) With the metrics-based approach, an average of the testing costs recorded from several
past projects is used as the testing budget
C
(a) Is not correct: Estimates may be updated as more information becomes available, but
estimates are needed to assist with planning before the testing starts.
b) Is not correct: In the expert-based approach, the experts need to be experts in testing,
not in using the test object.
c) Correct: Test managers, who will be leading the testers doing the testing, are considered
experts in their respective areas and suitable for estimating the resources needed.
d) Is not correct: While it is useful to know the testing costs from previous projects, a more
sophisticated approach is needed than simply taking an average of past projects (the new
project may not be like the previous projects, e.g. it may be far larger or far smaller than
previous projects).)
Which of the following BEST defines risk level?
a) Risk level is calculated by adding together the probabilities of all problem situations and
the financial harm that results from them.
b) Risk level is estimated by multiplying the likelihood of a threat to the system by the
chance that the threat will occur and will result in financial damage
c) Risk level is determined by a combination of the probability of an undesirable event and
the expected impact of that event.
d) Risk level is the sum of all potential hazards to a system multiplied by the sum of all
potential losses from that system.
C
(a) Is not correct: Risk is determined by considering a combination of the likelihood of
problem situations and the harm that may result from them but cannot be calculated by
adding these together (the probability would be in the range 0 to 1 and the harm could be in
dollars).
b) Is not correct: Risk is determined by considering a combination of a likelihood and an
impact. This definition only considers likelihood and chance (both forms of probability) with
no consideration of the impact (or harm).
c) Correct: As described in the syllabus (5.5.1).
d) Is not correct: Risk is determined by considering a combination of a likelihood and an
impact. This definition only considers hazards and losses (a hazard is a bad event, like a risk,
while loss is a form of impact) with no consideration of the likelihood (or probability).)
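This combination of likelihood and impact is often made concrete with ordinal scoring scales. As a hedged illustration (the scales and score values below are assumptions for this sketch, not part of the syllabus), multiplying the two scores gives a risk level that lets risks be ranked against one another:

```python
# Ordinal scales (invented for illustration): higher score = worse.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"minor": 1, "major": 2, "critical": 3}

def risk_level(likelihood, impact):
    # Risk level = combination of likelihood and impact (here, their product).
    return LIKELIHOOD[likelihood] * IMPACT[impact]

print(risk_level("likely", "minor"))     # 3
print(risk_level("rare", "critical"))    # 3
print(risk_level("likely", "critical"))  # 9
```

Note how neither factor alone determines the level: a likely but minor risk can rank the same as a rare but critical one, which is why definitions that omit either likelihood or impact are incomplete.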
Which of the following is MOST likely to be an example of a PRODUCT risk?
a) The expected security features may not be supported by the system architecture.
b) The developers may not have time to fix all the defects found by the test team.
c) The test cases may not provide full coverage of the specified requirements.
d) The performance test environment may not be ready before the system is due for
delivery.
A
(a) Correct: If the expected security features are not supported by the system architecture,
then the system could be seriously flawed. As the system being produced is the problem
here, it is a product risk.
b) Is not correct: If the developers run over budget, or run out of time, that is a problem
with the running of the project - it is a project risk.
c) Is not correct: If the test cases do not provide full coverage of the requirements, this
means the testing may not fulfil the requirements of the test plan - it is a project risk.
d) Is not correct: If the test environment is not ready, this means the testing may not be
done, or it may have to be done on a different environment and it is impacting how the
project is run - it is a project risk.)
Which of the following is LEAST likely to be an example of product risk analysis CORRECTLY
influencing the testing?
a) The potential impact of security flaws has been identified as being particularly high, so
security testing has been prioritized ahead of some other testing activities.
b) Testing has found the quality of the network module to be higher than expected, so
additional testing will now be performed in that area.
c) The users had problems with the user interface of the previous system, so additional
usability testing is planned for the replacement system.
d) The time needed to load web pages is crucial to the success of the new website, so an
expert in performance testing has been employed for this project.
B
(a) Is not correct: As we are told security flaws have a particularly high impact, their risk
level will be higher, and thus we have prioritized the security testing ahead of some other
testing. Thus, product risk analysis has influenced the testing.
b) Correct: As fewer defects than expected have been found in the network module, the
perceived risk in this area should be lower, and so less testing should be focused on this
area, NOT additional testing. Thus, product risk analysis has NOT CORRECTLY influenced the
testing in this situation.
c) Is not correct: Because the users had problems with the user interface of the previous
system, there is now high awareness of the risk associated with the user interface, which
has resulted in additional usability testing being planned. Thus, product risk analysis has
influenced the thoroughness and scope of testing.
d) Is not correct: As the time needed to load web pages has been identified as crucial to the
success of the new website, the performance of the website should be considered a risk,
and the employment of an expert in performance testing helps to mitigate this risk. Thus,
product risk analysis has influenced the testing.)
You are performing system testing of a train booking system and have found that
occasionally the system reports that there are no available trains when you believe that
there should be, based on the test cases you have run. You have provided the development
manager with a summary of the defect and the version of the system you are testing. The
developers recognize the urgency of the defect and are now waiting for you to provide more
details so that they can fix it.
Given the following pieces of information:
1. Degree of impact (severity) of the defect.
2. Identification of the test item.
3. Details of the test environment.
4. Urgency/priority to fix.
5. Actual results.
6. Reference to test case specification.
Apart from the description of the defect, which includes a database dump and screenshots,
which of the pieces of information would be MOST useful to include in the initial defect
report?
a) 1, 2, 6
b) 1, 4, 5, 6
c) 2, 3, 4, 5
d) 3, 5, 6
D
(Considering each of the pieces of information:
1. Degree of impact (severity) of the defect - the developers are already aware of the
problem and are waiting to fix it, so this is a less important piece of information.
2. Identification of the test item - as the developers are already aware of the problem and
you are performing system testing, and you have already provided the version of the system
you are testing you can assume they know the item that was being tested, so this is a less
important piece of information.
3. Details of the test environment - the set-up of the test environment may have a
noticeable effect on the test results, and detailed information should be provided, so this is
an important piece of information.
4. Urgency/priority to fix - the developers are already aware of the problem and are waiting
to fix it, so this is a less important piece of information.
5. Actual results - the actual results may well help the developers to determine what is going
wrong with the system, so this is an important piece of information.
6. Reference to test case specification - this will show the developers the tests you ran,
including the test inputs that caused the system to fail (and expected results), so this is an
important piece of information.
Thus, option D is correct.)
Given the following test activities and test tools:
1. Performance measurement and dynamic analysis.
2. Test execution and logging.
3. Management of testing and testware.
4. Test design.
A. Requirements coverage tools.
B. Dynamic analysis tools.
C. Test data preparation tools.
D. Defect management tools.
Which of the following BEST matches the activities and tools?
a) 1 - B, 2 - C, 3 - D, 4 - A
b) 1 - B, 2 - A, 3 - C, 4 - D
c) 1 - B, 2 - A, 3 - D, 4 - C
d) 1 - A, 2 - B, 3 - D, 4 - C
C
(The correct pairings of test activities and test tools are, per syllabus (6.1.1):
1. Performance measurement and dynamic analysis - (b) Dynamic analysis tools
2. Test execution and logging - (a) Requirements coverage tools
3. Management of testing and testware - (d) Defect management tools
4. Test design - (c) Test data preparation tools
Thus, option C is correct.)
Which of the following is MOST likely to be used as a reason for using a pilot project to
introduce a tool into an organization?
a) The need to evaluate how the tool fits with existing processes and practices and
determining what would need to change.
b) The need to evaluate the test automation skills and training, mentoring and coaching
needs of the testers who will use the tool.
c) The need to evaluate whether the tool provides the required functionality and does not
duplicate existing test tools.
d) The need to evaluate the tool vendor in terms of the training and other support they
provide.
A
(a) Correct: As per syllabus (6.2.2).
b) Is not correct: The evaluation of the test automation skills and training, mentoring and
coaching needs of the testers who will use the tool should have been performed as part of
the tool selection activity, as per syllabus (6.2.1).
c) Is not correct: The decision on whether the tool provides the required functionality and
does not duplicate existing tools should have been performed as part of the tool selection
activity, as per syllabus (6.2.1).
d) Is not correct: The evaluation of the tool vendor in terms of the training and other
support they provide should have been performed as part of the tool selection activity, as
per syllabus (6.2.1).)
What is quality?
a) Part of quality management focused on providing confidence that quality requirements
will be fulfilled.
b) The degree to which a component, system or process meets specified requirements
and/or user/customer needs and expectations.
c) The degree to which a component or system protects information and data so that
persons or other components or systems have the degree of access appropriate to their
types and levels of authorization.
d) The total costs incurred on quality activities and issues and often split into prevention
costs, appraisal costs, internal failure costs and external failure costs.
B
(a) Is not correct: this is the Glossary definition of quality assurance.
b) Is correct: this is the Glossary definition of quality.
c) Is not correct: this is the Glossary definition of security.
d) Is not correct: this is the Glossary definition of cost of quality.)
Which of the following is a typical test objective?
a) Preventing defects
b) Repairing defects
c) Comparing actual results to expected results
d) Analyzing the cause of failure
A
(a) Correct answer. This is an objective listed in section 1.1.
b) Is not correct: this is debugging per section 1.1.2.
c) Is not correct: this is an activity within the test execution group of activities within the test
process described in section 1.4.2.
d) Is not correct: this is part of debugging per section 1.1.2.)
A phone ringing in an adjacent cubicle momentarily distracts a programmer, causing the
programmer to improperly program the logic that checks the upper boundary of an input
variable. Later, during system testing, a tester notices that this input field accepts invalid
input values. The improperly coded logic for the upper boundary check is:
a) The root cause
b) The failure
c) The error
d) The defect
D
(a) Is not correct: the root cause is the distraction that the programmer experienced while
programming
b) Is not correct: the accepting of invalid inputs is the failure.
c) Is not correct: the error is the mistaken thinking that resulted in putting the defect in the
code.
d) Is correct: the problem in the code is a defect.)
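The scenario above can be made concrete with a hypothetical boundary check (the limit and function names are invented for illustration): the wrong comparison in the code is the defect, and the acceptance of an invalid value at run time is the failure the tester observes.

```python
UPPER = 100  # assumed upper boundary of the input field

def is_valid_defective(value):
    # Defect: off-by-one in the upper-boundary check, the result of the
    # programmer's error (the mistake made while distracted).
    return value <= UPPER + 1

def is_valid_fixed(value):
    # Correct upper-boundary check.
    return value <= UPPER

print(is_valid_defective(101))  # True  -> failure: invalid input accepted
print(is_valid_fixed(101))      # False -> invalid input rejected
```

Only when the defective check executes with an out-of-range value does the defect manifest as a failure, which is why the tester first notices it during system testing.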
A product owner says that your role as a tester on an Agile team is to catch all the bugs
before the end of each iteration. Which of the following is a testing principle that could be
used to respond to this statement?
a) Defect clustering
b) Testing shows the presence of defects
c) Absence of error fallacy
d) Root cause analysis
B
(a) Is not correct: defect clustering has to do with where defects are most likely to be found,
not whether all of them can be found.
b) Is correct: testing can show the presence of defects but cannot prove their absence,
which makes it impossible to know if you have caught all the bugs. Further, the impossibility
of exhaustive testing makes it impossible for you to catch all the bugs.
c) Is not correct: this principle says that you can find and remove many bugs but still release
an unsuccessful software product, which is not what the product owner is asking you to
ensure.
d) Is not correct: root cause analysis is not a testing principle.)
Programmers often write and execute unit tests against code which they have written.
During this self-testing activity, which of the following is a tester mindset that programmers
should adopt to perform this unit testing effectively?
a) Good communication skills
b) Code coverage
c) Evaluating code defects
d) Attention to detail
D
(a) Is not correct: the programmer appears to be performing unit testing on their own, so
good communication skills are not the key mindset here.
b) Is not correct: code coverage is useful for unit testing, but it is not a tester mindset
described in section 1.5.2.
c) Is not correct: evaluating code defects is a testing activity, not one of the tester mindsets
described in section 1.5.2.
d) Is correct: this tester mindset in section 1.5.2, attention to detail, will help programmers
find defects during unit testing.)
Consider the following testing activities:
1. Selecting regression tests
2. Evaluating completeness of test execution
3. Identifying which user stories have open defect reports
4. Evaluating whether the number of tests for each requirement is consistent with the level
of product risk
Consider the following ways traceability can help testing:
A. Improve understandability of test status reports to include status of test basis items
B. Make testing auditable
C. Provide information to assess process quality
D. Analyze the impact of changes
Which of the following best matches the testing activity with how traceability can assist
that activity?
a) 1D, 2B, 3C, 4A
b) 1B, 2D, 3A, 4C
c) 1D, 2C, 3A, 4B
d) 1D, 2B, 3A, 4C
D
(Traceability assists with:
• Selecting regression tests in terms of analyzing the impact of changes.
• Evaluating completeness of test execution which makes testing auditable.
• Identifying which user stories have open defect reports which improves understandability
of test status reports to include status of test basis items.
• Evaluating whether the number of tests for each requirement is consistent with the level
of product risk which provides information to assess test process quality (i.e., alignment of
test effort with risk).
Therefore, d is correct, per section 1.4.4)
A tester participated in a discussion about proposed database structure. The tester
identified a potential performance problem related to certain common user searches. This
possible problem was explained to the development team. Which of the following is a
testing contribution to success that BEST matches this situation?
a) Enabling required tests to be identified at an early stage
b) Ensuring processes are carried out properly
c) Reducing the risk of fundamental design defects
d) Reducing the risk of untestable functionality
C
(a) Is not correct: while enabling required tests to be identified in an early stage is a testing
contribution to success per section 1.2.1, there is no indication in the question that the
tester did so.
b) Is not correct: ensuring processes are carried out properly is part of quality assurance, not
a testing contribution to success, per sections 1.2.1 and 1.2.2
c) Is correct: reducing the risk of fundamental design defects is a testing contribution to
success per section 1.2.1. Database structure is related to design, and performance
problems can be a significant product risk.
d) Is not correct: while reducing the risk of untestable functionality is a testing contribution
to success per section 1.2.1, the tester here has not identified something untestable, but
rather something that would result in performance tests failing.)
Which of the following is an example of a task that can be carried out as part of the test
process?
a) Analyzing a defect
b) Designing test data
c) Assigning a version to a test item
d) Writing a user story
B
(a) Is not correct: analyzing a defect is part of debugging, not testing, per section 1.1.2
b) Is correct: creating test data is a test implementation task per section 1.4.2.
c) Is not correct: while a tester may need to identify a test item's version for results
reporting purposes, assigning a test item's version is part of configuration management, per
section 5.4
d) Is not correct: writing a user story is not a testing activity and should be done by the
product owner)
You are running a performance test with the objective of finding possible network
bottlenecks in interfaces between components of a system. Which of the following
statements describes this test?
a) A functional test during the integration test level
b) A non-functional test during the integration test level
c) A functional test during the component test level
d) A non-functional test during the component test level
B
(See section 2.2 for the description of component and integration test levels, and section 2.3
for the description of functional and non-functional tests.
a) Is not correct: while this test does match the description of an integration test, it is a non-
functional test.
b) Is correct: this test matches the description of an integration test and it is a non-
functional test.
c) Is not correct: this test doesn't match the description of a component test and it is not a
functional test.
d) Is not correct: while this test is a non-functional test, it doesn't match the description of a
component test.)
Which of the following statements is true?
a) Impact analysis is useful for confirmation testing during maintenance testing
b) Confirmation testing is useful for regression testing during system design
c) Impact analysis is useful for regression testing during maintenance testing
d) Confirmation testing is useful for impact analysis during maintenance testing
C
(a) Is not correct: while impact analysis is useful during maintenance testing, per section 2.4,
it is not necessary for confirmation testing, since confirmation testing is on the intended
effects of a bug fix or other change per section 2.3.
b) Is not correct: per section 2.3, confirmation and regression testing are two separate
activities, and confirmation testing is not part of system design.
c) Is correct: per section 2.4, impact analysis can be used to select regression tests for
maintenance testing.
d) Is not correct: confirmation testing is not part of impact analysis, per section 2.4, though
confirmation testing will typically happen during maintenance testing)
Consider the following types of defects that a test level might focus on:
1. Defects in separately testable modules or objects
2. Not focused on identifying defects
3. Defects in interfaces and interactions
4. Defects in the whole test object
Which of the following list correctly matches test levels from the Foundation syllabus with
the defect focus options given above?
a) 1 = performance test; 2 = component test; 3 = system test; 4 = acceptance test
b) 1 = component test; 2 = acceptance test; 3 = system test; 4 = integration test
c) 1 = component test; 2 = acceptance test; 3 = integration test; 4 = system test
d) 1 = integration test; 2 = system test; 3 = component test; 4 = acceptance test
C
(Performance testing is a test type per section 2.3, not a test level. Per section 2.2,
component testing focuses on defects in separately testable modules or objects, integration
testing on defects in interfaces and interactions, system testing on defects in the whole test
object, and acceptance testing is not typically focused on identifying defects.
Therefore, c is the correct answer)
A mass market operating system software product is designed to run on any PC hardware
with an x86-family processor. You are running a set of tests to look for defects related to
support of the various PCs that use such a processor and to build confidence that important
PC brands will work. What type of test are you performing?
a) Performance test
b) Processor test
c) Functional test
d) Portability test
D
(a) Is not correct: while per section 2.3.2, the test described is a non-functional test, it is a
portability test, not a performance test.
b) Is not correct: processor test is not a test type defined in section 2.3.
c) Is not correct: per section 2.3.2, the test described is a non-functional test, specifically a
portability test.
d) Is correct: per section 2.3.2, testing supported devices is a non-functional test, specifically
a portability test)
During an Agile development effort, a product owner discovers a previously-unknown
regulatory requirement that applies to most of the user stories within a particular epic. The
user stories are updated to provide for the necessary changes in software behavior. The
programmers on the team are modifying the code appropriately. As a tester on the team,
what types of tests will you run?
a) Confirmation tests
b) Regression tests
c) Functional tests
d) Change-related tests
D
(The change in behavior may be either functional or non-functional, per section 2.3.1 and
2.3.2, but, per section 2.3.4, you need to run change-related tests, some of which are
confirmation tests and others are regression tests.
Therefore, d is the correct answer)
In a formal review, what is the role name for the participant who runs an inspection
meeting?
a) Facilitator
b) Programmer
c) Author
d) Project manager
A
(a) Is correct: per section 3.2.2, the facilitator or moderator runs the review meetings.
b) Is not correct: this is not a role name for a formal review participant per section 3.2.2.
c) Is not correct: per section 3.2.2, the author creates the work product under review; it is
the facilitator or moderator who runs the review meetings.
d) Is not correct: per section 3.2.2, it is the facilitator or moderator, not the project
manager, who runs the review meetings.)
You are reading a user story in the product backlog to prepare for a meeting with the
product owner and a developer, noting potential defects as you go. Which of the following
statements is true about this activity?
a) It is not a static test, because static testing involves execution of the test object
b) It is not a static test, because static testing is always performed using a tool
c) It is a static test, because any defects you find could be found cheaper during dynamic
testing
d) It is a static test, because static testing does not involve execution of the test object.
D
(a) Is not correct: per section 3.1, static testing does not involve execution of the test
object.
b) Is not correct: per section 3.1, some static tests involve the use of a tool, especially static
analysis, but reviews (such as the activity described here) do not necessarily involve the use
of a tool.
c) Is not correct: the review activity described here is part of a static test, but, per section
3.1.2, defects found by static testing are usually cheaper to find and fix than those found by
dynamic testing.
d) Is correct: per section 3.1, static testing does not involve execution of the test object.)
During a period of intensive project overtime, a system architecture document is sent to
various project participants, announcing a previously-unplanned technical review to occur in
one week. No adjustments are made to the participants' list of assigned tasks. Based on this
information alone, which of the following is a factor for review success that is MISSING?
a) Appropriate review type
b) Adequate time to prepare
c) Sufficient metrics to evaluate the author
d) Well-managed review meeting
B
(a) Is not correct: per section 3.2.3, technical reviews are appropriate for technical
documents such as a system architecture.
b) Is correct: per section 3.2.5, adequate time for preparation is important, but people are
working overtime and no adjustments are made for this new set of tasks.
c) Is not correct: per section 3.2.5, gathering metrics from a review to evaluate participants
is a factor that leads to failure, not success, because it destroys trust.
d) Is not correct: per section 3.2.5, a well-managed review meeting is important, but there is
no reason to think the review meeting will not be well managed based on the information
provided)
You are working as a tester on an Agile team, and have participated in over two dozen user
story refinement sessions with the product owner and the developers on the team at the
start of each iteration. As the reviews have gotten more effective at detecting defects in
user stories and the product owner more adept at correcting those defects, you and the
team notice that the team's velocity, as shown in your burndown charts, has started to
increase. Which of the following is a benefit of static testing that MOST DIRECTLY applies to
increased velocity?
a) Increasing total cost of quality
b) Reducing testing cost
c) Increasing development productivity
d) Reducing total cost of quality
C
(a) Is not correct: per section 3.1.2, reviews reduce, not increase, the total cost of quality.
b) Is not correct: while section 3.1.2 lists this as a benefit of static testing, increasing velocity
is a sign of increasing development productivity overall, not just testing, so option b) only
partially applies.
c) Is correct: section 3.1.2 lists this as a benefit of static testing, and velocity is a way of
measuring productivity in Agile development.
d) Is not correct: while section 3.1.2 does list this as a benefit of static testing, the benefit
mentioned here has to do with increasing overall development team productivity)
You are working on a video game development project, using Agile methods. It is based on
Greek mythology and history, and players can play key roles in scenarios such as the battles
between the Greeks and Trojans.
Consider the following user story and its associated acceptance criteria:
As a player, I want to be able to acquire the Rod of Midas (a new magic object), so that I can
turn objects and other players into gold
AC1: The Rod must work on any object or player, no matter what size, which can be touched
anywhere by the player holding the Rod
AC2: Holding the Rod does not change the player holding it into gold
AC3: Any object or player touched by the Rod transforms completely into gold within one
millisecond
AC4: The Rod appears as shown in Prototype O.W.RoM
AC5: The transformation starts at the point of contact with the Rod and moves at a rate of
one meter per millisecond
You are participating in a checklist-based review session of this user story.
This user story and its associated acceptance criteria contain which of the following typical
defects identified by static testing in this type of work product?
a) Deviation from standards
b) Contradiction
c) Security vulnerability
d) Coverage gaps
B
(a) Is not correct: while deviation from standards is a typical defect per section 3.1.3, we
aren't given any standard with which the user stories should comply.
b) Is correct: section 3.1.3 lists contradiction as a typical requirements defect. AC3 and AC5
conflict if the Rod is touched to an object that extends more than 1 meter in any direction
from the point at which touched, since AC1 does not limit the size of the objects to be
touched.
c) Is not correct: while security vulnerabilities are typical defects per section 3.1.3, there is
nothing here related to security.
d) Is not correct: while test coverage gaps are typical defects per section 3.1.3, including
missing tests for acceptance criteria, we are not provided with any information about which
tests do and don't exist.)
What is decision coverage?
a) The percentage of condition outcomes that have been exercised by a test suite
b) Decision coverage is a synonym for statement coverage
c) The percentage of executable statements that have been exercised by a test suite
d) The percentage of decision outcomes that have been exercised by a test suite
D
(a) Is not correct: this is the Glossary definition of condition coverage.
b) Is not correct: decision coverage is a higher level of coverage per section 4.3 and the two
terms are not defined as synonyms in the Glossary.
c) Is not correct: this is the Glossary definition of statement coverage.
d) Is correct: this is the Glossary definition of coverage as applied to decisions.)
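The distinction between statement and decision coverage can be seen on a minimal sketch (illustrative only; the function and values are not from the syllabus):

```python
def apply_discount(total):
    # One decision point: total >= 100
    if total >= 100:
        total = total * 0.9  # 10% discount
    return total

# A single test with total=150 executes every statement
# (100% statement coverage) but exercises only the True
# outcome of the decision.
assert apply_discount(150) == 135.0

# Decision coverage also requires the False outcome,
# where the if-body is skipped.
assert apply_discount(50) == 50
```

With one decision point, 100% decision coverage here needs two tests, while 100% statement coverage needs only one.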
Prior to an iteration planning session, you are studying a user story and its acceptance
criteria, deriving test conditions and associated test cases from the user story as a way of
applying the principle of early QA and test. What test technique are you applying?
a) White-box
b) Black-box
c) Experience-based
d) Error guessing
B
(a) Is not correct: per section 4.1.2, structure-based or white-box techniques are based on
an analysis of the architecture, detailed design, internal structure, or the code of the test
object.
b) Is correct: per section 4.1.2, behavior-based or black-box techniques are based on an
analysis of the appropriate test basis (e.g., formal requirements documents, specifications,
use cases, user stories, or business processes), which describe functional and non-functional
behavior.
c) Is not correct: per section 4.1.2, experience-based techniques leverage the experience of
developers, testers, and users to determine what should be tested.
d) Is not correct: per section 4.4.1, error guessing is a type of experience-based testing,
which is not black-box)
Which of the following is a true statement about exploratory testing?
a) More experienced testers who have tested similar applications and technologies are likely
to do better than less experienced testers at exploratory testing
b) Exploratory testing does not identify any additional tests beyond those that would result
from formal test techniques
c) The time required to complete an exploratory testing session cannot be predicted in
advance
d) Exploratory testing can involve the use of black-box techniques but not white-box
techniques
A
(a) Is correct: exploratory testing is a form of experience-based testing, which benefits from
the skills and experience of the tester, per section 4.4.
b) Is not correct: per section 4.4.2, exploratory testing is useful to complement formal
testing techniques.
c) Is not correct: per section 4.4.2, in session-based test management, exploratory testing is
conducted within a defined time-box, and the tester uses a test charter containing test
objectives to guide the testing.
d) Is not correct: per section 4.4.2, exploratory testing can incorporate the use of other
black-box, white-box, and experience-based techniques referenced in this syllabus)
You are testing a mobile app that allows customers to access and manage their bank
accounts. You are running a test suite that involves evaluating each screen and each field on
each screen against a general list of user interface best practices, derived from a popular
book on the topic, that maximize attractiveness, ease-of-use, and accessibility for such apps.
Which of the following options BEST categorizes the test technique you are using?
a) Specification-based
b) Exploratory
c) Checklist-based
d) Error guessing
C
(a) Is not correct: the book provides general guidance, and is not a formal requirements
document, a specification, or a set of use cases, user stories, or business processes as
described in section 4.1.2
b) Is not correct: while you could consider the list as a set of test charters per section 4.4.2,
it more closely resembles the list of test conditions described in section 4.4.3.
c) Is correct: the list of user interface best practices is the list of test conditions described in
section 4.4.3.
d) Is not correct: the tests are not focused on failures that could occur, as described in
section 4.4.1, but rather on knowledge about what is important for the user, in terms of
usability.)
Consider a mobile app that allows customers to access and manage their bank accounts. A
user story has just been added to the set of features that checks customers' social media
accounts and bank records to give personalized greetings on birthdays and other personal
milestones. Which of the following test techniques could a PROGRAMMER use during a unit
test of the code to ensure that coverage of situations when the greetings ARE supposed to
occur and when the greetings ARE NOT supposed to occur?
a) Statement testing
b) Exploratory testing
c) State transition testing
d) Decision testing
D
(a) Is not correct: per section 4.3.1, statement testing exercises the executable statements
in the code, which could leave the situations where the greetings are not supposed to occur
untested.
b) Is not correct: unless the test charter specifically mentioned testing both the presence
and the absence of each type of greeting, coverage can be difficult to assess for an
exploratory test, per section 4.4.
c) Is not correct: per section 4.2.4, state transition testing is useful for situations where the
test object responds differently to an input depending on current conditions or previous
history, but in this case the test object has to decide whether the current date matches a
particular milestone and thus whether to display the relevant greeting.
d) Is correct: per section 4.3.2, decision testing involves test cases that follow the control
flows that occur from a decision point, which in this case would be deciding whether a
greeting should or should not be given.)
A batch application has been in production unchanged for over two years. It runs overnight
once a month to produce statements that will be e-mailed to customers. For each customer,
the application goes through every account and lists every transaction on that account in
the last month. It uses a nested-loop structure to process customers (outer loop), each
customer's accounts (middle loop), and each account's transactions (inner loop).
One night, the batch application terminates prematurely, failing to e-mail statements to
some customers, when it encounters a customer with one account for which no
transactions occurred in the last month. This is a very unusual situation and has not
occurred in the years since this application was placed in production.
While fixing the defect, a programmer asks you to recommend test techniques that are
effective against this kind of defect. Which of the following test techniques would most
likely have been able to detect the underlying defect?
a) Decision testing
b) Statement testing
c) Checklist-based testing
d) Error guessing
A
(a) Is correct: per section 4.3.3, decision coverage requires testing both the condition
where a loop body is executed and the condition where it is bypassed, so the
zero-transaction case would have been exercised.
b) Is not correct: per section 4.3.3, for a loop construct, statement coverage only requires
that all statements within the loop are executed, which can be achieved without ever
executing the loop zero times.
c) Is not correct: per section 4.4.3, checklists are based on experience, defect and failure
data, knowledge about what is important for the user, and an understanding of why and
how software fails, none of which is likely to have led to the inclusion of such a test
condition.
d) Is not correct: while, per section 4.4.1, it's possible that someone might anticipate a
developer making the mistaken assumption that there would always be at least one
transaction in a month for every account, only decision testing, per section 4.3.2,
guarantees testing of that condition.)
You are testing an unattended gasoline pump that only accepts credit cards. Once the credit
card is validated, the pump nozzle placed into the tank, and the desired grade selected, the
customer enters the desired amount of fuel in gallons using the keypad. The keypad only
allows the entry of digits. Fuel is sold in tenths (0.1) of a gallon, up to 50.0 gallons.
Which of the following is a minimum set of desired amounts that covers the equivalence
partitions for this input?
a) 0.0, 20.0, 60.0
b) 0.0, 0.1, 50.0
c) 0.0, 0.1, 50.0, 70.0
d) -0.1, 0.0, 0.1, 49.9, 50.0, 50.1
A
(Per 4.2.1, there are three equivalence partitions:
- No sale completed (0.0 gallons)
- A valid sale occurs (0.1 to 50.0 gallons)
- An invalid amount is selected (50.1 or more gallons)
So, the correct and incorrect answers are as follows:
a) Is correct: this set of input values has exactly one test per equivalence partition.
b) Is not correct: this set of input values does not cover the invalid amount partition.
c) Is not correct: this set of input values has two tests for the valid sale equivalence
partition, which is not the minimum.
d) Is not correct: this set of input values covers the three-point boundary values for the two
boundaries, not the minimum number required to cover the equivalence partitions.)
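The three partitions and the one-test-per-partition idea can be sketched as follows (the validator function is hypothetical, written only to mirror the partitions in the explanation):

```python
def classify_amount(gallons):
    """Hypothetical classifier for the pump's amount entry,
    mirroring the three equivalence partitions."""
    if gallons == 0.0:
        return "no sale"
    if 0.1 <= gallons <= 50.0:
        return "valid sale"
    return "invalid amount"

# Answer a: exactly one test per equivalence partition.
assert classify_amount(0.0) == "no sale"
assert classify_amount(20.0) == "valid sale"
assert classify_amount(60.0) == "invalid amount"
```

Any single value from each partition would do; 0.0, 20.0, 60.0 is simply one such minimal set.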
You are testing an e-commerce system that sells cooking supplies such as spices, flour, and
other items in bulk. The units in which the items are sold are either grams (for spices and
other expensive items) or kilograms (for flour and other inexpensive items). Regardless of
the units, the smallest valid order amount is 0.5 units (e.g., half a gram of cardamom pods)
and the largest valid order amount is
25.0 units (e.g., 25 kilograms of sugar). The precision of the units field is 0.1 units.
Which of the following is a set of input values that covers the boundary values with
two-point boundary values for this field?
a) 0.3, 10.0, 28.0
b) 0.4, 0.5, 0.6, 24.9, 25.0, 25.1
c) 0.4, 0.5, 25.0, 25.1
d) 0.5, 0.6, 24.9, 25.0
C
(Per 4.2.2, there are three equivalence partitions, with the boundaries as shown:
- Invalid too low (0.4 and below)
- Valid (0.5 to 25.0)
- Invalid too high (25.1 and above)
So, the correct and incorrect answers are as follows:
a) Is not correct: none of the four boundary values is included in this set of tests, although
these tests do cover the equivalence partitions.
b) Is not correct: all four boundary values are included in this set of tests, but two
additional values are included, one for each boundary. These are the values associated with
three-point boundary value analysis.
c) Is correct: each of the four two-point boundary values is included in this set of tests.
d) Is not correct: these four values are all included in the valid partition.)
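Two-point boundary value analysis can be expressed as a small helper (an illustrative sketch, not a syllabus algorithm): for each boundary, take the last value on one side and the first value on the other, one precision step apart.

```python
def two_point_boundary_values(low, high, precision):
    """Return the two-point boundary values for a valid
    range [low, high] with the given field precision."""
    return sorted({
        round(low - precision, 1),   # last invalid value below
        low,                         # first valid value
        high,                        # last valid value
        round(high + precision, 1),  # first invalid value above
    })

# For the order-amount field (0.5 to 25.0, precision 0.1)
# this yields the four values in answer c:
assert two_point_boundary_values(0.5, 25.0, 0.1) == [0.4, 0.5, 25.0, 25.1]
```

Three-point analysis would add one more value inside each boundary (0.6 and 24.9), giving the six values in answer b.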
Consider the following decision table for the portion of an online airline reservation system
that allows frequent flyers to redeem points for reward travel:
Condition               1   2   3
Account/password okay   N   Y   Y
Sufficient points       -   N   Y
Action
Show flight history     N   Y   Y
Allow reward travel     N   N   Y
Suppose that there are two equivalence partitions for the condition where
Account/password okay is not true, one where the account is invalid and another where the
account is valid but the password is invalid. Suppose that there is only one equivalence
partition corresponding to the condition where Account/password okay is true, where both
the account and password are valid.
If you want to design tests to cover the equivalence partitions for Account/password okay
and also for this portion of the decision table, what is the minimum number of tests
required?
a) 2
b) 3
c) 4
d) 9
C
(Per section 4.2.3, there is at least one test for each column in the decision table. However,
column one requires two tests, one where the account is invalid and another where the
account is valid but the password is invalid, so the minimum number of tests is four.
Therefore, the answer is c.)
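A hypothetical implementation of this decision table, with the four tests (column 1 split into its two equivalence partitions), might look like this sketch; the function and parameter names are illustrative:

```python
def reward_portal(account_ok, password_ok, sufficient_points):
    """Hypothetical implementation of the decision table.

    'Account/password okay' is true only when both the
    account and the password are valid.
    """
    credentials_ok = account_ok and password_ok
    show_history = credentials_ok                        # columns 2 and 3
    allow_reward = credentials_ok and sufficient_points  # column 3 only
    return show_history, allow_reward

# Column 1, partition 1: invalid account
assert reward_portal(False, True, True) == (False, False)
# Column 1, partition 2: valid account, invalid password
assert reward_portal(True, False, True) == (False, False)
# Column 2: credentials okay, insufficient points
assert reward_portal(True, True, False) == (True, False)
# Column 3: credentials okay, sufficient points
assert reward_portal(True, True, True) == (True, True)
```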
You are testing an e-commerce system that sells cooking supplies such as spices, flour, and
other items in bulk. The units in which the items are sold are either grams (for spices and
other expensive items) or kilograms (for flour and other inexpensive items). Regardless of
the units, the smallest valid order amount is 0.5 units (e.g., half a gram of cardamom pods)
and the largest valid order amount is 25.0 units (e.g., 25 kilograms of sugar). The precision
of the units field is 0.1 units.
Which of the following is a MINIMAL set of input values that cover the equivalence
partitions for this field?
a) 10.0, 28.0
b) 0.4, 0.5, 25.0, 25.1
c) 0.2, 0.9, 29.5
d) 12.3
C
(Per 4.2.2, there are three equivalence partitions, with the boundaries as shown:
- Invalid too low (0.4 and below)
- Valid (0.5 to 25.0)
- Invalid too high (25.1 and above)
So, the correct and incorrect answers are as follows:
a) Is not correct: only two of the equivalence partitions are covered in this set of tests.
b) Is not correct: each of those four boundary values are included in this set of tests, but the
question asked for equivalence partition coverage with minimal tests, so either 0.5 or 25.0
should be dropped.
c) Is correct: all three equivalence partitions are covered by this set of tests.
d) Is not correct: only one of the equivalence partitions is covered by this test.)
You are working as a tester on an online banking system. Availability is considered one of
the top product (quality) risks for the system. You find a reproducible failure that results in
customers losing their connections to the bank Web site when transferring funds between
common types of accounts and being unable to reconnect for between three and five
minutes.
Which of the following would be a good summary for a defect report for this failure, one
that captures both the essence of the failure and its impact on stakeholders?
a) Web server logs show error 0x44AB27 when running test 07.005, which is not an
expected error message in /tmp filesystem
b) Developers have introduced major availability defect which will seriously upset our
customers
c) Performance is slow and reliability flaky under load
d) Typical funds-transfer transaction results in termination of customer session, with a delay
in availability when attempting to reconnect
D
(a) Is not correct: while this information is useful for developers, it does not provide
managers with a sense of the impact on product quality per section 5.6.
b) Is not correct: this summary does not provide developers or managers with the necessary
information described in section 5.6 and attacks the developers (see section 1.5).
c) Is not correct: this summary does not provide developers or managers with the necessary
information described in section 5.6 and attacks the developers (see section 1.5).
d) Is correct: this summary gives a good sense of the failure and its impact, providing the
information discussed in section 5.6.)
You are testing a mobile app that allows users to find a nearby restaurant, based on the
type of food they want to eat. Consider the following list of test cases, priorities (smaller
number is high priority), and dependencies, in the following format:
Test Case #   Test Condition Covered   Priority   Logical Dependency
01.001        Select type of food      3          none
01.002        Select restaurant        2          01.001
01.003        Get directions           1          01.002
01.004        Call restaurant          1          01.002
01.005        Make reservation         3          01.002
Which of the following is a possible test execution schedule that considers both priorities
and dependencies?
a) 01.001, 01.002, 01.003, 01.005, 01.004
b) 01.001, 01.002, 01.004, 01.003, 01.005
c) 01.003, 01.004, 01.002, 01.001, 01.002
d) 01.001, 01.002, 01.004, 01.005, 01.003
B
(Test 01.001 must come first, followed by 01.002, to satisfy dependencies. Afterwards,
01.004 and 01.003 should be run in either order, followed by 01.005, to satisfy priority.
Therefore, the answer is b.)
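The "run the highest-priority test whose dependencies are satisfied" rule can be sketched as a greedy scheduler (illustrative only; ties between equal priorities may be broken either way, which is why both 01.003-then-01.004 and the order in answer b are acceptable):

```python
def schedule(tests):
    """Greedy schedule: repeatedly pick the highest-priority test
    (lowest number) whose dependency has already been executed.

    `tests` maps test id -> (priority, dependency or None).
    """
    done, order = set(), []
    while len(order) < len(tests):
        ready = [t for t, (prio, dep) in tests.items()
                 if t not in done and (dep is None or dep in done)]
        nxt = min(ready, key=lambda t: (tests[t][0], t))  # tie-break by id
        order.append(nxt)
        done.add(nxt)
    return order

tests = {
    "01.001": (3, None),
    "01.002": (2, "01.001"),
    "01.003": (1, "01.002"),
    "01.004": (1, "01.002"),
    "01.005": (3, "01.002"),
}
# A valid schedule; answer b swaps 01.003/01.004, which is
# equally acceptable since they share the same priority.
assert schedule(tests) == ["01.001", "01.002", "01.003", "01.004", "01.005"]
```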
Which of the following is a common test metric often used to monitor BOTH test
preparation and test execution?
a) Test case status
b) Defect find/fix rates
c) Test environment preparation
d) Estimated cost to find the next defect
A
(a) Is correct: per section 5.3.1, percentage of test cases prepared is a common metric
during test preparation while percentage of test cases passed, failed, not run, etc., are
common during test execution.
b) Is not correct: defect reports are typically filed during test execution, based on failures
found (see section 5.6).
c) Is not correct: test environment preparation is part of test implementation and would generally
be complete before test execution (see section 1.4).
d) Is not correct: defects are typically reported during test execution, based on failures
found (see section 5.6), so the cost to find the next defect is available during test execution
only.)
Which of the following are two factors that can be used to determine the level of risk?
a) Testing and development
b) Dynamic and reactive
c) Statement and decision
d) Likelihood and impact
D
(Per section 5.5.1, the level of risk will be determined by the likelihood of an adverse event
happening and the impact (the harm) from that event.
Therefore, the answer is d)
You are working as a project manager on an in-house banking software project. To prevent
rework and excessive find/fix/retest cycles, the following process has been put in place for
resolving a defect once it is found in the test lab:
1. The assigned developer finds and fixes the defect, then creates an experimental build
2. A peer developer reviews, unit tests, and confirmation tests the defect fix on his/her
desktop
3. A tester—usually the one who found the defect—confirmation tests the defect fix in the
development environment
4. Once a day, a new release with all confirmed defect fixes included, is installed in the test
environment
5. The same tester from step 3 confirmation tests the defect fix in the test environment
Nevertheless, a large number of defects which the testers confirmed as fixed in the
development environment (in step 3) are somehow failing confirmation testing in the test
environment, with the resulting rework and cycle time outcomes. You have the highest
confidence in your testers, and have ruled out mistakes or omissions in step 3.
Which of the following is the MOST likely part of the process to check next?
a) The developers, who may not be adequately testing in step 2
b) The testers, who may be confused about what to test in step 5
c) Configuration management, which may not be maintaining the integrity of the product in
step 4
d) The developers, who may not be fixing defects properly in step 1
C
(a) Is not correct: if inadequate developer testing were the problem, the confirmation test
would not pass in step 3.
b) Is not correct: the same tester who successfully performed the confirmation test in step 3
is repeating it in step 5.
c) Is correct: per section 5.4, configuration management maintains the integrity of the
software. If a test that passes in step 3 fails in step 5, then something is different between
those two steps. One possible difference is the test object, the option listed here. Another
possible difference is between the development environment and the test environment,
but that is not an option listed here.
d) Is not correct: if the developers were not fixing the defect, the confirmation test would
not pass in step 3.)
You are engaged in planning a test effort for a new mobile banking application. As part of
estimation, you first meet with the proposed testers and others on the project. The team is
well-coordinated and has already worked on similar projects. To verify the resulting
estimate, you then refer to some industry averages for testing effort and costs on similar
projects, published by a reputable consultant.
Which statement accurately describes your estimation approach?
a) A simultaneous expert-based and metrics-based approach
b) Primarily an expert-based approach, augmented with a metrics-based approach
c) Primarily a metrics-based approach, augmented with an expert-based approach
d) Primarily planning poker, checked by velocity from burndown charts.
B
(a) Is not correct: the two methods are used sequentially, not simultaneously.
b) Is correct: the primary sources of information come from the experienced testers, who
are the experts. The consultant's industry averages augment the original estimate from
published metrics.
c) Is not correct: the expert-based approach is the primary approach, augmented by a
metrics-based approach.
d) Is not correct: we do not know if this project is following Agile methods, and burndown
charts do not come from external consultants.)
During a project following Agile methods, you find a discrepancy between the developer's
interpretation of an acceptance criterion and the product owner's interpretation, which you
bring up during a user story refinement session. Which of the following is a benefit of test
independence exemplified by this situation?
a) Recognizing different kinds of failures
b) Taking primary responsibility for quality
c) Removing a defect early
d) Challenging stakeholder assumptions
D
(a) Is not correct: while, per section 5.1.1, recognizing different kinds of failures is a benefit
of tester independence, in the scenario here no code yet exists that can fail, and the
problem is that the developer and product owner are both assuming different things about
the acceptance criteria.
b) Is not correct: per section 5.1.1, developers losing a sense of responsibility for quality is a
drawback, not a benefit.
c) Is not correct: while the effect of the discovery of this disagreement is the earlier removal
of the defect, prior to coding, defects can be discovered early by various people, not just
independent testers.
d) Is correct: per section 5.1.1, challenging stakeholder assumptions is a benefit of tester
independence, and here the developer and product owner are both assuming different
things about the acceptance criteria.)
You are defining the process for carrying out product risk analysis as part of each iteration
on an Agile project. Which of the following is the proper place to document this process in a
test plan?
a) Scope of testing
b) Approach of testing
c) Metrics of testing
d) Configuration management of the test object
B
(a) Is not correct: while scope is a topic addressed in a test plan per section 5.2.1, the
implementation of a risk-based testing strategy on this project is the approach, so this topic
should be addressed in that section.
b) Is correct: approach is a topic addressed in a test plan per section 5.2.1, and the
implementation of a risk-based testing strategy on this project is the approach.
c) Is not correct: while metrics for test monitoring and control is a topic addressed in a test
plan per section 5.2.1, the implementation of a risk-based testing strategy on this project is
the approach, so this topic should be addressed in that section.
d) Is not correct: configuration management is not a topic addressed in a test plan per
section 5.2.1.)
Consider the following list of undesirable outcomes that could occur on a mobile app
development effort:
A. Incorrect totals on reports
B. Change to acceptance criteria during acceptance testing
C. Users find the soft keyboard too hard to use with your app
D. System responds too slowly to user input during search string entry
E. Testers not allowed to report test results in daily standup meetings
Which of the following properly classifies these outcomes as project and product risks?
a) Product risks: B, E; Project risks: A, C, D
b) Product risks: A, C, D; Project risks: B, E
c) Product risks: A, C, D, E Project risks: B
d) Product risks: A, C Project risks: B, D, E
B
(As described in section 5.5.2, product risks exist when a work product may fail to satisfy
legitimate needs, while project risks are situations that could have a negative impact on the
project's ability to achieve its objectives. So:
A. Incorrect totals on reports = product risk
B. Change to acceptance criteria during acceptance testing = project risk
C. Users find the soft keyboard too hard to use with your app = product risk
D. System responds too slowly to user input during search string entry = product risk
E. Testers not allowed to report test results in daily standup meetings = project risk
Therefore, the correct and incorrect answers are as follows:
a) Is not correct: this list is entirely backwards.
b) Correct.
c) Is not correct: E concerns the failure to communicate test results, which is a project risk
per the syllabus, not a product risk.
d) Is not correct: product risks can be functional and non-functional, so D (slow response to
user input) is also a product risk.)
You have just completed a pilot project for a regression testing tool. You understand the
tool much better, and have tailored your testing process to it. You have standardized an
approach to using the tool and its associated work products. Which of the following is a
typical test automation pilot project goal that remains to be carried out?
a) Learn more details about the tool
b) See how the tool would fit with existing processes and practices
c) Decide on standard ways of using, managing, storing, and maintaining the tool and the
test assets
d) Assess whether the benefits will be achieved at reasonable cost
D
(a) Is not correct: per section 6.2.2, this is an objective for a pilot, but you have achieved it
because you understand the tool much better due to the pilot.
b) Is not correct: per section 6.2.2, this is an objective for a pilot, but you have achieved it
because you have tailored your testing process to the tool.
c) Is not correct: per section 6.2.2, this is an objective for a pilot, but you have achieved it
because you have standardized an approach to using the tool and its associated work
products.
d) Is correct: per section 6.2.2, assessing the benefits and configuring the metrics collection
are the two objectives missing from this list.)
Which of the following tools is most useful for reporting test metrics?
a) Test management tool
b) Static analysis tool
c) Coverage tool
d) Security tool
A
(a) Is correct: per 6.1.1, test management tools support the activities associated with test
management discussed in chapter 5, including metrics.
b) Is not correct: per section 3.1, static code analysis metrics would have to do with the code
only, not testing as a whole.
c) Is not correct: per section 6.1.1., these tools report on test basis coverage and code
coverage only, not testing as a whole.
d) Is not correct: per section 6.1.1, security tools focus on one specific area, not testing as a
whole.)
Which of the following is the activity that removes the cause of a failure?
a. Testing
b. Dynamic testing
c. Debugging
d. Reverse engineering
C
(C is correct. Debugging is the process of finding, analyzing, and removing the causes of
failures in software. A and B are incorrect because they find the failure caused by a defect;
they do not remove its cause. D is incorrect: reverse engineering is a process for determining
the source code from the object code.)
As a tester, which of the following is a key to effectively communicating and maintaining
positive relationships with developers when there is disagreement over the prioritization of
a defect?
a. Escalate the issue to human resources and stress the importance of mutual respect
b. Communicate in a setting with senior management to ensure everyone understands
c. Convince the developer to accept the blame for the mistake
d. Remind them of the common goal of creating quality systems
D
(D is correct, per syllabus. Start with collaboration rather than battles and remind everyone
that it is a common goal to build better quality systems. A and B are incorrect because this
type of escalation is inappropriate. C is incorrect as it is the opposite approach to take
because blame placing is not going to build a better team or product)
Why is software testing sometimes required for legal reasons?
a. It prevents developers from suing testers
b. Contracts may specify testing requirements that must be fulfilled
c. International laws require software testing for exported products
d. Testing across systems must be accompanied by legal documentation
B
(B is correct. Software testing may be required to meet contractual requirements and
commitments and penalties are sometimes assessed when quality goals are not met. A is
not correct because lawsuits, unfortunately, are not limited. C is not correct because there
are not international laws covering all exported products. D is not correct because cross
system testing may occur within an organization and require no legal documentation.)
In what way does root cause analysis contribute to process improvement?
a. Helps to better identify and correct the root cause of defects
b. Outlines how development teams can code faster
c. Specifies the desired root causes to be achieved by other teams
d. Contributes to the justification of future project funding
A
(A is correct. Root cause analysis can determine common causes of issues. Addressing these
common causes by process improvement can increase quality. B is incorrect because root
cause analysis will not make developers code faster (better, perhaps, but not faster). C is
incorrect because root causes generally are not good things that should be transferred
between teams. D is not correct because it will not improve funding)
Why is it important to avoid the pesticide paradox?
a. Dynamic testing is less reliable in finding bugs
b. Pesticides mixed with static testing can allow bugs to escape detection
c. Tests should not be context dependent
d. Running the same tests over and over will reduce the chance of finding new defects
D
(D is correct. As tests are run repeatedly, the pesticide (the tests) becomes less effective. A is
not correct because dynamic testing should be used and helps to alleviate the pesticide
paradox. B doesn't actually make sense. C is not correct because testing should be context
dependent.)
Which of the following is the activity that compares the planned test progress to the actual
test progress?
a. Test monitoring
b. Test planning
c. Test closure
d. Test control
A
(A is correct. Test monitoring involves the ongoing comparison of actual progress against the
test plan. B is incorrect because it defines testing objectives. C is incorrect because the
activities have already completed and the project is closing down. D is incorrect because
test control is when you take actions to correct any issues observed during monitoring.)
Which of the following is the correct statement?
a. An error causes a failure which results in a defect
b. A defect causes a failure which results in an error
c. A failure is observed as an error and the root cause is the defect
d. An error causes a defect which is observed as a failure
D
(D is correct. The error or the mistake made by the developer causes a defect in the code.
When that code is executed, a failure can be observed.)
What type of activity is normally used to find and fix a defect in the code?
a. Regression testing
b. Debugging
c. Dynamic analysis
d. Static analysis
B
(B is correct. This normally occurs during debugging)
During which level of testing should non-functional tests be executed?
a. Unit and integration only
b. System testing only
c. Integration, system and acceptance only
d. Unit, integration, system and acceptance only
D
(D is correct. Non-functional tests can and should be executed at all levels of testing.)
When a system is targeted for decommissioning, what type of maintenance testing may be
required?
a. Retirement testing
b. Regression testing
c. Data migration testing
d. Patch testing
C
(C is correct, per syllabus. Data migration to another system or data migration to an archival
system may be needed. A is incorrect, there is no such testing type. B is incorrect because
this is more appropriate for current systems, not the system being retired. D is incorrect
because this is of no use for a system being retired.)
If impact analysis indicates that the overall system could be significantly affected by system
maintenance activities, why should regression testing be executed after the changes?
a. To ensure the system still functions as expected with no introduced issues
b. To ensure no unauthorized changes have been applied to the system
c. To assess the scope of maintenance performed on the system
d. To identify any maintainability issues with the code
A
(A is correct, per syllabus. By definition, regression testing is looking for areas in which the
system may have regressed (gone backwards). B is incorrect as the purpose of regression is
not to monitor malicious or erroneous activities by the developers. C is incorrect as it is not
in scope of regression testing but would be a consideration for the impact analysis. D is
incorrect because regression testing will not identify maintainability issues - that will have to
be done via static analysis or specific maintainability tests.)
In an iterative lifecycle model, which of the following is an accurate statement about testing
activities?
a. For every development activity, there should be a corresponding testing activity
b. For every testing activity, appropriate documentation should be produced, versioned and
stored
c. For every development activity resulting in code, there should be a testing activity to
document test cases
d. For every testing activity, metrics should be recorded and posted to a metrics dashboard
for all stakeholders
A
(A is correct. For any lifecycle model, this is a correct statement. B is not correct because
some testing activities may not produce documentation, such as reviews. C is not correct
because test cases are not always written, particularly in an Agile lifecycle (which is an
iterative lifecycle) where only exploratory testing might be used. D is not correct because
not all testing activities produce metrics (such as test case creation, reviews, etc.) and, even
if they did, not all stakeholders would be interested in those metrics)
Use cases are a test basis for which level of testing?
a. Unit
b. System
c. Load and performance
d. Usability
B
(B is correct. Use cases are a good test basis for system testing because they include end-to-
end transaction scenarios. A is not correct because unit testing concentrates on individual
components, not transactions. C and D are not testing levels)
Which of the following techniques is a form of static testing?
a. Error guessing
b. Automated regression testing
c. Providing inputs and examining the resulting outputs
d. Code review
D
(D is correct, per syllabus. A, B and C are all forms of dynamic testing)
Which of the following is a benefit of static analysis?
a. Defects can be identified that might not be caught by dynamic testing
b. Early defect identification requires less documentation
c. Early execution of the code provides a gauge of code quality
d. Tools are not needed because reviews are used instead of executing code
A
(A is correct, per syllabus. Static analysis with a static analyzer can be used to find defects
such as uninitialized variables that could be difficult to catch with dynamic testing. B is
incorrect because defects will still need to be documented regardless of how early they are
found. C is incorrect because this is dynamic analysis. D is incorrect because static analysis
usually requires the use of tools.)
What is the main difference between static and dynamic testing?
a. Static testing is performed by developers; dynamic testing is performed by testers
b. Manual test cases are used for dynamic testing; automated tests are used for static
testing
c. Static testing must be executed before dynamic testing
d. Dynamic testing requires executing the software; the software is not executed during
static testing
D
(D is correct. Dynamic testing is done while the software is actually running whereas static
testing depends on examining the software while it is not running. A is not correct because
both types of testing can be done by both developers and testers. B is not correct because
manual and automated tests can be used for dynamic testing. C is not correct because static
testing can occur at any time although it is usually done before dynamic testing.)
If a review session is led by the author of the work product, what type of review is it?
a. Ad hoc
b. Walkthrough
c. Inspection
d. Audit
B
(B is correct. In a walkthrough, the author normally leads the review session. A is not correct
as this is not normally an organized session. C is not correct because an inspection is
normally led by the facilitator (moderator). D is not correct because an audit is usually led
by a third party)
You are preparing for a review of a mobile application that will allow users to transfer
money between bank accounts from different banks. Security is a concern with this
application and the previous version of this application had numerous security
vulnerabilities (some of which were found by hackers). It is very important that this doesn't
happen again. Given this information, what type of review technique would be most
appropriate?
a. Ad hoc
b. Role-based
c. Checklist-based
d. Scenario
C
(C is correct. This review should be conducted with checklist guidance with the checklist
including security vulnerabilities. A is not correct because this will not provide the needed
guidance. B is not correct because the roles are not a concern - even the hacker role -
compared to checking the vulnerabilities. D is not correct because the concern is about the
security vulnerabilities, not the functionality of the product.)
Which of the following is an experience-based testing technique?
a. Error guessing
b. Intuitive testing
c. Oracle-based testing
d. Exhaustive testing
A
(A is correct, per syllabus. B and C are not testing techniques. D is not possible unless the
code is trivial and is not an experience-based technique)
Which of the following test techniques uses the requirements specifications as a test basis?
a. Structure-based
b. Black-box
c. White-box
d. Exploratory
B
(B is correct, per syllabus. Black-box testing is based on the requirements documents. A and
C are incorrect because these use the structure of the software as the test basis. D is
incorrect because exploratory testing is often done when there is no specification, giving
the tester the opportunity to learn about the software while testing.)
How is statement coverage determined?
a. Number of test decision points divided by the number of test cases
b. Number of decision outcomes tested divided by the total number of executable statements
c. Number of possible test case outcomes divided by the total number of function points
d. Number of executable statements tested divided by the total number of executable
statements
D
(D is correct, per syllabus. A, B and C are not valid measures)
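As a quick illustration of the formula, a minimal sketch in Python (the statement counts are invented for the example, not taken from the syllabus):

```python
def statement_coverage(executed, total):
    """Statement coverage (%) = executed statements / total executable statements."""
    return executed / total * 100

# e.g. a component with 40 executable statements, 30 of them exercised by tests
print(statement_coverage(30, 40))  # 75.0
```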
If you have a section of code that has one simple IF statement, how many tests will be
needed to achieve 100% decision coverage?
a. 1
b. 2
c. 5
d. Unknown with this information
B
(B is correct. A simple IF statement has the form IF ... THEN ... ELSE ... END IF. There are
two decision outcomes: one for the condition being true and one for it being false.
Since 100% decision coverage requires at least one test case for each decision outcome, two
tests are needed. A and C are incorrect because these are the wrong numbers of tests. D
would be correct only if this were not defined as a simple IF statement, because a complex IF
statement could include more than two outcomes.)
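A minimal sketch of the two tests, using an invented function (the discount rule is an assumption for illustration, not part of the question): a single simple IF gives exactly two decision outcomes.

```python
def apply_discount(total):
    # one simple IF: exactly two decision outcomes (true branch, false branch)
    if total > 100:
        return total * 0.9   # condition true: apply 10% discount
    return total             # condition false: no discount

# 100% decision coverage needs one test per decision outcome
assert apply_discount(200) == 180.0   # exercises the true branch
assert apply_discount(50) == 50       # exercises the false branch
```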
What is error guessing?
a. A testing technique used to guess where a developer is likely to have made a mistake
b. A technique used for assessing defect metrics
c. A development technique to verify that all error paths have been coded
d. A planning technique used to anticipate likely schedule variances due to faults
A
(A is correct. Error guessing is a technique used to anticipate where developers are likely to
make errors and to create tests to cover those areas. B, C and D are not correct.)
When exploratory testing is conducted using time-boxing and test charters, what is it
called?
a. Schedule-based testing
b. Session-based testing
c. Risk-based testing
d. Formal chartering
B
(B is correct. This is often called session-based testing and may use session sheets. A is not
correct: exploratory testing does not usually conform to a schedule but rather allows the
tester to explore and learn about the software. Coverage is difficult to assess, which is one
of the reasons it is difficult to match the time spent to the amount accomplished. C is not
correct: this may be one form of risk-based testing, but it is not risk-based testing in
itself. D is not correct, as this is not an actual testing term.)
You are testing a scale system that determines shipping rates for a regional web-based auto
parts distributor. You want to group your test conditions to minimize the testing. Identify
how many equivalence classes are necessary for the following range. Weights are rounded
to the nearest pound.
Weight | 1-10 lb | 11-25 lb | 26-50 lb | 51 lb & up
Shipping Cost | $5 | $7.50 | $12 | $17
a. 8
b. 6
c. 5
d. 4
C
(C is correct. You need a partition for each of the four valid weight classes, plus one
invalid partition for a zero or negative weight.)
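The five partitions can be sketched in code; the function name and the decision to reject invalid weights with an exception are assumptions for illustration:

```python
def shipping_cost(weight):
    """Return the shipping cost in dollars for a weight in whole pounds."""
    if weight < 1:           # invalid partition: zero or negative weight
        raise ValueError("weight must be at least 1 lb")
    if weight <= 10:         # valid partition: 1-10 lb
        return 5.00
    if weight <= 25:         # valid partition: 11-25 lb
        return 7.50
    if weight <= 50:         # valid partition: 26-50 lb
        return 12.00
    return 17.00             # valid partition: 51 lb and up

# one representative test value per valid partition covers all four classes
assert shipping_cost(5) == 5.00
assert shipping_cost(18) == 7.50
assert shipping_cost(40) == 12.00
assert shipping_cost(75) == 17.00
```

A fifth test with a zero or negative weight exercises the invalid partition.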
You are testing a scale system that determines shipping rates for a regional web-based auto
parts distributor. Due to regulations, shipments cannot exceed 100 lbs. You want to include
boundary value analysis as part of your black-box test design. How many tests will you need
to execute to achieve 100% two-value boundary value analysis?
Weight | 1-10 lb | 11-25 lb | 26-50 lb | 51-100 lb
Shipping Cost | $5 | $7.50 | $12 | $17
a. 4
b. 8
c. 10
d. 12
C
(C is correct. Two-value boundary value analysis tests each boundary value and its closest
neighbour: the eight boundaries of the four valid ranges (1, 10, 11, 25, 26, 50, 51, 100)
plus 0 below the valid span and 101 above it, giving ten values in total: 0, 1, 10, 11, 25,
26, 50, 51, 100, 101.)
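The two-value boundary tests can be derived mechanically from the ranges in the question; this small sketch enumerates them:

```python
# valid weight ranges (inclusive), in whole pounds, from the question
ranges = [(1, 10), (11, 25), (26, 50), (51, 100)]

# two-value BVA: each boundary value plus its closest neighbour outside it
values = set()
for low, high in ranges:
    values.update({low - 1, low, high, high + 1})

boundary_tests = sorted(values)
print(boundary_tests)  # [0, 1, 10, 11, 25, 26, 50, 51, 100, 101]
```

Neighbours of interior boundaries coincide with the adjacent range's own boundaries, which is why the count is 10 rather than 16.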
You are testing an e-commerce transaction that has the following states and transitions:
1. Login (invalid) > Login
2. Login > Search
3. Search > Search
4. Search > Shopping Cart
5. Shopping Cart > Search
6. Shopping Cart > Checkout
7. Checkout > Search
8. Checkout > Logout
For a state transition diagram, how many transitions should be shown?
a. 4
b. 6
c. 8
d. 16
C
(C is correct. There are 8 transitions that should be shown in the state transition diagram as
explained in the question. A is not correct as this is only checking one transition from each
state. B is not correct because this is probably excluding login > login and search > search. D
is not correct because it is checking the invalid transitions as well and those would be
included in a state table, not a state transition diagram. These are:
1. Login (invalid) > Login
2. Login > Search
3. Login > Shopping Cart (invalid transition)
4. Login > Checkout (invalid transition)
5. Search > Login (invalid transition)
6. Search > Search
7. Search > Shopping Cart
8. Search > Checkout (invalid transition)
9. Shopping Cart > Login (invalid transition)
10. Shopping Cart > Search
11. Shopping Cart > Shopping Cart (invalid transition)
12. Shopping Cart > Checkout
13. Checkout > Login (invalid transition)
14. Checkout > Search
15. Checkout > Shopping Cart (invalid transition)
16. Checkout > Logout)
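The eight valid transitions can be captured as a simple data structure; counting its entries gives the number of arrows in the diagram (state names are taken from the question):

```python
# valid transitions from the question, as (from_state, to_state) pairs;
# a state table would additionally list the 8 invalid combinations
valid_transitions = [
    ("Login", "Login"),             # invalid login retries
    ("Login", "Search"),
    ("Search", "Search"),
    ("Search", "Shopping Cart"),
    ("Shopping Cart", "Search"),
    ("Shopping Cart", "Checkout"),
    ("Checkout", "Search"),
    ("Checkout", "Logout"),
]

print(len(valid_transitions))  # 8 arrows in the state transition diagram
```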
You are testing a banking application that allows a customer to withdraw 20, 100 or 500
dollars in a single transaction. The values are chosen from a drop-down list and no other
values may be entered.
How many equivalence partitions need to be tested to achieve 100% equivalence partition
coverage?
a. 1
b. 2
c. 3
d. 4
D
(D is correct. With a drop-down list, each selectable value is its own partition: 20, 100,
500, plus the case where no selection is made.)
Level of risk is determined by which of the following?
a. Likelihood and impact
b. Priority and risk rating
c. Probability and practicality
d. Risk identification and mitigation
A
(A is correct. The combination of likelihood and impact is normally used to determine the
overall risk level (sometimes called the risk priority number).)
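A common way to combine the two factors is a simple product; this sketch assumes 1-5 scales for both likelihood and impact, which is one convention among many:

```python
def risk_level(likelihood, impact):
    """Risk priority number: likelihood x impact, each on an assumed 1-5 scale."""
    return likelihood * impact

# a rare (1) but severe (5) risk scores lower than a likely (4) severe (5) one
assert risk_level(1, 5) < risk_level(4, 5)
```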
Who normally writes the test plan for a project?
a. The project manager
b. The product owner
c. The test manager
d. The tester
C
(C is correct. Writing and updating the test plan is normally the responsibility of the test
manager. A, B and D may contribute to the test plan, but the overall responsibility belongs
to the test manager)
What is the biggest problem with a developer testing his own code?
a. Developers are not good testers
b. Developers are not quality focused
c. Developers are not objective about their own code
d. Developers do not have time to test their own code
C
(C is correct. This is the biggest problem. A and B are not necessarily true - some developers
are good testers and have a good quality focus. D is not correct because unit testing is part
of their job and time should be made in the schedule for at least unit testing.)
Which of the following is a project risk?
a. A defect that is causing a performance issue
b. A duplicate requirement
c. An issue with a data conversion procedure
d. A schedule that requires work during Christmas shutdown
D
(D is correct, this is a risk to the entire project. A, B and C are product risks.)
If your test strategy is based on the list of the ISO 25010 quality characteristics, what type
of strategy is it?
a. Regulatory
b. Analytical
c. Methodical
d. Reactive
C
(C is correct. When tests are derived from a systematic use of a preset list of quality
characteristics, this is a methodical strategy. A is not correct because ISO 25010 is a
standard, not a regulation. B is not correct because the tests are based on a predefined list
rather than on an analysis. D is not correct because the tests are not responding to the
software's behavior, but rather following something that is already defined.)
If the developers are releasing code for testing that is not version controlled, what process is
missing?
a. Configuration management
b. Debugging
c. Test and defect management
d. Risk analysis
A
(A is correct. Configuration management is missing if the code is not being properly
versioned and tracked)
You are getting ready to test another upgrade of an ERP system. The previous upgrade was
tested by your team and has been in production for several years. For this situation, which
of the following is the most appropriate test effort estimation technique?
a. Effort-based
b. Expert-based
c. Metric-based
d. Schedule-based
C
(C is correct. In this case, you should have access to the effort that was required on the
previous version of the ERP system, and you should be able to use that information to
predict the effort for this release. A and D are not correct because these are not estimation
techniques. B is not correct because, when internal metrics from a comparable project exist,
they provide a more reliable estimate than expert judgment alone.)
You have been testing software that will be used to track credit card purchases. You have
found a defect that causes the system to crash, but only if a person has made and voided 10
purchases in a row. What is the proper priority and severity rating for this defect?
a. Priority high, severity high
b. Priority high, severity low
c. Priority low, severity low
d. Priority low, severity high
D
(D is correct. This scenario is unlikely to occur, so the urgency to fix the defect is low
(low priority), but it does crash the system, so the impact is high (high severity).)
Consider the following test cases that are used to test an accounting system:
TestID | Name | Dependency | Priority
1 | Purchase Item | None | 2
2 | Receive Invoice | Test 1 | 3
3 | Receive Goods | Test 1 | 2
4 | Send Payment | Test 2 | 3
5 | Report Payments | Test 4 | 1
Given this information, what is the proper order in which to execute these test cases?
a. 5, 1, 3, 2, 4
b. 1, 2, 4, 5, 3
c. 1, 3, 2, 4, 5
d. 3, 4, 5, 1, 2
B
(B is correct. The goal is to run the highest priority tests as soon as possible. Dependency has
to be considered in order for the tests to actually be executed. In order to get the highest
priority test run as soon as possible, the correct order is as follows: test 1 has to go first
since everything else is dependent on it. Then we need to do 2 so we can do 4 and 5 (the
highest priority test) and then 3 is last because 5 is not dependent on it. A is not correct
because 5 cannot be run first. C is not correct because it does not run 5 as soon as possible;
it defers it until after 3 is run. D is not correct because 3 can't be run first as it requires 1 and
2.)
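The scheduling logic described above (run the highest-priority test as soon as its dependency chain allows) can be sketched as follows, using the data from the question; the function is an illustrative sketch, not a standard algorithm from the syllabus:

```python
# (dependency, priority) per test ID, from the question; priority 1 is highest
tests = {1: (None, 2), 2: (1, 3), 3: (1, 2), 4: (2, 3), 5: (4, 1)}

def execution_order(tests):
    order = []

    def schedule(test_id):
        dep = tests[test_id][0]
        if dep is not None and dep not in order:
            schedule(dep)          # run the dependency chain first
        if test_id not in order:
            order.append(test_id)

    # pull in the highest-priority tests (and their dependencies) first
    for test_id in sorted(tests, key=lambda t: tests[t][1]):
        schedule(test_id)
    return order

print(execution_order(tests))  # [1, 2, 4, 5, 3]
```

Test 5 (priority 1) is scheduled first, which forces its chain 1 > 2 > 4 ahead of it; test 3 then runs last because nothing depends on it.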
Which of the following are major objectives of a pilot project for a tool introduction?
a. Roll out, adapt, train, implement
b. Monitor, support, revise, implement
c. Learn, evaluate, decide, assess
d. Evaluate, adapt, monitor, support
C
(C is correct. Learn more about the tool, evaluate the fit in the organization, decide on
standard usage and assess benefits to be achieved are all objectives for a pilot project.)
What is the primary purpose of a test execution tool?
a. It runs automated test scripts to test the test object
b. It automatically records defects in the defect tracking system
c. It analyzes code to determine if there are any coding standard violations
d. It tracks test cases, defects and requirements traceability
A
(A is correct. This is the primary purpose of the test execution tools. B may be something the
tool can do, but this is not the primary purpose. C is a static analysis tool and D is a test
management tool.)