ST-Test Management
Tasks and Qualifications
Test administrator:
• Expert(s) for installing and operating the test environment (system administrator
knowledge). Sets up and supports the test environment (often coordinating with
general system administration and network management).
Tasks and Qualifications
Tester:
• Expert(s) for executing tests and reporting failures
• Reviews test plans and test cases
• Uses the required tools and test monitoring tools
• Executes and logs tests, evaluates and documents the results, and reports
deviations (failures)
Tasks and Qualifications
Successful Tester
In addition to technical and test-specific skills, a tester needs social skills:
• Ability to work in a team, and political and diplomatic aptitude
• Skepticism (willingness to question apparent facts)
• Persistence and poise
• Ability to get quickly acquainted with (complex fields of) application
Multidisciplinary Team
• A multidisciplinary team is required, specifically in system testing
Test Planning
Test Plan
• Testing should have a plan.
• Planning and test preparation starts as early as possible in the software project.
• The test policy of the organization and the objectives, risks, and constraints of
the project, as well as the criticality of the product, influence the test plan.
Test planning activities – done by the test manager:
• Defining the overall approach to and strategy for testing
• Deciding about test automation
• Defining the test levels and their interaction
• Deciding how to integrate the testing activities with the other project activities
• Selecting metrics for monitoring test results and progress (see the sketch below)
• Defining test exit criteria
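As an illustration of the metrics item above, the following minimal Python sketch (hypothetical names and numbers, not taken from the text) computes execution and pass rates that a test manager might track per test cycle:

# Minimal sketch (illustrative only): example metrics for monitoring
# test results and progress.

def progress_metrics(planned, executed, passed, failed):
    """Return simple monitoring metrics as percentages."""
    return {
        "executed_pct": 100.0 * executed / planned if planned else 0.0,
        "pass_pct": 100.0 * passed / executed if executed else 0.0,
        "fail_pct": 100.0 * failed / executed if executed else 0.0,
    }

# Hypothetical cycle: 200 planned test cases, 150 executed, 135 passed, 15 failed.
print(progress_metrics(200, 150, 135, 15))
# {'executed_pct': 75.0, 'pass_pct': 90.0, 'fail_pct': 10.0}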
Test Plan
Test planning activities – done by the test manager:
• Determining how much test documentation shall be prepared and which
templates to use
• Deciding what is tested, by whom, when, and how intensively
• Estimating test effort and test costs; (re)estimating and (re)planning the testing
tasks during later testing work
Prioritizing Tests
• Very often time and budget are not sufficient to run all test cases of a certain test
level.
• Thus, it is necessary to select test cases in a suitable way.
• Even with a reduced number of test cases, it must be assured that as many
critical faults as possible are found. This means test cases must be prioritized.
Criteria for prioritization
• Usage frequency of a function and the probability of failure. If certain functions
of the system are used often and they contain a fault, then the probability of this
fault leading to a failure is high. Thus, test cases for this function should have a
higher priority than test cases for a less-often-used function.
Prioritizing Tests
Criteria for prioritization
• Failure risk. Risk is the combination of severity and failure probability. The
severity is the expected damage. Such risks may be, for example, that the
business of the customer using the software is impaired, thus leading to financial
losses for the customer. Tests that may find failures with a high risk get higher
priority than tests that may find failures with low risks.
• Visibility of a failure. This is especially important in interactive systems. For
example, a user of a city information service will feel unsafe if there are
problems in the user interface and will lose confidence in the remaining
information output.
• Priority of the requirements. The different functions delivered by a system have
different importance for the customer. The customer may be able to accept the
loss of some functionality if it behaves incorrectly; for other parts, this may not
be acceptable.
Prioritizing Tests
Criteria for prioritization
• Development or system architecture. Components that lead to severe
consequences when they fail (for example, a crash of the system) should be
tested especially intensively.
• Complexity of the individual components and system parts. Complex program
parts should be tested more intensively because developers probably introduced
more faults. However, it may happen that program parts seen as easy contain
many faults because development was not done with the necessary care.
Therefore, prioritization in this area should be based on experience.
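The criteria above can be combined into a simple priority score. The following Python sketch is illustrative only; the weighting scheme, field names, and numbers are assumptions, not values prescribed by the text:

# Minimal sketch (assumed scoring scheme): prioritize test cases by usage
# frequency, failure probability, and severity; run the highest scores first.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    usage_frequency: float      # how often the tested function is used (0..1)
    failure_probability: float  # estimated probability of a fault (0..1)
    severity: int               # expected damage if it fails (1 = low .. 5 = high)

    @property
    def priority(self) -> float:
        # Risk = probability x severity, weighted by how often the function is used.
        return self.usage_frequency * self.failure_probability * self.severity

tests = [
    TestCase("invoice_calculation", 0.9, 0.3, 5),
    TestCase("export_to_pdf",       0.2, 0.4, 2),
    TestCase("login",               1.0, 0.1, 4),
]

# When time or budget is limited, execute the highest-priority tests first.
for tc in sorted(tests, key=lambda t: t.priority, reverse=True):
    print(f"{tc.name}: priority {tc.priority:.2f}")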
Test Entry and Exit Criteria
Test entry (start) criteria
• The test environment and the test tools are ready.
• The test objects and the test basis are ready to enter the testing phase.
• They are compatible with the test environment.
• The necessary test data is available.
These criteria are preconditions for starting test execution. They prevent the test
team from wasting time trying to run tests that are not ready.
Test Entry and Exit Criteria
Test exit criteria
• Exit criteria are used to make sure test work is not stopped by chance or
prematurely.
• They prevent tests from ending too early, for example, because of time pressure
or because of resource shortages.
• They also prevent testing from being too extensive.
Test exit criteria
• Achieved test coverage: covered requirements, code coverage, etc.
• Product quality: Defect severity, failure rate, and reliability of the test object
• Residual risk: Tests not executed, defects not repaired
• Economic constraints: Allowed cost, project risks, release deadlines
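A minimal sketch of how such exit criteria could be checked; the thresholds and names below are assumptions made for illustration, not values from the text:

# Minimal sketch (assumed thresholds): checking typical exit criteria such as
# coverage, open critical defects, and remaining budget.

def exit_criteria_met(requirement_coverage, code_coverage,
                      open_critical_defects, budget_spent, budget_limit):
    """Return True if all example exit criteria are satisfied."""
    return (requirement_coverage >= 0.95        # achieved test coverage
            and code_coverage >= 0.80
            and open_critical_defects == 0      # product quality / residual risk
            and budget_spent <= budget_limit)   # economic constraints

print(exit_criteria_met(0.97, 0.85, 0, 48_000, 50_000))  # True
print(exit_criteria_met(0.97, 0.85, 2, 48_000, 50_000))  # False: critical defects still open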
Cost and Economy Aspect
Costs of Defects
Direct defect costs:
• Costs that arise for the customer due to failures during operation of the software
product.
• Examples are costs due to calculation mistakes (data loss, wrong orders, damage
of hardware or parts of the technical installation, damage to personnel).
Indirect defect costs:
• Costs or loss of sales for the vendor that occur because the customer is
dissatisfied with the product.
• Some examples include reduction of payment for failure to meet requirements,
increased costs for customer support, bad publicity, legal costs, and even loss
of a license (for example, for safety-critical software).
Costs of Defects
Costs for defect correction:
• Time needed for failure analysis, correction, and regression test
• Time for redistribution and reinstallation, new customer and user training
• Delay of new products due to tying up the developers with maintenance of the
existing product, decreasing competitiveness
Cost of Testing
The following list shows the most important factors that a test manager should
take into account when estimating the cost of testing.
Maturity of the development process
• Stability of the organization
• Developer’s error rate
• Change rate for the software
• Time pressure because of unrealistic plans
Quality and testability of the software
• Number, severity, and distribution of defects in the software
• Size and type of the software
• Complexity of the problem domain and the software
Cost of Testing
Test infrastructure
• Availability of testing tools
Test approach
• The chosen test techniques (black box or white box)
• Test schedule (start and execution of the test work in the project or in the
software life cycle)
Test Strategy and Test Approach
Preventive vs. Reactive Approach
Preventive approaches
• Testers are involved from the beginning: test planning and design start as early
as possible.
• The test manager can then substantially reduce testing costs.
• Use of the general V-model, including design reviews, etc., will contribute a lot
to prevent defects.
• Early test specification and preparation, as well as application of reviews and
static analysis, contribute to finding defects early and thus lead to reduced defect
density during test execution.
• When safety-critical software is developed, a preventive approach may be
mandatory.
Preventive vs. Reactive Approach
Reactive approaches
• Testers are involved (too) late, and a preventive approach cannot be chosen.
• Test planning and design starts after the software or system has already been
produced.
• The test manager must find an appropriate solution even in this case.
• One very successful strategy in such a situation is called exploratory testing.
• This is a heuristic approach in which the tester “explores” the test object; test
design, test execution, and evaluation occur nearly concurrently.
Analytical vs. Heuristic Approach
Analytical approach
• Test planning is founded on data and its analysis.
• The amount of testing is then chosen such that individual or multiple parameters
(costs, time, coverage, etc.) are optimized.
Heuristic approach
• Test planning is founded on experience of experts and/or on rules of thumb.
• Reasons may be that no data is available, mathematical modeling is too
complicated, or the necessary know-how is missing.
Analytical vs. Heuristic Approach
Model-based testing
• Uses abstract functional models of the software under test for test case design, to
find test exit criteria, and to measure test coverage.
• An example is state-based testing, where state transition machines are used as
models (see the sketch below).
Statistical testing
• Uses statistical models about fault distribution in the test object, failure rates
during use of the software, or the use cases to develop a test approach.
• Based on this data, the test effort is allocated and test techniques are chosen.
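A minimal sketch of the state-based testing example mentioned above; the login model, function names, and transition-coverage criterion are illustrative assumptions:

# Minimal sketch (illustrative model): a state transition machine as test model,
# with one test case derived per transition and transition coverage as an
# example exit criterion.

transitions = {
    ("logged_out", "login"):  "logged_in",
    ("logged_in",  "logout"): "logged_out",
    ("logged_in",  "lock"):   "locked",
    ("locked",     "unlock"): "logged_in",
}

def derive_transition_tests(transitions):
    """One test case per transition: (start state, event, expected end state)."""
    return [(src, event, dst) for (src, event), dst in transitions.items()]

for start, event, expected in derive_transition_tests(transitions):
    print(f"Given state '{start}', when '{event}', expect state '{expected}'")

# Example coverage measurement after a partial test run:
executed = [("logged_out", "login"), ("logged_in", "logout")]
covered = len(set(executed) & set(transitions)) / len(transitions)
print(f"Transition coverage: {covered:.0%}")  # 50% -> exit criterion not yet met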
Analytical vs. Heuristic Approach
Risk-based testing
• Uses information on project and product risks and directs testing to areas with
high risk.
Checklist-based testing
• Approaches use failure and defect lists from earlier test cycles, or prioritized
quality criteria.
Expert-oriented approaches
• Use the expertise and “gut feeling” of involved experts (for the technology used
or the application domain).
• Their personal feeling about the technologies used and/or usage domain
influences and controls their choice of test approach.
Risk-based Testing
• Risk is the possibility of a negative or undesirable outcome or event.
• Risks exist whenever some problem may occur which would decrease customer, user,
participant, or stakeholder perceptions of product quality or project success.
• Depending on whether the primary effect of the potential problem is on product
quality or on project success, risks are classified as product quality risks or as
project (planning) risks.
• Quality risks are identified during a product quality risk analysis with the stakeholders.
The test team then designs, implements, and executes tests to mitigate the quality
risks.
• Quality includes the features that affect customer / user / stakeholder satisfaction.
• Examples of quality risks: a system includes incorrect calculations in reports (a
functional risk related to accuracy), slow response to user input (a non-functional risk
related to efficiency and response time), and difficulty in understanding screens and
fields (a non-functional risk related to usability and understandability).
• When tests reveal defects, testing has mitigated quality risk by providing the
awareness of defects and opportunities to deal with them before release. When tests do
not find defects, testing has mitigated quality risk by ensuring that, under the tested
conditions, the system operates correctly.
Risk-based Testing
• Risk-based testing uses product quality risks to select test conditions, to allocate
test effort for those conditions, and to prioritize the resulting test cases.
• Risk-based testing has the objective of using testing to reduce the overall level
of quality risk and, specifically, to reduce that level of risk to an acceptable
level.
• Risk-based testing consists of four main activities:
1. Risk identification
2. Risk assessment
3. Risk mitigation
4. Risk management
Risk-based Testing
Risk Identification
Technical analysts can identify risks through one or more of the following
techniques:
• Expert interviews
• Independent assessments
• Use of risk templates
• Project retrospectives
• Risk workshops
• Brainstorming
• Checklists
• Calling on past experience
By involving the broadest possible sample of stakeholders, the risk identification
process is most likely to identify most of the significant product quality risks.
Risk-based Testing
Risk Identification
Risks that might be identified by the Technical Test Analyst
• Performance efficiency (e.g., inability to achieve required response times under
high load conditions)
• Security (e.g., disclosure of sensitive data through security attacks)
• Reliability (e.g., application unable to meet availability specified in the Service
Level Agreement)
Risk-based Testing
Risk Assessment
• Risk assessment is the study of identified risks in order to categorize each risk
and determine the likelihood and impact associated with it.
• The likelihood of occurrence is usually interpreted as the probability that the
potential problem could exist in the system under test.
• The Technical Test Analyst contributes to identifying potential technical product
risks and the potential business impact of the problem.
Risk-based Testing
Risk Assessment
Project risks can impact the overall success of the project. Typically, the following
generic project risks need to be considered:
• Conflict between stakeholders regarding technical requirements
• Communication problems resulting from the geographical distribution of the
development organization
• Tools and technology (including relevant skills)
• Time, resource and management pressure
• Lack of earlier quality assurance
• High change rates of technical requirements
Risk-based Testing
Risk Assessment
Product risk factors may result in higher numbers of defects. Typically, the
following generic product risks need to be considered:
• Complexity of technology
• Complexity of code structure
• Amount of re-use compared to new code
• Large number of defects found relating to technical quality characteristics
• Interface and integration issues
Given the available risk information, the Technical Test Analyst proposes an
initial risk level according to the guidelines established by the Test Manager. For
example, the Test Manager may determine that risks should be categorized with a
value from 1 to 10, with 1 being highest risk.
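A minimal sketch of such a risk-level guideline; the particular likelihood and impact scales and the mapping onto the 1-to-10 scale are assumptions made for illustration:

# Minimal sketch (assumed mapping): combine likelihood and impact into a risk
# level on a 1..10 scale where 1 is the highest risk, as in the Test Manager's
# example guideline above.

def risk_level(likelihood: int, impact: int) -> int:
    """likelihood and impact each rated 1 (low) .. 5 (high); returns 1 (highest risk) .. 10."""
    score = likelihood * impact  # 1 .. 25, higher means riskier
    # Map the 1..25 score onto 1..10, inverting so that 1 means highest risk.
    level = 11 - round(score * 10 / 25)
    return min(max(level, 1), 10)

print(risk_level(5, 5))  # 1  -> highest risk
print(risk_level(3, 3))  # 7  -> medium risk
print(risk_level(1, 1))  # 10 -> lowest risk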
Risk-based Testing
Risk Mitigation
Risk mitigation depends on how testing is mapped to the identified risks. This
generally involves the following:
• Reducing risk by executing the most important tests (those addressing high risk
areas) and by putting into action measures as stated in the test plan
• Evaluating risks based on additional information gathered as the project unfolds,
and using that information to implement mitigation measures aimed at
decreasing the likelihood or avoiding the impact of those risks.
The Technical Test Analyst will often cooperate with specialists in areas such as
security and performance to define risk mitigation measures and elements of the
organizational test strategy.
Incident Management
Test Log
Test Log Analysis
• After each test run, or at the latest upon completion of a test cycle, the test logs
are evaluated.
• Real results are compared to the expected results.
• Each significant, unexpected event that occurred during testing could be an
indication of a test object malfunctioning.
• Corresponding passages in the test log are analyzed.
• The testers ascertain whether a deviation from the predicted outcome really has
occurred or whether an incorrectly designed test case, incorrect test automation,
or incorrect test execution caused the deviation (testers, too, can make mistakes).
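A minimal sketch of this comparison step; the log format and field names are assumptions made for illustration:

# Minimal sketch (assumed log format): evaluate a test log by comparing actual
# to expected results and flag deviations for further analysis. A deviation may
# still stem from a test design, automation, or execution mistake.

test_log = [
    {"test": "TC-01", "expected": "order accepted", "actual": "order accepted"},
    {"test": "TC-02", "expected": "error message",  "actual": "system crash"},
]

deviations = [entry for entry in test_log if entry["actual"] != entry["expected"]]
for entry in deviations:
    print(f"{entry['test']}: expected '{entry['expected']}', got '{entry['actual']}' -> analyze")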
Test Log
Documenting Incidents
• If the test object caused the problem, a defect or incident report is created.
• This is done for every unexpected behavior or observed deviation from the
expected results found in the test log.
• An observation may be a duplicate of an observation recorded earlier. In this
case, it should be checked to see whether the second observation yields
additional information, which may make it possible to more easily search for the
cause of the problem.
Incident Reporting
• In general, a central database is established for each project, in which all
incidents and failures discovered during testing (and possibly during operation)
are registered and managed.
• All personnel involved in development as well as customers and users can report
incidents.
• These reports can refer to problems in the tested (parts of) programs as well as
to faults in specifications, user manuals, or other documents.
• To allow for smooth communication and to enable statistical analysis of the
incident reports, every report must follow a project-wide unique report template.
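A minimal sketch of such a project-wide report template expressed as a data structure; the field names and status values are assumptions, not a prescribed standard:

# Minimal sketch (assumed fields): a uniform incident report template so that
# every report has the same structure and can be analyzed statistically.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    report_id: str
    summary: str
    test_object: str            # program part or document affected
    severity: str               # e.g., "critical", "major", "minor"
    priority: str               # urgency of correction
    status: str = "new"         # e.g., new -> open -> fixed -> retested -> closed
    reported_by: str = ""
    reported_on: date = field(default_factory=date.today)
    steps_to_reproduce: str = ""

report = IncidentReport(
    report_id="INC-0042",
    summary="Wrong VAT rate in invoice total",
    test_object="billing module v1.3",
    severity="major",
    priority="high",
    reported_by="tester A",
    steps_to_reproduce="Create an invoice with reduced-rate items; check the total.",
)
print(report)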
Incident Reporting
Defect Classification
Incident Status