
Basics of Software Testing
Test Progress Monitoring and Control
Test Progress Monitoring
● Test Preparation
○ Estimate number of tests needed
○ Estimate time to prepare
○ Refine these figures
○ Track percentage of tests prepared
● Scope
○ Feedback & visibility about what we do
Test Progress Monitoring
● The purpose of test monitoring is to give feedback and visibility about test
activities
● Information may be collected manually or automatically and may be used
to measure exit criteria, such as coverage
● Metrics may be used to assess progress against the planned schedule and
budget
Test Progress Monitoring
● While testing, you can monitor (common test metrics):
○ Percentage of work done in test case preparation and in test environment preparation
○ Test case execution - the number of tests run/not run, passed/failed
○ Defect information - the number of defects found and fixed, failure rate, retest results
■ These can be categorised by Severity, Priority and Probability
○ Test coverage of requirements, risks or code
○ Dates of milestones
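As a rough illustration of the execution and defect metrics listed above, the sketch below computes pass/fail counts from hypothetical test-run records; the data structure and field names are invented for the example, not part of any standard.

```python
# Minimal sketch: computing common test progress metrics from raw results.
# TestResult and its fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    executed: bool
    passed: bool

def progress_metrics(results: list[TestResult]) -> dict:
    run = [r for r in results if r.executed]
    passed = [r for r in run if r.passed]
    return {
        "total": len(results),
        "run": len(run),
        "not_run": len(results) - len(run),
        "passed": len(passed),
        "failed": len(run) - len(passed),
        "pass_rate": len(passed) / len(run) if run else 0.0,
    }

if __name__ == "__main__":
    results = [
        TestResult("login_ok", True, True),
        TestResult("login_bad_password", True, False),
        TestResult("checkout_flow", False, False),
    ]
    print(progress_metrics(results))
```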
Test Reporting
● Is concerned with summarizing information about the testing endeavour,
including:
○ What happened during a period of testing, such as dates when exit criteria were met
○ Analyzed information and metrics to support recommendations and decisions about
future actions, such as an assessment of defects remaining
● According to the ‘Standard for Software Test Documentation’ (IEEE 829),
reporting has a defined structure
Test Reporting
● Metrics should be collected during and at the end of a test level in order
to assess:
○ The adequacy of the test objectives for that test level
○ The adequacy of the test approach taken
○ The effectiveness of the testing with respect to its objectives
● The status of the project should be regularly reported
○ Any deviations from the schedule should be raised as soon as possible
○ Any critical faults found should be raised immediately
Test Reporting
● A test summary report shall have the following structure:
○ Test summary report identifier
○ Summary
○ Variances
○ Comprehensive assessment
○ Summary of results
○ Evaluation
○ Summary of activities
○ Approvals
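One way to keep that structure visible in a test-management script is a simple record type. The field names below mirror the IEEE 829 sections listed above; the example values are invented.

```python
# Sketch: the IEEE 829 test summary report sections as a record.
# Section names follow the standard; the values are placeholders.
from dataclasses import dataclass, asdict

@dataclass
class TestSummaryReport:
    report_identifier: str
    summary: str
    variances: str
    comprehensive_assessment: str
    summary_of_results: str
    evaluation: str
    summary_of_activities: str
    approvals: str

report = TestSummaryReport(
    report_identifier="TSR-2024-001",
    summary="System test of release 1.2",
    variances="Two test cases deferred to the next cycle",
    comprehensive_assessment="Testing covered all planned requirements",
    summary_of_results="148 passed, 3 failed, 2 not run",
    evaluation="Release acceptable with known minor defects",
    summary_of_activities="3 weeks of execution, 1 retest cycle",
    approvals="Test manager, project manager",
)
print(asdict(report))
```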
Test Control
● Describes any guiding or corrective actions taken as a result of
information and metrics gathered and reported
● Controlling measures:
○ Assign extra resource
○ Re-allocate resource
○ Adjust the test schedule
○ Arrange for extra test environments
○ Refine the completion criteria
Test Control
● Some examples of test control actions:
○ Making decisions based on information from test monitoring
○ Re-prioritize tests when an identified risk occurs (e.g. software delivered late)
○ Change the test schedule due to availability of a test environment
○ Set an entry criterion requiring fixes to have been retested (confirmation tested) by a
developer before accepting them into a build
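As a toy illustration of "making decisions based on information from test monitoring", the sketch below suggests control actions when certain thresholds are breached; the thresholds, field names and suggested actions are assumptions for the example only.

```python
# Sketch: deriving candidate test control actions from monitoring data.
# Thresholds and actions are illustrative assumptions, not prescribed values.
def suggest_control_actions(metrics: dict) -> list[str]:
    actions = []
    if metrics["prepared_pct"] < 80 and metrics["days_to_start"] <= 2:
        actions.append("Assign extra resource to test case preparation")
    if metrics["failed"] / max(metrics["run"], 1) > 0.25:
        actions.append("Re-prioritize remaining tests around failing areas")
    if not metrics["environment_ready"]:
        actions.append("Adjust the schedule or arrange an extra test environment")
    return actions

print(suggest_control_actions({
    "prepared_pct": 70, "days_to_start": 1,
    "run": 40, "failed": 12, "environment_ready": False,
}))
```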
Summary
● Multiple factors must be considered when estimating the length of time
we need to perform testing
● Once testing has started it is necessary to monitor the situation as it
progresses
● Careful control must be kept to ensure project success
Basics of Software Testing
Configuration Management
Configuration Management
● The IEEE definition of Configuration Management
● A discipline applying technical and administrative direction and
surveillance to:
○ Identify and document the functional and physical characteristics of a configuration item
○ Control changes to those characteristics
○ Record and report change processing and implementation status, and
○ Verify compliance with specified requirements
Configuration Management
● The purpose - to establish and maintain the integrity of the products
(components, data and documentation) of the software or system
through the project and product life cycle
● For the tester, Configuration Management helps to uniquely identify (and
to reproduce) the tested item, test documents, the tests and the test
harness
● During test planning, the Configuration Management procedures and
infrastructure (tools) should be chosen, documented and implemented
Configuration Management
● Closely linked to Version Control
○ Version Control looks at each component
○ Holds the latest version of each component
○ What versions of components work with others in a configuration
● It enables you to understand what versions of components work with
each other
○ It allows you to understand the relationship between test cases, specifications and
components
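A minimal sketch of that idea: a configuration names the specific component versions known to work together, and test results are traced back to the configuration they ran against. All component names, versions and test IDs below are invented.

```python
# Sketch: a configuration as a named set of component versions,
# with traceability from test runs to that configuration.
# Names, versions and test IDs are invented for illustration.
configuration = {
    "name": "release-1.2-rc3",
    "components": {
        "web-frontend": "2.4.1",
        "order-service": "1.9.0",
        "payment-adapter": "0.7.3",
    },
}

test_runs = [
    {"test_case": "TC-101", "configuration": "release-1.2-rc3", "result": "pass"},
    {"test_case": "TC-102", "configuration": "release-1.2-rc3", "result": "fail"},
]

# Reproducing a failure means rebuilding exactly these component versions.
for run in test_runs:
    if run["result"] == "fail":
        print(run["test_case"], "failed against", configuration["components"])
```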
Other Configuration Management terms
● Configuration identification - all Configuration Items and their versions
are known
○ Selecting the configuration items for a system and recording their functional and physical
characteristics in technical documentation
● Configuration control - Configuration Items are kept in a library and
records are maintained on how Configuration Items change over time
○ Evaluation, coordination, approval or disapproval, and implementation of changes to
configuration items after formal establishment of their configuration identification
Other Configuration Management terms
● Status accounting - all actions on Configuration items are recorded and
reported on
○ Recording and reporting of information needed to manage a configuration effectively.
This information includes
■ A listing of the approved configuration identification
■ The status of proposed changes to the configuration
■ The implementation status of the approved changes
● Configuration auditing
○ The function to check the contents of libraries of configuration items, e.g. for standards
compliance
Summary
● Configuration Management enables us to store all information on a
system, provides traceability and enables reconstruction
● Configuration Management is a necessary part of any system
development
● All assets must be known and controlled
Basics of Software Testing
Risk and Testing
Risk
● Can be defined as the chance of an event, hazard, threat or situation
occurring and its undesirable consequences; in other words, a potential problem
● The level of risk is determined by the likelihood of an adverse
event happening and the impact (the harm resulting from that event)
● When analyzing, managing and mitigating risks, the test manager follows
well established project management principles
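A common way to express this (an assumption for illustration, not mandated by the slides) is to score risk level as likelihood multiplied by impact on simple ordinal scales:

```python
# Sketch: risk level as likelihood x impact, on illustrative 1-5 scales.
def risk_level(likelihood: int, impact: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale; a higher product means higher risk."""
    return likelihood * impact

# Example: an unlikely but very harmful failure vs. a likely but minor one.
print(risk_level(likelihood=2, impact=5))  # 10
print(risk_level(likelihood=4, impact=2))  # 8
```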
Project Risks
● The risks that surround the project’s capability to deliver its objectives,
such as:
1. Organizational factors:
■ Skill and staff shortages
■ Personnel and training issues
■ Political issues
● Problems with testers communicating their needs and test results
● Failure to follow up on information found in testing and reviews
■ Improper attitude towards or expectations of testing
Project Risks
2. Technical issues:
○ Problems in defining the right requirements
○ The extent that requirements can be met given existing constraints
○ The quality of the design, code and tests
3. Supplier issues:
○ Failure of a third party
○ Contractual issues
Product Risks
● Potential failure areas in the software or system
● They are a risk to the quality of the product, such as:
○ Failure-prone software delivered
○ The potential that the software/hardware could cause harm to an individual or company
○ Poor software characteristics (e.g. functionality, reliability, usability and performance)
○ Software that does not perform its intended functions
Product Risks
● Risks are used to decide where to start testing and where to test more
● Testing is used to reduce the risk of an adverse effect occurring
● They are a special type of risk to the success of a project
● A risk-based approach to testing provides proactive opportunities to
reduce the levels of product risk, starting in the initial stages of a project
Product Risks
● In a risk-based approach the risks identified may be used to:
○ Determine the test techniques to be employed
○ Determine the extent of testing to be carried out
○ Prioritize testing in an attempt to find the critical defects as early as possible
○ Determine whether any non-testing activities could be employed to reduce risk (e.g.
providing training to inexperienced designers)
Product Risks
● To ensure that the chance of a product failure is minimized, risk
management activities provide a disciplined approach to:
○ Assess what can go wrong (risks)
○ Determine what risks are important to deal with
○ Implement actions to deal with those risks
● Testing may support the identification of new risks, may help to
determine what risks should be reduced, and may lower uncertainties
about risks
Risk Analysis
● Used to maximise the effectiveness of the overall testing process
● A risk factor should be allocated to each function in order to differentiate
between
○ Critical functions - must be fully tested and available as soon as the changes go live. The
cost to the business is high if these functions are unavailable for any reason
○ Required functions - not absolutely critical to the business. Usually possible to find
adequate methods to ‘work around’ these problems using other mechanisms
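A small sketch of allocating a risk factor to each function and using it to order testing, so that critical functions are tested first and most thoroughly; the function names and classifications below are invented.

```python
# Sketch: ordering functions for testing by an allocated risk factor.
# Function names and risk classifications are invented for illustration.
functions = [
    {"name": "process_payment", "risk_factor": "critical"},
    {"name": "generate_monthly_report", "risk_factor": "required"},
    {"name": "update_user_avatar", "risk_factor": "required"},
    {"name": "apply_interest", "risk_factor": "critical"},
]

order = {"critical": 0, "required": 1}
for f in sorted(functions, key=lambda f: order[f["risk_factor"]]):
    print(f["risk_factor"], "-", f["name"])
```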
