ISTQB 5.2
Test Management
Test Organization
● Test organization and independence - options for independence (1)
○ Testing may be the responsibility of the individual developers
■ Developers are only responsible for testing their own work
■ Goes against the ideal that testing ought to be independent
■ Tends to confirm the system works rather than detect errors
○ Testing may be the responsibility of the developer team
■ A “Buddy system”
■ Each developer is responsible for testing a colleague's work
○ Each team has a single tester
■ Is not involved in the actual development, solely concentrates on testing
■ Able to work with the team throughout development process
■ May be too close to the team to be objective
Test Organization
● Test organization and independence - options for independence (2)
○ Dedicated test teams may be in place
■ These do no development whatsoever
■ Take a more objective view of the system under test
○ Internal Test Consultants
■ Provide testing advice to project teams
■ Involvement throughout the lifecycle
○ Testing outsourced to specialist agencies
■ Guarantees independence
■ May lack empathy with the business of the organization
Test organization and independence
● The benefits of independence include:
○ Independent testers see other and different defects, and are unbiased
○ An independent tester can verify assumptions people made during specification and
implementation of the system
● The drawbacks of independence include:
○ Isolation from the development team
○ Independent testers may be the bottleneck as the last checkpoint
○ Developers may lose a sense of responsibility for quality
Task of the test leader / test manager
● Depend on
○ The project and product context
○ The people in the role
○ The organization
● Sometimes the test leader is called a test manager or test coordinator
● In large projects two positions may exist: test leader and test manager
Task of the test leader / test manager
● Engine and mind of the testing effort
● Estimate the time and cost of testing
● Determine the needed personnel skills
● Plan for and acquire the necessary resources
● Negotiate and set test completion criteria
● Define which test cases are to be carried out, by which tester, in which sequence
and at which point in time
● Adapt the test plan to the results and progress of the testing
● Introduce suitable compensatory measures
● Report test results and progress / Receive reports
● Decide when the tests can be completed based on test completion criteria
● Introduce and use metrics
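As a sketch of the last two points (reporting progress and using metrics), the short Python example below derives a few common monitoring figures from raw test counts; the data class and field names are illustrative, not prescribed by the syllabus.

```python
# Minimal sketch of simple progress/defect metrics a test leader might report.
# All names and figures are illustrative.
from dataclasses import dataclass

@dataclass
class TestRun:
    executed: int       # test cases executed so far
    passed: int         # test cases that passed
    planned: int        # test cases planned in total
    defects_found: int
    defects_fixed: int

def progress_report(run: TestRun) -> dict:
    """Derive a few monitoring metrics from raw counts."""
    return {
        "execution_progress": run.executed / run.planned,               # how far through the plan we are
        "pass_rate": run.passed / run.executed if run.executed else 0.0,
        "open_defects": run.defects_found - run.defects_fixed,          # outstanding faults
    }

print(progress_report(TestRun(executed=120, passed=102, planned=200,
                              defects_found=35, defects_fixed=28)))
```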
Typical testing tasks for testers
● Prepare, review and contribute to the test plans
● Analyze, review and assess user requirements, specifications and models
for testability
● Create test specifications
● Set up the test environment (often coordinating with system
administration and network management)
● Prepare and acquire test data
● Review tests developed by others
Typical testing tasks for testers
● Implement tests on all test levels, execute and log the tests, evaluate the
results and document the deviations from expected results
● Use test administration or management tools and test monitoring tools as
required
● Automate tests (may be supported by a developer or a test automation
expert)
● Measure performance of components and system (if applicable)
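A minimal, hypothetical illustration in Python of implementing, executing, logging and evaluating an automated test: discount_price is an invented unit under test, and the logging call is one possible way to document a deviation from the expected result.

```python
# Minimal sketch of an automated test that logs the deviation between expected
# and actual results; discount_price is a hypothetical function under test.
import logging
import unittest

logging.basicConfig(level=logging.INFO)

def discount_price(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        expected, actual = 90.00, discount_price(100.00, 10)
        if expected != actual:
            logging.info("Deviation: expected %s, actual %s", expected, actual)
        self.assertEqual(expected, actual)

if __name__ == "__main__":
    unittest.main()
```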
Testing teams need to cover
● Production of Testing Strategies
● Creation of Test Plans
● Production of Testing Scripts
● Proving of Test Scripts
● Execution of Testing Scripts
● Test Asset Management
● Reviewing & Reporting
● Quality Assurance
● Logging Results
● Executing necessary re-tests
● Automation expertise
● User Interface expertise
● Project Management
● Technical support
○ Database admin, environment admin
Testing teams will typically include
● Test analysts to prepare, execute and analyse tests
● Test consultants to prepare strategies and plans
● Test automation experts
● Load & performance experts
● Database administrator or designer
● Test managers / team leaders
● Test environment management
● And others...
Besides technical skills and knowledge, testers should be
● Socially competent and good team players
Test plan
The test plan identifies, amongst others, the test items, the features to be tested, the
testing tasks, who will do each task, the degree of tester independence, the test
environment, the test design techniques and test measurement techniques to be used, the
rationale for their choice, and any risks requiring contingency planning.
The test items, the features to be tested and the features not to be tested together
define the scope of the project.
Test plan - Approach
● Describes the approach to testing the SUT
● This should be high level, but sufficient to estimate the time and
resources required
○ What this approach will achieve
○ Specify major activities
○ Testing techniques
○ Testing tools / aids
○ Constraints to testing
○ Support required - environment & staffing
Test plan
● Test plan - Item Pass / Fail Criteria
○ How to judge whether a test item has passed
■ Expected vs Actual results
■ Certain % of tests pass (a minimal check is sketched after this slide)
■ Number of faults remaining (known and estimated)
■ Should be defined for each test item
● Test plan - Suspension criteria & Resumption criteria
○ Reasons that would cause testing to be suspended
○ Steps necessary to resume testing
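A minimal Python sketch of the pass-rate style of item pass/fail criterion mentioned above; the 95% threshold is purely illustrative and would be agreed per test item.

```python
# Minimal sketch of an item pass/fail check: an item passes when the share of
# passing test cases meets an agreed threshold (threshold value is illustrative).
def item_passes(results: list[bool], required_pass_rate: float = 0.95) -> bool:
    """results holds one pass/fail verdict per executed test case."""
    if not results:
        return False
    return sum(results) / len(results) >= required_pass_rate

print(item_passes([True] * 97 + [False] * 3))   # 97% pass rate -> True
print(item_passes([True] * 90 + [False] * 10))  # 90% pass rate -> False
```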
Test plan
● Test plan - Test Deliverables
○ Everything that goes to make up the tests
○ All documentation - Specification, test plans, procedures, reports
○ Data
○ Testing tools - Test management tools, automation tools, Excel, Word, etc.
○ Test system - Manual and automated test cases
● Test plan - Testing Tasks
○ Preparation to perform testing
■ Test case identification
■ Test case design
■ Test data storage
■ Baseline application
○ Special skills needed
■ Spreadsheet skills, test analysis, automation, etc.
○ Inter-Task dependencies
Test plan
● Test plan - Environment
○ Requirements for test environment
○ Hardware & software
■ PCs, servers, routers, etc.
■ SUT, interfaced applications, databases
○ Configuration
■ May be operating systems or middleware to test against
○ Facilities
■ Office space, desks, internet access
● Test plan - Responsibilities
○ Who is responsible?
○ For which activities
○ For which deliverables
○ For the environment
Test plan
● Test plan - Staffing and Training Needs
○ Staff required
■ Test managers, team leaders, testers, test analysts
○ Skill levels required
■ Automation skills, spreadsheet skills
○ Training requirements
■ Tools specific training, refresher courses
● Test plan - Schedule
○ Timescales, dates and milestones; Resources required to meet milestones
○ Availability of software and environment
○ Deliverables
Test plan
● Test plan - Risks and Contingencies
○ What might go wrong?
○ Actions for minimising the impact on testing if things go wrong
● Test plan - Approvals
○ Who has approved the test plan
○ Names and dates of approval
○ Why is it so important?
■ Evidence that the document has been viewed
■ Shows that the approach has been agreed and has the backing of those who matter
■ You have commitment, now make them stick to it
Test planning activities may include
● Determining the scope and risks, and identifying the objectives of testing
● Defining the overall approach of testing (the test strategy), including the
definition of the test levels and entry and exit criteria
● Integrating and coordinating the testing activities into the software life
cycle activities: acquisitions, supply, development, operation and
maintenance
● Making decisions about what to test, what roles will perform the test
activities, how the test activities should be done, and how the test results
will be evaluated
Test planning activities may include
● Scheduling test analysis and design activities
● Scheduling test implementation, execution and evaluation
● Assigning resources for the different activities defined
● Defining the amount, level of detail, structure and templates for the test
documentation
● Selecting metrics for monitoring and controlling test preparation and
execution, defect resolution and risk issues
● Setting the level of detail for test procedures in order to provide enough
information to support reproducible test preparation and execution
Entry Criteria
● Entry criteria define when to start testing such as at the beginning of a
test level or when a set of tests is ready for execution
● Typically entry criteria may cover the following:
○ Test environment availability and readiness
○ Test tool readiness in the test environment
○ Testable code availability
○ Test data availability
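A minimal sketch, in Python, of treating these typical entry criteria as a readiness checklist that gates the start of test execution; the flag names are illustrative.

```python
# Minimal sketch of an entry-criteria gate: execution only starts when every
# readiness flag is set (flag names and values are illustrative).
entry_criteria = {
    "test_environment_ready": True,
    "test_tools_installed": True,
    "testable_code_delivered": True,
    "test_data_loaded": False,
}

def may_start_testing(criteria: dict[str, bool]) -> bool:
    return all(criteria.values())

missing = [name for name, ok in entry_criteria.items() if not ok]
print("Start testing:", may_start_testing(entry_criteria), "- missing:", missing)
```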
Exit / Completion Criteria
● Should be used to determine whether the software is ready to be
released
● There is never enough time, money or resource to test everything so
testers must focus on business critical functions
● Some examples of test completion criteria
○ Has Key Functionality been tested
○ Has Test Coverage been achieved
○ Has the Budget been used
○ What is the Defect detection rate
○ Is Performance acceptable
○ Are any defects outstanding
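A minimal sketch of combining a few of the example completion criteria above into a release decision; the criteria chosen and the 90% coverage limit are illustrative, not prescribed.

```python
# Minimal sketch of an exit-criteria decision built from a few of the example
# completion criteria (all names and limits are illustrative).
def ready_to_release(key_functionality_tested: bool,
                     coverage_achieved: float,
                     open_critical_defects: int) -> bool:
    return (key_functionality_tested
            and coverage_achieved >= 0.90    # required test coverage reached
            and open_critical_defects == 0)  # no outstanding critical defects

print(ready_to_release(True, coverage_achieved=0.93, open_critical_defects=0))   # True
print(ready_to_release(True, coverage_achieved=0.93, open_critical_defects=2))   # False
```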
Test estimation
● Test estimation - approaches for the estimation of test effort:
○ Metric-based - estimating the testing effort based on metrics of former or similar projects,
or based on typical values (a worked example follows this slide)
○ Expert-based - estimating the tasks by the owner of these tasks or by experts
● Once the test effort is estimated, resources can be identified and a
schedule can be drawn up
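A worked illustration of the metric-based approach: effort is scaled from a former, similar project by the relative number of planned test cases. All figures are invented.

```python
# Minimal sketch of a metric-based estimate: scale recorded effort from a
# former, similar project by the size of the new test scope (figures invented).
previous_test_cases = 400
previous_effort_days = 50          # effort recorded on the former project
planned_test_cases = 600           # number of test cases planned this time

effort_per_case = previous_effort_days / previous_test_cases
estimated_effort_days = planned_test_cases * effort_per_case
print(f"Estimated test effort: {estimated_effort_days:.1f} person-days")  # 75.0
```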
Test Estimation
● The testing effort may depend on a number of factors:
○ Characteristics of the product: the test basis, the size of the product, the complexity of the
problem domain, the requirements for reliability and security, the requirements for
documentation
○ Characteristics of the development process: the stability of the organization, tools used,
test process, skills of the people involved, time pressure
○ The outcome of testing: the number of defects and the amount of rework required
Test Estimation
● The same as estimating for any project
● We need to identify the number of tasks (tests) to be performed, the
length of each test, the skills and resources required and the various
dependencies
● Testing has a high degree of dependency
○ You cannot test something until it has been delivered
○ Faults found need to be fixed and re-tested
○ the environment must be available whenever a test is to be run
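A small sketch of the delivery dependency mentioned above: a test only becomes runnable once the item it exercises has been delivered. Names and data are illustrative.

```python
# Minimal sketch of dependency-aware sequencing: a test is runnable only when
# the item it depends on has been delivered (data is illustrative).
delivered = {"login", "search"}
test_dependencies = {
    "test_login": "login",
    "test_search": "search",
    "test_checkout": "checkout",   # not yet delivered
}

runnable = [t for t, item in test_dependencies.items() if item in delivered]
blocked = [t for t, item in test_dependencies.items() if item not in delivered]
print("Runnable now:", runnable, "| blocked:", blocked)
```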
Test Approaches / Test Strategy
● A way to classify test approaches or strategies is based on the point in
time at which the bulk of the test design has begun
○ Preventive - tests are designed as early as possible
○ Reactive - test design comes after the software or system has been produced
Typical Approaches / strategies
● Analytical - such as risk-based testing
● Model-based - such as stochastic testing
● Methodical - such as failure-based, experience-based
● Process- or standard-compliant
● Dynamic and heuristic - such as exploratory testing
● Consultative
● Regression-averse