SWEN3165 Lecture 7

The document outlines various organizational structures for testing, highlighting the pros and cons of developer testing, team testing, independent testing, and third-party testing. It emphasizes the importance of configuration management in testing processes, detailing its definition and the issues arising from poor management. Additionally, it discusses the need for effective test estimation, monitoring, incident management, and adherence to testing standards to ensure quality outcomes.


SWEN3165

Test Management

Mr. Matthew Ormsby

University of the West Indies


Mona, Kingston, Jamaica
ISTQB / ISEB Foundation Exam Practice

Test Management

Organisation
Organisational structures for testing

• Developer responsibility (only)


• Development team responsibility (buddy system)
• Tester(s) on the development team
• Dedicated team of testers (not developers)
• Internal test consultants (advice, review, support, not perform the testing)
• Outside organisation (3rd party testers)
Testing by developers

Pros:
know the code best
will find problems that the testers will miss
they can find and fix faults cheaply

Cons:
difficult to destroy own work
tendency to 'see' expected results, not actual results
subjective assessment
Testing by development team

Pros:
some independence
technical depth
on friendly terms with “buddy” - less threatening

Cons:
pressure of own development work
technical view, not business view
lack of testing skill
Tester on development team

Pros:
independent view of the software
dedicated to testing, no development responsibility
part of the team, working to same goal: quality

Cons:
lack of respect
lonely, thankless task
corruptible (peer pressure)
a single view / opinion
Independent test team

Pros:
dedicated team just to do testing
specialist testing expertise
testing is more objective & more consistent

Cons:
“over the wall” syndrome
may be antagonistic / confrontational
over-reliance on testers, insufficient testing by developers
Internal test consultants

Pros:
highly specialist testing expertise, providing support and help to improve testing done by all
better planning, estimation & control from a broad view of testing in the organisation

Cons:
someone still has to do the testing
level of expertise enough?
needs good “people” skills - communication
influence, not authority
Outside organisation (3rd party)

Pros:
highly specialist testing expertise (if out-sourced to a good organisation)
independent of internal politics

Cons:
lack of company and product knowledge
expertise gained goes outside the company
expensive?
Skills needed in testing

• Technique specialists
• Automators
• Database experts
• Business skills & understanding
• Usability expert
• Test environment expert
• Test managers

Test Management

Configuration Management
Problems resulting from poor configuration management

• can’t reproduce a fault reported by a customer
• can’t roll back to a previous subsystem
• one change overwrites another
• an emergency fault fix needs testing, but the tests have been updated to the new software version
• which code changes belong to which version?
• faults which were fixed re-appear
• tests worked perfectly - on the old version
• “Shouldn’t that feature be in this version?”
A definition of Configuration Management

• “The process of identifying and defining the configuration items in a system,
• controlling the release and change of these items throughout the system life cycle,
• recording and reporting the status of configuration items and change requests,
• and verifying the completeness and correctness of configuration items.”

• ANSI/IEEE Std 729-1983, Software Engineering Terminology
Products for CM in testing

• test plans
• test designs
• test cases:
  • test input
  • test data
  • test scripts
  • expected results
• actual results
• test tools

CM is critical for controlled testing.

What would not be under configuration management? Live data!
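The point of putting all these products under CM is that a fault can be reproduced against the exact versions that were tested. A minimal sketch of that idea, assuming a simple per-build baseline mapping (all names and version strings here are illustrative, not from the lecture):

```python
# Illustrative sketch: record which versions of the test assets belong
# to which software build, so a customer-reported fault can be
# reproduced against the right configuration.

baseline_2_3_1 = {
    "test_plan": "TP-2.3",
    "test_scripts": "TS-2.3.1",
    "test_data": "TD-2.3.0",
    "expected_results": "ER-2.3.1",
}

baselines = {"2.3.1": baseline_2_3_1}

def assets_for(build, baselines):
    """Return the configuration items recorded for a given build."""
    try:
        return baselines[build]
    except KeyError:
        raise LookupError(f"no baseline recorded for build {build}")

print(assets_for("2.3.1", baselines)["test_scripts"])  # TS-2.3.1
```

Without such a mapping you hit exactly the problems listed above: tests that worked perfectly, but on the old version.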
Test estimation, monitoring and control
Estimating testing is no different

Estimating any job involves the following:
• identify tasks
• how long for each task
• who should perform the task
• when should the task start and finish
• what resources, what skills
• predictable dependencies
• task precedence (build test before running it)
• technical precedence (add & display before edit)
Estimating testing is different

Additional destabilising dependencies:
• testing is not an independent activity
• delivery schedules for testable items missed
• test environments are critical

Test Iterations (Cycles)


• testing should find faults
• faults need to be fixed
• after fixed, need to retest
• how many times does this happen?
Test cycles / iterations

In theory, one pass: Identify, Design, Build, Execute, Verify, then a single Retest.

In practice: Test, Debug, Retest, Debug, Retest, ... repeated.

3-4 iterations is typical.


Estimating iterations

• past history
• number of faults expected
• can predict from previous test effectiveness and previous faults found (in test, review, Inspection)
• % faults found in each iteration (nested faults)
• % fixed [in]correctly
• time to report faults
• time waiting for fixes
• how much in each iteration?
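The factors above can be combined into a rough iteration model. A sketch, with assumed numbers (the find rate, bad-fix rate, and stopping threshold are illustrative parameters, not figures from the lecture):

```python
# Illustrative model: each iteration finds some percentage of the
# remaining faults, and a fraction of the fixes are incorrect and
# re-inject faults. Count iterations until few enough faults remain.

def iterations_needed(expected_faults, find_rate=0.5, bad_fix_rate=0.1,
                      stop_below=5):
    """Iterations until remaining faults drop below the threshold."""
    remaining = expected_faults
    iterations = 0
    while remaining >= stop_below:
        found = remaining * find_rate        # faults found this cycle
        reinjected = found * bad_fix_rate    # fixes that fail retest
        remaining = remaining - found + reinjected
        iterations += 1
    return iterations

print(iterations_needed(100))
```

With these particular assumptions the model predicts more cycles than the "3-4 is typical" rule of thumb; the value of the model is in plugging in your own past-history numbers.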
Time to report faults

The more fault reports you write, the less testing you will be able to do.

[Diagram: tester time split between test execution and fault analysis & reporting]

Mike Royce’s suspension criterion: suspend test execution when testers spend more than 25% of their time on fault analysis and reporting.
Measuring test execution progress 1

[S-curve chart: cumulative tests planned, tests run, and tests passed over time, up to the release date. The run and passed curves lag behind planned - what would you do?]
Diverging S-curve

Possible causes:
• poor test entry criteria
• ran easy tests first
• insufficient debug effort
• common faults affect all tests
• software quality very poor

Potential control actions:
• tighten entry criteria
• cancel project
• do more debugging
• stop testing until faults fixed
• continue testing to scope poor software quality

Note: solutions / actions will impact other things as well, e.g. schedules
Measuring test execution progress 2
[Chart: cumulative tests planned, run, and passed over time; a control action is taken and the release date slips from the old date to a new date.]
Measuring test execution progress 3
[Chart: cumulative tests planned, run, and passed over time, again showing the point where the control action is taken and the move from the old release date to the new release date.]
Case history

[Chart: opened Incident Reports (IRs) vs. closed IRs over time, 04-Jun to 09-Feb, y-axis 0-200.]

Source: Tim Trew, Philips, June 1999


Incident management
Incident management

Incident: any event that occurs during testing or in production that requires
subsequent investigation or correction.
• actual results do not match expected results
• possible causes:
• software fault
• test was not performed correctly
• expected results incorrect

• can be raised for documentation as well as code


Incidents

• May be used to monitor and improve testing


• Should be logged (after hand-over)
• Should be tracked through stages, e.g.:
• initial recording
• analysis (s/w fault, test fault, enhancement, etc.)
• assignment to fix (if fault)
• fixed not tested
• fixed and tested OK
• closed
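The tracking stages above can be viewed as a small state machine. A sketch, where the set of allowed transitions is my assumption for illustration (the lecture only lists the stages):

```python
# Minimal state machine over the incident stages listed above.
# The permitted transitions are illustrative assumptions.

RECORDED, ANALYSED, ASSIGNED, FIXED_NOT_TESTED, FIXED_TESTED, CLOSED = range(6)

TRANSITIONS = {
    RECORDED: {ANALYSED},
    ANALYSED: {ASSIGNED, CLOSED},        # e.g. test fault: close directly
    ASSIGNED: {FIXED_NOT_TESTED},
    FIXED_NOT_TESTED: {FIXED_TESTED, ASSIGNED},  # retest may fail
    FIXED_TESTED: {CLOSED},
    CLOSED: set(),
}

def advance(state, new_state):
    """Move the incident to a new stage, rejecting illegal jumps."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = RECORDED
for nxt in (ANALYSED, ASSIGNED, FIXED_NOT_TESTED, FIXED_TESTED, CLOSED):
    state = advance(state, nxt)
print(state == CLOSED)  # True
```

Enforcing transitions like this is what makes incident data usable for the metrics on the next slide: you cannot count "fixed and tested OK" if incidents jump straight to closed.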
Use of incident metrics
[Incident metric charts, annotated with the questions they help answer: “We’re better than last year”; “Is this testing approach ‘wearing out’?”; “What happened in that week?”; “How many faults can we expect?”]
What information about incidents?

• Test ID
• Test environment
• Software under test ID
• Actual & expected results
• Severity, scope, priority
• Name of tester
• Any other relevant information (e.g. how to reproduce it)
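The fields above map naturally onto a simple record type. A sketch, assuming illustrative field names and values (this is not a standard incident-report schema):

```python
# Illustrative record for the incident information listed above.

from dataclasses import dataclass

@dataclass
class IncidentReport:
    test_id: str
    test_environment: str
    software_under_test_id: str
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    tester: str
    notes: str = ""  # any other relevant info, e.g. how to reproduce it

ir = IncidentReport(
    test_id="T-042",
    test_environment="staging",
    software_under_test_id="2.3.1",
    expected_result="login succeeds",
    actual_result="HTTP 500",
    severity="major",
    priority="high",
    tester="A. Tester",
    notes="reproduce with an empty password field",
)
print(ir.severity)  # major
```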
Severity versus priority

• Severity
  • impact of a failure caused by this fault
• Priority
  • urgency to fix the fault

• Examples
  • a minor cosmetic typo in the company name, seen by a board member: high priority, but not severe
  • a crash when an experimental feature that is not yet needed is used: severe, but not high priority
Incident Lifecycle
Tester tasks:
1. steps to reproduce the fault
2. test fault or system fault?
3. external factors that influence the symptoms
7. is the fault fixed?

Developer tasks:
4. root cause of the problem
5. how to repair (without introducing new problems)
6. changes debugged and properly component tested

Source: Rex Black, “Managing the Testing Process”, MS Press, 1999
Standards for testing
Standards for testing

• QA standards (e.g. ISO 9000)
  • testing should be performed
• industry-specific standards (e.g. railway, pharmaceutical, medical)
  • what level of testing should be performed
• testing standards (e.g. BS 7925-1 & 2)
  • how to perform testing
Key Points
• Independence can be achieved by different organisational structures

• Configuration Management is critical for testing

• Tests must be estimated, monitored and controlled

• Incidents need to be managed

• Standards for testing: quality, industry, testing
