Chapter 5 Test Documentation and Management
Software documentation
1. Testing Documentation
Cont’d …
PRD (Product Requirement Document)
What: a set of software requirements;
Who: Product Marketing, Sales, Technical Support;
When: planning stage;
Why: we need to know what the product is supposed to do.
Quality Assurance (QA) role:
Participate in reviews;
Analyze for completeness;
Spot ambiguities;
Highlight contradictions;
Provide feedback on features/usability.
Cont’d …
FS (Functional Specification)
What: software design document;
Who: Engineering, Architects;
When: (planning)/design/(coding) stage(s);
Why: we need to know how the product will be designed;
Quality Assurance (QA) roles:
Participate in reviews;
Analyze for completeness;
Spot ambiguities;
Highlight contradictions.
Cont’d …
Test Plan
What: a document describing the scope, approach, resources, and schedule of intended testing activities; it identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning;
Who: QA;
When: (planning)/design/coding/testing stage(s);
Cont’d …
Test Strategy
What: a high-level document describing the overall approach to testing, the test levels to be covered, and the techniques and tools to be used;
Who: QA;
When: (planning)/design/coding/testing stage(s);
Cont’d …
Test Plan (cont'd)
Why:
Divide responsibilities between the teams involved; if more than one QA team is involved (e.g., manual/automation, or English/localization), divide responsibilities between the QA teams;
Plan for test resources/timelines;
Plan for test coverage;
Plan for OS/DB/software deployment and configuration models coverage.
QA role:
Create and maintain the document;
Analyze for completeness;
Have it reviewed and signed by project team leads/managers.
Cont’d …
Test Case
What: a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as exercising a particular program path or verifying compliance with a specific requirement;
Who: QA;
When: (planning)/(design)/coding/testing stage(s);
Why:
Plan test effort/resources/timelines;
Plan/review test coverage;
Track test execution progress;
Track defects;
Track software quality criteria/quality metrics;
Unify pass/fail criteria across all testers;
Planned/systematic testing vs. ad hoc testing.
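As an illustration, here is a minimal sketch of a written test case expressed as a JUnit 4 test; the test-case elements appear as comments, and Integer.parseInt stands in for the application under test (all names are illustrative assumptions):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ParseIntTestCase {

        // Title:         Parse a valid positive integer
        // Objective:     verify that Integer.parseInt returns the numeric
        //                value for a well-formed decimal string
        // Preconditions: none (pure library call)
        // Input:         the string "42"
        // Expected:      the int value 42
        @Test
        public void parseValidPositiveInteger() {
            assertEquals(42, Integer.parseInt("42"));
        }
    }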
Cont’d …
Test Case (cont'd)
Optional elements of a test case:
Title – a verbal description indicative of the test case objective;
Goal/objective – the primary verification point of the test case;
Project/application ID/title – for TC classification/better tracking;
Functional area – for better TC tracking;
Bug numbers for failed test cases – for better error/failure tracking (ISO 9000);
Positive/negative class – for test execution planning;
Manual/automatable/automated parameter, etc. – for planning purposes;
Test environment.
Cont’d …
Test Case (cont’d)
Inputs:
Through the UI;
From interfacing systems or devices;
Files;
Databases;
State;
Environment.
Outputs:
To the UI;
To interfacing systems or devices;
Files;
Databases;
State;
Response time.
Cont’d …
Test Case (cont’d)
Format – follow company standards; if there are no standards, choose the one that works best for you:
MS Word document;
MS Excel document;
Memo-like paragraphs (MS Word, Notepad, Wordpad).
Classes:
Positive and negative;
Functional, non-functional, and UI;
Implicit verifications and explicit verifications;
Systematic testing and ad hoc.
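To make the positive/negative distinction concrete, here is a minimal sketch with one test of each class, again using Integer.parseInt as a stand-in application under test (JUnit 4 assumed):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ParseIntClassesTest {

        // Positive class: legal input, expect the normal result.
        @Test
        public void positiveLegalInput() {
            assertEquals(-7, Integer.parseInt("-7"));
        }

        // Negative class: illegal input, expect the documented failure mode.
        @Test(expected = NumberFormatException.class)
        public void negativeIllegalInput() {
            Integer.parseInt("4x");
        }
    }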
Cont’d …
Test Suite
A document specifying a sequence of actions for the execution of multiple test cases;
Purpose: to put the test cases into an executable order, although individual test cases may have an internal set of steps or procedures;
A test suite is typically manual; if automated, it is typically referred to as a test script (though manual procedures can also be a type of script);
Multiple test suites need to be organized into some sequence – this defines the order in which the test cases or scripts are to be run, what the timing considerations are, who should run them, etc.
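For the automated case, here is a minimal sketch of a suite using JUnit 4's Suite runner, which executes the listed test classes in the order given (the class names refer to the earlier sketches and are illustrative):

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // The annotations define the suite; the class body stays empty.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        ParseIntTestCase.class,
        ParseIntClassesTest.class
    })
    public class SmokeTestSuite {
    }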
Cont’d …
Traceability matrix
What: a document tracking each software feature from the PRD to the FS to the test documents (test cases, test suites);
Who: Engineers, QA;
When: (design)/coding/testing stage(s);
Why: we need to make sure each requirement is covered in the FS and test cases;
QA role:
Analyze for completeness;
Make sure each feature is represented;
Highlight gaps.
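A minimal illustrative matrix (the requirement, FS, and test case IDs are hypothetical):

    Requirement   FS section   Test cases        Status
    PRD-012       FS 4.2       TC-031, TC-032    covered
    PRD-013       FS 4.3       (none)            gap – flag to the team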
Cont’d …
Testing Strategies
– Depend on the development model.
– Incremental: modules are tested as they are developed; each piece is tested separately. Once all elements are tested, integration/system testing can be performed.
– Requires additional code to be written, but makes it easy to identify the source of an error.
– Big Bang: testing is performed on the fully integrated system; everything is tested with everything else.
– No extra code is needed, but errors are hard to find.
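A minimal sketch of the "additional code" an incremental strategy needs: a hand-written stub standing in for a module that does not exist yet, plus a small driver (all interface and class names are hypothetical):

    // Interface of a module that has not been built/integrated yet.
    interface RateService {
        double currentRate(String currency);
    }

    // Stub: returns a fixed, predictable value so the module under test
    // can be exercised before the real RateService exists.
    class RateServiceStub implements RateService {
        @Override
        public double currentRate(String currency) {
            return 1.25;
        }
    }

    // Module under test, exercised incrementally against the stub.
    class Converter {
        private final RateService rates;
        Converter(RateService rates) { this.rates = rates; }
        double toLocal(double amount, String currency) {
            return amount * rates.currentRate(currency);
        }
    }

    // Driver: the extra code that runs the unit in isolation.
    class ConverterDriver {
        public static void main(String[] args) {
            Converter c = new Converter(new RateServiceStub());
            System.out.println(c.toLocal(10.0, "USD")); // expect 12.5
        }
    }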
Cont’d …
Risk Analysis:
– What: the process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).
Cont’d …
Risk Analysis (cont'd):
Who: PM, Tech Support, Sales, Engineers, QA;
When: (design)/coding/testing stage(s);
Why:
It helps us choose the best test techniques;
It helps us define the extent of testing to be carried out – the higher the risk, the more focus it is given;
It allows for the prioritization of the testing – attempt to find the critical defects as early as possible;
Are there any non-testing activities that can be employed to reduce risk? E.g., provide training to inexperienced personnel.
Cont’d …
Example risk assessment (risk = likelihood × impact):

    #   Feature                  Likelihood   Impact   Risk
    2.  Create database record        5          7       35
    3.  Modify database record        3          6       18
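A minimal sketch of the calculation behind the table, assuming risk priority = likelihood × impact on illustrative 1–10 scales:

    public class RiskScore {

        // Risk priority as the product of likelihood and impact.
        static int risk(int likelihood, int impact) {
            return likelihood * impact;
        }

        public static void main(String[] args) {
            System.out.println("Create database record: " + risk(5, 7)); // 35
            System.out.println("Modify database record: " + risk(3, 6)); // 18
        }
    }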
Typical Steps
1. Define "units" vs non-units for testing
2. Determine what types of testing will be performed
3. Determine the extent of testing
4. Document
5. Determine input sources
6. Decide who will test
7. Estimate resources
8. Identify metrics to be collected
1. Unit vs non-unit tests
What constitutes a "unit" is defined by the development team
Include or don't include packages?
Common sequence of unit testing in OO design:
Test the methods of each class
Test the classes of each package
Test the package as a whole
Test the basic units first before testing the things that rely on them (a sketch of the first step follows below)
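A minimal sketch of the first step (testing the methods of a class), using the JDK's ArrayDeque as a stand-in unit under test (JUnit 4 assumed):

    import java.util.ArrayDeque;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class StackUnitTest {

        // Method-level unit test: verify push/pop LIFO behavior
        // before any class- or package-level testing.
        @Test
        public void pushThenPopReturnsLastPushed() {
            ArrayDeque<String> stack = new ArrayDeque<>();
            stack.push("a");
            stack.push("b");
            assertEquals("b", stack.pop());
        }
    }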
2. Determine type of testing
Interface testing
Validates functions exposed by modules
Integration testing
Validates combinations of modules
System testing
Validates the whole application
Usability testing
Validates user satisfaction
2. Determine type of testing (cont'd)
Regression testing
Validates that changes did not create defects in existing code
Acceptance testing
Customer agreement that the contract is satisfied
Installation testing
Validates that the product works as specified once installed on the required platform
Robustness testing
Validates the ability to handle anomalies
Performance testing
Validates that the product is fast enough and uses an acceptable amount of memory
3. Determine the extent
Impossible to test for every situation
Do not just "test until time expires"
Prioritize, so that important tests are definitely performed
Consider legal data, boundary data, and illegal data
More thoroughly test sensitive methods (e.g., withdraw/deposit in a banking app)
Establish stopping criteria in advance: concrete conditions upon which testing stops
Stopping conditions
When the tester has not been able to find another defect in 5 (10? 30? 100?) minutes of testing
When all nominal, boundary, and out-of-bounds test examples show no defect
When a given checklist of test types has been completed
After completing a series of targeted coverage (e.g., branch coverage for unit testing)
When testing runs out of its scheduled time
4. Decide on test documentation
Documentation consists of test procedures, input data, the code that executes the test, output data, known issues that cannot be fixed yet, and efficiency data
Test drivers and utilities are used to execute unit tests and must be documented for future use
JUnit is a widely used test framework that helps developers retain test documentation (a sketch follows below)
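A minimal sketch of how a JUnit test retains its own documentation: the fixture records the input data, and descriptive names record the procedure and expected outcome (StringBuilder is used only as a stand-in unit under test; JUnit 4 assumed):

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class StringBuilderDocumentedTest {

        private StringBuilder builder; // input data / test fixture

        // Precondition, kept executable instead of in a separate document.
        @Before
        public void givenAnEmptyBuilder() {
            builder = new StringBuilder();
        }

        // Procedure and expected outcome are readable from the name.
        @Test
        public void appendingTextMakesTheBuilderNonEmpty() {
            builder.append("report");
            assertTrue(builder.length() > 0);
        }
    }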
Documentation questions
Include an individual's personal document set?
How/when to incorporate all types of testing?
How/when to incorporate testing in formal documents?
How/when to use tools/test utilities?
5. Determine input sources
Applications are developed to solve problems in a specific area
There may be test data specific to the application
E.g., standard test stock market data for a brokerage application
Output from previous versions of the application
Need to plan how to get and use such domain-specific test input
6. Decide who will test
Is the individual engineer responsible for some tests (units)?
Testing beyond the unit is usually planned/performed by people other than the coders
Unit-level tests are made available for inspection/incorporation in higher-level tests
How/when is testing inspected by QA? (typically black-box testing only)
How/when is testing designed and performed by third parties?
7. Estimate the resources
Unit testing is often bundled with the development process (not its own budget item)
A good process respects that the reliability of units is essential and provides time for developers to develop reliable units
Other testing is either part of the project budget or QA's budget
Use historical data, if available, to estimate the resources needed
8. Identify & track metrics
Must specify the form in which developers record defect counts, defect types, and time spent on testing
The resulting data are used:
To assess the state of the application
To forecast eventual quality and completion date
As historical data for future projects
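A minimal sketch of a per-test metrics record of the kind described here; the field names and values are illustrative assumptions, not a standard schema:

    public class TestMetrics {

        final String testId;
        final int defectsFound;
        final String defectType; // e.g., "logic", "interface", "data"
        final int minutesSpent;

        TestMetrics(String testId, int defectsFound,
                    String defectType, int minutesSpent) {
            this.testId = testId;
            this.defectsFound = defectsFound;
            this.defectType = defectType;
            this.minutesSpent = minutesSpent;
        }

        public static void main(String[] args) {
            TestMetrics m = new TestMetrics("TC-031", 2, "logic", 45);
            System.out.println(m.testId + ": " + m.defectsFound
                    + " defects, " + m.minutesSpent + " min");
        }
    }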
Activity 5 [Page limit: 5 pages; due date: 15 May 2023]
1. Identify the software requirements of an application of your choice
2. Create a test plan to verify and validate the above software requirements
3. Design test cases for testing the requirements
4. Create test suites
5. Create and execute the tests using an open-source testing tool like TestLink or test detector
6. Write the testing report
7. Create a traceability matrix
Note: Use standard test plan and test case templates, and if your entire report exceeds the page limit, you can add graphics, references, and tables as an appendix. Moreover, use Times New Roman font style, font size 12 pt, and justified alignment. Your report should be submitted in both hard and soft copy, and you will also present it to your class.