Test Specification
Issue 1
TABLE OF CONTENTS

0 PREFACE
0.1 Purpose of this document
0.2 Use of this document
0.3 Overview
0.4 Testing within IDA
1 INTRODUCTION
1.1 Purpose
1.2 Scope of testing
1.3 Definitions, Acronyms and Abbreviations
1.4 References
1.5 Overview
2 TEST PLANNING
2.1 Test items
2.2 Features to be tested
2.3 Features not to be tested
2.4 Approach
2.5 Item pass/fail criteria
2.6 Suspension criteria and resumption requirements
2.7 Test deliverables
2.8 Testing tasks
2.9 Environmental needs
2.10 Responsibilities
2.11 Staffing and training needs
2.12 Schedule
2.13 Risks and contingencies
3 TEST DESIGNS
3.1 Test Design Identifier
4 TEST CASE SPECIFICATION
4.1 Test Case identifier
5 TEST PROCEDURES
5.1 Test Procedure identifier
6 TEST REPORTS
6.1 Test Report identifier
DOCUMENT CONTROL
DOCUMENT SIGNOFF
DOCUMENT CHANGE RECORD
0 PREFACE
#1 This section should provide an overview of the entire document and a description of
the scope of the system. Specific attention should be paid to the nature of the system
from the IDA viewpoint: in effect, what is the mission for the system? For example,
this section should address questions such as:
a. is this a system designed to collect information from different Member
States that will go to Brussels for use in an operational sense, such as a system
which may track goods across Europe? or
b. is it a system that collects statistics and moves them in a common format to
Luxembourg, to Eurostat?
The way the system is to be tested will depend upon the mission defined for the
system.
#2 In describing the scope of the system it is important to recognise at this point the
implications of how distributed the system is in operation, as this will have a major
impact upon the testing that is to be defined in this document. An IDA system that
collects a specific statistic monthly is not likely to be tested in the same way as a
system providing daily updates of the movement of animals across the borders of the
European Union. Security risks will also need to be considered.[1]
#3 One facet that should concern the author of the Test Specification is the degree to
which the system that is being described is a federated, distributed or point-to-point
architecture. Federated systems are essentially those which act as equals in providing
and consuming information to and from the architecture. Distributed systems may
have a master node, perhaps based on a star configuration, that is the single point of
failure. Point-to-point systems can operate in a closely coupled relationship where
bespoke protocols enable them to communicate with one another as their application
software is activated.
#4 Another vital point is the degree to which the system has to be resilient to failure.
Some systems may be required to operate in support of law enforcement agencies on
a 24 by 7 basis. Others can withstand downtime of several days before creating
difficulties for their users. This aspect of the characterisation of the system is another
important facet that needs to be addressed in this part of the document in order that
the testing required can be planned, designed, specified and reported on correctly.
1 INTRODUCTION

1.1 PURPOSE
#1 This section should summarise the system features to be tested. The scale of the
testing will depend upon many characteristics of the system. For example, in a
simple thin-client configuration the testing at the front end of the system might be
quite simple and straightforward. However, the system may need to be subjected to
detailed load tests to ensure that the server component can withstand the load placed
on it.
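As an illustration, a load test of this kind can be driven by a small bespoke harness in which a number of simulated users issue requests to the server in parallel and failures are counted. The following is a minimal sketch in Java; the endpoint URL, the number of users and the request count are illustrative assumptions, not values prescribed by this specification.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal load-test sketch: N parallel simulated users each send
    // M GET requests to the server under test; failures are counted.
    // The endpoint URL and the parameters are illustrative assumptions.
    public class LoadSketch {
        public static void main(String[] args) throws Exception {
            final int users = 50, requestsPerUser = 20;
            final HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5)).build();
            final HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://test-server.example/status")).GET().build();
            final AtomicInteger failures = new AtomicInteger();
            ExecutorService pool = Executors.newFixedThreadPool(users);
            for (int u = 0; u < users; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerUser; i++) {
                        try {
                            HttpResponse<Void> r = client.send(request,
                                    HttpResponse.BodyHandlers.discarding());
                            if (r.statusCode() != 200) failures.incrementAndGet();
                        } catch (Exception e) {
                            failures.incrementAndGet();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.out.println("Failed requests: " + failures.get()
                    + " of " + (users * requestsPerUser));
        }
    }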
#2 Testing may also need to vary depending upon the implementation language used in
the system's development. For example, where Java is used it could be important to
undertake specific tests on the run-time environment, or Java Virtual Machine
(JVM), into which the Java software will be loaded, compiled and executed. Some
JVMs have a self-optimisation capability that enables the software to learn from the
first time it is run and be faster on subsequent invocations; it is reported that a second
run of the executable software can be up to 25% faster. In any time-sensitive system
this could prove to be important.
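This effect can be observed with a simple timing harness that runs an identical workload several times within one JVM invocation and reports the elapsed time of each run. A minimal sketch follows; the workload itself is illustrative.

    // Minimal sketch of a JVM warm-up check: the same workload is timed on
    // consecutive runs so that any speed-up from run-time optimisation
    // (just-in-time compilation) becomes visible.
    public class WarmUpSketch {
        static long workload() {
            long sum = 0;
            for (int i = 0; i < 10_000_000; i++) sum += (long) i * i % 7;
            return sum;
        }
        public static void main(String[] args) {
            for (int run = 1; run <= 5; run++) {
                long start = System.nanoTime();
                long result = workload();
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println("Run " + run + ": " + ms + " ms (result " + result + ")");
            }
        }
    }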
#3 Security might be an issue for some IDA projects. Whilst statistics may be of little
concern to criminals operating across Europe and on the international stage,
information on the movement of goods may well attract attention, for example
attempts to delete the information on a particular shipment because it contains
illegal substances. For these types of systems there should be a greater emphasis on
security and on formal penetration testing looking for weaknesses in the system. This
is an aspect of testing that may require specialist assistance and should be considered
early in the project planning stages.
#4 If the system simply supports the transmission, say once a week, of a simple
spreadsheet full of statistics to Luxembourg, and this can be done via email over a
weekend, then the scale of testing required will be considerably less. Even so, the
scope of testing may require, for example, tests to be carried out over several weeks
to ensure the system is reliable and operates as required.
1.4 REFERENCES
#1 This section should provide a complete list of all the applicable and reference
documents, identified by title, author and date. Each document should be marked as
applicable or reference. If appropriate, report number, journal name and publishing
organisation should be included.
1.5 OVERVIEW
#1 Section 1 is the introduction and includes a description of the project and the
applicable and reference documents.
#2 Section 2 describes the test planning.
#3 Section 3 contains the test designs.
#4 Section 4 describes the test cases.
#5 Section 5 contains the test procedures.
#6 Section 6 describes the test reports.
2 TEST PLANNING
#1 Testing needs to be planned. The Review and Test Management Plan document
describes the global approach to testing, but this section will provide detail for the
tests themselves.
#2 In planning tests a wide range of factors has to be taken into consideration. These
may include:
- Performance requirements that emerge from the system specification, for
example where response times or latency (delays) in the system delivering
messages matter.
- Investigations of the system under low-loading conditions, such as when a single
user is logged onto the system, and, if appropriate, where larger numbers of users
are using it in parallel.
- The architecture of the system and any security implications that may arise or
need to be addressed. For example, a two-tier thin-client architecture will require
a different range of testing from a two-tier fat-client system or a three-tier
architecture. Moreover, security testing should address firewalls, operating
system vulnerabilities and system administration controls.
- The distributed nature of the system and how test data might be made available
to simulate 'live' data being passed over the network.
2.1 TEST ITEMS

#1 This section should identify the test items, such as test data, test rigs (hardware &
software), test harnesses and any reporting tools looking at the results of the testing.[2]
#2 Particular consideration should be given to the need to carry out tests on the Security
Model that underpins access controls to the system if they are required. This may
require obtaining external assistance to devise what are often referred to as
Penetration Tests – in effect ways of trying to enter the system illegally. This may
require obtaining specialist software tools to enable testing of passwords and system
administration and firewall controls.
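By way of illustration only, one of the simplest penetration-test primitives is a probe for open TCP ports on a host. A minimal Java sketch is given below; the target host name and port range are hypothetical, and such probes must only be run against systems the project is authorised to test.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Minimal port-probe sketch: attempts a TCP connection to each port in
    // a range and reports those that accept. Host and range are illustrative.
    public class PortProbeSketch {
        public static void main(String[] args) {
            String host = "test-server.example";   // hypothetical target
            for (int port = 1; port <= 1024; port++) {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 200);
                    System.out.println("Port " + port + " is open");
                } catch (IOException expected) {
                    // closed or filtered port: nothing to report
                }
            }
        }
    }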
#3 References to other software documents, e.g. the Technical Design document, should
be supplied to provide information about what the test items are supposed to do, how
they work, and how they are operated. It is recommended that test items should be
grouped according to release number if the delivery is to be incremental.
2.2 FEATURES TO BE TESTED

#1 This section should identify all the features and combinations of features that are to
be tested. This may be done by referencing sections of the requirements or technical
design documents.
[2] This is important, as it may be necessary to plan ahead in this area to allow time to
develop some bespoke software to support detailed testing. It may be difficult to move
the system directly from development into the live environment without what is in effect
some factory acceptance testing rather than in-situ acceptance testing.
#2 References should be precise yet economical, e.g.:
- 'the acceptance tests will cover all requirements in the User Requirement
Document except those identified in Section 2.4';
- 'the unit tests will cover all modules specified in the Technical Design Document
except those modules listed in Section 2.4'.
#3 Features should be grouped according to release number if delivery is to be
incremental.
2.3 FEATURES NOT TO BE TESTED

#1 This section should identify all the features and significant combinations of features
that are not to be tested and explain why. If some features cannot be tested at their
most appropriate level of testing but will be tested at a later level, this information
should be included here. An example is where volume testing cannot be carried out
as part of System testing, but will be carried out as part of the Acceptance testing, as
only the User's system will have sufficient data for the test.
2.4 APPROACH
#1 This section should specify the major activities, methods (e.g. structured testing) and
tools that are to be used to test the designated groups of features.
#2 For example, this section will have to address whether or not a test environment
needs to be created to simulate a heavy loading on a central server when it might be
under high utilisation. This may require purchasing software that would simulate the
client-side interactions[3] of a user, or developing such software internally within the
project.
#3 Similarly, the approach taken to testing the security features within the system should
also be considered. Experts acknowledge that people intent on attacking a system will
often try to find its weakest link; this may be a badly configured firewall, the
underlying operating system, a lack of authentication of users or the controls for
accessing the database.
#4 Activities should be described in sufficient detail to allow identification of the major
testing tasks and estimation of the resources and time needed for the tests. The
coverage required should be specified.

[3] These are often expensive pieces of software that require specialised skills to set them up. It
is recommended that projects consider developing their own relatively simple test harness, but that
they keep in touch with any Horizontal Actions & Measures (HAM) that might see a united approach
being developed to creating test harnesses.
2.5 ITEM PASS/FAIL CRITERIA

#1 This section should specify the criteria to be used to decide whether each test item
has passed or failed testing. Critical functions that must work in the system should be
identified. If any of these fail, the system testing must be halted.
#2 It would be useful to introduce a capability to grade failures, and then use the number
of failures in each category to decide whether the system is acceptable. These
categories and criteria should be consistent with any agreed contractual
specifications and other higher-level documents such as the Review and Test
Management Plan. For example:
- Serious system failures would be deemed 'A' failures. These might also include
the inability of the system to fail in a controlled way, perhaps requiring a re-boot
if a communications line failed. Security problems or failure under load testing
might also be considered category 'A' failures. A defined number of 'A' failures
would halt testing.
- 'B' failures would be failures of the system to provide the functionality requested
in the specification.
- Category 'C' failures would be those which showed inconsistencies in the
operation of the system in areas such as the GUI displays.
- Category 'D' failures might be areas where minor problems exist with the
system.
- Category 'E' failures would be ones where the specification might seem to be at
variance with the requirements specification.
#3 It may be that the pass/fail criteria would use a set number of these failures. An
example might be:
- 5 category 'A' failures
- 10 category 'B' failures
- 20 category 'C' failures
- 40 category 'D' failures
- 50 category 'E' failures
#4 This might be an acceptance profile that would allow a system to be partially
accepted through a first round of testing. This section should address the implications
of load tests etc. If the system has to have security features then this section should
also address what constitutes failure in this area.
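To illustrate how such a profile might be applied mechanically, the sketch below counts observed failures per category and compares them with the agreed thresholds. The category names and threshold values mirror the illustrative example above; they are not prescribed values.

    import java.util.EnumMap;
    import java.util.Map;

    // Minimal sketch of the grading idea: failures observed during a test
    // run are counted per category and compared with agreed thresholds.
    public class PassFailSketch {
        enum Category { A, B, C, D, E }

        // Thresholds mirror the illustrative profile in the text.
        static final Map<Category, Integer> THRESHOLDS = new EnumMap<>(Map.of(
                Category.A, 5, Category.B, 10, Category.C, 20,
                Category.D, 40, Category.E, 50));

        static boolean acceptable(Map<Category, Integer> observed) {
            for (Category c : Category.values()) {
                if (observed.getOrDefault(c, 0) > THRESHOLDS.get(c)) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            Map<Category, Integer> observed = new EnumMap<>(Map.of(
                    Category.A, 2, Category.B, 11, Category.C, 3));
            System.out.println("System acceptable: " + acceptable(observed));
        }
    }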
2.6 SUSPENSION CRITERIA AND RESUMPTION REQUIREMENTS

#1 This section should specify the criteria used to suspend all, or a part of, the testing
activities on the test items associated with the plan. The typical profile outlined in the
previous section could be used to decide if the system is acceptable or not. This
section should also specify the testing activities that must be repeated when testing is
resumed.
2.7 TEST DELIVERABLES

#1 This section should identify the items that must be delivered before testing begins,
which should include:
- test plan;
- test designs;
- test cases;
- test procedures;
- test input data & test environment;
- test tools, both proprietary and test reference tools.
#2 The use of tools from other IDA and OSN projects should be considered here,
including Horizontal Actions and Measures and similar projects in other sectors; for
instance, Test Reference Tools that simulate the other end of an information
exchange in a specified manner.
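As an illustration of such a tool, the sketch below is a minimal stub that plays the part of the remote end of an exchange by acknowledging each message it receives. The port number and the simple line-based 'ACK' protocol are illustrative assumptions.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal test reference tool sketch: a stub that simulates the other
    // end of an information exchange by acknowledging each line received.
    public class StubPartnerSketch {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9090)) {
                System.out.println("Stub partner listening on port 9090");
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        String line;
                        while ((line = in.readLine()) != null) {
                            out.println("ACK " + line);  // canned reply per message
                        }
                    }
                }
            }
        }
    }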
#3 This section should identify the items that must be delivered when testing is finished,
which should include:
- test reports;
- test output data;
- problem reports.
2.8 TESTING TASKS

#1 This section should identify the set of tasks necessary to prepare for and perform
testing. This will include obtaining test data, setting up any communications links
and preparing any systems that will be linked up for the testing activities. It will
have to take account of the approach being taken to building the system, such as
waterfall. This section should also identify all inter-task dependencies and any
special skills required. It is recommended that testing tasks should be grouped
according to release number if delivery is to be incremental.
2.9 ENVIRONMENTAL NEEDS

#1 The test environment will have to reflect the approach being taken to building the
system. The environment will be different depending upon whether the development
is being carried out under the waterfall, 'V' model or RAD approach. This section
should specify both the necessary and desired properties of the test environment,
including:
- physical characteristics of the facilities, including hardware;
- communications software;
- system software, such as the operating system;
- browser & server software, where appropriate;
- any special software, such as a Java Virtual Machine (JVM);
- mode of use (i.e. standalone, networked);
- security software, such as PKI;
- test tools;
- geographic distribution, where appropriate.
#2 Environmental needs should be grouped according to release number if delivery is to
be incremental.
2.10 RESPONSIBILITIES
#1 This section should identify the groups responsible for managing, designing,
preparing, executing, witnessing, and checking tests. Groups may include developers,
operations staff, user representatives, technical support staff, data administration
staff, independent verification and validation personnel and quality assurance staff.
#2 The involvement of different "players" should be considered:
- Commission Services and EU Agencies, including the output of related
programmes;
- Member State Administrations;
- Contractors.
2.12 SCHEDULE
#1 This section should include test milestones identified in the software project schedule
and all item delivery events, for example:
- delivery of the test harness and any test data required to create the test
environment;
- the programmer delivers software for integration testing;
- the developers deliver the system for independent verification.
#2 This section should specify:
- any additional test milestones, stating the time required for each testing task;
- the schedule for each testing task and test milestone;
- the period of use for all test resources (e.g. facilities, tools, staff).
#3 This section should also address the implications of failing to meet the schedule as
resources may have been redeployed or vital test equipment may no longer be
available. Contingencies should also be planned into the timetable where it is
appropriate.
2.13 RISKS AND CONTINGENCIES
#1 This section should identify the high-risk assumptions of the test plan. It should
specify contingency plans for each. Risks may include technological, geographical or
political issues. This section will also describe what should take place if one of the
risks occurs; in other words what is the contingency and what impact would it have
on the testing etc.?
3 TEST DESIGNS
#1 A test design shows how a requirement or subsystem is to be tested. Each design will
result in one or more test cases. The 'design' of a test is a vital aspect of getting tests
set out correctly to verify the system.[4]
#2 A particular input to test design will be the approach taken to the development of the
system, cf. waterfall, the so-called 'V' approach, RAD or any other accepted
approach. In a classic waterfall development the test designs will be different from
those involved in a RAD development. System test designs should specify the test
approach for each requirement in the System/Software Requirement document.
Where the 'V' approach is used it is important to calibrate the test designs at each
stage in the process. System Acceptance test designs should specify the test approach
for each requirement in the User Requirement document.
3.1 TEST DESIGN IDENTIFIER

#1 The title of this section should specify the test design uniquely. The content of this
section should briefly describe the test design and its intended purpose. Key inputs
should also be identified, as should external dependencies.
[4] One facet of this would be to look at areas of the testing where a specific aspect of the
overall performance needs to be tested. In client-server systems, solutions may be based upon a
range of technologies, such as Java. Test designs may have to reflect this by also addressing the
performance of any Java Virtual Machine, as this may affect the response time of the system from
the user's perspective. Moreover, other areas of the system may require equal testing to ensure that
no unnecessary overheads are being introduced that might reduce the achievable performance of
the system.
3.1.2 Approach refinements
#1 This section should describe the results of the application of the methods described in
the approach section of the test plan. It will need to take account of whether or not
the testing will be carried out under a waterfall, ‘V’ or RAD development approach.
#2 Specifically it may define the:
- component integration sequence (for integration testing);
- paths through the control flow (for integration testing);
- types of test (e.g. white-box, black-box, performance, stress, etc.).
#3 The description should provide the rationale for test-case selection and the packaging
of test cases into procedures. The method for analysing test results should be
identified, e.g. comparison with expected output, comparison with old results, proof
of consistency, etc. The tools required to support testing should also be identified.
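For instance, a 'compare with expected output' check can often be automated with a few lines of code that compare the captured output of a test run with a reference file held under configuration control. A minimal Java sketch follows (requires Java 12 or later); the file names are illustrative assumptions.

    import java.nio.file.Files;
    import java.nio.file.Path;

    // Minimal sketch of the "compare with expected output" analysis method:
    // the output captured from a test run is compared byte-for-byte with a
    // reference file kept under configuration control.
    public class OutputComparisonSketch {
        public static void main(String[] args) throws Exception {
            Path expected = Path.of("expected/report.csv");  // illustrative names
            Path actual = Path.of("actual/report.csv");
            long firstDifference = Files.mismatch(expected, actual);  // -1 means identical
            if (firstDifference == -1) {
                System.out.println("PASS: output matches the expected results");
            } else {
                System.out.println("FAIL: first difference at byte " + firstDifference);
            }
        }
    }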
4 TEST CASE SPECIFICATION

#1 Test cases specify the inputs, predicted results and execution conditions. Each test
case should aim to evaluate the operation of a key element or function of the system.
#2 Failure of a test case, depending upon the severity of the failure, would be
catalogued as part of the overall evaluation of the suitability of the system as a whole
for its intended use.
#3 Test cases can start with a specific 'form' that allows operator entry of data into the
system. If the architecture is based upon an n-tier solution, this needs to be mapped
through the business logic and rules into the server systems, with transactions being
evaluated both in a 'nominal' mode, where the transaction succeeds, and for those
occasions when the transaction or 'thread' fails. A test design may require one or
more test cases, and one or more test cases may be executed by a single test
procedure.
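By way of example, the nominal/failure pairing described above might be expressed as a pair of automated tests. The sketch below uses JUnit 5 (assumed to be on the classpath); the TransactionService class is a hypothetical stand-in for the system under test.

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    // Minimal sketch of a test case pairing a nominal transaction with its
    // failure 'thread'. TransactionService is a hypothetical stand-in.
    class TransactionTestCaseSketch {

        @Test
        void nominalTransactionSucceeds() {
            TransactionService service = new TransactionService();
            assertTrue(service.submit("valid shipment record"));
        }

        @Test
        void malformedTransactionIsRejected() {
            TransactionService service = new TransactionService();
            assertFalse(service.submit(""));  // the failure 'thread'
        }

        // Hypothetical stand-in so the sketch is self-contained.
        static class TransactionService {
            boolean submit(String record) {
                return record != null && !record.isEmpty();
            }
        }
    }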
4.1 TEST CASE IDENTIFIER

#1 The title of this section should specify the test case uniquely. The content of this
section should briefly describe the test case and its objectives. It should also identify
the functions within the system that the test case will evaluate, both in terms of
successful operation and where errors occur.
Hardware
#1 This should specify the characteristics and configurations of the hardware required to
execute this test case. If the system will be required to run on several platforms, there
should be test cases for each one, until it can be shown that the results will be the
same whatever the platform. An example would be where a standard-compliant Java
Virtual Machine is to be used on all platforms.
Software
#1 This should specify the system and application software required to execute this test
case. This should be defined in terms of a build file that contains the subsystem or
system that is to be tested under configuration control.
Other
#1 This should specify any other requirements such as special equipment or specially
trained personnel in an area such as testing security related to the integrity of the
system.
5 TEST PROCEDURES

5.1 TEST PROCEDURE IDENTIFIER

#1 The title of this section should specify the test procedure uniquely.
5.1.1 Purpose
#1 This section should describe the purpose of this procedure. A reference for each test
case the test procedure uses should be given.
Log
#4 This should describe any special methods or formats for logging the results of test
execution, the incidents observed, and any other events pertinent to the test.
Set up
#5 This should describe the sequence of actions necessary to prepare for execution of the
procedure. This may include the setting up of the test reference tool, although where
an identical set-up is used by more than one test procedure, it may be more useful to
create a separate procedure, which would be run before this procedure and be listed
in the Special Requirements section above.
Start
#6 This should describe the actions necessary to begin execution of the procedure.
Actions
#7 This should describe the actions necessary during the execution of the procedure. It
should also include the specific data to be input and the expected results.
#8 This is especially important where the user interface is being tested, e.g. for HTML
and web-based applications, where data must be entered on the screen and the result
cannot be seen in a log file. Consideration should be given to the use of screen prints.
#9 It may be useful to use a table like this:
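For example (the columns shown are purely illustrative and should be adapted to the needs of the project):

    Step  Action                    Input data          Expected result
    1     Log in as a test user     user01 / password   Main menu is displayed
    2     Open the data entry form  (none)              An empty form is displayed
    3     Submit a completed form   sample record       A confirmation is displayed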
Shut down
#10 This should describe the actions necessary to suspend testing when interruption is
forced by unscheduled events.
Restart
#11 This should identify any procedural restart points and describe the actions necessary
to restart the procedure at each of these points.
Stop
#12 This should describe the actions necessary to bring execution to an orderly halt.
Wrap up
#13 This should describe the actions necessary to terminate testing.
Contingencies
#14 This should describe the actions necessary to deal with anomalous events that may
occur during execution. These may include occasions where the expected results do
not occur and a problem report has to be raised. This section should describe whether
the test can simply be repeated or whether data would have to be reset before the test
can be re-run.
6 TEST REPORTS
#1 This section may be extracted and produced as a separate document, to allow the
descriptions of the testing to be finalised and the test specification document issued
before the testing begins.
#2 The detail in each test report will depend upon the level of testing - e.g. subsystem,
system, and acceptance. For example, for unit testing, each test report may cover a
whole day’s testing; for acceptance testing each test procedure may have its own test
report section.
6.1 TEST REPORT IDENTIFIER

#1 The title of this section should specify the test report uniquely.
6.1.1 Description
#1 This section should identify the items being tested including their version numbers.
The attributes of the environment in which testing was conducted should be
identified.
Execution description
#3 This section should identify the test procedure(s) being executed. All the people
involved and their roles should be identified, including those who witnessed each
event.
Procedure results
#4 For each execution, this should record the visually observable results (e.g. error
messages generated, aborts and requests for operator action). The location of any
output, and the result of the test, should be recorded.
#5 It may prove easiest to use the table described in the test procedure section above,
with an extra column showing whether the expected results were achieved. It may
also be better to include the actual sheets used during the test as part of the test
report, either scanned in or physically included, although the latter would mean that
the report would have to be in hardcopy.
Environmental information
#6 This should record any environmental conditions specific for this entry, particularly
deviations from the nominal.
DOCUMENT CONTROL
Title: Test Specification
Issue: Issue 1
Date:
Author:
Distribution:
Reference:
Filename:
Control: Reissue as complete document only
DOCUMENT SIGNOFF
Nature of Signoff Person Signature Date Role
Authors
Reviewers