AdaTEST 95 and Cantata++
and
IEC 61508
Executive Summary
This paper describes how AdaTEST 95 and Cantata++ can be used to assist with the
development of software to the standard IEC 61508: Functional Safety - Safety-Related
Systems. In particular, it shows how AdaTEST 95 and Cantata++ can be used to meet
the verification and testing requirements of this standard. It also shows that the tools
have been produced to a high standard, such that their use for dynamic testing will not
compromise the safety and integrity of the software being tested.
The material presented here is suitable for inclusion in a justification for the use of the
products on a safety critical software development.
IPL is an independent software house founded in 1979 and based in Bath. IPL was
accredited to ISO 9001 in 1988, and gained TickIT accreditation in 1991. Both
AdaTEST 95 and Cantata++ have been produced to these standards.
1. Copyright
This document is the copyright of IPL Information Processing Ltd. It may not be
copied or distributed in any form, in whole or in part, without the prior written consent
of IPL.
IPL
Eveleigh House
Grove Street
Bath
BA1 5LR
UK
Phone: +44 (0) 1225 475000
Fax: +44 (0) 1225 444400
email: ipl@iplbath.com
Last Update: 28/05/2002 12:18
File: IEC61508.DOC
2. Introduction
AdaTEST 95 and Cantata++ are tools which support the dynamic testing, dynamic
(coverage) analysis, and static analysis of Ada, C and C++ software. The original
requirements for the tools included suitability for testing safety critical software,
although both tools are sufficiently flexible for the testing of any Ada, C or C++
software.
This paper describes how AdaTEST 95 and Cantata++ can be used to enable a software
development to meet many of the verification and testing requirements of IEC 61508:
Functional Safety - Safety-Related Systems. It should be read in conjunction with the
standard, which is referred to throughout this paper as IEC 61508.
To assist in cross-referencing, terms which are used by the standard are shown in italics
in this text. Where clause numbers of the standard are referenced, these have been
checked to be valid against IEC 61508-3:2001, dated December 2001.
The description consists of two parts. Section 3 looks at AdaTEST 95 and Cantata++ as
tools to facilitate developing software to meet the requirements of the standard. Section
4 looks at the integrity of the tools themselves, including standards used during their
development.
3. AdaTEST 95 and Cantata++ - Capabilities and Use
3.1. Overview of Requirements
The IEC 61508 standard is structured as 7 parts which identify process issues,
techniques and measures applicable to all aspects of functional safety. The parts which
identify validation, verification and test requirements relevant to AdaTEST 95
and Cantata++ are:
(a) IEC 61508 Part 3: Software Requirements;
CLAUSE 7.1 - Software Safety Lifecycle - General
CLAUSE 7.3 - Software Safety Validation Planning
CLAUSE 7.4.1 - Software Design and Development - Objectives
CLAUSE 7.4.4 - Requirements for Support Tools and Programming Languages
CLAUSE 7.4.5 - Requirements for Detailed Design and Development
CLAUSE 7.4.6 - Requirements for Code Implementation
CLAUSE 7.4.7 - Requirements for Software Module Testing
CLAUSE 7.4.8 - Requirements for Software Integration Testing
CLAUSE 7.5 - Programmable Electronics Integration
CLAUSE 7.6 - Software Operation and Modification Procedures
CLAUSE 7.7 - Software Safety Validation
CLAUSE 7.8 - Software Modification
CLAUSE 7.9 - Software Verification
CLAUSE 8 - Functional Safety Assessment
ANNEX A - Guide to the Selection of Techniques and Measures
ANNEX B - Detailed Tables
(b) IEC 61508 Part 7: Overview of Techniques and Measures.
3.2. General Description of AdaTEST 95 and Cantata++
AdaTEST 95 and Cantata++ support the analysis and testing of Ada, C, and C++
software at all levels, from module test through to full integration testing. The facilities
provided by AdaTEST 95 and Cantata++ are summarised in Table 1.
Testing - Test Preparation:
    Scripts in Ada, C or C++
    Test Script wizards
    Script generation from test definition data *2
Testing - Dynamic Testing:
    Test execution
    Result collection
    Result verification
    Timing analysis
    Stub simulation
    External call wrapping *1
    Host testing
    Target testing
Analysis - Static Analysis:
    Metrics: Code Counts, Code Complexity
    Metrics in formats: CSV, List file *2
    Dynamically checkable *2
Analysis - Dynamic Analysis:
    Coverage: Entry points, Statements, Decisions, Conditions, MC/DC,
        Call-Pairs *1, Exceptions *2
    Trace
    Assertions *2
    Paths *2
Table 1 - AdaTEST 95 and Cantata++ Facilities
*1: Cantata++ only    *2: AdaTEST 95 only
Testing with AdaTEST 95 and Cantata++ is centred on a dynamic test harness. The test
harness can be used to support testing at all levels, from module test through to full
integration testing, in both host and target environments. In the host environment tests
can either be run directly or under the control of a debugger. In the target environment,
tests can be run directly, under an emulator, or under a debugger.
Tests are controlled by a structured test script. Test scripts are written in the respective
application language, Ada, C or C++, and can be produced independently from the code
of the application. Test scripts are fully portable between host and target environments.
Test scripts may be written directly by the user, or can be generated by AdaTEST 95 and
Cantata++ using a test script wizard. AdaTEST 95 also has the option of generating
scripts from test case definition (TCD) data files. In addition, the tools allow the
production of scripts where the test data is kept in separately maintainable form such as
data tables, files, spreadsheets or databases.
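As an illustration of keeping test data in a separately maintainable form, the following
minimal C++ sketch drives a test loop from a data table; the function under test, its
stand-in implementation and the case values are invented for this example, and the plain
C++ assertions stand in for the tools' own checking commands.

    #include <cassert>
    #include <cstddef>

    // Stand-in for a hypothetical application routine under test:
    // converts a 12-bit sensor reading to a percentage of full scale.
    double reading_to_percent(int raw) { return raw * 100.0 / 4095.0; }

    // Test data kept as a separately maintainable table (it could equally be
    // loaded from a file, spreadsheet export or database).
    struct Case { int raw; double expected; double tolerance; };
    static const Case cases[] = {
        {    0,   0.0, 0.01 },
        { 2048,  50.0, 0.05 },
        { 4095, 100.0, 0.01 },
    };

    int main()
    {
        for (std::size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); ++i) {
            const double diff = reading_to_percent(cases[i].raw) - cases[i].expected;
            assert(diff <= cases[i].tolerance && -diff <= cases[i].tolerance);
        }
        return 0;
    }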
AdaTEST 95 and Cantata++ static analysis provides static analysis of source and also
instruments code in preparation for later dynamic analysis. With AdaTEST 95, the
results of static analysis are reported in a list file and can be made available to the test
script through the instrumented code. Both tools support the production of static analysis
results in CSV format, which allows them to be exported to spreadsheets and databases.
AdaTEST 95 and Cantata++ dynamic analysis provides a number of analysis commands
which can be incorporated into test scripts. During test execution these commands are
used to gather test coverage data, to trace execution, to verify data flow, and to report on
and check the results of static analysis and dynamic analysis.
When a test script is executed, AdaTEST 95 and Cantata++ produce a test report giving
a detailed commentary on the execution of the tests, followed by an overall pass/fail
summary at the end of the report. The anticipated outcomes of static analysis, dynamic
analysis and dynamic testing can thus be specified in a single test script and
automatically checked against actual outcomes.
To assist configuration management, all AdaTEST 95 and Cantata++ outputs include
date and time information. The reports output by AdaTEST 95 and Cantata++ would
typically be incorporated into test logs, review logs, safety logs etc.
Instrumented code is not a compulsory part of testing with AdaTEST 95 and Cantata++.
The AdaTEST 95 and Cantata++ test harnesses can be used in a 'non-analysis' mode
with uninstrumented code.
3.3. Dynamic Testing
The dynamic testing facilities of AdaTEST 95 and Cantata++ enable the user to exercise
the software using test scripts which are structured as a series of test cases. The tools
place no restrictions on the techniques the user may apply to select test cases. Both
functional testing and structure-based testing techniques can be used within test scripts.
The following test case selection criteria and techniques identified by IEC 61508 are all
compatible with AdaTEST 95 and Cantata++ (a minimal boundary value sketch follows the list):
Boundary value analysis
Error guessing
Equivalence Classes and Input Partition Testing
Structure-Based Testing
Cause Consequence diagrams
Interface Testing
Event Tree Analysis
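For example, boundary value analysis and equivalence class partitioning applied to a
simple range check lead to test cases like those below; the checked function and its
0..120 limit are invented for illustration, and a real test script would use the tools'
checking commands rather than plain assert.

    #include <cassert>

    // Hypothetical routine under test: accepts speed demands of 0..120 km/h.
    bool speed_is_valid(int speed_kmh) { return speed_kmh >= 0 && speed_kmh <= 120; }

    int main()
    {
        // Equivalence classes: below range, within range, above range.
        assert(!speed_is_valid(-50));
        assert( speed_is_valid(60));
        assert(!speed_is_valid(500));

        // Boundary values: either side of both range limits.
        assert(!speed_is_valid(-1));
        assert( speed_is_valid(0));
        assert( speed_is_valid(120));
        assert(!speed_is_valid(121));
        return 0;
    }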
IEC 61508 correctly demands planning of dynamic testing as part of the design process
for each software object. The products' structured test scripts can be used as Software
Design Test Specifications and Software Module Test Specifications, as they are easily
readable and auditable by quality assurance staff.
AdaTEST 95 and Cantata++ tests allow the developer to demonstrate that the software
performs its intended function and, through negative testing facilities, that it does not
perform unintended functions. The expected outcomes of dynamic testing, static analysis
and dynamic analysis can be built into test scripts. These expected outcomes can then be
checked automatically by AdaTEST 95 and Cantata++ against actual outcomes when the
tests are executed. This enables comprehensive check lists to be fully automated.
Test scripts can be placed under software configuration management together with the
code to which they relate. However, visibility of the code to the tester is not necessary in
order to carry out the tests. Hence, testing can be carried out by independent authorities,
if necessary.
If error seeding is used, the test script can be executed with the seeded software and the
effectiveness of the tests evaluated by comparison of the test reports for the seeded and
un-seeded software.
Information hiding and encapsulation can potentially obscure data from the test
environment. To overcome this problem, IPL recommends the use of software test
points to provide visibility of data for testing purposes. See the IPL paper 'Achieving
Testability when Using Ada Packaging and Data Hiding Methods' for more
information. Cantata++ supports automated white-box testing by default.
The tools allow easy regression testing when conducting re-verification and re-
validation, as tests can be 'batched up' and re-run automatically. The test pass/fail
status is returned to the operating system to allow special action to be taken in the case
of tests which fail. Similarly, when creating a library of trusted/verified modules and
components, the tools allow easy re-verification when the modules are used on new
projects, provided the software and corresponding test scripts are configuration managed
in a repository.
3.3.1. Timing Analysis
AdaTEST 95 and Cantata++ provide timing analysis facilities which can be used to
measure and check that performance requirements are met. The timing analysis facilities
are particularly useful during stress testing.
Timing analysis uses the clock provided by the environment, so accuracy and resolution
are dependent on the environment. In circumstances where greater accuracy or
resolution is required, code can be included within a test script to interface to suitable
timer devices. The results of such timings can then be verified from the test script.
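Such an interface to a timer might look like the following C++ sketch, which uses the
standard <chrono> steady clock; the timed operation and the 5 millisecond budget are
assumptions for illustration, and the tools' own timing analysis commands would
normally be used instead.

    #include <cassert>
    #include <chrono>

    void filter_step() { /* stand-in for the operation with a performance requirement */ }

    void check_filter_timing()
    {
        using clock = std::chrono::steady_clock;

        const clock::time_point start = clock::now();
        filter_step();
        const clock::time_point stop = clock::now();

        // Verify the assumed performance requirement of 5 milliseconds.
        const auto elapsed =
            std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
        assert(elapsed.count() <= 5000);
    }

    int main() { check_filter_timing(); return 0; }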
3.3.2. Simulation
Software referenced by the software under test may either be built into a test executable,
or simulated using the AdaTEST 95 and Cantata++ stub simulation facilities. If stub
simulation is used to simulate device interfaces, the behaviour of the software under
various outside world device conditions can be tested before the actual devices are
available.
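Conceptually, stub simulation replaces a referenced routine with a test-controlled
stand-in, as in this minimal C++ sketch; the device interface, the 85 degree trip level and
the stub mechanism are invented for illustration, whereas the tools generate and
parameterise stubs from within the test script.

    #include <cassert>

    // Device interface referenced by the software under test (hypothetical).
    int read_temperature_sensor();

    // Software under test: trips an over-temperature alarm above 85 degrees.
    bool over_temperature() { return read_temperature_sensor() > 85; }

    // Stub simulation: the test decides what the "device" returns, so fault
    // conditions can be exercised before the real hardware is available.
    static int stubbed_reading = 0;
    int read_temperature_sensor() { return stubbed_reading; }

    int main()
    {
        stubbed_reading = 20;  assert(!over_temperature());   // normal condition
        stubbed_reading = 90;  assert( over_temperature());   // fault condition
        return 0;
    }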
Cantata++ has another facility called Wrapping, which allows for the monitoring of all
data across calls in an integrated software system. It can thus be used to check values
going 'out' of the software being tested, and also to change values being passed back to the
software under test.
Another use of AdaTEST 95 and Cantata++ is for a test script to directly provide
instructions to operating system facilities and device drivers. In this way entire
subsystems can be simulated using AdaTEST 95 and Cantata++, an essential part of
software and system integration. Simulation is often the only way to provide realistic
loads during stress testing.
3.3.3. Test Reports
The test reports produced by AdaTEST 95 and Cantata++ provide a clear, auditable
account of the tests performed. These results give both a detailed report on the execution
of the tests and a summary of the overall test status. Each test has an unequivocal
statement of test pass/fail to aid easy checking of summary results. The location of any
errors detected is highlighted in the test report by the inclusion of comprehensive
diagnostic information whenever a check fails. A summary of the type and number of
errors detected is given at the end of the test report.
To assist configuration management, all AdaTEST 95 and Cantata++ outputs include
date and time information. When dynamic analysis facilities are used, the filenames of the
original uninstrumented code and the date and time of instrumentation are also included
in the test report.
The test reports output by AdaTEST 95 and Cantata++ would typically be incorporated
into test logs, review logs, safety logs etc., forming a valuable input to activities such as
Fagan inspections, formal design reviews and walkthroughs.
3.4. Test Coverage Analysis
The dynamic analysis and testing parts of AdaTEST 95 and Cantata++ monitor the test
coverage of structure-based testing through a number of coverage metrics.
Statement Coverage is provided for statements within normal processing (and statements
within exception handlers for Ada). Statement Statistics reports can be used to show that
it is possible to execute all code statements. Automated checking for 100% Statement
Coverage will ensure that every code statement is executed at least once.
Decision Coverage includes the coverage of branches. Decision Statistics reports can be
used to provide evidence that both sides of every branch have been executed. Automated
checking for 100% Decision Coverage will ensure that every branch outcome is
executed at least once.
A range of Boolean Expression Coverage metrics provides coverage analysis and reports
for compound conditions. Automated checking for 100% Boolean Expression Coverage
will ensure that every condition in a compound conditional branch is exercised.
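To picture what 100% Boolean Expression Coverage demands, consider the compound
condition in this invented interlock example; the four cases below are the kind of
minimal set (in the MC/DC style) that shows each condition independently affecting the
decision, and they are not the tools' reporting format.

    #include <cassert>

    // Hypothetical interlock: release only when armed, confirmed and not inhibited.
    bool release_permitted(bool armed, bool confirmed, bool inhibited)
    {
        return armed && confirmed && !inhibited;
    }

    int main()
    {
        // Baseline case in which the decision is true, then three cases that each
        // differ in exactly one condition, flipping the decision to false.
        assert( release_permitted(true,  true,  false));
        assert(!release_permitted(false, true,  false));   // 'armed' shown independent
        assert(!release_permitted(true,  false, false));   // 'confirmed' shown independent
        assert(!release_permitted(true,  true,  true ));   // 'inhibited' shown independent
        return 0;
    }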
For Ada software, the Exception Statistics report available through AdaTEST 95 gives a
detailed analysis of the execution of exception handlers. Automated checking for 100%
Exception Coverage will ensure that all exception handlers have been executed by the
tests.
Call coverage can be used to check that all modules in the Call Graph are invoked at
least once. Cantata++ call-pair coverage can be used to check that all invocation routes
to a module are exercised. Related analysis provided by AdaTEST 95 and Cantata++
includes execution path verification (AdaTEST 95 only) and execution tracing, which
can be used to analyse tests for entire path coverage.
Execution tracing facilities within the tools allow the monitoring of control flow and
data flow through the code. Tracing may be selected for subprogram entry points,
statements, data, decisions, or Boolean expressions. Trace output can be used to
establish that every feasible potential path in the code is executed at least once by the
testing process, and to indicate possible errors by identifying variables that are read
before being written, variables that are written more than once without being read, and
variables that are written but never read.
AdaTEST 95 data assertions can be used to demonstrate data flow coverage and verify that
specified conditions (such as variables holding particular values) are true at least once
during test execution. Another form of assertion provided by AdaTEST 95 is the
dynamic assertion. Dynamic assertions are specified in a similar form to data
assertions; however, dynamic assertions are used to verify assertion conditions which
must always be true. Thus data assertions and dynamic assertions can be used to verify
that test cases fulfil boundary value analysis and equivalence class partitioning.
Where software implements a finite state machine, AdaTEST 95 and Cantata++
assertions can be used to verify state-transition coverage. More details are given in the
IPL paper 'Testing State Machines with AdaTEST 95 and Cantata++'. Cantata++ has a
specific facility for monitoring coverage within a (user-defined) state context.
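The idea of state-transition coverage can be sketched in plain C++ as below; the small
state machine and the transition-recording array are invented for illustration and do not
represent the tools' state context facility or assertion syntax.

    #include <cassert>

    enum State { IDLE, RUNNING, STOPPED };
    enum Event { START, STOP };

    static bool transition_hit[3][2] = {};   // [from state][event] exercised by the tests?

    // Hypothetical state machine under test, instrumented to record transitions.
    State next_state(State current, Event ev)
    {
        transition_hit[current][ev] = true;
        if (current == IDLE    && ev == START) return RUNNING;
        if (current == RUNNING && ev == STOP)  return STOPPED;
        return current;                        // all other events are ignored
    }

    int main()
    {
        State s = IDLE;
        s = next_state(s, START);
        s = next_state(s, STOP);
        assert(s == STOPPED);

        // Verify state-transition coverage of the two defined transitions.
        assert(transition_hit[IDLE][START]);
        assert(transition_hit[RUNNING][STOP]);
        return 0;
    }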
The results of AdaTEST 95 and Cantata++ dynamic analysis, including coverage of
assertion conditions, can be checked against specified levels, reported in detail, or
exported and imported between test scripts. These advanced coverage and assertion
facilities provided by AdaTEST 95 and Cantata++ provide a dynamic alternative to
many functions which would often be associated with static analysis.
3.5. Static Analysis
AdaTEST 95 and Cantata++ static analysis calculates metrics mainly relating to code
construct use and software complexity. With both tools the metrics are available in CSV
format, which can be output to a spreadsheet or database for further analysis or storage.
With AdaTEST 95 there are also options for outputting the metrics to a list file (text)
format or dynamically checking metrics during a test run.
From a test script, metrics can be checked against user-defined limits. A limit of 0 will
result in a test failure if the corresponding construct is used at all. This includes counts
of comments and identifiers, but obviously the meaningfulness of comments and
identifier names cannot be checked automatically.
The ability to check language construct usage and code complexity provides a means of
enforcing a modular approach and facilitates the detection of poor programming
structure, excessive complexity and deviations from the required language subset and
coding standards. For example, a programming standard might permit a maximum of
two loops in a module of code. With AdaTEST 95 and Cantata++, a check can be built
into a test script to report modules containing more than two loops.
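The loop-limit example can be pictured with the following C++ sketch, in which a
measured metric is compared against a project limit and a failure is reported; the module
name, metric value and reporting style are invented, and in practice the comparison
would be expressed with the tools' own check commands against the collected metrics.

    #include <cstdio>

    // Compare a measured static analysis metric against a user-defined limit
    // and report a failure if the limit is exceeded (illustrative only).
    bool check_metric(const char* module, const char* metric, int measured, int limit)
    {
        if (measured > limit) {
            std::printf("FAIL: %s: %s = %d exceeds limit of %d\n",
                        module, metric, measured, limit);
            return false;
        }
        return true;
    }

    int main()
    {
        // Coding standard example from above: at most two loops per module.
        const bool ok = check_metric("pump_control", "loop count", 3, 2);
        return ok ? 0 : 1;   // non-zero exit status flags the violation
    }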
Static Analysis provided by AdaTEST 95 and Cantata++ does not support: automated
checking of the compliance of the code to the design specification; symbolic execution;
or formal proof. However, the advanced dynamic analysis facilities, as described in the
preceding sections of this paper, provide some compensation for this. If dynamic
analysis is an unacceptable alternative, a specialised tool such as SPARK Examiner or
MALPAS should be used in addition to AdaTEST 95 or Cantata++.
3.6. Language
AdaTEST 95 and Cantata++ are specialised tools for the testing of software written in
the Ada, C and C++ languages. They are capable of analysing and testing code written in
the full Ada, C and C++ languages or language subsets, and of enforcing language subsets
and coding standards. Some application-specific languages such as SPARK are subsets
of Ada and can therefore be analysed and tested using AdaTEST 95.
Test scripts are normally written in a minimal subset of the respective language, but the
full capabilities of the language are available for use in scripts when required. This
enables the functionality of test scripts to be extended to facilitate techniques such as
Monte Carlo simulation and probabilistic testing, as sketched below. The tools ensure
that AdaTEST 95 and Cantata++ commands are used in a structured and disciplined manner.
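A probabilistic test, for instance, can use the full language to drive the code with many
pseudo-random inputs and check a property over all of them, as in this C++ sketch; the
function under test, the checked property and the fixed seed are assumptions made for
illustration.

    #include <cassert>
    #include <cstdlib>

    // Hypothetical routine under test: addition saturated to the range 0..1000.
    int saturating_add(int a, int b)
    {
        const int sum = a + b;
        if (sum < 0)    return 0;
        if (sum > 1000) return 1000;
        return sum;
    }

    int main()
    {
        std::srand(12345);   // fixed seed so the probabilistic test is repeatable

        for (int i = 0; i < 10000; ++i) {
            const int a = std::rand() % 2001 - 1000;   // pseudo-random values in -1000..1000
            const int b = std::rand() % 2001 - 1000;
            const int result = saturating_add(a, b);

            // Property that must hold for every generated input pair.
            assert(result >= 0 && result <= 1000);
        }
        return 0;
    }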
AdaTEST 95 and Cantata++ provide no facilities to directly help with formal proof of a
program. However, it is possible to formally annotate test scripts and to include test
scripts in formal proofs. It is conceivable that a development of safety critical software
could enforce a formally proven language subset for use in test scripts.
Test scripts may either be written manually in the native language, or created
automatically by AdaTEST 95 and Cantata++ using a test script wizard. However,
where tool integrity is an issue, automatic generation of test scripts should not be used
without a manual verification of the correctness of the resulting test script.
Languages other than Ada, C and C++ can be tested provided that they are accessible
through language interfacing (including machine code insertions and assembler). The
static and dynamic analysis capabilities of AdaTEST 95 and Cantata++ will be
unavailable or severely restricted when testing software written in other languages.
3.7. Intrusive and Non-Intrusive Testing
Use of AdaTEST 95 and Cantata++ for test coverage requires the instrumentation of the
software under test, using the instrumenter and static analyser which form part of the
toolset. Obviously, performing tests on code which is instrumented (and therefore
changed) would not be compliant with the requirements for safety critical software.
AdaTEST 95 and Cantata++ address this problem by operating in one of two modes.
In Analysis mode, the toolsets expect the software under test to be instrumented for
coverage analysis. All analysis commands are fully executed. In Non-Analysis mode, the
toolsets expect the software under test to be uninstrumented. All analysis commands are
ignored, but all other (dynamic testing) commands are fully executed. Naturally, checks
are made to ensure the compatibility of the software under test with the mode of the
toolsets.
The two modes of AdaTEST 95 and Cantata++ should be used to execute each test
script twice. Initially, Analysis mode can be used to ensure a thorough coverage of the
software under test by the test script. When full coverage has been achieved, the same
test script can be run again in Non-Analysis mode on uninstrumented software, to ensure
that the dynamic test results are unchanged.
4. Tool Integrity and Development Standards
The integrity of the toolsets used in the development of safety critical software is a
significant consideration. Ideally, all development should be conducted using certified
tools. This is especially true of dynamic testing tools.
The AdaTEST 95 and Cantata++ toolsets have been developed according to the IPL
Quality Management System (QMS), which is certificated to ISO 9001 and TickIT.
AdaTEST 95 and Cantata++ support the development of all standards of Ada, C and
C++ software, including safety critical software. The development of AdaTEST 95 and
Cantata++ followed IPL's normal ISO 9001/TickIT development standards with some
additional high-integrity measures for certain parts of the products.
Key points of the development and ongoing maintenance are:
(a) The IPL Quality Management System, accredited to ISO 9001:2000 and TickIT;
(b) The IPL Software Code of Practice, forming part of the Quality Management
System;
(c) Hazard analysis and the maintenance of a hazard log (AdaTEST 95);
(d) Independent audit;
(e) The use of a safe language subset for the core functionality, with minimal exceptions
to this subset for other functionality only where absolutely necessary and justified
in the hazard reporting process (AdaTEST 95). Tests can be built using just the
core functionality;
(f) Configuration management;
(g) Rigorous dynamic analysis and testing, from the software module testing level
upwards.
The safety critical development standards used included consideration of those used for
the European Fighter Aircraft, and Defence Standards 00-55 and 00-56. Formal methods
were not used in the specification of Cantata++ or AdaTEST 95. AdaTEST 95 has been
audited and approved as a certified tool for use in airborne software to the DO-178B
standard.
AdaTEST 95 and Cantata++ are in widespread use, providing additional confidence in
the tools' integrity as they have been proven in use. Clients and certification authorities
wishing to audit the development and continuing maintenance of the products may do so
by arrangement with IPL.
5. Conclusion
AdaTEST 95 and Cantata++ are well suited to the development of software to the IEC
61508 standard and facilitate a high degree of automation of the verification and test
techniques required for effective use of the standard.
AdaTEST 95 and Cantata++ have been developed to the highest practical standard for
software verification tools.
It is believed that AdaTEST 95 and Cantata++ are the only tools to offer this
comprehensive functionality and the only testing tools developed to such high standards.
However, this does not preclude the use of AdaTEST 95 and Cantata++ for the cost-
effective testing of software which is not for safety critical use.