MC-WP-002 Eight Top Code Coverage Questions
White Paper
To meet DO-178B/C guidance, testing of airborne software should be supported
with structural code coverage measurements. This paper sets out eight key code
coverage questions for engineers working on embedded avionics systems.
It then introduces RapiCover, which is optimized for on-target structural
coverage analysis. RapiCover helps meet DO-178B/C guidelines, reduces
verification effort and supports engineers working with C, C++ and Ada.
DO-178B/C (also referred to as EUROCAE ED-12B) provides guidance for specific considerations for airborne software. It calls for demonstration of code coverage to a level determined by the criticality of the application under consideration. The table below lists a number of coverage criteria used to assess software testing effectiveness. The coverage criteria are defined in the Appendix (p.16).

Traceability between the requirements, the test cases and the source code demonstrates:

» Every requirement has a test case.
» All source code is traceable to a requirement.

Measuring code coverage when the test cases are executed is essential for this process: where coverage is less than 100%, this points to code that is not traceable to requirements, tests or both. Different coverage criteria (see table on p.5) allow the degree of rigor in measuring the coverage to reflect the Development Assurance Level (DAL) of the system.

What to look for in a code coverage tool: Can it support all classes of code coverage? Can it support different variants, such as masking v. non-masking MC/DC?

Coverage criterion | Definition | DO-178B/C requirement
Call coverage | Each function has been called at least once, and each different function call has been encountered at least once | Not required by DO-178B/C
Statement coverage | Each statement in the code has been encountered at least once | Required for DO-178B/C Levels A, B, C
Decision coverage | Each decision (see box below) in the code has evaluated true at least once and evaluated false at least once, and each function entry and exit point has been encountered at least once | Required for DO-178B/C Levels A, B
Condition coverage | Each condition (see box below) in the code has evaluated true at least once and evaluated false at least once | Not required by DO-178B/C
Modified Condition/Decision Coverage | Decision coverage plus each condition has been shown to independently affect the outcome of its enclosing decision | Required by DO-178B/C Level A
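To make the criteria in the table concrete, consider a single decision containing two conditions. The function and values below are illustrative, not taken from the paper.

```c
#include <stdbool.h>

/* Illustrative example (not from the paper): one decision containing
 * two conditions, used to contrast the coverage criteria above. */
bool in_envelope(int alt, int speed)
{
    /* Decision: (alt > 100) && (speed < 300)
     * Statement coverage: any call reaching the return covers the statements.
     * Decision coverage: the whole expression must evaluate true at least
     *   once and false at least once.
     * Condition coverage: each of (alt > 100) and (speed < 300) must
     *   evaluate true at least once and false at least once.
     * MC/DC: additionally, each condition must be shown to independently
     *   flip the decision, e.g. (200, 200) v. (50, 200) for the first
     *   condition, and (200, 200) v. (200, 400) for the second. */
    return (alt > 100) && (speed < 300);
}
```

For this two-condition decision, the three vectors (200, 200), (50, 200) and (200, 400) achieve MC/DC; decision coverage alone would already be satisfied by the first two.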
When developing software for an embedded application, such as an avionics system, verification activities can be performed on-host or on-target. On-target testing means the application is tested on the hardware to be deployed (the target). It may also be referred to as host-target testing or cross-testing. On-host testing means testing the application on a host computer (such as the development system used to build the application). This may also be referred to as host-host testing.

What to look for in a code coverage tool: Can it do on-host and on-target testing?

2.2.2 On-host testing

On-host testing involves compiling the application code to run on the host processor, rather than the target processor. Typically, the application also requires a certain amount of adaptation to work in the host environment due to the following considerations:

» Running under a desktop OS rather than the target's RTOS may require different API calls.
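A common way to handle such API differences is conditional compilation, selecting the host or target implementation at build time. A minimal sketch, assuming a hypothetical TARGET_BUILD macro and RTOS header; none of these names come from the paper.

```c
/* Sketch of host adaptation via conditional compilation. TARGET_BUILD,
 * rtos.h and rtos_delay_ms are hypothetical names for illustration. */
#ifdef TARGET_BUILD
#include "rtos.h"                         /* target RTOS API */
static void delay_ms(unsigned ms) { rtos_delay_ms(ms); }
static const char *build_env(void) { return "target"; }
#else
#include <unistd.h>                       /* desktop OS: POSIX API */
static void delay_ms(unsigned ms) { usleep(ms * 1000u); }
static const char *build_env(void) { return "host"; }
#endif
```

The application calls delay_ms() everywhere; only the wrapper changes between the on-host and on-target builds, so coverage measured on-host still exercises the same application logic.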
If evidence of code coverage is mandatory for the project, for example because the customer requires strict adherence to DO-178B/C guidance, it's also important to be able to provide evidence that the process used to collect the data has worked correctly.

The evidence must show that the tool works correctly within the context of the development environment for which it is producing results. In the case of DO-178B/DO-330, the following items are recommended for tool qualification:

» PSAC (Plan for Software Aspects of Certification). This references the TQP and TAS (see below).
» TOR (Tool Operational Requirements). This describes what the tool does, how it is used and the environment in which it performs.
» TAS (Tool Accomplishment Summary). This is a summary of the data showing that all requirements in the TOR have been verified.
» TVR (Tool Verification Records). This comprises test cases, procedures and results.
» TQP (Tool Qualification Plan). This describes the process for qualifying the tool.

These items combine two main kinds of evidence:

» Generic evidence. This needs to be provided by the tool vendor to define the tool operational requirements, and verification evidence to demonstrate that the tool meets the requirements.
» Specific evidence. The tool user needs to demonstrate that the tool works correctly in a specific environment. Ideally the tool vendor should provide support to simplify this process as much as possible.

What to look for in a code coverage tool: Is certification evidence available? Will the tool vendor support you in generating specific certification evidence?
Your approach to testing may rely upon combining coverage results from a variety of different tests. This could occur because:

» Your strategy includes a combination of on-target and on-host testing.
» You need multiple test cases reflecting different system modes.
» System constraints may force you to instrument only one part of your system at a time (consider the advice in Section 2.3 to mitigate this issue).

It may be necessary to perform the coverage analysis for each of these tests individually, and to manually merge the results. A better approach is to use a tool that supports the combination of multiple results into a single report.

What to look for in a code coverage tool: Can it combine coverage data from multiple test scenarios into a single report?
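Conceptually, merging coverage data from multiple runs is a point-wise union: an instrumentation point counts as covered if any run executed it. A minimal sketch with hypothetical names (this is not RapiCover's actual data format):

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_POINTS 8   /* illustrative number of instrumentation points */

/* Merge one run's results into the accumulated report: a point is
 * covered overall if it was covered in any contributing run. */
static void merge_coverage(bool merged[NUM_POINTS],
                           const bool run[NUM_POINTS])
{
    for (size_t i = 0; i < NUM_POINTS; i++)
        merged[i] = merged[i] || run[i];
}

/* Count covered points, e.g. to compute a summary percentage. */
static unsigned count_covered(const bool cov[NUM_POINTS])
{
    unsigned n = 0;
    for (size_t i = 0; i < NUM_POINTS; i++)
        if (cov[i])
            n++;
    return n;
}
```

An on-host unit-test run and an on-target integration run can each produce such a vector, and the merged report then reflects both without re-running either.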
RapiCover is designed to deliver three key benefits:

» Reduced timescales by running fewer on-target tests.
One of the most widely-used approaches to recording which parts of the code have been executed is source-code instrumentation. In this approach, instrumentation code is inserted into the source code during the build process. The instrumentation code is used to signal that a specific function, line, condition or decision (depending upon the coverage type required) has been executed. Done in a naïve way, this can negatively impact the executable code in two ways:

» Too many instrumentation points. Adding more instrumentation points than necessary doesn't improve the information generated, but does result in greater memory requirements and longer execution times.

RapiCover provides a range of highly-optimized solutions for the instrumentation code it generates. This flexibility allows you to make the best use of the resources available on your platform. This results in best-in-class instrumentation overheads for an on-target code coverage tool, and consequently fewer test builds.
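The effect of source-code instrumentation can be sketched as follows. Here cov_mark and the point numbering are stand-ins for whatever probes a real tool inserts during the build; the function being instrumented is invented for illustration.

```c
#include <stdbool.h>

/* Hypothetical probe inserted by the instrumenter: records that an
 * instrumentation point was reached. */
#define COV_POINTS 3
static bool cov_hit[COV_POINTS];
static void cov_mark(int point) { cov_hit[point] = true; }

/* Example function after instrumentation for statement/branch coverage. */
int clamp(int v, int lo, int hi)
{
    cov_mark(0);          /* function entry reached */
    if (v < lo) {
        cov_mark(1);      /* first true branch reached */
        return lo;
    }
    if (v > hi) {
        cov_mark(2);      /* second true branch reached */
        return hi;
    }
    return v;
}
```

After the tests run, the coverage tool reads cov_hit back from the target and maps each flag to a source location. Even this naive version adds one flag and one call per point, which is why minimizing the number and cost of instrumentation points matters on resource-constrained targets.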
[Figure: coverage data is collected from the target CPU into the coverage data set on the host over a choice of channels: logic analyser, I/O port, debugger, Nexus/ETM trace, simulator buffer, or a network connection (e.g. Ethernet) from target RAM.]
A major driver for the use of code coverage is the need to meet DO-178B/C objectives. In addition to providing options for achieving DO-178B/DO-330 tool qualification, RapiCover also aims to make the process of gathering and presenting code coverage results easier. This is achieved in the following ways:

» Multiple format report export. RapiCover provides you with the ability to browse coverage data using our Eclipse-based viewer and to export the same information into CSV, text or XML formats, aligned with source code.
» Combination of reports from multiple sources. Coverage data is often generated at multiple phases of the test program, for example: unit test, integration test and system test. RapiCover supports the consolidation of this data into a single report.
» Justification of missing coverage. Where legitimate reasons exist that specific parts of the code cannot be executed, RapiCover provides an automated way of justifying this. The summary report shows code that is executed, code that is justified, and code that is neither executed nor justified.

Automatic combination of results from multiple test runs and the ability to justify missing coverage make the preparation of coverage Software Verification Results quicker.
main.c:

    void main(void) {
        void (*fp_act)(void);
        ...
        for (i = 0; i < MSG_SIZE; i++) {
            switch (msg[i]) {
            case 0:
                activity_a();
                activity_b();
                break;
            case 1:
                activity_b();
                (*fp_act)();
                break;
            }
        }
    }

activity.c:

    void activity_a(void) {
        ...
        while (x > 0) {
            if (y < 0) {
                return;
            }
            ...
        }
        return;
    }

    void activity_b(void) {
        ...
        while (x > 0) {
            ...
        }
        return;
    }

KEY: call sites and call pairs.
Coverage criteria measure how well the test cases exercise the routes through the logic of a program. They are derived solely from the structure of the code.

     2  {
     3      int Interpolate_Index = 0;
     4      while ( Sin_Graph[Interpolate_Index+1].x != Sin_Graph_Sentinel )
     5      {
     9          int y2 = Sin_Graph[Interpolate_Index+1].y;
    10
    11          if ( v >= x1 && v < x2 )
    12          {
    13              return (y1 + (v - x1) * (y2 - y1) / (x2 - x1));
    17      return Sin_Graph_Default;
    18  }

The example contains two decisions: the while-loop condition at line 4 and the if-statement at line eleven.

For decision coverage, the typical criterion is that execution has reached every point of entry and exit in the code, and that every decision has evaluated both true and false, at least once. For the example code, this would mean:

» the while-loop condition at line 4 has evaluated both true and false;
» the if-statement at line 11 has evaluated both true and false;
» the entry at line 2 and the exits at lines 13 and 17 have each been reached.
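The decision at line 11 can be isolated to show concrete test vectors. The function name and the values of x1 and x2 below are illustrative, not from the original example.

```c
#include <stdbool.h>

/* The decision from line 11 of the example, isolated so that coverage
 * test vectors can be stated explicitly. */
static bool decision(int v, int x1, int x2)
{
    return v >= x1 && v < x2;
}
```

With x1 = 0 and x2 = 10, decision(5, 0, 10) is true and decision(-1, 0, 10) is false, which satisfies decision coverage for this expression. Adding decision(20, 0, 10), which is false with only the second condition changed relative to the first vector, demonstrates the independent effect of each condition, which is what MC/DC additionally requires.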
Rapita Systems Inc., 41131 Vincenti Ct., Novi, MI 48375. Tel (USA): +1 248-957-9801
Rapita Systems Ltd., Atlas House, Osbaldwick Link Road, York, YO10 3JB. Tel (UK/International): +44 (0)1904 413945
Registered in England & Wales: 5011090