
Eight top code coverage questions for DO-178B/C

White Paper

To meet DO-178B/C guidance, testing of airborne software should be supported with structural code coverage measurements. This paper sets out eight key code coverage questions for engineers working on embedded avionics systems. It then introduces RapiCover, which is optimized for on-target structural coverage analysis. RapiCover helps meet DO-178B/C guidelines, reduces verification effort and supports engineers working with C, C++ and Ada.

On-target software verification solutions


Contents

1. Introduction
2. Eight top code coverage questions
   2.1 What is code coverage?
   2.2 Should we do on-target or on-host code coverage?
   2.3 What are the challenges to on-target code coverage, and how can we overcome them?
   2.4 How can I use my code coverage results to support certification?
   2.5 What additional benefits come from measuring on-target?
   2.6 How do I combine results from multiple tests?
   2.7 How do I deal with missing code coverage?
   2.8 What should I look for in a code coverage tool?
3. Product summary: RapiCover
   3.1 Reduced timescales by running fewer on-target tests
   3.2 Reduced risk through greater tool flexibility
   3.3 Reduced effort for certification activities
   3.4 Discover what RapiCover can do for you
4. About Rapita Systems
   4.1 RVS
   4.2 Early Access Program
5. Appendix: overview of code coverage criteria
   5.1 Function coverage
   5.2 Call coverage
   5.3 Statement coverage
   5.4 Decision coverage
   5.5 Modified condition/decision coverage (MC/DC)


1. Introduction

Supporting the test process with measurements of structural code coverage is a key activity for DO-178B/C compliance during the development of software for airborne systems. In this white paper we consider eight key questions:

» What is code coverage and how does it benefit my project?
» Should we do on-target or on-host coverage?
» What are the challenges to on-target code coverage and how can we overcome them?
» How can I use my code coverage results to support certification?
» What additional benefits are derived from measuring on-target?
» How do I combine results from multiple tests?
» How do I deal with missing code coverage?
» What should I look for in a code coverage tool?

After we have addressed these eight key questions, we introduce RapiCover, a software tool designed to efficiently perform structural coverage analysis on code running on an embedded target. The benefits of using RapiCover to conduct structural coverage analysis on an embedded target include:

» Reduced timescales by running fewer on-target tests. Very lightweight instrumentation means more coverage information per test cycle.

» Reduced risk through greater tool flexibility. Adapt RapiCover to work with your system, rather than adapting your system to work with another tool. Collect coverage information via a wide variety of mechanisms, making it easier to integrate RapiCover into your system.

» Reduced effort for certification activities. Automatic combination of results from multiple test runs and the ability to justify missing coverage make the preparation of coverage quicker.

Keep up to date: The Rapita Systems blog addresses topics related to on-target verification, including code coverage and DO-178B/C. www.rapitasystems.com/blog


2. Eight top code coverage questions

2.1 What is code coverage and how does it benefit my project?

Structural coverage analysis is an important verification tool for establishing the completeness of testing.

DO-178B/C emphasises the use of requirements-based testing as an important part of the software verification process. In requirements-based testing, the high- and low-level requirements are used to derive the source code and the tests for that source code. Traceability between the requirements, the test cases and the source code demonstrates that:

» Every requirement has a test case.
» All source code is traceable to a requirement.

Measuring code coverage when the test cases are executed is essential for this process: where coverage is less than 100%, this points to code that is not traceable to requirements, tests or both. Different coverage criteria (see the list below) allow the degree of rigor in measuring the coverage to reflect the Development Assurance Level (DAL) of the system.

What to look for in a code coverage tool: Can it support all classes of code coverage? Can it support different variants such as masking v. non-masking MC/DC?


DO-178B/C and code coverage: RTCA DO-178B/C (also referred to as EUROCAE ED-12B) provides guidance for specific considerations for airborne software. It calls for demonstration of code coverage to a level determined by the criticality of the application under consideration. The list below gives a number of coverage criteria used to assess software testing effectiveness. The coverage criteria are defined in the Appendix (Section 5).

» Function coverage: Each function has been called at least once. Not required by DO-178B/C.

» Call coverage: Each function has been called at least once, and each different function call has been encountered at least once. Not required by DO-178B/C.

» Statement coverage: Each statement in the code has been encountered at least once. Required for DO-178B/C Levels A, B and C.

» Decision coverage: Each decision (see box below) in the code has evaluated true at least once and evaluated false at least once, and each function entry and exit point has been encountered at least once. Required for DO-178B/C Levels A and B.

» Condition coverage: Each condition (see box below) in the code has evaluated true at least once and evaluated false at least once. Not required by DO-178B/C.

» Modified condition/decision coverage (MC/DC): Decision coverage, plus each condition has been shown to independently affect the outcome of its enclosing decision. Required by DO-178B/C Level A.

Where code is well-structured and derived directly from well-written requirements and architecture, then a full set of high- and low-level requirements-based tests is entirely capable of meeting many of the existing code coverage criteria.

What are conditions and decisions?

A condition is a Boolean expression that contains no Boolean operators. Examples of conditions are: "true", "iterations > 5" or the name of a Boolean variable, such as "MaintenanceMode". If the same Boolean expression is repeated several times, each specific instance is a different condition.

A decision is a combination of at least one condition with zero or more Boolean operators to create an overall Boolean expression.

To illustrate the differences, consider:

    if (A) {                    // "A" is a condition and a decision
        ...
    } else if (B && x < 14) {   // "B" is a condition,
                                // "x < 14" is a condition,
                                // "B && x < 14" is a decision
        A = !(x > 14);          // "x > 14" is a condition,
                                // "!(x > 14)" is a decision
    }


2.2 Should we do on-target or on-host code coverage?

When developing software for an embedded application, such as an avionics system, verification activities can be performed on-host or on-target. On-target testing means the application is tested on the hardware to be deployed (the target). It may also be referred to as host-target testing or cross-testing. On-host testing means testing the application on a host computer (such as the development system used to build the application). This may also be referred to as host-host testing.

2.2.1 On-target testing

The key principle behind testing an application on-target is that code is executed in the environment for which it was designed, rather than in an environment where it was never intended to be executed. Test results are typically evaluated and analysed on a host. This has the following benefits:

» The "credibility gap", the possibility that some unanticipated difference exists between executing on-host in a harness and executing on-target, is minimized. This results in a lower likelihood both of false negatives, where errors go undetected and faulty software is deployed, and of false positives, where time is wasted tracking down non-existent problems.

» The smaller "credibility gap" also makes it easier to provide an argument to certification authorities that testing achieves an appropriate level of rigor.

» Ability to execute all code. Some parts of your code might not be possible to run on host (for example, device-specific code).

2.2.2 On-host testing

On-host testing involves compiling the application code to run on the host processor, rather than the target processor. Typically the application also requires a certain amount of adaptation to work in the host environment due to the following considerations:

» Running under a desktop OS rather than the target's RTOS may require different API calls.

» If the embedded application includes libraries, these may not be available on the host (or may be different, for example, in the case of graphics libraries).

» Alternative interpretations of ambiguous/undefined programming language features or compiler bugs may cause different behaviors between the host and the target.

» The embedded application may require access to specific hardware features that are not available on the host system.

However, there are benefits that can arise from on-host testing:

» The target may not be available, or there may be only limited access to it when testing needs to take place.

» The "build-deploy-analyze" cycle may be quicker than on-target testing.

» It is well suited to unit testing: a test harness can be used to achieve 100% coverage, even when defensive programming techniques are used.

What to look for in a code coverage tool: Can it do on-host and on-target testing?


2.2.3 What to choose?

The choice between on-host and on-target testing is driven by a trade-off between cost/convenience and credibility of results. In many cases, using a combination of both techniques offers dual benefits:

» Unit testing and test case development on-host gives the advantages of rapid turnaround;

» System/integration testing on-target provides the confidence that the code to be deployed has been tested in its intended environment.

2.3 What are the challenges to on-target code coverage, and how can we overcome them?

One of the biggest challenges to on-target code coverage is resource limitations in embedded systems. The standard approach to measuring coverage is to instrument source code to write tags into a memory buffer. This approach evolved from host-based testing: many commercially available code coverage solutions today begin with a host-based approach and attempt to transfer it to an embedded environment. This approach requires a large RAM buffer to store the data in, and each instrumentation point requires a large number of instructions (increasing execution time and increasing code size). On a resource-constrained platform this represents a difficulty.

The exact nature of resource constraints varies between systems. Data areas might be limited or code size constrained. On other systems high CPU utilization might limit what could be achieved. A code coverage solution has to recognize that these limitations can exist, and provide a viable route to dealing with them.

There are a number of ways to address resource constraints:

» Alternative data collection. In many cases, using an in-memory data structure to record coverage will be sufficient. However, when there is not enough room to store this data structure, or if the execution overhead of this approach is too high, alternative approaches need to be available. One such approach involves recording a trace of instrumentation points via an I/O port. This avoids the need for a large area of memory and simultaneously makes instrumentation overheads very low, typically 1-2 machine instructions. Advanced debuggers (e.g. Nexus or ARM ETM-based tracing debuggers) can also be used to collect data. (A code sketch of these two collection styles follows at the end of this section.)


» Partial instrumentation. Rather than completely instrumenting an application, instrument specific parts of it, perform the tests and combine the results to provide an overall picture.

» Optimized instrumentation. Measuring certain types of coverage, for example MC/DC, can require significant memory overheads. Instrumenting an embedded system for coverage requires knowledge of how instrumentation is carried out. Once set up, there are opportunities to make trade-offs between exactly how the level of coverage is achieved and the amount of instrumentation required.

What to look for in a code coverage tool: Can it adapt to different embedded environments? Can it cope with low memory environments? Will it support partial instrumentation and provide the ability to combine results?
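To make the trade-off concrete, the following is a minimal sketch of the two instrumentation-point styles described above: one writes coverage tags into a RAM buffer, the other streams each tag to a memory-mapped I/O port so that an external device can capture the trace. The macro names, buffer size and port address are hypothetical and are not RapiCover's implementation.

    #include <stdint.h>

    /* Hypothetical instrumentation support, for illustration only. */

    /* Style 1: record coverage tags in an in-memory buffer (needs RAM). */
    #define COV_BUF_SIZE 4096u
    static volatile uint16_t cov_buf[COV_BUF_SIZE];
    static volatile uint32_t cov_idx = 0u;

    #define COV_POINT_RAM(id)                         \
        do {                                          \
            if (cov_idx < COV_BUF_SIZE) {             \
                cov_buf[cov_idx++] = (uint16_t)(id);  \
            }                                         \
        } while (0)

    /* Style 2: stream each tag to a memory-mapped I/O port so an external
       device (e.g. a logic analyser) captures the trace; this typically
       compiles to one or two store instructions. The address is made up. */
    #define COV_IO_PORT (*(volatile uint16_t *)0x40001000u)
    #define COV_POINT_IO(id)  (COV_IO_PORT = (uint16_t)(id))

    /* Example of instrumented code using the RAM-buffer style. */
    int clamp_positive(int v)
    {
        COV_POINT_RAM(1);          /* function entry */
        if (v < 0) {
            COV_POINT_RAM(2);      /* true branch taken */
            return 0;
        }
        COV_POINT_RAM(3);          /* false branch taken */
        return v;
    }

Switching a point from COV_POINT_RAM to COV_POINT_IO trades buffer memory on the target for an external capture channel, which is the essence of the alternative data collection approach described above.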

2.4 How can I use code coverage results to support certification?

If evidence of code coverage is mandatory for the project, for example because the customer requires strict adherence to DO-178B/C guidance, it's also important to be able to provide evidence that the process used to collect the data has worked correctly.

The evidence must show that the tool works correctly within the context of the development environment for which it is producing results. In the case of DO-178B/DO-330, the following items are recommended for tool qualification:

» PSAC (Plan for Software Aspects of Certification). This references the TQP and TAS (see below).

» TOR (Tool Operational Requirements). This describes what the tool does, how it is used and the environment in which it performs.

» TAS (Tool Accomplishment Summary). This is a summary of the data showing that all requirements in the TOR have been verified.

» TVR (Tool Verification Records). This comprises test cases, procedures and results.

» TQP (Tool Qualification Plan). This describes the process for qualifying the tool.

These items combine two main kinds of evidence:

» Generic evidence. This is provided by the tool vendor and defines the tool operational requirements, together with verification evidence demonstrating that the tool meets those requirements.

» Specific evidence. The tool user needs to demonstrate that the tool works correctly in a specific environment. Ideally the tool vendor should provide support to simplify this process as much as possible.

What to look for in a code coverage tool: Is certification evidence available? Will the tool vendor support you in generating specific certification evidence?


2.5 What additional benefits come from measuring on-target?

When you instrument source code and run your application on target, you are opening the door to collecting other information besides simply code coverage. For example, if you collect a trace (i.e. recording the sequence of instrumentation points that are executed), it is possible to identify which test cases execute specific execution paths. Using the traces, it is possible to step through the code forwards and backwards.

If a trace also records the specific time at which instrumentation points are executed, it is possible to determine timing information (see the sketch at the end of this section). For example, RapiTime uses such information to provide a wide range of timing measurements that can be used for:

» execution time measurement;
» worst-case execution time (WCET) calculation;
» performance optimization.

What to look for in a code coverage tool: Can it exploit the effort that you've put in to integrate it with your target to provide additional information?
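As an illustration of how timing can be derived from a timestamped trace, here is a minimal sketch. The trace layout and function names are made up for this example and do not represent RapiTime's actual format; the sketch only computes an observed high-water mark, whereas a full WCET analysis does considerably more with the same data.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical timestamped trace entry. */
    typedef struct {
        uint16_t point_id;   /* instrumentation point identifier */
        uint32_t timestamp;  /* e.g. CPU cycle counter at that point */
    } trace_entry;

    /* Scan a trace and return the longest observed time between a
       function's entry point and its exit point. */
    uint32_t max_observed_time(const trace_entry *trace, size_t len,
                               uint16_t entry_id, uint16_t exit_id)
    {
        uint32_t max_time = 0;
        uint32_t entry_ts = 0;
        int in_function = 0;

        for (size_t i = 0; i < len; i++) {
            if (trace[i].point_id == entry_id) {
                entry_ts = trace[i].timestamp;   /* start of an observation */
                in_function = 1;
            } else if (in_function && trace[i].point_id == exit_id) {
                uint32_t elapsed = trace[i].timestamp - entry_ts;
                if (elapsed > max_time) {
                    max_time = elapsed;          /* new high-water mark */
                }
                in_function = 0;
            }
        }
        return max_time;
    }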
If a trace also records the specific time

2.6 How do I combine results from multiple tests?

Your approach to testing may rely upon combining coverage results from a variety of different tests. This could occur because:

» Your strategy includes a combination of on-target and on-host testing.

» You need multiple test cases reflecting different system modes.

» System constraints may force you to instrument only one part of your system at a time (consider the advice in Section 2.3 to mitigate this issue).

It may be necessary to perform the coverage analysis for each of these tests individually, and to manually merge the results. A better approach is to use a tool that supports the combination of multiple results into a single report (a sketch of the underlying merge operation follows at the end of this section).

What to look for in a code coverage tool: Can it combine coverage data from multiple test scenarios into a single report?
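Conceptually, combining structural coverage from several test runs is a union: a coverage point counts as covered overall if any run exercised it. A minimal sketch, assuming a hypothetical per-run array with one flag per instrumentation point (not any particular tool's data format):

    #include <stddef.h>
    #include <stdint.h>

    /* Merge per-run coverage flags into an accumulated result:
       a point is covered overall if any run covered it. */
    void merge_coverage(uint8_t *combined, const uint8_t *run, size_t n_points)
    {
        for (size_t i = 0; i < n_points; i++) {
            combined[i] |= run[i];   /* union of covered points */
        }
    }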



2.7 How do I deal with missing code coverage?

In some situations, there may be legitimate reasons for not achieving 100% code coverage. For example, it might not be possible to construct test cases to execute defensive programming constructs (an example follows at the end of this section). In this case, alternative forms of verification of this code could be agreed upon as acceptable.

In such a situation, it is useful for any code coverage report to provide the ability to justify uncovered code. Summary reports could then show executed code, justified (but unexecuted) code and unjustified code. The objective should be for all code to be either justified or executed.

What to look for in a code coverage tool: Is it possible to justify why some code is not executed, and to report the proportion of executed code, justified code and code that is neither executed nor justified?
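As an illustration (a hypothetical fragment, not taken from this paper), consider a defensive default branch that requirements-based tests cannot reach because the value is validated before the call; such a branch is a typical candidate for a coverage justification:

    typedef enum { MODE_IDLE = 0, MODE_TAXI = 1, MODE_FLIGHT = 2 } flight_mode_t;

    int mode_priority(flight_mode_t m)
    {
        switch (m) {
        case MODE_IDLE:   return 0;
        case MODE_TAXI:   return 1;
        case MODE_FLIGHT: return 2;
        default:
            /* Defensive: unreachable if m has already been validated
               upstream, so no requirements-based test can drive execution
               here. This branch would be justified rather than covered. */
            return -1;
        }
    }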

2.8 What to look for in a code coverage tool

Summarizing the above, which represents knowledge collected from our work with avionics software teams in the aerospace industry, we see that a code coverage tool should:

» support all classes of code coverage, including the specific interpretations used by your project;
» be capable of supporting on-host and on-target testing;
» be suitable for different embedded environments, including low memory environments;
» support partial instrumentation;
» have certification evidence available;
» be able to collect other classes of information;
» combine coverage reports from different tests;
» support justifications for code that has not been executed.


3. Product summary: RapiCover

RapiCover is a structural coverage analysis tool designed specifically to work with embedded targets. RapiCover is designed to deliver three key benefits:

» Reduced timescales by running fewer on-target tests.
» Reduced risk through greater tool flexibility.
» Reduced effort for certification activities.

[Figure: RapiCover screen shot]

3.1 Reduced timescales by running fewer on-target tests

Running system and integration tests can be time-consuming and runs the risk of introducing schedule delays, especially if the availability of test rigs is limited. If instrumentation overheads for code coverage are large, and system resources are limited, obtaining coverage can only be achieved through multiple test builds. This increases testing time, especially if additional time on test rigs needs to be negotiated.

RapiCover is designed specifically for use in resource-constrained, embedded applications. Because there is considerable variation between embedded systems, both in their requirements and their underlying technology, RapiCover provides a range of highly-optimized solutions for the instrumentation code it generates. This flexibility allows you to make the best use of the resources available on your platform. This results in best-in-class instrumentation overheads for an on-target code coverage tool, and consequently fewer test builds.

What should I look for in a code coverage tool? RapiCover:
» Support all classes of code coverage, including both masking and unique-cause MC/DC ✔
» Support on-host and on-target testing ✔
» Suitable for low-memory embedded environments ✔
» Support partial instrumentation ✔
» Have certification evidence available ✔
» Ability to collect other classes of information ✔
» Combine multiple coverage reports ✔
» Justify non-executed code ✔

About instrumentation: Performing structural code analysis requires some way of identifying which parts of the code have been executed. One of the most widely used approaches for this is source-code instrumentation. In this approach, instrumentation code is inserted into the source code during the build process. The instrumentation code is used to signal that a specific function, line, condition or decision (depending upon the coverage type required) has been executed. Done in a naïve way, this can negatively impact the executable code in two ways:

» Too many instrumentation points. Adding more instrumentation points than necessary doesn't improve the information generated, but does result in greater memory requirements and longer execution times.

» High overhead for each instrumentation point. If the implementation of an instrumentation point is inefficient, this has a multiplicative effect on the overheads of the system.

3.2 Reduced risk through greater tool flexibility

Rather than adapting your system to work with another tool, you can adapt how RapiCover works with your system. RapiCover supports this flexibility by working with a wide variety of data capture mechanisms.

An early design objective for RapiCover was to make it easy to deploy into any development environment, whether it is highly customized, extremely complex or a legacy system.

The two key factors to consider in a deployment of a coverage tool are build system integration and coverage data collection.
[Figure 1: Data collection alternatives for RapiCover. Coverage data flows from the embedded target (CPU and RAM buffer) to the RapiCover coverage data set on the host via an RTBx or logic analyser connected to an I/O port, a debugger using Nexus/ETM trace, a simulator, or a network connection such as Ethernet.]


» Build system integration. RapiCover is designed to work with any combination of compiler (C, C++ or Ada), processor and real-time operating system (RTOS). Its use of command-line tools and the ability to choose between two alternative strategies for integrating RapiCover into pre-existing build systems ensures a seamless integration.

» Coverage data collection. RapiCover is designed with the flexibility to handle data from a wide variety of possible sources. This flexibility means that when creating an integration with a specific target, you can select the most convenient collection mechanism, including legacy approaches such as CodeTEST probes. Figure 1 shows alternative data collection approaches.

To enable a rapid, high-impact integration into your development environment, Rapita Systems provides the option of a target-integration service. In this service, Rapita Systems' engineers will work with your team to establish an optimal integration into your development environment. This integration will be consistent with Rapita Systems' DO-178B/C tool qualification process, ensuring that tool qualification runs smoothly.

A RapiCover integration is based upon the RVS (Rapita Verification Suite) core toolflow. This makes it easy to extend the integration to support other RVS components such as RapiTime (measurement-based worst-case execution time analysis), RapiTask (visualization of scheduling behavior) or newer developments based upon Rapita Systems' Early Access Program (see Section 4).

3.3 Reduced effort for certification activities

Automatic combination of results from multiple test runs and the ability to justify missing coverage make the preparation of coverage Software Verification Results quicker.

A major driver for the use of code coverage is the need to meet DO-178B/C objectives. In addition to providing options for achieving DO-178B/DO-330 tool qualification, RapiCover also aims to make the process of gathering and presenting code coverage results easier. This is achieved in the following ways:

» Multiple format report export. RapiCover provides you with the ability to browse coverage data using our Eclipse-based viewer and to export the same information to CSV, text or XML, or aligned with the source code.

» Combination of reports from multiple sources. Coverage data is often generated at multiple phases of the test program, for example: unit test, integration test and system test. RapiCover supports the consolidation of this data into a single report.

» Justification of missing coverage. Where legitimate reasons exist that specific parts of the code cannot be executed, RapiCover provides an automated way of justifying this. The summary report shows code that is executed, code that is justified and code that is neither executed nor justified.


[Figure: text export of summary report]

To facilitate your use of RapiCover within a DO-178B/C project, we provide several options for tool qualification:

» Qualification Data. This gives you access to documents necessary to support tool qualification of RapiCover.

» Qualification Kit. In addition to the qualification data, this provides test code and a supporting framework that enables you to generate evidence that RapiCover works correctly on your own system.

» Qualification Service. Engineers from Rapita Systems work with you to apply the RapiCover tests to your system and to develop the necessary qualification arguments for your certification case.

3.4 Discover what RapiCover can do for you

» Contact us to find out more about RapiCover. E: [email protected]

» Request a trial version to experience RapiCover for yourself: http://www.rapitasystems.com/trial

» To keep informed about RapiCover developments (and to receive other technical articles in the area of on-target verification), sign up for our monthly RapiTimes newsletter: http://www.rapitasystems.com/rapita/mailing_list


4. About Rapita Systems

Founded in 2004, Rapita Systems develops on-target embedded software verification solutions for customers around the world in the avionics and automotive electronics industries. Our tools help to reduce the cost of measuring, optimizing and verifying the timing performance and test effectiveness of critical real-time embedded systems.

4.1 RVS

RVS (Rapita Verification Suite) provides a framework for on-target verification for embedded, real-time software. It provides accurate and useful results by observing software running on its actual target hardware. By providing targeted services alongside RVS, Rapita Systems provides a complete solution to customers working in the aerospace and automotive industries.

RVS helps you to verify:

» Software timing performance (RapiTime);
» Structural code coverage (RapiCover);
» Scheduling behavior (RapiTask);
» Other properties (via Rapita Systems' "Early Access Program").

4.2 Early Access Program

We participate in many collaborative research programs with a large variety of organizations. This results in our development of a wide range of advanced technologies in various pre-production stages. Rapita Systems' customers have found access to this technology very useful.

Working with us in our Early Access Program gives you the ability to use our pre-production technology for your specific needs. Access to this technology is normally provided through defined engineering services and gives you the opportunity to influence the development of the technology into a product.

Early Access Program examples: Examples of technologies available in Rapita Systems' Early Access Program include:

» ED4i. Automatic generation of diverse code for reliability.
» RapiCheck. Constraint checking of code running on an embedded target.
» Data dependency tool. Supports the conversion of sequential code for multicore targets.


5. Appendix: overview of code coverage criteria

5.1 Function coverage

Of the coverage levels discussed here, function coverage is the easiest to achieve. It demonstrates whether each function was called in some way during your tests. This level of coverage is a reasonable indicator that the tests have exercised a representative subset of the entire functionality of your system, without guaranteeing that every line of code has been executed during testing. Function coverage can reveal problems with dead code (which is an issue for DO-178B/C) or incomplete requirements-based testing.

It can be difficult to achieve full function coverage when working with generic or configurable components that may contain more functionality than is used by the specific application. When this happens, however, it is relatively easy to review the function behaviour and option selections to justify any omissions in function coverage.

The example below contains three functions: main, activity_a and activity_b.

Function coverage of this program demonstrates:

» the program started;
» the for-loop executed at least once;
» at least one of the two switch-statement cases shown was taken.

Function coverage does not, however, reveal:

» whether both of the two switch-statement cases shown were taken;
» whether activity_a ever returned from within its own loop;
» whether activity_b ran any iterations of its loop.

main.c

    void main(void) {
        ...
        for (i = 0; i < N; i++) {
            switch (msg[i]) {
            case 0:
                activity_a();
                activity_b();
                break;
            case 1:
                activity_b();
                activity_a();
                break;
            }
        }
    }

activity.c

    void activity_a(void) {
        ...
        while (x > 0) {
            if (y < 0) {
                return;
            }
            ...
        }
        ...
        return;
    }

    void activity_b(void) {
        ...
        while (x > 0) {
            ...
        }
        return;
    }


5.2 Call coverage

Call coverage represents a slight increase in complexity over function coverage. The term "call coverage" can actually be used to refer to two slightly different types of coverage:

» Call-pair coverage. A call pair is the combination of a statement in one program unit (typically a procedure, function or method) calling another program unit (the callee). Call-pair coverage shows which of these pairs are exercised by a given set of tests.

» Call site coverage. A call site is the point in the program text from which the call is made. Call site coverage shows whether all such points have been exercised.

The two approaches are equivalent if each caller can only call one program unit, that is, if no caller uses function pointers, dynamic dispatching or any similar method.

It is important to locate particular statements rather than performing the analysis at the level of entire program units, because there could be multiple calls made to a particular unit from another particular unit. The following example shows a program structure containing several call sites and call pairs:

main.c

    void main(void) {
        void (*fp_act)(void);
        ...
        for (i = 0; i < MSG_SIZE; i++) {
            switch (msg[i]) {
            case 0:
                activity_a();
                activity_b();
                break;
            case 1:
                activity_b();
                (*fp_act)();
                break;
            }
        }
    }

activity.c

    void activity_a(void) {
        ...
        while (x > 0) {
            if (y < 0) {
                return;
            }
            ...
        }
        ...
        return;
    }

    void activity_b(void) {
        ...
        while (x > 0) {
            ...
        }
        return;
    }

Support for call-pair coverage: When function pointers are used, detecting which call sites are responsible for calling specific functions is difficult when using an "array of booleans". A side effect of collecting a trace of instrumentation points is that call-pair coverage between function pointers and functions can easily be detected.


5.3 Statement coverage

To achieve statement coverage, it is necessary for each statement in the source code to have been executed by at least one test in the test suite. If a particular statement cannot be covered, it is important to identify why. This may reveal dead code, for example, or it may be code that cannot be traced to a requirement or architectural structure.

Statement coverage is particularly useful when dealing with loops and returns. Consider our example code again:

activity.c

    void activity_a(void) {
        ...
        while (x > 0) {
            if (y < 0) {
                return;
            }
            ...
        }
        ...
        return;
    }

    void activity_b(void) {
        ...
    }

Statement coverage reveals the answers to questions such as:

» Did every branch of the switch-statement get executed at least once?
» Did each while-loop run at least once?
» Did the code within each if-statement run at least once?

In particular, for this program, statement coverage could determine whether testing was sufficient to show that the first return statement in activity_a was executed.


5.4 Decision coverage

Decision coverage criteria assess the ability of a set of tests to adequately exercise the routes through the logic of a program. They are derived solely from the structure of the code.

The code example below contains two decisions. The first governs the while-loop at line 4, and the second is the expression for the if-statement at line 11.

     1  int sin_a_1000 ( int v )
     2  {
     3      int Interpolate_Index = 0;
     4      while ( Sin_Graph[Interpolate_Index+1].x != Sin_Graph_Sentinel )
     5      {
     6          int x1 = Sin_Graph[Interpolate_Index].x;
     7          int y1 = Sin_Graph[Interpolate_Index].y;
     8          int x2 = Sin_Graph[Interpolate_Index+1].x;
     9          int y2 = Sin_Graph[Interpolate_Index+1].y;
    10
    11          if ( v >= x1 && v < x2 )
    12          {
    13              return (y1 + (v - x1) * (y2 - y1) / (x2 - x1));
    14          }
    15          Interpolate_Index++;
    16      }
    17      return Sin_Graph_Default;
    18  }

For decision coverage, the typical criterion is that execution has reached every point of entry and exit in the code, and that each decision in the source code has resulted in each possible outcome (true, false) at least once. For the example code, this would mean:

» entry into function sin_a_1000 reaches line 4;

» function sin_a_1000 has exited on line 13 at least once;

» function sin_a_1000 has exited on line 17 at least once;

» the expression governing the while-loop was true at least once, meaning that there was at least one non-sentinel entry in Sin_Graph and the loop body was executed;

» the expression governing the while-loop was false at least once, meaning that a sentinel entry was found in Sin_Graph and execution skipped to the bottom of the loop;

» the expression governing the if-statement was true at least once, meaning that the lookup value was located within one of the interpolation regions for at least one test;

» the expression governing the if-statement was false at least once, meaning that there is at least one run for which the lookup value was outside at least one region of Sin_Graph.


5.5 Modified condition/decision coverage (MC/DC)

Modified condition/decision coverage (MC/DC) extends decision coverage. Instead of just examining the outcome of each decision, the coverage check also shows that each condition in the source code has resulted in each possible outcome (true, false) at least once, and that each condition in a decision has been shown to independently affect that decision's outcome.

The additional checks for MC/DC for the example program show:

» The value of v >= x1 has been true at least once.

» The value of v >= x1 has been false at least once.

» The value of v < x2 has been true at least once.

» The value of v < x2 has been false at least once.

» The value of v >= x1 independently affected the outcome of the whole expression, meaning that it has taken values of true and false while the value of v < x2 was true.

» The value of v < x2 independently affected the outcome of the whole expression, meaning that it has taken values of true and false while the value of v >= x1 was true.
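As a worked illustration (not part of the original example set), MC/DC for the two-condition decision v >= x1 && v < x2 can be demonstrated with as few as three test vectors, one more than the number of conditions:

» Test 1: v >= x1 true, v < x2 true; decision true.

» Test 2: v >= x1 false, v < x2 true; decision false. Paired with Test 1, this shows that v >= x1 independently affects the outcome.

» Test 3: v >= x1 true, v < x2 false; decision false. Paired with Test 1, this shows that v < x2 independently affects the outcome.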

Rapita Systems Inc., 41131 Vincenti Ct., Novi, MI 48375. Tel (USA): +1 248-957-9801
Rapita Systems Ltd., Atlas House, Osbaldwick Link Road, York, YO10 3JB. Tel (UK/International): +44 (0)1904 413945. Registered in England & Wales: 5011090

Email: [email protected] | Website: www.rapitasystems.com

Document ID: MC-WP-002 Eight top code coverage questions v4LR