Rational Test RealTime
User Guide
Version 2003.06.00
support@rational.com
http://www.rational.com
Legal Notices
©2001-2003, Rational Software Corporation. All rights reserved.
Any reproduction or distribution of this work is expressly prohibited without the
prior written consent of Rational.
Version Number: 2003.06.00
Rational, Rational Software Corporation, the Rational logo, Rational Developer
Network, AnalystStudio, ClearCase, ClearCase Attache, ClearCase MultiSite,
ClearDDTS, ClearGuide, ClearQuest, ClearTrack, Connexis, e-Development
Accelerators, DDTS, Object Testing, Object-Oriented Recording, ObjecTime,
ObjecTime Design Logo, Objectory, PerformanceStudio, PureCoverage, PureDDTS,
PureLink, Purify, Quantify, Rational Apex, Rational CRC, Rational Process
Workbench, Rational Rose, Rational Suite, Rational Suite ContentStudio, Rational
Summit, Rational Visual Test, Rational Unified Process, RUP, RequisitePro,
ScriptAssure, SiteCheck, SiteLoad, SoDA, TestFactory, TestFoundation, TestStudio,
TestMate, VADS, and XDE, among others, are trademarks or registered trademarks of
Rational Software Corporation in the United States and/or in other countries. All
other names are used for identification purposes only, and are trademarks or
registered trademarks of their respective companies.
Portions covered by U.S. Patent Nos. 5,193,180 and 5,335,344 and 5,535,329 and
5,574,898 and 5,649,200 and 5,675,802 and 5,754,760 and 5,835,701 and 6,049,666 and
6,126,329 and 6,167,534 and 6,206,584. Additional U.S. Patents and International
Patents pending.
U.S. GOVERNMENT RIGHTS. All Rational software products provided to the U.S.
Government are provided and licensed as commercial software, subject to the
applicable license agreement. All such products provided to the U.S. Government
pursuant to solicitations issued prior to December 1, 1995 are provided with
“Restricted Rights” as provided for in FAR, 48 CFR 52.227-14 (JUNE 1987) or DFARS,
48 CFR 252.227-7013 (OCT 1988), as applicable.
WARRANTY DISCLAIMER. This document and its associated software may be used
as stated in the underlying license agreement. Except as explicitly stated otherwise in
such license agreement, and except to the extent prohibited or limited by law from
jurisdiction to jurisdiction, Rational Software Corporation expressly disclaims all
other warranties, express or implied, with respect to the media and software product
and its documentation, including without limitation, the warranties of
merchantability, non-infringement, title or fitness for a particular purpose or arising
from a course of dealing, usage or trade practice, and any warranty against
interference with Licensee’s quiet enjoyment of the product.
Microsoft, the Microsoft logo, Active Accessibility, Active Client, Active Desktop,
Active Directory, ActiveMovie, Active Platform, ActiveStore, ActiveSync, ActiveX,
Ask Maxwell, Authenticode, AutoSum, BackOffice, the BackOffice logo, bCentral,
BizTalk, Bookshelf, ClearType, CodeView, DataTips, Developer Studio, Direct3D,
DirectAnimation, DirectDraw, DirectInput, DirectX, DirectXJ, DoubleSpace,
DriveSpace, FrontPage, Funstone, Genuine Microsoft Products logo, IntelliEye, the
IntelliEye logo, IntelliMirror, IntelliSense, J/Direct, JScript, LineShare, Liquid Motion,
Mapbase, MapManager, MapPoint, MapVision, Microsoft Agent logo, the Microsoft
eMbedded Visual Tools logo, the Microsoft Internet Explorer logo, the Microsoft
Office Compatible logo, Microsoft Press, the Microsoft Press logo, Microsoft
QuickBasic, MS-DOS, MSDN, NetMeeting, NetShow, the Office logo, Outlook,
PhotoDraw, PivotChart, PivotTable, PowerPoint, QuickAssembler, QuickShelf,
RelayOne, Rushmore, SharePoint, SourceSafe, TipWizard, V-Chat, VideoFlash,
Visual Basic, the Visual Basic logo, Visual C++, Visual C#, Visual FoxPro, Visual
InterDev, Visual J++, Visual SourceSafe, Visual Studio, the Visual Studio logo, Vizact,
WebBot, WebPIP, Win32, Win32s, Win64, Windows, the Windows CE logo, the
Windows logo, Windows NT, the Windows Start logo, and XENIX, are either
trademarks or registered trademarks of Microsoft Corporation in the United States
and/or in other countries.
Sun, Sun Microsystems, the Sun Logo, Ultra, AnswerBook 2, medialib, OpenBoot,
Solaris, Java, Java 3D, ShowMe TV, SunForum, SunVTS, SunFDDI, StarOffice, and
SunPCi, among others, are trademarks or registered trademarks of Sun Microsystems,
Inc. in the U.S. and other countries.
Purify is licensed under Sun Microsystems, Inc., U.S. Patent No. 5,404,499.
Licensee shall not incorporate any GLOBEtrotter software (FLEXlm libraries and
utilities) into any product or application the primary purpose of which is software
license management.
BasicScript is a registered trademark of Summit Software, Inc.
Design Patterns: Elements of Reusable Object-Oriented Software, by Erich Gamma,
Richard Helm, Ralph Johnson and John Vlissides. Copyright © 1995 by
Addison-Wesley Publishing Company, Inc. All rights reserved.
Additional legal notices are described in the legal_information.html file that is
included in your Rational software installation.
Preface
Audience
Contacting Rational Technical Publications
Other Resources
Customer Support
Product Overview
Source Code Insertion
Estimating Instrumentation Overhead
Reducing Instrumentation Overhead
Generating SCI Dumps
Target Deployment Ports
Key Capabilities and Benefits
Downloading Target Deployment Ports
Obtaining New Target Deployment Ports
Launching the TDP Editor
Reconfiguring a TDP for a Compiler or JDK
Unified Modeling Language
Model Elements and Relationships in Sequence Diagrams
Activations
Classifier Roles
Destruction Markers
Lifelines
Messages
Objects
Stimuli
Actions
Exceptions
Actors
Loops
Synchronizations
Notes
Upgrading from a Previous Version
Automated Testing
Command Line Component Testing for C, Ada and C++
Command Line Component Testing for Java
Command Line System Testing for C
Command Line Examples
Runtime Analysis using the Instrumentation Launcher
Calculating Metrics
Command Line Tasks
Setting Environment Variables
Preparing an Options Header File
Preparing a Products Header File
Instrumenting and Compiling the Source Code
Compiling the TDP Library
Compiling the Test Harness
Linking the Application
Running the Test Harness or Application
Splitting the Trace Dump File
Troubleshooting Command Line Usage
Glossary
Index
Preface
Audience
This guide is intended for Rational software users who are using Test RealTime, such
as application developers, quality assurance managers, and quality assurance testers.
You should be familiar with the selected Windows or UNIX platform as well as both
the native and target development environments.
Test RealTime - User Guide
Other Resources
All manuals are available online, either in HTML or PDF format. The online manuals
are on the CD and are installed with the product.
For the most recent updates to the product, including documentation, please visit the
Product Support section of the Web site at:
http://www.rational.com/products/testrt/index.jsp
Documentation updates and printable PDF versions of Rational documentation can
also be downloaded from:
http://www.rational.com/support/documentation/index.jsp
For more information about Rational Software technical publications, see:
http://www.rational.com/documentation.
For more information on training opportunities, see the Rational University Web site:
http://www.rational.com/university.
Customer Support
Before contacting Rational Customer Support, review the tips, advice, and answers
to frequently asked questions in Rational's Solutions database:
http://solutions.rational.com/solutions
Choose the product from the list and enter a keyword that most represents your
problem. For example, to obtain all the documents that talk about stubs taking
parameters of type “char”, enter "stub char". This database is updated with more
than 20 documents each month.
When contacting Rational Customer Support, please be prepared to supply the
following information:
• About you:
Name, title, e-mail address, telephone number
• About your company:
Company name and company address
• About the product:
Product name and version number (from the Help menu, select About).
What components of the product you are using
Sometimes Rational technical support engineers will ask you to fax information to
help them diagnose problems. You can also report a technical problem by fax if you
prefer. Please mark faxes "Attention: Customer Support" and add your fax number to
the information requested above.
North America:
Rational Software, 18880 Homestead Road, Cupertino, CA 95014
voice: (800) 433-5444
fax: (408) 863-4001
e-mail: support@rational.com

Europe, Middle East, and Africa:
Rational Software, Beechavenue 30, 1119 PV Schiphol-Rijk, The Netherlands
Product Overview
For more information about Rational Test RealTime, visit the product Web site at:
http://www.rational.com/products/testrt/index.jsp
I/O is performed either at the end of the execution or when the end user decides
(see Coverage Snapshots in the documentation).
In summary, Hit Count mode and modified/multiple conditions produce the greatest
data and execution-time overhead. In most cases, you can select each coverage type
independently and use Pass mode by default to reduce this overhead. The source
code can also be partially instrumented.
maximum number of bytes, which is the sum of the sizes of all tracked blocks in the
queue.
Explicit Dump
Code Coverage, Memory Profiling, and Performance Profiling allow you to invoke
the TDP dump function explicitly by inserting a call to the _ATCPQ_DUMP(1)
macro (the parameter 1 is ignored).
Explicit dumps should not be placed in the main loop of the application. The best
location for an explicit dump call is a secondary function, called, for example, when
the user sends a specific event to the application.
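The placement described above can be sketched as follows. This is a hedged
illustration: the function name is invented, and _ATCPQ_DUMP is normally supplied
by the SCI-instrumented build, so a counting stub is defined here only to make the
fragment self-contained.

```cpp
// _ATCPQ_DUMP is normally provided by the SCI-instrumented build;
// the counter stub below exists only so this sketch compiles standalone.
static int g_dump_count = 0;
#ifndef _ATCPQ_DUMP
#define _ATCPQ_DUMP(ignored) (++g_dump_count)
#endif

// Hypothetical secondary function: call it from an event handler,
// never from the application's main loop.
void on_dump_request() {
    _ATCPQ_DUMP(1); // the parameter 1 is ignored
}
```

In a real build, the macro expands to the Target Deployment Port dump routine
rather than the stub shown here.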
The explicit dump method is sometimes incompatible with watchdog constraints. If
such incompatibilities occur, you must:
• Deactivate any hardware or software watchdog interruptions
• Acknowledge the watchdog during the dump process, by adding a specific call
to the Data Retrieval customization point of the TDP.
Dump on Signal
Code Coverage allows you to dump the traces at any point in the source code by
using the ATC_SIGNAL_DUMP environment variable.
When the signal specified by ATC_SIGNAL_DUMP is received, the Target
Deployment Port function dumps the trace data and resets the signal so that the same
signal can be used to perform several trace dumps.
Before starting your tests, set ATC_SIGNAL_DUMP to the number of the signal that
is to trigger the trace dump.
The signal must be a redirectable signal, such as SIGUSR1 or SIGINT.
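For example, on a Linux x86 host where SIGUSR1 is signal number 10 (signal
numbers vary by platform, and the instrumented application here is a stand-in),
the setup might look like this:

```shell
# SIGUSR1 is signal number 10 on Linux x86; check `kill -l` on your platform.
export ATC_SIGNAL_DUMP=10

# Stand-in for the instrumented application:
sleep 2 &
APP_PID=$!

# Each SIGUSR1 sent to the process triggers one trace dump;
# the TDP re-arms the signal so later signals dump again.
kill -USR1 "$APP_PID"
```

Set the variable before starting your tests so the Target Deployment Port can
install its handler at startup.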
Instrumentor Snapshot
The Instrumentor snapshot option enables you to specify the functions of your
application that dump the trace information on entry, return, or call.
In snapshot mode, the Runtime Tracing feature starts dumping messages only if the
Partial Message Dump setting is activated. Code Coverage, Memory Profiling and
Performance Profiling features all dump their internal trace data.
Alternatively, from the Help menu, select Download Target Deployment Ports.
Downloaded TDPs can be freely used and modified with the TDP Editor.
Activations
An activation (also known as a focus of control) is a notation that can appear on a
lifeline to indicate the time during which an instance (an actor instance, object, or
classifier role) is active. An active instance is performing an action, such as executing
an operation or a subordinate operation. The top of the activation represents the time
at which the activation begins, and the bottom represents the time at which the
activation ends.
For example, in a sequence diagram for a "Place Online Order" interaction, there are
lifelines for a ":Cart" object and ":Order" object. An "updateTotal" message points
from the ":Order" object to the ":Cart" object. Each lifeline has an activation to indicate
how long it is active because of the "updateTotal" message.
Shape
An activation appears as a thin rectangle on a lifeline. You can stack activations to
indicate nested stack frames in a calling sequence.
Using Activations
Activations can appear on your sequence diagrams to represent the following:
• On lifelines depicting instances (actors, classifier roles, or objects), an activation
typically appears as the result of a message to indicate the time during which an
instance is active.
• On lifelines involved in complex interactions, nested activations (also known as
stacked activations or nested focuses of control) are displayed to indicate nested
stack frames in a calling sequence, such as those that happen during recursive
calls.
• On lifelines depicting concurrent operations, the entire lifeline may appear as an
activation (thin rectangles) instead of dashed lines.
Naming Conventions
An activation is usually identified by the incoming message that initiates it.
However, you may add text labels that identify activations either next to the
activation or in the left margin of the diagram.
Classifier Roles
A classifier role is a model element that describes a specific role played by a classifier
participating in a collaboration without specifying an exact instance of a classifier. A
classifier role is neither a class nor an object. Instead, it is a model element that
specifies the kind of object that must ultimately fulfill the role in the collaboration.
The classifier role limits the kinds of classifier that can be used in the role by
referencing a base classifier. This reference identifies the operations and attributes
that an instance of a classifier will need in order to fulfill its responsibilities in the
collaboration.
Classifier roles are commonly used in collaborations that represent patterns. For
example, a subject-observer pattern may be used in a system. One classifier role
would represent the subject, and one would represent the observer. Each role would
reference a base class that identifies the attributes and operations that are needed to
participate in the subject-observer collaboration. When you use the pattern in the
system, any class that has the specified operations and behaviors can fill the role.
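The kind of base classes a Subject/Observer classifier-role pair might reference can
be sketched as below. This is illustrative only; the class and operation names are
invented for this example and are not taken from the product.

```cpp
#include <vector>

// The "Observer" role requires one operation: update().
class Observer {
public:
    virtual ~Observer() = default;
    virtual void update() = 0;
};

// The "Subject" role requires the ability to attach observers and notify them.
class Subject {
public:
    void attach(Observer* o) { observers_.push_back(o); }
    void notify() {
        for (Observer* o : observers_) o->update(); // the "Update" message
    }
private:
    std::vector<Observer*> observers_;
};

// Any class providing update() can fill the Observer role:
class StockTicker : public Observer {
public:
    void update() override { ++refreshes; }
    int refreshes = 0;
};
```

In the collaboration, StockTicker (or any other class with an update() operation)
satisfies the base classifier referenced by the Observer role.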
Shape
A classifier role appears as a rectangle. Its name is prefixed with a slash and is not
underlined. In sequence diagrams, a lifeline (a dashed, vertical line) is attached to the
bottom of a classifier role to represent its life over a period of time. For details about
lifelines, see Lifelines.
Classifier Role
Classifier Role with Lifeline
Naming Conventions
The name of a classifier role consists of a role name and base class name. You can
omit one of the names. The following table identifies the variations of the naming
convention.
Destruction Markers
A destruction marker (also known as a termination symbol) is a notation that can
appear on a lifeline to indicate that an instance (object or classifier role) has been
destroyed. Usually, the destruction of an object results in the memory occupied by
the data members of the object being freed.
For example, when a customer exits the Web site for an e-commerce application, the
":Cart" object that held information about the customer's activities is destroyed, and
the memory that it used is freed. The destruction of the ":Cart" object can be shown in
a sequence diagram by adding a destruction marker on the ":Cart" object's lifeline.
Shape
A destruction marker appears as an X at the end of a lifeline.
Naming Conventions
Destruction markers do not have names.
Lifelines
A lifeline is a notation that represents the existence of an object or classifier role over
a period of time. Lifelines appear only in sequence diagrams, where they show how
each instance (object or classifier role) participates in the interaction.
For example, a "Place Online Order" interaction in an e-commerce application
includes a number of lifelines in a sequence diagram, including lifelines for a ":Cart"
object, ":OnlineOrder" object, and ":CheckoutCart" object. As the interaction is
developed, stimuli are added between the lifelines.
Shape
A lifeline appears as a vertical dashed line in a sequence diagram.
Lifeline for an Object
Lifeline for a Classifier Role
Using Lifelines
When a classifier role or object appears in a sequence diagram, it will automatically
have a lifeline. Lifelines indicate the following:
• Creation – If an instance is created during the interaction, its lifeline starts at the
level of the message or stimulus that creates it; otherwise, its lifeline starts at the
top of the diagram to indicate that it existed prior to the interaction.
• Communication – Messages or stimuli between instances are illustrated with
arrows. A message or stimulus is drawn with its end on the lifeline of the
instance that sends it and its arrowhead on the lifeline of the instance that
receives it.
• Activity – The time during which an instance is active (either executing an
operation directly or through a subordinate operation) can be shown with
activations.
• Destruction – If an instance is destroyed during the interaction, its lifeline ends
at the level of the message or stimulus that destroys it, and a destruction marker
appears; otherwise, its lifeline extends beyond the final message or stimulus to
indicate that it exists during the entire interaction.
Naming Conventions
A lifeline has the name of an object or classifier role. For details, see Objects or
Classifier Roles.
Messages
A message is a model element that specifies a communication between classifier roles
and usually indicates that an activity will follow. The types of communications that
messages model include calls to operations, signals to classifier roles, the creation of
classifier roles, and the destruction of classifier roles. The receipt of a message is an
instance of an event.
For example, in the observer pattern, the instance that is the subject sends an
"Update" message to instances that are observing it. You can illustrate this behavior
by adding "Subject" and "Observer" classifier roles and then adding an "Update"
message between them.
Shape
A message appears as a line with an arrow. The direction of the arrow indicates the
direction in which the message is sent. In a sequence diagram, messages usually
connect two classifier role lifelines.
Types of Messages
Different types of messages can be used to model different flows of control.
Using Messages
Messages can appear in a sequence diagram to represent the communications
exchanged between classifier roles during dynamic interactions.
Note Both messages and stimuli are supported. Stimuli are added to
collaboration instances, and messages are added to collaborations. For details
about stimuli, see Stimuli.
The messages in a model are usually contained in collaborations and usually appear
in sequence diagrams.
Naming Conventions
Messages can be identified by a name or operation signature.
Objects
An object is a model element that represents an instance of a class. While a class
represents an abstraction of a concept or thing, an object represents an actual entity.
An object has a well-defined boundary and is meaningful in the application. Objects
have three characteristics: state, behavior, and identity. State is a condition in which
the object may exist, and it usually changes over time. The state is implemented with
a set of attributes. Behavior determines how an object responds to requests from
other objects. Behavior is implemented by a set of operations. Identity makes every
object unique. The unique identity lets you differentiate between multiple instances
of a class if each has the same state.
The behaviors of objects can be modeled in sequence and activity diagrams. In
sequence diagrams, you can display how instances of different classes interact with
each other to accomplish a task. In activity diagrams, you can show how one or more
instances of an object change state during an activity. For example, an e-commerce
application may include a "Cart" class. An instance of this class, such as
"cart100:Cart", is created for each customer visit. In a sequence diagram, you can illustrate the
stimuli, such as "addItem( )," that the "cart100:Cart" object exchanges with other
objects. In an activity diagram, you can illustrate the states of the "cart100:Cart"
object, such as empty or full, during an activity such as a user browsing the online
catalog.
Shape
In sequence and activity diagrams, an object appears as a rectangle with its name
underlined. In sequence diagrams, a lifeline (a dashed, vertical line) is attached to the
bottom of an object to represent the existence of the object over a period of time. For
details about lifelines, see Lifelines.
Object
Object with Lifeline
Types of Objects
The following table identifies three types of objects.
Using Objects
Objects can appear in a sequence diagram to represent concrete and prototypical
instances. A concrete instance represents an actual person or thing in the real world.
For example, a concrete instance of a "Customer" class would represent an actual
customer. A prototypical instance represents an example person or thing. For
example, a prototypical instance of a "Customer" class would contain the data that a
typical customer would provide.
Naming Conventions
Each object must have a unique name. A full object name includes an object name,
role name, and class name. You may use any combination of these three parts of the
object name. The following table identifies the variations of object names.
Stimuli
A stimulus is a model element that represents a communication between objects in a
sequence diagram and usually indicates that an activity will follow. The types of
communications that stimuli model include calls to operations, signals to objects, the
creation of objects, and the destruction of objects. The receipt of a stimulus is an
instance of an event.
Shape
A stimulus appears as a line with an arrow. The direction of the arrow indicates the
direction in which the stimulus is sent. In a sequence diagram, a stimulus usually
connects two object lifelines.
Types of Stimuli
Naming Conventions
Stimuli can have either names or signatures.
Actions
An action is represented as shown below:
Exceptions
When tracing C++ exceptions, Runtime Tracing locates the throw point of the
exception (the throw keyword in C++) as well as its catch point.
Exceptions are displayed as a slanted red line, as shown in the example below,
generated by Runtime Tracing.
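A minimal C++ fragment of the kind Runtime Tracing would annotate is sketched
below; the function names and the message text are invented for illustration.

```cpp
#include <stdexcept>
#include <string>

// Runtime Tracing would locate the `throw` below as the throw point
// and the matching handler as the catch point on the sequence diagram.
int parse_positive(int value) {
    if (value < 0)
        throw std::invalid_argument("negative value"); // throw point
    return value;
}

std::string describe(int value) {
    try {
        parse_positive(value);
        return "ok";
    } catch (const std::invalid_argument&) {           // catch point
        return "caught";
    }
}
```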
Actors
An actor is a model element that describes a role that a user plays when interacting
with the system being modeled. Actors, by definition, are external to the system.
Although an actor typically represents a human user, it can also represent an
organization, system, or machine that interacts with the system. An actor can
correspond to multiple real users, and a single user may play the role of multiple
actors.
Shape
In models depicting software applications, actors represent the users of the system.
Examples include end users, external computer systems, and system administrators.
Naming Conventions
Each actor has a unique name that describes the role the user plays when interacting
with the system.
Loops
Loop detection simplifies UML sequence diagrams by summarizing repeating traces
into a loop symbol.
Note Loops are a Rational extension to UML Sequence Diagrams and are not
supported by the UML standard.
A loop is represented as shown below:
A tag displays the name of the loop and the number of executions.
The loop is linked to its source file. In the UML/SD Viewer, click a loop to open the
Text Editor at the corresponding line in the source code.
5. Click OK.
Synchronizations
Synchronizations are an extension to the UML standard that apply only when using
the split trace file feature of Runtime Tracing. They are used to show that all instance
lifelines are synchronized at the beginning and end of each split TDF file.
Shape
A synchronization is represented as shown below:
When the Split Trace capability is enabled, the UML/SD Viewer displays the list of
TDF files generated in the UML/SD Viewer toolbar.
At the beginning of each diagram, before the Synchronization, the Viewer displays
the context of the previous file.
Another synchronization is displayed at the end of each file, to ensure that all
instance lifelines are together before viewing the next file.
Notes
Notes appear as shown below and are centered on, and attached to, the element to
which they apply:
Runtime Analysis
The runtime analysis feature set of Test RealTime allows you to closely monitor the
behavior of your application for debugging and validation purposes. Each feature
instruments the source code, providing real-time analysis of the application while it
is running, either on a native or an embedded target platform.
3. Select the source files under analysis in the wizard to create the application
node.
4. Select the runtime analysis features to be applied to the application in the Build
options.
5. Use the Project Explorer to set up the test campaign and add any additional
runtime analysis or test nodes.
6. Run the application node to build and execute the instrumented application.
7. View and analyze the generated analysis and profiling reports.
The runtime analysis options can be run within a test by simply adding the runtime
analysis setting to an existing test node.
Code Coverage
Source-code coverage consists of identifying which portions of a program are or are
not executed during a given test case. Source-code coverage is recognized as one of
the most effective ways of assessing the efficiency of the test cases applied to a
software application.
The Code Coverage feature brings efficient, easy-to-use, and robust coverage
technologies to real-time embedded systems. Code Coverage provides a completely
automated and proven solution for C, C++, Ada, and Java software coverage based
on optimized source-code instrumentation.
Information Modes
The Information Mode is the method used by Code Coverage to encode the trace
output. This has a direct impact on the size of the trace file as well as on CPU
overhead.
You can change the information mode used by Code Coverage in the Coverage Type
settings. There are three information modes:
• Default mode
• Compact mode
• Hit Count mode
Default Mode
When using Default (Pass) mode, each branch requires one byte of memory. This
offers the best compromise between code size and speed overhead.
Compact Mode
The Compact mode is functionally equivalent to Pass mode, except that each branch
needs only one bit of storage instead of one byte. This implies a smaller requirement
for data storage in memory, but produces a noticeable increase in code size (shift/bits
masks) and execution time.
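The storage trade-off between the two modes can be sketched in C. The model below is purely illustrative (it is not the product's actual runtime): each instrumented branch is given an index, Default mode marks one byte per branch, and Compact mode packs eight branches into each byte.

```c
#include <assert.h>

#define N_BRANCHES 16

/* Default (Pass) mode: one byte per branch.
   A hit is a single store, with no bit arithmetic. */
static unsigned char pass_trace[N_BRANCHES];

static void mark_pass(int branch) {
    pass_trace[branch] = 1;
}

/* Compact mode: one bit per branch, so the trace is 8 times smaller,
   at the cost of a shift and a mask on every hit. */
static unsigned char compact_trace[(N_BRANCHES + 7) / 8];

static void mark_compact(int branch) {
    compact_trace[branch / 8] |= (unsigned char)(1u << (branch % 8));
}
```

Both schemes record only whether a branch was hit, not how often; counting the number of executions per branch is the role of Hit Count mode.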
Coverage Types
The Code Coverage feature provides the capability of reporting coverage of various
source-code units and branches, depending on the selected coverage type.
By default, Code Coverage implements full coverage analysis, meaning that all
coverage types are instrumented by source code insertion (SCI). However, in some
cases, you might want to reduce the scope of the Code Coverage report, for example
to reduce the overhead generated by SCI.
Branches
Test RealTime - User Guide
When referring to the Code Coverage feature, a branch denotes a generic unit of
enumeration. For each branch, you specify the coverage type. Code Coverage
instruments each branch when you compile the source under test.
Coverage Levels
The following sections provide details of each coverage type as used in each
language supported by the product.
Ada Coverage
Block Coverage
When analyzing Ada source code, Code Coverage can provide the following block
coverage types:
• Statement Blocks
• Statement Blocks and Decisions
• Statement Blocks, Decisions, and Loops
Statement Blocks (or Simple Blocks)
Simple blocks are the main blocks within units as well as blocks introduced by
decisions, such as:
• then and else (elsif) of an if
• loop...end loop blocks of a for or while loop
• exit when...end loop or exit when blocks at the end of an instruction sequence
• when blocks of a case
• when blocks of exception processing blocks
• do...end block of the accept instruction
• or and else blocks of the select instruction
• begin...exception blocks of a declare block that contains an exception-
processing block
• select...then abort blocks of an ATC statement
• sequence blocks: instructions found after a potentially terminal statement.
A simple block constitutes one branch. Each unit contains at least one simple block
corresponding to its body, except packages that do not contain an initialization block.
Decision Coverage (Implicit Blocks)
An if statement without an else statement introduces an implicit block.
-- Function power_10
-- -block=decision or -block=implicit
function power_10 ( value, max : in integer ) return integer is
ret : integer := value ;
begin
if ( value = 0 ) then
return 0;
-- implicit else block
end if ;
for i in 0..9 loop
if ( ( max / 10 ) < ret ) then
ret := ret * 10 ;
else
ret := max ;
end if ;
end loop ;
return ret;
end ;
• Control never transferred: the trigger event never appears, so the sequence of
control starts and reaches the end of the then abort/end select block, and is
never transferred to the select/then abort block.
In the following example, you need to execute the compute_done function several
times to obtain full coverage of the three ATC blocks induced by the select statement:
function compute_done return boolean is
Code Coverage blocks are attached to the Select keyword of the ATC statement.
Call Coverage
When analyzing Ada source code, Code Coverage can provide coverage of function,
procedure, or entry calls.
Code Coverage defines as many branches as it encounters function, procedure, or
entry calls.
This type of coverage ensures that all the call interfaces can be shown to have been
exercised for each Ada unit (procedure, function, or entry). This is sometimes a
pass/fail criterion in the software integration test phase.
Condition Coverage
Basic Conditions
Basic conditions are operands of the logical operators (standard or derived, but not
overloaded) or, xor, and, not, or else, and and then, wherever they appear in Ada
units. They are also the conditions of if, while, exit when, when of an entry body, and
when of a select statement, even if these conditions do not contain logical operators.
For each of these basic conditions, two branches are defined: the sub-condition is
true and the sub-condition is false.
A basic condition is also defined for each when of a case statement, and for each sub-
expression of a compound when; that is, when A | B defines two branches.
-- power_of_10 function
-- -cond
Function power_of_10 ( value, max : in integer ) return integer
is
result : integer ;
Begin
if value = 0 then
return 0;
end if ;
result := value ;
for i in 0..9 loop
if ( max > 0 ) and then (( max / value ) < result ) then
result := result * value;
else
result := max ;
end if ;
end loop;
return result ;
end ; -- there are 3 basic conditions (and 6 branches).
-- Near_Color function
Function Near_Color ( color : in ColorType ) return ColorType
is
Begin
case color is
when WHITE | LIGHT_GRAY => return WHITE ;
when RED | LIGHT_RED .. PURPLE => return RED ;
end case ;
End ; -- there are 4 basic conditions (and 4 branches).
Two branches are enumerated for each boolean basic condition, and one per case
basic condition.
Forced Conditions
A forced condition is a multiple condition in which any occurrence of the or else
operator is replaced with the or operator, and the and then operator is replaced with
the and operator. This modification forces the evaluation of the second member of
these operators. You can use this coverage type after modified conditions have been
reached to ensure that all the contained basic conditions have been evaluated. With
this coverage type, you can be sure that only the considered basic condition value
changes between the two condition vectors.
-- Original source :                      -- -cond=forceevaluation
if ( a and then b ) or else c then
-- Modified source :
if ( a and b ) or c then
Note This replacement modifies the code semantics. You need to verify that
using this coverage type does not modify the behavior of the software.
Example
procedure P ( A : in tAccess ) is
begin
if A /= NULL and then A.value > 0  -- the evaluation of A.value will raise an
                                   -- exception when using forced conditions
                                   -- if the A pointer is null
then
A.value := A.value - 1;
end if;
end P;
Modified Conditions
A modified condition is defined for each basic condition enclosed in a composition of
logical operators (standard or derived, but not overloaded). It aims to prove that this
condition affects the result of the enclosing composition. To do that, you must find a
set of values for the other conditions such that changing the value of this condition
alone changes the result of the entire expression.
Because compound conditions list all possible cases, you must find the two cases that
can result in changes to the entire expression. The modified condition is covered only
if the two compound conditions are covered.
-- State_Control function
-- -cond=modified
Function State_Control return integer
is
Begin
if ( ( flag_running and then ( process_count > 10 ) )
or else flag_stopped )
then
return VALID_STATE ;
else
return INVALID_STATE ;
end if ;
End ;
-- There are 3 basic conditions, 5 compound conditions
-- and 3 modified conditions :
-- flag_running : TTX=T and FXF=F
-- process_count > 10 : TTX=T and TFF=F
-- flag_stopped : TFT=T and TFF=F, or FXT=T and FXF=F
-- 4 test cases are enough to cover all the modified conditions :
-- TTX=T
-- FXF=F
-- TFF=F
-- TFT=T or FXT=T
Note You can associate a modified condition with more than one case, as
shown in this example for flag_stopped. In this example, the modified
condition is covered if the two compound conditions of at least one of these
cases are covered.
Code Coverage calculates cases for each modified condition.
There are as many modified conditions as there are Boolean basic conditions
appearing in a composition of logical operators (standard or derived, but not
overloaded).
Multiple Conditions
A multiple condition is one of all the available cases of logical operators (standard or
derived, but not overloaded) wherever it appears in an Ada unit. Multiple
conditions are defined by the concurrent values of the enclosed basic boolean
conditions.
A multiple condition is noted with a set of T, F, or X letters, which means that the
corresponding basic condition evaluates to true or false, or it was not evaluated,
respectively. Such a set of letters is called a condition vector. The right operand of or
else or and then logical operators is not evaluated if the evaluation of the left operand
determines the result of the entire expression.
-- State_Control Function
-- -cond=compound
Function State_Control return integer
is
Begin
if ( ( flag_running and then ( process_count > 10 ) )
or else flag_stopped )
then
return VALID_STATE ;
else
return INVALID_STATE ;
end if ;
End ;
-- There are 3 basic conditions
-- and 5 compound conditions :
-- TTX=T <=> ((T and then T) or else X ) = T
-- TFT=T
-- TFF=F
-- FXT=T
-- FXF=F
Code Coverage calculates every available case for each composition.
The number of enumerated branches is the number of distinct available cases for each
composition of logical operators (standard or derived, but not overloaded).
Unit Coverage
Unit Entries
Unit entries determine which units are executed and/or evaluated.
-- Function factorial
-- -proc
function factorial ( a : in integer ) return integer is
begin
if ( a > 0 ) then
return a * factorial ( a - 1 );
else
return 1;
end if;
end factorial ;
One branch is defined for each defined and instrumented unit. In the case of a
package, the unit entry only exists if the package body contains the begin/end
instruction block.
For Protected units, no unit entry is defined because this kind of unit does not have
any statement blocks.
Unit Entries and Exits
-- -proc=ret
function factorial ( a : in integer ) return integer is
begin
if ( a > 0 ) then
return a * factorial ( a - 1 );
else
return 1;
end if ;
end factorial ; -- the standard exit is not coverable
-- Procedure divide
procedure divide ( a,b : in integer; c : out integer ) is
begin
if ( b = 0 ) then
text_io.put_line("Division by zero" );
raise CONSTRAINT_ERROR;
end if ;
if ( b = 1 ) then
c := a;
return;
end if ;
c := a / b;
exception
when PROGRAM_ERROR => null ;
end divide ;
For Protected units, no exit is defined because this kind of unit does not have any
statement blocks.
In general, at least two branches per unit are defined; however, in some cases the
coding may be such that:
• There are no unit entries or exits (a package without an instruction block
(begin/end), protected units case).
• There is only a unit entry (an infinite loop in which the exit from the task cannot
be covered and therefore the exit from the unit is not defined).
The entry is always numbered if it exists, and the exit is numbered if it is coverable.
If the exit is not coverable, it is either preceded by a terminal instruction containing
return or raise instructions, or preceded by an infinite loop.
A raise is considered to be terminal for a unit if no processing block for this exception
was found in the unit.
Link Files
Link files are the library management system used for Ada Coverage. These libraries
describe all the Ada compilation units contained in the compiler sources, the
predefined Ada environment, and the source files of your project. For the Ada
Coverage analyzer to correctly analyze your source code, you must use link files
when using Code Coverage in Ada.
You can include a link file within another link file, which is an easy way to manage
your source code.
Link File Syntax
Link files have a line-by-line syntax. Comments start with a double hyphen (--), and
end at the end of the line. Lines can be empty.
There are two types of configuration lines:
• Link file inclusion: The link filename can be relative to the link file that contains
this line or absolute.
<link filename> LINK
• Compilation unit description: The source filename is the file containing the
described compilation unit (absolute or relative to the link filename). The full
unit name is the Ada full unit name (beware of separated units, or child units).
<source filename> <full unit name> <type> [ada83]
• If you use the -STDLINK command line option, the specified standard link file
is loaded first. See the Rational Test RealTime Reference Manual for more
information.
• The link file specified by the ATTOLCOV_ADALINK environment variable is
loaded.
• The link files specified by the -Link option are loaded.
Once these link files are loaded, you can start analyzing the files to instrument.
Loading A Permanent Link File
You can ask Code Coverage to load the link file at each execution. To do that, set the
environment variable ATTOLCOV_ADALINK to the link filenames, separated by ':'
on a UNIX system or ';' on Windows. For example:
ATTOLCOV_ADALINK="compiler.alk:/projects/myproject/myproject.alk"
A Link file specified on the command line is loaded after the link file specified by this
environment variable.
Additional Statements
Terminal Statements
An Ada statement is terminal if it transfers control of the program anywhere other
than to a sequence (return, goto, raise, exit).
By extension, a decision statement (if, case) is also terminal if all its branches are
terminal (i.e., its then and else blocks and its non-empty when blocks all contain a
terminal instruction). An if statement without an else statement is never terminal,
since one of its blocks is empty and therefore transfers control in sequence.
Potentially Terminal Statements
An Ada statement is potentially terminal if it contains a decision choice that transfers
control of the program anywhere other than after it (return, goto, raise, exit).
Non-coverable Statements
An Ada statement is detected as non-coverable if it is not a goto label and it appears
in a sequence after a terminal statement. Non-coverable statements are detected
during instrumentation. A warning is generated for each one, specifying its source
file and line. This is the only action Code Coverage takes for statements that cannot
be covered.
Note Ada units whose purpose is to terminate execution unconditionally are
not evaluated. This means that Code Coverage does not check that procedures
or functions terminate or return.
Similarly, exit conditions for loops are not analyzed statically to determine whether
the loop is infinite. As a result, a for, while or loop/exit when loop is always
considered non-terminal (i.e., able to transfer control in its sequence). This is not
applicable to loop/end loop loops without an exit statement (with or without a
condition), which are terminal.
C Coverage
Block Coverage
When running the Code Coverage feature on C source code, Test RealTime can
provide the following coverage types for code blocks:
• Statement Blocks
• Statement Blocks and Decisions
• Statement Blocks, Decisions, and Loops
Statement Blocks (or Simple Blocks)
Simple blocks are the C function main blocks, as well as the blocks introduced by
decision instructions:
• THEN and ELSE blocks of an IF
• FOR, WHILE and DO ... WHILE blocks
• non-empty blocks introduced by switch case or default statements
• true and false outcomes of ternary expressions (<expr> ? <expr> : <expr>)
• blocks following a potentially terminal statement.
/* Power_of_10 Function */                            /* -block */
int power_of_10 ( int value, int max )
{
int retval = value, i;
if ( value == 0 ) return 0; /* potentially terminal statement */
for ( i = 0; i < 10; i++ ) /* start of a sequence block */
{
retval = ( max / 10 ) < retval ? retval * 10 : max;
}
return retval;
} /* The power_of_10 function has 6 blocks */
/* Near_color function */
ColorType near_color ( ColorType color )
{
switch ( color )
{
case WHITE :
case LIGHT_GRAY :
return WHITE;
case RED :
case PINK :
case BURGUNDY :
return RED;
/* etc ... */
}
} /* The near_color function has at least 3 simple blocks */
Each simple block is a branch. Every C function contains at least one simple block
corresponding to its main body.
Decisions (Implicit Blocks)
Implicit blocks are introduced by an IF statement without an ELSE or a SWITCH
statement without a DEFAULT.
/* Power_of_10 function */
/* -block=decision */
int power_of_10 ( int value, int max )
{
int retval = value, i;
if ( value == 0 ) return 0; else ;
for ( i = 0; i < 10; i++ )
{
retval = ( max / 10 ) < retval ? retval * 10 : max;
}
return retval;
}
/* Near_color function */
ColorType near_color ( ColorType color )
{
switch ( color )
{
case WHITE :
case LIGHT_GRAY :
return WHITE;
case RED :
case PINK :
case BURGUNDY :
return RED;
/* etc ... with no default */
default : ;
}
}
Loops (Logical Blocks)
Three branches are created in a FOR or WHILE loop:
• The statement block is executed zero times. (The output condition is True from
the start.)
• The statement block is executed exactly once. (The output condition is False,
then True the next time.)
• The statement block is executed at least twice. (The output condition is False at
least twice, and becomes True at the end.)
In a DO...WHILE loop, because the output condition is tested after the block has been
executed, two further branches are created:
• The statement block is executed exactly once. (The output condition is True the
first time.)
• The statement block is executed at least twice. (The output condition is False at
least once, then true at the end)
In this example, the function try_five_times ( ) must run several times to completely
cover the three logical blocks included in the WHILE loop:
/* Try_five_times function */                         /* -block=logical */
int try_five_times ( void )
{
int result, i = 0;
/* try ( ) is a function whose return value depends
on the availability of a system resource, for example */
while ( ( ( result = try ( ) ) != 0 ) &&
( ++i < 5 ) );
return result;
} /* 3 logical blocks */
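To see why several executions are needed, the loop can be driven with a stubbed try() whose failure count is configurable. The stub and the counters below are illustrative test scaffolding, not part of the product:

```c
#include <assert.h>

static int failures_left;   /* stub: how many calls fail before success */
static int calls;           /* how many times try_once() was evaluated */

/* Stubbed resource acquisition: nonzero means failure. */
static int try_once(void) {
    ++calls;
    return failures_left-- > 0 ? -1 : 0;
}

/* Same shape as try_five_times() above: retry up to 5 times. */
static int try_five_times(void) {
    int result, i = 0;
    while ( ( ( result = try_once() ) != 0 ) &&
            ( ++i < 5 ) );
    return result;
}
```

Running it with failures_left set to 0, 2, and 9 exercises a different loop outcome each time: immediate success, success after retries, and exhaustion of the five attempts.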
Call Coverage
When analyzing C source code, Code Coverage can provide coverage of function or
procedure calls.
Code Coverage defines as many branches as it encounters function or procedure
calls during program execution.
This type of coverage ensures that all the call interfaces can be shown to have been
exercised for each C function. This may be a pass or failure criterion in software
integration test phases.
You can use the -EXCALL option to select C functions whose calls you do not want to
instrument, such as C library functions for example.
Example
/* Evaluate function */
/* -call */
int evaluate ( NodeTypeP node )
{
if ( node == (NodeTypeP)0 ) return 0;
switch ( node->Type )
{
int tmp;
case NUMBER :
return node->Value;
case IDENTIFIER :
return current_value ( node->Name );
case ASSIGN :
set ( node->Child->Name,
tmp = evaluate ( node->Child->Sibling ) );
return tmp;
case ADD :
return evaluate ( node->Child ) +
evaluate ( node->Child->Sibling );
case SUBTRACT :
return evaluate ( node->Child ) -
evaluate ( node->Child->Sibling );
case MULTIPLY :
return evaluate ( node->Child ) *
evaluate ( node->Child->Sibling );
case DIVIDE :
tmp = evaluate ( node->Child->Sibling );
if ( tmp == 0 ) fatal_error ( "Division by zero" );
else return evaluate ( node->Child ) / tmp;
}
} /* There are twelve calls in the evaluate function */
Condition Coverage
When analyzing C source code, Test RealTime can provide the following condition
coverage:
• Basic Conditions
• Forced Conditions
• Modified Conditions
• Multiple Conditions
Basic Conditions
Basic conditions are operands of either || or && operators wherever they appear in
the body of a C function. They are also the conditions of if statements and ternary
expressions, and the tests of for, while, and do/while statements, even if these
expressions do not contain || or && operators.
Two branches are involved in each condition: the sub-condition being true and the
sub-condition being false.
Basic conditions also enable different case or default (which could be implicit) in a
switch to be distinguished even when they invoke the same simple block. A basic
condition is associated with every case and default (written or not).
/* Power_of_10 function */
/* -cond */
int power_of_10 ( int value, int max )
{
int result = value, i;
if ( value == 0 ) return 0;
for ( i = 0; i < 10; i++ )
{
result = max > 0 && ( max / value ) < result ?
result * value :
max;
}
return result ;
} /* There are 4*2 basic conditions in this function */
/* Near_color function */
ColorType near_color ( ColorType color )
{
switch ( color )
{
case WHITE :
case LIGHT_GRAY :
return WHITE;
case RED :
case PINK :
case BURGUNDY :
return RED;
/* etc ... */
}
} /* There are at least 5 basic conditions here */
Two branches are enumerated for each condition, and one per case or default.
Forced Conditions
Forced conditions are multiple conditions in which any occurrence of the || and &&
operators has been replaced in the code with the | and & binary operators. This
replacement, done by the Instrumentor, forces the evaluation of the right operands.
You can use this coverage type after modified conditions have been reached to be
sure that every basic condition has been evaluated. With this coverage type, you can
be sure that only the considered basic condition changed between the two tests.
/* User source code */                    /* -cond=forceevaluation */
if ( ( a && b ) || c ) ...
/* Replaced by the Code Coverage feature with : */
if ( ( a & b ) | c ) ...
/* Note : operand evaluation results are forced to one if
different from 0 */
Note This replacement modifies the code semantics. You need to verify that
using this coverage type does not modify the behavior of the software.
int f ( MyStruct *A )
{
if ( A && A->value > 0 )   /* the evaluation of A->value will cause
                              a program error using forced conditions
                              if the A pointer is null */
{
A->value -= 1;
}
}
Modified Conditions
A modified condition is defined for each basic condition enclosed in a composition of
|| or && operators. It aims to prove that this condition affects the result of the
enclosing composition. To do that, you must find a set of values for the other
conditions such that changing the value of this condition alone changes the result of
the entire expression.
Because compound conditions list all possible cases, you must find the two cases that
can result in changes to the entire expression. The modified condition is covered only
if the two compound conditions are covered.
/* state_control function */
/* -cond=modified */
int state_control ( void )
{
if ( ( ( flag & 0x01 ) &&
( instances_number > 10 ) ) ||
( flag & 0x04 ) )
return VALID_STATE;
else
return INVALID_STATE;
} /* There are 3 basic conditions, 5 compound conditions
and 3 modified conditions :
flag & 0x01 : TTX=T and FXF=F
instances_number > 10 : TTX=T and TFF=F
flag & 0x04 : TFT=T and TFF=F, or FXT=T and FXF=F
4 test cases are enough to cover all those modified
conditions :
TTX=T
FXF=F
TFF=F
TFT=T or FXT=T
*/
Note You can associate a modified condition with more than one case, as
shown in this example for flag & 0x04. In this example, the modified condition
is covered if the two compound conditions of at least one of these cases are
covered.
Code Coverage calculates matching cases for each modified condition.
There are as many modified conditions as there are Boolean basic conditions
appearing in a composition of || and && operators.
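The vector pairs listed in the state_control comment can be checked mechanically. This hypothetical harness reduces the decision to three Boolean inputs and verifies that a pair of condition vectors covers a modified condition, i.e. that the vectors differ in exactly one basic condition and produce different results:

```c
#include <assert.h>

/* The decision from state_control, reduced to inputs a, b, c:
   ( a && b ) || c */
static int decision(int a, int b, int c) {
    return (a && b) || c;
}

/* A pair of condition vectors covers a modified condition when the
   vectors differ in exactly one basic condition and flip the decision. */
static int covers_modified_condition(int a1, int b1, int c1,
                                     int a2, int b2, int c2) {
    int diffs = (a1 != a2) + (b1 != b2) + (c1 != c2);
    return diffs == 1 && decision(a1, b1, c1) != decision(a2, b2, c2);
}
```

For example, once the unevaluated X positions are fixed to concrete values, the TTX/FXF pair from the comment becomes TTF=T and FTF=F, which differ only in the first condition.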
Multiple Conditions
A multiple (or compound) condition is one of all the available cases of a composition
of || and && logical operators, wherever it appears in a C function. It is defined by
the simultaneous values of the enclosed Boolean basic conditions.
A multiple condition is noted with a set of T, F, or X letters. These mean that the
corresponding basic condition evaluated to true, false, or was not evaluated,
respectively. Remember that the right operand of a || or && logical operator is not
evaluated if the evaluation of the left operand determines the result of the entire
expression.
/* state_control function */
/* -cond=compound */
int state_control ( void )
{
if ( ( ( flag & 0x01 ) &&
( instances_number > 10 ) ) ||
( flag & 0x04 ) )
return VALID_STATE;
else
return INVALID_STATE;
} /* There are 3 basic conditions
and 5 compound conditions :
TTX=T <=> (( T && T ) || X ) = T
TFT=T
TFF=F
FXT=T
FXF=F
*/
Code Coverage calculates every available case for each composition.
The number of enumerated branches is the number of distinct available cases for each
composition of || or && operators.
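The five condition vectors in the comment above can be reproduced by recording which operands actually execute. In this illustrative sketch, each basic condition reports its value into a T/F/X vector as it is evaluated, so short-circuited operands stay marked X:

```c
#include <assert.h>
#include <string.h>

static char vec[4] = "XXX";   /* one letter per basic condition */

/* Record the evaluation of basic condition idx, passing its value through. */
static int rec(int idx, int value) {
    vec[idx] = value ? 'T' : 'F';
    return value;
}

/* Evaluate ( a && b ) || c and return its condition vector. */
static const char *vector_of(int a, int b, int c) {
    strcpy(vec, "XXX");
    (void)(( rec(0, a) && rec(1, b) ) || rec(2, c));
    return vec;
}
```

Enumerating all eight input combinations yields only five distinct vectors (TTX, TFT, TFF, FXT, FXF), which is why five compound conditions are counted rather than eight.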
Function Coverage
When analyzing C source code, Test RealTime can provide the following function
coverage:
• Procedure Entries
• Procedure Entries and Exits
Procedure Entries
Inputs identify the C functions that are executed.
/* Factorial function */
/* -proc */
int factorial ( int a )
{
if ( a > 0 ) return a * factorial ( a - 1 );
else return 1;
}
Procedure Entries and Exits
Inputs and outputs identify the C functions that are executed and whether their
exits are covered.
/* Factorial function */
/* -proc=ret */
int factorial ( int a )
{
if ( a > 0 ) return a * factorial ( a - 1 );
else return 1;
} /* standard output cannot be covered */
/* Divide function */
void divide ( int a, int b, int *c )
{
if ( b == 0 )
{
fprintf ( stderr, "Division by zero\n" );
exit ( 1 );
};
if ( b == 1 )
{
*c = a;
return;
};
*c = a / b;
}
Additional Statements
Terminal Statements
A C statement is terminal if it transfers program control out of sequence (RETURN,
GOTO, BREAK, CONTINUE), or stops the execution (EXIT).
By extension, a decision statement (IF or SWITCH) is terminal if all branches are
terminal; that is if the non-empty THEN ... ELSE, CASE, and DEFAULT blocks all
contain terminal statements. An IF statement without an ELSE and a SWITCH
statement without a DEFAULT are never terminal, because their empty blocks
necessarily continue program control in sequence.
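For example, an IF whose branches both transfer control is itself terminal, so any statement placed after it could never execute (illustrative code):

```c
#include <assert.h>

/* Terminal IF: the THEN and ELSE branches both return,
   so control never reaches the end of the function body. */
static int sign(int x) {
    if (x >= 0)
        return 1;
    else
        return -1;
    /* a statement here could never execute */
}
```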
Potentially Terminal Statements
The following decision statements are potentially terminal if they contain at least one
statement that transfers program control out of their sequence (RETURN, GOTO,
BREAK, CONTINUE), or that terminates the execution (EXIT):
• IF without an ELSE
• SWITCH
• FOR
• WHILE or DO ... WHILE
Non-coverable Statements in C
A C statement is non-coverable if the statement can never possibly be executed.
Code Coverage detects non-coverable statements during instrumentation and
produces a warning message that specifies the source file and line location of each
non-coverable statement.
C++ Coverage
Block Code Coverage
When analyzing C++ source code, Code Coverage can provide the following block
coverage types:
• Statement Blocks
• Statement Blocks and Decisions
• Statement Blocks, Decisions, and Loops
Statement Blocks
Statement blocks are the C++ function or method main blocks, blocks introduced by
decision instructions:
• THEN and ELSE blocks of an IF
• FOR, WHILE and DO ... WHILE blocks
• non-empty blocks introduced by SWITCH CASE or DEFAULT statements
• true and false outcomes of ternary expressions (<expr> ? <expr> : <expr>)
• TRY blocks and any associated catch handler
• blocks following a potentially terminal statement.
int main ( )                                          /* -BLOCK */
{
try {
if ( 0 )
{
func ( "Hello" );
}
else
{
throw UnLucky ( );
}
}
catch ( Overflow & o ) {
cout << o.String << '\n';
}
}
Each simple block is a branch. Every C++ function and method contains at least one
simple block corresponding to its main body.
Decisions (Implicit Blocks)
Implicit blocks are introduced by IF statements without an ELSE statement, and by
SWITCH statements without a DEFAULT statement.
/* Power_of_10 function */
/* -BLOCK=DECISION or -BLOCK=IMPLICIT */
int power_of_10 ( int value, int max )
{
int retval = value, i;
if ( value == 0 ) return 0; else ;
for ( i = 0; i < 10; i++ )
{
retval = ( max / 10 ) < retval ? retval * 10 : max;
}
return retval;
}
/* Near_color function */
ColorType near_color ( ColorType color )
{
switch ( color )
{
case WHITE :
case LIGHT_GRAY :
return WHITE;
case RED :
case PINK :
case BURGUNDY :
return RED;
/* etc ... with no default */
default : ;
}
}
Each implicit block represents a branch.
Since the sum of all possible decision paths includes implicit blocks as well as simple
blocks, reports provide the total number of simple and implicit blocks as a figure and
a percentage after the term decisions.
Loops (Logical Blocks)
Three branches are created in a for or while loop:
• The first branch is the simple block contained within the loop, and that is
executed zero times (the entry condition is false from the start).
• The second branch is the simple block executed exactly once (entry condition
true, then false the next time).
• The third branch is the simple block executed at least twice (entry condition true
at least twice, and false at the end).
Two branches are created in a DO/WHILE loop, as the output condition is tested
after the block has been executed:
• The first branch is the simple block executed exactly once (output condition true
the first time).
• The second branch is the simple block executed at least twice (output condition
false at least once, then true at the end).
/* myClass::tryFiveTimes method */                    /* -BLOCK=LOGICAL */
int myClass::tryFiveTimes ()
{
int result, i = 0;
/* letsgo ( ) is a function whose return value depends
on the availability of a system resource, for example */
while ( ( ( result = letsgo ( ) ) != 0 ) &&
( ++i < 5 ) );
return result;
} /* 3 logical blocks */
You need to execute the method tryFiveTimes ( ) several times to completely cover
the three logical blocks included in the while loop.
void divide ( int a, int b, int *c )
{
if ( b == 0 )
{
fprintf ( stderr, "Division by zero\n" );
exit ( 1 );
};
if ( b == 1 )
{
*c = a;
return;
};
*c = a / b;
}
At least two branches per C++ method are defined. The input is always enumerated,
as is the output if it can be covered. If it cannot, it is preceded by a terminal
instruction involving returns or by a call to exit(), abort(), or terminate().
Potentially Terminal Statements
The following decision statements are potentially terminal if they contain at least one
statement that transfers program control out of its sequence (RETURN, THROW,
GOTO, BREAK, CONTINUE) or that terminates the execution (EXIT).
• IF without an ELSE
• SWITCH, FOR
• WHILE or DO...WHILE
Template Instrumentation
Code Coverage performs the instrumentation of templates, functions, and methods
of template classes, considering that all instances share their branches. The number of
branches computed by the feature is independent of the number of instances for this
template. All instances will cover the same once-defined branches in the template
code.
Files containing template definitions implicitly included by the compiler (no specific
compilation command is required for such source files) are also instrumented by the
Code Coverage feature and present in the instrumented files where they are needed.
For some compilers, you must take special care with certain templates (for example,
those with static or external linkage). Verify whether your Code Coverage Runtime
installation contains a file named templates.txt and, if it does, read that file carefully.
• To instrument an application based upon Rogue Wave libraries, you must use
the -DRW_COMPILE_INSTANTIATE compilation flag, which suppresses the
implicit include mechanism in the header files. (Corresponding source files are
then included by pre-processing.)
• To instrument an application based upon the ObjectSpace C++ Component
Series, you must use the -DOS_NO_AUTO_INSTANTIATE compilation flag that
Additional Statements
Non-coverable Statements
A C++ statement is non-coverable if the statement can never possibly be executed.
Code Coverage detects non-coverable statements during instrumentation and
produces a warning message that specifies the source file and line location of each
non-coverable statement.
Java Coverage
Block Coverage
When analyzing Java source code, Code Coverage can provide the following block
coverage:
• Statement Blocks
• Statement Blocks and Decisions
• Statement Blocks, Decisions, and Loops
Statement Blocks
Statement blocks are the Java method blocks, blocks introduced by control
instructions:
• THEN and ELSE blocks of an IF
• FOR, WHILE and DO ... WHILE blocks
• non-empty blocks introduced by SWITCH CASE or DEFAULT statements
• true and false outcomes of ternary expressions (<expr> ? <expr> : <expr>)
• TRY blocks and any associated catch handler
• blocks following a potentially terminal statement.
Example
public class StatementBlocks
{
public static void func( String _message )
throws UnsupportedOperationException
{
throw new UnsupportedOperationException(_message);
}
public static void main( String[] args )
throws Exception
{
try {
if ( false )
{
func( "Hello" );
}
else
{
throw new Exception("bad luck");
}
}
catch ( UnsupportedOperationException _E )
{
System.out.println( _E.toString() );
}
catch ( Exception _E )
{
System.out.println( _E.toString() );
throw _E ;
} //potentially terminal statement
return ; //sequence block
}
}
Each simple block is a branch. Every Java method contains at least one simple block
corresponding to its main body.
Decisions (Implicit Blocks)
Implicit blocks are introduced by IF statements without an ELSE statement, and a
SWITCH statement without a DEFAULT statement.
Example
public class MathOp
{
static final int WHITE=0;
static final int LIGHTGRAY=1;
static final int RED=2;
static final int PINK=3;
static final int BLUE=4;
static final int GREEN=5;
// power of 10
public static int powerOf10( int _value, int _max )
{
int result = _value, i;
if( _value==0 ) return 0; //implicit else
for( i = 0; i < 10; i++ )
{
result = ( _max / 10 ) < result ? 10*result : _max ;
}
return result;
}
// Near color function
int nearColor( int _color )
{
switch( _color )
{
case WHITE:
case LIGHTGRAY:
return WHITE ;
case RED:
case PINK:
return RED;
//implicit default:
}
return _color ;
}
}
Method Coverage
Inputs to Procedures
Inputs identify the Java methods executed.
Example
public class Inputs
{
public static int method()
{
return 5;
}
public static void main( String[] argv )
{
System.out.println("Value:"+method());
}
}
public static int method1( int _selector )
{
if( _selector < 0 ) return 0;
switch( _selector )
{
case 1: return 0;
case 2: break;
case 3: case 4: case 5: return 1;
}
return (_selector/2);
}
public static void main( String[] argv )
{
method0( 3 );
System.out.println("Value:"+method1( 5 ));
System.exit( 0 );
}
}
At least two branches per Java method are defined. The input is always enumerated,
as is the output if it can be covered.
Potentially Terminal Statements
The following decision statements are potentially terminal if they contain at least one
statement that transfers program control out of its sequence (RETURN, THROW,
GOTO, BREAK, CONTINUE) or that terminates the execution (EXIT).
• IF without an ELSE
• SWITCH, FOR
• WHILE or DO...WHILE
Additional Statements
Non-coverable Statements in Java
A Java statement is non-coverable if the statement can never possibly be executed.
Code Coverage detects non-coverable statements during instrumentation and
produces an error message that specifies the source file and line location of each non-
coverable statement.
• A Rates Report, providing detailed coverage rates for each activated coverage
type.
You can use the Report Explorer to navigate through the report. Click a source code
component in the Report Explorer to go to the corresponding line in the Report
Viewer.
You can jump directly to the next or previous Failed test in the report by using the
Next Failed Test or Previous Failed Test buttons from the Code Coverage toolbar.
You can jump directly to the next or previous Uncovered line in the Source report by
using the Next Uncovered Line or Previous Uncovered Line buttons in the Code
Coverage feature bar.
When viewing a Source coverage report, the Code Coverage Viewer provides several
additional viewing features for refined code coverage analysis.
Coverage Types
Depending on the language selected, the Code Coverage feature offers (see Coverage
Types for more information):
• Function or Method code coverage: select between function Entries, Entries and
exits, or None.
• Call code coverage: select Yes or No to toggle call coverage for Ada and C.
• Block code coverage: select the desired block coverage method.
• Condition code coverage: select condition coverage for Ada and C.
Please refer to the related topics for details on using each coverage type with each
language.
Any of the Code Coverage types selected for instrumentation can be filtered out in
the Code Coverage report stage if necessary.
Reloading a Report
If a Code Coverage report has been updated since you opened it in the Code
Coverage Viewer, you can use the Reload command to refresh the display:
To reload a report:
1. From the Code Coverage menu, select Reload.
Resetting a Report
When you run a test or application node several times, the Code Coverage results are
appended to the existing report. The Reset command clears previous Code Coverage
results and starts a new report.
To reset a report:
1. From the Code Coverage menu, select Reset.
Source Report
You can use the standard keys (arrow keys, Home, End, and so on) to move through
and to select the source code.
Hypertext Links
The Source report provides hypertext navigation throughout the source code:
• Click a plain underlined function call to jump to the definition of the function.
• Click dashed underlined text to view additional coverage information in a
pop-up window.
• Right-click any line of code and select Edit Source to open the source file in the
Text Editor at the selected line of code.
Macro Expansion
Certain macro-calls are preceded with a magnifying glass icon.
Click the magnifying glass icon to expand the macro in a pop-up window with the
usual Code Coverage color codes.
Hit Count
The Hit Count tool-tip is a special capability that displays the number of times that a
selected branch was covered.
Hit Count is only available when Test-by-Test analysis is disabled and when the Hit
Count option has been enabled for the selected Configuration.
Cross Reference
The Cross Reference tool-tip displays the name of tests that executed a selected
branch.
Cross Reference is only available in Test-by-Test mode.
3. In the Code Coverage Viewer window, click a portion of covered source code to
display the Cross Reference tool-tip.
Comment
You can add a short comment to the generated Code Coverage report by using the
Comment option in the Misc. Options Settings for Code Coverage. This can be useful
to distinguish different reports generated with different Configurations.
Comments are displayed as a magnifying glass symbol at the top of the source code
report. Click the magnifying glass icon to display the comment.
Rates Report
From the Code Coverage Viewer window, select the Rates tab to view the coverage
rate report.
Select a source code component in the Report Explorer to view the coverage rate for
that particular component and the selected coverage type. Select the Root node to
view coverage rates for all current files.
Code Coverage rates are updated dynamically as you navigate through the Report
Explorer and as you select various coverage types.
Static Metrics
Source code profiling is important when you are planning a test campaign or for
project management purposes. The graphical user interface (GUI) provides a Metrics
Viewer, which offers detailed source code complexity data and statistics for your C,
C++, Ada and Java source code.
Metrics are computed each time a node is executed, but they can also be calculated
without executing the application.
The metrics are stored in .met metrics files alongside the actual source files.
Report Explorer
The Report Explorer displays the scope of the selected nodes, or selected .met metrics
files. Select a node to switch the Metrics Window scope to that of the selected node.
Metrics Window
Depending on the language of the analyzed source code, different pages are
available:
• Root Page - File View: contains generic data for the entire scope
• Root Page - Object View: contains object related generic data for C++ and Java
only
• Component View: displays detailed component-related metrics for each file,
class, method, function, unit, procedure, and so on
The Metrics window offers hyperlinks to the actual source code. Click the name of a
source component to open the Text Editor at the corresponding line.
The Source Code Parsers provide static metrics for the analyzed C and C++ source
code.
Halstead Graph
The following display modes are available for the Halstead graph:
• Vocabulary
• Size
• Volume
• Difficulty
• Testing Effort
• Testing Errors
• Testing Time
See the Halstead Metrics section for more information.
Metrics Summary
The scope of the metrics report depends on the selection made in the Report Explorer
window. This can be a file, one or several classes or any other set of source code
components.
Below the Halstead graph, the Root page displays a metrics summary table, which
lists the following for each source code component of the selected scope:
• V(g): provides a complexity estimate of the source code component
• Statements: shows the number of statements within the component
• Nested Levels: shows the highest nesting level reached in the component
• Ext Comp Calls: measures the number of calls to methods defined outside of the
component class (C++ and Java only)
• Ext Var Use: measures the number of uses of attributes defined outside of the
component class (C++ and Java only)
Object View
Root Level Summary
At the top of the Root page, the Metrics Viewer displays a graph based on the sum of
the data.
On the Root page, the scope of the Metrics Viewer is the entire set of nodes below the
Root node.
File View is the only available view with C or Ada source code. When viewing
metrics for C++ and Java, an Object View is also available.
The following display modes are available for the data graph:
• Vocabulary
• Size
• Volume
• Difficulty
• Testing Effort
• Testing Errors
• Testing Time
See the Halstead Metrics section for more information.
Metrics Summary
Below the Halstead graph, the Root page displays a metrics summary table, which
lists for each source code component:
• V(g): provides a complexity estimate of the source code component
• Statements: shows the total number of statements within the object
• Nested Levels: shows the highest statement nesting level reached in the object
• Ext Comp Calls: measures the number of calls to components defined outside of
the object
• Ext Var Use: measures the number of uses of variables defined outside of the
object
Note The result of the metrics for a given object is equal to the sum of the
metrics for the methods it contains.
Halstead Metrics
Halstead complexity measurement was developed to measure a program module's
complexity directly from source code, with emphasis on computational complexity.
The measures were developed by the late Maurice Halstead as a means of
determining a quantitative measure of complexity directly from the operators and
operands in the module.
Halstead provides various indicators of the module's complexity.
Halstead metrics allow you to evaluate the testing time of any C/C++ source code.
These only make sense at the source file level and vary with the following
parameters:
Parameter   Meaning
n1          Number of distinct operators
n2          Number of distinct operands
N1          Number of operator instances
N2          Number of operand instances
When a source file node is selected in the Metrics Viewer, the following results are
displayed in the Metrics report:
In these formulas, k is the Stroud number, which has a default value of 18. You can
change the value of k in the Metrics Viewer Preferences. Adjusting the Stroud
number allows you to adapt the calculation of T to the testing conditions: team
background, criticality level, and so on.
When the Root node is selected, the Metrics Viewer displays the total testing time for
all loaded source files.
remain undetected until triggered by a random event, so that a program can seem to
work correctly when in fact it's only working by accident.
That's where the Memory Profiling feature can help you get ahead.
• You associate Memory Profiling with an existing test node or application code.
• You compile and run your application.
• The application, instrumented with the Memory Profiling feature, then directs
its output to the Memory Profiling Viewer, which provides a detailed report of
memory issues.
Memory Profiling uses Source Code Insertion Technology for C and C++.
Because of the different technologies involved, Memory Profiling for Java is covered
in a separate section.
Where:
• Allocated is the total memory allocated during the execution of the application
• Unfreed is the memory that remained allocated after the application
terminated
• Maximum is the highest memory usage encountered during execution
Detailed Report
The detailed section of the report lists memory usage events, including the following
errors and warnings:
• Error messages
• Warning messages
Memory Profiling actually allocates a larger block by adding a Red Zone at the
beginning and end of each allocated block of memory in the program. Memory
Profiling monitors these Red Zones to detect ABWL errors.
Increasing the size of the Red Zone helps Test RealTime catch bounds errors before or
beyond the block.
The ABWL error does not apply to local arrays allocated on the stack.
Note Unlike Rational PurifyPlus, the ABWL error in the Rational Test
RealTime Memory Profiling feature only applies to heap memory zones and
not to global or local tables.
from the pointer to the interior of the block. In general, you should consider a
potential leak to be an actual leak until you can prove that it is not by identifying the
code that performs this subtraction.
Memory in use can appear as an MPK if the pointer returned by some allocation
function is offset. This message can also occur when you reference a substring within
a large string. Another example occurs when a pointer to a C++ object is cast to the
second or later base class of a multiple-inherited object and it is offset past the other
base class objects.
Alternatively, leaked memory might appear as an MPK if some non-pointer integer
within the program space, when interpreted as a pointer, points within an otherwise
leaked block of memory. However, this condition is rare.
Inspection of the code should easily differentiate between different causes of MPK
messages.
Memory Profiling generates a list of potentially leaked memory blocks when you
activate the MPK Memory Potential Leak option in the Memory Profiling Settings.
In both of the above situations, Memory Profiling can use the heap management
routines to detect memory leaks, array bounds and other memory-related defects.
Note Application pointers and block sizes can be modified by Memory
Profiling in order to detect ABWL errors (Late Detect Array Bounds Write).
Actual-pointer and actual-size refer to the memory data handled by Memory
Profiling, whereas user pointer and user-size refer to the memory handled
natively by the application-under-analysis. This distinction is important for
the Memory Profiling ABWL and Red zone settings.
Target Deployment Port API
The Target Deployment Port library provides the following API for Memory
Profiling:
void * _PurifyLTHeapAction ( _PurifyLT_API_ACTION, void *,
RTRT_U_INT32, RTRT_U_INT8 );
In the function _PurifyLTHeapAction, the first parameter is the type of action that
will be, or has been, performed on the memory block pointed to by the second
parameter. The following actions can be used:
typedef enum {
_PurifyLT_API_ALLOC,
_PurifyLT_API_BEFORE_REALLOC,
_PurifyLT_API_FREE
} _PurifyLT_API_ACTION;
The third parameter is the size of the block. The fourth parameter is either of the
following constants:
#define _PurifyLT_NO_DELAYED_FREE 0
#define _PurifyLT_DELAYED_FREE 1
If an allocation or free has a size of 0, this fourth parameter indicates a delayed free
in order to detect FMWL (Late Detect Free Memory Write) and FFM (Freeing Freed
Memory) errors. See the section on Memory Profiling Configuration Settings for
Detect FFM, Detect FMWL, Free Queue Length and Free Queue Size.
A delayed free can only be performed if the block can be freed with RTRT_DO_FREE
(situation 1) or ANSI free (situation 2). For example, if a function requires more
parameters than just the pointer to deallocate, then FMWL and FFM error detection
cannot be supported, and FFM errors are reported as FUM (Freeing Unallocated
Memory) errors instead.
The following function returns the size of an allocated block, or 0 if the block was not
declared to Memory Profiling. This allows you to implement a library function
similar to _msize from Microsoft Visual C++ 6.0.
RTRT_SIZE_T _PurifyLTHeapPtrSize ( void * );
The following function returns the actual-size of a memory block, depending on the
size requested. Call this function before the actual allocation to find out the quantity
of memory that is available for the block and the contiguous red zones that are to be
monitored by Memory Profiling.
RTRT_SIZE_T _PurifyLTHeapActualSize ( RTRT_SIZE_T );
Examples
In the following examples, my_malloc, my_realloc, my_free and my_msize
demonstrate the four supported memory heap behaviors.
The following routine declares an allocation:
void *my_malloc ( int partId, size_t size )
{
void *ret;
size_t actual_size = _PurifyLTHeapActualSize(size);
/* Here is any user code making ret a pointer to a heap or
simulated heap memory block of actual_size bytes */
...
/* After comes Memory Profiling action */
return _PurifyLTHeapAction ( _PurifyLT_API_ALLOC, ret, size, 0 );
/* The user-pointer is returned */
}
In situation 2, where you have access to a custom memory heap API, replace the "..."
with the actual malloc API function.
For a my_calloc(size_t nelem, size_t elsize), pass on nelem*elsize as the third
parameter of the _PurifyLTHeapAction function. In this case, you might need to
replace this operation with a function that takes into account the alignments of
elements.
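A sketch of such a my_calloc, following the my_malloc pattern above. The Target Deployment Port calls are replaced here with no-op stand-ins (assumptions, only so the fragment compiles on its own); in a real project they come from the product:

```c
#include <stdlib.h>

/* No-op stand-ins (assumptions) for the Target Deployment Port API,
   present only so this sketch is self-contained. */
typedef enum { _PurifyLT_API_ALLOC, _PurifyLT_API_BEFORE_REALLOC,
               _PurifyLT_API_FREE } _PurifyLT_API_ACTION;
static size_t _PurifyLTHeapActualSize(size_t s) { return s + 32; }
static void *_PurifyLTHeapAction(_PurifyLT_API_ACTION action, void *p,
                                 size_t size, int delayed)
{ (void)action; (void)size; (void)delayed; return p; }

void *my_calloc(size_t nelem, size_t elsize)
{
    size_t size = nelem * elsize;   /* alignment handling omitted */
    /* Allocate the actual-size block (user size plus red zones). */
    void *ret = calloc(1, _PurifyLTHeapActualSize(size));
    /* Declare the allocation with the user size nelem * elsize. */
    return _PurifyLTHeapAction(_PurifyLT_API_ALLOC, ret, size, 0);
}
```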
To declare a reallocation, two operations are required:
void *my_realloc ( int partId, void * ptr, size_t size )
{
void *ret;
size_t actual_size = _PurifyLTHeapActualSize(size);
/* Before comes first Memory Profiling action */
ret = _PurifyLTHeapAction ( _PurifyLT_API_BEFORE_REALLOC, ptr, size, 0 );
/* Here is any user code making ret a pointer to the reallocated
block of actual_size bytes */
...
/* After comes the second Memory Profiling action */
return _PurifyLTHeapAction ( _PurifyLT_API_ALLOC, ret, size, 0 );
}
Use the following macros to save customization time when dealing with functions
that have the same prototypes as the standard ANSI functions:
#define _PurifyLT_MALLOC_LIKE(func) \
void *RTRT_CONCAT_MACRO(usr_,func) ( RTRT_SIZE_T size ) \
{ \
void *ret; \
ret = func ( _PurifyLTHeapActualSize ( size ) ); \
return _PurifyLTHeapAction ( _PurifyLT_API_ALLOC, ret, size, 0 ); \
}
#define _PurifyLT_CALLOC_LIKE(func) \
void *RTRT_CONCAT_MACRO(usr_,func) ( RTRT_SIZE_T nelem, RTRT_SIZE_T elsize ) \
{ \
void *ret; \
ret = func ( _PurifyLTHeapActualSize ( nelem * elsize ) ); \
return _PurifyLTHeapAction ( _PurifyLT_API_ALLOC, ret, nelem * elsize, 0 ); \
}
#define _PurifyLT_REALLOC_LIKE(func,delayed_free) \
void *RTRT_CONCAT_MACRO(usr_,func) ( void *ptr, RTRT_SIZE_T size ) \
{ \
void *ret; \
ret = func ( _PurifyLTHeapAction ( _PurifyLT_API_BEFORE_REALLOC, \
             ptr, size, delayed_free ), \
             _PurifyLTHeapActualSize ( size ) ); \
return _PurifyLTHeapAction ( _PurifyLT_API_ALLOC, ret, size, 0 ); \
}
#define _PurifyLT_FREE_LIKE(func,delayed_free) \
void RTRT_CONCAT_MACRO(usr_,func) ( void *ptr ) \
{ \
if ( delayed_free ) \
{ \
_PurifyLTHeapAction ( _PurifyLT_API_FREE, ptr, 0, delayed_free ); \
} \
else \
{ \
func ( _PurifyLTHeapAction ( _PurifyLT_API_FREE, ptr, 0, delayed_free ) ); \
} \
}
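To see how these macros expand, the sketch below instantiates _PurifyLT_MALLOC_LIKE over the standard malloc. The Target Deployment Port symbols are replaced with no-op stand-ins (assumptions, purely so the expansion compiles here); the real definitions ship with the product:

```c
#include <stdlib.h>

/* No-op stand-ins (assumptions) for Target Deployment Port symbols. */
#define RTRT_SIZE_T size_t
#define RTRT_CONCAT_MACRO(a, b) a##b
typedef enum { _PurifyLT_API_ALLOC, _PurifyLT_API_BEFORE_REALLOC,
               _PurifyLT_API_FREE } _PurifyLT_API_ACTION;
static RTRT_SIZE_T _PurifyLTHeapActualSize(RTRT_SIZE_T s) { return s + 32; }
static void *_PurifyLTHeapAction(_PurifyLT_API_ACTION action, void *p,
                                 RTRT_SIZE_T size, int delayed)
{ (void)action; (void)size; (void)delayed; return p; }

/* The macro from the guide, as documented above. */
#define _PurifyLT_MALLOC_LIKE(func) \
void *RTRT_CONCAT_MACRO(usr_,func) ( RTRT_SIZE_T size ) \
{ \
  void *ret; \
  ret = func ( _PurifyLTHeapActualSize ( size ) ); \
  return _PurifyLTHeapAction ( _PurifyLT_API_ALLOC, ret, size, 0 ); \
}

_PurifyLT_MALLOC_LIKE(malloc)   /* generates usr_malloc() */
```

Calling usr_malloc(64) then allocates 64 user bytes plus the red zones and declares the block to Memory Profiling.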
Reloading a Report
To reload a report:
1. From the View Toolbar, click the Reload button.
Resetting a Report
When you run a test or application node several times, the Memory Profiling results
are appended to the existing report. The Reset command clears previous Memory
Profiling results and starts a new report.
To reset a report:
1. From the View Toolbar, click the Reset button.
Exporting a Report to HTML
Memory Profiling results can be exported to an HTML file.
Report Explorer
The Report Explorer window displays a Test for each execution of the application
node or for a test node when using Component Testing for Java. Inside each test, a
Snapshot report is created for each Memory Profiling snapshot.
Method Snapshots
The Memory Profiling report displays snapshot data for each method that has
performed an allocation. If the Java CLASSPATH is correctly set, you can click blue
method names to open the corresponding source code in the Text Editor. System
methods are displayed in black and cannot be clicked.
Method data is reset after each snapshot.
For each method, the report lists:
• Method: The method name. Blue method names are hyperlinks to the source
code under analysis
• Allocated Objects: The number of objects allocated since the previous snapshot
• Allocated Bytes: The total number of bytes used by the objects allocated by the
method since the previous snapshot
• Local + D Allocated Objects: The number of objects allocated by the method
since the previous snapshot as well as any descendants called by the method
• Local + D Allocated Bytes: The total number of bytes used by the objects
allocated by the method since the previous snapshot and its descendants
Referenced Objects
If you selected the With objects filter option in the JVMPI Settings dialog box, the
report can display, for each method, a list of objects created by the method and
object-related data.
From the Memory Profiling menu, select Hide/Show Referenced Objects.
For each object, the report lists:
• Reference Object Class: The name of the object class. Blue class names are
hyperlinks to the source code under analysis.
• Referenced Objects: The number of objects that exist at the moment the
snapshot was taken
• Referenced Bytes: The total number of bytes used by the referenced objects
Differential Reports
The Memory Profile report can display differential data between two snapshots
within the same Test. This allows you to compare the referenced objects. There are
two diff modes:
JVMPI Technology
Memory Profiling for Java uses a special dynamic library, known as the Memory
Profiling Agent, to provide advanced reports on Java Virtual Machine (JVM) memory
usage.
Garbage Collection
JVMs implement a heap that stores all objects created by the Java code. Memory for
new objects is dynamically allocated on the heap. The JVM automatically frees objects
that are no longer referenced by the program, preventing many potential memory
issues that exist in other languages. This process is called garbage collection.
In addition to freeing unreferenced objects, a garbage collector may also reduce heap
fragmentation, which occurs through the course of normal program execution. On a
virtual memory system, the extra paging required to service an ever growing heap
can degrade the performance of the executing program.
JVMPI Agent
Because of the memory handling features included in the JVM, Memory Profiling for
Java is quite different from the feature provided for other languages. Instead of
Source Code Insertion technology, the Java implementation uses a JVM Profiler
Interface (JVMPI) Agent whose task is to monitor JVM memory usage and to provide
a memory dump upon request.
The JVMPI Agent analyzes the following internal events of the JVM:
• Method entries and exits
• Object and primitive type allocations
The JVMPI Agent is a dynamic library (a DLL or .so file, depending on the platform)
that is loaded as an option on the command line that launches the Java program.
During execution, when the agent receives a snapshot trigger request, it can either
perform an instantaneous JVMPI dump of the JVM memory or wait for the next
garbage collection to be performed.
Note Information provided by the instantaneous dump includes actual
memory use as well as intermediate and unreferenced objects that are
normally freed by the garbage collection. In some cases, such information may
be difficult to interpret correctly.
The actual trigger event can be implemented with any of the following methods:
• A specified method entry or exit used in the Java code
• A message sent from the Snapshot button or menu item in the graphical user
interface
• Every garbage collection
The JVMPI Agent requires that the Java code is compiled in debug mode, and cannot
be used with Java in just-in-time (JIT) mode.
Performance Profiling
The Performance Profiling feature puts successful performance engineering within
your grasp. It provides complete, accurate performance data—and provides it in an
understandable and usable format so that you can see exactly where your code is
least efficient. Using Performance Profiling, you can make virtually any program run
faster. And you can measure the results.
Performance Profiling measures performance for every component in C, C++ and
Java source code, in real time, on both native and embedded target platforms.
Performance Profiling works by instrumenting the C, C++ or Java source code of
your application. After compilation, the instrumented code reports back to Test
RealTime after the execution of the application.
• You associate Performance Profiling with an existing test node or application
code.
• You build and execute your code in Test RealTime.
• The application under test, instrumented with the Performance Profiling
feature, then directs its output to the Performance Profiling Viewer, which
provides a detailed performance report.
Performance Summary
This section of the report indicates, for each instrumented function, procedure or
method (collectively referred to as functions), the following data:
• Calls: The number of times the function was called
• Function (F) time: The time required to execute the code in a function exclusive
of any calls to its descendants
• Function+descendant (F+D) time: The total time required to execute the code in
a function and in any function it calls.
Note that since each of the descendants may have been called by other
functions, it is not sufficient to simply add the descendants' F+D to the caller
function's F. In fact, it is possible for the descendants' F+D to be larger than the
calling function's F+D. The following example demonstrates three functions a, b
and c, where both a and b each call c once:
Function    F     F+D
a           5     15
b           5     15
c           20    20
The F+D value of a is less than the F+D of c. This is because the F+D of a (15)
equals the F of a (5) plus one half the F+D of c (20/2=10).
• F Time (% of root) and F+D Time (% of root): Same as above, expressed in
percentage of total execution time
• Average F Time: The average time spent executing the function each time it was
called
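As an illustrative check of the a/b/c example above, the F+D arithmetic can be expressed as self time plus each callee's F+D weighted by the caller's share of the calls; this even-split attribution is a simplifying assumption matching the example, not the product's exact algorithm:

```c
/* Attribute a callee's F+D time to one caller, in proportion to the
   number of calls that caller made (toy model of the a/b/c example). */
double fd_time(double self_time, double callee_fd,
               int calls_from_caller, int total_calls_to_callee)
{
    return self_time + callee_fd * calls_from_caller / total_calls_to_callee;
}
```

With the table's figures, fd_time(5, 20, 1, 2) gives 15 for both a and b, matching the example's F+D values.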
Reloading a Report
If a Performance Profiling report has been updated since the moment you have
opened it in the Performance Profiling Viewer, you can use the Reload command to
refresh the display:
To reload a report:
1. From the View Toolbar, click the Reload button.
Resetting a Report
When you run a test or application node several times, the Performance Profiling
results are appended to the existing report. The Reset command clears previous
Performance Profiling results and starts a new report.
To reset a report:
1. From the View Toolbar, click the Reset button.
• F+D Time (%) > and F+D Time (%) <: Function and descendant time, expressed
in percentage, greater or lower than the specified value.
• Average > and Average <: The average time spent executing the function
greater or lower than the specified value.
Runtime Tracing
Runtime Tracing is a feature for monitoring real-time dynamic interaction analysis of
your C, C++ and Java source code. Runtime Tracing uses exclusive Source Code
Insertion (SCI) instrumentation technology to generate trace data, which is turned
into UML sequence diagrams within the Test RealTime GUI.
In Test RealTime, Runtime Tracing can run either as a standalone product, or in
conjunction with a Component Testing or System Testing test node.
• You associate Runtime Tracing with an existing test node or application code.
• You build and execute your code in Test RealTime.
• The application under test, instrumented with the Runtime Tracing feature,
then directs output to the UML/SD Viewer, which provides a real-time UML
sequence diagram of your application's behavior.
In this C++ example, functions and static methods are attached to the World instance.
Objects are labelled with obj<number>:<classname>
The black cross represents the destruction of the instance.
Constructors are displayed in green.
Destructors are blue.
Return messages are dotted red lines.
Other functions and methods are black.
The main() is a function of the World instance called by the same World instance.
Advanced
Multi-Thread Support
Runtime Tracing can be configured for use in a multi-threaded environment such as
Posix, Solaris and Windows.
Multi-thread mode protects Target Deployment Port global variables against
concurrent access. This causes a significant increase in Target Deployment Port size
as well as an impact on performance. Therefore, select this option only when
necessary.
See the Reference Manual for more information about pragma directives.
Automated Testing
The test features provided with Rational Test RealTime allow you to submit your
application to a robust test campaign. Each feature uses a different approach to the
software testing problem, from the use of test drivers stimulating the code under test,
to source code instrumentation testing internal behavior from inside the running
application.
Here is a rundown of the main steps to using the Test RealTime test features:
1. Set up a new project in Test RealTime. This can be done automatically with the
New Project Wizard.
2. Follow the Activity Wizard to add your application source files to the
workspace.
3. Select the source files under test with the Test Generation Wizard to create a test
node. The Wizard guides you through the process of selecting the right test
feature for your needs.
4. Develop the test cases by completing the automatically generated test scripts
with the corresponding script language and native code.
5. Use the Project Explorer to set up the test campaign and add any additional
runtime analysis or test nodes.
6. Run the test campaign to build and execute a test driver with the application
under test.
7. View and analyze the generated test reports.
The test harness interacts with the source code under test and produces test results.
Test execution creates a .rio file.
The .tdc and .rio files are processed together by the Component Testing Report
Generator (attolpostpro). The output is the .xrd report file, which can be viewed and
controlled in the Test RealTime GUI.
Of course, these steps are mostly transparent to the user when the test node is
executed in the Test RealTime GUI.
Integrated Files
This option provides a list of source files whose components are integrated into the
test program after linking.
The Component Testing wizard analyzes integrated files to extract any global
variables that are visible from outside. For each global variable the Parser creates a
default test which is added to an environment named after the file in the .ptu test
script.
Simulated Files
This option gives the Component Testing wizard a list of source files to simulate—or
stub—upon execution of the test.
A stub is a dummy software component designed to replace a component that the
code under test relies on, but cannot use for practicality or availability reasons. A
stub can simulate the response of the stubbed component.
The Component Testing parser analyzes the simulated files to extract the global
variables and functions that are visible from outside. For each file, a DEFINE STUB
block is generated in the .ptu test script.
By default, no simulation instructions are generated.
Additional Files
Additional files are merely dependency files that are added to the Component
Testing test node, but ignored by the source code parser. Additional files are
compiled with the rest of the test node but are not instrumented.
Test RealTime - User Guide
You can toggle instrumentation of a source file by using the Properties Window
dialog box.
Service/Test Tab
Use this tab to select one or several SERVICEs or TESTs as defined in the .ptu test
script. During execution, the Component Testing node plays the selected SERVICEs
or TESTs.
Family Tab
Use this tab to select one or several families as defined in the .ptu test script. During
execution, the Component Testing node plays the selected families.
Variable Only
This evaluation strategy setting generates both the initial and expected values of each
variable evaluated by the program during execution.
This is possible only for variables whose expression of initial or expected value is not
reducible by the Test Compiler. For arrays and structures in which one of the
members is an array, this evaluation is not given for the initial values. For the
expected values, however, it is given only for Failed items.
Value Only
With this setting, the test report displays for each variable both the initial value and
the expected value defined in the test script.
Combined Evaluation
The combined evaluation setting combines both settings. The test report thus
displays the initial value, the expected value defined in the test script, and the value
found during execution if that value differs from the expected value.
Please refer to the Rational Test RealTime Reference Manual for further information
about test script keywords.
• The HEADER Statement: specifies the name and version number of the module
being tested, as well as the version number of the tested source file. This
information is displayed in the test report.
Variables
Testing Variables
One of the main features of Component Testing for Ada is its ability to compare
initial values, expected values and actual values of variables during test execution. In
the Ada Test Script Language, this is done with the VAR statement.
The VAR statement specifies both the test start-up procedure and the post-execution
test for simple variables. This instruction uses three parameters:
• Name of the variable under test: this can be a simple variable, an array element,
or a field of a record. It is also possible to test an entire array, part of an array or
all the fields of a record.
• Initial value of the variable: identified by the keyword INIT.
• Expected value of the variable after the procedure has been executed: identified
by the keyword EV.
Declare variables under test with the VAR statement, followed by the declaration
keywords:
• INIT = for an assignment
Testing Intervals
You can test an expected value within a given interval by replacing EV with the
keywords MIN and MAX.
You can also use this form on alphanumeric variables, where character strings are
considered in alphabetical order ("A"<="B"<="C").
Example
The following example demonstrates how to test a value within an interval:
TEST 4
FAMILY nominal
ELEMENT
VAR a, init in {1,2,3}, ev = init
VAR b, init = 3, ev = init
VAR c, init = 0, min = 4, max = 6
#c = add(a,b);
END ELEMENT
END TEST
Testing Tolerances
You can associate a tolerance with an expected value for numerical variables. To do
this, use the keyword DELTA with the expected value EV.
This tolerance can either be an absolute value (the default option) or relative (in the
form of a percentage <value>%).
You can rewrite the test from the previous example as follows:
TEST 5
FAMILY nominal
ELEMENT
VAR a, INIT in {1,2,3}, EV = INIT
VAR b, INIT = 3, EV = INIT
VAR c, INIT = 0, EV = 5, DELTA = 1
#c = add(a,b);
END ELEMENT
END TEST
or
TEST 6
FAMILY nominal
ELEMENT
VAR a, INIT in {1,2,3}, EV = INIT
VAR b, INIT = 3, EV = INIT
VAR c, INIT = 0, EV = 5, DELTA = 20%
#c = add(a,b);
END ELEMENT
END TEST
No Test
It is sometimes difficult to predict the expected result for a variable, such as when a
variable holds the current date or time. In this case, you can avoid specifying an
expected output.
Example
The following script shows an example of an omitted test:
TEST 7
FAMILY nominal
ELEMENT
VAR a, init in {1,2,3}, ev = init
VAR b, init = 3, ev = init
VAR c, init = 0, ev ==
#c = add(a,b);
END ELEMENT
END TEST
Testing an Expression
To test the return value of an expression, rather than declaring a local variable to
memorize the value under test, you can directly test the return value with the VAR
instruction.
In some cases, you must leave out the initialization part of the instruction.
Example
The following example places the call of the add function in a VAR statement:
TEST 12
FAMILY nominal
ELEMENT
VAR a, init = 1, ev = init
VAR b, init = 3, ev = init
VAR add(a,b), ev = 4
END ELEMENT
END TEST
In this example, you no longer need the variable c. The resulting test report
displays an Unknown status for the expression, indicating that it has not been
tested.
All syntax examples of expected values are still applicable, even in this particular
case.
Arrays
Testing Arrays
With Component Testing for Ada, you can test arrays in much the same way as you
test variables. In the Ada Test Script Language, this is done with the ARRAY
statement.
The ARRAY statement specifies both the test start-up procedure and the
post-execution test for array variables. This instruction uses three parameters:
• Name of the variable under test: specifies the name of the array in any of the
following ways:
• To test one array element, conform to the Ada syntax: histo(0).
• To test the entire array, specify the array name without bounds; the size of the
array is deduced by analyzing its declaration. This can only be done for
well-defined arrays.
• To test a part of the array, specify the lower and upper bounds within which the
test will be run, separated with two periods (..), as in:
histo(1..SIZE_HISTO)
• Initial value of the array: identified by the keyword INIT.
• Expected value of the array after the procedure has been executed: identified by
the keyword EV.
Declare variables under test with the ARRAY statement, followed by the declaration
keywords:
• INIT = for an assignment
• INIT == for no initialization
• EV = for a simple test.
It does not matter where the ARRAY instructions are located with respect to the test
procedure call since the Ada code generator separates ARRAY instructions into two
parts:
• The array test is initialized with the ELEMENT instruction
• The actual test against the expected value is done with the END ELEMENT
instruction
Whereas an expression initializes all the array elements in the same way, you can
also initialize each element by using an enumerated list of expressions between
parentheses "()". In this case, you must specify a value for each array element.
Furthermore, you can precede every element in this list of initial or expected values
with the array index of the element concerned followed by the characters "=>". The
following example illustrates this form:
ARRAY histo(0..3), init = (0 => 0, 1 => 10, 2 => 100, 3 => 10)
...
You can also initialize and test multidimensional arrays with a list of expressions, as
follows. In this case, the previously mentioned rules apply to each dimension.
ARRAY image, init = (0, 1=>4, others=>(1, 2, others=>100)) ...
Example
You can specify a value for all the as yet undefined elements by using the keyword
others, as the following example illustrates:
TEST 2
FAMILY nominal
ELEMENT
VAR x1, init = 0, ev = init
VAR x2, init = SIZE_IMAGE-1, ev = init
VAR y1, init = 0, ev = init
VAR y2, init = SIZE_IMAGE-1, ev = init
ARRAY image, init = (others=>(others=>100)), ev = init
ARRAY histo, init = 0,
& ev = (100=>SIZE_IMAGE*SIZE_IMAGE, others=>0)
VAR status, init ==, ev = 0
#status = compute_histo(x1, y1, x2, y2, histo);
END ELEMENT
END TEST
You can use this form of initialization and testing with one or more array dimensions.
Example
The following example tests the two arrays read_image and extern_image, which
have been declared in the same way. Every element from the extern_image array is
assigned to the corresponding read_image array element.
TEST 4
FAMILY nominal
#read_image(extern_image,"image.bmp");
ELEMENT
VAR x1, init = 0, ev = init
VAR x2, init = SIZE_IMAGE-1, ev = init
VAR y1, init = 0, ev = init
VAR y2, init = SIZE_IMAGE-1, ev = init
ARRAY image, init = extern_image, ev = init
ARRAY histo, init = 0, ev ==
VAR status, init ==, ev = 0
#status = compute_histo(x1, y1, x2, y2, histo);
END ELEMENT
END TEST
Records
Testing Records
To test all the fields of a structured variable or record, use a single STR instruction to
define the initializations and expected values of the structure.
The STR statement specifies both the test start-up procedure and the post-execution
test for structured variables. This instruction uses three parameters:
• Name of the variable under test: this can be a simple variable, an array element,
or a field of a record. It is also possible to test an entire array, part of an array or
all the fields of a record.
• Initial value of the variable: identified by the keyword INIT.
• Expected value of the variable after the procedure has been executed: identified
by the keyword EV.
Declare variables under test with the STR statement, followed by the declaration
keywords:
• INIT = for an assignment
• INIT == for no initialization
• EV = for a simple test.
It does not matter where the STR instructions are located with respect to the test
procedure call since the Ada code generator separates STR instructions into two
parts:
• The variable test is initialized with the ELEMENT instruction
• The actual test against the expected value is done with the END ELEMENT
instruction
No Test
You can only initialize and test records with the following forms:
• INIT =
• INIT ==
• EV =
• EV ==
If a field of a structured variable needs to be initialized or tested in a different way,
you can omit its initial and expected values from the global test of the structured
variable, and run a separate test on this field.
The following example illustrates this:
TEST 4
FAMILY nominal
ELEMENT
VAR l, init = NIL, ev = NONIL
VAR l.all, init == , ev = (next=>NIL,prev=>NIL)
VAR s, init in ("foo","bar"), ev = init
VAR l.str, init ==, ev(s) in ("foo","bar")
#push(l,s);
END ELEMENT
END TEST
Stub Simulation
Stub simulation is based on the idea that subroutines to be simulated are replaced
with other subroutines generated in the test driver. These simulated subroutines are
often referred to as stubs.
Stubs use the same interface as the simulated subroutines; only the body of the
subroutine is replaced.
Stubs have the following roles:
• Check in and in out parameters against the simulated subroutine. If there is a
mismatch, the values are stored.
• Assign out and in out parameters from the simulated procedure
• Return a value for a simulated function
To generate stubs, the Test Script Compiler needs to know the specification of the
compilation units that are to be simulated.
Passing parameters by pointer can lead to problems of ambiguity regarding the data
actually passed to the function. For example, a parameter that is described in a
prototype by int *x can be passed in the following way:
int *x as input ==> f(x)
int x as output or input/output ==> f(&x)
int x[10] as input ==> f(x)
int x[10] as output or input/output ==> f(x)
Stub Definition
The following example highlights the simulation of all functions and procedures
declared in the specification of file_io. A new body is generated for file_io in file
<testname>_fct_simule.ada.
HEADER file, 1, 1
BEGIN
DEFINE STUB file_io
END DEFINE
You must always define stubs after the BEGIN instruction and outside any SERVICE
block.
Simulation of Generic Units
You can stub a generic unit like an ordinary unit, with the following restriction:
parameters of a procedure or function, and function return types, whose type is
declared in the generic unit or is a parameter of this unit, must use the _NO mode.
For example, if you want to stub the following generic package:
GENERIC
TYPE TYPE_PARAM is .....;
Package GEN is
TYPE TYPE_INTO is ....;
procedure PROC(x: TYPE_PARAM; y: in out TYPE_INTO; Z: out integer);
function FUNC return TYPE_INTO;
end GEN;
You can add a body to procedures and functions to process any parameters that
require the _NO mode.
Note With some compilers, when stubbing a unit by using a WITH operator
on the generic package, cross dependencies may occur.
Separate Body Stub
In some cases, you might need to define the body stub separately, with a proprietary
behavior. Declare the stub separately as shown in the following example, and then
you can define a body for it:
DEFINE STUB <STUB NAME>
# procedure My_Procedure(...) is separate ;
END DEFINE
The Ada Test Script Compiler will not generate a body for the service My_Procedure,
but will expect you to do so.
Using Stubs
Range of Values of STUB Parameters
When using stubs, you may need to define an authorized range for each STUB
parameter. Furthermore, you can summarize several calls in one line associated with
this parameter.
Write such STUB lines as follows:
STUB F 1..10 => (1<->5)30
This expression means that the STUB F will be called 10 times with its parameter
having a value between 1 and 5, and its return value is always 30.
You can combine this with several lines; the result looks like the following:
STUB F 1..10 => (1<->5)30,
& 11..19 => (1<->5)0,
& 20..30 => (<->) 1,
& others =>(<->)-1
Raise-exception Stubs
You can force a STUB to raise a user-defined (or pre-defined) exception when it is
called with particular values.
The appropriate syntax is as follows:
STUB P(1E+307<->1E+308) RAISE STORAGE_ERROR
If the STUB P happens to be called with its parameter between 1E+307 and 1E+308,
the exception STORAGE_ERROR will be raised during execution of the application;
the test will be FALSE otherwise.
Suppose that the current stubbed unit contains at least one overloaded sub-program.
When calling this particular STUB, you will need to qualify the procedure or
function. You can do this easily by writing the STUB as follows:
STUB A.F (1<->2:REAL)RAISE STANDARD.CONSTRAINT_ERROR
The STUB A.F is called once and will raise a CONSTRAINT_ERROR if its parameter,
of type REAL, has a value between 1 and 2.
Compilation Sequence
The Ada Test Script Compiler generates three files:
• <testname>_fct_simule.ada for the body of simulated functions and procedures
• <testname>_var_simule.ada for the declaration of simulation variables
Sizing Stubs
For each STUB, the Component Testing feature allocates memory to:
• Store the value of the input parameters during the test
• Store the values assigned to output parameters before the test
A stub can be called several times during the execution of a test. By default, when
you define a STUB, the Component Testing feature allocates space for 10 calls. If you
expect the STUB to be called more than 10 times, you must specify the number of
expected calls in the STUB declaration statement.
In the following example, the script allocates storage space for the first 17 calls to the
stub:
DEFINE STUB file 17
#int open_file(char _in f[100]);
#int create_file(char _in f[100]);
#int read_file(int _in fd, char _out l[100]);
#int write_file(int fd, char _in l[100]);
#int close_file(int fd);
END DEFINE
Note You can also reduce the size when running tests on a target platform.
You can describe each call to a stub by adding the specific cases before the preceding
instruction, for example:
<call i> =>
or
<call i> .. <call j> =>
Advanced Stubs
This section covers some of the more complex notions when dealing with stub
simulations in Component Testing for Ada.
Creating Complex Stubs
If necessary, you can make stub operation more complex by inserting native Ada
code into the body of the simulated function. You can do this easily by adding the
lines of native code after the prototype.
Example
The following stub definition makes extensive use of native Ada code.
DEFINE STUB file
#function open_file(f:string) return file_t is
#begin
# raise file_error;
#end;
END DEFINE
The declaration of the pointer as an array with explicit size is necessary to memorize
the actual parameters when calling the stubbed function. For each call you must
specify the exact number of required array elements.
ELEMENT
STUB Funct.function 1 => (({'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 0x0},
& {'i', 'h', 'g', 'f', 'e', 'd', 'c', 'b', 'a', 0x0}))
#call_the_code_under_test();
END ELEMENT
This notation compares the actual values and not the pointers.
The following line shows how to pass _inout parameters:
({<in_parameter>},{<out_parameter>})
where <Generic_Package> is the name of the generic unit under test, and <Instance>
is the name of the unit instantiated from the generic. The <Procedure_Name>
parameter is not mandatory. Component Testing uses Attol_Test by default.
This syntax automatically generates a separate procedure <Procedure_Name> of
<Generic_Package> and then calls the procedure <Instance>.<Procedure_Name>
(which is code generated by the Component Testing Preprocessor).
Note This technique also allows testing of private types within the generic
package.
Example
Consider the following Ada compilation unit:
generic
   type t is private;
procedure swap(x, y : in out t);

procedure swap(x, y : in out t) is
   z : t;
begin
   z := x;
   x := y;
   y := z;
end swap;

with swap;
procedure swap_integer is new swap(integer);
You can test the swap_integer procedure just like any other procedure:
HEADER swap_integer,,
#with swap_integer;
BEGIN
SERVICE swap_integer
#x,y:integer;
TEST 1
FAMILY nominal
ELEMENT
VAR x, init = 1, ev = 4
VAR y, init = 4, ev = 1
#swap_integer(x,y) ;
END ELEMENT
END TEST
END SERVICE
The package specification is not modified, but the test procedure is called at every
elaboration of the package. Therefore, you need to remove or replace this call with an
empty procedure after the test phase.
Call by the Main Procedure
In this case, you must add an additional line in the specification of the unit tested:
PACKAGE <name>
...
PROCEDURE ATTOL_TEST;
...
END;
PACKAGE BODY <name> is
...
PROCEDURE ATTOL_TEST is SEPARATE;
END;
FAMILY nominal
ELEMENT
FORMAT pointer_name = string_ptr
-- Then your variable pointer_name will be first initialized as a pointer
....
VAR pointer_name, INIT="l11c01pA00", ev=init
-- It is initialized as pointing at the string "l11c01pA00",
-- and then string comparisons are done with the expected values using str_comp.
This instruction block is placed just before the END ELEMENT statement of the Test
Script.
Example
The source files and complete .ptu script for following example are provided in the
examples/Ada_Task directory.
In this example, the task calls a stubbed procedure. Therefore the task must be
terminated from within the Test Script. Two different techniques of starting and
stopping the task are shown here in Test 1 and Test 2.
HEADER Prg_Under_Tst, 0.3, 0.0
#with Pck_Stub;
BEGIN Prg_Under_Tst
DEFINE STUB Pck_Stub
#with Text_IO;
#procedure Proc_Stubbed is
#begin
# Text_IO.Put_Line("Stub called.");
#end;
END DEFINE
SERVICE S1
SERVICE_TYPE extern
#Param_1 : duration;
#task1 : Prioritaire;
TEST 1
FAMILY nominal
ELEMENT
VAR Param_1, init = duration(0), ev = init
STUB Pck_Stub.Proc_Stubbed 1..1 => ()
#Task1.Unit_Testing_Exit_Loop;
#delay duration(5);
#Task1.Unit_Testing_Wait_Termination;
END ELEMENT
END TEST -- TEST 1
TEST 2
FAMILY nominal
ELEMENT
VAR Param_1, init = duration(2), ev = init
STUB Pck_Stub.Proc_Stubbed 1..1 => ()
#declare
# Task2 : T_Prio := new Prioritaire;
#begin
# Task2.Do_Something_Useful(Param_1);
# Task2.Unit_Testing_Exit_Loop;
# Task2.Unit_Testing_Wait_Termination;
#end;
END ELEMENT
END TEST -- TEST 2
END SERVICE --S1
In the BEGIN line of the script, it is not necessary to add the name of the separate
procedure Attol_Test, as this is the default name.
The user code within the STUB contains a context clause and some custom native
Ada instructions.
In both Test 1 and Test 2, it is necessary to stop not only the main loop of the task
before reaching the END ELEMENT instruction, but also the task itself, so that the
tester can return.
Task1 and Task2 could run in parallel; however, the test report would be unable to
distinguish between the STUB calls coming from either task, and would show the
calls cumulatively.
The entry points Unit_Testing_Exit_Loop and Unit_Testing_Wait_Termination can
be considered as implementations for testing purposes only. They might not be used
in the deployment phase.
The second test is Failed in the report because the loop runs twice. This allows you
to check that the dump goes through smoothly.
Separate Compilation
You can make internal procedures and variables and the structure of private types
visible from the test program, by including them in the body of the unit under test
with a separate Ada instruction.
You must add the following line at the end of the body of the unit tested:
PACKAGE BODY <name>
...
PROCEDURE Test is separate;
END;
Defining the procedure Test this way allows you to access every element of the
specification and also those defined in the body.
If the test script requires access to items from Second_Package, it can call the
corresponding procedure from within an ELEMENT block of this .ptu test script.
ELEMENT
-- some VAR instructions here
#Second_Package.Something;
#-- here is the call to the tested procedure
END ELEMENT
Unexpected Exceptions
The generated test driver detects all raised exceptions. If a raised exception is not
specified in the test script, it is displayed in the report.
When the exception is a standard Ada exception (CONSTRAINT_ERROR,
NUMERIC_ERROR, PROGRAM_ERROR, STORAGE_ERROR, TASKING_ERROR),
the exception name is displayed in the test report.
Unknown Values
In some cases, Component Testing for Ada is unable to produce a default value in the
.ptu test script. When this occurs, Component Testing produces an invalid value with
the prefix _Unknown.
Such cases include:
• Private values: _Unknown_private_value
• Function pointers: _Unknown_access_to_function
• Tagged limited private: _Unknown_access_to_tagged_limited_private
Before compiling you must manually replace these _Unknown values with valid
values.
Test Iterations
You can execute a test case several times by adding the number of iterations at the
end of the TEST instruction, for example:
TEST <name> LOOP <number>
You can add other test cases to the current test case by using the instruction
NEXT_TEST:
TEST <name>
...
NEXT_TEST
...
END TEST
This instruction allows a new test case to be added that will be linked to the
preceding test case. Each loop introduced by the instruction LOOP relates to the test
case to which it is attached.
Test cases introduced by the instruction NEXT_TEST can be dissociated after the
tests are run, unlike the different phases of a test case introduced by the ELEMENT
structure, which cannot.
Test phases introduced by the instruction ELEMENT can be included in the loops
created by the LOOP instruction.
Viewing Reports
After test execution, depending on the options selected, a series of Component
Testing for Ada test reports are produced.
Report Explorer
The Report Explorer displays each element of a test report with a Passed or Failed
symbol.
• Elements marked as Failed are either a failed test or an element that contains at
least one failed test.
• Elements marked as Passed are either passed tests or elements that contain only
passed tests.
Test results are displayed for each instance, following the structure of the .ptu test
script.
Report Header
Each test report contains a report header with:
• The version of Test RealTime used to generate the test as well as the date of the
test report generation
• The path and name of the project files used to generate the test
• The total number of test cases Passed and Failed. These statistics are calculated
on the actual number of test elements listed in the sections below
Test Results
The graphical symbols in front of each node indicate whether the test, item, or
variable is Passed or Failed:
• A test is Failed if it contains at least one failed variable. Otherwise, the test is
considered Passed.
You can obtain the following data items if you click with the pointer on the
Information node:
• Number of executed tests
• Number of correct tests
• Number of failed tests
A variable is incorrect if the expected value and the value obtained are not identical,
or if the value obtained is not within the expected range.
If a variable belongs to an environment, an environment header is displayed first.
In the report, variables are displayed according to the value of the Display Variables
setting of the Component Testing test node.
The following table summarizes the editing rules:
The Initial and Expected Values option changes the way initial and expected values
are displayed in the report.
The Array and Structure Display option indicates the way in which Component
Testing processes variable array and structure statements. This option is part of the
Component Testing Settings for C dialog box.
Both the instrumented application and the test driver provide output data which is
displayed within Test RealTime.
Integrated Files
This option provides a list of source files whose components are integrated into the
test program after linking.
The Component Testing wizard analyzes integrated files to extract any global
variables that are visible from outside. For each global variable the Parser declares an
external variable and creates a default test which is added to an environment named
after the file in the .ptu test script.
By default, any symbols and types that could be exported from the source file under
test are declared again in the test script.
Simulated Files
This option gives the Component Testing wizard a list of source files to simulate—or
stub—upon execution of the test.
A stub is a dummy software component designed to replace a component that the
code under test relies on, but cannot use for practicality or availability reasons. A
stub can simulate the response of the stubbed component.
The Component Testing parser analyzes the simulated files to extract the global
variables and functions that are visible from outside. For each file, a DEFINE STUB
block, which contains the simulation of the file's external global variables and
functions, is generated in the .ptu test script.
By default, no simulation instructions are generated.
Additional Files
Additional files are merely dependency files that are added to the Component
Testing test node, but ignored by the source code parser. Additional files are
compiled with the rest of the test node but are not instrumented.
For example, Microsoft Visual C resource files can be compiled inside a test node by
specifying them as additional files.
You can toggle a source file from under test to additional by using the Properties
Window dialog box.
Variable Only
This evaluation strategy setting generates both the initial and expected values of each
variable evaluated by the program during execution.
This is possible only for variables whose expression of initial or expected value is not
reducible by the Test Compiler. For arrays and structures in which one of the
members is an array, this evaluation is not given for the initial values. For the
expected values, however, it is given only for Failed items.
Value Only
With this setting, the test report displays for each variable both the initial value and
the expected value defined in the test script.
Combined Evaluation
The combined evaluation setting combines both settings. The test report thus
displays the initial value, the expected value defined in the test script, and the value
found during execution if that value differs from the expected value.
Structure Statements
The following statements allow you to describe the structure of a test.
HEADER
For documentation purposes, specifies the name and version number of the module
being tested, as well as the version number of the tested source file. This information
is displayed in the test report.
BEGIN
Marks the beginning of the generation of the actual test program.
SERVICE
Contains the test cases relating to a given service. A service usually refers to a
procedure or function. Each service has a unique name (in this case add). A SERVICE
block terminates with the instruction END SERVICE.
TEST
Each test case has a number or identifier that is unique within the block SERVICE.
The test case is terminated by the instruction END TEST.
FAMILY
Qualifies the test case to which it is attached. The qualification is free (here nominal).
A list of qualifications can be specified (for example: family, nominal, structure) in
the Tester Configuration dialog box.
ELEMENT
Describes a test phase in the current test case. The phase is terminated by the
instruction END ELEMENT.
The different phases of the same test case cannot be dissociated after the tests are run,
unlike the test cases introduced by the instruction NEXT_TEST. However, the test
phases introduced by the instruction ELEMENT are included in the loops created by
the instruction LOOP.
The three-level structure of the test scripts has been deliberately kept simple. This
structure allows:
• A clear and structured presentation of the test script and report
• Tests to be run selectively on the basis of the service name, the test number, or
the test family.
Variables
Testing Variables
One of the main features of Component Testing for C is its ability to compare initial
values, expected values and actual values of variables during test execution. In the C
Test Script Language, this is done with the VAR statement.
The VAR statement specifies both the test start-up procedure and the post-execution
test for simple variables. This instruction uses three parameters:
• Name of the variable under test: this can be a simple variable, an array element,
or a field of a record. It is also possible to test an entire array, part of an array or
all the fields of a record.
• Initial value of the variable: identified by the keyword INIT.
• Expected value of the variable after the procedure has been executed: identified
by the keyword EV.
Declare variables under test with the VAR statement, followed by the declaration
keywords:
• INIT = for an assignment
• INIT == for no initialization
• EV = for a simple test.
It does not matter where the VAR instructions are located with respect to the test
procedure call since the C code generator separates VAR instructions into two parts:
• The variable test is initialized with the ELEMENT instruction
• The actual test against the expected value is done with the END ELEMENT
instruction
Many other forms are available that enable you to create more complex test scenarios.
Example
The following example demonstrates typical use of the VAR statement:
HEADER add, 1, 1
#extern int add(int a, int b);
BEGIN
SERVICE add
#int a, b, c;
TEST 1
FAMILY nominal
ELEMENT
VAR a, init = 1, ev = init
VAR b, init = 3, ev = init
VAR c, init = 0, ev = 4
#c = add(a,b);
END ELEMENT
END TEST
END SERVICE
Testing Intervals
You can test an expected value within a given interval by replacing EV with the
keywords MIN and MAX.
You can also use this form on alphanumeric variables, where character strings are
considered in alphabetical order ("A"<"B"<"C").
Example
The following example demonstrates how to test a value within an interval:
TEST 4
FAMILY nominal
ELEMENT
VAR a, INIT in {1,2,3}, EV = INIT
VAR b, INIT = 3, EV = INIT
VAR c, INIT = 0, MIN = 4, MAX = 6
#c = add(a,b);
END ELEMENT
END TEST
Testing Tolerances
You can associate a tolerance with an expected value for numerical variables. To do
this, use the keyword DELTA with the expected value EV.
This tolerance can either be an absolute value (the default option) or relative (in the
form of a percentage <value>%).
You can rewrite the test from the previous example as follows:
TEST 5
FAMILY nominal
ELEMENT
VAR a, INIT in {1,2,3}, EV = INIT
VAR b, INIT = 3, EV = INIT
VAR c, INIT = 0, EV = 5, DELTA = 1
#c = add(a,b);
END ELEMENT
END TEST
or
TEST 6
FAMILY nominal
ELEMENT
VAR a, INIT in {1,2,3}, EV = INIT
VAR b, INIT = 3, EV = INIT
VAR c, INIT = 0, EV = 5, DELTA = 20%
#c = add(a,b);
END ELEMENT
END TEST
No Test
It is sometimes difficult to predict the expected result for a variable, for example
when the variable holds the current date or time. In such cases, you can omit the
expected value.
Example
The following script shows an example of an omitted test:
TEST 7
FAMILY nominal
ELEMENT
VAR a, init in {1,2,3}, ev = init
VAR b, init = 3, ev = init
VAR c, init = 0, ev ==
#c = add(a,b);
END ELEMENT
END TEST
Testing an Expression
To test the return value of an expression, rather than declaring a local variable to
memorize the value under test, you can directly test the return value with the VAR
instruction.
In some cases, you must leave out the initialization part of the instruction.
Example
The following example places the call of the add function in a VAR statement:
TEST 12
FAMILY nominal
ELEMENT
VAR a, init in {1,2,3}, ev = init
VAR b, init(a) with {3,2,1}, ev = init
VAR add(a,b), ev = 4
END ELEMENT
END TEST
Using C Expressions
Component Testing for C allows you to define initial and expected values with
standard C expressions.
All literal values, variable types, functions and most operators available in the C
language are accepted by Component Testing for C.
Note Any Ada instruction between HEADER and the BEGIN instruction
must be encapsulated into a procedure or a package. Context clauses are
possible.
Accessing Global Variables
The extra global variable package is visible from within all units of the test driver.
Variables can be accessed in the ELEMENT blocks, just like any other variable:
VAR My_Globals.Global_Var_Integer, init = 0, EV = 1
Rational Test RealTime processes the .ptu test script in such a way that global
variable package automatically becomes a separate compilable unit.
Declaring Parameters
ELEMENT blocks contain specific instructions that describe the test start-up
procedures and the post-execution tests.
The hash character (#) at the beginning of a line indicates a native language
statement written in C.
This declaration is introduced after the SERVICE instruction because it is local to the
SERVICE block; it is invalid outside this block.
It is only necessary to declare parameters of the procedure under test. Global
variables are already present in the module under test or in any integrated modules,
and do not need to be declared locally.
Structured Variables
Testing a Structured Variable
To test all the fields of a structured variable, use a single instruction (STR) to define
their initializations and expected values:
TEST 2
FAMILY nominal
ELEMENT
VAR l, init = NIL, ev = NONIL
STR *l, init == , ev = {"myfoo",NIL,NIL}
VAR s, init = "myfoo", ev = init
#l = push(l,s);
END ELEMENT
END TEST
Testing a Structured Variable with C Expressions
To initialize and test a structured variable or record, initialize or test all the fields
using a list of native language expressions (one per field). The following example
(taken from list.ptu) illustrates this form:
STR *l, init == , ev = {"myfoo",NIL,NIL}
Each element in the list must correspond to the structured variable field as it was
declared.
Every expression in the list must obey the rules described so far, according to the
type of field being initialized and tested:
• An expression for simple fields or arrays of simple variables initialized using an
expression
• A list of expressions for array fields initialized using an enumerated list
• A list of expressions for structured fields
• INIT ==
• EV =
• EV ==
If a field of a structured variable needs to be initialized or tested in a different way,
you can omit its initial and expected values from the global test of the structured
variable, and run a separate test on this field.
The following example illustrates this:
TEST 4
FAMILY nominal
ELEMENT
VAR l, init = NIL, ev = NONIL
VAR *l, init == , ev = {,NIL,NIL}
VAR s, init in {"foo","bar"}, ev = init
VAR l->str, init ==, ev(s) in {"foo","bar"}
#l = push(l,s);
END ELEMENT
END TEST
Using field names, write this as follows:
VAR *l, init ==, ev = {next=>NIL,prev=>NIL}
C Unions
If the structured variable involves a C union (defined using the union instruction)
rather than a structure (defined using the struct instruction), you need to specify
which field of the union is tested. The initial and test values relate to only one of the
fields of the union, whereas, for a structure, they relate to all the fields.
The list.c example demonstrates this if you modify the structure of the list, such that
the value stored at each node is an integer, a floating-point number, or a character
string:
list1.h:
enum node_type { INTEGER, REAL, STRING };
typedef struct t_list {
enum node_type type;
union {
long integer_value;
double real_value;
char * string_value;
} value;
struct t_list * next;
struct t_list * prev;
} T_LIST, * PT_LIST;
In this case, the test becomes:
HEADER list1, 1, 1
##include "list1.h"
BEGIN
SERVICE push1
#PT_LIST l;
#enum node_type t;
#char s[10];
TEST 1
FAMILY nominal
ELEMENT
VAR l, init = NIL, ev = NONIL
VAR t, init = my_string, ev = init
VAR *l, init == ,
& ev = {STRING,{string_value=>"myfoo"}, NIL,NIL}
VAR s, init = "myfoo", ev = init
#l = push1(l, t, s);
END ELEMENT
END TEST
END SERVICE
The use of string_value => indicates that the chosen field in the union is string_value.
If no field is specified, the first field in the union is taken by default.
Arrays
Testing Arrays
With Component Testing for C, you can test arrays in much the same way as you test
variables. In the C Test Script Language, this is done with the ARRAY statement.
The ARRAY statement specifies both the test start-up procedure and the post-
execution test for array variables. This instruction uses three parameters:
• Name of the variable under test: specifies the name of the array in any of the
following ways:
• To test one array element, conform to the C syntax: histo[0].
• To test the entire array without specifying its bounds, the size of the array is
deduced by analyzing its declaration. This can only be done for well-defined
arrays.
• To test a part of the array, specify the lower and upper bounds within which the
test will be run, separated with two periods (..), as in:
histo[1..SIZE_HISTO]
• Initial value of the array: identified by the keyword INIT.
• Expected value of the array after the procedure has been executed: identified by
the keyword EV.
Declare variables under test with the ARRAY statement, followed by the declaration
keywords:
• INIT = for an assignment
• INIT == for no initialization
• EV = for a simple test.
It does not matter where the ARRAY instructions are located with respect to the test
procedure call, since the C code generator splits each ARRAY instruction into two
parts:
• The array is initialized when the ELEMENT instruction is reached
• The actual test against the expected value is performed at the END ELEMENT
instruction
To initialize and test an array, specify the same value for all the array elements.
You can use the same expressions for initial and expected values as those used for
simple variables (literal values, constants, variables, functions, and C operators).
Use the ARRAY instruction to run simple tests on all or only some of the elements in
an array.
You can also specify a value for all the as yet undefined elements by using the
keyword others, as the following example illustrates:
TEST 2
FAMILY nominal
ELEMENT
VAR x1, init = 0, ev = init
VAR x2, init = SIZE_IMAGE-1, ev = init
VAR y1, init = 0, ev = init
VAR y2, init = SIZE_IMAGE-1, ev = init
ARRAY image, init = {others=>{others=>100}}, ev = init
ARRAY histo, init = 0,
& ev = {100=>SIZE_IMAGE*SIZE_IMAGE, others=>0}
VAR status, init ==, ev = 0
#status = compute_histo(x1, y1, x2, y2, histo);
END ELEMENT
END TEST
Note The form {others => <expression>} is equivalent to initializing and
testing all array elements with the same expression.
You can also initialize and test multidimensional arrays with a list of expressions, as
follows. In this case, the previously mentioned rules apply to each dimension.
ARRAY image, init = {0, 1=>4, others=>{1, 2, others=>100}} ...
Note Some C compilers allow you to omit levels of brackets when initializing
a multidimensional array. The Unit Testing Scripting Language does not
accept this non-standard extension to the language.
Testing Character Arrays
Character arrays are a special case. Variables of this type are processed as character
strings delimited by quotes.
You therefore need to initialize and test character arrays using character strings, as
the following list example illustrates.
If you want to test character arrays like other arrays, you must use a format
modification declaration (FORMAT instruction) to change them to arrays of integers.
Example
The following list example illustrates this type of modification:
TEST 2
FAMILY nominal
FORMAT T_LIST.str[] = int
ELEMENT
VAR l, init = NIL, ev = NONIL
VAR s, init = "myfoo", ev = init
VAR l->str[0..5], init == , ev = {'m','y','f','o','o',0}
#l = push(l,s);
END ELEMENT
END TEST
Testing an Array with Another Array
The following example illustrates a form of initialization that consists of initializing
or comparing an array with another array that has the same declaration:
TEST 4
FAMILY nominal
ELEMENT
VAR x1, init = 0, ev = init
VAR x2, init = SIZE_IMAGE-1, ev = init
VAR y1, init = 0, ev = init
VAR y2, init = SIZE_IMAGE-1, ev = init
ARRAY image, init = extern_image, ev = init
ARRAY histo, init = 0, ev ==
VAR status, init ==, ev = 0
#read_image(extern_image,"image.bmp");
#status = compute_histo(x1, y1, x2, y2, histo);
END ELEMENT
END TEST
The image and extern_image arrays have been declared in the same way. Every
element of the extern_image array is assigned to the corresponding image array
element.
You can use this form of initialization and testing with one or more array dimensions.
Testing an Array Whose Elements are Unions
When testing an array of unions, detail your tests for each member of the array, using
VAR lines in the ELEMENT block.
Example
Consider the following variables:
#typedef struct {
# int test1;
# int test2;
# int test3;
# int test4;
# int test5;
# int test6;
# } Test;
#typedef struct {
# int champ1;
# int champ2;
# int champ3;
# } Champ;
#typedef struct {
# int toto1;
# int toto2;
# } Toto;
#typedef union {
# Test A;
# Champ B;
# Toto C;
# } T_union;
#extern T_union Tableau[4];
Stub Simulation
Stub simulation is based on the idea that certain functions are to be simulated and are
therefore replaced with other functions, which are generated in the test driver. These
generated functions, or stubs, have the same interface as the simulated functions, but
their bodies are replaced.
These stubs have the following roles:
• Store input values to simulated functions
• Assign output values from simulated functions
To be able to generate these stubs, the Test Script Compiler needs to know:
• The prototypes of the functions that are to be simulated
• The method of passing each parameter (input, output, or input/output).
When using the Component Testing Wizard, you specify the functions that you want
to stub. This automatically adds the corresponding code to the .ptu test script. On
execution of the test, Component Testing for C generates the stub in the test driver,
which includes:
• A variable array for the input values of the stub
• A variable array for the output values of the stub
• A body declaration for the stub function
Function Prototypes
When generating a stub for a function, Test RealTime considers the first prototype of
the function that is encountered, which can be:
• The declaration of the function in an included header file.
• The declaration of the function in a DEFINE STUB statement in the .ptu test script.
This means that the declaration of the function contained in the DEFINE STUB
statement is ignored if the function was previously declared in a header file.
Passing Parameters
Passing parameters by pointer can lead to problems of ambiguity regarding the data
actually passed to the function. For example, a parameter that is described in a
prototype by int *x can be passed in the following way:
int *x as input ==> f(x)
int x as output or input/output ==> f(&x)
int x[10] as input ==> f(x)
int x[10] as output or input/output ==> f(x)
Stub Definition
The following example describes a set of function prototypes to be simulated in an
instruction block delimited by DEFINE STUB ... END DEFINE:
HEADER file, 1, 1
BEGIN
DEFINE STUB file
#int open_file(char _in f[100]);
#int create_file(char _in f[100]);
#int read_file(int _in fd, char _out l[100]);
#int write_file(int fd, char _in l[100]);
#int close_file(int fd);
END DEFINE
The prototype of each simulated function is described in ANSI form. The following
information is given for each parameter:
• The type of the calling function (char f[100] for example, meaning that the
calling function supplies a character string as a parameter to the open_file
function)
• The method of passing the parameter, which can take the following values:
• _in for an input parameter
• _out for an output parameter
• _inout for an input/output parameter
These values describe how the parameter is used by the called function, and,
therefore, the nature of the test to be run in the stub.
• Only the _in parameters will be tested.
• The _out parameters will not be tested but will be given values by a new
expression in the stub.
• The _inout parameters will be tested and then given values by a new
expression.
Any returned parameters are always taken to be _out parameters.
You must always define stubs after the BEGIN instruction and outside any SERVICE
block.
Modifying Stub Variable Values
You can define stubs so that the variable pointed to is updated with different values
in each test case. For example, to stub the following function:
extern void function_b(unsigned char * param_1);
The global variables are created as if they existed in the simulated file.
Stub Usage
Use the STUB statement to declare that you want to use a stub rather than the
original function. You can use the STUB instruction within environments or test
scenarios.
This STUB instruction tests input parameters and assigns a value to output
parameters each time the simulated function is called.
The following information is required for every stub called in a scenario:
• Test values for the input parameters
• Return values for the output parameters
• Test and return values for the input/output parameters
• Where appropriate, the return value of the called stub
Example
The following example illustrates use of a stub which simulates file access.
SERVICE copy_file
#char file1[100], file2[100];
#int s;
TEST 1
FAMILY nominal
ELEMENT
VAR file1, init = "file1", ev = init
VAR file2, init = "file2", ev = init
VAR s, init == , ev = 1
STUB open_file ("file1")3
STUB create_file ("file2")4
STUB read_file (3,"line 1")1, (3,"line 2")1, (3,"")0
STUB write_file (4,"line 1")1, (4,"line 2")1
STUB close_file (3)1, (4)1
#s = copy_file(file1, file2);
END ELEMENT
END TEST
END SERVICE
The following example specifies that you expect three calls of foo.
STUB STUB1.foo(1)1, (2)2, (3)3
...
#foo(1);
#foo(2);
#foo(4);
The first call has a parameter of 1 and returns 1. The second has a parameter of 2
and returns 2, and the third has a parameter of 3 and returns 3. Any call that does
not match is reported in the test report as a failure.
Sizing Stubs
For each STUB, the Component Testing feature allocates memory to:
• Store the value of the input parameters during the test
• Store the values assigned to output parameters before the test
A stub can be called several times during the execution of a test. By default, when
you define a STUB, the Component Testing feature allocates space for 10 calls. If you
call the STUB more than this you must specify the number of expected calls in the
STUB declaration statement.
In the following example, the script allocates storage space for the first 17 calls to the
stub:
DEFINE STUB file 17
#int open_file(char _in f[100]);
#int create_file(char _in f[100]);
#int read_file(int _in fd, char _out l[100]);
#int write_file(int fd, char _in l[100]);
#int close_file(int fd);
END DEFINE
Note You can also reduce the size when running tests on a target platform
that is short on memory resources.
Replacing Stubs
Stubs can be used to replace a component that is still in development. Later in the
development process, you might want to replace a stubbed component with the
actual source code.
Advanced Stubs
This section covers some of the more complex notions when dealing with stub
simulations in Component Testing for C.
Stub Definition
You can specify in the stub definition that a particular parameter is not to be tested or
given a value. You do this using a modifier of type _no instead of _in, _out or _inout,
as shown in the following example:
DEFINE STUB file
#int open_file(char _in f[100]);
#int create_file(char _in f[100]);
#int read_file(int _no fd, char _out l[100]);
#int write_file(int _no fd, char _in l[100]);
#int close_file(int fd);
END DEFINE
In this example, the fd parameters to read_file and write_file are never tested.
Note You need to be careful when using _no on an output parameter, as no
value will be assigned to it. It will then be difficult to predict the behavior of
the function under test on returning from the stub.
Stub Usage
Parameters that have not been tested (preceded by _no) are completely ignored in the
stub description. The two values of the input/output parameters are located between
brackets as shown in the following example:
DEFINE STUB file
#int open_file(char _in f[100]);
If a stub is called without having been declared in a scenario, an error is raised in the
report, because the number of calls to each stub is always checked.
Functions Using _inout Mode Arrays
To stub a function taking an array in _inout mode, you must provide storage space
for the actual parameters of the function.
The function prototype in the .ptu test script remains as usual:
#extern void function(unsigned char *table);
Declaring the pointer as an array with an explicit size is necessary to store the actual
parameters when the stubbed function is called. For each call, you must specify the
exact number of required array elements.
ELEMENT
STUB Funct.function 1 => (({'a', 'b', 'c', 'd', 'e', 'f', 'g',
'h', 'i', 0x0},
& {'i', 'h', 'g', 'f', 'e', 'd', 'c', 'b', 'a', 0x0}))
#call_the_code_under_test();
END ELEMENT
This notation compares the actual values, not the pointers.
The following line shows how to pass _inout parameters:
({<in_parameter>},{<out_parameter>})
Functions Containing Type Modifiers
Type modifiers can appear in the signature of the function but should not be used
when manipulating any passed variables. When using type modifiers, add the @
prefix to the type modifier keyword.
Test RealTime recognizes @-prefixed type modifiers in the function prototype, but
ignores them when dealing internally with the parameters passed to and from the
function.
This is the default behavior for the const keyword; the @ prefix is not necessary for
const.
Example
Consider a type modifier __foo:
DEFINE STUB tst_cst
#int ModifParam(@__foo float _in param);
END DEFINE
Note In this example, __foo is not a standard ANSI-C feature. To force Test
RealTime to recognize this keyword as a type modifier, you must add the
following line to the .ptu test script:
##pragma attol type_modifier = __foo
Simulating Functions with Varying Parameters
In some cases, functions may be designed to accept a variable number of parameters
on each call.
You can still stub these functions with the Component Testing feature by using the
'...' syntax indicating that there may be additional parameters of unknown type and
name.
In this case, Component Testing can only test the validity of the first parameter.
Example
The standard printf function is a good example of a function that can take a variable
number of parameters:
int printf (const char* param, ...);
To stub the function, you would normally write the following lines in the .ptu test
script. These will produce compilation error messages:
DEFINE STUB Example
#int ConstParam(const int _in param);
END DEFINE
Pointers
Define the stub as in the following example:
#int StubFunction(char* pChar);
DEFINE STUB Stubs
#int StubFunction(void* _in pChar)
END DEFINE
#char MyChar = 'A';
STUB StubFunction(NIL)0, (&MyChar)1
Use the FORMAT statement to force the treatment as a pointer to an unsigned char
and use the TAB instruction to test the variable as a table. For example:
#int StubFunction(char* pChar);
DEFINE STUB Stubs
#int StubFunction(unsigned char _in Chars[4])
END DEFINE
STUB StubFunction({'a','b','c','d'})0, ({'A','B','C','D'})1
C strings
This is the default behavior:
#int StubFunction(char* pString);
DEFINE STUB Stubs
#int StubFunction(char* _in pString )
END DEFINE
STUB StubFunction("abcd")0, ("ABCD")1
Environments
Testing Environments
When drawing up a test script for a service, you usually need to write several test
cases. It is likely that, except for a few variables, these scenarios will be very similar.
You can avoid writing a whole series of similar scenarios by factoring out the items
that are common to all of them.
Furthermore, when a test harness is generated, there are often side-effects from one
test to another, particularly as a result of unchecked modification of global variables.
To avoid these two problems and streamline your test script writing, the Test Script
Language lets you define test environments, introduced by the keyword
ENVIRONMENT.
These test environments are effectively a set of default tests performed on one or
more variables.
Declaring Environments
A test environment consists of a list of variables for which you specify:
• Default initialization conditions for before the test
• Default expected results for after the test
Use the VAR, ARRAY, and STR instructions described previously to specify the
status of the variables before and after the test.
You can only use an environment once you have defined it.
Delimit an environment using the instructions ENVIRONMENT
<environment_name> and END ENVIRONMENT. You must place it after the BEGIN
instruction. When you have declared it, an environment is visible to the block in
which it was declared and to all blocks included therein.
Example
The following example illustrates the use of environments:
HEADER histo, 1, 1
##include <math.h>
##include "histo.h"
BEGIN
ENVIRONMENT image
ARRAY image, init = 0, ev = init
END ENVIRONMENT
USE image
SERVICE COMPUTE_HISTO
#int x1, x2, y1, y2;
#int status;
#T_HISTO histo;
#T_IMAGE image1;
ENVIRONMENT compute_histo
VAR x1, init = 0, ev = init
VAR x2, init = SIZE_IMAGE-1, ev = init
VAR y1, init = 0, ev = init
VAR y2, init = SIZE_IMAGE-1, ev = init
ARRAY histo, init = 0, ev = 0
VAR status, init == , ev = 0
END ENVIRONMENT
USE compute_histo
The parameters are identifiers, which you can use in variable status instructions, as
follows:
• In initial or expected value expressions
• In expressions delimiting bounds of arrays in extended mode
The parameters are initialized when they are used:
USE compute_histo1(0,0,SIZE_IMAGE-1,SIZE_IMAGE-1)
The number of values must be strictly equal to the number of parameters defined for
the environment. The values can be expressions of any type.
Environment Override
To provide more flexibility in using environments, you can override the initialization
and test specifications in an ENVIRONMENT block for one or more variables, one or
more array elements, or one or more fields of a structured variable by using either of
the following:
• A new environment
• The instructions VAR, ARRAY, or STR in the ELEMENT block
The ENVIRONMENT concept greatly improves test robustness. You can use this
approach to group default initialization and test specifications for all the variables
that are global to a module under test, allowing you to check that global variables
not explicitly tested by a service's tests are indeed left unmodified.
The following steps are used to handle environments:
• VAR, ARRAY and STR instructions are stored between ENVIRONMENT and
END ENVIRONMENT instructions.
• When the Test Compiler comes across the instruction USE, it determines the
scope of the environment that has been stored.
• At every END ELEMENT instruction, the Test Compiler browses through all
visible environments beginning with the most recently declared one. The test
compiler then checks every environment variable to see if it has been fully or
partially tested. If it has only been partially tested, the test compiler generates
the necessary tests to complete the testing of the variable.
This process means that:
• Tests linked to environments are always carried out last.
• The higher the environment's precedence, the earlier the tests it contains will be
carried out.
Example
The following example illustrates an override of an array element in two tests:
TEST 1
FAMILY nominal
ELEMENT
VAR histo[0], init = 0, ev = SIZE_IMAGE*SIZE_IMAGE
#status = compute_histo(x1,y1,x2,y2,histo);
END ELEMENT
END TEST
TEST 2
FAMILY nominal
ELEMENT
ARRAY image, init = {others => {others => 100}}, ev = init
ARRAY histo[100], init = 0, ev = SIZE_IMAGE*SIZE_IMAGE
#status = compute_histo(x1,y1,x2,y2,histo);
END ELEMENT
END TEST
In the first test, only histo[0] has an override. Therefore, all the default tests were
generated except for the test on the histo variable, which had its 0 element removed,
and a test was generated on histo[1..255].
In the second test, the override is more noticeable; the histo[100] element has been
removed to generate two tests: one on histo[0..99], and the other on histo[101..255].
Using Environments
The USE keyword declares the use of an environment (in other words, the beginning
of that environment's visibility).
The impact or visibility of an environment is determined by the position at which
you declare the environment's use with the USE statement.
The initial values and tests associated with the environment are applied as follows,
depending on the position of the declaration:
• To all the tests in a program
• To all the tests in a service
• To all the ELEMENT blocks of a particular test
• Within one ELEMENT block of a given test.
Advanced C Testing
Advanced C Testing
This section covers some of the more complex notions behind Component Testing for
C.
If you are running a runtime analysis feature on the Component Testing test node,
you can also use the -rename command line option to rename the main function.
See the Instrumentor Line Command Reference section in the Rational Test RealTime
Reference Manual.
This syntax allows you to initialize the variable to "NIL", and to compare its contents
to a given string after the test.
FAMILY nominal
ELEMENT
FORMAT pointer_name = string_ptr
-- The variable pointer_name will first be initialized as a pointer
....
VAR pointer_name, INIT="l11c01pA00", ev=init
-- It is initialized as pointing at the string "l11c01pA00",
-- and then string comparisons are done with the expected values using str_comp.
• Service and Family parameters: These are not imported and require manual
updating with the Tester Configuration function.
C Syntax Extensions
For a large number of calls to a stub, use the following syntax for a more compact
description:
<call i> .. <call j> =>
You can describe each call to a stub by adding the specific cases before the preceding
instruction, for example:
<call i> =>
or
<call i> .. <call j> =>
For example, the following instruction lets you specify the first call and all the
following calls without knowing the exact number:
STUB write_file 1=>(4,"line")1,others=>(4,"")1
Tester Configuration
The Tester Configuration dialog box allows you to configure the Component Testing
test driver.
Viewing Reports
After test execution, depending on the options selected, a series of Component
Testing for C test reports are produced.
Report Explorer
The Report Explorer displays each element of a test report with a Passed or Failed
symbol:
• Elements marked as Failed are either failed tests or elements that contain at
least one failed test.
• Elements marked as Passed are either passed tests or elements that contain
only passed tests.
Test results are displayed for each instance, following the structure of the .ptu test
script.
Report Header
Each test report contains a report header with:
• The version of Test RealTime used to generate the test as well as the date of the
test report generation
• The path and name of the project files used to generate the test
• The total number of test cases Passed and Failed. These statistics are calculated
on the actual number of test elements listed in the sections below
Test Results
The graphical symbols in front of each node indicate whether the test, item, or
variable is Passed or Failed:
• A test is Failed if it contains at least one failed variable. Otherwise, the test is
considered Passed.
You can obtain the following data items if you click with the pointer on the
Information node:
• Number of executed tests
• Number of correct tests
• Number of failed tests
A variable is incorrect if the expected value and the value obtained are not identical,
or if the value obtained is not within the expected range.
If a variable belongs to an environment, an environment header is displayed first.
In the report, variables are displayed according to the Display Variables setting of
the Component Testing test node.
The following table summarizes the editing rules:
The Initial and Expected Values option changes the way initial and expected values
are displayed in the report.
During the execution of the test, Component Testing generates trace data that is used
by the UML/SD Viewer. The Component Testing sequence diagram uses standard
UML notation to represent Component Testing results.
When using Component Testing for C with Runtime Tracing or other Test RealTime
features that generate UML sequence diagrams, all results are merged in the same
sequence diagram.
You can click any element of the UML sequence diagram to open the test report at
the corresponding line. Click again in the test report to locate the corresponding line
in the .ptu test script.
Overview
Basically, Component Testing for C++ interacts with your source code through a
scripting language called C++ Test Script Language. You use the Test RealTime GUI
or command line tools to set up your test campaign, write your test scripts, run your
tests and view the test results. Object Testing's mode of operation is twofold:
• C++ Test Driver scripts describe a test harness that stimulates and checks basic
I/O of the code under test.
• C++ Contract Check scripts, which instrument the code under test, verifying
behavioral assertions during execution of the code.
When the test is executed, Component Testing for C++ compiles both the test scripts
and the source under test, then instruments the source code and generates a test
driver. Both the instrumented application and the test driver provide output data
which is displayed within Test RealTime.
See the Test RealTime Reference Manual for the semantics of the C++ Contract Check
Language.
Candidate Classes
For source files containing several classes, you may only want to submit a restricted
number of classes to testing.
If no classes are selected, the wizard automatically selects all classes that are defined
or implemented in the source(s) under test as follows:
• The class is defined within the source file (i.e. the sequence class <name> { .... };
).
• At least one of the methods of the class is defined within the source file (i.e. a
method's body).
Note Classes can only be selected if you have refreshed the File View before
running the Test Generation Wizard.
When creating a Component Testing test node for C++, the Component Testing
wizard offers the following options for specifying dependencies of the source code
under test:
• Simulated files
• Additional files
• Included files
Simulated Files
This option gives the Component Testing wizard a list of source files to simulate—or
stub—upon execution of the test.
A stub is a dummy software component designed to replace a component that the
code under test relies on, but cannot use for practicality or availability reasons. A
stub can simulate the response of the stubbed component.
In Component Testing for C++, the use of stubs requires Source Code Insertion (SCI)
instrumentation of the source code under test.
The Component Testing parser analyzes simulated files and produces an .stb stub file
written in C++ Test Script Language. When manually creating a test node, use of a
separate .stb stub file is optional. It is entirely possible to define stubs directly inside
the .otd C++ Test Driver script.
Stub files appear in the Component Testing for C++ test node.
Additional Files
Additional source files are source files that are required by the test script, but not
actually tested. For example, with Component Testing for C++, Visual C++ resource
files can be compiled inside a test node by specifying them as additional files.
Additional header files (.h) are not handled in the same way as additional body files
(.cc, .C, or .cpp):
• Body files: With a body file, the Test Generation Wizard considers that the
compiled file will be linked with your test program. This means that all defined
variables and routines are considered as defined, and therefore not stubbed.
• Header files: With a header file (a file containing only declarations), the Test
Generation Wizard considers that all the entities declared in the source file itself
(not in included files) are defined. Typically, you would use additional header
files if you only have a .h file under test and a matching object file (.o or .obj),
but not the actual source file (.cc, .C, or .cpp).
You can toggle a source file from under test to additional by changing the
Instrumentation property in the Properties Window dialog box.
Additional directories are directories that are declared to only contain additional
source files.
Included Files
Included files are normal source files under test. However, instead of being compiled
separately during the test, they are included and compiled with the C++ Test Driver
script.
Header files are automatically considered as included files, even if they are not
specified as such.
Source files under test should be specified as included when:
• The file contains the class definition of a class you want to test
• A function or a variable definition depends upon a type which is defined in the
file under test itself
• You need access in your test script to a static variable or function, defined in the
file under test
In most cases, you do not have to specify files to be included. The Component
Testing wizard automatically generates a warning message in the Output Window,
when it detects files that should be specified as included files. If this occurs, rerun the
Component Testing wizard, and select the files to be included in the Include source
files section of the Advanced Options dialog box.
Declaration Files
A declaration file (.dcl) ensures that the types, class, variables and functions needed
by your test script will be available in your code.
Using a separate .dcl file is optional, since it is merely included within the C++ Test
Driver script. It is possible to declare types, classes, variables and functions directly
within a C++ Test Driver script file.
Typically, .dcl files are created by the Component Testing Wizard and do not need to
be edited by the user. If you do need to define your own declarations for a test, it is
recommended that you do this within the Test Driver script. Declaration files appear
in the Component Testing for C++ test node.
Declaration files must be written in C++ Test Script Language and contain native
code declarations. See the Test RealTime Reference Manual for details about the
language.
Test reports for Component Testing for C++ are displayed in Test RealTime's Report
Viewer.
The test report is a hierarchical summary report of the execution of a test node. Parts
of the report that have Passed are displayed in green. Failed tests are shown in red.
Report Explorer
The Report Explorer displays each element of a Test Verdict report with a Passed,
Failed or Undefined symbol:
• Elements marked as Failed are either a failed test, or an element that contains
at least one failed test.
• An Undefined marker means either that the test was not executed, or that the
element contains a test that was not executed and all executed tests passed.
• Elements marked as Passed are either passed tests or elements that contain
only passed tests.
Test results are displayed in two parts:
• Test Classes, Test Suites and Test Cases of all the executed C++ Test scripts.
• Class results for the entire Test. Each class contains assertions (WRAP
statement), invariants, states and transitions.
Report Header
Each Test Verdict report contains a report header with:
• The path and name of the .xrd report file.
• A general verdict for the test campaign: Passed or Failed.
• The number of test cases Passed and Failed. These statistics are calculated on
the actual number of test elements (Test Case, Procedure, Stub and Classes)
listed in the sections below.
Note The total number counts the actual test elements, not the number of
times each element was executed. For instance, if a test case is run 5 times, of
which 2 runs have failed, it will be counted as one Failed test case.
Test Script
Each script is displayed with a metrics table containing the number of Test Suite, Test
Class, Test Case, Epilogue, Procedure, Prologue and Stub blocks encountered. In this
section, statistics reflect the number of times an element occurs in a C++ Test script.
Test Results
For each Test Case, Procedure and Stub, this section presents a summary table of the
test status. The table contains the number of times each verification was executed,
failed and passed.
For instance, if a Test Case containing three CHECK statements is run twice, the
reported number of executions will be six, the number of failed verifications will be
two, and the number of passed verifications will be four.
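The counting rule above can be sketched as a simple tally. The CheckCounter class below is a hypothetical illustration written in Java; it is not the vendor's implementation:

```java
// Hypothetical sketch of how executions, failures and passes are tallied.
public class CheckCounter {
    public int executed, failed, passed;

    // Each CHECK-style verification bumps the counters.
    public void check(boolean condition) {
        executed++;
        if (condition) passed++; else failed++;
    }

    // A test case containing three checks, one of which always fails.
    public void runTestCase() {
        check(1 + 1 == 2);
        check(2 * 2 == 4);
        check(0 == 1); // deliberately failing check
    }

    public static void main(String[] args) {
        CheckCounter c = new CheckCounter();
        c.runTestCase();
        c.runTestCase(); // run the test case twice
        // Prints: 6 executed, 2 failed, 4 passed
        System.out.println(c.executed + " executed, "
                + c.failed + " failed, " + c.passed + " passed");
    }
}
```

Running the test case twice yields six executions, two failures and four passes, matching the report figures described above.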
The general status is calculated as follows:
Tested Classes
Class results are grouped at the end of the report and sorted in alphabetical order.
For each class the report shows the general status of assertions (WRAP statement),
invariants, states and transitions.
The general status is computed as follows:
Diagrams
During the execution of the test, Component Testing for C++ generates trace data that
is used by the UML/SD Viewer. The Component Testing for C++ sequence diagram
uses standard UML notation to represent both Contract-Check and Test Driver
results:
• Class Contract-Check sequence diagrams
• Test Driver sequence diagrams
Both types of results can appear simultaneously in the same sequence diagram.
When using Runtime Tracing with Component Testing for C++, all results are
generated in the same sequence diagram.
Methods
For each class, methods are shown with method entry and exit actions:
• Method entry actions have a solid border,
• Method exit actions have a dotted border.
Contract-Checks
Pre and post-conditions, invariants and state verifications are displayed as Notes,
attached to the class instance, and contained within the method.
You can click a note to highlight the corresponding OTC Contract-Check script line in
the Text Editor window.
Instances
When using a Test Driver script, each of the following C++ Test Script Language
keywords are represented as a distinct object instance:
• TEST CLASS
• TEST SUITE
• TEST CASE
• STUB
• PROC
You can click an instance to highlight the corresponding statement in the Text Editor
window.
Checks
Test Driver checks are displayed as Passed or Failed glyphs attached to the instances.
You can click any of these glyphs to highlight the corresponding statement in the
Text Editor window.
• CHECK
• CHECK PROPERTY
• CHECK STUB
• CHECK METHOD
• CHECK EXCEPTION
To distinguish checks that occur immediately from checks that apply to a stub,
method or exception, the latter three use different shades of red and green.
You can click an instance to highlight the corresponding statement in the Text Editor
window.
The following pre and post-condition statements are green (Passed) or red (Failed)
actions contained in STUB or PROC instances.
• REQUIRE
• ENSURE
Exceptions
Component Testing for C++ generates UNEXPECTED EXCEPTION Notes whenever
an unexpected exception is encountered. These notes will be followed by the ON
ERROR condition.
Error Handling
Whenever a check or a pre- or post-condition generates an error, or an
UNEXPECTED EXCEPTION occurs, the ON ERROR condition is displayed as shown
in the following diagrams.
An ON ERROR BYPASS condition:
Messages
Messages can represent either a RUN or a CALL statement, or a native code stub call,
as shown below:
The .tsf and .tdf files are processed together by the Component Testing Report
Generator (javapostpro). The output is the .xrd report file, which can be viewed
in the Test RealTime GUI.
Of course, these steps are mostly transparent to the user when the test node is
executed in the Test RealTime GUI.
JUnit Overview
JUnit is a regression testing framework written by Erich Gamma and Kent Beck.
JUnit is open source software, released under IBM's Common Public License
Version 0.5 and hosted on the SourceForge website.
Refer to the JUnit documentation for further information. More information on
JUnit can be found at the following locations:
http://junit.sourceforge.net
http://www.junit.org
The main difference with the verify primitives is that failed verify checks do not stop
the execution of the test program.
The complete list of extended verify primitives can be found in the Component
Testing for Java section of the Test RealTime Reference Manual.
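The behavior difference can be illustrated with a small self-contained Java sketch. The verifyEquals helper below is hypothetical, written only to mimic the continue-on-failure behavior described; it is not the vendor's primitive:

```java
import java.util.ArrayList;
import java.util.List;

public class VerifyDemo {
    // Failed verify checks are recorded rather than thrown,
    // so the test method keeps running.
    public static final List<String> failures = new ArrayList<>();

    public static void verifyEquals(String label, int expected, int actual) {
        if (expected != actual) failures.add(label);
    }

    public static void main(String[] args) {
        verifyEquals("first check", 1, 2);  // fails, but execution continues
        verifyEquals("second check", 3, 3); // still executed and passes
        System.out.println("recorded failures: " + failures);
    }
}
```

A standard JUnit assert, by contrast, throws an AssertionError on the first failure and aborts the test method.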
You must be especially aware of these constraints when importing existing test cases
into Rational Test RealTime.
Naming Conventions
Test class names should be prefixed with Test, as in
Test<ClassUnderTest>
Where <ClassUnderTest> is the name of the class under test. This naming convention
enables the test class to use test class primitives.
Test method names must be prefixed with test, as in:
test<TestName>
• End of test: Use this primitive to insert any code that is required to end the test,
such as setting any objects created in setUp to null:
void tearDown() throws Exception
• Test Primitives: The test class must also define as many test<TestName>
methods as there are tests.
You can inject such a TestCase into a TestSuite. This way, the TestSuite automatically
creates as many TestCases as required and executes a sequential run of all the tests.
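The conventions above can be sketched in plain Java. The Account class and the reflective discoveredTests helper below are hypothetical, standing in for the class under test and for the suite's name-based discovery mechanism:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under test
class Account {
    int balance;
    void deposit(int amount) { balance += amount; }
}

// Test class named Test<ClassUnderTest>, with test<TestName> methods
public class TestAccount {
    public Account account;

    public void setUp() { account = new Account(); }
    public void tearDown() { account = null; }

    public void testDeposit() {
        account.deposit(5);
        if (account.balance != 5) throw new AssertionError("unexpected balance");
    }

    // Illustrates why the "test" prefix matters: a suite can discover
    // test methods purely by name, as described above.
    public static List<String> discoveredTests() {
        List<String> names = new ArrayList<>();
        for (Method m : TestAccount.class.getDeclaredMethods())
            if (m.getName().startsWith("test")) names.add(m.getName());
        return names;
    }
}
```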
Running a Test
To run a series of tests, you must incorporate a main inside a TestCase or TestSuite
class, build the main, the TestSuite and TestClass, and execute the run.
In J2ME, these objects can be built in a midlet, which contains only TestSuite and
TestCase, and launches the run on the start app primitive. If the test case was
generated by Test RealTime, you must comment the main method that was
automatically generated.
Example
To test that the sum of two Moneys with the same currency contains a value which is
the sum of the values of the two Moneys, write:
public void testSimpleAdd() {
Money m12EUR=new Money(12, "EUR");
Money m14EUR=new Money(14, "EUR");
Money expected= new Money(26, "EUR");
Money result= m12EUR.add(m14EUR);
assertTrue(expected.equals(result));
}
"methodone", StubInfo.ENTER) ;
verifyEquals("Test single sequence", TestSynchroStub.isSeqRespected(testof), true);
}
This example shows how to verify the entry into methodone of the class StubbedOne
and that the method m1 of StubbedTwo has been successively entered and exited.
This is part of the call stack of the methods of stubbed objects.
public void teststub4()
{
NoStub another = new NoStub();
another.call1();
StubSequence testof = new StubSequence(this);
testof.addEltToSequence(new StubbedOne().getClass(), "methodone", StubInfo.ENTER);
testof.addEltToSequence(new StubbedTwo().getClass(), "m1", StubInfo.ENTER);
testof.addEltToSequence(new StubbedTwo().getClass(), "m1", StubInfo.EXIT);
verifyLogMessage("Check true for stub calls");
verifyEquals("Test single sequence", TestSynchroStub.isSeqRespected(testof), true);
}
The following example demonstrates the use of TestSynchroStub to test if a stub has
been declared failed.
public void testStubFail()
{
StubbedThree st = new StubbedThree();
st.call();
verifyLogMessage("Check fail call from stub");
verifyEquals("Test single sequence", TestSynchroStub.areStubfail(this), true);
}
This way, the TestCase object automatically calls the public method whose name was
passed as an argument to the run call.
To use this technique in J2ME, you must first create a runTest() method in the test
class which will call the correct function.
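This name-based dispatch can be sketched as follows. The NamedTest class below is hypothetical; JUnit's actual TestCase performs the equivalent reflective lookup internally:

```java
import java.lang.reflect.Method;

// Hypothetical sketch: a test object that, on run(), invokes the
// public method whose name was passed to its constructor.
public class NamedTest {
    private final String methodName;
    public String log = "";

    public NamedTest(String name) { this.methodName = name; }

    public void testOne() { log = "ran testOne"; }
    public void testTwo() { log = "ran testTwo"; }

    public void run() {
        try {
            Method m = getClass().getMethod(methodName);
            m.invoke(this); // dispatch by name, as JUnit 3 does
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Calling `new NamedTest("testTwo").run()` invokes testTwo; the J2ME runTest() technique described above replaces this reflection with an explicit if/else chain.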
Examples
The following series of examples shows how to test a simple class Stocks in the J2SE
framework. First, derive the test class from TestCase to check the arithmetic methods:
package examples;
import junit.framework.*;
import examples.Stocks.*;
public class TestStocks extends TestCase {
public TestStocks(String name) {
super(name);
}
public void testStocks1() {
Stock first = new Stock("Company","Dollar",100,1.25);
Stock second = new Stock("Company","Dollar",250,1.25);
Stock added = new Stock(first + second);
//Display a message in the report.
verifyLogMessage("Check equals for the count of stocks");
verifyEquals("verify equals added count",
added.amountstocks(),
(first.amountstocks()+second.amountstocks()));
}
import j2meunit.framework.*;
import examples.Stocks.*;
public class TestStocks extends TestCase {
public TestStocks(String name) {
super(name);
}
protected void runTest() throws java.lang.Throwable {
if(getTestMethodName().equals("testStocksAmount"))
testStocksAmount();
else if(getTestMethodName().equals("testStocksValues"))
testStocksValues();
}
Component Testing for Java allows you to check timing between events by using the
time method of the TestCase class:
public void testTimerOnStocks()
{
int idtimer1;
idtimer1 = createTimer("first timer created");
In J2SE, run the tests by calling the run method of the test class:
TestResult result = TestStockObject.run() ;
In J2ME, you run the test class by calling the run method of the object under test:
TestResult result = TestStockObject.run() ;
You can also directly pass the test object. In this case, the TestSuite automatically
builds all the test classes from the public method names:
TestSuite suiteStocks = new TestSuite() ;
suiteStocks.addTest(TestStocks.class);
Running a TestSuite
In J2SE, you run a test suite exactly as you would run a test class, either by producing
a TestResult object, or by modifying the TestResult passed as a parameter, as in the
following examples:
TestResult result = suiteAllTests.run(result);
In J2ME, in order to save memory, the TestSuite destroys the last TestCase instance
after each run.
Simulated Files
This option gives the Component Testing wizard a list of source files to simulate—or
stub—upon execution of the test.
A stub is a dummy software component designed to replace a component that the
code under test relies on, but cannot use for practicality or availability reasons. A
stub can simulate the response of the stubbed component.
See Java Stubs for more information about JUnit stub handling.
J2ME Specifics
Component Testing for Java supports the Java 2 Platform Micro Edition (J2ME)
through a specialized version of the JUnit testing framework.
This framework requires that you manually perform the two following additional
steps:
1. Create a test suite class Suite() that transforms a test class into a J2ME test suite.
2. Create a runTest() primitive that transforms the name of the test case into a
relevant call to the test function.
The objects under test must belong to the test class and must have been initialized in
the setUp method.
The following code sample is a runTest selection method for J2ME, which switches
the correct test method depending on the name of the test case:
protected void runTest() throws java.lang.Throwable {
if(getTestMethodName().equals("testOne"))
testOne();
else if(getTestMethodName ().equals("testTwo"))
testTwo();
}
Report Explorer
The Report Explorer displays each element of a Test Verdict report with a Passed
or Failed symbol:
• Elements marked as Failed are either a failed test, or an element that contains
at least one failed test.
• Elements marked as Passed are either passed tests or elements that contain
only passed tests.
Test results are displayed in two parts:
• TestClasses, TestSuites and derived test cases of all the executed JUnit scripts.
• Class results for the entire Test.
Report Header
Each Test Verdict report contains a report header with:
• The path and name of the .xrd report file.
• A general verdict for the test campaign: Passed or Failed.
• The number of test cases Passed and Failed. These statistics are calculated on
the actual number of test elements (Test Case, Procedure, Stub and Classes)
listed in the sections below.
Note The total number counts the actual test elements, not the number of
times each element was executed. For instance, if a test case is run 5 times, of
which 2 runs have failed, it will be counted as one Failed test case.
Test Script
Each script is displayed with a metrics table containing the number of TestSuite,
TestClass and derived test case blocks encountered. In this section, statistics reflect
the number of times an element occurs in a JUnit script.
Test Results
For each test case, this section presents a summary table of the test status. The table
contains the number of times each verification was executed, failed and passed.
For instance, if a Test Case containing three assert functions is run twice, the reported
number of executions will be six, the number of failed verifications will be two, and
the number of passed verifications will be four.
The general status is calculated as follows:
Instances
Each of the following classes is represented as a distinct object instance:
• TestSuite
• Derived test case classes
You can click an instance to highlight the corresponding statement in the Text Editor
window.
Checks
JUnit assert and verify primitives are displayed as Passed or Failed glyphs attached
to the instances.
You can click any of these glyphs to highlight the corresponding statement in the
Text Editor window.
Exceptions
Component Testing for Java generates UNEXPECTED EXCEPTION Notes whenever
an unexpected exception is encountered.
Comments
Calls to verifyLogMessage generate a white note, attached to the corresponding
instance.
Messages
Messages can represent either a run or a call statement as shown below:
This procedure does not require system administrator access, but launching of the
agent is not fully automated.
1. Copy atsagtd.bin or atsagtd.exe to a directory on the target machine.
2. On the target machine, set the ATS_DIR environment variable to the directory
containing the agent binaries.
3. Add that same agent directory to your PATH environment variable.
Note You can add these commands to the user configuration file: .login, .cshrc
or .profile.
4. On UNIX systems, create an agent access file named .atsagtd in your home
directory. On Windows, create an atsagtd.ini file in the agent installation
directory. See System Testing Agent Access Files.
5. Move the agent access file to your chosen base directory, such as the directory
where the Virtual Testers will be launched.
6. Launch the agent as a background task, with the port number as a parameter.
By default, this number is 10000.
atsagtd.bin <port number>&
atsagtd <port number>
This procedure is for UNIX only. Launching agents on target machines is automatic
with inetd.
With this method, the inetd daemon runs the atsagtd.sh shell script that initializes
environment variables on the target machine and launches the System Testing Agent.
The agent waits for a connection to <port number>. By default, System Testing
uses port 10000.
Note If NIS is installed on the target machine, you may have to update the NIS
server. You can check this by typing ypcat services on the target host.
6. Add the following line to the /etc/inetd.conf file:
atsagtd stream tcp nowait <username> <atsagtd path> <atsagtd path>
where <username> is the name of the user that will run the agent on the target
machine and <atsagtd path> is the full path name of the System Testing Agent
executable file atsagtd.
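For reference, the two entries described here might look as follows once filled in; the service name, port, user name, and installation path shown are examples only:

```
# /etc/services -- declares the agent service on the default port
atsagtd         10000/tcp

# /etc/inetd.conf -- tells inetd to launch the agent for user jdoe
atsagtd stream tcp nowait jdoe /opt/atsagent/atsagtd.sh /opt/atsagent/atsagtd.sh
```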
To reconfigure the inetd daemon, use one of the following methods:
• Type the command /etc/inetd -c on the target host.
• Send the SIGHUP signal to the running inetd process.
• Reboot the target machine.
In some cases, you might need to update the file atsagtd.sh shell script to add some
environment variables to the target machine.
Return to your user account and create an agent access file .atsagtd file in your home
directory. See System Testing Agent Access Files.
Troubleshooting the agent
To check the installation, type the following command on the host running Test
RealTime:
telnet <target machine> <port number>
where <port number> is the port number you specified during the installation
procedure. By default, System Testing uses port 10000. After the connection succeeds,
press Enter to close the connection.
If the connection fails, try the following steps to troubleshoot the problem:
• Check the target hostname and port.
• Check the Agent Access File.
• Check the target hostname and port in the atsagtd.sh shell script.
• Check the /etc/services and /etc/inetd.conf files on the target machine.
• If you are using NIS services on your network, check the NIS configuration.
A plus sign + can be used as a wildcard to provide access to all users or all
workstations.
The minus sign - suppresses access to a particular user.
You can add comments to the agent access file by starting a line with the # character:
Example
# This is a sample .atsagtd or atsagtd.ini file.
# The following line allows access from user jdoe on a machine
named workstation
workstation jdoe
General Tab
This tab specifies an instance and target deployment to be assigned to the selected
Virtual Tester.
• VT Name: This is the name of the Virtual Tester currently selected in the Virtual
Tester List. The name of the virtual tester must be a standard C identifier.
• Implemented INSTANCE: Use this box to assign an instance, defined in the .pts
test script, to the selected virtual tester. This information is used for Virtual
Tester deployment. Select Default to specify the instance during deployment.
• Target: This specifies the Target Deployment Port compilation parameters for
the selected Virtual Tester.
• Configure Settings: This button opens the Configuration Settings dialog for the
selected Virtual Tester node.
Scenario Tab
Use this tab to select one or several scenarios as defined in the .pts test script. During
execution, the Virtual Tester plays the selected scenarios.
Family Tab
Use this tab to select one or several families as defined in the .pts test script. During
execution, the Virtual Tester plays the selected families.
The Virtual Tester Deployment Table allows you to deploy previously created Virtual
Testers.
Advanced Options
Click the Advanced Options button to add the following columns to the Virtual
Tester Deployment Table, and to add the Rendezvous... button.
• Agent TCP/IP Port: This specifies the port used by the System Testing Agents
to communicate with Test RealTime. By default, System Testing uses port 10000.
• Delay: This allows you to set a delay between the execution of each line of the
table.
• First Occurrence ID: This specifies the unique occurrence ID identifier for the
first Virtual Tester executed on this line. The occurrence ID is automatically
incremented for each number of instances of the current line. See
Communication Between Virtual Testers for more information.
• Start Routine: This specifies the name of the function containing the Virtual
Tester, for use in multi-threaded or RTOS environments, if the starting
procedure is not main().
• Select Error only to generate traces only if an error is detected during execution
of the application. This report will be incomplete, but the report will show failed
instructions as well as a number of instructions that preceded the error. This
number depends on the Virtual Tester's trace buffer size. Use this option for
endurance testing, if you expect a large quantity of execution traces.
In addition to the above, you can select the Circular trace option for strong real-time
constraints when you need full control over the flush of traces to disk. If you still
want to store a large amount of trace data, specify a large buffer.
{
pthread_t thrTester_1, thrTester_2;
pthread_attr_t pthread_attr_default;
ATL_T_ARG arg_Tester_1, arg_Tester_2;
int status;
arg_Tester_1.atl_riofilename = "Tester_1.rio";
arg_Tester_1.atl_filters = "";
arg_Tester_1.atl_instance = "Tester_1";
arg_Tester_1.atl_occid = 0;
arg_Tester_2.atl_riofilename = "Tester_2.rio";
arg_Tester_2.atl_filters = "";
arg_Tester_2.atl_instance = "Tester_2";
arg_Tester_2.atl_occid = 0;
pthread_attr_init(&pthread_attr_default);
/* Start Thread Tester 1 */
pthread_create(&thrTester_1, &pthread_attr_default, start, &arg_Tester_1);
/* Start Thread Tester 2 */
pthread_create(&thrTester_2, &pthread_attr_default, start, &arg_Tester_2);
/* Both Testers are running */
/* Wait for the end of Thread Tester 1 */
pthread_join(thrTester_1, (void *)&status);
/* Wait for the end of Thread Tester 2 */
pthread_join(thrTester_2, (void *)&status);
return(0);
}
Example
HEADER "Registering", "1.0", "1.0"
SCENARIO basic_registration
FAMILY nominal
-- The body of my basic_registration test
END SCENARIO
SCENARIO extended_registration
FAMILY robustness
SCENARIO reg_priv_area
-- The body of my reg_priv_area test
END SCENARIO -- reg_priv_area
SCENARIO reg_pub_area LOOP 10
-- The body of my reg_pub_area test
END SCENARIO -- reg_pub_area
END SCENARIO
Include Statements
To avoid writing large test scripts, you can split test scripts into several files and link
them using the INCLUDE statement.
This instruction consists of the keyword INCLUDE followed by the name of the file
to include, in quotation marks (" ").
INCLUDE instructions can appear in high- and intermediate-level scenarios, but not
in the lowest-level scenarios.
You can specify either absolute or relative filenames. There are no default filename
extensions for included files. You must specify them explicitly.
Example
HEADER "Socket validation", "1.0", "beta"
INCLUDE "../initialization"
SCENARIO first
END SCENARIO
SCENARIO second
INCLUDE "scenario_3.pts"
SCENARIO level2
FAMILY nominal, structural
...
END SCENARIO
END SCENARIO
Procedures
You can also use procedures to build more compact test scripts. The following are
characteristics of procedures:
• They must be defined before they are used in scenarios.
• They do not return any parameters.
A procedure begins with the keyword PROC and ends in the sequence END PROC.
For example:
HEADER "Socket Validation", "1.0", "beta"
PROC function ()
...
END PROC
SCENARIO first
...
CALL function ()
...
END SCENARIO
SCENARIO second
SCENARIO level2
FAMILY nominal, structural
...
END SCENARIO
END SCENARIO
Flow Control
Several execution flow instructions let you develop algorithms with multiple
branches.
Conditions
The IF statement comprises the keywords IF, THEN, ELSE, and END. It lets you
define branches and follows these rules:
• The test following the keyword IF must be a Boolean expression in C or C++.
• IF instructions can be located in scenarios, procedures, or environment blocks.
• The ELSE branch is optional.
The sequence IF (test) THEN must appear on a single line. The keywords ELSE and
END IF must each appear separately on their own lines.
Example
HEADER "Instruction IF", "1.0", "1.0"
#int IdConnection;
SCENARIO Main
COMMENT connection
CALL socket(AF_UNIX, SOCK_STREAM, 0)@@IdConnection
IF (IdConnection == -1) THEN
EXIT
END IF
END SCENARIO
Iterations
The WHILE instruction comprises the keywords WHILE and END. It lets you define
loops and follows these rules:
• The test following the keyword WHILE must be a C Boolean expression.
• The WHILE instructions can be located in scenarios, procedures, or
environment blocks.
The sequence WHILE (test) and the keyword END WHILE must each appear
separately on their own lines.
Example
HEADER "Instruction WHILE", "", ""
#int count = 0;
#appl_id_t id;
#message_t message;
SCENARIO One
FAMILY nominal
CALL mbx_init(&id) @ err_ok
VAR id.applname, INIT="JUPITER"
CALL mbx_register(&id) @ err_ok
Multiple Conditions
The multiple-condition statement CASE comprises the keywords CASE, WHEN,
END, OTHERS and the arrow symbol =>.
CASE instructions follow these rules:
• The test following the keyword CASE must be a C or C++ Boolean expression.
The keyword WHEN must be followed by an integer constant.
• The keyword OTHERS indicates the default branch for the CASE instruction.
This branch is optional.
• CASE instructions can be located in scenarios, procedures, or environment
blocks.
Example
HEADER "Instruction CASE", "", ""
...
MESSAGE message_t: response
SCENARIO One
...
CALL mbx_send_message(&id,&message) @ err_ok
DEF_MESSAGE response, EV={}
WAITTIL(MATCHING(response),WTIME == 10)
-- Checking the just received event type
CASE (response.type)
WHEN ACK =>
CALL mbx_send_message(&id,&message) @ err_ok
WHEN DATA =>
CALL mbx_send_message(&id,&ack) @ err_ok
WHEN NEG_ACK =>
CALL mbx_send_message(&id,&error) @ err_ok
WHEN OTHERS => ERROR
END CASE
END SCENARIO
Native C
CALL Instruction
The CALL instruction lets you call functions or methods in a test script and check
the return values of those functions or methods.
Automated Testing
For the following examples, you must pre-declare the return_param variable in the
test script, using native language.
CALL function ( )
-- indicates that the return parameter is neither checked nor stored in a variable.
CALL function ( ) @ "abc"
-- indicates that the return parameter of the function must be compared with the string "abc", but its value is not stored in a variable.
CALL function ( ) @@return_param
-- indicates that the return parameter is not checked, but is stored in the variable return_param.
CALL function ( ) @ 25 @return_param
-- indicates that the return parameter is checked against 25 and is stored in the variable return_param.
You can add native code either inside or outside of C and Ada Test Script Language
blocks.
Instances
In a distributed environment, you can merge the descriptions of several entities
(Virtual Testers) into a single test script. This is possible through the concept of
interaction instances, as defined in UML.
You can thus create Virtual Testers that are all based on the same test script but have
distinct behaviors, such as a client, a server, or both.
The use of instances in a test script is split into two parts, as follows:
Instance Declaration
The DECLARE_INSTANCE instruction lets you declare the set of the instances
included in the test script.
Note Each instance behavior is translated into a separate Virtual Tester
executed within a process or a thread.
The DECLARE_INSTANCE instruction must be located before the top-level scenario.
The instance declaration can be done by one or several DECLARE_INSTANCE
instructions. They must appear in the test script in such a way that no INSTANCE
block containing global declarations uses an instance that has not been previously
declared.
Example
HEADER "Multi-server / Multi-client example","1.0",""
DECLARE_INSTANCE server1, server2
...
DECLARE_INSTANCE client1, client2, client3
...
SCENARIO Principal
...
Instance Synchronization
The RENDEZVOUS statement, provides a way to synchronize Virtual Testers to each
instance.
When a scenario is executed, the RENDEZVOUS instruction stops the execution until
all Virtual Testers sharing this synchronization point (the identifier) have reached
this statement.
When all Virtual Testers have met the rendezvous, the scenario resumes.
SCENARIO first_scenario
FAMILY nominal
-- Synchronization point shared by both Instances
RENDEZVOUS sync01
INSTANCE JUPITER:
RENDEZVOUS sync02
. . .
END INSTANCE
INSTANCE SATURN:
RENDEZVOUS sync02
. . .
END INSTANCE
END SCENARIO
Synchronization can be shared with other parts of the test bench, such as in-house
Virtual Testers or specific features. You can do this easily by linking these pieces
with the current Target Deployment Port.
To define a synchronization point, make a call to the following function:
atl_rdv("sync01");
This synchronization point matches the following instruction used in a test script:
RENDEZVOUS sync01
Example
The following test script is based on the example developed in the Event
Management section. The script provides an example of the usefulness of instances
for describing several applications in a same test script.
HEADER "SystemTest Instance-including Scenario Example", "1.0", ""
DECLARE_INSTANCE JUPITER, SATURN
COMMTYPE appl_comm IS appl_id_t
MESSAGE message_t: message, data, my_ack, neg_ack
CHANNEL appl_comm: appl_ch
#appl_id_t id;
#int errcode;
PROCSEND message_t: msg ON appl_comm: id
CALL mbx_send_message( &id, &msg ) @ err_ok
END PROCSEND
CALLBACK message_t: msg ON appl_comm: id
CALL mbx_get_message ( &id, &msg, 0 ) @@ errcode
MESSAGE_DATE
IF ( errcode == err_empty ) THEN
NO_MESSAGE
END IF
IF ( errcode != err_ok ) THEN
ERROR
END IF
END CALLBACK
SCENARIO first_scenario
FAMILY nominal
COMMENT Initialize, register, send data
COMMENT wait acknowledgement, unregister and release
CALL mbx_init(&id) @ err_ok @ errcode
ADD_ID(appl_ch,id)
INSTANCE JUPITER:
VAR id.applname, INIT="JUPITER"
END INSTANCE
INSTANCE SATURN:
VAR id.applname, INIT="SATURN"
END INSTANCE
CALL mbx_register(&id) @ err_ok @ errcode
COMMENT Synchronization of both instances
RENDEZVOUS start_RDV
INSTANCE JUPITER:
VAR message, INIT={type=>DATA,num=>id.s_id,
& applname=>"SATURN",
& userdata=>"Hello Saturn!"}
SEND( message , appl_ch )
DEF_MESSAGE my_ack, EV={type=>ACK}
The scenario describes the behavior of two applications (JUPITER and SATURN)
exchanging messages by using a communications stack.
First, the needed resources are allocated and a connection is established with the
communication stack (mbx_init). This connection is made known to the Virtual
Tester with the ADD_ID instruction. Note that this part is common to both
instances.
Then, the two applications register (mbx_register) onto the stack by giving their
application name (JUPITER or SATURN). These operations are specific to each
instance, which is why these operations are done in two separate instance blocks.
The application JUPITER sends the message "Hello Saturn!" to the SATURN
application (through the communication stack) which is supposed to have set itself in
a message waiting state (WAITTIL (MATCHING(data), ...) ).
Once the message has been sent, JUPITER waits for an acknowledgment from the
communication stack (WAITTIL(my_ack),...). Then, it waits for the response of
SATURN (WAITTIL (MATCHING(data),...) ) which answers by the message "Fine,
Jupiter!" (SEND(message , appl_ch ) ). These operations are specific to each instance.
Finally, the applications unregister themselves and free the allocated resources in the
last part, which is common to both instances.
Environments
When creating a test script, you typically write several test scenarios. These scenarios
are likely to require the same resources to be deployed and then freed. You can avoid
duplicating these operations by declaring them once in initialization, termination, or
exception environments, as described in the following sections.
Error Handling
The ERROR Statement
The ERROR instruction lets you interrupt execution of a scenario when an error
occurs and continue with the next scenario at the same level.
ERROR instructions follow these rules:
• ERROR instructions can be located in scenarios, in procedures, or in
environment blocks.
• If an ERROR instruction is encountered in an INITIALIZATION block, the
Virtual Tester exits with an error from the set of scenarios at the same level.
Note In debug mode, the behavior of ERROR instructions is different (see
Debugging Virtual Testers).
The following is an example of an ERROR instruction:
HEADER "Instruction ERROR", "1.0", "1.0"
#int IdConnection;
SCENARIO Main
COMMENT connection
CALL socket(AF_UNIX, SOCK_STREAM, 0)@@IdConnection
IF (IdConnection == -1) THEN
ERROR
END IF
END SCENARIO
Exception Environment
Scenario second is made up of two sub-scenarios, level2_1 and level2_2. The second
exception environment is executed after incorrect execution of scenarios level2_1 and
level2_2. The highest-level exception environment is not re-executed if scenarios
level2_1 and level2_2 finish with an error.
Only one exception environment can appear at a given scenario level.
An exception environment can appear among scenarios at the same level. It does not
have to be placed before a set of scenarios at the same level.
In a test report, the execution of an exception environment is shown even if you
decided not to trace the execution.
Initialization Environment
A test script is composed of scenarios in a tree structure. An initialization
environment can be defined at a given scenario level.
This initialization environment is executed before each scenario at the same level.
The syntax for initialization environments can take two different forms, as follows:
• A block: This begins with the keyword INITIALIZATION and ends with the
sequence END INITIALIZATION. An initialization block can contain any
instruction.
• A procedure call: This begins with the keyword INITIALIZATION followed by
the name of the procedure and, where appropriate, its arguments.
Example
In the following example, the highest level of the test script is made up of two
scenarios called first and second. The initialization environment that precedes them is
executed twice: once before scenario first is executed and once before scenario second
is executed.
HEADER "Validation", "01a", "01a"
PROC Load_mem()
...
END PROC
INITIALIZATION Load_mem()
SCENARIO first
...
END SCENARIO
SCENARIO second
INITIALIZATION
END INITIALIZATION
SCENARIO level2_1
FAMILY nominal, structural
...
END SCENARIO
SCENARIO level2_2
FAMILY nominal, structural
...
END SCENARIO
END SCENARIO
Scenario second is made up of two sub-scenarios, level2_1 and level2_2. The second
initialization environment is executed before scenarios level2_1 and level2_2 are
executed. The highest-level initialization environment is not re-executed between
scenarios level2_1 and level2_2.
Only one initialization environment can appear at a given scenario level.
An initialization environment can appear among scenarios at the same level. The
initialization environment does not have to be placed before a set of scenarios at the
same level.
In a test report, the execution of an initialization environment is shown beginning
with the word INITIALIZATION and ending with the words END
INITIALIZATION.
Termination Environment
A test script is composed of scenarios in a tree structure. A termination environment
can be defined at a given scenario level.
This termination environment is executed at the end of every scenario at the same
level, provided that each scenario finished without any errors.
The syntax for termination environments can take two different forms, as follows:
• A block: This begins with the keyword TERMINATION and ends with the
sequence END TERMINATION. A termination block can contain any
instruction.
• A procedure call: This begins with the keyword TERMINATION followed by
the name of the procedure and, where appropriate, its arguments.
Example
In the following example, the highest level of the test script is made up of two
scenarios called first and second. The termination environment that precedes them is
executed twice:
• once after scenario first is executed correctly
• once after scenario second is executed correctly
HEADER "Validation", "01a", "01a"
PROC Unload_mem()
...
END PROC
TERMINATION Unload_mem()
SCENARIO first
...
END SCENARIO
SCENARIO second
TERMINATION
...
END TERMINATION
SCENARIO level2_1
FAMILY nominal, structural
...
END SCENARIO
SCENARIO level2_2
FAMILY nominal, structural
...
END SCENARIO
END SCENARIO
Scenario second is made up of two sub-scenarios, level2_1 and level2_2. The second
termination environment is executed after the correct execution of scenarios level2_1
and level2_2. The highest-level termination environment is not re-executed between
scenarios level2_1 and level2_2.
Only one termination environment can appear at a given scenario level.
A termination environment can appear among scenarios at the same level. The
termination environment does not have to be placed before a set of scenarios at the
same level.
In a test report, the execution of a termination environment is shown beginning with
the word TERMINATION and ending with the words END TERMINATION.
Time Management
In some cases, you will need information about execution time within a test script.
The following instructions provide a way to dump timing data, define a timer, clear a
timer, get the value of a timer, and temporarily suspend test script execution:
• TIME Instruction
• TIMER Instruction
• RESET Instruction
• PRINT Instruction
• PAUSE Instruction
TIME Instruction
The TIME instruction returns the current value of a timer. You must use it within a C
expression or a scripting instruction (IF, PRINT, and so on).
Before using TIME, you must declare the timer with the TIMER instruction.
Example
HEADER "Socket validation", "1.0", "beta"
TIMER globalTime
PROC first
TIMER firstProc
...
PRINT globalTimeValue, TIME (globalTime)
END PROC
SCENARIO second
SCENARIO level2
TIMER level2Scn
...
PRINT level2ScnValue, TIME (level2Scn)
END SCENARIO
END SCENARIO
TIMER Instruction
The TIMER instruction declares a timer in the test script.
You may declare a timer in any test script block: global, initialization, termination,
exception, procedure, or scenario.
The timer lasts as long as the block in which the timer is defined. This means that a
timer defined in the global block can be used until the end of the test script.
You may define multiple timers in the same test script. The timer starts immediately
after its declaration.
The unit of the timer is defined during execution of the application, with the
WAITTIL and WTIME instructions.
Example
HEADER "Socket validation", "1.0", "beta"
TIMER globalTime
PROC first
TIMER firstProc
...
END PROC
SCENARIO second
SCENARIO level2
TIMER level2Scn
...
END SCENARIO
END SCENARIO
RESET Instruction
The RESET instruction lets you reset a timer to zero.
The timer restarts immediately when the RESET statement is encountered.
A timer must be declared before using RESET.
Example
HEADER "Socket validation", "1.0", "beta"
TIMER globalTime
PROC first
TIMER firstProc
RESET globalTime
...
END PROC
SCENARIO second
SCENARIO level2
TIMER level2Scn
...
RESET level2Scn
END SCENARIO
END SCENARIO
PRINT Instruction
You can print the result of an expression in a performance report by using the PRINT
statement. The PRINT instruction prints an identifier before the expression.
Example
HEADER "Socket validation", "1.0", "beta"
#long globalTime = 45;
SCENARIO first
PRINT timeValue, globalTime
END SCENARIO
SCENARIO second
SCENARIO level2
PRINT time2Value, globalTime*10+5
...
END SCENARIO
END SCENARIO
PAUSE Instruction
The PAUSE instruction lets you temporarily stop test script execution for a given
period.
The unit of the PAUSE instruction is defined during execution of the application,
with the WAITTIL and WTIME instructions.
Example
HEADER "Socket validation", "1.0", "beta"
#long time = 20;
PROC first
PAUSE 10
...
END PROC
SCENARIO second
SCENARIO level2
PAUSE time*10
...
END SCENARIO
END SCENARIO
Event Management
Event management helps you describe communication between the Virtual Tester
and the system under test.
Many different means of communication allow your systems to talk with each other.
At the software application level, a communication type is identified by a set of
services provided by specific functions.
For example, a UNIX system provides several means of communication between
processes, such as named pipes, message queues, BSD sockets, or streams. You
address each communication type with a specific function.
Furthermore, each communication type has its own data type to identify the
application you are sending messages to. This type is often an integer (message
queues, BSD sockets, ...), but sometimes a structure type.
Data exchanged this way must be interpreted by all communicating applications. For
this reason, each type of exchanged data must be clearly identified and well known. If
you provide the type of the exchanged data to the Virtual Tester, it can automatically
print and check incoming messages.
• Basic Declarations
• Sending Messages
• Receiving Messages
• Messages and Data Management
• Communication Between Virtual Testers
Basic Declarations
COMMTYPE Instruction
For each communication type, there is a specific data type that identifies the
application you are sending messages to. In a test script, the COMMTYPE instruction
clearly identifies this data type, and thus the communication type.
The data type has to be defined by a C typedef or a C++ object.
On UNIX systems, the data type for the BSD sockets is an integer. The COMMTYPE
instruction is used as follows:
#typedef int bsd_socket_id_t;
COMMTYPE ux_bsd_socket IS bsd_socket_id_t
The example stack defines the data type appl_id_t. Therefore, the following
instruction defines a new communication type called appl_comm:
COMMTYPE appl_comm IS appl_id_t
MESSAGE Instruction
The MESSAGE instruction identifies the type of the data exchanged between
applications. It also defines a set of reference messages.
The type of the messages exchanged between applications using our stack is
message_t.
The following instruction also declares three reference messages:
MESSAGE message_t: ack, neg_ack, data
CHANNEL Instruction
The CHANNEL instruction declares a communication channel. In the examples, the
function call to mbx_init opens a connection between the Virtual Tester and the
stack. This connection is identified by the value of id after the call. The ADD_ID
instruction adds this new connection to the channel appl_ch.
Sending Messages
PROCSEND Instruction
Event management provides a mechanism to send messages. This mechanism
requires the definition of a message-sending procedure, or PROCSEND, for each
pair of communication type and message type.
The PROCSEND procedure is then called automatically to send a message to the
stack.
In the following example, msg is an input formal parameter of type message_t
specifying the message to send. The input formal parameter id identifies where to
send the message on the communication type appl_comm.
PROCSEND message_t: msg ON appl_comm: id
CALL mbx_send_message ( &id, &msg ) @ err_ok
END PROCSEND
The sending is done by the API function call to mbx_send_message. The return code
is checked to decide whether the message was correctly sent. Any value other than
err_ok means that an error occurred during the sending.
VAR Instruction
The VAR instruction allows you to initialize messages declared using MESSAGE
instructions. These messages may also be initialized by any other C or C++ function
or method:
VAR ack, INIT= { type => ACK }
VAR data, INIT= {
& type => DATA,
& applname => "SATURN",
& userdata => "hello world !" }
To learn all the nuts and bolts of the VAR instruction, see the Messages and Data
Management chapter.
SEND Instruction
This instruction allows you to invoke a message sending on one communication
channel .
It has two arguments:
• the message to send,
• the communication channel where the message should be sent.
The send instruction is as follows:
SEND ( message , appl_ch )
In the example above, the SEND instruction allows the test program to send a
message on a known connection (see the ADD_ID instruction). If an error occurs
while sending the message, the SEND exits with an error. The scenario execution is
then interrupted.
Example
The following test script describes a simple use of the example stack. First, some
resources are allocated and a connection is established with the communication stack
(mbx_init). This connection is made known to the Virtual Tester with the ADD_ID
instruction. Then, the Virtual Tester registers (mbx_register) onto the stack by giving
its application name (JUPITER). The Virtual Tester sends a message to an application
under test (SATURN). Finally, the Virtual Tester unregisters itself (mbx_unregister)
and frees the allocated resources (mbx_end).
SCENARIO first_scenario
FAMILY nominal
COMMENT Initialize, register, send data
COMMENT wait acknowledgement, unregister and release
CALL mbx_init(&id) @ err_ok @ errcode
ADD_ID(appl_ch,id)
VAR id.applname, INIT="JUPITER"
CALL mbx_register(&id) @ err_ok @ errcode
VAR message, INIT={
& type=>DATA,
& applname=>"SATURN",
& userdata=>"hello Saturn!"}
SEND ( message, appl_ch )
CALL mbx_unregister(&id) @ err_ok @ errcode
CLEAR_ID(appl_ch)
CALL mbx_end(&id) @ err_ok @ errcode
END SCENARIO
Receiving Messages
CALLBACK Instruction
Event management provides an asynchronous mechanism to receive messages.
This mechanism requires the definition of a callback for each pair of communication
type and message type.
A callback should do a non-blocking read for a specific message type on a specific
communication type.
The MESSAGE_DATE instruction records the exact moment at which a message is
received. The NO_MESSAGE instruction exits from the callback and indicates that
no message has been read.
The callback to receive messages from the example stack is as follows:
CALLBACK message_t: msg ON appl_comm: id
CALL mbx_get_message ( &id, &msg, 0 ) @@ errcode
MESSAGE_DATE
IF ( errcode == err_empty ) THEN
NO_MESSAGE
END IF
IF ( errcode != err_ok ) THEN
ERROR
END IF
END CALLBACK
In this example, msg is an output formal parameter of the callback. Its type is
message_t. The input formal parameter id identifies where to read a message on the
communication type appl_comm.
The reading is done by the function call to mbx_get_message. The return code is
stored in the variable errcode. The value err_empty for the return code means that
no message has been read. Any value other than err_ok or err_empty means that an
error occurred during the reading. The NO_MESSAGE and ERROR instructions
make the callback return.
DEF_MESSAGE Instruction
The DEF_MESSAGE instruction defines the values of a reference message declared
with the MESSAGE instruction. A reference message is a message expected by the
Virtual Tester from an application under test.
DEF_MESSAGE ack, EV= { type => ACK }
DEF_MESSAGE data, EV= {
& type => DATA,
& applname => "SATURN",
& userdata => "hello world !" }
To learn all the nuts and bolts of the DEF_MESSAGE Instruction, see the Messages
and Data Management chapter.
WAITTIL Instruction
The WAITTIL instruction allows you to wait for events or conditions. WAITTIL
takes two Boolean expressions: an expected condition and a failure condition. The
instruction blocks until one of the two expressions becomes true.
In the following example, the WAITTIL instruction receives all the messages sent to
the Virtual Tester on a known connection. As soon as a received message matches the
reference message ack, the WAITTIL exits normally. Otherwise, if no message
matching the reference message ack is received within 300 units of time, the
WAITTIL exits with an error (the time unit is configurable in the Target Deployment
Port according to the execution target). The scenario execution is then interrupted.
WAITTIL ( MATCHING(ack), WTIME == 300)
In the example given above, the status of the reference event variable ack is tested
using the function MATCHING(), which identifies whether the last incoming event
corresponds to the content of the variable ack. WTIME is a reserved keyword whose
value is the time elapsed since the beginning of the WAITTIL instruction.
The WAITTIL Boolean conditions are described using C or C++ conditions including
operators to manipulate events:
• MATCHING: does the last event match the specified reference event?
• MATCHED: did the Virtual Tester receive an event matching the specified
event?
• NOMATCHING: is the last event different from the specified reference event?
• NOMATCHED: did the Virtual Tester receive an event different from the
specified event?
The different combinations of these operators allow an easy and extensive definition
of event sequences:
-- I expect evt1 on channel1 before my_timeout is reached
WAITTIL (MATCHING(evt1, channel1), WTIME>my_timeout)
-- I expect evt1 then evt2 on one channel before my_timeout is reached
WAITTIL (MATCHED(evt1) && MATCHING(evt2), WTIME>my_timeout)
Reference by Name
When referencing by name, a parameter is described by the name of the field in the
structure followed by the arrow symbol (=>) and the initialization or checking
expression.
#typedef struct
# {
# int Integer;
# char String [ 15 ];
# float Real;
# } block;
# block variable;
VAR variable, INIT={Real=>2.0, Integer=>26, String=>"foo"}
You can omit the specification of structure elements by name if you know the order
of the fields within the structure. For the block type defined above, you can write the
following VAR statement:
VAR variable, INIT={ 26, "foo", 2.0 }
Reference by Position
You can describe the contents of an array by giving the position of elements within
the array.
When referencing by position, define a parameter by giving the position of the field
in the array followed by the arrow symbol (=>) and the initialization or checking
expression.
Note that numbering begins at zero.
#int array[6];
VAR array, EV=[4=>5, 1=>12, 2=>-18, 5=>15-26, 3=>0, 0=>123]
You can use ranges of positions when referencing by position. These ranges are
specified by two bounds separated by the double full-stop symbol (..).
#typedef int matrix[3][150];
VAR matrix, EV= [
& 2=>[0..99=>1, 100..149=>2],
& 0=>[0..99=>2, 100..149=>1],
& 1=>[0..80=>-1, 81..149=>0]]
#float array[10];
VAR array, INIT=[5..7=>2.1]
The array elements 5, 6 and 7 are initialized to 2.1. The other elements are not
initialized.
The following example provides a set of VAR instructions that are semantically
identical:
#int matrix[3][3];
VAR matrix, EV=0
VAR matrix, EV=[0,0,0]
VAR matrix, EV=[[0,0,0],[0,0,0],[0,0,0]]
In the three VAR instructions above, all the matrix elements are checked against zero.
Array Indices
With a VAR instruction, you can initialize and check array elements according to
their index at a given level.
The index is specified by a capital I followed by the level number. Levels begin at 1.
You can use I1, I2, I3, etc. as implicit variables.
#int matrix[3][100];
VAR matrix, EV=I1*I2
Each element of the above matrix is checked against the product of variables I1 and
I2, which indicate, respectively, a range from 0 to 2 and a range from 0 to 99. The
above matrix is checked against the 3 by 100 multiplication table.
Reference by Default
You can reference the remaining set of fields in an array, structure, or object in a VAR
instruction. To do this, use the keyword OTHERS, followed by the arrow symbol =>,
and an expression in C or C++.
Note To use OTHERS, the remaining fields must be of the same type and must
be compatible with the expression following OTHERS.
#typedef struct {
# char String[25];
# int Value;
# int Value2;
# int Array[30];
#} block;
# block variable;
VAR variable, INIT=[
& String=>"chaine",
& Array=>[0..10=>0, OTHERS=>1] ,
& OTHERS=>2]
In the above example, the String field is initialized to "chaine", the elements 0 to 10
of Array are initialized to 0 and its remaining elements to 1, and the remaining fields
of the structure (Value and Value2) are initialized to 2.
Checking Ranges
You may use ranges of acceptable values instead of immediate values. To do this, use
the following syntax:
VAR <variable>, EV=[Min..Max]
DEF_MESSAGE <variable>, EV=[Min..Max]
For example, the following instruction checks that the value of the variable a lies
between 0 and 100:
#int a;
VAR a, EV=[0..100]
Character Strings
When you use the VAR instruction for character strings, you may alter it. In C, a
character string can also be an array. This flexibility is retained in the VAR
instruction.
In the following example, the first variable String initializes as in C (null-terminated).
The second String initializes as an array of characters (not null-terminated).
#char String[15];
VAR String, INIT="abcdef"
VAR String, INIT=['a', 'b', 'c', 'd', 'e', 'f']
Note You must define the VAR instruction either as a character string or an
array of characters.
<instance>_<occid>
• If the Virtual Tester is running in multi-threaded mode, with its entry point in
<function>:
<function_name>_<occid>
• In any other case, the identifier uses the .rio file name:
<filename>.rio_<occid>
By default the occurrence identification number <occid> for each Virtual Tester is 0,
but you can set different <occid> values in the Virtual Tester Deployment dialog box.
Two Virtual Testers must never have the same identifier at the same time. If an
INTERSEND message cannot be delivered because of an ambiguous identifier, the
System Testing supervisor returns an error message.
Report Explorer
The Report Explorer displays each element of a test report with a Passed or Failed
symbol.
• Elements marked as Failed are either a failed test, or an element that contains
at least one failed test.
• Elements marked as Passed are either passed tests or elements that contain
only passed tests.
Test results are displayed for each instance, following the structure of the .pts test
script.
Report Header
Each test report contains a report header with:
• The version of Test RealTime used to generate the test as well as the date of the
test report generation
• The path and name of the project files used to generate the test
• The total number of test cases Passed and Failed. These statistics are calculated
from the actual number of test elements listed in the sections below.
• Virtual Tester information.
You can modify the appearance of UML sequence diagrams by changing the
UML/SD Viewer Preferences.
When using System Testing with Runtime Tracing or other Test RealTime features
that generate UML sequence diagrams, all results are merged in the same sequence
diagram.
You can click any element of the UML sequence diagram to open the System Testing
reports at the corresponding line. Click again in the test report, and you will locate
the line in the .pts test script.
Messages
Messages are sent and received between Virtual Testers and system instances.
Rendezvous
RENDEZVOUS statements are displayed as Synchronizations in the Virtual Tester
lifeline.
On-the-Fly Tracing
If you are using the On-the-Fly option, only the following information can be
displayed in real-time during the execution of the application:
• Virtual Tester and system under test
• Messages
• Rendezvous
• Test script blocks
On-the-Fly Tracing
The System Testing for C on-the-fly tracing capability allows you to monitor the
Virtual Testers during the test execution in a UML sequence diagram. Information
provided by dynamic tracking includes:
• Beginning and end of scenarios
• Rendezvous
• Sent and received messages
• Inter-tester messages (only received messages)
• Beginning and end of termination, initialization and exception blocks
• End of Testers
On-the-fly tracing output is displayed in the UML/SD Viewer in real-time. You can
click any item in the sequence diagram to instantly highlight the corresponding test
script line in the Text Editor window.
Trace Probes
The Probe feature of Test RealTime allows you to manually add special probe C
macros at specific points in the source code under test, in order to trace messages.
Adding trace probes to the application produces a binary which is functionally
identical to the original, but which generates extra message tracing results with
System Testing for C.
Upon execution of the instrumented binary, the probes write trace information on the
exchange of specified messages to the .rio System Testing output file, including
message content and a time stamp. Probe trace results can then be processed and
displayed as .tdf dynamic trace files in the UML/SD Viewer.
The use of C macros offers extreme flexibility. For example, when delivering the final
application, you can leave the macros in the final source and simply provide an
empty definition.
The atl_start_trace() and atl_end_trace() macros must be called when the application
under test starts and terminates.
Other macros must be placed in your source code in locations that are relevant for the
messages that you want to trace.
The following probe macros are available:
• atl_dump_trace()
• atl_end_trace()
• atl_recv_trace()
• atl_select_trace()
• atl_send_trace()
• atl_start_trace()
• atl_format_trace()
Please refer to the Probe Macros section in the Test RealTime Reference Manual for a
complete definition of each probe macro.
2. In the file selector, select Trace Files (*.tsf, *.tdf) and select the .tsf and .tdf files
produced after the execution of the application under test.
3. Click OK.
Graphical User Interface
GUI Philosophy
In addition to acting as an interface with your usual development tools, the GUI
provides navigation facilities, allowing natural hypertext linkage between source
code, test scripts, analysis reports, and UML sequence diagrams. For example:
• You can click any element of a test report to highlight the corresponding test
script line in the embedded text editor.
• You can click any element of a runtime analysis report to highlight and edit
the corresponding item in your application source code.
• You can click a filename in the output window to open the file in the Text Editor.
In addition, the GUI provides easy-to-use Activity Wizards to guide you through the
creation of your project components.
Start Page
When you launch the graphical user interface, the first element that appears is the
Test RealTime Start Page.
The Start Page is the central location of the application. From here, you can create a
new project, start a new activity and navigate through existing project reports.
The Start Page contains the following sections:
• Welcome: General information for first-time users of the product.
• Get Started: This section lists your recent projects as well as a series of
example projects provided with Test RealTime.
• Activities: This section displays a series of new activities. Click a new activity to
launch the corresponding activity wizard. A project must be open before you
can select a new activity.
• Examples: A set of sample projects for tutorial or demonstration purposes. You
can use these projects to get familiar with the product.
• Support: Links to Customer Support and online documentation.
To reset the Start Page:
1. Select the Start page and click the Reset button in the toolbar.
Output Window
The Output Window displays messages issued by product components or custom
features.
The first tab, labelled Build, is the standard output for messages and errors. Other
tabs are specific to the built-in features of the product or any user defined tool that
you may have added.
To switch from one console window to another, click the corresponding tab. When
any of the Output Window tabs receives a message, that tab is automatically
activated.
When a console message contains a filename, double-click the line to open the file in
the Text Editor. Similarly, when a test report appears in the Output Window, double-
click the line to view the report.
Project Explorer
The Project Explorer allows you to navigate, construct and execute the components of
your project. The Project Explorer organizes your workspace from two viewpoints:
• Project Browser: This tab displays your project as a tree view, as it is to be
executed.
• Asset Browser: Source code and test script components are displayed on an
object or elementary level.
To change views, select the corresponding tab in the lower section of the Project
Explorer window.
Project Browser
The Project Browser displays the following hierarchy of nodes:
• Projects: the Project Explorer's root node. Each project can contain one or more
sub-projects.
• Results: after execution, this node can be expanded to display the resulting
report sub-nodes and files, allowing you to control those files through a CMS
system such as Rational ClearCase.
• Test groups: provide a way to group and organize test or application nodes into
one or more test campaigns.
• Test nodes: these contain test scripts and source files:
• Test Scripts: for Component Testing or System Testing
• Source files: for code-under-test as well as additional source files
• Any other test related files
• Application nodes: represent your application, to which you can apply SCI
instrumentation for Memory Profiling, Performance Profiling, Code Coverage
and Runtime Tracing.
• External Command nodes: these allow you to add shell command lines at any
point in the Test Campaign.
After execution of a test or application, double-click the node to open all associated
available reports.
When you run a Build command in the Project Browser, the product parses and
executes each node from the inside-out and from top to bottom. This means that the
contents of a parent node are executed in sequence before the actual parent node.
Asset Browser
The Asset Browser displays all the files contained in your project. The product parses
the files and displays individual components of your source files and test scripts,
such as classes, methods, procedures, functions, units and packages.
Use the Asset Browser to easily navigate through your source files and test scripts.
In the Asset Browser, you can select the view type in the Sort Method box at
the top of the Project Explorer window. Each view type can be more or less relevant
depending on the programming language used:
• By Files: This view displays a classic source file and dependency structure
• By Objects: Primarily for C++ and Java, this view type presents objects and
methods independently from the file structure
• By Packages: This is mostly relevant for Java and displays packages and
components
Use the Sort button to activate or disable the alphabetical sort.
Double-click a node in the Asset Browser to open the source file or test script in the
text editor at the corresponding line.
Properties Window
The Properties Window box contains information about the node selected in the
Project Explorer. It also allows you to modify this information. The information
available in the Properties Window depends on the view selected in the Project
Explorer:
• Project Browser
• Asset Browser
Project Browser
Depending on the node selected, any of the following relevant information may be
displayed:
• Name: The name of the node in the Project Explorer.
• Exclude from Build: Excludes the node from the Build process. When this
option is selected a cross is displayed next to the node in the Project Explorer.
• Execute in background: Enables the build and execution of more than one test
or application node at the same time.
• Relative path: Indicates the relative path of the file.
• Full path: Indicates the entire path of the file.
• Instrumented type: You can select either Yes or No.
Asset Browser
Select the type of Object View in the Sort Method box at the top of the Project
Explorer window: By Object, By Files, or By Packages. Depending on the sort method
selected, and the type of object or file, any of the following relevant information may
be displayed:
• Name: the name of the file, object or package.
• Filters (for folders): is the file extension filter for files in that folder. See Creating
a Source File Folder.
• Name: the name of the file or package.
• Relative path: indicates the relative path of the file.
• Full path: indicates the entire path of the file.
Report Explorer
The Report Explorer allows you to navigate through all text and graphical reports,
including:
• Test reports generated by Component or System Testing
• Memory Profiling, Performance Profiling and Code Coverage reports
• UML Sequence Diagram reports from the Runtime Tracing feature
• Metrics produced by the Metrics Viewer
The actual appearance of the Report Explorer contents depends on the nature of the
report that is currently displayed, but generally the Report Explorer offers a dynamic
hierarchical view of the items encountered in the report.
Click an item in the Report Explorer to locate and select it in the Report Viewer or
UML/SD Viewer window.
Standard Toolbars
The toolbars provide shortcut buttons for the most common tasks.
The following toolbars are available:
• Main toolbar
• View toolbar
• Build toolbar
• Status bar
Main Toolbar
The main toolbar is available at all times:
• The New File button creates a new blank text file in the Text Editor.
• The Open button allows you to load any project, source file, test script or
report file supported by the product.
• The Save File button saves the contents of the current window.
• The Save All button saves the current workspace as well as all open files.
• The Cut, Copy and Paste buttons provide the standard clipboard
functionality.
• The Undo and Redo buttons allow you to undo or redo the last command.
• The Find button allows you to locate a text string in the active Text Editor or
report window.
View Toolbar
The View toolbar provides shortcut buttons for the Text Editor and report viewers.
• The Choose Zoom Level box and the Zoom In and Zoom Out buttons are
classic zoom controls.
• The Reload button refreshes the current report in the report viewer. This is
useful when a new report has been generated.
• The Reset Observation Traces button clears cumulative reports such as those
from Code Coverage, Memory Profiling or Performance Profiling.
Build Toolbar
The build toolbar provides shortcut buttons to build and run the application or test.
• The Configuration box allows you to select the target configuration on which
the test will be based.
• The Build button launches the build and executes the node selected in the
Project Explorer. You can configure the Build Options for the workspace by
selecting the Options button.
• The Stop button stops the build or execution.
• The Clean Parent Node button removes files created by previous tests.
• The Execute Node button executes the node selected in the Project Explorer.
Status Bar
The Status bar is located at the bottom of the main GUI window. It includes a Build
Clock which displays execution time, and the Green LED which flashes when work is
in progress.
Report Viewer
The Report Viewer allows you to view Test or Runtime Analysis reports from
Component Testing, System Testing, and any of the Runtime Analysis features.
Most reports are produced as XML-based .xrd files, which are generated during the
execution of the test or application node.
Understanding Reports
Test RealTime generates Test and Runtime Analysis reports based on the execution of
your application.
Text Editor
The product GUI provides its own Text Editor for editing and browsing script files
and source code.
The Text Editor is a fully-featured text editor with the following capabilities:
• Syntax Coloring
• Find and Replace functions
• Go to line or column
The main advantage of the Text Editor included with Test RealTime is its tight
integration with the rest of the GUI. You can click items within the Project Explorer,
Output Window, or any Test and Runtime Analysis report to immediately highlight
and edit the corresponding line of code in the Editor.
2. Click the '+' symbol to expand the list of references in the file.
3. Double-click a reference to open the Text Editor at the corresponding line.
You can also navigate through the source file by double-clicking other reference
points in the Project Explorer.
Search Options
The Search box allows you to select the search mode:
• All searches for the first occurrence from the beginning of the file.
• Selected searches through selected text only.
• Forward and Backward specify the direction of the search, starting at the
current cursor position.
• Match case restricts search criteria to the exact same case.
• Match whole word only restricts the search to complete words.
• Use regular expression allows you to specify UNIX-like regular expressions as
search criteria.
2. The editor Find and Replace dialog appears with the Replace tab selected.
3. Type the text that you want to change in the Find what box. A history of
previously searched words is available by clicking the Find List button.
4. Type the text that you want to replace it with in the Replace with box. A history
of previously replaced words is available by clicking the Replace List button.
5. Change search options (see below) if required.
6. Click Replace to replace the first occurrence of the searched text, or Replace All
to replace all occurrences.
Search Options
The Search box allows you to select the search mode:
• All searches for the first occurrence from the beginning of the file.
• Selected searches through selected text only.
• Forward and Backward specify the direction of the search, starting at the
current cursor position.
• Match case restricts search criteria to the exact same case.
• Match whole word only restricts the search to complete words.
• Use regular expression allows you to specify UNIX-like regular expressions as
search criteria.
If the filename does not have a standard extension, you must select the language
from the Syntax Color submenu.
Tools Menu
About the Tools Menu
The Tools menu is a user-configurable menu that allows you to access personal tools
from the Test RealTime graphical user interface (GUI). You can customize the Tools
menu to meet your own requirements.
Custom tools can be applied to a selection of nodes in the Project Explorer. Selected
nodes can be sent as a parameter to a user-defined tool application. A series of macro
variables is available to pass parameters on to your tool's command line.
See the section GUI Macro Variables in the Rational Test RealTime Reference Manual
for detailed information about using the macro command language.
Tool Configuration
The Tool Configuration dialog allows you to configure a new or existing tool.
In the Tools menu, each tool appears as a submenu item, or Name, with one or
several associated actions or Captions.
Identification
In this tab, you describe how the tool will appear in the Tools menu.
• Enter the Name of the tool submenu as it will appear in the Tools menu and a
Comment that is displayed in the lower section of the Toolbox dialog box.
• Select Change Management System if the tool is used to send and retrieve from
a change management system. When Change Management System is selected,
Check In and Check Out actions are automatically added to the Action tab (see
below) and a Change Management System toolbar is activated.
• Clear the Add to Tools menu checkbox if you do not want the tool to be added
to the Tools menu.
• Select Send messages to custom tab in the Output Window if you want to view
the tool's text output in the Output Window.
• Use the Icon button to attach a custom icon to the tool that will appear in the
Tools menu. Icons must be either .xpm or .png graphic files and have a size of
22x22 pixels.
Actions
This tab allows you to describe one or several actions for the tool.
• The Actions list displays the list of actions associated with the tool. If Change
Management System is selected on the Identification tab, Check In and Check
Out tool commands will be listed here. These cannot be renamed or removed.
• Menu text is the name of the action that will appear in the Tools submenu.
• Command is a shell command line that will be executed when the tool action is
selected from the Tools menu. Command lines can include toolbox macro variables
and functions.
Click OK to validate any changes made to the Tool Edit dialog box.
To modify an action:
1. Select an action in the Actions list.
2. Make any changes in the Caption or Command lines.
3. Click Modify.
To hide a curve:
1. Right-click a curve.
2. From the pop-up menu, select Hide Curve.
displaying it on a scale of 1000, unless you want to compare it with another curve
that uses that scale.
1. Right-click a curve.
2. From the pop-up menu, select Set Max Value.
3. Enter the scale value, and click OK.
Note Setting a maximum value lower than the actual maximum value of a
curve can result in erratic results.
To display a scale:
For any curve, you can display a scale on the right or left-hand side of the graph.
When you display a new scale, it replaces any previously displayed one.
1. Right-click a curve.
2. From the pop-up menu, select Right Scale or Left Scale.
Custom Curves
In some cases, you may want to remove certain figures from a chart to make it more
relevant. The custom curves capability allows you to alter the chart by selecting the
records that you want to include.
Note Using the custom curves capability does not impact the actual database.
If you remove a record from the chart by using the custom curves function, the
actual record remains in the database and may impact other figures.
Custom curves create a new metric, using the name of the base metric, with a Custom
prefix.
2. In the Custom Curves dialog box, select the Custom metric that you want to
modify.
3. Select the records that you want to use for your custom curve. Clear the records
that you do not want to use.
4. Click OK.
Event Markers
Use event markers to identify milestones or special events within your Test Process
Monitor chart. An event marker is identified by the date of the event and a marker
label.
Event markers appear as bold vertical lines in a Test Process Monitor chart.
2. From the Project menu, select Test Process Monitor, Scale and the desired time
scale.
3. If you chose Customize, enter the start and end date of the period that you want
to monitor, and click OK.
Adding a Metric
Metrics generated by Code Coverage or other tools are directly available through the
Test Process Monitor. Each metric file contains one or several fields.
UML/SD Viewer
About the UML/SD Viewer
The UML/SD Viewer renders sequence diagram reports as specified by the UML
standard.
UML sequence diagrams can be produced directly by executing the SCI-instrumented
application when using the Runtime Tracing feature.
The UML/SD Viewer can also display UML sequence diagram results for
Component and System Testing features.
Time Stamping
The UML/SD Viewer displays time stamping information on the left of the UML
sequence diagram. Time stamps are based on the execution time of the application on
the target.
You can change the display format of time stamp information in the UML/SD
Viewer Preferences.
The following time format codes are available:
• %n - nanoseconds
• %u - microseconds
• %m - milliseconds
• %s - seconds
• %M - minutes
• %H - hours
These codes are replaced by the actual number. For example, if the time elapsed is
12ms, then the format %mms would result in the printed value 12ms. If the number 0
follows the % symbol but precedes the format code, then 0 values are printed to the
viewer - otherwise, 0 values are not printed. For example, if the time elapsed is 10ns,
and the selected format code is %0mms %nns, then the time stamp would read 0ms
10ns.
Note To change the format code you must press the Enter key immediately
after selecting/entering the new code. Simply pressing the OK button on the
Preferences window will not update the time stamp format code.
Coverage Bar
In C, C++ and Java, the coverage bar provides an estimation of code coverage.
Note The coverage bar is unrelated to the Code Coverage feature. For
detailed code coverage reports, use the dedicated Code Coverage feature.
When using the Runtime Tracing feature, the UML/SD Viewer can display an extra
column on the left of the UML/SD Viewer window to indicate code coverage
simultaneously with UML sequence diagram messages.
The UML/SD Viewer code coverage bar is merely an indication of the ratio of
encountered versus declared function or method entries and potential exceptions
since the beginning of the sequence diagram.
If new declarations occur during execution, the graph is recalculated; therefore, the
coverage bar always displays an increasing coverage rate.
Memory Usage Bar
When using the Runtime Tracing feature on a Java application, the UML/SD Viewer
can display an extra bar on the left of the UML/SD Viewer window to indicate total
memory usage for each sequence diagram message event.
The memory usage bar indicates how much memory has been allocated by the
application and is still in use, that is, not yet garbage collected.
In parallel to the UML sequence diagram, the graph bar represents the allocated
memory against the highest amount of memory allocated during the execution of the
application.
This ratio is calculated by subtracting the amount of free memory from the total
amount of memory used by the application. The total amount of memory is subject to
change during the execution and therefore the graph is recalculated whenever the
largest amount of allocated memory increases.
A tooltip displays the actual memory usage in bytes.
Thread Bar
When using the Runtime Tracing feature on C, C++ and Java code, the UML/SD
Viewer can display an extra column on the left of its window to indicate the active
thread during each UML sequence diagram event.
Each thread is displayed as a different colored zone. A tooltip displays the name of
the thread.
Click the thread bar to open the Thread Properties window.
Thread Properties
The Thread Properties window displays a list of all threads that are created during
execution of the application. Threads are listed with the following properties:
• Colour tab: As displayed in the Thread Bar.
• Thread ID: A sequential number corresponding to the order in which each
thread was created.
• Name: The name of the thread.
• State: Either Sleeping or Running state.
• Priority: The current priority of the thread.
• Since: The timestamp of the moment the thread entered the current state.
Click the title of each column to sort the list by the corresponding property.
Step-by-Step mode
When tracing large applications, it may be useful to slow down the display of the
UML sequence diagram. You can do this by using the Step-by-Step mode.
2. Select the type of UML element you want to define for the event and select
Activate. Several types of elements can be activated for a single filter or trigger
event.
3. Click More or Fewer to add or remove lines from the event criteria.
4. From the drop-down criteria box, select a criteria for the filter, and an argument.
5. Arguments must reflect an exact match for the criteria. Pay particular attention
when referring to labels that appear in the sequence diagram since they may be
truncated.
6. You can use wildcards (*) or regular expressions by selecting the corresponding
option.
Message Criteria
• Name: Specifies a message name as the filter criteria.
• Internal message: Considers all messages other than constructor calls coming
from any internal source, as opposed to those messages coming from the World
instance.
• From Instance: Considers all messages other than constructor calls prior to the
first message sent from the specified object
• To Instance: Considers all messages other than constructor calls if any
message is sent to the specified object
• From World: Considers all messages received from the World instance
• To World: Considers all messages sent to the World instance
Instance Criteria
• Name: Specifies an instance name as the filter criteria
• Instance child of: Specifies a child instance of the specified class.
Note Criteria
• All: Considers all notes
• Name: Specifies a note name
• All message notes: Considers any note attached to a message
• All instance notes: Considers any note attached to an instance
• Instance child of: Specifies a note attached to an instance of the specified class
• Note on message named: Considers a note attached to a specified message
• With style named: Considers a note with the specified style attributes
Synchronization Criteria
• All: Considers all synchronization events
• Name: Specifies a synchronization name
Action Criteria
• All: Considers all actions
• Name: Specifies an action name
• From Instance: Considers an action performed by the specified object
• From World: Considers all actions performed by the World instance
• Instance child of: Specifies an action performed by an instance of the specified
class
• With style named: Considers an action with the specified style attributes
Loop Criteria
• All: Considers all loops
• Name: Specifies a loop name
Boolean Operators
• All Except expresses a NOT operation on the criteria
• Match All performs an AND operation on the series of criteria
• Match Any performs an OR operation on the series of criteria
Search Options
• Forward and Backward specify the direction of the search.
• The Search into option allows you to specify the type of object in which you expect
to find the search string.
• The Find dialog box accepts either UNIX regular expressions or DOS-like
wildcards ('?' or '*'). Select either wildcard or reg. exp. in the Find dialog box to
select the corresponding mode.
Child settings can be set to override parent settings. In this case, the overridden
settings will, in turn, be cascaded down to lower nodes in the hierarchy. Overridden
settings are displayed in bold.
Settings are changed only for a particular Configuration. If you want your changes to
a node to be made throughout all Configurations, be sure to select All Configurations
in the Configuration box.
General Settings
The General settings are part of the Configuration Settings dialog box, which allows you to
configure settings for each node.
By default, the settings of each node are inherited from those of the parent node.
When you override the settings of a parent node, changes are propagated to all child
nodes within the same Configuration. Overridden fields are displayed in bold.
Host Configuration
The Host Configuration area lets you override any information about the machine on
which the Target Deployment Port is to be compiled.
• Hostname: The hostname of the machine. By default this is the local host.
• Address: The IP address of the host. For the local host, use 127.0.0.1.
• System Testing Agent TCP/IP Port: The port number used by System Testing
Agents. The default is 10000.
• Socket Uploader Port: The default value is 7777.
• Target Deployment Port: This allows you to change the Target Deployment Port
for the selected nodes. Child nodes will use the default Configuration Settings
from this Target Deployment Port, such as compilation flags.
Directories
• Build: Specify an optional working directory for the Target Deployment Port.
This is where the generated test harness or application will be executed on the
target host.
• Temporary: Enter the location for any temporary files created during the Build
process.
• Report: Specify the directory where test and analysis results are created.
• Java Main Class (for Java only): Specifies the name of the main class for Java
programs.
Build Settings
The Build settings are part of the Configuration Settings dialog box, which allows
you to configure settings for each node.
By default, the settings of each node are inherited from those of the parent node.
When you override the settings of a parent node, changes are propagated to all child
nodes within the same Configuration. Overridden fields are displayed in bold.
Compiler Settings
• Preprocessor options: Specific compilation flags to be sent to the Test Compiler.
• Compiler flags: Extra flags to be sent to the compiler.
• Preprocessor macro definitions: Specify any macro definitions that are to be sent
to both the compiler preprocessor (if used) and the Test Compilers. Multiple
definitions must be separated by a comma ',' with no space, as in the
following example:
WIN32,DEBUG=1
• Directories for Include Files: Click the ... button to create or modify a list of
directories for included files when the include statement is encountered in
source code and test scripts. In the directory selection box, use the Up and
Down buttons to indicate the order in which the directories are searched.
• User Link File for Ada (for Ada only): When using the Ada Instrumentor, you
must provide a link file. See Ada Link Files for more information.
• Boot Class Path (for Java only): Click the ... button to create or modify the Boot
Class Path parameter for the JVM.
• Class Path (for Java only): Click the ... button to create or modify the Class Path
parameter for the JVM.
Linker Settings
This area contains parameters to be sent to the linker during the build of the current
node.
• Link Flags: Flags to be sent to the linker.
• Additional objects or libraries: A list of object libraries to be linked to the
generated executable. Enter the command line option as required by your
linker. Please refer to the documentation provided with your development tool
for the exact syntax.
• Test driver filename: The name of the generated test driver binary. By default,
Test RealTime uses the name of the test or application node.
• Directories for Libraries: Click the ... button to create or modify a list of
directories for library link files. In the directory selection box, use the Up and
Down buttons to indicate the order in which the directories are searched.
Execution Settings
These settings apply to Component Testing and System Testing nodes only.
• Command line arguments: Specifies any command line arguments that are to be
sent to the application under test upon execution.
• Main application procedure (for Ada only): Ada requires an entry point in the
source code. For other languages, leave this blank.
• Build jar file (for Java only): Specifies whether to build an optional .jar file.
• Jar file name (for Java only): If Build jar file is set to Yes, enter the name of the
.jar file.
• Manifest file (for Java only): Specifies the name of an optional manifest file.
• Jar other directories (for Java only): Enter the location to generate the .jar file. By
default this is the source code directory.
• Environment variables: This section allows you to specify any environment
variables that can be used by the application under test. Click the "..." button to
edit environment variables. String values must be entered with quotes ("").
You can enter GUI Macro Variables as values for environment variables. These
will be interpreted by the GUI and replaced with the actual values for the
current node. See GUI Macro Variables in the Rational Test RealTime Reference
Manual.
variables, making them context-sensitive. See the GUI Macro Variables chapter in the
Reference Manual.
By default, the settings of each node are inherited from those of the parent node.
When you override the settings of a parent node, changes are propagated to all child
nodes within the same Configuration. Overridden fields are displayed in bold.
Snapshot Settings
In some cases, such as with applications that never terminate or when working with
timing- or memory-sensitive targets, you might need to dump traces at specific
points in your code.
• On Function Entry: Allows you to specify a list of function names, from your
source code, that will dump traces at the beginning of the function.
• On Function Return: Allows you to specify a list of function names, from your
source code, that will dump traces at the end of the function.
• On Function Call: Allows you to specify a list of function names, from your
source code, that will dump traces before the function is called.
For each tab, click the ... button to open the function name selection box. Use the Add
and Remove buttons to create a list of function names.
See Generating SCI Dumps for more information.
Selective Instrumentation
By default, runtime analysis features instrument all components of source code under
analysis.
The Selective Instrumentation settings allow you to more finely define which units
(classes and functions) you want to instrument and trace.
• Units excluded from instrumentation: Click the ... button to access a list of units
(classes and functions) that can be excluded from the instrumentation process.
Click a unit to select or clear it. Use the Select File and Clear File buttons to
select and clear all units from a source file.
• Files excluded from instrumentation: Click the ... button and use the Add and
Remove buttons to select the files to be excluded.
• Instrument inline methods: Extends instrumentation to inline methods.
• Instrument included methods or functions: Extends instrumentation to included
methods or functions.
• Directories excluded from instrumentation: Click the ... button and use the Add
and Remove buttons to select the directories to be excluded.
Miscellaneous Options
• Label Instrumented Files: Select this option to add an identification header to
files generated by the Instrumentor, including the command line used to
generate the file, the version of the product, date and operating system
information.
• Full template instantiation: By default unused methods are ignored by the
Instrumentor. Set this option to Yes to analyze all template methods, even if
they are not used.
• Additional Instrumentor Options: This setting allows you to add command line
options for the Instrumentor. Normally, this line should be left blank.
By default, the settings of each node are inherited from those of the parent node.
When you override the settings of a parent node, changes are propagated to all child
nodes within the same Configuration. Overridden fields are displayed in bold.
Instrumentation Control
• File in use (FIU): When the application exits, this option reports any files left
open.
• Memory in use (MIU): When the application exits, this option reports allocated
memory that is still referenced.
• Signal (SIG): This option indicates the signal number received by the application
forcing it to exit.
• Freeing Freed Memory (FFM) and Late Detect Free Memory Write (FMWL):
Select Display Message to activate detection of these errors.
• Free queue length (blocks) specifies the number of memory blocks that are kept
free.
• Free queue size (Kbytes) specifies the total buffer size for free queue blocks. See
Freeing Freed Memory (FFM) and Late Detect Free Memory Write (FMWL).
• Display Detect Array Bounds Write (ABWL): Select Yes to activate detection of
this error.
• Red zone length (bytes) specifies the number of bytes added by Memory
Profiling around the memory range for bounds detection.
• Number of functions: specifies the maximum number of functions reported
from the end of the CPU call stack. The default value is 6.
Misc. Options
• Trace File Name (.tpf): This box allows you to specify a filename for the
generated .tpf trace file.
• Global variables to exclude from observation (for Java only): This box specifies a
list of global variables that are not to be inspected for memory leaks. This option
can be useful to save time and instrumentation overhead on trusted code. Use
the Add and Remove buttons to add and remove global variables.
JVMPI
• Object hashtable size: Specifies the size of the hashtable for objects; the size
must be 64, 256, 1024, or 4096.
• Class hashtable size: Specifies the size of the hashtable for classes; the size
must be 64, 256, 1024, or 4096.
• Take a Snapshot: You can select one of the following options:
The Performance Profiling settings are part of the Runtime Analysis node of the
Configuration Settings dialog box, which allows you to configure settings for each
node.
By default, the settings of each node are inherited from those of the parent node.
When you override the settings of a parent node, changes are propagated to all child
nodes within the same Configuration. Overridden fields are displayed in bold.
Trace File Name (.tqf): This box allows you to specify a filename for the generated .tqf
trace file for Performance Profiling.
Miscellaneous Options
• Trace File Name (.tio): This allows you to specify a path and filename for the .tio
dynamic coverage trace file.
• Compute deprecated metrics: This setting is for compatibility with third party
tools designed for previous versions of the product. Set this to No in most cases.
• User comment: This adds a comment to the Code Coverage Report. This can be
useful for identifying reports produced under different Configurations. To view
the comment, click the magnifying glass symbol that is displayed at the top of
your source code in the Code Coverage Viewer.
Instrumentation Control
• Trace File Name (.tdf): This allows you to force a filename and path for the
dynamic .tdf file. By default, the .tdf carries the name of the application node.
• Functions called within a return expression are sequenced: For C only. With this
option, the UML/SD Viewer displays calls located in return expressions as if
they were executed sequentially and not in a nested manner.
• Collapse unnamed classes and structures: For C++ only. With this option,
unnamed structs and unions are not instrumented.
• Display class template instantiation in a note: For C++ only. With this option,
the UML/SD Viewer will not display a note for each template class instance.
Trace Control
• Split Trace File Enable: See Splitting trace files for more information on this
setting.
• Maximum Size (Kbytes): This specifies the maximum size for a split .tdf file.
When this size is reached, a new split .tdf file is created.
• File name prefix: By default, split files are named as att_<number>.tdf, where
<number> is a 4-digit sequence number. This setting allows you to replace the
att_ prefix with the prefix of your choice.
• Automatic loop detection enable: Loop detection simplifies UML sequence
diagrams by summarizing repeating traces into a loop symbol. Loops are an
extension to the UML sequence diagram standard and are not supported by
UML.
• Options (Reserved for future use): This setting allows you to add command line
options. Normally, this line should be left blank.
• Display largest call stack length: When selected, the Target Deployment Port
records the highest level attained by the call stack during the trace. This
information is displayed at the end of the UML Sequence Diagram in the
UML/SD Viewer as Maximum Calling Level Reached.
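As an illustration of the split-file naming described above, the following sketch prints the default file names. Zero padding of the 4-digit sequence number is an assumption for illustration; the text only states that a 4-digit sequence number is used.

```shell
# Sketch: default split trace file names, att_<number>.tdf, where
# <number> is a 4-digit sequence number (zero padding assumed here).
for i in 1 2 3; do
  printf 'att_%04d.tdf\n' "$i"
done
# prints att_0001.tdf, att_0002.tdf, att_0003.tdf
```

Replacing the att_ prefix in the File name prefix setting would, for example, change att_0001.tdf into mytrace_0001.tdf.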
Display
• Display variables: lets you select the level of detail of the Component Testing
output report
• Initial and expected value display: the way in which the values assigned to each
variable are displayed in the report. See Initial and Expected Values.
• Array and structure display: indicates the way in which Component Testing
processes variable array and structure statements. See Array and Structure
Display for more information.
Additional Options
• Continue test build despite warnings: Select this option to ignore warnings
during the test compilation phase.
• Call breakpoint function on test failure: Select this option to call a breakpoint
function whenever a test failure occurs in a .ptu Test Driver script. To use this
feature, you must set a breakpoint on the function priv_check_failed(), located in
the <target_deployment_port>/lib/priv.c file. You can use this option for
debugging purposes.
• Simulation: This setting determines the conditional generation of code in the
test program when using SIMUL blocks in the .ptu test script.
• Display diff of last two test runs: This setting activates the comparison option.
See Comparing Reports.
• Additional test compilation options: Sends extra command line options to the
Component Testing Test Compiler. Please refer to the Test RealTime Reference
Manual for further information about addressing the Test Compiler in
command-line mode.
• Additional report generation options: Sends extra command line options to the
Component Testing Report Generator. Please refer to the Test RealTime
Reference Manual for further information about addressing the Report
Generator in command-line mode.
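The breakpoint option listed above can be combined with Debug mode. A minimal sketch of a debugger session follows; the executable name test_driver is hypothetical, and priv_check_failed is the function named in the text:

```
gdb ./test_driver               # test driver built with Debug mode support (-g)
(gdb) break priv_check_failed   # stop whenever a .ptu test failure occurs
(gdb) run
(gdb) backtrace                 # inspect the context of the failing test
```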
To edit the Component Testing for C and Ada settings for a node:
1. In the Project Explorer, click the Settings button.
2. Select a node in the Project Explorer pane.
3. In the Configuration Settings list, expand Component Testing for C and Ada.
4. Select Display, Additional Options, or Comparison.
5. When you have finished, click OK to validate the changes.
Files
This area specifies the path and filenames for the intermediate files generated by the
Component Testing for C++ feature during the test execution.
• Test report file name (.xrd): contains the location and name of the .xrd report file
generated by Component Testing for C++
• Generated test driver source file name: contains the location and name of a .cpp
source file generated from the C++ Test scripts by Component Testing for C++
• Contract check file name: contains the path and file name of a temporary .oti file
created during source code instrumentation by Component Testing for C++
General Options
This area contains general information for the Component Testing for C++ feature.
• Maximum test compilation errors displayed: Specifies the maximum number of
error messages that can be displayed by the C++ Test Script Compiler. The
default value is 30.
• Add #line directive into instrumented source file: This option allows use of
#line statements in the source code generated by Component Testing for C++.
Disable this option in environments where the generated source code cannot
use the #line mechanism. By default #line statements are generated.
Testing Options
These options are used for the C++ Test Driver Script.
Diagram only when they change. This affects both trace size and UML Sequence
Diagram display size, but has no impact on execution time.
• Check 'const' methods: Usually C++ const methods are not checked for state
changes because they cannot modify a field of the this object. Instead, const
methods are only evaluated once for invariants. In some cases, however, the this
object may change even if the method is qualified with const (by assembler
code, or by calling another method that casts the this parameter to a non-const
type). There may also be pointer fields to objects which logically belong to the
object, but the C++ Test Script Compiler will not enforce that these pointed sub-
objects are not modified. Select this option only if your code contains such code
implementations.
• Reentrant object support: Select this option if your application is multi-threaded
and objects are shared by several threads. This ensures atomicity for state
evaluation. This option has no effect if multi-thread support is not activated in
the Target Deployment Compilation Settings.
• Enforce 'const' assertions: When this option is selected, the compiler requires
that invariant and state expressions are constant. Disable this option if you do
not use the const qualifier on methods that are actually constant.
Mode Settings
This area contains parameters to be sent to the linker during the build of the current
node.
• Check stub: If selected, Component Testing for Java checks simulated classes.
Select No to save memory on the target platform.
• Display stub note: If the Check stub setting is set to Yes, this setting specifies
whether to display simulated class information in the Component Testing
sequence diagram.
When selected, multiple instances of a virtual tester can all run in the same
process.
• Trace Buffer Optimization: See Optimizing Execution Traces.
• Select Time stamp only to generate a normal trace file.
• Select Block start/end only to generate traces for each scenario beginning and
end, all events, and for error cases.
• Select Errors only to generate traces only if an error is detected during execution
of the application.
• Circular buffer: Select this option to activate the Circular Trace Buffer.
• Trace Buffer size (Kbytes): This box specifies the size - in kilobytes - of the
circular trace buffer. The default setting is 10Kb.
4. Select Test Compiler, Report Generator or Target Deployment Port for System
Testing Settings.
5. When you have finished, click OK to validate the changes.
Selecting Configurations
Although a project can use multiple Configurations, as well as multiple TDPs, there
must always be at least one active Configuration.
The active Configuration affects build options, individual node settings and even
wizard behavior. You can switch from one Configuration to another at any time,
except during build activity, when the green LED flashes in the Build toolbar.
To switch Configurations:
1. From the Build toolbar, select the Configuration you wish to use in the
Configuration box.
Modifying Configurations
Configurations are based on the Target Deployment Ports (TDP) that are specified
when you create a new project. In fact, a Configuration contains basic Configuration
Settings for a given TDP applied to a project, plus any node-specific overridden
settings.
Remember that although a project can use multiple Configurations, as well as
multiple TDPs, there must always be at least one active Configuration.
Configuration Settings are a main characteristic of the project and can be individually
customized for any single node in the Project Explorer.
Understanding Projects
A project is a tree representation that contains nodes.
Within the project tree, each node has its own individual Configuration Settings —
inherited from its parent node— and can be individually executed.
Project Nodes
• Group node: Allows you to group together several application or test nodes.
• Application node: contains a complete application.
• Results node: contains your runtime analysis result files, once the application
has been executed. Use this node to control the result files in Rational ClearCase
or any other configuration management system.
• Source node: these are the actual source files under test. They can be
instrumented or not.
• Test node: represents a complete test harness for Component Testing for C and
Ada, C++, Java, or System Testing. A test node contains:
• Results node: contains your test result files, once the test has been executed. Use
this node to control the result files in Rational ClearCase or any other
configuration management system.
• Test Script node: contains the test driver script for the current test.
• Source node: these are the actual source files under test. They can be
instrumented or not.
• External Command node: this node allows you to execute a command line
anywhere in the project. Use this to launch applications or to communicate with
the application under test.
Application and test nodes can be moved around the project to change the order in
which they are executed. The order of files inside a Test node cannot be changed;
for example, the test script must be executed before the source under test.
Sub-Projects
Projects can contain one or more sub-projects, which are actually links to other
project directories. The behavior of a sub-project is the same as that of a
project; in fact, a sub-project can be opened separately as a stand-alone project.
Results Node
By default, each application and test node contains a Results node.
Once the test or runtime analysis results have been generated, this node contains the
report files. Right-click the result node or the report files to bring up the Source
Control popup menu.
If you are not controlling result files in a configuration management system, you can
hide the Results node by setting the appropriate option in the Project Preferences.
Creating a Group
The Group node is designed to contain several application nodes. This allows you to
organize workspace by grouping applications together.
This also allows you to build and run a specific group of application nodes without
running the entire workspace.
The preferred method to create an application or test node is to use the Activity
Wizard, which guides you through the entire creation process.
However, if you are re-using existing components, you might want to create an
empty application node and manually add its components to the workspace.
The GUI allows you to freely create and modify test or application nodes. However,
you must follow the logical rules regarding the order of execution of the items
contained in the node. When using Component Testing for C++, .otc scripts must be
placed before .otd scripts.
Deleting a Node
Removing nodes from a project does not actually delete the files, but merely removes
them from the Project Explorer's representation.
Renaming a Node
Renaming a node in the Project Explorer involves modifying the properties of the
node.
Opening a Report
Because of the links between the various views of the GUI, there are many ways of
opening a test or runtime analysis report in Test RealTime. The most common ones
are described here.
Note Some reports require opening several files. For instance, when
manually opening a UML sequence diagram, you must select the complete set
of .tsf files as well as the .tdf file generated at the same time. A mismatch in
.tsf and .tdf files would result in erroneous tracing of the UML sequence
diagram.
Troubleshooting a Project
When executing a node for the first time in Test RealTime, it is not uncommon to
experience compilation issues. Most problems are due to oversights pertaining to
library or include paths or Target Deployment Port settings.
To help debug such problems during execution, you can prompt the GUI to report
more detailed information in the Output window by selecting the verbose output
option.
Debug Mode
The Debug option allows you to build and execute your application under a
debugger.
The debugger must be configured in the Target Deployment Port. See the Rational
Target Deployment Guide for further information.
Note Before running in Debug mode you must change the Compilation and
Link Configuration Settings to support Debug mode. For example, set the -g
option with most Linux compilers.
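As a sketch of what the note above means in practice, the following shows the -g flag being prepended to a set of compilation flags. The flag name is from the text; the include path is illustrative, and you should check your own compiler's documentation:

```shell
# Sketch: enabling debug information for Debug mode by adding -g,
# which most Linux compilers accept. The include path is illustrative.
CFLAGS="-I../include"
CFLAGS="-g $CFLAGS"
printf '%s\n' "$CFLAGS"
# prints -g -I../include
```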
Editing Preferences
Rational Test RealTime has many Preference settings that allow you to configure
various components of the graphical user interface.
Project Preferences
The Project Preferences dialog box lets you set parameters for the Test RealTime
project.
In the Preferences dialog box, select Project to change the project preferences.
• Automatic file tagging: Select this option to activate the Project Explorer's
automatic parsing mode, in which all source code and script components are
automatically listed. If disabled, you will have to manually refresh the File View
each time you modify the structure of a file.
Note If the structure of a source file has changed since the last file refresh,
metrics calculation cannot be performed. This impacts the Component Testing
Wizard, where the Unit Selection view will be disabled.
• Calculate static metrics: Select this option to ensure that static metrics are
recalculated whenever a file is added, modified or refreshed in the Project
Explorer window.
• Verbose output: Select this option to prompt the Test RealTime GUI to report
detailed information to the Output Window during execution. Use this option
to debug any compilation issues.
• Show report nodes in Project Explorer: Select this option to display test and
runtime analysis reports in the Project Browser once they have been successfully
generated. Report nodes appear inside their test or application nodes.
Connection Preferences
The Preferences dialog box allows you to customize Test RealTime.
The Connections node of the Preferences dialog box lets you set the network
parameters for the graphical user interface.
Activity Wizards
The Start Page provides a full set of activity wizards to help you get started with
a new project or activity.
• Performance Profile
• Code Coverage
• Runtime Tracing
2. The Application Files page opens. Use the Add and Remove buttons to build a
list of source files and header files (for C and C++) to add to your project.
The Configuration Settings button allows you to override the default
Configuration Settings.
Select Compute Static Metrics to run the analysis of static testability metrics.
Click Next> to continue.
Note If the static metrics analysis takes too much time, you can clear the
Compute Static Metrics option. In this case, the calculation and display of
static metrics in any further steps are disabled.
Note With Component Testing for Ada, it is not possible to submit only an
Ada procedure file. Instead, you must include the single procedure in a
package.
3. The Components Under Test page allows you to select the units or files for the
selected source files.
In order to help you choose which components you want to test, this page
displays the metrics for each file or unit (packages, classes or functions
depending on the language).
Select File Selection to choose files under test or Unit Selection to choose the
source code units that require testing. The selection mode toggles the static
metrics displayed between file metrics and unit metrics.
Note If the Unit Selection view seems incomplete, cancel the wizard, from the
Project menu, select Refresh File Information and restart the wizard. See
Refreshing the Asset Browser.
Click Metrics Diagram to select the units under test from a graph
representation.
Click Next> to continue or Generate to skip any further configuration and to use
default settings.
4. The Test Script Generation Settings page allows you to specify the test node
generation options. The General settings specify how the wizard creates the test
node.
• Test Name: Enter a name for the test node.
• Test Mode: Disables or enables the test boundaries.
• Typical Mode: No test boundaries are specified. This is the default setting. For
Java, all dependency classes are stubbed in the node.
• Expert Mode: This mode allows you to manually drive the generation of the test
harness. This provides more flexibility in sophisticated software architectures.
• Node Creation Mode: Selects how the test node is created:
• Single Mode: In C and Ada, this mode creates one test node for each source file
under test. In C++ and Java, it creates one test node for all selected source code
components.
• Multiple Mode: This creates a single test node for each selected source code
component.
The Components Under Test settings specify advanced settings for each
component of the test node. These settings depend on the language and
Configuration.
Click Next> to continue.
5. Review the Summary. This page provides a summary of the selected options
and the files that are to be generated by the wizard.
Click Next> to create the test node based on this information.
6. The Test Generation Result page displays progression of the test node creation
process. Click Settings to set the Configuration Settings. You can always modify
the test node Configuration Settings later if necessary, from the Project Explorer.
Note If you apply new settings after the test generation, the wizard reruns
the test generation. This allows you to fine-tune any settings that may cause
the test generation to fail.
Once a test node has been successfully generated, click Finish to quit the Component
Testing Wizard and update the project.
• Expert Mode: This mode allows you to manually drive the generation of the test
harness. This provides more flexibility in sophisticated software architectures.
The Components Under Test settings specify advanced settings for the
component of the test node. These settings depend on the language and
Configuration.
Click Next> to continue.
6. Review the Summary. This page provides a summary of the selected options
and the files that are to be generated by the wizard.
Click Next> to create the test node based on this information.
7. The Test Generation Result page displays progression of the test node creation
process. Click Settings to set the Configuration Settings. You can always modify
the test node Configuration Settings later if necessary, from the Project Explorer.
Note If you apply new settings after the test generation, the wizard reruns
the test generation. This allows you to fine-tune any settings that may cause
the test generation to fail.
Once a test node has been successfully generated, click Finish to quit the Component
Testing Wizard and update the project.
Next, use the Add and Remove buttons to build a list of interface files. The
Interface Files List must contain files that define the communication routines of
your application.
Click Next to continue.
3. Specify the include directories for your application.
Use the Add and Remove buttons to build a list of include directories. These are
all the directories that contain files that are included by your application's
source code. Use the Up and Down buttons to indicate the order in which they
are searched.
Click Next to continue. If you chose to create a new .pts test script, this brings
you straight to step 6.
4. Create a set of Virtual Testers.
See Configuring Virtual Testers for information about this page. If necessary,
click the Configure Settings button to access the Configuration Settings for the
selected Virtual Tester node.
Click Next to continue.
5. Deploy the Virtual Testers onto your target hosts.
See Deploying Virtual Testers for information about this page.
Click Next to continue.
6. Perform a quick review of the options in the Summary Page, and use the <Back
button if necessary to make any changes.
• Test Script File: indicates the name of the .pts test script
• Interface Files: lists the interface files defining the communication routines of
your application.
• Included Directories: lists the directories containing files included by your
application.
• Virtual Testers: lists the Virtual Testers that are to be deployed by the System
Testing node.
7. Click the Finish button to launch the generation of the System Testing node
with the corresponding Virtual Testers.
The wizard creates a test node with the associated test scripts. The test node appears
in the Project Explorer.
If you chose to create a new .pts test script, you can now write your System Testing
test script in the Text Editor and then configure and deploy your Virtual Testers.
Refer to the Rational Test RealTime Reference Manual for information about the
System Testing Language (STL).
Metrics Diagram
As part of the Component Testing wizard, Test RealTime provides static testability
metrics to help you pinpoint the critical components of your application. You can use
these static metrics to prioritize your test efforts.
The graph displays a simple two-axis plot based on the static metrics calculated by
the wizard. The actual metrics on each axis can be changed in the Metrics Diagram
Options dialog box.
Each unit (function, package or class, depending on the current Configuration
language) is represented by a checkbox located at the intersection of the selected
testability metrics values.
Move the mouse pointer over a checkbox to display a tooltip with the names of the
associated units. To test a unit, select the corresponding checkbox.
Test RealTime also provides a Static Metrics Viewer, which is independent from the
Component Testing wizard and can be accessed at any time.
Advanced Options
The Advanced Options dialog box allows you to specify a series of advanced test
generation parameters in the Component Testing wizard. In most cases, you can
leave the default values.
The actual options available in this dialog box depend on the programming language
of the current Configuration:
• C or Ada
• C++
• Java
• Test each template instance: tells the wizard to generate C++ Test Script
Language code for each instance of a template class. If this option is selected,
there must be template class instances in the source file under test. By default,
the Test Generation Wizard generates a single portion of C++ Test Script
Language code for a template class.
• Overwrite previous test scripts: tells the wizard to overwrite any previously
generated .otc or .otd test scripts. If this option is not selected, no changes will
be made to any existing .otc or .otd test scripts.
• Path for included header files: specifies how include file names must be
analyzed.
• Select Relative for relative filenames.
• Select Absolute for absolute filenames.
• Select Copy to use the include path as specified.
• Included files: use the Add and Remove buttons to add and remove files in the
list. The include file list used by the Component Testing wizard is kept in the
generated test node settings.
Command Line Interface
where <node> is the node to be executed and <project> is the .rtp project file.
The <node> hierarchy must be specified from the highest node in the project
(excluding the actual project node) to the target node to be executed, with periods ('.')
separating each item:
<node>{[.<node>]}
Example
The following command opens the project.rtp project in the GUI, and runs the app2
application node, located in group1 of the sub-project subproject1:
studio -r subproject1.group1.app2 project.rtp
Where <compiler command line> is the command that you usually invoke to build
your application.
For example:
attolcc -- cc -I../include -o appli appli.c bibli.c -lm
attolcc -TRACE -- cc -I../include -o appli appli.c bibli.c -lm
Please refer to the Instrumentation Launcher section of the Reference Manual for
information on attolcc options and settings, or type attolcc --help on the command line.
3. After execution of your application, in order to process SCI dump information
(i.e. the runtime analysis results), you need to separate the single output file into
separate, feature-specific, result files. See Splitting the SCI Dump File.
4. Finally, launch the Graphical User Interface to view the test reports. See the
Graphical User Interface command line section in the Rational Test RealTime
Reference Manual.
Where <compiler command line> is the command that you usually invoke to build
your application.
Please refer to the Instrumentation Launcher section of the Reference Manual for
information on the options and settings.
3. After execution, to obtain the final test results, as well as any SCI dump
information, you need to separate the output file into separate result files. See
Splitting the SCI Dump File.
4. Finally, launch the Graphical User Interface to view the test reports. See the
Graphical User Interface command line section in the Reference Manual.
Calculating Metrics
This example demonstrates how to produce static metrics for the source code
contained in the BaseStation_C example project using the metcc command line. The
example is provided in the examples directory of Test RealTime.
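The command itself falls on a page not reproduced here. As a hedged sketch only, metcc follows the same wrapper convention as the attolcc invocation shown earlier, so a call might resemble the following; verify the exact options against the Reference Manual:

```shell
# Sketch only: wrap the usual build command with metcc to produce
# static metrics for the sources it compiles. Options after "--" are
# the ordinary compiler command line, as with attolcc.
metcc -- cc -I../include -o appli appli.c bibli.c -lm
```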
Automated Testing
If you are using Component Testing or System Testing features, the following
additional environment variables must be set:
• ATUDIR, for Component Testing, points to $TESTRTDIR/lib
• ATS_DIR, for System Testing, points to $TESTRTDIR/bin/<platform>/<os>,
where <platform> is the hardware platform and <os> is the current operating
system.
Library Paths
UNIX platforms require the following additional environment variable:
• On Solaris and Linux platforms: LD_LIBRARY_PATH points to
$TESTRTDIR/lib/<platform>/<os>
• On HP-UX platforms: SH_LIB points to $TESTRTDIR/lib/<platform>/<os>
• On AIX platforms: LIB_PATH points to $TESTRTDIR/lib/<platform>/<os>
where <platform> is the hardware platform and <os> is the current operating
system.
Example
The following example shows how to set these variables for Test RealTime with a sh
shell on a SuSE Linux system. The selected Target Deployment Port is clinuxgnu.
TESTRTDIR=/opt/Rational/TestRealTime.v2002R2
ATCDIR=$TESTRTDIR/bin/intel/linux_suse
ATUDIR=$TESTRTDIR/lib
ATS_DIR=$TESTRTDIR/bin/intel/linux_suse
ATLTGT=$TESTRTDIR/targets/clinuxgnu
ATUTGT=$TESTRTDIR/targets/clinuxgnu
LD_LIBRARY_PATH=$TESTRTDIR/lib/intel/linux_suse
PATH=$TESTRTDIR/bin/intel/linux_suse:$PATH
export TESTRTDIR
export ATCDIR
export ATUDIR
export ATS_DIR
export ATLTGT
export ATUTGT
export LD_LIBRARY_PATH
export PATH
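For csh-family shells, the equivalent settings use setenv instead of assignment plus export. A sketch using the same paths as the sh example above:

```shell
# csh/tcsh equivalent of the sh example above (same illustrative paths).
setenv TESTRTDIR /opt/Rational/TestRealTime.v2002R2
setenv ATCDIR $TESTRTDIR/bin/intel/linux_suse
setenv ATUDIR $TESTRTDIR/lib
setenv ATS_DIR $TESTRTDIR/bin/intel/linux_suse
setenv ATLTGT $TESTRTDIR/targets/clinuxgnu
setenv ATUTGT $TESTRTDIR/targets/clinuxgnu
setenv LD_LIBRARY_PATH $TESTRTDIR/lib/intel/linux_suse
setenv PATH ${TESTRTDIR}/bin/intel/linux_suse:${PATH}
```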
3. Open products.h or Products.java in a text editor and add the following define
at the beginning of the file:
   #define ATL_WITHOUT_STUDIO
4. Make any necessary changes by adjusting the corresponding macros in the file.
The products.h file is self-documented, and you can adjust every macro to one
of the values listed. Each macro is set to a default value, so you can keep everything
unchanged if you are unsure how to set them.
Note Take care to set the macros starting with USE_ correctly, because
these macros determine which features of Test RealTime you are using.
Certain combinations are not allowed, such as using several test features
simultaneously.
Ensure that the ATL_TRACES_FILE macro correctly specifies the name of the trace
file that will be produced during execution. If you are using Component
Testing, this value may be overridden by a Test Script Compiler command line
option.
Take note of the directory where this file is stored; you will need it to
compile the generated or instrumented source files.
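For instance, the relevant lines of a products.h prepared for command-line use might look like the following fragment. Only ATL_WITHOUT_STUDIO and ATL_TRACES_FILE are taken from the text above; treat the macro set in your own file as authoritative:

```c
/* Illustrative products.h fragment for command-line use. */

/* Added at the beginning of the file, as described in step 3 above. */
#define ATL_WITHOUT_STUDIO

/* Name of the multiplexed trace file produced during execution.
 * "atlout.spt" is the documented default; a Test Script Compiler
 * command line option may override it for Component Testing. */
#define ATL_TRACES_FILE "atlout.spt"
```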
Requirements
Before compiling an SCI-instrumented source file, you must make sure that:
• A working C, Ada, C++ or Java compiler is installed on your system
• If you use Component Testing for C++, you have prepared a valid options.h file
• If you compile on a target different from the host where the generated file has
been produced, the instrumented file must have been produced using the
-NOPATH option, and the lib sub-directory of the selected Target Deployment
Port directory must be copied onto the target.
There are two alternatives to instrument and compile your source code:
• Using the Instrumentation Launcher in your standard makefile
• Using the Instrumentor and Compiler separately.
Instrumentation Launcher
Requirements
To compile the Target Deployment Port library, make sure that:
• A working C or C++ Test Script Compiler is installed on your system
• You have prepared a valid Products file
Compilation
Depending on the language of your source file:
• For C: compile the TP.c file
• For C++: compile the TP.cpp file
• For Ada: compile the contents of the /lib directory
• For Java: set the CLASSPATH to the TDP /lib directory
Do not forget to add the directory containing the products.h or Products.java file to
the include search path (usually with the -I or /I option, depending on the compiler).
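As a sketch of that compilation step for C, assuming TP.c resides in the TDP /lib sub-directory as the text above suggests (the compiler name and both -I paths are illustrative):

```shell
# Compile the C Target Deployment Port library object (sketch).
# Replace /path/to/products_dir with the directory containing products.h;
# $ATLTGT points to the selected Target Deployment Port directory.
cc -c -I/path/to/products_dir -I"$ATLTGT/lib" "$ATLTGT/lib/TP.c" -o TP.o
```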
Configuration Settings
A wide variety of compilation flags can be used by the command line tools, allowing
you to select sub-components of the application under test. These flags are equivalent
to the Test Configuration Settings dialog box of the graphical user interface and are
covered in the Reference Manual.
Default settings are contained in the following Perl script. You can use this file to
define your own customized configuration settings.
<InstallDir>/lib/scripts/BatchCCDefaults.pl
where <cpu> is the architecture platform of the machine, <os> is the operating
system, and <my_env.pl> is your customized copy of the BatchCCDefaults.pl file.
The TESTRTDIR and ATLTGT environment variables must have been previously set.
Requirements
In order to compile a generated source file, you must check that:
• A working C, C++ or Ada compiler is installed on your system
• If you are using System Testing, you have prepared a valid options.h file
• If you are compiling on a target different from the host where the file was
generated, the generated file must have been produced using the -NOPATH
option (available with every test compiler), and the /lib sub-directory of the
Target Deployment Port directory must be copied onto the target.
Compilation
If you are using Component Testing, System Testing or Component Testing for C++
alone without any of the runtime analysis features, then simply compile the
generated test harness source file with your C or C++ compiler.
If you are compiling on a remote target, do not forget to add to the include search
path the /lib sub-directory that you have copied onto the target.
If you are using SCI instrumentation features (Memory Profiling, Performance
Profiling, Code Coverage, Runtime Tracing and C++ .otc contract check), use the
specific command line options for the Instrumentor in the Reference Manual.
Requirements
In order to compile an instrumented source file, you must check that:
• A working C, C++ or Ada linker is installed on your system
• You have compiled every source file, including any instrumented source files,
of your application under test
• If using Component Testing for C, Ada or C++, or System Testing, you have
compiled the test harness.
• You have compiled the Target Deployment Port library.
Linking
If you are using only runtime analysis features (Runtime Tracing, Code Coverage,
Memory Profiling, Performance Profiling, C++ Contract Check), you just have to add
the Target Deployment Port library object to the object files linked together. If you
are using a test feature, you must also add the tester object to the linked files.
When you use several features together, the executable produces a multiplexed trace
file containing several outputs targeting different features of Test RealTime. By
default, the trace file is named atlout.spt.
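The link step described above might look like the following sketch; the object file names are illustrative, and the tester object applies only when a test feature is used:

```shell
# Link the application objects under test, the generated tester object
# (test features only), and the TDP library object. Names are illustrative.
cc -o appli_test appli.o bibli.o test_driver.o TP.o
```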
Requirements
In most cases, you must split the atlout.spt trace file into several files for use with
each particular Report Generator or the product GUI.
To do this, you must have a working Perl interpreter. You can use the Perl interpreter
provided with the product in the /bin directory.
After the split, depending on the selected runtime analysis features, the following file
types are generated:
• .rio test result files: process with a Report Generator
• .tio Code Coverage report files: view with Code Coverage Viewer
• .tdf dynamic trace files: view with UML/SD Viewer
• .tpf Memory Profiling report files: view with Memory Profiling Viewer
• .tqf Performance Profiling report files: view with Performance Profiling Viewer
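The split is typically performed with the Perl interpreter shipped in /bin. The splitter script name below is an assumption, not taken from the text; check <InstallDir>/lib/scripts for the actual script used by your version:

```shell
# Split the multiplexed atlout.spt trace into per-feature result files.
# "atlsplit.pl" is an assumed script name -- verify it in <InstallDir>/lib/scripts.
perl "$TESTRTDIR/lib/scripts/atlsplit.pl" atlout.spt
# Depending on the selected features, .rio, .tio, .tdf, .tpf and .tqf files appear.
```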
Failure: Compilation fails
Response: Ensure that the selected Target Deployment Port matches your compiler;
there may be several Target Deployment Ports for one OS, each of which targets a
different compiler. If you are unsure, you can check the full name of a Target
Deployment Port by opening any of the .ini files located in the Target Deployment
Port directory.

Failure: Compiler reports that options.h is missing
Response: Ensure that you have correctly prepared the options.h file, and that this
file is located in a directory that is searched by your compiler (this is usually
specified with the -I or /I option on the compiler command line).

Failure: Compiler reports that the TP.h file is missing
Response: If you are compiling on a target different from the host where the
generated file has been produced, double-check the above specific requirements for
compilation on a different target. If the compiler and C/C++ Test Script Compiler
are executed on the same machine, ensure you have not used the -NOPATH option
on the test compiler command line, and that the ATLTGT environment variable was
correctly set while the test compiler was executed.
Failure: Compilation fails
Response: Ensure that the selected Target Deployment Port matches your compiler;
there may be several Target Deployment Ports for one OS, each of which targets a
different compiler. If you are unsure, you can check the full name of a Target
Deployment Port by opening any of the .ini files located in the Target Deployment
Port directory.

Failure: TDP compilation fails
Response: When using the -I- linker option, the TDP fails to compile. This is
because the following line is added to the instrumented file:
#include "<path to target directory>/TP.h"
where TP.h includes other files using the #include syntax, such as:
#include "clock.h"
where clock.h is in the same directory as TP.h. If you use the -I- flag, the compiler
no longer searches the same directory as the current file (TP.h) and therefore cannot
find clock.h. If you cannot remove the -I- flag, you must add a -I flag for the
compiler to find the include files required by the TDP.

Failure: Compiler reports that options.h is missing
Response: Ensure that you have correctly prepared the options.h file, and that this
file is located in a directory that is searched by your compiler (this is usually
specified with the -I or /I option on the compiler command line).

Failure: Compiler reports that the TP.h file is missing
Response: If you are compiling on a target different from the host where the
generated file has been produced, double-check the above specific requirements for
compilation on a different target. If the test compiler and C/C++ compiler are
executed on the same machine, ensure you have not used the -NOPATH option on
the test compiler command line, and that the ATLTGT environment variable was
correctly set while the test script compiler was executed.

Failure: Linkage fails because of undefined references
Response: Ensure you have successfully compiled the Target Deployment Port
library object, and have included it in your linked files. Ensure you have correctly
configured the products.h options file. If you are using a test feature, ensure that you
are linking both the source under test and additional files. You may also want to add
some stubs in your .ptu or .otd test script. Ensure the options set in options.h (if
required) are coherent with the options set in products.h.

Failure: Errors are reported through #error directives
Response: You may have selected a combination of options in products.h which is
incompatible. The error messages help you to locate the inconsistencies.
Working with Other Development Tools
Rational Test RealTime is a versatile tool that is designed to integrate with your
existing development environment.
Any file in the Test RealTime project can be accessed through a set of source control
commands, using ClearCase or any other CMS tool.
Source control can be applied to all files and nodes in the Project Browser or Asset
Browser. When a source control command is applied to a project, group, application,
test or results node, it affects all the files contained in that node.
The following source control commands are included for use with ClearCase:
• Add to Source Control
• Check Out
• Check In
• Undo Check Out
• Compare to Previous Version
• Show History
• Show Properties
Please refer to the documentation provided with ClearCase for more information
about these commands.
Source control commands are fully configurable from the Tools menu.
By default, the product offers defect tracking support for ClearQuest. When using
ClearQuest with Test RealTime you can directly submit a report from a test or
runtime analysis report.
CMS Preferences
The Preferences dialog box allows you to change the settings related to the
integration of the product with Rational ClearCase or other configuration
management software (CMS).
ClearQuest Preferences
The Preferences dialog box allows you to specify the location of the Rational
ClearQuest database.
Please refer to the documentation provided with ClearQuest for more information.
• Database: Use this box to enter the location of the ClearQuest database.
• User Name and Password: Enter the user information provided by your
ClearQuest administrator.
2. Click OK to apply your changes.
To connect the SCI data dump to the Rose RealTime Stop button:
1. Add the following code to the cmdCommand.cc file.
At the beginning of the file:
#include <RTDebugger.h>
#include <RTMemoryUtil.h>
#include <RTObserver.h>
#include <RTTcpSocket.h>
#include <stdio.h>
extern "C" _atl_obstools_dump(int);
Using a Makefile
If you chose not to use the Rose RealTime environment for compilation and linking,
but instead use a makefile to perform these tasks, you can use the Rational Test
RealTime Instrumentation Launcher as described below.
This launches the Test RealTime graphical user interface. The .fdc and .tsf files are
static files generated by the instrumentation. The last four files are created by the
product to store the traces for each component.
Project Link
• An application should not be instrumented with instrumented libraries:
Activate the Add TDP option for the application component. The plug-in
automatically scans application dependencies and adds the TDP.Obj of
instrumented libraries to the User Obj.
Note Instrumentation options must be the same for all libraries.
• An application should not be instrumented with external instrumented
libraries:
The Rose RealTime plug-in does not know where TDP is generated when
external components are used. In this case, create an external library that
contains TP.obj.
Execution
• Multithreading issues:
Check that the Multithreading instrumentation setting is correctly configured.
• Link issues:
When multiple subcomponents are involved in a component (libraries and
binary), check that instrumentation options are the same for all components and
that the TDP.obj is correctly linked.
• Instrumentation issues
Missing Results
• Files are missing when Test RealTime is launched to display report files. Code
Coverage results are missing or display the entire application as uncovered.
The runtime analysis trace dump was interrupted. Dumps can take a long time,
especially when the Memory Profiling feature is in use. See Generating SCI
Dumps for more information.
• Missing files on another component:
The plug-in offers to display all the results for enabled components.
Disable any components that are not under analysis.
• No coverage results on a diagram
Check that the component was correctly generated with the Code Coverage
instrumentation option.
Check that the component is enabled for instrumentation. The Plug-in only
changes state diagrams for enabled components.
Check that the component is not read-only, such as for an inherited diagram.
• Facilitated debug efforts by maintaining the original test and runtime analysis
results in the Test RealTime project.
• Execution support of multiple Test RealTime tests in multiple projects—stored
on multiple machines—from within a single TestManager test suite.
Note If you installed TestManager after Test RealTime, you must manually
install the plug-in. Please refer to the Rational Test RealTime Installation
Guide for further information.
The Select a Test Node or Group Node window now displays a list of all top-level
group and test nodes.
4. Select the test node or group node that you want to associate with the
TestManager test case and click OK. You can specify a single test node, a group
node containing one or more test nodes, or child nodes of group nodes.
The text box in the Automated Implementation section of the test case Properties
window now displays the following path to the test node:
<group node name>.<test node name>
Note Click Options in the Implementation tab after selecting a Test RealTime
test or group node to view the path to the Test RealTime project and the test or
group node names.
Note Test RealTime test nodes and group nodes can also be associated with
TestManager test cases as test implementations. This means that Test
RealTime test and group nodes can also be executed as part of a TestManager
suite.
Configuration
The Rational Test RealTime Setup for Microsoft Visual Studio tool allows you to set
up and activate coverage types and instrumentation options for Test RealTime
runtime analysis features, without leaving Microsoft Visual Studio.
Other Options
• Dump: this specifies the dump mode:
• Select None to dump on exit of the application
• Select Calling to dump on call of the specified function
Glossary
A
ABR: Array Bounds Read
ABW: Array Bounds Write
ABWL: Late Detect Array Bound Write on the Heap
Additional Files: Source files that are required by the test script, but not actually tested.
API: Application Programmer Interface. A reusable library of subroutines or objects that
encapsulates the internals of some other system and provides a well-defined interface.
Typically, it makes it easier to use the services of a general-purpose system, encapsulates
the subject system providing higher integrity, and increases the user's productivity by
providing reusable solutions to common problems.
Application: A software program or system used to solve a specific problem or a class of
similar problems.
Application node: The main building block of your application under analysis. It
contains the source files required to build the application.
Assertion: A predicate expression whose value is either true or false.
Asynchronous: Not occurring at predetermined or regular intervals.
B
Black box testing: A software testing technique whereby the internal workings of the
item being tested are not known by the tester.
Boundary: The set of values that defines an input or output domain.
Boundary condition: An input or state that results in a condition that is on or
immediately adjacent to a boundary value.
Branch: When referring to the Code Coverage feature, a branch denotes a generic unit of
enumeration. For a given branch, you specify the coverage type. Code Coverage
instruments this branch when you compile the source under test.
Branch coverage: Achieved when every path from a control flow graph node has been
executed at least once by a test suite. It improves on statement coverage because each
branch is taken at least once.
Breakpoint: A statement whose execution causes a debugger to halt execution and return
control to the user.
BSR: Beyond Stack Read
BSW: Beyond Stack Write
Bug: An error or defect in software or hardware that causes a program to malfunction.
Build: The executable(s) produced by a build generation process. This process may
involve actual translation of source files and construction of binary files by, for
example, compilers, linkers, and text formatters.
Build generation: The process of selecting and merging specific versions of source and
binary files for translation and linking within a component and among components.
C
Check-in: In configuration management, the release of exclusive control of a
configuration item.
Check-out: In configuration management, the granting of exclusive control of a
configuration item to a single user.
Class: A representation or source code construct used to create objects. Defines public,
protected, and private attributes, methods, messages, and inherited features. An object is
an instance of some class. A class is an abstract, static definition of an object. It defines
and implements instance variables and methods.
Class contract: The set of assertions at method and class scope, inherited assertions, and
exceptions.
Class invariant: An assertion that specifies properties that must be true of every object of
a class.
Clear box testing: A software testing technique whereby explicit knowledge of the
internal workings of the item being tested is used to select the test data. Test RealTime
leverages the power of source code analysis to initiate the creation of white box tests.
Code Coverage: Test RealTime feature whose function is to measure the percentage of
code coverage achieved by your testing efforts, using a variety of powerful data displays
to ensure all portions of your code are exercised and thus verified as properly
implemented.
COM: Com API/Interface Failure
Complexity: A characteristic of software measured by various statistical models.
Component: Any software aggregate that has visibility in a development environment,
for example, a method, a class, an object, a function, a module, an executable, a task, a
utility subsystem, an application subsystem. This includes executable software entities
supplied with an API.
Component Testing: The Test RealTime feature used to automate the white box testing
of individual software components in your system, facilitating early, proactive
debugging and providing a repeatable, well-defined process for runtime analysis.
Computational complexity: The study of the time (number of iterations) and space
(quantity of storage) required by algorithms and classes of algorithms.
Configuration: A Target Deployment Port applied to a Project, plus node-specific
settings.
Configuration management: A technical and administrative approach to manage
changes and control work products.
Container class: A class whose instances are each intended to contain multiple
occurrences of some other object.
COR: Core Dump
Coverage: The percentage of source code that has been exercised during a given
execution of the application.
Cyclomatic complexity: The V(g) or cyclomatic number is a measure of the complexity of
a function which is correlated with difficulty in testing. The standard value is between 1
and 10. A value of 1 means the code has no branching. A function's cyclomatic
complexity should not exceed 10.
D
Debug: To find the error or misconception that led to a program failure uncovered by
testing, and then to design and to implement the program changes that correct the error.
Debugger: A software tool used to perform debugging.
E
Embedded system: A combination of computer hardware and software, and perhaps
additional mechanical or other parts, designed to perform a dedicated function. In some
cases, embedded systems are part of a larger system or product, as is the case of an
anti-lock braking system in a car.
Equivalence class: A set of input values such that if any value is processed correctly
(incorrectly), then it is assumed that all other values will be processed correctly
(incorrectly).
Error: A human action that results in a software fault.
Event: Any kind of stimulus that can be presented to an object: a message from any
client, a response to a message sent to the virtual machine supporting an object, or the
activation of an object by an externally managed interrupt mechanism.
EXC: Continued Exception
Exception: A condition or event that causes suspension of normal program execution.
Typically it results from incorrect or invalid usage of the virtual machine.
Exception handling: The activation of program components to deal with an exception.
Exception handling is typically accomplished by using built-in features and application
code. The exception causes transfer to the exception handler, and the exception handler
returns control to the module that invoked the module that encountered the exception.
EXH: Handled Exception
EXI: Ignored Exception
EXU: Unhandled Exception
F
FFM: Freeing Freed Memory
FIM: Freeing Invalid Memory
FIU: File In Use
FMM: Freeing Mismatched Memory
G
Garbage collector (Java): The process of reclaiming allocated blocks of main memory
(garbage) that are (1) no longer in use or (2) not claimed by any active procedure.
H
HAN: Invalid Handle Use
HIU: Handle In Use
I
ILK: Com Interface Leak
Included Files: Included files are normal source files under test. However, instead of
being compiled separately during the test, they are included and compiled with the
object test driver script.
Inheritance: A mechanism that allows one class (the subclass) to incorporate the
declarations of all or part of another class (the superclass). It is implemented by three
characteristics: extension, overriding, and specialization.
Instrumentation: The action of adding portions of code to an existing source file for
runtime analysis purposes. The product uses Rational's source code insertion technology
for instrumentation.
IPR: Invalid Pointer Read
IPW: Invalid Pointer Write
J
JUnit: JUnit is an open source testing framework for Java. It provides a means of
expressing how the application should work. By expressing this in code, you can use
JUnit test scripts to test your code.
M
MAF: Memory Allocation Failure
MC/DC: Modified Condition/Decision Coverage.
Memory profiling: Test RealTime feature whose function is to measure your code's
reliability as it pertains to memory usage. Applicable to both Application and Test
Nodes, the memory profiling feature detects memory leaks, monitors memory allocation
and deallocation and provides detailed reports to simplify your debugging efforts.
Method (Java, C++): A procedure that is executed when an object receives a message. A
method is always associated with a class.
MIU: Memory In Use
MLK: Memory Leak
Model: A representation intended to explain the behavior of some aspects of [an artifact
or activity]. A model is considered an abstraction of reality.
N
Node: Any item that appears in the Project Explorer. This includes test nodes, application
nodes, source files or test scripts.
NPR: Null Pointer Read
NPW: Null Pointer Write
O
ODS: Output Debug String
P
Package (ADA): Program units that allow the specification of groups of logically related
entities.
Package (Java): A group of types (classes and interfaces).
PAR: Bad System Api Parameter
Performance profiling: Test RealTime feature whose function is to measure your code's
reliability as it pertains to performance. Applicable to both Application and Test nodes,
the performance profiling feature measures each and every function, procedure or
method execution time, presenting the data in a simple-to-read format to simplify your
efforts at code optimization.
PLK: Potential Memory Leak
Polymorphism: This refers to a programming language's ability to process objects
differently depending on their data type or class. More specifically, it is the ability to
redefine methods for derived classes.
Postcondition: An assertion that defines properties that must hold when a method
completes. It is evaluated after a method completes execution and before the message
result is returned to the client.
Precondition: An assertion that defines properties that must hold when a method begins
execution. It defines acceptable values of parameters and variables upon entry to a
module or method.
Predicate expression: An expression that contains a condition (conditions) that evaluates
true or false.
Procedure (C): A procedure is a section of a program that performs a specific task.
Project: The project is your main workspace as shown in the Project Explorer. The project
contains all the files required to build, analyze and test an application.
R
Requirement: A desired feature, property, or behavior of a system.
Runtime Tracing: The Test RealTime feature whose function is to monitor code as it
executes, generating an easy-to-read UML-based sequence diagram of events. Perfect for
developers trying to understand inherited code, this feature also greatly simplifies the
debugging process at the integration level.
S
Scenario: An interaction with a system under test that is recognizable as a single unit of
work from the user's point of view. This step, procedure, or input event may involve any
number of implementation functions.
SCI: Source Code Insertion. Method used to enable the runtime analysis functionality of
Test RealTime. Pre-compiled source code is modified via the insertion of custom
commands that enable the monitoring of executing code. The actual code under test is
untouched. The testing features of Test RealTime do not require SCI.
SCI dump: Data that is dumped from a SCI-instrumented application.
Sequence diagram: A sequence diagram is a UML diagram that provides a view of the
chronological sequence of messages between instances (objects or classifier roles) that
work together in an interaction or interaction instance. A sequence diagram consists of a
group of instances (represented by lifelines) and the messages that they exchange during
the interaction.
SIG: Signal Received
Snapshot: In Memory Profiling for Java, a snapshot is a memory dump performed by the
JVMPI Agent whenever a trigger request is received. The snapshot provides a status of
memory and object usage at a given point in the execution of the Java program.
Subsystem: A subset of the functions or components of a system.
System Testing: The Test RealTime feature dedicated to testing message-based
applications. It helps you solve complex testing issues related to system interaction,
concurrency, and time and fault tolerance by addressing the functional, robustness, load,
performance and regression testing phases from small, single threads or tasks up to very
large, distributed systems.
T
TDP: Target Deployment Port. A versatile, low-overhead technology enabling target-
independent tests and runtime analysis despite limitless target support. Its technology is
constructed to accommodate your compiler, linker, debugger, and target architecture.
Template class: A class that defines the common structure and operations for related
types. The class definition takes a parameter that designates the type.
Test driver: A software component used to invoke a component under test. The driver
typically provides test input, controls and monitors execution, and reports results.
Test harness: A system of test drivers and other tools to support test execution.
Test node: The main building block of your test campaign. It contains one or more test
scripts as well as the source code under test.
Transition: In a state machine, a change of state.
U
UMC: Uninitialized Memory Copy
UML: Unified Modeling Language. A general-purpose notational language for
specifying and visualizing complex software, especially large, object-oriented projects.
W
White box testing: See Clear box testing.
Index
#
#line ... 329

.
.dcl ... 178, 179, 182
.fdc ... 359
.h ... 155, 179
.prj ... 171, 308, 322
.pts ... 175, 206, 210, 213, 214, 218, 221, 245, 246, 249, 327, 339
.ptu ... 126, 131, 136, 139, 144, 160, 161, 162, 163, 173, 174, 300, 324, 337
.rtp ... 334
.ses ... 171
.tsf ... 319, 359
.xpm ... 267
.xrd ... 182, 202, 260, 301

_
_ATCPQ_RESET ... 8
_inout ... 160
_no ... 124, 159

A
Abort ... 32
About
    Code Coverage ... 30
    Code Coverage Viewer ... 62
    Component Testing for C and Ada ... 136
    Component Testing for C++ ... 177
    Component Testing for Java ... 188
    Configuration Settings ... 284
    Environments ... 165
    JUnit ... 189
    Memory Profiling ... 74
    Performance Profiling ... 89
    Runtime Tracing ... 94
    Static Metrics ... 66
    System Testing ... 205
    Target Deployment Technology ... 10
    Tools Menu ... 266
    UML/SD Viewer ... 273
    Virtual Testers ... 206
About ... 5, 62, 262, 266, 269, 273
About/Projects ... 308
ABWL ... 76, 294
ACK ... 222, 233, 234, 236
Acknowledgement ... 222, 234, 236
Action ... 22
Activation ... 13
Activations ... 13
Activities ... 253
Activity Wizard ... 258, 322
Actor ... 13, 23
Actors ... 13, 23
Ada
    Ada 83 ... 40
    Ada 95 ... 32
    Additional Statements ... 42
    Advanced testing ... 126
    Arrays and structures ... 135
    Block code coverage ... 32
    Call code coverage ... 35
    Calling stubs ... 122
    Code Coverage package ... 296
    Condition code coverage ... 35
    Discriminants ... 116
    Exception ... 132
Build ... 29, 74, 89, 94, 96, 101, 205, 217, 254, 255, 269, 284, 285, 287, 300, 306, 310, 313, 314, 316, 317, 319, 320, 322, 324, 327, 353
Build Toolbar ... 258
Build/Troubleshooting ... 320
Byte summary ... 74

C
C
    Advanced testing ... 168
    Arrays and structures ... 176
    C Additional Statements ... 51
    C Function Code Coverage ... 50
    C macros ... 250
    C typedef ... 233
    instrumenting ... 89
    Stubs ... 159
    Unions ... 147
C ... 30, 35, 39, 43, 45, 46, 50, 51, 52, 54, 56, 62, 66, 68, 79, 89, 91, 136, 137, 139, 145, 155, 157, 166, 172, 178, 179, 180, 185, 206, 210, 217, 218, 219, 220, 221, 230, 233, 234, 236, 240, 265, 274, 291, 296, 301, 337
C Test Script Language ... 136, 139, 324
C/Arrays ... 148
C/Character arrays ... 152
C/Expressions ... 143, 144
C/Pointers ... 169, 170
C/Reports ... 174
C/Structured variables ... 145, 146
C/Variables ... 141
C++ ... 79, 182, 265, 274, 301, 337
C++ Test Driver script ... 177, 178, 179, 182, 186, 301
C++ Test Script Language ... 177, 178, 182, 186, 301
CALL
    Call code coverage ... 35, 45, 62, 296
    CALL Instruction ... 220
    Call stack length ... 294, 298
CALL ... 186, 217, 218, 220, 225, 301
CALLBACK ... 236, 249
calloc ... 79
Calls ... 90
Cart ... 19
cart100
    Cart ... 19
cart101
    Cart ... 19
CASE ... 51, 220
Change tracking ... 354
Changing configurations ... 306, 355
Changing targets ... 306
CHANNEL ... 233
char* ... 163
Character arrays ... 111, 152
CHECK ... 182, 186, 301
CHECK EXCEPTION ... 186
Check In ... 75, 78, 267, 353, 356
CHECK METHOD ... 186
Check out ... 267, 353, 356
CHECK PROPERTY ... 186
CHECK STUB ... 186
Circular buffer ... 249, 304
Class definitions ... 178
Classes Under Test ... 179
Classifier Role ... 13, 14
Classifier Roles ... 13, 14
CLEAR_ID ... 222, 234, 236
ClearCase
    ClearCase Toolbar ... 356
ClearCase ... 353
ClearQuest
    ClearQuest Preferences ... 355
ClearQuest ... 354, 355, 365
CLI
    Example ... 340
CLI ... 333, 334, 342
Freeing freed memory ... 76, 294
Freeing unallocated memory ... 76
FTF ... 35
FUM ... 76
Func ... 52, 221
Function
    Function Call ... 45, 64, 233, 234, 236
    Function pointers ... 133
    Function return ... 119
    Function Time ... 90
Function ... 90, 160, 161, 162, 163, 169
FXF ... 35, 46
FXT ... 35, 46

G
g ... 62, 73, 320
Garbage Collection ... 88
GEN ... 119
General Settings ... 285
Generate dumps ... 8
Generate separate test harness ... 131
Generate virtual testers ... 210, 304
GENERIC ... 119
Generic packages ... 126
Generic units ... 119
Get Started ... 253
GetCoord ... 54
Global variable ... 294
Global variables ... 144, 304
Go To ... 265
GOTO ... 42, 51, 54, 60
Graphical user interface ... 253
Group ... 309
GUI ... 66, 188, 253, 307, 333, 334, 353

H
Halstead
    Halstead metrics ... 71
Halstead ... 62, 69, 70, 71
Halstead Graph ... 69
HEADER ... 139, 144, 216, 217, 218, 219, 220, 222, 225, 226, 228, 229, 230, 231, 232, 234, 236
Header File ... 344
Header files ... 137, 179, 180, 214
Heap allocation ... 304
Heap size ... 298
Hit count tool ... 64
Hostname ... 206, 209, 285, 306
HTML
    HTML file ... 62, 84, 91
HTML ... 62, 84, 91

I
IDENTIFIER ... 45
IF ... 43, 51, 52, 57, 60, 168, 218, 222, 225, 230, 236
IGNORE ... 250
Illegal
    Illegal transitions ... 184, 185
Illegal ... 184
Implementations ... 179, 301
Implicit blocks ... 8, 32, 43, 52
Import ... 26, 171, 200
Import makefile ... 311
Importing a JUnit Test Campaign ... 200
Importing Component Testing files ... 171
INCLUDE ... 216
Include Statements ... 216
Indicators ... 71
inetd ... 206
Information Mode ... 30
Init ... 106, 129, 141, 157, 165, 166, 167, 172, 219, 222, 234, 236, 240
Init_expr ... 240
Initial ... 138
Initial values ... 138
Initial/error state ... 184
INITIALIZATION ... 225, 228, 246
Lifeline ... 13, 14, 16, 19
Lifelines ... 13, 16, 19
Limiting coverage types ... 8
Link
    Link files ... 287
Link ... 65, 287, 348
Linking ... 348
Loading files ... 315
Locate ... 265
Log2 n ... 71
LOGICAL ... 52
Login ... 206, 355
LOOP ... 13, 24, 139, 216
Loops ... 8, 13, 24, 32, 35, 39, 42, 43, 51, 52, 129, 139, 219, 298

M
Macro expansion ... 64
macros ... 8, 64, 168, 217, 250, 266, 267, 287, 289, 304, 310
MAF ... 77
Main toolbar ... 258
main() ... 95
Make ... 1, 71, 76, 89, 96, 159, 179, 184, 222, 236, 267, 270, 271, 284, 289, 311, 315, 316, 324, 327, 355
Makefile ... 311, 322
malloc ... 79
Margin ... 108, 142
Markers ... 271
MATCHED ... 236
MATCHING ... 220, 222, 236
Math.h ... 165
Maurice Halstead ... 71
Maximum reached ... 30
MDd ... 320
Memory
    Allocation failure ... 77
    Allocation method ... 304
    Errors ... 74
    In use ... 78, 79, 294
    Leak ... 78
    Memory usage ... 274
    Potential leak ... 79
    Usage bar ... 274
Memory ... 76
Memory Profiling
    Java ... 85
    JVMPI ... 88
    Memory Profiling Misc ... 294
    Memory Profiling preferences ... 85
    Memory Profiling Results ... 74
    Memory Profiling settings ... 294
    Memory Profiling Viewer ... 84
    Memory Profiling Viewer Preferences ... 85
    Memory Profiling warning messages ... 78
Memory Profiling ... 6, 8, 29, 74, 75, 76, 77, 78, 79, 84, 85, 255, 282, 284, 294, 319, 322, 355
Memory Profiling for Java ... 85
Message
    Message dump ... 97
    Message-oriented middleware ... 205
Message ... 13, 17, 233, 234, 236
MET file ... 67
Method
    Coverage ... 54
Method ... 60
Metrics
    Adding ... 272
    Graph ... 329
    Viewer ... 67, 69, 70, 73
Metrics ... 62, 66, 67, 68, 71, 73, 182, 269, 270, 272
Microsoft
    Microsoft Visual Studio ... 320, 322
Microsoft ... 365, 366
Min.Max ... 240
WHITE ... 35, 43, 46, 52
White icon ... 315
Wildcard ... 209, 279, 281
WIN32,DEBUG ... 287
Window ... 253, 256
WITH ... 119
Wizard ... 253, 307, 322, 324, 327
Workspace ... 308
WRAP ... 182
WRITE ... 129
WTIME ... 220, 222, 231, 232, 236

X
XRD file ... 202

Z
Zoom ... 258, 261
Zoom Level ... 261