PIPE2 Report


PETRI NETS GROUP PROJECT

FINAL REPORT

Edwin Chung
Tim Kimber
Ben Kirby
Thomas Master
Matthew Worthington
Supervisor: Dr W. Knottenbelt
Submission Date: 19th March 2007
Contents

I INTRODUCTION AND PLANNING 5


1 Introduction 6
1.1 Petri nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 PIPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2 Developing the Specification, Planning and Project Management 8
2.1 Working with an Active Open Source Application . . . . . . . . . 8
2.2 Familiarisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Arriving at the Specification . . . . . . . . . . . . . . . . . . . . . 9
2.4 Managing Project Tasks and Resources . . . . . . . . . . . . . . 10
2.5 Development Environment . . . . . . . . . . . . . . . . . . . . . . 11
2.6 Development Approach . . . . . . . . . . . . . . . . . . . . . . . 11

II DEVELOPMENT 12
3 Refactoring and Code Clean Up 13
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Bug Fixing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3 Integration of External Contributions . . . . . . . . . . . . . . . 14
3.4 Code Refactoring . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.4.1 XML and DataLayer Division . . . . . . . . . . . . . . . . 15
3.4.2 Matrix Instantiation . . . . . . . . . . . . . . . . . . . . . 16
3.4.3 Reflection and Class Loading . . . . . . . . . . . . . . . . 17
3.4.4 Data Mismanagement . . . . . . . . . . . . . . . . . . . . 18
3.4.5 Evaluation of refactoring . . . . . . . . . . . . . . . . . . 19

4 Zoom Functionality 21
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2.1 Existing PIPE code . . . . . . . . . . . . . . . . . . . . . 21
4.2.2 Swing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2.3 Successful Approaches to Zoom Implementation . . . . . 22
4.3 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3.1 Design Decisions . . . . . . . . . . . . . . . . . . . . . . . 24
4.3.2 Geometrical Approach . . . . . . . . . . . . . . . . . . . . 24
4.3.3 Code Architecture . . . . . . . . . . . . . . . . . . . . . . 25
4.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4.1 PlaceTransitionObjects . . . . . . . . . . . . . . . . . . . 27
4.4.2 Arcs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.4.3 Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4.4.4 AnnotationNotes . . . . . . . . . . . . . . . . . . . . . . . 28
4.4.5 Updating the JScrollPane . . . . . . . . . . . . . . . . . . 28
4.4.6 Click and Drag . . . . . . . . . . . . . . . . . . . . . . . . 29
4.4.7 User Interface to Zoom Functionality . . . . . . . . . . . . 30
4.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.5.1 Differences from the Specification . . . . . . . . . . . . . . 33
4.5.2 Key Metrics - Usability and Reliability . . . . . . . . . . . 34

5 Reachability Graph Functionality 34


5.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.1.1 Background on Reachability . . . . . . . . . . . . . . . . . 37
5.1.2 Background on generating state space[12] . . . . . . . . . 38
5.2 Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.2.1 Technologies . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.3 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.3.1 Graphviz, Grappa and C . . . . . . . . . . . . . . . . . . 41
5.3.2 Code Architecture . . . . . . . . . . . . . . . . . . . . . . 42
5.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.4.1 Steady State Algorithm . . . . . . . . . . . . . . . . . . . 42
5.4.2 Random Access Files and Input Streams . . . . . . . . . . 46
5.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.5.1 Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.5.2 Dot File Formatting . . . . . . . . . . . . . . . . . . . . . 48
5.5.3 Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.5.4 Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.6 Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

6 Testing 51
6.1 Library Integration . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.2 Implementation of Testing Framework . . . . . . . . . . . . . . . 52
6.3 Petri-Net Specific Testing . . . . . . . . . . . . . . . . . . . 53
6.4 Evaluation: Test Coverage . . . . . . . . . . . . . . . . . . . . . . 53

III EVALUATION AND CONCLUSIONS 55


7 Evaluation and learnings 56

8 Effectiveness of Scheduling and Group Organisation 57

9 Future Directions 57
9.1 Hierarchical Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
9.2 Copy and Paste . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
9.3 Undo and Redo . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Acknowledgements: Thanks to Dave Patterson for offering insight and advice; to John Mocenigo at AT&T for allowing use of AT&T's hosted GraphViz service during project development; and to Dr William Knottenbelt for his support as project supervisor.

Part I
INTRODUCTION AND
PLANNING

1 Introduction
The aim of this project was to enhance an existing piece of software written for a 2002/3 MSc Conversion group project[3], and subsequently extended by MSc students in 2003/4[1] and 2004/5[2]. The application, PIPE (Platform Independent Petri net Editor), is a Java-based editing and analysis system for Petri nets. It has been improved by eradicating bugs, refactoring code to make it more efficient and understandable, and by adding two major pieces of functionality.

1.1 Petri nets


Petri nets are a formalism for modelling concurrent systems, first defined by Carl
Adam Petri in 1962. They allow correctness of concurrent systems to be verified
using well­defined, provable mathematical techniques and allow the behaviour
of a system to be expressed both graphically and algebraically. They support
the natural expression of such concepts as synchronisation and communication
between processes. The ability they provide to visualise the structure of a
system promotes greater intuitive understanding of what is being modelled.
Petri nets have since been extended and augmented with additional behaviours,
most notably with the addition of time data to produce Generalised Stochastic
Petri nets (GSPNs).
Petri nets are used in areas including software design, engineering and data
analysis.
The building blocks of a Petri net are places, transitions, arcs and tokens.
Places model conditions or objects. Places may contain tokens, which represent
the value of the condition or object. Transitions model activities, which change
the value of conditions or objects. For example, firing a transition may destroy a
token at one place and create a token at another place. The interconnectedness
of places and transitions is represented using arcs. Each arc has one and only
one source, and one and only one target. If the source is a place, the target
must be a transition, and vice versa.
A Petri net may or may not meet certain key criteria, which can provide
crucial information about the correctness of the system it is modelling. The
reachability of a Petri net describes the possible states that can exist. This
could tell us, for example, whether the doors in a lift might open while the lift
is moving. The liveness of a Net indicates whether transitions between different states are possible. A Net may, for example, reach a situation of deadlock, where no transitions are enabled, or livelock, where only a subset of transitions are enabled and the system is stuck in a cycle. Finally, a Petri net that is bounded is known to stay within certain quantifiable limits; for example, in a 3-bounded Net we know that no Place will contain more than 3 tokens. Unbounded systems are unpredictable and seldom if ever desirable, so Petri nets provide a powerful tool to mathematically verify that a system is bounded.
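The building blocks and the firing rule described above can be sketched as a small Java example. This is an illustrative toy, not PIPE's actual data model: the class and method names here are invented for this report, and arcs are encoded simply as each transition's lists of input and output places.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy Petri net: places hold token counts; each transition records the
// places its input arcs come from and its output arcs go to.
public class MiniPetriNet {
    final Map<String, Integer> tokens = new HashMap<>();       // place -> token count
    final Map<String, List<String>> inputs = new HashMap<>();  // transition -> input places
    final Map<String, List<String>> outputs = new HashMap<>(); // transition -> output places

    void addPlace(String p, int initialTokens) { tokens.put(p, initialTokens); }

    void addTransition(String t, List<String> in, List<String> out) {
        inputs.put(t, in);
        outputs.put(t, out);
    }

    // A transition is enabled when every input place holds at least one token.
    boolean isEnabled(String t) {
        for (String p : inputs.get(t)) {
            if (tokens.get(p) < 1) return false;
        }
        return true;
    }

    // Firing destroys one token at each input place and creates one at each
    // output place, as in the description above.
    void fire(String t) {
        if (!isEnabled(t)) throw new IllegalStateException(t + " is not enabled");
        for (String p : inputs.get(t)) tokens.merge(p, -1, Integer::sum);
        for (String p : outputs.get(t)) tokens.merge(p, 1, Integer::sum);
    }

    public static void main(String[] args) {
        MiniPetriNet net = new MiniPetriNet();
        net.addPlace("P1", 1);
        net.addPlace("P2", 0);
        net.addTransition("T1", Arrays.asList("P1"), Arrays.asList("P2"));
        net.fire("T1");                           // moves the token from P1 to P2
        System.out.println(net.tokens.get("P2")); // 1
    }
}
```

A net is 1-bounded (safe) in this encoding if no call to fire can push any place's count above one, which mirrors the boundedness property defined above.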

Figure 1: Dijkstra's Dining Philosophers problem

1.2 PIPE
The figure above shows a Petri net representing Dijkstra's well-known Dining Philosophers problem[4]. Places are shown by circles and transitions by rectangles, connected by directional arcs. The odd-numbered central places represent the forks: shared resources of which each philosopher requires two in order to eat. In its initial state, all even-numbered transitions are enabled and equally likely to fire. If, for example, T2 were to fire, the tokens at P7, P8 and P9 would be destroyed, and a token at P10 would be generated by transition T2. This would have the effect of removing two resources (forks) from the central pool, thus limiting access by the other philosophers. If T3 were then to fire, the token at P10 would be destroyed and tokens at P7, P8 and P9 would be generated, returning the system to its initial state. This Petri net is bounded and safe (i.e. each place can contain no more than one token); however, it is possible to reach a state of deadlock (indeed, this is the issue that the Dining Philosophers problem serves to illustrate).

2 Developing the Specification, Planning and Project Management
2.1 Working with an Active Open Source Application
PIPE has been downloaded over 7200 times and is in use around the world. It is an active open source project on SourceForge[5], with a number of contributors, regularly reported bugs and a features wishlist. This implies a certain level of responsibility: much of our specification was based upon user feedback, and a substantial amount of time was spent cleaning up code (debugging and refactoring), not the most glamorous work but something we understood to be important to the longer term viability of the project. Working on a real-world project also meant that we had to ensure that any changes to the application's functionality didn't impact existing code or introduce subtle bugs; therefore the first thing we did was to introduce a suite of automated unit and functional tests which could be run at any time to ensure that key parts of the application were working as they should.

2.2 Familiarisation
Most major updates to the application have been made on an annual basis by groups of students, who then left Imperial College. This meant that there were some issues with continuity of knowledge: new groups come to work on PIPE with no prior knowledge of the application, yet the previous year's developers, who had come to understand PIPE's design and inner workings quite intimately, had by this point moved on. Even detailed documentation left in the annual reports and in source code comments provides no substitute for face-to-face interaction. Therefore, before any actual development work could begin, a substantial amount of time was spent becoming familiar with the application. This involved studying code, reading what documentation was available, communicating with contributors to the PIPE project beyond Imperial College, generating UML diagrams for the software, and then attempting to fix bugs (see section 3.2).

2.3 Arriving at the Specification


The final specification was based on a number of inputs:

• The group's own experiences investigating the application and learning Petri net theory led us to suggest a number of improvements. At this stage it was difficult to judge how hard these would be to incorporate, and some were clearly beyond the scope of this project. One idea that remained, however, and was deemed achievable, was to incorporate zoom functionality, to make it possible to see the structure of larger nets in full, without having to scroll the screen.
• Dr Knottenbelt has overseen PIPE's development for several years and is the main point of contact for many PIPE users and contributors; his opinions were therefore highly valuable in deciding which direction to take. One piece of functionality he suggested was the generation of reachability graphs, which would show the possible states a Petri net could reach and the transitions between them.
• The team had been in ongoing contact with David Patterson, the most active of PIPE's external contributors. David uses PIPE regularly and had many good ideas about how it could be improved. Much of the refactoring work was proposed by him, and he had also suggested that zoom functionality would considerably improve PIPE's usability when working with larger Nets.

The key areas identified for this year's work were therefore: refactoring and code clean-up; incorporation of a module that would allow users to generate reachability graphs; and adding zoom functionality.
Additionally, to address the issues of code complexity and maintainability, we proposed to introduce an automated testing framework to the project (at this stage no tests existed within PIPE2). A well written suite of automated tests would allow developers to refactor and add to the code with confidence that new bugs were being kept to a minimum. Over time, this should allow faster, better designed improvements to the application. Another important benefit of automated testing and a continuously integrated build is that testing and bug fixing is spread over the life of the project, rather than being concentrated into the period just before release. Testing is described fully in section 6.

Finally, we considered the possibility of improving options functionality. Many options that would be useful to specify were defined all over the code and could not be changed dynamically. A single properties file holding all of the options (such as the colour of active transitions and the size of annotation text) would improve PIPE's usability.

2.4 Managing Project Tasks and Resources


These areas were investigated in the early stages of the project in order to
determine their feasibility and to produce estimates for the amount of work
required. Each team member investigated a different part of the specification
in order to establish what steps he thought would need to be taken, how long
would be required and what possible complications might arise. The group then
discussed which team members would be suited to different tasks. The team
of five students expected to contribute around 12-14 hours of work per week
each. With this information, the initial schedule was drawn up (using Microsoft
Project 2003), shown below as a Gantt chart.

The chart features key milestones of completing reports and making releases of the application. The Gantt chart was updated at intervals as the project progressed, making it possible to see whether tasks were being completed on time. The effectiveness of group organisation and project management is discussed in the Evaluation section.

2.5 Development Environment


It was agreed that a consistent coding environment across the group would be
best to enable easier transfer of code and project specifications. PIPE was
originally written using Borland’s JBuilder, which is proprietary commercial
software unavailable to the group. Of the available free IDEs, the open-source
Eclipse was chosen because:

• It is free
• It is well supported under Windows, Linux and Mac OS
• It has a highly mature Java environment
• It has native support for CVS
• It was easily able to import the existing code tree and compile it

Concurrent Versions System (CVS) support was especially important because PIPE is already published on SourceForge, a popular open source community website, with a CVS repository and some useful issue tracking facilities. CVS enables several people to work on the code at the same time, and then to commit their changes to the repository, which determines if there are any potential conflicts. Eclipse extends this by allowing line-by-line resolution of conflicting source changes and easy access to all CVS features such as previous revision comparisons and commit annotations. This proved to be a valuable resource, also allowing group members to work on the code from their home computers as well as University machines.

2.6 Development Approach


No particular established development methodology was strictly adhered to. However, the approach borrowed to some extent from agile methods. Updates were committed as regularly as possible, with three separate fully-functional releases (versions 2.1, 2.2 and 3.0) made during the project. Regular face-to-face communication took precedence over written documents (though submitted reports 1 and 2 meant that we had a detailed initial specification and an indication of progress midway through development): the group had a scheduled weekly meeting, but three or four additional ad hoc meetings typically took place every week, involving either the whole group or teams of two or three who were working on specific areas.

Part II
DEVELOPMENT

3 Refactoring and Code Clean Up
3.1 Introduction
The decision to develop PIPE within the framework of the open source community has yielded substantial benefits to the application as a whole and to the user base in general. The direction and focus of the application's software development has often been driven by user demand for specific functionality. This has enabled the software to evolve into a highly usable and diverse toolset for creating and analysing Petri nets. It successfully addresses the requirements and obligations of real world users who stress the system on a daily basis.

Nevertheless, as a result of the demand-oriented approach to development and the changing nature of the core group of developers over the last five years (a new group each year), the code itself, rather than its functionality, has become a fundamental concern of the developers. In order to sustain long term projects, one of the key issues is building a sturdy foundation on which future generations of coders can naturally develop viable extensions. Unfortunately, this has not been a priority, and over the last couple of years a certain amount of inefficiency has slowly crept into the code base. As a natural consequence of the project's rapid expansion, some of these inefficiencies are caused not by individual modules but rather by the complexity introduced as a result of multifarious interaction between classes. In addition, the lack of a clear development strategy from the outset has resulted in poor management of data and unnecessary repetition of common processes. Understandably, as successive years of developers implement increasing levels of functionality, the logical separation that usually exists between classes has become indistinct and has negatively impacted the application's efficiency.

Mostly, this part of the work involved the refactoring described in section 3.4. However, before this could take place, there were some outstanding bugs which needed attention, and some code which had been modified by a developer working in isolation (outside the CVS system) needed to be integrated.

3.2 Bug Fixing


A number of bugs had been identified at the outset of the project. While an initial period of bug fixing had been scheduled, in practice bug fixing was more opportunistic. Dave Patterson, an active user and contributor to the PIPE project, provided fixes for several bugs (immediate transitions not being given priority over timed transitions; simultaneous firing of tokens via the same channel; a random null pointer exception generated while saving after running an analysis module).

• Tim fixed one of the bugs listed in the original report (an infinite while (!(isEmptySet(chj))) loop in the invariant analysis).
• During the course of his refactoring work, Edwin discovered and fixed four bugs, including one posted by Tim (it was causing animation to freeze halfway while stepping back through a long firing sequence). Edwin fixed it by modifying the code in several places to resolve the discrepancies between the count int variable and the firedTransitions ArrayList, which are used to play back the firing sequence. There was a bug where users were not prompted to save when they added or removed tokens from the net. Edwin fixed this by adding code to the setCurrentMarking() method in the Place class so that it sets netChanged to true whenever the number of tokens is altered. There was a problem where users could still step forward through a previous firing sequence after it had been reset (e.g. by toggling animation mode off and on). This was resolved by changing the code in the GuiFrame class so that it resets the firedTransitions array list and the counter in the Animator class. Finally, the animation didn't play back as expected when the firing sequence had been altered, i.e. by firing a transition in the middle of the firing sequence. Edwin found that the problem was due to the application trying to simulate the original firing sequence, which was still in memory. He changed the code in the Animator class so that whenever the firing sequence is altered, all successive stored transitions in the previous sequence are pruned. He also made corresponding changes in the AnimationHistory class so that the correct firing sequence is displayed.
• One of the unfortunate truths for any development team working on revamping a particular code base is that some errors will be inherited from the original code. One such instance was a particularly obscure problem relating to the saving of Petri nets. At some point during the construction of a net, the core representation of the net was becoming corrupted, which naturally led to a plethora of unexpected behaviour and errors. The first signs of this problem appeared when nets that had been worked on for an extended time caused a null pointer exception when being saved. With focussed testing the group realised that this was only the tip of the iceberg: should the exception be generated, a variety of unexpected behaviour was observed throughout the entire application. This was eventually tracked back to a problem with the manner in which objects were being created according to mouse presses and releases. Changes made by Matt to the manner in which the object handler dealt with events within the container resolved many of the related issues. Finding a solution was far simpler than finding the cause, highlighting the need for well-designed exception handling.
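The firing-sequence pruning fix described above might look something like the following sketch. The names count and firedTransitions follow the report; the class itself is a simplification invented for illustration, not PIPE's actual Animator.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified playback state: firedTransitions stores the recorded sequence,
// and count is the current position within it (transitions [0, count) have
// been replayed so far).
public class AnimatorSketch {
    final List<String> firedTransitions = new ArrayList<>();
    int count = 0;

    // Firing a new transition after stepping back abandons the old tail:
    // every stored transition beyond the current position is pruned before
    // the new one is recorded, so playback matches what the user actually did.
    void fire(String transition) {
        while (firedTransitions.size() > count) {
            firedTransitions.remove(firedTransitions.size() - 1);
        }
        firedTransitions.add(transition);
        count++;
    }

    void stepBack() { if (count > 0) count--; }

    // Resetting (e.g. toggling animation mode off and on) clears everything,
    // so the user cannot step forward through a stale sequence.
    void reset() { firedTransitions.clear(); count = 0; }

    public static void main(String[] args) {
        AnimatorSketch anim = new AnimatorSketch();
        anim.fire("T1");
        anim.fire("T2");
        anim.stepBack();                            // position now before T2
        anim.fire("T3");                            // T2 is pruned first
        System.out.println(anim.firedTransitions);  // [T1, T3]
    }
}
```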

3.3 Integration of External Contributions


As shown in the Gantt charts, from 25th January to 3rd February Edwin worked on integrating external contributions into the project. He managed to incorporate all the bug fixes from all external contributors in the recent release of PIPE v2.1. However, after evaluating the code by external contributor Pere Bonet, the group decided that it was in its best interest to leave the integration of new functionality, including cut and paste and undo and redo, to the end of the project, if time permitted. This was mainly due to time constraints and the possibility that Pere's code would introduce conflicting changes to the work we had set out to do. We have created a separate branch on the CVS server to facilitate the merging of the two versions in the future.

3.4 Code Refactoring


Code refactoring had been identified as a major part of the initial specification. The first report described six key areas where there was obscure or inefficient code. Most of the team was scheduled to spend a substantial part of this first period working on refactoring. All listed issues were addressed successfully and updates have been posted to CVS, with the exception of introducing Generics: this area was examined and confirmed as a possibility, but following advice from Dr Knottenbelt the team decided for the time being to maintain backwards compatibility with Java 1.4. Each area of refactoring is described again below together with the details of how it has been addressed and an evaluation of the outcome.

3.4.1 XML and DataLayer Division


3.4.1.1 Problem The DataLayer class had many methods to manage the
properties of the data layer (what transitions, places and arcs are part of the
data model) as well as a lot of code to import a data model from, or export
it to, an XML file. We planned to separate out all of the XML code from the
DataLayer class and create classes to manage the importing and exporting of
XML files. A DataLayerFactory class would be written to process an existing
XML file as input and create a DataLayer. A DataLayerWriter class would
output the description of the model to an XML file (originally done using the
savePNML method in the DataLayer class).

3.4.1.2 Solution In order to solve the problem, Ben spent time investigating the existing code and identifying the methods that dealt with saving and loading. He then determined how best these could be abstracted to different classes. In order to better understand the code and possible solutions, he spent time investigating the handling of exceptions in Java, and also the use of certain design patterns suggested by team mates, such as the 'Factory' and 'Builder' patterns.

The saving functionality was easier to abstract, as the DataLayer object does not have to be manipulated. The method savePNML, and the methods createPlaceElement, createAnnotationElement, createArcElement, createArcPoint and createTransitionElement, which are used to create the elements in the XML Document object, were all abstracted to a DataLayerWriter class, along with the necessary imports. savePNML now takes in the DataLayer object to save, and the other methods examine this when building the Document. Ben then changed the call to savePNML in the saveNet method of the GuiFrame class, so that a DataLayerWriter object is created and the method called from this.

The loading functionality was more difficult to abstract, as the code for loadPNML involved parsing and transforming an XML file, cycling through its elements, determining their nature (Place, Transition, etc.) and then adding the necessary attribute to the DataLayer object. This last piece of functionality was difficult, and seemingly inappropriate, to perform in a class other than DataLayer itself, as the data members and methods needed are private. A first attempt was to abstract all the necessary methods from DataLayer; however, these methods are needed in order to add to a Petri net model when using the application, and so would have to be duplicated. A suggestion from a team mate was to implement an interface containing these methods. However, upon discussion it was decided that this would increase the coupling between classes, and would also leave the problem of a class trying to modify the private data members of another, which seems especially inappropriate when the object in question is actually being created.

Therefore Ben decided to split the functionality of loadPNML. The actions concerning XML were moved to a separate class, PNMLTransformer, which includes the function transformPNML. This takes the filename of the XML file and parses and transforms the file, creating a Document object, which is then passed back to the method's caller in the DataLayer constructor. A separate method in DataLayer, createFromPNML, was then written, which takes this Document and uses the DataLayer class's original methods for modifying a net model to create the one described in the XML file. All the getDOM methods were also abstracted to the PNMLTransformer class, as these involve the use of XML and returning a document. Finally, Ben modified the createNewTab method in GuiFrame so that instead of calling loadPNML from the present DataLayer object, a new instance of PNMLTransformer is created and the methods called appropriately.

The second abstracted class was called PNMLTransformer rather than DataLayerFactory as initially anticipated, because Ben felt the new class didn't fit the standard 'Factory' pattern, in that it didn't involve a condition and then a call to build an object, but rather was just a class concerned with the parsing and transformation of XML files.
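The shape of the split described above can be sketched as follows. This is a heavily simplified illustration, not the real PIPE classes (which carry transitions, arcs and annotations as well): only the division of responsibility is shown, with XML handling in DataLayerWriter and PNMLTransformer and the model kept in DataLayer.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Model class: holds the net and knows nothing about XML.
class DataLayer {
    final List<String> placeIds = new ArrayList<>();

    void addPlace(String id) { placeIds.add(id); }

    // Build a model from an already-parsed PNML Document, using the class's
    // ordinary model-building methods (here just addPlace).
    static DataLayer createFromPNML(Document doc) {
        DataLayer dl = new DataLayer();
        NodeList places = doc.getElementsByTagName("place");
        for (int i = 0; i < places.getLength(); i++) {
            dl.addPlace(((Element) places.item(i)).getAttribute("id"));
        }
        return dl;
    }
}

// Saving: savePNML now takes the DataLayer to serialise instead of living inside it.
class DataLayerWriter {
    Document savePNML(DataLayer model) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("pnml");
            doc.appendChild(root);
            for (String id : model.placeIds) {
                Element place = doc.createElement("place");
                place.setAttribute("id", id);
                root.appendChild(place);
            }
            return doc;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

// Loading: parsing is isolated here; DataLayer.createFromPNML consumes the result.
class PNMLTransformer {
    Document transformPNML(String filename) {
        try {
            return DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File(filename));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

public class PnmlSplitDemo {
    public static void main(String[] args) {
        DataLayer model = new DataLayer();
        model.addPlace("P0");
        Document doc = new DataLayerWriter().savePNML(model);
        DataLayer back = DataLayer.createFromPNML(doc);
        System.out.println(back.placeIds); // [P0]  (the place survives a round trip)
    }
}
```

The round trip in main shows why the split works: neither XML class touches DataLayer's internals beyond its public model-building methods, which is exactly the coupling concern the interface idea would have violated.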

3.4.1.3 Evaluation The final outcome is that the DataLayer class no longer deals with XML, as outlined in the refactoring brief. All imports of XML-related packages have been removed. Comments have been added and method and class documentation updated. All exceptions are still dealt with. Much of the original code remains intact, and the application still passes the tests of the functional suite.

3.4.2 Matrix Instantiation


3.4.2.1 Problem The DataLayer class had many methods that were used in the analysis modules, typically getting matrices or Petri net objects (these include Place and Transition objects, arcs and annotations), or the status of transitions for a Petri net, i.e. which transitions are enabled for firing. However, only a limited number of events changed the model enough to impact the analysis. For example, when a model was created from an XML file, the incidence matrices could be created once and reused until the user changed the number of tokens at a place, or changed the number or kinds of arcs, places or transitions. Aside from iterative simulation, all of the analysis modules ran on data that was static as far as the structure goes, yet the analysis routines were all written to keep regenerating the incidence matrices repeatedly. We investigated ways of reducing unnecessary updates to the data model used in the analysis modules, perhaps by developing methods which could monitor when an impactful change has been made to a Petri net and flag this to the analysis module, which would only then update the model (or the parts of the model) it uses for the analysis.

3.4.2.2 Solution Edwin addressed this issue, refactoring the code to make it more speed- and memory-efficient. He added a new matrixChanged Boolean attribute to the matrix object and modified the code so that matrices are only recreated when they have been altered.
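The lazy-recreation idea can be sketched as a dirty flag. The attribute name matrixChanged comes from the report; the surrounding class is an illustrative assumption, not PIPE's code, and buildCount exists only to make the saving observable.

```java
// Incidence matrix with a dirty flag: the matrix is rebuilt only when the
// net has actually changed since the last request.
public class IncidenceMatrix {
    private int[][] matrix;
    private boolean matrixChanged = true; // dirty until first build and after any edit
    int buildCount = 0;                   // exposed only to demonstrate the saving

    private final int places, transitions;

    IncidenceMatrix(int places, int transitions) {
        this.places = places;
        this.transitions = transitions;
    }

    // Called whenever the net's structure or marking is altered
    // (tokens added or removed, arcs/places/transitions changed).
    void markChanged() { matrixChanged = true; }

    // Analysis modules call this repeatedly; the matrix is only
    // regenerated when something has changed since the last call.
    int[][] getMatrix() {
        if (matrixChanged) {
            matrix = createMatrix();
            matrixChanged = false;
        }
        return matrix;
    }

    private int[][] createMatrix() {
        buildCount++;
        return new int[places][transitions]; // real code derives entries from the arcs
    }

    public static void main(String[] args) {
        IncidenceMatrix m = new IncidenceMatrix(3, 2);
        m.getMatrix();
        m.getMatrix();                    // cached: no rebuild
        System.out.println(m.buildCount); // 1
        m.markChanged();                  // e.g. the user added an arc
        m.getMatrix();
        System.out.println(m.buildCount); // 2
    }
}
```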

3.4.2.3 Evaluation Comparing the modified code with the original, it can be shown that the number of calls to the createMatrix method is reduced by half when the GSPN, invariant and incidence and marking analyses are performed on the Petri net.

3.4.3 Reflection and Class Loading


The Module Manager is responsible for core elements of the software's functionality. It is accountable for the integration of many of the major developments and expansions that have been introduced over the last couple of years. Inherently, it acts as a form of plug-in manager for the PIPE application, offering the necessary structure to permit foreign modules to be incorporated into the program. As a result, it was one of the key areas of the code scrutinised during the refactoring phase. The first issue Matt addressed was the manner in which reflection was being used to incorporate foreign modules with unknown methods into the application. The code initially used reflection to discover all methods of a foreign module and build them into a tree structure for future use. Using Java Swing and reflection, all methods were added to a JTree as nodes (DefaultMutableTreeNodes), creating a means of controlling and displaying a set of hierarchical data as an outline. The structure made each new module rapidly and readily accessible via the tree view on the left-hand side of the GUI. Since only the 'run' method of these modules is required to populate the drop-down menu within the main PIPE GUI, and to enable the running of that module, we decided to remove all other reflected methods and simply process each module by loading a single method. The work on the module loaders naturally led to a closer examination of the method used to load modules as a whole into the PIPE application. Although most of the code was in place for stand-alone dynamic runtime loading of modules, information relating to module file paths was hard-coded into property files (the configuration files). The properties were then streamed out of the files, providing both the correct class name and the file path to the actual ".class" files. Each class was then loaded via the ModuleLoader and the ExtFileManager using a custom class loader derived from URLClassLoader. The methods of these classes were then used to populate the "available module" tree at runtime.
This seemed to contradict the idea of pluggable modules, which should simply be dropped into the module folder upon completion and be immediately integrated into the project. It also introduced complications should modules be moved or renamed: any change would involve updating the individual property file for each module. By modifying the getModuleClasses and getModuleDir methods, Matt made this loading truly dynamic, so that class names and paths are no longer hard-coded. By recursively searching the modules directory and its subdirectories and filtering the results through an extension filter, it was possible to load all the classes from that directory regardless of its contents, picking up only the modules that correctly implement the module interface. The ModuleLoader class was changed to reflect this approach, and a getClassName method deals with generating class names. Modules can now be dropped into the module folder and incorporated into the build without the need to modify configuration files.
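The revised scheme can be sketched as follows. The helper names (findModuleClassFiles, toClassName) are ours, not PIPE's actual getModuleClasses/getModuleDir code:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the pluggable loading scheme: recursively scan the
// modules directory, filter by extension, and derive fully qualified class
// names from the file paths.
public class ModuleScanner {

    // Recursively collect all .class files below the given directory.
    public static List<File> findModuleClassFiles(File dir) {
        List<File> found = new ArrayList<>();
        File[] entries = dir.listFiles();
        if (entries == null) return found;
        for (File f : entries) {
            if (f.isDirectory()) {
                found.addAll(findModuleClassFiles(f));    // recurse into subdirectories
            } else if (f.getName().endsWith(".class")) {  // extension filter
                found.add(f);
            }
        }
        return found;
    }

    // Derive a fully qualified class name from a file path relative to the root,
    // e.g. root=modules, file=modules/pipe/Foo.class -> pipe.Foo
    public static String toClassName(File root, File classFile) {
        String rel = classFile.getAbsolutePath()
                .substring(root.getAbsolutePath().length() + 1);
        rel = rel.substring(0, rel.length() - ".class".length());
        return rel.replace(File.separatorChar, '.');
    }
}
```

Each derived name would then be loaded through a URLClassLoader rooted at the modules directory, keeping only classes for which something like `ModuleInterface.class.isAssignableFrom(loaded)` holds (the interface name here is hypothetical).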

3.4.4 Data Mismanagement


3.4.4.1 Problem There was considerable inefficiency in how data was managed. For example, the DataLayer kept ArrayLists of Place and Transition objects, from which arrays were generated for the analysis modules. In numerous places an array was generated only because its length was required, whereas a method that simply returns the number of elements involves much less overhead. In some cases, an array was created to get its length and then, a few lines of code later, another copy of the same array was generated. It would be more efficient to request the array once, store its length locally, and then use it as required. We aimed to look into the management of data, particularly in the analysis modules and the DataLayer class, and eliminate these inefficiencies wherever possible.

3.4.4.2 Solution Will identified numerous instances where arrays were being created in order to facilitate basic parts of the analysis. This was particularly prevalent in the Classification class, and also in the GSPN class, which extends Classification. The initial solution was to introduce local placeholders within the Classification class, one for Place arrays and one for Transition arrays. These were set up so that on the first attempt to access, a copy was made; this copy was then used for subsequent accesses by analysis modules. Although this didn't produce any perceptible issues when the application was run, it turned out that the changes were causing a test to fail. The test had been written by Edwin to ensure some basic requirements of Petri nets were being met. Will therefore used an alternative method to prevent unnecessary creation of arrays: methods were written to get the counts of elements in the Place and Transition arrays used by modules. Since so much of the inefficient code created arrays just to read their length, this should significantly improve the efficiency of the code. The new methods were added to the DataLayer class, and a total of 23 references in five different classes were updated to take advantage of them.
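A minimal sketch of the contrast between the two access patterns, with Place standing in for PIPE's real class and the copy counter added purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch contrasting the old and new access patterns; getPlacesCount()
// mirrors the counting methods added to DataLayer.
public class DataLayerSketch {
    static class Place { }

    private final List<Place> placesList = new ArrayList<>();
    private int arrayCopies = 0;  // instrumentation for this example only

    public void addPlace(Place p) { placesList.add(p); }

    // Old style: materialise a fresh array even if only the length is wanted.
    public Place[] getPlaces() {
        arrayCopies++;
        return placesList.toArray(new Place[0]);
    }

    // New style: answer the size question directly, with no copying.
    public int getPlacesCount() { return placesList.size(); }

    public int getArrayCopies() { return arrayCopies; }
}
```

Callers that only need the number of places ask for the count and never trigger a copy.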

3.4.4.3 Evaluation A functional test script was developed which opened a basic Petri net, then ran the Classification, GSPN and State Space analysis modules on the net. Using TPTP profiling[10] it can be seen that the new getPlacesCount method is called 36 times and getTransitionsCount 47 times. Correspondingly, while the old version of PIPE called getPlaces(Place[]) 41 times and getTransitions(Transition[]) 105 times during this test, the new version calls getPlaces(Place[]) 3 times and getTransitions(Transition[]) 58 times. This represents a substantial reduction in the number of array instantiations thanks to the refactored code.

3.4.5 Evaluation of Refactoring


While the individual refactorings were clearly successful in increasing the usability and efficiency of the application, it was also desirable to show quantitatively that the refactoring had improved application performance. To this end, the application was profiled before and after refactoring (it was possible to download the code base at different points in time via CVS). TPTP (Test and Performance Tools Platform)[10] is an Eclipse Foundation top-level project that includes a range of tools for profiling Java applications. Profiling took place using a range of functional tests simulating various uses of PIPE (opening, closing and editing nets and subjecting them to a variety of analyses); it was then possible to see how memory usage and execution statistics differed. Secondly, the Eclipse metrics plugin was used. This provides detailed information about Java packages and classes, including afferent vs. efferent coupling (i.e. instability). It also measures McCabe cyclomatic complexity[11], an indication of method complexity arrived at by counting the number of distinct paths through methods. The group hoped that the refactored code would show some improvement on these measures. There were a number of differences between the profiles, but regrettably it wasn't possible to show significant and consistent improvements in the performance of the new code over the old. On some measures (e.g. overall execution time), the newer code base showed marginally improved performance; on others there was no change or a slight worsening. Similarly, on measures of instability and complexity, the results were mixed and certainly not conclusive. It had perhaps been wishful thinking that the changes made would generate significant quantifiable improvements to the application's overall performance, considering its size and complexity. The individual improvements justified above were certainly significant, however, and should lead to more efficient and more easily understood code for future generations of PIPE developers.


Figure 2: DataLayer class before and after refactoring. References in brackets refer to refactorings described above.
Figure 3: The DataLayerWriter class, which now contains methods for saving a Petri net to an XML file.

4 Zoom Functionality
4.1 Introduction
As outlined in the specification of the zoom function, the idea was to add this feature to PIPE to give users greater flexibility in working with their Petri nets. At a functional level, what one would want from zoom is fairly obvious, so when planning the work the most important consideration was to avoid unwanted side effects of adding zoom. In particular, use of the zoom function should not:

• Affect the integrity of the Petri net diagram


• Affect or disable any other drawing functionality available to the user
• Alter the behaviour of the net on animation or the results of analysis
• Make any change to the file representation of the Petri net

So, it was important to get a good understanding of how the visualisation of the nets worked and how the data model of the net interacted with the on-screen view, and to design and integrate the zoom feature carefully into the existing code.

4.2 Research
4.2.1 Existing PIPE code
The existing version of PIPE was run in debug mode, using the Eclipse IDE, to
determine the way changes and additions to the Petri net diagram are carried
out. The case of adding a new place to the net was chosen as a simple example
to look at. Single stepping through the code and recording the flow of control
enabled the UML sequence diagram in Figure 4 to be produced. The diagram shows that, although PIPE does not follow the MVC pattern fully by separating the model and the view of a place into separate classes, it does separate the interactions the Place object is involved in. At the top of the diagram the

newly created Place object is added to a DataLayer object which is responsible
for providing information when analyses are run, or the net is saved to file. In
the lower half of the diagram the Place is added to a GuiView object which is
the on­screen component that displays the Petri net. From this we were able to
establish that most of our work would need to look at the representation of the
net by the GuiView, and the interactions between the user and the GuiView.
The only point to be addressed on the model side would be to ensure that the
x and y values of each component provided to the DataLayer, and subsequently
saved to file, were not affected by the current zoom.

4.2.2 Swing
The GUI components used in PIPE are part of the Java Swing library, and so several of the Java Tutorial pages related to Swing were consulted as part of our research. The most important information we found related to the layout of components within a container[6] and to transforming shapes and images[7]. Laying out components within a container in Java is controlled by a LayoutManager object, which pulls, pushes, stretches and squeezes components into a particular overall layout, e.g. a grid of cells where the components are all forced to be the same size. In fact, PIPE bypasses this system by setting the LayoutManager of the GuiView container to null and delegating positioning and sizing to each individual component. Components achieve this using pixel measurements, relative to the top left corner of the parent GuiView in the case of positioning. Scaling (or translating, shearing or rotating) can be applied to graphics in Java using the java.awt.geom.AffineTransform class. An AffineTransform object can be passed to the Graphics2D object that shapes and images are painted into, causing the graphics to be transformed in some chosen fashion. In our case, we would create an AffineTransform and use its scale() method to set the degree of zoom we want to apply to the shapes of the Petri net.
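The technique can be demonstrated headlessly with a BufferedImage; the shapes drawn and the 2x factor are arbitrary examples, not PIPE code:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

// Minimal demonstration of scaling painted shapes with an AffineTransform.
public class ScaleDemo {
    public static BufferedImage paintScaled(double zoom) {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();
        g2.setColor(Color.WHITE);
        g2.fillRect(0, 0, 100, 100);
        // Apply the zoom before painting: everything drawn afterwards is scaled.
        g2.transform(AffineTransform.getScaleInstance(zoom, zoom));
        g2.setColor(Color.BLACK);
        g2.fillRect(10, 10, 10, 10);  // lands at (20,20)-(40,40) when zoom == 2
        g2.dispose();
        return img;
    }
}
```

In a Swing component the same `g2.transform(...)` call would sit at the top of paintComponent(), so the existing drawing code needs no changes.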

4.2.3 Successful Approaches to Zoom Implementation


Some searching was carried out to find descriptions of successful implementations of zoom in Java Swing applications, or possibly even class libraries that we could integrate. This search was not very fruitful. Some open source zoom libraries do exist, in particular Piccolo, developed at the University of Maryland[8]. While it is very impressive, with classes representing cameras, lenses etc., integrating something like Piccolo would have meant replacing the current PIPE net element classes entirely, which we felt was unnecessary. No descriptions of general approaches or patterns for zoom were found.

Figure 4: UML Sequence diagram for adding a new place to the Petri Net. The
Place is registered with both a DataLayer and a GuiView.

Figure 5: Zooming from the centre out

4.3 Design
4.3.1 Design Decisions
We posed the following questions when thinking about how zoom would work:

• Which elements of the net should be affected by zoom?


• Answer: The position of all elements will be affected, but text labels will
not be resized. Changing the size of text will make it unreadably small or
obtrusively large fairly quickly and will not enhance the diagram.
• Will a saved net always be opened at 100% zoom?
• Answer: Yes. Zoom values are not part of the file data, and so a net will
always be saved and reopened as though it were at 100%.
• Can multiple nets open in different tabs be shown at different zooms?
• Answer: Yes. Zooming a net in one tab should not affect the others.

4.3.2 Geometrical Approach


The strategy for transforming the geometry of the net went through two iterations. Initially, a two-step approach was envisaged: first, the position of each component relative to the centre of the current diagram would be calculated; then this position would be scaled, see Fig. 5.
As shown in the figure, this would result in some components having negative
x or y values with respect to the parent panel, so the second step would be to
translate the components back into the visible area. Initially, it was thought it

Figure 6: Zooming from the container origin. The diagram is subsequently
scrolled, so that it appears as though the screen is zooming in on the central
point.

would be necessary to use these two steps to achieve the correct expansion of the diagram around the central point. However, once implementation began, it was realised that this was very complicated to get right. Furthermore, simply multiplying all the normal x and y coordinates by the zoom factor, and then scrolling to the correct point, was equivalent to the more complicated approach and achieved exactly what we wanted. See Fig. 6.

4.3.3 Code Architecture


At the code level, the design of the zoom function had three main parts. Firstly, a new class, ZoomController, would be added to the pipe.gui package; it would be responsible for keeping track of the zoom value of a GuiView and provide services to objects needing to zoom. Secondly, since different elements in the diagram would need to act differently when a zoom occurred (Places scale and translate, ArcPoints and text elements translate only, etc.), a new interface, Zoomable, was introduced. Zoomable contains one method, zoomUpdate(), which each class would implement according to its own particular behaviour. This enables the GuiView to treat all relevant components as Zoomable objects and simply tell them to zoomUpdate(). Finally, a new class, ZoomAction, was created within the GuiFrame class, with a standard actionPerformed() method to initiate the zoom. A high-level sequence diagram for the planned implementation of a zoom event is shown in Fig. 7.
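A compressed sketch of this three-part design follows. Only zoomUpdate() and the controller's role are taken from the text; the other member names are our own illustration:

```java
// Sketch of the design: a ZoomController holds the zoom state and
// Zoomable components update themselves from it.
public class ZoomSketch {

    interface Zoomable {
        void zoomUpdate();  // each component applies the zoom in its own way
    }

    static class ZoomController {
        private int percent = 100;
        public void setZoom(int percent) { this.percent = percent; }
        public int getPercent() { return percent; }
        // Map an unzoomed coordinate to its on-screen position.
        public int getZoomedValue(int value) { return value * percent / 100; }
    }

    // A minimal component: keeps its 'real' location and derives the
    // displayed one on each zoomUpdate(), as the PlaceTransitionObjects do.
    static class Node implements Zoomable {
        final ZoomController controller;
        int locationX, locationY;   // real (100%) coordinates
        int displayX, displayY;     // zoomed, on-screen coordinates

        Node(ZoomController c, int x, int y) {
            controller = c; locationX = x; locationY = y; zoomUpdate();
        }
        public void zoomUpdate() {
            displayX = controller.getZoomedValue(locationX);
            displayY = controller.getZoomedValue(locationY);
        }
    }
}
```

The GuiView would simply iterate over its components and call zoomUpdate() on each, without knowing how any particular element zooms.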

Figure 7: Sequence diagram for zoom. The GuiView makes a zoomUpdate()
call on all its Zoomable components, and the ZoomController is queried for the
current settings.

Figure 8: UML class diagram showing the relationships between GuiView and
some of the Zoomable and non­Zoomable components it may contain. Not
all classes derived from PetriNetObject are shown. Several others, like Arc, did
not need to implement Zoomable, so it was decided not to make PetriNetObject
Zoomable itself.

4.4 Implementation
4.4.1 PlaceTransitionObjects
Places and Transitions in PIPE both derive from PlaceTransitionObject and
much of the implementation, including the zoomUpdate() method was done at
the superclass level. Firstly, two new variables, locationX and locationY were
added to the class to store the ’real’ coordinates of the object, i.e. with no zoom.
These variables are only updated if the object is actually selected and moved by
the user, not if it moves as the result of a zoom. Next, zoomUpdate() was implemented to query for the current zoom value and update the size and location
of the object accordingly. An AffineTransform for the current zoom is obtained
from the ZoomController and used to transform the component’s shape. The
challenges encountered were updating the ’clickable’ area of the objects and any
arcs connected to them. Mouse clicks did not initially register within the whole
area of the new zoomed components. We discovered that Places and Transitions
had overridden the contains(int x, int y) method of JComponent. This method
defines the area within which the component responds to mouse clicks, and so
had to be altered to reflect the zoomed size. Finally, the objects carefully calculate the points at which any arcs attach to them. For a Place this is a point on

its circumference. For the Transition we found that an AffineTransform was already being used to calculate the points (to allow for rotation of the transition), and so we were simply able to concatenate this with our own AffineTransform to produce the zoomed result.
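The contains() fix can be sketched like this for a circular Place. The class name and fields are hypothetical, and real PIPE code would obtain the zoom from the ZoomController rather than a local field:

```java
import javax.swing.JComponent;

// Sketch of the clickable-area fix: contains() must test the mouse point
// against the zoomed size, not the 100% size.
public class ZoomedPlace extends JComponent {
    private final int diameter;   // diameter at 100% zoom
    private int zoomPercent = 100;

    public ZoomedPlace(int diameter) { this.diameter = diameter; }

    public void setZoomPercent(int percent) { zoomPercent = percent; }

    @Override
    public boolean contains(int x, int y) {
        // The place is drawn as a circle; scale its radius by the zoom.
        double r = (diameter * zoomPercent / 100.0) / 2.0;
        double dx = x - r, dy = y - r;   // distance from the circle centre
        return dx * dx + dy * dy <= r * r;
    }
}
```

Without the zoom term, clicks on the outer ring of an enlarged place would fall outside the original 100% circle and be ignored.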

4.4.2 Arcs
In fact we did not implement zoom in the arcs themselves, but rather in the ArcPathPoint class; the Arc simply joins up its constituent points to draw itself. As for PlaceTransitionObjects, a variable was added to the class to store the 'real' location of each point, independent of its on-screen, zoomed position.

4.4.3 Labels
NameLabels are the names of Transition or Place objects, and this information
is stored in the pnName NameLabel variable inherited from PetriNetObject. In
the first iteration of the zoom process, these labels were not updating when the
object was zoomed and moved position. To remedy this, a call to the pnName
variable’s setLocation() method was added to the updateBounds() method of
PlaceTransitionObject, which is called to update the object as part of the zoom.
The new, zoomed coordinates are passed in, along with the height offset of the
object, and the result is that the label is repositioned with the newly zoomed
object.

4.4.4 AnnotationNotes
The AnnotationNote objects inherit from PetriNetObject like everything else, but because their text still had to be legible, we decided not to change their size. Therefore the effect of the zoom simply had to be a translation from one coordinate to another. As with other objects, variables were added to the class to store the original coordinates of the object, so that these could be used for saving and loading. The zoomUpdate() method was also added, which gets the current ZoomController (if not null), calculates the new zoomed coordinates using the original coordinates of the object, and then uses the original setLocation() method to repaint the object. This updates all aspects of it, including its draggable points, so no further work was needed.

4.4.5 Updating the JScrollPane


After successfully zooming a net so that the objects' size and relative position are correct, the next task was to ensure that the view pane changed accordingly. The specification dictates that the zoomed view will appear to remain centred on the same part of the net as before. The first challenge, however, was to resize the entire pane to fit the new net, if larger. As previously mentioned, we encountered numerous difficulties with this. The container PIPE uses is a JScrollPane, which creates and updates the viewport and scrollbars automatically, but the pane only recognises that it needs to 'grow' if it is updated and its child components are off the bottom and/or right-hand side of the pane. However, once we decided to zoom the objects' distance from the top left corner of the pane, rather than from the centre of the view, the net was effectively always growing out to the right and down. The pane can then be resized by calling the existing method updatePreferredSize(), which iterates over the net objects and sets the size of the pane accordingly. Originally, PIPE only extended the pane if objects were created on its very edge. We decided it was easier to continue drawing a net if the pane grew whenever an object was within 100 pixels of the edge. This way, there is always plenty of space to scroll to and add the next object, making the application more usable.
When the JScrollPane enlarges, scrollbars are added and updated automatically. However, the viewport remains the same. Therefore, the next challenge was to reposition the viewport so that it appears to be centred on the same coordinates as before, as detailed in the specification. The scrollbars are created automatically as children of the JScrollPane, and can be explicitly altered. We first experimented with moving the scrollbars to certain values in order to move the viewport and change which section of the net was visible. However, we soon discovered it was easier to modify the JViewport component of the pane directly. We added code to the actionPerformed() method of ZoomAction to store the coordinates of the top left-hand corner of the viewport pre-zoom. In case this was itself a zoomed perspective, we convert these to what they would be without any zoom. The zoom process then goes ahead and, after the pane is enlarged if necessary, we calculate the coordinates of the new viewport. The end result is that the model appears to have been zoomed around the centre point of the previous view. The scrollbars update automatically.
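The recentring arithmetic might look like the following; this is our own formulation of the calculation described, not PIPE's code, and the result would be passed to JViewport.setViewPosition():

```java
import java.awt.Point;
import java.awt.Rectangle;

// Illustrative calculation for recentring the viewport after a zoom:
// the point at the centre of the old view should stay centred.
public class ViewportMath {
    // oldView: visible rectangle before the zoom, in component coordinates;
    // oldPercent/newPercent: the zoom levels before and after.
    public static Point newViewPosition(Rectangle oldView, int oldPercent, int newPercent) {
        // Centre of the old view, converted back to unzoomed (100%) coordinates.
        double cx = (oldView.x + oldView.width / 2.0) * 100.0 / oldPercent;
        double cy = (oldView.y + oldView.height / 2.0) * 100.0 / oldPercent;
        // Re-apply the new zoom, then step back half a viewport, clamping at 0.
        int nx = (int) Math.round(cx * newPercent / 100.0 - oldView.width / 2.0);
        int ny = (int) Math.round(cy * newPercent / 100.0 - oldView.height / 2.0);
        return new Point(Math.max(0, nx), Math.max(0, ny));
    }
}
```

For example, doubling the zoom from 100% with a 200x100 view at the origin moves the view position to (100, 50), keeping the old centre point centred.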

4.4.6 Click and Drag


During the implementation phase, the idea of being able to 'click and drag' the Petri net came up. This is a feature that often accompanies zoom, and the decision was taken to promote it over the zoom select function. Briefly, the user turns on "drag mode" by clicking a button on the tool bar, and is then able to click anywhere in the diagram and drag that point across the screen, causing the whole net to scroll. Click and drag was implemented with a few fairly simple changes. Firstly, an extra mode constant was added to the pipe.gui.Constants class, with a button to activate it. Secondly, the mouseDragged() method was implemented in the nested MouseHandler class of GuiView. The algorithm used to produce the drag effect is as follows:

• Let point p be the top left corner of the visible area

• if Xdragged_to > Xdragged_from then p.x = p.x + (width of visible area)
• if Ydragged_to > Ydragged_from then p.y = p.y + (height of visible area)

Figure 9: Proposed buttons for zoom

Figure 10: Final buttons for zoom

• translate p by ((Xdragged_from - Xdragged_to), (Ydragged_from - Ydragged_to)).


• scroll point p into the visible area

The scrollRectToVisible() method of javax.swing.JComponent is provided for exactly this purpose and gives a pleasant, smooth scroll (especially when compared to the author's initial attempts to reposition the scroll bars manually!).
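A common, compact equivalent of the steps above (not PIPE's exact code) is to translate the whole visible rectangle by the drag delta and hand the result to scrollRectToVisible():

```java
import java.awt.Point;
import java.awt.Rectangle;

// Pure-geometry sketch of drag-to-pan: compute the rectangle to pass to
// JComponent.scrollRectToVisible() from the visible area and a drag gesture.
public class DragPan {
    public static Rectangle dragTarget(Rectangle visible, Point from, Point to) {
        Rectangle r = new Rectangle(visible);
        // Dragging right (to.x > from.x) moves the target left, so scrolling
        // it into view shifts the viewport left and the net appears to
        // follow the mouse to the right.
        r.translate(from.x - to.x, from.y - to.y);
        return r;
    }
}
```

In mouseDragged() this would be used roughly as `view.scrollRectToVisible(DragPan.dragTarget(view.getVisibleRect(), pressPoint, e.getPoint()))`.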

4.4.7 User Interface to Zoom Functionality


The original specification detailed that the zoom functionality should be actioned by the user via several buttons and menu items from the View dropdown menu. The anticipated buttons were as follows:
The final application looks like this:
The Zoom Select functionality was omitted in favour of the Click and Drag
tool, as mentioned above, but the other functionality is all present. Icons in
keeping with the look and feel of PIPE were sourced [9] and added to the GUI
toolbar for the incremental Zoom In and Zoom Out. The original specification dictated that clicking either of these buttons would result in a zoom of 20%; however, we felt it was more useful to reduce this increment to 10%, as the buttons would mostly be used for zooming in small amounts to refine a larger, previous zoom, probably made using the selection box or dropdown menu. The zoom selection box for the toolbar was created as a JComboBox. In order to tailor it to the correct size, several of its mutator methods are called when the GUI is set up, so these were abstracted into a separate addZoomComboBox() method. The specification dictates that the options in the box range from 20 to 100%, in increments of 20, with 150, 300 and 400% also available. This was changed to a range of 40-200%, in increments of 20%, plus 300, 350 and 400 percent, so as to give the user more choice. All these selectable options are also used for the Zoom dropdown menu under View, so they are declared as

Figure 11: Proposed dropdown menu for zoom

the constant String array zoomExamples within GuiFrame. The combo box is also set as editable, meaning the user can input their own value. A JComboBox can be associated with an action, but each individual selection within it cannot. Therefore a generic zoom action of type ZoomAction was initialised and associated with the box; the specific item selected has to be determined in the action handler. The other way of actioning a zoom is via the dropdown menu. In the specification this was to look like this:
However, in the final implementation the number of selections has been increased, as with the selection box on the toolbar, and looks like this:
Unlike the JComboBox, each item of the dropdown menu has to be associated with an action. However, rather than initialise new ZoomActions for each item, each one is associated with the generic zoom action, as with the selection box.
So, with the interface created, a user can click on any of these items to
action a zoom. The degree of zoom they have selected is then extracted from
the created action in the actionPerformed() method of ZoomAction. If the

Figure 12: Final dropdown menu for zoom

incremental Zoom In or Zoom Out buttons have been clicked, the name of the action created will reflect this, and so zoomIn() or zoomOut() can be called in ZoomController. These methods increment or decrement the controller's percent by 10 and use this to set the scale of the controller's Transform variable. If the name of the action is just Zoom, then it could have been created via either the dropdown menu or the toolbar selection box. ZoomAction therefore checks the source of the action as well, and extracts the text of the selected item appropriately. However, as the selection box can accept any user input, and the example percentages themselves contain the '%' character, some validation is necessary before the information can be passed to the ZoomController. If validation fails, the selection box is blanked out and the ZoomAction aborted. The validated string is then parsed to an integer and passed to the ZoomController's setZoom() method, which modifies the percent and Transform accordingly.
It was decided that the selection box should update whenever a ZoomAction completed, regardless of the source of the action. To this end, the updateZoomCombo() method was created, which is called after the percentage zoom of ZoomController has been altered and sets the selection box text to this value. We soon discovered a problem with our original implementation, however: the method that sets the selection box text also fires the newly selected item as an action. This meant that each zoom was in fact being actioned twice, which was highly inefficient. To solve this, we updated updateZoomCombo() so that the ActionListener on the box is removed before the item is selected and added back afterwards, so no action is fired.
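The fix can be reproduced in a few lines; the class and counter here are illustrative, but the remove/select/re-add pattern is the one described:

```java
import java.awt.event.ActionListener;
import javax.swing.JComboBox;

// Sketch of the double-firing fix: JComboBox.setSelectedItem() fires the
// box's ActionListeners, so detach them while updating programmatically.
public class ZoomCombo {
    // Counts how many times a zoom would have been actioned.
    public static int zoomActions = 0;

    public static JComboBox<String> build() {
        JComboBox<String> combo = new JComboBox<>(new String[] {"80%", "100%", "120%"});
        combo.addActionListener(e -> zoomActions++);
        return combo;
    }

    // Mirror of updateZoomCombo(): select the item silently.
    public static void updateZoomCombo(JComboBox<String> combo, String value) {
        ActionListener[] listeners = combo.getActionListeners();
        for (ActionListener l : listeners) combo.removeActionListener(l);
        combo.setSelectedItem(value);   // would normally fire actionPerformed
        for (ActionListener l : listeners) combo.addActionListener(l);
    }
}
```

A user selection still fires the zoom action once; the programmatic update after a zoom fires nothing.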

4.5 Evaluation
Overall, we feel that the development of zoom functionality has been a success.
The key objectives outlined in the original specification were as follows:
• when a zoom magnification is selected, each object in the net is redrawn to the correct scale, and the view centred on the correct coordinates
• any new objects created with that magnification selected are drawn to the scale previously selected
• the new buttons are created in the toolbar and clicking them initiates the correct action, with relevant alt text also added
• the new dropdown options are added to the dropdown menu and clicking them initiates the correct action
All of these objectives have been met. However, the method of implementation
and some smaller details do differ from what was outlined in the spec.

4.5.1 Differences from the Specification


4.5.1.1 Implementation The development process detailed in the original spec was largely followed; our initial research focused greatly on how the original PIPE application drew objects. However, the suggestion that certain existing, unused data members were related to scaling and redrawing objects, and could be used as part of a zoom functionality, proved to be incorrect. This allowed us to design the implementation from scratch, following MVC concepts and good object-oriented practice.
The specification did not provide a detailed plan of how the functionality would be implemented. It specified that the Graphics and Graphics2D classes of the AWT package would be sufficient to achieve zoom, and this was the case, although AffineTransform was also needed. It did not mention exactly what classes would be created, or whether an interface would be used, so we simply took its open-ended observation that much of the work would concern the DataLayer and Gui packages as the starting point of our research.

4.5.1.2 Functionality Details Perhaps the most obvious deviation from the outcome anticipated in the spec was the eventual omission of the Zoom Select functionality. This was omitted because we felt it was an unnecessary embellishment of the zoom functions we had previously created. The drag function was implemented instead, which we feel is far more useful: it is not simply another way of zooming the net, but a separate tool which adds new functionality and increases usability in every view.

4.5.2 Key Metrics ­ Usability and Reliability


Zoom was added at the suggestion of a user, and the key objective was to allow the information contained in large, complicated nets to be more easily gleaned, both by allowing users to focus in on certain sections and by letting them zoom out for a comprehensive overview of the whole model. We feel that our functionality meets this objective and greatly increases the usability of the application, especially when coupled with the related drag functionality. The icons used match the look and feel of the original product, and the zoom functionality works intuitively.
Reliability is also key for users, and we have completed rigorous user testing to establish that zoom functions correctly in different scenarios: different views, panes, object types and net designs. Functional tests have also been added to the suite to reinforce this testing and aid future development.

5 Reachability Graph Functionality


5.1 Motivation
When entering the arena of open source software development, the task of identifying key areas of the code that require development and expansion can often be daunting. One attempts to identify areas that have either been entirely overlooked by previous teams of coders or underexploited in terms of the functionality they could offer the end user. As a team, the idea of extending

Figure 13: Representation of the relation between a Petri net and its reachability graph

the analysis side of the application was both an interesting avenue to explore and potentially quite rewarding.
The reachability graph was an obvious extension of the work that had already been done on state space analysis, and would naturally complement the information provided on boundedness, safeness and deadlocks. The reachability graph, a commonly sought-after tool when analysing the behaviour of a stochastic system modelled with Petri nets, provides a visual representation of all the net's possible firing sequences. Its nodes represent the states of the net, and the edges between them indicate particular (single or multiple) transitions firing. The example below clearly illustrates the close relationship between the net and the graph, while reinforcing the fact that each presents and emphasises different aspects of the same body of information. The graph is a useful tool for finding erroneous behaviour of a system; for example, we can use it to determine whether using a semaphore will prevent simultaneous access to the data it protects.
If the data were simultaneously accessed by both processes, the above Petri net would assume a state whose marking = (|start of p1|, |access by p1|, |end of p1|, |semaphore|, |start of p2|, |access by p2|, |end of p2|) = (0,1,0,0,0,1,0). However, this state is not present in the reachability graph in Figure 15, which proves that the semaphore enforces mutually exclusive access to the data it protects. There are many other uses for the graph besides; for example, it can be used to determine whether the net can deadlock, whether it is bounded, etc.

Figure 14: PIPE Petri net representation of a two process semaphore

Figure 15: Reachability graph generated by the above Petri net. Note that S3,
S4, S5, S6, and S7 are in red and represent the vanishing states.

5.1.1 Background on Reachability
A Petri net is not static: its marking evolves dynamically through transition firings, which destroy and create tokens. The reachability graph of a Petri net is a graph representation of these firing sequences. It is a directed graph whose nodes represent reachable states S and whose edges represent transitions between two such states; if an edge connects S1 to S2, then S2 is said to be directly reachable from S1. It is also a multigraph, since two different transitions may be enabled in the same marking and produce the same state change, giving multiple edges between the same pair of nodes.

5.1.2 Background on generating state space[12]

5.2 Research
The research carried out prior to developing the Reachability Module was of key importance, both to ascertain the functionality desired by general users and to investigate potential implementation strategies. The needs of the general user were fairly basic and easily pinpointed, given feasibility constraints and the actual benefits the extension could provide. The user requirements of the module were therefore as follows:
• Graphical representation of the reachability graph via an additional module.
• Inclusion of information within the graph in terms of both state numbers and transition explanations.
• Limitation of the reachability graph search space in terms of the number of explored states, enforcing an upper limit beyond which both the viability of the graph and the additional information obtainable from it become minimal.
In contrast with the user requirements, the potential implementation strategies were numerous and varied greatly in approach. The key points of debate were as follows:
• The manner in which we obtained the state space information.
• The methods through which we transformed this data into its graphical
representation (this was going to be a flat bitmap graph according to the
specification).
Initial research focused on new algorithms that would analyse the Petri net and generate its corresponding reachability graph data. However, it became obvious that completely overhauling the algorithms implemented in the GSPN Analysis module would be cumbersome and would provide only minimal efficiency gains. Additionally, the objective of this part of the project was not to rework other modules but rather to increase the overall functionality of the software package. We therefore set this approach aside in favour of adapting the implemented algorithms to obtain the required output. The second stage of the implementation research centred on technologies that would enable us to represent data in a graphical format. In particular, the challenge was to find viable open source solutions to the problem of automating the graph drawing process.

5.2.1 Technologies
The need for a graph visualisation package was apparent from the outset of this module's development. The research focused on graph visualisation techniques, which enable structural information to be represented as diagrams of abstract graphs and networks. More importantly, this process needed to include automated graph drawing, tackling the fundamental problem of a layout algorithm capable of determining the optimum positioning of the elements of the graph.

Figure 16: Example of the Graphviz engine generating an automated graph layout

5.2.1.1 Graphviz Research into the area of automated graph construction and available packages led us towards the open source community, which promised to yield solutions that might be readily integrated into the PIPE application. Graphviz was highlighted as the dominant open source software capable of representing structural information as abstract graphs and networks. Among the key elements that made this package attractive were its support for a wide range of image formats, including JPEG and interactive graph browser images, as well as for many leading layout formats.
Nevertheless, the integration could not be done in a completely straightforward manner: Graphviz is coded in C, which would break the platform-independent nature of the PIPE application, or at least this particular module's independence. We therefore extended our research towards Java extensions that would provide a link to the functionality of Graphviz while maintaining the independence of the Java environment.

5.2.1.2 Grappa Developed by the research branch of AT&T, Grappa is a graph drawing package that may be regarded as a subset of Graphviz. The Java package provides methods for building, manipulating, traversing and displaying graphs of nodes, edges and sub-graphs. It also includes methods for reading in and writing out graphs using the 'dot' text format. Nevertheless, a fundamental problem remained: it still required end users to install Graphviz on their local machines.

5.2.1.3 Dot File Format The dot file format provided the data structure needed to translate reachability graph information into its graphical representation. Grappa was built around this format for a number of good reasons. The dot format is readable at source level and also enables quite detailed control over the manner in which Graphviz automates the layout algorithm. The format allows users to specify numerous fundamental properties of the graph in question, for example:

• Node shape, colour and positioning (if the user requires fixed nodes)
• Graph orientation in terms of direction - top to bottom, left to right, etc.
• Detailed labelling both within nodes and between them - labels can be fixed, or can change visibility depending on node or inter-node selection, etc.

The format therefore seemed ideal for the level of control we required over the structure of the resulting graph. It also lets users customise graphs significantly, which could help future developers extend the functionality of the module through higher levels of user input over the desired graphical layout. Once the dot file is constructed, it is parsed by Graphviz to construct hierarchical or layered drawings of directed graphs. For the dot format, the Graphviz layout algorithm directs edges in the same direction (top to bottom, or left to right) and then attempts to avoid edge crossings and reduce edge length.

5.3 Design
Having pinpointed the technology that would be used to implement the new reachability module, a number of design decisions then needed to be taken.

5.3.1 Graphviz, Grappa and C


One of the fundamental problems to address was that although Grappa provided a Java interface to the Graphviz application, it did not provide a means of actually running Graphviz within a closed Java environment. The end user would still need to install the Graphviz application in order to use the Grappa port; as Graphviz is coded in C, this would require different compilations on different platforms, contradicting the PIPE application's objective of remaining platform independent.
The initial approach to this problem was to port the necessary elements of Graphviz to Java, reasoning that the translation from C should be fairly straightforward given the similarities between the two languages. Nevertheless, it rapidly became apparent that this task would be incredibly time consuming and would require extensive additional library translations. Hence, to make use of the Graphviz application in its original form, we decided that hosting it as a remote service for the PIPE application would be an acceptable compromise. Hosted this way, PIPE remains platform independent but acquires a new constraint: the module requires internet connectivity. Considering that this introduces only a minor limitation (given the extent of today's internet access), we decided to push forward.

5.3.2 Code Architecture


At the code level, many of the decisions about how to introduce the new module were predetermined by the original design of the PIPE application. We introduced a new class, ReachabilityGraphGenerator, which implements the Module interface, providing the means of rapidly extending the application via its module management system. The user interface was implemented through a generic Abstract Window Toolkit container, the content pane of a JDialog object. The container was then populated with various JComponents giving the user the ability to analyse either the current net or Petri net XML files saved outside of PIPE, via the PetriNetChooserPanel. In addition, the user receives feedback via the ResultsHTMLPane and can choose, via a checkbox, to place the initial state at the head of the reachability graph. Finally, a ButtonBar was added to the container to initiate the reachability analysis, performing (amongst other tasks) the calls to the overloaded generate method of the StateSpaceGenerator class and to generateDotFile. The first produces a reachability graph file (rgf) holding both the state and state transition information of the net in question. The second translates this information into the corresponding dot file format, which can then be used by Graphviz to automate the graph layout.
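The module wiring described above can be sketched as follows. This is an illustrative simplification: the method names on the Module interface, and the module's display name, are assumptions, not PIPE's actual signatures.

```java
// Hypothetical sketch of how a module plugs into PIPE's module manager.
// The real Module interface in PIPE may declare different methods; the
// two shown here are assumptions for illustration only.
interface Module {
    String getName();   // name shown in the module manager tree
    void start();       // invoked when the user launches the module
}

class ReachabilityGraphGeneratorSketch implements Module {
    public String getName() {
        return "Reachability Graph";
    }

    public void start() {
        // In PIPE this would build a JDialog, populate its content pane with
        // a PetriNetChooserPanel, a ResultsHTMLPane and a ButtonBar, and
        // then display the dialog to the user.
        System.out.println("Launching " + getName() + " module");
    }
}
```

The module manager can then discover and launch any class implementing the interface without knowing anything else about it.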

5.4 Implementation
5.4.1 Steady State Algorithm
The principle of the algorithm[12] used is straightforward. It is a depth-first search and terminates only when the state space is finite. First, the initial marking (M0) of the net is calculated and added to the state space. The transition enabling rule described in lemma 2 is then used to generate an array of enabled transitions, and the firing rule described in lemma 3 is used to compute the set of states reachable from M0. In this way, new states are iteratively added to the state space by selecting a state already present, generating its reachable states and inserting them into the state space. To generate the smallest possible set of states, as defined in lemma 1, a list of explored states is kept so that a state reachable from more than one state is added to the state space only once.
As the state space of the net increases exponentially with its size, special consideration has to be given to memory requirements and computational intensiveness. Hence, a depth-first search approach is chosen over a breadth-first one. The explored list, which will eventually contain all the states in the state space, is already very memory intensive; a breadth-first approach would make it more taxing still. Moreover, a depth-first approach confers another advantage: it allows information on the transitions between states to be recorded at the same time, reducing the computational time for generating the state space.

Figure 17: Class diagram outlining some of the structure and interactions of the Reachability module
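The search described above can be sketched on a toy net. This is not PIPE's StateSpaceGenerator: markings are plain int arrays, transitions are input/output vectors, and the explored set is an ordinary HashSet rather than the compressed hash table discussed later.

```java
import java.util.*;

// Illustrative depth-first state-space exploration. Transition t consumes
// in[t] tokens and produces out[t] tokens, per place.
class StateSpaceSketch {

    // A transition is enabled if every input place holds enough tokens (lemma 2).
    static boolean enabled(int[] m, int[] in) {
        for (int p = 0; p < m.length; p++)
            if (m[p] < in[p]) return false;
        return true;
    }

    // Firing removes the input tokens and adds the output tokens (lemma 3).
    static int[] fire(int[] m, int[] in, int[] out) {
        int[] next = m.clone();
        for (int p = 0; p < m.length; p++)
            next[p] += out[p] - in[p];
        return next;
    }

    // Depth-first exploration from the initial marking m0. The explored set
    // ensures each state enters the state space only once (lemma 1).
    static Set<List<Integer>> explore(int[] m0, int[][] in, int[][] out) {
        Set<List<Integer>> explored = new HashSet<>();
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(m0);
        while (!stack.isEmpty()) {
            int[] m = stack.pop();
            List<Integer> key = new ArrayList<>();
            for (int v : m) key.add(v);
            if (!explored.add(key)) continue;   // already in the state space
            for (int t = 0; t < in.length; t++)
                if (enabled(m, in[t]))
                    stack.push(fire(m, in[t], out[t]));
        }
        return explored;
    }
}
```

For a two-place cycle (t0: p0 to p1, t1: p1 to p0) starting from marking (1,0), the exploration finds exactly the two markings (1,0) and (0,1).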

5.4.1.1 Data Representation The most obvious data type in which to store the state space is a queue, because the states can be thought of as elements queuing to be processed; in this case, waiting to have their reachable states explored. Using a queue incurs more overhead than a primitive type such as an array, due to the continual need to cast objects back to their original types. However, the ease of programming and the maintainability it offers justify the speed-space trade off, since arrays in general use more space and incur a higher initialisation overhead.
The computational and memory requirements of the explored states play an even greater role: the explored state list will eventually contain all the states in the state space, and it must potentially be traversed for every newly generated marking M' to check whether that marking has already been added. Hence, the hashing technique developed by Dr Knottenbelt[13], detailed in Nadeem's report[14], is used to store the state information. The technique can be summarised in 2 steps:

1. A primary hash key h1(s) is calculated to determine the row of the hash table.

2. A secondary hash key h2(s) is calculated to produce the value stored in the table: a single integer of compressed state information, not the full state description.

Each row of the hash table contains a linked list whose elements are these single-integer compressed states.

Figure 18: UML sequence diagram for generating a reachability graph from a Petri net.

Figure 19: Taken from p17 of Nadeem's report, summarising the hashing technique used to store states in the explored state list.
As seen in the diagram above, the combined h1 and h2 key can be used as an identifier for a particular state, so this information can be checked instead of the full state description to determine whether a particular state has already been added to the state space. However, the combined h1 and h2 key is not unique for every state description: it is possible to generate the same hash key for two completely different states. This disadvantage is offset by the 25-fold decrease in the memory required to store each state.
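The two-level scheme can be sketched as below. The hash functions here are placeholders; Knottenbelt's technique uses carefully chosen functions, and the 25-fold compression figure applies to the real scheme, not this toy.

```java
import java.util.*;

// Sketch of the two-level hashing scheme: a primary key h1 selects a row of
// the table, and only a compressed secondary key h2 (a single int) is stored
// there, never the full marking.
class CompressedStateTable {
    private final List<List<Integer>> rows;

    CompressedStateTable(int numRows) {
        rows = new ArrayList<>();
        for (int i = 0; i < numRows; i++) rows.add(new LinkedList<>());
    }

    private int h1(int[] marking) {               // primary key: row index
        return Math.floorMod(Arrays.hashCode(marking), rows.size());
    }

    private int h2(int[] marking) {               // secondary key: compressed state
        int h = 17;
        for (int v : marking) h = h * 1000003 + v;
        return h;
    }

    // Returns true if the state was new (and records it), false if its
    // compressed key was already present. Because (h1, h2) is not unique,
    // two distinct states can, rarely, be treated as identical.
    boolean addIfAbsent(int[] marking) {
        List<Integer> row = rows.get(h1(marking));
        int key = h2(marking);
        if (row.contains(key)) return false;
        row.add(key);
        return true;
    }
}
```

Membership tests now compare single integers instead of whole markings, which is what makes the explored-state check affordable for large state spaces.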
In order for the graph to distinguish between vanishing and tangible states, the following rule is used to categorise the states in the state space. At a particular state S:

S ∈ VS  ⟺  ET ∩ IT ≠ ∅

where ET is the set of enabled transitions (those satisfying enabling rule 2), IT is the set of immediate transitions and VS is the set of vanishing states.

To generate a directed reachability graph, a list of the states reachable from the current state needs to be recorded. This is achieved by the arcListElement object created by Nadeem to store rate information between two states. An extra attribute is added so that it can also hold transition information, namely the number of the transition responsible for the observed change of state. Once recorded, this information can be removed again, so a stack is chosen over a normal list to hold this data, together with the list of reachable states not yet added to an arcListElement.
All the above information is output into two separate records, the state record and the transition record. The markings of the state and the Boolean isTangible attribute are stored in the state record; the connections between states and the fired transition number are stored in the transition record. Knowing how the state space can explode exponentially with the complexity of the net, a random access file is used to output this information, even though this sacrifices a little computational speed: accessing a random access file is slower than accessing a sequential access file. The main reason for using a random access file is that it allows information to be written anywhere in the file. By fixing an offset at which the transition records begin, both the state record and the transition record can be written as soon as they have been calculated. This halves the memory requirement of the output operation: with a sequential access file, a complete list of state records and transition records would have to be built in memory before writing, as otherwise it would not be possible to retrieve the information in order later on.
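The interleaved writing can be sketched with java.io.RandomAccessFile. The record layouts and the offset value here are illustrative assumptions, not PIPE's actual rgf format.

```java
import java.io.*;

// Sketch of writing state records at the front of the file and transition
// records after a fixed offset, so both can be flushed as soon as they are
// computed. Field layouts and TRANSITION_OFFSET are illustrative only.
class RgfWriterSketch {
    static void demo(File f) throws IOException {
        final long TRANSITION_OFFSET = 4096;  // assumed start of transition records
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            long statePos = 0, transPos = TRANSITION_OFFSET;

            // As each state is generated, append it to the state region...
            raf.seek(statePos);
            raf.writeInt(42);          // e.g. compressed marking of the state
            raf.writeBoolean(true);    // isTangible flag
            statePos = raf.getFilePointer();

            // ...and append its outgoing arc to the transition region.
            raf.seek(transPos);
            raf.writeInt(0);           // source state number
            raf.writeInt(1);           // target state number
            raf.writeInt(3);           // fired transition number
            transPos = raf.getFilePointer();
        }
    }
}
```

Because seek() allows writing at either position, neither record list ever has to be held complete in memory.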

5.4.2 Random Access Files and Input Streams


One of the major considerations prior to implementing this method was that, in order to use the Grappa port to Graphviz, we needed to solve the problem of passing data from this method to the att.grappa Parser. When an instance of the Parser class is constructed, it takes as an argument an InputStream containing the necessary dot file information. Unfortunately, in Java, random access files are not part of the input and output stream hierarchy; they instead implement the DataInput and DataOutput interfaces. It is therefore impossible to simply assign a random access file to an input stream.
It was therefore necessary to find a way to read the data from the random access file, convert it into the dot file format, and store it so that it was accessible via an input stream. The solution came in the form of a circular buffer class[15], which effectively connects an input and an output stream to a single buffer. The data, once converted to dot format, is written to the buffer's output stream; the buffer then holds the data, which can be read back via the attached input stream, and that input stream is what is passed into the att.grappa Parser constructor.
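The JDK's piped streams offer a mechanism analogous to the circular buffer class used here, and give a self-contained sketch of the idea: bytes written on the output side become readable on the input side, which can then be handed to any consumer expecting an InputStream. (This is an illustration of the pattern, not the actual buffer class from [15].)

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Connect an OutputStream and an InputStream through one shared buffer:
// dot text written to 'out' becomes readable on 'in'.
class DotBufferSketch {
    static String roundTrip(String dotText) throws IOException {
        PipedInputStream in = new PipedInputStream(8192);   // shared buffer
        PipedOutputStream out = new PipedOutputStream(in);
        out.write(dotText.getBytes(StandardCharsets.UTF_8));
        out.close();                        // writer side finished

        // A real consumer (e.g. a dot parser) would read 'in' directly;
        // here we just drain it back into a string.
        ByteArrayOutputStream read = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) read.write(b);
        return read.toString("UTF-8");
    }
}
```

Note that piped streams are normally used across two threads; the single-threaded round trip works here only because the data fits within the buffer.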

Figure 20: Java IO class hierarchy (https://java.sun.com/)

5.4.2.1 Converting to Dot Format Through the use of the circular buffer, the actual conversion of the reachability graph file to a dot file was straightforward. Having researched the dot file format, the states and transitions were written into the output stream of the circular buffer. We chose at this point to represent the graph in a top-down format, with the states drawn as circles, coloured black or red depending on whether or not they are tangible. The state labels were simplified to Sx, where x is an integer, producing a clear graph. Nevertheless, we also wanted to show how each state corresponds to a particular marking of the Petri Net in question, so we included this information in a pop-up box that becomes visible when a particular state is selected. In addition, we placed similar labels on the arcs from one state to another, indicating which transition fired to cause the change of state.
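The output of the choices just described might look like the following dot fragment (state names, labels and attribute choices are illustrative, not the module's exact output):

```dot
digraph reachability {
    rankdir=TB;                   // top-down layout
    node [shape=circle];
    S0 [color=black];             // tangible state
    S3 [color=red];               // vanishing state
    S0 -> S3 [label="T1"];        // arc labelled with the fired transition
    S3 -> S0 [label="T2"];
}
```

Graphviz's dot engine then assigns positions to S0 and S3 and routes the two labelled arcs automatically.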

5.4.2.2 Grappa Interfacing The final stage of the implementation required research into how Grappa received, processed and then returned the graphical representation of the data. The first stage required constructing a Parser object which reads in the input stream. From the parser object it is possible to instantiate a graph object via the getGraph method, which is then used to construct the GraphFrame. The constructGraphFrame method establishes a URL connection with the remote server providing the necessary resources (http://www.research.att.com/~john/cgi-bin/format-graph), which uses the remotely installed Graphviz to determine an appropriate layout for the graph data using the dot algorithms.

5.4.2.3 Modification to the Grappa Interface Some small modifications were made to the Grappa interface to maximise both usability and comprehension for the end user. The most significant of these was an additional text box at the bottom of the interface outlining the various ways in which the user can interact with the reachability graph. These options include, but are not limited to, deleting states, zooming, saving and acquiring additional information by hovering over states or transitions.

5.5 Evaluation
The following criteria are examined to assess the quality and correctness of the reachability graph module, as opposed to its modelling accuracy:

5.5.1 Quality
Two main factors are considered here: the GUI and human-computer interaction (HCI). The interface should be aesthetically pleasing and blend in with the rest of the modules. It should also be intuitive (user-friendly) and functionally relevant. Visualising the graph should help users understand the state space of the net, so appropriate colour coding, labels and informative legends should be used.

A dialog box was chosen as the interface for displaying information and gathering user input because it conforms to the appearance of the other modules. The main drawback of this dialog box is its inflexibility as a top-level container and its modality, which blocks all interaction with other windows until it is closed. This caused many problems in getting the graph JFrame window to interact with the user without first closing the module window. After testing several methods, we managed to get the graph window to pop up on top of the module window and allow users to interact with it without first closing the module window. This makes the module more user-friendly, as users can now generate multiple graphs from a single module window.

5.5.2 Dot File Formatting


Overall, the interface is very easy to use, even for someone unfamiliar with computers, as the graph can be generated with a single click of a button. The appearance of the module window is almost identical to the other modules, since it is created in exactly the same way. The use of colour coding, labels and an informative legend allows users with a basic knowledge of Petri nets to interpret the graph intuitively. Functionally relevant options such as print and zoom cater for the needs of the average user, aiding graph interpretation.

5.5.3 Robustness
The module should be able to handle invalid file inputs. A limit should be set
for the maximum number of nodes on the graph:

1. To ensure that the graph can be generated within a reasonable amount of
time
2. To prevent an average user from running into memory problems due to
the explosion of the state space
3. To ensure the quality of the graph; it would be pointless to create a graph so dense that users cannot intuitively interpret it

The module should always reliably produce the same graph for any two identical
nets. Any exceptions should be handled gracefully.
The main tool used in assessing the robustness and correctness of the module is the test suite developed during its implementation. Details of the tests can be found below.

5.5.3.1 Regression Testing The reachability module is closely coupled with the GSPN analysis module, because a number of existing functions and objects were overloaded or extended to achieve optimal code efficiency and maintainability. Hence, regression testing was used to ensure that the reachability module was implemented without any disruption to the GSPN analysis module. A GSPN analysis test suite, exercising several functions used in the analysis, was written prior to the implementation of the reachability module. These tests were run regularly whenever a new feature or optimisation was added or an existing method modified, and proved invaluable in identifying subtle errors and data inconsistencies.

5.5.3.2 Unit Testing Unit testing was employed during the implementation of the module to validate the correctness of each function. These tests helped to ensure correct output and graceful handling of exceptions and invalid or incomplete parameter inputs.

It is essential to check that the correct state space is generated and that the right result is passed on to the dot file generation function; otherwise an inaccurate reachability graph will be generated. The integrity of the state space is checked as follows in the StateSpaceGeneratorTest:

1. Check that the state space generated by our algorithm has at least as many states as the set of tangible states generated by Nadeem's algorithm[14], since the set of tangible states is a subset of the set of reachable states.

2. If the state space contains only tangible states, the state space generated should be equivalent to the set of tangible states generated by Nadeem's algorithm[14].

3. Check the uniqueness of the states in the state space: the state space generated should contain the least number of states, as defined in equation 1, so there should be no replication of states.
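Check 1 above reduces to a simple set relation, sketched here with markings represented as strings (the state representations and helper name are illustrative, not the test suite's actual code):

```java
import java.util.*;

// The set of tangible states must be a subset of the full reachability set,
// so the full set can never contain fewer states than the tangible set.
class StateSpaceIntegritySketch {
    static boolean subsetCheck(Set<String> allStates, Set<String> tangible) {
        return allStates.containsAll(tangible)
            && allStates.size() >= tangible.size();
    }
}
```

A violation of this relation would indicate that the new state space generator had dropped states that Nadeem's algorithm still finds.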

The state space data is exported via a random access file, in which state and transition information are represented as state records and transition records. Therefore, to check that the right information is passed from the state space generation function to the dot file generation function, it is important to assess the integrity of the functions in the state record and transition record classes. These functions are tested in the StateRecordTest and TransitionRecordTest, which check whether the original data is modified in any way after being written and then read back.

Having verified the integrity of the state space generator, it was then necessary to begin testing the actual reachability module. This part of the testing required verification of the dot file generator algorithm as well as the Grappa port. Two fundamental verification steps were necessary to achieve this:

1. Verify that the data received from the state space generator was correctly translated into the dot file format; carried out by the ReachabilityGraphGeneratorTest.

2. Check the actual reachability graph for the correct number of states and transitions following the dot file parsing and subsequent graph generation through the Grappa engine.

Having checked the major functionality of the module, one final test (ModuleManagerTest) was implemented to verify that the module was correctly loaded via the module manager. This naturally overlapped with tests written to assess the refactoring of the module loading classes, which were therefore used for dual testing.
After the module had been implemented, we tested it rigorously and were able to reproduce the same graph faithfully for each example net. During the testing phase, we also found that the maximum number of nodes the graph can represent without obscuring its visualisation and interpretation is around 400, and we have therefore set that as the limit for the module.

5.5.4 Efficiency
Generation of a graph with fewer than 100 nodes should be moderately fast, while graphs with more than 1000 nodes may acceptably take longer to generate.

On average, we found that it takes less than a second to generate a graph with fewer than 15 nodes and about 10 seconds to generate a graph with 100 nodes. Of those 10 seconds, the state space generation takes only approximately 0.1 seconds; most of the time is spent transmitting the dot file to the remote server at http://www.research.att.com/~john/cgi-bin/format-graph and waiting for its response. If speed is an issue, users can always download a copy of Graphviz and invoke the call locally, which reduces the graph generation time roughly ten-fold.

5.6 Improvements
As in all software development projects, a number of improvements might have been made both to the method of solving the problem and to the manner in which it was implemented. Hindsight naturally makes some of the shortfalls more apparent, although given the time constraints of the project we feel that the objectives of this module have been achieved.

Nevertheless, there are some improvements we would like to suggest. In our view, the core change that could be made to this module is hosting Graphviz at Imperial College. This would provide long-term stability for the module and remove our reliance on a necessary service hosted by a third party (AT&T). This was nevertheless placed on the list of future recommendations for two reasons. First, the problem is simple to solve given the necessary time and is not fundamental to the module in the medium term, so it could safely be left as a future improvement. Secondly, during the course of the project we contacted the individual running the service at AT&T and received confirmation that the service would remain available for the foreseeable future, providing a minimum guarantee towards the future-proofing of the application.

6 Testing
A testing framework was implemented using the junit and Abbot open source
libraries as outlined in the specification document.

Figure 21: UML class diagram of the PIPE test framework

6.1 Library Integration


The project folder structure was divided into separate application and testing
trees to keep the two types of code apart and allow construction of releases with­
out having to sift out test classes. The junit and Abbot code was downloaded
into a test/lib directory and added to the classpath.

6.2 Implementation of Testing Framework


The main classes of the testing framework are shown in Figure 21. At the
top of the diagram, pipe.AllTests contains references to a collection of classes
that test specific functions, and provides a way to run the full test suite.
Each of the testing classes inherits from junit.framework.TestCase. TestCase
provides the ability to execute the test and supplies various assertion methods
that throw an exception if the conditions of the test are broken. The
JUnit test runner then collates the results of the tests and displays details
of any failures. Some PIPE test classes, such as GSPNAnalysisTest,
inherit directly from TestCase. They collect all the information they need
directly from the classes under test, in this case the outcome of running a
GSPN analysis on some of the example net files. The Abbot library builds
on JUnit to provide extra services for testing the GUI parts of the application.
The junit.extensions.abbot.ComponentTestFixture class has methods to
search through a GUI hierarchy and locate components, and others to generate
button, mouse and menu clicks. So our pipe.gui.GuiFrameTest class is derived
from ComponentTestFixture to give it access to these facilities, which are then
used to exercise the main application buttons and menus. A third type of test

is made possible by the junit.extensions.abbot.ScriptFixture class, which is the
basis for test scripts written in XML. We derived pipe.gui.FunctionalScriptsTest
from ScriptFixture and generated a series of scripts to test the main GUI-level
functionality of PIPE as thoroughly as we could. The XML test scripts are
generated with the help of a script editing tool that is provided with the Abbot
package. This tool generates XML references to GUI components that allow
the Abbot classes to identify them correctly, and records user actions directly
from the application so that they can be written into the script and played
back. These action steps are then interleaved with assertions which generate
exceptions if they are broken, following the same pattern as the other tests.
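The TestCase-and-runner mechanics described above can be sketched without the library itself. The following stdlib-only Java sketch mimics the pattern: assertion methods that throw on failure, and a runner that collates failures rather than stopping at the first. MiniTestCase and MiniTestRunner are illustrative stand-ins, not PIPE's actual test code or JUnit's real API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for junit.framework.TestCase: assertion methods
// throw AssertionError if the conditions of the test are broken.
abstract class MiniTestCase {
    protected void assertTrue(String message, boolean condition) {
        if (!condition) throw new AssertionError(message);
    }

    protected void assertEquals(String message, Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            throw new AssertionError(message + ": expected <" + expected
                    + "> but was <" + actual + ">");
        }
    }

    // Each concrete test implements its checks here.
    public abstract void runTest();
}

// Stand-in for the JUnit test runner: executes every test case and
// collects the failures so they can all be reported together.
class MiniTestRunner {
    static List<String> run(List<MiniTestCase> tests) {
        List<String> failures = new ArrayList<>();
        for (MiniTestCase test : tests) {
            try {
                test.runTest();
            } catch (AssertionError e) {
                failures.add(e.getMessage());
            }
        }
        return failures;
    }
}
```

A PIPE-style class such as GSPNAnalysisTest corresponds to one TestCase subclass per area of functionality; the real JUnit additionally discovers test methods by reflection.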

6.3 Petri-Net Specific Testing


There were several challenges in implementing meaningful tests for PIPE,
mostly related to the script-style tests. Firstly, the Abbot classes must be able
to determine exactly which component is being referenced for each action during
a test. They have certain default ways of doing this, looking first at the type
and then at the text a component displays. For some components, such as
the Arcs in PIPE, these defaults are not discriminating enough to separate
multiple instances. It was therefore necessary to derive a new class, ArcTester,
from the Abbot ComponentTester class and add it to the
abbot.extensions.tester package. ArcTester overrides the deriveTag() method,
which generates a unique string for each instance of the class being tested.
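The idea behind the deriveTag() override can be shown in isolation. The sketch below is a stdlib-only stand-in: Abbot's real ComponentTester class is not reproduced, and the simplified Arc class with source and target identifiers is hypothetical, not PIPE's actual arc component.

```java
// Simplified stand-in for PIPE's arc component; in the real application
// an arc is a GUI component linking a place and a transition.
class Arc {
    final String sourceId;
    final String targetId;

    Arc(String sourceId, String targetId) {
        this.sourceId = sourceId;
        this.targetId = targetId;
    }
}

class ArcTagger {
    // In the real ArcTester this logic would live in an override of
    // Abbot's deriveTag() method: since type and display text are the
    // same for every arc, the endpoints are used instead to produce a
    // string unique to each instance.
    static String deriveTag(Arc arc) {
        return "Arc[" + arc.sourceId + "->" + arc.targetId + "]";
    }
}
```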
Secondly, the library classes provide a good range of methods to test things such
as the size and location of components, whether certain sequences of actions are
possible, and whether components such as the PIPE Places and Transitions
are added and deleted correctly. However, we also wanted to test things such
as the correct behaviour of a net when it is animated and a particular firing
sequence is applied. Doing this required extending the testing capabilities so
that we could determine which transitions were enabled at any given time, and
follow the progress of tokens around the net. The scripting mechanism allows
arbitrary calls to the Java code from within the script, so we added methods
to FunctionalScriptsTest to accept a reference to a component in the net being
tested and analyse it. To determine the state of a place or transition it is then
necessary to analyse its graphics context to see what image is being portrayed
to the user. FunctionalScriptsTest does this by creating a new BufferedImage
and painting a copy of the place or transition into it. The pixels of the new
image are then checked to verify the colour of the transition (red for enabled),
or the image is cut up to count the number of tokens present in a place.
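The pixel-inspection step needs nothing beyond java.awt: paint into an off-screen BufferedImage and sample pixels. The sketch below illustrates the technique with a plain filled square; the geometry and rendering are illustrative only, not PIPE's actual drawing code.

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class PixelCheck {
    // Paint a filled square in the given colour, standing in for
    // painting a copy of a Transition into a new off-screen image.
    static BufferedImage paintSquare(Color colour, int size) {
        BufferedImage image = new BufferedImage(size, size, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setColor(colour);
        g.fillRect(0, 0, size, size);
        g.dispose();
        return image;
    }

    // Sample the centre pixel and check whether it is pure red - in
    // PIPE, red is the colour of an enabled transition.
    static boolean centreIsRed(BufferedImage image) {
        int rgb = image.getRGB(image.getWidth() / 2, image.getHeight() / 2);
        return (rgb & 0xFFFFFF) == 0xFF0000;
    }
}
```

The same off-screen image can be subdivided and each region examined separately, which is how counting the tokens drawn in a place becomes possible.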

6.4 Evaluation: Test Coverage


We planned to focus our efforts on testing as much of the high-level functionality
as possible, and believe that we have achieved a good outcome. The four XML
test scripts cover 41 individual functions, as described in the Functional Test
Directory (Appendix A). In addition to this, 20 unit tests have been written,
some covering the analysis modules that were worked on by the team, and others
testing some of the GUI classes in a more method-specific way than the scripted
tests.

Part III
EVALUATION AND
CONCLUSIONS

7 Evaluation and Learnings
The group is satisfied that in most cases the tasks it set out to do have been
achieved beyond the level specified.
Refactoring of the application continued to some extent as we worked on
adding new functionality and fixing existing bugs, so it was somewhat more
thorough than initially expected. That said, the group felt that progress with
the refactoring was at times frustrating due to a lack of familiarity with the
code, with Java and with working collaboratively on a software project. It is
generally agreed that the refactoring might have gone further had it been done
towards the end of development, but of course this could have sacrificed the
quality of new functionality that was added to PIPE.
The original specification for reachability graph functionality included the
generation of flat bitmap images from a supplied DOT-formatted description;
however, by incorporating the Grappa code it was possible to offer highly
interactive graphs to users, including zoom in and out, scale-to-fit and tool tip
(additional information displayed on mouse hover) functionality. Users can
reposition nodes and specify layout priorities.
The group is very happy with the quality of the zoom functionality: the feature
of being able to drag Nets smoothly across the page is particularly effective, and
something which wasn't specified initially. It was, however, chosen as a substitute
for the initially proposed fit-to-page functionality, so was a case of adapting
the specification to better suit perceived users' needs rather than simply adding
extra features to the specification.
One task which wasn't completed was the updating of the options functionality,
which had been specified in the original document and scheduled for the final
week of the project. We had seen this as a 'nice-to-have' feature, and in the end
the team chose to ensure the main new pieces of functionality were effective and
thoroughly tested rather than run the risk of incorporating a feature we
wouldn't have had time to test. This is certainly functionality we would
recommend for future generations of PIPE developers.
For most of the team, this was the first time they had taken on an existing live
development project. This led us to slightly underestimate the time required
to gain a good understanding of the application. The presence of bugs and
unwieldy code was a legacy of previous contributors. Becoming familiar with
PIPE's code base and its idiosyncratic and in places sub-optimal design was at
times frustrating but was undoubtedly a useful experience, and of course
something professional developers working on large-scale projects face on a
regular basis. We therefore consider this to have been a valuable experience,
and will in future make appropriate allowance for the time needed to become
familiar with an existing code base.

8 Effectiveness of Scheduling and Group Organisation
The Gantt charts on the following pages show the scheduling and status of tasks
at the outset, midpoint and end of the project. Task completion fitted
remarkably closely with the planned timings. For the two major pieces of
functionality the group split into two teams, and we had agreed that, if
necessary, resources could be moved from one task to the other. However, this
proved unnecessary, so we stuck to the original plan.
The group had taken time at the outset to understand each other's areas of
expertise and interests; task allocation was to some extent influenced by these,
and we feel that this led to each member working as effectively as possible.
Emphasising face-to-face communication over detailed written documentation
meant that we had flexibility and could respond rapidly to unexpected
developments. Nonetheless, the initial schedule was successfully adhered to, and
it would certainly be wrong to suggest that there was any lack of discipline or
organisation as a result of our approach.

9 Future Directions
There are many potential improvements that could still be made to PIPE. We
would hope that future development continues to be guided by users. Some
possible areas are listed below.

9.1 Hierarchical Nets


Hierarchical nets are a useful way of building complex nets by representing
common sections by a 'black box' with inputs and outputs, which can be expanded
to show the contained sub-net. This was anticipated to be handled by defining
an additional Petri net object type, which would be an interface object. Such
an interface would be a source or destination for arcs in a sub-net, providing
the location for inputs and outputs with the enclosing net. An additional object
would then be used to represent a sub-net, providing input and output locations
corresponding to the interfaces in the sub-net. The sub-net could be edited by
double-clicking on this representative object, which would open the sub-net in
a new tab. PIPE would not require substantial changes to support this, as
demonstrated by the number of new net objects added and the improved ability
to deal with multiple tabs. The demand for it is questionable, however, as there
seem to be few significantly large common net sections, and it is probably easier
to draw nets without hiding sections, since hiding may lead to errors. There are
also issues regarding how such nets would be animated.
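The interface-object design described above could be sketched roughly as follows. This is purely speculative future-work code: InterfacePoint and SubNetObject are hypothetical names that do not exist in PIPE, and the real implementation would subclass PIPE's own net object hierarchy.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical interface object: a source or destination for arcs
// inside a sub-net, marking where the enclosing net connects.
class InterfacePoint {
    final String name;
    final boolean isInput;   // input or output, relative to the sub-net

    InterfacePoint(String name, boolean isInput) {
        this.name = name;
        this.isInput = isInput;
    }
}

// Hypothetical representative object: stands for a whole sub-net in
// the enclosing net; double-clicking it would open the sub-net in a
// new tab for editing.
class SubNetObject {
    final String label;
    final List<InterfacePoint> interfaces = new ArrayList<>();

    SubNetObject(String label) {
        this.label = label;
    }

    // The enclosing net attaches its arcs to these exposed inputs.
    List<InterfacePoint> inputs() {
        List<InterfacePoint> result = new ArrayList<>();
        for (InterfacePoint p : interfaces) {
            if (p.isInput) result.add(p);
        }
        return result;
    }
}
```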


Figure 22: Initial specification schedule



Figure 23: Schedule: 15/02/07



Figure 24: Schedule: 19/03/07


9.2 Copy and Paste
The ability to cut, copy and paste individual net elements, or entire sections
of the net, was suggested as a useful feature. It was decided that, due to the
highly specific nature of the data, it would not be necessary to use the global
system clipboard; instead a local mechanism could be used. The anticipated
implementation was to extend all net objects with serialisation functions,
allowing them to save and load themselves concisely to and from a memory
stream, including dealing with links between objects. This memory stream could
then hold an arbitrary number of objects, which could be recreated in other nets
or elsewhere within the same net. However, this serialisation functionality would
have to be complete, with implementation in all net objects, for it to be useful.
Due to the length of time for which some net objects, particularly arcs, had
non-final internal representations, it was not possible to implement this with
enough time to be sure of completion.
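The local-clipboard mechanism described above can be sketched using standard Java serialisation. PlaceStub and LocalClipboard are hypothetical names for illustration; the real design would serialise PIPE's actual net objects, including the links between them.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.List;

// Hypothetical stand-in for a PIPE net object; each real net object
// would carry its own serialisation logic.
class PlaceStub implements Serializable {
    private static final long serialVersionUID = 1L;
    final String id;
    final int tokens;

    PlaceStub(String id, int tokens) {
        this.id = id;
        this.tokens = tokens;
    }
}

// A local clipboard: copy serialises the objects into an in-memory
// byte stream, and paste deserialises fresh, independent copies that
// can be recreated elsewhere in the same net or in another net.
class LocalClipboard {
    private byte[] buffer;

    void copy(List<PlaceStub> objects) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(new java.util.ArrayList<>(objects));
            }
            buffer = bytes.toByteArray();
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }

    @SuppressWarnings("unchecked")
    List<PlaceStub> paste() {
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer))) {
            return (List<PlaceStub>) in.readObject();
        } catch (java.io.IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because the bytes live in an ordinary array rather than the system clipboard, the mechanism stays private to the application, as the text proposes.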

9.3 Undo and Redo


The serialisation of objects would also simplify the ability to undo and redo
their creation and modification, as a traversable history of changes could be
maintained, with the addition of some container objects for group actions
such as paste and group deletion. As such, it was seen as an extension of copy
and paste, to be implemented after it.
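The traversable history with container objects for group actions can be sketched as follows. All names here (Edit, CompositeEdit, History) are hypothetical illustrations of the proposed design, not existing PIPE classes.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// One reversible change to the net.
interface Edit {
    void undo();
    void redo();
}

// Container edit for group actions such as paste or group deletion:
// undoing it undoes every child edit, in reverse order.
class CompositeEdit implements Edit {
    private final List<Edit> children = new ArrayList<>();

    void add(Edit edit) { children.add(edit); }

    public void undo() {
        for (int i = children.size() - 1; i >= 0; i--) children.get(i).undo();
    }

    public void redo() {
        for (Edit edit : children) edit.redo();
    }
}

// The traversable history: two stacks, one for each direction.
class History {
    private final Deque<Edit> undoStack = new ArrayDeque<>();
    private final Deque<Edit> redoStack = new ArrayDeque<>();

    void record(Edit edit) {
        undoStack.push(edit);
        redoStack.clear();   // a new edit invalidates the redo branch
    }

    void undo() {
        Edit edit = undoStack.pop();
        edit.undo();
        redoStack.push(edit);
    }

    void redo() {
        Edit edit = redoStack.pop();
        edit.redo();
        undoStack.push(edit);
    }
}
```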

References
[1] http://pipe2.sourceforge.net/documents/PIPE2 Final Report.pdf
[2] http://pipe2.sourceforge.net/documents/PIPE2-Report-20050814.pdf
[3] http://pipe2.sourceforge.net/documents/PIPE-Report.pdf
[4] Dijkstra, E.W.: Hierarchical Ordering of Sequential Processes. Acta Informatica 1 (1971) 115-138
[5] http://sourceforge.net/projects/pipe2/
[6] http://java.sun.com/docs/books/tutorial/uiswing/layout/index.html
[7] http://java.sun.com/docs/books/tutorial/2d/advanced/transforming.html
[8] http://www.cs.umd.edu/hcil/jazz/index.shtml
[9] http://www.famfamfam.com
[10] http://www.eclipse.org/tptp/
[11] http://www-128.ibm.com/developerworks/java/library/j-cq03316/
[12] M. Ajmone Marsan et al.: Modelling with Generalized Stochastic Petri Nets. John Wiley and Sons, West Sussex, 1995
[13] W.J. Knottenbelt: Generalised Markovian Analysis of Timed Transition Systems. MSc thesis, University of Cape Town, June 1996
[14] http://pipe2.sourceforge.net/documents/PIPE2-Report-20050814.pdf pp. 14-19 [last accessed 15/3/07]
[15] http://ostermiller.org/utils/CircularBuffer.html

