Classifying Desirable Features of Software Visualization Tools for Corrective Maintenance
Mariam Sensalire∗, Faculty of Computing and IT, Makerere University, Uganda
Patrick Ogao†, Faculty of Computing and IT, Makerere University, Uganda
Alexandru Telea‡, Institute of Mathematics and Computing Science, University of Groningen, the Netherlands
∗ e-mail: [email protected]
† e-mail: [email protected]
‡ e-mail: [email protected]

Abstract

We provide an evaluation of 15 software visualization tools applicable to corrective maintenance. The tasks supported, as well as the techniques used, are presented and graded based on the level of support. By analyzing user acceptance of current tools, we aim to help developers select what to consider, avoid, or improve in their next releases. Tool users can also recognize what to broadly expect (and what not) from such tools, thereby supporting an informed choice among the tools evaluated here and similar tools.

1 Introduction

Several studies have advocated the use of software visualization tools for corrective maintenance (CM) [Baecker et al. 1997; Swanson 1976]. However, although several tool surveys exist on the Internet, it is still hard to answer the questions "does this tool fit my user profile, context, and needs?" (for tool users) and "what do users most (dis)like in a tool?" (for tool developers). In a previous study [Sensalire and Ogao 2007b], we collected feedback from software engineers who used such tools for program understanding in general, and extracted several desirable features of SoftVis tools. In this paper, we refine and focus that set of desirable features on SoftVis tools for CM. We aim to guide users in selecting SoftVis tools for CM, based on their desirable features, either from the tools discussed here or from a larger, more general set. We also aim to discover possible correlations between perceived tool acceptance levels and the usage of certain visual techniques, to further clarify what makes a tool accepted (or not).

This paper is organized as follows. Section 2 overviews related work. Section 3 presents our classification model for desirable tool features. Section 4 presents the evaluated tools, and Section 5 the evaluation procedure. Section 6 discusses our evaluation results. Section 7 concludes the paper.

2 Related Work

Price et al. [Price et al. 1993] compared 12 tools against 6 categories of desirable features: scope, content, form, method, interaction, and effectiveness. The tools were, however, not related to a single application area. Maletic et al. [Maletic et al. 2002] compared 5 software tools along 5 axes: task, audience, target, representation, and medium. Similar to Price et al., the scope of this taxonomy and tools is quite broad. Since a tool's audience strongly depends on its purpose [Maletic et al. 2002], evaluating similar-purpose tools would be more insightful [Koschke 2003]. Here, Storey et al. [Storey and German 2005] compared 12 tools that provide awareness of human activities during software development against the categories of intent, information, presentation, interaction, and effectiveness. In corrective maintenance, Baecker et al. analyzed three classes of SoftVis techniques used for debugging [Baecker et al. 1997]: animation, improved typographic representations, and error sonification. Earlier, we evaluated ten general-purpose SoftVis software-understanding tools [Sensalire and Ogao 2007a]. That evaluation forms the basis of our extended study of SoftVis tools for CM.

3 Classification Model

We classify SoftVis tools using four categories of desirable features: Effectiveness, Tasks supported, Techniques used, and Availability. These features, and the scales on which their presence is measured, are derived from several user interviews [Sensalire and Ogao 2007b]. We use either a low/medium/high scale or a simpler yes/no scale, as described next.

3.1 Category A: Effectiveness

An effective tool arguably helps users solve the problems it was designed to assist with. Effectiveness is task-specific, hence hard to measure in general. Yet, we identified three non-functional properties that effective SoftVis tools should have, as follows.

Scalability: A scalable tool supports CM tasks on systems of millions of LOC and/or thousands of classes and/or source files. Tools created for educational purposes, proofs of concept, or research prototypes are graded low; tools that support large-scale code (as defined above) are graded high; tools falling in between are graded medium.

Integration: This measures how easily several tools can be switched between, and can exchange data, to complete a given task in an IDE or similar setup. A tool that inputs and outputs data from/to other tools, and also senses and displays data changes, is graded high. Tools that coexist in an IDE but do not actively exchange data and/or events with peer tools are graded medium. Standalone tools are graded low.

Query support: Tools that work in batch mode are graded low. Tools that support only lexical queries are graded medium. Tools that correlate queries and visualizations (query highlighting) or use complex queries (e.g., syntax-aware or type-based) are graded high.
the tool is fully functional? Involvement at all points, although desirable, may not be practically feasible.

Future work includes refining the analysis on a set of specific tasks and sample code bases, in order to provide a more quantitative evaluation of the tools' abilities to address their tasks. A second direction is to extend our evaluation of SoftVis tools to adaptive, perfective, and preventive maintenance.

References

Baecker, R., DiGiano, C., and Marcus, A. 1997. Software visualization for debugging. Commun. ACM 40, 4, 44–54.

Henderson, T., 2008. ns-allinone-2.33 release. Website. http://sourceforge.net/project/showfiles.php?group_id=149743&package_id=169689&release_id=588643.

Koschke, R. 2003. Software visualization in software maintenance, reverse engineering, and re-engineering: a research survey. Journal of Software Maintenance and Evolution: Research and Practice 15, 87–109.

Maletic, J., Marcus, A., and Collard, M. 2002. A task oriented view of software visualization. In Proceedings of the IEEE Workshop on Visualizing Software for Understanding and Analysis, Paris, France, 32–40.

Price, A., Baecker, R., and Small, I. 1993. A principled taxonomy of software visualization. Journal of Visual Languages and Computing 4, 3, 211–266.

Sensalire, M., and Ogao, P. 2007. Tool users requirements classification: how software visualization tools measure up. In AFRIGRAPH '07: Proceedings of the 5th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, Grahamstown, South Africa.

Sensalire, M., and Ogao, P. 2007. Visualizing object oriented software: towards a point of reference for developing tools for industry. In 4th IEEE International Workshop on Visualizing Software for Understanding and Analysis, Banff, Canada.

Source-Navigator-Team, 2007. Source code analysis tool. Website. http://sourcenav.sourceforge.net/.

Storey, M.-A. D., and German, D. M. 2005. On the use of visualization to support awareness of human activities in software development: a survey and a framework. In SoftVis '05: Proceedings of the 2005 ACM Symposium on Software Visualization, ACM, New York, NY, USA, 193–202.

Swanson, E. B. 1976. The dimensions of maintenance. In ICSE '76: Proceedings of the 2nd International Conference on Software Engineering, IEEE Computer Society Press, Los Alamitos, CA, USA.

Vans, A., von Mayrhauser, A., and Somlo, G. 1999. Program understanding behavior during corrective maintenance of large-scale software. International Journal of Human-Computer Studies 51, 1, 31–70.