Introducing Performance Engineering through Practical Exercises
Alexander Ufimtsev, Trevor Parsons, Lucian M. Patcas,
John Murphy and Liam Murphy
Functional testing Students first had to perform functional testing of their application to eliminate the configurations which were obviously not working properly. They had to identify whether the cause of a problem was indeed performance related (using garbage collection and debug information) or a problem of a different nature.
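Before any load was applied, each configuration therefore had to pass a basic functional check. A minimal smoke test of this kind is sketched below in Java; the endpoint URL is hypothetical, as the paper does not prescribe a particular test harness.

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal functional smoke test: verify that the deployed application
    // responds correctly before any performance testing begins.
    // The endpoint below is hypothetical, not AdaptiveCells' actual interface.
    public class SmokeTest {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/app/health");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            int status = conn.getResponseCode();
            if (status != 200) {
                throw new IllegalStateException("Functional check failed: HTTP " + status);
            }
            System.out.println("Deployment responds correctly (HTTP 200)");
        }
    }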
Performance testing Once the functional tests were performed, students used the JMeter load generation tool (see Section 3.1) to simulate a workload on the application. The complexity of this exercise lay in the requirement that the students construct a realistic workload: they had to decide upon common usage scenarios of the application at hand. It was explained to the students that testing a system with an unrealistic workload produces unreliable results, which is not acceptable in real-life situations. After generating a realistic workload, students tested their system by gradually increasing the number of clients. They collected both throughput and response time, and analysed how these metrics varied with the number of client requests. Students had to notice the trend in the system behaviour under different workloads (load testing) and the number-of-clients threshold at which throughput started to decrease and response time started to increase (stress testing).
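In the labs this workload was driven through JMeter; purely for illustration, the Java sketch below performs the same stepped procedure programmatically, increasing the client count and reporting throughput and mean response time at each level (the target URL, step sizes, and request counts are invented, not taken from the course setup).

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Stepped load driver: at each level, N concurrent clients each issue a
    // fixed number of requests; throughput and mean response time are printed
    // so the load-testing trend and the stress-testing knee can be spotted.
    public class LoadDriver {
        static final String TARGET = "http://localhost:8080/app/scenario"; // hypothetical
        static final int REQUESTS_PER_CLIENT = 50;

        public static void main(String[] args) throws Exception {
            for (int clients = 10; clients <= 200; clients += 10) {
                runLevel(clients);
            }
        }

        static void runLevel(int clients) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(clients);
            final AtomicLong totalLatency = new AtomicLong();
            long start = System.currentTimeMillis();
            for (int c = 0; c < clients; c++) {
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = 0; i < REQUESTS_PER_CLIENT; i++) {
                            long t0 = System.currentTimeMillis();
                            try {
                                HttpURLConnection conn =
                                    (HttpURLConnection) new URL(TARGET).openConnection();
                                conn.getResponseCode(); // issue one request
                                conn.disconnect();
                            } catch (Exception e) {
                                // sketch only: failures are simply ignored here
                            }
                            totalLatency.addAndGet(System.currentTimeMillis() - t0);
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            long elapsed = System.currentTimeMillis() - start;
            long samples = (long) clients * REQUESTS_PER_CLIENT;
            System.out.printf("%d clients: %.1f req/s, %.1f ms mean response%n",
                    clients, samples * 1000.0 / elapsed,
                    (double) totalLatency.get() / samples);
        }
    }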
Performance optimisations The next step included various optimisation techniques that would lead to improved application performance. The J2EE platform offers a variety of levels at which performance can be tweaked, including the Java Virtual Machine (JVM) layer, the Application Server (AS) layer, and the Operating System (OS) layer.
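As a concrete flavour of JVM-layer tuning, heap sizing and garbage-collector choice can be changed from the launch configuration, for instance through the application server's JAVA_OPTS; the flags below are standard HotSpot options shown purely for illustration, as the paper does not list the settings students actually used.

    # Illustrative JVM-layer tuning only: fix the heap size, select the
    # parallel (throughput) collector, and log GC activity.
    JAVA_OPTS="-Xms512m -Xmx512m -XX:+UseParallelGC -verbose:gc"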
Application profiling Until this step, students were using a "black-box" approach to the application they were testing: they did not have any insight into how the application was working internally, and only used the external functional interfaces to extract behavioural and performance information. Successful modelling, on the other hand, requires in-depth knowledge of the internals of the application to be modelled. Therefore, students extracted information about the inner logic of the application using the profiler of PredictorV (see Section 3.3). Once extracted, these scenarios could easily be visualised in PredictorV in the form of sequence diagrams. The profiled data also contained information about resource utilisation.
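To give a feel for the kind of data a profiler records, the sketch below times every call on a component's interface using a dynamic proxy; this is only an illustration, as PredictorV's instrumentation is far more capable (it reconstructs whole call sequences and resource usage) and works differently.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Toy profiler: wraps a component behind its interface and logs the
    // duration of every method call. Not PredictorV's actual mechanism.
    public class TimingProxy implements InvocationHandler {
        private final Object target;

        private TimingProxy(Object target) { this.target = target; }

        @SuppressWarnings("unchecked")
        public static <T> T wrap(T target, Class<T> iface) {
            return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                    new Class<?>[] { iface }, new TimingProxy(target));
        }

        public Object invoke(Object proxy, Method method, Object[] args)
                throws Throwable {
            long t0 = System.nanoTime();
            try {
                return method.invoke(target, args);
            } catch (InvocationTargetException e) {
                throw e.getCause(); // rethrow the component's own exception
            } finally {
                System.out.printf("%s took %.3f ms%n",
                        method.getName(), (System.nanoTime() - t0) / 1e6);
            }
        }
    }

A component would then be wrapped at creation time, e.g. OrderService svc = TimingProxy.wrap(new OrderServiceImpl(), OrderService.class) (hypothetical names), so that every interface call is timed transparently.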
Performance modelling Students used the profiled usage scenarios, augmented with resource utilisation, together with the data they extracted in the load and stress testing steps, to build a model of their application in PredictorV. The model consisted of the specified hardware topology (network devices, application server, and database server) and all the necessary data (variations of performance metrics with workload, resource utilisation), in order to create an environment that mimics a real system. Students used this model for various types of performance simulations and predictions: to determine system performance on alternative hardware configurations; to provide hardware estimates for doubling and tripling the throughput (capacity planning); and to analyse the design of the application in order to identify performance antipatterns [2], which are common design mistakes.

5 http://www.jboss.com/products/jbossas
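As a back-of-the-envelope illustration of what such capacity-planning predictions involve, the sketch below applies the textbook utilisation law (U = X * S) and the M/M/1 response-time formula (R = S / (1 - U)) to a single bottleneck resource; PredictorV's simulations are far richer, and the service time and throughput figures here are invented.

    // Closed-form capacity sketch: predict utilisation and response time as
    // throughput is doubled and tripled. Figures are illustrative only.
    public class CapacitySketch {
        public static void main(String[] args) {
            double serviceTime = 0.020; // 20 ms per request at the bottleneck
            for (double throughput : new double[] { 20, 40, 60 }) { // 1x, 2x, 3x req/s
                double u = throughput * serviceTime; // utilisation law: U = X * S
                if (u >= 1.0) {
                    System.out.printf(
                        "%.0f req/s: bottleneck saturated, extra hardware needed%n",
                        throughput);
                } else {
                    System.out.printf("%.0f req/s: U=%.0f%%, predicted R=%.1f ms%n",
                        throughput, u * 100, 1000 * serviceTime / (1 - u));
                }
            }
        }
    }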
5 Observations

The course feedback received from the students was analysed at the end of the course through an anonymous system. The feedback was mainly positive, the one exception being the amount of time and effort required to test their application correctly. Most of the students had to conduct the performance tests a few times, mainly because many parameters had to be taken into account to run the tests correctly (e.g. performing load and stress testing, measurements, profiling, optimisations). The students felt that while the "help from lab demonstrators" was good to very good, the "computer facilities" were only fair. This would be mainly due to the number of applications that they were required to master in the course. Most students agreed that they put a "lot of effort" into this course, that it was "interesting", and that there was a "clear link between the lectures and the practicals".

All the reports that the students produced were read by three staff members, and about 25% of the students were interviewed to ensure the quality of the material presented. The students scored well in this course, it being the second highest scoring course of the roughly ten courses held for that cohort of students. For example, over 80% of the students would have obtained a second class honours grade one or higher in this course.
6 Conclusions

In this paper we present our experience in introducing performance engineering concepts to university students through practical exercises. These exercises can easily be integrated as part of a performance engineering solution in a real industry environment. We give details on a number of tools that we used for the practical part of the course; in particular, we focus on the test-bed application, AdaptiveCells, and the modelling and simulation tool, PredictorV. However, one of the key lessons from this exercise was the need for intensive laboratory support, which limits the number of applications and the scale of the problems that can be addressed in a single course.

References

[1] A. Mos. A Framework for Adaptive Monitoring and Performance Management of Component-Based Enterprise Applications. PhD thesis, Dublin City University, Ireland, 2004.

[2] C. U. Smith and L. G. Williams. Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software. Addison-Wesley, 2002.

[3] M. Chen et al. Pinpoint: Problem determination in large, dynamic, Internet services. In Proc. Int. Conf. on Dependable Systems and Networks (IPDS Track), 2002.

[4] J. Murphy. Whitepaper: Peeling the layers of the "performance onion". http://crovan.com/download/crovan_whitepaper.pdf.

[5] P. Musich. Survey questions Java App reliability. http://www.eweek.com/article2/0,1895,1388603,00.asp.

[6] S. Haines. Solving common Java EE performance problems. http://www.javaworld.com/javaworld/jw-06-2006/jw-0619-tuning.html.