
Introducing Performance Engineering by means of Tools and Practical Exercises

Alexander Ufimtsev, Trevor Parsons, Lucian M. Patcas, John Murphy and Liam Murphy

Performance Engineering Laboratory,
School of Computer Science and Informatics, University College Dublin, Ireland
{alexu,trevor.parsons,lucian.patcas,j.murphy,liam.murphy}@ucd.ie

Copyright (c) 2006 Performance Engineering Lab, School of Computer Science and Informatics, University College Dublin. Permission to copy is hereby granted provided the original copyright notice is reproduced in copies made.

Abstract

Many software engineers complete their education without an introduction to even the most basic performance engineering concepts. IT specialists need to be educated with a basic degree of performance engineering knowledge, so that they are aware of why and how certain design and development decisions can lead to poor performance of the resulting software systems. To help address this need, the School of Computer Science and Informatics at University College Dublin offered a final year undergraduate/first year postgraduate module on "Performance of Computer Systems" in Autumn 2005. In this paper we document how performance engineering was introduced to the students through practical exercises, and how these exercises relate to industry problems.

1 Introduction

Software languages and frameworks have developed significantly since their early days. They have become more abstract, and developers no longer need to concern themselves with lower-level issues such as resource usage. An example of this can be seen in modern languages (Java, C#) that provide garbage collection, freeing developers from the task of managing memory, which had typically been a complex and time-consuming exercise. This is even more obvious with enterprise-level software frameworks (J2EE, .NET), where the framework can be expected to handle complex underlying issues such as security, persistence, performance, and concurrency, to name but a few. Freeing developers from having to worry about what is happening under the hood allows them to concentrate more of their efforts on developing the functionality of a system. However, a downside of this trend is that developers become less familiar with the mechanics of the underlying system and, as a result, can make decisions that have an adverse effect on it. This is especially evident in the area of system performance, where projects often fail to deliver on time functional systems that meet their performance requirements, a fact that leads to major project delays and subsequently to higher development and maintenance costs [5].

Two major factors contribute to this. First, many developers do not have an understanding of the basic concepts of performance engineering. This is hardly surprising, since many software engineers complete their education without an introduction to the most basic performance engineering concepts. As a result, they are commonly unaware of the performance implications of many decisions taken during the design and implementation of a system. Second, lacking this understanding, developers often believe that system performance can be addressed at the end of the development cycle, after a system has been designed, implemented, functionally tested, and deployed. This common misconception leads developers to treat performance as a matter of "production tuning" that merely involves tweaking different system parameters at the end of the development cycle. More often than not, however, performance issues have a deeper structural nature. Consequently, a major rework of the system is commonly required to meet performance goals, leading to expensive delays at the end of the project.

To address the above problems, developers need to be educated with a basic degree of performance engineering knowledge so that they are aware of why and how certain design and development decisions can lead to poor performance of the resulting software systems. To help address this need, the School of Computer Science and Informatics at University College Dublin offered a final year undergraduate/first year postgraduate module on "Performance of Computer Systems" in Autumn 2005. The course introduced a basic theoretical framework of performance engineering, while the practical work consisted of exercises that allowed students to develop an understanding of the performance aspects of industry-scale software systems.

In this report we detail certain aspects of this course. We focus on the performance tools used, paying particular attention to two relatively new tools, AdaptiveCells and PredictorV. We document how these tools were used to educate the students on performance engineering through practical exercises, and how the exercises relate to industry problems. Section 2 presents the performance engineering concepts that were introduced to the students. Section 3 describes the main features of the tools used for the practical work and motivates their choice. Section 4 describes the practical assignments and the steps that the students followed to conduct a complete analysis of a sample J2EE application. Sections 5 and 6 present our observations and conclusions on the course aspects presented in this report.

2 Performance Engineering

The performance engineering concepts we present in this section relate to performance metrics, as well as to the software performance engineering (SPE) methodology that the students followed for their practical work.

We introduced to the students the most common metrics used to characterise the performance behaviour of a system: throughput (the number of requests the system can handle in a given time period); response time (the delay between the moment the system receives a request and the moment it fulfils that request); latency (the delay between the receipt of an event and the system's reaction to that event); and capacity (the maximum load the system can take while meeting its throughput, response time, and latency requirements). In addition to these metrics, the performance behaviour of a system can also be characterised by resource utilisation (e.g. CPU or memory consumption).
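To make these definitions concrete, the following minimal Java sketch measures average response time and overall throughput for a batch of sequential requests. It is purely illustrative: sendRequest() is a hypothetical stand-in for a call to the system under test, not part of any tool discussed in this paper.

    // Illustrative only: measuring response time and throughput.
    // sendRequest() is a hypothetical stand-in for one client request.
    public class MetricsSketch {
        public static void main(String[] args) {
            int requests = 1000;
            long totalResponseMs = 0;
            long start = System.currentTimeMillis();
            for (int i = 0; i < requests; i++) {
                long t0 = System.currentTimeMillis();
                sendRequest();
                totalResponseMs += System.currentTimeMillis() - t0;
            }
            double elapsedSec = (System.currentTimeMillis() - start) / 1000.0;
            // throughput: requests fulfilled per unit time
            System.out.println("Throughput: " + (requests / elapsedSec) + " req/s");
            // response time: average delay between request and fulfilment
            System.out.println("Avg response time: "
                    + (totalResponseMs / (double) requests) + " ms");
        }

        private static void sendRequest() {
            // placeholder for a real request to the system under test
        }
    }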
The methodology that the students followed for their performance evaluations was inspired by the SPE guidelines presented in [2]. That is, performance requirements, which define the values expected from a functioning system, must be considered from the early stages of the software development cycle and tracked through the entire process. Therefore, the students carried out both performance testing (addressing performance at the testing and maintenance stages) and performance modelling (at the design stage) of sample J2EE internet applications.

Usually, in the testing stage, functional testing is performed before performance tests in order to catch errors in the application under test that are not caused by performance issues. Thus, functional testing tries to eliminate non-performance problems before any performance testing is conducted.

Performance testing deals with verifying whether the system meets its performance requirements under various workloads. The workload of a system denotes how many users the system handles at any given time. To allow performance analysis of the system, various workloads are generated and the system's behaviour under them is recorded; this is the load testing phase. Usually, load testing involves generating simulated requests to the system using load generation tools (see Section 3.1). Performance testing can also involve finding the highest workload the system can take while still fulfilling its requirements; this is stress testing. Performance optimisations can then be tried in order to correct some of the problems discovered during performance testing. Load and stress testing treat the system as a "black box" into whose internal structure the test engineers have no insight.

Performance modelling is a method complementary to building prototypes or simulating the system's behaviour. During system design, this method provides answers to questions like "How does the system scale up?", "What capacity allows the prescribed quality of service for the expected workloads?", "Which components of the system are bottlenecks?", or "Which components are more sensitive to variations?". Performance modelling makes use of discrete event simulation (e.g. workload generation tools) to generate usage scenarios of the system, and of profiling to capture the interactions between system components in those scenarios and to record the performance metrics. The data obtained from profiling constitutes the performance model of the system. Analysing this model helps identify bottleneck or sensitive components. It also allows for capacity planning, which aims to ensure that sufficient capacity is available for the system to cope with increasing demand. Performance modelling can help developers understand how the components they are working on contribute to the performance of the whole system. Performance modelling is a "white box" approach: developers and system designers have insight into the internal structure of the system.

3 Tools

Students conducted performance testing and modelling of sample J2EE applications that were generated with the AdaptiveCells tool (http://adaptivecellsj.sourceforge.net). They carried out the performance testing using the JMeter load generation tool (http://jakarta.apache.org/jmeter), and the performance modelling using PredictorV (http://www.crovan.com). This section introduces these tools and motivates their choice.

3.1 JMeter

JMeter is a tool for workload generation and performance measurement. It can be used to simulate a heavy concurrent load on a J2EE application and to analyse the overall performance under various load types. It also allows for graphical analysis of the measured performance metrics (e.g. throughput, response time).
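As a usage example, a test plan built in the JMeter GUI can also be run from the command line in non-GUI mode; the file names below are illustrative:

    # Run a saved test plan without the GUI and record the samples
    jmeter -n -t loadtest.jmx -l results.jtl
    #   -n  non-GUI mode    -t  test plan file    -l  results log file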

3.2 AdaptiveCells

AdaptiveCells is a novel tool that allows for the development of complex artificial J2EE testbed applications without requiring a single line of code. These applications can be used for a number of different purposes, such as performance testing, middleware infrastructure testing, and even the creation of working J2EE applications. The initial learning curve for writing J2EE applications is often prohibitively high, which means that even developing simple test cases can be a major effort. AdaptiveCells solves this problem by allowing working (complex) J2EE applications to be created without having to write the code. The testbed applications generated with AdaptiveCells have fully controllable behaviour at runtime. By selecting the appropriate configurations, testers and developers can replicate how resources such as CPU and memory are consumed by the different parts of the application. In fact, AdaptiveCells goes further by allowing the emulation of performance bugs (e.g. memory leaks) which often occur in real systems. The applications generated can also be configured to throw exceptions at certain points. These characteristics of AdaptiveCells represent real advantages not only in learning environments such as the one discussed in this report, but also in the development of real-world component-based software systems. For example, applications generated with AdaptiveCells can be used to compare the performance of real application servers, or, in the area of middleware infrastructure, they can be used for testing problem determination tools [3] or monitoring tools such as COMPAS [1].
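To give a flavour of what such a generated testbed component does at runtime, the Java sketch below emulates configurable CPU and memory consumption and an optional memory leak. It is purely hypothetical and does not reflect the actual code or configuration format that AdaptiveCells produces.

    // Hypothetical illustration only; NOT actual AdaptiveCells output.
    import java.util.ArrayList;
    import java.util.List;

    public class SyntheticComponent {
        private final int cpuIterations;   // configured CPU demand per call
        private final int bytesPerCall;    // configured memory demand per call
        private final boolean leakMemory;  // emulate a memory-leak bug
        private final List<byte[]> retained = new ArrayList<byte[]>();

        public SyntheticComponent(int cpuIterations, int bytesPerCall,
                                  boolean leakMemory) {
            this.cpuIterations = cpuIterations;
            this.bytesPerCall = bytesPerCall;
            this.leakMemory = leakMemory;
        }

        public void service() {
            long acc = 0;
            for (int i = 0; i < cpuIterations; i++) {
                acc += i * 31L;              // burn a configurable amount of CPU
            }
            byte[] chunk = new byte[bytesPerCall]; // allocate configurable memory
            if (leakMemory) {
                retained.add(chunk);         // never released: emulated leak
            }
        }
    }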
3.3 PredictorV

PredictorV is a modelling and simulation tool that aims to help IT architects and developers solve performance problems in enterprise J2EE-based systems by predicting their performance early in the design cycle. PredictorV is an Eclipse-based product, available both as a plug-in and as a standalone application [4]. The tool offers a framework for performance monitoring, modelling, and prediction of component-based systems. The aim of the framework is to capture information from a system and automatically generate a performance model. The performance model is then used to determine quickly where the performance problems in the system are. One advantage of this approach is that it reduces the time and skill required to build a model, because the information needed to build the model is captured directly from the system. Another advantage is that as much of the process as possible is automated, which reduces the risk of human errors being introduced into the model. The tool comprises three modules: a module to monitor the system, a module to model the transaction flow on that system, and a module to predict the performance of the system.

Monitoring module. The monitoring module tries to collect enough information from a system under test so that a predictive model of the system can be built. To do so, it needs structure information (what the system does) and resource information (how much resource the system uses) for each business transaction that occurs in the usage scenarios employed for testing. The best method for collecting this information in a Java-based system is profiling. The profiler makes use of the Java Virtual Machine Profiler Interface (JVMPI) to collect information relating to the current state of the JVM, such as memory and CPU information.
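PredictorV gathers this data natively through JVMPI. As a rough illustration of the kind of JVM state involved, the sketch below reads comparable memory and CPU figures through the standard java.lang.management API instead; this is a different mechanism from the one PredictorV actually uses.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.ThreadMXBean;

    // Reads JVM state comparable to what a profiler records per transaction.
    public class JvmStateSketch {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            long heapUsed = memory.getHeapMemoryUsage().getUsed(); // heap in use, bytes
            long cpuNanos = threads.getCurrentThreadCpuTime();     // CPU time, ns (if supported)

            System.out.println("Heap used: " + heapUsed + " bytes");
            System.out.println("Thread CPU time: " + cpuNanos + " ns");
        }
    }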
Modelling module. The modelling module is responsible for displaying the call graph of each business transaction collected by the monitoring module. UML event sequence diagram notation is used to display this call graph. The sequence diagrams in PredictorV show the ordered sequence of events that happen inside the application as it services a user request, annotated with profile data showing the CPU and memory consumed.

Prediction module. The prediction module takes the UML models and detects performance bottlenecks (the InSight analysis level), identifies ways of correcting these bottlenecks (MoreSight), assesses capacity planning under different hardware configurations (ForeSight), and evaluates performance for different usage scenarios (ClearSight).

4 Practical Exercises

This section describes the tasks that the students had to accomplish for their practical assignments, as well as the steps they followed in order to conduct a complete performance analysis of a sample J2EE application.

The practical assignments were intended to imitate real-life development environments. For this purpose, the students formed groups, and each group played the role of a Quality Assurance (QA) team in a software development company. The tasks of each team (the assignments can be found online at http://floating1.ucd.ie/comp4015) were to conduct performance testing and modelling of a sample J2EE application. The sample applications comprised seven components and ten possible configurations (different interaction scenarios between the components), and were generated with AdaptiveCells (see Section 3.2). We randomly introduced into the applications tested by the students some performance problems likely to occur in real applications, such as memory leaks or excessive resource usage [6]. At the end, each QA team submitted a report that contained two parts: an executive summary of their work, addressed to the managerial staff, and a detailed informative report, addressed to the developers. To accomplish their tasks, the students had to follow the methodology described in Section 2.

Functional testing. Students first had to perform functional testing of their application to eliminate the configurations which were obviously not working properly. They had to identify whether the cause of a problem was indeed performance related (using garbage collection and debug information) or of a different nature.

Performance testing. Once the functional tests were performed, students used the JMeter load generation tool (see Section 3.1) to simulate a workload on the application. The difficulty of this exercise lay in the requirement that the students construct a realistic workload for the application; they had to decide upon common usage scenarios of the application at hand. It was explained to the students that testing a system with an unrealistic workload produces unreliable results, which is not acceptable in real-life situations. After generating a realistic workload, students tested their system by gradually increasing the number of clients. They collected both the throughput and the response time, and analysed how these metrics varied with the number of client requests. Students had to identify the trend in the system's behaviour under different workloads (load testing), as well as the client-count threshold at which the throughput started to decrease and the response time started to increase (stress testing).
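A stripped-down version of this ramp-up procedure is sketched below. In the course the students drove the load with JMeter thread groups rather than hand-written code; sendRequest() is again a hypothetical stand-in for one request to the application.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Gradually increases the number of concurrent clients and reports
    // the throughput at each step, mimicking a load/stress test.
    public class RampUpSketch {
        public static void main(String[] args) throws InterruptedException {
            for (int clients = 10; clients <= 100; clients += 10) {
                final AtomicLong completed = new AtomicLong();
                ExecutorService pool = Executors.newFixedThreadPool(clients);
                long start = System.currentTimeMillis();
                for (int c = 0; c < clients; c++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            for (int i = 0; i < 100; i++) { // 100 requests per client
                                sendRequest();
                                completed.incrementAndGet();
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(10, TimeUnit.MINUTES);
                double sec = (System.currentTimeMillis() - start) / 1000.0;
                System.out.println(clients + " clients: "
                        + (completed.get() / sec) + " req/s");
            }
        }

        private static void sendRequest() {
            // placeholder for one request to the system under test
        }
    }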
Performance optimisations. The next step involved various optimisation techniques that would lead to improved application performance. The J2EE platform offers a variety of levels at which performance can be tuned, including the Java Virtual Machine (JVM) layer, the Application Server (AS) layer, the Operating System (OS) layer, the hardware, and the application itself. However, only the JVM and AS layers were considered within the scope of the course. After a few initial hints, students were asked to use available resources, including the Internet, to find and experiment with performance parameters of the Sun Microsystems JVM and the JBoss AS (http://www.jboss.com/products/jbossas). They were also asked to explain why a certain parameter would result in better or worse application performance.
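As an example of the JVM-layer parameters involved, the following are standard Sun JVM options of that period; the specific values and the app.jar target are illustrative (in practice such flags would typically be passed to the application server's startup script, e.g. via JAVA_OPTS), and choosing sensible values was precisely the students' task:

    # Illustrative Sun JVM tuning flags (values are arbitrary examples)
    java -Xms256m -Xmx512m -XX:+UseParallelGC -verbose:gc -jar app.jar
    #   -Xms/-Xmx           initial and maximum heap size
    #   -XX:+UseParallelGC  selects the parallel garbage collector
    #   -verbose:gc         prints garbage collection activity for analysis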
Application profiling. Until this step, the students were using a "black box" approach to the application they were testing: they had no insight into how the application worked internally, and only used its external functional interfaces to extract behavioural and performance information. Successful modelling, on the other hand, requires deep inside knowledge of the application to be modelled. Therefore, the students extracted information about the inner logic of the application using the profiler of PredictorV (see Section 3.3). Once extracted, these scenarios could easily be visualised in PredictorV in the form of sequence diagrams. The profiled data also contained information about resource utilisation.

Performance modelling. Students used the profiled usage scenarios, augmented with resource utilisation data, together with the data they had extracted in the load and stress testing steps, to build a model of their application in PredictorV. The model consisted of a specified hardware topology (network devices, application server, and database server) and all the necessary data (variations of the performance metrics with the workload, resource utilisation), in order to create an environment that mimics a real system. Students used this model for various types of performance simulations and predictions: to determine system performance on alternative hardware configurations; to provide hardware estimates for doubling and tripling the throughput (capacity planning); and to analyse the design of the application in order to identify performance antipatterns [2], which are common design mistakes.
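The arithmetic behind such capacity estimates can be illustrated with a toy single-server queueing model, far simpler than the simulation PredictorV performs: utilisation U is the throughput X multiplied by the per-request service demand S, and an open single-server (M/M/1-style) queue approximates response time as R = S / (1 - U), so doubling the throughput more than doubles the response time. The numbers below are illustrative.

    // Toy capacity-planning arithmetic for a single-server open queue.
    // Far simpler than PredictorV's simulation; numbers are illustrative.
    public class CapacitySketch {
        public static void main(String[] args) {
            double serviceDemand = 0.020; // 20 ms of server time per request
            for (double throughput : new double[] {20, 40, 45}) {
                double u = throughput * serviceDemand;    // U = X * S
                if (u >= 1.0) {
                    System.out.println(throughput
                            + " req/s: saturated; more capacity needed");
                } else {
                    double r = serviceDemand / (1.0 - u); // R = S / (1 - U)
                    System.out.println(throughput + " req/s: U = " + u
                            + ", R = " + (r * 1000) + " ms");
                }
            }
        }
    }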
5 Observations

The course feedback received from the students was analysed at the end of the course through an anonymous system. The feedback was mainly positive, the one exception being the amount of time and effort required to test their application correctly. Most of the students had to conduct the performance tests a few times, mainly because many parameters had to be taken into account to run the tests correctly (e.g. performing load and stress testing, measurements, profiling, and optimisations). The students felt that while the "help from lab demonstrators" was good to very good, the "computer facilities" were only fair. This would be mainly due to the number of applications that they were required to master in the course. Most students agreed that they put a "lot of effort" into this course, that it was "interesting", and that there was a "clear link between the lectures and the practicals".

All the reports that the students produced were read by three staff members, and about 25% of the students were interviewed to ensure the quality of the material presented. The students scored well in this course: it was the second highest scoring course of the roughly ten courses held for that cohort of students. For example, over 80% of the students achieved a second class honours grade one or higher in this course.

6 Conclusions

In this paper we present our experience in introducing performance engineering concepts to university students through practical exercises. These exercises can easily be integrated as part of a performance engineering solution in a real industry environment. We give details on a number of tools that we used for the practical part of the course; in particular, we focus on the testbed application generator, AdaptiveCells, and the modelling and simulation tool, PredictorV. However, one of the key lessons from this exercise was the need for intensive laboratory support, which would limit the number of applications and the scale of the problems that could be addressed in a single course.

References

[1] Mos A. A Framework for Adaptive Monitoring and Performance Management of Component-Based Enterprise Applications. PhD thesis, Dublin City University, Ireland, 2004.

[2] Smith C.U. and Williams L.G. Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software. Addison-Wesley, 2002.

[3] Chen M. et al. Pinpoint: Problem determination in large, dynamic, internet services. In Proc. Int. Conf. on Dependable Systems and Networks (IPDS Track), 2002.

[4] Murphy J. Whitepaper: Peeling the layers of the "performance onion". http://crovan.com/download/crovan_whitepaper.pdf.

[5] Musich P. Survey questions Java App reliability. http://www.eweek.com/article2/0,1895,1388603,00.asp.

[6] Haines S. Solving common Java EE performance problems. http://www.javaworld.com/javaworld/jw-06-2006/jw-0619-tuning.html.
