Legacy To Agile
Hans Keppens is part of the EMC BDG team that develops an automated
acceptance testing framework for the Centera product. Hans is mainly interested
in efficient and productive teams, and in the tools and practices that help teams reach
that goal, such as pair programming, test-driven development, short releases, simple
design, collective code ownership, continuous integration, ...
1
Agenda
Unit Testing
Bringing Code Under Test
Characteristics & Compromises
Case Study: Critical Dependencies
Tips & Tricks
2
Unit Testing
Definition: a unit test is a test, coded in
software, that verifies the functionality
of an individual software component in
isolation of the overall system.
So we’ll be talking about unit tests. But what exactly is a unit test ?
3
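To make the definition concrete, here is a minimal sketch of such a test in plain Java. The `AlarmStore` component and its method names are purely illustrative, and the JUnit `TestCase` base class used later in this talk is replaced by a plain `main()` so the sketch stands alone:

```java
import java.util.HashSet;
import java.util.Set;

// A tiny component under test: it tracks active alarms in memory.
class AlarmStore {
    private final Set<String> active = new HashSet<>();

    void add(String alarm) { active.add(alarm); }

    boolean isActive(String alarm) { return active.contains(alarm); }
}

// The unit test exercises AlarmStore alone: no database, no network,
// no configuration files, no other parts of the overall system.
public class AlarmStoreTest {
    public static void main(String[] args) {
        AlarmStore store = new AlarmStore();
        store.add("red alert");
        if (!store.isActive("red alert"))
            throw new AssertionError("alarm should be active");
        System.out.println("test passed");
    }
}
```

The key property is the isolation: the test fails or passes based only on the component's own logic.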
Benefits Of Unit Testing
Rapid Feedback
problems are spotted earlier
pinpointing bugs and regressions
Automation
run frequently at low cost
Confidence
measured quality
•Our unit tests give us rapid feedback: problems can be spotted earlier.
Tests are also a great help for pinpointing bugs and regressions. A decent
suite of unit tests can save lots of debugging time.
•Unit tests can be automated, thereby increasing their value. Once we
have an automated test suite we can run it over and over at low cost.
•Since unit tests are a form of measured quality, we can be confident in
our work. With unit testing the risk of last-minute surprises is low.
4
A Forgotten Best Practice ?
Unit testing is not a new practice; in
fact it's a rather old one
Its importance has long been ignored
by most of the software industry
In a way unit testing has long been a forgotten best practice of the software
industry: an old practice that, despite its benefits, has long been ignored
by most of the industry. In recent years unit testing has regained
popularity because of its promotion in XP and other agile methods.
5
Things Are Changing
xUnit framework has been ported to most
popular programming languages
CppUnit, NUnit, PyUnit, JUnit, and many more
We've seen the publication of the first books
on unit testing
TDD by example, JUnit Recipes, xUnit Patterns, ...
Nowadays most developers know what unit
tests are, some write unit tests daily
•The xUnit framework has been ported to most popular programming
languages.
•We now have CppUnit, NUnit, PyUnit, JUnit, and many more.
•Most developers today know what unit tests are. Unit testing has found its way
into mainstream development practices. So things are really taking off.
6
So What Is The Problem ?
=> Does your project have unit tests ?
Existing code doesn’t have tests
New development without tests
=> Instant legacy
In practice many teams still don’t have unit tests, and the problem is often
twofold:
•Much existing code does not have unit tests
•New development is often done without tests
Instant legacy
As a result many teams find themselves trapped within a “Legacy
Culture”. Code is often hard to comprehend and difficult to test, and as a
result new development proceeds without writing tests. Most of the team
members fear change and prefer not to fix what ain’t broke in order to
avoid regressions. As a result these teams are missing all the benefits of
agile development.
7
Bringing Code Under Test
public class AlarmHandler {
...
public void addAlarm(String alarm) { ... }
We’ve been asked to work on the system’s AlarmHandler class today. A quick
peek at the source shows that it’s not so bad, but apparently there are no unit tests
available.
We’re new to this code, so before making any changes we want to
be confident that we don’t break anything. By writing some unit tests for the
AlarmHandler class we can learn how the code currently behaves, and at the
same time lock down the current behavior of the code.
8
Writing The First Test
public class AlarmHandlerTest extends TestCase {
9
Setting Up The Fixture
public class AlarmHandlerTest extends TestCase {
Create instance
Add alarm
10
Validating The Result
public class AlarmHandlerTest extends TestCase {
assertTrue(handler.isActive("red alert"));
}
And our final step is to add an assert to verify the alarm is really activated.
11
Red Bar, Missing Config File
public void testAddAlarmSetsAlarmStatus() {
AlarmHandler handler = new AlarmHandler();
handler.addAlarm("red alert");
assertTrue(handler.isActive("red alert"));
}
java.io.FileNotFoundException: c:\workflow\config.xml (The system cannot find the path
specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at java.io.FileInputStream.<init>(FileInputStream.java:66)
at be.legacy.project.ConfigManager.<init>(ConfigManager.java:20)
at be.legacy.project.ConfigManager.getInstance(ConfigManager.java:34)
at be.legacy.project.MessagingEngine.init(MessagingEngine.java:11)
at be.legacy.project.AlarmHandler.<init>(AlarmHandler.java:5)
at be.legacy.project.AlarmHandlerTest.testAddAlarmSetsAlarmStatus(AlarmHandlerTest.java:8)
So, we’ve run our test and surprisingly we got a red bar. Apparently an
exception was thrown, so let’s have a closer look at the console output.
12
Getting The Config File
13
Red Bar, Database Not Running
public void testAddAlarmSetsAlarmStatus() {
AlarmHandler handler = new AlarmHandler();
handler.addAlarm("red alert");
assertTrue(handler.isActive("red alert"));
}
java.sql.SQLException: Connection refused: connect
at oracle.jdbc.dbaccess.DBError.check_error(DBError.java:203)
at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:100)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:146)
at java.sql.DriverManager.getConnection(DriverManager.java:512)
at java.sql.DriverManager.getConnection(DriverManager.java:171)
at be.legacy.project.MessagingEngine.init(MessagingEngine.java:15)
at be.legacy.project.AlarmHandler.<init>(AlarmHandler.java:5)
at be.legacy.project.AlarmHandlerTest.testAddAlarmSetsAlarmStatus(AlarmHandlerTest.java:8)
14
Start Database
[Illustration: the developer has to ask the admin to please start the DB]
15
Green Bar, Finally
public void testAddAlarmSetsAlarmStatus() {
AlarmHandler handler = new AlarmHandler();
handler.addAlarm("red alert");
assertTrue(handler.isActive("red alert"));
}
16
Mini Retrospective
Writing a unit test turns out to be difficult
We spent almost twenty minutes just
to get one test up and running
We needed to set up a configuration
file, start a database, ...
We spent more than twenty minutes just to get one test up and running.
We need to lower our expectations of what our tests will look like.
17
Characteristics and Compromises
18
What Makes a Good Unit Test ?
fast
fine-grained
reliable
self documenting
19
Realizing Ideals Can Be Hard
Unit testing is complicated by
Global variables, singletons
Dependencies upon concrete classes
Hardcoded filenames, directory structures
Databases, external processes
OS environment, native calls
...
20
Test Smells
unavailable
broken
slow
non-repeatable
unstable
interdependent
resource / environment dependent
half automated
During development the relative weight of each of these problems will change:
•When there are only a few tests in place, we may accept the fact that they're slow;
however, as the suite of tests grows, the speed of the tests may become more
critical.
With all the testing difficulties encountered in the first code example it’s
quite obvious that we can’t live up to our testing ideals from day one.
Sure, we want our tests to be self-contained, side-effect free and the like,
but we need to be realistic: so far we don’t have tests yet and it’s unlikely
that we’re able to write ideal tests from the start.
So our goal is to get tests in place while keeping the balance between
effort and gain.
22
Case Study: Critical Dependencies
Database connections
DOS or Unix shell commands
Sockets, external protocols
Native calls
•Database connections
If our tests need access to the database in order to run, we may not be able to run
our tests when the database is down or when working from a location from
which the database is not reachable.
•DOS or Unix shell commands
If our tests require operating system calls to be executed and our code contains
Unix or Linux shell commands, our tests may not be runnable from a Mac or PC.
•Sockets, external protocols
If our tests have to set up a socket to call external processes for which no
dummy or stub is available, we may not be able to run our tests.
•Native calls
If our tests make calls to native libraries that are only available on-target, for
instance in an embedded environment, our tests won’t run from our development
machine.
23
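A common way to defuse such a dependency, sketched below under assumed names (`AlarmRepository`, `AlarmService` and the in-memory fake are all illustrative, not from the case study), is to hide the external resource behind a narrow interface so that tests can substitute a fake:

```java
import java.util.HashSet;
import java.util.Set;

// Narrow interface over the persistence the code actually needs.
interface AlarmRepository {
    void save(String alarm);
    boolean exists(String alarm);
}

// In-memory fake used by the tests; the real implementation
// would talk to the database over JDBC.
class InMemoryAlarmRepository implements AlarmRepository {
    private final Set<String> alarms = new HashSet<>();

    public void save(String alarm) { alarms.add(alarm); }

    public boolean exists(String alarm) { return alarms.contains(alarm); }
}

// The production logic depends only on the interface, so it can be
// tested without a running database.
class AlarmService {
    private final AlarmRepository repository;

    AlarmService(AlarmRepository repository) { this.repository = repository; }

    void raise(String alarm) { repository.save(alarm); }

    boolean isRaised(String alarm) { return repository.exists(alarm); }
}
```

The same pattern applies to shell commands, sockets and native calls: depend on a small interface, and keep the hard-to-test resource behind one implementation of it.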
Hardware.execute() 1 / 13
public class HardwareController {
    ...
    Hardware.execute( command );
    ...
}
Our basic problem with bringing the HardwareController class under test is that
by calling Hardware.execute() it has a critical dependency upon the Hardware
class.
Some reasons:
•Hardware.execute() is intended to be run in an embedded environment and
there’s no supporting library available to allow it to run in the development
environment.
•Hardware.execute() calls an expensive external resource that needs to be
shared by other team members, including non-developers.
24
Class Diagram 2 / 13
[Class diagram: HardwareControllerTest depends on HardwareController, which calls Hardware.execute()]
25
Extract execute() Method 3 / 13
Extract Method (minimize impact: we don't have any tests yet)

public class HardwareController {
    ...
    void process() {
        String command = null;
        ...
        if ( !active && ctr2Set )
            command = "incr cntr2;";
        execute(command);
        ...
    }
}
26
Subclass To Test 4 / 13
public class HardwareControllerTester extends HardwareController {
    private String _command;
    ...
}
Extends HardwareController
Overrides execute()
Stores incoming commands
27
HardwareControllerTest 5 / 13
public class HardwareControllerTest extends TestCase {
controller.process();
28
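Pulling the fragments together, a runnable sketch of this subclass-to-test arrangement might look as follows. The `process()` body, the `getCommand()` accessor and the test layout are assumptions made for illustration: the real `process()` builds its command from controller state, and JUnit's `TestCase` is replaced by a plain `main()` so the sketch stands alone:

```java
// Production class: builds a command and hands it to execute(),
// which normally drives the real hardware.
class HardwareController {
    void process() {
        String command = "incr cntr2;";  // simplified; real logic inspects state
        execute(command);
    }

    // Extracted seam: subclasses can override this to avoid the hardware call.
    void execute(String command) {
        // Hardware.execute(command);  // real call, unavailable off-target
    }
}

// Test double: records the command instead of touching hardware.
class HardwareControllerTester extends HardwareController {
    private String command;

    @Override
    void execute(String command) { this.command = command; }

    String getCommand() { return command; }
}

public class HardwareControllerTest {
    public static void main(String[] args) {
        HardwareControllerTester controller = new HardwareControllerTester();
        controller.process();
        if (!"incr cntr2;".equals(controller.getCommand()))
            throw new AssertionError("expected command not issued");
        System.out.println("test passed");
    }
}
```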
Critical Dependencies 6 / 13
[Class diagram: HardwareControllerTest uses HardwareControllerTester, which extends HardwareController; HardwareController still calls Hardware.execute()]
29
Extract Executor Interface 7 / 13
public interface Executor {
void execute(String command);
}
30
Self Delegate 8 / 13
public class HardwareController implements Executor {
    ...
    Executor executor = this;
    ...
    void process() {
        String command = null;
        ...
        if ( !active && ctr2Set )
            command = "incr cntr2;";
        executor.execute(command);
        ...
    }
}
HardwareController now delegates to an executor, using itself as component
31
HardwareExecutor 9 / 13
public class HardwareController implements Executor {
...
public void execute(String command) {
Hardware.execute( command );
}
...
}
32
Parameterize Constructor 10 / 13
public class HardwareController implements Executor {
    ...
    public HardwareController(Executor executor) {
        this.executor = executor;
    }
    ...
}
We generalize HardwareController so
that it can accept any kind of Executor
33
Upgrade To StoreExecutor 11 / 13
Before: public class HardwareControllerTester extends HardwareController { ... }
After: public class StoreExecutor implements Executor { ... }
34
Update Test Code 12 / 13
public class HardwareControllerTest extends TestCase {
controller.process();
35
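Assembled into one self-contained sketch, the final arrangement might look like this. The constructor, the simplified `process()` body and the test layout are assumptions for illustration; the names `Executor`, `StoreExecutor` and `HardwareController` follow the case study:

```java
// The seam extracted in the case study.
interface Executor {
    void execute(String command);
}

// Production code now depends only on the Executor interface.
class HardwareController {
    private final Executor executor;

    HardwareController(Executor executor) { this.executor = executor; }

    void process() {
        String command = "incr cntr2;";  // simplified command construction
        executor.execute(command);
    }
}

// Test double: stores the last command instead of driving hardware.
class StoreExecutor implements Executor {
    private String command;

    public void execute(String command) { this.command = command; }

    String getCommand() { return command; }
}

public class HardwareControllerTest {
    public static void main(String[] args) {
        StoreExecutor executor = new StoreExecutor();
        HardwareController controller = new HardwareController(executor);
        controller.process();
        if (!"incr cntr2;".equals(executor.getCommand()))
            throw new AssertionError("expected command not issued");
        System.out.println("test passed");
    }
}
```

Compared to the subclass-to-test version, the test double no longer needs to inherit from the production class at all.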
Class Diagram 13 / 13
[Class diagram: HardwareControllerTest uses HardwareController, which now delegates to the Executor interface]
36
Tips & Tricks
37
Troubled By Singletons
Singletons make testing difficult:
Unintentional side effects
Lack of substitution point
When singletons are getting in the way of our unit tests, creating a
resetInstance() method is often a good trick to get rid of most of the
side effects. Calls to resetInstance() certainly aren’t the cleanest solution,
especially since singletons are typically depended on by classes from
various packages. Very likely the resetInstance() method will have to
be made public at some point. Yet this approach is often a good starting point
because it enables us to bring more of our code under test.
38
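As a sketch, a singleton with such a reset hook could look like the following. The `ConfigManager` name echoes the stack trace earlier in the talk, but the fields and methods here are illustrative, not the real class:

```java
import java.util.HashMap;
import java.util.Map;

// Classic lazy singleton, extended with a test-only reset hook.
class ConfigManager {
    private static ConfigManager instance;

    private final Map<String, String> settings = new HashMap<>();

    private ConfigManager() { }

    static ConfigManager getInstance() {
        if (instance == null) instance = new ConfigManager();
        return instance;
    }

    // Test hook: drops the cached instance so the next test starts
    // from a clean, predictable fixture.
    static void resetInstance() { instance = null; }

    void set(String key, String value) { settings.put(key, value); }

    String get(String key) { return settings.get(key); }
}
```

In the tests, calling `ConfigManager.resetInstance()` from both setUp() and tearDown() keeps state from leaking between tests.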
Troubled By Singletons
[Diagram: test TA exercises class A; A's collaborators B and C depend on singletons S1 and S2]
•We may need to call resetInstance() on both S1 and S2, since either of them
could have unintentional side effects on our tests.
•For starters, it's a good idea to call resetInstance() both from within the
setUp() and the tearDown().
•In setUp() the idea is to have a clearly defined fixture. Otherwise we
can’t be sure that we get a correctly initialized instance by the time our
test is run.
•In tearDown() the idea is that each test is responsible for cleaning
up its own side effects. If not, other tests might accidentally pass or break
as a result of a previous test.
39
File Dependent Instances
Problem: instantiation depends upon a
hardcoded configuration file
Solution: set up the configuration file
from within your test
protected void setUp() throws Exception {
    Properties properties = new Properties();
    properties.put("serverIp","192.168.4.88");
    properties.put("serverPort","3688");
    ... more properties ...
    properties.store(new FileOutputStream("C:/workflow/config.properties"),"");
}
40
File Dependent Instances
Problem: tests run slow due to
excessive file access
Solution: parameterize the constructor
to allow configuring the test instance
from within the test
protected void setUp () throws Exception {
Properties properties = new Properties();
properties.put("serverIp","192.168.4.88");
properties.put("serverPort","3688");
... more properties ...
•if many tests use the same file we could try to write the file once
•however, a better solution is to deal with the file dependency
•doing so will often significantly speed up the unit tests
41
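A sketch of the parameterized constructor, under an assumed class name (`ConnectionSettings` stands in for whichever class used to read the hardcoded config file in its constructor):

```java
import java.util.Properties;

// Instead of reading C:/workflow/config.properties itself, the class
// now accepts its configuration directly, so tests need no file I/O.
class ConnectionSettings {
    private final String ip;
    private final int port;

    ConnectionSettings(Properties properties) {
        this.ip = properties.getProperty("serverIp");
        this.port = Integer.parseInt(properties.getProperty("serverPort"));
    }

    String address() { return ip + ":" + port; }
}

public class ConnectionSettingsTest {
    public static void main(String[] args) {
        // The fixture is built entirely in memory.
        Properties properties = new Properties();
        properties.put("serverIp", "192.168.4.88");
        properties.put("serverPort", "3688");
        ConnectionSettings settings = new ConnectionSettings(properties);
        if (!"192.168.4.88:3688".equals(settings.address()))
            throw new AssertionError("unexpected address");
        System.out.println("test passed");
    }
}
```

Production code can keep a convenience path that loads the Properties from the file and passes them to this constructor.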
Commented Code
Smell: code differs between development
environment and integration environment
and is being commented in and out
Development environment:
{
    ...
    stubbedCode();
    // productionCode();
    ...
}

Integration environment:
{
    ...
    // stubbedCode();
    productionCode();
    ...
}
42
Commented Code
Problem: code version that lives in the
source code repository is always wrong
check in check out
CVS
check out check in
development integration
environment ? environment
Even though commenting out a bit of code may seem innocent, the practice is
problematic in a project setting. From the development environment developers
can run their tests against the stubbed version; however, running against the
production code breaks the tests. In the integration environment the production
code is required to do meaningful testing. Either way, the practice of commenting
source code in and out leads to a version in the source code repository that is always
wrong: either the unit tests or the acceptance tests will fail.
43
Introducing A Testing Flag
Solution: introduce a testing flag

private boolean _testing;

{
    ...
    if ( _testing ) {
        stubbedCode();
    } else {
        productionCode();
    }
    ...
}

void setTesting() {
    _testing = true;
}

Far from ideal, but sufficient to keep our tests running
It’s a start, not the end
We can take the first step towards an improved situation by introducing a testing
flag to distinguish between the stubbed code and production code. From our unit
tests we can call the setTesting() method to ensure the stubbed version is called.
Sure, the obtained solution is still far from ideal. We have production code being
mixed with test code and some extra code clutter due to the conditional, but at
least we can keep our tests running while refactoring towards a better solution.
44
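A self-contained sketch of the testing flag, with assumed names (`Workflow`, `run()` and the string return values are illustrative; in real code the production path would touch external resources):

```java
// The _testing flag selects between the stubbed and production paths,
// so nothing needs to be commented in or out anymore.
class Workflow {
    private boolean _testing;

    // Called from the unit tests before exercising the class.
    void setTesting() { _testing = true; }

    String run() {
        if (_testing) {
            return stubbedCode();
        } else {
            return productionCode();
        }
    }

    private String stubbedCode() { return "stubbed"; }

    // In real code this path would hit the integration environment.
    private String productionCode() { return "production"; }
}
```

Both code paths now live in the repository at the same time, and a later refactoring can replace the flag with a proper substitution point such as an injected interface.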
Conclusion
If there are no tests today
=> You can write some tomorrow
45
Discussion
Questions ?
46
References
<www.refactoring.be>
47