Architectures of Test Automation
Acknowledgement
Many of the ideas in this presentation were jointly developed with Doug Hoffman, in a course that we
taught together on test automation, and in the Los Altos Workshops on Software Testing (LAWST) and
the Austin Workshop on Test Automation (AWTA).
• LAWST 5 focused on oracles. Participants were Chris Agruss, James Bach, Jack Falk, David
Gelperin, Elisabeth Hendrickson, Doug Hoffman, Bob Johnson, Cem Kaner, Brian Lawrence,
Noel Nyman, Jeff Payne, Johanna Rothman, Melora Svoboda, Loretta Suzuki, and Ned
Young.
• LAWST 1-3 focused on several aspects of automated testing. Participants were Chris Agruss,
Tom Arnold, Richard Bender, James Bach, Jim Brooks, Karla Fisher, Chip Groder, Elisabeth
Hendrickson, Doug Hoffman, Keith W. Hooper, III, Bob Johnson, Cem Kaner, Brian Lawrence,
Tom Lindemuth, Brian Marick, Thanga Meenakshi, Noel Nyman, Jeffery E. Payne, Bret
Pettichord, Drew Pritsker, Johanna Rothman, Jane Stepak, Melora Svoboda, Jeremy White,
and Rodney Wilson.
• AWTA also reviewed and discussed several strategies of test automation. Participants in the
first meeting were Chris Agruss, Robyn Brilliant, Harvey Deutsch, Allen Johnson, Cem Kaner,
Brian Lawrence, Barton Layne, Chang Lui, Jamie Mitchell, Noel Nyman, Barindralal Pal, Bret
Pettichord, Christiano Plini, Cynthia Sadler, and Beth Schmitz.
I’m indebted to Hans Buwalda, Elisabeth Hendrickson, Alan Jorgensen, Noel Nyman, Harry Robinson,
James Tierney, and James Whittaker for additional explanations of test architecture and/or stochastic
testing.
Why do some groups have so much more success with test automation than others?
In 1997, Brian Lawrence and I organized a meeting of senior testers, test automators, managers, and
consultants to discuss this question, forming the Los Altos Workshops on Software Testing (LAWST).
Other groups (Software Test Managers Roundtable and Austin Workshop on Test Automation) have since
formed along the same lines. Brian and I described the LAWST process in Kaner & Lawrence (1997,
1999).
This paper is essentially a progress report. I can’t (yet) tell you how to succeed in test automation, but I
know more of the pieces of more of the puzzles than I did when we started setting up the first meeting, in
1996. The progress has been uneven, and so is this paper. But I hope that it’s useful.
Comments and criticisms are welcome. Please send them to me at [email protected].
GUI Regression Testing Is Computer-Assisted Testing
GUI-level regression automation is the most common approach to test automation, but it is definitely not
the only one and it has serious limitations. This paper puts this approach in context and then explores
several alternatives.
As a start to the discussion, let me note that GUI-level regression automation is not automated testing. It
doesn’t automate very much of the testing process at all. Let’s look at some of the tasks, and see who
does them:
• Analyze the specification and other docs for ambiguity or other indicators of potential error: humans
• Analyze the source code for errors: humans
• Design test cases: humans
• Create test data: humans
• Run the tests the first time: humans
• Evaluate the first result: humans
• Report a bug from the first run: humans
• Debug the tests: humans
• Save the code: humans
• Save the results: humans
• Document the tests: humans
• Build a traceability matrix (tracing test cases back to specs or requirements): humans or another tool (not the GUI tool)
• Select the test cases to be run: humans
• Run the tests: the tool
• Record the results: the tool
• Evaluate the results: the tool, but if there’s an apparent failure, a human re-evaluates the test results
• Measure the results (e.g. performance measures): humans or another tool
• Report errors: humans
• Update and debug the tests: humans
When we see how many of the testing-related tasks are being done by people or, perhaps, by other testing
tools, we realize that the GUI-level regression test tool doesn’t really automate testing. It just helps a
human to do the testing.
Rather than calling this “automated testing”, we should call it computer-assisted testing.
I am not showing disrespect for this approach by calling it computer-assisted testing. Instead, I’m making
a point—there are a lot of tasks in a testing project and we can get help from a hardware or software tool
to handle any subset of them. GUI regression test tools handle some of these tasks very well. Other tools
or approaches will handle a different subset. For example,
• Dick Bender’s Softtest tool and Telcordia’s AETG help you efficiently design complex tests. They
don’t code the tests, execute the tests, or evaluate results. But in some cases, the design problem
is the most difficult problem in the testing project.
• Source code analyzers, like LINT or the McCabe complexity metric tool, are useful for exposing
defects, but they don’t design, create, run, or evaluate a single test case.
on Print to get to the Print Dialog, move to the right side of the dialog to click on the Number of
Copies field, enter 2 copies, then click on OK to start printing. If any print-related details of the
user interface changed, the test case code had to be revised.
• Tests were self-contained. They repeated the same steps, rather than relying on common, shared
routines. Every test that printed had its own code to pull down the File menu, etc. If the user
interface for printing changed, every test had to be examined and (if it printed) fixed.
• The process of coding and maintaining the tests was tedious and time-consuming. It took a lot of
time to create all these tests because there was no modularity and therefore no code reuse. Rather
than calling modules, every step had to be coded into every test case. Boredom resulted in errors,
wasting even more time.
Starting in the early 1990s, several of us began experimenting with data-driven designs that got around
several of these problems. Bach (1996) and Kaner (1997a, 1998) discussed the problems of test
automation tools in much greater detail than I cover in this paper. Buwalda (1996, 1998) described a
data-driven approach that is more general than the calendar example presented below. An implementation
of that approach is available as TestFrame (check www.sdtcorp.com and Kit, 1999). Pettichord (1996,
1999) presented another general data-driven approach. See also Kent (1999) and Zambelich (1998). A
web search will reveal several other papers along the same lines. Pettichord’s web site,
www.pettichord.com, is an excellent place to start that search.
• The day names are printed in some typeface, style, and size, and in some language.
• The days are shown in a table, and are numbered (1, 2, 3, etc.). Each day is in one cell (box) in the
table. The number might be anywhere (top, bottom, left, right, center) in the cell.
• The cell for a day might contain a picture or text (“Mom’s birthday”) or both.
In a typical test case, we specify and print a calendar.
Suppose that we decided to create and automate a lot of calendar tests. How should we do it?
One way (the approach that seems most common in GUI regression testing) would be to code each test
case independently (either by writing the code directly or via capture/replay, which writes the code for
you). So, for each test, we would write code (scripts) to specify the paper orientation, the position,
network location, and type of the picture, the month (font, language), and so on. If it takes 1000 lines to
code up one test, it will take 100,000 lines for 100 tests and 1,000,000 lines for 1000 tests. If the user
interface changes, the maintenance costs will be enormous.
Here’s an alternative approach:
• Start by creating a table. Every column covers a different calendar attribute (paper orientation,
location of the graphic, month, etc.). Every row describes a single calendar. For example, the
first row might specify a calendar for October, landscape orientation, with a picture of children in
costumes, 7-day week, lettering in a special ghostly typeface, and so on.
• For each column, write a routine that implements the choice in that column. Continuing the
October example, the first column specifies the page orientation. The associated routine provides
the steps necessary to set the orientation in the software under test. Another routine reads
“October” in the table and sets the calendar’s month to October. Another sets the path to the
picture. Perhaps the average variable requires 30 lines of code. If the program has 100 variables,
there are 100 specialized routines and about 3000 lines of code.
• Finally, write a control program that reads the calendar table one row at a time. For each row, it
reads the cells one at a time and calls the appropriate routine for each cell.
In this setup, every row is a test case. There are perhaps 4000 lines of code counting the control program
and the specialized routines. To add a test case, add a row. Once the structure is in place, additional test
cases require 0 (zero) additional lines of code.
Note that the table that describes calendars (one calendar per row) is independent of the software under
test. We could use these descriptions to test any calendar-making program. Changes in the software user
interface won’t force maintenance of these tests.
The column-specific routines will change whenever the user interface changes. But the changes are
limited. If the command sequence required for printing changes, for example, we change the routine that
takes care of printing. We change it once and it applies to every test case.
Finally, the control program (the interpreter) ties table entries to column-specific routines. It is
independent of the calendar designs and the design of the software under test. But if the scripting
language changes or if we use a new spreadsheet to store the test cases, this is the main routine that will
change.
In sum, this example reduces the amount of repetitious code and isolates different aspects of test
descriptions in a way that minimizes the impact of changes in the software under test, the subject matter
(calendars) of the tests, and the scripting language used to define the test cases. The design is optimized
for maintainability.
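To make the table-driven structure concrete, here is a minimal sketch in Python. The column names, the routines, and the two table rows are hypothetical stand-ins; a real implementation would be written in your test tool's scripting language and would drive the software under test instead of printing messages.

```python
# Minimal sketch of a data-driven test interpreter (hypothetical names throughout).

def set_orientation(value):
    # Would drive the UI (or API) of the software under test to set page orientation.
    print(f"setting orientation to {value}")

def set_month(value):
    print(f"setting month to {value}")

def set_picture(value):
    print(f"loading picture from {value}")

# Map each column of the test table to the routine that implements it.
COLUMN_ROUTINES = {
    "orientation": set_orientation,
    "month": set_month,
    "picture": set_picture,
}

# The test table: one row per calendar. In practice this would be read from a
# spreadsheet or text file, so adding a test case means adding a row, not code.
TEST_TABLE = [
    {"orientation": "landscape", "month": "October", "picture": "children.bmp"},
    {"orientation": "portrait",  "month": "January", "picture": "snow.bmp"},
]

def run_tests(table):
    """Control program: read the table one row at a time and, for each cell,
    call the routine associated with that column."""
    for row in table:
        for column, value in row.items():
            COLUMN_ROUTINES[column](value)
        # ...here we would print the calendar and evaluate the output...

if __name__ == "__main__":
    run_tests(TEST_TABLE)
```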
3. To what extent are you looking for delayed-fuse bugs (memory leaks, wild pointers, etc.)?
4. Does your management expect to recover its investment in automation within a certain period of
time? How long is that period and how easily can you influence these expectations?
5. Are you testing your own company’s code or the code of a client? Does the client want (is the
client willing to pay for) reusable test cases or will it be satisfied with bug reports and status
reports?
7. Do you anticipate that the product will be stable when released, or do you expect to have to test
Release N.01, N.02, N.03 and other bug fix releases on an urgent basis after shipment?
8. Do you anticipate that the product will be translated to other languages? Will it be recompiled or
relinked after translation (do you need to do a full test of the program after translation)? How
many translations and localizations?
9. Does your company make several products that can be tested in similar ways? Is there an
opportunity for amortizing the cost of tool development across several projects?
10. How varied are the configurations (combinations of operating system version, hardware, and
drivers) in your market? (To what extent do you need to test compatibility with them?)
11. What level of source control has been applied to the code under test? To what extent can old,
defective code accidentally come back into a build?
13. Are new builds well tested (integration tests) by the developers before they get to the tester?
14. To what extent have the programming staff used custom controls?
15. How likely is it that the next version of your testing tool will have changes in its command syntax
and command set?
16. What are the logging/reporting capabilities of your tool? Do you have to build these in?
17. To what extent does the tool make it easy for you to recover from errors (in the product under
test), prepare the product for further testing, and re-synchronize the product and the test (get them
operating at the same state in the same program)?
18. (In general, what kind of functionality will you have to add to the tool to make it usable?)
19. Is the quality of your product driven primarily by regulatory or liability considerations or by market
forces (competition)?
20. Is your company subject to a legal requirement that test cases be demonstrable?
21. Will you have to be able to trace test cases back to customer requirements and to show that each
requirement has associated test cases?
22. Is your company subject to audits or inspections by organizations that prefer to see extensive
regression testing?
23. If you are doing custom programming, is there a contract that specifies the acceptance tests?
Can you automate these and use them as regression tests?
25. Do you have to make it possible for non-programmers to create automated test cases?
26. To what extent are cooperative programmers available within the programming team to provide
automation support such as event logs, more distinctive or informative error messages, and hooks
for making function calls below the UI level?
27. What kinds of tests are really hard in your application? How would automation make these tests
easier to conduct?
o Old
o Ongoing monitoring
o Intentionally designed, new
o Random new
There’s at least one other key dimension, which I tentatively think of as “level” of testing. For example,
the test might be done at the GUI or at the API level. But this distinction only captures part of what I
think is important. Testing at the GUI is fine for learning something about the way the program works
with the user, but not necessarily for learning how the program works with the file system or network,
with another device, with other software or with other technical aspects of the system under test. Nguyen
(2000) and Whittaker (1998) discuss this in detail, but I haven’t digested their discussions in a way that I
can smoothly integrate into this framework. (As I said at the start of the paper, this is a progress report,
not a finished product.)
Within this scheme, GUI-level regression tests look like this (see the sketch after this list):
• Source of tests: Old (we are reusing existing tests).
• Size of test pool: Small (there are dozens or hundreds or a few thousand tests, not millions).
• Evaluation strategy: Comparison to a saved result (we expect the software’s output today to
match the output we got yesterday).
• Serial dependence among tests: Independent (our expectations for this test don’t depend on the
order in which we ran previous tests).
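Purely as an illustration of the scheme, the four dimensions could be recorded in a small structure like the following sketch; the field names and string values are my own shorthand, not part of any established notation or tool.

```python
from dataclasses import dataclass

# A hypothetical record for classifying a test automation strategy along the
# four dimensions discussed in this paper.
@dataclass
class StrategyClassification:
    source: str             # "old", "ongoing monitoring", "intentionally designed, new", "random, new"
    pool_size: str          # "small" or "large"
    evaluation: str         # e.g. "comparison to a saved result", "crash", "heuristic prediction"
    serial_dependence: str  # "independent" or "sequence is relevant"

# GUI-level regression testing, classified as described above.
gui_regression = StrategyClassification(
    source="old",
    pool_size="small",
    evaluation="comparison to a saved result",
    serial_dependence="independent",
)
```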
Let’s consider some other examples.
• Compare our product with a competitor’s.
• Compare our product with a standard function (perhaps one loaded off the net, or coded based on
explicit instructions in a textbook).
• Use our function with its inverse (a mathematical inverse, like squaring a square root, or an
operational one, such as splitting a merged table).
• Take advantage of a known relationship (such as sin²(x) + cos²(x) = 1; see the sketch after this
list).
• Use our product to feed data or messages to a companion product (that we trust) that expects our
output to be formatted in certain ways or to have certain relationships among fields.
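As a sketch of the last two ideas (an inverse function and a known relationship), checks along these lines might be run against a large number of random inputs. The tolerance, the input ranges, and the stand-in square root function are assumptions made for illustration only.

```python
import math
import random

TOLERANCE = 1e-9  # assumed acceptable rounding error for this illustration

def square_root_under_test(x):
    # Stand-in for the product's square root function.
    return math.sqrt(x)

def check_inverse(x):
    """Inverse-function oracle: squaring the square root should give x back."""
    result = square_root_under_test(x)
    if abs(result * result - x) > TOLERANCE * max(1.0, x):
        print(f"inverse check failed for {x}: got {result}")

def check_known_relationship(x):
    """Known-relationship oracle: sin^2(x) + cos^2(x) should equal 1."""
    if abs(math.sin(x) ** 2 + math.cos(x) ** 2 - 1.0) > TOLERANCE:
        print(f"identity check failed for {x}")

if __name__ == "__main__":
    # Feed a large number of random inputs through both checks.
    for _ in range(100_000):
        check_inverse(random.uniform(0.0, 1.0e6))
        check_known_relationship(random.uniform(-1.0e3, 1.0e3))
```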
For a thorough discussion of random numbers, see Donald Knuth’s Art of Computer Programming,
Volume 2. For a simpler presentation, see Kaner & Vokey (1982).
The beauty of random testing (whether we use sequentially related or independent tests) is that we can
keep testing previously tested areas of the program (possibly finding new bugs if programmers added
them while fixing or developing some other part of the system) but we are always using new tests
(making it easier to find bugs that we’ve never looked for before). By thoughtfully or randomly
combining random tests, we can create tests that are increasingly difficult for the program, which is useful
as the program gets more stable. The more extensive our oracle(s), the more we can displace planned
regression testing with a mix of manual exploratory testing and computer-assisted random tests.
Noel Nyman used techniques like this as part of the testing of Windows NT’s compatibility with a large
number of applications.
Dumb monkeys are useful early in testing. They are cheap to create and, while the software is unstable,
they can reveal a lot of defects. Note, though, that the software can’t be considered stable just because it
can run without crashing under a dumb monkey for several days. The program might be failing in other
critically important ways (such as corrupting its data), without crashing.
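A dumb monkey can be a very small program. The sketch below assumes a hypothetical driver object (ui) that can send clicks and keystrokes to the software under test and report whether it is still responding; a real monkey would sit on top of your test tool's or operating system's event interfaces.

```python
import random
import time

def dumb_monkey(ui, duration_seconds=3600, seed=1234):
    """Fire random clicks and keystrokes at the software under test and stop
    when it appears to have crashed or hung. 'ui' is a hypothetical driver."""
    rng = random.Random(seed)  # record the seed so a failing run can be replayed
    end_time = time.time() + duration_seconds
    events = 0
    while time.time() < end_time:
        events += 1
        if rng.random() < 0.5:
            ui.click(x=rng.randint(0, 1023), y=rng.randint(0, 767))
        else:
            ui.type_key(rng.choice("abcdefghijklmnopqrstuvwxyz0123456789\t\n"))
        if not ui.is_responding():
            print(f"possible crash or hang after {events} random events (seed={seed})")
            return False
    return True  # survived the run; this does NOT mean the software is stable
```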
Classification:
• Source of tests: Random, new.
• Size of test pool: Large.
• Evaluation strategy: Crash or light diagnostics.
• Serial dependence among tests: Sequence is relevant.
Classification:
• Source of tests: Random, new.
• Size of test pool: Large.
• Evaluation strategy: Diagnostics (or crash).
• Serial dependence among tests: Sequence is relevant.
A Heuristic Test
Suppose that you were running an online retail system. Suppose further that over the last year, almost all
of your sales were in your home state (Florida) and almost none of your customers were from Wyoming.
But today, you had a big increase in volume and 90% of your sales were from Wyoming. This might or
might not indicate that your code has gone wild. Maybe you just got lucky and the State of Wyoming
declared you a favored vendor. Maybe all that new business is from state agencies that never used to do
business with you. On the other hand (and more likely), maybe your software has gone blooey.
In this case, we aren’t actually running tests. Instead, we are monitoring the ongoing behavior of the
software, comparing the current results to a statistical model based on prior performance.
Additionally, in this case, our evaluation is imperfect. We don’t know whether the software has passed or
failed the test; we merely know that its behavior is highly suspicious. In general, a heuristic is a rule of
thumb that supports but does not mandate a conclusion. We have partial information that will support a
probabilistic evaluation. This won’t tell you for sure that the program works correctly, but it can tell you
that the program is probably broken.
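A monitoring heuristic of this kind can be quite simple: compare today's distribution of orders by state against a historical baseline and flag anything that is wildly out of line. The baseline figures and the threshold in this sketch are invented for illustration.

```python
# Hypothetical baseline: last year's share of orders, by state.
BASELINE_SHARE = {"FL": 0.92, "WY": 0.001, "other": 0.079}

# Flag a state if today's share differs from the baseline by more than this.
SUSPICION_THRESHOLD = 0.25  # an arbitrary illustration, not a recommendation

def suspicious_states(todays_orders_by_state):
    """Return states whose share of today's orders is wildly out of line with
    the baseline. This is a heuristic: it flags suspicious behavior, it does
    not prove that the software is broken."""
    total = sum(todays_orders_by_state.values())
    flagged = []
    for state, count in todays_orders_by_state.items():
        share = count / total if total else 0.0
        baseline = BASELINE_SHARE.get(state, BASELINE_SHARE["other"])
        if abs(share - baseline) > SUSPICION_THRESHOLD:
            flagged.append((state, round(share, 3), baseline))
    return flagged

# 90% of today's orders from Wyoming: both FL and WY look suspicious.
print(suspicious_states({"FL": 50, "WY": 900, "other": 50}))
```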
One of the insights that I gained from Doug Hoffman is that all oracles can be thought of as heuristic
oracles. Suppose that we run a test using an oracle—for example, comparing our program’s square root
function with an oracle. The two programs might agree that the square root of 4 is 2, but there’s a lot of
other information that isn’t being checked. For example, what if our program takes an hour to compute
the square root? Or what if the function has a memory leak? What if it sends a message to cancel a print
job? When we run a test of our software against an oracle, we pay attention to certain inputs and certain
outputs, but we don’t pay attention to others. For example, we might not pay attention to the temperature
(recently, I saw a series of failures in a new device that were caused by overheating of the hardware. They
looked like software errors, and they were an irreproducible mystery until someone noticed that testing of
the device was being done in an area that lacked air conditioning and that the device seemed very hot.)
We might not pay attention to persistent states of the software (such as settings of the program’s options)
or to other environmental or system data. Therefore, when the program’s behavior appears to match an
oracle’s, we don’t know that the software has passed the test. And if the program fails to match the oracle,
that might be the result of an error, or it might be a correct response to inputs or states that we are not
monitoring. We don’t fully know about pass or fail. We only know that there is a likelihood that the
software has behaved correctly or incorrectly.
Using relatively simple heuristic oracles can help us inexpensively spot errors early in testing or run a
huge number of test cases through a simple plausibility check.
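One inexpensive way to act on this insight is to let the oracle check an incidental result (here, elapsed time) along with the primary output, and to treat the verdict as "suspicion raised or not" rather than "pass or fail". The time budget and tolerance below are assumptions made for illustration.

```python
import math
import time

TIME_BUDGET_SECONDS = 0.01  # assumed budget; an hour-long square root would clearly fail

def heuristic_square_root_check(func_under_test, x):
    """Compare the result against a reference oracle AND check an incidental
    result (elapsed time). An empty list means no suspicion was raised, not
    that the software is correct."""
    start = time.perf_counter()
    result = func_under_test(x)
    elapsed = time.perf_counter() - start
    problems = []
    if abs(result - math.sqrt(x)) > 1e-9 * max(1.0, x):
        problems.append(f"value {result} disagrees with the reference oracle")
    if elapsed > TIME_BUDGET_SECONDS:
        problems.append(f"took {elapsed:.3f}s, over the assumed time budget")
    return problems

print(heuristic_square_root_check(math.sqrt, 4.0))  # [] -- no suspicion raised
```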
Classification of this example (the retail system):
• Source of tests: Ongoing monitoring.
• Size of test pool: Large.
• Evaluation strategy: Comparison to a heuristic prediction.
• Serial dependence among tests: Sequence might be relevant.
Suppose that we were evaluating GUI-level regression testing. Here are some favorable conditions.
Please note that factors that are favorable to one strategy might also favor another, and that they might or
might not be necessary or sufficient to convince us to use that strategy. Additionally, these are just
examples. You might think of other factors that would be favorable to the strategy that I haven’t listed.
• Goal of testing: It would make sense to use a GUI-level regression tool if our goal was
o Smoke testing
o Port testing (from code written for one O/S, ported to another, for example)
o Demonstrating basic functionality to regulators or customers
o Quickly checking patch releases
• Level of testing (e.g. UI, API, unit, system): A GUI tool will test at the GUI.
• Software under test:
o For a GUI-based regression test under Windows, we’re more likely to succeed if
the developers used standard controls rather than custom controls. The test code
dealing with custom controls is more expensive and more fragile.
o If we were willing to drop down a level, doing regression testing at the API level
(i.e. by writing programs that called the API functions), the existence of a stable
set of APIs might save us from code turmoil. In several companies, the API
stays stable even when the visible user interface changes daily.
o The more stable the design, the lower the maintenance cost.
o The software must generate repeatable output. As an example of a problem,
suppose the software outputs color or gray-scale graphics to a laser printer. The
printer firmware will achieve the color (or black/white) mix by dithering,
randomly placing dots, but in an appropriate density. Run the test twice and
you’ll get two outputs that might look exactly the same but that are not bit-for-bit
the same.
• Environment:
o Some embedded systems give non-repeatable results.
o Real-time, live systems are usually not perfectly repeatable.
• Generator of test cases. The generator is the tool or method you use to create tests. You
want to use a test strategy like regression, which relies on a relatively small number of
tests, if:
o It is expensive to run tests in order to create reference data, and therefore it is
valuable to generate test results once and use them from the archives.
o It is expensive to create the tests for some other reason.
• Reference function: This is the oracle or the collection of saved outputs against which you
compare the output from your program under test. One could argue that it favors the
strategy of regression automation if:
o Screens, states, binary output, or a saved database are relatively easy to get and
save.
o Incidental results, such as duration of the operation, amount of memory used, or
the exiting state of registers are easy to capture and concise to save.
• Evaluation function: This is the method you use to decide whether the software under
test is passing or failing. For example, if you compare the current output to a previous
output, the evaluation function is the function that does the actual comparison.
o I don’t see anything that specifically favors GUI regression testing.
• Users: Who will use the software? Who will use the automation?
o Non-programmers are common users of these tools and frequent targets of
vendors of these tools. To the extent that you have a sophisticated user, the
simple old/new comparison might not be very informative.
• Risks: What risks are we trying to manage by testing?
o We shouldn’t be thinking in terms of (large sample) exhaustive or high-volume
regression testing.
You might find it useful to build a chart with this type of information, for each testing strategy that you
are considering. My conjecture is that, if you do this, you will conclude that different potential issues in
the software are best tested for in different ways. As a result, you might use a few different automation
strategies, rather than settling into reliance on one approach.
References
Bach, J. (1996) “Test Automation Snake Oil”, Windows Tech Journal, October. Available at
http://www.satisfice.com/articles/test_automation_snake_oil.pdf.
Buwalda, H. (1996) “Automated testing with Action Words, Abandoning Record & Playback”; Buwalda,
H. (1998) “Automated testing with Action Words”, STAR Conference West. For an accessible version of
both of these, see Hans’ chapter in Fewster, M. & Graham, D. (2000) Software Test Automation:
Effective Use of Test Execution Tools.
Gause, D. & Weinberg, G. (1989) Exploring Requirements: Quality Before Design.
Jorgensen, A. (1999), Software Design Based on Operational Modes, Doctoral dissertation, Florida
Institute of Technology.
Jorgensen, A. & Whittaker, J. (2000), “An API Testing Method”, STAR Conference East. Available at
www.aet-usa.com/STAREAST2000.html.
Kaner, C. & Vokey, J. (1982), “A Better Random Number Generator for Apple’s Floating Point BASIC”,
Micro. Available by e-mail from [email protected].
Kaner, C. (1997a), “Improving the Maintainability of Automated Tests”, Quality Week. Available at
www.kaner.com/lawst1.htm.
Kaner, C. (1997b), “The Impossibility of Complete Testing”, Software QA. Available at
www.kaner.com/imposs.htm.
Kaner, C. & Lawrence, B. (1997) “Los Altos Workshop on Software Testing”,
www.kaner.com/lawst.htm.
Kaner, C. (1998), “Avoiding Shelfware: A Manager’s View of Automated GUI Testing”, STAR East,
1998; available at www.kaner.com or from the author ([email protected]).
Kaner, C. & Lawrence, B. (1999), The LAWST Handbook, available from the authors ([email protected];
[email protected]).
Kent, J. (1999) “Advanced Automated Testing Architectures”, Quality Week.
Kit, E. (1999) “Integrated, Effective Test Design and Automation”, Software Development, February
issue. Available at www.sdmagazine.com/breakrm/features/s992f2.shtml.
Lawrence, B. & Gause, D. (1998), Gause & Lawrence Requirements Tools. Available at
www.coyotevalley.com/stuff/rt.doc.
Michalko, M. (1991), Thinkertoys (A Handbook of Business Creativity). (See especially the Phoenix
Questions at p. 140).
Nguyen, H. (2000), Testing Applications on the Web: Test Planning for Internet-Based Systems,
manuscript in press with John Wiley & Sons.
Nyman, N. (1998), “Application Testing with Dumb Monkeys”, STAR Conference West.
Pettichord, B. (1996) “Success with Test Automation”, Quality Week. Available at
www.io.com/~wazmo/succpap.htm (or go to www.pettichord.com).
Pettichord, B. (1999) “Seven Steps to Test Automation Success”, STAR West, 1999. Available (later
version) at www.io.com/~wazmo/papers/seven_steps.html (or go to www.pettichord.com).
Robinson, H. (1999a), “Finite State Model-Based Testing on a Shoestring”, STAR Conference West.
Available at www.geocities.com/model_based_testing/shoestring.htm.
Robinson, H. (1999b), “Graph Theory Techniques in Model-Based Testing”, International Conference on
Testing Computer Software. Available at www.geocities.com/model_based_testing/model-based.htm.
Whittaker, J. (1997), “Stochastic Software Testing”, Annals of Software Engineering, 4, 115-131.
Whittaker, J. (1998), “Software Testing: What it is and Why it is So Difficult”, available at
www.se.fit.edu/papers/SwTestng.pdf.
Zambelich, K. (1998), “Totally Data-Driven Automated Testing”, www.sqa-test.com/w_paper1.html.