Introduction To Automated Testing
[Overview diagram: test library management and results verification, with test data captured as input/output data and state before/after]
What is a Test Case?
● A test case is a pair
<input, expected outcome>
● For state-less systems (e.g. a compiler)
● Test cases are very simple
● The outcome depends solely on the current input
● For state-oriented systems (e.g. an ATM)
● Test cases are not that simple
● A test case may consist of a sequence of <input, expected outcome> pairs
● The outcome depends both on the current state of the system and the current input
ATM example:
< check balance, $500.00 >,
< withdraw, “amount?” >,
< $200.00, “$200.00” >,
< check balance, $300.00 >
● Input may be specified in various ways (the sketch below shows these pairs written as JUnit assertions)
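To make the pair/sequence distinction concrete, here is a minimal JUnit 5 sketch. The `Atm` class is a hypothetical stand-in for the SUT, invented only so the example is self-contained; it is not part of the original material.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical SUT used only for illustration: a tiny in-memory account.
class Atm {
    private double balance;
    Atm(double openingBalance)  { this.balance = openingBalance; }
    double checkBalance()       { return balance; }
    double withdraw(double amt) { balance -= amt; return amt; }
}

class TestCaseExamples {

    // State-less case: the outcome depends only on the current input,
    // so one <input, expected outcome> pair is a complete test case.
    @Test
    void statelessPair() {
        assertEquals("ATM", "atm".toUpperCase());          // <"atm", "ATM">
    }

    // State-oriented case: the test case is a sequence of
    // <input, expected outcome> pairs, mirroring the ATM example above.
    @Test
    void statefulSequence() {
        Atm atm = new Atm(500.00);
        assertEquals(500.00, atm.checkBalance(), 0.001);   // <check balance, $500.00>
        assertEquals(200.00, atm.withdraw(200.00), 0.001); // <withdraw $200.00, "$200.00">
        assertEquals(300.00, atm.checkBalance(), 0.001);   // <check balance, $300.00>
    }
}
```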
Expected Outcome
● The outcome of a program execution may include
● Values produced by the program
● State changes
● A sequence of values that must be interpreted together for the outcome to be valid
Levels of Testing
● Unit testing
● Individual program units, such as procedures and methods, tested in isolation
● Integration testing
● Modules are assembled into larger subsystems, which are then tested
● System testing
● Covers a wide spectrum of testing, such as functionality and load
● Acceptance testing
● Validates the customer’s expectations of the system
Typical tools at each level:
● Unit testing
● Tools such as JUnit
● Integration testing
● Stubs, mocks (a stub is sketched below)
● System testing
● Security, performance, and load testers
● Regression testing
● Test management tools (e.g. defect tracking, ...)
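As one illustration of the “stubs, mocks” point above, the sketch below uses a hand-rolled stub in place of a real collaborator. `RateService` and `PriceConverter` are hypothetical names invented for the example; JUnit 5 is assumed.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical collaborator and unit under test, for illustration only.
interface RateService { double exchangeRate(String from, String to); }

class PriceConverter {
    private final RateService rates;
    PriceConverter(RateService rates) { this.rates = rates; }
    double convert(double amount, String from, String to) {
        return amount * rates.exchangeRate(from, to);
    }
}

class PriceConverterTest {
    // The stub returns a canned rate, so PriceConverter can be exercised
    // before the real rate service is integrated.
    @Test
    void convertUsesRateFromCollaborator() {
        RateService stub = (from, to) -> 2.0;
        PriceConverter converter = new PriceConverter(stub);
        assertEquals(20.0, converter.convert(10.0, "USD", "CAD"), 0.001);
    }
}
```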
What do we need for automated testing?
● Test script
– Test case specification
● Actions to send to the system under test (SUT)
● Responses expected from the SUT
– How to determine whether a test was successful or not
● Test execution system
– Mechanism to read the test script and connect the test case to the SUT
– Directed by a test controller (a minimal sketch follows below)
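A minimal sketch, assuming nothing beyond plain Java (16+ for records): a test script is modelled as a list of <action, expected response> steps, and a tiny controller sends each action to the SUT and decides the verdict. All names here (`TinyTestController`, `Step`) are invented for illustration.

```java
import java.util.List;
import java.util.function.Function;

public class TinyTestController {

    // One <action, expected response> step of a test script.
    record Step(String action, String expected) {}

    // Reads the script, connects each step to the SUT, and reports a verdict.
    static boolean run(List<Step> script, Function<String, String> sut) {
        for (Step step : script) {
            String actual = sut.apply(step.action());      // send action to SUT
            if (!step.expected().equals(actual)) {
                System.out.printf("FAIL at '%s': expected '%s', got '%s'%n",
                        step.action(), step.expected(), actual);
                return false;
            }
        }
        System.out.println("PASS");
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical SUT: echoes its input in upper case.
        Function<String, String> sut = s -> s.toUpperCase();
        run(List.of(new Step("ping", "PING"), new Step("quit", "QUIT")), sut);
    }
}
```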
Test Architecture (1)
● Includes defining the set of Points of Control and Observation (PCOs)
[Diagram: the test controller reads the test script and drives the test execution system, which interacts with the SUT through PCO 1 and PCO 2]
Potential PCOs
● Determining the PCOs of an application can be a challenge.
● Potential PCOs:
● Direct method call (e.g. JUnit)
● User input / output
● Data file input / output (a file-based PCO is sketched below)
● Network ports / interfaces
● Windows registry / configuration files
● Log files
● Temporary files or network ports
● Pipes / shared memory
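To illustrate a PCO other than a direct method call, this sketch observes the SUT through a data-file PCO: a hypothetical SUT writes its result to a file, and the test reads the file back and checks it. Plain JDK I/O only; no specific tool is implied.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FilePcoExample {

    // Hypothetical SUT behaviour: writes a report line to a file.
    static void sutWriteReport(Path out) throws IOException {
        Files.write(out, List.of("balance=300.00"));
    }

    public static void main(String[] args) throws IOException {
        Path report = Files.createTempFile("report", ".txt"); // temporary-file PCO

        sutWriteReport(report);                               // point of control: invoke the SUT

        // Point of observation: read the file the SUT produced and verify it.
        List<String> lines = Files.readAllLines(report);
        boolean pass = lines.equals(List.of("balance=300.00"));
        System.out.println(pass ? "PASS" : "FAIL: " + lines);

        Files.deleteIfExists(report);
    }
}
```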
Potential PCOs (2)
● 3rd-party component interfaces:
● Lookup facilities:
– network: Domain Name Service (DNS), Lightweight Directory Access Protocol (LDAP), etc.
– local / server: database lookup, Java Naming and Directory Interface (JNDI), etc.
● Calls to:
– remote methods (e.g. RPC)
– the operating system
● For the purposes of security testing, all of these PCOs could be points of attack.
Distributed Test Architecture (1)
● May require several local test controllers and a master test controller
[Diagram: a master test controller coordinates the testing of two SUT components (Component 1, Component 2), each reached through its own PCO]
Distributed Test Architecture (2)
● Issues with distributed testing:
● Establishing connections at PCOs
● Synchronization
● Where are pass/fail decisions made?
● Communication among test controllers (a small master/local sketch follows below)
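A small sketch of two of the issues above, where the pass/fail decision is made and how controllers communicate, using only JDK concurrency: local controllers run in threads and report verdicts to the master over a shared queue, and the master makes the overall decision. All names are invented for illustration.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DistributedVerdicts {

    record Verdict(String controller, boolean pass) {}

    public static void main(String[] args) throws InterruptedException {
        // Communication channel between local controllers and the master.
        BlockingQueue<Verdict> channel = new LinkedBlockingQueue<>();

        // Two local controllers, each exercising its own SUT component.
        List<Thread> locals = List.of(
                new Thread(() -> channel.add(new Verdict("component-1", true))),
                new Thread(() -> channel.add(new Verdict("component-2", true))));
        locals.forEach(Thread::start);

        // The master collects one verdict per local controller
        // and makes the overall pass/fail decision.
        boolean overall = true;
        for (int i = 0; i < locals.size(); i++) {
            Verdict v = channel.take();
            System.out.println(v.controller() + ": " + (v.pass() ? "pass" : "fail"));
            overall &= v.pass();
        }
        System.out.println("Overall verdict: " + (overall ? "PASS" : "FAIL"));
    }
}
```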
Choosing a Test Architecture
[Diagram: User → (mouse clicks / keyboard) → Browser → (HTTP / HTML) → Web Server → (SQL) → Database]
Choosing a Test Architecture
● Testing from the user’s point of view:
● Need a test tool to simulate mouse events or keyboard input
● Need to be able to recognize correct web pages
– Small web page changes might require large changes to test scripts.
● Testing without the browser:
● The test script would send HTTP commands to the web server and check the HTTP messages or HTML pages that are returned (sketched below).
● Easier to do, but not quite as “realistic”.
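For the “testing without the browser” option, a minimal sketch using the JDK’s built-in HttpClient (Java 11+): the check here is simply the status code plus a substring of the returned HTML. The URL and expected title are placeholders, not values from the original material.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpLevelTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Send an HTTP command directly to the web server (no browser involved).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/login"))   // placeholder URL
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Check the HTTP message and the returned HTML.
        boolean pass = response.statusCode() == 200
                && response.body().contains("<title>Login</title>");
        System.out.println(pass ? "PASS" : "FAIL (" + response.statusCode() + ")");
    }
}
```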
Test Scripts
● What should the format of a test script be? (one data-driven possibility is sketched below)
● Tool dependent?
● A standard test language?
● A programming language?
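As one concrete answer to the format question, the sketch below treats the test script as plain data, one “action => expected” line per step, parsed by a short program. The format and names are invented for illustration; they are not a standard test language.

```java
import java.util.List;

public class ScriptFormatExample {

    record Step(String action, String expected) {}

    // Parse a plain-text script line of the form "action => expected".
    static Step parse(String line) {
        String[] parts = line.split("=>", 2);
        return new Step(parts[0].trim(), parts[1].trim());
    }

    public static void main(String[] args) {
        List<String> script = List.of(
                "check balance => 500.00",
                "withdraw 200  => 200.00",
                "check balance => 300.00");

        // The same script text could be executed by any tool that understands
        // the format, which keeps the test cases independent of one vendor.
        script.stream().map(ScriptFormatExample::parse)
              .forEach(s -> System.out.println(s.action() + " -> expect " + s.expected()));
    }
}
```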
Test Script Development
● Creating test scripts follows a parallel development process, including:
● requirements
● creation
● debugging
● configuration management
● maintenance
● documentation
● Result: they are expensive to create and maintain
Making the automation decision (1)
● Will the user interface of the application be stable or not?
● To what extent are oracles available?
● To what extent are you looking for delayed-fuse bugs (memory leaks, wild pointers, etc.)?
● Does your management expect to recover its investment in automation within a certain period of time? How long is that period, and how easily can you influence these expectations?
● Are you testing your own company’s code or the code of a client? Does the client want (and is the client willing to pay for) reusable test cases, or will it be satisfied with bug reports and status reports?
● Do you expect this product to sell through multiple versions?
Making the automation decision (2)
● Do you anticipate that the product will be stable when released, or do you expect to have to test Release N.01, N.02, N.03 and other bug-fix releases on an urgent basis after shipment?
● What level of source control has been applied to the code under test? To what extent can old, defective code accidentally come back into a build?
● Are new builds well tested (integration tests) by the developers before they get to the tester?
Making the automation decision (4)
● To what extent have the programming staff used custom controls?
● How likely is it that the next version of your testing tool will have changes in its command syntax and command set?
● What are the logging/reporting capabilities of your tool? Do you have to build these in?
Making the automation decision (5)
● To what extent does the tool make it easy for you to recover from errors (in the product under test), prepare the product for further testing, and re-synchronize the product and the test (get them operating at the same state in the same program)?
● In general, what kind of functionality will you have to add to the tool to make it usable?
http://www.testfocus.co.za/Feature%20articles/july2005.htm
References
● C. Kaner, “Architectures of Test Automation” (2000). Online at: http://www.kaner.com/testarch.html
● C. Kaner, “Improving the maintainability of automated test suites,” Software QA, Vol. 4, No. 4 (1997). Online at: www.kaner.com/pdfs/autosqa.pdf
● J. Bach, “Test automation snake oil,” Proceedings of the 14th International Conference on Testing Computer Software (revised 1999). Online at: www.satisfice.com/articles/test_automation_snake_oil.pdf