
Introduction to Automated Testing

What is Software Testing?

● Examination of a software unit, several integrated software units, or an entire software package by running it
● Execution is based on test cases
● Expectation: reveal faults as failures
● Failure: incorrect execution of the system, usually the consequence of a fault
● Fault/defect/bug: the result of a human error
Objectives of testing

● To find defects before they cause a production system to fail.
● To bring the tested software, after correction of the identified defects and retesting, to an acceptable level of quality.
● To perform the required tests efficiently and effectively, within budgetary and scheduling limitations.
● To compile a record of software errors for use in error prevention (by corrective and preventive actions).
Software Testing Process

[Diagram: a pipeline of stages – Test Planning → Test Design → Test Implementation → Test Execution → Results Verification → Test Library Management]

● Planning – includes completion criteria (the coverage goal)
● Design – approaches for test case selection to achieve the coverage goal
● Implementation – for each test case, determine:
  ● input/output data
  ● state before/after
  ● the test procedure
● Execution – run the tests
● Results verification – pass or fail? Coverage?
● Test Library Management – maintain tests, keep track of relationships, etc.
What is a Test Case?

● A test case is a pair of <input, expected outcome>
● For state-less systems (e.g. a compiler)
  ● Test cases are very simple
  ● The outcome depends solely on the current input
● For state-oriented systems (e.g. an ATM)
  ● Test cases are not that simple
  ● A test case may consist of a sequence of <input, expected outcome> pairs
  ● The outcome depends on both the current state of the system and the current input

ATM example:
< check balance, $500.00 >,
< withdraw, “amount?” >,
< $200.00, “$200.00” >,
< check balance, $300.00 >

● Input may be specified in various ways (see the JUnit sketch below)
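To make the state-oriented case concrete, here is a minimal JUnit sketch (not from the slides): the Account class is a hypothetical stand-in for the ATM back end, and the test encodes the balance/withdraw/balance sequence above.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AtmSequenceTest {

    // Hypothetical stand-in for the system under test.
    static class Account {
        private double balance;
        Account(double opening) { balance = opening; }
        double balance() { return balance; }
        void withdraw(double amount) { balance -= amount; }
    }

    // Mirrors the slide's sequence:
    // <check balance, $500.00>, <withdraw $200.00>, <check balance, $300.00>
    @Test
    public void withdrawalChangesState() {
        Account account = new Account(500.00);
        assertEquals(500.00, account.balance(), 0.001); // outcome depends on state
        account.withdraw(200.00);
        assertEquals(300.00, account.balance(), 0.001); // new state after the input
    }
}

The later assertions only hold because of the earlier steps in the sequence, which is exactly what distinguishes state-oriented test cases from state-less ones.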
Expected Outcome

● The outcome of a program execution may include
  ● a value produced by the program
  ● a state change
  ● a sequence of values which must be interpreted together for the outcome to be valid
● A test oracle is a mechanism that verifies the correctness of program outputs; it must
  ● generate expected results for the test inputs
  ● compare the expected results with the actual results of executing the IUT (implementation under test); a sketch follows below
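As an illustrative sketch of an oracle (the sorting example is my own, not from the slides): for a sort routine, correctness can be verified without hard-coding every expected output, by checking ordering and comparing against a trusted reference sort.

import java.util.Arrays;

public class SortOracle {
    // Oracle: the output is correct iff it is sorted and contains
    // the same elements as the input (checked by sorting a copy of
    // the input with a trusted reference implementation).
    static boolean correct(int[] input, int[] output) {
        for (int i = 1; i < output.length; i++) {
            if (output[i - 1] > output[i]) return false; // not sorted
        }
        int[] reference = input.clone();
        Arrays.sort(reference);                   // trusted reference result
        return Arrays.equals(reference, output);  // same elements
    }

    public static void main(String[] args) {
        int[] in  = {3, 1, 2};
        int[] out = {1, 2, 3};                    // actual result from the IUT
        System.out.println(correct(in, out) ? "pass" : "fail");
    }
}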
Levels of Testing

● Unit testing
  ● Individual program units, such as procedures or methods, tested in isolation
● Integration testing
  ● Modules are assembled to construct larger subsystems, which are then tested
● System testing
  ● Includes a wide spectrum of testing, such as functionality and load
● Acceptance testing
  ● Tests the customer’s expectations of the system
Levels of Testing

● Regression testing
  ● New test cases are not designed
  ● Existing tests are selected, prioritized, and executed
  ● The aim is to ensure that nothing is broken in the new version of the software
When to automate testing? (1)

● The benefits of test automation need to be greater than the (expensive!) costs of automation.
● General rule of thumb: automate when tests are expected to be run many times, e.g. for
  ● regression testing
  ● configuration testing
  ● conformance testing
  ● an “agile” development process
  ● capacity / stress testing
  ● performance measurements
When to automate testing? (2)

● Automated testing is especially beneficial if the tests need to be re-executed “quickly”:
  ● frequent recompiles
  ● a large number of tests
  ● an agile development process
● An automated test can be duplicated to create many instances for capacity / stress testing (see the sketch below).
● Example: the Test-First process in XP
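A hedged sketch of the duplication idea (the oneTestPasses body is a hypothetical placeholder for a real automated test): the same test is submitted many times to a thread pool to generate load.

import java.util.concurrent.*;

public class LoadSketch {
    // Hypothetical stand-in for one automated test against the SUT.
    static boolean oneTestPasses() {
        return 2 + 2 == 4; // replace with a real call to the system under test
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(50);
        CompletionService<Boolean> runs = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < 1000; i++) {
            runs.submit(LoadSketch::oneTestPasses); // 1000 copies of the same test
        }
        int failures = 0;
        for (int i = 0; i < 1000; i++) {
            if (!runs.take().get()) failures++;     // collect pass/fail results
        }
        pool.shutdown();
        System.out.println("failures under load: " + failures);
    }
}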
When NOT to automate

● Initial functional testing
  ● Automated testing is more likely to find bugs introduced by changes to the code or the execution environment than bugs in new functionality.
  ● Automated test scripts may not be ready for the first software release.
● Situations requiring human judgment to determine whether the system is functioning correctly.
Types of Testing Tools
Test Planning and Management
● Create/maintain test plans
● integrate with project plan
● Maintain links to Requirements/Specification
● generate Requirements Test Matrix
● Reports and Metrics on test case execution
● Tracking of history/status of test cases
● defect tracking
Types of Testing Tools
Test Design & Implementation
● Automatic creation of test cases
● Based on test design approaches
– graph based
– data flow analysis
– logic based
– ...
● Very few concrete usable tools
● Random test data generator
● Stubs / mocks (see the sketch below)
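As a minimal illustration of a hand-rolled stub (the PaymentGateway interface and all names are hypothetical), a test double can stand in for a collaborator that is not yet integrated:

// Hypothetical collaborator interface of the unit under test.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// A stub returning canned answers so the unit under test can be
// exercised without the real payment service; the call counter
// adds a little mock-style verification.
class StubGateway implements PaymentGateway {
    int calls = 0;

    @Override
    public boolean charge(String account, double amount) {
        calls++;
        return amount <= 1000.00; // canned rule: refuse large charges
    }
}

An integration test would inject StubGateway where the production gateway normally goes, then assert both on the unit's behavior and on the recorded calls.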
Types of Testing Tools
Test Execution
● Test Drivers and Execution Frameworks
● Run test scripts and report result
● e.g. JUnit
● Runtime test execution assistance
● memory leak checkers
● comparators
Types of Testing Tools
Test Performance Assessment

● Analysis of the effectiveness of test cases for the extent of the system covered
  ● Coverage analyzers
    ● report on various levels of coverage
● Analysis of the effectiveness of test cases for bug detection
  ● mutation testing (see the sketch below)
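To make the mutation-testing idea concrete (this example is mine, not from the slides): a tool seeds a small defect, a “mutant”, into the code; a test “kills” the mutant if it fails on the mutated version, and a suite that leaves mutants alive has a blind spot.

public class MutationSketch {
    // Original code under test.
    static int add(int a, int b) { return a + b; }

    // A mutant: the + operator mutated to -.
    static int addMutant(int a, int b) { return a - b; }

    public static void main(String[] args) {
        // A weak test: add(2, 0) == 2 also passes on the mutant,
        // so this test does not "kill" it.
        System.out.println(addMutant(2, 0) == 2); // true: mutant survives

        // A stronger test: add(2, 3) == 5 fails on the mutant,
        // so this test kills it - evidence of test quality.
        System.out.println(addMutant(2, 3) == 5); // false: mutant killed
    }
}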
Types of Testing Tools
Specialized testing
● Security testing tools
● password crackers
● vulnerability scanners
● packet crafters
● ...
● Performance / Load testing tools
● performance monitors
● load generators
● ...
Types of Test Tools
Capture and Replay

● For user interface testing, one approach to automating tests is, once the system is working, to “record” the input supplied by the user and “capture” the system responses.
● When the next version of the software needs to be tested, “play back” the recorded user input and check whether the responses match those stored in the capture file (a replay sketch follows below).
● Benefits: relatively simple approach, easy to do
● Drawbacks:
  ● very difficult to maintain
  ● specific to one environment
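A hedged sketch of the replay half (the tab-separated capture format, well-formed lines, and the upper-casing SUT are all invented for illustration):

import java.io.IOException;
import java.nio.file.*;
import java.util.function.UnaryOperator;

public class ReplaySketch {
    // Replays a capture file where each line is "input<TAB>expectedResponse"
    // and reports mismatches against the new version of the SUT.
    static int replay(Path captureFile, UnaryOperator<String> sut) throws IOException {
        int failures = 0;
        for (String line : Files.readAllLines(captureFile)) {
            String[] parts = line.split("\t", 2);   // assumes well-formed lines
            String actual = sut.apply(parts[0]);    // play back recorded input
            if (!actual.equals(parts[1])) {         // compare with captured response
                System.out.println("FAIL: " + parts[0] + " -> " + actual
                        + " (expected " + parts[1] + ")");
                failures++;
            }
        }
        return failures;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical SUT: echoes its input in upper case.
        int failures = replay(Path.of("capture.txt"), String::toUpperCase);
        System.out.println(failures == 0 ? "all replayed tests passed"
                                         : failures + " failures");
    }
}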
Tool support at different levels

● Unit testing
● Tools such as JUnit
● Integration testing
● Stubs, mocks
● System testing
● Security, performance, load
testers
● Regression testing
● Test Management tools (e.g.
defect tracking, ...)
What do we need to do automated testing?

● A test script – the test case specification
  ● actions to send to the system under test (SUT)
  ● responses expected from the SUT
  ● how to determine whether a test was successful or not
● A test execution system
  ● a mechanism to read the test script and connect the test case to the SUT
  ● directed by a test controller (a minimal sketch follows below)
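A minimal sketch of such a test execution system, assuming an invented line-based script format (send:/expect: keywords) and a SUT reached through a single method-call PCO:

import java.util.List;
import java.util.function.UnaryOperator;

public class TestControllerSketch {
    // Runs a script of "send:<input>" / "expect:<output>" lines
    // against a SUT exposed through a single method-call PCO.
    static boolean run(List<String> script, UnaryOperator<String> pco) {
        String lastResponse = null;
        for (String line : script) {
            if (line.startsWith("send:")) {
                lastResponse = pco.apply(line.substring(5));  // action to the SUT
            } else if (line.startsWith("expect:")) {
                String expected = line.substring(7);          // expected response
                if (!expected.equals(lastResponse)) {
                    System.out.println("FAIL: got " + lastResponse
                            + ", expected " + expected);
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical SUT behind the PCO: reverses its input string.
        UnaryOperator<String> sut = s -> new StringBuilder(s).reverse().toString();
        boolean pass = run(List.of("send:abc", "expect:cba"), sut);
        System.out.println(pass ? "PASS" : "FAIL");
    }
}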
Test Architecture (1)

● Includes defining the set of Points of Control and Observation (PCOs)

[Diagram: the test controller reads the test script and drives the test execution system, which reaches the SUT through a PCO]

● A PCO could be…
  ● a particular method to call
  ● a device interface
  ● a network port
  ● etc.
Test Architecture (2)

● The test architecture will affect the test script, because it may be significant which PCO is used for an action or response.

[Diagram: the same message m reaches one SUT through its single PCO, while a second SUT offers both PCO 1 and PCO 2, so the script must say which one to use]
Potential PCOs

● Determining the PCOs of an application can be a challenge.
● Potential PCOs:
  ● direct method calls (e.g. JUnit)
  ● user input / output
  ● data file input / output
  ● network ports / interfaces
  ● Windows registry / configuration files
  ● log files
  ● temporary files
  ● pipes / shared memory
Potential PCOs (2)

● 3rd-party component interfaces:
  ● Lookup facilities:
    – network: Domain Name Service (DNS), Lightweight Directory Access Protocol (LDAP), etc.
    – local / server: database lookup, Java Naming and Directory Interface (JNDI), etc.
  ● Calls to:
    – remote methods (e.g. RPC)
    – the operating system
● For the purposes of security testing, all of these PCOs could be a point of attack.
Distributed Test Architecture (1)

● May require several local test controllers and a master test controller.

[Diagram: a master test controller coordinates one local test controller per component; each local controller drives its SUT component through its own PCO]
Distributed Test Architecture (2)

● Issues with distributed testing:
  ● establishing connections at the PCOs
  ● synchronization
  ● where are pass/fail decisions made?
  ● communication among the test controllers
Choosing a test architecture

[Diagram: a typical web stack – User → (mouse clicks / keyboard) → Browser → (HTTP / HTML) → Web Server → (SQL) → Database]
Choosing a Test Architecture

● Testing from the user’s point of view:
  ● need a test tool to simulate mouse events or keyboard input
  ● need to be able to recognize correct web pages
    – small web page changes might require large changes to test scripts
● Testing without the browser:
  ● the test script sends HTTP commands to the web server and checks the HTTP messages or HTML pages that are returned (see the sketch below)
  ● easier to do, but not quite as “realistic”
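As a hedged sketch of the browserless approach using Java's standard HttpClient (the URL and the HTML marker are hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpLevelTest {
    public static void main(String[] args) throws Exception {
        // Send the HTTP request directly, bypassing the browser.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/login")) // hypothetical endpoint
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Check the HTTP status and the returned HTML, not rendered pixels.
        boolean pass = response.statusCode() == 200
                && response.body().contains("<title>Login</title>"); // hypothetical marker
        System.out.println(pass ? "PASS" : "FAIL");
    }
}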
Test Scripts

● What should the format of a test script be?
  ● tool dependent?
  ● a standard test language?
  ● a programming language?
Test Script Development

● Creating test scripts follows a parallel development process, including:
  ● requirements
  ● creation
  ● debugging
  ● configuration management
  ● maintenance
  ● documentation
● Result: test scripts are expensive to create and maintain.
Making the automation decision (1)
• Will the user interface of the application be stable or not?
• To what extent are oracles available?
• To what extent are you looking for delayed-fuse bugs (memory leaks,
wild pointers, etc.)?
• Does your management expect to recover its investment in
automation within a certain period of time? How long is that period
and how easily can you influence these expectations?
• Are you testing your own company’s code or the code of a client?
Does the client want (is the client willing to pay for) reusable test
cases or will it be satisfied with bug reports and status reports?
• Do you expect this product to sell through multiple versions?
Making the automation decision (2)
• Do you anticipate that the product will be stable when released, or do
you expect to have to test Release N.01, N.02, N.03 and other bug fix
releases on an urgent basis after shipment?

• Do you anticipate that the product will be translated to other languages? Will it be recompiled or re-linked after translation (do you need to do a full test of the program after translation)? How many translations and localizations?

• Does your organization make several products that can be tested in similar ways? Is there an opportunity for amortizing the cost of tool development across several projects?
Making the automation decision (3)
• How varied are the configurations (combinations of operating system
version, hardware, and drivers) in your market? (To what extent do
you need to test compatibility with them?)

• What level of source control has been applied to the code under test?
To what extent can old, defective code accidentally come back into a
build?

• How frequently do you receive new builds of the software?

• Are new builds well tested (integration tests) by the developers before
they get to the tester?
Making the automation decision (4)
• To what extent have the programming staff
used custom controls?
• How likely is it that the next version of your
testing tool will have changes in its
command syntax and command set?
• What are the logging/reporting capabilities
of your tool? Do you have to build these in?
Making the automation decision (5)
• To what extent does the tool make it easy for you to recover from errors (in the product under test), prepare the product for further testing, and re-synchronize the product and the test (get them operating at the same state in the same program)?

• In general, what kind of functionality will you have to add to the tool to
make it usable?

• Is the quality of your product driven primarily by regulatory or liability considerations, or by market forces (competition)?

• Is your organization subject to a legal requirement that test cases be demonstrable?
Making the automation decision (6)
• Will you have to be able to trace test cases back to
customer requirements and to show that each
requirement has associated test cases?
• Is your company subject to audits or inspections by
organizations that prefer to see extensive regression
testing?
• If you are doing custom programming, is there a
contract that specifies the acceptance tests? Can you
automate these and use them as regression tests?
• What are the skills of your current staff?
Making the automation decision (7)
• Do you have to make it possible for non-programmers to create automated test cases?

• To what extent are cooperative programmers available within the programming team to provide automation support such as event logs, more unique or informative error messages, and hooks for making function calls below the UI level?

• What kinds of tests are really hard in your application? How would automation make these tests easier to conduct?
Suggested reading

● Henk Coetzee, “Best Practices in Software Test Automation” (2005). Online at: http://www.testfocus.co.za/Feature%20articles/july2005.htm
● C. Kaner, “Architectures of Test Automation” (2000). Online at: http://www.kaner.com/testarch.html
● C. Kaner, “Improving the maintainability of automated test suites,” Software QA, Vol. 4, No. 4 (1997). Online at: http://www.kaner.com/pdfs/autosqa.pdf
● J. Bach, “Test Automation Snake Oil,” Proceedings of the 14th International Conference on Testing Computer Software (revised 1999). Online at: http://www.satisfice.com/articles/test_automation_snake_oil.pdf
