
UNIT V

TEST AUTOMATION
Software test automation – skills needed for automation – scope of automation – design and architecture for automation – requirements for a test tool – challenges in automation – test metrics and measurements – project, progress and productivity metrics.
Software test automation
• Manual software testing involves trying various usage and input combinations, comparing the results to the expected behavior, and recording the observations.

• Developing software to test the software is called test automation.

• Automation testing means using an automation tool to execute your test case suite.

• The automation software can also enter test data, compare expected and actual results, and generate detailed test reports.

• Test automation demands considerable investments of money and resources.

• Automation saves time, as software can execute test cases faster than humans do.

• Test automation can free the test engineers from mundane tasks
and make them focus on more creative tasks.

• Automated tests can be more reliable.

• Automation helps in immediate testing.

• Automation can protect an organization against attrition of test engineers.

• Test automation opens up opportunities for better utilization of global resources.

• Certain types of testing cannot be executed without automation.


• Automation covers the end-to-end testing process, not test execution alone.
Skills needed for automation
• The skills required depend on what generation of automation the company is in.

• 1) Capture / playback and test harness tools (first generation).

• 2) Data driven tools (second generation).

• 3) Action driven (third generation).


1) Capture / Playback and Test Harness Tools:

• One of the most boring and time-consuming activities is rerunning manual tests a number of times.

• Capture/playback tools address this by recording and replaying the test input scripts.

• Tests can be replayed unattended for long hours, especially during regression testing.

• These recorded test scripts can also be edited as needed, i.e., whenever changes are made to the software.

• These tools can even capture human operations, e.g., mouse activity, keystrokes, etc.
2) Data-driven Tools:

• This method helps in developing test scripts that generate the set of input conditions and the corresponding expected outputs.

• This approach takes as much time and effort as the product itself.

• This generation of automation focuses on input and output conditions using the black box testing approach (a small sketch follows).
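
To make the idea concrete, here is a minimal sketch of a data-driven test, assuming Python's standard unittest module; the add() function and the in-memory data table are hypothetical stand-ins for the product under test and its test data, which would typically come from a CSV or spreadsheet.

```python
import unittest


def add(a, b):
    """Hypothetical stand-in for the function under test."""
    return a + b


# Data table: each row is (input_a, input_b, expected_output).
# In a real data-driven suite this would usually be read from a CSV/Excel file.
TEST_DATA = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]


class TestAddDataDriven(unittest.TestCase):
    def test_add_from_data_table(self):
        # The same test logic is driven by every row of the data table.
        for a, b, expected in TEST_DATA:
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)


if __name__ == "__main__":
    unittest.main()
```
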
3) Action-driven Tools:

• This technique enables testers to create automated tests.

• There are no input and expected output conditions required for running the tests.

• All actions that appear on the application are automatically tested, based on a generic set of controls defined for automation.
Skills needed, by generation of automation:

Automation – first generation (skills for test case automation):
• Scripting languages
• Record-playback tools usage

Automation – second generation (skills for test case automation):
• Scripting languages
• Programming languages
• Knowledge of data generation techniques

Automation – third generation, skills for test case automation:
• Scripting languages
• Programming languages
• Design and architecture of the product under test
• Usage of the product under test

Automation – third generation, skills for framework:
• Programming languages
• Design and architecture skills for framework creation
• Generic test requirements for multiple products
• Usage of the framework
Scope of Automation in Testing
1. Identifying the Types of Testing Amenable to Automation

• Stress, reliability, scalability, and performance testing: these types of testing require the test cases to be run from a large number of different machines for an extended period of time, such as 24 hours, 48 hours, and so on.

• Regression tests: these tests are repetitive in nature. The test cases are executed multiple times during the product development phases.

• Functional tests: these kinds of tests may require a complex set up and thus require specialized skill, which may not be available on an ongoing basis. Automating these once, using the expert skill sets, can enable less-skilled people to run the tests on an ongoing basis.
2. Automating Areas Less Prone To Change

Automation should target those areas where requirements go through few or no changes. Normally, changes in requirements impact scenarios and new features, not the basic functionality of the product.
3. Automate Tests That Pertain to Standards

One of the tests that a product may have to undergo is compliance with standards. For example, a product providing a JDBC interface should satisfy the standard JDBC tests.

Automating for standards provides a dual advantage. Test suites developed for standards are not only used for product testing but can also be sold as test tools in the market.

4. Management Aspects in Automation

Prior to starting automation, adequate effort has to be spent on obtaining management commitment. Since it takes significant effort to develop and maintain automated tools, obtaining management commitment is an important activity. Return on investment is another aspect to be considered seriously.
DESIGN AND ARCHITECTURE FOR AUTOMATION

• Design and architecture is an important aspect of automation. As in product development, the design has to represent all requirements in modules and in the interactions between modules.
External Modules

• There are two modules that are external to the automation framework: the TCDB and the defect DB. All the test cases, the steps to execute them, and the history of their execution are stored in the TCDB.

• The test cases in the TCDB can be manual or automated. The interface shown by thick arrows in the architecture diagram represents the interaction between the TCDB and the automation framework, which applies only to automated test cases.

• The defect DB, also called the defect database or defect repository, contains details of all the defects found in the various products tested in a particular organization. It contains the defects and all the related information; test engineers submit the defects for manual test cases.

• For automated test cases, the framework can automatically submit the defects to the defect DB during execution.
Scenario and Configuration File Modules

• Scenarios are nothing but information on “how to execute a particular test case”.

• A configuration file contains a set of variables that are used in automation. A configuration file is important for running the test cases for various execution conditions and for running the tests for various input and output conditions and states.
Test Cases and Test Framework Modules

• A test case is an object of execution for the other modules in the architecture and does not represent any interaction by itself.

• A test framework is a module that combines “what to execute” and “how it has to be executed”. It picks up the specific test cases that are automated from the TCDB, picks up the scenarios, and executes them.

• The test framework is considered the core of automation design. It subjects the test cases to different scenarios. The test framework contains the main logic for interacting with, initiating, and controlling all modules (a minimal sketch of this loop follows).

• A test framework can be developed by the organization internally or can be bought from a vendor.
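
A minimal sketch of what the core loop of such a framework might look like; the tcdb, scenario, defect_db, and results objects are hypothetical interfaces used only to illustrate how “what to execute” and “how to execute” are combined.

```python
# Minimal sketch of a test framework core loop; all interfaces are hypothetical.

def run_framework(tcdb, scenario, defect_db, results):
    """Run every automated test case from the TCDB according to the scenario."""
    for test_case in tcdb.automated_test_cases():      # "what to execute"
        steps = scenario.steps_for(test_case)           # "how to execute"
        try:
            outcome = test_case.run(steps)
            results.record(test_case.id, outcome)
            if not outcome.passed:
                # For automated test cases the framework itself raises the defect.
                defect_db.submit(test_case.id, outcome.details)
        except Exception as exc:
            # One failing test case must not stop the whole run.
            results.record(test_case.id, "ERROR: %s" % exc)
```
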
Tools and Result Modules

• When a test framework performs its operations, a set of tools may be required. For example, when test cases are stored as source code files in the TCDB, they need to be extracted and compiled by build tools. In order to run the compiled code, certain runtime tools and utilities may be required, for example, IP packet simulators.

• The results that come out of the tests run by the test framework should not overwrite the results from the previous test runs. The history of all the previous test runs should be recorded and kept as archives.
Report Generator and Reports / Metrics Modules

• Once the results of a test run are available, the next step is to prepare the test reports and metrics. Preparing reports is a complex and time-consuming effort and hence should be part of the automation design.

• There should be customized reports such as an executive report, which gives a very high-level status; technical reports, which give a moderate level of detail of the test run; and detailed or debug reports, which are generated for developers to debug the failed test cases and the product.

• The module that takes the necessary inputs and prepares a formatted report is called a report generator (see the sketch below). Once the results are available, the report generator can also generate metrics.
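
A minimal sketch of a report generator, assuming the results are available as a list of dictionaries with hypothetical "id", "status", and "log" fields; it derives an executive summary, a technical report, and a debug report from the same results.

```python
# Minimal report-generator sketch; the result record format is an assumption.

def generate_reports(results):
    total = len(results)
    passed = sum(1 for r in results if r["status"] == "PASS")

    # Executive report: very high-level status only.
    executive = "Executive report: %d/%d test cases passed" % (passed, total)

    # Technical report: one line per test case.
    technical = "\n".join("%s: %s" % (r["id"], r["status"]) for r in results)

    # Debug report: logs of failed test cases only, for developers.
    debug = "\n".join(
        "%s\n%s" % (r["id"], r["log"]) for r in results if r["status"] != "PASS"
    )
    return executive, technical, debug


if __name__ == "__main__":
    sample = [
        {"id": "TC-1", "status": "PASS", "log": ""},
        {"id": "TC-2", "status": "FAIL", "log": "assertion failed at step 3"},
    ]
    for report in generate_reports(sample):
        print(report)
        print("---")
```
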
SOFTWARE AUTOMATION FRAMEWORK
What is a Framework?

• A framework is a combination of a set of protocols, rules, standards, and guidelines that can be incorporated or followed as a whole so as to leverage the benefits of the scaffolding provided by the framework.
Generic requirements for test tool/framework
Requirement 1: No hard coding in the test suite

• The variables for the test suite are called configuration variables. The file in which all the variable names and their associated values are kept is called the configuration file.

• The variables belonging to the test tool and the test suite need to be separated so that the user of the test suite need not worry about test tool variables.

• Changing test tool variables without knowing their purpose may impact the results of the tests.

• Providing an inline comment for each of the variables will make the test suite more usable and may avoid improper usage of variables.

Example: a well-documented configuration file (a sketch follows).
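
A minimal sketch of such a configuration file and the code that reads it, assuming an INI-style file parsed with Python's configparser; the section and variable names are hypothetical.

```python
# Minimal sketch: a documented configuration file with test-suite and
# test-tool variables kept in separate sections (names are hypothetical).
import configparser

CONFIG_TEXT = """
[test_suite]
# Machine on which the product under test runs
product_host = 192.168.1.10
# User account used by the test cases
product_user = testuser
# Number of records used by data-driven test cases
record_count = 1000

[test_tool]
# Variables below belong to the test tool, not the test suite;
# change them only if you know their purpose.
timeout_seconds = 300
log_level = INFO
"""

config = configparser.ConfigParser()
config.read_string(CONFIG_TEXT)

# Test programs read only the [test_suite] section, so nothing is hard coded.
print(config["test_suite"]["product_host"])
print(config.getint("test_tool", "timeout_seconds"))
```
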
Requirement 2: Test case/ suite expandability

Points to be considered during expansion are

• Adding a test case should not affect other test cases

• Adding a test case should not result in retesting the complete test suite

• Adding a new test suite to the framework should not affect existing test suites
Requirement 3: Reuse of code for different types of testing and test cases

Points to be considered for reuse of code:

• The test suite should do only what a test is expected to do; the test framework needs to take care of the “how”.

• The test programs need to be modular to encourage reuse of code.
Requirement 4: Automatic setup and cleanup
• When test cases expect a particular setup to run, it is very difficult in the manual method to remember each of them and do the setup accordingly.

• Hence, each test program should have a “setup” program that creates the necessary setup before executing the test cases.

• The test framework should have the intelligence to find out which test cases are executed and call the appropriate setup program.

• A setup for one test case may work negatively for another test case. Hence, it is important not only to create the setup but also to clean it up soon after the test case has executed (see the sketch below).
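
A minimal sketch of automatic setup and cleanup, assuming Python's unittest; the temporary workspace is a hypothetical example of “a particular setup”.

```python
# Minimal sketch: setUp() creates the setup, tearDown() undoes it,
# so no test case can disturb the next one.
import shutil
import tempfile
import unittest


class TestWithSetupAndCleanup(unittest.TestCase):
    def setUp(self):
        # Create the setup needed by the test case before it runs.
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # Undo the setup so it cannot affect the next test case.
        shutil.rmtree(self.workdir, ignore_errors=True)

    def test_uses_prepared_workspace(self):
        self.assertTrue(self.workdir)


if __name__ == "__main__":
    unittest.main()
```
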
Requirement 5: Independent test cases

• Each test case should be capable of being executed alone; there should be no dependency between test cases, such as test case 2 having to be executed after test case 1, and so on.

• This requirement enables the test engineer to select and execute any test case at random without worrying about other dependencies.
Requirement 6: Test case dependency

• Making a test case dependent on another makes it necessary for a particular test case to be executed before or after the dependent test case is selected for execution.

Requirement 7: Insulating test cases during execution

• Insulating test cases from the environment is an important requirement for the framework or test tool. At the time of test case execution, there could be events, interrupts, or signals in the system that may affect the execution.
Requirement 8: Coding standards and directory structure

• Coding standards and a proper directory structure for a test suite help new engineers understand the test suite quickly and help in maintaining the test suite. Incorporating coding standards also improves the portability of the code.
Requirement 9: Selective execution of test cases

• A test framework contains many test suites.

• A test suite contains many test programs.

• A test program contains many test cases.

• The selection of test cases need not be in any order, and any combination should be allowed.

• Allowing test engineers to select test cases reduces test execution time.

• These selections are normally done as part of the scenario file.

• The selection of test cases can be done dynamically just before running the test cases, by editing the scenario file (a parsing sketch follows the example below).
Example scenario file entries and their meaning:

• test-pgm-name 2,4,1,7-10  →  test cases 2, 4, 1, and 7-10 are selected for execution

• test-pgm-name  →  all test cases are executed
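
A minimal sketch of how a framework might parse such a selection entry; the “2,4,1,7-10” format is taken from the example above, while the parsing code itself is an assumption.

```python
# Minimal sketch: parse a scenario-file selection such as "2,4,1,7-10"
# into an ordered list of test case numbers.

def parse_selection(spec):
    selected = []
    for token in spec.split(","):
        token = token.strip()
        if "-" in token:
            # A range such as "7-10" expands to 7, 8, 9, 10.
            start, end = (int(x) for x in token.split("-"))
            selected.extend(range(start, end + 1))
        else:
            selected.append(int(token))
    return selected


print(parse_selection("2,4,1,7-10"))   # [2, 4, 1, 7, 8, 9, 10]
```
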


Requirement 10: Random execution of test cases

• A test engineer may sometimes need to select a test case randomly from a list of test cases.

• Giving a set of test cases and expecting the test tool to select the test case is called random execution of test cases.

• The test engineer selects a set of test cases from a test suite; selecting a random test case from the given list is done by the test tool.

Example scenario file entries and their meaning:

• Random
  test-pgm-name 2,1,5
  →  the test tool selects one out of test cases 2, 1, 5 for execution

• Random
  test-pgm-name1 (2,1,5)
  test-pgm-name2
  test-pgm-name3
  →  the test engineer wants one out of test programs 1, 2, 3 to be randomly executed; if test-pgm-name1 is selected, then one out of test cases 2, 1, 5 is randomly executed; if test program 2 or 3 is selected, then all test cases in that program are executed
Requirement 11: Parallel execution of test cases

• In multi-tasking and multi-processing operating systems, it is possible to create several instances of the tests and make them run in parallel.

• Parallel execution simulates the behavior of several machines running the same test and hence is very useful for performance and load testing (a sketch follows the example below).

Example scenario file entries and their meaning:

• Instance, 5
  test-pgm-name1 (3)
  →  5 instances of test case 3 in test-pgm-name1 are executed

• Instance, 5
  test-pgm-name1 (2,1,5)
  test-pgm-name2
  test-pgm-name3
  →  5 instances of the test programs are created; within each of the five instances, test programs 1, 2, 3 are executed in sequence
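
A minimal sketch of running several instances of the same test case in parallel, assuming a hypothetical run_test_case() function; a real framework might instead spawn a separate process per test program.

```python
# Minimal sketch: 5 parallel instances of test case 3, as in the example above.
import multiprocessing
import time


def run_test_case(test_case_id):
    time.sleep(1)                      # placeholder for real test logic
    return "test case %d: PASS" % test_case_id


if __name__ == "__main__":
    instances = 5                      # "Instance, 5" in the scenario file
    with multiprocessing.Pool(processes=instances) as pool:
        results = pool.map(run_test_case, [3] * instances)
    print(results)
```
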
Requirement 12: Looping the test cases

• Reliability testing requires the test cases to be executed in a loop. There are two types of loops available (a sketch follows the example below).

• Iteration loop: gives the number of iterations of a particular test case to be executed.

• Timed loop: keeps executing the test cases in a loop until the specified time duration is reached.

Example scenario file entries and their meaning:

• Repeat_loop, 50
  test-pgm-name1 (3)
  →  test case 3 in test-pgm-name1 is repeated 50 times

• Time_loop, 5 Hours
  test-pgm-name1 (2,1,5)
  test-pgm-name2
  test-pgm-name3
  →  test cases 2, 1, 5 from test-pgm-name1 and all test cases from test programs 2 and 3 are executed in order, in a loop, for 5 hours
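
A minimal sketch of the two loop types, assuming a hypothetical run_test_case() function.

```python
# Minimal sketch: iteration loop ("Repeat_loop") and timed loop ("Time_loop").
import time


def run_test_case(test_case_id):
    return "PASS"                      # placeholder for real test logic


def repeat_loop(test_case_id, iterations):
    # Run the test case a fixed number of times.
    for _ in range(iterations):
        run_test_case(test_case_id)


def time_loop(test_case_ids, hours):
    # Keep running the test cases until the time duration is reached.
    end_time = time.time() + hours * 3600
    while time.time() < end_time:
        for test_case_id in test_case_ids:
            run_test_case(test_case_id)


repeat_loop(3, 50)
# time_loop([2, 1, 5], 5)   # would run the test cases in a loop for 5 hours
```
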
Requirement 13: Grouping of test scenarios

• Group scenarios allow the selected test cases to be executed in order, at random, and in a loop, all at the same time.

• The grouping of scenarios allows several tests to be executed in a predetermined combination of scenarios.

Ex: scenario file.


Requirement 14: Test case execution based on previous results

• One of the effective practices is to select the test cases that were not executed and the test cases that failed in the past, and focus more on them. Some of the common scenarios that require test cases to be executed based on earlier results are listed below (followed by a small sketch).

1. Rerun all test cases which were executed previously;

2. Resume the test cases from where they were stopped the
previous time;

3. Rerun only failed/not run test cases; and

4. Execute all test cases that were executed previously.
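
A minimal sketch of scenario 3 (rerun only failed/not run test cases), assuming the previous run's results are kept as a simple dictionary of statuses.

```python
# Minimal sketch: select test cases for rerun based on the previous results.

previous_results = {
    "TC-1": "PASS",
    "TC-2": "FAIL",
    "TC-3": "NOT RUN",
}

# Rerun only failed/not run test cases.
rerun_list = [
    tc for tc, status in previous_results.items() if status in ("FAIL", "NOT RUN")
]
print(rerun_list)   # ['TC-2', 'TC-3']
```
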


Requirement 15: Remote execution of test cases

• The central machine that allocates tests to multiple machines and coordinates the execution and results is called the test console or test monitor.

• In the absence of a test console, not only does executing tests on multiple machines become difficult, but collecting the results from all those machines also becomes difficult.

Figure: role of the test console and the multiple execution machines (a sketch of the idea follows).
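
A minimal sketch of a test console allocating test programs to execution machines and collecting the results; run_on_machine() is a hypothetical stand-in for the real remote-execution mechanism (for example, ssh or an agent on each machine).

```python
# Minimal sketch: round-robin allocation of test programs to machines,
# with results collected back at the test console.
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle


def run_on_machine(machine, test_program):
    # Placeholder: a real console would trigger remote execution here.
    return (machine, test_program, "PASS")


def test_console(machines, test_programs):
    assignments = list(zip(cycle(machines), test_programs))  # round-robin allocation
    with ThreadPoolExecutor(max_workers=len(machines)) as pool:
        results = list(pool.map(lambda a: run_on_machine(*a), assignments))
    return results


print(test_console(["host-a", "host-b"], ["pgm1", "pgm2", "pgm3"]))
```
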


Requirement 16: Automatic archival of test data

• The test cases have to be repeated the same way as before, with the same scenarios, the same configuration variables and values, and so on.

• This requires that all the related information for the test cases be archived. The archived information includes:

1. What configuration variables were used;
2. What scenario was used; and
3. What programs were executed and from what path (see the sketch below).

Requirement 17: Reporting scheme

• Every test suite needs to have a reporting scheme from which meaningful reports can be extracted.

• As we have seen in the design and architecture of the framework, the report generator should have the capability to look at the results file and generate various reports.

• Audit logs are very important for analyzing the behavior of a test suite and a product. A reporting scheme should include:

1. When the framework, scenario, test suite, test program, and each test case were started/completed;
2. Result of each test case;
3. Log messages;
4. Category of events and log of events; and
5. Audit reports.
Requirement 18: Independent of languages

• A framework or test tool should provide a choice of languages and scripts that are popular in the software development area.

• A framework should be independent of programming languages and scripts.

• A framework should provide a choice of programming languages, scripts, and their combinations.

• A framework or test suite should not force a particular language/script.

• A framework or test suite should work with different test programs written using different languages and scripts.
Requirement 19: Portability to different platforms

• With the advent of platform-independent languages and technologies, there are many products in the market that are supported on multiple OS and language platforms.

• The framework and its interfaces should be supported on various platforms.

• Portability to different platforms is a basic requirement for a test tool/test suite.

• The language/script used in the test suite should be selected carefully so that it runs on different platforms.

• The language/script written for the test suite should not contain platform-specific calls.
CHALLENGES IN AUTOMATION
• Test automation presents some very unique challenges. The most important of these challenges is management commitment.

• Automation should not be viewed as a panacea for all problems, nor should it be perceived as a quick-fix solution for all the quality problems in a product.

• The main challenge is that, because of the heavy front-loading of test automation costs, management starts to look for an early payback.

• Successful test automation endeavors are characterized by unflinching management commitment, a clear vision of the goals, and the ability to set realistic short-term goals that track progress with respect to the long-term vision.
TEST METRICS AND MEASUREMENTS
Definition:

• Metrics are the source of measurement.

• Metrics derive information from raw data with a view to helping in decision making.
  Examples of raw data: number of defects, number of test cases, effort, schedule.

• Metrics are needed to know test case execution productivity and to estimate the test completion date (a small sketch follows).

• Effort: the actual time spent on a particular activity or phase.

• Schedule: the elapsed days for a complete set of activities.
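
A minimal sketch of turning raw measurements into metrics; the figures and the “test cases executed per person-day” formula are assumed examples of how execution productivity might be expressed, not values from the text.

```python
# Minimal sketch: derive simple metrics from raw measurements (values are made up).

measurements = {
    "test_cases_executed": 120,
    "defects_found": 18,
    "effort_person_days": 10,    # effort: actual time spent on the activity
    "elapsed_days": 14,          # schedule: elapsed days for the set of activities
}

execution_productivity = (
    measurements["test_cases_executed"] / measurements["effort_person_days"]
)
defects_per_100_test_cases = (
    100.0 * measurements["defects_found"] / measurements["test_cases_executed"]
)

print("Test case execution productivity: %.1f test cases/person-day" % execution_productivity)
print("Defects per 100 executed test cases: %.1f" % defects_per_100_test_cases)
```
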


Steps in a Metrics Program
Step 1: Decide what measurements are important and collect data accordingly. Examples of measurements: effort spent on testing, number of defects, number of test cases.

Step 2: Define how to combine data points or measurements to provide meaningful metrics. A particular metric can use one or more measurements.

Step 3: Work out the operational requirements for the measurements: who should collect the measurements, who should receive the analysis, and so on. This step helps to decide on the appropriate periodicity for the measurements, as well as to assign operational responsibility for collecting, recording, and reporting the measurements.

• Daily measurements → number of test cases executed, number of defects found, defects fixed, and so on.

• Weekly measurements → for example, how many test cases produced the defects found during the week.

Step 4: Analyze the metrics to identify both positive areas and improvement areas of product quality.

Step 5: Take the necessary action and follow up on the action.

Step 6: Continue with the next iteration of the metrics program, measuring a different set of measurements, leading to more refined metrics that address different issues.
THE END
