
UNIT V

TEST AUTOMATION

Software test automation – skill needed for automation – scope of automation – design
and architecture for automation – requirements for a test tool – challenges in automation –
Test metrics and measurements – project, progress and productivity metrics.
Software Test Automation:
Developing software to test the software is called test automation.
Automation saves time, as software can execute test cases faster than humans can.
The time saved can be used effectively by test engineers to
• Develop additional test cases to achieve better coverage
• Perform some esoteric or specialized tests like ad hoc testing
• Perform some extra manual testing
Advantages:
• Test automation can free the test engineers from mundane tasks and make them focus on more
creative tasks
• Automated tests can be more reliable
• Automation helps in immediate testing
• Automation can protect an organization against attrition of test engineers
• Test automation opens up opportunities for better utilization of global resources
• Certain types of testing cannot be executed without automation
• Automation means automating the end-to-end activities, not test execution alone
Activities involved in automation:
• Picking up the right product build
• Choosing the right configuration
• Performing installation
• Running the tests
• Generating the right test data
• Analyzing the test results
• Filing the defects in the defect repository
Test data generators:
They are scripts that produce test data to maximize coverage of permutations and combinations of
inputs, along with the expected outputs for result comparison.
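As an illustration (not part of the original text), a minimal test data generator in Python might enumerate combinations of input values and pair each combination with an expected result for later comparison; the login inputs and the oracle function below are hypothetical.

```python
# Minimal sketch of a test data generator (illustrative only).
# It enumerates combinations of input values and pairs each
# combination with the expected result for later comparison.
from itertools import product

usernames = ["admin", "guest", ""]          # sample input domain (assumed)
passwords = ["correct-pw", "wrong-pw", ""]  # sample input domain (assumed)

def expected_login_result(user, pw):
    # Hypothetical oracle: only a non-empty user with the right password succeeds.
    return user != "" and pw == "correct-pw"

test_data = [
    {"username": u, "password": p, "expected": expected_login_result(u, p)}
    for u, p in product(usernames, passwords)
]

for row in test_data:
    print(row)   # each row can drive one automated test case
```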
Terms used in automation:
Test case: It is a set of sequential steps to execute a test operating on a set of predefined inputs to
produce certain expected outputs.
Types of test cases:
1. Manual – executed manually
2. Automated – executed using automation
Same test case being used for different types of testing:
1. Checks whether log in works – Functionality
2. Repeat log in operation in a loop for 48 hours – Reliability
3. Perform log in from 10000 clients – Load / Stress testing
4. Measure time taken for log in operations in different conditions – Performance
5. Run log in operation from a machine running in Japanese language – Internationalization

Dimensions of a test case:
1. What operations have to be tested – product-specific feature
2. How the operations have to be tested (scenario) – framework-specific requirement
Test suite: It is a set of test cases combined with a set of scenarios.
Framework for test automation:
Skills needed for Automation:
Generations of Automation:
1. First generation – record and playback
2. Second generation – data-driven
3. Third generation – action-driven
What to Automate, Scope of Automation
• Identifying the types of testing amenable to automation
Stress, reliability, scalability and performance testing – test cases belonging to these
testing types are the first candidates for automation.
Regression tests – they are repetitive in nature and the test cases are executed multiple
times, so automating them saves time and effort.
Functional tests – these kinds of tests may require a complex setup and specialized skills;
automating them enables less skilled people to run these tests.

• Automating areas less prone to change
The basic functionality of the product rarely changes; hence it should be considered first
while automating.
• Automate tests that pertain to standards
One of the tests that products may have to undergo is compliance to standards. These tests
undergo relatively little change. Automating these tests provides a dual advantage: test suites
developed for standards are not only useful for product testing but can also be sold as test tools
for the market.
Testing for standards has certain legal and organizational requirements. To certify the
software or hardware, a test suite is developed and handed over to different companies. The
certification suites are executed every time by the supporting organization before the release of
software and hardware. This is called certification testing and requires perfectly compliant results
every time the tests are executed.
• Management aspects in automation
Prior to starting automation, adequate effort has to be spent to obtain management
commitment. Automation generally is a phase involving a large amount of effort and is not
necessarily a one-time activity. Since it involves significant effort to develop and maintain
automated tools, obtaining management commitment is an important activity.
Return on investment is another aspect to be considered seriously. Effort estimates for
automation should give a clear indication to the management of the expected return on
investment.
Design and Architecture for Automation
Components of test automation
Architecture for test automation involves two major heads:
1. A test infrastructure that covers a test case database
2. A defect database or defect repository

External Modules:
There are two modules that are external modules to automation
• TCDB
• Defect DB
All the test cases, the steps to execute them, and the history of their execution are stored in
the TCDB. The test cases in the TCDB can be manual or automated. The TCDB interacts with the
automation framework only for automated test cases; manual test cases do not need any
interaction between the TCDB and the framework.
The Defect DB contains details of all the defects that are found in the various products tested in a
particular organization. It contains the defects and all the related information. Test engineers submit
the defects for manual test cases. For automated test cases, the framework can automatically
submit the defects to the Defect DB during execution.
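A minimal sketch of this automatic defect submission, assuming a simple in-memory defect repository; the DefectDB class, its fields, and run_and_report are hypothetical names, not part of any real tool.

```python
# Illustrative sketch only: how a framework might auto-file defects
# into a defect repository when an automated test case fails.
import datetime

class DefectDB:
    def __init__(self):
        self.defects = []

    def submit(self, test_case_id, summary, details):
        defect = {
            "id": len(self.defects) + 1,
            "test_case": test_case_id,
            "summary": summary,
            "details": details,
            "filed_on": datetime.datetime.now().isoformat(),
            "status": "NEW",
        }
        self.defects.append(defect)
        return defect["id"]

def run_and_report(test_case_id, test_func, defect_db):
    try:
        test_func()
        return "PASS"
    except AssertionError as exc:
        # Automated submission replaces the manual defect-filing step.
        defect_db.submit(test_case_id, f"{test_case_id} failed", str(exc))
        return "FAIL"

db = DefectDB()
def failing_test():
    assert 2 + 2 == 5, "login page did not load"
print(run_and_report("TC_LOGIN_001", failing_test, db), db.defects)
```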
Scenario and Configuration file modules:
Scenarios are information on “how to execute a particular test case.”
The configuration file contains a set of variables that are used in automation. The variables could
be for the test framework, for other modules in automation such as tools and metrics, for the test
suite, for a set of test cases, or for a particular test case. The configuration file is important for
running the tests for various input and output conditions and states. The values of variables in this
configuration file can be changed dynamically to achieve different execution, input, output and state
conditions.
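A small sketch of how such a configuration file could look and be overridden at run time, using Python's standard configparser; the section names and variables are assumptions for illustration.

```python
# Sketch of a configuration file driving automation (assumed format).
# Variables can be overridden at run time to exercise different
# input, output, and state conditions without editing test cases.
from configparser import ConfigParser

config_text = """
[framework]
max_parallel = 4
archive_results = true

[testcase.login]
server = test-server-1
language = en
clients = 100
"""

config = ConfigParser()
config.read_string(config_text)

# Dynamically change a variable to re-run the same test in a new condition,
# e.g. an internationalization run in the Japanese locale.
config["testcase.login"]["language"] = "ja"

print(config["testcase.login"]["language"])  # -> ja
```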
Test cases and test framework modules:
Test case means the automated test cases that are taken from TCDB and executed by the
framework. Test case is an object for execution for other modules in the architecture and does not
represent any interaction by itself.
A test framework is a module that combines “what to execute” and “ how they have to be
executed”. It picks up the specific test cases that are automated from TCDB and picks up the
scenarios and execute them. The variables and their defined values are picked up by the test
framework and the test cases are executed for those values.
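A minimal sketch of this "what to execute" plus "how to execute" combination; the test case, scenarios, and variable values below are hypothetical stand-ins for what a real framework would load from the TCDB, the scenario files, and the configuration file.

```python
# Sketch: a framework loop combining test cases (what) with scenarios (how).
def login_test(variables):
    # Placeholder automated test case taken from the TCDB.
    assert variables["clients"] < 10000, "too many clients"

test_cases = {"TC_LOGIN_001": login_test}
scenarios = {"load": {"clients": 10000}, "smoke": {"clients": 1}}

def run_suite(test_cases, scenarios, config_overrides=None):
    results = []
    for tc_id, tc in test_cases.items():
        for name, variables in scenarios.items():
            # Configuration values can override scenario defaults.
            values = {**variables, **(config_overrides or {})}
            try:
                tc(values)
                results.append((tc_id, name, "PASS"))
            except AssertionError:
                results.append((tc_id, name, "FAIL"))
    return results

print(run_suite(test_cases, scenarios))
```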
Tools and Results Modules:
When a test framework performs its operations, a set of tools may be required. For example, when
test cases are stored as source code files in the TCDB, they need to be extracted and compiled by
build tools; in order to run the compiled code, certain runtime tools and utilities may be required.
When a test framework executes a set of test cases with a set of scenarios for the different values
provided by the configuration file, the results for each test case, along with the scenarios and
variable values, have to be stored for future analysis and action. The results that come out of the
tests run by the test framework should not overwrite the results from previous test runs. The
history of all the previous test runs should be recorded and kept as archives.
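A short sketch of such result archival, assuming each run is written to its own time-stamped directory so earlier results are never overwritten; the directory layout and file names are illustrative only.

```python
# Sketch: write each run's results to a time-stamped directory so that
# previous runs remain available as archives for future analysis.
import json, os, datetime

def archive_results(results, base_dir="results_archive"):
    run_dir = os.path.join(
        base_dir, datetime.datetime.now().strftime("run_%Y%m%d_%H%M%S")
    )
    os.makedirs(run_dir, exist_ok=True)
    with open(os.path.join(run_dir, "results.json"), "w") as fh:
        json.dump(results, fh, indent=2)
    return run_dir

print(archive_results([{"test_case": "TC_LOGIN_001", "status": "PASS"}]))
```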
Report Generator and Report/Metrics Modules:
Once the results of a test run are available, the next step is to prepare the test reports and metrics.
Preparing reports is a complex and time-consuming effort and hence it should be part of the
automation design. There should be customized reports such as
• Executive report – gives a high-level status
• Technical report – gives a moderate level of detail
• Debug report – generated for developers to debug the failed test cases and the product
The periodicity of the reports differs: daily, weekly, monthly, and milestone reports.
The module that takes the necessary inputs and prepares a formatted report is called a report
generator. Once the results are available, the report generator can also generate metrics. All the
reports and metrics that are generated are stored in the reports/metrics module of automation for
future use and analysis.
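A minimal sketch of a report generator supporting the three detail levels described above; the result fields and level names are assumptions for illustration.

```python
# Sketch: one report generator, three levels of detail (executive/technical/debug).
def generate_report(results, level="executive"):
    total = len(results)
    failed = [r for r in results if r["status"] == "FAIL"]
    if level == "executive":          # high-level status only
        return f"{total - len(failed)}/{total} test cases passed"
    if level == "technical":          # moderate level of detail
        return "\n".join(f"{r['test_case']} [{r['scenario']}]: {r['status']}"
                         for r in results)
    if level == "debug":              # full detail for developers
        return "\n".join(f"{r['test_case']} [{r['scenario']}]: {r['status']} "
                         f"log={r.get('log', '')}" for r in results)
    raise ValueError(f"unknown report level: {level}")

sample = [
    {"test_case": "TC_LOGIN_001", "scenario": "smoke", "status": "PASS"},
    {"test_case": "TC_LOGIN_001", "scenario": "load", "status": "FAIL", "log": "timeout"},
]
print(generate_report(sample, "executive"))
print(generate_report(sample, "debug"))
```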

Requirements for a Test Tool:

1. No hard coding in the test suite (see the sketch after this list)
2. Test case / suite expandability
3. Reuse of code for different types of testing, test cases
4. Automatic setup and cleanup
5. Independent test cases
6. Test case dependency
7. Insulating test cases during execution
8. Coding standards and directory structure
9. Selective execution of test cases
10. Random execution of test cases
11. Parallel execution of test cases
12. Looping the test cases
13. Grouping of test scenarios
14. Test case execution based on previous results
15. Remote execution of test cases
16. Automatic archival of test data
17. Reporting scheme
18. Independent of languages
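As an illustration of requirement 1, the sketch below contrasts a hard-coded test with a parameterized one that picks up environment details from configuration; the connect() helper and all values are hypothetical placeholders.

```python
# Illustrative contrast for "no hard coding in the test suite".
def connect(server, user, password):
    print(f"connecting to {server} as {user}")

# Hard-coded (undesirable): the test breaks when the environment changes.
def test_login_hardcoded():
    connect("192.168.1.10", user="admin", password="secret123")

# Parameterized (preferred): the same test runs unchanged in any environment.
def test_login(config):
    connect(config["server"], user=config["user"], password=config["password"])

test_login({"server": "test-server-1", "user": "admin", "password": "secret123"})
```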
Selecting a test tool:
Selecting the test tool is an important aspect of test automation because of the following reasons
• Free tools are not well supported and get phased out soon
• Developing in-house tools takes time
• Test tools sold by vendors are expensive
• Test tools require strong training
• Test tools generally do not meet all the requirements for automation
• Not all test tools run on all platforms
Criteria for selecting Test tools:
1. Meeting requirements
2. Technology expectations
3. Training
4. Management aspects
Issues in selecting a testing tool:
Steps for tool selection and deployment:

1. Identify the test suite requirements among the generic requirements. Add other requirements, if any
2. Make sure the experiences and lessons discussed earlier are taken care of
3. Collect the experiences of other organizations which used similar test tools
4. Keep a checklist of questions to be asked to the vendors on cost/effort/support
5. Identify a list of tools that meet the above requirements
6. Evaluate and shortlist one or a set of tools and train all test developers on the tool
7. Deploy the tool across the test teams after training all potential users of the tool
Challenges in Automation:
• Management commitment
• Automation takes time and effort and pays off in the long run
• It requires significant initial outlay of money
• It involves a steep learning curve for test engineers
• Management should have patience and persist with automation
Test metrics and measurements :
In order to track a project's performance and monitor its progress
• The right parameters must be measured
• The right analysis must be done on the data measured
• The result of the analysis must be presented in an appropriate form to the stakeholders to
enable them to make the right decisions on improving product or process quality

Effort – the actual time that is spent on a particular activity or phase
Elapsed days – the difference between the start of an activity and the completion of the activity
Schedule – elapsed days for a complete set of activities
Steps in a metrics program:
Importance of Metrics in Testing:
Metrics are needed to know test case execution productivity and to estimate the test completion date.
Testing alone can’t determine the date at which the product can be released. The number of days
to fix all outstanding defects is another crucial data point. The number of days needed for
defect fixes needs to take into account the “outstanding defects waiting to be fixed” and a
projection of “how many more defects will be unearthed from testing in future cycles.” The
defect trend collected over a period of time gives a rough estimate of the defects that will come
through future test cycles. Hence metrics help in predicting the number of defects that can be
found in future test cycles.
Days needed to complete testing = Total test cases yet to be executed / Test case execution productivity

Total days needed for defect fixes = (Outstanding defects yet to be fixed + Defects that can be found in future test cycles) / Defect fixing capability

Days needed for release = Max (Days needed for testing, Days needed for defect fixes)

A more refined estimate also accounts for regressing the outstanding defect fixes:

Days needed for release = Max (Days needed for testing, (Days needed for defect fixes + Days needed for regressing outstanding defect fixes))
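A worked sketch of these formulas with made-up numbers, only to show how the quantities combine:

```python
# Sample numbers (invented) plugged into the release-date formulas above.
import math

test_cases_remaining = 400        # total test cases yet to be executed
exec_productivity = 50            # test cases executed per day
outstanding_defects = 120
projected_new_defects = 80        # expected from future test cycles
defect_fix_capability = 10        # defects fixed per day
regression_days = 5               # days to regress outstanding defect fixes

days_for_testing = math.ceil(test_cases_remaining / exec_productivity)       # 8
days_for_fixes = math.ceil((outstanding_defects + projected_new_defects)
                           / defect_fix_capability)                          # 20
days_for_release = max(days_for_testing, days_for_fixes + regression_days)   # 25
print(days_for_testing, days_for_fixes, days_for_release)
```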
Metrics in testing help in identifying
• When to make the release
• What to release
• Whether the product is being released with known quality
Types of Metrics:
Metrics can be classified as (i) product metrics and (ii) process metrics.
Product metrics can be further classified as
1. Project metrics – indicate how the project is planned and executed
2. Progress metrics – track how the different activities of the project are progressing
3. Productivity metrics – help in planning and estimating testing activities
Project Metrics:
A typical project starts with requirements gathering and ends with product release. All the
phases that fall in between these points need to be planned and tracked. In the planning cycle,
the scope of the project is finalized. The project scope gets translated into a size estimate, which
specifies the quantum of work to be done. This size estimate gets translated into an effort estimate
for each of the phases and activities by using the available productivity data. This initial
effort is called the baselined effort.
As the project progresses, if the scope of the project changes or if the available productivity
numbers turn out to be incorrect, then the effort estimates are re-evaluated, and this re-evaluated
effort estimate is called the revised effort. The estimates can change based on the frequency of
changing requirements and other parameters that impact the effort.
Effort and schedule are two factors to be tracked for any phase or activity. In an ideal situation,
if the effort is tracked closely and met, then the schedule can be met. The schedule can also be
met by adding more effort to the project. If the release date (schedule) is met by putting in more
effort, then the project planning and execution cannot be considered successful. If the planned
effort and actual effort are the same but the schedule is not met, then too the project cannot be
considered successful. Hence it is a good idea to track both effort and schedule in project
metrics.
Inputs to project metrics
1. The different activities and the initial baselined effort and schedule for each of the activities
2. The actual effort and time taken for the various activities
3. The revised estimate of effort and schedule
Progress Metrics:
Progress metrics largely reflect the defects of a product.
Defects get detected by the testing team and get fixed by the development team. The defect
metrics are further classified into test defect metrics and development defect metrics.
How many defects have already been found and how many more defects may get unearthed are
two parameters that determine product quality and its assessment.
The progress chart gives the pass rate and fail rate of executed test cases, pending test cases,
and test cases that are waiting for defects to be fixed. A good progress chart shows progress in
testing as well as improvement in the quality of the product. On the other hand, if the chart shows
a trend that, as the weeks progress, the not-run cases are not reducing in number, the blocked cases
are increasing in number, or the pass cases are not increasing, then it clearly points to quality
problems in the product that prevent the product from being ready for release.
Test Defect Metrics:
Defect priority and defect severity
A common defect definition and classification
• Defect find rate
• Defect fix rate
• Outstanding defects rate
• Priority outstanding rate
• Defect trend
• Defect classification trend
• Weighted defects trend
• Defect cause distribution
Development Defect Metrics:
• Component-wise defect distribution
• Defect density and defect removal rate
• Age analysis of outstanding defects
• Introduced and reopened defects trend

Defects per KLOC = (Total defects found in the product) / (Total executable added, modified, and deleted (AMD) lines of code, in KLOC)

Defect removal rate = ((Defects found by verification activities + Defects found in unit testing) / (Defects found by test teams)) * 100
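A small worked sketch of these two formulas with sample numbers (all values invented for illustration):

```python
# Sample numbers plugged into the development defect metrics above.
total_defects_found = 180
kloc = 45                     # executable AMD lines of code, in KLOC
defects_per_kloc = total_defects_found / kloc                    # = 4.0

defects_by_verification = 60  # reviews, inspections, etc.
defects_in_unit_testing = 40
defects_by_test_teams = 80
defect_removal_rate = ((defects_by_verification + defects_in_unit_testing)
                       / defects_by_test_teams) * 100            # = 125.0
print(defects_per_kloc, defect_removal_rate)
```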
Productivity Metrics:
Productivity metrics combine several measurements and parameters with the effort spent on the product.
They help in finding out the capability of the team, and also serve other purposes such as
1. Estimating for the new release
2. Finding out how well the team is progressing, understanding the reasons for variations in results
3. Estimating the number of defects that can be found
4. Estimating release date and quality
5. Estimating the cost involved in the release
Metrics:
• Defects per 100 Hours of testing
• Test cases executed per 100 Hours of testing
• Test cases developed per 100 Hours of testing
• Defects per 100 Test cases
• Defects per 100 Failed Test cases
• Test Phase Effectiveness – the defects found in various phases are plotted and analyzed
• Closed Defect Distribution – the testing team tracks the defects and analyzes how they are closed
Defects per 100 hours of testing = ( Total defects found in the product for a period / Total
hours spent to get those defects ) * 100

Test cases executed per 100 hours of testing = ( Total test cases executed for a period /
Total hours spent in test execution ) * 100

Test cases developed per 100 hours of testing = ( Total test cases developed for a period /
Total hours spent in test case development ) * 100

Defects per 100 test cases = ( Total defects found for a period / Total test cases executed
for the same period ) * 100

Defects per 100 failed test cases = ( Total defects found for a period / Total test cases failed
due to those defects ) * 100
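A short sketch computing the productivity metrics above from sample numbers; all figures are invented for illustration:

```python
# Sample numbers plugged into the "per 100" productivity formulas above.
def per_100(numerator, denominator):
    return (numerator / denominator) * 100

defects_found = 45
hours_of_testing = 300
test_cases_executed = 900
hours_of_execution = 300
test_cases_developed = 150
hours_of_development = 250
failed_test_cases = 60

print(per_100(defects_found, hours_of_testing))            # defects per 100 hours of testing
print(per_100(test_cases_executed, hours_of_execution))    # test cases executed per 100 hours
print(per_100(test_cases_developed, hours_of_development)) # test cases developed per 100 hours
print(per_100(defects_found, test_cases_executed))         # defects per 100 test cases
print(per_100(defects_found, failed_test_cases))           # defects per 100 failed test cases
```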
