
Unit 5

Controlling and Monitoring
Software Test Automation
• Test automation
• In software testing, test automation is the use of software separate from the
software being tested to control the execution of tests and the comparison of
actual outcomes with predicted outcomes.
• Manual Testing
• Manual testing is a type of software testing in which test cases are executed
manually by a tester without using any automated tools.
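To make the definition concrete, here is a minimal sketch of an automated test using Python's built-in unittest framework. The add function and its expected value are hypothetical; the point is that the tool, not a human, controls execution and compares actual outcomes with predicted ones.

```python
import unittest

def add(a, b):
    # Hypothetical function under test (the "software being tested").
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_returns_expected_sum(self):
        # The framework compares the actual outcome with the predicted outcome.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()  # the test tool controls execution and reports results
```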
Comparison
• Definition: Automation testing uses automation tools to execute test cases; in manual testing, test cases are executed by a human tester without automation tools.
• Processing time: Automated testing is significantly faster than the manual approach; manual testing is time-consuming and ties up human resources.
• Exploratory testing: Automation does not allow random, exploratory testing; exploratory testing is possible in manual testing.
• Initial investment: The initial investment in automated testing is higher, though the ROI is better in the long run; the initial investment in manual testing is comparatively lower, but its ROI is lower than automation's in the long run.
• Reliability: Automated testing is reliable because it is performed by tools and scripts, so there is no tester fatigue; manual testing is less accurate because of the possibility of human error.
• UI change: Even a trivial change in the UI of the application under test (AUT) forces automated test scripts to be modified; small changes such as a new id or class on a button would not stop a manual tester.
• Investment: Automation requires investment in testing tools and automation engineers; manual testing requires investment in human resources.
• Cost-effectiveness: Automation is not cost effective for low-volume regression; manual testing is not cost effective for high-volume regression.
Comparison
• Test report visibility: With automation testing, all stakeholders can log into the automation system and check test execution results; manual test results are usually recorded in Excel or Word and are not readily available.
• Human observation: Automated testing involves no human observation, so it cannot give assurance of user-friendliness or a positive customer experience; manual testing allows human observation, which helps in delivering a user-friendly system.
• Performance testing: Performance tests such as load testing, stress testing and spike testing must be run with an automation tool; performance testing is not feasible manually.
• Parallel execution: Automated tests can be executed on different operating platforms in parallel, reducing test execution time; manual tests can be run in parallel only by adding human resources, which is expensive.
• Batch testing: Multiple automated test scripts can be batched for nightly execution; manual tests cannot be batched.
• Programming knowledge: Programming knowledge is a must for automation testing; no programming is needed for manual testing.
• Set up: Automation testing requires a more complex test execution setup; manual testing needs a more straightforward setup.
• Engagement: Automated execution is done by tools, so it stays accurate and never gets bored; repetitive manual test execution can get boring and error-prone.
Skills needed for an automation tester
• Focus on analytical thinking:
• Once the business team provides the business requirement document, the
automation testing team should focus on understanding every aspect of the
feature very well from an automation perspective.
• The automation testing team needs to think and identify areas of the functionality
which can or cannot be automated and define a detailed automation test strategy.
• Understanding of programming languages:
• Most automated test tools use programming languages such as Java, Python, Perl and VBScript, so the automation tester needs to be proficient in these languages. The automation tester's thought process should be to identify and cover all the modules that demand automation.
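As a minimal sketch of the kind of scripting this proficiency enables, here is a short Python example using Selenium WebDriver; it assumes a locally available Chrome driver, and the URL and check are purely illustrative.

```python
from selenium import webdriver

# Assumes a Chrome browser and driver are available on this machine.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com")   # open the page under test
    assert "Example" in driver.title    # compare actual outcome with expected
finally:
    driver.quit()                       # always release the browser
```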
Skills needed for an automation tester
• Expertise in creating test scripts:
• There is a wide range of automation frameworks in the market. Some expect the tester to have sufficient programming knowledge to write automated test scripts, whereas in others the test scripts are written in plain English and do not require an understanding of back-end logic and coding.
• Many organizations today use the Cucumber framework for test automation. In Cucumber, the test scripts are designed in plain English (Gherkin), so sufficient knowledge of Selenium WebDriver is enough to create the test scripts, as sketched below.
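A minimal sketch of this plain-English style, using behave (a Python BDD tool analogous to Cucumber) purely as an assumed example; the feature text and step names are hypothetical, and in practice the scenario lives in a separate .feature file that behave runs against these step definitions.

```python
# Hypothetical feature text (normally stored in login.feature):
#   Scenario: Successful login
#     Given the login page is open
#     When the user submits valid credentials
#     Then the dashboard is displayed

from behave import given, when, then

@given("the login page is open")
def step_open_login(context):
    context.page = "login"        # placeholder for real Selenium navigation

@when("the user submits valid credentials")
def step_submit(context):
    context.page = "dashboard"    # placeholder for real form submission

@then("the dashboard is displayed")
def step_check(context):
    assert context.page == "dashboard"
```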
Skills needed for an automation tester
• Well versed with agile, DevOps and continuous delivery:
• Demand for automation testing is increasing as new-age agile and DevOps methodologies replace the waterfall model.
• Since agile involves frequent changes, it is essential to have an automation testing process in place. Automation testers can automate the test scripts for a module so that the team can respond to frequent customer-induced requirement changes.
• Maintain good communication and interaction with stakeholders:
• Good communication and collaboration skills are essential for automation testers. This matters most before and during the testing phase, when automation testers have to interact with developers, business analysts, feature engineers (who possess excellent domain-specific knowledge) and all other stakeholders.
Skills needed for an automation tester
• Ability to assess and mitigate risk:
• Though test automation seems like a strategic step in the agile world, there is always risk associated with the test automation process.
• If the interfaces change after the automation test scripts have been prepared, test execution can produce irrelevant results. Similar problems can occur when the business logic changes. Supporting such changes incurs additional cost, may require modifying test data, and can impact other test cases as well.
• Online automation testing courses and certifications:
• A certification is always an added advantage for an automation tester.
• It helps build a strong profile and enhances the tester's knowledge of automation testing.
Scope of Automation
• The following are some generic tips for identifying the scope (breadth) of automation.
• Identify the types of testing amenable to automation:
• Stress, reliability, scalability and performance testing: test cases belonging to these testing types are the first candidates for automation.
• Regression tests: these are repetitive in nature; since the test cases are executed multiple times, automating them saves time and effort.
• Functional tests: these tests may require a complex setup and specialized skills; automating them enables less skilled people to run them.
Scope of Automation
• Automating areas less prone to change
• The basic functionality of the product rarely changes, so it should be considered first when automating.
• Prefer areas that go through less change.
• User interfaces go through many changes, so avoid automating them.
• Automate tests that pertain to standards
• For example, a product providing a standard JDBC interface should satisfy the JDBC tests.
• Automating tests for a standard creates a new opportunity to sell the test suite as a commercial tool.
Scope of Automation
• Management aspects in automation
• Prior to starting automation, adequate effort has to be spent on obtaining management commitment. Automation is generally a phase involving a large amount of effort and is not necessarily a one-time activity. Since developing and maintaining automated tools involves significant effort, obtaining management commitment is an important activity.
• Return on investment is another aspect to be considered seriously. Effort
estimates for automation should give a clear indication to the management
on the expected return on investment.
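A minimal sketch of the kind of ROI estimate that can be presented to management; the effort figures below are hypothetical placeholders, and real numbers would come from the project's own effort estimates.

```python
# Hypothetical effort figures in person-hours; replace with real project estimates.
automation_build_cost = 400      # one-time cost to develop the automated suite
automation_run_cost = 5          # cost to run the automated suite once
manual_run_cost = 60             # cost to run the same tests manually once
maintenance_per_cycle = 10       # upkeep of the scripts per test cycle

def roi_after(cycles: int) -> float:
    """Return ROI as (savings - investment) / investment after N test cycles."""
    investment = automation_build_cost + maintenance_per_cycle * cycles
    savings = (manual_run_cost - automation_run_cost) * cycles
    return (savings - investment) / investment

# ROI turns positive roughly where cumulative savings exceed the investment.
for cycles in (5, 10, 20):
    print(cycles, round(roi_after(cycles), 2))
```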
Design and Architecture for Automation
• Design and architecture is an important aspect of automation.
• In the accompanying architecture figure, thin arrows represent internal interfaces and the direction of flow, and thick arrows show the external interfaces.
Design and Architecture for Automation
• Architecture for test automation involves two major heads:
1. A test infrastructure that covers a test case database
2. A defect database or defect repository
• The different interaction modules are as follows,
• External Modules
• TCDB
• Defect DB
• All the test cases, the steps to execute them, and the history of their execution are stored in the TCDB. The test cases in the TCDB can be manual or automated. The interfaces shown by thick arrows represent the interaction between the TCDB and the automation framework, and exist only for automated test cases; manual test cases do not need any interaction between the TCDB and the framework.
Design and Architecture for Automation
• Defect DB contains details of all the defects that are found in various products that are
tested in a particular organization. It contains defects and all the related information.
Test engineers submit the defects for manual test cases. For automated test cases, the
framework can automatically submit the defects to the defects DB during execution.
• Scenario and Configuration file modules:
• Scenarios are information on “how to execute a particular test case”
• The configuration file contains a set of variables used in automation. The variables could be for the test framework, for other automation modules such as tools and metrics, for the test suite, for a set of test cases, or for a particular test case.
• The configuration file is important for running the tests under various input, output and state conditions. The values of the variables in this configuration can be changed dynamically to achieve different execution, input, output and state conditions.
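A minimal sketch of what such a configuration file might look like, here as an INI file read with Python's configparser; the file name, sections and variable names are assumptions for illustration only.

```python
import configparser

# Hypothetical contents of automation.ini:
#   [framework]
#   browser = chrome
#   timeout_seconds = 30
#   [test_suite]
#   environment = staging
#   retry_count = 2

config = configparser.ConfigParser()
config.read("automation.ini")

# Values can be changed dynamically to run the same tests under
# different input, output and state conditions.
browser = config.get("framework", "browser", fallback="chrome")
timeout = config.getint("framework", "timeout_seconds", fallback=30)
```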
Design and Architecture for Automation
• Test cases and test framework modules:
• Test case means the automated test cases that are taken from TCDB and
executed by the framework.
• A test framework is a module that combines “what to execute” and “how
they have to be executed”.
• It picks up the automated test cases from the TCDB, picks up the scenarios, and executes them. The variables and their defined values are picked up by the test framework, and the test cases are executed for those values.
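A minimal sketch of how such a framework might combine "what to execute" (test cases) with "how to execute" (scenarios) under configured values; all names and data here are hypothetical stand-ins, not part of any particular tool.

```python
# Hypothetical in-memory stand-ins for the TCDB, scenarios and configuration.
test_cases = {"TC-001": lambda env: env == "staging"}   # test id -> executable check
scenarios = {"TC-001": "run against the staging server"}
config = {"environment": "staging"}

def run_framework():
    """Combine test cases (what to execute) with scenarios (how to execute)."""
    results = {}
    for test_id, test_fn in test_cases.items():
        print(f"{test_id}: {scenarios[test_id]}")
        passed = test_fn(config["environment"])   # execute with configured values
        results[test_id] = "PASS" if passed else "FAIL"
    return results

print(run_framework())   # {'TC-001': 'PASS'}
```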
Design and Architecture for Automation
• Tools and Results Modules:
• When a test framework performs its operations, a set of tools may be required. These tools help in performing the various test-related activities.
• When test cases are stored as source code files in the TCDB, they need to be extracted and compiled by build tools. In order to run the compiled code, certain runtime tools and utilities may be required, for example a user login simulator.
• When a test framework executes a set of test cases with a set of scenarios for the different values provided by the configuration file, the results for each test case, along with its scenario and variable values, have to be stored for future analysis and action.
• The results of a test run should not overwrite the results of previous runs. The history of all previous test runs should be recorded and kept as archives.
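A minimal sketch of one way to keep every run's results instead of overwriting them, by writing each run to a timestamped file; the directory layout and result format are assumptions.

```python
import json
from datetime import datetime
from pathlib import Path

def archive_results(results: dict, archive_dir: str = "results") -> Path:
    """Write one test run's results to a new timestamped file, preserving history."""
    Path(archive_dir).mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    path = Path(archive_dir) / f"run-{stamp}.json"
    path.write_text(json.dumps(results, indent=2))
    return path

archive_results({"TC-001": "PASS", "TC-002": "FAIL"})
```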
Design and Architecture for Automation
• Report Generator and Report/Metrics Modules:
• Once the results of a test run are available, the next step is to prepare the test reports and metrics. Preparing reports is a complex and time-consuming effort, and hence it should be part of the automation design. There should be customized reports such as:
• Executive report – gives a high-level status
• Technical report – gives a moderate level of detail
• Debug report – generated for developers to debug the failed test cases and the product
• Reports are produced with different periodicities, such as daily, weekly, monthly and milestone reports.
• The module that takes the necessary inputs and prepares a formatted report is called a
report generator. Once the results are available, the report generator can generate
metrics. All the reports and metrics that are generated are stored in the reports /
metrics module of automation for future use and analysis.
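A minimal sketch of a report generator that summarizes per-test results into a short executive-style report; the input format matches the hypothetical archive sketch above and is not a prescribed schema.

```python
def generate_report(results: dict) -> str:
    """Turn raw per-test results into a short, formatted summary report."""
    total = len(results)
    passed = sum(1 for status in results.values() if status == "PASS")
    pass_rate = 100.0 * passed / total if total else 0.0
    lines = [f"Executive report: {passed}/{total} passed ({pass_rate:.1f}%)"]
    lines += [f"  {tid}: {status}" for tid, status in sorted(results.items())]
    return "\n".join(lines)

print(generate_report({"TC-001": "PASS", "TC-002": "FAIL"}))
```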
Test metrics and Measurements
• A Metric is a quantitative measure of the degree to which a system, system
component, or process possesses a given attribute.
• Metrics can be defined as “STANDARDS OF MEASUREMENT”.
• For example, "kilogram" is a metric for measuring the attribute "weight". Similarly, in software, consider "How many issues are found in a thousand lines of code?": the number of issues is one measurement and the number of lines of code is another measurement; the metric is derived from these two measurements.
• Test metrics example:
• How many defects exist within the module?
• How many test cases are executed per person?
• What is Test coverage %?
Test metrics and Measurements
• Measurement is the quantitative indication of extent, amount,
dimension, capacity, or size of some attribute of a product or process.
• Test Measurement example: Total number of defects.
• The diagram below describes the difference between measurement and metrics.
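A minimal sketch of how a metric is derived from two measurements, following the defects-per-thousand-lines example above; the numbers are hypothetical.

```python
# Two measurements (hypothetical values for illustration).
defects_found = 45          # measurement 1: number of defects
lines_of_code = 12_000      # measurement 2: size of the code base

# The metric combines both measurements: defects per thousand lines of code.
defect_density = defects_found / (lines_of_code / 1000)
print(f"Defect density: {defect_density:.1f} defects per KLOC")   # 3.8
```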
Types of software testing metrics:
• Project metrics – indicate how the project is planned and executed
• Both effort and schedule are tracked in project metrics.
• Progress metrics – track how the different activities of the project are progressing
• Progress metrics are largely based on the defects found in the product.
• Productivity metrics – help in planning and estimating testing activities
• Examples: estimating the release date and quality, and estimating the cost involved in the release.
Test Completion Criteria
• One of the most difficult questions to answer when testing a program is determining when to
stop, since there is no way of knowing if the error just detected is the last remaining error. In
fact, in anything but a small program, it is unreasonable to expect that all errors will eventually be
detected.
• The completion criteria typically used in practice are both meaningless and counterproductive.
The two most common criteria are these:
• Stop when the scheduled time for testing expires.
• Stop when all the test cases execute without detecting errors; that is, when the test cases are unsuccessful.
• The first criterion is useless because you can satisfy it by doing absolutely nothing; it does not measure the quality of the testing. The second criterion is equally useless because it, too, is independent of the quality of the test cases. Furthermore, it is counterproductive because it subconsciously encourages you to write test cases that have a low probability of detecting errors.
Test Completion Criteria
• More meaningful completion criteria include:
• All the planned tests that were developed have been executed and passed.
• All specified coverage goals have been met.
• The detection of a specific number of defects has been accomplished.
Software Configuration Management
• In Software Engineering, Software Configuration Management(SCM) is a
process to systematically manage, organize, and control the changes in the
documents, codes, and other entities during the Software Development
Life Cycle.
• The primary reasons for implementing a software configuration management system are:
• Multiple people work on software that is continually being updated.
• Multiple versions, branches and authors may be involved in a software project, with a geographically distributed team working concurrently.
• Changes in user requirements, policy, budget and schedule need to be accommodated.
• The software should be able to run on various machines and operating systems.
Software Configuration Management
• Tasks in SCM process
• Configuration Identification
• Identification of configuration items such as source code modules, test cases, and requirements specifications.
• Baselines
• Informally, a baseline marks a version that is ready for release.
• A baseline is a formally accepted version of a software configuration item. It is designated and fixed at a specific point in time during the SCM process and can only be changed through formal change control procedures.
• Change Control
• Change control is a procedural method that ensures quality and consistency when changes are made to a configuration object. In this step, the change request is submitted to the software configuration manager.
Software Configuration Management
• Configuration Status Accounting
• Configuration status accounting tracks each release during the SCM process. This stage
involves tracking what each version has and the changes that lead to this version.
• Configuration Audits and Reviews
• Software configuration audits verify that the software product satisfies the baseline requirements. They ensure that what is built is what is delivered.
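A minimal sketch of the kind of record a change-control and status-accounting step might keep; the fields and status values are illustrative assumptions, not a prescribed SCM schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    """One change request tracked through the change-control procedure."""
    item: str                     # configuration item, e.g. a source module
    description: str
    status: str = "submitted"     # submitted -> approved -> implemented -> verified
    history: List[str] = field(default_factory=list)

    def transition(self, new_status: str) -> None:
        # Status accounting: record every change that leads to the current version.
        self.history.append(f"{self.status} -> {new_status}")
        self.status = new_status

cr = ChangeRequest(item="login_module.py", description="Update password policy")
cr.transition("approved")
cr.transition("implemented")
print(cr.status, cr.history)
```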
Review program
• A review is a group meeting whose purpose is to evaluate a software artifact or a set of software artifacts; alternatively, a review is a systematic examination of a document by one or more people with the main aim of finding and removing errors early in the software development life cycle.
• Reviews are used to verify documents such as requirements, system designs, code,
test plans and test cases.
• The general goals for the reviewers are to:
• identify problem components or components in the software artifact that need improvement;
• identify components of the software artifact that do not need improvement;
• identify specific errors or defects in the software artifact (defect detection);
• ensure that the artifact conforms to organizational standards.
Types of Review
• Software Peer Reviews:
• This type of review is conducted by the author of the software together with colleagues, so that the technical content and quality of the work can be evaluated.
• The different types of software peer reviews are enumerated below:
• Code review: The source code of the software is examined. The code is checked for bugs, and the bugs are removed.
• Inspection: A formal type of review in which a person goes through a defined set of instructions in order to find defects. A number of reviewers can be involved in this type of review.
• Walkthrough: A process in which the author of the software and other associates gather in one place and discuss the software's defects. Questions are raised, comments are made, and all queries regarding the software are answered. Conclusions are drawn once all members are satisfied.
• Technical review: A team of qualified personnel examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards.
Types of Review
• Software Management Reviews:
• Management representatives are responsible for this type of review. The status of the work is evaluated and decisions are made about subsequent activities. This review is very important for making major decisions regarding the software.
• Software audit review
• A software audit review, or software audit, is a type of software review in which one or more auditors who are not members of the software development organization conduct "an independent examination of a software product, software process, or set of software processes to assess compliance with specifications, standards, contractual agreements, or other criteria".
Defect prevention
• Defect prevention is defined as a set of measures to ensure that defects detected so far do not appear or occur again. A coordinator is mainly responsible for facilitating communication among team members and for planning and devising defect prevention guidelines.
• Methods of defect prevention:
• Software requirement analysis:
• The main cause of defects in software products is errors in software requirements and design. Both are important and should be analyzed efficiently and with sufficient focus.
Defect prevention
• Review and inspection:
• Review and inspection are both essential and integral parts of software development. They are considered powerful tools that can be used to identify and remove defects, if present, before they occur and impact production.
• Defect logging and documentation:
• After analysis and review, records should be maintained that completely describe each defect. These records can then be used to gain a better understanding of the defects. Only after understanding a defect can effective measures be taken to resolve it, so that the defect is not carried forward to the next phase.
• Root cause analysis:
• Root cause analysis is the analysis of the main cause of a defect: it examines what triggered the defect. After analyzing the main cause, one can find the best way to avoid the occurrence of similar defects in the future.
The Test Maturity Model (TMM)
• The Test Maturity Model (TMM) in software testing is a framework for
assessing the software testing process with the intention of improving it.
• Its goal is to improve software quality and the efficiency of the testing processes.
• Five Levels of TMM
• Below are the five different levels that help in achieving the Test Maturity:
• Level 1: Initialization
• At this level, we are able to run the software without any hindrances or blocks.
• There are no exactly defined testing processes.
• Quality checks are not done before the software release.
The Test Maturity Model (TMM)
• Level 2: Definition
• Develop testing and debugging goals and policies
• This level distinguishes testing from debugging; they are considered distinct activities.
• The testing phase comes after coding.
• A primary goal of testing is to show that the software meets its specifications.
• Basic testing methods and techniques are in place
The Test Maturity Model (TMM)
• Level 3: Integration
• Integration of testing into the software life cycle
• Level 4: Measurement and Management
• All the testing procedures become part of the software life cycle, including reviews of requirements analysis, design documents, and code.
• Integration and unit testing are performed as part of coding here.
The Test Maturity Model (TMM)
• Level 5: Optimization
• Testing effectiveness and costs can be monitored
• Testing can be fine-tuned and continuously improved
• Quality control and Defect prevention are practiced
• Process reuse is practiced
