
Chapter 5

Automated Tools and Measurements


Manual Testing:
• In manual testing, testers execute test cases by hand, without using any automation tools.
• It requires the tester to play the role of an end user.
• Any new application must be tested manually before its testing can be automated.
• Manual testing requires more effort, but it is necessary in order to check automation feasibility.
• Manual testing does not require knowledge of any testing tool.
• One of the fundamentals of software testing is that "100% automation is not possible". This makes manual testing imperative.
Advantages of manual testing:
1. It is preferable for products with short life cycles.
2. It is preferable for products whose GUIs change constantly.
3. It requires less time and expense to begin productive manual testing.
4. Automation cannot replace human intuition, inference, and inductive reasoning.
5. Automation cannot change course in the middle of a test run to examine something that had not previously been considered.
6. Automated tests are more easily fooled than human testers.


Disadvantages of manual testing:
1. It requires more time, more resources, or sometimes both.
2. Performance testing is impractical in manual testing.
3. It is less accurate.
4. Executing the same tests again and again is time-consuming as well as tedious.
5. It is not suitable for large-scale projects or time-bound projects.
6. Batch testing is not possible; human interaction is mandatory for each and every test execution.
7. The scope of a manual test case is very limited.
8. Comparing large amounts of data is impractical.
9. Checking the relevance of a search operation is difficult.
10. Processing change requests during software maintenance takes more time.


Automation Testing
• Automation testing is a software testing technique that uses special automated testing tools to execute a test case suite.
• It is a method in which software tests and other sets of repeatable tasks can be performed without human interaction.
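
To illustrate, here is a minimal sketch of an automated test written in Python with the pytest runner (an assumed choice of tool; the add function is a hypothetical example, not from the slides):

```python
# A hypothetical function under test.
def add(a, b):
    return a + b

# Test functions discovered and executed automatically by pytest,
# with no human interaction needed during the run.
def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-2, -3) == -5
```

Such a suite can be run unattended (for example overnight) with a single command such as `pytest`.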
Need for Automation Testing
• Manually testing all workflows, all fields, and all negative scenarios consumes time and money.
• It is difficult to test multilingual sites manually.
• Test automation does not require human intervention; you can run automated tests unattended (overnight).
• Test automation increases the speed of test execution.
• Automation helps increase test coverage.
• Manual testing can become boring and hence error-prone.
Advantages of Automation Testing
• Increased accuracy
• One of the main benefits of automated testing is that it can increase accuracy, as automated testing is less likely to be affected by human error.
• Faster execution
• Automated testing can also lead to faster execution of tests, because tests can run concurrently instead of serially. Running tests concurrently means more tests run in a shorter amount of time.
• Reduced costs
• When tests are automated, the need for manual testers is reduced. In addition, the time needed to execute tests is reduced, leading to savings in terms of both time and money.
• More trustworthy results
• Automated testing can also lead to more reliable results, because tests are run automatically and with greater frequency.
Advantages of Automation Testing
• Increased efficiency
• Automated testing can help improve developer productivity by automating
tasks that would otherwise have to be done manually.
• For example, you can configure your continuous integration (CI) system to
automatically execute and monitor the results of your automated tests each
time a new feature or change is introduced into your application. This will help
ensure that any issues in the recent changes are identified and fixed as quickly
as possible.
• Improved scalability
• Automated tests can be used on many devices and configurations, making it
easier to test more things at once.
• For example, automated tests can be written to measure the performance of
your application on different devices or browsers. This allows you to more
easily test the different variations in which your application is being served
and ensure that these are running as expected across a variety of end-user
devices.
Disadvantages of Automation Testing
• Complexity
• Automated tests can take longer to develop than manual tests,
especially if they are not well designed.
• They can also be more challenging to integrate into your development workflow.
• High initial costs
• One of the main drawbacks of automated testing is that it initially
takes a significant amount of time and money to implement.
However, this investment can often be recouped very quickly in terms
of improved developer productivity and more trustworthy results.
• It needs to be rewritten for every new environment
• When you make a change in an environment, the related automated tests need to be updated before they will pass again.
Disadvantages of Automation Testing
• Generates false positives and negatives
• Automated tests can sometimes fail even when there is no actual issue present. Your tests may also generate false negatives if they are designed only to verify that something exists and not that it works as expected.
• Difficult to design tests that are both reliable and maintainable
• Designing a comprehensive suite of automated tests is no small task. The tests need to be reliable enough that they can be run frequently and consistently without giving you false positives or negatives.
• Cannot be used on GUI elements (e.g., graphics, sound files)
• While automated tests can be used to test most functionality of your
application, they are not suited to testing things like graphics or sound
files. This is because computerized tests typically use textual descriptions
to verify the output. Therefore, if you try using an automated test on a
graphic or audio file, it will likely fail every time, even if the content
appears correct.
Comparison between Automation Testing and Manual Testing

Automation Testing | Manual Testing
Performs the same operations each time. | Test execution is not accurate every time, hence less reliable.
Useful for executing a set of test cases frequently. | Useful when a test case only needs to run once or twice.
Fewer testers are required to execute the test cases. | A large number of testers is required.
Testers can fetch complicated information from code. | Does not involve programming tasks to fetch hidden information.
Faster. | Slower.
Not helpful for UI testing. | Helpful for UI testing.
Higher cost. | Costs less than automation.


Test tool selection
• Tool selection largely depends on the technology the Application Under Test is built on.
• A dedicated test team should perform a detailed analysis of the candidate tools before one is selected.
• For example, if we are testing a desktop application, we cannot use Selenium for that.
• Selenium is mainly for web applications.
• So we first need to identify the type of application under test and then choose the appropriate tool.
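
As an illustration of the web-application case mentioned above, here is a minimal sketch of a Selenium-based test written in Python (the URL and element locators are hypothetical placeholders, not from the slides):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # start a browser session controlled by the test
try:
    driver.get("https://example.com/login")                        # hypothetical page
    driver.find_element(By.ID, "username").send_keys("demo_user")  # hypothetical locators
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "submit").click()
    # A simple automated check: the page title should change after a successful login.
    assert "Dashboard" in driver.title
finally:
    driver.quit()  # always close the browser, even if the assertion fails
```

A similar flow for a desktop application would need a desktop automation tool instead, which is why the technology of the Application Under Test drives tool selection.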
Enlist factors considered for selecting a testing
tool for test automation
• 1. Meeting requirements

• 2. Technology expectations

• 3. Training/skills

• 4. Management aspects
• 1. Meeting requirements-
 There are plenty of tools available in the market, but rarely do they meet all the requirements of a given product or a given organization.
 Evaluating different tools for different requirements involves significant effort, money, and time.

• 2. Technology expectations-
 Test tools in general may not allow test developers to extend or modify the functionality of the framework.
 So extending the functionality requires going back to the tool vendor and involves additional cost and effort. A good number of test tools require their libraries to be linked with product binaries.
• 3. Training/skills-
 While test tools require plenty of training, very few vendors provide training to the required level.
 Organization-level training is needed to deploy the test tools, as the users of the test suite are not only the test team but also the development team and other areas like configuration management.

• 4. Management aspects-
 A test tool increases the system requirements and requires the hardware and software to be upgraded.
 This increases the cost of the already-expensive test tool.
Benefits of Automated Testing

Reliable:
Tests perform precisely the same operations each time they are run, thereby eliminating human error.

Repeatable:
You can test how the software reacts under repeated execution of the same operations.

Programmable:
You can program sophisticated tests that bring out hidden information from the application.

Comprehensive:
You can build a suite of tests that covers every feature in your application.
Benefits of Automation Testing

Reusable:
You can reuse tests on different versions of an application, even if the user interface changes.

Better Quality Software:
Because you can run more tests in less time with fewer resources.

Fast:
Automated tools run tests significantly faster than human users (70% faster than manual testing).

Cost Reduction:
The number of resources needed for regression testing is reduced.
Software Test Metrics and Measurement

• A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
• Metrics can be defined as "STANDARDS OF MEASUREMENT".
• Software metrics are used to measure the quality of the project.
• Simply put, a metric is a unit used for describing an attribute.
• A metric is a scale for measurement.
Software Test Metrics and Measurement

• For example, "kilogram" is a metric for measuring the attribute "weight". Similarly, in software we might ask, "How many issues are found in a thousand lines of code?"
• Here, the number of issues is one measurement and the number of lines of code is another; the metric is defined from these two measurements (a small numeric sketch follows the examples below).
• Test metrics example:
• How many defects exist within the module?
• How many test cases are executed per person?
• What is Test coverage %?
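
The following small sketch (hypothetical figures, not from the slides) shows how two measurements combine into one such metric, defects per thousand lines of code:

```python
# Two measurements (hypothetical values).
defects_found = 45        # number of issues found
lines_of_code = 15_000    # size of the code base

# The metric combines the two measurements: defects per KLOC (thousand lines of code).
defect_density = defects_found / (lines_of_code / 1000)
print(f"Defect density: {defect_density:.1f} defects per KLOC")  # prints 3.0
```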
Types: Product Metrics
• Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, and quality level.
(1) Fan-in/Fan-out: Fan-in is a measure of the number of functions that call some other function (say X). Fan-out is the number of functions that are called by function X. A high value for fan-in means that X is tightly coupled to the rest of the design and changes to X will have extensive knock-on effects. A high value for fan-out suggests that the control logic needed to coordinate the called components is complex.

(2) Length of code: This is a measure of the size of a program. Generally, the larger the size of the code of a program component, the more complex and error-prone that component is likely to be.

(3) Cyclomatic complexity: This is a measure of the control complexity of a program. This control complexity may be related to program understandability (a rough sketch of how it can be computed follows this list).

(4) Length of identifiers: This is a measure of the average length of distinct identifiers in a program. The longer the identifiers, the more understandable the program.

(5) Depth of conditional nesting: This is a measure of the depth of nesting of if-statements in a program. Deeply nested if-statements are hard to understand and are potentially error-prone.

(6) Fog index: This is a measure of the average length of words and sentences in documents. The higher the value of the Fog index, the more difficult the document may be to understand.
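
To make the cyclomatic complexity entry above concrete, here is a rough sketch in Python (an approximation, not a full McCabe implementation; all names are illustrative) that estimates it as 1 plus the number of decision points in the source:

```python
import ast

# Node types treated as decision points (an approximation of McCabe's definition).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))  # 3: two decision points plus one
```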
Types: Process Metrics
• Process metrics are measures of various characteristics of the software development process.
• For example, the efficiency of fault detection.
• They are used to measure the characteristics of
methods, techniques, and tools that are used for
developing software.
Examples of Process Metrics
• #1.1. Test Case Preparation Productivity
• It is used to relate the number of test cases prepared to the effort spent on their preparation.
• Formula:
• Test Case Preparation Productivity = (No. of Test Cases) / (Effort spent for Test Case Preparation)
• E.g.:
• No. of test cases = 240
• Effort spent for test case preparation (in hours) = 10
• Test case preparation productivity = 240/10 = 24 test cases/hour


Examples of Process Metrics
• #1.2. Test Design Coverage
• It helps to measure the percentage of test case coverage against the number of requirements.

• Formula:
• Test Design Coverage = (Number of requirements mapped to test cases / Total number of requirements) * 100
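
A small sketch (hypothetical figures) of the test design coverage calculation:

```python
# Hypothetical requirement counts.
requirements_mapped_to_tests = 90
total_requirements = 100

test_design_coverage = requirements_mapped_to_tests / total_requirements * 100
print(f"Test design coverage: {test_design_coverage:.0f}%")  # prints 90%
```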
Examples of Process Metrics
1.3) %ge Test cases Executed:
This metric is used to obtain the execution status of the test cases in terms of percentage.
• %ge Test cases Executed = (No. of Test cases executed / Total no. of Test cases written) * 100

1.4) %ge Test cases Not Executed:
This metric is used to obtain the pending execution status of the test cases in terms of percentage.
• %ge Test cases Not Executed = (No. of Test cases not executed / Total no. of Test cases written) * 100

1.5) %ge Test cases Passed:
This metric is used to obtain the pass percentage of the executed test cases.
• %ge Test cases Passed = (No. of Test cases passed / Total no. of Test cases executed) * 100

1.6) %ge Test cases Failed:
This metric is used to obtain the fail percentage of the executed test cases.
• %ge Test cases Failed = (No. of Test cases failed / Total no. of Test cases executed) * 100

1.7) %ge Test cases Blocked:
This metric is used to obtain the blocked percentage of the executed test cases. A detailed report can be submitted specifying the actual reason for blocking the test cases.
• %ge Test cases Blocked = (No. of Test cases blocked / Total no. of Test cases executed) * 100
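
A combined sketch (hypothetical counts, not from the slides) computing all of the execution-status metrics defined above:

```python
# Hypothetical test execution counts.
total_written = 200   # total test cases written
executed = 160        # test cases executed so far
passed = 120
failed = 30
blocked = 10          # executed but blocked (passed + failed + blocked = executed)

pct_executed     = executed / total_written * 100                    # 80.0 %
pct_not_executed = (total_written - executed) / total_written * 100  # 20.0 %
pct_passed       = passed / executed * 100                           # 75.0 %
pct_failed       = failed / executed * 100                           # 18.75 %
pct_blocked      = blocked / executed * 100                          # 6.25 %

print(pct_executed, pct_not_executed, pct_passed, pct_failed, pct_blocked)
```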
