MSBTE STE Chapter 5


22518
Chapter 5
Testing tools and measurements
5.1 Manual Testing and Need for Automated Testing Tools
5.2 Advantages and Disadvantages of Using Tools
5.3 Selecting a Testing Tool
5.4 When to Use Automated Test Tools, Testing Using
Automated Tools.
5.5 Metrics and Measurement: Types of Metrics, Product Metrics and Process Metrics, Object-Oriented Metrics in Testing.
Manual Testing
► Manual testing is a software testing process in which test
cases are executed manually without using any automated
tool.
► All test cases are executed manually by the tester from the end user's perspective.
► It checks whether the application works as specified in the requirement document.
► Test cases are planned and implemented to cover almost 100 percent of the software application.
► Test case reports are also generated manually.
Manual Testing
► Manual Testing is one of the most fundamental testing
processes as it can find both visible and hidden defects of the
software.
► The difference between the expected output and the actual output produced by the software is defined as a defect.
► The developer fixes the defects and hands the software back to the tester for retesting.
Manual Testing
► Manual testing is mandatory for every newly developed
software before automated testing.
► This testing requires significant effort and time, but it gives greater assurance of bug-free software.
► Manual Testing requires knowledge of manual testing
techniques but not of any automated testing tool.
Need of Manual Testing
► If the test engineer does manual testing, he/she can test the application from an end-user perspective and become more familiar with the product. This helps in writing correct test cases for the application and giving quick feedback on it.
Types of Manual Testing
► There are various methods used for manual testing. Each
technique is used according to its testing criteria.
► Types of manual testing are given below:
► White Box Testing
► Black Box Testing
How to perform Manual Testing
► First, the tester studies all documents related to the software in order to select the testing areas.
► The tester analyses the requirement documents to cover all requirements stated by the customer.
► The tester develops the test cases according to the requirement document.
► All test cases are executed manually using black box testing and white box testing.
► If bugs occur, the testing team informs the development team.
► The development team fixes the bugs and hands the software back to the testing team for a retest.
How to perform Manual Testing
Advantages of Manual Testing
► It does not require programming knowledge while using the
Black box method.
► It is used to test dynamically changing GUI designs.
► Tester interacts with software as a real user so that they are
able to discover usability and user interface issues.
► It ensures that the software is a hundred percent bug-free.
► It is cost-effective.
► Easy to learn for new testers.
Disadvantages of Manual Testing
► It requires a large number of human resources.
► It is very time-consuming.
► Testers develop test cases based on their skills and experience; there is no guarantee that all functions have been covered.
► Test cases cannot be reused; separate test cases need to be developed for each new software.
► It does not cover all aspects of testing.
► Since two teams work together, it is sometimes difficult for them to understand each other's motives, which can mislead the process.
Conclusion
► Manual testing is an activity where the tester needs to be
very patient, creative & open minded.
► Manual testing is a vital part of user-friendly software
development because humans are involved in testing
software applications and end-users are also humans.
► They need to think and act with an End User perspective.
► Testing can be extremely challenging. Testing an application
for possible use cases with minimum test cases requires high
analytical skills.
Automation Testing
► Automation testing is a software testing technique that tests and compares the actual outcome with the expected outcome.
► This can be achieved by writing test scripts or using an automation testing tool.
► Test automation is used to automate repetitive tasks and other testing tasks that are difficult to perform manually.
► Manual testing, in contrast, is performed by a human sitting in front of a computer, carefully executing the test steps.
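As a minimal illustration (not from the original slides), an automated test can be a short script that compares the actual outcome of a function with the expected outcome; the add() function and its expected values below are hypothetical.

# A minimal sketch of an automated test: the script, not a human,
# compares the actual outcome with the expected outcome.
import unittest

def add(a, b):
    # Hypothetical unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_returns_expected_sum(self):
        self.assertEqual(add(2, 3), 5)      # actual vs. expected outcome

    def test_add_handles_negatives(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()

Running such a script repeatedly, for example in every build, is exactly the kind of repetitive task that automation is meant to take over.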
Automation Testing
► Successive development cycles require repeated execution of the same test suite.
► Using a test automation tool, it is possible to record this test suite and replay it as required.
► Once the test suite is automated, no human intervention is required.
► The goal of automation is to reduce the number of test cases to be run manually, not to eliminate manual testing altogether.
Test Automation
► Test automation is the best way to increase the effectiveness, test coverage, and execution speed in software testing.
► Manual testing of all workflows, all fields, and all negative scenarios consumes time and money.
► It is difficult to test multilingual sites manually.
Test Automation
► Test automation does not require human intervention.
► You can run automated tests unattended (overnight).
► Test automation increases the speed of test execution.
► Automation helps increase test coverage.
► Manual testing can become boring and hence error-prone.
Which Test Cases to Automate?
Test cases to be automated can be selected using the following criteria:
► High risk / business-critical test cases
► Test cases that are repeatedly executed
► Test cases that are very tedious or difficult to perform manually
► Test cases that are time-consuming
Which Test Cases to Automate?
The following categories of test cases are not suitable for automation:
Test cases that are newly designed and not executed manually at least once
Test cases for which the requirements change frequently
Test cases that are executed on an ad-hoc basis
Process of Automation Testing
Test Automation Feasibility Analysis −
The first step is to check whether the application can be automated or not. Not all applications can be automated, due to their limitations.

Appropriate Tool Selection −
The next most important step is the selection of tools. It depends on the technology in which the application is built, its features, and its usage.
Process of Automation Testing
Evaluate the suitable framework −
Upon selecting the tool, the next activity is to select a suitable
framework.
There are various kinds of frameworks and each framework has
its own significance.
Process of Automation Testing
Build Proof of Concept −
A Proof of Concept (POC) is developed with an end-to-end scenario to evaluate if the tool can support the automation of the application.
It is performed with an end-to-end scenario, which ensures that the major functionalities can be automated.
Process of Automation Testing
Develop Automation Framework −
After building the POC, framework development is carried out, which is a crucial step for the success of any test automation project.
The framework should be built after diligent analysis of the technology used by the application and also its key features.
Process of Automation Testing
Develop Test Script, Execute, and Analyze −
Once script development is completed, the scripts are executed, results are analyzed, and defects are logged, if any.
The test scripts are usually version controlled.
What to Automate
► Repetitive tests that run for multiple builds.
► Tests that tend to cause human error.
► Tests that require multiple data sets.
► Frequently used functionality that introduces high risk
conditions.
► Tests that are impossible to perform manually.
► Tests that run on several different hardware or software
platforms and configurations.
► Tests that take a lot of effort and time when performed manually.
Advantages of Automated Testing:
1. Automated testing improves the coverage of testing as
automated execution of test cases is faster than manual
execution.
2. Automated testing reduces the dependency of testing on the availability of test engineers.
3. Automated testing provides round-the-clock coverage, as automated tests can be run at any time in a 24x7 environment.
4. Automated testing takes far less resources in execution as
compared to manual testing.
Advantages of Automated Testing:
5. It helps in testing which is not possible without automation such as
reliability testing, stress testing, load and performance testing.
6. It includes all other activities like selecting the right product build,
generating the right test data and analyzing the results.
7. It acts as test data generator and produces maximum test data to
cover a large number of input and expected output for result
comparison.
8. Automated testing has fewer chances of error and is hence more reliable.
9. With automated testing, test engineers have free time and can focus on other creative tasks.
Disadvantages of Automated Testing:
1. Automated testing is much more expensive than manual testing.
2. It can also be inconvenient and burdensome to decide who will automate and who will be trained.
3. Its use is limited to some organisations, as many organisations do not prefer test automation.
4. Automated testing also requires additionally trained and skilled people.
5. Automated testing only removes the mechanical execution of the testing process; the creation of test cases still requires testing professionals.
Benefits of Automation Testing
► 70% faster than the manual testing
► Wider test coverage of application features
► Reliable in results
► Ensure Consistency
► Saves Time and Cost
► Improves accuracy
► Human Intervention is not required while execution
► Increases Efficiency
► Better speed in executing tests
► Reusable test scripts
► Test Frequently and thoroughly
► More cycles of execution can be achieved through automation
► Early time to market
Advantages of using testing tools :
1. Speed.
Automation tools test the software under test at a much faster speed.

2. Efficiency.
While testers are busy running test cases, they cannot do anything else.
If the tester has a test tool that reduces the time it takes to run the tests, he has more time for test planning and thinking up new tests.
Advantages of using testing tools :
3. Accuracy and Precision.
After trying a few hundred test cases, the tester's attention span will wane and he may start to make mistakes. A test tool will perform the same test and check the results perfectly, each and every time.

4. Resource Reduction.
It may be impossible to perform a certain test case manually: the number of people or the amount of equipment required to create the test conditions could be prohibitive. A test tool can be used to simulate the real world and greatly reduce the physical resources necessary to perform the testing.
Advantages of using testing tools :
5. Simulation and Emulation.
Test tools are often used to replace hardware or software that
would normally interface to your product. This "fake" device or
application can then be used to drive or respond to your
software in ways that you choose and ways that might otherwise
be difficult to achieve.

6. Relentlessness.
Test tools and automation never tire or give up; they can keep going on and on without any problem, whereas a human tester gets tired of testing again and again.
Disadvantages of using testing tools :
1. It is more expensive to automate; initial investments are bigger than for manual testing (even though manual tests can be very time consuming).
2. You cannot automate everything; some tests still have to be done manually.
3. You cannot always rely on testing tools.
Selecting a tool:
1. Free tools are not well supported and get phased out soon.
2. Developing in-house tools takes time.
3. Test tools sold by vendors are expensive.
4. Test tools require strong training.
5. Test tools generally do not meet all the requirements for automation.
6. Not all test tools run on all platforms.
Criteria for Selecting Test Tools:
1. Meeting requirements;
2. Technology expectations;
3. Training/skills;
4. Management aspects.
1. Meeting requirements-
There are plenty of tools available in the market, but rarely do they meet all the requirements of a given product or a given organization.
Evaluating different tools for different requirements involves significant effort, money, and time.
Given the amount of choice available, a huge delay is involved in selecting and implementing test tools.
2. Technology expectations-
Test tools in general may not allow test developers to extend/modify the functionality of the framework.
So extending the functionality requires going back to the tool vendor and involves additional cost and effort.
A good number of test tools require their libraries to be linked with product binaries.
3. Training/skills-
While test tools require plenty of training, very few vendors provide training to the required level.
Organization-level training is needed to deploy the test tools, as the users of the test suite are not only the test team but also the development team and other areas like configuration management.
4. Management aspects-
A test tool increases the system requirements and requires the hardware and software to be upgraded.
This increases the cost of the already expensive test tool.
What is a Test Framework?
A testing framework is a set of guidelines or rules used for creating and designing test cases.
A framework is comprised of a combination of practices and tools that are designed to help QA professionals test more efficiently.
These guidelines could include coding standards, test-data handling methods, object repositories, processes for storing test results, or information on how to access external resources.
Benefits of a Test Automation Framework
Utilizing a framework for automated testing will increase a team's test speed and efficiency, improve test accuracy, and reduce test maintenance costs as well as lower risks.
Frameworks are essential to an efficient automated testing process for a few key reasons:
Improved test efficiency
Lower maintenance costs
Minimal manual intervention
Maximum test coverage
Reusability of code
Types of Automated Testing Frameworks
1. Modular Based Testing Framework
2. Data-Driven Framework
3. Keyword-Driven Framework
4. Hybrid Testing Framework
Modular Based Testing Framework
Implementing a modular framework requires testers to divide the application under test into separate units, functions, or sections, each of which is tested in isolation.
A test script is created for each part; the scripts are then combined to build larger tests in a hierarchical fashion.
These larger sets of tests begin to represent various test cases.
The aim of the modular framework is to build an abstraction layer, so that any changes made in individual sections will not affect the overarching module. A small sketch of this idea is given below.
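As a rough, hypothetical sketch of the modular idea (all function names are invented for illustration), each application section gets its own small test, and larger test cases are composed from these modules:

# Sketch of a modular framework: each section of the application (login,
# search) has its own isolated test function, and a larger end-to-end test
# is composed from these modules. All names are hypothetical stubs.

def login(user: str, password: str) -> bool:     # stand-in for the real login module
    return bool(user and password)

def search(query: str) -> list:                  # stand-in for the real search module
    return [query] if query else []

def test_login_module() -> None:
    assert login("demo_user", "demo_pass")

def test_search_module() -> None:
    assert search("laptop") == ["laptop"]

def test_purchase_flow() -> None:
    # Larger test case built hierarchically from the module-level pieces;
    # a change inside login() or search() does not change this script.
    test_login_module()
    test_search_module()

if __name__ == "__main__":
    test_purchase_flow()
    print("purchase flow OK")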
Data-Driven Framework
Using a data-driven framework separates the test data from the script logic, meaning testers can store data externally.
Very frequently, testers find themselves in a situation where they need to test the same feature or function of an application multiple times with different sets of data.
Setting up a data-driven test framework allows the tester to store and pass the input/output parameters to test scripts from an external data source, such as Excel spreadsheets, text files, CSV files, SQL tables, or ODBC repositories.
The test scripts are connected to the external data source and told to read and populate the necessary data when needed.
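A minimal, hypothetical sketch of this approach using pytest: the file name login_data.csv, its columns (user, password, expected) and the login() stub are assumptions for illustration, and the CSV file is assumed to exist next to the script.

# Data-driven sketch: input/expected values live in an external CSV file,
# and the same test script is re-run once per row.
import csv
import pytest

def login(user, password):                 # stand-in for the feature under test
    return user == "admin" and password == "s3cret"

def load_rows(path="login_data.csv"):
    # assumed CSV columns: user,password,expected
    with open(path, newline="") as handle:
        return [(row["user"], row["password"], row["expected"] == "True")
                for row in csv.DictReader(handle)]

@pytest.mark.parametrize("user,password,expected", load_rows())
def test_login_with_external_data(user, password, expected):
    assert login(user, password) is expected

Changing or adding test data then only means editing the CSV file; the script logic stays untouched.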
Keyword-Driven Framework
In a keyword-driven framework, each function of the application under test is laid out in a table with a series of instructions in consecutive order for each test that needs to be run.
Keywords are also stored in an external data table (hence the name), making them independent from the automated testing tool being used to execute the tests.
Keyword-Driven Framework
Keywords are the part of a script representing the various actions being performed to test the GUI of an application.
These can be labeled as simply as 'click' or 'login', or with complex labels like 'clicklink' or 'verifylink'.
Keywords are stored in a step-by-step fashion with an associated object, i.e. the part of the UI that the action is being performed on.
For this approach to work properly, a shared object repository is needed to map the objects to their associated actions.
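As a hedged, self-contained sketch (the keywords, UI object names and step table below are all hypothetical), a keyword-driven test is a data table of steps plus a small driver that maps each keyword to an action:

# Keyword-driven sketch: the test is a table of (keyword, object, value)
# steps kept as data, and a driver maps each keyword to an action.

def click(target, _value=None):
    print(f"clicking {target}")

def type_text(target, value):
    print(f"typing '{value}' into {target}")

def verify_text(target, value):
    print(f"verifying {target} shows '{value}'")

# keyword -> action mapping (acts as the shared object/action repository)
KEYWORD_MAP = {"click": click, "type": type_text, "verify": verify_text}

# the external data table (could equally be read from Excel or a CSV file)
TEST_STEPS = [
    ("type", "username_field", "admin"),
    ("type", "password_field", "s3cret"),
    ("click", "login_button", None),
    ("verify", "welcome_banner", "Welcome, admin"),
]

def run_keyword_test(steps):
    for keyword, target, value in steps:
        KEYWORD_MAP[keyword](target, value)

if __name__ == "__main__":
    run_keyword_test(TEST_STEPS)

Because the steps are plain data, non-programmers can add or reorder them without touching the driver code.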
Hybrid Test Automation Framework
A hybrid framework is a combination of any of the previously mentioned frameworks.
Every application is different, and so are the processes used to test them.
A hybrid framework can be more easily adapted to get the best test results.
Metrics and measurement :
A metric is a measurement of the degree to which any attribute belongs to a system, product or process.
For example, the number of errors per person-hour would be a metric.
Thus, software measurement gives rise to software metrics.
A measurement is an indication of the size, quantity, amount or dimension of a particular attribute of a product or process.
For example, the number of errors in a system is a measurement.
Metrics and measurement :
A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
Metrics can be defined as “STANDARDS OF MEASUREMENT”.
Software metrics are used to measure the quality of the project.
A metric is a scale for measurement.
Metrics and measurement :
Suppose, in general, “Kilogram” is a metric for measuring the attribute “Weight”.
Similarly, in software, consider “How many issues are found in a thousand lines of code?”.
Here, the number of issues is one measurement and the number of lines of code is another measurement; the metric is defined from these two measurements.
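To make the relationship concrete, a tiny sketch (with purely illustrative numbers) that derives the metric from the two measurements might look like this:

# A metric derived from two measurements: issues found and lines of code.
issues_found = 45          # measurement 1 (illustrative)
lines_of_code = 30_000     # measurement 2 (illustrative)

issues_per_kloc = issues_found / (lines_of_code / 1000)
print(f"Defect density: {issues_per_kloc:.2f} issues per KLOC")   # 1.50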
Metrics and measurement :
Test metrics examples:
How many defects exist within the module?
How many test cases are executed per person?
What is the test coverage %?
What Is Software Test Measurement?
Measurement is the quantitative indication of extent,
amount, dimension, capacity, or size of some attribute of a
product or process.
Metrics and measurement :
Need of software measurement:
1. To establish the quality of the current product or process.
2. To predict future qualities of the product or process.
3. To improve the quality of a product or process.
4. To determine the state of the project in relation to budget and schedule.
Metrics and measurement :
Collecting and analyzing metrics involves effort and several
steps.
Metrics and measurement :
Step 1:
The first step involved in a metrics program is to decide what measurements are important and collect data accordingly.
The effort spent on testing, the number of defects, and the number of test cases are some examples of measurements.
Depending on what the data is used for, the granularity of measurement will vary.
Metrics and measurement :
Step 1:
While deciding what to measure, the following aspects need to be kept in mind.
► What is measured should be of relevance to what we are trying to achieve.
► The entities measured should be natural and should not involve too many overheads for measurements.
► What is measured should be at the right level of granularity to satisfy the objective for which the measurement is being made.
Metrics and measurement :
Step 1:
The different people who use the measurements may want to make inferences on different dimensions.
The level of granularity of data obtained depends on the level of detail required by a specific audience.
Hence the measurements, and the metrics derived from them, will have to be at different levels for different people.
The approach involved in getting this granular detail is called data drilling.
Metrics and measurement :
Step 2:
The second step involved in metrics collection is defining how to combine data points or measurements to provide meaningful metrics.
A particular metric can use one or more measurements.
Metrics and measurement :
Step 3:
The third step in the metrics program is deciding the operational requirement for measurements.
The operational requirement for a metrics plan should lay down not only the periodicity but also other operational issues, such as who should collect the measurements, who should receive the analysis, and so on.
This step helps to decide on the appropriate periodicity for the measurements as well as to assign operational responsibility for collecting, recording, and reporting the measurements and disseminating the metrics information.
Some measurements need to be made on a daily basis.
Metrics and measurement :
Step 4:
The fourth step involved in a metrics program is to analyze the metrics to identify both positive areas and improvement areas of product quality.
Often, only the improvement aspects pointed to by the metrics are analyzed and focused on; it is important to also highlight and sustain the positive areas of the product.
This ensures that the best practices get institutionalized and also motivates the team better.
Metrics and measurement :
Step 5:
The final step involved in a metrics plan is to take the necessary action and follow up on the action.
The purpose of a metrics program will be defeated if the action items are not followed through to completion.
WHY METRICS IN TESTING?
Since testing is the last phase before product release, it is essential to measure the progress of testing and product quality.
Tracking test progress and product quality gives a good idea about the release: whether it will happen on time and with known quality.
Measuring and producing metrics to determine the progress of testing is therefore very important.
WHY METRICS IN TESTING?
To judge the remaining days needed for testing, two data points are needed: the remaining test cases yet to be executed, and how many test cases can be executed per elapsed day.
The test cases that can be executed per person-day are calculated based on a measure called test case execution productivity.
This productivity number is derived from the previous test cycles.
Thus, metrics are needed to know the test case execution productivity and to estimate the test completion date.
WHY METRICS IN TESTING?
The number of days needed to fix all outstanding defects is another crucial data point.
The number of days needed for defect fixes has to take into account the outstanding defects waiting to be fixed and a projection of how many more defects will be unearthed from testing in future cycles.
Hence, metrics help in predicting the number of defects that can be found in future test cycles.
WHY METRICS IN TESTING?
The defect-fixing trend collected over a period of time gives another estimate of the defect-fixing capability of the team.
Combining defect prediction with defect-fixing capability produces an estimate of the days needed for the release.
Hence, metrics help in estimating the total days needed for fixing defects. Once the time needed for testing and the time for defect fixing are known, the release date can be estimated.
Testing and defect fixing are activities that can be executed simultaneously.
WHY METRICS IN TESTING?
The defect fixes may arrive after the regular test cycles are completed. These defect fixes will have to be verified by regression testing before the product can be released.
Metrics are not only used for reactive activities. Metrics and their analysis help in preventing defects proactively, thereby saving cost and effort.
Metrics help in identifying these opportunities.
Types of Metrics
Metrics can be classified as
Product metrics and
Process metrics.
Product Metrics
Project metrics
Progress metrics
Productivity Metrics
Project Metrics
Project Metrics: These can be used to measure the efficiency of a project team or of any testing tools being used by the team members.
Project metrics describe the project characteristics and the execution process, for example:
Number of software developers
Staffing pattern over the life cycle of the software
Cost and schedule
Productivity
Project Metrics
Effort Variance: The difference between the planned effort and the effort actually required to undertake the task is called effort variance.

Effort variance =
[(Actual Effort – Planned Effort)/ Planned Effort ]x 100.
Project Metrics
Schedule Variance: Any difference between the scheduled
completion of an activity and the actual completion is known as
Schedule Variance.

Schedule variance =
[(Actual calendar days – Planned calendar days) / Planned calendar days] x 100.
Project Metrics
Size Variance: Difference between the estimated size of the
project and the actual size of the project (normally in KLOC or
FP).

Size variance =
[(Actual size – Estimated size)/ Estimated size ]x 100.
Project Metrics
Cost Variance (CV): Difference between the estimated cost of the project and the actual cost of the project. This metric is represented as a percentage.

Cost variance =
[(Actual cost – Estimated cost)/ Estimated cost ]x 100.
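A small sketch that computes the four variance metrics exactly as defined above; the planned and actual figures used here are purely illustrative.

# Project variance metrics: (Actual - Planned) / Planned * 100.
def variance(actual, planned):
    return (actual - planned) / planned * 100

effort_variance   = variance(actual=550, planned=500)          # person-hours -> 10.0 %
schedule_variance = variance(actual=66,  planned=60)           # calendar days -> 10.0 %
size_variance     = variance(actual=28,  planned=25)           # KLOC -> 12.0 %
cost_variance     = variance(actual=105_000, planned=100_000)  # cost -> 5.0 %

print(effort_variance, schedule_variance, size_variance, cost_variance)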
Progress Metrics
Automation progress refers to the number of tests that have been automated as a percentage of all automatable test cases.
Any project needs to be tracked from two angles, as given below:
1. How the project is doing with respect to effort and schedule.
2. How well the product is meeting the quality requirements for the release.
Progress Metrics
Productivity Metrics
Productivity metrics combine several measurements and
parameters with effort spent on the product.

They help in finding out the capability of the team as well as for
other purposes, such as
1. Estimating for the new release.
2. Finding out how well the team is progressing, understanding
the reasons for (both positive and negative) variations in
results.
3. Estimating the number of defects that can be found.
4. Estimating release date and quality.
5. Estimating the cost involved in the release.
Productivity Metrics
Defects per 100 Hours of Testing
The metric defects per 100 hours of testing covers the third point above and normalizes the number of defects found in the product with respect to the effort spent.
Defects per 100 hours of testing =
(Total defects found in the product for a period / Total hours spent to get those defects) * 100
Productivity Metrics
Test Cases Executed per 100 Hours of Testing
The number of test cases executed by the test team for a particular duration depends on team productivity and the quality of the product.
The team productivity has to be calculated accurately so that it can be tracked for the current release and be used to estimate the next release of the product.
Test cases executed per 100 hours of testing =
(Total test cases executed for a period / Total hours spent in test execution) * 100
Productivity Metrics
Test Cases Developed per 100 Hours of Testing
Both manual execution of test cases and automating test cases require estimating and tracking of productivity numbers.
In a product scenario, not all test cases are written afresh for every release.
New test cases are added to address new functionality and to test features that were not tested earlier.
Productivity Metrics
Test Cases Developed per 100 Hours of Testing
Existing test cases are modified to reflect changes in the product.
Some test cases are deleted if they are no longer useful or if the corresponding features are removed from the product.
Hence the formula for test cases developed uses the count corresponding to added/modified and deleted test cases.
Test cases developed per 100 hours of testing =
(Total test cases developed for a period / Total hours spent in test case development) * 100
Productivity Metrics
Defects per 100 Test Cases
Since the goal of testing is to find out as many defects as possible, it is appropriate to measure the “defect yield” of tests, that is, how many defects get uncovered during testing.
This is a function of two parameters:
1. The effectiveness of the tests in uncovering defects.
2. The effectiveness of choosing tests that are capable of uncovering defects.
The ability of a test case to uncover defects depends on how well the test cases are designed and developed.
Defects per 100 test cases = (Total defects found for a period / Total test cases executed for the same period) * 100
Productivity Metrics
Defects per 100 Failed Test Cases
Defects per 100 failed test cases is a good measure to find out how granular the test cases are. It indicates:
● How many test cases need to be executed when a defect is fixed;
● What defects need to be fixed so that an acceptable number of test cases reach the pass state; and
● How the fail rate of test cases and defects affect each other for release readiness analysis.
Defects per 100 failed test cases = (Total defects found for a period / Total test cases failed due to those defects) * 100
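The following short sketch computes the productivity metrics defined above in one place; all raw measurements (defects, hours, test case counts) are illustrative values, not figures from the slides.

# Productivity metrics from the formulas above (illustrative inputs).
defects_found   = 40
testing_hours   = 250     # hours spent finding those defects
tests_executed  = 500
dev_hours       = 120     # hours spent developing test cases
tests_developed = 90      # added + modified + deleted test cases
tests_failed    = 50      # test cases failed due to those defects

defects_per_100_hours       = defects_found   / testing_hours  * 100   # 16.0
tests_executed_per_100_hrs  = tests_executed  / testing_hours  * 100   # 200.0
tests_developed_per_100_hrs = tests_developed / dev_hours      * 100   # 75.0
defects_per_100_tests       = defects_found   / tests_executed * 100   # 8.0
defects_per_100_failed      = defects_found   / tests_failed   * 100   # 80.0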
Productivity Metrics
Closed Defect Distribution
The testing team also has the objective to ensure that all defects
found through testing are fixed so that the customer gets the benefit
of testing and the product quality improves.

To ensure that most of the defects are fixed, the testing team has to
track the defects and analyze how they are closed.
The closed defect distribution helps in this analysis.
Process Metrics
Software test metrics used in the test preparation and test execution phases of the STLC.

1. Test Case Preparation Productivity
Test case preparation productivity =
No. of test cases / Effort spent for test case preparation

E.g., No. of test cases = 240
Effort spent for test case preparation in hours = 10
Test case preparation productivity = 240/10 = 24 test cases/hr
Process Metrics
2. Test Design Coverage
It helps to measure the percentage of test case coverage against the number of requirements.
Test design coverage =
[Total no. of requirements mapped to test cases / Total number of requirements] * 100

E.g.,
Total number of requirements = 100
Total no. of requirements mapped to test cases = 98
Test design coverage = [98/100] * 100 = 98%
Process Metrics
3. Test Execution Productivity
It determines the number of test cases that can be executed per hour.
Test execution productivity =
No. of test cases executed / Effort spent for execution of test cases

E.g.,
No. of test cases executed = 180
Effort spent for execution of test cases (hours) = 10
Test execution productivity = 180/10 = 18 test cases/hr
Process Metrics
4. Test Execution Coverage
It measures the number of test cases executed against the number of test cases planned.
Test execution coverage =
[Total no. of test cases executed / Total no. of test cases planned to execute] * 100

E.g.,
Total no. of test cases planned to execute = 240
Total no. of test cases executed = 180
Test execution coverage = [180/240] * 100 = 75%
Process Metrics
5. Test Cases Passed
It measures the percentage of test cases passed.
Test case pass % =
[Total no. of test cases passed / Total no. of test cases executed] * 100

E.g., Test case pass % = [80/90] * 100 = 88.8%
Process Metrics
6. Test Cases Failed
It measures the percentage of test cases failed.
Test case fail % =
[Total no. of test cases failed / Total no. of test cases executed] * 100

E.g., Test case fail % = [10/90] * 100 = 11.1%
Process Metrics
7. Test Cases Blocked
It measures the percentage of test cases blocked.
Test case blocked % =
[Total no. of test cases blocked / Total no. of test cases executed] * 100

E.g., Test case blocked % = [5/90] * 100 = 5.5%
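The sketch below reproduces the worked examples from the process-metric slides above (240 test cases prepared in 10 hours, 98 of 100 requirements mapped, 180 of 240 planned test cases executed, and 80/10/5 of 90 executed test cases passed/failed/blocked).

# Process metrics recomputed from the slide examples.
prep_productivity = 240 / 10            # 24 test cases per hour
design_coverage   = 98 / 100 * 100      # 98 %
exec_productivity = 180 / 10            # 18 test cases per hour
exec_coverage     = 180 / 240 * 100     # 75 %
passed_pct        = 80 / 90 * 100       # 88.8 %
failed_pct        = 10 / 90 * 100       # 11.1 %
blocked_pct       = 5 / 90 * 100        # 5.5 %

print(prep_productivity, design_coverage, exec_productivity,
      exec_coverage, passed_pct, failed_pct, blocked_pct)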
Object Oriented Metrics in testing
Focus on the combination of function and data as an
integrated object.

1. Method
Cyclomatic Complexity (CC):
● CC is used to evaluate the complexity of an algorithm in a
method.
● Low CC is better.
● CC cannot be used to measure the complexity of class
because of inheritance.
● CC of individual methods can be combined with other
measures to evaluate the complexity of the class.
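As a hedged illustration of how CC is counted for a single method, the hypothetical function below has three decision points (one loop and two if statements), giving a cyclomatic complexity of 3 + 1 = 4:

# Hypothetical method with cyclomatic complexity 4
# (three decision points plus one).
def total_valid_score(scores):
    total = 0
    for score in scores:      # decision 1
        if score < 0:         # decision 2
            continue
        if score > 100:       # decision 3
            score = 100
        total += score
    return total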
Object Oriented Metrics in testing

Size:
The size of a method is used to evaluate how easily developers and maintainers can understand the code.
Size can be measured by counting all lines of code (LOC), the number of statements, and the number of blank lines.
Object Oriented Metrics in testing
2. Class
A class is a template from which objects can be created.
Three class metrics are described to measure the complexity of a class, using the class methods, messages and cohesion.
(i) Method:
A method is an operation upon an object and is defined in the class declaration.
Weighted Methods per Class: the count of the methods implemented within a class, or the sum of the complexities of the methods.
Object Oriented Metrics in testing
2. Class
(ii) Message: A message is a request that an object makes of another object to perform an operation.
The operation executed as a result of receiving a message is called a method.
Response for a Class: the response for a class is the set of all methods that can be invoked in response to a message to an object of the class, or by some method in the class.
Metric: a combination of the complexity of a class through the number of methods and the amount of communication with other classes.
Object Oriented Metrics in testing
2. Class
(iii) Cohesion: the degree to which methods within a class are related to one another and work together to provide well-bounded behavior.
Lack of Cohesion of Methods (LCOM): measures the degree of similarity of methods by the data input variables or attributes they use.
Object Oriented Metrics in testing
2. Class
Lack of cohesion can be calculated in two ways:
1. Calculate, for each data field in a class, what percentage of the methods use that data field. Average the percentages, then subtract from 100%. Lower percentages mean greater cohesion of data and methods in the class.
2. Methods are more similar if they operate on the same attributes. Count the number of disjoint sets produced from the intersection of the sets of attributes used by the methods. A sketch of this second approach is given below.
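A hedged sketch of the second approach: the class layout (which method uses which attributes) is hypothetical, and the metric reported is simply the number of disjoint method groups.

# LCOM sketch: methods that share at least one attribute fall into the same
# group; the metric is the number of disjoint groups (higher = less cohesive).
method_attrs = {
    "deposit":   {"balance"},
    "withdraw":  {"balance"},
    "set_owner": {"owner_name"},
    "get_owner": {"owner_name"},
}

def disjoint_groups(usage):
    groups = []                                    # each group: {"methods", "attrs"}
    for method, attrs in usage.items():
        merged = {"methods": {method}, "attrs": set(attrs)}
        remaining = []
        for group in groups:
            if group["attrs"] & merged["attrs"]:   # shares an attribute -> merge
                merged["methods"] |= group["methods"]
                merged["attrs"]   |= group["attrs"]
            else:
                remaining.append(group)
        remaining.append(merged)
        groups = remaining
    return len(groups)

print(disjoint_groups(method_attrs))   # 2 disjoint sets for this example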
Object Oriented Metrics in testing
2. Class

(iv) Coupling:
Coupling is a measure of the strength of association established by a connection from one entity to another.
Classes (objects) are coupled in three ways, as explained below:
1. When a message is passed between objects, the objects are said to be coupled.
2. Classes are coupled when methods declared in one class use methods or attributes of other classes.
3. Inheritance introduces significant tight coupling between superclasses and their subclasses.
Object Oriented Metrics in testing
3. Inheritance
Inheritance decreases complexity by reducing the number of operations and operators, but this abstraction of objects can make maintenance and design difficult.
1. Depth of Inheritance Tree
The depth of a class within the inheritance hierarchy is the maximum length from the class node to the root of the tree, measured by the number of ancestor classes. The deeper a class is within the hierarchy, the greater the number of methods it is likely to inherit, making its behavior more complex to predict.
Object Oriented Metrics in testing
3. Inheritance
2. Number of Children
The number of children is the number of immediate subclasses subordinate to a class in the hierarchy. It is an indicator of the potential influence a class can have on the design and on the system.
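To make DIT and NOC concrete, here is a hedged sketch over a small, hypothetical class hierarchy; it uses Python introspection purely for illustration, not as a prescribed measurement tool.

# Depth of Inheritance Tree (DIT) and Number of Children (NOC) for a
# hypothetical hierarchy, computed via Python introspection.
class Account: pass
class SavingsAccount(Account): pass
class CurrentAccount(Account): pass
class SalaryAccount(SavingsAccount): pass

def depth_of_inheritance(cls, root=object):
    # number of ancestor classes between cls and the root
    return len([c for c in cls.__mro__ if c not in (cls, root)])

def number_of_children(cls):
    # immediate subclasses only
    return len(cls.__subclasses__())

print(depth_of_inheritance(SalaryAccount))   # 2 (SavingsAccount, Account)
print(number_of_children(Account))           # 2 (SavingsAccount, CurrentAccount)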
