5.2 Test Framework

Metrics & Measurement

The terms "metrics" and "measurement" are often used interchangeably, but they have distinct meanings in
the context of software testing and other fields. Here's a breakdown of the differences:

Measurement

● Definition: Measurement refers to the process of quantifying a specific attribute or characteristic. It involves collecting data to determine the value of that attribute.
● Example: Measuring the response time of a web application (e.g., 2 seconds for a specific request) is a measurement. It provides a direct numerical value indicating performance.

Metrics

● Definition: Metrics are calculated values derived from measurements that provide insights into
performance, quality, or efficiency. Metrics often involve aggregating, analyzing, or comparing
measurements to gain a broader understanding.
● Example: The average response time across multiple requests (e.g., the average response time of
all requests over a week) is a metric. It helps in evaluating overall performance trends.
Key Differences between Metric & Measurement

1. Nature:
○ Measurement: A raw data point or value.
○ Metric: An interpretation or analysis of one or more measurements.
2. Purpose:
○ Measurement: To capture specific values for direct assessment.
○ Metric: To provide context and facilitate decision-making based on those values.
3. Complexity:
○ Measurement: Often straightforward and involves a single data point.
○ Metric: Can involve calculations, aggregations, and comparisons to derive insights.

In summary, measurement is about collecting data, while metrics are about analyzing that data to inform
decisions and improve processes.
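To make the distinction concrete, here is a minimal Python sketch (the sample response times are hypothetical): the individual response times are measurements, and the average derived from them is a metric.

# Measurements: raw response times (in seconds) for individual requests.
# The sample values below are hypothetical.
response_times = [2.0, 1.5, 3.2, 2.8, 1.9]

# Metric: the average response time, derived by aggregating the measurements.
average_response_time = sum(response_times) / len(response_times)
print(f"Average response time: {average_response_time:.2f} s")  # 2.28 s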
Measurement in software testing refers to the process of quantifying various attributes of software quality, testing processes,
and the outcomes of those processes. It helps teams assess the effectiveness of testing efforts, understand software
performance, and identify areas for improvement. Key aspects of measurement in software testing include:

1. Defect Metrics: Tracking the number, severity, and types of defects found during testing helps gauge software quality.
2. Test Coverage: Measuring how much of the software's code or functionality is exercised by tests, often expressed as
a percentage.
3. Test Case Metrics: This includes metrics such as the number of test cases created, executed, and passed or failed.
4. Performance Metrics: Measuring aspects like response time, throughput, and resource usage to evaluate the
software's performance under different conditions.
5. Test Execution Metrics: Monitoring the time taken to execute tests and the pass/fail rates can provide insights into
the efficiency of the testing process.
6. Defect Density: Calculating the number of defects relative to the size of the software (e.g., per thousand lines of
code) helps understand the quality of the codebase.
7. Automation Metrics: Evaluating the percentage of tests that are automated versus manual, and the success rate of
automated tests.

By systematically measuring these aspects, teams can make data-driven decisions, improve testing processes, and
ultimately deliver higher-quality software.
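As a small illustration of how such measurement turns into a number teams can act on, here is a minimal Python sketch computing test coverage as a percentage; the counts are invented for illustration.

# Hypothetical counts from a test cycle.
total_requirements = 120    # requirements defined for the release
requirements_tested = 102   # requirements exercised by at least one test case

# Test coverage expressed as a percentage of requirements covered.
test_coverage = requirements_tested / total_requirements * 100
print(f"Test coverage: {test_coverage:.1f}%")  # 85.0%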
Software testing metrics are quantitative measures used to evaluate the effectiveness, efficiency, and quality of the testing process and the software being tested. These metrics provide insights into various aspects of the testing lifecycle, helping teams make informed decisions and improve their practices. Here are some common software testing metrics (a short sketch after the list computes a few of them):
1. Defect Density: The number of defects per unit of code (e.g., per 1,000 lines of code). It helps assess code quality.
2. Test Coverage: The percentage of requirements, code, or features tested by the test cases. Higher coverage often
indicates a more thorough testing process.
3. Test Case Pass Rate: The percentage of test cases that pass during a testing cycle. It helps gauge the overall
stability of the software.
4. Defect Discovery Rate: The number of defects found over a specific time period. It can indicate the effectiveness of
the testing process.
5. Test Execution Rate: The percentage of test cases executed against the total number of planned test cases. This
metric helps assess testing progress.
6. Mean Time to Detect (MTTD): The average time taken to find defects after they are introduced. A lower MTTD
indicates more effective testing.
7. Mean Time to Repair (MTTR): The average time taken to fix defects after they are reported. This helps understand
the efficiency of the development process.
8. Automation Rate: The percentage of test cases that are automated compared to the total number of test cases.
Higher automation rates can lead to faster testing cycles.
9. Defect Severity and Priority: Categorizing defects based on their impact (severity) and urgency (priority) helps in
effective triaging and resolution.
10. Cost of Quality: The total cost of ensuring good quality, including prevention, appraisal, and failure costs. It helps in
understanding the financial impact of testing activities.
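As promised above, a minimal Python sketch of two of these metrics, defect density and test case pass rate; all input counts are hypothetical.

# Hypothetical inputs from a testing cycle.
defects_found = 45
kloc = 30.0            # codebase size in thousands of lines of code
tests_executed = 500
tests_passed = 460

# Defect density: defects per 1,000 lines of code.
defect_density = defects_found / kloc
print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 1.50

# Test case pass rate: share of executed test cases that passed.
pass_rate = tests_passed / tests_executed * 100
print(f"Test case pass rate: {pass_rate:.1f}%")  # 92.0%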
In software testing and development, metrics can be categorized into three main
types: product metrics, project metrics, and process metrics. Each serves a different
purpose and provides insights into various aspects of software and development
practices. Here’s a breakdown of each type:
1. Product Metrics
Definition: These metrics focus on the characteristics and quality of the software product itself.
Purpose: To assess the performance, reliability, and usability of the software.
Examples:
● Defect Density: Number of defects per unit of code (e.g., per 1,000 lines).
● Test Coverage: Percentage of code or requirements tested.
● Response Time: Time taken to complete a user request.
● User Satisfaction: Often measured through surveys or ratings.

When considering metrics specifically in the context of software testing, product, project, and process metrics each
play a crucial role in evaluating the effectiveness and efficiency of testing activities. Here’s how they apply:

1. Product Metrics in Testing

Definition: These metrics assess the quality of the software product being tested.
Purpose: To evaluate how well the software meets quality standards and requirements.
Examples:
● Defect Density: Measures the number of defects found in a specific size of the software (e.g., per 1,000 lines of code). This
helps determine the overall quality of the product.
● Test Coverage: Indicates the percentage of code or functionalities tested, showing how thoroughly the software has been
evaluated.
● Defect Severity: Categorizes defects based on their impact (e.g., critical, major, minor), helping prioritize fixes and assess
product stability.
● User Satisfaction: Gathered through feedback and surveys after testing phases, reflecting how well the software meets
user expectations.
2. Project Metrics
Definition: These metrics provide insights into the management and execution of a software project.
Purpose: To evaluate project performance, timeline adherence, and resource utilization.
Examples:

● Schedule Variance: Difference between planned and actual project timelines.
● Budget Variance: Difference between budgeted and actual costs.
● Effort Estimation Accuracy: Comparison of estimated effort versus actual effort spent.
● Task Completion Rate: Percentage of tasks completed on time.
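A minimal Python sketch of the two variance metrics above, computed as simple planned-versus-actual differences; the figures are invented for illustration.

# Hypothetical planned vs. actual figures for a project phase.
planned_days, actual_days = 20, 24
planned_cost, actual_cost = 50_000, 57_500

# Schedule variance: positive means the phase ran longer than planned.
schedule_variance = actual_days - planned_days
print(f"Schedule variance: {schedule_variance} days")  # 4 days

# Budget variance: positive means the phase cost more than budgeted.
budget_variance = actual_cost - planned_cost
print(f"Budget variance: {budget_variance}")  # 7500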

Project Metrics in Testing

Definition: These metrics evaluate the testing phase within the broader context of the software development project.
Purpose: To monitor the efficiency and effectiveness of the testing efforts in relation to project timelines and resources.
Examples:

● Test Execution Rate: The percentage of planned test cases that have been executed within a certain timeframe, providing insight into
testing progress.
● Defect Discovery Rate: Tracks how many defects are identified during testing phases, helping to assess the effectiveness of testing
efforts and identifying if further testing is needed.
● Test Case Pass Rate: The percentage of test cases that pass successfully, which helps gauge the stability of the software at various
stages.
● Schedule Variance: Compares planned testing timelines with actual timelines, helping identify delays and potential issues in resource
allocation.
3. Process Metrics
Definition: These metrics assess the efficiency and effectiveness of the processes used in software development and testing.
Purpose: To improve processes by identifying areas for optimization and ensuring best practices.
Examples:

● Defect Discovery Rate: Rate at which defects are found during testing phases.
● Mean Time to Repair (MTTR): Average time taken to resolve defects.
● Test Execution Rate: Percentage of planned test cases executed in a testing cycle.
● Cycle Time: Time taken to complete a specific phase or task in the development process.
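As an illustration of a process metric, here is a minimal Python sketch computing Mean Time to Repair (MTTR) from defect records; the timestamps are hypothetical.

from datetime import datetime

# Hypothetical defect records: (reported, fixed) timestamps.
defects = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 15, 0)),   # 6 hours
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 3, 10, 0)),  # 24 hours
    (datetime(2024, 3, 4, 8, 0), datetime(2024, 3, 4, 20, 0)),   # 12 hours
]

# MTTR: average elapsed time between a defect being reported and fixed.
repair_hours = [(fixed - reported).total_seconds() / 3600 for reported, fixed in defects]
mttr = sum(repair_hours) / len(repair_hours)
print(f"MTTR: {mttr:.1f} hours")  # 14.0 hours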

Process Metrics in Testing

Definition: These metrics evaluate the testing processes and methodologies used during software testing.
Purpose: To optimize testing practices and improve overall efficiency.
Examples:

● Mean Time to Detect (MTTD): The average time taken to identify defects after they are introduced. A shorter MTTD indicates a more
effective testing process.
● Mean Time to Repair (MTTR): Measures how long it takes to fix identified defects, helping assess the responsiveness of the development
team.
● Test Automation Rate: The percentage of test cases that are automated, which can influence the speed and consistency of testing.
● Defect Resolution Rate: The percentage of reported defects that are resolved within a given timeframe, providing insights into the
efficiency of the defect management process.
Summary

● Product Metrics focus on the quality of the software being tested, providing insights into its readiness for release.
● Project Metrics assess the efficiency of the testing phase within the project, helping manage timelines and resources
effectively.
● Process Metrics evaluate the effectiveness of the testing processes, allowing teams to continuously improve their practices.

By using these metrics, teams can make informed decisions, enhance testing strategies, and ultimately deliver higher-quality software.
A test automation framework is like a set of guidelines or rules used to automate tests for software applications. These guidelines can cover things like how to write automated test code, how to handle test data, where to store test results, or how to use resources from outside the software being tested.

While these guidelines aren't strict rules, they can make test automation more organized and efficient. Here are some of the benefits of using a test automation framework:
● Faster and more efficient test automation.
● Lower costs for maintaining automated tests.
● Less need for manual work in the automation process.
● Testing a wider range of aspects automatically.
● Being able to reuse the same automated test code for different tests.

There are several types of test automation frameworks, each designed to address specific automated testing needs and challenges. Let's explore some common types of test automation frameworks in the next section.
Types of Frameworks:

Typically, there are 5 test automation frameworks that are popularly adopted when automating applications:

1. Data Driven Automation Framework
2. Keyword Driven Automation Framework
3. Modularity Driven Automation Framework
4. Model Based Framework
5. Hybrid Automation Framework
(i) Data Driven Testing (DDT): Data-driven testing is a term used in the testing of computer software to describe testing done using a table of conditions directly as test inputs and verifiable outputs, as well as the process where test environment settings and control are not hard-coded.
In the simplest form, the tester supplies the inputs from a row in the table and expects the outputs that occur in the same row. The table typically contains values that correspond to boundary or partition input spaces.
In the control methodology, test configuration is "read" from a database.
Example: Developing the Flight Reservation login script using this method involves two steps.
Step 1) Create a test data file, which could be Excel, CSV, or any other database source.
Step 2) Develop the test script and make references to your test data source.

AgentName    Password
Jimmy        Mercury
Tina         MERCURY
Bill         MerCURY
' Launch the Flight Reservation application.
SystemUtil.Run "flight4a.exe","","","open"

' Fill the login dialog from the current row of the data table.
Dialog("Login").WinEdit("Agent Name:").Set DataTable("AgentName", dtGlobalSheet)
Dialog("Login").WinEdit("Password:").Set DataTable("Password", dtGlobalSheet)
Dialog("Login").WinButton("OK").Click

' Check that the Flight Reservation window has loaded.
Window("Flight Reservation").Check CheckPoint("Flight Reservation")

**Note: "dtGlobalSheet" is the default Excel sheet provided by QTP.
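For readers not using QTP, here is a hedged sketch of the same data-driven idea in Python using pytest's parametrize decorator; the login function and its behavior are hypothetical stand-ins for driving the real application.

import pytest

# Hypothetical stand-in for the application's login routine.
# In a real suite this would drive the UI or call the application's API.
def login(agent_name: str, password: str) -> bool:
    return password.lower() == "mercury"

# The test data table: each row becomes one data-driven iteration.
@pytest.mark.parametrize("agent_name, password", [
    ("Jimmy", "Mercury"),
    ("Tina", "MERCURY"),
    ("Bill", "MerCURY"),
])
def test_login(agent_name, password):
    assert login(agent_name, password)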


(ii) Modularity-driven Testing: Modularity-driven testing is a term used in the testing of software. The test script modularity framework requires the creation of small, independent scripts that represent modules, sections, and functions of the application under test. These small scripts are then used in a hierarchical fashion to construct larger tests, realizing a particular test case.
Of all the frameworks, this one should be the simplest to grasp and master. It is a well-known programming strategy to build an abstraction layer in front of a component to hide the component from the rest of the application. This insulates the application from modifications in the component and provides modularity in the application design. The test script modularity framework applies this principle of abstraction or encapsulation in order to improve the maintainability and scalability of automated test suites.
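A minimal Python sketch of the modular idea: small, independent functions wrap sections of a hypothetical application, and a larger test composes them hierarchically (all names and actions are illustrative placeholders).

# Small, independent scripts, one per section of the application under test.
def open_application():
    print("launching application")                # placeholder for real launch logic

def login(user: str, password: str):
    print(f"logging in as {user}")                # placeholder for real login steps

def book_flight(origin: str, destination: str):
    print(f"booking {origin} -> {destination}")   # placeholder for booking steps

def logout():
    print("logging out")                          # placeholder for logout steps

# A larger test built hierarchically from the modules above.
def test_book_flight_end_to_end():
    open_application()
    login("Jimmy", "Mercury")
    book_flight("NYC", "LON")
    logout()

Because each module hides its section's details, a change to the login dialog only requires updating login(), not every test that uses it.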
(iii) Keyword-driven Testing: Keyword-driven testing, also known as table-driven testing or action-word-based testing, is a software testing methodology suitable for both manual and automated testing. This method separates the documentation of test cases, including the data to use, from the prescription of the way the test cases are executed. As a result, it separates the test creation process into two distinct stages: a design and development stage, and an execution stage. The method uses keywords (or action words) to symbolize a functionality to be tested, such as Enter Client.
The keyword Enter Client is defined as the set of actions that must be executed to enter a new client in the database. Its keyword documentation would contain:
● The starting state of the System Under Test (SUT).
● The window or menu to start from.
● The keys or mouse clicks to get to the correct data entry window.
● The names of the fields to find and which arguments to enter.
● The actions to perform in case additional dialogs pop up (like confirmations).
● The button to click to submit.
● An assertion about what the state of the SUT should be after completion of the action.
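A minimal Python sketch of the keyword-driven idea: test cases are written as a table of keywords and arguments, and a small engine dispatches each row to its implementation (the keywords and actions are illustrative).

# Action implementations, one per keyword; bodies are placeholders.
def enter_client(name: str):
    print(f"entering client {name}")

def delete_client(name: str):
    print(f"deleting client {name}")

# Dispatch table mapping keywords (action words) to implementations.
KEYWORDS = {
    "Enter Client": enter_client,
    "Delete Client": delete_client,
}

# Design stage: the test case is just a table, writable without coding.
test_case = [
    ("Enter Client", "Alice"),
    ("Enter Client", "Bob"),
    ("Delete Client", "Alice"),
]

# Execution stage: interpret the table row by row.
for keyword, argument in test_case:
    KEYWORDS[keyword](argument)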
(iv) Model-based Testing: Model-based testing is the application of model-based design for designing, and optionally also executing, artifacts to perform software testing or system testing.

Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.
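As a toy illustration, here is a hedged Python sketch in which a small state-machine model of a hypothetical login flow is walked to derive test sequences; the model, states, and actions are all invented.

# A tiny state-machine model of the desired behavior: state -> {action: next_state}.
MODEL = {
    "LoggedOut": {"login_ok": "LoggedIn", "login_bad": "LoggedOut"},
    "LoggedIn":  {"logout": "LoggedOut"},
}

# Derive test sequences by enumerating every path of a fixed length through the model.
def derive_tests(state, depth, path=()):
    if depth == 0:
        yield path
        return
    for action, next_state in MODEL[state].items():
        yield from derive_tests(next_state, depth - 1, path + (action,))

for sequence in derive_tests("LoggedOut", 2):
    print(sequence)
# ('login_ok', 'logout'), ('login_bad', 'login_ok'), ('login_bad', 'login_bad')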
(v) Hybrid Testing:
As the name suggests, this framework is the combination of one or more of the automation frameworks discussed above, pulling from their strengths and trying to mitigate their weaknesses. The hybrid test automation framework is what most test automation frameworks evolve into over time and across multiple projects. Much of the industry uses the keyword-driven framework in combination with the function decomposition method.
