Testing Unit 4 SE


UNIT IV

Testing Strategies: A strategic approach to software testing, test strategies for
conventional software, Black-Box and White-Box testing, Validation testing,
System testing, the art of Debugging.
Product Metrics: Software Quality, Metrics for the Analysis Model, Metrics for
the Design Model, Metrics for source code, Metrics for testing, Metrics for
maintenance.
Metrics for Process and Products: Software Measurement, Metrics for software
quality.
Testing Strategies
• Software is tested to uncover errors introduced during design and
construction.
• Testing Strategy provides a road map that describes the steps to be
conducted as part of testing.
• It should incorporate test planning, test case design, test execution, and
resultant data collection and evaluation.
• Validation refers to a different set of activities that ensures that the
software is traceable to customer requirements.
A Strategic Approach to Software Testing
Testing is a set of activities that can be planned in advance and conducted
systematically. A testing strategy should have the following characteristics:
-- Usage of Formal Technical Reviews (FTR)
-- Begins at the component level and covers the entire system
-- Different techniques at different points
-- Conducted by the developer and a test group
-- Should include debugging
Software testing is an element of verification and validation.

Verification refers to the set of activities that ensure that software correctly
implements a specific function.
( Ex: Are we building the product right? )

Validation refers to the set of activities that ensure that the software built is
traceable to customer requirements.
( Ex: Are we building the right product? )
Test Strategies for Conventional Software:

Testing strategies for conventional software can be viewed as a spiral
consisting of four levels of testing:
1) Unit Testing
2) Integration Testing
3) Validation Testing
4) System Testing
Unit Testing
• Unit Testing begins at the vortex of the spiral and concentrates on each
unit of software in source code.
• It uses testing techniques that exercise specific paths in a component
and its control structure to ensure complete coverage and maximum
error detection. It focuses on the internal processing logic and data
structures.
• Test cases should uncover errors.
Integration Testing
• This focuses on constructing the software architecture while testing to
uncover errors associated with interfacing, addressing verification and
program construction issues.
• Although modules function independently, interfacing can cause
errors.
• Top-down integration starts from the main control module and moves
downward, while bottom-up integration starts with atomic modules
and moves upward.
• A combined Sandwich strategy uses top-down for higher-level modules
and bottom-up for lower-level ones.
Validation Testing
• Validation testing ensures that software meets all functional, behavioral, and
performance requirements as specified in the Software Requirements
Specification (SRS).
• It involves high-order tests to validate the constructed software against these
requirements, ensuring it functions as expected by the customer.
• Key aspects include:
1.Validation Test Criteria: These are derived from the SRS.
2.Configuration Review: Ensures all elements of the software configuration have
been properly developed and catalogued.
3.Alpha and Beta Testing:
• Alpha Testing: Conducted at the developer's site by end users in a controlled
environment.
• Beta Testing: Conducted at end-user sites in a live environment. Users report
problems to the developer, who then makes necessary modifications before
final release.
System Testing:
System testing involves evaluating the software and other system elements as a
whole. This is the final high-order testing step in computer system engineering,
combining software with hardware, people, and databases. Various tests are
conducted to fully exercise the system, including:
1.Recovery Testing: Ensures the system can recover from faults and resume
processing within a specified time, evaluating the Mean Time To Repair (MTTR).
2.Security Testing: Verifies protection mechanisms by attempting to penetrate the
system, aiming to make penetration more costly than the value of the information.
3.Stress Testing: Tests the system's robustness by demanding resources in abnormal
quantities, frequencies, or volumes.
4.Performance Testing: Assesses the run-time performance of the software within
an integrated system, requiring both hardware and software instrumentation.
Testing Tactics:
• The primary goal of testing is to find errors, and a good test is one that has
a high probability of uncovering them.
• An effective test should not be redundant and should strike a balance
between simplicity and complexity.
• Two major categories of software testing are:
1.Black Box Testing: This focuses on examining the fundamental aspects of
a system, ensuring that each function of the product is fully operational
without considering the internal workings.
2.White Box Testing: This involves examining the internal operations of a
system, focusing on the procedural details and internal logic of the
software.
Black Box Testing
• Also known as behavioral testing, black box testing focuses on the
functional requirements of the software.
• It fully exercises all functional requirements to find incorrect or
missing functions, interface errors, and database errors.
• This testing is performed in the later stages of the testing process,
treating the system as a black box whose behavior can be determined
by studying its inputs and related outputs, without concern for its
internal workings.
• The various testing methods employed here include:
1) Graph based testing method:
Testing begins by creating a graph of important objects and their
relationships and then devising a series of tests that will cover the graph
so that each object and relationship is exercised and errors are
uncovered.
2) Equivalence partitioning:
This divides the input domain of a program into classes of data from which test
cases can be derived.
Test cases are defined to uncover whole classes of errors, so that the number of
test cases is reduced. This is based on equivalence classes, each of which
represents a set of valid or invalid states for input conditions. This reduces the
cost of testing.

Example
Input consists of values from 1 to 10.
Then the equivalence classes are n<1, 1<=n<=10, and n>10.
Choose one valid class with a value within the allowed range and two invalid
classes where the value is greater than the maximum or smaller than the
minimum.
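For illustration, here is a minimal sketch deriving one representative test value
per equivalence class for the 1-to-10 example; the function name and the chosen
representative values (0, 5, 11) are assumptions made only for this example.

# Equivalence partitioning sketch for the range 1..10 (illustrative only).
MIN_N, MAX_N = 1, 10

def classify(n):
    # Return the equivalence class that the input n falls into.
    if n < MIN_N:
        return "invalid: below minimum"
    if n > MAX_N:
        return "invalid: above maximum"
    return "valid: within 1..10"

# One representative test value per class is enough to cover that class.
for n in (0, 5, 11):
    print(n, "->", classify(n))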
3) Boundary Value Analysis
Select inputs from the equivalence classes such that they lie at the
edges of the classes. The test data either lies on the boundary of a
class of input data or generates output that lies at the boundary of a
class of output data. Test cases exercise boundary values to uncover
errors at the edges of the input domain, where they most often occur.
Example
If 0.0 <= x <= 1.0,
then the test cases are (0.0, 1.0) for valid input and (-0.1, 1.1) for
invalid input.
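A minimal sketch of generating boundary-value test cases for the 0.0..1.0
example; the helper name and the 0.1 step size are assumptions made for
illustration.

# Boundary value analysis sketch for 0.0 <= x <= 1.0 (illustrative only).
LOW, HIGH, STEP = 0.0, 1.0, 0.1

def boundary_cases(low, high, step):
    # Values just below, at, and just above each boundary.
    return [low - step, low, low + step, high - step, high, high + step]

for x in boundary_cases(LOW, HIGH, STEP):
    valid = LOW <= x <= HIGH
    print(f"x = {x:.1f} -> {'valid' if valid else 'invalid'}")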
4) Orthogonal Array Testing
This method is applied to problems in which the input domain is relatively
small but too large for exhaustive testing.
Example
Three inputs A, B, C, each having three values, would require 27 test cases
for exhaustive testing. Orthogonal array testing reduces the number of test
cases to 9, as shown in the sketch below.
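As a sketch, the listing below uses a standard L9(3^3) orthogonal array
restricted to the three inputs; the level labels 1, 2, 3 are placeholders for the
three values of each input, and the check only verifies the pairwise-coverage
property rather than reproducing any particular tool's output.

# L9 orthogonal array: 9 test cases for inputs A, B, C with 3 levels each.
from itertools import combinations, product

L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

# Sanity check: every pair of columns covers all 9 level combinations.
for i, j in combinations(range(3), 2):
    assert {(row[i], row[j]) for row in L9} == set(product((1, 2, 3), repeat=2))

print(f"{len(L9)} test cases instead of {3 ** 3} exhaustive ones")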
White Box Testing
• White box testing is also called glass box testing. It uses the control structure
to derive test cases.
• It involves knowing the internal workings of a program and guarantees that all
independent paths will be exercised at least once.
• It exercises all logical decisions on their true and false sides, executes all
loops, and exercises all data structures for their validity.
• White box testing techniques:
1. Basis path testing
2. Control structure testing
3. Loop testing
1. Basis Path Testing:
• It defines a minimal set of execution paths based on the logical
complexity of a procedural design. This method ensures that every
statement in the program is executed at least once.
• Steps for Basis Path Testing:
1.Draw the flow graph from the program's flow chart.
2.Calculate the cyclomatic complexity of the flow graph.
3.Prepare test cases to cover each unique path identified.
• Two methods to compute the cyclomatic complexity number V(G), as
illustrated in the sketch below:
1. V(G) = E - N + 2, where E is the number of edges and N is the number of
nodes in the flow graph
2. V(G) = number of regions in the flow graph
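A minimal sketch of the edge/node formula applied to an invented flow graph
(the adjacency list below is made up purely for illustration: one if-else decision
followed by a loop):

# Cyclomatic complexity V(G) = E - N + 2 for a small, invented flow graph.
flow_graph = {
    1: [2, 3],   # decision node (if / else)
    2: [4],
    3: [4],
    4: [5],
    5: [6, 4],   # loop-back decision
    6: [7],
    7: [],       # exit node
}

N = len(flow_graph)                                  # number of nodes
E = sum(len(succ) for succ in flow_graph.values())   # number of edges
print(f"E = {E}, N = {N}, V(G) = {E - N + 2}")       # independent paths to cover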
2. Control Structure testing:
This broadens testing coverage and improves the quality of testing. It uses the
following methods:
• Condition testing, also known as condition coverage or predicate
coverage, is a software testing technique that aims to ensure that all
logical conditions in a program's decision points (such as if statements
and loops) are evaluated to both true and false outcomes during testing
(see the sketch after this list).
• Data Flow Testing is a software testing technique that focuses on the
flow of data within a program during its execution. Unlike some other
testing methods that primarily check the execution paths or logical
conditions, data flow testing aims to uncover errors related to the usage
and flow of data variables within the program.
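As a small illustration of condition testing, the compound predicate below (an
invented example) needs test cases that drive each atomic condition to both
true and false outcomes:

# Condition testing sketch: each atomic condition of the compound predicate
# is exercised as both true and false (the predicate itself is invented).
def can_withdraw(balance, amount, card_valid):
    return card_valid and (amount > 0) and (amount <= balance)

cases = [
    (100, 50, True),    # all conditions true      -> expect True
    (100, 50, False),   # card_valid is false      -> expect False
    (100, -5, True),    # amount > 0 is false      -> expect False
    (100, 500, True),   # amount <= balance false  -> expect False
]
for balance, amount, card_valid in cases:
    print(balance, amount, card_valid, "->", can_withdraw(balance, amount, card_valid))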
3. Loop Testing focuses on verifying the correctness and reliability of
loops within a program by testing them under various conditions to
detect potential errors like infinite loops or incorrect loop termination
conditions. Four classes of loops are tested (a sketch for a simple loop
follows this list):
1.Simple loops
2.Nested loops
3.Concatenated loops
4.Unstructured loops
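For a simple loop that allows at most n passes, the usual practice is to try 0, 1,
2, a typical count m, and n-1, n, n+1 iterations. A minimal sketch, with an
invented function and an assumed limit of n = 10:

# Simple-loop testing sketch: iteration counts chosen around the loop limits.
def sum_first(values, k):
    # Sum at most the first k elements of values (invented example).
    total = 0
    for i in range(min(k, len(values))):
        total += values[i]
    return total

n = 10                       # maximum allowed number of loop passes
data = list(range(1, n + 1))
for k in (0, 1, 2, 4, n - 1, n, n + 1):   # 4 stands in for a typical count m
    print(f"k = {k:2d} -> sum = {sum_first(data, k)}")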
The Art of Debugging
Debugging is the process of identifying, analyzing, and fixing errors or bugs
within a software application. It is an essential part of the software
development lifecycle aimed at ensuring that the program functions correctly
and meets its intended requirements.
Debugging Strategies:
The objective of debugging is to find and correct the cause of a software
error. Three strategies are proposed:
1)Brute Force Method.
2)Back Tracking.
3)Cause Elimination.

• The Brute Force method tries all likely sources of the error systematically,
typically by taking memory dumps and scattering output statements through
the program, until the cause is found. It is straightforward but can be
inefficient for large programs.
• Backtracking begins at the site where the symptom is observed and traces
the source code backward until the cause is found. It is effective for small
programs, but the number of potential backward paths grows rapidly with
program size.

• The Cause Elimination method involves systematically identifying and
eliminating potential causes or factors contributing to a problem until
the most likely cause is identified. It is used in troubleshooting and
root cause analysis to pinpoint the underlying reason behind an issue.
Regression Testing:
• When a new module is added as part of integration testing, the software
changes. This may cause problems with functions that worked properly
before.
• Regression testing is the process of testing software to ensure that
recent code changes have not adversely affected existing features.
• It involves rerunning previous test cases to verify that previously
developed and tested software still works correctly after a change or
addition.
• This type of testing helps ensure that new updates or fixes have not
unintentionally introduced bugs or issues into the software.
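A minimal sketch of the idea using Python's built-in unittest module as the
regression suite runner; the function under test and the test cases are invented
for illustration.

# Regression suite sketch: the same saved tests are rerun after every change.
import unittest

def apply_discount(price, percent):
    # Existing, previously tested function (invented example).
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # Previously written test cases, rerun unchanged after each modification.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()   # rerun the whole suite whenever the code changes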
Product Metrics for Software Quality
• Product metrics are used to measure various attributes of software
products like functionality, performance, and quality. These metrics help
in assessing how good the software is and how well it performs.
• Measure: This refers to a numerical value or data point that indicates
how much of something is present (like the size of code or number of
errors).
• Metric: A quantitative measure of the degree to which a system, component,
or process possesses a given attribute. For example, how reliable the software is.
• Indicator: A metric or a combination of metrics that provides deeper insight.
For example, using several metrics together to understand the overall quality of
the software.
PRODUCT METRICS FOR ANALYSIS, DESIGN, TEST, AND MAINTENANCE

1. Product Metrics for the Analysis Model: These metrics are used to
measure the functionality that the software delivers to its users.

a. Function Point Metric: Proposed by Albrecht, it measures how much
functionality the software provides.
b. Parameters of Function Point Metric:
1.Number of External Inputs (EIs)
2.Number of External Outputs (EOs)
3.Number of External Inquiries (EQs)
4.Number of Internal Logical Files (ILFs)
5.Number of External Interface Files (EIFs)
Each parameter count is assigned a weight (simple, average, or complex),
and then the Function Point (FP) value is calculated using the formula:

FP = count total × [0.65 + 0.01 × Σ(Fi)]
Where:
• Count total is the sum of all the weighted parameter counts.
• Fi (i = 1 to 14) are value adjustment factors that rate characteristics such
as performance, data communications, etc., on a scale of 0 to 5.
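A minimal sketch of the calculation; the parameter counts and the Fi ratings
below are invented, and the weights are the commonly used "average" weights,
so the result is purely illustrative.

# Function Point sketch with invented counts and Fi ratings.
counts  = {"EI": 10, "EO": 7, "EQ": 5, "ILF": 4, "EIF": 2}
weights = {"EI": 4,  "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}   # 'average' weights

count_total = sum(counts[p] * weights[p] for p in counts)

# 14 value adjustment factors Fi, each rated 0 (no influence) to 5 (essential).
fi = [3, 4, 2, 5, 3, 3, 4, 2, 3, 3, 2, 4, 3, 3]
fp = count_total * (0.65 + 0.01 * sum(fi))
print(f"count total = {count_total}, FP = {fp:.1f}")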
2. Product Metrics for the Design Model

DSQI (Design Structure Quality Index):
Developed by the US Air Force, it measures the quality of a software design.
• DSQI is computed from several design counts, such as:
• Total number of modules
• Number of modules that depend on prior processing or on the database
• Number of modules with a single entry and a single exit point
By calculating DSQI, you can compare the design's quality against past
projects. If it is lower, it suggests that the design needs improvement.
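A minimal sketch of the general shape of the index, a weighted sum
DSQI = Σ(wi × Di) of intermediate values derived from design counts such as
those above. The Di values and the equal weights used here are invented
placeholders, not the official Air Force definitions.

# DSQI sketch: weighted sum of derived design values D1..D6 (placeholders).
d = [1.0, 0.85, 0.90, 0.95, 0.80, 0.88]   # invented intermediate values, 0..1
w = [1 / len(d)] * len(d)                  # weights sum to 1; equal here

dsqi = sum(wi * di for wi, di in zip(w, d))
print(f"DSQI = {dsqi:.2f}")   # compare against DSQI values of past projects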
3. Metrics for Source Code

Halstead Software Science (HSS):
This is a way to measure the complexity of source code using operators
and operands:
• n1 = number of distinct operators in the program
• n2 = number of distinct operands in the program
• N1 = total occurrences of operators
• N2 = total occurrences of operands
From these values the overall program length, volume, and complexity can
be calculated (for example, length N = N1 + N2 and vocabulary n = n1 + n2).
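A minimal sketch of the basic Halstead quantities under assumed counts (the
counts are invented; length, vocabulary, volume, and estimated length follow
the standard Halstead formulas):

# Halstead Software Science sketch with invented operator/operand counts.
from math import log2

n1, n2 = 12, 18        # distinct operators, distinct operands (assumed)
N1, N2 = 70, 95        # total operator and operand occurrences (assumed)

N = N1 + N2                              # program length
n = n1 + n2                              # program vocabulary
V = N * log2(n)                          # program volume
N_hat = n1 * log2(n1) + n2 * log2(n2)    # estimated program length

print(f"N = {N}, n = {n}, V = {V:.1f}, estimated length = {N_hat:.1f}")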
4. Metrics for Testing
Testing metrics are used to measure the effectiveness and efficiency of
the testing process.

Program Level (PL) and Effort (e) can be calculated from the Halstead
operator and operand counts of the program. These values help estimate
how much effort is required to test the program.
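Using the same invented counts as the source-code sketch above, the
testing-related Halstead values can be computed as commonly given
(PL = 1 / [(n1 / 2) × (N2 / n2)] and e = V / PL); treat these exact formulas as an
assumption drawn from the standard Halstead treatment.

# Halstead-based testing-effort sketch (same invented counts as above).
from math import log2

n1, n2 = 12, 18
N1, N2 = 70, 95

V = (N1 + N2) * log2(n1 + n2)        # program volume
PL = 1 / ((n1 / 2) * (N2 / n2))      # program level
e = V / PL                           # effort: higher e suggests more testing work
print(f"PL = {PL:.4f}, effort e = {e:.0f}")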
5. Metrics for Maintenance
Maintenance metrics assess how well the software can be updated or
modified over time.

SMI (Software Maturity Index):
This measures the stability of the software by considering the number of
modules added, changed, or deleted in each release, relative to the total
number of modules in the current release. As SMI approaches 1.0, the
product begins to stabilize.
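A minimal sketch of the usual SMI formula, SMI = [MT - (Fa + Fc + Fd)] / MT,
with invented counts for one release:

# Software Maturity Index sketch with invented counts for one release.
MT = 120   # modules in the current release
Fa = 6     # modules added
Fc = 9     # modules changed
Fd = 3     # modules deleted

smi = (MT - (Fa + Fc + Fd)) / MT
print(f"SMI = {smi:.2f}")   # values approaching 1.0 indicate a stabilizing product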
METRICS FOR PROCESS AND PRODUCT
1. Software Measurement: It refers to gathering data to understand how
well the software process or product is performing.

1.Direct Measures: These include things that can be directly measured,
like cost, effort, lines of code, and execution speed.
2.Indirect Measures: These look at software attributes like complexity,
functionality, efficiency, reliability, and maintainability.

2.Reasons for Measurement:
1.To compare performance across current and future projects.
2.To monitor the project’s status.
3.To predict size, cost, and time.
4.To improve both the product and process.
Types of Software Metrics

1. Size-Oriented Metrics:
These focus on measuring the size of the software (e.g., Lines of Code
(LOC)). Data such as effort, defects, people involved, and errors are
collected and normalized against size.

2. Function-Oriented Metrics: These measure the functionality provided
by the software (e.g., Function Points). They are independent of the
programming language and look at how the user experiences the
software.
3. Object-Oriented Metrics: These are used in object-oriented
programming and measure things like:
1.Number of scenarios (similar to use cases).
2.Number of key classes.
3.Number of support classes and subsystems.

4. Web-Based Application Metrics: For web applications, metrics measure:
1.Number of Static Pages (NSP).
2.Number of Dynamic Pages (NDP).
3.The Customization Ratio (C).
Metrics for Software Quality
These metrics help to assess how well the software is performing from a
quality standpoint:
1.Correctness: Measures how many defects are present per thousand
lines of code (KLOC).
2.Maintainability: This is measured by Mean-Time to Change (MTTC),
which calculates how long it takes to implement changes.
3.Integrity: This is a measure of the ability of the software to withstand
attacks on its security, commonly calculated as:
integrity = Σ [1 - threat × (1 - security)], summed over each type of attack
Where:
• Threat: Probability that an attack of a specific type will occur.
• Security: Probability that an attack of a specific type will be repelled.
4. Usability: Measures how easy it is to use the software.
5. Defect Removal Efficiency (DRE): This measures how
effective the team was at finding and fixing errors before
release.

•E = Number of errors found before delivery.


•D = Number of defects reported after delivery.
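A minimal sketch computing three of these metrics from invented project data;
the counts are placeholders, and the integrity expression uses the commonly
quoted single-threat-type form.

# Software quality metrics sketch with invented project data.
kloc = 25                    # thousands of lines of code
defects = 55                 # defects found in the product
errors_before = 120          # E: errors found before delivery
defects_after = 8            # D: defects reported after delivery
threat, security = 0.25, 0.95

correctness = defects / kloc                      # defects per KLOC
integrity = 1 - threat * (1 - security)           # single threat type
dre = errors_before / (errors_before + defects_after)

print(f"correctness = {correctness:.2f} defects/KLOC")
print(f"integrity   = {integrity:.4f}")
print(f"DRE         = {dre:.3f}")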

You might also like