
UNIT - IV

Software Testing
Software Testing Strategies:

• Software Testing is one of the important phases of software development.

• Testing is the process of executing a program with the intention of finding errors; it typically involves about 40% of total project cost.

Testing strategy: It is a road map that incorporates test planning, test case design, test execution, and the collection and evaluation of the resulting data.
Testing Strategies for Conventional Software:
I. Unit Testing
II. Integration Testing
III. Verification and Validation Testing
IV. System Testing
I. Unit Testing

• Unit testing involves testing each unit, or individual component, of the software application. It is the first level of functional testing.
• The aim of unit testing is to validate that each unit component performs as expected.
• A unit is a single testable part of a software system, and it is tested during the development phase of the application software.
There are several types of unit testing, each with its own advantages and use cases. The three common types of unit testing are:
1. White-box testing,
2. Black-box testing, and
3. Gray-box testing.
Black-box testing:
It treats the system as a black box whose behavior can be determined only by studying its inputs and the related outputs; it is not concerned with the internal structure of the program. It focuses on the functional requirements of the software, i.e. it enables the software engineer to derive sets of input conditions that fully exercise all the functional requirements for that program. It is concerned with functionality, not implementation. Black box testing is also called functional testing.
Black box testing can be applied to three main types of tests:

1. Functional,
2. Non-functional, and
3. Regression testing.

1. Functional Testing: Black box testing can test specific functions or features of the software under test. For example, checking that it is possible to log in using correct user credentials, and not possible to log in using wrong credentials.

2. Non-Functional Testing: Black box testing can check additional aspects of the software, beyond features and functionality. A non-functional test does not check "if" the software can perform a specific action but "how" it performs that action.
3. Regression Testing: Black box testing can be used to check if a new version of the software exhibits a regression, or degradation in capabilities, from one version to the next. Regression testing can be applied to functional aspects of the software (for example, a specific feature no longer works as expected in the new version), or non-functional aspects (for example, an operation that performed well is very slow in the new version).
4. Boundary Value Analysis: In addition to the three test types above, black box testers often apply boundary value analysis, a test design technique. Testers can identify that a system has a special response around specific boundary values. For example, if a field accepts only values between 0 and 99, testers focus on the boundary values (-1, 0, 99 and 100) to see whether the system accepts and rejects inputs correctly.
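As a rough sketch of how the 0 to 99 example might be tested in C (is_valid_value is a made-up validator, not from any real library):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical validator under test: accepts only values 0..99. */
static bool is_valid_value(int v) {
    return v >= 0 && v <= 99;
}

int main(void) {
    /* Boundary value analysis: probe just below, on, and just above
       each boundary of the accepted range [0, 99]. */
    assert(!is_valid_value(-1));   /* just below lower boundary: reject */
    assert(is_valid_value(0));     /* lower boundary: accept */
    assert(is_valid_value(99));    /* upper boundary: accept */
    assert(!is_valid_value(100));  /* just above upper boundary: reject */
    printf("All boundary value tests passed.\n");
    return 0;
}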
White Box Testing:
It is also called glass box testing. It involves knowing the internal workings of a program. It guarantees that all independent paths are exercised at least once, exercises all logical decisions on their true and false sides, executes all loops, and exercises all data structures for their validity. The main white box techniques are basis path testing and control structure testing. It is also called structural testing.

By combining black box and white box testing, testers can achieve a comprehensive "inside and outside" inspection of a software application and increase coverage of quality and security issues. This combined approach is called grey box testing.
1. Condition Testing: It exercises the logical conditions contained in a program module. It focuses on testing each condition in the program to ensure that it does not contain errors.
2. Data Flow Testing: It selects test paths according to the locations of definitions and uses of variables in a program, and aims to ensure that each definition of a variable and its subsequent uses are tested.
3. Loop Testing: Loop testing is a white box testing technique, and one of the types of control structure testing, performed to validate the loops in a program. It focuses on the validity of loop constructs, for which four categories can be defined:

a. Simple loops
b. Nested loops
c. Concatenated loops and
d. Unstructured loops
a. Simple Loop Testing: Testing performed on a simple loop is known as simple loop testing. A simple loop is basically a normal "for", "while" or "do-while" loop in which a condition is given, and the loop runs and terminates according to the true or false outcome of that condition. This type of testing is performed basically to check whether the loop's condition is sufficient to terminate the loop after some point in time; a sketch of the typical test cases follows.
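A minimal sketch of simple loop testing in C, assuming a made-up function sum_first whose for loop is the loop under test; the classic cases exercise zero, one, a typical number, and the maximum number of iterations:

#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test: sums the first n elements of a[]. */
static int sum_first(const int a[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {  /* the simple loop under test */
        total += a[i];
    }
    return total;
}

int main(void) {
    int a[] = {1, 2, 3, 4, 5};
    assert(sum_first(a, 0) == 0);   /* skip the loop entirely */
    assert(sum_first(a, 1) == 1);   /* exactly one pass */
    assert(sum_first(a, 3) == 6);   /* typical number of passes */
    assert(sum_first(a, 5) == 15);  /* maximum number of passes */
    printf("Simple loop tests passed.\n");
    return 0;
}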
b. Nested Loop Testing: Testing performed on a nested loop is known as nested loop testing. A nested loop is basically one loop inside another loop; there can be a finite number of loops inside a loop, forming a nest. Each loop may be any of the three kinds: for, while, or do-while.
Example:
while (condition 1)
{
    while (condition 2)
    {
        statement(s);
    }
}
c. Concatenated Loop Testing: Testing performed on concatenated loops is known as concatenated loop testing. Concatenated loops are loops placed one after another, forming a series of loops. The difference between nested and concatenated loops is that in a nested loop one loop is inside another, whereas here each loop follows the previous one.
Example:

while(condition 1)
{
statement(s);
}
while(condition 2)
{
statement(s);
}
d. Unstructured Loop Testing: Testing performed on an unstructured loop is known as unstructured loop testing. An unstructured loop is a combination of nested and concatenated loops; it is basically a group of loops that follow no particular order.
Example:
while()
{
for()
{}
while()
{}
}

Advantages of Loop Testing:


a. Loop testing limits the number of iterations of a loop.
b. Loop testing ensures that the program doesn't go into an infinite loop.
c. Loop testing ensures the initialization of every variable used inside the loop.
d. Loop testing helps in the identification of different problems inside the loop.
e. Loop testing helps in determining loop capacity.
Disadvantages of Loop Testing:

• Loop testing is mostly effective at detecting bugs in low-level software.

• Loop testing detects loop-related bugs but is not useful for fixing them.

II. Integration Testing

It is the process of testing the interface between two software units or modules. It
focuses on determining the correctness of the interface. The purpose of integration
testing is to expose faults in the interaction between integrated units. Once all the
modules have been unit tested, integration testing is performed.
Integration testing is a software testing technique that focuses on verifying the interactions and data exchange between different components or modules of a software application. The goal of integration testing is to identify any problems or bugs that arise when different components are combined and interact with each other. Integration testing is typically performed after unit testing and before system testing.
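A minimal sketch of an integration test in C, assuming two made-up modules: format_user produces a record that parse_id must be able to read back, so the test exercises the interface between them rather than either module alone:

#include <assert.h>
#include <stdio.h>

/* Module A (hypothetical): formats a user record into a buffer. */
static void format_user(char *buf, size_t size, const char *name, int id) {
    snprintf(buf, size, "%d:%s", id, name);
}

/* Module B (hypothetical): parses the id back out of a record. */
static int parse_id(const char *record) {
    int id = -1;
    sscanf(record, "%d:", &id);
    return id;
}

int main(void) {
    /* The integration test checks the data exchange across the
       interface: output of module A must be valid input for module B. */
    char record[64];
    format_user(record, sizeof record, "alice", 42);
    assert(parse_id(record) == 42);
    printf("Integration test passed.\n");
    return 0;
}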
III. Verification and Validation Testing

Verification is the process of determining whether the software is designed and developed as per the specified requirements. Validation is the process of checking whether the software (end product) has met the customer's true needs and expectations.

Validation testing is the process of assessing a new software product to ensure that its
performance matches customer needs. Product development teams might perform
validation testing to learn about the integrity of the product itself and its performance
in different environments.
IV. System Testing

System testing is a type of software testing that evaluates the overall functionality
and performance of a complete and fully integrated software solution. It tests if the
system meets the specified requirements and if it is suitable for delivery to the
end-users. This type of testing is performed after the integration testing and before
the acceptance testing.
Example: Each component of an automobile, such as the seats, steering, mirror, brake, cable, engine, car structure, and wheels, is made independently. After each item is manufactured, it is tested separately to see whether it functions as intended (unit testing). Once all the components are assembled, the complete automobile is tested as a whole; that whole-vehicle check is system testing.
Overall Testing types
Gorilla Testing

Gorilla testing is a software testing technique that repeatedly applies inputs to a module to ensure it is functioning correctly and that there are no bugs. Gorilla testing is a manual testing procedure and is performed on selected modules of the software system with selected test cases.
Smoke Testing

Smoke testing, also called build verification testing or confidence testing, is a software
testing method that is used to determine if a new software build is ready for the next
testing phase. This testing method determines if the most crucial functions of a program
work but does not delve into finer details.
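A rough sketch of a smoke test in C; db_connect and user_login are stand-in stubs for whatever a real build's most crucial functions happen to be:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in stubs for the build's most crucial functions; in a real
   project these would come from the application under test. */
static int db_connect(void) { return 0; }  /* 0 = success */
static int user_login(void) { return 0; }  /* 0 = success */

int main(void) {
    /* A smoke test checks only the crucial paths; any failure means
       the build is rejected before deeper testing begins. */
    if (db_connect() != 0) {
        fprintf(stderr, "smoke: db_connect failed\n");
        return EXIT_FAILURE;
    }
    if (user_login() != 0) {
        fprintf(stderr, "smoke: user_login failed\n");
        return EXIT_FAILURE;
    }
    printf("Smoke test passed: build ready for the next phase.\n");
    return EXIT_SUCCESS;
}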
The art of debugging:
Debugging is the process of identifying and resolving errors, or bugs, in a
software system. It is an important aspect of software engineering because
bugs can cause a software system to malfunction, and can lead to poor
performance or incorrect results. Debugging can be a time-consuming and
complex task, but it is essential for ensuring that a software system is
functioning correctly.

The term "debugging" originated from an incident involving Grace Hopper in the 1940s, when a moth caused a malfunction in the Mark II computer at Harvard University. The term stuck and is now commonly used to describe the process of finding and fixing errors in computer programs. In simpler terms, debugging got its name from removing a moth that caused a computer problem.
There are several common methods and techniques used in debugging, including:
a. Code Inspection: This involves manually reviewing the source code of a software
system to identify potential bugs or errors.

b. Debugging Tools: There are various tools available for debugging such as
debuggers, trace tools, and profilers that can be used to identify and resolve bugs.

c. Unit Testing: This involves testing individual units or components of a software system to identify bugs or errors.

d. Integration Testing: This involves testing the interactions between different components of a software system to identify bugs or errors.

e. System Testing: This involves testing the entire software system to identify bugs or
errors.
f. Monitoring: This involves monitoring a software system for unusual behavior or
performance issues that can indicate the presence of bugs or errors.

g. Logging: This involves recording events and messages related to the software
system, which can be used to identify bugs or errors.
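As a small illustration of the logging technique, a minimal C logging macro might record the time and source location of each event (this is a sketch, not a production logging framework):

#include <stdio.h>
#include <time.h>

/* Minimal logging macro: records time, source location, and a message,
   so unusual behavior can be traced back during debugging. */
#define LOG(msg) do {                                        \
        time_t t = time(NULL);                               \
        fprintf(stderr, "[%ld] %s:%d: %s\n",                 \
                (long)t, __FILE__, __LINE__, (msg));         \
    } while (0)

int main(void) {
    LOG("starting computation");
    int result = 6 * 7;  /* the code being debugged */
    LOG("computation finished");
    printf("result = %d\n", result);
    return 0;
}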
Metrics for Process and Products:

Software measurement:
Product Metrics:
Product metrics in software engineering refer to the quantifiable measurements
used to assess the characteristics and performance of software products throughout
their development and maintenance lifecycle. These metrics provide valuable
insights into various aspects of software quality, effectiveness, efficiency, and
reliability.

Measure:
Provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a
product or process
Product Metrics for Analysis, Design, Test and Maintenance:

Product metrics for the Analysis model:

Function Point Metric: It was first proposed by Albrecht. It measures the functionality delivered by the system.
FP is computed from the following parameters:

1. Number of external inputs (EIs)
2. Number of external outputs (EOs)
3. Number of external inquiries (EQs)
4. Number of internal logical files (ILFs)
5. Number of external interface files (EIFs)

Each parameter is classified as simple, average, or complex.
Function Point Analysis:

What is Function Point Analysis (FPA)?

 It is designed to estimate and measure the time, and thereby the cost, of developing new software applications and maintaining existing software applications.

 The other main approach used for measuring the size, and therefore the time required, of a software project is lines of code (LOC).
These function-point counts are then weighted (multiplied) by their degree of complexity:

Degree of complexity     Simple   Average   Complex
Inputs                      2        4         6
Outputs                     3        5         7
Files                       5       10        15
Inquiries                   2        4         6
Interfaces                  4        7        10
A simple example:
Inputs
3 simple X 2 = 6
4 average X 4 = 16
1 complex X 6 = 6
Outputs
6 average X 5 = 30
2 complex X 7 = 14
Files
5 complex X 15 = 75
Inquiries
8 average X 4 = 32
Interfaces
3 average X 7 = 21
4 complex X 10 = 40

Unadjusted function points [UFP]: 240


In addition to these individually weighted function points, there are factors that affect the project and/or system as a whole. There are 14 such general system characteristics, and each is ranked from "0" (no influence) to "5" (essential).

The following are some examples of these factors:


1. Is the internal processing complex?
2. Is the system to be used in multiple sites and/or by multiple organizations?
3. Is the code designed to be reusable?
4. Is the processing to be distributed? and so forth . . .
Continuing our example . . .
Complex internal processing = 3
Code to be reusable = 2
High performance = 4
Multiple sites = 3
Distributed processing = 5

Project adjustment factor = 17

Adjustment calculation:
Adjusted FP = Unadjusted FP X [0.65 + (adjustment factor /100)]
= 240 X [0.65 + ( 17 /100)]
= 240 X [0.82]
= 196.8 ≈ 197 adjusted function points
But how long will the project take and how much will it cost?

As previously measured, programmers in our organization average 18 function points per month. Thus . . .

197 FP divided by 18 ≈ 11 person-months

If the average programmer is paid $5,200 per month (including benefits), then the [labor] cost of the project will be . . .

11 person-months X $5,200 = $57,200
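The whole calculation above can be reproduced in a short C sketch; the counts, weights, productivity, and salary figures are the ones from this worked example, not universal constants:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Unadjusted function points: counts times complexity weights. */
    int ufp = 3*2 + 4*4 + 1*6   /* inputs: simple, average, complex */
            + 6*5 + 2*7         /* outputs: average, complex */
            + 5*15              /* files: complex */
            + 8*4               /* inquiries: average */
            + 3*7 + 4*10;       /* interfaces: average, complex */

    int adjustment_factor = 17; /* sum of the rated project factors */
    double afp = ufp * (0.65 + adjustment_factor / 100.0);

    double months = ceil(afp / 18.0);  /* 18 FP per person-month */
    double cost = months * 5200.0;     /* $5,200 per person-month */

    printf("UFP = %d\n", ufp);                        /* 240 */
    printf("Adjusted FP = %.1f\n", afp);              /* 196.8, ~197 */
    printf("Effort = %.0f person-months\n", months);  /* 11 */
    printf("Labor cost = $%.0f\n", cost);             /* 57200 */
    return 0;
}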
Because function point analysis is independent of the language used, development platform, etc., it can be used to identify the productivity benefits of . . .
• One programming language over another
• One development platform over another
• One development methodology over another
• One programming department over another
• Before-and-after gains in investing in programmer training

But there are problems and criticisms:


• Function point counts are affected by project size
• Difficult to apply to massively distributed systems or to systems with very
complex internal processing
• Difficult to define logical files from physical files
• Different companies calculate function points slightly differently, making intercompany comparisons questionable
Software quality metrics:

Software quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.

Factors that affect software quality can be categorized in two broad groups:
1. Factors that can be directly measured (e.g. defects uncovered during testing)
2. Factors that can be measured only indirectly (e.g. usability or maintainability)

The measures of software quality are correctness, maintainability, integrity, and
usability. These measures will provide useful indicators for the project team.

Correctness: Correctness is the degree to which the software performs its required function. The most common measure for correctness is defects per KLOC, where a defect is defined as a verified lack of conformance to requirements.

Maintainability: Maintainability is the ease with which a program can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the customer desires a change in requirements. A simple time-oriented metric is mean-time-to-change (MTTC): the time it takes to analyse the change request, design an appropriate modification, implement the change, test it, and distribute the change to all users.
Integrity: Attacks can be made on all three components of software: programs, data, and documents. To measure integrity, two additional attributes must be defined: threat and security.

Threat is the probability (which can be estimated or derived from empirical evidence)
that an attack of a specific type will occur within a given time.
Security is the probability (which can be estimated or derived from empirical
evidence) that the attack of a specific type will be repelled. The integrity of a system
can then be defined as

Integrity = Σ [1 − (threat × (1 − security))]
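For example, with illustrative (made-up) figures of threat = 0.25 and security = 0.95 for a single attack type, the per-threat term is 1 − (0.25 × 0.05) = 0.9875. The same arithmetic as a tiny C sketch:

#include <stdio.h>

int main(void) {
    double threat = 0.25;    /* probability the attack occurs (assumed figure) */
    double security = 0.95;  /* probability the attack is repelled (assumed figure) */

    /* One term of the sum: the integrity contribution for this attack type. */
    double integrity = 1.0 - threat * (1.0 - security);
    printf("integrity = %.4f\n", integrity);  /* 0.9875 */
    return 0;
}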

Usability: Usability is an attempt to quantify user-friendliness.

You might also like