UNIT 5 Software Testing 1

Software testing notes

Uploaded by ijantkartanishka

Unit V- Software Testing

• Objectives:

– Development Testing
– Test Driven Development
– Release Testing
– User Testing
Types of Testing
• There are many types of software testing:

– Visual testing
– Specification based testing
– Software development testing
– Functional & Non-functional testing
– Performance Testing
– Alpha Testing
– Beta Testing
Black box testing and white box testing are two fundamental software testing approaches used
to evaluate software applications, each with a distinct focus and methodology.
1. Black Box Testing:
•Definition: Black box testing is a software testing method where the tester evaluates the
functionality of the software without having any knowledge of its internal structure, code, or
implementation.
•Focus: This approach focuses on what the software does (i.e., its outputs or behavior) rather
than how it is implemented.
•Test Cases: Test cases are derived based on software requirements, specifications, or user
stories.
•Common Techniques:
• Equivalence Partitioning: Dividing inputs into groups that are expected to behave
similarly.
• Boundary Value Analysis: Testing at the boundaries between different input groups.
• Decision Table Testing: Testing different combinations of inputs.
• State Transition Testing: Testing different states of the software based on
inputs/events.
•Advantages:
• Does not require knowledge of the code, making it suitable for non-developers or
external testers.
• Focuses on user experience and functional correctness.
•Disadvantages:
• Cannot test the internal workings or code-level issues.
• Coverage may be limited to functional aspects.
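As a sketch of the black-box techniques above, consider a hypothetical is_eligible(age) function specified to accept ages 18 to 65 inclusive; the function name and range are assumptions for illustration, and the one-line implementation is only a stand-in so the tests can run:

```python
# Black-box tests for a hypothetical is_eligible(age) function that
# accepts ages 18-65 inclusive. The tester works only from this
# specification, not from the implementation.

def is_eligible(age):
    # Stand-in implementation so the tests below can run.
    return 18 <= age <= 65

# Equivalence partitioning: one representative from each input class.
assert is_eligible(40) is True    # valid partition
assert is_eligible(10) is False   # invalid partition (too low)
assert is_eligible(80) is False   # invalid partition (too high)

# Boundary value analysis: test at the edges of the valid partition.
assert is_eligible(17) is False
assert is_eligible(18) is True
assert is_eligible(65) is True
assert is_eligible(66) is False
```

Note that the boundary tests sit immediately on both sides of each partition edge, which is where off-by-one defects typically hide.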

2. White Box Testing:
•Definition: White box testing (also known as clear box or glass box testing) is a
method where the tester has full visibility into the internal workings, code structure,
and implementation of the software.
•Focus: This approach focuses on how the software works (i.e., the code, logic, and
internal paths).
•Test Cases: Test cases are written based on the code, control flows, and data flows.
•Common Techniques:
 • Statement Coverage: Ensuring every statement in the code is executed at least once.
• Branch Coverage: Testing every possible branch (decision point) in the code.
• Path Coverage: Ensuring all possible paths through the code are tested.
• Loop Testing: Testing loops to ensure proper execution under different
conditions.
•Advantages:
• Provides thorough testing of the codebase.
• Helps in identifying hidden errors like security vulnerabilities, logical errors, and
edge cases.
•Disadvantages:
• Requires in-depth knowledge of the code and development expertise.
• Time-consuming for large or complex applications.
•Example: Testing each function of a software module by analyzing the code and
covering all possible branches and paths.
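A minimal white-box sketch: the classify function below is hypothetical, and its tests are chosen by reading the code so that every branch outcome is exercised at least once (branch coverage):

```python
# White-box testing of a small function: tests are derived from the
# code itself so that every decision outcome is taken at least once.

def classify(n):
    if n < 0:          # decision 1
        return "negative"
    if n == 0:         # decision 2
        return "zero"
    return "positive"

# Branch coverage: each decision taken both ways.
assert classify(-5) == "negative"  # decision 1 true
assert classify(0) == "zero"       # decision 1 false, decision 2 true
assert classify(7) == "positive"   # decision 2 false
```

Three inputs suffice here because they jointly cover both outcomes of both decisions; statement coverage alone would already be met by the same three tests, while full path coverage would demand nothing extra in this simple case.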
Why Testing…..?
• Testing is intended to show that a program does what
it is intended to do and to discover program defects
before it is put into use.
• The testing process has two distinct goals:
• To demonstrate to the developer and the customer that the
software meets its requirements.
• To discover situations in which the behavior of the software
is incorrect, undesirable, or does not conform to its
specification.
• The first goal leads to validation testing, where you
expect the system to perform correctly using a given
set of test cases.
• The second goal leads to defect testing, where the test
cases are designed to expose defects.
Why Testing…..?
• Testing cannot demonstrate that the software is
free of defects or that it will behave as specified in
every circumstance.
• The ultimate goal of verification and validation
processes is to establish confidence that the
software system is ‘fit for purpose’.
• Testing is part of a broader process of software
verification and validation (V&V).
• It is concerned with checking the software being
developed meets its specification and delivers the
functionality expected by the people paying for
the software.
Why Testing…..?
• Barry Boehm, a pioneer of software engineering,
succinctly expressed the difference in 1979:
Validation: Are we building the right product?
Verification: Are we building the product right?
Typical examples of Software functionalities to test :

•Business Rules
•Transaction corrections, adjustments and cancellations
•Administrative functions
•Authentication
•Authorization levels
•Audit Tracking
•External Interfaces
•Certification Requirements
•Reporting Requirements
•Historical Data
•Legal or Regulatory Requirements
Non-functional aspects for Testing
Inspection & Testing
• The verification and validation process involves software
inspections and reviews.
• Inspections and reviews analyze and check the system
requirements, design models, the program source code,
and even proposed system tests. These are so-called
‘static’ V & V techniques.
• Software inspections and testing support V & V at different
stages in the software process.
Inspection V/s Testing
• There are three advantages of software inspection
over testing
– Inspection is a static process, so you don’t have to be
concerned with interactions between errors.
– Incomplete versions of a system can be inspected
against specifications without additional testing costs.
– An inspection can also consider broader quality
attributes of a program like standards compliance,
portability & maintainability.
• Inspections are not good for discovering defects
that arise from interactions between different
parts/objects of a program, timing problems, or
system performance problems.
Traditional Testing Process…
• In plan driven development, an abstract model of
the ‘traditional’ testing process is used.
• In this process, test cases are devised with test
data or inputs to test the system, the expected
output, the actual output, and the conclusions
drawn from the test.
• In automated system tests, execution of the test
cases with their expected results can be
automated. The actual results are automatically
compared with the expected results to look for
errors and anomalies in the test run.
Commercial Software: 3 Stages of Testing
1.Development testing: where the system is
tested during development to discover bugs
and defects. System designers, programmers,
and associated testers are likely to be
involved in the testing process. They are
responsible for developing tests and
maintaining detailed records of test results.
2.Release testing: where a separate testing
team tests a complete version of the system
before it is released to users. The aim of
release testing is to check that the system meets
the requirements of system stakeholders.
Commercial Software Testing
3. User testing: where users or potential
users of a system test the system in their
own environment. For software products,
the ‘user’ may be an internal marketing
group who decide if the software can be
marketed, released, and sold.
• Acceptance Testing: is one type of user
testing where the customer formally tests a
system to decide if it should be accepted
from the system supplier or if further
development is required.
Development Testing
• During development, the testing may be carried
out at 3 levels of granularity:
1.Unit testing, where individual program units or
object classes are tested. Unit testing should
focus on testing the functionality of objects or
methods.
2.Component testing, where several individual
units are integrated to create composite
components. Component testing should focus
on testing component interfaces.
Development Testing
3. System testing, where some or all of the
components in a system are integrated and the
system is tested as a whole. System testing
should focus on testing component
interactions.
• As an overview of development testing, we can
see that it is primarily a defect testing
mechanism: during this process, the main aim of
testing is to discover bugs in the software.
Development Testing : Unit testing
• Unit testing is the process of testing program components, such as
methods or object classes. Individual functions or methods are the
simplest type of components.
• While testing object classes, you should design your tests to provide
coverage of all of the features of the object. That is, test all associated
operations, set and check the value of all attributes, and put the
object into all of its possible states.
 Example: Weather station object interface
• The interface of this object includes an identifier attribute.
• Define test cases for all of the methods defined,
viz. reportWeather(), reportStatus(), restart().
• Unit testing should preferably be automated, so
that all method tests can be reset and rerun at once.
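Such unit tests might be automated with a framework like Python's unittest; the WeatherStation class below is a minimal stand-in, with only the identifier and the three method names taken from the slide, and the attribute and return values invented for illustration:

```python
import unittest

class WeatherStation:
    """Minimal stand-in for the weather station object described above;
    only the identifier and the three method names come from the slide."""
    def __init__(self, identifier):
        self.identifier = identifier
        self.status = "running"
    def reportWeather(self):
        return {"id": self.identifier, "temperature": 21.0}
    def reportStatus(self):
        return self.status
    def restart(self):
        self.status = "running"
        return True

class WeatherStationTest(unittest.TestCase):
    def setUp(self):
        # A fresh object per test keeps each test independent and repeatable.
        self.ws = WeatherStation("WS-01")
    def test_report_weather_includes_identifier(self):
        self.assertEqual(self.ws.reportWeather()["id"], "WS-01")
    def test_restart_returns_station_to_running(self):
        self.ws.status = "shutdown"
        self.assertTrue(self.ws.restart())
        self.assertEqual(self.ws.reportStatus(), "running")
```

Run with `python -m unittest`; the setUp() method is what lets the whole suite be reset and rerun at once, as the slide suggests.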
• State diagrams are a powerful tool in software engineering
for capturing and visualizing the dynamic behavior of
systems that are event-driven or have complex state-
dependent behavior.
• They are a vital tool for representing the different states of
an object or system and how it transitions between those
states based on various events, especially in systems that
respond to external events or inputs.
• These are helpful in Visualizing System Behavior, Clarifying
Requirements, Defining Object Lifecycle, Modeling Real-
time and Embedded Systems, Testing and Validation.
• State diagrams consist of start/stop nodes, states, transitions,
events & triggers, and actions.
Development Testing
Example: Weather Station State Diagram for Ref.
Choosing Unit Test Cases
• Testing is expensive and time consuming, so it is
important that you choose effective unit test cases
based on following points:
1. The test cases should show that, when used as
expected, the component that you are testing does
what it is supposed to do.
2. If there are defects in the component, these should be
revealed by test cases.
3. Classify/group the test cases into partitions, based
on the upper and lower limits specified in the
software.
Choosing Unit Test Cases
• There should be two kinds of test cases.
1. The first of these should reflect normal operation of a
program and should show that the component works.
2. The second kind of test case should be based on testing
experience of where common problems arise, and should
check that abnormal inputs are processed properly and the
component doesn't crash.
• The equivalence class partitioning technique achieves the
aforesaid points; boundary values lie at the edges of the classes.
• Guideline-based testing, where guidelines reflect
previous experience of the kinds of errors programmers
commit while developing components.
Guidelines that could help and reveal defects include:
1.Test software with sequences that have only a single value.
Programmers naturally think of sequences as made up of several values
and sometimes they embed this assumption in their programs.
Consequently, if presented with a single value sequence, a program may
not work properly.
2.Use different sequences of different sizes in different tests. This
decreases the chances that a program with defects will accidentally
produce a correct output because of some accidental characteristics of the
input.
3.Derive tests so that the first, middle, and last elements of the sequence
are accessed. This approach reveals problems at partition boundaries.
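The three guidelines can be sketched against a hypothetical summarize(seq) function that returns the (min, max) of a non-empty sequence; both the function and its stand-in implementation are assumptions for illustration:

```python
# Guideline-based tests for a hypothetical summarize(seq) function
# that returns (min, max) of a non-empty sequence.

def summarize(seq):
    # Stand-in implementation so the tests below can run.
    lo = hi = seq[0]
    for x in seq[1:]:
        lo = min(lo, x)
        hi = max(hi, x)
    return (lo, hi)

# Guideline 1: a sequence with only a single value.
assert summarize([7]) == (7, 7)

# Guideline 2: sequences of different sizes in different tests.
assert summarize([3, 1]) == (1, 3)
assert summarize([5, 2, 9, 4, 7, 1]) == (1, 9)

# Guideline 3: the extreme values appear at the first, middle,
# and last positions of the sequence.
assert summarize([9, 1, 5]) == (1, 9)   # max first
assert summarize([5, 9, 1]) == (1, 9)   # max middle
assert summarize([1, 5, 9]) == (1, 9)   # max last
```

A program that assumed multi-element sequences (guideline 1) or that skipped the first or last element (guideline 3) would fail at least one of these tests.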
Development Testing- Component Testing
• Assume software has modules/components A, B,
C, and D, which have been integrated to create a larger
component or subsystem.
• There are different types of interface between
program components. Consequently, we can
expect different types of interface error that can
occur:
– Parameter interfaces
– Shared memory interfaces
– Procedural interfaces
– Message passing interfaces
Component Testing: Interface Errors
• Interface errors are one of the most common
forms of error in complex systems. These errors
fall into three classes:
– Interface misuse
– Interface misunderstanding
– Timing errors
• General guidelines for interface testing, with 5 key
points to remember:
1. Examine the code to be tested and explicitly list each
call to an external component.
2. Where pointers are passed across an interface, test
with null pointer parameters.
Component Testing: Interface Errors
3. Where a component is called through a
procedural interface, design tests that
deliberately cause the component to fail.
4. Use stress testing in message passing systems:
generate many more messages than would occur in
practice, so as to reveal timing problems.
5. Where several components interact through
shared memory, design tests that vary the order
in which these components are activated.
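Guidelines 2 and 3 can be sketched against a hypothetical component with a procedural interface, lookup(key, table); the function, its parameters, and its failure behavior are all assumptions for illustration:

```python
# Interface tests for a hypothetical component with a procedural
# interface, lookup(key, table).

def lookup(key, table):
    # Stand-in component: return the value for key, or fail loudly.
    if table is None:
        raise ValueError("table must not be None")
    if key not in table:
        raise KeyError(key)
    return table[key]

# Guideline 2: call the interface with a null (None) parameter.
try:
    lookup("x", None)
    assert False, "expected ValueError for a None table"
except ValueError:
    pass

# Guideline 3: design a test that deliberately causes the
# component to fail, and check how the failure is reported.
try:
    lookup("missing", {"a": 1})
    assert False, "expected KeyError for a missing key"
except KeyError:
    pass

# Normal operation still works.
assert lookup("a", {"a": 1}) == 1
```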
As mentioned earlier, inspections and reviews can
be more cost-effective than the above approaches,
as they can concentrate directly on interface
behavior.
System Testing
• System testing during development involves
integrating components to create a version of the
system and then testing the integrated system.
• Even though it overlaps with component testing,
there are two main differences:
i) Reusable components and off-the-shelf
systems may also be integrated with new
components.
ii) Components developed by different team
members or groups may be integrated at this
stage. System testing is a collective rather
than an individual process.
• This means that some elements of system
functionality only become obvious when the
components are put together.
• Because of its focus on interactions, use case–
based testing is an effective approach to system
testing.
• For example, in the weather station system, issuing
a request for a report will result in the execution of
the following thread of methods:
SatComms:request →WeatherStation:reportWeather
→Commslink:Get(summary) →WeatherData:summarize

• The sequence diagram helps you design the
specific test cases that you need, as it shows what
inputs are required and what outputs are created:
– An input of a request for a report should have
an associated acknowledgment. A report should
ultimately be returned from the request.
– An input request for a report to WeatherStation
results in a summarized report being generated.
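A system-test sketch of this thread in Python: all four classes are minimal stand-ins whose only connection to the real system is the method names taken from the thread above, with the acknowledgment and report values invented for illustration:

```python
# System-test sketch of the thread:
# SatComms:request -> WeatherStation:reportWeather
#   -> Commslink:Get(summary) -> WeatherData:summarize

class WeatherData:
    def summarize(self):
        return "summary-report"

class Commslink:
    def __init__(self, data):
        self.data = data
    def Get(self, what):
        return self.data.summarize() if what == "summary" else None

class WeatherStation:
    def __init__(self, link):
        self.link = link
    def reportWeather(self):
        return self.link.Get("summary")

class SatComms:
    def __init__(self, station):
        self.station = station
        self.acknowledged = False
    def request(self, what):
        self.acknowledged = True  # acknowledge the incoming request
        return self.station.reportWeather()

# System test: a report request traverses the whole chain; an
# acknowledgment is recorded and a summarized report comes back.
sat = SatComms(WeatherStation(Commslink(WeatherData())))
assert sat.request("report") == "summary-report"
assert sat.acknowledged
```

Because the test drives the chain only through SatComms.request, it checks the interactions between the components rather than any one component in isolation, which is the point of system testing.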
Test-driven development- TDD
•Test-driven development (TDD) is an approach to
program development in which you interleave
testing and code development.
•Essentially, it is an incremental development model:
the code is developed along with a test for each
increment. You don’t move on to the next increment
until the code that you have developed passes its
test.
•Test-driven development was introduced as part of
agile methods such as Extreme Programming.
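One increment of the TDD cycle can be sketched as follows; slugify() is a hypothetical function invented for the example, and in a real project the test would first be run and seen to fail before the implementation is written:

```python
# One TDD increment: the test is written first, then just enough
# code to make it pass, before moving to the next increment.

# Step 1 (red): the test exists before the implementation.
def test_slugify_lowercases_and_joins_with_hyphens():
    assert slugify("Software Testing") == "software-testing"

# Step 2 (green): the minimal implementation that passes the test.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3: run the test; only when it passes do we move on.
test_slugify_lowercases_and_joins_with_hyphens()
```

The next increment would start the same way: a new failing test (say, for punctuation handling), then the smallest change to slugify() that makes it pass.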
