
Object Oriented Software Engineering

SOFTWARE TESTING (DYNAMIC)
BLACK BOX TESTING

Engr. Muhammad
Software Testing
 Software testing is the process of analyzing a software item
to detect the differences between existing and required
conditions (that is, bugs) and to evaluate the features of the
software item (IEEE, 1986; IEEE, 1990).

Engr. Ali
Testing Process

BLACK BOX TESTING
TECHNIQUES
 Equivalence Class
Partitioning Testing
 Boundary Value Testing
 Fuzz Testing
 Omission Testing
 Integration Testing
 Sandwich Testing
 Security Testing
 Compatibility Testing
 Null Case Testing
 Load Testing
 Stress Testing
 Smoke Testing
 Usability Testing
 Exploratory Testing
Equivalence Class
Partitioning Testing
 Equivalence Partitioning is a black-box testing method that divides the input
domain of a program into classes of data from which test cases can be
derived

 An ideal test case single-handedly uncovers a class of errors (e.g., incorrect
processing of all character data) that might otherwise require many cases to
be executed before the general error is observed.

 Equivalence Partitioning strives to define test cases that uncover
classes of errors, thereby reducing the total number of test cases that
must be developed.

 An equivalence class represents a set of valid or invalid states for input
conditions.
Equivalence Class Partitioning
Testing
 Equivalence classes can be defined according to the following
guidelines:

 If an input condition specifies a range, one valid and two
invalid equivalence classes are defined.

 If an input condition specifies a specific value, one valid
and two invalid equivalence classes are defined.

 If an input condition specifies a member of a set, one valid
and one invalid equivalence class are defined.

 If an input condition is Boolean, one valid and one invalid
class are defined.
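As a sketch of how these guidelines can be turned into concrete test cases, consider a hypothetical validator that accepts an integer age in the range 1 to 120 (the function name and range are illustrative assumptions, not from the slides):

```python
# Equivalence class partitioning sketch for a hypothetical validator
# that accepts integer ages in the range 1..120 (range is an assumption).

def is_valid_age(age):
    """Return True if age is an integer in the valid range 1..120."""
    return isinstance(age, int) and 1 <= age <= 120

# One representative value per equivalence class:
#   valid class:      1 <= age <= 120
#   invalid class 1:  age < 1
#   invalid class 2:  age > 120
test_cases = {
    "valid (in range)": (35, True),
    "invalid (below range)": (-5, False),
    "invalid (above range)": (200, False),
}

for name, (value, expected) in test_cases.items():
    result = is_valid_age(value)
    print(f"{name}: is_valid_age({value}) -> {result}")
    assert result == expected
```

Because each value represents its whole class, three tests stand in for the entire input domain.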
Boundary Value
Testing
 (A specific case of Equivalence Class
Partitioning Testing)

 Boundary value analysis leads to a selection
of test cases that exercise bounding values.
This technique was developed because a
great number of errors tend to occur at the
boundary of the input domain rather than at
the center.

 Tests program response to extreme input or
output values in each equivalence class.

 Guidelines for BVA are as follows:

 If an input condition specifies a range
bounded by values a and b, test cases
should be designed with values a and b and
values just above and just below a and b.
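Applying this guideline to a hypothetical range bounded by a = 1 and b = 120 (the validator and its range are assumptions for illustration) gives six boundary test values:

```python
# Boundary value analysis sketch for a range bounded by a=1 and b=120
# (the validator and its range are illustrative assumptions).

def is_valid_age(age):
    """Return True if age is an integer in the valid range 1..120."""
    return isinstance(age, int) and 1 <= age <= 120

a, b = 1, 120
# Test values: a and b themselves, plus the values just below and just
# above each boundary.
boundary_values = [a - 1, a, a + 1, b - 1, b, b + 1]

for v in boundary_values:
    print(f"is_valid_age({v}) -> {is_valid_age(v)}")
```

Only the two out-of-range values (0 and 121) should be rejected; an off-by-one error in the validator would flip one of the boundary results.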
Fuzz Testing
 Fuzz testing or fuzzing is a software testing technique, often
automated or semi-automated, that involves providing
invalid or unexpected data to the inputs of a computer
program. The program is then monitored for exceptions
such as crashes or failing built-in code assertions.

 The term originates from a class project at the
University of Wisconsin in 1988, although similar techniques
have long been used in the field of quality assurance, where they
are referred to as robustness testing or negative testing.
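A minimal fuzzing loop can be sketched as follows; the parse_record target is a deliberately buggy toy function invented for illustration, and the loop simply feeds it random bytes and records any unexpected exceptions:

```python
# Minimal fuzzing sketch: feed random byte strings to a target function
# and record unexpected exceptions. parse_record() is a hypothetical,
# deliberately buggy toy target, not a real library API.
import random

def parse_record(data: bytes) -> int:
    """Toy target: return the first byte of the value in 'key=value'."""
    key, sep, value = data.partition(b"=")
    if not sep:
        raise ValueError("missing '='")
    return value[0]  # bug: IndexError when the value is empty (b'key=')

random.seed(0)  # fixed seed so the fuzzing run is reproducible
crashes = []
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
    try:
        parse_record(blob)
    except ValueError:
        pass  # expected, documented rejection of invalid input
    except Exception as exc:  # anything else is a potential defect
        crashes.append((blob, exc))

print(f"unexpected exceptions found: {len(crashes)}")
```

The monitored "exceptions such as crashes" from the slide text correspond here to anything other than the documented ValueError.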
Omission Testing
 Omission Testing (also called Missing Case Testing):

 Exposes defects caused by inputting cases (scenarios) the
developer forgot to handle or did not anticipate.

 A study by Sherman on a released Microsoft product reported that 30% of
client-reported defects were caused by missing cases.

 Other studies show that an average of 22 to 54% of all client-reported
defects are caused by missing cases.
Null Case Testing
 Null Testing (a specific case of Omission Testing, but one that
triggers defects extremely often):
 Exposes defects triggered by no data or missing data.
 Often triggers defects because developers create programs to act
upon data; they do not think of the case where the project may not
contain specific data types.

 Example: X, Y coordinate missing for drawing various
shapes in a graphics editor.

 Example: Blank file names.
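The blank-file-name example can be turned into a null-case check like this sketch (save_file is a hypothetical function, not from the slides):

```python
# Null case testing sketch: exercise the "no data" inputs a developer
# may forget to handle. save_file() is a hypothetical example function.

def save_file(name, contents):
    """Reject missing or blank file names instead of failing later."""
    if name is None or name.strip() == "":
        raise ValueError("file name must not be blank")
    return f"saved {len(contents)} bytes to {name!r}"

# Null cases: no value at all, an empty string, whitespace only.
for null_name in (None, "", "   "):
    try:
        save_file(null_name, "data")
        print(f"{null_name!r}: accepted (defect!)")
    except ValueError:
        print(f"{null_name!r}: correctly rejected")
```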


Integration Testing
 Integration testing (sometimes called Integration and Testing,
abbreviated "I&T") is the phase in software testing in which
individual software modules are combined and tested as a group.
It occurs after unit testing and before validation testing.
Integration testing takes as its input modules that have been unit
tested, groups them in larger aggregates, applies tests defined in
an integration test plan to those aggregates, and delivers as its
output the integrated system ready for system testing.

 BIG BANG INTEGRATION
 INCREMENTAL INTEGRATION
Big Bang Approach
 In this approach, all or most of the developed modules are coupled
together to form a complete software system or major part of the
system and then used for integration testing.
 The Big Bang method is very effective for saving time in the
integration testing process. However, its major drawback is the
difficulty of locating the actual source of an error.
Incremental
Integration
TOP DOWN INTEGRATION

 Top Down Testing is an approach to integration testing where the top
integrated modules are tested first and the branches of the module are
tested step by step until the end of the related module is reached.
Incremental Integration
TOP DOWN INTEGRATION
Top down integration is performed in a series of steps:
1. The main control module is used as a test driver and stubs are substituted
for all components directly subordinate to the main module.
2. Depending on the integration approach selected (depth-first or breadth-first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the actual
component.
5. Regression testing may be conducted to make sure that new errors
have not been introduced.
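Steps 1 and 2 above can be sketched as follows, with the main control module M1 exercised first against a stub and then against the actual subordinate component (module names follow the slides; the bodies are illustrative assumptions):

```python
# Top-down integration sketch: the main control module is exercised
# while its subordinate is replaced by a stub (step 1), then the stub
# is swapped for the real component (step 2). Bodies are assumptions.

def stub_m2(x):
    """Stub for subordinate module M2: returns a canned value."""
    return 0

def real_m2(x):
    """Actual component M2 (assumed behavior: doubles its input)."""
    return 2 * x

def m1(x, subordinate):
    """Main control module M1; its subordinate is injected for testing."""
    return subordinate(x) + 1

# Step 1: test M1 with the stub in place.
print(m1(5, stub_m2))   # stubbed subordinate -> 1

# Step 2: replace the stub with the actual component and re-test.
print(m1(5, real_m2))   # integrated with real M2 -> 11
```

Injecting the subordinate as a parameter is just one way to make the stub swap explicit; in practice the replacement happens at link or import time.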
Incremental Integration

 In the depth-first approach, all modules on
a control path are integrated first. See
the figure on the right: the sequence of
integration would be (M1, M2, M3),
M4, M5, M6, M7, and M8.

 In the breadth-first approach, all modules
directly subordinate at each level are
integrated together. Using breadth-first
for the same figure, the sequence of
integration would be (M1, M2, M8),
(M3, M6), M4, M7, and M5.
Incremental
Integration
BOTTOM UP INTEGRATION
 Bottom Up Testing is an approach to integration testing where the lowest level
components are tested first, then used to facilitate the testing of higher level
components. The process is repeated until the component at the top of the
hierarchy is tested.
 All the bottom or low-level modules, procedures or functions are integrated
and then tested. After the integration testing of lower level integrated
modules, the next level of modules will be formed and can be used for
integration testing. This approach is helpful only when all or most of the
modules of the same development level are ready.
Incremental Integration
BOTTOM UP INTEGRATION
 Bottom up integration is performed in a series of steps:
1. Low-level components are combined into clusters.
2. A driver (a control program for testing) is written to coordinate test
case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the
program structure.
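The driver in step 2 might look like this sketch (the two low-level components and their cluster are illustrative assumptions):

```python
# Bottom-up integration sketch: low-level components are combined into
# a cluster and exercised by a driver (a control program written only
# for testing). Component names and behavior are assumptions.

def parse(text):
    """Low-level component: split a comma-separated record into fields."""
    return [f.strip() for f in text.split(",")]

def total(fields):
    """Low-level component: sum the numeric fields."""
    return sum(int(f) for f in fields)

def driver():
    """Test driver: feeds test-case input to the cluster, checks output."""
    cases = [("1, 2, 3", 6), ("10,20", 30)]
    for text, expected in cases:
        result = total(parse(text))
        print(f"{text!r} -> {result}")
        assert result == expected

driver()
```

Once the cluster passes, the driver is discarded and the cluster is combined with the next level up, as step 4 describes.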
Sandwich Testing
 Sandwich Testing is an approach to combine top down
testing with bottom up testing.
 The system is viewed as having three layers:
 A target layer in the middle
 A layer above the target
 A layer below the target
 Testing converges at the target layer
 How do you select the target layer if there are more than 3
layers?
Sandwich Testing
[Figure: sandwich integration example. Bottom-layer tests: Test E, Test F,
Test B,E,F. Middle (target) layer tests: Test G, Test D,G. Top-layer tests:
Test A, Test A,B,C,D. The two directions converge in Test A,B,C,D,E,F,G.]
Load Testing
 Load testing is the process of putting demand on a system or
device and measuring its response. Load testing is performed to
determine a system’s behavior under both normal and anticipated
peak load conditions.
 It helps to identify the maximum operating capacity of an
application as well as any bottlenecks and determine which
element is causing degradation.

 Example: Using automation software to simulate 500 users logging into a web site
and performing end-user activities at the same time.
 Example: Typing at 120 words per minute for 3 hours into a word processor.
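The 500-user example can be approximated in miniature with a thread pool (the login_user function and its timing are assumptions; real load tests normally use dedicated tooling):

```python
# Load testing sketch: simulate many concurrent users hitting a fake
# login routine and measure the overall response. login_user() and its
# latency are illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

def login_user(user_id):
    """Stand-in for an end-user login; sleeps briefly to mimic latency."""
    time.sleep(0.001)
    return f"user-{user_id}: ok"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(login_user, range(500)))
elapsed = time.perf_counter() - start

# Under load, every simulated login should still succeed.
assert all(r.endswith("ok") for r in results)
print(f"500 simulated logins completed in {elapsed:.2f}s")
```

Raising the user count toward the breaking point turns this same harness into the stress test described on the next slide.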

Stress Testing
 Stress testing is a form of testing that is
used to determine the stability of a given
system or entity.

 It involves testing beyond normal
operational capacity, often to a breaking
point, in order to observe the results.

 In stress testing you continually put
excessive load on the system until the
system crashes.

 The system is repaired and the stress test
is repeated until a level of stress is
reached that is higher than expected to
be present at a customer site.

Compatibility Testing
 Exposes defects related to using files output from one version of the
software in another version of the software.
 Most Landmark applications are designed to be “forwards”
compatible, meaning files created in a previous release of the
software can be used in the version currently under test.
 They are not designed to be “backwards” compatible, meaning a file
output in the version under test will not work in a current released
version.

Security Testing
 Security testing is a process to determine that an information system protects
data and maintains functionality as intended.
 To check whether there is any information leakage.
 To test whether the application allows unauthorized access.
 To find out all the potential loopholes and weaknesses of the system.
 The primary purpose of security testing is to identify vulnerabilities and
subsequently repair them.

Security Testing Techniques
Penetration Testing/ Ethical Hacking

 An ethical hacker is a computer and network
expert who attacks a security system on
behalf of its owners, seeking vulnerabilities
that a malicious hacker could exploit.
 To test a security system, ethical hackers
use the same methods as their less
principled counterparts, but report
problems instead of taking advantage of
them.
 Ethical hacking is also known as
penetration testing, intrusion testing and
red teaming.
 An ethical hacker is sometimes called a
white hat, a term that comes from old
Western movies, where the "good guy"
wore a white hat and the "bad guy" wore
a black hat.
Recovery Testing
 In software testing, recovery testing is the activity of testing how well an
application is able to recover from crashes, hardware failures and other
similar problems.
 Recovery testing is the forced failure of the software in a variety of ways to
verify that recovery is properly performed.
 Recovery testing is basically done in order to check how quickly and how well
the application can recover from any type of crash or hardware failure.
The type or extent of recovery is specified in the requirement specifications.
 Examples of recovery testing:

 While an application is running, suddenly restart the computer, and afterwards
check the validity of the application's data integrity.
 While an application is receiving data from a network, unplug the connecting cable.
After some time, plug the cable back in and analyze the application's ability to
continue receiving data from the point at which the network connection
disappeared.
 Restart the system while a browser has a definite number of sessions. Afterwards,
check that the browser is able to recover all of them.

Usability Testing
 Usability testing is a technique used to evaluate a product by testing
it on users. This can be seen as an irreplaceable usability practice,
since it gives direct input on how real users use the system.
Usability testing focuses on measuring a human-made product's
capacity to meet its intended purpose.
 Usability testing measures the usability, or ease of use, of a specific
object or set of objects.
 User interviews, surveys, video recording of user sessions, and other
techniques can be used.

Exploratory Testing
 Exploratory testing is an approach to software testing that is
concisely described as simultaneous learning, test design and test
execution. Exploratory software testing is a powerful and fun
approach to testing.
 The essence of exploratory testing is that you learn while you test,
and you design your tests based on what you are learning
 Exploratory testing is a method of manual testing.
 The testing is dependent on the tester's skill of inventing test cases
and finding defects. The more the tester knows about the product
and different test methods, the better the testing will be.

Regression Testing
 Exposes defects in code that should not have changed.
 Re-executes some or all existing test cases to exercise code that was tested
in a previous release or previous test cycle.

 Performed when previously tested code has been re-linked, such as when:
 Ported to a new operating system
 A fix has been made to a specific part of the code.
 Studies show that:

 The probability of changing the program correctly on the first try is only 50% if
the change involves 10 or fewer lines of code.
 The probability of changing the program correctly on the first try is only
20% if the change involves around 50 lines of code.
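A regression suite in miniature: after a change adds new behavior, the existing test cases from the previous release are re-executed verbatim (the discount function and its tiers are illustrative assumptions):

```python
# Regression testing sketch: re-execute an existing suite of test cases
# after a change to confirm unchanged behavior. discount() and its
# pricing tiers are illustrative assumptions.

def discount(price, customer_type):
    """Version under test: a change added the 'gold' tier."""
    if customer_type == "gold":      # new code (progressive tests cover this)
        return price * 0.8
    if customer_type == "member":    # old code (regression tests cover this)
        return price * 0.9
    return price

# Existing test cases from the previous release, re-executed verbatim.
regression_suite = [
    ((100, "member"), 90.0),
    ((100, "guest"), 100),
]

for args, expected in regression_suite:
    result = discount(*args)
    print(f"discount{args} -> {result}")
    assert abs(result - expected) < 1e-9
print("regression suite passed: unchanged code still behaves as before")
```

The new "gold" branch would be covered by fresh progressive tests, which then join the regression suite for the next cycle.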

Progressive VS Regressive
Testing
 When testing new code, you are performing “progressive testing.”

 When testing a program to determine if a change has introduced errors in the
unchanged code, you are performing "regression testing."

 All black box test design methods apply to both progressive and regressive
testing. Eventually, all your “progressive” tests should become “regression”
tests.

 The Testing Group performs a lot of Regression Testing because most
Landmark development projects are adding enhancements (new
functionality) to existing programs. Therefore, the existing code (code that did
not change) must be regression tested.

Regression Testing VS
Retesting
 Retest - Retesting means testing only a certain part of an
application again, without considering how it will affect other
parts or the whole application.

 Regression Testing - Testing the application after a change in a
module or part of the application, to verify that the code change
will not adversely affect the rest of the application.
Smoke Testing
 Smoke testing refers to physical tests made to closed systems of pipes to test
for leaks. By metaphorical extension, the term is also used for the first test
made after assembly or repairs to a system, to provide some assurance that
the system under test will not catastrophically fail.
 Smoke testing is non-exhaustive software testing, ascertaining that the most
crucial functions of a program work, but not bothering with finer details. The
term comes to software testing from a similarly basic type of hardware
testing, in which the device passed the test if it didn't catch fire the first time
it was turned on. A daily build and smoke test is among industry best
practices advocated by the IEEE (Institute of Electrical and Electronics
Engineers).
Smoke Testing
 Smoke testing is done to determine whether the build can be
accepted for thorough software testing. Basically, it is done to
check the stability of the build received for software testing.
 In the software industry, smoke testing is a shallow and wide approach
whereby all areas of the application are tested, without going into too
much depth.
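A smoke suite is typically a short, shallow script over the most crucial functions; this sketch assumes a hypothetical inventory module:

```python
# Smoke testing sketch: a shallow, wide pass over the most crucial
# functions of a build. The inventory functions are hypothetical.

inventory = {}

def add_item(name, qty):
    """Crucial function 1: record stock for an item."""
    inventory[name] = inventory.get(name, 0) + qty

def total_count():
    """Crucial function 2: total units across all items."""
    return sum(inventory.values())

def smoke_test():
    """Shallow checks: does each core function work at all?"""
    checks = []
    add_item("widget", 3)
    checks.append(("add_item runs", "widget" in inventory))
    checks.append(("total_count runs", total_count() >= 3))
    for name, ok in checks:
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
    return all(ok for _, ok in checks)

# If the smoke test fails, the build is rejected for further testing.
print("build accepted" if smoke_test() else "build rejected")
```

Note the deliberately coarse assertions: a smoke test asks only whether the build is stable enough to be worth testing thoroughly.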

Sanity Test
 In software development, the sanity test
determines whether it is reasonable to proceed
with further testing.
 Software sanity tests are commonly conflated
with smoke tests. A smoke test determines
whether it is possible to continue testing, as
opposed to whether it is reasonable.
 A software smoke test determines whether the
program launches and whether its interfaces
are accessible and responsive (for example, the
responsiveness of a web page or an input
button).
 If the smoke test fails, it is impossible to
conduct a sanity test.
 If the sanity test fails, it is not reasonable to
attempt more rigorous testing.
 Both sanity tests and smoke tests are ways to
avoid wasting time and effort by quickly
determining whether an application is too
flawed to merit any rigorous testing.
Smoke VS Sanity Test
 Smoke testing is a wide approach where all areas of the software
application are tested without going into too much depth. However, sanity
testing is a narrow regression test with a focus on one or a
small set of areas of functionality of the software application.
 Smoke testing is done to ensure whether the main functions of the software
application are working or not. During smoke testing of the software, we do
not go into finer details. However, sanity testing is done whenever a quick
round of software testing can prove that the software application is
functioning according to business / functional requirements.
 Smoke testing of the software application is done to check whether the
build can be accepted for thorough software testing. Sanity testing of the
software is to ensure whether the requirements are met or not.

For any query, feel free to ask.