
Lecture 6 - Software Testing

The document discusses software testing. It begins by defining software testing as executing a program with artificial data to show it works as intended and discover defects. The goals of testing are to demonstrate requirements are met and discover incorrect behavior. Validation testing focuses on requirements while defect testing aims to expose faults. Test cases are then explained along with an example. Different types of testing are also summarized, including static vs dynamic testing and inspection vs testing. Key terms like faults, errors and failures are defined. Finally, models of the software testing process and different testing methods are presented.


Software Testing

Referred Books

• Software Engineering
▪ Ian Sommerville

• Introduction To Software Testing


▪ Jeff Offutt

Software Testing

❖ Testing is intended to show that a program does what it is
intended to do and to discover program defects before it
is put into use.
❖ To test a program, we execute it with artificial (test) data.
❖ Testing can only reveal the presence of errors, never their
absence.
Testing Goals

❖ To demonstrate to the developer and the customer that the


software meets its requirements.
➢ Custom software testing
➢ Generic software testing
❖ To discover situations in which the behavior of the
software is incorrect, undesirable, or does not conform to
its specification
➢ Defect Testing: system crashes, unwanted interactions with other
systems, incorrect computations

Validation and Defect Testing

▪ The first goal leads to validation testing


▪ The test cases are designed to demonstrate that the software works as
intended
▪ A successful test: the system performs correctly

▪ The second goal leads to defect testing


▪ The test cases are designed to expose defects
▪ A successful test: makes the system perform incorrectly and expose
faults

A Sample Test Case

TC# 1: Verify Login
Test Data: Id: test, pass: 1234
Test Steps: 1. Go to Login page  2. Enter UserId  3. Enter Password  4. Click Login
Expected Result: Login Successful

TC# 2: Verify Login
Test Data: Id: $@#d, pass: 1234
Test Steps: 1. Go to Login page  2. Enter UserId  3. Enter Password  4. Click Login
Expected Result: Login Unsuccessful
An input-output model of program testing

Testing Goals

❖ Immediate Goals
❖ Long-term Goals
❖ Post-Implementation Goals

https://www.geeksforgeeks.org/goals-of-software-testing/

Software Testing Terminology

Software Testing Terminology

● Verification: The process of determining whether the products of a


given phase of the software development process fulfill the
requirements established during the previous phase.

● Validation: The process of evaluating software at the end of


software development to ensure compliance with intended usage.

Verification Vs. Validation

1. Verification: Are we building the product right?
   Validation: Are we building the right product?
2. Verification: Checks whether an artifact conforms to its previous artifact.
   Validation: Checks the final product against the specification.
3. Verification: Done by the developer.
   Validation: Done by the tester.
4. Verification: Involves reviews, inspections, unit testing and integration testing.
   Validation: Involves system testing.
5. Verification: Comprises both static and dynamic procedures.
   Validation: Comprises only dynamic procedures.

Verification and Validation

❑ The ultimate goal of verification and validation processes is to


establish confidence that the software system is ‘fit for
purpose’.
❑ The level of required confidence depends on:
o Software purpose
• how critical the software is to the organization
o User expectations
• users have low expectations of certain kinds of software
o Marketing environment
• in a competitive environment, users may be willing to tolerate a lower
level of reliability for cheap products
Software Faults, Errors & Failures

✧ Software Fault: A static defect in the software

✧ Software Error: An incorrect internal state that is the
manifestation of some fault

✧ Software Failure: External, incorrect behavior with respect
to the requirements or other description of the expected
behavior

Software Faults, Errors & Failures

❑ A patient gives a doctor a list of symptoms


▪ Failures
❑ The doctor tries to diagnose the root cause, the ailment
▪ Fault
❑ The doctor may look for anomalous internal conditions
(high blood pressure, irregular heartbeat, bacteria in the
bloodstream)
▪ Errors

A Concrete Example

public static int numZero (int [ ] arr)
{ // Effects: If arr is null throw NullPointerException
  // else return the number of occurrences of 0 in arr
  int count = 0;
  for (int i = 1; i < arr.length; i++)
  {
    if (arr [ i ] == 0)
    {
      count++;
    }
  }
  return count;
}

Test 1: [ 2, 7, 0 ]   Expected: 1   Actual: ?
Test 2: [ 0, 2, 7 ]   Expected: 1   Actual: ?
A Concrete Example
Fault: the search should start at index 0, not 1.

public static int numZero (int [ ] arr)
{ // Effects: If arr is null throw NullPointerException
  // else return the number of occurrences of 0 in arr
  int count = 0;
  for (int i = 1; i < arr.length; i++)
  {
    if (arr [ i ] == 0)
    {
      count++;
    }
  }
  return count;
}

Test 1: [ 2, 7, 0 ]   Expected: 1   Actual: 1
  Error: i is 1, not 0, on the first iteration
  Failure: none

Test 2: [ 0, 2, 7 ]   Expected: 1   Actual: 0
  Error: i is 1, not 0; the error propagates to the variable count
  Failure: count is 0 at the return statement
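A corrected version starts the loop at index 0. The sketch below assumes a wrapper class named NumZero, since the slides show only the method:

```java
// Corrected numZero: the loop now starts at index 0.
// The class name NumZero is assumed for illustration.
public class NumZero {
    // Effects: if arr is null, throws NullPointerException;
    // else returns the number of occurrences of 0 in arr.
    public static int numZero(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) { // fixed: i starts at 0, not 1
            if (arr[i] == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(numZero(new int[]{2, 7, 0})); // Test 1: prints 1
        System.out.println(numZero(new int[]{0, 2, 7})); // Test 2: prints 1
    }
}
```

With the fix, both tests produce the expected result, so no error state arises and no failure is observed.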
Static Testing vs Dynamic Testing

Differences between Static and Dynamic Testing

❖ Static testing finds defects in work products directly rather than


identifying failures caused by defects when the software is run.

❖ Static testing can be used to improve internal quality of the


software, while dynamic testing focuses on externally visible
behaviors.
Static Testing (Inspection)

✧ Inspections involve a team of people reading or visually


inspecting a program

✧ These are so-called ‘static’ V & V techniques


✧ Do not require execution of a system
✧ Focuses on static system representations such as requirements,
design model, source code etc. to discover defects
Inspection (Cont.)
General Procedure:

✧ Inspection Team:
▪ Moderator
▪ Programmer
▪ Program’s Designer
▪ Test Specialist
✧ Moderator’s Duties:
▪ Distributing materials for, and scheduling, the inspection session.
▪ Leading the session.
▪ Recording all errors found.
▪ Ensuring that the errors are subsequently corrected
Inspection (Cont.)

✧ Inspection Agenda:
▪ The programmer narrates, statement by statement, the logic of the
program, and the other participants raise questions
▪ The program is analyzed with respect to checklists of historically
common programming errors

✧ Human Agenda:
▪ The testing group must adopt an appropriate attitude to make the
inspection process effective
▪ The programmer must take the process in a positive and constructive
way
Dynamic Testing (Software Testing)

▪ A dynamic V & V technique

▪ Focuses on executing the implementation of the system with test
data and observing software behavior.

Inspection and Testing

Advantages of Inspections over Testing

✧ During testing, errors can mask (hide) other errors. Because inspection
is a static process, you don’t have to be concerned with interactions
between errors.
✧ Incomplete versions of a system can be inspected without additional
costs
✧ Compared with dynamic testing, typical defects that are easier and
cheaper to find and fix through static testing include:
● Requirement defects
● Design defects
● Coding defects
● Deviations from standards
● Incorrect interface specifications
● Gaps or inaccuracies in test basis traceability or coverage

A model of the software testing process

Testing Methods

Selection of test cases

Testing is expensive and time consuming, so it is important
to choose effective test cases.
Effectiveness, in this case, means two things:
1. The test cases should show that the component does what it is
supposed to do.
2. The test cases should reveal defects in the component, if there are any.
These lead to two types of unit test cases:
▪ Tests reflecting normal operation of a program
▪ Tests based on experience of where common problems arise; these
should use abnormal inputs to check that such inputs are processed
properly and do not crash the component.
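For the numZero method shown earlier (with its loop corrected), the two types of unit test cases might be sketched as follows; the class name and test layout are illustrative:

```java
// Sketch: normal-operation tests and abnormal-input tests for numZero.
public class NumZeroTests {
    // Unit under test (corrected to start at index 0).
    static int numZero(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Type 1: normal operation -- the component does what it should.
        System.out.println(numZero(new int[]{0, 2, 7}) == 1);

        // Type 2: abnormal inputs -- check that they are processed
        // properly and do not crash the component.
        System.out.println(numZero(new int[]{}) == 0); // empty array
        try {
            numZero(null); // null must raise NullPointerException, as specified
            System.out.println("no exception: defect");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException as specified");
        }
    }
}
```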

Testing Methods
❖ White Box Testing
▪ focuses on the internal specifications (e.g., algorithms, logic paths) of
software units (i.e., modules)
▪ performed by developers

❖ Black Box Testing


▪ focuses on the external specifications (i.e., functional specifications and
input/output (interface) specifications) of software units (i.e., modules)

White box testing
● Statement coverage
○ This designs test cases that execute every statement in a unit (i.e., module) at
least once.
● Decision condition coverage (branch coverage)
○ This designs test cases that execute TRUE/FALSE at least once for every
“decision condition” (i.e., branch) in a unit (i.e., module).
● Condition coverage
○ This designs test cases that execute TRUE/FALSE at least once for every
“condition” used as a decision condition.
● Decision condition / condition coverage
○ This designs test cases that fulfill both decision condition coverage (i.e.,
branch coverage) and condition coverage.
● Multiple-condition coverage
○ This designs test cases that include every combination of TRUE/FALSE for
every condition.
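The differences between these criteria can be illustrated with a unit containing one decision built from two conditions; the method below is invented for illustration:

```java
// Illustrative unit: one decision (a > 0 && b > 0) built from two conditions.
public class CoverageDemo {
    public static String classify(int a, int b) {
        if (a > 0 && b > 0) {        // decision with conditions a>0 and b>0
            return "both positive";
        }
        return "not both positive";
    }

    public static void main(String[] args) {
        // Branch (decision condition) coverage: decision TRUE and FALSE once.
        System.out.println(classify(1, 1)); // decision TRUE
        System.out.println(classify(0, 0)); // decision FALSE

        // Condition coverage alone: (1,0) and (0,1) make each condition
        // TRUE and FALSE once, yet the decision is FALSE in both tests --
        // which is why decision condition / condition coverage combines both.
        System.out.println(classify(1, 0));
        System.out.println(classify(0, 1));

        // Multiple-condition coverage would need all four TRUE/FALSE
        // combinations: (1,1), (1,0), (0,1), (0,0).
    }
}
```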

White box testing

Black box testing

● Equivalence Partitioning

● Boundary Value Analysis

Equivalence Partitioning

▪ An exhaustive input test of a program is impossible.

▪ We are limited to a small subset of all possible inputs.

▪ We want to select the ‘‘right’’ subset, that is, the subset with the
highest probability of finding the most errors.

Equivalence Partitioning

Identifying the Equivalence Classes

Guidelines for constructing equivalence classes:

▪ If an input condition specifies a range of values, identify one valid


equivalence class and two invalid equivalence classes

▪ If an input condition specifies the number of values, identify one


valid equivalence class and two invalid equivalence classes
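For example, if an input condition specifies the range 1 to 100, the first guideline yields one valid class (1 to 100) and two invalid classes (below 1 and above 100). A sketch, with a validator invented for illustration:

```java
// Illustrative validator for an input that must lie in the range 1..100.
public class RangeCheck {
    public static boolean isValid(int n) {
        return n >= 1 && n <= 100;
    }

    public static void main(String[] args) {
        // One representative test value per equivalence class:
        System.out.println(isValid(50));  // valid class: 1..100
        System.out.println(isValid(-5));  // invalid class: below the range
        System.out.println(isValid(200)); // invalid class: above the range
    }
}
```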

Practice
Boundary Value Analysis

❖ Boundary conditions are those situations directly on, above, and


beneath the edges of equivalence classes.

❖ General Guidelines:
➢ If an input condition specifies a range of values, write test cases for the
ends of the range, and invalid-input test cases for situations just
beyond the ends.
➢ If an input condition specifies a number of values, write test cases for
the minimum and maximum number of values, and for one below and
one beyond these values.

Example
For example, say a program specification states that the program
accepts 4 to 8 inputs which are five-digit integers greater than
10,000. You use this information to identify the input partitions and
possible test input values.
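Applying the two guidelines to this specification gives a count partition (valid: 4 to 8 inputs; invalid: fewer than 4 or more than 8) and a value partition (valid: 10,001 to 99,999; invalid: at most 10,000 or above 99,999). A sketch, with the checker invented for illustration:

```java
import java.util.List;

// Illustrative checker for the spec: 4 to 8 inputs, each a five-digit
// integer greater than 10,000 (i.e., in the range 10,001..99,999).
public class SpecCheck {
    public static boolean accepts(List<Integer> inputs) {
        if (inputs.size() < 4 || inputs.size() > 8) return false;
        for (int v : inputs) {
            if (v <= 10000 || v > 99999) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Boundary values: counts 3, 4, 8, 9; values 10000, 10001, 99999, 100000.
        System.out.println(accepts(List.of(10001, 99999, 50000, 20000))); // valid
        System.out.println(accepts(List.of(10001, 99999, 50000)));        // only 3 inputs
        System.out.println(accepts(List.of(10000, 20000, 30000, 40000))); // 10000 not > 10,000
    }
}
```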

General Testing Guidelines

■ Choose inputs that force the system to generate all error


messages
■ Design inputs that cause input buffers to overflow
■ Repeat the same input or series of inputs numerous times
■ Force invalid outputs to be generated
■ Force computation results to be too large or too small

Stages of Testing

Testing Procedure

✧ Manual testing
A tester runs the program with some test data and compares the results
to their expectations.

✧ Automated testing
The tests are encoded in a program that is run each time the system
under test (SUT) is to be tested.
Automated testing is faster than manual testing, but testing can never
be completely automated: it is practically impossible to automate tests
of systems that depend on how things look (the UI).

Automated Testing
An automated test has three parts:

1. A setup part, where you initialize the system with the test
case, namely the inputs and expected outputs
2. A call part, where you call the object or method to be tested.
3. An assertion part, where you compare the result of the call
with the expected result. If the assertion evaluates to true, the
test has been successful; if false, then it has failed.
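These three parts can be sketched in plain Java without assuming any test framework; the method under test is invented for illustration:

```java
// Sketch of the three parts of an automated test (no framework assumed).
public class AutomatedTestDemo {
    // Method under test: returns the larger of two integers.
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    public static void main(String[] args) {
        // 1. Setup: initialize the test case with its input and expected output.
        int a = 3, b = 5;
        int expected = 5;

        // 2. Call: invoke the method to be tested.
        int actual = max(a, b);

        // 3. Assertion: compare the result of the call with the expected result.
        System.out.println(actual == expected ? "test passed" : "test failed");
    }
}
```

In a framework such as JUnit, the assertion part would typically use an assertion method instead of the manual comparison shown here.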

Automated Testing

Stages of Testing

❖ Development testing
▪ where the system is tested during development to discover bugs and
defects.
▪ Done by system designers and programmers
❖ Release testing
▪ a complete version of the system is tested before it is released to users
▪ Done by a separate testing team
❖ User testing
▪ users or potential users of a system test the system in their own
environment

Development Testing

Development testing includes all testing activities that are


carried out by the team developing the system.
Testing may be carried out at three levels of granularity:
✧ Unit testing:
▪ individual program units or object classes are tested.
✧ Component testing
▪ where several individual units are integrated to create composite
components
✧ System testing
▪ some or all of the components in a system are integrated and the system
is tested as a whole
Unit Testing

Unit testing is the process of testing individual program


components in isolation.

✧ Units may be:


▪ Individual functions or methods within an object.
▪ Object classes with several attributes and methods
▪ Composite components with defined interfaces used to access their
functionalities
✧ Testing is performed by calling these units with different input
parameters

Object Class Testing

• Tests should be designed in a way that provides coverage of
all the features of the object.

• Complete test coverage of a class involves:


▪ testing all operations associated with an object
▪ setting and checking the values of all attributes associated with the
object
▪ putting the object into all possible states

Component Testing
❑ After individual objects have been unit tested in isolation, they
are combined (integrated) to form “composite components”.
❑ The functionalities of these objects are accessed through their
defined component interfaces
❑ Component testing focuses on showing that the component
interface behaves according to its specification

Interface Testing

✧ Parameter interfaces
▪ data are passed from one component to another.
✧ Shared memory interfaces
▪ a block of memory is shared between components.
✧ Procedural interfaces
▪ one component encapsulates a set of procedures that can be
called by other components.
✧ Message passing interfaces
▪ one component requests a service from another component by
passing a message to it.
Interface Errors

✧ Interface misuse
▪ A calling component calls some other component and makes an
error in the use of its interface. Example: parameters of the wrong
type may be passed, parameters may be passed in the wrong order,
or the wrong number of parameters may be passed
✧ Interface misunderstanding
▪ A calling component misunderstands the specification of the
interface of the called component and makes assumptions about
its behavior.
✧ Timing errors
▪ The called and calling components operate at different speeds
and out-of-date information is accessed.
Interface Testing Guidelines

• Design tests in which the parameters of the called component are
at the extreme ends of their ranges.
• Always test pointer parameters with a null pointer.
• Design tests which cause the component to fail.
• Use stress testing in message passing systems.
• For shared memory, design tests that vary the order in which
the components are activated.
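The first three guidelines can be sketched for a simple procedural interface; the component and its parameter checking are invented for illustration:

```java
// Illustrative component: averages the first n elements of data.
public class Stats {
    public static double averageOfFirst(double[] data, int n) {
        if (data == null || n <= 0 || n > data.length) {
            throw new IllegalArgumentException("invalid parameters");
        }
        double sum = 0;
        for (int i = 0; i < n; i++) sum += data[i];
        return sum / n;
    }

    public static void main(String[] args) {
        double[] data = {1.0, 2.0, 3.0};
        // Extreme ends of the parameter range: n = 1 and n = data.length.
        System.out.println(averageOfFirst(data, 1));           // prints 1.0
        System.out.println(averageOfFirst(data, data.length)); // prints 2.0
        // Null pointer parameter and a test designed to make the component fail:
        try {
            averageOfFirst(null, 1);
        } catch (IllegalArgumentException e) {
            System.out.println("null rejected as expected");
        }
    }
}
```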
System Testing

✧ System testing during development involves integrating


components to create a version of the system and then
testing the integrated system
✧ Focuses on testing the interactions between the
components
✧ Checks whether the components are compatible, interact
correctly, and transfer data correctly across their interfaces.
System Testing Vs Component Testing

✧ During system testing, reusable components that have been


separately developed and off-the-shelf systems may be
integrated with newly developed components. The complete
system is then tested

✧ Components developed by different team members or groups


may be integrated at this stage. System testing is a collective
rather than an individual process.
