
OBJECT-ORIENTED SOFTWARE ENGINEERING
Object-Oriented Software Testing

Roll No: 19
Roll No: 20
AN OVERVIEW OF TESTING
 Testing is defined as the activity of checking whether the actual results match the expected results, and of ensuring that the software is defect free
 Roughly 50% of the development effort is spent on testing
TEST TYPES
 Unit testing
 Integration testing
 System testing
 Regression testing
 Operation testing
 Full-scale testing
 Performance testing
 Stress testing
 Negative testing
 Tests based on the requirement specification
 Ergonomic tests
 Testing of user documents
 Acceptance testing
 Alpha testing
 Beta testing
TEST TYPES
 Unit testing--one and only one unit is tested
 Integration testing--verifies that the units work together correctly
 System testing--verifies that the integrated system as a whole works correctly
 Regression test--when changes are made to the system, verifies that the old functionality remains
 Operation test--if the system has to be reconfigured during operation, this should be tested
 Full-scale test--run the program at its maximum limits: many simultaneous users, many use cases, all equipment connected
TEST TYPES
 Performance test--measure the performance under different loads, e.g. store allocation, CPU utilization, speed
 Stress test--an overload test
 Negative test--performed to try to break the system and to verify the application's response to unwanted inputs
 Requirements specification test--check explicitly against the requirement specification
 Ergonomic test--user support and usability test:
 Is the interface consistent between several interfaces?
 Are the menus readable?
TEST TYPES
 Testing of user documents--the user manual and the maintenance documentation should be checked:
 for language
 readability
 balance between text and pictures

 Acceptance testing--tested with real data

 Alpha testing--users of the software work with the development team at the developer's site
 Beta testing--all users experiment at the user's site
INSPECTION AND TESTING

Fig: Inspections and testing. Inspections are applied to the requirements specification, the UML diagrams, the software architecture model, the database schemas and the program; testing is applied to the system prototype.

INSPECTION

 Inspections and reviews can be more cost effective than testing for discovering interface errors
 Inspections mostly focus on the source code of the system, the requirements and the design models
 A single inspection can discover many errors in a system
 Incomplete versions of the software can be inspected without additional cost
 Inspections also look for inefficiencies, inappropriate algorithms and poor programming styles that could make the system difficult to maintain
Fig: Schematic illustration of automated testing. A test program feeds input to the tested unit and compares the output against the expected output.

 Test programs: regard them as part of the product, but place them in a separate block
 Used for further maintenance of the system
 Used to perform regression tests
USE OF INTERFACE SIMULATOR

 To obtain a simple interface to the system, and to make the test program independent of the system, an interface simulator (test driver) is built
 The interface simulator saves work, since the test program does not itself have to handle the interface of the system in the target environment

Fig: Use of an interface simulator. The test program drives the system block through the interface simulator, using the test data.


Fig: A model of software testing. Design test cases, prepare test data, run the program with the test data, and compare the results with the test cases; the artifacts produced along the way are the test cases, test data, test results and test reports.

Typically a commercial software system has to go through three stages of testing:
1. Development testing
A. Unit testing
B. Component/integration testing
C. System testing
2. Release testing
3. User testing
1. DEVELOPMENT TESTING
 Development testing includes all testing activities that are carried out by the team developing the system.
 The testing group is responsible for developing the tests and maintaining the test details.
 Testing is carried out at three levels of granularity:
A. Unit testing
B. Component (integration) testing
C. System testing
Fig: Unit testing and integration testing are made at different levels. Integration testing covers the system, subsystems, service packages and blocks; unit testing covers service packages, blocks and classes. Note that service packages and blocks are tested in both manners.
TESTING ACTIVITIES
construction Testing
system
system
Sub system
sub
system Use case
Use case
Service packages Service packages
blocks blocks
classes class

Fig: testing activities performed


A) UNIT TESTING
 Process of testing units:
 classes
 individual methods/functions
 blocks
 service packages
 Tests are designed to provide coverage of all features of the object:
 test all operations associated with the object
 set and check the values of all attributes associated with the object
 put the object into all possible states
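As a minimal, JUnit-5-style sketch of such a unit test (the slides do not prescribe a framework, and the BoundedStack class below is a hypothetical unit, echoing the stack example used later for equivalence partitioning): the test calls the operations, checks the attribute values, and drives the object through its empty, loaded and full states.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit under test, used only for illustration.
class BoundedStack {
    private final int[] items;
    private int size = 0;                       // attribute to set and check

    BoundedStack(int capacity) { items = new int[capacity]; }

    void push(int value) {
        if (size == items.length) throw new IllegalStateException("full");
        items[size++] = value;
    }

    int pop() {
        if (size == 0) throw new IllegalStateException("empty");
        return items[--size];
    }

    int size() { return size; }
}

class BoundedStackTest {
    @Test
    void coversOperationsAttributesAndStates() {
        BoundedStack s = new BoundedStack(2);

        // empty state: popping must fail
        assertThrows(IllegalStateException.class, s::pop);

        // loaded state: exercise the operations and check the size attribute
        s.push(1);
        s.push(2);
        assertEquals(2, s.size());

        // full state: pushing must fail
        assertThrows(IllegalStateException.class, () -> s.push(3));

        // back towards the empty state
        assertEquals(2, s.pop());
        assertEquals(1, s.pop());
        assertEquals(0, s.size());
    }
}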
I. STRUCTURAL TESTING
 Tests that the internal structure is correct
 Measures of test coverage
 The least coverage is to exercise each decision-to-decision (DD) path at least once, e.g. each branch of an IF statement
 Loops must be given extra care

Fig: The member function in (a) can be tested with only two test cases, illustrated in (b), to fully cover each DD path.
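As an illustrative sketch (the member function from the figure is not reproduced, so a hypothetical method with a single IF decision stands in for it), two test cases, one per branch, cover every decision-to-decision path:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class Discount {
    // One IF decision gives two DD paths: the "true" branch and the "false" branch.
    static int price(int amount) {
        if (amount > 100) {
            return amount - 10;   // path 1: discount applied
        }
        return amount;            // path 2: no discount
    }
}

class DiscountTest {
    @Test void coversTrueBranch()  { assertEquals(140, Discount.price(150)); }
    @Test void coversFalseBranch() { assertEquals(80,  Discount.price(80));  }
}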
unit test cont….
STRUCTURAL TESTING CONT……
 Polymorphism
 Dynamic binding: runtime binding / late binding
 E.g.

class Member {
    ...
    abstract boolean validatePayment(Account a, int amount, Card c);
    ...
}

 Member subclasses: Gold member, Silver member, Bronze member
 Account subclasses: Nepal Acc, China Acc, Pakistan Acc, EU Acc, Japan Acc
 Card subclasses: Visa card, Debit card, AM Express card
 3 * 5 * 3 = 45 possible combinations of dynamic binding
 It is often difficult to find and test all bindings that may occur
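A hedged sketch of how those combinations could be exercised (JUnit-5 style; the interfaces below only mirror the names on the slide, and the lists would be filled with one instance of each concrete subclass, which is left out here): the test simply walks the 3 * 5 * 3 Cartesian product and calls validatePayment for each binding.

import org.junit.jupiter.api.Test;
import java.util.List;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Illustrative stand-ins for the slide's hierarchies; the concrete subclasses are assumed.
interface Member  { boolean validatePayment(Account a, int amount, Card c); }
interface Account { }
interface Card    { }

class BindingCombinationTest {
    // Hypothetical lists holding one instance of each concrete subclass
    // (gold/silver/bronze members, the five account types, the three card types).
    List<Member>  members  = List.of();
    List<Account> accounts = List.of();
    List<Card>    cards    = List.of();

    @Test
    void everyDynamicBindingCombinationIsExercised() {
        // 3 member types * 5 account types * 3 card types = 45 combinations
        for (Member m : members)
            for (Account a : accounts)
                for (Card c : cards)
                    assertTrue(m.validatePayment(a, 100, c),
                               "binding failed for " + m + "/" + a + "/" + c);
    }
}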
unit test cont….
STRUCTURAL TESTING CONT……
 Polymorphism

Fig: A class hierarchy of shapes

 Declare a variable figure of type Shape
 The descendants all have a Draw operation
 figure can then be assigned an object of any of Shape's descendants
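A minimal sketch of the hierarchy in the figure (class and method names are assumed, only the idea comes from the slide): a variable declared as Shape can be assigned any descendant, and the draw() call is bound at runtime, so each concrete Draw operation needs its own test case.

abstract class Shape {
    abstract void draw();          // every descendant provides its own Draw operation
}

class Circle extends Shape {
    @Override void draw() { System.out.println("drawing a circle"); }
}

class Square extends Shape {
    @Override void draw() { System.out.println("drawing a square"); }
}

class ShapeDemo {
    public static void main(String[] args) {
        Shape figure;              // variable of type Shape

        figure = new Circle();     // assign a descendant...
        figure.draw();             // ...dynamic binding picks Circle.draw()

        figure = new Square();
        figure.draw();             // ...and here Square.draw()
    }
}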

Cont…..
STRUCTURAL TESTING CONT…..
 Inheritance
 Not only must we re-test the descendants when we modify an ancestor class; when we add a new descendant we may also need to re-test the inherited operations

class A {
    protected int x = 200;   // invariant: x > 100
    ...
    void m() { ... }         // correctness depends on the invariant on x
}

class B extends A {
    void m1() { x = 1; }     // may set x to 1, 2, 3, ...; the invariant x > 100 is not maintained here
}

 We need to develop unique test cases for every level in the inheritance hierarchy

unit test cont….
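A small illustrative sketch of why every level needs its own tests (the method bodies below are assumptions; only the x > 100 invariant comes from the slide): a test that passes for m() on an A instance can fail on a B instance once m1() has broken the invariant.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Concrete stand-ins for the slide's classes, for illustration only.
class A {
    protected int x = 200;          // invariant: x > 100
    int m() { return x - 100; }     // contract: returns a positive value (relies on x > 100)
}

class B extends A {
    void m1() { x = 1; }            // breaks the invariant x > 100
}

class InheritanceRetestTest {
    @Test
    void contractHoldsForAncestor() {
        assertTrue(new A().m() > 0);        // passes: the invariant holds in A
    }

    @Test
    void inheritedOperationMustBeRetestedOnDescendant() {
        B b = new B();
        b.m1();                              // the descendant invalidates the invariant
        assertFalse(b.m() > 0);              // the inherited m() no longer meets its contract
    }
}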
STRUCTURAL TESTING
CONT…..
 Overriding

class A {
    void m()  { ...; m2(); }   // m() calls m2()
    void m2() { ... }
}

class B extends A {
    void m2() { ... }          // class B overrides method m2()
}

 When m() is called in the context of a class B instance, dynamic binding means B.m2() is called
 We can no longer be sure that m() is correct
 We need to re-test m() with a B instance

unit test cont….
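A minimal sketch of that re-test (the bodies of m() and m2() are assumptions; only the call structure comes from the slide): the inherited m() is exercised once through an A instance and once through a B instance, because the override changes which m2() the call is bound to.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Illustrative stand-ins; only the call structure matches the slide.
class A {
    String m()  { return "m->" + m2(); }       // m() calls m2()
    String m2() { return "A.m2"; }
}

class B extends A {
    @Override String m2() { return "B.m2"; }   // overrides m2()
}

class OverrideRetestTest {
    @Test
    void mWithAnAInstance() {
        assertEquals("m->A.m2", new A().m());
    }

    @Test
    void mMustBeRetestedWithABInstance() {
        // Dynamic binding: the inherited m() now calls B.m2().
        assertEquals("m->B.m2", new B().m());
    }
}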


STRUCTURAL TESTING CONT…..
To test an abstract class, focus on two properties:
 We must test that we can inherit from the class and that we can create an instance of a descendant
 We must test that any stimuli sent to the object itself work properly
unit test cont….
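A small sketch of those two checks (the abstract Account class and its test-only descendant are assumed names, not from the slides): a concrete subclass shows that the class can be inherited and instantiated, and the stimuli are then sent through that instance.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical abstract class under test.
abstract class Account {
    private int balance = 0;
    void deposit(int amount) { balance += amount; }     // concrete stimulus
    int balance() { return balance; }
    abstract int interestRate();                        // must be supplied by descendants
}

// Test-only descendant: proves the class can be inherited and instantiated.
class TestAccount extends Account {
    @Override int interestRate() { return 5; }
}

class AbstractClassTest {
    @Test
    void canInheritAndInstantiateADescendant() {
        Account a = new TestAccount();
        assertEquals(5, a.interestRate());
    }

    @Test
    void stimuliSentToTheObjectWorkProperly() {
        Account a = new TestAccount();
        a.deposit(100);
        assertEquals(100, a.balance());
    }
}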


II. SPECIFICATION TESTING
 Contains the conditions for the tests, which control the testing rather than the test outcomes:
 which tests, and in which order
 a detailed description of how the tests shall be performed
 the expected output
 the criteria for an approved test
 Report skeletons are prepared prior to the testing
STATE-BASED TESTING

 When an object passes a message to another object, or receives a message from an object, it performs some task (operation) which leads to an alteration of its state
STATE-BASED TESTING
Fig: Example of a state model for an order. Order received leads to Unprocessed order; Checked (reject) leads to Reject order; Checked (accept) leads to Accept order; Checked (pending) leads to Pending order; Checked (fulfilled) leads to Fulfilled order.
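A hedged sketch of a state-based test for an order (state and event names loosely follow the figure; the Order class itself is an assumption): each test case drives the object along one path through the state model and checks the resulting state.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical order object whose states follow the figure's state model.
class Order {
    enum State { UNPROCESSED, ACCEPTED, REJECTED, PENDING, FULFILLED }

    private State state = State.UNPROCESSED;     // "order received"

    void check(boolean accept) { state = accept ? State.ACCEPTED : State.REJECTED; }
    void markPending()         { state = State.PENDING; }
    void fulfil()              { state = State.FULFILLED; }
    State state()              { return state; }
}

class OrderStateTest {
    @Test
    void rejectedPath() {
        Order o = new Order();
        o.check(false);
        assertEquals(Order.State.REJECTED, o.state());
    }

    @Test
    void acceptedAndFulfilledPath() {
        Order o = new Order();
        o.check(true);
        assertEquals(Order.State.ACCEPTED, o.state());
        o.fulfil();
        assertEquals(Order.State.FULFILLED, o.state());
    }
}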


CHOOSING UNIT TEST CASES
 Equivalence partition testing
 Identify groups of inputs that have common characteristics and should be processed in the same way
 Divides the input data of a software unit into partitions of equivalent data, from which test cases can be derived
 Reduces the number of tests that we need to perform: select a reasonably small number of test cases
 The probability of fault finding is high
 E.g. when we write test cases for a stack, we may write test cases for when the stack is empty, loaded or full

Fig: Possible inputs are grouped into input equivalence partitions, which the system maps to output partitions; test cases are designed so that inputs and outputs lie within these partitions.
CHOOSING UNIT TEST CASES
 Guideline-based testing
 What kinds of test cases are effective for discovering errors?
 Guidelines reflect previous experience of the kinds of errors that programmers often make when developing components
 Some guidelines:
 Design inputs that cause input buffer overflow
 Force computation results to be too large or too small
 Choose inputs that force the system to generate all error messages
 Repeat the same input or series of inputs numerous times
 Force invalid outputs to be generated
 Identify partitions by using the program specification or user documents, and from experience predict the classes of input values that are likely to detect errors
Fig: Equivalence partitions. For the number of input values: less than 4, between 4 and 10, more than 10 (test values 3, 4, 7, 10, 11). For the input values: less than 10000, between 10000 and 99999, more than 99999 (test values 9999, 10000, 50000, 99999, 100000).
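A minimal sketch of partition-based test cases for the second example in the figure (the validator function is an assumption): one value is taken from inside each partition and at each boundary.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical unit: accepts only values in the 10000..99999 partition.
class InputValidator {
    static boolean isValid(int value) {
        return value >= 10000 && value <= 99999;
    }
}

class EquivalencePartitionTest {
    @Test
    void belowThePartition() {
        assertFalse(InputValidator.isValid(9999));     // "less than 10000"
    }

    @Test
    void insideThePartition() {
        assertTrue(InputValidator.isValid(10000));     // lower boundary
        assertTrue(InputValidator.isValid(50000));     // mid-partition value
        assertTrue(InputValidator.isValid(99999));     // upper boundary
    }

    @Test
    void aboveThePartition() {
        assertFalse(InputValidator.isValid(100000));   // "more than 99999"
    }
}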


Thank you
B) INTEGRATION TESTING
 Blocks, service packages and subsystems are tested
 Performed by programmers or by testing groups
 Uses the use cases
 Purpose: verifying that the units work together correctly
 The order in which the subsystems are selected for testing and integration determines the testing strategy:
 Bottom-up integration
 Top-down integration
 Sandwich testing
INTEGRATION TESTING
 Test cases are applied to the interface of the composite component created by combining components
 Different interface errors may occur:
 Parameter interface: data or function references are passed from one component to another
 Methods in an object have a parameter interface
 Shared memory interface: a block of memory is shared between components
 Data is placed there by one subsystem and accessed by another subsystem
INTEGRATION TESTING
 Procedural interface: one component encapsulates a set of procedures that can be called by other components
 Objects and reusable components have this form of interface
 Message passing interface: one component requests a service from another component by passing a message

Fig: Interface testing. Test cases are applied to the interfaces between components A, B and C.
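A hedged sketch of a test applied to a parameter interface between two components (the components and their behaviour are assumptions, not from the slides): the test calls the composite through one component's interface and checks that the data passed across the interface to the other component is handled correctly.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical components used only to illustrate a parameter interface.
class PriceCatalog {
    int priceOf(String item) {
        return "book".equals(item) ? 30 : 0;      // 0 means "unknown item"
    }
}

class Checkout {
    private final PriceCatalog catalog;
    Checkout(PriceCatalog catalog) { this.catalog = catalog; }

    // Parameter interface: the item name is passed on to the catalog.
    int total(String item, int quantity) {
        return catalog.priceOf(item) * quantity;
    }
}

class CheckoutIntegrationTest {
    @Test
    void componentsCooperateOnAKnownItem() {
        Checkout checkout = new Checkout(new PriceCatalog());
        assertEquals(60, checkout.total("book", 2));
    }

    @Test
    void unknownItemIsPassedAcrossTheInterface() {
        Checkout checkout = new Checkout(new PriceCatalog());
        assertEquals(0, checkout.total("pen", 2));   // the catalog reports the item as unknown
    }
}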
C) SYSTEM TESTING
 System testing checks that the components are compatible, interact correctly and transfer the right data at the right time across their interfaces.
 System testing may involve separate testing teams with no involvement from designers and programmers.
 Because of its focus on interactions, use-case-based testing is an effective approach to system testing.
SYSTEM TESTING
 Includes the following tests:
 Operation testing
 Full-scale testing
 Negative testing
 Tests based on requirement specifications
 Tests of user documents
SYSTEM TESTING CONT …….
Fig: Collect weather data sequence chart (weather information system). SatComms sends request(report) to the weather station and receives an acknowledge; the weather station calls reportWeather(), acknowledges, and issues get(summary) to the Commslink, which calls summarise() on the weather data; the report is then returned with send(report) / reply(report) and acknowledgements.

 The weather station is asked to report summarized weather data to a remote computer:
 SatComms: request; weather station: report; Commslink: get(summary); weather data: summarise.

Fig: Activities in integration/system testing
TEST PLANNING
 Testing guidelines are established
 Automatic or manual testing?
 When the requirement model is ready, test planning starts
 Can existing test programs and test data be used?
 Determine what degree of coverage our tests have
 Test incrementally
 A test log is kept during the entire test work (a brief survey and history of the test activities)
TEST IDENTIFICATION
 To find out what shall be tested
 Use cases are initially tested separately
 Use cases constitute an excellent tool for the integration test, since they explicitly interconnect several blocks and service packages
 For each use case the following tests are performed:
 Basic course tests--the expected flow of events
 Odd course tests
 Tests based on requirement specifications
 Tests of user documents
 Regression tests
TEST EXECUTION
 As soon as some use cases are approved, test execution starts
 Test as much as possible in parallel
 By automatic testing and manual testing
 A decision table is used to get an assessment of the results of the tests
 The evaluation is summed up and compared to a limit value
 If the sum exceeds this value, the tests have approved the test object
 A test report is prepared: the results of the individual subtests, the resources spent, etc.
ERROR ANALYSIS
 Test results must be analyzed and the reason for each fault identified
 A fault need not be due to the system; in some cases:
 Has the test been performed correctly?
 Is there a fault in the test data or the test program?
 Is the failure caused by the test bed?
 After the test, deficient blocks should be sent back to the designer
TEST COMPLETION

 After testing is completed, the equipment and the test bed should be restored for reuse
 The documentation prepared should be saved
 Experiences from the testing are collected and discussed in order to learn for future test activities
 Concluding notes are made and filed
2) RELEASE TESTING
 Release testing is the process of testing a particular release of a system that is intended for use outside of the development team.
 The objective is to check that the system meets its requirements.
 Also called 'functional testing' because the tester is only concerned with the functionality and not the implementation of the software.
RELEASE TESTING CONT …….
 Requirements-based testing: requirements-based testing is a form of validation
 Scenario testing: a method in which actual scenarios are used for testing the software application instead of test cases
 Performance testing: the practice of evaluating how a system performs in terms of responsiveness and stability under a particular workload; performance tests are typically executed to examine speed, robustness, reliability and application size
3) USER TESTING
 Users or customers provide input or advice
 In practice there are three types of user testing:
 Alpha testing: users of the software work with the development team to test the software at the developer's site
 Beta testing: a release of the software is made available to users to allow them to experiment and to raise problems that they discover with the system
 Acceptance testing: customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer's environment
Fig: Acceptance testing process. Define acceptance criteria, plan acceptance testing, derive acceptance tests, run acceptance tests, negotiate test results, and accept or reject the system; the artifacts produced are the test criteria, the test plan, the tests, the test results and the testing report.

 Acceptance testing: tested with real data
 Alpha testing: users of the software work with the development team at the developer's site
 Beta testing: all users experiment at the user's site
Thank you
