Lesson 8 Testing

The document discusses the processes of software testing, verification, and validation, emphasizing the importance of ensuring that software meets its requirements and functions correctly. It outlines various testing goals, stages, and methodologies, including development, release, and user testing, as well as techniques like black-box and white-box testing. The ultimate aim is to establish confidence that the software is fit for purpose, aligning with user expectations and market conditions.

BCS:

CERTIFICATE IN SOFTWARE
DEVELOPMENT

PHASE-SPECIFIC ISSUES: (TESTING AND DEBUGGING)


SYSTEM TESTING
 Testing is intended to show that a program does what it is intended
to do and to discover program defects before it is put into use.
 When you test software, you execute a program using artificial data.
 You check the results of the test run for errors, anomalies, or
information about the program’s non-functional attributes.
 Testing cannot demonstrate that the system is free of defects or
that it will behave as specified in every circumstance.
 It is always possible that a test that you have overlooked could
discover further problems with the system.
 Testing can only show the presence of errors, not their absence.
TESTING GOALS
1. To demonstrate to the developer and the customer that the
software meets its requirements.
2. To discover situations in which the behavior of the software is
incorrect, undesirable, or does not conform to its specification.
These are a consequence of software defects. Defect testing is
concerned with rooting out undesirable system behavior such as
system crashes, unwanted interactions with other systems,
incorrect computations, and data corruption.
TESTING GOALS

 The first goal leads to validation testing, where you expect the
system to perform correctly using a given set of test cases that
reflect the system’s expected use.
 The second goal leads to defect testing, where the test cases are
designed to expose defects.
 The test cases in defect testing can be deliberately obscure and
need not reflect how the system is normally used.
SOFTWARE VERIFICATION AND VALIDATION
Testing is part of a broader process of software verification and validation (V
& V).
1. Validation: Are we building the right product?
2. Verification: Are we building the product right?
 Verification and validation processes are concerned with checking that
software being developed meets its specification and delivers the functionality
expected by the people paying for the software.
 The aim of verification is to check that the software meets its stated functional
and non-functional requirements.
 Validation, however, is a more general process. The aim of validation is to
ensure that the software meets the customer’s expectations. It goes beyond
simply checking conformance with the specification to demonstrating that the
software does what the customer expects it to do.
 Validation is essential because requirements specifications do not always
reflect the real wishes or needs of system customers and users.
ULTIMATE GOAL OF VERIFICATION AND VALIDATION

 The ultimate goal of verification and validation processes is to establish
confidence that the software system is ‘fit for purpose’.
 This means that the system must be good enough for its intended use.
 The level of required confidence depends on the system’s purpose, the
expectations of the system users, and the current marketing environment for
the system.
1. Software purpose The more critical the software, the more important that
it is reliable. For example, the level of confidence required for software used
to control a safety-critical system is much higher than that required for a
prototype that has been developed to demonstrate new product ideas.
ULTIMATE GOAL OF VERIFICATION AND
VALIDATION
2. User expectations When a new system is installed, users may
tolerate failures because the benefits of use outweigh the costs of
failure recovery. However, as software matures, users expect it to
become more reliable so more thorough testing of later versions
may be required.
3. Marketing environment When a system is marketed, the sellers
of the system must take into account competing products, the
price that customers are willing to pay for a system, and the
required schedule for delivering that system. In a competitive
environment, a software company may decide to release a
program before it has been fully tested and debugged because
they want to be the first into the market. If a software product is
very cheap, users may be willing to tolerate a lower level of
reliability.
SOFTWARE INSPECTIONS AND REVIEWS
 The verification and validation process may involve software
inspections and reviews.
 Inspections and reviews analyze and check the system
requirements, design models, the program source code, and even
proposed system tests.
 These are usually referred to as ‘static’ V & V techniques, in which you
don’t need to execute the software to verify it.
 Inspections mostly focus on the source code of a system but any
readable representation of the software, such as its requirements or
a design model, can be inspected.
 When you inspect a system, you use knowledge of the system, its
application domain, and the programming or modeling language to
discover errors.
ADVANTAGES OF SOFTWARE INSPECTION OVER
TESTING

1. During testing, errors can mask (hide) other errors. When an error leads to
unexpected outputs, you can never be sure if later output anomalies are due to a
new error or are side effects of the original error. Because inspection is a static
process, you don’t have to be concerned with interactions between errors.
Consequently, a single inspection session can discover many errors in a system.
2. Incomplete versions of a system can be inspected without additional costs. If a
program is incomplete, then you need to develop specialized test harnesses to test
the parts that are available. This obviously adds to the system development costs.
3. As well as searching for program defects, an inspection can also consider broader
quality attributes of a program, such as compliance with standards, portability, and
maintainability. You can look for inefficiencies, inappropriate algorithms, and poor
programming style that could make the system difficult to maintain and update.
STAGES OF TESTING
 Typically, a commercial software system has to go through three stages of
testing:
1. Development testing, where the system is tested during
development to discover bugs and defects. System designers and
programmers are likely to be involved in the testing process.
2. Release testing, where a separate testing team tests a complete
version of the system before it is released to users. The aim of release
testing is to check that the system meets the requirements of system
stakeholders.
3. User testing, where users or potential users of a system test the
system in their own environment. For software products, the ‘user’ may
be an internal marketing group who decide if the software can be
marketed, released, and sold. Acceptance testing is one type of user
testing where the customer formally tests a system to decide if it
should be accepted from the system supplier or if further development is required.
1. DEVELOPMENT TESTING
 Development testing includes all testing activities that are carried
out by the team developing the system.
 The tester of the software is usually the programmer who developed
that software, although this is not always the case.
 Some development processes use programmer/tester pairs where
each programmer has an associated tester who develops tests and
assists with the testing process.
 For critical systems, a more formal process may be used, with a
separate testing group within the development team.
 They are responsible for developing tests and maintaining detailed
records of test results.
DEVELOPMENT TESTING LEVELS

 During development, testing may be carried out at three levels of
granularity:
1. Unit testing, where individual program units or object classes are tested.
Unit testing should focus on testing the functionality of objects or methods.
2. Component testing, where several individual units are integrated to
create composite components. Component testing should focus on testing
component interfaces.
3. System testing, where some or all of the components in a system are
integrated and the system is tested as a whole. System testing should
focus on testing component interactions.
DEVELOPMENT TESTING LEVELS

 Development testing is primarily a defect testing process, where the
aim of testing is to discover bugs in the software.
 It is therefore usually interleaved with debugging.
 Debugging is the process of locating problems with the code and
changing the program to fix these problems.
A) UNIT TESTING
 Unit testing is the process of testing program components, such as methods or object
classes.
 Individual functions or methods are the simplest type of component. Your
tests should be calls to these routines with different input parameters.
 When you are testing object classes, you should design your tests to provide coverage of all
of the features of the object.
 This means that you should:
1. test all operations associated with the object;
2. set and check the value of all attributes associated with the object;
3. put the object into all possible states. This means that you should simulate all events
that cause a state change.
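 As a concrete illustration, here is a minimal unit-test sketch. It assumes JUnit 4 and a made-up BoundedCounter class (neither is prescribed by this lesson); the tests call an operation, check an attribute, and drive the object into its ‘full’ state:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical class under test: a counter that never exceeds a maximum.
class BoundedCounter {
    private int value = 0;
    private final int max;
    BoundedCounter(int max) { this.max = max; }
    void increment() { if (value < max) value++; }  // operation that changes state
    int getValue() { return value; }                // attribute access
}

public class BoundedCounterTest {
    @Test
    public void incrementRaisesValueByOne() {
        BoundedCounter c = new BoundedCounter(10);
        c.increment();
        assertEquals(1, c.getValue());
    }

    @Test
    public void valueNeverExceedsMax() {
        BoundedCounter c = new BoundedCounter(2);
        c.increment(); c.increment(); c.increment();  // drive into the 'full' state
        assertEquals(2, c.getValue());
    }
}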
CHOOSING UNIT TEST CASES

 Testing is expensive and time consuming, so it is important that you
choose effective unit test cases.
 Effectiveness, in this case, means two things:
1. The test cases should show that, when used as expected, the
component that you are testing does what it is supposed to do.
2. If there are defects in the component, these should be revealed
by test cases.
STRATEGIES FOR EFFECTIVE TEST CASES

 Two possible strategies that can be effective in choosing test cases are:
1. Partition testing, where you identify groups of inputs that have
common characteristics and should be processed in the same way. You
should choose tests from within each of these groups.
2. Guideline-based testing, where you use testing guidelines to choose
test cases. These guidelines reflect previous experience of the kinds of
errors that programmers often make when developing components.
PARTITION TESTING
 The input data and output results of a program often fall into a number of
different classes with common characteristics. Examples of these classes are
positive numbers, negative numbers, and menu selections.
 Programs normally behave in a comparable way for all members of a class.
That is, if you test a program that does a computation and requires two
positive numbers, then you would expect the program to behave in the same
way for all positive numbers.
 Because of this equivalent behavior, these classes are sometimes called
equivalence partitions or domains.
 One systematic approach to test case design is based on identifying all input
and output partitions for a system or component.
 Test cases are designed so that the inputs or outputs lie within these partitions.
PARTITION TESTING
 Partition testing can be used to design test cases for both systems and
components.
 Once you have identified a set of partitions, you choose test cases from
each of these partitions.
 A good rule of thumb for test case selection is to choose test cases on the
boundaries of the partitions, plus cases close to the midpoint of the
partition.
 The reason for this is that designers and programmers tend to consider
typical values of inputs when developing a system. You test these by
choosing the midpoint of the partition.
 Boundary values are often atypical (e.g., zero may behave differently from
other non-negative numbers) so are sometimes overlooked by developers.
 Program failures often occur when processing these atypical values.
EQUIVALENCE PARTITIONING
 When you use the specification of a system to identify equivalence
partitions, this is called ‘black-box testing’. Here, you don’t need
any knowledge of how the system works.
 However, it may be helpful to supplement the black-box tests with
‘white-box testing’, where you look at the code of the program to
find other possible tests.
 For example, your code may include exceptions to handle incorrect
inputs. You can use this knowledge to identify ‘exception
partitions’—different ranges where the same exception handling
should be applied.
 Equivalence partitioning is an effective approach to testing because
it helps account for errors that programmers often make when
processing inputs at the edges of partitions.
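 For example, if the program reads its input with Integer.parseInt, inspecting the code (white-box) reveals an exception partition: every non-numeric string should be handled in the same way. A minimal JUnit 4 sketch, assuming a hypothetical readAge helper:

import org.junit.Test;

public class ExceptionPartitionTest {
    // Hypothetical input-parsing method like one the voting program might use.
    static int readAge(String text) {
        return Integer.parseInt(text);  // throws NumberFormatException for bad input
    }

    @Test(expected = NumberFormatException.class)
    public void nonNumericInputFallsInTheExceptionPartition() {
        readAge("eighteen");  // any non-numeric string lies in the same partition
    }
}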
TESTING GUIDELINES

 Guidelines encapsulate knowledge of what kinds of test cases are effective for
discovering errors. For example, when you are testing programs with
sequences, arrays, or lists, guidelines that could help reveal defects include:
1. Test software with sequences that have only a single value. Programmers
naturally think of sequences as made up of several values and sometimes
they embed this assumption in their programs. Consequently, if presented
with a single value sequence, a program may not work properly.
2. Use different sequences of different sizes in different tests. This decreases
the chances that a program with defects will accidentally produce a correct
output because of some accidental characteristics of the input.
3. Derive tests so that the first, middle, and last elements of the sequence are
accessed. This approach reveals problems at partition boundaries (see the sketch after this list).
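 A sketch of guidelines 1 and 3, assuming JUnit 4 and a hypothetical largest() method over an int array:

import org.junit.Test;
import static org.junit.Assert.*;

public class SequenceGuidelineTest {
    // Hypothetical method under test: returns the largest element of a sequence.
    static int largest(int[] seq) {
        int best = seq[0];
        for (int x : seq) if (x > best) best = x;
        return best;
    }

    @Test
    public void worksForASingleValueSequence() {
        assertEquals(7, largest(new int[] {7}));        // guideline 1
    }

    @Test
    public void firstMiddleAndLastElementsAreAccessed() {
        assertEquals(9, largest(new int[] {9, 1, 2}));  // largest first
        assertEquals(9, largest(new int[] {1, 9, 2}));  // largest in the middle
        assertEquals(9, largest(new int[] {1, 2, 9}));  // largest last (guideline 3)
    }
}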
GENERAL GUIDELINES

 Some of the most general guidelines that experienced testers suggest are:

1. Choose inputs that force the system to generate all error messages;
2. Design inputs that cause input buffers to overflow;
3. Repeat the same input or series of inputs numerous times;
4. Force invalid outputs to be generated;
5. Force computation results to be too large or too small.
 As you gain experience with testing, you can develop your own
guidelines about how to choose effective test cases.
BLACK BOX (FUNCTIONAL) TESTING
 This type of testing is termed black box testing because no knowledge of the workings of
the program is used as part of the testing – we only consider inputs and outputs. The
program is thought of as being enclosed within a black box.
 We then run the program, input the data and see what happens.
 Black box testing is also known as functional testing because it uses only knowledge of the
function of the program (not how it works).
 Ideally, testing proceeds by writing down the test data and the expected outcome of the
test before testing takes place. This is called a test specification or schedule.
 Then you run the program, input the data and examine the outputs for discrepancies
between the predicted outcome and the actual outcome.
 Test data should also check whether exceptions are handled by the program in accordance
with its specification.
BLACK BOX (FUNCTIONAL) TESTING
 Consider a program that decides whether a person can vote, depending on
their age. The minimum voting age is 18.
 We cannot realistically test this program with all possible values, but
instead we need some typical values.
 The approach to devising test data for black box testing is to use
equivalence partitioning.
 This means looking at the nature of the input data to identify common
features.
 In the voting program, we recognize that the input data falls into two
partitions:
1. the numbers less than 18
2. the numbers greater than or equal to 18
BLACK BOX (FUNCTIONAL) TESTING

 The two partitions can be pictured on a number line: the first runs from 0 up to 17, and
the second from 18 up to infinity.
 We run the program with the following sets of data and note any discrepancies between
predicted and actual outcome.

Test Number   Data   Outcome
1             0      You cannot vote
2             33     You can vote
3             18     You can vote
SELECTING TEST DATA

 In summary, the rules for selecting test data for black box testing
using equivalence partitioning are:
1. partition the input data values
2. select representative data from each partition (equivalent data)
3. select data at the boundaries of partitions.
SELF-TEST QUESTION

In a program to play the game of chess, the player specifies the
destination for a move as a pair of indices, the row and column
number. The program checks that the destination square is
valid, that it is not outside the board. Devise black box test data
to check that this part of the program is working correctly.
WHITE BOX (STRUCTURAL) TESTING
 This form of testing makes use of knowledge of how the program works – the
structure of the program – as the basis for devising test data.
 In white box testing every statement in the program is executed at some time
during the testing.
 This is equivalent to ensuring that every path (every sequence of instructions)
through the program is executed at some time during testing.
 This includes null paths, so an if statement without an else has two paths
and every loop has two paths.
 Testing should also include any exception handling carried out by the
program.
WHITE BOX (STRUCTURAL) TESTING
 Here is the Java code for the voting checker program we are using as a case study:

public void actionPerformed(ActionEvent event) {
    int age;
    age = Integer.parseInt(textField.getText());
    if (age >= 18) {
        result.setText("you can vote");
    }
    else {
        result.setText("you cannot vote");
    }
}
 In this program, there are two paths (because the if has two branches) and therefore two
sets of data will serve to ensure that all statements are executed at some time during the
testing.
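 If the decision is factored out of the event handler into a plain method (an assumption made here so it can be called directly; the slide embeds it in actionPerformed), two JUnit 4 tests exercise both paths:

import org.junit.Test;
import static org.junit.Assert.*;

public class VotingCheckerTest {
    // The if/else from the slide, extracted into a directly testable method.
    static String checkVote(int age) {
        return (age >= 18) ? "you can vote" : "you cannot vote";
    }

    @Test
    public void ifBranchIsExecuted() {
        assertEquals("you can vote", checkVote(21));
    }

    @Test
    public void elseBranchIsExecuted() {
        assertEquals("you cannot vote", checkVote(16));
    }
}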
SELF-TEST QUESTION
A program’s function is to find the largest of three numbers. Devise white box test data for this section of
program. The code is:
int a, b, c;
int largest;
if (a >= b) {
    if (a >= c) {
        largest = a;
    }
    else {
        largest = c;
    }
}
else {
    if (b >= c) {
        largest = b;
    }
    else {
        largest = c;
    }
}
B) COMPONENT TESTING
 Software components are often composite components that are made up of
several interacting objects.
 You access the functionality of these objects through the defined component
interface.
 Testing composite components should therefore focus on showing that the
component interface behaves according to its specification.
 You can assume that unit tests on the individual objects within the component
have been completed.
 The test cases are not applied to the individual components but rather to the
interface of the composite component created by combining these components.
 Interface errors in the composite component may not be detectable by testing the
individual objects because these errors result from interactions between the
objects in the component.
TYPES OF INTERFACE BETWEEN PROGRAM COMPONENTS
There are different types of interface between program components and,
consequently, different types of interface error that can occur:
1. Parameter interfaces These are interfaces in which data or sometimes function
references are passed from one component to another. Methods in an object have
a parameter interface.
2. Shared memory interfaces These are interfaces in which a block of memory is
shared between components. Data is placed in the memory by one subsystem and
retrieved from there by other sub-systems. This type of interface is often used in
embedded systems, where sensors create data that is retrieved and processed by
other system components.
3. Procedural interfaces These are interfaces in which one component
encapsulates a set of procedures that can be called by other components. Objects
and reusable components have this form of interface.
4. Message passing interfaces These are interfaces in which one component
requests a service from another component by passing a message to it. A return
message includes the results of executing the service. Some object-oriented
systems have this form of interface, as do client–server systems.
INTERFACE ERRORS
 Interface errors are one of the most common forms of error in complex systems.
These errors fall into three classes:
1. Interface misuse A calling component calls some other component and makes an
error in the use of its interface. This type of error is common with parameter
interfaces, where parameters may be of the wrong type or be passed in the wrong
order, or the wrong number of parameters may be passed.
2. Interface misunderstanding A calling component misunderstands the
specification of the interface of the called component and makes assumptions
about its behavior. The called component does not behave as expected which then
causes unexpected behavior in the calling component. For example, a binary search
method may be called with a parameter that is an unordered array. The search
would then fail.
3. Timing errors These occur in real-time systems that use a shared memory or a
message-passing interface. The producer of data and the consumer of data may
operate at different speeds. Unless particular care is taken in the interface design,
the consumer can access out-of-date information because the producer of the
information has not updated the shared interface information.
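 The binary search example is easy to reproduce with the standard library: Arrays.binarySearch requires a sorted array, so calling it across the interface with unsorted data breaks an assumption that the compiler cannot check:

import java.util.Arrays;

public class InterfaceMisunderstanding {
    public static void main(String[] args) {
        int[] unsorted = {9, 2, 7, 4, 1};
        // Interface misunderstanding: binarySearch's contract requires sorted
        // input. With unsorted data the result is undefined, so the search may
        // fail even though 7 is present.
        System.out.println(Arrays.binarySearch(unsorted, 7));

        int[] sorted = unsorted.clone();
        Arrays.sort(sorted);                                  // satisfy the contract
        System.out.println(Arrays.binarySearch(sorted, 7));   // prints 3
    }
}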
GENERAL GUIDELINES FOR INTERFACE TESTING
1. Examine the code to be tested and explicitly list each call to an external component. Design a
set of tests in which the values of the parameters to the external components are at the
extreme ends of their ranges. These extreme values are most likely to reveal interface
inconsistencies.
2. Where pointers are passed across an interface, always test the interface with null pointer
parameters (see the sketch after this list).
3. Where a component is called through a procedural interface, design tests that deliberately
cause the component to fail. Differing failure assumptions are one of the most common
specification misunderstandings.
4. Use stress testing in message passing systems. This means that you should design tests that
generate many more messages than are likely to occur in practice. This is an effective way of
revealing timing problems.
5. Where several components interact through shared memory, design tests that vary the order
in which these components are activated. These tests may reveal implicit assumptions made
by the programmer about the order in which the shared data is produced and consumed.
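 A sketch of guideline 2, assuming JUnit 4 and a hypothetical countItems component interface:

import org.junit.Test;

public class NullParameterTest {
    // Hypothetical component entry point that takes a reference parameter.
    static int countItems(int[] items) {
        if (items == null) {
            throw new IllegalArgumentException("items must not be null");
        }
        return items.length;
    }

    @Test(expected = IllegalArgumentException.class)
    public void interfaceIsAlwaysTestedWithNullParameters() {
        countItems(null);  // guideline 2: probe the interface with null pointers
    }
}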
DISCOVERING INTERFACE ERRORS

 Inspections and reviews can sometimes be more cost effective than
testing for discovering interface errors.
 Inspections can concentrate on component interfaces and questions
about the assumed interface behavior asked during the inspection
process.
 A strongly typed language such as Java allows many interface errors
to be trapped by the compiler.
C) SYSTEM TESTING
 System testing during development involves integrating components to create a
version of the system and then testing the integrated system.
 System testing checks that components are compatible, interact correctly and
transfer the right data at the right time across their interfaces.
 It obviously overlaps with component testing but there are two important differences:
1. During system testing, reusable components that have been separately
developed and off-the-shelf systems may be integrated with newly developed
components. The complete system is then tested.
2. Components developed by different team members or groups may be integrated
at this stage.
 System testing is a collective rather than an individual process. In some companies,
system testing may involve a separate testing team with no involvement from
designers and programmers.
SYSTEM TESTING
 System testing focuses on testing the interactions between the components and
objects that make up a system.
 Reusable components or systems are tested to check that they work as expected
when they are integrated with new components.
 This interaction testing should discover those component bugs that are only
revealed when a component is used by other components in the system.
 Interaction testing also helps find misunderstandings, made by component
developers, about other components in the system.
 Because of its focus on interactions, use case–based testing is an effective
approach to system testing.
 Typically, each use case is implemented by several components or objects in the
system.
TEST-DRIVEN DEVELOPMENT
 Test-driven development (TDD) is an approach to program development in
which you interleave testing and code development (Beck, 2002; Jeffries
and Melnik, 2007).
 Essentially, you develop the code incrementally, along with a test for that
increment.
 You don’t move on to the next increment until the code that you have
developed passes its test.
 Test-driven development was introduced as part of agile methods such as
Extreme Programming. However, it can also be used in plan-driven
development processes.
THE FUNDAMENTAL TDD PROCESS
 The steps in the process are as follows:
1. You start by identifying the increment of functionality that is required. This
should normally be small and implementable in a few lines of code.
2. You write a test for this functionality and implement this as an automated test.
This means that the test can be executed and will report whether it has
passed or failed.
3. You then run the test, along with all other tests that have been implemented.
Initially, you have not implemented the functionality so the new test will fail.
This is deliberate as it shows that the test adds something to the test set.
4. You then implement the functionality and re-run the test. This may involve
refactoring existing code to improve it and add new code to what’s already
there.
5. Once all tests run successfully, you move on to implementing the next chunk
of functionality.
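 A minimal illustration of one pass through this cycle, assuming JUnit 4 and a hypothetical Price class as the increment (steps 2 and 4 are marked in comments):

// Step 2: write the automated test first; it fails until the code exists.
import org.junit.Test;
import static org.junit.Assert.*;

public class PriceTest {
    @Test
    public void addsTenPercentTax() {
        assertEquals(110, new Price(100).withTax());  // fails until step 4
    }
}

// Step 4: the smallest implementation that makes the test pass.
class Price {
    private final int amount;
    Price(int amount) { this.amount = amount; }
    int withTax() { return amount + amount / 10; }    // 100 -> 110
}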
BENEFITS OF TEST-DRIVEN DEVELOPMENT
1. Better problem understanding Writing the test first forces you to understand the
specification of the increment in detail before you implement it.
2. Code coverage In principle, every code segment that you write should have at least one
associated test. Therefore, you can be confident that all of the code in the system has
actually been executed. Code is tested as it is written so defects are discovered early in
the development process.
3. Regression testing A test suite is developed incrementally as a program is developed.
You can always run regression tests to check that changes to the program have not
introduced new bugs.
4. Simplified debugging When a test fails, it should be obvious where the problem lies.
The newly written code needs to be checked and modified. You do not need to use
debugging tools to locate the problem. Reports of the use of test-driven development
suggest that it is hardly ever necessary to use an automated debugger in test-driven
development (Martin, 2007).
5. System documentation The tests themselves act as a form of documentation that
describe what the code should be doing. Reading the tests can make it easier to
understand the code.
2. RELEASE TESTING

 Release testing is the process of testing a particular release of a
system that is intended for use outside of the development team.
 Normally, the system release is for customers and users.
 In a complex project, however, the release could be for other teams
that are developing related systems.
 For software products, the release could be for product
management who then prepare it for sale.
RELEASE TESTING VS. SYSTEM TESTING

 There are two important distinctions between release testing and
system testing during the development process:
1. A separate team that has not been involved in the system
development should be responsible for release testing.
2. System testing by the development team should focus on
discovering bugs in the system (defect testing). The objective of
release testing is to check that the system meets its
requirements and is good enough for external use (validation
testing).
RELEASE TESTING PROCESS

 The primary goal of the release testing process is to convince the
supplier of the system that it is good enough for use.
 Release testing has to show that the system delivers its specified
functionality, performance, and dependability, and that it does not
fail during normal use.
 It should take into account all of the system requirements, not just
the requirements of the end-users of the system.
 Release testing is usually a black-box testing process where tests
are derived from the system specification.
A) REQUIREMENTS-BASED TESTING

 A general principle of good requirements engineering practice is that
requirements should be testable; that is, the requirement should be written
so that a test can be designed for that requirement.
 A tester can then check that the requirement has been satisfied.
 Requirements-based testing, therefore, is a systematic approach to test
case design where you consider each requirement and derive a set of tests
for it.
 Requirements-based testing is validation rather than defect testing—you
are trying to demonstrate that the system has properly implemented its
requirements.
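 For example, take the hypothetical requirement ‘The system shall accept only ages between 0 and 130’. A set of tests derived from that one requirement (sketched with JUnit 4 and an assumed isValidAge validator) might be:

import org.junit.Test;
import static org.junit.Assert.*;

public class AgeRequirementTest {
    // Hypothetical validator implementing the requirement above.
    static boolean isValidAge(int age) {
        return age >= 0 && age <= 130;
    }

    @Test public void acceptsLowerBound() { assertTrue(isValidAge(0)); }
    @Test public void acceptsUpperBound() { assertTrue(isValidAge(130)); }
    @Test public void rejectsBelowRange() { assertFalse(isValidAge(-1)); }
    @Test public void rejectsAboveRange() { assertFalse(isValidAge(131)); }
}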
B) SCENARIO TESTING

 Scenario testing is an approach to release testing where you devise typical scenarios of
use and use these to develop test cases for the system.
 A scenario is a story that describes one way in which the system might be used.
 Scenarios should be realistic and real system users should be able to relate to them.
 In a short paper on scenario testing, Kaner (2003) suggests that a scenario test should
be a narrative story that is credible and fairly complex.
 It should motivate stakeholders; that is, they should relate to the scenario and believe
that it is important that the system passes the test. He also suggests that it should be
easy to evaluate.
 If there are problems with the system, then the release testing team should recognize
them.
C) PERFORMANCE TESTING
 Once a system has been completely integrated, it is possible to test for emergent properties,
such as performance and reliability.
 Performance tests have to be designed to ensure that the system can process its intended
load.
 This usually involves running a series of tests where you increase the load until the system
performance becomes unacceptable.
 Performance testing is concerned both with demonstrating that the system meets its
requirements and discovering problems and defects in the system.
 To test whether performance requirements are being achieved, you may have to construct
an operational profile.
 An operational profile is a set of tests that reflect the actual mix of work that will be
handled by the system.
 Therefore, if 90% of the transactions in a system are of type A; 5% of type B; and the
remainder of types C, D, and E, then you have to design the operational profile so that the
vast majority of tests are of type A. Otherwise, you will not get an accurate test of the
operational performance of the system.
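 A sketch of a test generator that honours the 90/5/5 mix from the example (the generator and runTransactionOfType are assumptions, not part of the lesson):

import java.util.Random;

public class OperationalProfile {
    public static void main(String[] args) {
        Random rng = new Random(42);  // fixed seed so the test run is repeatable
        for (int i = 0; i < 1000; i++) {
            int p = rng.nextInt(100);
            if (p < 90)      runTransactionOfType('A');  // 90% of the workload
            else if (p < 95) runTransactionOfType('B');  // 5%
            else             runTransactionOfType('C');  // remainder (C, D, E in practice)
        }
    }

    // Placeholder for submitting one transaction of the given type to the system.
    static void runTransactionOfType(char type) { /* drive the system under test */ }
}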
D) STRESS TESTING

 An effective way to discover defects is to design tests around the limits of the
system.
 In performance testing, this means stressing the system by making demands
that are outside the design limits of the software.
 This is known as ‘stress testing’.
 For example, say you are testing a transaction processing system that is
designed to process up to 300 transactions per second. You start by testing
this system with fewer than 300 transactions per second. You then gradually
increase the load on the system beyond 300 transactions per second until it
is well beyond the maximum design load of the system and the system fails.
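 The ramp described above could look like this sketch, where processOneSecondOfLoad is a hypothetical stand-in for driving the real system:

public class StressRamp {
    public static void main(String[] args) {
        int tps = 250;  // start below the 300 transactions-per-second design limit
        while (true) {
            boolean survived = processOneSecondOfLoad(tps);
            System.out.println(tps + " tps -> " + (survived ? "ok" : "failed"));
            if (!survived) break;  // record the load at which the system fails
            tps += 25;             // gradually increase beyond the design limit
        }
    }

    // Placeholder: submit 'tps' transactions in one second and report whether
    // the system coped. A stub is used here so the sketch terminates.
    static boolean processOneSecondOfLoad(int tps) {
        return tps <= 400;
    }
}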
STRESS TESTING
 This type of testing has two functions:
1. It tests the failure behavior of the system. Circumstances may arise
through an unexpected combination of events where the load
placed on the system exceeds the maximum anticipated load. In
these circumstances, it is important that system failure should not
cause data corruption or unexpected loss of user services. Stress
testing checks that overloading the system causes it to ‘fail-soft’
rather than collapse under its load.
2. It stresses the system and may cause defects to come to light that
would not normally be discovered. Although it can be argued that
these defects are unlikely to cause system failures in normal
usage, there may be unusual combinations of normal
circumstances that the stress testing replicates.
3. USER TESTING

 User or customer testing is a stage in the testing process in which users or
customers provide input and advice on system testing.
 This may involve formally testing a system that has been commissioned
from an external supplier, or could be an informal process where users
experiment with a new software product to see if they like it and that it
does what they need.
 User testing is essential, even when comprehensive system and release
testing have been carried out. The reason for this is that influences from
the user’s working environment have a major effect on the reliability,
performance, usability, and robustness of a system.
TYPES OF USER TESTING

 In practice, there are three different types of user testing:
1. Alpha testing, where users of the software work with the
development team to test the software at the developer’s site.
2. Beta testing, where a release of the software is made available
to users to allow them to experiment and to raise problems that
they discover with the system developers.
3. Acceptance testing, where customers test a system to decide
whether or not it is ready to be accepted from the system
developers and deployed in the customer environment.
A) ALPHA TESTING

 In alpha testing, users and developers work together to test a
system as it is being developed.
 This means that the users can identify problems and issues that are
not readily apparent to the development testing team.
 Developers can only really work from the requirements but these
often do not reflect other factors that affect the practical use of the
software.
 Users can therefore provide information about practice that helps
with the design of more realistic tests.
ALPHA TESTING
 Alpha testing is often used when developing software products that
are sold as shrink-wrapped systems.
 Users of these products may be willing to get involved in the alpha
testing process because this gives them early information about
new system features that they can exploit.
 It also reduces the risk that unanticipated changes to the software
will have disruptive effects on their business.
 However, alpha testing may also be used when custom software is
being developed.
B) BETA TESTING

 Beta testing takes place when an early, sometimes unfinished,
release of a software system is made available to customers and
users for evaluation.
 Beta testers may be a selected group of customers who are early
adopters of the system.
 Alternatively, the software may be made publicly available for use
by anyone who is interested in it.
BETA TESTING

 Beta testing is mostly used for software products that are used in many
different environments (as opposed to custom systems which are
generally used in a defined environment).
 It is impossible for product developers to know and replicate all the
environments in which the software will be used.
 Beta testing is therefore essential to discover interaction problems
between the software and features of the environment where it is used.
 Beta testing is also a form of marketing: customers learn about their
system and what it can do for them.
C) ACCEPTANCE TESTING

 Acceptance testing is an inherent part of custom systems
development.
 It takes place after release testing.
 It involves a customer formally testing a system to decide whether
or not it should be accepted from the system developer.
 Acceptance implies that payment should be made for the system.
 There are six stages in the acceptance testing process.
STAGES IN THE ACCEPTANCE TESTING PROCESS
1. Define acceptance criteria This stage should, ideally, take place
early in the process before the contract for the system is signed.
The acceptance criteria should be part of the system contract and
be agreed between the customer and the developer. In practice,
however, it can be difficult to define criteria so early in the process.
Detailed requirements may not be available and there may be
significant requirements change during the development process.
2. Plan acceptance testing This involves deciding on the resources,
time, and budget for acceptance testing and establishing a testing
schedule. The acceptance test plan should also discuss the
required coverage of the requirements and the order in which
system features are tested. It should define risks to the testing
process, such as system crashes and inadequate performance, and
discuss how these risks can be mitigated.
STAGES IN THE ACCEPTANCE TESTING PROCESS

3. Derive acceptance tests Once acceptance criteria have been established, tests
have to be designed to check whether or not a system is acceptable. Acceptance
tests should aim to test both the functional and non-functional characteristics
(e.g., performance) of the system. They should, ideally, provide complete
coverage of the system requirements. In practice, it is difficult to establish
completely objective acceptance criteria. There is often scope for argument about
whether or not a test shows that a criterion has definitely been met.
4. Run acceptance tests The agreed acceptance tests are executed on the
system. Ideally, this should take place in the actual environment where the
system will be used, but this may be disruptive and impractical. Therefore, a user
testing environment may have to be set up to run these tests. It is difficult to
automate this process as part of the acceptance tests may involve testing the
interactions between end-users and the system. Some training of end-users may
be required.
STAGES IN THE ACCEPTANCE TESTING PROCESS
5. Negotiate test results It is very unlikely that all of the defined
acceptance tests will pass and that there will be no problems with
the system. If this is the case, then acceptance testing is complete
and the system can be handed over. More commonly, some
problems will be discovered. In such cases, the developer and the
customer have to negotiate to decide if the system is good enough
to be put into use. They must also agree on the developer’s
response to identified problems.
6. Reject/accept system This stage involves a meeting between the
developers and the customer to decide on whether or not the
system should be accepted. If the system is not good enough for
use, then further development is required to fix the identified
problems. Once complete, the acceptance testing phase is
repeated.
OTHER TESTING METHODS

 There are various other testing methods that can be conducted on software.
 These tests fall under development, release, or user testing.
 The following slides give a brief explanation of some of the common
testing methods.
DRY RUN TEST

 A dry run (or a practice run) is a testing process where
the effects of a possible failure are intentionally mitigated.
 For example, an aerospace company may conduct a “dry
run” test of a jet's new pilot ejection seat while the jet is
parked on the ground, rather than while it is in flight.
STEPPING THROUGH CODE
 Some debuggers allow the user to step through a program, executing just
one instruction at a time.
 This is sometimes called single-shotting.
 Each time you execute one instruction you can see which path of
execution has been taken. You can also see (or watch) the values of
variables. It is rather like an automated structured walkthrough.
 In this form of testing, you concentrate on the variables and closely check
their values as they are changed by the program to verify that they have
been changed correctly.
 A debugger is usually used for debugging (locating a bug); here it is used
for testing (establishing the existence of a bug).
REGRESSION TESTING

 Similar in scope to a functional test, a regression test allows a
consistent, repeatable validation of each new release of a product or
Web site.
 Such testing ensures reported product defects have been corrected
for each new release and that no new quality problems were
introduced in the maintenance process.
 Though regression testing can be performed manually, an
automated test suite is often used to reduce the time and resources
needed to perform the required testing.
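 With JUnit 4, for instance, the accumulated tests can be bundled into a suite that is re-run automatically on every release (the listed classes echo the earlier sketches in this lesson):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Re-run the whole accumulated test set against each new release; a test that
// passed before and fails now signals a regression.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    BoundedCounterTest.class,
    VotingCheckerTest.class,
    AgeRequirementTest.class
})
public class RegressionSuite { }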
EXHAUSTIVE TESTING

 Exhaustive testing would be to use all possible data values, in each
case checking the correctness of the outcome.
 But even the method that multiplies two 32-bit integers has 2^64
(about 1.8 × 10^19) possible input combinations; at one test per
millisecond (assuming a 1 millisecond integer multiply instruction is
provided by the hardware of the computer), exhaustive testing would
take about 1.8 × 10^16 seconds, which is over 500 million years.
 So exhaustive testing is almost always impracticable.
 These considerations lead us to the unpalatable conclusion that it is
impossible to test any program exhaustively.
 Thus any program of significant size is likely to contain bugs.
SMOKE TESTING

 A quick-and-dirty test that the major functions of a piece of software
work, without bothering with finer details.
 Originated in the hardware testing practice of turning on a new
piece of hardware for the first time and considering it a success if it
does not catch on fire.
COMPATIBILITY TESTING

 Testing to verify a product meets customer specified requirements.
 A customer usually does this type of testing on a product that is
developed externally.
CONFORMANCE TESTING

 Verifying implementation conformance to industry standards.
 Producing tests for the behavior of an implementation to be sure it
provides the portability, interoperability, and/or compatibility a
standard defines.
PRACTICE QUESTIONS
1. Explain why it is not necessary for a program to be completely free of defects before it is delivered
to its customers.
2. Explain why testing can only detect the presence of errors, not their absence.
3. Some people argue that developers should not be involved in testing their own code but that all
testing should be the responsibility of a separate team. Give arguments for and against testing by
the developers themselves.
4. What is regression testing?
5. What do you understand by the term ‘stress testing’? Suggest how you might stress test a system
of your choice.
6. What are the benefits of involving users in release testing at an early stage in the testing process?
Are there disadvantages in user involvement?
7. A common approach to system testing is to test the system until the testing budget is exhausted
and then deliver the system to customers. Discuss the ethics of this approach for systems that are
delivered to external customers.
THE END!!!
PREPARED BY F. ZINYOWERA
