
SOFTWARE TESTING

BICTU1521
SOFTWARE?

Software is defined in terms of


 instructions (computer programs) that when executed provide desired features, function,
and performance;
 data structures that enable the programs to adequately manipulate information;
 descriptive information in both hard copy and virtual forms that describes the operation
and use of the programs.
 Software is characterized by the fact that it does not wear out. Every piece of developed
software needs to be tested before it reaches the hands of its owners, in order to uncover
errors.
SOFTWARE TESTING

 Testing is intended to show that a program does what it is intended to do and to discover
program defects before it is put into use. It involves executing the program with artificial
data. Testing therefore has two goals:
 Demonstrating to the developers and the customers that the software meets its requirements
(validation), and finding inputs where the behaviour of the software is incorrect,
undesirable, or does not conform to its specification (defect testing).
 The key principles that guide software testing are validation and verification.
 Validation asks "Are we building the right product?" while verification asks "Are we
building the product right?" The goal of both is to establish confidence that the software
is fit for purpose.
SOFTWARE INSPECTION

 As well as software testing, the verification and validation process may involve software
inspections and reviews. Inspection mostly focuses on the source code of the system. During
an inspection, reviewers use their knowledge of the system, its application domain, and the
programming language to discover errors.
 Inspection helps in error discovery because, during testing, one error can hide other
errors and lead to misleading outputs. In addition, an incomplete version of a system can be
inspected without additional costs.
 Finally, when searching for program defects, inspection can also consider broader quality
attributes of the system. One can look for inefficiencies, inappropriate algorithms and poor
programming style that could make the system difficult to maintain and update.
STAGES OF SOFTWARE TESTING

 A commercial software system has to go through three stages of testing.


 First, development testing, where the software is tested during development to discover
bugs, with the help of the system programmers.
 After development testing is done, release testing follows. In this stage, a separate team
tests a complete version of the system before it is released to users.
 Finally, user testing, where users of the system test it in their own environment.
1. DEVELOPMENT TESTING

 This includes all testing activities that are carried out by the team developing the
system. The programmer who developed the software usually performs the tests.
 If the system is critical, a more formal process may be used, with a separate testing
group within the development team. This group is responsible for developing tests and
maintaining detailed records of the test results.
Stages of development testing

 Unit testing is a type of software testing that focuses on individual units or components of
a software system.
 The purpose of unit testing is to validate that each unit of the software works as intended
and meets the requirements.
 Developers typically perform unit testing, and it is done early in the development
process, before the code is integrated and tested as a whole system.
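As an illustration, here is a minimal sketch of a unit test written with Python's built-in
unittest module. The apply_discount function and its expected values are hypothetical
examples, not part of any real system:

import unittest

def apply_discount(price, percent):
    # Hypothetical unit under test: reduce price by a percentage.
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or percent")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Expected use: the unit does what it is supposed to do.
        self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        # Defect-revealing case: out-of-range input must raise an error.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Each test validates one small behaviour of the unit, so a failure points directly at the
component concerned.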
A. CHOOSING UNIT TEST CASES

 Testing is expensive and time consuming, so the unit test cases that are chosen must be
effective.
 Effectiveness means the test cases should show that, when used as expected, the component
does what it is supposed to do, and that if there are defects in the component, these will
be revealed by the test cases.
Unit Testing Techniques

 Black Box Testing: This technique covers the unit's input, user interface, and output
parts, without reference to its internal structure.
 White Box Testing: This technique tests the functional behaviour of the unit by giving
inputs and checking the outputs, using knowledge of the internal design structure and code
of the modules.
 Gray Box Testing: This technique is used to execute the relevant test cases, test
methods, and test functions, and to analyse the code performance of the modules.
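To make the contrast concrete, here is a small sketch assuming a hypothetical
classify_grade function. The black-box tests are chosen only from the specification; the
white-box tests are chosen by reading the code so that every branch, including boundaries,
is exercised:

def classify_grade(score):
    # Hypothetical unit: scores of 50 or more pass, others fail.
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Black-box tests: derived from inputs and outputs alone.
assert classify_grade(75) == "pass"
assert classify_grade(10) == "fail"

# White-box tests: derived from the code's structure.
assert classify_grade(50) == "pass"   # boundary of the score >= 50 branch
assert classify_grade(49) == "fail"   # other side of the boundary
try:
    classify_grade(101)               # error-handling branch
except ValueError:
    pass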
B. COMPONENT TESTING

 Component testing is a type of software testing in which the usability of each individual
component is tested. Along with the usability test, a behavioural evaluation is also done
for each individual component.
 To perform this type of testing, each component needs to be in an independent and
controllable state.
 Component testing mainly focuses on testing the component interfaces that provide access
to the component functions.
Types of interfaces

 There are different types of interface between program components, and correspondingly
different interface errors that can occur.
 Parameter interface. Data or sometimes function references are passed from one
component to another in these interfaces. Methods in an object have a parameter interface.
 Shared memory interface. A block of memory is shared between components in these
interfaces. Data is placed in the memory by one subsystem and retrieved from there by
other subsystems. This type of interface is used in embedded systems, where sensors create
data that is retrieved and processed by other system components.
Types of interfaces

 Message-passing interfaces. These are interfaces in which one component requests a
service from another component by passing a message to it. A return message includes the
results of executing the service. Some object-oriented systems have this form of interface,
as do client–server systems.
 Procedural interfaces. These are interfaces in which one component encapsulates a set of
procedures that can be called by other components. Objects and reusable components have
this form of interface.
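As a sketch of a message-passing interface, the following hypothetical example uses
Python's standard queue and threading modules: one component requests a service by placing
a message on a queue, and the called component sends the result back in a return message:

import queue
import threading

requests = queue.Queue()

def adder_service():
    # Called component: handle one request message and send a reply.
    reply_to, a, b = requests.get()
    reply_to.put(a + b)

reply = queue.Queue()
worker = threading.Thread(target=adder_service)
worker.start()
requests.put((reply, 2, 3))   # request message: where to reply, plus data
worker.join()
assert reply.get() == 5       # return message with the service result

A parameter interface, by contrast, would simply be the (a, b) argument list of an
add(a, b) function called directly.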
INTERFACE ERRORS

 Interface misuse. A calling component calls some other component and makes an error in
the use of its interface. This type of error is common in parameter interfaces, where
parameters may be of the wrong type or be passed in the wrong order, or the wrong number
of parameters may be passed.
 Interface misunderstanding. A calling component misunderstands the specification of the
interface of the called component and makes assumptions about its behaviour. The called
component does not behave as expected, which then causes unexpected behaviour in the
calling component.
 Timing errors. These occur in real-time systems that use a shared-memory or a message-
passing interface. The producer of data and the consumer of data may operate at different
speeds.
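The sketch below, using a hypothetical transfer function, shows interface misuse of the
first kind: the caller passes parameters in the wrong order, and because Python does not
check types at the call site, the defect only appears when the component runs:

def transfer(amount, account):
    # Expects a number first, then an account record.
    account["balance"] += amount
    return account["balance"]

acct = {"balance": 100}
transfer(50, acct)                 # correct use: balance becomes 150

try:
    transfer(acct, 50)             # misuse: arguments swapped
except TypeError:
    # The error surfaces inside the component, not at the interface,
    # which is why interface tests should deliberately probe misuse cases.
    pass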
C. SYSTEM TESTING

 System testing during development involves integrating components to create a version of
the system and then testing the integrated system. It checks that all components are
compatible, interact correctly, and transfer the right data at the right time across their
interfaces.
 During system testing, reusable components that have been developed separately may be
integrated with newly developed components, and components developed by different team
members are integrated at this stage.
2. TEST-DRIVEN DEVELOPMENT

 Test-driven development (TDD) is a methodology where testing and code development are
interleaved.
 This approach involves incrementally developing code while simultaneously creating tests
for each increment.
 Progression to the next stage is only allowed once the developed code successfully passes
all associated tests. Initially a part of the XP agile development method, TDD has now
become widely accepted and is utilized in both agile and plan-based software development
processes.
PROCESS STEPS

1. You start by identifying the increment of functionality that is required. This should
normally be small and implementable in a few lines of code.
2. You write a test for this functionality and implement it as an automated test. This means
that the test can be executed and will report whether it has passed or failed.
3. You then run the test, along with all other tests that have been implemented. Initially, you
have not implemented the functionality so the new test will fail. This is deliberate as it
shows that the test adds something to the test set.
4. You then implement the functionality and re-run the test. This may involve refactoring
existing code to improve it and adding new code to what is already there.
5. Once all tests run successfully, you move on to implementing the next chunk of
functionality.
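A minimal sketch of one TDD cycle, assuming a hypothetical count_words increment:

# Steps 1-2: identify a small increment and write its automated test first.
def test_counts_words():
    assert count_words("to be or not to be") == 6

# Step 3: running the test now fails, because count_words does not exist
# yet; this deliberate failure shows the test adds something to the set.

# Step 4: implement just enough functionality to make the test pass.
def count_words(text):
    return len(text.split())

# Step 5: re-run all tests; when they pass, move to the next increment.
test_counts_words()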
Advantages of TDD

 Code coverage. Every code segment that you write should have at least one associated
test.
 Regression testing. A test suite is developed incrementally as a program is developed.
 Simplified debugging. When a test fails, it is easier to identify where the problem lies.
 System documentation. The tests themselves act as a form of documentation that describes
what the code should be doing.
Disadvantages of Test-Driven Development

 Time-consuming: Writing tests before writing code can take more time, and writing tests
requires a certain level of skill and expertise, which can slow down the development
process if team members are not experienced in writing tests.
 Limited scope: TDD is most effective when used to test small, discrete units of code. It
may be less effective when used to test larger, more complex systems or systems that
interact with external dependencies.
 Over-reliance on tests: Developers may become so focused on writing tests that they
forget about other aspects of the development process, such as design or performance.
Additionally, developers may become complacent if all tests pass, assuming that the code is
free of defects or errors.
 Difficulty in testing user interfaces: User interfaces are often complex and require a high
level of interactivity, making them difficult to test using automated tests. In these cases,
additional manual testing may be necessary to ensure that the user interface meets the
customer's requirements.
3. RELEASE TESTING

 Release testing is the process of testing a particular release of a system that is intended for
use outside of the development team.
 Normally, the system release is for customers and users. In a complex project, however, the
release could be for other teams that are developing related systems. For software products,
the release could be for product management who then prepare it for sale.
 There are two important distinctions between release testing and system testing during
development:
 The system development team should not be responsible for release testing, and release
testing is a process of validation checking to ensure that a system meets its requirements
and is good enough for use by system customers.
 The primary goal of the release testing process is to convince the supplier of the system
that it is good enough for use.
 Release testing is usually a black-box testing process where tests are derived from the
system specification. The system's behaviour can only be determined by studying its inputs
and the related outputs.
CATEGORIES OF RELEASE TESTING

 Requirements-based testing. A general principle of good requirements engineering practice
is that requirements should be testable. Requirements-based testing is validation rather
than defect testing; it involves trying to demonstrate that the system has properly
implemented its requirements.
 SCENARIO TESTING. Scenario testing is an approach to release testing whereby you devise
typical scenarios of use and apply these scenarios to develop test cases for the system. A
scenario is a story that describes one way in which the system might be used.
 Performance Testing. This test is designed to ensure that the system can process its
intended load. It involves running a series of tests where you increase the load until the
system's performance becomes unacceptable. To test system performance successfully, one
also has to perform stress testing.
 Stress testing helps to test the failure behaviour of the system. Circumstances may arise,
through an unexpected combination of events, where the load placed on the system exceeds
the maximum anticipated load. Finally, stress testing reveals defects that only show up
when the system is fully loaded (a simple load-test sketch follows this list).
 USER TESTING. User testing is the stage in the testing process in which users provide
input and advice on system testing. Influences from the users' working environment can have
a major effect on the reliability, performance, usability, and robustness of a system.
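Referring back to the performance and stress testing items above, the sketch below shows
the shape of such a test in Python: the load is increased step by step and the response
time measured until it crosses an acceptable threshold. The handle_request stand-in and the
2-second threshold are hypothetical assumptions:

import time

def handle_request(payload):
    return sorted(payload)            # stand-in for real request processing

def measure(load):
    start = time.perf_counter()
    for _ in range(load):
        handle_request(list(range(1000, 0, -1)))
    return time.perf_counter() - start

for load in (100, 1000, 10000):       # stepwise load increase
    elapsed = measure(load)
    print(f"load={load:>6} requests -> {elapsed:.3f}s")
    if elapsed > 2.0:                 # assumed acceptability threshold
        print("stress point: performance degrades beyond this load")
        break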
TYPES OF USER TESTING

 Alpha Testing, which involves a selected group of software users working closely with the
development team to test early releases of the software. This is often used when developing
software products or apps.
 Beta Testing, where a release of the software is made available to a larger group of users
to allow them to experiment and to raise the problems that they discover with the system
developers. It is mostly used for software products that are used in many different settings.
 Finally, Acceptance Testing, where customers test a system to decide whether it is ready
to be accepted from the system developers and deployed in the customer environment.
STAGES OF ACCEPTANCE TESTING

 Define the acceptance criteria. This should take place early in the process, before the
contract for the system is signed. The acceptance criteria should be part of the system
contract and be approved by the customer and the developer.
 Plan the acceptance testing. This stage involves deciding on the resources, time, and
budget for acceptance testing and establishing a testing schedule. The plan should also
define the required coverage of the requirements and the order in which system features are
tested.
 Derive acceptance tests. Once acceptance criteria have been established, tests have to be
designed to check whether a system is acceptable. Acceptance tests should aim to test both
the functional and non-functional characteristics (e.g., performance) of the system.
 Run acceptance tests. The agreed acceptance tests are executed on the system. Ideally, this
step should take place in the actual environment where the system will be used, but this may
be disruptive and impractical. Therefore, a user testing environment may have to be set up
to run these tests.
 Negotiate test results. It is very unlikely that all of the defined acceptance tests will
pass and that there will be no problems with the system. If all tests do pass, acceptance
testing is complete and the system can be handed over; otherwise, the developer and the
customer negotiate whether the system is good enough to be put into use.
 Reject/accept system. This stage involves a meeting between the developers and the
customer to decide on whether or not the system should be accepted. If the system is not
good enough for use, then further development is required to fix the identified problems.
Once complete, the acceptance-testing phase is repeated.
4. WHITE-BOX TESTING

 White-box testing, sometimes called glass-box testing or structural testing, is a test-case
design philosophy that uses knowledge of the control structures described as part of
component-level design to derive test cases. White-box testing of software is predicated on
close examination of procedural implementation details and data structure implementation
details. White-box tests can be designed only after component-level design (or source code)
exists.

 Using white-box testing methods, you can derive test cases that (1) guarantee that all
independent paths within a module have been exercised at least once, (2) exercise all
logical decisions on their true and false sides, (3) execute all loops at their boundaries and
within their operational bounds, and (4) exercise internal data structures to ensure their
validity.
Types

 Path Testing
 Basis path testing is a white-box testing technique based on the control structure of a
program or a module.
 Using this structure, a control flow graph is prepared, and the various possible paths
present in the graph are executed as part of the testing. By definition, basis path testing
is a technique of selecting the paths in the control flow graph that provide a basis set of
execution paths through the program or module.
 Since this testing is based on the control structure of the program, it requires complete
knowledge of the program's structure.
To design test cases using this technique, four steps are followed, as illustrated in the
sketch after the list:
1. Construct the Control Flow Graph
2. Compute the Cyclomatic Complexity of the Graph
3. Identify the Independent Paths
4. Design Test cases from Independent Paths
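As a worked sketch (with a hypothetical grade function), consider how the four steps apply:

def grade(score):
    # Step 1-2: two simple decisions give a cyclomatic complexity of
    # 2 + 1 = 3 (equivalently, V(G) = E - N + 2 on the control flow graph).
    if score >= 70:
        return "A"
    if score >= 50:
        return "B"
    return "C"

# Step 3: the three independent paths through the control flow graph:
#   path 1: score >= 70       -> "A"
#   path 2: 50 <= score < 70  -> "B"
#   path 3: score < 50        -> "C"

# Step 4: one test case per independent path.
assert grade(85) == "A"
assert grade(60) == "B"
assert grade(30) == "C"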
B. CONTROL STRUCTURE TESTING

 Condition testing is a test-case design method that exercises the logical conditions
contained in a program module.
 Data flow testing selects test paths of a program according to the locations of definitions
and uses of variables in the program.
 Loop testing is a white-box testing technique that focuses exclusively on the validity of
loop constructs.
 Two types of loops are considered here: simple loops and nested loops.
Steps involved in nested loops

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or
excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.
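A short sketch of these steps, assuming a hypothetical grid_total unit with one nested loop:

def grid_total(rows, cols):
    total = 0
    for r in range(rows):         # outer loop
        for c in range(cols):     # inner loop
            total += r * c
    return total

# Steps 1-2: hold the outer loop at its minimum and run simple-loop tests
# (zero, one, two, typical, and many iterations) on the inner loop.
for cols in (0, 1, 2, 5, 100):
    grid_total(1, cols)

# Step 3: work outward, exercising the outer loop while the inner loop is
# held at a typical value.
for rows in (0, 1, 2, 5, 100):
    grid_total(rows, 2)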
5. TEST-CASE DRIVEN DEVELOPMENT

 This involves designing the unit test cases before you develop the code for a component.
This ensures that the code you develop will pass the tests you have already written. The
module interface is tested to ensure that information properly flows into and out of the
program unit under test.
 Requirements and Use Cases
 This involves a requirements gathering process in which you work with customers to
generate user stories that developers can refine into formal use cases and analysis models.
These models can be used to guide the systematic creation of test cases that do a good job
of testing the functional requirements of each software component and provide good test
coverage overall.
Traceability

 To ensure that the testing process is auditable, each test needs to be traceable back to
specific functional or non-functional requirements. In addition, non-functional requirements
often need to be traceable to specific architectural requirements.
NEXT: SOFTWARE QUALITY!
