Unit 4

Levels of Testing
Based on the following textbooks:
Ilene Burnstein, Practical Software Testing: A Process-Oriented Approach, Springer, 2003. (Chapter 6)
Ian Sommerville, Software Engineering, 9th Edition, 2011 (Chapter 8)
Outline
• Introduction
• Unit Testing
• Integration Testing
• System Testing
• Regression Testing
• Alpha, Beta, and Acceptance Testing
• The Special Role of Use Cases
Introduction
• Execution-based software testing is usually carried out at different levels:
• Unit test
• Integration test
• System test
• Acceptance test
Introduction
• At each level there are specific testing goals:
• At the unit test level a single component is tested: the main goal is to detect functional and
structural defects in the component.
• Both white box and black box test strategies can be used for test case design at the unit test level.
• At the integration level several components are tested as a group: the tester investigates
component interactions.
• Proper interaction at the component interfaces is of special interest at the integration test level.
• At the system level the system as a whole is tested: the tester looks for defects but also
evaluates quality-related attributes such as performance and usability.
Introduction
• If the system is being custom made for an individual client, then the next step
following system test is acceptance test.
• Software developed for the mass market (i.e., shrink-wrapped software) often
goes through a series of tests called alpha and beta tests.
• Alpha tests bring potential users to the developer’s site to use the software.
• Beta tests send the software out to potential users who use it under real-world conditions
and report defects to the developing organization.
Introduction
• The approach used to design and develop a software system has an impact on
how testers plan and design suitable tests.
• There are two major approaches to system development: bottom-up and top-down.
• These approaches are supported by two major types of programming languages: procedure-
oriented and object-oriented.
Unit Testing
• Definition: A software unit is the smallest possible testable software component.
• For example, a unit in a typical procedure-oriented software system is a function (or
procedure) implemented in a procedural (imperative) programming language.
• A unit in an object-oriented system corresponds to a method or class.
• Since a unit is relatively small and simple in function, it is easier to design, execute,
record, and analyze its tests.
Unit Testing
• Unit testing should be planned. Planning includes:
‐ Designing tests to reveal defects.
‐ Allocating resources.
‐ Executing the test cases and recording and analyzing results.
• A general unit test plan should be prepared.
• It should be developed in conjunction with the master test plan and the project plan for each
project.
• Test case design at the unit level can be based on use of the black and white box
test design strategies.
Unit Testing
• Class Discussions:
• Trade-offs in selecting the component to be considered for unit testing in object-oriented
systems (method as a unit vs a class as a unit).
• Special issues relating to unit testing using a class as the selected component.
Unit Testing
• In addition to developing the test cases, support code must be developed to exercise each
unit and to connect it to the outside world.
• The auxiliary code developed to support testing of units and components is called a test harness.
• The harness consists of drivers that call the target code and stubs that represent the modules it calls.
• The drivers and stubs must be tested themselves to ensure that they work properly and that
they are reusable for subsequent releases of the software.
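As a minimal sketch (not from the slides), a harness for a hypothetical unit compute_discount might pair a driver with a stub standing in for a pricing service the unit normally calls; all names here are made up for illustration:

    # Hypothetical unit under test: applies a discount fetched from a collaborator.
    def compute_discount(order_total, pricing_service):
        rate = pricing_service.discount_rate(order_total)   # call into the collaborator
        return round(order_total * (1 - rate), 2)

    # Stub: stands in for the real pricing service the unit calls.
    class PricingServiceStub:
        def discount_rate(self, order_total):
            return 0.10 if order_total >= 100 else 0.0      # fixed, predictable answers

    # Driver: calls the target code with test inputs and checks the results.
    def driver():
        stub = PricingServiceStub()
        assert compute_discount(200.0, stub) == 180.0       # discount branch
        assert compute_discount(50.0, stub) == 50.0         # no-discount branch
        print("all unit tests passed")

    driver()

The stub keeps the unit isolated from its real collaborator, so a test failure points at the unit itself rather than at code it depends on.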
Unit Testing
• Test logs are documents that can be used to record the results of test cases.
• A simple form can be used to record the results of testing a unit (a sample layout is sketched below).
• These forms can also be included in the test summary report.
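The form itself did not survive the conversion to text; as a hedged reconstruction, a minimal unit test log might record fields such as these (the entries are made up):

    Test case ID | Date       | Inputs         | Expected result | Actual result | Pass/Fail
    TC-01        | 2024-01-15 | total = 200.00 | 180.00          | 180.00        | Pass
    TC-02        | 2024-01-15 | total = -5.00  | error message   | crash         | Fail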
Unit Testing
• When a unit fails a test there may be several reasons for the failure.
• The most likely reason for the failure is a fault in the unit implementation (the
code).
• Other causes include faults in the unit testing itself:
• Faults in the test case specification
• Faults in the test harness
• Faults in the test environment
• The causes of the failure should be recorded in the test summary report.
• When a unit has been completely tested and finally passes all of the required
tests it is ready for integration.
Integration Testing
• Integration test has two major goals:
• To detect defects that occur on the interfaces of units.
• To assemble the individual units into working subsystems and finally a complete system that
is ready for system test.
• Integration test should be performed on units that have been reviewed and have
successfully passed unit testing.
• Integration testing works best as an iterative process: one unit at a time is
integrated into a set of previously integrated modules which have passed a set of
integration tests.
Integration Strategies for Procedural/Functional-Oriented Systems
• Two major integration strategies:
• Top-down: the top modules are tested first.
• Stubs need to be implemented.
• Bottom-up: the lowest-level modules are tested first.
• Drivers need to be implemented.
• A structure chart (call graph) can be used to plan the order of integration of the
modules.
Integration Strategies for Procedural/Functional-Oriented Systems
• Integration test coverage criteria (the first two are illustrated in the sketch below):
• All modules in the graph should be executed at least once (all nodes covered).
• All calls should be executed at least once (all edges covered).
• All descending sequences of calls should be executed at least once (all paths covered).
• The test planner should take into account risk factors associated with each
module and plan the order of integration accordingly.
• The test planner should consult the project plan to determine the availability of
the modules necessary for integration testing.
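A small sketch (not from the slides) of how a structure chart can drive integration planning; the call graph below is hypothetical. A post-order walk gives a bottom-up integration order, and the node and edge lists correspond to the first two coverage criteria above:

    # Hypothetical structure chart: module -> modules it calls.
    call_graph = {
        "main":     ["parse", "report"],
        "parse":    ["tokenize"],
        "report":   ["format"],
        "tokenize": [],
        "format":   [],
    }

    # All-nodes coverage: every module executed at least once.
    nodes = list(call_graph)

    # All-edges coverage: every call executed at least once.
    edges = [(caller, callee) for caller, callees in call_graph.items()
             for callee in callees]

    # Bottom-up order: integrate a module only after everything it calls.
    def bottom_up_order(graph):
        order, seen = [], set()
        def visit(m):
            if m in seen:
                return
            seen.add(m)
            for callee in graph[m]:
                visit(callee)
            order.append(m)          # post-order: callees come first
        for m in graph:
            visit(m)
        return order

    print(bottom_up_order(call_graph))
    # ['tokenize', 'parse', 'format', 'report', 'main']

Reversing the same walk would give a top-down order, with stubs standing in for the not-yet-integrated callees.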
Integration Strategies for Object-Oriented Systems
• Definition: A cluster consists of classes that are related, for example, they may
work together (cooperate) to support a required functionality for the complete
system.
• Clusters are somewhat analogous to small subsystems in procedural-oriented systems.
Integration Strategies for Object-Oriented Systems
• To integrate an object-oriented system using the cluster approach, a tester could select
clusters of classes that work together to support a simple function as the first to be
integrated. These are then combined to form higher-level, or more complex, clusters that
perform multiple related functions, until the system as a whole is assembled.
Interface Testing
• The test cases are not applied to the individual components but rather to the interface of
the composite component created by combining these components.
• Interface errors in the composite component may not be detectable by testing the individual
objects, because these errors result from interactions between the objects in the component.
Interface Testing
• Types of interface between components:
• Parameter interfaces: These are interfaces in which data are passed from one component to
another.
• Shared memory interfaces: These are interfaces in which a block of memory is shared
between components.
• Procedural interfaces: These are interfaces in which one component encapsulates a set of
procedures that can be called by other components.
• Message passing interfaces: These are interfaces in which one component requests a service
from another component by passing a message to it. A return message includes the results of
executing the service.
Interface Testing
• Interface errors fall into three classes:
• Interface misuse: A calling component calls some other component and makes an error in the
use of its interface.
• Interface misunderstanding: A calling component misunderstands the specification of the
interface of the called component and makes assumptions about its behavior.
• Timing errors: These occur in real-time systems that use shared memory or message-passing
interfaces. The producer of data and the consumer of data may operate at different speeds.
Designing Integration Tests
• Integration tests can be designed using a black or white box
approach.
• Since many errors occur at module interfaces, test designers need to focus on exercising
all input/output parameter pairs and all calling relationships.
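A hedged example of an integration test aimed at an interface error: the two components below are hypothetical, and the caller misuses the callee's parameter interface by swapping argument order, a defect that unit tests of either component alone would miss.

    # Callee: expects (amount, rate) in that order.
    def apply_tax(amount, rate):
        return amount * (1 + rate)

    # Caller: interface misuse -- arguments passed in the wrong order.
    def checkout_total(amount):
        return apply_tax(0.05, amount)   # bug: should be apply_tax(amount, 0.05)

    # Integration test exercising the input/output parameter pair across the interface.
    def test_checkout_total():
        assert abs(checkout_total(100.0) - 105.0) < 1e-9, "interface misuse detected"

    test_checkout_total()   # raises AssertionError, exposing the interface defect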
Integration Test Planning
• Integration test must be planned.
• Planning can begin when high-level design is complete so that the system
architecture is defined.
• Documents relevant to integration test planning include the requirements
document, the user manual, and usage scenarios.
• The strategy for integration should be defined.
• The order of integration of units should be defined.
• Subsystems that represent key features, critical features, and/or user-oriented functions may
be prioritized when planning for integration testing.
System Test
• When integration tests are completed and the software system is assembled,
testers can begin to test it as a whole.
• System test planning should begin at the requirements phase with the
development of a master test plan and requirements-based (black box) tests.
• System test evaluates both functional behavior and quality requirements.
• After system test the software will be turned over to users for evaluation during
acceptance test or alpha/beta test.
• System test is performed by a team of testers. The best scenario is for the team
to be part of an independent testing group.
• Good system test is essential for high software quality.
Types of System Tests
• Functional testing
• Performance testing
• Stress testing
• Configuration testing
• Security testing
• Recovery testing
• Usability testing
• Reliability testing
Types of System Tests
• Not all software systems need to undergo all the types of system testing.
• The original slide includes a figure showing some of the documents useful for system test design.
• Use cases are helpful for system test design.
• Paper and on-line forms are helpful for system test. Some are used to ensure coverage of
all the requirements (e.g., the Requirements Traceability Matrix). Others, like test logs,
support record keeping for test results.
Types of System Test
• We consider two types of requirements in a requirements document:
• Functional requirements state what functions the software should perform.
• Quality requirements are nonfunctional in nature but describe quality levels expected for the
software. Example quality requirements include performance, usability, and security
requirements.
Functional Testing
• Functional tests at the system level are used to ensure that the behavior of the
system adheres to the functional requirements specification.
• Functional tests are black box in nature.
• Many of the system-level tests, including functional tests, should be designed at
requirements specification time and be included in the master and system test plans.
• If a failure is observed, a formal test incident report should be completed and
returned with the test log to the developers for code repair.
Performance Testing
• The goal of system performance test is to see if the software meets the
performance requirements.
• Performance objectives must be articulated clearly in the requirements
documents. The objectives must be quantified.
• For example, a requirement that the system return a response to a query in “a reasonable
amount of time” is not an acceptable requirement.
• Resources for performance testing must be allocated in the system test plan.
Performance Testing
• An important tool for implementing system tests is a load generator.
• A load is a series of inputs that simulates a group of transactions.
• A transaction is a unit of work seen from the system user’s view.
• A transaction consists of a set of operations that may be performed by a person, software system, or a
device that is outside the system.
• A use case can be used to describe a transaction.
• A load can be a real load or can be synthetic (i.e., produced by tools called load generators).
• Load generators can be simple tools that output a fixed set of predetermined transactions
or they can be complex tools that use statistical patterns to generate input data or simulate
complex environments.
• Users of the load generators can usually set various parameters. Usage profiles and sets of
use cases can be used to set up loads for use in performance, stress, security, and other
types of system test.
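As a minimal sketch (not from the slides), a synthetic load generator can replay a series of transactions against a system-under-test function and time the responses; the transaction mix and the system_under_test callable here are hypothetical stand-ins.

    import random
    import time

    def system_under_test(transaction):
        # Hypothetical stand-in for the real system; the sleep simulates processing.
        time.sleep(random.uniform(0.001, 0.005))
        return "ok"

    # A load: a series of inputs that simulates a group of transactions.
    transactions = ["create_order", "query_order", "cancel_order"]

    def run_load(num_transactions, response_time_goal=0.010):
        slow = 0
        for _ in range(num_transactions):
            tx = random.choice(transactions)        # simple usage profile
            start = time.perf_counter()
            system_under_test(tx)
            if time.perf_counter() - start > response_time_goal:
                slow += 1
        print(f"{slow}/{num_transactions} transactions missed the {response_time_goal}s goal")

    run_load(100)

A quantified performance requirement (e.g., "95% of queries answered within 10 ms") then maps directly onto the counter this loop maintains.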
Stress Testing
• When a system is tested with a load that causes it to allocate its resources in
maximum amounts, this is called stress testing.
• The goal of stress testing is to try to break the system, that is, to find the circumstances
under which it will crash.
• Stress testing is important because it can reveal defects in real-time and other
types of systems, as well as weak areas where poor design could cause
unavailability of service.
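A hedged sketch of one way to structure a stress run: keep increasing the load until the system fails, then record the breaking point. The run_load_at function and its capacity limit below are toys for illustration.

    def stress(run_load_at, start=100, factor=2, ceiling=1_000_000):
        # Ramp the load geometrically until the system raises an error or the cap is hit.
        load = start
        while load <= ceiling:
            try:
                run_load_at(load)          # hypothetical: drive the system at `load` tx/s
            except Exception as exc:
                print(f"system broke at ~{load} tx/s: {exc}")
                return load
            load *= factor
        print("no failure observed up to the ceiling")
        return None

    def run_load_at(load):
        if load > 800:                     # toy capacity limit for illustration
            raise RuntimeError("resource exhausted")

    stress(run_load_at)                    # reports a break between 800 and 1600 tx/s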
Configuration Testing
• Typical software systems interact with hardware devices such as disk drives and printers.
• Many software systems also interact with multiple CPUs.
• Embedded software (software that controls real-time processes) also interacts with devices.
• In many cases, users require that devices be interchangeable, removable, or
reconfigurable.
• Very often the software will have a set of commands, or menus, that allows users
to make these configuration changes.
• Configuration testing allows developers/testers to evaluate system performance
and availability when hardware exchanges and reconfigurations occur.
Recovery Testing
• Recovery testing subjects a system to losses of resources in order to determine if
it can recover properly from those losses.
• This type of testing is especially important for transaction systems, for example, on-line
banking software.
Security Testing
• Designing and testing software to ensure that it is secure is a big issue facing software
developers and test specialists.
• Security testing evaluates system characteristics that relate to the availability,
integrity, and confidentiality of system data and services.
• If security is an especially important issue, the best approach, if resources permit, is to
hire a so-called “tiger team”: an outside group of penetration experts who attempt to breach
the system’s security.
Security Testing
• Developers try to ensure the security of their systems through the use of
protection mechanisms such as passwords, encryption, virus checkers, and the
detection and elimination of trap doors.
• Protection from security attacks must be addressed at design time.
• Examples of areas to focus on during security testing:
• Password checking
• Legal and illegal entry with passwords
• Password expiration
• Encryption
• Authorization
• Trap doors and viruses
Security Testing
• Example Attack - SQL Injection:
• SQL injection attacks target SQL databases behind weak code that does not adequately filter,
type-check, or safely execute user input.
• Attackers can use this vulnerability to execute database queries that collect sensitive
information, modify database entries, or attach malicious code, resulting in total compromise
of the most sensitive data.
• Penetration testers should test relevant web applications for the various vulnerabilities
and flaws that can be exploited to perform SQL injection attacks.
• Sample query modifiers:
• Username field: blah' or 1=1 --
• Username field: blah';insert into login values ('john','apple123'); --
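A hedged sketch of the vulnerability and its standard fix, using Python's built-in sqlite3 module (the schema and credentials are made up). The first query concatenates user input, so the blah' or 1=1 -- modifier above bypasses the password check; the second uses parameterized placeholders, which neutralizes it.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE login (username TEXT, password TEXT)")
    conn.execute("INSERT INTO login VALUES ('alice', 's3cret')")

    username, password = "blah' or 1=1 --", "anything"

    # VULNERABLE: user input concatenated directly into the SQL statement.
    query = f"SELECT * FROM login WHERE username = '{username}' AND password = '{password}'"
    print(conn.execute(query).fetchall())       # returns alice's row: login bypassed

    # SAFE: parameterized query; input is treated as data, never as SQL.
    safe = "SELECT * FROM login WHERE username = ? AND password = ?"
    print(conn.execute(safe, (username, password)).fetchall())   # returns []

The -- in the injected input comments out the password clause, so the vulnerable WHERE condition collapses to username='blah' or 1=1, which is true for every row.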
Security Testing
• To identify security threats, use a threat modeling framework such as STRIDE:
• Spoofing of identity
• Tampering with inputs
• Repudiation of actions
• Information disclosure
• Denial of service
• Escalation of privileges

This part of the slides is based on Chapter 7 of the following book: Full Stack Testing by G. Mohan, 1st Edition, 2022.
Security Testing
• Threat Modeling Steps:
• Define the feature
• Define the assets
• Black hat thinking
• Prioritize the threats and capture stories
Security Testing
• Threat modeling exercise in class (sample order management system)
• Example abuser user stories:
• As an abusive user, I should not be able to see customer details even if I gain access to the
database.
• As an abusive user, if I get access to the system administrator’s or customer service
executive’s login credentials, I should not be able to edit orders.
• Example Security test cases from the threat model:
• In the UI Layer:
• Verify that the user credentials are locked after a set number of failed login attempts.
• In the DB Layer:
• Verify that passwords are stored as hashes.
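A hedged sketch of the DB-layer test above, assuming a hypothetical users table: the check asserts that whatever is stored in the password column is a salted hash (here PBKDF2 via Python's hashlib), never the plaintext.

    import hashlib
    import os
    import sqlite3

    def store_user(conn, username, password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        conn.execute("INSERT INTO users VALUES (?, ?, ?)",
                     (username, salt.hex(), digest.hex()))

    def test_passwords_stored_as_hashes():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (username TEXT, salt TEXT, password TEXT)")
        store_user(conn, "john", "apple123")
        stored = conn.execute("SELECT password FROM users").fetchone()[0]
        assert stored != "apple123", "plaintext password found in the database"
        assert len(bytes.fromhex(stored)) == 32, "expected a 256-bit PBKDF2 digest"

    test_passwords_stored_as_hashes()
    print("password storage test passed")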
Usability Testing
• Usability is a quality factor that is related to the effort needed to learn, operate,
prepare input, and interpret the output of a computer program.
• Usability is a complex quality factor and can be decomposed according to IEEE Std 1061-1992
into the subfactors:
• Understandability
• Ease of learning
• Operability
• Communicativeness
Reliability Testing
• As software plays a more critical role in society, the demand for its proper functioning,
with an absence of failures over long periods of time, increases.
• Users/clients require software products that produce consistent and expected
results over long periods of use.
• Software reliability is the ability of a system or component to perform its required
functions under stated conditions for a specified period of time.
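The slides stop at this definition; as a standard textbook addition (not from the slides), reliability over time is often modeled with an exponential failure law with constant failure rate λ:

    R(t) = e^(−λt),   MTTF = 1/λ

For example, a component with λ = 0.001 failures per hour has R(1000) = e^(−1) ≈ 0.37, i.e., about a 37% chance of running 1000 hours without failure.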
Regression Testing
• Regression testing is not a level of testing; it is the retesting of software that occurs
when changes are made, to ensure that the new version of the software has retained the
capabilities of the old version and that no new defects have been introduced by the changes.
• Regression testing can occur at any level of test.
• Regression tests are especially important when multiple software releases are
developed.
Acceptance Tests
• When software is being developed for a specific client, acceptance tests are
carried out after system testing.
• Based on the outcome of acceptance tests, the clients will determine if the
software meets their requirements.
• Contractual obligations can be satisfied if the client is satisfied with the software.
• Development organizations will often receive their final payment when acceptance tests have
been passed.
• Acceptance tests are based on requirements. The user manual is an additional
source of test cases.
• The software must run under real-world conditions on operational hardware and
software. Conditions should be typical for a working day.
Acceptance Tests
[Figure: the acceptance test process]
Acceptance Test
• After acceptance testing, the client will point out to the developers which requirements
have/have not been satisfied.
• Some requirements may be deleted, modified, or added due to changing needs.
• If the client is satisfied that the software is usable and reliable, and they give their
approval, then the next step is to install the system at the client’s site.
• If the client’s site conditions are different from those at the developers’ site, the
developers must set up the system so that it can interface with the client’s software and hardware.
• Retesting may have to be done to ensure that the software works as required in the client’s
environment. This is called installation test.
Alpha and Beta Tests
• If the software has been developed for the mass market (shrink-wrapped software), then it
often undergoes two stages of acceptance test:
1. Alpha test: a cross-section of potential users are invited to use the software. Developers
observe the users and note problems. This test takes place at the developer’s site.
2. Beta test: the software is sent to a cross-section of users who install it and use it under
real-world working conditions. The users send records of problems to the development
organization where the defects are repaired.
The Special Role of Use Cases
Scenario Testing
• Use case models can be very useful in designing test cases.
• A use case describes a typical interaction between the software system and a
user.
• A use case scenario is initiated by an actor.
• The interaction can be described using textual description and can be depicted as a diagram.
• All the events that occur and the system’s responses to the events are part of the textual
description (scenario script).
• The use cases can be refined to include exception conditions.
• Use cases are useful to testers at the integration or system level in addition to
acceptance testing.
• Testing goals can be defined in terms of the coverage of the scenarios in the use cases.
• For example: covering the typical scenarios in each use case.
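A hedged sketch of turning a use-case scenario into a test: the steps below follow a hypothetical "place order" use case, and a single test drives the system through the typical flow, matching the coverage goal above.

    # Hypothetical system under test, reduced to a tiny in-memory order service.
    class OrderService:
        def __init__(self):
            self.orders = {}
            self.next_id = 1

        def place_order(self, item, quantity):
            order_id = self.next_id
            self.next_id += 1
            self.orders[order_id] = {"item": item, "quantity": quantity, "status": "placed"}
            return order_id

        def status(self, order_id):
            return self.orders[order_id]["status"]

    # Scenario test: one test case covering the typical flow of the use case.
    def test_place_order_typical_scenario():
        service = OrderService()
        order_id = service.place_order("widget", 3)     # actor initiates the use case
        assert service.status(order_id) == "placed"     # system response checked per step

    test_place_order_typical_scenario()

Exceptional scenarios (e.g., an invalid quantity) would each get their own test case derived from the use case's exception conditions.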
Scenario Testing
• Scenario testing is an approach to testing where you devise typical scenarios of
use and use these to develop test cases for the system.
Scenario Testing
[The slide shows a set of inputs for a sample scenario.]
• This set of inputs can be used in designing a single test case to cover a typical flow of interaction.
• What about exceptional cases?
Scenario Testing
• Discuss several examples in class.
Requirements-based Testing
• The requirements should be testable:
• The requirement should be written so that a test can be designed for the requirement. Thus,
a tester can then check that the requirement has been satisfied.
• Requirements-based testing is a systematic approach to test case design where you consider
each requirement and derive a set of tests for it.
Requirements-based Testing - Example
• Consider these example requirements:
Requirements-based Testing - Example
• To check if these requirements have been satisfied, you may need to develop
several related tests:
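The example requirements and their derived tests appear as figures in the original slides and did not survive conversion to text. As a purely hypothetical substitute, take the requirement "the system shall lock a user account after three consecutive failed login attempts"; several related tests can be derived from it, probing below, at, and around the boundary:

    # Hypothetical implementation fragment for the requirement.
    class AccountGuard:
        def __init__(self, limit=3):
            self.failures, self.limit, self.locked = 0, limit, False

        def record_login(self, success):
            if self.locked:
                return "locked"
            if success:
                self.failures = 0
                return "ok"
            self.failures += 1
            if self.failures >= self.limit:
                self.locked = True
            return "locked" if self.locked else "retry"

    # Tests derived from the requirement.
    def test_two_failures_do_not_lock():
        g = AccountGuard()
        assert g.record_login(False) == "retry"
        assert g.record_login(False) == "retry"

    def test_three_failures_lock():
        g = AccountGuard()
        for _ in range(3):
            g.record_login(False)
        assert g.locked

    def test_success_resets_the_count():
        g = AccountGuard()
        g.record_login(False); g.record_login(False)
        g.record_login(True)                 # a success resets the failure count
        assert g.record_login(False) == "retry"

    for t in (test_two_failures_do_not_lock, test_three_failures_lock,
              test_success_resets_the_count):
        t()
    print("requirement-derived tests passed")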
