Chapter - 2 Software Testing

1

2. Introduction to Software Testing Approach

Outline:

o Introduction to Software Testing

o Role of Software specification

o Software Verification and Validation

o Software Testing Life Cycle (STLC)

2
2.1 What is Software Testing?

Testing

 Testing is the process of executing software in a controlled manner, to answer the question: “does the software behave as specified?”
 Implies that we have a specification

 Testing is often associated with the words validation and verification

3
2.1 What is Software Testing?

 Software testing is a process of evaluating a software product or application to ensure

that it meets the specified requirements and quality standards. It involves testing the

software for defects, errors, and other issues that may impact its functionality,

usability, security, and performance.

 The main objective of software testing is to identify and report defects so that they can

be fixed before the software is released to end users. Testing also helps to ensure that

the software meets the needs of end users and that it is reliable, efficient, and easy to

use.

4
2.2 Software Verification and Validation

 Verification: is the process of checking that the software achieves its goal without any bugs. It ensures that the product being developed is right, i.e., it verifies whether the developed product fulfills the requirements that we have.
Verification- answers the question:

“Are we building the product right?”


 Validation: is the process of checking whether the software product is up to the mark, in other words, whether the product meets the high-level requirements. It checks that what we are developing is the right product, i.e., it is a comparison of the actual and the expected product.
Validation- answers the question:

“Are we building the right product?”


5
2.2 Software Verification and Validation
 Verification and validation include a wide array of SQA activities: technical reviews,

quality and configuration audits, performance monitoring, simulation, feasibility study,

documentation review, database review, algorithm analysis, development testing,

usability testing, qualification testing, acceptance testing, and installation testing.

 Verification vs Validation
 Verification: Checking software for conformance and consistency with a given specification – usually involves meetings, reviews, and discussions.
 Validation: Testing is most useful in validation – checking that what has been built is what the user actually wanted.

6
2.2 Software Verification and Validation

 Testing vs Debugging

Debugging is not Testing!

Testing: is the process of evaluating a software product to ensure that it meets the specified requirements and quality standards. Testing is a proactive activity.

Debugging: is the process of identifying and fixing defects or bugs in the software that have already been identified. Debugging is a reactive activity.

7
Software Verification vs Validation

8
2.3. The Role of Specification

The Need for Specification:

 Validation and verification activities, such as testing, cannot be meaningful

unless we have a specification for the software

 The software we are building could be a single module or class, or could be an

entire system
 Depending on the size of the project and the development methods,

specifications can range from a single page to a complex hierarchy of interrelated

documents

9
2.3. The Role of Specification
Level of Specifications:
There are usually at least three levels of software specification documents in large
systems:

1. Functional specifications (or requirements) give a precise description of the required behavior of the system – what the software should do, not how it should do it – and may also describe constraints on how this can be achieved

2. Design specifications describe the architecture of the design to implement the

functional specification – the components of the software and how they are

to relate to one another

3. Detailed design specifications describe how each component of the software is to be implemented – down to the individual code level

10
2.4. Testing in SDLC
 Testing is oriented toward “detection” primarily of the defects and anomalies
that fall under the general category of a software “bug.” Functionally, testing
involves operation of a system or application under controlled conditions.
 The IEEE Standard 610, Glossary of Software Engineering Terminology, defines testing as “The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspects of the system or component.”
 IEEE Standard 829-1983 defines testing as “The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software items.”

11
2.4. Testing in SDLC
 Each project in software development should be following a life cycle model.
 Where is the place for testing in a software life cycle? The simple answer is
“it is part of it.” There can be no software development life cycle (SDLC)
without testing. However, when and how should testing be done?
 The general V-model plays an especially important role to answer this question.
In the V-model, the whole life cycle of a software development is clearly
displayed; the sequence of the development is based on the requirement
and specification.
 The first stage in the software life cycle is the high-level design and the second is the final build. Once the final build is completed, the first testing step is the unit test, usually done by the developer, and the next step is the system integration test, or SIT. After the SIT, it goes to the system test; after the system test is done, it goes to the user acceptance test, or UAT.

12
2.4.1 Requirements and Specifications
 Identification and Specification
o The needs and requirements of the customer are gathered, specified, and
finally approved. Thus, the purpose of the system and the desired
characteristics and features are defined and identified
 Functional System Development
o The requirements are mapped onto functions and dialogues of the new
system.
 Technical System Design
o The implementation of the system is designed. This includes the definition of interfaces to the system environment and the decomposition of the system into a smaller, understandable system architecture. Each subsystem can then be developed independently.

13
2.4.1 Requirements and Specifications
 Component Specification
o Each subsystem, including its task, behavior, inner structure, and
interfaces to other subsystems, is defined.
 Coding
o Coding consists of the process of designing, writing, unit testing,
debugging/troubleshooting, and maintaining the source code.
 Testing
o Are We Building the Right System? During validation, the tester judges
whether a product (or a part of the product) solves its task, and therefore,
whether this product is suitable for its intended use

14
2.4.1 Requirements and Specifications
 To validate: To affirm, to declare as valid, to check if something is valid. In
addition to validation testing, the V-model also requires verification testing.
 To verify: To prove and to inspect. Unlike validation, verification refers to only
one single phase of the development process. Verification shall assure that the
outcome of a particular development phase has been achieved correctly and
completely, according to its specification (the input documents for that
development level).
 Are We Building the System Right? It is examined as to whether specifications
are correctly implemented and whether the product meets its specification but not
whether the resulting product is suitable for its intended use. In reality, every test
includes both aspects, but the validation aspect increases from lower to higher
levels of testing.

15
2.4.1 Requirements and Specifications

Figure 1: The V-model that displays the whole process from requirement to
user acceptance test.

16
2.5 Software Testing Life Cycle (STLC)
 Software testing life cycle (STLC) process is an integral part of the SDLC. The STLC phase deals with testing and rectifying any errors generated within the program under various test conditions.
 STLC basically comprises the testing phases in the SDLC. As stated earlier, testing is a part of the SDLC; in the same way, STLC is also part of the SDLC.
 Similar to SDLC, STLC has its own phases as follows:
o Requirement analysis: Even though requirement analysis is part of the whole SDLC, it is also a major part of the testing life cycle.
o Test planning: Preparing the test strategy and planning
o Test case development: Creating the testing environment and writing the test
cases
o Test execution: Test executing and reporting
o Result analysis: Analysis result and bug report

o Defect analysis and fix: Analyze bugs and application errors

17
2.5 SDLC and STLC
 SDLC and STLC have some common features, but they differ from each other in several ways. The following explanations make this clearer.
o STLC is a part of SDLC. We cannot have STLC running independently.
o STLC is limited to testing, while the SDLC has a greater scope with more inputs and executions.
o STLC is a very important part of the SDLC. A software release cannot happen without running it through the STLC process.


 SDLC consists of the following:
o Requirement elicitation and analysis
o Design
o Development
o Testing
o Installation
o Maintenance

18
2.5 SDLC and STLC
 STLC consists of the following:
o Requirement analysis
o Test planning
o Test case development
o Test execution (defect tracking and fixing)
o Test result analysis
o Test cycle closure

Figure 2: The testing life cycle

19
2.5 SDLC and STLC

Figure 3: STLC Process


20
…..STLC: Test Plan

A Test Plan is a document that describes the test scope, test strategy, objectives, schedule,
deliverables and resources required to perform testing for a software product.

Test plan template contents:


 Overview
 Scope
o Inclusions
o Test Environments
o Exclusions
 Test Strategy
 Defect Reporting Procedure
 Roles/Responsibilities
 Test Schedule
 Test Deliverables
 Pricing
 Entry and Exit Criteria
 Suspension and Resumption Criteria
 Tools
 Risks and Mitigations
 Approvals
21
…..STLC: Use case, Test Scenario and Test Case

 Use Case:
• A use case describes the requirement.
• A use case contains THREE items:
Actor: the user, which can be a single person or a group of people, interacting with a process.
Action: the steps performed to reach the final outcome.
Goal/Outcome: the successful user outcome.
 Test Scenario:
• A possible area to be tested (What to test)
 Test Case:

• Step-by-step actions to be performed to validate the functionality of the AUT (How to test).
• A test case contains test steps, expected result & actual result.
22
…..STLC: Test Scenario vs Test Case

• Test Scenario is ‘What to be tested’ and Test Case is ‘How to be tested’.
Example:
Test Scenario: Checking the functionality of Login button
 TC1: Click the button without entering user name and password.
 TC2: Click the button only entering User name.
 TC3: Click the button while entering wrong user name and
wrong password.
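As a sketch, the three test cases above can be made executable against a hypothetical `login` function (the function, its rules, and its messages are assumptions for illustration, not part of any real application):

```python
# Hypothetical login validator, used only to make TC1-TC3 executable.
# The validation rules and messages below are illustrative assumptions.
def login(username, password):
    """Return an error message for invalid input, or 'OK' on success."""
    if not username and not password:
        return "Username and password required"
    if not password:
        return "Password required"
    if (username, password) != ("admin", "secret123"):
        return "Invalid credentials"
    return "OK"

# TC1: Click the button without entering user name and password.
assert login("", "") == "Username and password required"
# TC2: Click the button only entering User name.
assert login("admin", "") == "Password required"
# TC3: Click the button while entering wrong user name and wrong password.
assert login("wrong", "bad") == "Invalid credentials"
```

Each test case exercises the same scenario (the Login button) with a different input condition, which is exactly the scenario-to-case relationship described above.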

23
…..STLC: Test Suite

A Test Suite is a group of test cases that belong to the same category.
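As a minimal sketch, Python's built-in `unittest` module lets related test cases be grouped into one suite (the `is_valid_username` rule here is a made-up example, not a real requirement):

```python
import unittest

def is_valid_username(name):
    # Hypothetical rule for illustration: 3-12 alphanumeric characters
    return name.isalnum() and 3 <= len(name) <= 12

# Two test cases in the same category (username validation) ...
class UsernameTests(unittest.TestCase):
    def test_accepts_valid_name(self):
        self.assertTrue(is_valid_username("alice"))

    def test_rejects_too_short_name(self):
        self.assertFalse(is_valid_username("al"))

# ... grouped into a single test suite and run together
suite = unittest.TestSuite()
suite.addTest(UsernameTests("test_accepts_valid_name"))
suite.addTest(UsernameTests("test_rejects_too_short_name"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Running the suite executes every test case in the category and reports an aggregate result.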

24
…..STLC: Test Case Contents

• A Test Case is a set of actions executed to validate a particular feature or functionality of your software application.
 Test Case ID
 Test Case Title
 Description
 Pre-condition
 Priority (P0, P1, P2, P3) – priority order
 Requirement ID
 Steps/Actions
 Expected Result
 Actual Result
 Post Condition
 Test data
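The fields listed above can be sketched as a simple record in code; the `TestCaseRecord` structure and all IDs below are illustrative assumptions (real test-management tools define their own schemas):

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseRecord:
    case_id: str
    title: str
    description: str
    precondition: str
    priority: str              # P0 (highest) .. P3
    requirement_id: str
    steps: list
    expected_result: str
    actual_result: str = ""    # filled in during execution
    postcondition: str = ""
    test_data: dict = field(default_factory=dict)

    def status(self):
        """Derive a status by comparing the expected and actual results."""
        if not self.actual_result:
            return "Not Run"
        return "Passed" if self.actual_result == self.expected_result else "Failed"

tc = TestCaseRecord(
    case_id="TC_LOGIN_01",
    title="Login with valid credentials",
    description="Verify a registered user can log in",
    precondition="User account exists",
    priority="P0",
    requirement_id="REQ_AUTH_01",
    steps=["Open login page", "Enter credentials", "Click Login"],
    expected_result="Dashboard is displayed",
    test_data={"username": "admin"},
)
tc.actual_result = "Dashboard is displayed"   # recorded after execution
```

Note how the status is not stored but derived from expected vs actual result, which keeps the record consistent.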

25
…..STLC: Test Case Template

26
…..STLC: Test Case Template

27
…..STLC: Requirement Traceability Matrix(RTM)

What is RTM (Requirement Traceability Matrix)?

• RTM describes the mapping of requirements with the test cases. The main purpose of RTM is to verify that all requirements are covered by test cases so that no functionality is missed during software testing.

• Requirement Traceability Matrix – Parameters includes:

 Requirement ID

 Requirement Description

 Test case IDs
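A minimal sketch of an RTM as a mapping, with a helper that flags requirements no test case traces to (all requirement and test case IDs are illustrative):

```python
# Requirement Traceability Matrix: requirement ID -> description + test case IDs
rtm = {
    "REQ_01": {"description": "User can log in",         "test_cases": ["TC_01", "TC_02"]},
    "REQ_02": {"description": "User can reset password", "test_cases": ["TC_03"]},
    "REQ_03": {"description": "User can log out",        "test_cases": []},
}

def uncovered_requirements(matrix):
    """Return requirement IDs that no test case is mapped to."""
    return [req for req, row in matrix.items() if not row["test_cases"]]

gaps = uncovered_requirements(rtm)   # functionality that testing would miss
```

The helper makes the purpose of the RTM concrete: any requirement it returns has no test coverage.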

28
…..STLC: Requirement Traceability Matrix Sample

29
…..STLC: Test Environment

 Test Environment is a platform specially built for test case execution on the software product.

 It is created by integrating the required software and hardware along

with proper network configurations.


 Test environment simulates production/real time environment.

 Another name of test environment is Test Bed.

30
…..STLC: Test Execution

• During this phase test team will carry out the testing based on the test plans and the test cases

prepared.

• Entry Criteria: Test cases , Test Data & Test Plan

• Activities:

o Test cases are executed based on the test planning.

o The status of each test case is marked, like Passed, Failed, Blocked, Run, and others.

o Test results are documented and defects are logged for failed test cases.

o All the blocked and failed test cases are assigned bug ids.

o Retesting once the defects are fixed.

o Defects are tracked till closure.

• Deliverables: Provides defect and test case execution report with completed results
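The execution activities above can be sketched as a small summary over an illustrative execution log (the statuses and bug IDs below are made up for the example):

```python
from collections import Counter

# Execution log entries: (test case ID, status, bug ID or None).
# Failed and blocked cases are the ones that get assigned bug IDs.
executions = [
    ("TC_01", "Passed",  None),
    ("TC_02", "Failed",  "BUG-101"),
    ("TC_03", "Blocked", "BUG-102"),
    ("TC_04", "Passed",  None),
]

def execution_report(log):
    """Count statuses and collect the bug IDs to be tracked till closure."""
    summary = Counter(status for _, status, _ in log)
    open_bugs = [bug for _, status, bug in log if status in ("Failed", "Blocked")]
    return {"summary": dict(summary), "open_bugs": open_bugs}

report = execution_report(executions)
```

This mirrors the phase deliverable: a test case execution report plus the list of defects still open for retesting.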
31
…..STLC: Test Closure

 Activities:
• Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software,
Critical Business Objectives , Quality
• Prepare test metrics based on the above parameters.
• Document the learning out of the project
• Prepare Test summary report
• Qualitative and quantitative reporting of quality of the work product to the
customer.
• Test result analysis to find out the defect distribution by type and severity.

 Deliverables
• Test Closure report
• Test metric
32
2.6 Testing as a Process

33
Verification vs Validation

 Verification activities are performed on the interim products by applying mostly static analysis techniques, such as inspection, walkthrough, and reviews, and using standards and checklists.

 Validation is performed on the entire system by actually running the system in its real environment and using a variety of tests.

35
The V & V process
 Is a whole life-cycle process
 V & V must be applied at each stage in the software process.
 Has two principal objectives
1. The discovery of defects in a system
2. The assessment of whether or not the system is usable in an operational
situation.

V & V goals

 Verification and validation should establish confidence that the software is fit for
purpose. This does NOT mean completely free of defects.
 Rather, it must be good enough for its intended use and the type of use will
determine the degree of confidence that is needed

36
Static and Dynamic Verification
Static verification:
 Software inspections: Concerned with analysis of the static system representation to discover problems. Some essential static verification activities include business requirement review, design review, code walkthroughs, and test documentation review.
Dynamic verification:
 Software testing: Concerned with exercising and observing product
behavior.

 The system is executed with test data and its operational behavior is
observed.

[Figure: Static verification (software inspections) applies to the requirements specification, high-level design, formal specification, detailed design, and program; dynamic verification (testing) applies to the prototype and program.]

37
V&V Planning
 Careful planning is required to get the most out of testing, and inspection processes should be started early in the development process.
 The plan should identify the balance between static verification and testing.

Figure 4: The V-Model of Development

38
Critical Systems Validation

Validating the safety, reliability, and security of systems

Validation perspectives
 Safety Validation:
Does the system always operate in such a way that accidents do not occur or that accident consequences are minimized?

 Reliability validation:
- Does the measured reliability of the system meet its specification?
- Is the reliability of the system good enough to satisfy users?
 Security validation:
Is the system and its data secure against external attack?

39
Dynamic Validation Techniques
 These are techniques that are concerned with validating the system in execution.

1. Testing - analyzing the system outside of its operational environment (at the developing organization)

2. Run-time checking - checking during execution that the system is operating within a dependability environment (at the customer site)

Reliability Validation:
 Reliability validation involves testing the application to assess whether or not
it has reached the required level of reliability
 Cannot be included as part of a normal defect testing process because data for defect testing is (usually) atypical of actual usage data
40
Security Validation

o Security validation has something in common with safety validation

o It is intended to demonstrate that the system cannot enter some unsafe or insecure state rather than to demonstrate that the system can do something

Experience-based validation:
o The system is reviewed and analyzed against the types of attack that
are known to the validation team.
Tool-based validation:
o Various security tools such as password checkers are used to analyze the
system in operation.
Security teams:
o A team is established whose goal is to breach the security of the system
by simulating attacks on the system.

41
Software Testing:
 Software Testing answers the question
“Does the software behave as specified...?”

 Software testing is a broad term that covers a variety of processes designed to ensure that software applications:

 Function as intended.
 Are able to handle the volume required.
 Integrate correctly with other software applications.

42
…..Testing
 One of the practical methods commonly used to detect the presence of errors (failures) in a program is to test it for a set of inputs. Testing detects errors; it cannot prove their absence.

44
Goal of Software Testing

To prove that:
• Requirement specification for which software was designed is
correct

• Design and coding correctly respond to the requirements

• To find all possible bugs (defects) in a work product

• Testing cannot show the absence of errors and defects, it can show
only that software errors and defects are present.

45
When to start Testing

An early start to testing reduces the cost and time of rework and helps deliver error-free software to the client.

 In the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and lasts till the deployment of the software.

 However, it also depends on the development model that is being used.

 For example, in the Waterfall model, formal testing is conducted in the Testing phase, but in the Incremental model, testing is performed at the end of every increment/iteration, and at the end the whole application is tested.
46
When to stop testing
 Unlike when to start testing, it is difficult to determine when to stop testing, as testing is a never-ending process and no one can say that any software is 100% tested. The following aspects should be considered when deciding to stop testing:

 Testing Deadlines.
 Management decision.
 Completion of test case execution.
 Completion of Functional and code coverage to a certain point.

 Bug rate falls below a certain level and no high priority bugs
are identified.
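The aspects above can be combined into a simple illustrative decision rule; the thresholds below are assumptions for the sketch, not standard values:

```python
def can_stop_testing(executed_pct, coverage_pct, open_high_priority_bugs,
                     deadline_reached):
    """Stop when all exit criteria are met, or when the deadline forces it."""
    criteria_met = (
        executed_pct >= 100               # test case execution completed
        and coverage_pct >= 90            # functional/code coverage to a set point
        and open_high_priority_bugs == 0  # no high-priority bugs remain open
    )
    return criteria_met or deadline_reached

# All criteria met: testing can stop without relying on the deadline
ready = can_stop_testing(100, 95, 0, deadline_reached=False)
```

In practice the decision also weighs management judgment and bug-rate trends, which a fixed rule like this cannot capture.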
47
Software Testing Principle

 Principle 1. Testing is the process of exercising a software component using a selected set of test cases, with the intent of (i) revealing defects, and (ii) evaluating quality.

 Principle 2. When the test objective is to detect defects, then a good test case is one that has a high probability of revealing a yet undetected defect(s).

 Principle 3. Test results should be inspected meticulously.


 Principle 4. A test case must contain the expected result.

 Principle 5. Test cases should be developed for both valid and invalid input conditions.
48
Software Testing Principle
 Principle 6. The probability of the existence of
additional defects in a software component is
proportional to the number of defects already
detected in that component.

 Principle 7. Testing should be carried out by a group that is independent of the development group.

 Principle 8. Tests must be repeatable and reusable


 Principle 9. Testing should be planned.
 Principle 10. Testing activities should be integrated into
the software life cycle.

49
 Principle 11. Testing is a creative and challenging task

1. A good test has a high probability of finding an error.
 Tester must understand the software and attempt to develop a mental picture of how the software might fail.

2. A good test is not redundant.
 Testing time and resources are limited. Every test should have a different purpose, e.g., valid/invalid password.

3. A good test should be “best of breed”.
 In a group of tests that have a similar intent, time and resource limitations may mitigate toward the execution of only a subset of these tests.

4. A good test should be neither too simple nor too complex.
 Sometimes it is possible to combine a series of tests into one test case, but the possible side effects associated with this approach may mask errors.

50
