
SOFTWARE QUALITY ENGINEERING

Ayesha Kanwal

Lecture No. 2
Review of Previous Lecture

• Why is Software Quality Assurance important?
• Discussed some famous software disasters.
• What are Software Testing, Quality Assurance and Software Quality Control.
• Scope and content hierarchy.
• Difference between Software Testing, Quality Assurance and Software Quality Control.
• What is software testing, its role and objectives, and software reliability.
• Defined and explained defect, fault, error and failure.
• Described test cases.
• Concept of complete testing and the reasons that limit complete testing.
Major Activities

• Test planning and preparation:
  Set the goals for testing, select an overall testing strategy, and prepare specific test cases and the general test procedure.
• Test execution:
  Run the tests, with the related observation and measurement of product behavior.
• Analysis and follow-up:
  Check and analyse the results to determine whether a failure has been observed; if so, follow-up activities are initiated and monitored to ensure removal of the underlying causes or faults that led to the observed failures in the first place.
Generic Testing Process – Example of a Load Test Case

Test Case Example

• http://www.math-cs.gordon.edu/courses/cs211/ATMExample/InitialFunctionalTests.html
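The link above points to initial functional test cases for an ATM example. As an added, hedged illustration only (the class, behaviour, and expected outputs below are assumptions for the sketch, not taken from that page), a functional test case for a cash withdrawal might be written and automated like this:

# Minimal sketch of an automated functional test case for an ATM withdrawal.
# The Atm class below is a stand-in written only so the test can run;
# it is not taken from the linked ATM example.

class Atm:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        """Dispense cash if funds are sufficient; otherwise reject the request."""
        if amount <= 0 or amount > self.balance:
            return "TRANSACTION REFUSED"
        self.balance -= amount
        return "DISPENSE " + str(amount)


def test_withdraw_with_sufficient_funds():
    # Test case: a valid withdrawal within the available balance.
    atm = Atm(balance=500)
    assert atm.withdraw(100) == "DISPENSE 100"
    assert atm.balance == 400


def test_withdraw_with_insufficient_funds():
    # Test case: a withdrawal larger than the balance must be refused.
    atm = Atm(balance=50)
    assert atm.withdraw(100) == "TRANSACTION REFUSED"
    assert atm.balance == 50


if __name__ == "__main__":
    test_withdraw_with_sufficient_funds()
    test_withdraw_with_insufficient_funds()
    print("All functional test cases passed.")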
Basic Questions About Testing

• What artifacts are tested?
  Program code
  Requirements
  System
• What to test, and what kind of fault is found?
• From which view?
  Functional (black box testing)
  Structural (white box testing)
• When, or at what defect level, to stop testing?
  Stopping criterion
  Product reliability goal
Objectives of Testing

Four objectives of testing:
1. It does work
2. It does not work
3. Reduce the risk of failure
4. Reduce the cost of testing
1. It does work

• While implementing a program unit, the programmer may want to test whether or not the unit works in normal circumstances. The programmer gains much confidence if the unit works to his or her satisfaction. The same idea applies to the entire system.
• Here, the objective of testing is to show that the system works, rather than to show that it does not work.
2. It does not work

• Once the programmer or the development team is satisfied that a unit or the system works to a certain degree, more tests are conducted with the objective of finding faults in the unit or the system.
• Here, the idea is to try to make the unit or the system fail.
3. Reduce the Risk of Failure

• Most complex software systems contain faults, which cause the system to fail from time to time. This concept of "failing from time to time" gives rise to the notion of failure rate.
• As faults are discovered and fixed while performing more and more tests, the failure rate of a system generally decreases.
• Thus, a higher-level objective of performing tests is to bring down the risk of failure to an acceptable level.
4. Reduce the Cost of Testing

• The different kinds of costs associated with a test process include:
  the cost of designing, maintaining, and executing test cases,
  the cost of analysing the result of executing each test case,
  the cost of documenting the test cases, and
  the cost of actually executing the system and documenting it.
Concept of Complete Testing

"I have exhaustively tested the program."

• Complete, or exhaustive, testing means there are no undiscovered faults at the end of the test phase.
• For most systems, complete testing is nearly impossible for the following reasons:
Reason 1

• The domain of possible inputs of a program is too large to be used completely in testing a system. There are both valid inputs and invalid inputs.
• The program may have a large number of states. There may be timing constraints on the inputs, that is, an input may be valid at a certain time and invalid at other times. An input value which is valid but is not properly timed is called an inopportune input.
• The input domain of a system can simply be too large to be used completely in testing a program.
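To make the size of the input domain concrete, here is an added illustration (not from the slides; the test rate is an assumed figure): even a function that takes just two 32-bit integers already has 2^64 possible input pairs.

# Rough estimate of how long exhaustive testing of a two-argument
# 32-bit function would take; the test rate is an assumed figure.
total_inputs = 2 ** 64                  # every pair of 32-bit values
tests_per_second = 1_000_000_000        # assume one billion tests per second
seconds = total_inputs / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{total_inputs} input pairs -> about {years:.0f} years of testing")
# Prints roughly 585 years, before even considering invalid or
# mistimed (inopportune) inputs.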
Reason 2

• The design issues may be too complex to completely test.
• The design may have included implicit design decisions and assumptions.
• For example, a programmer may use a global variable or a static variable to control program execution.
Reason 3

• It may not be possible to create all possible execution environments of the system.
• This becomes more significant when the behaviour of the software system depends on the real, outside world, such as weather, temperature, altitude, pressure, and so on.
Levels of Testing

There are four levels of testing based on four levels of system development:
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

Another intermediate level of testing is Regression Testing.
Levels of Testing

• Testing is performed at different levels, involving the complete system or parts of it, throughout the life cycle of a software product.
• A software system goes through four stages of testing before it is actually deployed. These four stages are known as unit, integration, system, and acceptance level testing.
• The first three levels of testing are performed by a number of different stakeholders in the development organization, whereas acceptance testing is performed by the customers.
Verification and Validation

Verification:
• The process to ensure that the design outputs of a particular phase of the SDLC meet all specified requirements for that phase.
• Also known as In-Process Testing.
• Typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications.
• Determines consistency, correctness and completeness of a program at each stage.
• Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification.
• Verification: Are we building the product right?

Validation:
• The process to test whether the product meets the customer requirements in the intended environment.
• Also known as Exit or End Process Testing.
• Involves actual testing and takes place after verification is complete.
• Determines the correctness of the final build with respect to its requirements.
• Validation is the process of checking that what has been specified is what the user actually wanted.
• Validation: Are we building the right product?
The ‘V’ Model

• The four stages of testing are illustrated in the form of the classical ‘V’ model.

[Figure: Development and Testing Phases in the ‘V’ Model]
The ‘W’ Model

• The W model is an extension of the V model; it is also known as the verification and validation (V&V) model.
Unit Testing

• In unit testing, programmers test individual program units, such as procedures, functions, methods, or classes, in isolation.
• After ensuring that individual units work to a satisfactory extent, modules are assembled to construct larger subsystems by following integration testing techniques.
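A minimal sketch of what unit testing an individual function might look like (the function under test and its expected values are invented for illustration):

import unittest

def apply_discount(price, percent):
    """Unit under test: return the price after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        # Normal circumstance: a 20% discount on 200 should give 160.
        self.assertEqual(apply_discount(200, 20), 160)

    def test_invalid_percent_rejected(self):
        # Fault-finding test: an out-of-range percentage must be rejected.
        with self.assertRaises(ValueError):
            apply_discount(200, 150)

if __name__ == "__main__":
    unittest.main()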
Integration Testing

• Integration testing is jointly performed by software developers and integration test engineers.
• The objective of integration testing is to construct a reasonably stable system that can withstand the rigor of system-level testing.
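As a hedged sketch of the idea (the two "modules" below are invented for illustration), an integration test exercises units together after each has passed its own unit tests:

# Two already unit-tested "modules" combined and tested as a pair.

def parse_order(text):
    """Module A: parse 'item,quantity,unit_price' into a dictionary."""
    item, quantity, unit_price = text.split(",")
    return {"item": item, "quantity": int(quantity), "unit_price": float(unit_price)}

def order_total(order):
    """Module B: compute the total cost of a parsed order."""
    return order["quantity"] * order["unit_price"]

def test_parser_and_pricing_integrate():
    # Integration test: data produced by module A must be usable by module B.
    order = parse_order("widget,3,2.50")
    assert order_total(order) == 7.5

if __name__ == "__main__":
    test_parser_and_pricing_integrate()
    print("Integration test passed.")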
System Testing

• System-level testing includes a wide spectrum of testing, such as functionality testing, security testing, robustness testing, load testing, stability testing, stress testing, performance testing, and reliability testing.
• System testing is a critical phase in a software development process because of the need to meet a tight schedule close to the delivery date, to discover most of the faults, and to verify that fixes are working and have not resulted in new faults.
• System testing comprises a number of distinct activities:
  creating a test plan,
  designing a test suite,
  preparing test environments,
  executing the tests by following a clear strategy, and
  monitoring the process of test execution.
Regression Testing

• Regression testing is another level of testing that is performed throughout the life cycle of a system. Regression testing is performed whenever a component of the system is modified.
• The key idea in regression testing is to ascertain that the modification has not introduced any new faults in the portion that was not subject to modification.
• Regression testing is not a distinct level of testing. Rather, it is considered a sub-phase of unit, integration, and system-level testing.
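The mechanics are simple: the existing test suite is rerun after every modification. A hedged sketch (the function and recorded cases are invented for illustration):

# After modifying price_with_tax (e.g., adding a rounding rule), the same
# stored regression cases are rerun to confirm old behaviour still holds.

def price_with_tax(price, rate=0.17):
    """Recently modified unit: now rounds to two decimal places."""
    return round(price * (1 + rate), 2)

# Regression suite: (input price, expected output) pairs recorded from
# earlier test levels, kept and rerun after every change.
REGRESSION_CASES = [
    (100.0, 117.0),
    (0.0, 0.0),
    (19.99, 23.39),
]

def run_regression_suite():
    for price, expected in REGRESSION_CASES:
        actual = price_with_tax(price)
        assert actual == expected, f"regression: {price} -> {actual}, expected {expected}"

if __name__ == "__main__":
    run_regression_suite()
    print("No regressions detected.")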
Acceptance Testing

• After the completion of system-level testing, the product is delivered to the customer. The customer performs their own series of tests, commonly known as acceptance testing.
• The objective of acceptance testing is to measure the quality of the product, rather than to search for defects, which is the objective of system testing.
• A key notion in acceptance testing is the customer’s expectations from the system. By the time of acceptance testing, the customer should have developed their acceptance criteria based on their own expectations from the system.
Kinds of Acceptance Testing

• There are two kinds of acceptance testing, as explained in the following:
• User acceptance testing (UAT): conducted by the customer to ensure that the system satisfies the contractual acceptance criteria before being signed off as meeting user needs.
• Business acceptance testing (BAT): undertaken within the supplier’s development organization. The idea in having a BAT is to ensure that the system will eventually pass the user acceptance test. It is a rehearsal of the UAT at the supplier’s premises.
Types of Testing

Testing is often divided into:
• Black Box Testing
• White Box Testing
• Grey Box Testing
Black Box Testing

• Black box testing is a strategy in which testing is based solely on the requirements and specifications.
• Black box testing requires no knowledge of the internal paths, structure, or implementation of the software under test.
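A hedged sketch of the black-box view: the tests below are derived only from a stated requirement (the standard leap-year rule), and the implementation is treated as a closed box.

# Black-box tests derived only from the requirement:
# "leap_year(y) is True for years divisible by 4, except century years,
#  which are leap years only when divisible by 400."
from calendar import isleap as leap_year  # implementation treated as a black box

def test_black_box_leap_year():
    assert leap_year(2024) is True    # divisible by 4
    assert leap_year(2023) is False   # not divisible by 4
    assert leap_year(1900) is False   # century year not divisible by 400
    assert leap_year(2000) is True    # century year divisible by 400

if __name__ == "__main__":
    test_black_box_leap_year()
    print("Black-box tests passed.")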
White Box Testing

• White box testing is a strategy in which testing is based on the internal paths, structure, and implementation of the software under test.
• White box testing generally requires detailed programming skills.
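A hedged sketch of the white-box view (the function is invented for illustration): here the tester reads the code and chooses inputs so that every internal branch is executed.

# White-box testing: tests chosen by inspecting the internal branches.

def classify_triangle(a, b, c):
    """Function under test, with three internal branches to cover."""
    if a == b == c:
        return "equilateral"
    elif a == b or b == c or a == c:
        return "isosceles"
    else:
        return "scalene"

def test_all_branches_covered():
    # One input per branch, selected by reading the code above.
    assert classify_triangle(3, 3, 3) == "equilateral"  # first branch
    assert classify_triangle(3, 3, 4) == "isosceles"    # second branch
    assert classify_triangle(3, 4, 5) == "scalene"      # else branch

if __name__ == "__main__":
    test_all_branches_covered()
    print("All branches exercised.")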
Black Box and White Box Testing in Action

The following figure shows how both types of testers view an accounting application during testing. Black box testers view the basic accounting application, while during white box testing the tester knows the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, view the architecture, remove bad coding practices, and do component-level testing.

[Figure: Black box and white box testing in action]
Grey Box Testing

• An additional type of testing is called grey box testing. In this approach we peek into the "box" under test just long enough to understand how it has been implemented. Then we close up the box and use our knowledge to choose more effective black box tests.
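A hedged sketch of the grey-box idea (the threshold and function are invented for illustration): a quick look inside reveals an internal boundary, and black-box style tests are then aimed at it.

# Grey-box testing: peeking at the code reveals an internal threshold at
# 100 items, so the tests are aimed at that boundary from the outside.

def shipping_fee(item_count):
    """Looked at briefly: orders of 100 items or more ship free."""
    return 0 if item_count >= 100 else 9.99

def test_around_internal_threshold():
    # Boundary tests chosen because we know the threshold is 100.
    assert shipping_fee(99) == 9.99
    assert shipping_fee(100) == 0
    assert shipping_fee(101) == 0

if __name__ == "__main__":
    test_around_internal_threshold()
    print("Grey-box boundary tests passed.")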
Summary

• Testing: the process of comparing "what is" with "what ought to be."
• Levels of Testing:
  Unit Testing
  Integration Testing
  System Testing
  Acceptance Testing
  Regression Testing
• Types of Testing:
  Black Box Testing
  White Box Testing
  Grey Box Testing
Class Activity 1

• Write down functional test cases for any of your projects (developed in a previous semester).
