Unit 06

Unit 6 covers software quality and testing, addressing the quality dilemma between customers and developers, and introduces various software quality assurance techniques and standards, including ISO 9000. It discusses Garvin's and McCall's quality dimensions and factors, emphasizing the importance of management and practices in achieving high software quality. The document also highlights Six Sigma methodology for quality improvement and outlines the roles of software reviews and quality assurance activities in maintaining software quality.


Unit 6 - Software Quality and Testing

❑ Software quality and quality dilemma,


❑ Achieving software quality,
❑ Introduction to Software review techniques,
❑ Introduction to software quality assurance,
❑ ISO 9000 quality standards,
❑ A strategic approach to software testing, Strategic issues,
❑ Test strategies for object-oriented software,
❑ Test strategies for WebApps,
❑ Validation testing, System testing,
❑ The art of debugging,
❑ White box testing,
❑ Basic path testing,
❑ Control structure testing,
❑ Black-box testing,
❑ Model-based testing,
❑ Object-oriented test strategies and methods.
Software Quality

Software quality remains an issue. Who is to blame?

Customers blame developers, arguing that careless practices lead to low-quality software.

Developers blame customers and other stakeholders, arguing that irrational delivery dates and a continuous stream of changes force them to deliver software before it has been fully validated.

Who is right? Both, and that is the problem.


Garvin’s Quality Dimensions
❑ Performance quality. Does the software deliver all content, functions, and features that are specified as part of the requirements model in a way that provides value to the end user?
❑ Feature quality. Does the software provide features that surprise and delight first-time end users?
❑ Reliability. Does the software deliver all features and capability without failure? Is it available when it is needed? Does it deliver functionality that is error-free?
❑ Conformance. Does the software conform to local and external software standards that are relevant to the application? Does it conform to de facto design and coding conventions? For example, does the user interface conform to accepted design rules for menu selection or data input?
❑ Durability. Can the software be maintained (changed) or corrected (debugged) without the inadvertent generation of unintended side effects? Will changes cause the error rate or reliability to degrade with time?
❑ Serviceability. Can the software be maintained (changed) or corrected (debugged) in an acceptably short time period? Can support staff acquire all information they need to make changes or correct defects? Douglas Adams [Ada93] makes a wry comment that seems appropriate here: “The difference between something that can go wrong and something that can’t possibly go wrong is that when something that can’t possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.”
Garvin’s Quality Dimensions
❑ Aesthetics. There’s no question that each of us has a different and very subjective vision of what is
aesthetic. And yet, most of us would agree that an aesthetic entity has a certain elegance, a unique
flow, and an obvious “presence” that are hard to quantify but are evident nonetheless. Aesthetic
software has these characteristics.
❑ Perception. In some situations, you have a set of prejudices that will influence your perception of
quality. For example, if you are introduced to a software product that was built by a vendor who has
produced poor quality in the past, your guard will be raised and your perception of the current
software product quality might be influenced negatively. Similarly, if a vendor has an excellent
reputation, you may perceive quality, even when it does not really exist.
Garvin’s quality dimensions provide you with a “soft” look at software quality. Many (but not all) of
these dimensions can only be considered subjectively. For this reason, you also need a set of “hard”
quality factors that can be categorized in two broad groups:
(1) factors that can be directly measured (e.g., defects uncovered during testing) and
(2) factors that can be measured only indirectly (e.g., usability or maintainability).
In each case measurement must occur. You should compare the software to some datum and arrive at an indication of quality.
McCall’s Quality Factors
❑ Correctness. The extent to which a program satisfies its specification and fulfills the
customer’s mission objectives.
❑ Reliability. The extent to which a program can be expected to perform its intended function
with required precision. [It should be noted that other, more complete definitions of
reliability have been proposed]
❑ Efficiency. The amount of computing resources and code required by a program to perform
its function.
❑ Integrity. Extent to which access to software or data by unauthorized persons can be
controlled.
❑ Usability. Effort required to learn, operate, prepare input for, and interpret output of a
program.
❑ Maintainability. Effort required to locate and fix an error in a program. [This is a very
limited definition.]
❑ Flexibility. Effort required to modify an operational program.
❑ Testability. Effort required to test a program to ensure that it performs its intended function.
❑ Portability. Effort required to transfer the program from one hardware and/or software
system environment to another.
McCall’s Quality Factors
❑ Reusability. Extent to which a program [or parts of a program] can be reused in other
applications—related to the packaging and scope of the functions that the program performs.
❑ Interoperability. Effort required to couple one system to another.

It is difficult, and in some cases impossible, to develop direct measures of these quality
factors. In fact, many of the metrics defined by McCall et al. can be measured only indirectly.
However, assessing the quality of an application using these factors will provide you with a
solid indication of software quality.
Six Sigma for Software Engineering
❑ Six Sigma is the most widely used strategy for statistical quality assurance in industry today.
Originally popularized by Motorola in the 1980s, the Six Sigma strategy “is a rigorous and
disciplined methodology that uses data and statistical analysis to measure and improve a
company’s operational performance by identifying and eliminating defects in
manufacturing and service-related processes” [ISI08].
❑ The term Six Sigma is derived from six standard deviations—3.4 instances (defects) per
million occurrences—implying an extremely high quality standard.
❑ The Six Sigma methodology defines three core steps:
❑ Define customer requirements and deliverables and project goals via well-defined methods of customer
communication.
❑ Measure the existing process and its output to determine current quality performance (collect defect
metrics).
❑ Analyze defect metrics and determine the vital few causes.
❑ If an existing software process is in place, but improvement is required, Six Sigma suggests
two additional steps:
❑ Improve the process by eliminating the root causes of defects.
❑ Control the process to ensure that future work does not reintroduce the causes of defects.
Six Sigma for Software Engineering
❑ These core and additional steps are sometimes referred to as the DMAIC (define, measure,
analyze, improve, and control) method.
❑ If an organization is developing a software process (rather than improving an existing
process), the core steps are augmented as follows:
❑ Design the process to
(1) avoid the root causes of defects and
(2) to meet customer requirements.
❑ Verify that the process model will, in fact, avoid defects and meet customer requirements.
❑ This variation is sometimes called the DMADV (define, measure, analyze, design, and
verify) method.
❑ A comprehensive discussion of Six Sigma is best left to resources dedicated to the subject.
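The sigma arithmetic above can be sketched in a few lines of Python. This is only an illustration: the defect counts are invented, and the 1.5-sigma shift is the conventional adjustment used when quoting "3.4 defects per million" as six sigma.

```python
# Sketch: defects per million opportunities (DPMO) and the corresponding
# sigma level, using the conventional 1.5-sigma shift. Numbers are invented.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Sigma level for a given DPMO, with the conventional 1.5-sigma shift."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

d = dpmo(defects=17, units=1000, opportunities_per_unit=50)
print(round(d, 1))               # 340.0 DPMO
print(round(sigma_level(d), 2))  # sigma level for that defect rate
```

Note that a DPMO of 3.4 maps to a sigma level of 6.0 under this shift, which is where the "Six Sigma" name comes from.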
Software Quality dilemma
It’s fine to state that software engineers should strive to produce high-quality systems. It’s even
better to apply good practices in your attempt to do so. But the situation discussed by Meyer is
real life and represents a dilemma for even the best software engineering organizations
1. “Good Enough” Software
2. The Cost of Quality - The cost of quality can be divided into costs associated with
prevention, appraisal, and failure.
• Prevention costs include (1) the cost of management activities required to plan and coordinate all quality
control and quality assurance activities, (2) the cost of added technical activities to develop complete
requirements and design models, (3) test planning costs, and (4) the cost of all training associated with
these activities
• Appraisal costs include activities to gain insight into product condition the “first time through” each
process.
• Failure costs are those that would disappear if no errors appeared before or after shipping a product to
customers. Failure costs may be subdivided into internal failure costs and external failure costs. Internal
failure costs are incurred when you detect an error in a product prior to shipment.
• External failure costs are associated with defects found after the product has been shipped to the customer.
Examples of external failure costs are complaint resolution, product return and replacement, help line
support, and labor costs associated with warranty work.
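As a rough illustration of this breakdown, the cost of quality can be tallied by category. All figures and line items below are invented for the sketch:

```python
# Illustrative cost-of-quality breakdown: prevention, appraisal, and
# failure (internal + external) costs. All numbers are made up.
costs = {
    "prevention": {"quality planning": 20_000, "training": 10_000},
    "appraisal": {"reviews": 15_000, "testing": 25_000},
    "internal_failure": {"rework before shipment": 30_000},
    "external_failure": {"complaint resolution": 12_000, "warranty work": 18_000},
}

# Subtotal per category, then the overall cost of quality.
total = {category: sum(items.values()) for category, items in costs.items()}
cost_of_quality = sum(total.values())
print(total)
print(cost_of_quality)  # 130000
```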
Software Quality dilemma
3. Risks - Poor quality leads to risks, some of them very serious
4. Negligence and Liability - the quality of the delivered system comes into question
5. Quality and Security - To build a secure system, you must focus on quality, and that focus
must begin during design.
6. The Impact of Management Actions - Software quality is often influenced as much by
management decisions as it is by technology decisions. Even the best software engineering
practices can be subverted by poor business decisions and questionable project management
actions. As each project task is initiated, a project leader will make decisions that can have a
significant impact on product quality.
• Estimation decisions
• Scheduling decisions
• Risk-oriented decisions
Achieving Software Quality
❑ Management and practice are applied within the context of four broad activities that help a
software team achieve high software quality:
• software engineering methods - If you expect to build high-quality software, you must understand the
problem to be solved. You must also be capable of creating a design that conforms to the problem while at the
same time exhibiting characteristics that lead to software that exhibits the quality dimensions and factors
• project management techniques - The implications are clear: if (1) a project manager uses estimation to
verify that delivery dates are achievable, (2) schedule dependencies are understood and the team resists the
temptation to use short cuts, (3) risk planning is conducted so problems do not breed chaos, software quality
will be affected in a positive way. In addition, the project plan should include explicit techniques for quality
and change management.
• quality control actions - Quality control encompasses a set of software engineering actions that help to
ensure that each work product meets its quality goals. Models are reviewed to ensure that they are complete
and consistent. Code may be inspected in order to uncover and correct errors before testing commences. A
series of testing steps is applied to uncover errors in processing logic, data manipulation, and interface
communication
• software quality assurance - The goal of quality assurance is to provide management and technical staff
with the data necessary to be informed about product quality, thereby gaining insight and confidence that
actions to achieve product quality are working. Of course, if the data provided through quality assurance
identifies problems, it is management’s responsibility to address the problems and apply the necessary
resources to resolve them.
Introduction to Software review techniques
❑ Software reviews are a “filter” for the software process.
❑ That is, reviews are applied at various points during software engineering and serve to
uncover errors and defects that can then be removed.
❑ Software reviews “purify” software engineering work products, including requirements and
design models, code, and testing data
❑ A review—any review—is a way of using the diversity of a group of people to:
1. Point out needed improvements in the product of a single person or team;
2. Confirm those parts of a product in which improvement is either not desired or not
needed;
3. Achieve technical work of more uniform, or at least more predictable, quality than can
be achieved without reviews, in order to make technical work more manageable.
Different types of reviews can be conducted as part of software engineering –
• An informal meeting around the coffee machine is a form of review, if technical problems
are discussed.
• A formal presentation of software architecture to an audience of customers, management,
and technical staff is also a form of review
Introduction to software quality assurance
Quality – the developed product meets its specification
▪ Software quality can be defined as “the conformance to explicitly stated functional
requirement, explicitly documented development standards and implicit characteristics that
are expected of all professionally developed software”
▪ There are two kinds of quality:
1. Quality of design refers to the characteristics that designers specify for an item.
2. Quality of conformance is the degree to which the design specifications are followed
during manufacturing.
Thus, in the software development process, quality of design is concerned with the
requirements, specification, and design of the system, and quality of conformance is
concerned with implementation.
User satisfaction = Compliant product + Good quality + Delivery within budget
Introduction to software quality assurance
Quality Management
Ensuring that required level of product quality is achieved
• Defining procedures and standards
• Applying procedures and standards to the product and process
• Checking that procedures are followed
• Collecting and analysing various quality data
Introduction to software quality assurance
• Software Quality Assurance (SQA) is simply a way to assure quality in the software.
• It is the set of activities which ensures that processes, procedures, and standards are suitable for
the project and implemented correctly.
• Software quality assurance (also called quality management) is an umbrella activity that
is applied throughout the software process.
• Software Quality Assurance is a process which works in parallel with the development of the software.
It focuses on improving the development process so that problems can be
prevented before they become major issues.
• It is a planned and systematic pattern of activities necessary to provide a high degree of
confidence in the quality of the product.
Introduction to software quality assurance
▪ Software quality assurance (SQA) encompasses
• An SQA process
• Specific quality assurance and quality control tasks
• Effective software engineering practice
• Control of all software work products
• A procedure to ensure compliance with software development standards
• Measurement and reporting mechanisms
SQA Activities
1. SQA Management Plan:
▪ Make a plan for how you will carry out SQA throughout the project. Decide which set of
software engineering activities is best for the project.
▪ Identify evaluations to be performed.
▪ Audits and reviews to be performed.
▪ Procedures for error reporting and tracking.
▪ Documentation.
▪ Check the skill level of the SQA team.
SQA Activities
2. Set the Checkpoints:
▪ The SQA team should set checkpoints and evaluate the performance of the project on the
basis of data collected at each checkpoint.

3. Multi-testing Strategy:
▪ Do not depend on a single testing approach. When a number of testing approaches are
available, use them.

4. Measure Change Impact:
▪ Fixing an error sometimes reintroduces more errors, so keep a measure of the impact of
each change on the project.
▪ Retest after each change to check the compatibility of the fix with the whole project.
SQA Activities
5. Review software engineering activities:
▪ The SQA group identifies and documents the processes. The group also verifies the
correctness of the software product.
6. Ensure that deviations in software work and work products are documented and
handled according to a documented procedure.

7. Review software engineering activities to verify compliance.

8. Record any noncompliance and report it to senior management.

9. Manage Good Relations:
▪ In the working environment, maintaining good relations with the other teams involved
in the project development is mandatory.
▪ A bad relationship between the SQA team and the programming team will directly and
badly impact the project.
▪ Don’t play politics.
SQA Techniques
▪ Statistical quality assurance implies the following steps:
1. Information about software defects is collected and categorized
2. An attempt is made to trace each defect to its underlying cause
o Ex., non-conformance to specifications, design error, violation of standards, poor
communication with the customer
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all
possible causes), isolate the 20 percent
4. Once the vital few causes have been identified, move to correct the problems that have
caused the defects.
▪ Some of the defects are uncovered as software is being developed.
▪ Others are encountered after the software has been released.
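The Pareto step above can be sketched as follows. The defect data and cause names are invented for illustration; the point is the mechanics of isolating the "vital few" causes:

```python
# Sketch of the Pareto step: categorize defect causes and find the vital few
# that account for ~80% of all defects. Data below is invented.
from collections import Counter

defect_causes = (
    ["incomplete spec"] * 38 + ["design error"] * 22
    + ["standards violation"] * 8 + ["poor customer communication"] * 6
    + ["data handling"] * 4 + ["other"] * 2
)

counts = Counter(defect_causes).most_common()  # sorted, most frequent first
total = sum(n for _, n in counts)

cumulative, vital_few = 0, []
for cause, n in counts:
    cumulative += n
    vital_few.append(cause)
    if cumulative / total >= 0.8:  # stop once ~80% of defects are covered
        break

print(vital_few)  # the vital few causes to correct first
```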
Benefits of Software Quality Assurance (SQA)
▪ SQA produces high-quality software.
▪ A high-quality application saves time and cost.
▪ SQA is beneficial for better reliability.
▪ SQA is beneficial when no maintenance is required for a long time.
▪ High-quality commercial software increases the market share of the company.
▪ It improves the process of creating software.
▪ It improves the quality of the software.

Disadvantages of SQA:
▪ Requires more resources
▪ Time-consuming process
▪ Employing more workers to help maintain quality
▪ Costly
ISO 9000 quality standards
▪ ISO 9000 describes quality assurance elements in generic terms that can be applied to any
business regardless of the products or services offered. To become registered to one of the
quality assurance system models contained in ISO 9000, a company’s quality system and
operations are scrutinized by third-party auditors for compliance to the standard and for
effective operation. Upon successful registration, a company is issued a certificate from a
registration body represented by the auditors. Semiannual surveillance audits ensure
continued compliance to the standard.
▪ In order to bring quality in product and service, many organizations are adopting Quality
Assurance System
▪ ISO standards are issued by the International Organization for Standardization (ISO) in
Switzerland
▪ It is the organization that standardizes products and processes at the international level,
making them easier to evaluate. Proper documentation is an important part of an ISO 9001
Quality Management System.
▪ ISO 9001 is the quality assurance standard that applies to software engineering.
▪ It includes requirements that must be present for an effective quality assurance system.
▪ The ISO 9001 standard is applicable to all engineering disciplines.
ISO 9000 quality standards
The Guideline steps for ISO 9001:2000 are:
▪ Establish quality management system
▪ Document the quality management system
▪ Support the quality
▪ Satisfy the customers
▪ Establish quality policy
▪ Conduct quality planning
▪ Perform management reviews
▪ Provide quality resources, infrastructure and environment
▪ Control actual planning, customer processes
▪ Control product development, purchasing function
▪ Control monitoring devices (inspection, audits etc.)
▪ Analyze quality information
▪ Make quality improvement
ISO 9000 quality standards
In order for a software organization to become registered to ISO 9001:2000:

1. It must establish policies and procedures to address each of the requirements just noted.

2. It must be able to demonstrate that these policies and procedures are being followed.
Software Documentation
When various kinds of software products are developed, various kinds of documents are also
developed as part of the software engineering process, e.g.:
▪ Users’ manual
▪ Design documents
▪ Test documents
▪ Installation manual
▪ Software requirements specification (SRS) documents, etc.

Different types of software documents can broadly be classified into two categories:
internal documentation and external documentation.
Who Tests the Software?

Developer:
▪ Understands the system, but will test “gently”
▪ Is driven by “delivery”

Tester:
▪ Must learn about the system, but will attempt to break it
▪ Is driven by quality

Testing without a plan wastes time and effort; testing needs a strategy.
The development team needs to work with the test team: “egoless programming”.
When to Test the Software?
▪ Component code → Unit test → tested components
▪ Design specifications → Integration test → integrated modules
▪ System functional requirements → Function test → functioning system
▪ Other software requirements → Performance test → verified, validated software
▪ Customer SRS → Acceptance test → accepted system
▪ User environment → Installation test → system in use!
Verification & Validation

Verification:
▪ Are we building the product right?
▪ The process of evaluating the products of a development phase to find out whether they meet the specified requirements.
▪ The objective of verification is to make sure that the product being developed is as per the requirements and design specifications.
▪ Activities involved: reviews, meetings, and inspections
▪ Carried out by the QA team
▪ Execution of code does not come under verification
▪ Explains whether the outputs are according to the inputs or not
▪ Cost of errors caught is low

Validation:
▪ Are we building the right product?
▪ The process of evaluating software at the end of development to determine whether it meets the customer’s expectations and requirements.
▪ The objective of validation is to make sure that the product actually meets the user’s requirements, and to check whether the specifications were correct in the first place.
▪ Activities involved: testing, such as black-box, white-box, and gray-box testing
▪ Carried out by the testing team
▪ Execution of code comes under validation
▪ Describes whether the software is accepted by the user or not
▪ Cost of errors caught is high
Software Testing Strategy
A strategy for software testing in the context of the spiral moves outward through four levels:
▪ Unit testing
▪ Integration testing
▪ Validation testing
▪ System testing

Unit Testing
▪ It concentrates on each unit of the software as implemented in source code.
▪ It focuses on each component individually, ensuring that it functions properly as a unit.
Software Testing Strategy Cont.

Integration Testing
▪ Its focus is on the design and construction of the software architecture.
▪ Integration testing is the process of testing the interface between two software units or modules.

Validation Testing
▪ Software is validated against requirements established as a part of requirements modeling.
▪ It gives assurance that the software meets all informational, functional, behavioral, and performance requirements.

System Testing
▪ The software and other system elements are tested as a whole.
▪ Software, once validated, must be combined with other system elements, e.g. hardware, people, databases, etc.
▪ It verifies that all elements mesh properly and that overall system function/performance is achieved.
Unit Testing
Unit is the smallest part of a software system which is testable.
It may include code files, classes and methods which can be tested individually
for correctness.
Unit Testing validates small building block of a complex system before testing
an integrated large module or whole system
The unit test focuses on the internal processing logic and data structures
within the boundaries of a component.
The module is tested to ensure that information properly flows into and out of
the program unit
Local data structures are examined to ensure that data stored temporarily
maintains its integrity during execution
All independent paths through the control structures are exercised to ensure
that all statements in the module have been executed at least once
Boundary conditions are tested to ensure that the module operates properly
at boundaries established to limit or restrict processing
All error handling paths are tested
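A minimal unit test along these lines, exercising a boundary condition and an error-handling path with Python's standard unittest module. The discount function is a hypothetical unit under test, invented for this sketch:

```python
# Sketch of a unit test: boundary conditions and an error-handling path,
# using the standard unittest module. The unit under test is hypothetical.
import unittest

def discount(amount):
    """Hypothetical unit under test: 10% discount for amounts of 100 or more."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * 0.9 if amount >= 100 else amount

class DiscountTest(unittest.TestCase):
    def test_below_boundary(self):
        self.assertEqual(discount(99), 99)     # just under the boundary

    def test_at_boundary(self):
        self.assertEqual(discount(100), 90.0)  # exactly at the boundary

    def test_error_path(self):
        with self.assertRaises(ValueError):    # error-handling path exercised
            discount(-1)
```

In a real project the suite would live in its own file and be run with `python -m unittest`.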
Driver & Stub (Unit Testing)
Component testing (unit testing) may be done in isolation from the rest of the system. In
such cases the missing software is replaced by stubs and drivers, which simulate the
interface between the software components in a simple manner.

Let’s take an example to understand it in a better way:
▪ Suppose there is an application consisting of three modules, say module A, module B,
and module C. The developer has designed it in such a way that module B depends on
module A, and module C depends on module B.
▪ The developer has developed module B and now wants to test it, but module A and
module C have not been developed yet.
▪ In that case, to test module B completely, we can replace module A by a driver and
module C by a stub.
Driver & Stub (Unit Testing) Cont.
Driver and/or stub software must be developed for each unit test.

Driver:
▪ A driver is nothing more than a “main program” that accepts test case data, passes such data to the component, and prints relevant results.
▪ Used in the bottom-up approach, where the lowest modules are tested first.
▪ It simulates the higher level of components: a dummy program for a higher-level component.

Stub:
▪ Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
▪ A stub, or “dummy subprogram”, uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
▪ Used in the top-down approach, where the topmost module is tested first.
▪ It simulates the lower level of components: a dummy program for lower-level components.
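A small sketch of both ideas in Python, with every module and function name invented for illustration: a stub stands in for a not-yet-written subordinate module, while a throwaway driver plays the missing higher-level caller and feeds test data to the component under test.

```python
# Illustrative driver and stub; all names here are hypothetical.
# The unit under test, process_order, normally calls a subordinate tax
# module that does not exist yet, so a stub stands in for it.

def tax_rate_stub(region):
    """Stub: replaces the subordinate tax module with minimal canned behaviour."""
    return 0.10  # fixed rate regardless of region (minimal data manipulation)

def process_order(amount, region, tax_rate=tax_rate_stub):
    """Unit under test: total price including tax, rounded to cents."""
    return round(amount * (1 + tax_rate(region)), 2)

def driver():
    """Driver: a throwaway 'main program' that feeds test data and prints results."""
    for amount, region, expected in [(100, "EU", 110.0), (50, "US", 55.0)]:
        result = process_order(amount, region)
        print(amount, region, result, "PASS" if result == expected else "FAIL")

driver()
```

Once the real tax module exists, the stub is discarded and the integration between the two modules is tested in its own right.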
Integration Testing
Integration testing is the process of testing the interface between two software units or
modules. It can be done in three ways: 1. Big Bang Approach 2. Top Down Approach
3. Bottom Up Approach

Big Bang Approach
▪ All the modules are combined at once and the functionality is verified after the
completion of individual module testing.

Top Down Approach
▪ Testing takes place from top to bottom.
▪ High-level modules are tested first, then low-level modules, and finally the low-level
modules are integrated with the high-level ones to ensure the system is working as intended.
▪ Stubs are used as temporary modules if a module is not ready for integration testing.

Bottom Up Approach
▪ Testing takes place from bottom to top.
▪ Lowest-level modules are tested first, then high-level modules, and finally the high-level
modules are integrated with the low-level ones to ensure the system is working as intended.
▪ Drivers are used as temporary modules if a module is not ready for integration testing.
Regression Testing
▪ Repeated testing of an already tested program, after modification, to discover any defects
introduced or uncovered as a result of the changes in the software being tested.
▪ Regression testing is done by re-executing the tests against the modified application to
evaluate whether the modified code breaks anything which was working earlier.
▪ Anytime we modify an application, we should do regression testing.
▪ It gives confidence to the developers that there are no unexpected side effects after modification.

When to do regression testing?
▪ When new functionalities are added to the application. E.g. a website has login functionality
with only email; a new feature is added to “also allow login using Facebook”.
▪ When there is a change requirement. E.g. “forgot password” should be removed from the login page.
▪ When there is a defect fix. E.g. assume that the “Login” button is not working and a tester
reports a bug. Once the bug is fixed by the developer, the tester retests using this approach.
▪ When there is a performance issue. E.g. loading a page takes 15 seconds; load time is
reduced to 2 seconds.
▪ When there is an environment change. E.g. updating the database from MySQL to Oracle.
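The idea of re-executing saved checks after a modification can be sketched as follows. The login function, its Facebook extension, and the expectations are all hypothetical, echoing the examples above:

```python
# Sketch of regression testing: a saved suite of old expectations is re-run
# after every modification. The login function and its "new feature" are
# hypothetical, mirroring the email/Facebook example in the text.

def login(email, password, provider="email"):
    """Modified unit: originally email-only, later extended with Facebook login."""
    if provider == "facebook":
        return bool(email)                 # newly added behaviour
    return bool(email) and bool(password)  # pre-existing behaviour

# Regression suite: expectations that held before the change and must still hold.
regression_suite = [
    (("user@x.com", "secret"), True),
    (("user@x.com", ""), False),
]

for args, expected in regression_suite:
    assert login(*args) == expected, f"regression in login{args}"

# New-feature check, added alongside (not instead of) the old suite.
assert login("user@x.com", "", provider="facebook") is True
print("all regression checks passed")
```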
Smoke Testing
▪ Smoke testing is an integrated testing approach that is commonly used when product
software is developed.
▪ This test is performed after each build release.
▪ Smoke testing verifies build stability.
▪ This testing is performed by a “tester” or “developer”.
▪ This testing is executed for integration testing, system testing, and acceptance testing.
▪ It tests the build just to check whether any major or critical functionalities are broken.
▪ If there are failures in the build after the test, the build is rejected and the issue is
reported to the developer team.

What to Test?
▪ All major and critical functionalities of the application are tested.
▪ It does not go into depth to test each functionality; detailed testing is not included.
Validation Testing
The process of evaluating software to determine whether it satisfies specified business
requirements (client’s need).
It provides final assurance that software meets all informational, functional, behavioral, and
performance requirements
When custom software is built for one customer, a series of acceptance tests are conducted
to validate all requirements
It is conducted by the end user rather than by software engineers
If software is developed as a product to be used by many customers, it is impractical to
perform formal acceptance tests with each one
Most software product builders use a process called alpha and beta testing to uncover
errors that only the end user seems able to find
Validation Testing – Alpha & Beta Test
Alpha Test
The alpha test is conducted at the developer’s site by a representative group of end users
The software is used in a natural setting with the developer “looking over the shoulders” of
the users and recording errors and usage problems
The alpha tests are conducted in a controlled environment
Beta Test
The beta test is conducted at one or more end-user sites
Developers are not generally present
Beta test is a “live” application of the software in an environment that cannot be
controlled by the developer
The customer records all problems and reports to the developers at regular intervals
After modifications, software is released for entire customer base
System Testing
In system testing the software and other system elements are tested.
To test computer software, you spiral out in a clockwise direction along streamlines that
increase the scope of testing with each turn.
System testing verifies that all elements mesh properly and overall system
function/performance is achieved.
System testing is actually a series of different tests whose primary purpose is to fully
exercise the computer-based system.
Types of System Testing
1. Recovery Testing
2. Security Testing
3. Stress Testing
4. Performance Testing
5. Deployment Testing
Types of System Testing
Recovery Testing
It is a system test that forces the software to fail in a variety of ways
and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself)
Re-initialization, checkpointing mechanisms, data recovery, and restart
are evaluated for correctness.
If recovery requires human intervention
The mean-time-to-repair (MTTR) is evaluated to determine whether it
is within acceptable limits
Security Testing
It attempts to verify software’s protection mechanisms, which
protect it from improper penetration (access).
During this test, the tester plays the role of the individual who
desires to penetrate the system.
Types of System Testing Cont.
Stress Testing
It executes a system in a manner that demands resources in
abnormal quantity, frequency or volume.
A variation of stress testing is a technique called sensitivity testing.
Performance Testing
It is designed to test the run-time performance of software.
It occurs throughout all steps in the testing process.
Even at the unit testing level, the performance of an individual
module may be tested.
Types of System Testing Cont.
Deployment Testing
It exercises the software in each environment in which it is to
operate.
In addition, it examines
All installation procedures
Specialized installation software that will be used by customers
All documentation that will be used to introduce the software to end
users
Acceptance Testing
It is a level of the software testing where a system is tested for acceptability.
The purpose of this test is to evaluate the system’s compliance with the business
requirements.
It is a formal testing conducted to determine whether or not a system satisfies the
acceptance criteria with respect to user needs, requirements, and business processes
It enables the customer to determine, whether or not to accept the system.
It is performed after System Testing and before making the system available for actual use.
Views of Test Objects
Black Box Testing: closed-box testing; testing based only on the specification
White Box Testing: open-box testing; testing based on the actual source code
Grey Box Testing: partial knowledge of the actual source code
Black Box Testing
Also known as specification-based testing
Tester has access only to running code and the specification it is supposed to satisfy
Test cases are written with no knowledge of internal workings of the code
No access to source code
So test cases don’t worry about structure
Emphasis is only on ensuring that the contract is met
Advantages
Scalable; not dependent on size of code
Testing needs no knowledge of implementation
Tester and developer can be truly independent of each other
Tests are done with requirements in mind
Helps to expose any ambiguities or inconsistencies in the specifications
Test cases can be developed in parallel with code
Black Box Testing Cont.
Disadvantages
Test size will have to be small
Specifications must be clear, concise, and correct
May leave many program paths untested
Weighting of program paths is not possible
Test Case Design
Examine the pre-condition, and identify equivalence classes
Choose possible inputs such that all classes are covered
Apply the specification to each input to write down the expected output
[Figure: specification-based test-case design for an operation op with pre-condition X and post-condition Y; each test case pairs an input satisfying X (x1, x2) with its expected output satisfying Y]
Black Box Testing Cont.
Exhaustive testing is not always possible when there is a large set of input combinations,
because of budget and time constraints.
Special techniques are needed which select test cases smartly from all combinations of
test cases in such a way that all scenarios are covered
Two techniques are used:
1. Equivalence Partitioning
2. Boundary Value Analysis (BVA)
Equivalence Partitioning
Input data for a program unit usually falls into a number of partitions, e.g. all negative
integers, zero, all positive numbers
Each partition of input data makes the program behave in a similar way
Two test cases based on members from the same partition are likely to reveal the same bugs
Equivalence Partitioning (Black Box Testing)
By identifying and testing one member of each partition we gain 'good' coverage with a 'small'
number of test cases
Testing one member of a partition should be as good as testing any member of the partition
Example - Equivalence Partitioning
For binary search the following partitions exist:
Inputs that conform to pre-conditions
Inputs where the pre-condition is false
Inputs where the key element is a member of the array
Inputs where the key element is not a member of the array
Pick specific conditions of the array:
The array has a single value
Array length is even
Array length is odd
Equivalence Partitioning (Black Box Testing) Cont.
Example: Assume that we have to test a field which accepts SPI (Semester Performance
Index) as input (SPI range is 0 to 10)

SPI field: accepts values 0 to 10

Equivalence partitions:
Invalid: <= -1 | Valid: 0 to 10 | Invalid: >= 11

Valid Class: 0 – 10, pick any one input test data from 0 to 10
Invalid Class 1: <= -1, pick any one input test data less than or equal to -1
Invalid Class 2: >= 11, pick any one input test data greater than or equal to 11
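The three classes above translate directly into three test cases, one representative value per partition. `validate_spi` below is a hypothetical implementation of the SPI field's acceptance rule, written only to illustrate the technique:

```python
# Hypothetical validator for the SPI field (valid range 0 to 10).
def validate_spi(spi):
    """Return True if spi is a valid Semester Performance Index."""
    return 0 <= spi <= 10

# One representative test value per equivalence class:
assert validate_spi(7) is True      # valid class: 0 to 10
assert validate_spi(-1) is False    # invalid class 1: <= -1
assert validate_spi(11) is False    # invalid class 2: >= 11
```

Any other member of each partition (say 3 instead of 7, or -50 instead of -1) should behave the same way, which is exactly what makes one test per partition sufficient.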
Boundary Value Analysis (BVA) (Black Box Testing)
It arises from the fact that most programs fail at input boundaries
Boundary testing is the process of testing between extreme ends or boundaries between
partitions of the input values.
In Boundary Testing, Equivalence Class Partitioning plays a good role
Boundary Testing comes after Equivalence Class Partitioning
The basic idea in boundary value testing is to select input variable values at:
The minimum, just above the minimum, and just below the minimum
The maximum, just above the maximum, and just below the maximum
Boundary Value Analysis (BVA) (Black Box Testing)
Suppose the system asks for “a number between 100 and 999 inclusive”
The boundaries are 100 and 999
We therefore test the values 99, 100, 101 (lower boundary) and 998, 999, 1000 (upper boundary)
BVA - Advantages
BVA is easy to use and remember because of the uniformity of identified tests and the
automated nature of this technique.
One can easily control the expenses made on testing by controlling the number of identified
test cases.
BVA is the best approach in cases where the functionality of a software is based on numerous
variables representing physical quantities.
The technique is best at catching user input troubles in the software.
The procedure and guidelines are crystal clear and easy when it comes to determining the test
cases through BVA.
The test cases generated through BVA are very small.
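For the 100–999 example above, the six boundary test cases can be written down directly. `accepts` is a hypothetical stand-in for the system's input check, not a real API:

```python
# Hypothetical acceptance check for "a number between 100 and 999 inclusive".
def accepts(n):
    return 100 <= n <= 999

# Boundary value analysis: test at and around each boundary.
lower_boundary_cases = {99: False, 100: True, 101: True}
upper_boundary_cases = {998: True, 999: True, 1000: False}

for value, expected in {**lower_boundary_cases, **upper_boundary_cases}.items():
    assert accepts(value) == expected
```

A common defect this catches is an off-by-one comparison, e.g. `100 < n` instead of `100 <= n`, which passes most mid-range values but fails at exactly 100.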
Boundary Value Analysis (BVA) (Black Box Testing) Cont.
BVA - Disadvantages
This technique sometimes fails to test all the potential input values, so the results
can be uncertain.
The dependencies between two inputs are not tested with BVA.
This technique doesn’t fit well when it comes to Boolean variables.
It only works well with independent variables that depict quantity.
White Box Testing
Also known as structural testing
White Box Testing is a software testing method in which the internal
structure/design/implementation of the module being tested is known to the tester
Focus is on ensuring that even abnormal invocations are handled gracefully
Using white-box testing methods, you can derive test cases that
Guarantee that all independent paths within a module have been exercised at least once
Exercise all logical decisions on their true and false sides
Execute all loops at their boundaries
Exercise internal data structures to ensure their validity
“...our goal is to ensure that all statements and conditions have been executed at least once...”
White Box Testing Cont.
It is applicable to the following levels of software testing
Unit Testing: For testing paths within a unit
Integration Testing: For testing paths between units
System Testing: For testing paths between subsystems
Advantages
Testing can be commenced at an earlier stage, as one need not wait for the GUI to be available.
Testing is more thorough, with the possibility of covering most paths.
Disadvantages
Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation.
Test script maintenance can be a burden if the implementation changes too frequently.
Since this method of testing is closely tied to the application being tested, tools to cater to every kind of implementation/platform may not be readily available.
White-box testing strategies
One white-box testing strategy is said to be stronger than another strategy if all types of
errors detected by the second testing strategy are also detected by the first testing strategy,
and the first testing strategy additionally detects some more types of errors.
White-box testing strategies:
1. Statement coverage
2. Branch coverage
3. Path coverage
Statement coverage
It aims to design test cases so that every statement in a program is executed at least once
The principal idea is that unless a statement is executed, it is very hard to determine if an
error exists in that statement
Unless a statement is executed, it is very difficult to observe whether it causes failure due to
some illegal memory access, wrong result computation, etc.
White-box testing strategies Cont.
Consider Euclid’s GCD computation algorithm:

    int compute_gcd(int x, int y)
    {
        while (x != y) {
            if (x > y)
                x = x - y;
            else
                y = y - x;
        }
        return x;
    }

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we can exercise the program
such that all statements are executed at least once.
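The same example can be run directly as a Python sketch of the C fragment; executing the quoted test set drives every statement at least once:

```python
def compute_gcd(x, y):
    # Euclid's GCD by repeated subtraction, mirroring the C fragment above.
    while x != y:
        if x > y:
            x = x - y      # executed by (x=4, y=3)
        else:
            y = y - x      # executed by (x=3, y=4)
    return x               # executed by every test case

# The test set from the slide achieves statement coverage:
assert compute_gcd(3, 3) == 3   # loop body never entered
assert compute_gcd(4, 3) == 1   # takes the x > y branch
assert compute_gcd(3, 4) == 1   # takes the else branch
```

Each of the three inputs contributes a distinct execution path, which is why all three are needed to touch every statement.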
White-box testing strategies Cont.
Branch coverage
In the branch-coverage-based testing strategy, test cases are designed to make each
branch condition assume true and false values in turn
It is also known as edge testing, as in this testing scheme each edge of a program’s control
flow graph is traversed at least once
Branch coverage guarantees statement coverage, so it is a stronger strategy compared to
statement coverage.
Path Coverage
In this strategy test cases are executed in such a way that every path is executed at least
once
All possible control paths are taken, including all loop paths taken zero, once, and multiple
times
The test cases are prepared based on the logical complexity measure of the procedure design
Flow graphs, cyclomatic complexity, and graph matrices are used to arrive at the basis paths.
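A small sketch of why branch coverage is stronger than statement coverage: a single test input can execute every statement of the invented function below while still leaving one branch outcome (condition false) unexercised, so branch coverage demands a second test case:

```python
def absolute(x):
    result = x
    if x < 0:
        result = -x
    return result

# Statement coverage: one test with x < 0 executes every statement...
assert absolute(-5) == 5
# ...but branch coverage additionally requires the condition to be False:
assert absolute(5) == 5   # exercises the edge where x < 0 is not taken
```

If the `if` condition were accidentally written as `x <= 0`, the first test alone would still pass; only forcing both branch outcomes (and the boundary x = 0) brings such defects to light.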
Grey Box Testing
Combination of white box and black box testing
Tester has access to source code, but uses it in a restricted manner
Test cases are still written using specifications based on expected outputs for given input
These test cases are informed by program code structure
Testing Object Oriented Applications
Unit Testing in the OO Context
The concept of unit testing changes in object-oriented software
Encapsulation drives the definition of classes and objects
Means, each class and each instance of a class (object) packages
attributes (data) and the operations (methods or services) that
manipulate these data
Rather than testing an individual module, the smallest testable unit
is the encapsulated class
Unlike unit testing of conventional software,
which focuses on the algorithmic detail of a module and the data that
flows across the module interface,
class testing for OO software is driven by the operations
encapsulated by the class and the state behavior of the class
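A minimal sketch of class testing: the unit under test is the whole encapsulated class, and test cases exercise its operations in combination while observing its state behavior. `Account` is an invented example class, not from the source material:

```python
# Illustrative class: OO unit testing targets the encapsulated class,
# checking its operations and its state behavior together.
class Account:
    def __init__(self):
        self.balance = 0          # encapsulated state

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Class test: drive operations in combination and observe resulting state.
acct = Account()
acct.deposit(100)
acct.withdraw(30)
assert acct.balance == 70
try:
    acct.withdraw(1000)
    assert False, "expected ValueError"
except ValueError:
    pass
assert acct.balance == 70   # a rejected operation must leave state unchanged
```

Note the contrast with conventional unit testing: the interesting checks are not about one method's algorithm but about how the sequence of operations moves the object through its states.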
Integration Testing in the OO Context
Object-oriented software does not have a hierarchical control structure,
conventional top-down and bottom-up integration strategies have little meaning
There are two different strategies for integration testing of OO systems.
1. Thread-based testing
▪ integrates the set of classes required to respond to one input or event for the system
▪ Each thread is integrated and tested individually
▪ Regression testing is applied to ensure that no side effects occur
2. Use-based testing
▪ begins the construction of the system by testing those classes (called independent classes) that use very few (if
any) of server classes
▪ After the independent classes are tested, the next layer of classes, called dependent classes, that use the
independent classes are tested
Cluster testing is one step in the integration testing of OO software
Here, a cluster of collaborating classes is exercised by designing test cases that attempt to
uncover errors in the collaborations
Validation Testing in an OO Context
At the validation or system level, the details of class connections disappear
Like conventional validation, the validation of OO software focuses on user-visible actions
and user-recognizable outputs from the system
To assist in the derivation of validation tests, the tester should draw upon use cases that are
part of the requirements model
Conventional black-box testing methods can be used to drive validation tests
Object oriented test methods
Fault Based Testing: This technique tries to identify possible faults in the design or code
based on the consumer specification or the code or both.
Class Testing Based on Method Testing: This technique tests each class by testing its
methods individually and in combination.
Random Testing: This technique generates random test cases and inputs to test the software.
Partition Testing: This technique divides the input domain into subdomains and selects test
cases from each subdomain.
Scenario-based Testing: This technique tests the software by simulating realistic scenarios
and user interactions.
Testing Web Applications
WebApp testing is a collection of related activities with a
single goal to uncover errors in WebApp content, function,
usability, navigability, performance, capacity, and security
To accomplish this, a testing strategy that encompasses both
reviews and executable testing is applied.
Dimensions of Quality
Content is evaluated at both a syntactic and semantic level.
At the syntactic level spelling, punctuation, and grammar
are assessed for text-based documents.
At a semantic level, correctness of the information presented,
consistency across the entire content object and related
objects, and lack of ambiguity are all assessed.
Dimensions of Quality Cont.
Function is tested to uncover errors that indicate lack of conformance to customer
requirements
Structure is assessed to ensure that it properly delivers WebApp content
Usability is tested to ensure that each category of user is supported by the interface and can
learn and apply all required navigation.
Navigability is tested to ensure that all navigation syntax and semantics are exercised to
uncover any navigation errors
Ex., dead links, improper links, and erroneous links
Performance is tested under a variety of operating conditions, configurations and loading
to ensure that the system is responsive to user interaction and handles extreme loading
Compatibility is tested by executing the WebApp in a variety of different host configurations
on both the client and server sides
Interoperability is tested to ensure that the WebApp properly interfaces with other applications
and/or databases
Security is tested by assessing potential vulnerabilities
Content Testing
Errors in WebApp content can be as trivial as minor typographical errors or as significant as
incorrect information, improper organization, or violation of intellectual property laws
Content testing attempts to uncover these and many other problems before the user
encounters them
Content testing combines both reviews and the generation of executable test cases
Reviews are applied to uncover semantic errors in content
Executable testing is used to uncover content errors that can be traced to dynamically
derived content that is driven by data acquired from one or more databases.
User Interface Testing
Verification and validation of a WebApp user interface occurs at three distinct points
1. During requirements analysis: the interface model is reviewed to ensure that it conforms
to stakeholder requirements
2. During design: the interface design model is reviewed to ensure that generic quality
criteria established for all user interfaces have been achieved
3. During testing: the focus shifts to the execution of application-specific aspects of user
interaction as they are manifested by interface syntax and semantics.
In addition, testing provides a final assessment of usability
Component-Level Testing
Component-level testing (function testing) focuses on a set of tests that attempt to uncover
errors in WebApp functions.
Each WebApp function is a software component (implemented in one of a variety of
programming languages)
A WebApp function can be tested using black-box (and in some cases, white-box) techniques.
Component-level test cases are often driven by forms-level input.
Once forms data are defined, the user selects a button or other control mechanism to
initiate execution.
Navigation Testing
The job of navigation testing is to ensure that the mechanisms that allow the WebApp user
to travel through the WebApp are all functional, and to validate that each Navigation
Semantic Unit (NSU) can be achieved by the appropriate user category
Navigation mechanisms that should be tested are:
Navigation links
Redirects
Bookmarks
Frames and framesets
Site maps
Internal search engines
Configuration Testing
Configuration variability and instability are important factors that make WebApp testing a
challenge.
Hardware, operating system(s), browsers, storage capacity, network communication
speeds, and a variety of other client-side factors are difficult to predict for each user.
One user’s impression of the WebApp and the manner in which he/she interacts with it can
differ significantly.
Configuration testing is to test a set of probable client-side and server-side configurations
to ensure that the user experience will be the same on all of them, and
to isolate errors that may be specific to a particular configuration
Security Testing
Security tests are designed to probe vulnerabilities of
the client-side environment,
the network communications that occur as data are passed from client to server
and back again, and
the server-side environment.
Each of these domains can be attacked, and it is the job of the security tester to uncover
weaknesses that can be exploited by those with the intent to do so.
Performance Testing
Performance testing is used to uncover
performance problems that can result from lack of server-side resources,
inappropriate network bandwidth,
inadequate database capabilities,
faulty or weak operating system capabilities,
poorly designed WebApp functionality, and
other hardware or software issues that can lead to degraded client-server performance
The Debugging Process
Debugging is not testing but often occurs as a consequence of testing.
As shown in figure, the debugging process begins with the execution of a test case.
Results are assessed and a lack of correspondence between expected and actual performance
is encountered.
In many cases, the noncorresponding data are a symptom of an underlying cause as yet
hidden.
The debugging process attempts to match symptom with cause, thereby leading to error
correction.
The debugging process will usually have one of two outcomes:
(1) the cause will be found and corrected or
(2) the cause will not be found. In the latter case, the person performing debugging may suspect a cause,
design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
During debugging, you’ll encounter errors that range from mildly annoying (e.g., an incorrect
output format) to catastrophic (e.g., the system fails, causing serious economic or physical
damage). As the consequences of an error increase, the amount of pressure to find the cause
also increases. Often, pressure forces some software developers to fix one error and at the
same time introduce two more.
Psychological Considerations
Unfortunately, there appears to be some evidence that debugging prowess is an innate human
trait.
Some people are good at it and others aren’t. Although experimental evidence on debugging
is open to many interpretations, large variances in debugging ability have been reported for
programmers with the same education and experience.
Commenting on the human aspects of debugging, Shneiderman [Shn80] states:
Debugging is one of the more frustrating parts of programming.
It has elements of problem solving or brain teasers, coupled with the annoying recognition that you have
made a mistake.
Heightened anxiety and the unwillingness to accept the possibility of errors increases the task difficulty.
Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately . . .
corrected.
Debugging Strategies
In general, three debugging strategies have been proposed : (1) brute force, (2) backtracking,
and (3) cause elimination. Each of these strategies can be conducted manually, but modern
debugging tools can make the process much more effective
Debugging tactics.
The brute force category of debugging is probably the most common and least efficient method for
isolating the cause of a software error.
You apply brute force debugging methods when all else fails.
Using a “let the computer find the error” philosophy, memory dumps are taken, run-time traces are
invoked, and the program is loaded with output statements.
You hope that somewhere in the morass of information that is produced you’ll find a clue that can lead to
the cause of an error.
Although the mass of information produced may ultimately lead to success, it more frequently leads to
wasted effort and time.
Thought must be expended first! Backtracking is a fairly common debugging approach that can be used
successfully in small programs.
Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually)
until the cause is found.
Unfortunately, as the number of source lines increases, the number of potential backward paths may
become unmanageably large.
The third approach to debugging—cause elimination—is manifested by induction or deduction and
introduces the concept of binary partitioning.
Data related to the error occurrence are organized to isolate potential causes.
A “cause hypothesis” is devised and the aforementioned data are used to prove or disprove the hypothesis.
Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.
If initial tests indicate that a particular cause hypothesis shows promise, data are refined in an attempt to
isolate the bug.
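The binary-partitioning idea behind cause elimination can be sketched as repeatedly halving the error-related data and keeping whichever half still reproduces the failure. `program_fails` below is a stand-in for rerunning the failing program, and the "cause" value 13 is invented purely for illustration:

```python
# Sketch of cause elimination via binary partitioning: a hypothetical
# failing program whose defect is triggered by one element of its input.
def program_fails(data):
    return 13 in data          # the (initially unknown) underlying cause

def isolate_cause(data):
    # Halve the data repeatedly, keeping the half that reproduces the failure.
    while len(data) > 1:
        mid = len(data) // 2
        left, right = data[:mid], data[mid:]
        data = left if program_fails(left) else right
    return data[0]

assert isolate_cause(list(range(20))) == 13
```

Each halving is one "cause hypothesis" test, so the suspect set shrinks logarithmically instead of being examined item by item.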
Automated debugging.
Each of these debugging approaches can be supplemented with debugging tools that can provide you with
semiautomated support as debugging strategies are attempted.
Hailpern and Santhanam [Hai02] summarize the state of these tools when they note, “. . . many new
approaches have been proposed and many commercial debugging environments are available.
Integrated development environments (IDEs) provide a way to capture some of the language-specific
predetermined errors (e.g., missing end-of-statement characters, undefined variables, and so on) without
requiring compilation.”
A wide variety of debugging compilers, dynamic debugging aids (“tracers”), automatic test-case
generators, and cross-reference mapping tools are available.
However, tools are not a substitute for careful evaluation based on a complete design model and clear
source code
The people factor
Any discussion of debugging approaches and tools is incomplete without mention of a powerful
ally—other people! A fresh viewpoint, unclouded by hours of frustration, can do wonders.
A final maxim for debugging might be: “When all else fails, get help!”
Correcting the Error
Once a bug has been found, it must be corrected.
But, as we have already noted, the correction of a bug can introduce other errors and therefore
do more harm than good.
Van Vleck [Van89] suggests three simple questions that you should ask before making the
“correction” that removes the cause of a bug:
1. Is the cause of the bug reproduced in another part of the program? In many situations, a
program defect is caused by an erroneous pattern of logic that may be reproduced
elsewhere. Explicit consideration of the logical pattern may result in the discovery of other
errors.
2. What “next bug” might be introduced by the fix I’m about to make? Before the correction is
made, the source code (or, better, the design) should be evaluated to assess coupling of logic
and data structures. If the correction is to be made in a highly coupled section of the
program, special care must be taken when any change is made.
3. What could we have done to prevent this bug in the first place? This question is the first step
toward establishing a statistical software quality assurance approach. If you correct the
process as well as the product, the bug will be removed from the current program and may
be removed from all future programs.
Model based Testing
Model-based testing (MBT) is a black-box testing technique that uses information contained
in the requirements model as the basis for the generation of test cases.
In many cases, the model-based testing technique uses UML state diagrams, an element of the
behavioral model, as the basis for the design of test cases.
The MBT technique requires five steps:
Analyze an existing behavioral model for the software or create one
Traverse the behavioral model and specify the inputs that will force the software to make
the transition from state to state.
Review the behavioral model and note the expected outputs as the software makes the
transition from state to state.
Execute the test cases.
Compare actual and expected results and take corrective action as required.
MBT helps to uncover errors in software behavior, and as a consequence, it is extremely
useful when testing event-driven applications
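The five MBT steps can be sketched with a toy behavioral model: a transition table stands in for a UML state diagram, inputs are chosen to force transitions, and actual states are compared with the expected ones. The door model below is invented for illustration:

```python
# Hypothetical two-state behavioral model of a door (stands in for a
# UML state diagram): step 1 of MBT is to have/create such a model.
transitions = {
    ("closed", "open_cmd"): "open",
    ("open", "close_cmd"): "closed",
}

def next_state(state, event):
    # Events with no defined transition leave the state unchanged.
    return transitions.get((state, event), state)

# Steps 2-3: traverse the model, noting inputs and expected outputs.
events   = ["open_cmd", "open_cmd", "close_cmd"]
expected = ["open",     "open",     "closed"]

# Steps 4-5: execute the test case and compare actual vs expected states.
state, actual = "closed", []
for event in events:
    state = next_state(state, event)
    actual.append(state)
assert actual == expected
```

Any divergence between `actual` and `expected` would flag a behavioral error, which is why MBT pays off for event-driven applications with many state/event combinations.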
THANK YOU