Testing Presentation

The document discusses foundational concepts in software testing. It defines testing as evaluating attributes or capabilities of a program to determine if it meets requirements. Testing aims to establish confidence in a program's correctness and reliability. The document outlines different levels of testing (unit, integration, system, acceptance) and discusses principles like risk-based prioritization, the impossibility of complete testing, and the importance of planning tests. General techniques covered include positive and negative testing as well as white-box and black-box approaches.

Testing Foundations

What is testing?
 Testing is the process of establishing confidence that a program or system does what it is supposed to. (Hetzel, 1973)
 Testing is the process of executing a program or system with the intent of finding errors. (Myers, 1979)
 Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. (Hetzel, 1983)
Another definition found in some of the recent literature defines testing as the measurement of software quality.

 What do we mean by software quality?


Quality

Quality means "meets requirements".

This definition establishes that meeting requirements is a criterion of quality that must be included in the guidelines we establish. If requirements are complete and a product meets them, then, by definition, it is a quality product!
Quality

Quality is not intangible.

The purpose of testing is to make quality visible.

Testing is the measurement of software quality.


Typical Software Quality Factors

 Functionality (exterior quality)
 Correctness
 Reliability
 Usability
 Integrity
Typical Software Quality Factors

 Engineering (interior quality)
 Efficiency
 Testability
 Documentation
 Structure
 Adaptability (future qualities)
 Flexibility
 Reusability
 Maintainability
Testing practices

Most organizations conduct at least three distinct levels of testing work. Individual programs are "unit tested," and then groups of programs are tested together in "system testing." Completed systems are then "acceptance tested." Different individuals or groups may perform these different levels, with acceptance testing usually carried out by the end user or customer. Additional levels of testing are employed in larger, more complex projects.
 Error: A mistake in the code.
 Failure: A misbehavior of the program.
 Critical conditions: The data values, environmental conditions, and program steps that are essential for eliciting the failure.
 Symptom: A minor misbehavior that warns of an error that can cause a much more serious failure.
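A minimal, hypothetical Python sketch may help separate these terms: the error lives in the code, while the failure is only observed under particular critical conditions.

# The function and its bug are hypothetical, for illustration only.
def average(values):
    # Error (mistake in the code): divides by a hard-coded 2
    # instead of len(values).
    return sum(values) / 2

# The critical condition len(values) == 2 masks the error:
print(average([4, 6]))     # 5.0 -- looks correct, no failure observed

# Any other length elicits the failure (visible misbehavior):
print(average([4, 6, 8]))  # 9.0 -- failure: the true mean is 6.0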
Unit testing

Deciding what to test and when to stop at the unit testing level is most often the programmer's decision. Programmers may approach unit testing either from an external (black-box) perspective, with test cases based on the specifications of what the program is supposed to do, or from an internal (white-box) perspective, with test cases developed to "cover" or exercise the internal logic of the program.
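A minimal black-box unit test sketch using Python's built-in unittest framework; the function under test and its one-line specification are hypothetical:

import unittest

def absolute(x):
    """Specification: return x if x >= 0, otherwise -x."""
    return x if x >= 0 else -x

class AbsoluteBlackBoxTests(unittest.TestCase):
    # Cases derived only from the specification, not from the code.
    def test_positive_input(self):
        self.assertEqual(absolute(5), 5)

    def test_negative_input(self):
        self.assertEqual(absolute(-5), 5)

    def test_zero(self):
        self.assertEqual(absolute(0), 0)

if __name__ == "__main__":
    unittest.main()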
System testing

System-level testing begins when modules are brought together. Often a separate testing level, called integration testing, is carried out first to test interfaces and ensure that modules are communicating as expected. Then the system functions are exercised and the software is stressed to uncover its limitations and measure its full capabilities. As a practical matter, most system testing relies on the black-box perspective.
Acceptance testing

Acceptance testing begins when system testing is complete. Its purpose is to provide the end user or customer with confidence and assurance that the software is ready to be used. Test cases or situations are often a subset of the system test set and usually include typical business transactions or a parallel month of processing. Tests are quite often conducted informally, with little record maintained as to what is tested and what results are obtained.
Some organizations have established independent quality
assurance and/or test groups and have given these groups
authority and responsibility to address testing technique
improvements. Some use tools and automated aids
extensively. Others are outstanding on certain projects or
levels of testing and terrible at others. Almost all report
serious concerns with system quality problems that go
undetected or are discovered too late in the development
cycle. Most feel they need significant improvements in
testing techniques and are striving to find ways to become
more systematic and effective.
Why is testing important?

 Testing can help reduce the risk of failures or the occurrence of problems.
 By locating and removing defects before shipping, the costs and impacts are lowered.
 Testing, if applied correctly, can contribute to the overall quality of the software.
 In some situations testing may be contractually mandated or required by legal or other standards.
Testing principles

Complete Testing Is Not Possible

Many programmers seem to believe that they can and do test their programs "thoroughly." When asked about testing practices, one frequently hears expressions such as: "I'll stop testing when I am sure it works"; "We'll implement the system as soon as all the errors are corrected." Such expressions assume that testing can be "finished," that one can be certain that all defects are removed, or that one can become totally confident of success. This is an incorrect view. We cannot hope to achieve complete testing; as we shall see, the reasons are both practical and theoretical.
Testing principles

Testing Work Is Creative and Difficult

 Anyone who has had much experience in testing work knows that testing a piece of software is not simple. Despite this, we perpetuate several myths that have caused a great deal of harm.
Testing principles
False Beliefs about Testing
 Testing is easy.
 Anyone can do testing.
 No training or prior experience is required.
Why Testing Is Not Simple
 To test effectively you must thoroughly
understand a system.
 Systems are neither simple nor simple to
understand.
 Therefore, testing is not simple.
Testing principles

An Important Reason for Testing Is to Prevent Deficiencies from Occurring

 Most practitioners do not appreciate the significance of testing in terms of preventing defects instead of just discovering them.
 The traditional view of testing was as an after-the-fact activity that happened after all coding was completed. Gradually, during the last fifteen years, a very different attitude has emerged: we now look at testing not as a phase or step in the development cycle, but as a continuous activity over the entire development cycle.
Testing principles

Testing Is Risk-Based

 The amount of testing we should (and are willing to) perform depends directly on the risk involved. Programs or systems with high risk require more test cases and more test emphasis. Programs with low risk or limited impact of failure do not warrant the same concern.

Risk: potential failure areas (adverse future events or hazards) in the software or system are known as product risks.
Testing principles

Testing Must Be Planned

A document that defines the overall testing objectives and the testing approach is called a test plan. A document or statement that defines what we have selected to be tested and describes the results expected is called a test design. Test plans and designs can be developed for any level of testing: requirements and designs, programs, subsystems, and complete systems.
What Is the Cost of Not Testing?

 Some defects are very subtle and can be difficult to detect, but they may still have a significant effect on an organization’s business. For example, if a system fails and is unavailable for a day before it can be recovered, then the organization may lose a day’s effort per person affected.
 In practice, it is impossible to ensure that even relatively simple programs are free of defects, because of the complexity of computer systems and the fallibility of the development process and of the humans involved in it.
General Testing Techniques

 The process of Positive Testing is intended to verify that a system conforms to its stated requirements.
 Positive testing must be performed in order to determine whether the AUT (Application Under Test) is “fit for purpose”; where the application will be delivered to a customer, the process is likely to be a compulsory aspect of contractual acceptance.
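As a minimal sketch, a positive test exercises behavior the requirements explicitly promise. The requirement and the withdraw() function below are hypothetical:

# Hypothetical requirement: withdraw(balance, amount) returns the new
# balance when 0 < amount <= balance.
def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

# Positive test: confirm the documented, expected behavior.
assert withdraw(100, 30) == 70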
General Testing Techniques

 The process of Negative Testing is intended to demonstrate “that a system does not do what it is not supposed to do”. Negative testing is often used to test aspects of the AUT that have not been documented, or that have been poorly or incompletely documented, in the specification.
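Continuing the hypothetical withdraw() sketch, negative tests confirm that invalid requests are rejected rather than silently processed:

import unittest

def withdraw(balance, amount):
    # Same hypothetical function as in the positive-testing sketch.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

class WithdrawNegativeTests(unittest.TestCase):
    def test_overdraft_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 150)   # more than the balance

    def test_zero_amount_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 0)     # nothing to withdraw

if __name__ == "__main__":
    unittest.main()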
White Box and Black Box Testing

 White Box and Black Box testing are complementary approaches to testing the AUT that are distinguished by the level of knowledge the Test Analyst has of the internal structure of the software.
White Box and Black Box Testing

 White Box tests are designed with knowledge of how the system under test is constructed.
 It is anticipated that the Test Analyst will have access to the design documents for the system or other implementation documentation, or will be familiar with the internal aspects of the system.
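A minimal white-box sketch in Python, assuming the tester can read the implementation: the hypothetical function below has two branches, and one test case is chosen to cover each.

def shipping_cost(weight_kg):
    # Implementation visible to the tester: two branches.
    if weight_kg <= 1.0:
        return 5.00                              # flat-rate branch
    return 5.00 + 2.50 * (weight_kg - 1.0)       # per-kg branch

# White-box tests: one case per branch, derived from the code itself.
assert shipping_cost(0.5) == 5.00                # covers the flat-rate branch
assert shipping_cost(3.0) == 5.00 + 2.50 * 2.0   # covers the per-kg branch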
White Box and Black Box Testing

 Black Box tests are designed without knowledge of how the system under test is constructed.
 In BBT, the design of Test Cases must be based on the external behavior of the system. If requirements exist for the system, these must be tested against. Where there are no requirements, user guides or documentation describing how the system should behave can be used as the basis of test design.
 The terms behavioral and structural testing are often used synonymously with the terms “Black Box Testing” and “White Box Testing”. In fact, behavioral test design is slightly different from BBT because knowledge of how the system under test is constructed is not strictly prohibited.
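In a black-box sketch of the same hypothetical shipping rule, test cases come only from the documented behavior, never from reading the code:

def shipping_cost(weight_kg):
    # Implementation hidden from the black-box tester; any code
    # satisfying the documented rule would do.
    return 5.00 if weight_kg <= 1.0 else 5.00 + 2.50 * (weight_kg - 1.0)

# Black-box tests, derived only from the documented behavior:
# "Shipping costs 5.00 for parcels up to 1 kg, plus 2.50 per extra kg."
assert shipping_cost(1.0) == 5.00   # documented flat rate
assert shipping_cost(2.0) == 7.50   # 5.00 + 2.50 for one extra kg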
Error guessing

 Error guessing is not a testing technique in itself, but rather a skill that can be applied to all of the other testing techniques. It is the ability to find errors or defects in the AUT by what appears to be intuition.
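That intuition often targets historically troublesome inputs. A hypothetical sketch of error-guessing probes against a small parsing function:

def parse_age(text):
    # Hypothetical function under test.
    value = int(text)
    if value < 0 or value > 130:
        raise ValueError("age out of range")
    return value

# Guessed trouble spots: empty/blank strings, boundary values,
# non-numeric text, decimals, and leading zeros.
for raw in ["", "  ", "-1", "0", "130", "131", "12.5", "abc", "007"]:
    try:
        print(repr(raw), "->", parse_age(raw))
    except ValueError as exc:
        print(repr(raw), "-> rejected:", exc)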
Oracles

 An oracle is the principle or mechanism by which you recognize a problem. Any decision-support tool that is sometimes wrong but still useful is a heuristic.
 Oracles are fallible, but they are useful.
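One common heuristic oracle is a trusted reference implementation. In this hypothetical sketch, a hand-rolled sort is checked against Python's built-in sorted(), which serves as the fallible but useful oracle:

import random

def my_sort(items):
    # Hypothetical implementation under test: insertion sort.
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] < item:
            i += 1
        result.insert(i, item)
    return result

# Heuristic oracle: trust sorted() as the reference answer.
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert my_sort(data) == sorted(data), f"oracle mismatch on {data}"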
Functional Testing Techniques
 Equivalence Partitioning (sketched below).
 Boundary Analysis (sketched below).
 Intrusive Testing.
 Random Testing.
 Static Testing.
 Thread Testing.
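A minimal sketch of equivalence partitioning and boundary analysis against a hypothetical "valid ages are 0 to 130" rule: one representative value per partition, plus the values on either side of each boundary.

def is_valid_age(age):
    # Hypothetical rule: valid ages are 0..130 inclusive.
    return 0 <= age <= 130

# Equivalence partitioning: one representative per partition.
assert is_valid_age(-20) is False    # partition: below the valid range
assert is_valid_age(65) is True      # partition: inside the valid range
assert is_valid_age(200) is False    # partition: above the valid range

# Boundary analysis: each boundary and its nearest neighbors.
for age, expected in [(-1, False), (0, True), (130, True), (131, False)]:
    assert is_valid_age(age) is expected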
Nonfunctional Testing Techniques
 Configuration/Installation Testing.
 Compatibility and Interoperability Testing.
 Documentation and Help Testing.
 Fault Recovery Testing.
 Performance Testing.
 Reliability Testing.
 Security Testing.
 Stress Testing.
 Usability Testing.
The Testing Phases

 Overview.
 Unit testing.
 Integration.
 System Testing.
 System Integration Testing.
 Acceptance Testing.
 Regression Testing.
Methodology

 A methodology is a set of steps or tasks for achieving an activity or end.
 Software methodologies emerged with the development of the phased life cycle model, consisting of analysis, design, implementation, and maintenance.
 A testing methodology should be a part of the software methodology.
Methodology

 An effective testing methodology recognizes testing as having its own life cycle, consisting of planning (analysis and objective setting), acquisition (design and implementation of the tests), and measurement (execution and evaluation).
 A testing methodology must consider both the what (the steps) and the when (the timing).
 Testing is no longer an art.
The Testing life cycle

 Project Initiation
 Develop broad test strategy.
 Establish the overall test approach and effort.
 Requirements
 Establish the testing requirements.
 Assign testing responsibilities.
 Design preliminary test procedures and
requirements-based tests.
 Test and validate the requirements.
The Testing life cycle

 Design
 Prepare preliminary system test plan and design
specification.
 Complete acceptance test plan and design
specification.
 Complete design-based tests.
 Test and validate the design.
The Testing life cycle
 Development
 Complete the system test plan.
 Finalize test procedures and any code-based tests.
 Complete module or unit test designs.
 Test the programs.
 Integrate and test subsystems.
 Conduct the system test.
 Implementation
 Conduct the acceptance test.
 Test changes and fixes.
 Evaluate testing effectiveness.
