Software Testing

This document covers principles and concepts of software testing: testing aims to find defects early, testing can only prove the presence of defects (never their absence), and exhaustive testing is not possible. It also covers testing goals, types of faults, and the differences between testing and debugging.


B.Sc. (Computer Science)
SEM V

SOFTWARE TESTING

Trupti Wani
Note: This PPT is used only for educational purposes.
Software Testing

 Testing should focus on finding defects before customers find them.
 Testing can only prove the presence of defects, never their absence.
 "Why one tests" is as important as "what to test" and "how to test".
 Test the tests first: a defective test is more dangerous than a defective product.
Fundamental objective

The fundamental objective of software testing is to find defects, as early as possible, and get them fixed.
Testing a ballpoint pen

 Does the pen write in the right color, with the right line thickness?
 Is the logo on the pen according to company standards?
 Is it safe to chew on the pen?
 Does the click mechanism still work after 100,000 clicks?
 Does it still write after a car has run over it?

What is expected from this pen? Intended use!!
Goal: develop software to meet its intended use!

But: human beings make mistakes!

The product of any engineering activity must be verified against its requirements throughout its development.
 Verifying a bridge = verifying its design, construction, process, and so on.

 Software must be verified in much the same spirit. In this lecture, however, we will see that verifying software is perhaps more difficult than verifying other engineering products.
Why does software have bugs?

 Ambiguous/unclear requirements
 Increased complexity of software
 Programming errors
 Poor documentation/knowledge transfer
 Communication gaps
 Inability to manage change
 Time pressure/deadlines
Software development
Objectives of Software Testing

• To check whether the software, as built, meets its requirements
• To find defects in the software before customers find them
• To get defects fixed by the developers
• To prevent defects
• To gain confidence about the level of quality
Principles of Testing

1. Testing shows presence of defects
2. Early Testing
3. Exhaustive testing is not possible
4. Testing is Context Dependent
5. Defect Clustering
6. Pesticide Paradox
7. Absence of Errors fallacy
Testing Shows the Presence of Defects

Every application or product is released into production after a sufficient amount of testing by different teams, passing through phases such as System Integration Testing, User Acceptance Testing, and Beta Testing.

Have you ever heard a testing team claim that they have tested the software fully and that there are no defects in it? Instead, every testing team confirms that the software meets all business requirements and functions as per the needs of the end user.

In the software testing industry, no one will say that there are no defects in the software, because testing cannot prove that software is error-free or defect-free. The objective of testing is to find as many hidden defects as possible using different techniques and methods. Testing can reveal undiscovered defects, but if no defects are found, it does not mean the software is defect free.
Early Testing

Testers need to get involved at an early stage of the Software Development Life Cycle (SDLC), so that defects introduced during requirement analysis, or defects in documentation, can be identified. The cost of fixing such defects is far lower than the cost of fixing defects found during later stages of testing.

The cost of fixing a defect increases steeply as testing moves towards live production.
Exhaustive Testing is Not Possible

 It is not possible to test all the functionalities with all valid and invalid combinations of input data during actual testing. Instead, a subset of combinations is tested, selected by priority using different techniques.

 Exhaustive testing would take unlimited effort, and most of that effort would be ineffective; project timelines also do not allow testing so many combinations. Hence it is recommended to sample the input data using methods such as Equivalence Partitioning and Boundary Value Analysis.
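The sampling idea can be sketched in a few lines. The 18-to-60 age range and the validator below are illustrative assumptions, not part of the slides:

```python
# A minimal sketch of Equivalence Partitioning and Boundary Value Analysis
# for a hypothetical input field that accepts ages 18..60. Both the range
# and the function under test are assumptions chosen for illustration.

def is_valid_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages from 18 to 60 inclusive."""
    return 18 <= age <= 60

# Equivalence Partitioning: one representative value per partition,
# instead of every possible integer (exhaustive testing is not possible).
partitions = {
    "below range (invalid)": 10,
    "in range (valid)": 35,
    "above range (invalid)": 70,
}

# Boundary Value Analysis: values at and adjacent to each boundary,
# where defects tend to cluster.
boundaries = [17, 18, 19, 59, 60, 61]

for label, age in partitions.items():
    print(f"{label}: is_valid_age({age}) -> {is_valid_age(age)}")
for age in boundaries:
    print(f"boundary {age}: is_valid_age({age}) -> {is_valid_age(age)}")
```

Six boundary values plus three partition representatives replace tens of thousands of raw inputs, which is the point of both techniques.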
Testing is Context-Dependent

 There are several domains in the market, such as Banking, Insurance, Medical, Travel, and Advertisement, and each domain has a number of applications. For each domain, the applications have different requirements, functions, testing purposes, risks, techniques, and so on.

 Different domains are tested differently; thus testing is based purely on the context of the domain or application.
Defect Clustering

 During testing, it may happen that most of the defects found relate to a small number of modules. There might be multiple reasons for this: the modules may be complex, the coding behind them may be complicated, and so on.

 This is the Pareto Principle applied to software testing: 80% of the problems are found in 20% of the modules.
Pesticide Paradox

 The Pesticide Paradox principle says that if the same set of test cases is executed again and again over a period of time, those tests lose the ability to identify new defects in the system.

 To overcome the Pesticide Paradox, the set of test cases needs to be regularly reviewed and revised. New test cases can be added, and existing test cases can be deleted if they are no longer able to find defects in the system.
Absence of Errors Fallacy

Even if the software is tested fully and no defects are found before release, we can at best say the software appears to be defect free. But what if the software was tested against the wrong requirements? In that case, even finding defects and fixing them on time does not help, because testing was performed against requirements that do not match the needs of the end user.
Error, Fault, Failure
Error

 People make errors. A good synonym is mistake.
 When people make mistakes while coding, we call these mistakes bugs.
 Errors tend to propagate; a requirements error may be magnified during design and amplified still more during coding.
Fault

 A fault is the result of an error.
 It is more precise to say that a fault is the representation of an error, where representation is the mode of expression, such as narrative text, data flow diagrams, hierarchy charts, source code, and so on.
 Defect is a good synonym for fault, as is bug.
 Faults can be elusive.
Testing vs. Debugging

Testing: starts with known conditions, uses predefined procedures, and has predictable outcomes.
Debugging: starts from possibly unknown initial conditions, and the end cannot be predicted except statistically.

Testing: can and should be planned, designed, and scheduled.
Debugging: the procedure and duration cannot be so constrained.

Testing: a demonstration of error or apparent correctness.
Debugging: a deductive process.

Testing: proves a programmer's failure.
Debugging: is the programmer's justification.
Fault

 When a designer makes an error of omission, the resulting fault is that something which should be present in the representation is missing.
 We might speak of faults of commission and faults of omission.
 A fault of commission occurs when we enter something into a representation that is incorrect.
 Faults of omission occur when we fail to enter correct information. Of these two types, faults of omission are more difficult to detect and resolve.
Failure

 A failure occurs when a fault executes.
 Two points arise here: first, failures only occur in an executable representation, which is usually taken to be source code or, more precisely, a loaded object; second, this definition relates failures only to faults of commission.
 How can we deal with failures that correspond to faults of omission?
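The error-to-fault-to-failure chain can be sketched concretely. The averaging function below is a hypothetical example, not from the slides:

```python
# A minimal sketch (illustrative, not from the slides) of error -> fault -> failure.
# The programmer's *error* (mistake): assuming every input list is non-empty.
# The *fault* (defect) is the representation of that error in the source code.
# A *failure* occurs only when the faulty code executes on an empty list.

def average(values):
    # Fault: no guard for an empty list (the error made it into the code).
    return sum(values) / len(values)

# The fault is present, but this call does not trigger it: no failure observed.
print(average([2, 4, 6]))  # prints 4.0

# The fault executes on an empty list: a ZeroDivisionError, an observed failure.
try:
    average([])
except ZeroDivisionError as exc:
    print("failure observed:", exc)
```

Note that the fault existed in the program all along; testing only reveals it when an input forces it to execute, which is why testing shows the presence of defects but never their absence.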
Program behavior
Types of faults

 Algorithmic: e.g. division by zero
 Computation & Precision: e.g. order of operations
 Documentation: documentation does not match the code
 Stress/Overload: data structure sizes (dimensions of tables, size of buffers)
 Capacity/Boundary: x devices, y parallel tasks, z interrupts
 Timing/Coordination: real-time systems
 Throughput/Performance: speed stated in requirements
 Recovery: e.g. power failure
 Hardware & System Software: e.g. modem
 Standards & Procedures: organizational standards; difficult for programmers to follow each other
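Two of the fault types above can be illustrated in a few lines of code. Both examples are hypothetical sketches, not taken from the slides:

```python
# Illustrative examples (assumptions, not from the slides) of two fault types.

# Computation & Precision fault: an operator-precedence mistake.
def average_of_two_buggy(a, b):
    return a + b / 2        # fault: divides only b; the intent was (a + b) / 2

def average_of_two_fixed(a, b):
    return (a + b) / 2

# Stress/Overload fault: a fixed-size buffer that silently drops data.
BUFFER_SIZE = 3
buffer = []
for item in range(5):
    if len(buffer) < BUFFER_SIZE:
        buffer.append(item)  # items 3 and 4 are lost once the buffer fills

print(average_of_two_buggy(4, 6))  # prints 7.0, not the intended 5.0
print(average_of_two_fixed(4, 6))  # prints 5.0
print(buffer)                      # prints [0, 1, 2]
```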
Testing vs. Debugging

Testing: as it executes, should strive to be predictable, dull, constrained, rigid, and inhuman.
Debugging: demands intuitive leaps, experimentation, and freedom.

Testing: much testing can be done without design knowledge.
Debugging: impossible without detailed design knowledge.

Testing: can often be done by an outsider.
Debugging: must be done by an insider.

Testing: much of test execution and design can be automated.
Debugging: automated debugging is still a dream.
What is a defect?

 The software doesn't do something that the product specification says it should do.
 The software does something that the product specification says it shouldn't do.
 The software does something that the product specification doesn't mention.
 The software doesn't do something that the product specification doesn't mention but should do.
 The software is difficult to understand, hard to use, slow, etc.
Software testing process
 Test planning
 Test design
 Test implementation (in case of automation)
 Test execution
 Test analysis
 Postmortem reviews
Test initiation criteria

 Timing: as soon as we have baselined software requirements.
 Objective: to trap requirements-related defects as early as they can be identified.
 Approach: test the requirements.
Test completion criteria

The following factors are considered:

 Deadlines, e.g. release deadlines or testing deadlines
 Test cases completed with a certain percentage passed
 Coverage of code, functionality, or requirements reaches a specified point
 Bug rate falls below a certain level
 Beta or alpha testing period ends
Participants in testing

 Customer
 Users
 Developers: includes the individuals/groups who gather requirements, design, build, change, and maintain the software
 Testers
 Senior management
 Auditors
Challenges in testing

 Testing is considered late in the project
 Requirements are not testable
 Integration happens only after all components have been developed
 Test progress is hard to measure
 Complete testing is not possible
Economics of testing

(Chart omitted: it plots the cost of testing against the cost of customer dissatisfaction as the quantity and duration of testing grow. The number of defects remaining falls as testing proceeds; under-testing leaves customer dissatisfaction, over-testing inflates cost, and the optimum amount of testing lies between the two.)
The Psychology of testing

 People make mistakes, but they do not like to admit them!
 Developer test: blindness to one's own errors
 Independent testing team
 Reporting of failures
 Mutual comprehension
Qualities of Good Tester
 A good software tester asks questions
Software testing requires asking a lot of questions. “What happens if ____?”, “Why isn’t
____ working?”, “Why does it work like this?”. Questions inspire new test cases and
different ways of approaching the tests. When QA is involved early on, questions are asked
early on, and quality is injected early on.

 A good software tester is curious


Good software testers want to know what’s going on behind the scenes. They’re constantly
looking for problems to overcome. A curious software tester is more likely to find bugs
and usability issues than a non-curious tester.

 A good software tester is a strong communicator


Software testers are often the bearer of bad news. They need to be able to communicate
effectively and passionately, but also delicately. From written communication to verbal
communication, being able to communicate problems and weaknesses, steps to replicate
bugs, and why something should be a certain way is an art form.
 A good software tester is patient
It’s easy to get caught up in the excitement of finding a bug or identifying an opportunity for
improvement. Software testers often get “no” as a response. Being patient is key to being a successful
software tester and building strong relationships with your team.

 A good software tester is a strong writer


Software testing involves a lot of writing: test cases, bug reports, emails. Poor writing leads to communication breakdowns and wasted time, both things most software teams can't afford. Writing efficiently and effectively goes a long way in software testing.

 A good software tester meets deadlines


There’s never enough time to test everything. The ability to prioritize things and stay on task is crucial
for meeting deadlines. Time management is very important to being a good software tester.

 A good software tester is empathetic


Just like a good designer should design with the user at the top of mind, testers should test with the user
at the top of their mind. Putting yourself in the shoes of the user will help uncover different problems
and areas for improvement.
 A good software tester thinks creatively
As a software tester, you need to approach things from different angles. Think outside of the box and
try different things. Try to break it, and try to break it in different ways.

 A good software tester is organized


Organization is crucial to being a successful software tester. As you know, testing involves a lot of documentation, artifacts, timelines, and communication. If you drop the ball, it affects the entire team and project. Use the right tools and processes that work for you and your team to stay on top of things.

 A good software tester has technical skills


Knowing which questions to ask requires some technical knowledge. Technical skills also help you
understand limitations and boundaries within the application.

 A good software tester is a team player


Be easy to work with. Being a team player encompasses many of the other qualities we've discussed. Remember to appreciate the work of your team. Developers have an entirely different skill set than you, and they are smart even if they are the source of bugs. When things go wrong, remember you're on the same team.
 A good software tester is quality driven
Quality is a mindset. A quality-driven person lives and breathes quality every day. As a software tester, this is where the passion for "quality" comes from. You strongly desire the best possible product and never compromise on quality.

 A good software tester pays attention to the details


A good software tester is thorough. Testers need to drive things to completion without
missing a beat. Remember, testing is the last phase of the software development
lifecycle. Your team and your customers are relying on you.

As you can see, software testing takes a unique set of skills. Characteristics of a good
software tester include both hard skills and soft skills. Testing isn’t for everyone. It takes
a creative, technically-minded person to be a successful tester. Approach your job with
these things in mind, and you’ll find yourself becoming a better tester over time.
Quality, QA and QC

 Quality is meeting the requirements expected of the software, consistently and predictably.
 Expected behavior is characterized by a set of test cases. Each test case is characterized by:
 The environment under which the test case is to be executed
 The inputs that should be provided for that test case
 How these inputs should get processed
 What changes should be produced in the internal state or environment; and
 What output should be produced
Quality, QA and QC

 The actual behavior of a given software for a given test case, under given inputs, in a given environment, and in a given internal state is characterized by:
 How these inputs actually get processed
 What changes are actually produced in the internal state or environment; and
 What outputs are actually produced

 If the actual and expected behavior are identical, the test case passes; otherwise the given software has a defect on that test case.
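The expected-versus-actual comparison can be sketched as a small data structure. The adder and the test case below are hypothetical examples, not from the slides:

```python
# A minimal sketch (illustrative assumptions throughout) of a test case as
# characterized above: environment, inputs, expected output, and a verdict
# obtained by comparing expected behavior against actual behavior.

def system_under_test(a, b):
    # Hypothetical software under test: adds two numbers.
    return a + b

test_case = {
    "environment": "Python 3, local run",  # environment for execution
    "inputs": (2, 3),                      # inputs provided for this test case
    "expected_output": 5,                  # output that should be produced
}

actual_output = system_under_test(*test_case["inputs"])
verdict = "pass" if actual_output == test_case["expected_output"] else "defect"
print(verdict)  # prints pass
```

A real test case would also record expected state/environment changes, per the characterization above; this sketch checks only the produced output.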
Quality Control

 Build the product
 Test it for expected behavior after it is built
 If the expected behavior is not the same as the actual behavior of the product, fix the product as necessary
 Rebuild the product

 The above steps are repeated until the expected behavior of the product matches the actual behavior for the scenarios tested.
 QC is defect detection and defect correction oriented.
 QC works on the product rather than the process.
Quality Assurance

 Attempts defect prevention by concentrating on the process of producing the product, rather than on defect detection/correction after the product is built.

 Review the design before the product is built
 Correct the design errors
 Produce better code (coding standards to be followed)

 QA continues throughout the life of the product; hence it is a staff function (everybody's responsibility).
QA vs. QC

Quality Assurance: concentrates on the process of producing the products.
Quality Control: concentrates on specific products.

Quality Assurance: defect prevention oriented.
Quality Control: defect detection and correction oriented.

Quality Assurance: usually done throughout the life cycle.
Quality Control: usually done after the product is built.

Quality Assurance: a staff function.
Quality Control: a line function.

Quality Assurance: examples are reviews and audits.
Quality Control: an example is software testing at various levels.

Quality Assurance: known as verification.
Quality Control: known as validation/testing.
SDLC
Software Quality Factors

The various factors that influence software quality are termed software quality factors. They can be broadly divided into two categories: factors that can be measured directly, such as the number of logical errors, and factors that can be measured only indirectly, such as maintainability. Each factor must be measured to check for content and quality control.

Several models of software quality factors and their categorization have been suggested over the years. The classic model, suggested by McCall, consists of 11 factors (McCall et al., 1977). Models consisting of 12 to 15 factors were suggested by Deutsch and Willis (1988) and by Evans and Marciniak (1987). These models do not differ substantially from McCall's model, which provides a practical, up-to-date method for classifying software requirements (Pressman, 2000).
McCall's Factor Model

 This model classifies all software requirements into 11 software quality factors, grouped into three categories: product operation, product revision, and product transition factors.
• Product operation factors: Correctness, Reliability, Efficiency, Integrity, Usability
• Product revision factors: Maintainability, Flexibility, Testability
• Product transition factors: Portability, Reusability, Interoperability
Product Operation Software Quality Factors

According to McCall's model, the product operation category includes five software quality factors, which deal with requirements that directly affect the daily operation of the software. They are as follows:

 Correctness
These requirements deal with the correctness of the output of the software system. They include:
• The output mission
• The required accuracy of output, which can be negatively affected by inaccurate data or inaccurate calculations
• The completeness of the output information, which can be affected by incomplete data
• The up-to-dateness of the information, defined as the time between the event and the response by the software system
• The availability of the information
• The standards for coding and documenting the software system

 Reliability
Reliability requirements deal with service failure. They determine the maximum allowed failure rate of the software system, and can refer to the entire system or to one or more of its separate functions.
 Efficiency
This factor deals with the hardware resources needed to perform the different functions of the software system. It includes processing capability (given in MHz), storage capacity (given in MB or GB), and data communication capability (given in Mbps or Gbps). It also deals with the time between recharging of the system's portable units, such as information system units located in portable computers or meteorological units placed outdoors.

 Integrity
This factor deals with software system security: preventing access by unauthorized persons, and distinguishing between groups of people to be given read permission as well as write permission.

 Usability
Usability requirements deal with the staff resources needed to train a new employee and to operate the software system.
Product Revision Quality Factors

According to McCall's model, three software quality factors are included in the product revision category. These factors are as follows:

 Maintainability
This factor considers the efforts that will be needed by users and maintenance personnel to identify the reasons for software failures, to correct the failures, and to verify the success of the corrections.

 Flexibility
This factor deals with the capabilities and efforts required to support adaptive maintenance activities of the software. These include adapting the current software to additional circumstances and customers without changing the software. This factor's requirements also support perfective maintenance activities, such as changes and additions to the software in order to improve its service and to adapt it to changes in the firm's technical or commercial environment.

 Testability
Testability requirements deal with the testing of the software system as well as with its operation. They include predefined intermediate results, log files, and the automatic diagnostics performed by the software system prior to starting, to find out whether all components of the system are in working order and to obtain a report about the detected faults. Another type of these requirements deals with automatic diagnostic checks applied by maintenance technicians to detect the causes of software failures.
Product Transition Software Quality Factors

According to McCall's model, three software quality factors are included in the product transition category, which deals with the adaptation of software to other environments and its interaction with other software systems. These factors are as follows:

 Portability
Portability requirements concern the adaptation of a software system to other environments consisting of different hardware, different operating systems, and so forth. It should be possible to continue using the same basic software in diverse situations.

 Reusability
This factor deals with the use of software modules originally designed for one project in a new software project currently being developed. It may also enable future projects to make use of a given module, or a group of modules, of the currently developed software. The reuse of software is expected to save development resources, shorten the development period, and provide higher quality modules.

 Interoperability
Interoperability requirements focus on creating interfaces with other software systems or with other equipment firmware. For example, the firmware of production machinery and testing equipment interfaces with the production control software.
Thank you