Soft Test Unit1

The document provides an overview of software testing, detailing its importance, principles, and fundamentals. It emphasizes the need for effective testing strategies, error identification, and the role of debugging in the software development lifecycle. Key concepts include testability, characteristics of good tests, and the significance of capturing user requirements and needs.


UNIT 1

Software Testing

1.1. Software Testing


1.2. Software Testing Fundamentals
1.3. Debugging
Introduction
Today, software takes on a dual role. It is a product and at the same time, the vehicle for
delivering a product.

As a product, it delivers the computing potential embodied by computer hardware or, more broadly, by a network of computers that are accessible by local hardware.

Testing is done manually or using automated tools. Testing is done by a separate group of testers. Testing is done right from the beginning of the software development life cycle until the end, when the software is delivered to the customer.

Software testing refers to the process of evaluating software with the intention of finding errors in it. Software testing is a technique aimed at evaluating an attribute or capability of a program or product and determining whether it meets the required quality.

Software testing is also used to test the software for other software quality factors like
reliability, usability, integrity, security, capability, efficiency, portability, maintainability,
compatibility etc.
1.1 Software Testing
Software testing is a process of verifying and validating that a software application or program meets the business and technical requirements that guided its design and development, and works as expected. It also identifies important errors or flaws in the application, categorised by severity level, that must be fixed.

A) The Nature of Errors:

1) Stages of Development:
It would be convenient to know how errors arise, because then we could try to avoid them during all the stages of development.
2) Specifications:
Similarly, it would be useful to know the most commonly occurring faults, because then we could look for them during verification. Regrettably, the available data is inconclusive, and it is only possible to make vague statements about these things.
3) Requirements Analysis:
A software system has an overall specification, derived from requirements analysis. In
addition, each component of the software ideally has an individual specification that is
derived from architectural design. The specification for a component can be:
a) Ambiguous (unclear).
b) Incomplete.
c) Faulty.
4) Component Specification:
Any such problems should, of course, be detected and remedied by verification of the
specification prior to development of the component, but, of course, this verification
cannot and will not be totally effective. So there are often problems with a component
specification.
B) Testing Principles:
Principles for software testing are as follows:

Test a Program to Try to Make it Fail

Start Testing Early

Testing is Context Dependent

Define Test Plan

Design Effective Test Cases

Test for Valid as well as Invalid Conditions

Review Test Cases Regularly

Testing Must be Done by Different Persons at Different Levels

Test A Program Innovatively

Use Both Static and Dynamic Testing

Defect Clustering

Test Evaluation

Error Absence Myth

End of Testing
1) Test a Program to Try to Make it Fail:
Testing is the process of executing a program with the intent of finding errors [9]. Our objective should be to demonstrate that a program has errors; only then can the true value of testing be realised.
2) Start Testing Early:
If you want to find errors, start testing as early as possible. This helps in fixing numerous errors in the early stages of development and reduces the rework of finding errors in later stages.
3) Testing is Context Dependent:
Testing is done differently in different contexts. Testing should be appropriate and different at different points of time.
4) Define Test Plan:
Test Plan usually describes test scope, test objectives, test strategy, test environment,
deliverables of the test, risks and mitigation, schedule, levels of testing to be applied,
methods, techniques and tools to be used.
5) Design Effective Test Cases:
Complete and precise requirements are crucial for effective testing. User Requirements
should be well known before test case design. Testing should be performed against those
user requirements.
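As an illustration, suppose a hypothetical requirement states that a username must be 3 to 20 characters long. Effective test cases can be derived directly from that requirement, including its boundaries. A minimal sketch in Python (the function name and the rule itself are assumptions for the example):

```python
def is_valid_username(name):
    """Hypothetical function under test: accepts 3-20 character names."""
    return 3 <= len(name) <= 20

# Test cases derived directly from the stated requirement,
# probing both boundaries of the valid range.
assert is_valid_username("abc")            # lower boundary: 3 chars, valid
assert is_valid_username("a" * 20)         # upper boundary: 20 chars, valid
assert not is_valid_username("ab")         # just below the lower boundary
assert not is_valid_username("a" * 21)     # just above the upper boundary
```

Each assertion traces back to a sentence in the requirement, which is what makes the test cases precise rather than arbitrary.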
6) Test for Valid as well as Invalid Conditions:
In addition to valid inputs, we should also test system for invalid and unexpected
inputs/conditions. Many errors are discovered when a program under test is used in
some new and unexpected way and invalid input conditions seem to have higher error
detection yield than do test cases for valid input conditions.
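The principle can be sketched with a hypothetical `parse_age` function: alongside valid values, the tests deliberately feed invalid and unexpected input and check that it is rejected rather than silently accepted.

```python
def parse_age(text):
    """Hypothetical function under test: parse an age in years (0-150)."""
    value = int(text)          # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

# Valid conditions
assert parse_age("0") == 0
assert parse_age("42") == 42

# Invalid and unexpected conditions -- these often have a higher
# error-detection yield than the valid cases.
for bad in ["-1", "151", "forty", ""]:
    try:
        parse_age(bad)
        raise AssertionError(f"{bad!r} was wrongly accepted")
    except ValueError:
        pass  # rejected as expected
```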
7) Review Test Cases Regularly:
Repeating same test cases over and over again eventually will no longer find any new
errors. Therefore the test cases need to be regularly reviewed and revised and new and
different tests need to be written to exercise different parts of the software or system to
potentially find more defects.
8) Testing Must be Done by Different Persons at Different Levels:
Different purposes are addressed at the different levels of testing. Factors which decide
who will perform testing include the size and context of the system, the risks, the
development methodology used, the skill and experience of the developers.
9) Test A Program Innovatively:
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. It is impossible to test a program sufficiently to guarantee the absence of all errors. Instead of exhaustive testing, we use risks and priorities to focus testing efforts more on suspect components than on less suspect ones.
10) Use Both Static and Dynamic Testing:
Static testing is good at depth; it reveals the developers' understanding of the problem domain and data structures. Dynamic testing, by contrast, executes the program and reveals failures that only appear at run time, so the two approaches complement each other.
11) Defect Clustering:
Errors tend to come in clusters. The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.
12) Test Evaluation:
We should have some criterion to decide whether a test is successful or not. If limited test cases are executed, the test oracle (the human or mechanical agent which decides whether the program behaved correctly on a given test) can be the tester himself/herself, who inspects the results and decides the conditions that make a test run successful.
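A mechanical oracle can be as simple as a trusted reference implementation against which each test run is judged. A minimal sketch, where `fast_sum` is a hypothetical function under test and Python's built-in `sum` plays the oracle:

```python
def fast_sum(values):
    """Hypothetical optimised function under test."""
    total = 0
    for v in values:
        total += v
    return total

def oracle(values):
    """Trusted (if slower) reference that judges each test run."""
    return sum(values)

# A test run is deemed successful when the output matches the oracle.
for case in [[], [1, 2, 3], [-5, 5], list(range(100))]:
    assert fast_sum(case) == oracle(case), f"failed on {case}"
```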
13) Error Absence Myth:
A system that does not fulfil user requirements will not be usable even if it does not have any errors. Finding and fixing defects does not help if the system built does not fulfil the user's needs and expectations.
14) End of Testing:
Software testing is an ongoing process, which is potentially endless but has to be
stopped somewhere.
1.2 Software Testing Fundamentals
Software testing fundamentals are so important that without developing them one cannot progress within the discipline. Just as our business domain knowledge affects our effectiveness in designing test cases, so our fundamental competencies affect our ability to become skilled within any given discipline. The fundamentals cover the analysis of systems and requirements to create test scenarios; the reporting and management of defects; and testing techniques and test execution.

A) Testing Objectives:
Glen Myers states a number of rules that can serve well as testing objectives:
1) Testing is a process of executing a program with the intent of finding an error.
2) A good test case is one that has a high probability of finding an as-yet undiscovered
error.
3) A successful test is one that uncovers an as-yet-undiscovered error.
These objectives imply a dramatic change in viewpoint. They move counter to the
commonly held view that a successful test is one in which no errors are found. Our
objective is to design tests that systematically uncover different classes of errors and to
do so with a minimum amount of time and effort. If testing is conducted successfully
(according to the objectives stated previously), it will uncover errors in the software. It is
important to keep this (rather gloomy) statement in mind as testing is being conducted.
B) Testability:
In ideal circumstances, a software engineer designs a computer program, a system or a product with "testability" in mind. This enables the individuals charged with testing to design effective test cases more easily. Testability can be described in terms of the following characteristics:

Operability

Observability

Controllability

Decomposability

Simplicity

Stability

Understandability
1) Operability:
“The better it works, the more efficiently it can be tested.” If a system is designed and
implemented with quality in mind, relatively few bugs will block the execution of tests,
allowing testing to progress without fits and starts.

2) Observability:
“What you see is what you test.” Inputs provided as part of testing produce distinct
outputs. System states and variables are visible or queriable during execution. Incorrect
output is easily identified. Internal errors are automatically detected and reported. Source
code is accessible.

3) Controllability:
“The better we can control the software, the more the testing can be automated and
optimised.” All possible outputs can be generated through some combination of input,
and I/O formats are consistent and structured. All code is executable through some
combination of input. Software and hardware states and variables can be controlled
directly by the test engineer. Tests can be conveniently specified, automated, and
reproduced.
4) Decomposability:
“By controlling the scope of testing, we can more quickly isolate problems and perform
smarter retesting.” The software system is built from independent modules that can be
tested independently.
5) Simplicity:
“The less there is to test, the more quickly we can test it.” The program should exhibit functional simplicity.
6) Stability:
“The fewer the changes, the fewer the disruptions to testing.” Changes to the software are infrequent, controlled when they do occur, and do not invalidate existing tests. The software recovers well from failures.
7) Understandability:
“The more information we have, the smarter we will test.” The architectural design and
the dependencies between internal, external, and shared components are well
understood. Technical documentation is instantly accessible, well organised, specific and
detailed, and accurate. Changes to the design are communicated to testers.
C) Test Characteristics:
The following are the characteristics of a “good” test:

A Good Test has a High Probability of Finding an Error

A Good Test is not Redundant

A Good Test should be “Best of Breed”

A Good Test should be Neither too Simple nor too Complex
1) A Good Test has a High Probability of Finding an Error:
To achieve this goal, the tester must understand the software and attempt to develop a
mental picture of how the software might fail. Ideally, the classes of failure are probed.
For example, one class of potential failure in a graphical user interface is the failure to recognise proper mouse position. A set of tests would be designed to exercise the mouse in an attempt to demonstrate an error in mouse position recognition.
2) A Good Test is not Redundant:
Testing time and resources are limited. There is no point in conducting a test that has the
same purpose as another test. Every test should have a different purpose (even if it is
subtly different).
3) A Good Test should be “Best of Breed”:
In a group of tests that have a similar intent, time and resource limitations may permit the execution of only a subset of these tests. In such cases, the test that has the highest likelihood of uncovering a whole class of errors should be used.
4) A Good Test should be Neither too Simple nor too Complex:
Although it is sometimes possible to combine a series of tests into one test case, the
possible side effects associated with this approach may mask errors. In general, each test
should be executed separately.
D) Essentials of Software Testing:
Software testing is a disciplined approach: it executes software work products and finds defects in them. The intention of software testing is to find all possible failures, so that eventually these are eliminated and a good product is given to the customer. It intends to find all possible defects and/or identify risks which the final user may face in real life while using the software.

Strengths

Weaknesses

Threats

1) Strengths:
Some areas of software are very strong, and no (or very few) defects are found when testing such areas. The areas may be modules, screens, algorithms, or processes such as requirement definition, design, coding and testing. This represents strong processes in these areas, supporting the development of a good product. One can rely on these processes and try to deploy them in other areas.
2) Weakness:
The areas of software where requirement compliance is on the verge of failure may represent weak areas. They may not be failing at that moment, but they may be on the boundary of compliance, and if something goes wrong in the production environment, it will result in a defect or failure of the software product. The processes in these areas have some problems.
3) Threats:
Threats are the problems or defects in the software which result in failures. They represent problems associated with some processes in the organisation, such as requirement clarity, knowledge base and expertise. An organisation must invest in making these processes stronger. Threats clearly indicate the failure of an application and may eventually lead to customer dissatisfaction.
E) Salient Features of Good Testing:
Good software testing involves testing of the following:

Capturing User Requirements

Capturing User Needs

Design Objectives

User Interfaces

Internal Structures

Execution of Code

1) Capturing User Requirements:
The requirements defined by the users or customer, as well as some implied requirements (which are intended by the users but not put into words), represent the foundation on which software is built. Intended requirements are to be analysed and documented by testers so that they can write the test scenarios and test cases for these requirements.
2) Capturing User Needs:
User needs may be different from user requirements specified in software requirement
specifications. User needs may include present and future requirements and other
requirements which may include process requirements (including definition of
deliverables) and implied requirements.
3) Design Objectives:
Design objectives state why a particular approach has been selected for building the software. The selection process indicates the reasons and the criteria framework used for development and testing. How an application's functional requirements, user interface requirements, performance requirements and other requirements are interpreted in design, and how they can be achieved in further development, must be defined in an approach document.

4) User Interfaces:
User interfaces are the ways in which the user interacts with the system. This includes
screens and other ways of communication with the system as well as displays and reports
generated by the system. User interfaces should be simple, so that the user can
understand what he is supposed to do and what the system is doing.

5) Internal Structures:
Internal structures are mainly guided by software designs and guidelines or standards
used for designing and development. Internal structures may be defined by development
organisation or sometimes defined by customer. It may talk about reusability, nesting,
etc. to analyse the software product as per standards or guidelines.

6) Execution of Code:
Testing is the execution of a work product to ensure that it works as intended by the customer or user and is protected from probable misuse or risk of failure. Execution can only prove that the application, module, or program works correctly as defined in the requirements and interpreted in the design.
1.3 Debugging
Software testing is an action that can be systematically planned and specified. Test case
design can be conducted, a strategy can be defined and results can be evaluated against
prescribed expectations. Debugging occurs as a consequence of successful testing. That is,
when a test case uncovers an error, debugging is an action that results in the removal of the
error. Although debugging can and should be an orderly process, it is still very much an art.
That is, the external manifestation of the error and the internal cause of the error may have
no obvious relationship to one another. The poorly understood mental process that connects
a symptom to a cause is debugging.

A) Meaning:
Debugging is the process of analysing the causes behind bugs and removing them. Debugging starts with a list of reported bugs whose initial conditions are unknown, so it is not possible to plan and estimate the schedule and effort for debugging. Debugging is problem solving involving deduction, and detailed design knowledge is of great help in doing it well. Debugging is done by the development team and hence is an insider's work. Also, automation of debugging is largely not in place.
B) Process of Debugging:
A debugging process can be divided into four main steps, which are as follows:

Localising a Bug

Classifying a Bug

Understanding a Bug

Repairing a Bug
1) Localising a Bug:
A typical attitude of inexperienced programmers towards bugs is to consider their
localisation an easy task: they notice their code does not do what they expected, and they
are led astray by their confidence in knowing what their code should do.
2) Classifying a Bug:
Despite appearances, bugs often have a common background. The list below is arranged in order of increasing difficulty (which fortunately means in order of decreasing frequency).
a) Syntactical Errors:
Syntactical errors are reported by the compiler. In any case, it is vital to remember that quite often the problem might not be at the exact position indicated by the compiler error message.
b) Build Errors:
Build Errors derive from linking object files which were not rebuilt after a change in
some source files.
c) Basic Semantic Errors:
Basic Semantic Errors comprise using uninitialised variables, dead code (code that will
never be executed) and problems with variable types.
d) Semantic Errors:
Semantic Errors include using wrong variables or operators. No tool can catch these
problems, because they are syntactically correct statements, although logically wrong.
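A minimal sketch of such a semantic error, here in Python with a hypothetical `average` function: the statement is syntactically valid and runs without complaint, but a missing pair of parentheses silently changes its meaning, so only a test (or careful reading) can catch it.

```python
def average(a, b):
    # Intended: (a + b) / 2.  The parentheses were forgotten, so
    # operator precedence silently changes the meaning -- a semantic
    # error no syntax checker can flag.
    return a + b / 2        # BUG: evaluates as a + (b / 2)

# A simple test exposes the logically wrong statement:
assert average(2, 2) == 3.0   # expected 2.0 -- the bug is visible

def average_fixed(a, b):
    return (a + b) / 2        # corrected version

assert average_fixed(2, 2) == 2.0
```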

3) Understanding a Bug:
A bug should be fully understood before attempting to fix it. Trying to fix a bug before understanding it completely could end up provoking even more damage to the code, since the problem could change form and manifest itself somewhere else, maybe randomly. Again, a typical example is memory corruption.
The following check-list is useful to ensure a correct approach to the investigation:
1) Do not confuse observing symptoms with finding the real source of the problem;
2) Check if similar mistakes (especially wrong assumptions) were made elsewhere in the code;
3) Verify that it was just a programming error and not a more fundamental problem (e.g., an incorrect algorithm).

4) Repairing a Bug:
The final step in the debugging process is bug fixing. Repairing a bug is more than modifying code: any fix must be documented in the code and tested properly. More importantly, learning from mistakes is an effective attitude; it is good practice to keep a small file with detailed explanations of the way each bug was discovered and corrected. A check-list can be a useful aid.
C) General Debugging Techniques:
The following are the General Debugging Techniques:

Exploiting Compiler Features

Reading the Right Documentation

The Abused Cout Debugging Technique

Logging

Defensive Programming and Assertions

ACI Debugging Technique

Reading the Code Through

The Debugger
1) Exploiting Compiler Features:
A good compiler can do some static analysis on the code. Static code analysis is the
analysis of software that is performed without actually executing programs built from
that software.
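Python offers a small taste of this in its standard library: the built-in `compile()` parses source code without executing it, so syntax problems surface before any run, which is the essence of static analysis. A minimal sketch:

```python
# Static analysis in miniature: compile() parses source without
# executing it, so problems surface before any run.
good = "def f(x):\n    return x * 2\n"
compile(good, "<example>", "exec")        # parses fine; nothing executes

broken = "def f(x)\n    return x * 2\n"   # missing colon after f(x)
try:
    compile(broken, "<example>", "exec")
except SyntaxError as e:
    print("caught at parse time:", e.msg)
```

Dedicated static analysers go much further, flagging uninitialised variables, dead code, and suspicious constructs, but the principle is the same: the program is never run.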
2) Reading the Right Documentation:
This seems quite an obvious tip, but too often inexperienced programmers read the wrong papers when looking for hints about the task they have to accomplish. The relevant documentation for the task, the tools, the libraries and the algorithms employed must be at one's fingertips, so that the relevant information can be found easily.
3) The Abused Cout Debugging Technique:
The cout debugging technique takes its name from the C++ statement for printing on the standard output stream (usually the terminal screen). It consists of adding print statements to the code to track the control flow and data values during code execution. In some (very few) circumstances cout debugging can be appropriate, although it can always be replaced by other techniques.
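The same technique sketched in Python, with `print` standing in for `cout` (the function and data here are hypothetical): temporary print statements trace the control flow and data values while the code runs, and are removed once the bug is found.

```python
def find_max(values):
    best = values[0]
    for i, v in enumerate(values):
        # Temporary "cout-style" trace: remove once the bug is found.
        print(f"step {i}: v={v}, best={best}")
        if v > best:
            best = v
    return best

result = find_max([3, 7, 2, 9, 4])
print("result:", result)
```

The obvious drawback, and the reason the technique is "abused", is that the traces must be added, recompiled or rerun, and cleaned up by hand for every hypothesis tested.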
4) Logging:
Logging takes the concept of printing messages, expressed in the previous paragraph, one step further. Logging is a common aid to debugging: everyone who has tried at least once to solve some system-related problem knows how useful a log file can be.
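A minimal sketch using Python's standard `logging` module (the subsystem name and payment rule are assumptions for the example): unlike ad-hoc prints, each message carries a timestamp and a severity level, and the output can be redirected to a file for later diagnosis.

```python
import logging

# Configure a log format with timestamp and severity once, at startup.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("payments")   # hypothetical subsystem name

def process_payment(amount):
    log.debug("processing payment of %s", amount)
    if amount <= 0:
        log.error("rejected non-positive amount: %s", amount)
        return False
    log.info("payment accepted")
    return True

process_payment(50)    # leaves DEBUG/INFO lines in the log
process_payment(-5)    # leaves an ERROR line for later diagnosis
```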
5) Defensive Programming and Assertions:
Assertions are expressions which should evaluate to be true at a specific point in the
code. If an assertion fails, a problem was found. The problem could possibly be in the
assertion, but more likely it will be in the code.
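A minimal sketch of a defensive assertion in Python, guarding the precondition of a hypothetical `binary_search`: if the assertion fires, the fault is almost certainly in the caller's code, not in the search itself.

```python
def binary_search(items, target):
    """Return an index of target in sorted items, or -1 if absent."""
    # Defensive assertion: the precondition the algorithm relies on.
    # If it fails, a problem was found -- most likely in the caller.
    assert items == sorted(items), "binary_search requires sorted input"

    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([1, 3, 5, 7], 5) == 2
assert binary_search([1, 3, 5, 7], 4) == -1
```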
6) ACI Debugging Technique:
Despite its funny name, this technique is serious, and the reader is invited not to underestimate its power. ACI is the acronym of Automobile Club d'Italia, an Italian organisation that helps with car troubles. The idea is to explain the problem, step by step, to a bystander; very often the act of describing it aloud reveals its cause.
7) Reading the Code Through:
This technique is quite similar to the ACI technique, except that it does not rely on a bystander. The recipe is quite simple as well: when programmers find themselves in complete darkness and have not the slightest idea of what is going wrong, they should print out the code, leave the terminal and go to the cafeteria to read it through.
8) The Debugger:
When every other checking tool fails to detect the problem, it is the debugger's turn. A debugger allows working through the code line by line to find out what is going wrong, where and why. It allows working interactively: controlling the execution of the program, stopping it at various points, inspecting variables, and changing the code flow whilst running.
D) Difference Between Debugging and Testing:

1) Meaning:
Debugging: Debugging is a technique to find an error or errors and remove them from a program or set of programs; otherwise, they will lead to failure of these programs. Debugging can be a continuous activity to enhance the effectiveness and efficiency of coding, and its scope is limited to the extent of a program or set of programs.
Testing: Software testing is a set of activities conducted with the intent of finding errors in software, or a process of verifying that a set of programs collectively functions correctly.

2) Purpose:
Debugging: To find an error or a bug that occurs during the execution of the program and then to correct the bug.
Testing: To show that the program or set of programs has a bug or an error. The correction phase is left to the developers or designers.

3) Process:
Debugging: Debugging starts as soon as the results are obtained, or cannot be obtained, in the execution phase of the program; that is, the debugging process starts from the unexpected results.
Testing: Testing is a planned activity. It has proper steps and can be a scheduled activity.

4) Done By:
Debugging: Initially, a program can be debugged only by the programmer who has written the code or who is aware of the code; that is, debugging necessarily has to be done by an insider (at least to start with).
Testing: Functional testing of the software (or set of programs) can be done by an outsider.

5) Documentation:
Debugging: Debugging can continue irrespective of documentation (at least initially) and may deliver the desired results.
Testing: A precondition of technical testing is to have proper documentation.

6) Automation:
Debugging: Debugging cannot be automated.
Testing: Testing can be automated.
