Software testing

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Although crucial to software quality and widely deployed by programmers and testers, software testing still remains an art, due to our limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can also be used as a generic metric. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time, and quality.

Software testing is the process of executing a program or system with the intent of finding errors. Alternatively, it is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible.

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear: generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped, the design defects -- or bugs -- will remain buried and latent until activated.

Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and other digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All the possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program that adds only two 32-bit integer inputs (yielding 2^64 distinct test cases) would take hundreds of millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects, and human interactions are all possible input parameters under consideration.
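
The scale of that number is easy to confirm with a back-of-the-envelope calculation, sketched in Python below; the test rate is an assumed figure, not a measurement:

    # Rough cost of exhaustively testing a two-input 32-bit adder.
    total_cases = 2 ** 64                    # every pair of 32-bit inputs
    tests_per_second = 1_000                 # assumed execution rate
    seconds_per_year = 365 * 24 * 60 * 60
    years = total_cases / (tests_per_second * seconds_per_year)
    print(f"{years:.2e} years")              # about 5.8e+08 years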

A further complication has to do with the dynamic nature of programs. If a failure occurs during
preliminary testing and the code is changed, the software may now work for a test case that it
didn't work for previously. But its behavior on pre-error test cases that it passed before can no
longer be guaranteed. To account for this possibility, testing should be restarted. The expense of
doing this is often prohibitive.

An interesting analogy parallels the difficulty in software testing with that of pesticides, known as the Pesticide Paradox: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. This alone will not guarantee better software, because the Complexity Barrier principle states that software complexity (and therefore that of its bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previously) easy bugs you allow another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs.

Regardless of the limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50% of the development time is spent on testing. Testing is usually performed for the following purposes:

• To improve quality.

As computers and software are used in critical applications, the outcome of a bug can be severe.
Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space
shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs
can cause disasters. The so-called year 2000 (Y2K) bug has given birth to a cottage industry of
consultants and programming tools dedicated to making sure the modern world doesn't come to a
screeching halt on the first day of the next century. In a computerized embedded world, the
quality and reliability of software is a matter of life and death.

Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by programmers to find design defects. The imperfection of human nature makes it almost impossible to get a moderately complex program correct the first time. Finding these problems and getting them fixed is the purpose of debugging during the programming phase.

• For Verification & Validation (V&V)

As the heading indicates, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not. We can also compare the quality of different products built to the same specification, based on results from the same tests.

We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail; frequently cited quality considerations include correctness, reliability, integrity, usability, and maintainability.

Good testing provides measures for all relevant factors. The importance of any particular factor
varies from application to application. Any system where human lives are at stake must place
extreme emphasis on reliability and integrity. In the typical business system usability and
maintainability are the key factors, while for a one-time scientific program neither may be
significant. Our testing, to be fully effective, must be geared to measuring each relevant factor
and thus forcing quality to become tangible and visible.

Tests with the purpose of validating that the product works are named clean tests, or positive tests. The drawback is that they can only validate that the software works for the specified test cases; a finite number of tests cannot validate that the software works for all situations. On the other hand, a single failed test is sufficient to show that the software does not work. Dirty tests, or negative tests, refer to tests aiming at breaking the software, or showing that it does not work. A piece of software must have sufficient exception handling capabilities to survive a significant level of dirty tests.
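
As a small illustration, the Python sketch below pairs a clean test with a dirty test; the divide() function and its error behaviour are assumptions made for the example, not part of the original text:

    import unittest

    def divide(a, b):
        # Hypothetical unit under test with explicit error handling.
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    class DivideTests(unittest.TestCase):
        def test_clean_valid_inputs(self):
            # Clean (positive) test: valid inputs, expected output.
            self.assertEqual(divide(10, 4), 2.5)

        def test_dirty_division_by_zero(self):
            # Dirty (negative) test: try to break the software and check
            # that it fails in a controlled, specified way.
            with self.assertRaises(ValueError):
                divide(10, 0)

    if __name__ == "__main__":
        unittest.main()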

A testable design is a design that can be easily validated, falsified, and maintained. Because testing is a rigorous effort and requires significant time and cost, design for testability is also an important design rule for software development.

• For reliability estimation

Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of the various inputs to the program), testing can serve as a statistical sampling method to gain failure data for reliability estimation.
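
A minimal sketch of that idea follows, assuming a hypothetical operational profile and a placeholder system_under_test() that reports success or failure; both names are illustrative assumptions:

    import random

    # Assumed operational profile: relative frequency of each input class.
    operational_profile = {"query": 0.70, "update": 0.25, "admin": 0.05}

    def system_under_test(operation):
        # Placeholder: run the system on one sampled input and report
        # whether the run succeeded.
        return True

    def estimate_failure_probability(runs=10_000):
        operations = list(operational_profile)
        weights = list(operational_profile.values())
        failures = 0
        for _ in range(runs):
            op = random.choices(operations, weights)[0]  # sample per profile
            if not system_under_test(op):
                failures += 1
        return failures / runs  # observed failure probability per run

    print(estimate_failure_probability())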

Software testing is not mature. It still remains an art, because we still cannot make it a science. We are still using the same testing techniques invented 20-30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially where human lives are at stake. Solving the software testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct. No verification system can verify every correct program. We can never be certain that a verification system is correct either.
Black-box testing
Black box testing is testing a product without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester only knows the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. The opposite is white box (also known as glass box) testing, where test data are derived from direct examination of the code to be tested. For white box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product.
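
As a simple sketch, the tests below treat Python's built-in sorted() as a black box: they are derived only from its specified input-output behaviour, never from its implementation.

    import unittest

    class BlackBoxSortTests(unittest.TestCase):
        # Test cases chosen from the specification "return the items in
        # non-decreasing order", with no knowledge of the sorting algorithm.
        def test_typical_input(self):
            self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

        def test_boundary_empty_input(self):
            self.assertEqual(sorted([]), [])

        def test_duplicates_are_kept(self):
            self.assertEqual(sorted([2, 1, 2]), [1, 2, 2])

    if __name__ == "__main__":
        unittest.main()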

System Testing:

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

Acceptance Testing:

Acceptance testing is black-box testing performed on a system prior to its delivery. In some
engineering sub-disciplines, it is known as functional testing, black-box testing, release
acceptance, QA testing, application testing, confidence testing, final testing, validation testing,
usability testing, or factory acceptance testing.

Beta Testing

Beta testing is the release of a partial or full version of a software package, free of charge, to potential customers, with the understanding that they will report errors revealed during usage. The users are typically chosen from those experienced with previous versions of the product.

White-box testing

"Bugs lurk in corner and congregate in the boundaries." White box testing is far more likely to
uncover them.
White box testing is a testing technique where knowledge of the internal structure and logic of
the system is necessary to develop hypothetical test cases. It's sometimes called structural testing
or glass box testing. It is also a test case design method that uses the control structure of the
procedural design to derive test cases.

If a software development team creates a block of code that allows the system to process information in a certain way, a test team verifies it structurally by reading the code and, given the system's structure, judging whether the code can work as intended. If they feel it can, they plug the code into the system and run an application to validate the system structurally.

White box tests are derived from the internal design specification or from the code itself. The knowledge needed for the white box test design approach often becomes available during the detailed design phase of the development cycle.
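
For illustration, the sketch below derives tests from the control structure of a hypothetical shipping_fee() function so that every branch is exercised; the function and its rules are invented for the example:

    import unittest

    def shipping_fee(weight_kg, express):
        # Hypothetical unit under test: its decisions define the branches.
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        fee = 5.0 if weight_kg < 2 else 9.0
        if express:
            fee *= 2
        return fee

    class WhiteBoxShippingTests(unittest.TestCase):
        # Each test case was chosen by reading the code so that the
        # invalid-weight, light/heavy, and express/standard branches
        # are all covered.
        def test_invalid_weight_branch(self):
            with self.assertRaises(ValueError):
                shipping_fee(0, express=False)

        def test_light_standard_branch(self):
            self.assertEqual(shipping_fee(1, express=False), 5.0)

        def test_heavy_express_branch(self):
            self.assertEqual(shipping_fee(3, express=True), 18.0)

    if __name__ == "__main__":
        unittest.main()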

Unit Testing:

Unit testing is a test that validates that individual units of source code are working properly. A
unit is the smallest testable part of an application. In procedural programming a unit may be an
individual program, function, procedure, etc., while in object-oriented programming, the smallest
unit is a method, which may belong to a base/super class, abstract class or derived/child class.
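
A minimal sketch, using a hypothetical Account class whose deposit() method is the unit under test:

    import unittest

    class Account:
        # Hypothetical class; deposit() is the smallest testable unit here.
        def __init__(self, balance=0):
            self.balance = balance

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            self.balance += amount
            return self.balance

    class DepositUnitTests(unittest.TestCase):
        def test_deposit_increases_balance(self):
            # The method is exercised in isolation from the rest of the system.
            self.assertEqual(Account(10).deposit(5), 15)

        def test_deposit_rejects_non_positive_amount(self):
            with self.assertRaises(ValueError):
                Account().deposit(0)

    if __name__ == "__main__":
        unittest.main()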

Integration Testing:

Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase
of software testing in which individual software modules are combined and tested as a group. It
follows unit testing and precedes system testing. Integration testing takes as its input modules
that have been unit tested, groups them in larger aggregates, applies tests defined in an
integration test plan to those aggregates, and delivers as its output the integrated system ready for
system testing.
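
As a sketch, the test below exercises two hypothetical modules together, a data-access function and a formatting function, which are assumed to have already passed unit testing:

    import unittest

    def fetch_user(user_id):
        # Stand-in for a data-access module (assumed unit tested).
        return {"id": user_id, "name": "Ada"}

    def format_greeting(user):
        # Stand-in for a presentation module (assumed unit tested).
        return f"Hello, {user['name']}!"

    def greet(user_id):
        # The aggregate under integration test: both modules cooperating.
        return format_greeting(fetch_user(user_id))

    class GreetingIntegrationTest(unittest.TestCase):
        def test_modules_cooperate(self):
            # Exercises the interface between fetch_user and format_greeting,
            # as an integration test plan would prescribe.
            self.assertEqual(greet(42), "Hello, Ada!")

    if __name__ == "__main__":
        unittest.main()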

Regression Testing

Regression testing is the selective retesting of a system to verify that modifications have not caused unintended effects and that the system still complies with its originally specified requirements.
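
A small sketch of the idea: after a hypothetical change to parse_price() that adds support for a leading currency symbol, the pre-existing test is re-run unchanged alongside a new test for the modification.

    import unittest

    def parse_price(text):
        # Unit under test; the .lstrip("$") is a later (hypothetical)
        # modification that accepts a leading currency symbol.
        return float(text.lstrip("$"))

    class PriceRegressionSuite(unittest.TestCase):
        def test_plain_number_still_parses(self):
            # Pre-existing test, re-run to catch unintended side effects.
            self.assertEqual(parse_price("19.99"), 19.99)

        def test_leading_currency_symbol(self):
            # New test covering the modification itself.
            self.assertEqual(parse_price("$19.99"), 19.99)

    if __name__ == "__main__":
        unittest.main()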
