Introduction to Testing
What is testing?
Objectives of Testing
Modes of Testing
Testing methods
* White box testing: Use the control structure of the procedural design to derive
test cases.
* Black box testing: Derive sets of input conditions that will fully exercise the
functional requirements for a program.
* Integration: Assembling parts of a system.
Verification and Validation
* Verification: Are we doing the job right? The set of activities that ensure that
software correctly implements a specific function (i.e. the process of
determining whether or not the products of a given phase of the software
development cycle fulfill the requirements established during the previous phase).
Examples: technical reviews, quality and configuration audits, performance
monitoring, simulation, feasibility studies, documentation review, database
review, algorithm analysis, etc.
* Validation: Are we doing the right job? The set of activities that ensure that
the software that has been built is traceable to customer requirements (an
attempt to find errors by executing the program in a real environment).
Examples: unit testing, system testing, installation testing, etc.
The commonly used testing types and terms are described below:
Ad Hoc Testing: Similar to exploratory testing, but often taken to mean that
the testers have significant understanding of the software before testing it.
Automated Testing:
· Testing employing software tools which execute tests without manual
intervention. Can be applied in GUI, performance, API, etc. testing.
· The use of software to control the execution of tests, the comparison of
actual outcomes to predicted outcomes, the setting up of test preconditions,
and other test control and test reporting functions.
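As an illustration only, the sketch below uses Python's standard unittest module to show the elements named in this definition: a test precondition set up in setUp, execution without manual intervention, and comparison of actual outcomes to predicted outcomes. The add_tax function is a hypothetical stand-in for real application code.

import unittest


def add_tax(amount, rate=0.10):
    """Hypothetical function under test: adds a tax rate to an amount."""
    return round(amount * (1 + rate), 2)


class AddTaxTests(unittest.TestCase):
    def setUp(self):
        # Test precondition: a known input value prepared before each test.
        self.amount = 100.00

    def test_default_rate(self):
        # Comparison of the actual outcome to the predicted outcome.
        self.assertEqual(add_tax(self.amount), 110.00)

    def test_custom_rate(self):
        self.assertEqual(add_tax(self.amount, rate=0.20), 120.00)


if __name__ == "__main__":
    unittest.main()  # executes the tests without manual intervention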
Basis Path Testing: A white box test case design technique that uses the
algorithmic flow of the program to design tests.
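For illustration, the hypothetical grade() function in the sketch below has a cyclomatic complexity of 3 (two decisions plus one), so a basis set of three independent paths is exercised, one test case per path.

import unittest


def grade(score):
    """Hypothetical function under test."""
    if score > 100:        # decision 1
        raise ValueError("score out of range")
    if score >= 50:        # decision 2
        return "pass"
    return "fail"


class BasisPathTests(unittest.TestCase):
    def test_path_invalid_score(self):
        # Path 1: decision 1 taken, exception raised.
        with self.assertRaises(ValueError):
            grade(150)

    def test_path_pass(self):
        # Path 2: decision 1 not taken, decision 2 taken.
        self.assertEqual(grade(75), "pass")

    def test_path_fail(self):
        # Path 3: neither decision taken.
        self.assertEqual(grade(30), "fail")


if __name__ == "__main__":
    unittest.main()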
Baseline: The point at which some deliverable produced during the software
engineering process is put under formal change control.
Beta Testing: Testing when development and testing are essentially completed
and final bugs and problems need to be found before final release. Typically
done by end-users or others, not by programmers or testers.
Black Box Testing: Testing based on an analysis of the specification of a piece
of software without reference to its internal workings. The goal is to test how
well the component conforms to the published requirements for the
component.
Branch Testing: Testing in which all branches in the program source code are
tested at least once.
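For illustration, the sketch below exercises both outcomes of the single decision in a hypothetical apply_discount() function, so every branch is executed at least once.

import unittest


def apply_discount(total):
    """Hypothetical function: orders of 100 or more get 10 off."""
    if total >= 100:           # the decision whose branches we exercise
        return total - 10      # true branch
    return total               # false branch


class BranchTests(unittest.TestCase):
    def test_true_branch(self):
        self.assertEqual(apply_discount(200), 190)

    def test_false_branch(self):
        self.assertEqual(apply_discount(50), 50)


if __name__ == "__main__":
    unittest.main()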
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model
for judging the maturity of the software processes of an organization and for
identifying the key practices that are required to increase the maturity of
these processes.
Cause Effect Graph: A graphical representation of inputs and their associated
output effects, which can be used to design test cases.
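For illustration, the sketch below derives test cases from a small cause-effect (decision) table for a hypothetical login() function: each combination of causes is checked against its expected effect. The function and the outcome strings are assumptions made for the example.

import unittest


def login(account_exists, password_correct):
    """Hypothetical function combining two causes into one effect."""
    if not account_exists:
        return "unknown account"
    if not password_correct:
        return "wrong password"
    return "logged in"


class CauseEffectTests(unittest.TestCase):
    def test_cause_effect_table(self):
        # Each row: (cause 1, cause 2, expected effect).
        table = [
            (False, False, "unknown account"),
            (False, True,  "unknown account"),
            (True,  False, "wrong password"),
            (True,  True,  "logged in"),
        ]
        for account_exists, password_correct, expected in table:
            with self.subTest(account=account_exists, password=password_correct):
                self.assertEqual(login(account_exists, password_correct), expected)


if __name__ == "__main__":
    unittest.main()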
Endurance Testing: Checks for memory leaks or other problems that may occur
with prolonged execution.
Gray Box Testing: A combination of Black Box and White Box testing
methodologies: testing a piece of software against its specification but using
some knowledge of its internal workings.
Recovery Testing: Confirms that the application under test recovers from
expected or unexpected events without loss of data or functionality. Events
can include shortage of disk space, unexpected loss of communication, or
power-out conditions.
Load Testing: Load Tests are end to end performance tests under anticipated
production load. The primary objective of this test is to determine the
response times for various time-critical transactions and business processes and
to verify that they are within documented expectations (or Service Level Agreements -
SLAs). The test also measures the capability of the application to function
correctly under load, by measuring transaction pass/fail/error rates.
This is a major test, requiring substantial input from the business, so that
anticipated activity can be accurately simulated in a test situation. If the
project has a pilot in production then logs from the pilot can be used to
generate ‘usage profiles’ that can be used as part of the testing process, and
can even be used to ‘drive’ large portions of the Load Test.
Load testing must be executed on “today’s” production size database, and
optionally with a “projected” database. If some database tables will be much
larger in some months' time, then Load testing should also be conducted against
a projected database. It is important that such tests are repeatable, as they
may need to be executed several times in the first year of wide scale
deployment, to ensure that new releases and changes in database size do not
push response times beyond prescribed SLAs.
See Performance Testing also.
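As a rough illustration of the idea only, the plain-Python sketch below drives a hypothetical search_customer() transaction from many concurrent worker threads and checks pass rates and response times against an assumed SLA. A real load test would use a dedicated tool and a production-size database; every name and number here is an assumption for the example.

import time
from concurrent.futures import ThreadPoolExecutor

SLA_SECONDS = 2.0          # documented response-time expectation (assumed)
CONCURRENT_USERS = 50      # anticipated production load (assumed)


def search_customer(customer_id):
    """Hypothetical time-critical transaction under test."""
    time.sleep(0.05)       # placeholder for real work
    return {"id": customer_id}


def timed_transaction(customer_id):
    """Run one transaction and record whether it passed and how long it took."""
    start = time.perf_counter()
    try:
        search_customer(customer_id)
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(timed_transaction, range(500)))

    passed = [t for ok, t in results if ok and t <= SLA_SECONDS]
    print(f"pass rate: {len(passed) / len(results):.1%}")
    print(f"worst response time: {max(t for _, t in results):.3f}s")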
Loop Testing: A white box testing technique that exercises program loops.
Monkey Testing: Testing a system or an application on the fly, i.e. performing
just a few tests here and there to ensure that the system or application does not crash.
Mutation testing: A method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting
with the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources.
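For illustration, the sketch below introduces a deliberate single-operator change (a 'mutant') into a hypothetical is_adult() function and re-runs the original test data; a useful test set detects ('kills') the mutant. Real mutation-testing tools generate and run many such mutants automatically.

def is_adult(age):
    """Original implementation."""
    return age >= 18


def is_adult_mutant(age):
    """Mutant: the >= operator was deliberately changed to >."""
    return age > 18


def run_tests(func):
    """Return True if all test cases pass against the given implementation."""
    test_cases = [(17, False), (18, True), (30, True)]
    return all(func(age) == expected for age, expected in test_cases)


if __name__ == "__main__":
    print("original passes:", run_tests(is_adult))           # True
    print("mutant killed:", not run_tests(is_adult_mutant))  # True: the test data detects the bug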
Negative Testing: Testing aimed at showing software does not work. Also
known as "test to fail".
Performance Testing: Testing conducted to evaluate whether the system meets its
response-time and throughput requirements. For example, a customer search may
take 15 seconds in a full-sized database if indexes have not been applied
correctly, or if an SQL 'hint' was incorporated in a statement that had been
optimized against a much smaller database. Performance testing would highlight
such a slow customer search transaction, which could be remediated prior to a
full end-to-end load test.
Positive Testing: Testing aimed at showing software works. Also known as "test
to pass".
Quality Control: The operational techniques and the activities used to fulfill
and verify requirements of quality.
Security Testing: Testing which confirms that the program can restrict access
to authorized personnel and that the authorized personnel can access the
functions available to their security level.
Smoke Testing: Typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing
effort. For example, if the new software is crashing systems every 5 minutes,
bogging down systems to a crawl, or corrupting databases, the software may
not be in a 'sane' enough condition to warrant further testing in its current
state.
Storage Testing: Testing that verifies the program under test stores data files
in the correct directories and that it reserves sufficient space to prevent
unexpected termination resulting from lack of space. This is external storage as
opposed to internal storage.
Stress Testing: Stress Tests determine the load under which a system fails, and
how it fails. This is in contrast to Load Testing, which attempts to simulate
anticipated load. It is important to know in advance if a ‘stress’ situation will
result in a catastrophic system failure, or if everything just “goes really slow”.
There are various varieties of Stress Tests, including spike, stepped and gradual
ramp-up tests. Catastrophic failures require restarting various infrastructure
components and contribute to downtime, a stressful environment for support staff and
managers, as well as possible financial losses. This test is one of the most
fundamental load and performance tests.
System Testing: Testing that attempts to discover defects that are properties
of the entire system rather than of its individual components. It is black-box
type testing, based on overall requirements specifications, and covers all
combined parts of a system.
Testing:
· The process of exercising software to verify that it satisfies specified
requirements and to detect errors.
· The process of analyzing a software item to detect the differences between
existing and required conditions (that is, bugs), and to evaluate the features of
the software item.
· The process of operating a system or component under specified conditions,
observing or recording the results, and making an evaluation of some aspect of
the system or component.
Test Automation: See Automated Testing.
Test Case:
· Test Case is a commonly used term for a specific test. This is usually the
smallest unit of testing. A Test Case will consist of information such as the
requirements being tested, test steps, verification steps, prerequisites, outputs, test
environment, etc.
· A set of inputs, execution preconditions, and expected outcomes developed
for a particular objective, such as to exercise a particular program path or to
verify compliance with a specific requirement.
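To make the second definition concrete, the sketch below captures one test case as structured data: inputs, execution preconditions, steps, and an expected outcome for a single objective. The field names and the discount scenario are illustrative assumptions, not a prescribed format.

from dataclasses import dataclass, field


@dataclass
class TestCase:
    case_id: str
    objective: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_result: str = ""


tc_001 = TestCase(
    case_id="TC-001",
    objective="Verify 10 currency-unit discount is applied to orders of 100 or more",
    preconditions=["User is logged in", "Catalogue contains an item priced 100"],
    steps=["Add item to cart", "Proceed to checkout", "Observe order total"],
    inputs={"item_price": 100, "quantity": 1},
    expected_result="Order total shown as 90",
)

if __name__ == "__main__":
    print(tc_001.case_id, "-", tc_001.objective)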
Test Environment: The hardware and software environment in which tests will
be run, and any other software with which the software under test interacts
when under test, including stubs and test drivers.
Test Harness: A program or test tool used to execute a test. Also known as a
Test Driver.
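A minimal sketch of such a harness (test driver) in plain Python is shown below: it executes a list of test functions against a hypothetical add() function, captures each outcome, and reports the results. All names are illustrative.

import traceback


def add(a, b):
    """Hypothetical unit under test."""
    return a + b


def test_add_positive():
    assert add(2, 3) == 5


def test_add_negative():
    assert add(-2, -3) == -5


def run_harness(tests):
    """Drive each test, capture the outcome, and return a result summary."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError:
            results[test.__name__] = "FAIL"
        except Exception:
            results[test.__name__] = "ERROR\n" + traceback.format_exc()
    return results


if __name__ == "__main__":
    for name, outcome in run_harness([test_add_positive, test_add_negative]).items():
        print(f"{name}: {outcome}")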
Test Script: Commonly used to refer to the instructions for a particular test
that will be carried out by an automated test tool.
Test Suite: A collection of tests used to validate the behavior of a product. The
scope of a Test Suite varies from organization to organization; there may be
several Test Suites for a particular product, for example. In most cases, however,
a Test Suite is a high-level concept, grouping together hundreds or thousands of
tests related by what they are intended to test.
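For illustration, the sketch below groups two tests into a suite using Python's standard unittest module and runs them together; the CartTests class is a hypothetical example.

import unittest


class CartTests(unittest.TestCase):
    def test_empty_cart_total_is_zero(self):
        self.assertEqual(sum([]), 0)

    def test_single_item_total(self):
        self.assertEqual(sum([19.99]), 19.99)


def build_suite():
    """Collect related test cases into one suite."""
    suite = unittest.TestSuite()
    suite.addTest(CartTests("test_empty_cart_total_is_zero"))
    suite.addTest(CartTests("test_single_item_total"))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())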
Usability Testing: Testing the ease with which users can learn and use a
product.
Use Case: The specification of tests that are conducted from the end-user
perspective. Use cases tend to focus on operating software as an end-user
would conduct their day-to-day activities.
Unit Testing: The most 'micro' scale of testing; to test particular functions or
code modules. Typically done by the programmer and not by testers, as it
requires detailed knowledge of the internal program design and code. Not
always easily done unless the application has a well-designed architecture with
tight code; may require developing test driver modules or test harnesses.
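A minimal unit-test sketch is shown below. It isolates a single hypothetical function, monthly_charge(), from its collaborator by using a test double from Python's unittest.mock, since, as noted above, unit testing may require test drivers or stand-in modules. The function, the gateway object, and the amount are assumptions made for the example.

import unittest
from unittest.mock import Mock


def monthly_charge(customer_id, gateway):
    """Hypothetical unit under test: charges a flat monthly fee."""
    return gateway.charge(customer_id, amount=9.99)


class MonthlyChargeTests(unittest.TestCase):
    def test_charges_flat_fee(self):
        gateway = Mock()                 # stands in for the real payment service
        gateway.charge.return_value = "ok"

        result = monthly_charge("cust-42", gateway)

        self.assertEqual(result, "ok")
        gateway.charge.assert_called_once_with("cust-42", amount=9.99)


if __name__ == "__main__":
    unittest.main()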
The testing process includes Test Planning, Test Design, and Test Execution.
Test Planning
* Define what to test
* Identify Functions to be tested
* Test conditions
* Manual or Automated
* Prioritize to identify Most Important Tests
* Record Document References
Test Design
* Define how to test
* Identify Test Specifications
* Build detailed test scripts
* Quick Script generation
* Documents
Test Execution
* Define when to test
* Build test execution schedule
* Record test results
Testing Roles:
As in any organized endeavor, there are roles that must be fulfilled within a
testing organization. The requirement for any given role depends on the size,
complexity, goals, and maturity of the testing organization. These are roles,
so it is quite possible that one person could fulfill many roles within the
testing organization.
The Role of Test Lead / Manager is to effectively lead the testing team. To
fulfill this role the Lead must understand the discipline of testing and how to
effectively implement a testing process while fulfilling the traditional
leadership roles of a manager. What does this mean? The manager must
manage and implement or maintain an effective testing process.
Test Architect
The Role of the Test Designer / Tester is to: design and document test cases,
execute tests, record test results, document defects, and perform test
coverage analysis. To fulfill this role the designer must be able to apply the
most appropriate testing techniques to test the application as efficiently as
possible while meeting the test organization's testing mandate.
The Role of the Test Automation Engineer is to create automated test case
scripts that perform the tests as designed by the Test Designer. To fulfill this
role the Test Automation Engineer must develop and maintain an effective test
automation infrastructure using the tools and techniques available to the
testing organization. The Test Automation Engineer must work in concert with
the Test Designer to ensure the appropriate automation solution is being
deployed.
The Role of the Test Methodologist is to provide the test organization with
resources on testing methodologies. To fulfill this role the Methodologist works
with Quality Assurance to facilitate continuous quality improvement within the
testing methodology and the testing organization as a whole. To this end the
methodologist: evaluates the test strategy, provides testing frameworks and
templates, and ensures effective implementation of the appropriate testing
techniques.
7. Test Development Life Cycle
The software test development life cycle involves the steps described below.
A test plan states what the items to be tested are, at what level they will be
tested, what sequence they are to be tested in, how the test strategy will be
applied to the testing of each item, and describes the test environment.
The objective of each test plan is to verify, by testing the software, that the
software produced fulfils the functional or design statements of the
appropriate software specification. In the case of acceptance testing
and system testing, this generally means the Functional Specification.
The following are some of the items that might be included in a test plan,
depending on the particular project.
Defect Severity Levels:
The severity assigned to a defect indicates the impact of the failure it causes.
A common four-level scale is:
1. Critical - The defect results in the failure of the complete software system,
of a subsystem, or of a software unit (program or module) within the system.
2. Major - The defect results in the failure of the complete software system, of
a subsystem, or of a software unit (program or module) within the system.
There is no way to make the failed component(s) function; however, there are
acceptable processing alternatives which will yield the desired result.
3. Average - The defect does not result in a failure, but causes the system to
produce incorrect, incomplete, or inconsistent results, or the defect impairs
the system's usability.
4. Minor - The defect does not cause a failure, does not impair usability, and
the desired processing results are easily obtained by working around the
defect.
Bug Life Cycle:
The different states a bug passes through are:
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed
1. New: When the bug is posted for the first time, its state will be “NEW”. This
means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the test lead confirms that the bug is
genuine and changes the state to “OPEN”.
3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the
corresponding developer or developer team. The state of the bug is now
changed to “ASSIGN”.
4. Test: Once the developer fixes the bug, he has to assign the bug to the
testing team for the next round of testing. Before releasing the software with
the bug fixed, he changes the state of the bug to “TEST”. This indicates that
the bug has been fixed and released to the testing team.
5. Deferred: A bug in the deferred state is expected to be fixed in a later
release. There can be many reasons for moving a bug to this state: the priority
of the bug may be low, there may be a lack of time for the release, or the bug
may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the
bug. Then the state of the bug is changed to “REJECTED”.
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue,
then the status of one bug is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the
tester retests it. If the bug is no longer present in the software, the tester
confirms that the bug is fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the bug is fixed by the
developer, the tester changes the status to “REOPENED”. The bug traverses the
life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels
that the bug no longer exists in the software, he changes the status of the bug
to “CLOSED”. This state means that the bug is fixed, tested and approved.
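As an illustration, the states above can be modelled as a simple state machine. The transition table in the sketch below is one plausible reading of the descriptions, not the prescribed workflow of any particular defect-tracking tool.

# Bug life cycle as a state machine: each state maps to the states it may move to.
ALLOWED_TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":      {"ASSIGN"},
    "ASSIGN":    {"TEST", "DEFERRED", "REJECTED", "DUPLICATE"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},
    "DEFERRED":  {"ASSIGN"},
    "REJECTED":  set(),
    "DUPLICATE": set(),
    "CLOSED":    set(),
}


def move(current, target):
    """Return the new state, or raise if the transition is not allowed."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"cannot move bug from {current} to {target}")
    return target


if __name__ == "__main__":
    state = "NEW"
    for target in ["OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"]:
        state = move(state, target)
        print("bug is now", state)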
Defect Priority:
The priority assigned to a defect indicates the impact it has on testing efforts
or on users and administrators of the application under test. This information
is used by developers and management as the basis for assigning the priority
of work on defects.
A sample guideline for assignment of Priority Levels during the product test
phase includes:
1. Critical / Show Stopper — An item that prevents further testing
of the product or function under test can be classified as a Critical Bug.
No workaround is possible for such bugs. Examples include a
missing menu option or a security permission required to access a
function under test.
2. Major / High — A defect that causes the item not to function as
expected/designed, or causes other functionality to fail to meet
requirements, can be classified as a Major Bug. A workaround can be
provided for such bugs. Examples include inaccurate
calculations, the wrong field being updated, etc.
3. Average / Medium — Defects which do not conform to
standards and conventions can be classified as Medium Bugs. Easy
workarounds exist to achieve functionality objectives. Examples
include matching visual and text links which lead to different end
points.
4. Minor / Low — Cosmetic defects which do not affect the
functionality of the system can be classified as Minor Bugs.
The defect Severity scale above determines defect criticality, and the
associated defect Priority levels are assigned to errors found in software.
The scale can be easily adapted to automated test management tools.
Bug Report Contents:
A bug report should contain complete information such that developers can
understand the bug, get an idea of its severity, and reproduce it if necessary.
It typically includes:
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug
occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a
test case or if the developer doesn't have easy access to the test
case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool
logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is
common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
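As a closing illustration, the fields listed above could be captured in a structured record along the lines of the sketch below. The field names, types, and sample values are illustrative assumptions; real defect-tracking tools define their own schemas.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BugReport:
    bug_id: str
    status: str                      # e.g. 'New', 'Released for Retest'
    application: str
    version: str
    module: str
    environment: str
    summary: str                     # one-line bug description
    description: str                 # full bug description
    steps_to_reproduce: List[str]
    severity: int                    # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    tester: str
    test_date: str
    assigned_to: Optional[str] = None
    fix_description: Optional[str] = None
    retest_results: Optional[str] = None
    regression_results: Optional[str] = None
    attachments: List[str] = field(default_factory=list)   # logs, screenshots, etc.


if __name__ == "__main__":
    report = BugReport(
        bug_id="BUG-1042",
        status="New",
        application="OrderEntry",
        version="2.3.1",
        module="Checkout",
        environment="Windows 11, Chrome 126, staging DB",
        summary="Order total not updated after removing an item",
        description="Removing the last item leaves the previous total on screen.",
        steps_to_reproduce=["Add two items", "Remove one item", "Observe total"],
        severity=2,
        reproducible=True,
        tester="Example Tester",
        test_date="2024-01-15",
    )
    print(report.bug_id, "-", report.summary)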