Manual Testing Document
Testing Tools
Mind
Software
Computer software has become a driving force. It is the engine that drives business
decision making. It serves as the basis for modern scientific investigation and engineering
problem solving. It is a key factor that differentiates modern products and services. It is
embedded in systems of all kinds: transportation, medical, telecommunications, military,
industrial processes, entertainment, office products; the list is almost endless.
Software is virtually inescapable in a modern world. And as we move into the twenty-first
century, it will become the driver for new advances in everything from elementary
education to genetic engineering.
It affects nearly every aspect of our lives and has become pervasive in our commerce, our
culture, and our everyday activities.
Why does software have bugs?
Software has bugs because of misinterpretation of requirements or poor communication,
software complexity, programming errors, changing requirements, time pressure, egos of
people, poorly documented code, and the software development tools used.
5 Common Problems in the Software Development Process
1. Poor requirements: if requirements are unclear, incomplete, too general, or not
testable, there will be problems.
2. Unrealistic schedule: if too much work is crammed into too little time, problems
are inevitable.
3. Inadequate testing: no one will know whether or not the program is any good
until the customer complains or systems crash.
4. Featuritis: requests to pile on new features after development is underway are
extremely common.
5. Miscommunication: if developers don't know what's needed or customers
have erroneous expectations, problems are guaranteed.
5 Common Solutions to Software Development Problems:
1. Solid requirements
2. Realistic schedule
3. Adequate Testing
4. Stick to initial requirements as much as possible
5. Communication
Testing:
- Verifying and validating the application with respect to customer requirements, or
- Finding the differences between customer-expected and actual values.
Testing should also ensure that a quality product is delivered to the customer.
Tester Responsibilities:
Identifying the most appropriate implementation approach for a given test
Implementing individual tests.
Setting up and executing the tests
Logging outcomes and verifying test execution
Analyzing and recovering from execution errors.
SDLC (Software Development Life Cycle)
Any software development has to go through the stages below:
Design
- How will the system solve the problem?
- Logical implementation of the s/w happens.
Coding
- Translating the design into the actual system
- Physical construction of the s/w product
Testing
- Does the system completely solve the problem?
- Have the requirements been satisfied?
- Does the system work properly in all situations?
Maintenance
- Small enhancements to the s/w happen, and support is provided to solve the
real-time problems that the system faces when it goes live.
Feasibility Study:
- Determines whether the problem is worth solving and whether a viable solution is
technically and economically possible.
Testing
Testing is executing a program with the intent of finding errors, faults, and failures. A
fault is a condition that causes the software to fail to perform its required function. An
error refers to the difference between actual output and expected output. A failure is the
inability of a system or component to perform a required function according to its
specification. Failure is an event; fault is a state of the software, caused by an error.
Why Testing?
- To discover defects.
- To learn about the reliability of the software.
- To ensure that the product works as the user expected.
- To avoid being sued by customers.
- To detect defects early, which helps in reducing the cost of defect fixing.
[Chart: relative cost (%) of fixing a defect at successive life-cycle stages, rising
through 0, 10, 20, 50, and 100.]
Installation:
File conversion
System changeover
New system becomes operational
Staff training
Maintenance:
Corrective maintenance
Perfective maintenance
Adaptive maintenance
Waterfall model stages: Analysis -> Design -> Coding -> Testing -> Maintenance
Waterfall Strengths
Easy to understand, easy to use
Provides structure to inexperienced staff
Milestones are well understood
Sets requirements stability
Good for management control (plan, staff, track)
Works well when quality is more important than cost or
schedule
Disadvantages
The waterfall model is the oldest and the most widely used paradigm. However, many
projects rarely follow its sequential flow, due to the inherent problems associated with
its rigid format.
Prototyping Model
Developers build a prototype during the requirements phase
Prototype is evaluated by end users
Users give corrective feedback
Developers further refine the prototype
When the user is satisfied, the prototype code is brought up to the standards needed for
a final product.
Prototyping Steps
- A preliminary project plan is developed.
- A partial high-level paper model is created.
- The model is the source for a partial requirements specification.
- A prototype is built with basic and critical attributes.
- The designer builds the database, user interface, and algorithmic functions.
- The designer demonstrates the prototype; the user evaluates it for problems and
suggests improvements.
- This loop continues until the user is satisfied.
Prototyping Strengths
Customers can see the system requirements as they are being gathered
Developers learn from customers
A more accurate end product
Unexpected requirements accommodated
Allows for flexible design and development
Steady, visible signs of progress produced
Interaction with the prototype stimulates awareness of additional needed functionality
Prototyping Weaknesses
Tendency to abandon structured program development for code-and-fix development
Bad reputation for quick-and-dirty methods
Overall maintainability may be overlooked
The customer may want the prototype delivered.
Process may continue forever (scope creep)
When to use Prototyping Model
Requirements are unstable or have to be clarified
As the requirements clarification stage of a waterfall model
Develop user interfaces
Short-lived demonstrations
New, original development
With the analysis and design portions of object-oriented development.
Prototype Model
This model is suitable when the client is not clear about the requirements. This is
a cyclic version of the linear model. In this model, once the requirement analysis is done
and the design for a prototype is made, the development process gets started. Once the
prototype is created, it is given to the customer for evaluation. The customer tests the
package and gives his/her feedback to the developer, who refines the product according
to the customer's exact expectations. After a finite number of iterations, the final software
package is given to the customer. In this methodology, the software evolves as a result
of periodic shuttling of information between the customer and developer. This is the most
popular development model in the contemporary IT industry. Most of the successful
software products have been developed using this model - as it is very difficult (even for
a whiz kid!) to comprehend all the requirements of a customer in one shot. There are
many variations of this model skewed with respect to the project management styles of
the companies. New versions of a software product evolve as a result of prototyping.
3. Process modeling
The data objects defined in the data-modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions
are created for adding, modifying, deleting, or retrieving a data object.
4. Application generation
The RAD model assumes the use of RAD tools such as VB, VC++, Delphi, etc., rather
than creating software using conventional third-generation programming languages. The
RAD model works to reuse existing program components (when possible) or create
reusable components (when necessary). In all cases, automated tools are used to facilitate
construction of the software.
5. Testing and turnover
Since the RAD process emphasizes reuse, many of the program components have already
been tested. This minimizes the testing and development time.
Spiral Model
It is the most generic of the models; most life-cycle models can be derived as special
cases of the spiral model. The spiral model uses a risk-management approach to software
development.
Drawbacks: Even though there is no technical drawback, the cost of maintenance is
very high.
V Model

Business Requirements   <-->  User Acceptance Testing
System Requirements     <-->  System Testing
High-Level Design       <-->  Integration Testing
Low-Level Design        <-->  Unit Testing
                   Coding
The V model stands for the Verification & Validation model, which has the above stages
of software development. The left side is all development and involves more verification,
whereas the right side involves more validation and a little verification. It is a suitable
model for large-scale companies to maintain a testing process. This model defines a
co-existence relation between the development process and the testing process.
There are different levels of testing involved in V-model
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
After completion of every development phase, the corresponding testing activities should
be initiated.
Drawbacks: The cost of maintaining an independent testing team is very high.
Agile SDLCs
Speed up or bypass one or more life cycle phases
Usually less formal and reduced scope
Used for time-critical applications
Used in organizations that employ disciplined methods
Some Agile Methods
Adaptive Software Development (ASD)
Feature Driven Development (FDD)
Crystal Clear
Dynamic Software Development Method (DSDM)
Rapid Application Development (RAD)
Scrum
Extreme Programming (XP)
Rational Unified Process (RUP)
Extreme Programming (XP)
- For small-to-medium-sized teams developing software with vague or rapidly
changing requirements.
- Coding is the key activity throughout a software project.
- Communication among teammates is done with code.
- Life cycle and behavior of complex objects are defined in test cases, again in code.
XP Practices
1. Planning game: determine the scope of the next release by combining business
priorities and technical estimates.
2. Small releases: put a simple system into production, then release new versions in
very short cycles.
3. Metaphor: all development is guided by a simple shared story of how the whole
system works.
Testing Types
Black Box Testing:
Black box testing is also called Functionality Testing. In this testing, the user is asked
to test the correctness of the functionality with the help of inputs and outputs. The user
doesn't require knowledge of the software code.
BBT methods focus on the functional requirements of the software/product and attempt
to find errors in the following categories:
Incorrect or missing functions
Interface errors
Errors in database structures / External database access
Performance errors.
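A small sketch of the black-box idea described above: test cases are derived purely from the specification (inputs and expected outputs), never from the code. The leap-year function and its test table are invented for illustration.

```python
# A black-box test: we only know the spec (inputs -> expected outputs),
# not the implementation. The function is a hypothetical unit under test.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test table derived purely from the requirement, not from the code.
cases = [
    (2000, True),   # divisible by 400
    (1900, False),  # divisible by 100 but not 400
    (2024, True),   # divisible by 4
    (2023, False),  # not divisible by 4
]
for given, expected in cases:
    actual = is_leap_year(given)
    assert actual == expected, f"{given}: expected {expected}, got {actual}"
print("all black-box cases passed")
```

The same table could be run against any implementation of the spec, which is the point of black box testing.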
Approach:
Equivalence Class
- For each piece of the specification, generate one or more equivalence classes and
give them equivalent treatment.
- Label the classes as Valid or Invalid.
- Generate one test case for each Invalid equivalence class.
- Generate a test case that covers as many Valid equivalence classes as possible.
Boundary Value Analysis
- Generate test cases for the boundary values:
- Minimum Value, Minimum Value + 1, Minimum Value - 1
- Maximum Value, Maximum Value + 1, Maximum Value - 1
Error Guessing
- Generating test cases against the specification based on experience.
- It is a typical checklist-driven testing method.
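The boundary value and equivalence class rules above can be sketched in a few lines. The input field and its range 1..100 are assumed examples, not from the source.

```python
# Boundary value analysis for a field specified (hypothetically) to
# accept integers in the range 1..100.
MIN, MAX = 1, 100

def boundary_values(lo, hi):
    """Generate BVA test inputs: each boundary, plus and minus one."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# Equivalence classes for the same field: one valid class (1..100) and
# two invalid classes (below range, above range) -- one test case each.
valid_case = 50                      # any representative of the valid class
invalid_low, invalid_high = -5, 200  # one case per invalid class

print(boundary_values(MIN, MAX))  # [0, 1, 2, 99, 100, 101]
```

Six boundary cases plus three class representatives cover the field far more cheaply than exhaustive input testing.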
White Box Testing:
White box testing is also called Structural Testing. The user requires knowledge of the
software code. Using WBT methods, the s/w engineer can derive test cases that do the
following:
- Guarantee that all independent paths within a module have been exercised at least once
- Exercise all logical decisions on their true and false sides
- Execute all loops at their boundaries and within their operational bounds
- Exercise internal data structures to ensure their validity
The diagram does not depict where and when you write your Test Plan and Strategy
documents. But, it is understood that before you begin your testing activities these
documents should be ready. Ideally, when the Project Plan and Project Strategy are being
made, this is the time when the Test Plan and Test Strategy documents are also made.
Testing at Each Stage of Development
- Requirement Study -> verified against the Requirement Checklist -> Software
Requirement Specification
- Software Requirement Specification -> verified against the Functional Specification
Checklist -> Functional Specification Document
- Functional Specification Document -> Architecture Design
- Architecture Design -> Coding (verified against the Functional Specification
Document)
Reviews:
A process or meeting during which a work product, or set of work products, is presented
to project personnel, managers, users, customers, or other interested parties for comment
or approval.
The main goal of reviews is to find defects. Reviews are a good complement to testing to
help assure quality. A few purposes of SQA reviews are as follows:
- Assure the quality of a deliverable before the project moves to the next stage.
- Once a deliverable has been reviewed, revised as required, and approved, it can be
used as a basis for the next stage in the life cycle.
Types of reviews:
Types of reviews include Management Reviews, Technical Reviews, Inspections,
Walkthroughs, and Audits.
Management Reviews:
Management reviews are performed by those directly responsible for the system in order
to monitor progress, determine the status of plans and schedules, and confirm
requirements and their system allocation.
The main objectives of Management Reviews can be categorized as follows:
- Validate from a management perspective that the project is making progress
according to the project plan.
- Ensure that deliverables are ready for management approvals.
- Resolve issues that require management's attention.
- Identify any project bottlenecks.
- Keep the project in control.
Decisions made during such reviews include corrective actions, changes in the
allocation of resources, or changes to the scope of the project.
In management reviews the following Software products are reviewed:
Audit Reports
Software Configuration Management Plan
Contingency plans
Installation plans
Risk management plans
Software Q/A
The participants of the review play the roles of Decision-Maker, Review Leader,
Recorder, Management Staff, and Technical Staff.
Technical Reviews:
Technical reviews confirm that the product conforms to specifications; adheres to
regulations, standards, guidelines, and plans; that changes are properly implemented;
and that changes affect only those system areas identified by the change specification.
The main objectives of Technical Reviews can be categorized as follows:
- Ensure that the software conforms to the organization's standards.
- Ensure that any changes in the development procedures (design, coding, testing)
are implemented per the organization's pre-defined standards.
Requirement Review:
A process or meeting during which the requirements for a system, hardware item, or
software item are presented to project personnel, managers, users, customers, or other
interested parties for comment or approval. Types include system requirements review
and software requirements review.
Who is involved in Requirement Review?
Product management leads the Requirement Review. Members from every affected
department participate in the review, including functional consultants from the
customer end.
Input Criteria
The software requirement specification is the essential document for the review. A
checklist can be used for the review.
Exit Criteria
Exit criteria include the filled and completed checklist with the reviewers'
comments and suggestions, and re-verification of whether they are incorporated in the
documents.
Design Review:
A process or meeting during which a system, hardware, or software design is presented to
project personnel, managers, users, customers, or other interested parties for comment or
approval. Types include critical design review, preliminary design review, and system
design review.
Code Review:
A meeting at which software code is presented to project personnel, managers, users,
customers, or other interested parties for comment or approval.
Who is involved in Code Review?
A QA team member leads the code review (in case the QA team is only involved in
black box testing, the development team lead chairs the review team). Members from
the development team and QA team participate in the review.
Input Criteria:
The coding standards document and the source file are the essential documents
for the review. A checklist can be used for the review.
Exit Criteria:
Exit criteria include the filled and completed checklist with the reviewers'
comments and suggestions, and re-verification of whether they are incorporated in the
documents.
Walkthroughs:
A static analysis technique in which a designer or programmer leads members of the
development team and other interested parties through a segment of documentation or
code, and the participants ask questions and make comments about possible errors,
violations of development standards, and other problems.
The objectives of a Walkthrough can be summarized as follows:
- Detect errors early.
Inspection:
A static analysis technique that relies on visual examination of development products to
detect errors, violations of development standards, and other problems. Types include
code inspections, design inspections, architectural inspections, testware inspections, etc.
The participants in inspections assume one or more of the following roles:
a) Inspection leader
b) Recorder
c) Reader
d) Author
e) Inspector
All participants in the review are inspectors. The author shall not act as inspection leader
and should not act as reader or recorder. Other roles may be shared among the team
members. Individual participants may act in more than one role.
Individuals holding management positions over any member of the inspection team shall
not participate in the inspection.
Input to the inspection shall include the following:
a) A statement of objectives for the inspection
b) The software product to be inspected
c) Documented inspection procedure
d) Inspection reporting forms
e) Current anomalies or issues list
Input to the inspection may also include the following:
f) Inspection checklists
g) Any regulations, standards, guidelines, plans, and procedures against which the
software product is to be inspected
h) Hardware product specifications
i) Hardware performance data
j) Anomaly categories
The individuals responsible for the software product may make additional reference
material available when requested by the inspection leader.
The purpose of the exit criteria is to bring an unambiguous closure to the inspection
meeting. The exit decision shall determine if the software product meets the inspection
exit criteria and shall prescribe any appropriate rework and verification. Specifically, the
inspection team shall identify the software product disposition as one of the following:
a) Accept with no or minor rework. The software product is accepted as is or with only
minor rework. (For example, that would require no further verification).
b) Accept with rework verification. The software product is to be accepted after the
inspection leader or
a designated member of the inspection team (other than the author) verifies rework.
c) Re-inspect. Schedule a re-inspection to verify rework. At a minimum, a re-inspection
shall examine the software product areas changed to resolve anomalies identified in the
last inspection, as well as side effects of those changes.
White Box Testing:
White box testing involves looking at the structure of the code. When you know the
internal structure of a product, tests can be conducted to ensure that the internal
operations perform according to the specification and that all internal components have
been adequately exercised. In other words, WBT tends to involve the coverage of the
specification in the code.
Code coverage is defined in six types, as listed below.
What do we do in WBT?
In WBT, we use the control structure of the procedural design to derive test cases.
Using WBT methods a tester can derive the test cases that
Guarantee that all independent paths within a module have been exercised at
least once.
Exercise all logical decisions on their true and false values.
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to ensure their validity.
White box testing (WBT) is also called Structural or Glass box testing.
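The white box goals listed above (exercise every decision on its true and false sides) can be shown with a tiny sketch. The `classify` function and its pass mark of 60 are assumptions made up for illustration.

```python
# Minimal white-box sketch: test inputs are chosen from the code's
# structure so that its one decision is exercised on BOTH sides.
def classify(score):
    if score >= 60:          # decision: exactly one branch point
        return "pass"
    return "fail"

# One case drives the decision true, one drives it false:
assert classify(75) == "pass"   # true side
assert classify(40) == "fail"   # false side
print("both sides of the decision exercised")
```

A pure black-box tester might never pick inputs on both sides of 60; knowing the code's structure makes that coverage deliberate.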
Why WBT?
We do WBT because Black box testing is unlikely to uncover numerous sorts of defects
in the program. These defects can be of the following nature:
Basis Path Testing:
Basis path testing is a white box testing technique first proposed by Tom McCabe. The
basis path method enables the tester to derive a logical complexity measure of a
procedural design and use this measure as a guide for defining a basis set of execution
paths. Test cases derived to exercise the basis set are guaranteed to execute every
statement in the program at least one time during testing.
The flow graph depicts logical control flow using a diagrammatic notation. Each
structured construct has a corresponding flow graph symbol.
Cyclomatic Complexity:
Cyclomatic complexity is a software metric that provides a quantitative measure of the
logical complexity of a program. When used in the context of the basis path testing
method, the value computed for cyclomatic complexity defines the number of
independent paths in the basis set of a program and provides an upper bound for the
number of tests that must be conducted to ensure that all statements have been executed
at least once.
An independent path is any path through the program that introduces at least one new set
of processing statements or a new condition.
Computing Cyclomatic Complexity:
Cyclomatic complexity has a foundation in graph theory and provides an extremely
useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
   V(G) = E - N + 2
   where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as
   V(G) = P + 1
   where P is the number of predicate nodes contained in the flow graph G.
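Both formulas above can be checked against each other on a small example. The flow graph here (an if/else followed by a loop-back) is an assumed example, not taken from the source.

```python
from collections import Counter

# An assumed flow graph: nodes 1..6, modelling an if/else (node 2)
# followed by a loop-back decision (node 5).
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {n for edge in edges for n in edge}

# Formula 2: V(G) = E - N + 2
E, N = len(edges), len(nodes)
v_g = E - N + 2

# Formula 3: V(G) = P + 1, where predicate nodes have >1 outgoing edge.
out_degree = Counter(src for src, _ in edges)
P = sum(1 for d in out_degree.values() if d > 1)
assert P + 1 == v_g            # the two formulas agree

print(v_g)  # 3 -> at most 3 independent paths in the basis set
```

So at most three test cases are needed to cover a basis set of paths for this graph.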
Graph Matrices:
The procedure for deriving the flow graph and even determining a set of basis paths is
amenable to mechanization. To develop a software tool that assists in basis path testing, a
data structure called a graph matrix can be quite useful.
A graph matrix is a square matrix whose size is equal to the number of nodes in the flow
graph. Each row and column corresponds to an identified node, and matrix entries
correspond to connections between nodes.
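A graph matrix as described above is easy to build mechanically. The flow graph below is an assumed example; the final line uses the standard connection-matrix trick of summing (connections per row - 1) and adding 1 to recover V(G).

```python
# Build a graph matrix for an assumed flow graph: one row/column per
# node; entry [i][j] = 1 when an edge connects node i to node j.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6)]
nodes = sorted({n for edge in edges for n in edge})
index = {node: i for i, node in enumerate(nodes)}

size = len(nodes)
matrix = [[0] * size for _ in range(size)]
for src, dst in edges:
    matrix[index[src]][index[dst]] = 1

# Rows with more than one connection mark predicate nodes, so summing
# (connections - 1) over all rows and adding 1 yields V(G).
v_g = sum(max(sum(row) - 1, 0) for row in matrix) + 1
print(v_g)  # 2 for this graph (one predicate node, node 2)
```

The same matrix, extended with link weights (probabilities, processing times), is what a basis-path tool would operate on.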
Control Structure Testing:
Described below are some of the variations of control structure testing.
Condition Testing:
Condition testing is a test case design method that exercises the logical conditions
contained in a program module.
Data Flow Testing:
The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program.
Loop Testing:
Loop testing is a white box testing technique that focuses exclusively on the validity of
loop constructs. Four classes of loops can be defined: simple loops, concatenated loops,
nested loops, and unstructured loops.
Simple Loops:
The following sets of tests can be applied to simple loops, where n is the maximum
number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop.
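The simple-loop rules above translate directly into a small generator. The choice n = 10 (and m = n/2) is an assumed example.

```python
# Generate the pass counts suggested for simple loop testing, where n
# is the maximum number of allowed passes through the loop.
def loop_test_passes(n, m=None):
    """Return the sorted set of loop iteration counts worth testing."""
    if m is None:
        m = n // 2               # pick some m with m < n
    # skip; one pass; two passes; m passes; n-1, n, n+1 passes
    return sorted({0, 1, 2, m, n - 1, n, n + 1})

print(loop_test_passes(10))  # [0, 1, 2, 5, 9, 10, 11]
```

Seven iteration counts replace testing every count from 0 to n+1, on the reasoning that loop defects cluster at the boundaries.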
Nested Loops:
If we extend the test approach from simple loops to nested loops, the number of possible
tests would grow geometrically as the level of nesting increases.
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keep all other outer loops at
minimum values and other nested loops to typical values.
4. Continue until all loops have been tested.
Concatenated Loops:
Concatenated loops can be tested using the approach defined for simple loops, if each of
the loops is independent of the other. However, if two loops are concatenated and the
loop counter for loop 1 is used as the initial value for loop 2, then the loops are not
independent.
Unstructured Loops:
Whenever possible, this class of loops should be redesigned to reflect the use of the
structured programming constructs.
Unit Testing:
This is a typical scenario of manual unit testing activity:
A unit is allocated to a programmer for programming. The programmer has to use the
Functional Specifications document as input for his work. The programmer prepares
Program Specifications for his unit from the Functional Specifications. Program
Specifications describe the programming approach and coding tips for the unit's coding.
The programmer implements some functionality for the system to be developed. The
same is tested by referring to the unit test cases. While testing that functionality, if
any defects are found, they are recorded using the defect logging tool, whichever is
applicable. The programmer fixes the bugs found and tests the same for any errors.
Stubs and Drivers:
A software application is made up of a number of units, where the output of one unit
goes as the input of another unit. For example, a Sales Order Printing program takes a
Sales Order as an input, which is actually an output of the Sales Order Creation program.
Due to such interfaces, independent testing of a unit becomes impossible. But that is
what we want to do; we want to test a unit in isolation! So here we use a Stub and a
Driver.
A Driver is a piece of software that drives (invokes) the unit being tested. A driver
creates the necessary inputs required for the unit and then invokes the unit.
A unit may reference another unit in its logic. A Stub takes the place of such a
subordinate unit during unit testing. A Stub is a piece of software that works similarly to
a unit which is referenced by the unit being tested, but it is much simpler than the actual
unit. A Stub works as a stand-in for the subordinate unit and provides the minimum
required behavior for that unit.
The programmer needs to create such Drivers and Stubs for carrying out unit testing.
Both the Driver and the Stub are kept at a minimum level of complexity, so that they do
not induce any errors while testing the unit in question.
Example: For unit testing of the Sales Order Printing program, a Driver program will
have the code which will create Sales Order records using hard-coded data and then call
the Sales Order Printing program. Suppose this printing program uses another unit which
calculates sales discounts by some complex calculations. Then the call to this unit will be
replaced by a Stub, which will simply return fixed discount data.
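The Sales Order example above can be sketched in a few lines. All function names, record fields, and the fixed discount of 5.0 are invented for illustration.

```python
# Sketch of the Sales Order example: a driver feeds hard-coded orders
# into the unit under test, and a stub stands in for the complex
# discount-calculation unit.
def discount_stub(order):
    """Stub: returns fixed discount data, no real calculation."""
    return 5.0

def print_sales_order(order, discount_calculator):
    """Unit under test: formats one sales order line."""
    discount = discount_calculator(order)
    return f"{order['id']}: total={order['total'] - discount}"

def driver():
    """Driver: creates hard-coded input records, then invokes the unit."""
    orders = [{"id": "SO-1", "total": 100.0}, {"id": "SO-2", "total": 50.0}]
    return [print_sales_order(order, discount_stub) for order in orders]

print(driver())  # ['SO-1: total=95.0', 'SO-2: total=45.0']
```

Because both driver and stub are trivially simple, any failure observed here points at the printing unit itself, which is the whole point of isolation.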
Integration Testing:
Integration testing is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with interfacing. The
objective is to take unit-tested components and build a program structure that has been
dictated by design.
Usually, the following methods of integration testing are followed:
1. Top-down integration approach.
2. Bottom-up integration approach.
Top-Down Integration:
Top-down integration testing is an incremental approach to construction of the program
structure. Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module. Modules subordinate to the main control
module are incorporated into the structure in either a depth-first or breadth-first manner.
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are replaced
one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.
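The five steps above can be sketched as code. The main module, its subordinates, and the data shapes are all assumptions made up for illustration; the point is the mechanics of starting with stubs and swapping in real components one at a time.

```python
# Top-down integration sketch: the main control module starts with
# stubs for its subordinates, which are replaced one at a time.
def validate_stub(data):
    return True                  # stand-in until the real unit is ready

def save_stub(data):
    return "saved(stub)"

def main_control(data, validate=validate_stub, save=save_stub):
    """Main module under test, wired to its subordinate components."""
    return save(data) if validate(data) else "rejected"

# Step 1: everything below the main module is a stub.
assert main_control({"x": 1}) == "saved(stub)"

# A later step: swap in the real validate component, re-run the tests
# (regression) to confirm no new errors were introduced.
def validate_real(data):
    return "x" in data

assert main_control({"x": 1}, validate=validate_real) == "saved(stub)"
assert main_control({}, validate=validate_real) == "rejected"
print("integration steps pass")
```

Each swap keeps the rest of the structure unchanged, so a newly failing test implicates the component just integrated.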
Bottom-Up Integration:
Bottom-up integration testing begins construction and testing with atomic modules (i.e.,
components at the lowest levels in the program structure). Because components are
integrated from the bottom up, processing required for components subordinate to a given
level is always available, and the need for stubs is eliminated.
A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters that perform a specific software
sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program
structure.
System Testing:
System testing concentrates on testing the complete system with a variety of techniques
and methods. System testing comes into the picture after the unit and integration tests.
Compatibility Testing:
Compatibility testing concentrates on testing whether the given application goes well
with third-party tools, software, or hardware platforms.
For example, suppose you have developed a web application. The major compatibility
issue is that the web site should work well in various browsers. Similarly, when you
develop applications on one platform, you need to check if the application works on
other operating systems as well. This is the main goal of compatibility testing.
Before you begin compatibility tests, our sincere suggestion is that you should have a
cross-reference matrix between the various software and hardware, based on the
application requirements. For example, let us suppose you are testing a web application.
A sample list can be as follows:
Hardware                  Software                  Operating System
Pentium II, 128 MB RAM    IE 4.x, Opera, Netscape   Windows 95
Pentium III, 256 MB RAM   IE 5.x, Netscape          Windows XP
Pentium IV, 512 MB RAM    Mozilla                   Linux
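A cross-reference matrix like the one above can drive the test combinations mechanically. This is a sketch: the browser/platform lists echo the sample table, and `check_compatibility` is a placeholder for a real launch-and-verify step.

```python
import itertools

# Dimensions taken from a sample cross-reference matrix.
browsers = ["IE 5.x", "Netscape", "Opera", "Mozilla"]
platforms = ["Windows 95", "Windows XP", "Linux"]

def check_compatibility(browser, platform):
    # Placeholder: a real test would launch the application in this
    # browser/platform combination and verify its behavior.
    return True

# Every combination in the matrix becomes one compatibility test.
results = {
    (b, p): check_compatibility(b, p)
    for b, p in itertools.product(browsers, platforms)
}
print(len(results))  # 12 combinations covered
```

In practice the matrix is usually pruned to the combinations the requirements actually call for, since the full cross product grows quickly.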
Compatibility tests are also performed for various client/server based applications where
the hardware changes from client to client.
Compatibility testing is very crucial for organizations developing their own products. The products have to be checked for compatibility with third-party tools, hardware, and software platforms. E.g. a call center product has been built as a solution with product X, but a client is interested in using it with product Y; then the issue of compatibility arises. It is important that the product is compatible with varying platforms. Within the same platform, the organization has to be watchful that with each new release the product is tested for compatibility.
A good way to keep up with this would be to have a few resources assigned along with
their routine tasks to keep updated about such compatibility issues and plan for testing
when and if the need arises.
The above example does not mean that companies which are not developing products do not have to cater for this type of testing. Their case is equally relevant: if an application uses standard software, will it be able to run successfully with newer versions too? Or if a website runs on IE or Netscape, what will happen when it is opened in Opera or Mozilla? Here again it is best to keep these issues in mind and plan for compatibility testing in parallel, to avoid any catastrophic failures and delays.
Recovery Testing:Recovery testing is a system test that focuses the software to fall in a variety of ways and
verifies that recovery is properly performed. If it is automatic recovery then reinitialization, check pointing mechanisms, data recovery and restart should be evaluated
for correctness. If recovery requires human intervention, the mean-time-to-repair
(MTTR) is evaluated to determine whether it is within acceptable limits.
Usability Testing:Usability is the degree to which a user can easily learn and use a product to achieve a
goal. Usability testing is the system testing which attempts to find any human-factor
problems. A simpler description is testing the software from a users point of view.
Essentially it means testing software to prove/ensure that it is user-friendly, as distinct
from testing the functionality of the software. In practical terms it includes ergonomic
considerations, screen design, standardization etc.
Security Testing
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, password cracking, unauthorized entry into the software, and network security are all taken into consideration.
Performance Testing
Performance testing of a Web site is basically the process of understanding how the Web
application and its operating environment respond at various user load levels. In general,
we want to measure the Response Time, Throughput, and Utilization of the Web site
while simulating attempts by virtual users to simultaneously access the site. One of the
main objectives of performance testing is to maintain a Web site with low response time,
high throughput, and low utilization.
The effort of performance testing is addressed in two ways:
Load testing
Stress testing
Load testing:Load testing is a much used industry term for the effort of performance testing. Here load
means the number of users or the traffic for the system. Load testing is defined as the
testing to determine whether the system is capable of handling anticipated number of
users or not.
In Load Testing, the virtual users are simulated to exhibit the real user behavior as much
as possible. Even the user think time such as how users will take time to think before
inputting data will also be emulated. It is carried out to justify whether the system is
performing well for the specified limit of load.
For example, let us say an online-shopping application anticipates 1000 concurrent user hits at the peak period. In addition, the peak period is expected to last 12 hrs. Then the system is load tested with 1000 virtual users for 12 hrs. These kinds of tests are carried out in levels: first 1 user, then 50 users, 100 users, 250 users, 500 users and so on, till the anticipated limit is reached. The testing effort stops at exactly 1000 concurrent users.
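A minimal sketch of this level-by-level ramp-up, with a stand-in request function and scaled-down user counts and think times (all values below are illustrative, not the 1000-user/12-hour scenario itself):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP call to the application under test."""
    time.sleep(0.001)          # simulated server work
    return 200

def run_level(num_users, think_time=0.0):
    """Simulate one load level: num_users virtual users, each making one
    request after an (optional) think-time pause, as real users would."""
    def virtual_user(_):
        time.sleep(think_time)  # emulate the user pausing before input
        return fake_request()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(virtual_user, range(num_users)))

# Ramp up in levels toward the anticipated limit (scaled down here).
for level in (1, 5, 10):
    statuses = run_level(level, think_time=0.001)
    assert all(code == 200 for code in statuses)
print("all load levels handled")
```

Real load tools (JMeter, LoadRunner, etc.) do the same thing at scale, adding response-time, throughput, and utilization measurement.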
The objective of load testing is to check whether the system can perform well for the specified load. The system may be capable of accommodating more than 1000 concurrent users, but validating that is not within the scope of load testing. No attempt is made to determine how many more concurrent users the system is capable of servicing. Table 1 illustrates the example specified.
Stress testing:Stress testing is another industry term of performance testing. Though load testing &
Stress testing are used synonymously for performancerelated efforts, their goal is
different.
Unlike load testing, where testing is conducted for a specified number of users, stress testing is conducted for a number of concurrent users beyond the specified limit. The objective is to identify the maximum number of users the system can handle before breaking down or degrading drastically. Since the aim is to put more stress on the system, the think time of the user is ignored and the system is exposed to excess load. The goals of load and stress testing are listed in Table 2. Refer to Table 3 for the inference drawn through the performance testing efforts.
Let us take the same example of the online shopping application to illustrate the objective of stress testing. Stress testing determines the maximum number of concurrent users an online system can service, which can be beyond 1000 users (the specified limit). However, there is a possibility that the maximum load that can be handled by the system may be found to be the same as the anticipated limit.
Stress testing also determines the behavior of the system as the user base increases. It checks whether the system is going to degrade gracefully or crash suddenly when the load goes beyond the specified limit.
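A hedged sketch of the breaking-point search: the `system_capacity` model below is an invented stand-in for the real system under test, and its capacity figure is chosen only for illustration:

```python
def system_capacity(concurrent_users, max_capacity=1200):
    """Hypothetical system model: serves users up to a fixed capacity,
    then fails. A real stress test would drive the live system instead."""
    if concurrent_users > max_capacity:
        raise RuntimeError("system overloaded")
    return True

def find_breaking_point(start, step):
    """Ignore think time and keep increasing load until the system
    breaks; report the last load level that was handled successfully."""
    load = start
    while True:
        try:
            system_capacity(load)
        except RuntimeError:
            return load - step
        load += step

# Start at the specified limit (1000) and push beyond it.
print(find_breaking_point(1000, 100))  # 1200 with the model above
```

This mirrors the stated goal: start at the anticipated limit and push past it until the system degrades or breaks, then report the maximum load handled.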
Load and stress testing of illustrative example:

Type of Testing    Duration    Goals
Load Testing       12 Hours    Testing for the anticipated user base; validates whether the system performs well for the specified limit.
Stress Testing     12 Hours    Testing beyond the anticipated user base; checks whether the system degrades gracefully or crashes when the load exceeds the limit.
Regression Testing:Regression testing as the name suggests is used to test / check the effect of changes made
in the code. Most of the time the testing team is asked to check last minute changes in the
code just before making a release to the client, in this situation the testing team needs to
check only the affected areas. So in short for the regression testing the testing team
should get the input from the development team about the nature / amount of change in
the fix so that testing team can first check the fix and then the side effects of the fix.
In fact the regression testing is the testing in which maximum automation can be done.
The reason being the same set of test cases will be run on different builds multiple times.
But again the extent of automation depends on whether the test cases will remain
applicable over the time, In case the automated test cases do not remain applicable for
some amount of time then test engineers will end up in wasting time to automate and
dont get enough out of automation.
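One way to sketch such an automated regression suite in Python (the case names, functional areas, and build flags below are all hypothetical):

```python
# Each regression case records the area it covers, so that after a fix
# the team can select the affected cases first. Names are illustrative.
REGRESSION_SUITE = {
    "login_valid_user": {"area": "login",
                         "run": lambda build: build["login_ok"]},
    "checkout_total":   {"area": "checkout",
                         "run": lambda build: build["checkout_ok"]},
}

def run_regression(build, affected_area=None):
    """Run the whole suite, or only the cases for the affected area."""
    results = {}
    for name, case in REGRESSION_SUITE.items():
        if affected_area and case["area"] != affected_area:
            continue
        results[name] = case["run"](build)
    return results

# The same cases run against different builds -- the part worth automating.
old_build = {"login_ok": True, "checkout_ok": False}
fixed_build = {"login_ok": True, "checkout_ok": True}
print(run_regression(old_build, affected_area="checkout"))
print(run_regression(fixed_build))
```

First the affected area is rechecked against the fix, then the full suite guards against side effects, matching the process described above.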
Test Strategy
Test Planning
Test Design
Test Execution
Defect Report
Test Strategy:Before starting any testing activities, the team lead will have to think a lot & arrive at a
strategy. This will describe the approach, which is to be adopted for carrying out test
activities including the planning activities. This is a formal document and the very first
document regarding the testing area and is prepared at a very early stag in SDLC. This
document must provide a generic test approach as well as specific details regarding the project. The following areas are addressed in the test strategy document.
For example, we should have a master test strategy document at the project level and a detailed test plan for every release. The strategy document should give the overall scope of the project at a high level.
1 Test Levels
The test strategy must state which test levels will be carried out for the particular project. Unit, integration and system testing will be carried out in all projects, but many times the integration and system testing may be combined. Details like this are addressed in this section.
2 Roles and Responsibilities
The roles and responsibilities of the test leader, individual testers, and project manager are to be clearly defined at the project level in this section. This may not have names associated, but the roles have to be very clearly defined. The review and approval mechanism for test plans and other test documents must be stated here. Also, we have to state who reviews the test cases and test records, and who approves them. The documents may go through a series of reviews or multiple approvals, and these have to be mentioned here.
3 Testing Tools
Any testing tools which are to be used at different test levels must be clearly identified. This includes justification for the tools being used at that particular level.
4 Risks and Mitigation
Any risks that will affect the testing process must be listed along with their mitigation. By documenting the risks in this document, we can anticipate their occurrence well ahead of time and proactively prevent them from occurring. Sample risks are dependency on completion of coding done by sub-contractors, capability of testing tools, etc.
5 Regression Test Approach
When a particular problem is identified, the program will be debugged and the fix will be applied. To make sure that the fix works, the program will be tested again. Regression testing will make sure that one fix does not create some other problem in that program or in any other interface. So, a set of related test cases may have to be repeated, to make sure that nothing else is affected by a particular fix. How this is going to be carried out must be elaborated in this section. In some companies, whenever there is a fix in one unit, all unit test cases for that unit will be repeated, to achieve a higher level of quality.
6 Test Groups
From the list of requirements, we can identify related areas, whose functionality is
similar. These areas are the test groups. For example, in a railway reservation system,
anything related to ticket booking is a functional group; anything related with report
generation is a functional group. Same way, we have to identify the test groups based on
the functionality aspect.
7 Test Priorities
Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones, and if they fail, the product cannot be released. Some other test cases may be treated as cosmetic, and if they fail, we can release the product without much compromise on the functionality. These priority levels must be clearly stated. They may be mapped to the test groups also.
8 Test Status Collections and Reporting
When test cases are executed, the test leader and the project manager must know where exactly we stand in terms of testing activities. To know where we stand, the inputs from the individual testers must come to the test leader. This will include which test cases were executed, how long it took, how many test cases passed, how many failed, etc. Also, how often we collect the status is to be clearly mentioned. Some companies have a practice of collecting the status on a daily or weekly basis. This has to be mentioned clearly.
9 Test Records Maintenance
When the test cases are executed, we need to keep track of the execution details: when they were executed, who did it, how long it took, what the result was, etc. This data must be available to the test leader and the project manager, along with all the team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and the directories. The naming convention for the documents and files must also be mentioned.
10 Requirements Traceability Matrix
Ideally, each software product developed must satisfy the set of requirements completely. So, right from design, each requirement must be addressed in every single document in the software process. The documents include the HLD, LLD, source code, unit test cases, integration test cases, and system test cases. Refer to the following sample table, which describes the Requirements Traceability Matrix process. In this matrix, the rows hold the requirements. For every document (HLD, LLD, etc.), there will be a separate column. So, in every cell, we need to state what section of the HLD addresses a particular requirement. Ideally, if every requirement is addressed in every single document, all the individual
cells must have valid section ids or names filled in. Then we know that every requirement is addressed. If any requirement is missed, we need to go back to the document and correct it so that it addresses the requirement. For testing at each level, we may have to address the requirements. One integration or system test case may address multiple requirements.
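A small sketch of the matrix kept as data, with a check for unfilled cells; the requirement ids, document names, and section numbers are illustrative only:

```python
# Hypothetical traceability matrix: rows are requirements, columns are
# the documents that must address each one.
DOCUMENTS = ["HLD", "LLD", "Unit TC", "Integration TC", "System TC"]
RTM = {
    "REQ-001": {"HLD": "3.1", "LLD": "4.2", "Unit TC": "UTC-07",
                "Integration TC": "ITC-02", "System TC": "STC-01"},
    "REQ-002": {"HLD": "3.4", "LLD": None, "Unit TC": "UTC-09",
                "Integration TC": None, "System TC": "STC-03"},
}

def missing_coverage(rtm, documents):
    """Report every (requirement, document) cell left unfilled."""
    gaps = []
    for req, cells in rtm.items():
        for doc in documents:
            if not cells.get(doc):
                gaps.append((req, doc))
    return gaps

print(missing_coverage(RTM, DOCUMENTS))
# REQ-002 is not addressed in the LLD or the integration test cases.
```

An empty gap list is exactly the "every cell filled" condition described above; any entry points at a document that must be corrected.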
11 Test Summary
The senior management may like to have test summary on a weekly or monthly basis. If
the project is very critical, they may need it on a daily basis also. This section must
address what kind of test summary reports will be produced for the senior management
along with the frequency.
The test strategy must give a clear vision of what the testing team will do for the whole project for the entire duration. This document will/may be presented to the client also, if needed. The person who prepares this document must be functionally strong in the product domain, with very good experience, as this is the document that is going to drive the entire team's testing activities. The test strategy must be clearly explained to the testing team members right at the beginning of the project.
Test Plan:The test strategy identifies multiple test levels, which are going to be performed for the
project. Activities at each level must be planned well in advance and it has to be formally
documented. Based on the individual plans only, the individual test levels are carried out.
The plans are to be prepared by experienced people only. In all test plans, the ETVX
{Entry-Task-Validation-Exit} criteria are to be mentioned. Entry means the entry point to
that phase. For example, for unit testing, the coding must be complete and then only one
can start unit testing. Task is the activity that is performed. Validation is the way in which
the progress and correctness and compliance are verified for that phase. Exit tells the
completion criteria of that phase, after the validation is done. For example, the exit
criterion for unit testing is all unit test cases must pass.
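As a sketch, the ETVX criteria for the unit-test phase can be captured as data and checked as a gate; the state flags and their names below are hypothetical:

```python
# Hedged sketch: ETVX criteria for the unit-test phase as data, with a
# gate check before the phase may be declared complete.
unit_test_phase = {
    "entry": lambda state: state["coding_complete"],       # entry criteria
    "task": "execute all unit test cases",                 # the activity
    "validation": lambda state: state["results_reviewed"], # compliance check
    "exit": lambda state: state["failed_cases"] == 0,      # completion criteria
}

def phase_may_close(phase, state):
    """The phase closes only if entry held, validation was done,
    and the exit criterion is met."""
    return (phase["entry"](state)
            and phase["validation"](state)
            and phase["exit"](state))

state = {"coding_complete": True, "results_reviewed": True, "failed_cases": 0}
print(phase_may_close(unit_test_phase, state))  # True
```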
ETVX is a modeling technique for developing worldly and atomic level models. It stands for Entry, Task, Verification and Exit. It is a task-based model where the details of each task are explicitly defined in a specification table against each phase, i.e. Entry, Exit, Task, Feedback In, Feedback Out, and measures.
There are two types of cells: unit cells and implementation cells. The implementation cells are basically unit cells containing further tasks.
For example, if there is a task of size estimation, then there will be a unit cell of size estimation. This task in turn has further tasks, namely define measures and estimate size.
The unit cell containing these further tasks will be referred to as the implementation cell
and a separate table will be constructed for it.
A purpose is also stated and the viewer of the model may also be defined e.g. top
management or customer.
Unit Test Plan:-
The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it, and it is distributed to the individual testers. It contains the following sections.
1 What is to be tested?
The unit test plan must clearly specify the scope of unit testing. In this, normally the basic
input/output of the units along with their basic functionality will be tested. In this case
mostly the input units will be tested for the format, alignment, accuracy and the totals.
The UTP will clearly give the rules of what data types are present in the system, their
format and their boundary conditions. This list may not be exhaustive; but it is better to
have a complete list of these details.
2 Sequence of Testing
The sequence of test activities to be carried out in this phase is to be listed in this section. This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc. Positive test cases prove that the system does what it is supposed to do; negative test cases prove that the system does not do what it is not supposed to do. Testing of the screens, files, database, etc. is to be given in proper sequence.
3 Basic Functionality of Units
This section describes how the independent functionality of each unit is tested, excluding any communication between the unit and other units. The interface part is out of the scope of this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing.
Unit Testing Tools
Priority of Program units
Naming convention for test cases
Status reporting mechanism
Regression test approach
ETVX criteria
2.1 What is to be tested?
This section clearly specifies which kinds of interfaces fall under the scope of testing: internal and external interfaces, with their requests and responses, are to be explained. This need not go deep into technical details, but the general approach of how the interfaces are triggered is explained.
2.2 Sequence of Integration
When there are multiple modules present in an application, the sequence in which they are to be integrated is specified in this section. In this, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need the data fed by unit A and unit X. In this case, units A and X have to be integrated first, and then, using that data, unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities will slowly build the product, unit by unit, and then integrate them.
2.3 List of Modules and Interface Functions
There may be any number of units in the application, but only the units that are going to communicate with each other are tested in this phase. If the units are designed in such a way that they are mutually independent, then the interfaces do not come into the picture. This is almost impossible in any system, as the units have to communicate with other units in order to get different types of functionality executed. In this section, we need to list the units, and for what purpose each talks to the others needs to be mentioned. This will not go into technical aspects; at a higher level, this has to be explained in plain English.
anything related to customer accounts can be grouped into one area, anything related to
inter-branch transactions may be grouped into one area etc. Same way for the product
being tested, these areas are to be mentioned here and the suggested sequences of testing
of these areas, based on the priorities are to be described.
3.3 Special Testing Methods
This covers different special tests like load/volume testing, stress testing, interoperability testing, etc. These tests are to be done based on the nature of the product, and it is not mandatory that every one of these special tests be performed for every product.
Apart from the above sections, the following sections are addressed, very specific to
system testing.
Test Case Design:Designing good test cases is a complex art. The complexity comes from three sources:
Test cases help us discover information. Different types of tests are
more effective for different classes of information.
Test cases can be good in a variety of ways. No test case will be
good in all of them.
People tend to create test cases according to certain testing styles,
such as domain testing or risk-based testing. Good domain tests are
different from good risk-based tests.
What's a test case?
A test case specifies the pretest state of the IUT (implementation under test) and its environment, the test inputs or conditions, and the expected result. The expected result specifies what the IUT should produce from the test inputs. This specification includes messages generated by the IUT, exceptions, returned values, and the resultant state of the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects that constitute the IUT and its environment.
Or
A test case is a description of what is to be tested, what data is to be given, and what actions are to be done to check the actual result against the expected result.
Or
The process of designing test cases, including executing them as thought experiments,
will often identify bugs before the software has even been built. It is not uncommon to
find more bugs when designing tests than when executing tests.
Let us now see how to design test cases in a generic manner:
1. Understand the requirements document.
2. Break the requirements into smaller requirements (if it improves your testability).
3. For each requirement, decide what technique you should use to derive the test cases. For example, if you are testing a Login page, you need to write test cases based on error guessing and also negative cases for handling failures.
4. Have a traceability matrix as follows:

Requirement No (in RD)    Requirement    Test Case No

What this traceability matrix provides you is the coverage of testing. Keep filling in the traceability matrix as you complete writing test cases for each requirement.
What's a scenario?
A scenario is a hypothetical story, used to help a person think through a complex problem
or system.
Characteristics of a good test case:
A TC should start with what you are testing.
A TC should be independent.
A TC should not contain If/Or statements.
A TC should be uniform.
Every TC designed should be traced back to at least one requirement.
A TC should have a high probability of finding errors.
Issues to consider during test case design:
Error Guessing:-
Error guessing is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: either when reading the functional documents or when you are testing and find an error that you have not documented.
Error guessing is based mostly upon experience, with some assistance from other
techniques such as boundary value analysis. Based on experience, the test designer
guesses the types of errors that could occur in a particular type of software and designs
test cases to uncover them. For example, if any type of resource is allocated dynamically,
a good place to look for errors is in the de-allocation of resources. Are all resources
correctly de-allocated, or are some lost as the software executes?
Error guessing by an experienced engineer is probably the single most effective method of designing tests that uncover bugs. A well-placed error guess can show a bug that could easily be missed by many of the other test case design techniques presented in this paper.
Conversely, in the wrong hands error guessing can be a waste of time. To make the maximum use of available experience and to add some structure to this test case design technique, it is a good idea to build a checklist of types of errors. This checklist can then be used to help guess where errors may occur within a unit. The checklist should be maintained with the benefit of experience gained in earlier unit tests, helping to improve the overall effectiveness of error guessing.
Boundary Value Analysis:-
Boundary Value Analysis (BVA) is a test data selection technique (a functional testing technique) where the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values, then it will work correctly for all values in between.
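A minimal sketch of the classic BVA picks for a bounded numeric input; the age range 18-60 is an invented example:

```python
def boundary_values(minimum, maximum):
    """Classic BVA picks: the boundaries, the values just inside and
    just outside them, and a typical mid-range value."""
    return sorted({
        minimum - 1,               # just outside the lower boundary
        minimum,                   # on the lower boundary
        minimum + 1,               # just inside the lower boundary
        (minimum + maximum) // 2,  # typical value
        maximum - 1,               # just inside the upper boundary
        maximum,                   # on the upper boundary
        maximum + 1,               # just outside the upper boundary
    })

# Example: a field that accepts ages from 18 to 60.
print(boundary_values(18, 60))  # [17, 18, 19, 39, 59, 60, 61]
```

The two "just outside" values (17 and 61 here) double as error values for negative test cases.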
Limitations of Boundary Value Analysis:-
BVA works best when the program is a function of several independent variables that represent bounded physical quantities.
Independent variables:
o NextDate test cases derived from BVA would be inadequate: focusing on the boundary would not place emphasis on February or leap years.
o Dependencies exist between NextDate's Day, Month and Year.
o Test cases are derived without consideration of the function.
Physical quantities:
o As an example of physical variables being tested, consider telephone numbers: what faults might be revealed by numbers such as 000-0000, 000-0001, 555-5555, 999-9998 and 999-9999?
Equivalence Partitioning:-
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
EP can be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
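Guideline 1 can be sketched as follows: for a hypothetical order-quantity field bounded to 1-99, one representative value is drawn from the valid class and from each of the two invalid classes (below and above the range):

```python
def classify_quantity(value):
    """System under test (hypothetical): order quantity must be 1-99."""
    if not isinstance(value, int):
        return "invalid"
    return "valid" if 1 <= value <= 99 else "invalid"

# One representative per equivalence class: for the range 1-99 we get
# one valid class and two invalid classes (below and above the range).
partitions = {
    "valid (1-99)":    50,
    "invalid (< 1)":    0,
    "invalid (> 99)": 100,
}
for label, representative in partitions.items():
    expected = "valid" if label.startswith("valid") else "invalid"
    assert classify_quantity(representative) == expected
print("all partitions behave as specified")
```

Any representative of a class is assumed to behave like every other member of that class, which is what lets three values stand in for the whole input domain.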
Comparison Testing:-
There are situations where independent versions of software must be developed for critical applications, even when only a single version will be used in the delivered computer-based system. It is these independent versions which form the basis of a black box testing technique called comparison testing or back-to-back testing.
Orthogonal Array Testing:-
The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions by deriving a suitably small set of test cases (from a large number of possibilities).
Pass/Fail - If the expected and actual results are the same, then the test is Pass; otherwise it is Fail.
The test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the system accepts valid inputs and then processes them correctly. Suitable techniques to design positive test cases are specification-derived tests, equivalence partitioning, and state-transition testing. Negative test cases are designed to prove that the system rejects invalid inputs and does not process them. Suitable techniques to design negative test cases are error guessing, boundary value analysis, internal boundary value testing, and state-transition testing. The test case details must be very clearly specified, so that a new person can go through the test cases step by step and is able to execute them. The test cases will be explained with specific examples in the following section.
For example, consider an online shopping application. At the user interface level the client requests the web server to display the product details by giving an Email id and Username. The web server processes the request and gives the response. For this application we will design the unit, integration and system test cases.
Test engineers can write test cases based on requirements or use cases; the use cases are described as below.
Use Case
Each use case focuses on describing how to achieve a goal or task. For most software
projects this means that multiple, perhaps dozens, of use cases are needed to embrace the
scope of the new system. The degree of formality of a particular software project and the
stage of the project will influence the level of detail required in each use case.
Use cases should not be confused with the features of the system under consideration. A use case may be related to one or more features, and a feature may be related to one or more use cases.
A use case defines the interactions between external actors and the system under
consideration to accomplish a goal. An actor is a role that a person or thing plays when
interacting with the system. The same person using the system may be represented as two
different actors because they are playing different roles. For example, "Joe" could be
playing the role of a Customer when using an Automated Teller Machine to Withdraw
Cash, or playing the role of a Bank Teller when using the system to Restock the Cash
Drawer.
Use cases treat the system as a black box, and the interactions with the system, including system responses, are perceived from outside the system. This is a deliberate policy, because it forces the author to focus on what the system must do, not how it is to be done, and avoids the trap of making assumptions about how this functionality will be accomplished.
Use cases may be described at the abstract level (business use case, sometimes called
essential use case), or at the system level (system use case). The difference between these
is the scope.
The business use case is described in technology free terminology which treats the
business process as a black box and describes the business process that is used by its
business actors (people or systems) to achieve their goals (e.g., manual payment
processing, expense report approval, manage corporate real estate.) The business use case
will describe a process that provides value to the business actor, and it describes what the
process does.
The system use cases are normally described at the sub process level (for example, create
voucher) and specify the data input and the expected data response. The system use case
will describe how the actor and the system interact. For this reason it is recommended
that a system use case specification begin with a verb (e.g., create voucher, select
payments, exclude payment, cancel voucher.)
A use case should:
Describe how the system shall be used by an actor to achieve a particular goal.
Have no implementation-specific language.
Be at the appropriate level of detail.
Not include detail regarding user interfaces and screens; this is done in user-interface design.
Sample Use Case Diagrams
A use case is a set of scenarios that describe an interaction between a user and a system. A use case diagram displays the relationship among actors and use cases. The two main components of a use case diagram are use cases and actors.
An actor represents a user or another system that will interact with the system you are
modeling. A use case is an external view of the system that represents some action the
user might perform in order to complete a task.
When to Use: Use Cases Diagrams
Use cases are used in almost every project. They are helpful in exposing requirements
and planning the project. During the initial stage of a project most use cases should be
defined, but as the project continues more might become visible.
How to Draw: Use Cases Diagrams
Use cases are a relatively easy UML diagram to draw, but this is a very simplified
example. This example is only meant as an introduction to the UML and use cases.
Start by listing a sequence of steps a user might take in order to complete an action. For
example, a user placing an order with a sales company might follow these steps:
1. Browse through the catalog and select items
2. Call a sales representative
3. Supply shipping information
4. Supply payment information
5. Receive a confirmation number from the salesperson
This example shows the customer as an actor because the customer is using the ordering
system. The diagram takes the simple steps listed above and shows them as actions the
customer might perform. The salesperson could also be included in this use case diagram
because the salesperson is also interacting with the ordering system.
From this simple diagram the requirements of the ordering system can easily be derived.
The system will need to be able to perform actions for all of the use cases listed. As the
project progresses other use cases might appear. The customer might have a need to add
an item to an order that has already been placed. This diagram can easily be expanded
until a complete description of the ordering system is derived capturing all of the
requirements that the system will need to perform.
Types of test cases:
Unit Test Cases (UTC):
These specify the test cases for testing individual units of software, and they may form
sections of the Detailed Design Specifications. They are very specific to a particular unit.
The basic functionality of the unit is to be understood based on the requirements and the
design documents. Generally, the design document provides a lot of information about the
functionality of a unit. The design document has to be referred to before UTCs are written,
because it specifies how the system must behave for given inputs.
For example, in the online shopping application, suppose the design document says that if
the user enters valid Email id and Username values, the system must display the product
details and insert the Email id and Username into a database table; if the user enters
invalid values, the system must display an appropriate error message and must not store
them in the database.
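The design rule in that example translates directly into unit test cases. The sketch below is purely illustrative: register_user, its validation rules, and the in-memory DB list are hypothetical stand-ins for the real unit and database table.

```python
import re

DB = []  # stand-in for the database table

def register_user(email, username):
    """Hypothetical unit: on valid input, show product details and store the
    row; on invalid input, show an error message and store nothing."""
    valid_email = re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None
    if valid_email and username.isalnum():
        DB.append((email, username))
        return "PRODUCT_DETAILS"
    return "ERROR: invalid Email id or Username"

# Unit test cases derived from the design rule above:
assert register_user("a@b.com", "ravi") == "PRODUCT_DETAILS"      # valid: shown...
assert ("a@b.com", "ravi") in DB                                  # ...and stored
assert register_user("not-an-email", "ravi").startswith("ERROR")  # invalid: error...
assert len(DB) == 1                                               # ...and not stored
```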
Integration Test Cases:
Before designing the integration test cases, testers should go through the integration
test plan; it gives a complete idea of how to write integration test cases. The main aim
of integration test cases is to test multiple modules together. By executing these
test cases, the user can find errors in the interfaces between the modules.
For example, in online shopping there will be a Catalog module and an Administration
module. In the catalog section the customer can browse the list of products and buy
products online. In the administration module the admin can enter the product name and
related information.
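The interface between two such modules can be exercised with a few lines. Everything below is a made-up stand-in for the real Catalog and Administration modules; the point is only that data entered through one module must be visible through the other.

```python
# Shared product store used by both hypothetical modules.
products = {}

def admin_add_product(name, info):
    """Administration module: enter a product and its information."""
    products[name] = info

def catalog_list():
    """Catalog module: the list of products a customer can browse."""
    return sorted(products)

# Integration test case: the interface between the two modules works when a
# product entered through Administration appears in the Catalog.
admin_add_product("wireless mouse", {"price": 9.99})
assert catalog_list() == ["wireless mouse"]
```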
System Test Cases:
The system test cases are meant to test the system as per the requirements, end to end.
This is basically to make sure that the application works as per the SRS. In system test
cases (generally in system testing itself), the testers are supposed to act as end users. So
system test cases normally concentrate on the functionality of the system: inputs are fed
through the system and each and every check is performed using the system itself.
Normally, verifications done by checking database tables directly or running programs
manually are not encouraged in the system test.
The system test must focus on functional groups rather than identifying program units.
When it comes to system testing, it is assumed that the interfaces between the modules
are working fine (integration has passed).
Ideally, the system test cases are a union of the functionalities tested in unit testing and
integration testing; instead of testing the system's inputs and outputs through the
database or external programs, everything is tested through the system itself. For
example, in an online shopping application, the catalog and administration screens
(program units) would have been independently unit tested and the test results verified
through the database. In system testing, the tester mimics an end user and hence checks
the application through its output.
There are occasions where some or many of the integration and unit test cases are
repeated in system testing, especially when the units were earlier tested with test stubs
rather than with real modules; during system testing those cases are performed again
with real modules and data.
Once the test plan for a level of testing has been written, the next stage of test design is to
specify a set of test cases or test paths for each item to be tested at that level. A
number of test cases will be identified for each item to be tested at each level of
testing. Each test case will specify how the implementation of a particular
requirement or design decision is to be tested, and the criteria for success of the test.
Review Test Cases:
After preparation of test cases, the testing team reviews them for completeness and
correctness.
During the review meeting they cover the following factors through coverage analysis:
SRS-based coverage
BRS-based coverage
Note: In most companies, all three types of test cases are prepared together.
[Review checklist table with columns: Activity | Yes | No | Comments.]
[Test process flow diagram: information gathering produces the BRS, from which the SRS
is derived; the SRS leads to HLD+LLD and then coding, while in parallel the testing team
studies the BRS/SRS and the PM/TL prepares the test plan. Coding is followed by unit
testing and integration testing, then test execution and closure (TL); otherwise the cycle
repeats. Pre-acceptance testing, acceptance testing, and sign-off & release complete the
flow.]
Sanity/Smoke Testing:
This is the first testing technique applied by the testing team. After getting the initial
build from the development team, the testing team verifies whether all screens open,
whether all objects respond, and so on. This is called sanity testing.
The main objective of sanity testing is to ensure that the build is suitable for conducting
the next level of testing.
After conducting sanity testing, the team reviews whether the build is acceptable or not;
if they are satisfied with the build they accept it, otherwise they reject it back to the
development team.
Note: Sanity testing is repeated a number of times until the testing team gets a stable,
suitable build.
Preparing Test Batches or Test Suites:
After completion of sanity testing, i.e., after getting a stable build from the development
team, the testing team identifies all dependencies between test cases, groups them into
test batches, and then executes them as test batches.
Test Environment Preparation:
After preparation of test batches, the testing team prepares the necessary environment
for conducting testing.
Software performance data is usually generated during system testing, once the software
has been integrated and functional testing is complete.
[Metrics table with columns: Metric | Use of Metric | Description. Recoverable entries:
3. Number of Tests (extent of testing); 4. Paths Tested; 5. Acceptance Criteria Tested;
6. Test Cost; 7. and 8. Achieving Budget; 9. Detected Production Errors (effectiveness of
testing); 14. Requirements Phase Testing (effectiveness of testing). The remaining rows
measure the effectiveness and overall assessment of testing.]
Defect Management:
Defects determine the effectiveness of the testing we do. If there are no defects, it
directly implies that we don't have a job. There are two points worth considering here:
either the developer is so strong that no defects arise, or the test engineer is weak. In
many situations the second proves correct, which implies that we lack the knack. In this
section, let us understand defects.
What is a Defect?
For a test engineer, a defect is any of the following:
Any deviation from specification
Anything that causes user dissatisfaction
Incorrect output
Software that does not do what it is intended to do
Bug / Defect / Error:
For a test engineer, all of these terms mean the same thing; the distinction between them
is only for the purpose of documentation or indication.
Categories of Defects:
All software defects can be broadly categorized into the below-mentioned types:
Errors of commission: something wrong is done
[Defect life-cycle diagram: Submit Defect → Assign → Fix/Change → Review, Verify and
Qualify → Validate → Close; a defect may instead be marked Duplicate or Rejected,
returned for More Info, Updated, or Cancelled. A companion diagram shows Report
Defect → Acknowledge Defect.]
Report Defect:
Once found, defects must be brought to the attention of the developers. When the defect
is found by a technique specially designed to find defects, such as those mentioned
above, this is a relatively straightforward process, almost as simple as writing a
problem report. Techniques that facilitate the reporting of the defect may significantly
shorten the defect discovery time. As software becomes more complex and more widely
used, these techniques become more valuable. They include computer forums,
electronic mail, help desks, etc.
It should also be noted that there are some human and cultural factors involved in the
defect discovery process. When a defect is initially uncovered, it may be very unclear
whether it is a defect, a change, a user error, or a misunderstanding. Developers may resist
calling something a defect because that implies bad work and may not reflect well on
the development team. Users may resist calling something a change because that
implies that the developers can charge them more money. Some organizations have
skirted this issue by initially labeling everything by a different name, e.g., incidents or
issues. From a defect management perspective, what they are called is not important.
What is important is that the defect be quickly brought to the developers' attention
and formally controlled.
Defect Naming:
It is important that defects be named early in the defect management process. This
enables individuals to better articulate the problem they are encountering, replacing
vague vocabulary such as defect, bug, and problem with a more specific statement of
what the defect is.
A three-level framework for naming defects is recommended as follows:
Level 2: Developmental phase
Level 3: The category of defect. The following defect categories are suggested
for each phase:
1. Missing
2. Inaccurate
3. Incomplete
4. Inconsistent
[Defect resolution flow: Schedule Fix → Fix Defect → Report Resolution.]
Defect ID number
Descriptive defect name and type
Source of defect (test case or other source)
Defect severity
Defect priority
Defect status (e.g., open, fixed, closed, user error, design, and so on); more
robust tools provide a status history for the defect
Date and time tracking for either the most recent status change, or for each
change in the status history
Detailed description, including the steps necessary to reproduce the defect
Component or program where the defect was found
Screen prints, logs, etc., that will aid the developer in the resolution process
Stage of origination
Person assigned to research and/or correct the defect
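As a sketch, these fields can be gathered into a small record type. The class below is illustrative only, not the schema of any real defect-tracking tool.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Defect:
    """Minimal defect record carrying the tracking fields listed above."""
    defect_id: int
    name: str                    # descriptive defect name and type
    source: str                  # test case or other source
    severity: str
    priority: str
    status: str = "open"
    history: list = field(default_factory=list)  # (timestamp, old status) pairs
    description: str = ""        # steps necessary to reproduce the defect
    component: str = ""          # component or program where the defect was found
    stage_of_origin: str = ""
    assigned_to: str = ""

    def set_status(self, new_status):
        """Record the old status with a timestamp, then move to the new one."""
        self.history.append((datetime.now(), self.status))
        self.status = new_status

d = Defect(101, "Login button unresponsive", "TC-17", "High", "P1")
d.set_status("fixed")
assert d.status == "fixed" and d.history[0][1] == "open"
```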
A fault is a condition that causes the software to fail to perform its required function. An
error refers to the difference between the actual output and the expected output. A
failure is the inability of a system or component to perform a required function according
to its specification.
Testing limitations?
Alpha Testing:A software prototype stage when the software is first available for run. Here the software
has the core functionalities in it but complete functionality is not aimed at. It would be
able to accept inputs and give outputs. Usually the most used functionalities (parts of
code) are developed more. The test is conducted at the developers site only.
Make up user experiences or user stories, which are short descriptions of the
features to be coded.
Acceptance tests verify the completion of user stories.
Ideally, they are written before coding.
With all these features and processes included, we can define a practice for Agile testing
encompassing the following:
Conversational Test Creation
Coaching Tests
Providing Test Interfaces
Exploratory Learning
Conversational Test Creation
Client/Server Testing: Tests to examine the network communication and the interplay
between software that resides on the client and on the server. Checks are run on the
client, on the server, and on both:
Application Function tests.
Server tests
Database tests
Transaction tests
N/W communication tests
Website Testing: Testing which goes beyond the basic functional and system testing of
the client/server world to include tests for availability, performance/load, scalability,
usability, compatibility and links.
Optimize testing through risk analysis of the site to identify and prioritize
key areas and testing tasks.
Consider interactions between HTML pages, TCP/IP communications, internet
connections, firewalls, and applications that run on the server side.
E-application Testing:
Testing ensures the reliability, accuracy and performance of web-based
applications, including web services.
Simulate a live e-application environment if required.
Conduct tests across heterogeneous environments and across all the application
tiers.
Alpha and Beta Testing:
Alpha Testing: Testing an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by the
end-user or others at our place.
Beta Testing: Testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by the end-user
or others at the client's place.
Product Testing
Checks for all requirements across all stages of product development.
Encompasses additional tests like Compatibility, User Acceptance, Maintainability,
Installation, Serviceability and Usability.
Comparison / Back-to-Back Testing: There are some situations in which the reliability
of the software is absolutely critical, e.g. aircraft avionics, automobile braking systems,
or nuclear power plant control. In such applications, redundant hardware and software
are often used to minimize the possibility of errors.
Test Engineer vs. Quality Assurance Engineer:
Test Engineer: Has a test-to-break attitude and the ability to take the point of view of
the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy
are useful in maintaining a cooperative relationship with developers, and the ability to
communicate with both technical and non-technical people is useful.
QA Engineer: Should have the same qualities as a good test engineer. Additionally, they
must be able to understand the entire software development process and how it fits into
the business approach and goals of an organization. Communication skills and the
ability to understand various sides of issues are important. In organizations in the early
stages of implementing QA processes, patience and diplomacy are especially needed.
The ability to find problems, as well as to see what is missing, is important for
inspections and reviews.
make your ideas for your site a reality. Special features are performed in your browser,
and live events are broadcast around the world through the web.
The possibility of real time publishing in many cases sets the pace for web site
development. With the growing complexity and demands for rapid deployment the web
site development tends to lack testing efforts even when the need for it, in fact, increases.
Classification of web sites
When publishing a web site the construction and design, of course, is based upon what
you hope to achieve with the site. Depending on this, the site may be classified as a
certain type of site. There are a number of different types of sites published on the web.
These sites have been categorized by a number of authors. We have chosen two different
classifications that we believe clearly show the different angles from which to view the
sites.
The first classification is based on the different business purposes of a commercial web
site. It divides the purposes of commercial web sites into three categories: promotion,
provision, and processing.
Promotion is information about products and services that are part of the company's
business, whereas provision is information about, for instance, the environmental care
program the company may sponsor. Processing refers to regular business transactions.
Although this classification is meant to show the purposes of one commercial web site,
we believe it can also be used to categorize the main purpose of a web site. For instance,
a company's on-line catalogue would be a promotional site, a private person's homepage
may be considered a provisional site and, of course, a web site for banking services may
be considered a site for processing.
Another classification is based on the degree of interactivity the web site offers.
Static Web Sites
The most basic kind of web site: a presentation of HTML documents. The only
interactivity offered is the choice of page by clicking links.
This classification is derived from the need of methodology during the development of
web sites. The classification is useful also for the testing process, not only for the need of
methodology but also for how extensive the testing must be. For instance, for a static web
site the demands may be, besides that the information is correct and up-to-date, that the
source code is correct and that the load capacity of the server is great enough, i.e. the
server can handle a large enough number of visitors at the same time. There is no need to
go much deeper in this case. For the other extreme, Web-Based Software Application, the
requirements are much greater, where, for instance, security is of great importance.
These two classifications are two major ways of showing distinctions between web sites.
Together they provide information about interactivity and purpose, which gives us an
idea of the site's complexity.
Web applications
The title of this paper, Web Application Testing, creates a need to define what we mean
by web applications. Are we talking only about high-complexity e-commerce web
sites?
Above we introduced two different authors' classifications of web sites. We find it
interesting that Powell (1998) in his definitions does not use the word application until a
higher degree of interactivity is offered; instead he uses the word site for the first,
simpler, categories. Regardless of whether this is intended or not, we choose to define a
web application as any web-based site or application available on the internet or on an
intranet, whether it be a static promotion site or a highly interactive site for banking
services.
User Issues
When we, ordinary web surfers, use the Internet, what is it that we experience as
problems? Which sites make us leave and move on to another? What characteristics shall
a site have in order to make users want to stay? It is hard, if not impossible, to give an
answer of general character to these questions. What makes it so difficult is the diversity
of users. Since visitors to a site may come from all corners of the world, they differ
greatly in how they experience a site as satisfying. But regardless of which culture they
are from or what kind of site that is visited, some things are never appreciated. For
example, when a page takes too long to load, many users get impatient and move on to
another site or page; the same goes if a site is too difficult to navigate. Overall, users tend
not to tolerate certain problems when out surfing the web. If we have trouble
understanding the layout, or if it takes too much effort to find the information we are
seeking, the site is experienced as complex and we will start looking elsewhere for what
we seek. Many sites
today present animations or other graphical effects, which many users experience as
positive. But if you are a visitor searching for specific information, you seldom
appreciate waiting time in order to obtain what you seek. Today though, there is almost
always an option to skip the feature, which is positive.
Another problem that always irritates on the web is broken links. We don't think there is
anyone with some web-browsing experience who hasn't encountered this. It is an
ever-recurring error that will continue to haunt the web for as long as pages are moved
or taken off the Internet. These relatively small errors shouldn't be too difficult to
remove, and there is therefore no excuse for having broken links on a site for more than
a short period of time.
Below is a presentation of the main areas to test when developing and publishing a web
site. It is a checklist that presents the most important features to test under each area and
how to perform them.
Functionality testing
1. Links
Links are perhaps the main feature of web sites. They constitute the means of
transport between pages and guide the user to certain addresses without the user
knowing the actual address itself. Linkage testing is divided into three sub-areas.
First, check that the link takes you to the page it said it would. Second, check that the
link isn't broken, i.e. that the page you're linking to exists. Third, ensure that you
have no orphan pages on your site. An orphan page is a page that has no links to it,
and may therefore only be reached if you know the correct URL. Remember that
to reduce redundant testing, there is no need to test a link to a specific page more
than once if it appears on several pages; it needs only to be tested once.
This kind of test can preferably be automated, and several tools provide solutions
for this.
Link testing should be done during integration testing, when connections between
pages exist.
Summary:
Verify that you end up at the designated page
Verify that the link isn't broken
Locate orphan pages if present
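These link checks lend themselves to automation. Below is a minimal standard-library sketch that collects the anchors on each page and flags orphan pages; the site contents are made up for illustration.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_orphans(site, entry="index.html"):
    """site maps page name -> HTML source. A page is an orphan when no page
    links to it (the entry page is reachable by definition)."""
    linked = set()
    for html in site.values():
        collector = LinkCollector()
        collector.feed(html)
        linked.update(collector.links)
    return {page for page in site if page not in linked and page != entry}

site = {
    "index.html":   '<a href="catalog.html">Catalog</a>',
    "catalog.html": '<a href="index.html">Home</a>',
    "old.html":     "<p>No page links here.</p>",   # orphan
}
assert find_orphans(site) == {"old.html"}
```

The same collected set of links can feed the broken-link check: every collected target should exist (locally, or respond with a non-error status when fetched over HTTP).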
2. Forms
Forms are used to submit information from the user to the host, where it in turn gets
processed and acted upon in some way. The integrity of the submitting operation should
be tested in order to verify that the information reaches the server in the correct form. If
default values are used, verify the correctness of the values. If the forms are designed to
accept only certain values, this should also be tested; for example, if only certain
characters should be accepted, try to override this when testing. These controls can be
done on the client side as well as the server side, depending on how the application is
designed, for example using scripting languages such as JScript, JavaScript or VBScript.
Check that invalid inputs are detected and handled.
Summary:
Information hits the server in correct form
Acceptance of invalid input
Handling of wrong input (both client and server side)
Optional versus mandatory fields
Input longer than field allows
Radio buttons
Default values
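A minimal server-side sketch of the checks in this summary; the field names, the default quantity, and the limits are invented for illustration.

```python
import re

def validate_order_form(data):
    """Returns a dict of field -> error message; empty means the input passed."""
    errors = {}
    email = data.get("email", "")
    if not email:                                    # mandatory field
        errors["email"] = "mandatory field missing"
    elif not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors["email"] = "invalid format"
    qty = data.get("quantity", "1")                  # default value
    if not (qty.isdigit() and 1 <= int(qty) <= 99):  # only certain values accepted
        errors["quantity"] = "must be a whole number between 1 and 99"
    if len(data.get("comment", "")) > 200:           # input longer than field allows
        errors["comment"] = "longer than the field allows"
    return errors

assert validate_order_form({"email": "a@b.com"}) == {}            # default qty valid
assert "email" in validate_order_form({"quantity": "2"})          # mandatory missing
assert "quantity" in validate_order_form({"email": "a@b.com", "quantity": "0"})
```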
3. Cookies
Cookies are often used to store information about the user and his actions on a
particular site. When a user accesses a site that uses cookies, the web server sends
information about the user and stores it on the client computer in the form of a
cookie. Cookies can be used to create more dynamic and custom-made pages, or to
store, for example, login info. If you have designed your site to use cookies,
they need to be checked. Verify that the information that is to be retrieved is there.
If login information is stored in cookies, check that it is correctly encrypted. If
your application requires cookies, how does it respond to users that have disabled
them? Does it still function, or will the user be notified of the situation? How are
temporary cookies handled? What happens when cookies expire? Depending on what
cookies are used for, one should examine the possibilities for other solutions.
Summary:
Encryption of e.g. login info
Users denying or accepting
Temporary and expired cookies
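Cookie attributes such as path and expiry can be checked straight from the Set-Cookie header. A small sketch using Python's standard cookie parser; the header value itself is made up.

```python
from http.cookies import SimpleCookie

# A hypothetical Set-Cookie header as a server might send it.
header = "session=abc123; Path=/; Max-Age=3600; Secure"

jar = SimpleCookie()
jar.load(header)

morsel = jar["session"]
assert morsel.value == "abc123"        # the information to be retrieved is there
assert morsel["path"] == "/"
assert int(morsel["max-age"]) == 3600  # a temporary cookie: expires after one hour
assert morsel["secure"]                # flagged for encrypted connections only
```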
4. Web Indexing
There are a number of different techniques and algorithms used by different
search engines to search the Internet. Depending on how the site is designed using
Meta tags, frames, HTML syntax, dynamically created pages, passwords or
different languages, your site will be searchable in different ways.
Summary:
Meta tags
Frames
HTML syntax
Passwords
Dynamically created pages
5. Programming Language
Differences in web programming language versions or specifications can cause
serious problems on both the client and server side. For example, which HTML
specification will be used (3.2 or 4.0)? How strictly? When HTML is
generated dynamically, it is important to know how it is generated.
When development is done in a distributed environment where developers, for
instance, are geographically separated, this area becomes increasingly important.
Make sure that specifications are well spread throughout the development
organization to avoid future problems.
Summary:
Intuitive navigation
Main features accessible from main page
Site map or other navigational help
Consistent conventions (navigation bars, menus, links etc.)
2. Graphics
The graphics of a web site include images, animations, borders, colours, movie
clips, fonts, backgrounds, buttons etc. Issues to check are:
Make sure that the graphics serve a definite purpose and that images or
animations don't just clutter up the visual design and waste bandwidth
Verify that fonts are consistent in style
Use suitable background colours combined with font and foreground colours;
remember that a computer display presents contrasts exceptionally well
compared to printed paper
Three-dimensional effects on buttons often give useful cues
When displaying large amounts of images, consider using thumbnails; check
that the original picture appears when a thumbnail is clicked
Size and quality of pictures; usage of compressed formats (JPG or GIF)
Mouse-over effects
3. Content
Content testing is done to verify the correctness, accuracy and relevancy of
information presented on the site, or in a database, in the form of text, images or
animations.
Correctness is whether the information is truthful or contains misinformation. For
example, wrong prices in a price list may cause financial problems or even raise
legal issues.
The accuracy of the information is whether it is free of grammatical or spelling
errors. These kinds of verifications are often done in, e.g., Word or other word
processors.
Remove irrelevant information from your site. This may otherwise cause
misunderstandings or confusion. Content testing should be done as early as
possible, i.e. when the information is posted.
Summary:
Correctness
Accuracy
Relevancy
4. General Appearance
Does the site feel right when using it? Do you intuitively know where to look for
information? Is the design consistent throughout the site? Make sure that the
design and the aim go hand in hand. Too much design can easily turn a conservative
corporate site into a publicity stunt. Important to all kinds of usability tests is to
involve external personnel who have little or no connection to the development of
the site. It's easy to grow fond of one's own solution, so having actual users
evaluate the site may be critical.
Summary:
Intuitive design
Consistent design
If using frames, make sure that the main area is large enough
Consider size of pages. Several screens on the same page or links between
them
Do features on the site need help systems, or will they be intuitive?
Server Side Interface
1. Server Interface
Due to the complex architecture of web systems, interface and compatibility
issues may occur in several areas. The core components are web servers,
application servers and database servers (and possibly mail servers). Web servers
normally host HTML pages and other web services. Application servers typically
contain objects such as programs, scripts, DLLs or third-party products that
provide and extend functionality and effects for the web application. Test the
communication between the different servers by making transactions and viewing
log files to verify the result. Depending on the configuration of the server side,
compatibility issues may occur with, for example, server hardware,
server software or network connections. Database compatibility issues may occur
with different database types (SQL, Oracle, Sybase etc.).
Issues to test:
Verify that communication is done correctly, web server-application
server, application server-database server and vice versa.
Compatibility of server software, hardware, network connections
Database compatibility (SQL, Oracle, Sybase etc.)
2. External Interface
Several web pages have external interfaces, such as merchants verifying credit
card numbers to allow transactions to be made or a site like http://www.pris.nu/
that compares prices and delivery times of different merchants on the web. Verify
that data is sent and retrieved in the correct form.
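Verification of card numbers at such an interface usually begins with the Luhn check digit. Below is a sketch of that check; the sample numbers are well-known test numbers, not real cards.

```python
def luhn_valid(number: str) -> bool:
    """Luhn check: double every second digit from the right, subtract 9 from
    any result over 9, and require the total to be divisible by 10."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return len(digits) > 0 and total % 10 == 0

assert luhn_valid("4111 1111 1111 1111")       # standard test number
assert not luhn_valid("4111 1111 1111 1112")   # one digit off fails the check
```

A passing Luhn check only means the number is well formed; the actual authorisation still has to be verified against the external interface itself.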
Client Side compatibility
1. Platform
There are several different operating systems in use on the market today, and
depending on the configuration of the user's system, compatibility issues may
occur. Applications may work fine under certain operating systems but fail under
others. The following are the most commonly used:
Windows (95, 98, 2000, NT)
Unix (different sets)
Macintosh
Linux
2. Browsers
The browser is the most central component on the client side of the web.
Browsers come in different brands and versions and have different support for
Java, JavaScript, ActiveX, plugins or different HTML specifications. ActiveX, for
example, is a Microsoft product and therefore designed for Internet Explorer,
while JavaScript is produced by Netscape and Java by Sun. This substantiates the
fact that compatibility problems commonly occur. Frames and Cascading style
sheets may display differently on different browsers, or not at all. Different
browsers also have different settings for e.g. security or Java support.
A good way to test browser compatibility is to create a compatibility matrix where
different brands and versions of browsers are tested to a certain number of
components and settings, for example Applets, scripting, ActiveX controls or
cookies.
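Such a matrix is easy to keep in code. The sketch below records pass/fail results per browser, version, and feature; all entries are invented example data.

```python
# Columns of the hypothetical compatibility matrix.
features = ["Applets", "Scripting", "ActiveX", "Cookies"]

# Rows: (browser, version) -> feature -> passed?
results = {
    ("Internet Explorer", "5.X"):
        {"Applets": True, "Scripting": True, "ActiveX": True, "Cookies": True},
    ("Netscape Navigator", "4.X"):
        {"Applets": True, "Scripting": True, "ActiveX": False, "Cookies": True},
}

def failures(results):
    """Every (browser, version, feature) combination that failed the test."""
    return [(browser, version, f)
            for (browser, version), row in results.items()
            for f in features if not row.get(f, False)]

assert failures(results) == [("Netscape Navigator", "4.X", "ActiveX")]
```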
Summary:
Internet Explorer (3.X, 4.X, 5.X)
Netscape Navigator (3.X, 4.X, 6.X)
AOL
Browser settings (security settings, graphics, Java etc.)
Frames and Cascade Style sheets
Applets, ActiveX controls, DHTML, client side scripting
HTML specifications
Graphics
3. Settings, Preferences
3. Stress
Stress testing is done in order to actually break a site or a certain feature and
determine how the system reacts. Stress tests are designed to push and test system
limitations and determine whether the system recovers gracefully from crashes.
Hackers often stress systems by providing loads of bad data until the system crashes,
and then gain access to it during start-up. Typical areas to test are forms, logins or
other information transaction components.
Summary:
Performance of memory, CPU, file handling etc.
Error in software, hardware, memory errors (leakage, overwrite or
pointers)
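A tiny harness in that spirit: hammer one component with concurrent calls, half of them malformed, and confirm that it fails cleanly rather than crashing. submit_form is a hypothetical stand-in for the component under stress.

```python
from concurrent.futures import ThreadPoolExecutor

def submit_form(data):
    """Stand-in for the component under stress; it must reject bad input with
    a clean error rather than crash."""
    if not isinstance(data, dict) or "user" not in data:
        raise ValueError("bad input")
    return "ok"

def stress(n=200, workers=20):
    """Fire n concurrent submissions, half of them malformed; count outcomes."""
    payloads = [{"user": "u"} if i % 2 else "garbage" for i in range(n)]
    good = bad = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for fut in [pool.submit(submit_form, p) for p in payloads]:
            try:
                fut.result()
                good += 1
            except ValueError:
                bad += 1   # rejected gracefully: the desired failure mode
    return good, bad

assert stress(200) == (100, 100)   # every call either succeeded or failed cleanly
```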
4. Continuous use
Is the application, or certain features of it, going to be used only during certain
periods of time, or will it be used continuously 24 hours a day, 7 days a week? Test
that the application is able to perform under those conditions. Will downtime be
allowed, or is that out of the question? Verify that the application is able to meet the
requirements and does not run out of memory or disk space.
Security
Security is an area of immense extent and would need extensive writing to be fairly
covered. We will do no more than point out the most central elements to test. First,
make sure that you have a correct directory setup. You don't want users to be able to
browse through directories on your server.
Logins are very common on today's web sites, and they must be error free. Make sure
to test both valid and invalid login names and passwords. Are they case sensitive? Is
there a limit to how many tries are allowed? Can the login be bypassed by typing the
URL of an internal page directly into the browser?
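The login checks above can be captured as small automated tests. The sketch below uses a toy in-memory `authenticate` function (the account store, lockout limit, and function name are all hypothetical stand-ins for the real login component) to exercise valid and invalid credentials, case sensitivity, and a limit on the number of tries:

```python
# Toy login component for illustration; a real test would drive the site's login form.
ACCOUNTS = {"alice": "S3cret!"}
MAX_TRIES = 3
failed_tries = {}

def authenticate(user, password):
    """Return True on success; refuse logins after MAX_TRIES consecutive failures."""
    if failed_tries.get(user, 0) >= MAX_TRIES:
        return False                        # account locked
    if ACCOUNTS.get(user) == password:      # exact, case-sensitive comparison
        failed_tries[user] = 0
        return True
    failed_tries[user] = failed_tries.get(user, 0) + 1
    return False

assert authenticate("alice", "S3cret!")         # valid login
assert not authenticate("alice", "s3cret!")     # wrong case must be rejected
assert not authenticate("bob", "anything")      # unknown user
for _ in range(MAX_TRIES):
    authenticate("alice", "wrong")
assert not authenticate("alice", "S3cret!")     # locked out after too many tries
```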
Is there a time-out limit within your site? What happens when it is exceeded? Are
users still able to navigate through the site?
Log files are very important for maintaining security at the site. Verify that
relevant information is written to the log files and that the information is traceable.
When Secure Sockets Layer (SSL) is used, verify that the encryption is done correctly
and check the integrity of the information.
Server-side scripts often constitute security holes and are often exploited by hackers.
Test that it isn't possible to plant or edit scripts on the server without authorisation.
Summary:
Directory setup
Logins
Time-out
Logfiles
SSL
Scripting Languages
Database Testing
There are several reasons why you need to develop a comprehensive testing strategy for
an RDBMS. Here are a few interesting questions to ask someone who isn't convinced that
you need to test the DB:
If you're implementing code in the DB in the form of stored procedures, triggers, and
so on, shouldn't you test that code to the same level that you test your application code?
Think of all the data quality problems you've run into over the years. Wouldn't it have
been nice if someone had originally tested and discovered those problems before you
did?
Wouldn't it be nice to have a test suite to run so that you could determine how (and if) the
DB actually works?
What Should We Test?
Figure 1 indicates what you should consider testing when it comes to relational
databases. The diagram is drawn from the point of view of a single database; the dashed
lines indicate threat boundaries, showing that you need to consider threats both within
the database (clear-box testing) and at the interface to the database (black-box testing).
Table 1 lists the issues you should consider testing for, both internally within the
database and at the interface to it.
Interface tests (black box):
O/R mappings (including the meta data)
Incoming data values
Outgoing data values (from queries, stored functions, views ...)

Internal database tests (clear box):
Scaffolding code (e.g. triggers or updateable views) which support refactorings
Typical unit tests for your stored procedures, functions, and triggers
Existence tests for database schema elements (tables, procedures, ...)
View definitions
Referential integrity (RI) rules
Default values for a column
Data invariants for a single column
Data invariants involving several columns
How to Test
Although you want to keep your database testing efforts as simple as possible, at first you
will discover that you have a fair bit of both learning and setup to do. In this section we
discuss the need for various database sandboxes in which people will test: in short, if you
want to do database testing then you're going to need test databases (sandboxes) to work
in. We then overview how to write a database test and more importantly describe setup
strategies for database tests. Finally, we overview several database testing tools which
you may want to consider.
Database Sandboxes
A sandbox is basically a technical environment whose scope is well defined and
respected. In each sandbox you'll have a copy of the database. In the development
sandbox you'll experiment, implement new functionality, refactor existing
functionality, and validate your changes through testing; once you're happy with
your work, you'll eventually promote it to the project integration sandbox.
Writing Database Tests
There's no magic when it comes to writing a database test; you write one just like you
would any other type of test. Database tests are typically a three-step process:
Set up the test. You need to put your database into a known state before running tests
against it. There are several strategies for doing so.
Run the test. Using a database regression testing tool, run your database tests just like
you would run your application tests.
Check the results. You'll need to be able to do "table dumps" to obtain the current
values in the database so that you can compare them against the results which you
expected.
The article What To Test in an RDBMS goes into greater detail.
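The three steps can be sketched with Python's built-in sqlite3 module; the table and data below are invented for illustration, and the article does not prescribe any particular tool:

```python
import sqlite3

# Step 1 - Set up: put the database into a known state before the test.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")
db.executemany("INSERT INTO customer VALUES (?, ?, ?)",
               [(1, "Ann", 100.0), (2, "Bob", 0.0)])

# Step 2 - Run: exercise the behaviour under test (this UPDATE stands in
# for a stored procedure or application call).
db.execute("UPDATE customer SET balance = balance + 50 WHERE id = 1")

# Step 3 - Check: "table dump" the current values and compare them against
# the results that we expected.
actual = db.execute("SELECT id, name, balance FROM customer ORDER BY id").fetchall()
expected = [(1, "Ann", 150.0), (2, "Bob", 0.0)]
assert actual == expected, f"unexpected table contents: {actual}"
print("database test passed")
```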
To successfully test your database you must first know the exact state of the database, and the
best way to do that is to simply put the database in a known state before running your test
suite. There are two common strategies for doing this:
Fresh start. A common practice is to rebuild the database, including both creation of the
schema and loading of initial test data, for every major test run (e.g. testing that you
do in your project integration or pre-production test sandboxes).
Data reinitialization. For testing in developer sandboxes, something that you should do
every time you rebuild the system, you may want to forgo dropping and rebuilding the
database in favor of simply reinitializing the source data. You can do this either by
erasing all existing data and then inserting the initial data values back into the database, or
you can simply run updates to reset the data values. The first approach is less risky and
may even be faster for large amounts of data.
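Both strategies can be sketched against a SQLite database; the schema and seed rows below are hypothetical:

```python
import sqlite3

SEED = [(1, "Ann"), (2, "Bob")]

def fresh_start(db):
    """Drop and rebuild the schema, then load the initial test data."""
    db.execute("DROP TABLE IF EXISTS customer")
    db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
    db.executemany("INSERT INTO customer VALUES (?, ?)", SEED)

def reinitialize(db):
    """Keep the schema; just erase existing rows and reload the seed data."""
    db.execute("DELETE FROM customer")
    db.executemany("INSERT INTO customer VALUES (?, ?)", SEED)

db = sqlite3.connect(":memory:")
fresh_start(db)                                              # before a major test run
db.execute("UPDATE customer SET name = 'Zoe' WHERE id = 1")  # a test dirties the data
reinitialize(db)                                             # cheap reset between tests
rows = db.execute("SELECT id, name FROM customer ORDER BY id").fetchall()
print(rows)
```

After `reinitialize`, the table holds exactly the seed rows again, regardless of what earlier tests did to it.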
An important part of writing database tests is the creation of test data. You have several
strategies for doing so:
Have source test data. You can maintain an external definition of the test data, perhaps
in flat files, XML files, or a secondary set of tables. This data would be loaded in from
the external source as needed.
Test data creation scripts. You develop and maintain scripts, perhaps using data
manipulation language (DML) SQL code or simply application source code (e.g. Java or
C#), which do the necessary deletions, insertions, and/or updates required to create the
test data.
Self-contained test cases. Each individual test case puts the database into a known state
required for the test.
These approaches to creating test data can be used alone or in combination. A significant
advantage of writing creation scripts and self-contained test cases is that it is much more
likely that the developers of that code will place it under configuration management
(CM) control. Although it is possible to put the test data itself under CM control (worst
case, you generate an export file that you check in), this isn't a common practice and
therefore may not occur as frequently as required. Choose an approach that reflects the
culture of your organization.
Where does test data come from? For unit testing, we should prefer to create sample data
with known values. This way we can predict the actual results for the tests that we write
and know that we have the appropriate data values for those tests. For other forms of
testing, particularly load/stress, system integration, and function testing, live data
should be used so as to better simulate real-world conditions.
Unit testing tools (tools which enable you to regression test your database):
DBFit
DBUnit
NDbUnit
OUnit for Oracle (being replaced soon by Qute)
SQLUnit
TSQLUnit (for testing T-SQL in MS SQL Server)
Visual Studio Team Edition for Database Professionals (includes testing capabilities)
XTUnit

Testing tools for load testing (tools which simulate high usage loads on your database,
enabling you to determine whether your system's architecture will stand up to your true
production needs):
Empirix
Mercury Interactive
RadView
Rational Suite Test Studio
Web Performance

Test data generators (developers need test data against which to validate their systems;
test data generators can be particularly useful when you need large amounts of data,
perhaps for stress and load testing):
Data Factory
Datatect
DTM Data Generator
Turbo Data
Who Should Test?
During development cycles, the primary people responsible for doing database testing are
application developers and agile DBAs. They will typically pair together, and because
they are hopefully taking a Test Driven Development approach to development the
implication is that they'll be doing database unit testing on a continuous basis. During the
release cycle your testers, if you have any, will be responsible for the final system testing
efforts and therefore they will also be doing database testing.
The role of your data management (DM) group, or IT management if your organization
has no DM group, should be to support your database testing efforts. They should
promote the concept that database testing is important, should help people get the
requisite training that they require, and should help obtain database testing tools for your
organization. As you have seen, database testing is something that is done continuously
by the people on development teams; it isn't something that is done by another group
(except of course for system testing efforts). In short, the DM group needs to support
database testing efforts and then get out of the way of the people who are actually doing
the work.
Introducing Database Regression Testing into Your Organization
Database testing is new to many people, and as a result you are likely to face several
challenges:
Insufficient testing skills.
Insufficient unit tests for existing databases.
Insufficient database testing tools.
Reticent DM groups.
Database Testing and Data Inspection
A common quality technique is to use data inspection tools to examine existing data
within a database. You might use something as simple as a SQL-based query tool such as
DB Inspect to select a subset of the data within a database to visually inspect the results.
For example, you may choose to view the unique values in a column to determine what
values are stored in it, or compare the row count of a table with the count of the resulting
rows from joining the table with another one. If the two counts are the same then you
don't have an RI problem across the join.
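Both of these inspections map to simple SQL. The sketch below (SQLite, with made-up tables and a deliberately bad foreign-key value) shows the distinct-values query and the row-count comparison across a join:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (code TEXT PRIMARY KEY)")
db.execute("CREATE TABLE orders (id INTEGER, status_code TEXT)")
db.executemany("INSERT INTO status VALUES (?)", [("OPEN",), ("CLOSED",)])
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, "OPEN"), (2, "CLOSED"), (3, "OPNE")])   # note the bad code

# Inspection 1: what values are actually stored in the column?
values = [r[0] for r in db.execute("SELECT DISTINCT status_code FROM orders")]

# Inspection 2: compare the table's row count with the joined row count;
# a mismatch means some rows have no matching parent, i.e. an RI problem.
n_orders = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
n_joined = db.execute("""SELECT COUNT(*) FROM orders o
                         JOIN status s ON s.code = o.status_code""").fetchone()[0]
print(f"distinct values: {sorted(values)}; orders={n_orders}, joined={n_joined}")
```

Here the counts differ (3 orders, 2 joined rows), flagging the misspelled status code that visual inspection might miss.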
As Richard Dallaway points out, the problem with data inspection is that it is often done
manually and on an irregular basis. When you make changes later, sometimes months or
years later, you need to redo your inspection efforts. This is costly, time consuming, and
error prone.
Data inspection is more of a debugging technique than it is a testing technique. It is
clearly an important technique, but it's not something that will greatly contribute to your
efforts to ensure data quality within your organization.
Best Practices
Use an in-memory database for regression testing. You can dramatically speed up
your database tests by running them, or at least portions of them, against an in-memory
database such as HSQLDB. The challenge with this approach is that, because database
methods are implemented differently across database vendors, any method tests will
still need to run against the actual database server.
Start fresh each major test run. To ensure a clean database, a common strategy is that
at the beginning of each test run you drop the database, then rebuild it from scratch taking
into account all database refactorings and transformations to that point, then reload the
test data, and then run your tests. Of course, you wouldn't do this to your production
database.
Take a continuous approach to regression testing. A TDD approach to development is an
incredibly effective way to work.
Train people in testing. Many developers and DBAs have not been trained in testing
skills, and they almost certainly haven't been trained in database testing skills. Invest in
your people, and give them the training and education they need to do their jobs.
Pair novices with people that have database testing experience. One of the
easiest ways to gain database testing skills is to pair program with someone who already
has them.
The key process areas at Level 3 address both project and organizational issues, as the
organization establishes an infrastructure that institutionalizes effective software
engineering and management processes across all projects. They are Organization Process
Focus, Organization Process Definition, Training Program, Integrated Software
Management, Software Product Engineering, Intergroup Coordination, and Peer
Reviews.
The key process areas at Level 4 focus on establishing a quantitative understanding of
both the software process and the software work products being built. They are
Quantitative Process Management and Software Quality Management.
The key process areas at Level 5 cover the issues that both the organization and the
projects must address to implement continual software process improvement. They are
Defect Prevention, Technology Change Management, and Process Change Management.
Each key process area is described in terms of the key practices that contribute to
satisfying its goals. The key practices describe the infrastructure and activities that
contribute most to the effective implementation and institutionalization of the key process
area.
Level 1 Initial
No KPAs
Level 2 Repeatable
a. Software Requirement Management
b. Software Project Planning
c. Software Project Tracking & Oversight
d. Software Subcontract Management
e. Software Quality Assurance
f. Software Configuration Management
Level 3 Defined
a. Organizational Process Focus
b. Organizational Process Definition
c. Training Program
d. Software Product Engineering
e. Integrated Software Management
f. Inter-Group Coordination
g. Peer Review
Level 4 Managed
a. Software Quality Management
b. Quantitative Process Management
Level 5 - Optimizing
a. Defect Prevention
b. Technology Change Management
c. Process Change Management
Introduction
Purpose
The purpose of this System Requirements Document (SRD) is to establish the functional requirements for
XYZ INC's new Electronic Proposal Management System (EPMS).
Intended Audience
First, the departments within XYZ INC who are stakeholders in the project will be able to read this
document, understand the functionality that it describes, and provide clarifications, corrections, or
modifications to more clearly define how the system can best be organized to meet their customers' needs.
Also, upon reading this document, the clients should have a complete understanding of the functionality of
the envisioned system.
Document Organization
The initial chapters of this System Requirements Document provide an overview of the document and the
scope of the project (Chapter 1), a high-level description of the functional and business objectives of the
project (Chapter 2), and the interfaces that are expected between the system to be developed under this
project and other systems (Chapter 3).
Project Scope
References
Revision History
Points of Contact
Risks
The risks and mitigation steps defined in the following matrix are the list of risks identified at the outset of
the project with the XYZ INC Project Manager. There is no reason to expect that any particular risk will
come to pass, but the list serves to ensure that risk analysis and planning are incorporated as a normal part
of the project lifecycle and that appropriate steps are taken to offset those risks where it is deemed
appropriate.
Overall Description of the System
Product Perspective
The XYZ INC Electronic Proposal Management System will provide new functionality to XYZ INC staff
involved in the development of new business proposals. This system is expected to be available to
Business Development staff, Operations staff, Estimation staff, Contracts staff, and the Proposal Team
staff in Arlington and Houston.
Product Functions
The product functions for the Electronic Proposal Management System fall under four main areas:
Roles and Groups
The users interact with the system based on their role in the Proposal lifecycle.
Author(s):
Ananth
Reviewed By:
James, Aron
Approved By:
Michales
Distribution:
Table of Contents
1. EXECUTIVE SUMMARY _____________________________________________________ 106
2. STATEMENT OF REQUIREMENTS _____________________________________________ 107
3.1 TEST OBJECTIVES _______________________________________________________ 107
3.2 TEST REQUIREMENTS ____________________________________________________ 107
3.3
3.4 ASSUMPTIONS __________________________________________________________ 107
3.5 SCOPE _________________________________________________________________ 107
4.1 FUNCTIONALITY ________________________________________________________ 109
4.2 SOFTWARE ARCHITECTURE _______________________________________________ 109
4.3 HARDWARE ARCHITECTURE ______________________________________________ 109
4.4 DATA __________________________________________________________________ 109
5. TESTING FRAMEWORK ____________________________________________________ 111
6.1
6.2
6.3 TESTING GUIDELINES ____________________________________________________ 111
7. TESTING STRATEGY ______________________________________________________ 113
8.1 TYPES OF TESTING ______________________________________________________ 113
8.2 USE-CASES _____________________________________________________________ 113
8.3 TEST CYCLES ___________________________________________________________ 113
9.1 ROLES _________________________________________________________________ 114
9.2 RESPONSIBILITIES _______________________________________________________ 114
9.3 RESOURCES ASSESSMENT ________________________________________________ 114
9.4
10. RISK ASSESSMENT _______________________________________________________ 115
10.1 PROJECT RISKS _________________________________________________________ 115
10.2 OTHER FACTORS _______________________________________________________ 115
A.
B.
C.
Executive Summary
The objective of the CMS project is "to deliver a state-of-the-art Medical Management system
with the objective of improving members' health, while controlling care costs. This system should
differentiate XYZ Inc from competing health plans to retain and grow market share" (state source).
The expectation of this exercise is to provide the following:
(1) To ensure that the test regimen applied to the ongoing activities is robust and complete with
respect to the types of testing, ensuring the delivery of a quality application.
(2) To reduce the time taken for execution, resulting in an earlier delivery date.
(3) To add visibility to all aspects of the testing process, allowing better control by management.
(4) To develop an acceptable Basis of Estimation (BOE) to aid this and similar efforts in the
future where time and resource estimations are required, aiding cost expectations.
This endeavour presents several risks, including:
Schedule risk: Timelines are aggressive given the number of new or changing systems
and the inherent complexity of such an activity.
Project risk: The likelihood that the project cannot be completed based on acceptance
criteria. Firstly, these criteria did not exist at the time this document was written; further,
Key Performance Indicators (KPIs) are currently undefined, allowing no evaluation against
acceptance criteria should such criteria actually exist.
Business risk: There is an identified window of opportunity that EPA intends to exploit.
Should project delivery extend past this, the ROI may not be met.
Statement of Requirements
Test Objectives
Primary
The primary objectives are:
To ensure that the functionality of the system will meet the expectations of the internal and external users
To reduce where possible the time estimated to conduct and complete testing
Secondary
The secondary objectives are:
To explore the use of automation tools to improve test coverage and reduce time to test.
To investigate and recommend measures that will result in more structured, reusable testing processes
across the department.
To indicate the level of effort required to conduct testing, now and in the future.
To present a staffing model necessary to conduct testing
Test Requirements
The requirements for this project are:
To determine whether the system will perform with acceptable response times at loads of up to 2000
concurrent users running a launch date scenario of typical customer transactions.
Definition: Target Pass Rate
Critical (essential functionality): 100%
Important: 90%
Desirable: 70%
Assumptions
Scope
In scope components and related feeds include:
EPA Applications as denoted on the EPA Care Management Data Architecture document (CareMgmtDataArch
(2005-11-03).xls)
SmartMail
CCMS-Fax Interface
Out of Scope
Execution of any proposed or designed tests as detailed by test planning; execution will be the
activity of subsequent efforts.
Systems or components specifically related to Significa, Erin Group, Nurse Coaching, and UM Full/Lite
systems
Database Type: Contents
MS Access: DM Access Database
MS Access: ODS/Data Repository
DB2: CMBS
DB2: Eligibility/Membership
DB2: Health AtoZ
MS SQL Server: DxCG RiskSmart
DB2/AIX: Provider
MS SQL Server: Wellness database
Capitated Lab
Teleform Assessments
HRAs
McKesson CCMS
WorkSoft and Compuware have been short-listed and have been scheduled to deliver Proofs of Concept
(POCs) in January of 2006. Mercury Int., the market leader in this automation tools space, has been
excluded on the basis of financial instability.
Testing Framework
The Testing Process
Test Management Infrastructure
Testing Guidelines
Environment Management
Test Management
Defect Management
Reviews
Reviews of documents and deliverables should be conducted at the point where control of quality is
required, as discussed in the Quality Control section. In general, this is at any point where you have a
planned task that requires a review, approval, or sign-off. Specific review points, established by the
Project Plan, could be as follows:
Progress Reporting
Requirements Management
Traceability and Test Coverage
Prioritization
Risk Assessment
1. Impact: the impact that the failure of this requirement would have on the business;
2. Probability: the probability that a failure might occur if the requirement is not covered by a test;
3. Complexity: allowing tests to concentrate on the most complex functionality;
4. Source of failure: identifying the areas of testing that are most likely to cause failures, and
concentrating upon the requirements and tests covering these areas.
Issues raised during testing are to be entered into an issue management system, either an
existing/incumbent one or an issue management system developed in its place.
The issues arising will be formally raised and recorded in the issues register, where they will be
reviewed, escalated, and monitored to completion.
An issue management system should be designed that allows the storage of issues, assigned to
owners whose responsibility it is to manage the resolution of their issues. This must be held under
change control and can be updated regularly, with meetings to discuss the progress of issue
resolution. The monitoring of issues can also provide a means of tracking project progress. To do
this, issues can be assigned priorities indicating whether they are (for example) Showstoppers, High,
Medium, or Low priority.
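A minimal issue register along these lines might look like the following sketch; the fields, priority names, and example issues are illustrative only:

```python
# Minimal sketch of an issue register with owners and priorities.
PRIORITY_ORDER = {"Showstopper": 0, "High": 1, "Medium": 2, "Low": 3}

issues = [
    {"id": 1, "title": "Login bypass via direct URL", "owner": "QA lead",
     "priority": "Showstopper", "status": "open"},
    {"id": 2, "title": "Typo on help page", "owner": "Dev",
     "priority": "Low", "status": "open"},
    {"id": 3, "title": "Slow report query", "owner": "DBA",
     "priority": "High", "status": "resolved"},
]

def open_issues_by_priority(register):
    """Issues still needing resolution, most urgent first: a simple progress view."""
    pending = [i for i in register if i["status"] == "open"]
    return sorted(pending, key=lambda i: PRIORITY_ORDER[i["priority"]])

for issue in open_issues_by_priority(issues):
    print(f"[{issue['priority']}] #{issue['id']} {issue['title']} -> {issue['owner']}")
```

Sorting open issues by priority gives the resolution meeting its agenda and makes the "tracking project progress" use of the register concrete.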
Testing Strategy
Types of Testing
Functional Testing
Unit Testing
Data Validation Testing
Data Referential Integrity Testing
Functionality Testing
Privacy Testing
Security Testing
Regression Testing
End-to-End Testing
Non Functional Testing
Use-Cases
Test Cycles
Cycle 0
Cycle 1
Cycle 2 etc.
Investigative Testing
Role: Name (Contact No.)
Project Owner/Sponsor: James
Project Manager: Brett
Database Administrator:
Data Specialist: Kate
Business Specialist: Nina, Laura, Kathy
Network Specialist:
Responsibilities
Resources Assessment
Facilities & Hardware Requirements
Risk Assessment
Project Risks
The following standard Project Risks have been identified as being applicable to this project.
Health A to Z has not performed equivalent or similar tests to those of interest to the client.
The proposed hardware configuration of the system has not been proven at the client site
before.
The application / system software configuration has not been proven at the client site.
Other systems will be passing information to or taking information from this system
Other systems rely upon this system for their functionality or performance.
Other Factors
Other risks and considerations that apply to this project are:
Existing feeds and sub routines will be changed to accommodate the new
systems and processes
Current development and testing is very siloed and communication is not efficient
Statement of Risk
Risk Mitigation Strategy
Review Stage
Complete Final Report
Delivery Dates
Proposed Project Schedule
Summary of Resource Requirements
Skills Matrix
Role: QA Manager
Skills:
Project Management
QA/Testing Framework (EWTS and SSTM, as well as QMS processes)
QA management tools from leading test automation vendors, including (in order of
preference) Mercury Interactive (TestDirector/Quality Center), Compuware
(TrackRecord), Rational ClearCase, RadView, etc.
CMMI & TMM
Quality and audit controls for various industries, preferably including Healthcare
and Financial sector regulations
Responsibilities:
Training:
V1.0
Approved By
Name: Ashok Reddy J
Role:
Date: 25/12/06

Sno, Date, Version No, Page No, Change Mode (A/M/D), Brief Description of Change
Contents
1. Introduction......................................................................................................................120
Objectives........................................................................................................................120
Test Strategy....................................................................................................................120
Scope................................................................................................................................120
Referential Material.......................................................................................................121
2. Test Items..........................................................................................................................121
Program Modules...........................................................................................................121
User Procedures..............................................................................................................121
3. Features To Be Tested........................................................................................121
4. Features Not To Be Tested................................................................................................121
5. Approach...........................................................................................................................121
Sanity Testing..................................................................................................................121
Interface Testing.............................................................................................................121
Functional Testing..........................................................................................................121
Regression Testing..........................................................................................................121
Integration Testing..........................................................................................................122
System Testing.................................................................................................................122
Automation Testing........................................................................................................122
6. Pass/Fail Criteria...............................................................................................................122
Suspension Criteria........................................................................................................122
Resumption Criteria.......................................................................................................122
Approval Criteria...........................................................................................................122
7. Testing Process.................................................................................................................122
Test Deliverables.............................................................................................................122
Testing Tasks...................................................................................................................122
Responsibilities...............................................................................................................123
Resources.........................................................................................................................123
Schedule..........................................................................................................................123
8. Environmental Requirements
Software
Tools
Publications
9. Risks and Contingencies
Schedule
Personnel
Requirements
10. Change Management Procedures
11. Plan Approvals
1. Introduction
The Software Test Plan (STP) is designed to prescribe the scope, approach, resources,
and schedule of all testing activities. The plan must identify the items to be tested,
the features not to be tested, the types of testing to be performed, the personnel
responsible for testing, the resources and schedule required to complete testing, and
the risks associated with the plan.
Objectives
The objectives of the Test Plan are:
a) To identify the components or modules to be tested.
b) To identify and determine the resources required to perform
the testing process.
c) To identify and estimate the task schedule for each level
of the testing process.
d) To define the test deliverables.
Test Strategy
A test strategy is a high-level management plan for the
testing effort. In the context of testing, a strategy can be
defined as a management method that establishes adequate
confidence in the software product being tested while ensuring
that the cost, effort, and timelines all remain within
acceptable limits.
The Test Strategy for SIMS is classified into following
sections of the Test Plan document.
Scope
Testing will be performed at several points in the life
cycle as the product is constructed. Testing is a highly
dependent activity; as a result, test planning is a continuing
activity performed throughout the system development life
cycle, and test plans must be developed for each level of
testing.
The scope of this Test Plan document is the testing process
for the entire SIMS.
Reference Material
a) FRS Documents and Use Case Documents
2. Test Items
Program Modules
This section outlines the testing to be performed by the
developer for each module being built.
User Procedures
This section describes the testing to be performed on all
user documentation to ensure that it is correct, complete, and
comprehensive.
3. Features To Be Tested
The features to be tested within SIMS are classified under the
following modules as:
a) Admin
Integration Testing
System Testing
Automation Testing
6. Pass/Fail Criteria
Suspension Criteria
a) When the AUT fails Build Acceptance Testing.
b) Whenever there is a Change Request.
c) Delay in publishing the input documents.
Resumption Criteria
a) Testing can be resumed after the patch is released for the
rejected build.
b) When specification documents are refined and base-lined
based on CR acceptance or rejection.
c) After the input documents are published.
Approval Criteria
When the status of every bug in the Defect Profile is
Closed and the Result column in the TCD is Pass. This ensures
that the proposed functionalities are satisfied in the system.
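The approval rule above can be expressed directly in code. The sketch below is purely illustrative: the data shapes (`status` and `result` fields) are assumptions for the example, not the format of any real defect-tracking tool.

```python
# Hypothetical sketch of the approval criterion: a build is approved only
# when every bug in the Defect Profile is Closed AND every result in the
# Test Case Document (TCD) is Pass.

def build_approved(defect_profile, tcd_results):
    """Return True when all defects are closed and all test cases passed."""
    all_bugs_closed = all(bug["status"] == "Closed" for bug in defect_profile)
    all_cases_passed = all(row["result"] == "Pass" for row in tcd_results)
    return all_bugs_closed and all_cases_passed

defects = [{"id": 1, "status": "Closed"}, {"id": 2, "status": "Closed"}]
tcd = [{"case": "TC-01", "result": "Pass"}, {"case": "TC-02", "result": "Pass"}]
print(build_approved(defects, tcd))  # True
```

A single reopened defect or failed test case is enough to withhold approval, which matches the all-or-nothing wording of the criterion.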
7. Testing Process
Test Deliverables
a) Defect Profile Documents.
b) Test Summary Reports.
c) Test Execution Reports.
Testing Tasks
e) Bug Reporting.
f) Ensuring bug-fixing process.
Responsibilities
Resources
Task Schedule from 27-Jan-2007 to 08-Feb-2007.
Sno  Task                                            Schedule    Start Date   End Date
                                                     (in Days)

Master Entities - Sub Category Profile
2    a) FRS Review and Review Report preparation     1.5 Days    13/02/2007   14/02/2007
3    a) RR with clarification release
     b) Review Meeting                               0.5 Day     14/02/2007   14/02/2007
4    a) Test Case Workshop
     b) Test Design                                  2.5 Days    15/02/2007   17/02/2007
     a) Lead Reviews
     b) Refinement and Baseline of TCD               0.5 Days    07/02/2007   17/02/2007

Product Profile
5    FRS Review and Review Report preparation        1 Day       19/02/2007   19/02/2007
     a) RR with clarification release
     b) Review Meeting                               0.5 Day     20/02/2007   20/02/2007
     b) TCD Preparation                              3 Days      20/02/2007   23/02/2007
     a) Peer Reviews
     b) Refinement of TCD based on Peer Reviews
     c) Lead Reviews
     d) Refinement and Baseline of TCD               1 Day       24/02/2007   24/02/2007
Appendix D
Testing Dictionary
Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria; it enables an end user to determine whether or not to
accept the system.
Affinity Diagram: A group process that takes large amounts of language data, such as a
list developed by brainstorming, and divides it into categories.
Alpha Testing: Testing of a software product or system conducted at the developer's site
by the end user.
Audit: An inspection/assessment activity that verifies compliance with plans, policies,
and procedures, and ensures that resources are conserved. Audit is a staff function; it
serves as the eyes and ears of management.
Automated Testing: That part of software testing that is assisted with software tool(s)
that does not require operator input, analysis, or evaluation.
Beta Testing: Testing conducted at one or more end user sites by the end user of a
delivered software product or system.
Interface Analysis: Checks the interfaces between program elements for consistency and
adherence to predefined rules or axioms.
Intrusive Testing: Testing that collects timing and processing information during
program execution that may change the behavior of the software from its behavior in a
real environment. Usually involves additional code embedded in the software being
tested or additional processes running concurrently with software being tested on the
same platform.
IV&V: Independent Verification and Validation is the verification and validation of a
software product by an organization that is both technically and managerially separate
from the organization responsible for developing the product.
Life Cycle: The period that starts when a software product is conceived and ends when
the product is no longer available for use. The software life cycle typically includes a
requirements phase, design phase, implementation (code) phase, test phase, installation
and checkout phase, operation and maintenance phase, and a retirement phase.
Manual Testing: That part of software testing that requires operator input, analysis, or
evaluation.
Mean: A value derived by adding several quantities and dividing the sum by the number of
these quantities.
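The definition above, directly in code:

```python
# The mean of several quantities is their sum divided by how many there are.
def mean(quantities):
    return sum(quantities) / len(quantities)

print(mean([2, 4, 6, 8]))  # 5.0
```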
Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained
by measuring.
Metric: A measure of the extent or degree to which a product possesses and exhibits a
certain quality, property, or attribute.
Mutation Testing: A method to determine test set thoroughness by measuring the extent
to which a test set can discriminate the program from slight variants of the program.
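A minimal sketch of the idea: the mutant below differs from the original by a single operator, and a thorough test set should "kill" the mutant by producing a different result for at least one test input. The function names are invented for this illustration.

```python
# Mutation testing in miniature: measure whether the test inputs can
# discriminate the program from a slight variant ("mutant") of itself.

def original(a, b):
    return a + b

def mutant(a, b):       # slight variant: '+' mutated to '-'
    return a - b

test_inputs = [(2, 3), (0, 0), (5, -1)]
killed = any(original(a, b) != mutant(a, b) for a, b in test_inputs)
print(killed)  # True: the test set discriminates original from mutant
```

Note that the input (0, 0) alone would not kill this mutant, since 0 + 0 and 0 - 0 agree; a test set that kills few mutants is, by this measure, not very thorough.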
Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing
that does not change the timing or processing characteristics of the software under test
from its behavior in a real environment. Usually involves additional hardware that
collects timing or processing information and processes that information on another
platform.
Software Tool: A computer program used to help develop, test, analyze, or maintain
another computer program or its documentation; e.g., automated design tools, compilers,
test tools, and maintenance tools.
Standards: The measure used to evaluate products and identify nonconformance. The
basis upon which adherence to policies is measured.
Standardize: Procedures are implemented to ensure that the output of a process is
maintained at a desired level.
Statement Coverage Testing: A test method satisfying coverage criteria that requires
each statement be executed at least once.
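A small sketch of the statement-coverage criterion, using an invented example function: a test set with only a negative input never executes the else branch, so a second, non-negative input is needed before every statement has run at least once.

```python
# Statement coverage: every statement must execute at least once.

def absolute(n):
    if n < 0:
        result = -n      # executed by absolute(-3)
    else:
        result = n       # executed only once a test supplies n >= 0
    return result

# These two inputs together execute every statement at least once.
print(absolute(-3), absolute(4))  # 3 4
```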
Statement of Requirements: The exhaustive list of requirements that define a product.
NOTE: The statement of requirements should document requirements proposed and
rejected (including the reason for the rejection) during the requirements determination
process.
Static Testing: Verification performed without executing the system's code. Also called
static analysis.
Statistical Process Control: The use of statistical techniques and tools to measure an
ongoing process for change or stability.
Structural Coverage: This requires that each pair of module invocations be executed at
least once.
Structural Testing: A testing method where the test data is derived solely from the
program structure.
Stub: A software component that usually minimally simulates the actions of called
components that have not yet been integrated during top-down testing.
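The following sketch shows a stub in use during top-down testing. All class and function names here are illustrative assumptions: the component under test calls a database layer that has not yet been integrated, so a stub minimally simulates it with canned data.

```python
# A stub stands in for a called component that is not yet integrated.

class DatabaseStub:
    """Minimally simulates the real, not-yet-built database component."""
    def fetch_orders(self):
        return [{"id": 1, "total": 10.0}, {"id": 2, "total": 5.5}]

def total_revenue(db):
    # Component under test: works against the stub now, the real DB later.
    return sum(order["total"] for order in db.fetch_orders())

print(total_revenue(DatabaseStub()))  # 15.5
```

Because `total_revenue` depends only on the `fetch_orders` interface, the stub can be swapped for the real component once it exists, without changing the code under test.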
Supplier: An individual or organization that supplies inputs needed to generate a product,
service, or information to an end user.
Syntax: 1) The relationship among characters or groups of characters independent of
their meanings or the manner of their interpretation and use; 2) the structure of
expressions in a language; 3) the rules governing the structure of the language.