Testing Manual
Process: Software Testing
1
Confidential
WI-080 Testing Manual
Content
1 Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions and Acronyms
1.3.1 Terms and Definitions
1.3.2 Acronyms
2 CHAPTER I: STATIC TESTING
2.1 Objective
2.2 What should be tested?
2.3 How to test it?
2.4 Testing Checklist
3 CHAPTER II: TESTING LEVELS
3.1 Unit Testing
3.1.1 Objective
3.1.2 What should be tested?
3.1.3 How to test it?
3.1.4 Tools
3.1.5 Testing Checklist
3.2 Integration Testing
3.2.1 Objective
3.2.2 What should be tested?
3.2.3 How to test it?
3.2.4 Tools
3.3 System Testing
3.3.1 Functional Testing
3.3.1.1 Smoke and Sanity Tests
3.3.1.1.1 Objective
3.3.1.1.2 What should be tested?
3.3.1.1.3 How to test it?
3.3.1.1.4 Tools
3.3.1.2 Regression Tests
3.3.1.2.1 Objective
3.3.1.2.2 What should be tested?
3.3.1.2.3 How to test it?
3.3.1.2.4 Tools
3.3.1.3 Functional Tests
3.3.2.2.4 Tools
3.3.2.2.5 Testing checklist
3.3.2.3 Volume/Load
3.3.2.3.1 Objective
3.3.2.3.2 What should be tested?
3.3.2.3.3 How will you test it?
3.3.2.3.4 Tools
3.3.2.4 Stress
3.3.2.4.1 Objective
3.3.2.4.2 What should be tested?
3.3.2.4.3 How will you test it?
3.3.2.4.4 Tools
3.3.2.4.5 Testing checklist
3.3.2.5 Usability
3.3.2.5.1 Objective
3.3.2.5.2 What should be tested?
3.3.2.5.3 How will you test it?
3.3.2.5.4 Tools
3.3.2.6 Security
3.3.2.6.1 Objective
3.3.2.6.2 What should be tested?
3.3.2.6.3 How will you test it?
3.3.2.6.4 Tools
3.3.2.6.5 Testing checklist
3.3.2.7 Internationalization and localization
3.3.2.7.1 Objective
3.3.2.7.2 What should be tested?
3.3.2.7.3 How will you test it?
3.3.2.7.4 Tools
3.3.2.7.5 Testing checklist
3.3.2.8 Accessibility
3.3.2.8.1 Objective
3.3.2.8.2 What should be tested?
3.3.2.8.3 How will you test it?
3.3.2.8.4 Tools
3.3.2.9 Compatibility
3.3.2.9.1 Objective
3.3.2.9.2 What should be tested?
3.3.2.9.3 How will you test it?
3.3.2.9.4 Tools
3.3.2.9.5 Testing Checklist
3.3.2.9.5.1 Network Connectivity
3.3.2.9.5.2 Platform
3.3.2.9.5.3 CPU Configurations
3.3.2.9.5.4 Hardware Configurations
3.3.2.9.5.5 Application Configuration and Interoperability
3.3.2.9.5.6 Configuration
3.3.2.9.5.7 Interoperability
3.4 User Acceptance Testing
3.4.1 Objective
3.4.2 What should be tested?
3.4.3 How to test it?
3.4.4 Tools
4 TESTING METHODS
4.1 BLACK-BOX TESTING
4.2 WHITE BOX TESTING
5 DATABASE TESTING
5.1.1 Objective
5.1.2 What should be tested?
5.1.3 How to test it?
5.1.4 Tools
6 LOGGING TESTING
1 Introduction
1.1 Purpose
Standards Mapping:
1.2 Scope
The scope of this work instruction is to present, in a structured form, all testing types performed at the different testing levels when testing a software solution.
Each chapter on a testing type or testing level presents the following information: objectives, what should be tested, how to test it, frequently used tools, and a testing checklist.
1.3 Definitions and Acronyms
In addition to the definitions and acronyms described below, the terms, definitions and acronyms described in QS-002_Definitions and Acronyms also apply.
1.3.1 Terms and Definitions
Test Case: a succession of necessary actions performed in order to verify that a certain relevant aspect of a software product, within its validation process, complies with certain predefined rules
Unit Test: a test, usually implemented in the language employed for developing the application, whose purpose is to test a code entity (unit) from a functional standpoint
Code Inspection: the procedure for verifying how the source code is written, with the purpose of checking compliance with the general code-writing regulations and with the writing method specific to each development language
Procedure: a set of actions to be carried out within the specific situations for implementing a process
Test Developer: the person responsible for developing the Test Cases, as a sequence of verification actions and checkpoints, in order to ensure the testing of a functionality of a software product
Automated Testing: a testing method employing a specialised tool, consisting of the automated execution of a set of actions and verifications; the execution of automated testing is also referred to as "playback"
1.3.2 Acronyms
Req: Requirement
PM: Project Manager
TM: Test Manager
QM: Quality Manager
Dev: Developer
SD: Software Developer
TD: Test Developer
QD: Quality Developer
SDLC: Software Development Life Cycle
UI: User Interface
GUI: Graphical User Interface
2 CHAPTER I: STATIC TESTING
2.1 Objective
By reviewing the above-mentioned items, we should be able to find and fix defects early in the software development life cycle (SDLC).
Almost anything and everything can be reviewed, for example: requirements, system and program specifications, code, and deliverables.
During the SDLC, various deliverables are created and each of them contributes to the delivered
solution. It is essential that all significant omissions are discovered early in the SDLC, so that the final
delivered product will function according to the client’s specifications.
The purpose of testing documents is to verify their integrity and correctness. This category includes the following deliverables of the process associated with a project:
These documents are usually tested during the training phase and the System Test Plan creation
phase as they are basically the input documents for these phases.
The documents associated with the final product can be found in the following list: the installation manual (if created by the testing department), user guides, and other associated manuals, such as support and training materials for the final user (if created by the testing department).
You are not done testing unless you have reviewed all documentation,
a) to ensure that it is correct, and
b) to help generate test cases.
There have been many cases of documentation depicting UI that does not exist in the actual application, and of UI in the application that is nowhere to be found in the documentation.
Other collateral can be useful to review as well, for example product support calls for the previous version of the application.
Source code reviews are a simple way to find those places where supposed-to-be-temporary message boxes and other functionality are about to be shipped to paying customers.
3 CHAPTER II: TESTING LEVELS
The testing levels described in this chapter are essential stages in the development life cycle. Their purpose is to ensure the quality of the product to be delivered:
• Unit Tests,
• Integration Tests,
• System Tests and
• (User) Acceptance Tests.
3.1 Unit Testing
Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually the class level, and the minimal unit tests include the constructors and destructors. During this phase the following tests might be included:
Unit tests must be performed by the project's development team, for all the builds it releases to testing.
3.1.1 Objective
The objective of unit testing is to isolate each part of the program and validate its correctness. Unit testing also helps find problems early in the project, facilitates changes to the solution, and simplifies the integration of the different parts of the program.
3.1.2 What should be tested?
Broadly speaking, you should test your custom business logic. You might choose to implement just
a few tests that only cover the code paths that you believe are most likely to contain a bug. Or, you might
choose to implement a large suite of unit tests that are incredibly thorough and test a wide variety of
scenarios. You should be sure to write unit tests that verify your code behaves as expected in "normal"
scenarios as well as in more "unexpected" scenarios, like boundary conditions or error conditions.
3.1.3 How to test it?
For unit tests, start by testing that the code does what it is designed to do. Typically, each unit test sends
a specific input to a method and verifies that the method returns the expected value, or takes the expected
action. Unit tests prove that the code you are testing does in fact do what you expect it to do.
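The pattern described above can be sketched as a small unit test suite. This is an illustrative example only: the `discount_rate` function and its thresholds are hypothetical, invented purely to give the tests normal, boundary, and error scenarios to exercise.

```python
# A minimal sketch of the unit-testing pattern described above. The
# discount_rate function and its thresholds are hypothetical, invented
# only to give the tests something to exercise.

def discount_rate(order_total):
    """Return the discount rate for an order total (hypothetical rule)."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total >= 1000:
        return 0.10
    if order_total >= 100:
        return 0.05
    return 0.0

def test_normal_scenario():
    # Send a specific input, verify the expected return value.
    assert discount_rate(500) == 0.05

def test_boundary_conditions():
    # Check at, just below, and just above each threshold.
    assert discount_rate(99) == 0.0
    assert discount_rate(100) == 0.05
    assert discount_rate(999) == 0.05
    assert discount_rate(1000) == 0.10

def test_error_condition():
    # "Unexpected" scenario: invalid input must be rejected, not mishandled.
    try:
        discount_rate(-1)
    except ValueError:
        return
    raise AssertionError("negative totals should be rejected")

test_normal_scenario()
test_boundary_conditions()
test_error_condition()
```

Note how the three tests cover the "normal" path, the boundaries between equivalence classes, and an error condition, which is exactly the spread of scenarios recommended above.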
3.1.4 Tools
Tool Description
3.1.5 Testing Checklist
Test Around Your Change: consider what your change might affect beyond its immediate intended target. Think about related functionality that might have similar issues. If fixing these surrounding problems is not relevant to your change, log bugs for them.
Use Code Coverage: code coverage can tell you what functionality has not yet been tested. Don't, however, just write a test case to hit the uncovered code; instead, let coverage help you determine which classes of testing and which test cases the uncovered code indicates you are missing.
Consider Testability: hopefully you have considered testability throughout your design and implementation process. If not, think about what someone else will have to do to test your code. What can you do, or what do you need to do, in order to allow proper, authorized verification? (Test-Driven Design)
Ways To Find Common Bugs:
• Reset to default values after testing other values (e.g., pairwise tests, boundary condition tests)
• Look for hard-coded data (e.g., "c:\temp" rather than using system APIs to retrieve the temporary folder); run the application from unusual locations; open documents from and save to unusual locations
• Run under different locales and language packs
• Run under different accessibility schemes (e.g., large fonts, high contrast)
• Save/Close/Reopen after any edit
• Undo, Redo after any edit
Test Boundary Conditions: Determine the boundary conditions and equivalency classes, and then test just below, at, in the middle of, and just above each condition. If multiple data types can be used, repeat this for each option (even if your change is to handle a specific type). For numbers, common boundaries include:
• smallest valid value
• at, just below, and just above the smallest possible value
• -1
• 0
• 1
• some
• many
• at, just below, and just above the largest possible value
• largest valid value
• invalid values
• different-but-similar datatypes (e.g., unsigned values where signed values are expected)
For objects, remember to test with null and invalid instances.
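The boundary list above can be turned into concrete test inputs mechanically. The following sketch does this for a numeric valid range; the range limits passed in are hypothetical examples.

```python
# A sketch that turns the boundary checklist above into concrete test
# inputs for a numeric valid range. The range limits are hypothetical.

def boundary_values(smallest, largest):
    """Return the classic boundary test points for a valid numeric range."""
    return [
        smallest - 1,               # just below the smallest valid value
        smallest,                   # smallest valid value
        smallest + 1,               # just above it
        -1, 0, 1,                   # common trouble spots for any numeric input
        (smallest + largest) // 2,  # "some": a value in the middle
        largest - 1,                # just below the largest valid value
        largest,                    # largest valid value
        largest + 1,                # just above it (invalid)
    ]

# Usage: feed every value to the code under test; in-range values must be
# accepted, out-of-range values rejected with a clear error.
inputs = boundary_values(10, 100)
```

In practice you would feed each generated value to the code under test and assert acceptance or rejection as appropriate.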
Other Helpful Techniques:
• Do a variety of smallish pairwise tests to mix-and-match parameters, boundary conditions, etc. One axis that often brings results is testing both before and after resetting to default values.
• Repeat the same action over and over, both doing exactly the same thing and changing things up.
• Verify that every last bit of functionality you have implemented is discussed in the specification and matches what the specification describes should happen. Then look past the specification and think about what is not happening and should be.
• "But a user would never do that!": To quote Jerry Weinberg: when a developer says, "a user would never do that," we say, "Okay, then it won't be a problem to any user if you write a little code to catch that circumstance and stop some user from doing it by accident, giving a clear
message of what happened and why it can't be done." If it doesn't make sense to do it, no user
will ever complain about being stopped.
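The "write a little code to catch that circumstance" advice can be sketched as a simple guard clause that stops the accidental action with a message saying what happened and why. The operation and its limits here are hypothetical.

```python
# A sketch of the guard-clause advice above: stop the accidental action
# with a message that says what happened and why it can't be done.
# The operation (font size) and its limits are hypothetical.

def set_font_size(size):
    """Apply a font size, rejecting values no user should ever need."""
    if not 1 <= size <= 1638:
        # First identify the problem, then present the solution.
        raise ValueError(
            f"Font size {size} is not supported: "
            "choose a value between 1 and 1638."
        )
    return {"font_size": size}
```

The message follows the alert guidance given later in this manual: it identifies the problem first, then tells the user what they can do about it.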
3.2 Integration Testing
Integration testing is any type of software testing that seeks to verify the interfaces between
components against a software design. Software components may be integrated in an iterative way or all
together ("big bang"). Normally the former is considered a better practice since it allows interface issues to
be located more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated
components (modules). Progressively larger groups of tested software components corresponding to
elements of the architectural design are integrated and tested until the software works as a system.
Integration in the small is bringing together individual components (modules/units) that have already
been tested in isolation. We are trying to find faults that couldn’t be found at an individual component
testing level. Integration testing in the small makes sure that the things that are communicated are
correct from both sides, not just from one side of the interface.
Integration in the large: This stage of testing usually occurs between 'System' and 'Acceptance'
testing and tests the inputs and outputs between the system and other systems.
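Integration in the small can be sketched as a round-trip test: two components that were already unit-tested in isolation are exercised together, checking the interface from both sides. The serializer/parser pair below is a hypothetical example.

```python
# A sketch of integration in the small: two already-tested components are
# exercised together, checking the interface from both sides. The
# serializer/parser pair is hypothetical.

import json

def serialize_order(order_id, items):
    """Producer side of the interface: encode an order for transmission."""
    return json.dumps({"id": order_id, "items": items})

def parse_order(payload):
    """Consumer side of the interface: decode a received order."""
    data = json.loads(payload)
    return data["id"], data["items"]

def test_interface_round_trip():
    # The interface is correct only if what one side writes,
    # the other side reads back unchanged.
    order_id, items = parse_order(serialize_order(42, ["widget"]))
    assert order_id == 42
    assert items == ["widget"]

test_interface_round_trip()
```

A fault at this level (say, the producer writing "orderId" while the consumer reads "id") would never be caught by testing either component in isolation.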
3.2.1 Objective
Integration in the small: The objective is to test that the ‘set’ of components function together correctly
by concentrating on the interfaces between the components.
Integration in the large: The objective is to test that the ‘set’ of systems/modules function together
correctly.
• Include what is expected from a Web service with respect to business requirements
• Gather and understand the requirements and the data transfer standards
• Design test cases keeping business requirements in mind; the more data scenarios you have, the better the quality of the deliverable
• It is difficult to test complete end-to-end business flows with all the possible data scenarios. The trick is to have an automated tool which can shorten the testing of web services, such as SoapUI, WebInject or Optimyz
What should be tested?
Functionality: A key to testing Web services is ensuring their functional quality, because when you string together a set of services, you introduce many more opportunities for error or failure. You can take into consideration:
o Specification Review (SR)
o Test Case Development (TCD)
o Test Execution: examination of the requests and responses of the web services
Performance: Testing web service performance may be complicated. A key point is to know the performance requirements as accurately as possible. For example:
o A good requirement: "This service has been identified as serving 50,000 concurrent users with a 10-second average response time."
o A bad requirement: "This service should serve > 4,000 concurrent users, and the response should be fast."
Security: Web services are wide open on a network. This opens up a host of vulnerabilities, such as penetration, Denial-of-Service (DoS) attacks, great volumes of spam data, etc. Distinctive security policies have to be imposed at the network level to create a sound Service-Oriented Architecture (SOA). Certain security policies are enforced during data transfer, and user tokens or certificates are common sights where data is protected with a password. Precise test cases aimed at exercising these policies need to be designed to completely test the Web service security:
o Authentication: the process of assuring that the request actually originated from an authorized source. In addition to authenticating the source, the service provider may need to prove the message origin to other consumers
o Authorization: provides assurance that only authorized requesters are allowed to access the service. This goes hand in hand with authentication to ensure that malicious parties cannot mimic a valid client
o Penetration: a penetration test simulates an attack by a malicious party. This testing attempts to find and exploit vulnerabilities to determine what information and access can be gained. It is designed to mimic the actions of an attacker exploiting weaknesses in network security without the usual risks
o Protocol/encryption standards testing: provides assurance that the service transactions are encrypted using the defined encryption techniques. Secure encryption standards should prevent attempts to decrypt traffic, known as encryption attacks
Web services testing poses many challenges; it is important for testers to know what they need to do up front, rather than diving in first and learning costly lessons later.
3.2.3 How to test it?
One way you can test web services is by calling web methods from unit tests. It is much like
testing other code by using unit tests, using Assert statements. The same range of results is produced.
There are two ways to test web services with unit tests:
The web service runs on an active web server. Testing a web service that runs on a local or remote web server, such as IIS, has no special requirements: simply add a web reference and call the web methods of the web service from your development solution.
The web service is not hosted in an active web server. You can test a web service that runs on your local computer and not in a web server such as IIS. Just use an attribute provided by the Team System testing tools to start the ASP.NET Development Server, which creates a temporary server that hosts the web service you are testing.
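The "call web methods from unit tests" idea can be sketched as follows. `FakeWeatherService` is a hypothetical stand-in for a generated web service proxy; with a real web reference, the call and the Assert-style checks look the same.

```python
# A sketch of "calling web methods from unit tests". FakeWeatherService is
# a hypothetical stand-in for a generated web service proxy class; the
# method name and data are invented for illustration.

class FakeWeatherService:
    """Stand-in for a web service proxy class."""
    def get_temperature(self, city):
        canned = {"Oslo": -2, "Cairo": 30}
        if city not in canned:
            raise LookupError(f"unknown city: {city}")
        return canned[city]

def test_get_temperature():
    service = FakeWeatherService()
    # Same pattern as any unit test: specific input, asserted output.
    assert service.get_temperature("Oslo") == -2
    try:
        service.get_temperature("Atlantis")
        raise AssertionError("expected a fault for an unknown city")
    except LookupError:
        pass

test_get_temperature()
```

The same range of results is produced as with ordinary unit tests: pass, fail, or an error from the service itself.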
Applications need to be tested considering the following aspects:
• End to end, from the requester's perspective
• At the unit level, during development
• At service level
• Interface validation
• To ensure functionality under boundary load conditions
3.2.4 Tools
Tools Description
3.3 System Testing
System testing concentrates on a completely integrated system to verify that it meets all its
requirements. System Tests will be performed for all builds received from development.
The following types of tests can be performed during System Test Phase:
• Functional
o Smoke and Sanity
o Regression
• Non-Functional
o Installation
o Performance
o Volume
o Stress
o Usability
o Security
o Internationalization and localization
o Accessibility
o Compatibility
3.3.1 Functional Testing
Functional testing ensures that the application was developed according to the requirements stated in the Requirements Specification.
3.3.1.1 Smoke and Sanity Tests
3.3.1.1.1 Objective
The objective of Smoke testing is to verify the "stability" of the system in order to proceed with
more rigorous testing.
The objective of Sanity testing is to verify the "rationality" of the system in order to proceed with
more rigorous testing.
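A smoke suite can be sketched as a handful of broad, shallow checks that decide whether a build is stable enough for more rigorous testing. The status dictionary below is a hypothetical stand-in for probing a real deployment.

```python
# A sketch of a smoke suite: broad, shallow checks that decide whether a
# build is stable enough for deeper testing. The status dictionary is a
# hypothetical stand-in for probing a real deployment.

def smoke_suite(status):
    """Return (accepted, failed_checks) for a freshly deployed build."""
    checks = {
        "application starts": status.get("started", False),
        "home page responds": status.get("home_status") == 200,
        "login works": status.get("login_ok", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return len(failed) == 0, failed

# Usage: reject the build (and skip deeper testing) if any check fails.
accepted, failed = smoke_suite({"started": True, "home_status": 200,
                                "login_ok": True})
```

The point of the design is speed: a failing smoke check rejects the build immediately, before any time is invested in rigorous testing.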
3.3.1.1.4 Tools
Tool Description
OpenScript: OpenScript is an updated scripting platform for creating automated, extensible test scripts in Java. Combining an intuitive graphical interface with the robust Java language, OpenScript serves needs ranging from novice testers to advanced QA automation experts.
Selenium: Selenium has the support of some of the largest browser vendors, who have taken (or are taking) steps to make Selenium a native part of their browser. It is also the core technology in countless other browser automation tools, APIs and frameworks.
3.3.1.2 Regression Tests
3.3.1.2.1 Objective
The objective of this test cycle is to ensure that new functionality does not cause problems with the existing software. This usually involves executing a set of repeatable tests to ensure that the new software produces the same set of results as the original tests.
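The regression cycle described above can be sketched as re-running recorded inputs and flagging any output that no longer matches the previously recorded result. The in-memory baselines here are illustrative; in practice they would be stored test artifacts.

```python
# A sketch of the regression cycle above: re-run recorded inputs and flag
# any output that no longer matches the recorded result. The baselines are
# in-memory here; in practice they would be stored files.

def run_regression(function, baselines):
    """Return (inputs, expected, actual) for every result that drifted."""
    regressions = []
    for inputs, expected in baselines.items():
        actual = function(*inputs)
        if actual != expected:
            regressions.append((inputs, expected, actual))
    return regressions

# Usage: baselines captured from the previous, known-good release.
baselines = {(2, 3): 5, (0, 0): 0, (-1, 1): 0}
assert run_regression(lambda a, b: a + b, baselines) == []
```

An empty result means the new build reproduces the old results; any entry in the list is a candidate regression to investigate.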
3.3.1.2.4 Tools
Tool Description
OpenScript: OpenScript is an updated scripting platform for creating automated, extensible test scripts in Java. Combining an intuitive graphical interface with the robust Java language, OpenScript serves needs ranging from novice testers to advanced QA automation experts.
3.3.1.3.1.1 Files
You are not done testing unless you have looked at each and every file that makes up your
application, for they are full of information which is often ignored.
• Verify that the version number of each file is correct.
• Verify that the assembly version number of each managed assembly is correct. Generally the assembly version number and the file version number should match. They are specified via different mechanisms, however, and must explicitly be kept in sync.
• Verify that the copyright information for each file is correct.
• Verify that each file is digitally signed, or not, as appropriate. Verify that its digital signature is correct.
• Verify that each file is installed to the correct location. (Also see the Setup Checklist.)
• Verify you know the dependencies of each file. Verify each dependency is either installed by your setup or guaranteed to be on the machine.
• Check what happens when each file, and each of its dependencies, is missing.
• Check each file for recognizable words and phrases. Determine whether each word or phrase you find is something you are comfortable with your customers seeing.
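Two of the checklist items above, every expected file is installed and every declared dependency is present, lend themselves to automation. In this sketch the manifest format and file names are hypothetical, and the set of present files would normally be built by scanning the install folder.

```python
# A sketch automating two items from the file checklist above: every
# expected file is installed, and so is every declared dependency. The
# manifest format and file names are hypothetical; "present" would
# normally be built by scanning the install folder.

def check_installed_files(manifest, present):
    """Return every manifest file or dependency missing from present."""
    missing = []
    for path, dependencies in manifest.items():
        for required in [path, *dependencies]:
            if required not in present and required not in missing:
                missing.append(required)
    return missing

# Usage: a complete install reports nothing missing.
manifest = {"app.exe": ["runtime.dll"], "readme.txt": []}
assert check_installed_files(manifest,
                             {"app.exe", "runtime.dll", "readme.txt"}) == []
```

Running the same check with a file deliberately removed exercises the "what happens when each file is missing" item as well.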
3.3.1.3.1.2 Filenames
You are not done testing yet unless you have tested the following test cases for filenames:
• Single-character filenames
• Short filenames
• Long filenames
• Extra-long filenames
• Filenames using text test cases
• Filenames containing reserved words
• Just the filename (file.ext)
• The complete path to the file (c:\My\Directory\Structure\file.ext)
• A relative path into a subfolder (Sub\Folder\file.ext)
• A relative path into the current folder (.\file.ext)
• A relative path into a parent folder (..\Parent\file.ext)
• A deeply nested path (Some\Very\Very\Very\Very\Very\Deeply\Nested\File\That\You\Will\Never\Find\Again\file.ext)
• UNC network paths (\\server\share\Parent\file.ext)
• Mapped drive network paths (Z:\Parent\file.ext)
Filenames are interesting and a common source of bugs. Microsoft Windows applications that
don't guard against reserved words set themselves up for a denial-of-service attack. Applications on any
operating system that allow arbitrary files to be opened, saved, or modified leave a gaping hole into
"secured" files. Some users stuff every document they've ever created into their user folder; other users
create a unique folder for each document. Certain characters are allowed in filenames that aren't allowed
elsewhere, and vice versa. Spending some focused time in this area will be well worth your while.
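The reserved-word risk above can be screened mechanically. A minimal sketch (the reserved-name list is illustrative, not exhaustive):

```python
# Windows reserved device names; using any of these as a base filename
# can break or hang naive applications (illustrative list).
RESERVED = {"CON", "PRN", "AUX", "NUL"} | \
           {f"COM{i}" for i in range(1, 10)} | \
           {f"LPT{i}" for i in range(1, 10)}

def is_reserved_filename(name):
    """True if the filename (with or without extension, with or without
    a leading path) collides with a Windows reserved device name,
    e.g. 'con.txt' or 'C:\\temp\\NUL'."""
    base = name.split("\\")[-1].split("/")[-1]  # strip any path
    stem = base.split(".")[0]                   # strip extension(s)
    return stem.upper() in RESERVED
```

Feeding every filename test case in the list above through a check like this is cheap insurance.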
3.3.1.3.1.5 Alerts
You are not done testing yet unless you have searched out every alert, warning, and error
message and dialog box your application can display and checked the following:
3.3.1.3.1.5.1 Content
Verify that you understand every condition that can cause the alert to display, and that you have test
cases for each condition (or have explicitly decided to *not* test specific conditions).
Verify that the alert is in fact needed. For example, if the user can easily undo the action, asking them
whether they really want to do it is not necessary.
Verify that the alert first identifies the problem and then presents the solution. Basically, treat your
customers like smart, knowledgeable people and help them understand what the problem is and what
they can do about it.
Verify that the alert text does not use an accusatory tone but rather is polite and helpful. Again, let
them know what happened, what the application is doing to remedy the situation, and what they can do
to prevent it from happening in the future.
Verify the alert text is correct and appropriate for the situation.
Verify the alert text is consistent in its wording and style, both internally and with every other alert.
Verify the alert text is as succinct as possible but no more succinct. Hint: If the alert text is longer than
three lines, it's probably too long.
Verify the alert text contains complete sentences which are properly capitalized and punctuated.
Verify the alert text does not use abbreviations or acronyms. (Discipline-specific acronyms may be OK,
if you are confident that all of your users will know what they mean.)
Verify the alert text uses the product's name, not pronouns such as "we" or "I".
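Some of these content rules can be linted automatically over your application's alert strings. A hedged sketch; the rule set and thresholds here are assumptions to adapt to your own guidelines:

```python
def lint_alert_text(text, max_lines=3):
    """Flag alert strings that break common guidelines: too long,
    missing terminal punctuation, or using first-person pronouns
    instead of the product's name. Returns a list of issues found."""
    issues = []
    if len(text.splitlines()) > max_lines:
        issues.append("longer than %d lines" % max_lines)
    if not text.rstrip().endswith((".", "!", "?")):
        issues.append("missing terminal punctuation")
    # Strip surrounding punctuation before comparing words.
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    if words & {"we", "i"}:
        issues.append("uses first-person pronoun")
    return issues
```

Run it over every alert string in your resource files; an empty result list means the string passed these particular checks.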
3.3.1.3.1.5.2 Functionality
Verify the alert's title bar contains the name of the product (e.g., "Acme Word Processor").
Verify each button works correctly.
Verify each button has a unique access key.
Verify the buttons are centered below the message text.
Verify any graphics on the alert are appropriate and correctly placed. For Microsoft Windows
applications, there are standard icons for Informational, Warning, and Critical alerts, and these icons
are typically displayed to the left of the alert text.
3.3.1.3.1.6 Accessibility
You are not done testing yet unless you have verified your application integrates with the
accessibility features of your operating system. Accessibility features are vital to customers who are blind,
deaf, or use assistive input devices, but they are also extremely useful to many other people as well. For
example, comprehensive large font support will be much appreciated by people with failing eyesight
and/or high DPI screens.
Some of the following terms and utilities are specific to Microsoft Windows; other operating
systems likely have something similar.
Verify that every control on every dialog and other user interface widget supports at least the following
Microsoft Active Accessibility (MSAA) properties:
Name - its identifier
Role - a description of what the widget does, e.g., is it invokable, does it take a value
State - a description of its current status
Value - a textual representation of its current value
KeyboardShortcut - the key combination that can be used to set focus to that control
DefaultAction - a description of what will happen if the user invokes the control; e.g., a checked
check box would have a Default Action of "Uncheck", and a button would have a Default Action of
"Press"
Verify that changing the value of each control updates its MSAA State and Value properties.
Run in high contrast mode, where rather than a full color palette you have only a very few colors. Is
your application still functional? Are all status flags and other UI widgets visible? Are your toolbars and
other UI still legible? Does any part of your UI not honor this mode?
Run in large font mode, where the system fonts are all extra large. Verify that your menus, dialogs, and
other widgets all respect this mode, and are still readable. Especially pay attention to text that is
truncated horizontally or vertically.
Run with Sound Sentry, which displays a message box, flashes the screen, or otherwise notifies the
user anytime an application plays a sound. Verify that any alert or other sound your application may
play activates Sound Sentry.
Run with sticky keys, which enables the user to press key chords in sequence rather than all at once.
The operating system will hide much of these details from your application, but if your app ever directly
inspects key state it may need to explicitly handle this state.
Run with mouse keys, which enables the user to control the mouse pointer and buttons via the numeric
keypad. Again, the operating system will hide much of these details from your application, but if your
app ever directly inspects mouse state it may need to explicitly handle this state.
Run with no mouse and verify that every last bit of your UI can be accessed and interacted with solely
through the keyboard. Any test case you can execute with a mouse should be executable in this mode
as well.
Run with a text reader on and your monitor turned off. Again, you should be able to execute each of
your test cases in this state.
Verify focus events are sent when each control loses and receives focus.
Verify the tabbing order for each dialog and other tab-navigable UI component is sensible.
Verify that any actionable color item (e.g., that red squiggly line Microsoft Word displays underneath
misspelled words) can have its color customized.
Verify that any object which flashes does so at the system cursor blink rate.
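If your test harness can query widget properties, the MSAA checklist above can be expressed as data. A simplified model; the `Widget` class and property names here are hypothetical stand-ins for whatever your accessibility API actually exposes:

```python
from dataclasses import dataclass

# Required accessibility properties, mirroring the MSAA list above.
REQUIRED = ("name", "role", "state", "value",
            "keyboard_shortcut", "default_action")

@dataclass
class Widget:
    """Hypothetical snapshot of a UI control's accessibility data."""
    name: str = ""
    role: str = ""
    state: str = ""
    value: str = ""
    keyboard_shortcut: str = ""
    default_action: str = ""

def missing_accessibility_props(widget):
    """Return the required properties a widget leaves empty."""
    return [p for p in REQUIRED if not getattr(widget, p)]
```

A harness would walk every control on every dialog, build such a snapshot, and fail the dialog if any control reports missing properties.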
How completely you support these various accessibility features is of course a business decision
your team must make. Drawing programs and other applications which incorporate graphics, for example,
may decide to require a mouse for the drawing bits. As is also the case with testability, however,
accessibility-specific features are often useful in other scenarios as well. (The ability to use the keyboard
to nudge objects in drawing programs tends to be popular with customers of all abilities, for example.)
Although this fit-and-finish stuff can seem like a waste of time, it matters. Although they likely
aren't conscious of it, these details affect people's evaluation of your product's quality just as much as how
often it crashes does. In fact, if the first impression a potential customer has is that your application is
unpolished, they will tend to view the rest of their experience through that lens as well.
German characters
Japanese characters
Hebrew characters
Arabic characters
Unicode characters from multiple character ranges
Control characters
Text handling can be loaded with errors. If your application is one hundred percent Unicode, count
yourself lucky. Even then, however, you may have to import to or export from non-Unicode encodings. If
your application handles ASCII text then you get the fun of testing across multiple code pages (try
switching code pages while entering text and see what happens!). And if your application uses double-
byte or multi-byte encodings, you may find yourself thinking about switching careers!
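A quick way to probe encoding hazards is round-trip testing. A sketch using Python's codec machinery; the sample strings mirror the character sets listed above:

```python
def roundtrips(text, encoding):
    """Check whether `text` survives an encode/decode round trip in
    the given encoding; lossy conversions are exactly the bugs to look
    for when importing or exporting non-Unicode files."""
    try:
        return text.encode(encoding).decode(encoding) == text
    except UnicodeError:
        # Characters the encoding cannot represent raise here.
        return False

samples = {
    "german": "Grüße, Straße",
    "japanese": "テスト",
    "hebrew": "שלום",
    "arabic": "مرحبا",
}
```

Running each sample against every encoding your application imports or exports quickly maps out which conversions are lossy.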
Simple undo/redo testing is easily done manually and will usually find bugs. These bugs are
typically simple programmer errors which are easy to fix. The really interesting bugs are usually found by
intermixing undos and redos. This can certainly be done manually, but this is one case where automated
test monkeys can add value.
You can decide to have one person test undo and redo across your entire application. From
experience, it works best to have each person test undo and redo for their areas.
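A minimal monkey for intermixed undos and redos might look like the following; the `Editor` model is a toy stand-in for your application's document object:

```python
import random

class Editor:
    """Minimal document model with undo/redo stacks (illustrative)."""
    def __init__(self):
        self.text, self._undo, self._redo = "", [], []

    def type(self, s):
        self._undo.append(self.text)
        self._redo.clear()  # a new edit invalidates the redo history
        self.text += s

    def undo(self):
        if self._undo:
            self._redo.append(self.text)
            self.text = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.text)
            self.text = self._redo.pop()

def monkey_test(steps=1000, seed=0):
    """Intermix random edits, undos, and redos; stack corruption
    surfaces as an exception or inconsistent state."""
    rng = random.Random(seed)
    ed = Editor()
    for _ in range(steps):
        rng.choice([lambda: ed.type(rng.choice("ab")),
                    ed.undo, ed.redo])()
    return ed.text
```

The fixed seed keeps failures reproducible, which matters far more for monkey testing than raw randomness does.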
3.3.1.3.1.14 Printing
You are not done testing yet unless you have checked how your application handles printing.
If you remember the days before your operating system abstracted away (most of) the
differences between printers, when each application had to know intimate details about every printer it
might be used with, you surely know how good you have it. That just gives you more time to worry about
the following issues.
Verify changing orientation works properly. Try doing this for a brand new document and for an in-
progress document. Also try doing this by launching your app's equivalent of a page setup dialog box
both directly (e.g., from a menu item) and from within the print dialog.
Verify printing to a local printer works properly.
Verify printing to a network printer works properly.
Verify printing to a file works properly. Every operating system I know of allows you to create a print file
for any printer you have installed.
Verify printing to a PCL printer works properly. PCL started out as the control language for Hewlett-
Packard printers but has since become somewhat of a standard.
Verify printing to a PostScript printer works properly. This printer control language was created by
Adobe and has also become somewhat of a standard. PostScript is semi-human readable, so you can
do some printer testing by inspecting the output file and thus avoid killing any trees.
Verify printing to a PDF file works properly. There are a number of free and low-cost PDF creators
available; also consider purchasing a copy of Adobe Acrobat in order to test the "official" way to create
PDFs.
Verify canceling an in-progress print job works properly.
Verify setting each print option your application supports has the proper effect; number of copies,
collation, and page numbering, for example.
Verify setting printer-specific options works properly. These settings should be orthogonal to your
application's print settings, but you never know. Although it may seem that some of this testing should
be taken care of by your operating system’s testers, I find that developers seem to always have some
little customization they make to these dialogs, and so even though it appears to be a standard dialog
something is different. These little tweaks can turn out to be bug farms, I think in part precisely
because the developer is thinking that it's such a small thing nothing can go wrong.
Even when we really do have a standard dialog box, we should give it a once-over, just as a
sanity check. The same applies to any printer-specific options. Everything *should* work correctly, but we
are a lot happier when we *know* it does!
In the general case, it's a risk assessment you and your feature team have to make. Bugs *could*
be anywhere; where do you think they most likely are? Hit those areas first, and then cover the next most
likely, and then the next most likely, and so on. Mix in some exploratory testing too, since bugs have a
penchant for cropping up in places you wouldn't think to look for them!
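For print-to-file runs, even a magic-byte check catches gross failures. A hedged sketch (real validation needs a proper PostScript or PDF parser):

```python
def classify_print_output(data):
    """Rough sanity check of a print-to-file result by its leading
    bytes: PostScript output starts with '%!PS' and PDF with '%PDF'.
    A hypothetical helper, not a substitute for real validation."""
    if data.startswith(b"%!PS"):
        return "postscript"
    if data.startswith(b"%PDF"):
        return "pdf"
    return "unknown"
```

Pair this with eyeballing the (semi-human-readable) PostScript text itself, as suggested above, before concluding a print path works.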
makes a change to the record you are editing?), but if you can open documents off a network share, or
you can open documents from a shared location on the local machine, someone else can do so as well
- potentially the very same document you are editing.
No file open, dirty file open, dirty-but-auto-saved file open, saved file open.
Full screen and other view modes.
Different application window sizes (document window sizes too, if your app has a multi-document
interface); especially: default launch size, minimized, maximized, not-maximized-but-sized-to-fill-the-
screen, and sized very small.
Invoke standby, hibernation, and other power-saving modes whilst an operation is in progress.
Resume your computer out of various sleep modes. Do in-progress operations continue where they
stopped? Or do they restart? Or do they hang?
Modified system settings. Set your mouse to move faster or slower. Change your keystroke repeat
duration. Mess with your system colors. Does your application pick up the new values when it starts?
Does it pick up values that change while it's running?
Object Linking and Embedding (OLE). Does embedding other OLE objects in your app's documents
work correctly? What about embedding your app's documents in other OLE-enabled applications? Do
embedded applications activate and deactivate correctly? Do linked OLE documents update when the
source of the link is modified? How does your app handle the linked document's application not being
available?
Multiple selection. What happens if you apply text formatting when you have three different text ranges
selected? Or you paste when several different items are selected? What should happen?
The last two special states are not contexts in which to execute your test cases but rather
additional tests to run at the end of each of your test cases:
Send To. Many applications today have a handy menu item that lets you send the current document to
someone as an email.
Cut, copy, and delete. To and from the same document, a different document, competing applications,
targets that support a less-rich or more-rich version of the data (e.g., copying from a word processor
and pasting into a text editor), targets that don't support any version of the data (what happens if you
copy from your file explorer and paste into your application?).
Japanese-language input processor likely traps all keystrokes, combines multiple keystrokes into a
single Japanese character, and then sends that single character on to the application. Shortcut key
sequences should bypass this extra layer of processing, but oftentimes they don't. (Note: turning off the
IME is one solution to this quandary, but it is almost never the right answer!)
Assistive input devices such as puff tubes. The operating system generally abstracts these into a
standard keyboard or mouse, but they may introduce unusual conditions your application needs to
handle, such as extra-long waits between keystrokes.
Random other input sources. For example, games where you control the action by placing one or more
sensors on your finger(s) and then thinking what you want the program to do. Some of these devices
simply show up as a joystick or mouse. What happens if someone tries to use such a device in your
application?
Multiple keyboards and/or mice. Microsoft Windows supports multiple mice and keyboards
simultaneously. You only ever get a single insertion point and mouse pointer, so you don't have to
figure out how to handle multiple input streams. You may, however, need to deal with large jumps in
e.g., mouse coordinates.
• they should be clear at all graphic resolutions indicated for the operation of the
application;
• they should be intuitive with respect to the associated function;
• they should be visible;
• they should be correctly aligned;
• their texts should be correct and fully visible;
• they should be correctly active/inactive;
• they should comply with the internally agreed standards (default colours/denominations);
• they should comply with the specifications;
• they should operate correctly: they must close/open correctly in different graphic sub-
domains of the interface (context);
• they should call the specified functions;
6. Toolbars:
8. all of the above tests should be performed to guarantee operation at all indicated
graphical resolutions, also using both font sizes (small/large). For this, it is recommended to
test the minimum and maximum resolutions and an intermediate value and, if something does not
work properly, to localise the limits. A single non-operational graphic element at a given
resolution is enough to declare that resolution non-operational or partially operational
(to be specified in the Release Notes).
9. correct navigability:
a. with the mouse
b. with the keyboard
10. Shortcut Keys and correct operation (if there are any specifications in this regard)
11. correct operation of shortcut keys in the context (according to focus)
12. warning and error windows should exist.
Non-Functional System Tests ensure that the application was developed according to the Non-
Functional requirements set out in the Requirements Specification.
Installation
Performance
Volume/Load
Stress
Usability
Security
Internationalization and localization
Accessibility
3.3.2.1 Installation
An installation test assures that the system/software application is installed correctly and working at
actual customer's hardware.
3.3.2.1.1 Objective
Installation testing has the following objectives:
• To verify that the application can be appropriately installed/uninstalled under all indicated
hardware/OS conditions;
• To verify the reaction of the systems (configurations) in case of overload or upgrade;
• To verify the reaction of the target system during installation if it does not have enough memory
(hard disk full) or is too slow; the reaction should be graceful, returning a notification that the
system cannot be installed;
• To verify that, once the system is installed, the application runs as expected.
3.3.2.1.4 Tools
Just the software application installation kit.
3.3.2.1.5.1 Setup
You are not done testing yet unless you have tested your program's setup process under the following
conditions. Although some of these terms are specific to Microsoft Windows, other operating systems
generally have similar concepts.
Installing from a CD-ROM/DVD-ROM
Installing from a network share
Installing from a local hard drive
Installing to a network share
Installing from an advertised install, where icons and other launch points for the application are created
(i.e., the app is "advertised" to the user), but the application isn't actually installed until the first time the
user launches the program. Also known as "install on demand" or "install on first use".
Unattended installs (so-called because no user intervention is required to e.g., answer message
boxes), aka command line installs. This can become quite complicated, as the OS's installation
mechanism supports multiple command-line options, and your application may support yet more.
Mass installs, via an enterprise deployment process such as Microsoft Systems Management Server.
Upgrading from previous versions. This can also become quite complicated depending on how many
versions of your app you have shipped and from which of those you support upgrades. If all of your
customers always upgrade right away, then you're in good shape. But if you have customers on five or
six previous versions, plus various service packs and hotfixes, you have a chore ahead of you!
Uninstall. Be sure that not only are all application-specific and shared files removed, but that registry
and other configuration changes are undone as well. Verify that components shared with other
applications are or are not uninstalled, depending on whether any of the sharing apps are still
installed. Try out-of-order uninstalls: install app A and then app B, then uninstall app A and then
uninstall app B.
Reinstall after uninstalling the new and previous versions of your application
Installing on all supported operating systems and SKUs. For Microsoft Windows applications, this may
mean going as far back as Windows 95; for Linux apps, consider which distros you will be supporting.
Minimum, Typical, Full, and Custom installs. Verify that each installs the correct files, enables the
correct functionality, and sets the correct registry and configuration settings. Also try
upgrading/downgrading between these types - from a minimum to complete install, for example, or
remove a feature - and verify that the correct files etc. are un/installed and functionality is correctly
dis/enabled.
Install Locally, Run From Network, Install On First Use, and Not Available installs. Depending on how
the setup was created, a custom install may allow the individual components to be installed locally, or
to be run from a shared network location, or to be installed on demand, or to not be installed at all.
Verify that each component supports the correct install types - your application's core probably
shouldn't support Not Available, for example. Mix-and-match install types - if you install one component
locally, run another from the network, and set a third to Install on First Use, does everything work
correctly?
Install On First Use installs. Check whether components are installed when they need to be (and not
before), and that they are installed to the correct location (what happens if the destination folder has
been deleted?), and that they get registered correctly.
Run From Network installs. Check whether your app actually runs - some apps won't, especially if the
network share is read-only. What happens if the network is unavailable when you try to launch your
app? What happens if the network goes down while the application is running?
Verify installs to deeply nested folder structures work correctly.
Verify that all checks made by the installer (e.g., for sufficient disk space) work correctly.
Verify that all errors handled by the installer (e.g., for insufficient disk space) work correctly.
Verify that "normal" or limited-access (i.e., non-admin) users can run the application when it was
installed by an administrator. Especially likely to be troublesome here are Install On First Use
scenarios.
Verify the application works correctly under remoted (e.g., Microsoft Remote Desktop or Terminal
Server), and virtual (e.g., Microsoft Virtual PC and Virtual Server) scenarios. Graphics apps tend to
struggle in these cases.
Perform a Typical install followed by a Modify operation to add additional features.
Perform a Custom install followed by a Modify operation to remove features.
Perform a Typical install, delete one or more of the installed files, then perform a Repair operation.
Perform a Custom installation that includes non-Typical features, delete one or more of the installed
files, then perform a Repair operation.
Patch previous versions. Patching is different from an upgrade in that an upgrade typically replaces all
of the application's installed files, whereas a patch usually overwrites only a few files.
Perform a Minor Upgrade on a previously patched version.
Patch on a previously upgraded version.
Upgrade a previously installed-then-modified install.
Patch a previously installed-then-modified install.
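The installer's free-space check (and its error branch) can be exercised from a test harness. A sketch using Python's standard library:

```python
import shutil

def has_sufficient_space(path, required_bytes):
    """Mirror the installer's free-space check so the error path can
    be exercised deliberately: pass an inflated required_bytes to
    force the failure branch."""
    free = shutil.disk_usage(path).free
    return free >= required_bytes
```

Driving the real installer against a nearly-full disk (or a quota-limited folder) is still needed; this only checks that your expected thresholds make sense.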
3.3.2.1.5.2 Upgrades
You are not done testing unless you understand how your application handles being installed over
previous versions of your application, and having the operating system upgraded out from under it. You
may want to test installing a previous version over, or side-by-side to, the current version as well. Consider
whether to cover all three combinations: upgrading just your application, upgrading just your operating
system, and upgrading both the operating system and your application.
3.3.2.2 Performance
Performance testing is executed to determine how a system or sub-system performs in terms of
responsiveness and stability under a particular workload.
It can also serve to investigate, measure, validate or verify other quality attributes of the system,
such as scalability, reliability and resource usage.
3.3.2.2.1 Objective
The main objective is to determine or validate speed, scalability, and/or stability.
Performance tests must measure the response time under normal operating conditions, in order to
establish the application's degree of usability. Performance tests must also identify risk elements
related to meeting the application's operational criteria. They must run on individual application
modules, in order to identify critical processes early, and also after application integration, in order
to determine performance while integrated modules and concurrent processes are running.
Afterwards, the test shall be repeated with the estimated number of users provided in the
specifications.
Along with the tested application, it is recommended to gradually open concurrent processes
(on the network connection, the client machine, the server, and the workstation where the database is
implemented) and to progressively re-execute the response time measurements.
For each specified load, the execution time should not exceed the limits set, and the processes
should run without affecting the operations in the application or in the involved systems.
For each use case, the measured execution time should fall within the accepted range for
operation after application integration. It is therefore recommended to set time targets for the different
processes and to optimise them to meet those targets.
3.3.2.2.4 Tools
Tool Description
Apache JMeter: Apache JMeter is a 100% pure Java desktop application designed to load test functional
behavior and measure performance. It was originally designed for testing Web Applications but has since
expanded to other test functions. Apache JMeter may be used to test performance both on static and
dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP Servers and
more). It can be used to simulate a heavy load on a server, network or object to test its strength or to
analyze overall performance under different load types. You can use it to make a graphical analysis of
performance or to test your server/script/object behavior under heavy concurrent load.
Allmon: The main goal of the project is to create a distributed generic system collecting and storing
various runtime metrics collections used for continuous system performance, health, quality and
availability monitoring purposes. Allmon agents are designed to harvest a range of metrics values coming
from many areas of monitored infrastructure (application instrumentation, JMX, HTTP health checks,
SNMP). Collected data are the basis for quantitative and qualitative performance and availability analysis.
Allmon collaborates with other analytical tools for OLAP analysis and Data Mining processing.
Grinder: The Grinder is a Java load-testing framework making it easy to orchestrate the activities of a test
script in many processes across many machines, using a graphical console application.
loadUI: loadUI is a tool for Load Testing numerous protocols, such as Web Services, REST, AMF, JMS,
JDBC as well as Web Sites. Tests can be distributed to any number of runners and be modified in real
time. loadUI is tightly integrated with soapUI. loadUI uses a highly graphic interface making Load Testing
Fun and Fast.
Verify performance targets exist for each performance scenario, and are being met
Verify the performance tests are targeting the correct scenarios and data points
Verify performance optimizations have the intended effect
Verify performance with and without various options enabled, such as Clear Type and menu
animations, as appropriate
Compare performance to previous versions
Compare performance to similar applications
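Verifying that performance targets are met can start with a tiny harness like the following; it is a toy sketch, not a replacement for the load tools listed above:

```python
import time

def within_target(operation, target_seconds, runs=5):
    """Time an operation over several runs and compare the median
    against a performance target. Using the median rather than a
    single run damps out scheduling noise."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    samples.sort()
    median = samples[len(samples) // 2]
    return median <= target_seconds, median
```

Recording the measured median alongside the pass/fail result lets you compare against previous versions, as the checklist above asks.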
3.3.2.3 Volume/Load
Volume/Load testing is primarily focused on testing the characteristic of the system to continue to
operate under a specific load, whether it is a large data load or a large number of users.
This is generally referred to as software scalability. Volume testing is a way to test software
functions even when certain components (for example a file or database) increase radically in size.
3.3.2.3.1 Objective
The objectives of this type of testing are:
• maximum (actual or physically capable) number of clients connected (or simulated) all performing
the same, worst case (performance) business function for an extended period of time.
• maximum database size has been reached (actual or scaled) and multiple queries / report
transactions are executed simultaneously.
• processing large files (import, export, upload, etc.)
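Large-file processing tests benefit from generating test data in chunks rather than holding it in memory. A small-scale sketch (scale the size up for a real volume run):

```python
import os
import tempfile

def generate_large_file(path, megabytes):
    """Write a file of the requested size in 1 MB chunks, so volume
    tests don't need to hold the whole payload in memory."""
    chunk = b"x" * (1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(megabytes):
            f.write(chunk)

def count_lines_streaming(path):
    """Process a large file line by line instead of loading it whole,
    the pattern an import/export volume test should exercise."""
    with open(path, "rb") as f:
        return sum(1 for _ in f)

# Small-scale demo; raise `megabytes` for a real volume test.
path = os.path.join(tempfile.mkdtemp(), "volume.dat")
generate_large_file(path, 1)
```

The same generate-then-stream pattern works for seeding oversized databases or upload payloads.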
3.3.2.3.4 Tools
Tool Description
Apache JMeter: Java desktop application for load testing and performance measurement.
LoadRunner: Performance testing tool primarily used for executing large numbers of tests (or a large
number of virtual users) concurrently. Can be used for unit and integration testing as well. Licensed.
Visual Studio Ultimate Edition: Includes a load test tool which enables a developer to execute a variety
of tests (web, unit, etc.) with a combination of configurations to simulate real user load.
Rational Performance Tester: Eclipse-based large-scale performance testing tool primarily used for
executing large-volume performance tests to measure system response time for server-based
applications. Licensed.
3.3.2.4 Stress
Stress testing is a way to test reliability under unexpected or rare workloads.
It involves testing beyond normal operational capacity, often to a breaking point, in order to observe
the results.
3.3.2.4.1 Objective
Verifying the range within which the system (or different components) operates normally.
3.3.2.4.4 Tools
The tools used are the same as for performance testing.
As you do all of this performance and stress testing, also check for memory and other resource leaks.
3.3.2.5 Usability
Usability testing is needed to check if the user interface is easy to use and understand. It is
concerned mainly with the use of the application.
3.3.2.5.1 Objective
The purpose of the practice is to discover any missed requirements or any design that was
intended to be intuitive but ended up confusing new users. By testing what users need and how they
interact with the product, designers are able to assess the product's capacity to meet its intended
purpose.
Usability testing also reveals whether users feel comfortable with your application or Web site
according to different parameters - the flow, navigation and layout, speed and content - especially in
comparison to prior or similar applications.
Usability_General_TC v 3.1.xlsx
3.3.2.5.4 Tools
N/A
3.3.2.6 Security
Security testing is essential for software that processes confidential data, to prevent system intrusion
by hackers. There are different levels at which security tests can be performed, such as Web,
infrastructure, and wireless LANs.
3.3.2.6.1 Objective
Security testing is a type of software testing performed to check whether the application
or product is secure. It checks whether the application is vulnerable to attacks and whether anyone can
hack into the system or log in to the application without authorization.
It is a process to determine that an information system protects data and maintains functionality as
intended.
Test for known vulnerabilities and configuration issues on Web Server and Web Application
Test for default or guessable password
Test for non-production data in live environment, and vice-versa
Test for Injection vulnerabilities
Test for Buffer Overflows
Test for Insecure Cryptographic Storage
Test for Insufficient Transport Layer Protection
Test for Improper Error Handling
Test for all vulnerabilities with a CVSS v2 score > 4.0
Test for Authentication and Authorization issues
Test for CSRF
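Some of the checklist items above can be partly automated. As one illustration, the "default or guessable password" check might be sketched as below; the password list and length threshold are assumptions for the example, not requirements from this manual:

```python
# Illustrative sketch: flag passwords that are well-known defaults or too short.
# The list and the 8-character threshold are example policy values.
DEFAULT_PASSWORDS = {"admin", "password", "123456", "root", "changeme"}

def is_guessable(password: str) -> bool:
    """Return True if the password is a common default or too short."""
    return password.lower() in DEFAULT_PASSWORDS or len(password) < 8
```

A test case built on this would assert that no account in the system under test authenticates with a guessable password.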
One of the first major initiatives in any good security program should be to require accurate
documentation of the application. The architecture, data-flow diagrams, use cases, etc., should be written in formal documents and made available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases.
Finally, it is good to have at least a basic security infrastructure that allows the monitoring and
trending of attacks against an organization's applications and network (e.g., IDS systems).
Estimation phase:
Vulnerability Assessment
Fuzzing
Deployment (OWASP: Phase 4: During Deployment)
Server Configuration Review
Network Configuration Review
Security criteria (this should be limited only to application layer as infrastructure assessment is out of the
scope of the testing team unless stated otherwise):
System test plan for all projects should include two areas under security testing:
- manual security testing using the Manual_Security_Tests.docx available on QMS;
- types of testing (OWASP top 10) and tools to be used for automatic assessment and pen-testing.
Both are subject to tailoring to the specifics of the project.
Execution phase:
Execution for security testing happens throughout project life. Below are described the activities that
should be covered for achieving secure software products.
All activities are subject to tailoring and use OWASP principles as guidance.
No. 1 - Security Requirements
Instructions: Go over existing documentation to identify the security requirements. As required, set up meetings with BA/customer/PM/software security professionals.
Resource used/Outcome: Document outlining the specific security requirements.

No. 2 - Risk Assessment (exposes financial implications)
Instructions: Based on the Security Requirements document, identify threats from both a technology and a business perspective. TPM, architect, BAs and developers should be part of the team doing risk assessment, along with security professionals.
Resource used/Outcome: Document ranking the risks in terms of probability and impact, as well as a proposal to mitigate them.

No. 3 - Architecture & Design Reviews
Instructions: Meet with architecture and development TL to present the issues identified by the security team.
Resource used/Outcome: MoM with a proposal to address the identified security issues at the architecture and design level.

No. 4 - Threat Modeling (exposes technology and business related implications)
Instructions: Iterative process used to identify the threats that can affect the project from a technology and business roles perspective.
Resource used/Outcome: Threat modeling document with a ranking of threats as well as the vulnerabilities. This document serves as input for the security analysis phase (5 & 6 below).
This document will have to be updated as additional tests are performed and bugs are fixed. The last
version of this document should ideally contain only bugs that are resolved and verified and should be
provided to PM/TPM.
3.3.2.6.4 Tools
While we have already stated that there is no silver bullet tool, tools do play a critical role in the
overall security program. There is a range of open source and commercial tools that can automate many
routine security tasks. These tools can simplify and speed up the security process by assisting security
personnel in their tasks. However, it is important to understand exactly what these tools can and cannot
do so that they are not oversold or used incorrectly.
Tool: OWASP ZAP – Zed Attack Proxy Project
Description: The Zed Attack Proxy (ZAP) is an easy-to-use integrated penetration testing tool for finding vulnerabilities in web applications. It is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who are new to penetration testing.
3.3.2.7 Internationalization and Localization
3.3.2.7.1 Objective
The user interface (UI), documentation, and content can be in multiple languages, currencies, date
formats, and units of measurement. With such complexities, organizations need to ensure that their
applications are relevant to the regions they serve. Internationalization and localization testing ensures
reliability, usability, acceptability, and above all relevance to audience and users worldwide. Products
need to be localized and then tested on many counts like language/copy context, consistent functionality,
compatibility, and interoperability.
Language
o Computer encoded text – One of the most common ways to know if a product is ready to
be localized is the use of Unicode. This allows the system to support a wide range of character
encoding issues
o Different number systems – Some countries use a counting method that differs from the usual 1, 2, 3 system English uses.
o Writing direction – Some are left to right (German, English, French), some are right to left
(Arabic and other Middle Eastern countries)
o Spelling variants where the same language is spoken (Localization vs Localisation, colour vs color)
o Capitalization rules, sorting rules can be different as well
o Input – keyboard shortcuts and keyboard layouts may be different
Culture
Writing conventions
You should then also run all your automated acceptance tests in your new locale to ensure that all new functionality is internationalized as it is developed.
3.3.2.7.4 Tools
N/A
maps in any fashion, be prepared for all kinds of pain the moment anyone outside your country starts
using them!
Verify that your application correctly handles switching to different system locales, language packs,
and code pages, both before your application has started and while it is running.
Verify that your application correctly handles switching to different regional settings, both before your
application has started and while it is running: date and time formats, currency symbols and formats,
and sort orders, to name just a few. Some or all of these settings will vary across locales and language
packs; most modern operating systems allow you to customize all of this separately from changing
languages or locales as well. (On Microsoft Windows, do this via the Regional Settings control panel
applet.) For example, if your application works with currency, see what happens when you change your
currency symbol to "abc".
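The regional-settings checks above can be made concrete with a small sketch. The per-locale format strings below are illustrative conventions only, not an authoritative mapping:

```python
from datetime import date

# Illustrative date-format conventions per region (example values only).
DATE_FORMATS = {
    "en_US": "%m/%d/%Y",   # month/day/year
    "de_DE": "%d.%m.%Y",   # day.month.year
    "ja_JP": "%Y/%m/%d",   # year/month/day
}

def format_for_locale(d: date, locale_id: str) -> str:
    """Render a date using the convention assumed for the given locale."""
    return d.strftime(DATE_FORMATS[locale_id])
```

A locale test would verify that the application renders the same underlying date differently under each regional setting, rather than hard-coding one format.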
Verify that your application correctly handles multi-byte (e.g., Japanese), complex script (e.g., Arabic)
and right-to-left (e.g., Hebrew) languages. Can you cursor around this text correctly? What happens if
you mix right-to-left and left-to-right text?
Verify that all controls correctly interact with Input Method Editors (IMEs). This is especially important if
you intend to sell into East Asian countries.
Verify that your application correctly handles different keyboard mappings. As with regional settings,
certain locales and language packs will apply special keyboard mappings, but operating systems
usually allow you to directly modify your keyboard map as well.
Verify your application correctly handles ANSI, multi-byte, and Unicode text, extended characters, and
non-standard characters on input, display, edit, and output.
Verify that the correct sorting order is used. Sorting correctly is hard! Just ask anyone who has run into
the infamous Turkish "i" sort order bug. If you rely on operating system-provided sort routines then you
should be in good shape, but if your application does any custom sorting it probably does it wrong.
Verify that the system, user, and invariant locales are used as appropriate: use the user locale when
displaying data to the user; use the system locale when working with non-Unicode strings, and use the
invariant locale when formatting data for storage.
Verify that any language-dependent features work correctly.
Verify that your test cases correctly take into account all of these issues. In my experience, testers make all the same mistakes in this area as do developers - and won't you be embarrassed if your developer logs a bug against your test case!
Localization
International sufficiency testing is important for just about any application, but localization testing only matters if you are localizing your application into other languages. The distinction can be hard to remember, but it's really quite simple: international sufficiency testing verifies that your application does not have any locale-specific assumptions (like expecting the decimal separator to be a decimal point), whereas localization testing verifies your application can be localized into different languages. Although similar, the two are completely orthogonal.
The simplest way to get started localization testing is with a pseudo-localized (aka pseudoloc) build. A pseudoloc build takes your native language build and pseudo-localizes it by adding interesting stuff to the beginning and end of each localized string (where "interesting stuff" is determined by the languages to which your product will be translated, but might include e.g. double-byte or right-to-left characters). This process can vastly simplify your localization testing:
It allows every build to be localized via an automated process, which is vastly faster and cheaper than is the case when a human hand-localizes.
It allows people who may not read a foreign language to test localized builds.
Strings that should be localized, but are not, are immediately obvious as they don't have extra
characters pre- and post-pended.
Strings that should not be localized but in fact are do have extra characters pre- and post-pended and
thus are also immediately obvious.
Double-byte bugs are more easily found.
UI problems such as truncated strings and layout issues become highly noticeable.
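A minimal pseudo-localization transform along these lines might look as follows; the marker brackets and accented substitutions are arbitrary example choices:

```python
# Minimal pseudo-localization sketch: substitute accented characters and wrap
# each string in markers, so unlocalized strings stand out immediately and
# layout is exercised with non-ASCII text.
ACCENTS = str.maketrans({"a": "á", "e": "é", "i": "í", "o": "ó", "u": "ú"})

def pseudolocalize(s: str) -> str:
    """Return a pseudo-localized version of a native-language string."""
    return "[!! " + s.translate(ACCENTS) + " !!]"
```

Any string in the running application that appears without the `[!! … !!]` markers was not routed through the localization pipeline and is likely hard-coded.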
If you can, treat pseudoloc as your primary language and do most of your testing on pseudoloc builds.
This lets you combine loc testing and functionality testing into one. Testing on actual localized builds -
functionality testing as well as localization test - is still important but should be trivial. If you do find
major localization bugs on a localized build, find a way to move that discovery into your pseudoloc
testing next time! Beyond all that, there are a few specific items to keep in mind as you test (hopefully
pseudo-) localized builds:
Verify each control throughout your user interface (don't forget all those dialog boxes!) is aligned
correctly and sized correctly. Common bugs here are auto-sizing controls moving out of alignment with
each other, and non-auto-sizing controls truncating their contents.
Verify all data is ordered/sorted correctly.
Verify tab order is correct. (No, this shouldn't be affected by the localization process. But weirder things
have happened.)
Verify all strings that should have been localized were. A should-have-been-localized-but-was-not string is likely hard-coded.
Verify that no strings that should not have been localized were.
Verify all accelerator key sequences were localized.
Verify each accelerator key sequence is unique.
Verify all hot key combination were localized.
Verify each hot key combination is unique.
APIs
If your application installs EXEs, DLLs, LIBs, or any other kind of file - which covers every application I've ever encountered - you have APIs to test. Possibly the number of APIs *should* be zero - as in the case of for-the-app's-use-only DLLs - or one - as in the case of an EXE which does not support command line arguments. But - as every tester knows - what should be the case is not always what is the case.
Verify that all publicly exposed APIs are in fact meant to be public. Reviewing source code is one way to do
this. Alternatively, tools exist for every language to help with this - Lutz Roeder's .Net Reflector is de
rigueur for anyone working in Microsoft .Net, for example. For executables, start by invoking the
application with "-<command>", ":<command>", "/<command>", "\<command>" and "<command>"
command line arguments, replacing "<command>" with "?" or "help" or a filename. If one of the help
commands works you know a) that the application does in fact process command line arguments, and
b) the format which it expects command line arguments to take.
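The command-line probing described above can be scripted. The sketch below uses the Python interpreter as a stand-in target; substitute the executable you are actually testing:

```python
import subprocess
import sys

# Probe an executable with common help-style argument spellings.
# An exit code of 0 for one spelling tells you both that the application
# processes command line arguments and which format it expects.
def probe_help(exe: str) -> dict:
    results = {}
    for arg in ("-?", "/?", "--help", "help"):
        try:
            proc = subprocess.run([exe, arg], capture_output=True, timeout=30)
            results[arg] = proc.returncode
        except OSError:
            results[arg] = None  # executable could not be launched with this arg
    return results
```

For example, `probe_help(sys.executable)` shows that the Python interpreter answers `--help` with exit code 0 but rejects the other spellings.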
Verify that no non-public API can cause harm if accessed "illegally". Just because an API isn't public
does not mean it can't be called. Managed code languages often allow anyone who cares to reflect into
non-public methods and properties, and vtables can be hacked. For the most part anyone resorting to
such tricks has no cause for complaint if they shoot themselves in their foot, but do be sure that
confidential information cannot be exposed through such trickery. Simply making your decryption key
or license-checking code private is not sufficient to keep it from prying eyes, for example.
Review all internal and external APIs for your areas of ownership. Should they have the visibility they
have? Do they make sense? Do their names make clear their use and intent?
Verify that every public object, method, property, and routine has been reviewed and tested.
Verify that all optional arguments work correctly when they are and are not specified.
Verify that all return values and uncaught exceptions are correct and helpful.
Verify that all objects, methods, properties, and routines which claim to be thread safe in fact are.
Verify that each API can be used from all supported languages. For example, ActiveX controls should
be usable (at a minimum) from C++, VB, VB.Net, and C#.
Verify that documentation exists for every public object, method, property, and routine, and that said
documentation is correct. Ensure that any code samples in the docs compile cleanly and run correctly.
3.3.2.8 Accessibility
Accessibility testing is a subset of usability testing where the users under consideration have
disabilities that affect how they use the web. The end goal, in both usability and accessibility, is to discover
how easily people can use a web site and feed that information back into improving future designs and
implementations.
Accessibility testing may include compliance with known standards like Web Accessibility Initiative
(WAI) of the World Wide Web Consortium (W3C) created especially for people with disabilities.
3.3.2.8.1 Objective
Web accessibility is a goal, not a yes/no setting. It is a nexus of human needs and technology. As
our understanding of human needs evolves and as technology adapts to those needs, accessibility
requirements will change as well and current standards will be outdated. Different websites, and different
webs, serve different needs with different technology. Voice chat like Skype is great for the blind,
whereas video chat is a boon for sign language users.
Disabilities pose special challenges when working out how easy a product is to use, because they
can introduce additional experience gaps between users and evaluators. Accessibility evaluation must
take account of what it is like to experience the web with different senses and cognitive abilities and of the
various unusual configuration options and specialist software that enable web access to people with
particular disabilities.
If you are trying to evaluate the usability or accessibility of your web site, putting yourself in the
place of a film-loving teenager or a 50-year old bank manager using your site is difficult, even before
disabilities are considered. But what if the film-loving teenager is deaf and needs captions for the films she
watches? What if the 50-year old bank manager is blind and uses special technology (like a screen
reader) which is unfamiliar to the evaluator in order to interact with his desktop environment and web
browser?
For accessibility testing to succeed, the test team should plan a separate cycle for accessibility
testing. Management should make sure that the test team has information on what to test and that all the
tools they need to test accessibility are available to them.
Typical test cases for accessibility might look similar to the following examples:
Make sure that all functions are available via keyboard only (do not use mouse)
Make sure that information is visible when display setting is changed to High Contrast modes.
Make sure that screen reading tools can read all the text available and that every picture/image has
corresponding alternate text associated with it.
Make sure that product defined keyboard actions do not affect accessibility keyboard shortcuts.
Etc.
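The alternate-text check above is a natural candidate for automation. A minimal sketch using the standard-library HTML parser (the sample markup is hypothetical):

```python
from html.parser import HTMLParser

# Collect img elements that lack a non-empty alt attribute.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="banner.png">')
# checker.missing now lists the images lacking alternate text
```

A real test would feed rendered page source to the checker and fail when `missing` is non-empty.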
Tool-guided evaluation: where a tool looks for accessibility problems and presents them to the
evaluator (this would include accessibility checkers and code linters).
Screening: where the expert simulates an end-user experience of the web site. Often you don’t
need to look very far to find accessibility problems. You might do no more than load the page in your
browser and notice the text is very hard to read.
Tool-based inspection: where the evaluator uses a tool to probe how the various bits of a web site
are working together.
Code review: where the evaluator looks directly at the code and assets of a web site to scour for
problems.
While beginners may be especially dependent on tool-guided evaluation, evaluators of all levels of
experience can benefit from each component. Even beginners can spot img elements without text
equivalents in HTML markup, and as you get more experienced, you will get quicker at spotting problems
before you progress to more rigorous testing. For experts on larger projects, it may not be feasible to
manually review all client-side code or inspect all parts of a website, but a tool-guided evaluation can find
areas of particular trouble that deserve a closer look. Also, human evaluators may overlook things that a
machine evaluation would have caught.
3.3.2.8.4 Tools
Tool: Accessibility Valet
Description: Accessibility Valet is a tool that allows you to check Web pages against either Section 508 or W3C Web Content Accessibility Guidelines (WCAG) accessibility compliance. One URL at a time may be checked with this online tool in free mode, or unlimited use with a paid subscription. All the HTML reporting options display your markup in a normalized form, highlighting valid, deprecated and bogus markup, as well as elements which are misplaced. Any accessibility warnings are shown in a generated report.
3.3.2.9 Compatibility
A common cause of software failure (real or perceived) is a lack of its compatibility with other
application software, operating systems (or operating system versions, old or new), or target environments
that differ greatly from the original. The purpose of this type of testing is to test the software in most of the client environments.
3.3.2.9.1 Objective
Compatibility testing is a type of software testing used to ensure compatibility of the
system/application/website built with various other objects such as other web browsers, hardware
platforms, users (in case it's a very specific type of requirement, such as a user who speaks and can read
only a particular language), operating systems etc. This type of testing helps find out how well a system
performs in a particular environment that includes hardware, network, operating system and other
software etc.
But before starting browser compatibility testing, it is the responsibility of the tester to ask the developer certain things about the application or website.
For better testing results, we should check these steps for browser compatibility testing:
CSS, HTML and XHTML validation: This is done to ensure that the pages developed are free from HTML errors and follow the standards set by the W3 Consortium.
Page validation: This is checked by enabling and disabling JavaScript in the browser.
Font size validation: Some browsers overwrite the font size with their default, or the font may not be available on the system.
Image alignment: This is to ensure the proper alignment of images on the page.
Header and footer: These should be verified with care, and all text and its spacing and alignment should be taken into account for testing.
Page alignment should be tested (center, LHS and RHS).
Control alignment: Alignment of controls, especially 1) bullets, 2) radio buttons and 3) check boxes, should be checked in various browsers.
Page zoom in and out should be tested properly.
Verification of information submitted to the database: If there are forms that interact with the database, they should be tested/verified as a priority; it should be verified that information is being saved correctly in the database.
HTML video format: Video formats should be verified because not all browsers support all video formats; for example, IE9 supports only .mp4, Firefox supports .mp4 and .webm, and Chrome supports almost all formats (.mp4, .webm, .ogm and others).
Text alignment should be verified, especially in drop-downs.
Flash content should be tested.
Pages should be tested with cookies and JavaScript turned off, and tested again with both turned on.
Verification should be done on Ajax and jQuery requests.
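The video-format point above is essentially a support-matrix lookup, which can be sketched as follows; the matrix values are illustrative examples, since actual support varies by browser version:

```python
# Illustrative browser/video-format support matrix (example values only).
VIDEO_SUPPORT = {
    "IE9": {"mp4"},
    "Firefox": {"mp4", "webm"},
    "Chrome": {"mp4", "webm", "ogm"},
}

def browsers_missing(video_format: str) -> list:
    """Return the browsers (sorted) that do not support the given format."""
    return sorted(b for b, fmts in VIDEO_SUPPORT.items()
                  if video_format not in fmts)
```

A compatibility test plan would flag any page that serves only a format appearing in some browser's missing list without a fallback.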
4.1 Functioning of buttons: Verify that all the buttons (OK, Cancel, Submit, Reset) and links on the form page are working.
Printing Issues
13.3 Verify Scalability to fit paper size
3.3.2.9.4 Tools
Tool: Ghostlab
Description: Ghostlab offers synchronized testing for scrolls, clicks, reloads and form input across all your connected devices, meaning you can test the entire user experience, not just a single page. Using the built-in inspector, you can discover and fix problems quickly, connected to the DOM or JavaScript output on any device. Ghostlab is available for both Windows and Mac OS X, with no setup required, as it can instantly connect to any JavaScript-enabled client. Using the Ghostlab server, you can sync pages from your local directory, your localhost Apache setup or any server in the world, with automatic reloading to keep track of file changes. The workspace feature lets you create a custom browser setup and adapt Ghostlab's features to exactly what you require.

Tool: Sauce Labs
Description: Sauce Labs allows you to run tests in the cloud on more than 260 different browser platforms and devices, providing a comprehensive test infrastructure including Selenium, JavaScript, Mobile and Manual testing facilities. There's no VM setup or maintenance required, with access to live breakpoints while the tests are running so you can jump in and take control to investigate a problem manually.

Tool: CrossBrowserTesting
Description: CrossBrowserTesting offers a live testing environment with access to more than 130 browsers across 25 different operating systems and mobile devices, so you can interactively verify your layout and test AJAX, HTML forms, JavaScript and Flash. The layout comparison feature lets you choose a "base" browser for comparisons and get a summary of rendering differences, along with side-by-side screenshots to catch and debug layout issues effectively. You can test local development of websites even behind firewalls and logins, with the ability to change browser, cache and cookie settings, and turn JavaScript on or off.

Tool: Browsershots
Description: Browsershots is a free, open-source web app providing a convenient way to test your website's browser compatibility in one place. Browsershots uniquely champions the idea of distributing the work of making screenshots among community members, who set up "factories" on their own machines to get jobs from the server using a fully automatic unattended script. It's simple to use -- simply enter the URL and choose the browser setup you require. There are several presets to choose from, including screen size, color depth, JavaScript, Java and Flash. You will then have to wait a while for the screenshots to be generated.
3.3.2.9.5.2 Platform
You are not done testing unless you have considered which platforms to include in and which to omit
from your test matrix. The set of supported platforms tends to vary widely across contexts - a consumer
application will likely have a different set of supported platforms than does an enterprise line of business
application. Even if you officially support only a few specific platforms, it can be useful to understand what
will happen if your application is installed or executed on other platforms. Platforms to consider include:
All supported versions of Windows; at a minimum: Windows XP SP2, Windows XP SP<latest>,
Windows Server 2003 SP<latest>, Windows Vista SP<latest>
Apple OS X.<latest>
Your favorite distribution of Linux
Your favorite flavor of Unix
32-bit version of the operating system running on 32-bit hardware
32-bit version of the operating system running on 64-bit hardware
64-bit version of the operating system running on 64-bit hardware
The various SKUs of the operating system
Interoperability between different SKUs of the operating system
Interoperability between different operating systems (e.g., using a Windows Vista machine to open a
file which is stored on a Linux network share)
All supported browsers and browser versions; at a minimum: Internet Explorer 6, Internet Explorer 7,
Opera, Firefox
With and without anti-virus software installed
With and without firewall software installed
Also peruse the Windows Logo requirements. Even if you aren't pursuing logo compliance (or
your application doesn't run on Windows) these are a useful jumping off point for brainstorming test cases!
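Enumerating the full matrix from dimensions like those above is straightforward to script. The specific values below are examples; substitute your own supported platforms:

```python
from itertools import product

# Example platform-matrix dimensions (values are illustrative placeholders).
operating_systems = ["Windows XP SP2", "Windows Vista", "Linux"]
browsers = ["Internet Explorer 6", "Internet Explorer 7", "Firefox", "Opera"]
antivirus_installed = [True, False]

# Cartesian product: 3 operating systems x 4 browsers x 2 antivirus states.
matrix = list(product(operating_systems, browsers, antivirus_installed))
```

Even this small example yields 24 configurations, which shows why the full matrix quickly exceeds a realistic test budget and must be pruned (for example with pairwise selection).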
The exact definition of "low end" and "high end" and such will vary across applications and use
cases and user scenarios - the minimum configuration for a high-end CAD program will probably be rather
different than that for a mainstream consumer-focused house design application, for example. Also
carefully consider which chipsets, CPUs, system manufacturers, and such you need to cover. The full
matrix is probably rather larger than you have time or budget to handle!
3.3.2.9.5.6 Configuration
Verify settings which should modify behavior do
Verify settings are or are not available for administrators to set via policies, as appropriate
Verify user-specific settings roam with the user
Verify registry keys are set correctly, and that no other registry keys are modified
Verify user-specific configuration settings are not written to machine or global registry settings or
configuration files
Brainstorm how backward-compatibility problems might occur because a setting has moved or changed, thus breaking functionality from a previous version or changing default values from a previous version
3.3.2.9.5.7 Interoperability
Verify clipboard cut, copy, and paste operations within your application
Verify clipboard cut, copy, and paste operations between your and other applications
3.4 Acceptance Testing
The definition of acceptance testing in standard BS7925 is: “acceptance testing is formal testing
conducted to enable a user, customer or other authorized entity to determine whether to accept a system
or component”. This is the final stage of validation in the software development lifecycle (SDLC).
We perform this activity together with the customer and the main objective is to ensure that the final
system matches the original requirements defined by the business or the project sponsor. Testing team members may choose to perform any test that is needed, based on the usual business process. Testing will be
carried out against the user requirements documentation in an environment as close to production as
achievable.
Test cases will be generated as detailed scenarios for each requirement (business and
technical) described in project documentation. Additional test cases can be defined during testing
execution phases.
The User Acceptance testing process comprises several types of tests: Functional, Non-Functional
(already detailed in the System Test chapter from above) and end-to-end tests.
The following users can be part of the testing team: business users, Testing & Review members,
members of support teams and members from the customer.
3.4.1 Objective
The key purpose of UAT is not to see that a program or system works according to the specification
but to check that it will work in the context of a business or organization. Many UAT testers are not aware
of this and spend their time running tests which should have been properly done in the functional testing
part of the System Testing.
UAT is testing the integration of a computer system into a much larger system called the business
or organization. It is a form of Interface Testing and is concerned with checking communication between
the system and the users. This does not mean it is a form of Usability testing, which checks how easy it is
to work with a computer system. Instead it is about whether a business or organization can input the
information they need to and get back usable results which will enable the business to go forward.
Testing generally involves running a suite of tests on the completed system. Each individual test,
known as a case, exercises a particular operating condition of the user's environment or feature of the
system, and will result in a pass or fail outcome.
There is generally no degree of success or failure. The test environment is usually designed to be
identical, or as close as possible, to the anticipated user's environment, including extremes of such. These
test cases must each be accompanied by test case input data and/or a formal description of the
operational activities to be performed. The intentions are to thoroughly elucidate the specific test case and
description of the expected results.
The acceptance test suite is run against the supplied input data or using an acceptance test script
to direct the testers. Then the results obtained are compared with the expected results. If there is a correct
match for every case, the test suite is said to pass. If not, the system may either be rejected or accepted
on conditions previously agreed between the sponsor and the manufacturer.
The objective is to provide confidence that the delivered system meets the business requirements
of both sponsors and users. The acceptance phase may also act as the final quality gateway, where any
quality defects not previously detected may be uncovered.
A principal purpose of acceptance testing is that, once completed successfully, and provided certain
additional (contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system
as satisfying the contract (previously agreed between sponsor and manufacturer), and deliver final
payment.
The UAT acts as a final verification of the required business functionality and proper functioning of
the system, emulating real-world usage conditions on behalf of the paying client or a specific large
customer. If the software works as required and without issues during normal use, one can reasonably
extrapolate the same level of stability in production.
3.4.4 Tools
Robot Framework
Selenium
Specification by example (Specs2)
Watir
4 TESTING METHODS
Black Box Testing, also known as Behavioral Testing, is a software testing method in which the
internal structure/ design/ implementation of the item being tested is not known to the tester. These tests
can be functional or non-functional, though usually functional.
This method is named so because, in the eyes of the tester, the software program is like a black box whose inside one cannot see. This method attempts to find errors in categories such as incorrect or missing functions, interface errors, errors in data structures or external database access, behavior or performance errors, and initialization and termination errors.
LEVELS APPLICABLE TO
Black Box Testing method is applicable to the following levels of software testing:
Integration Testing
System Testing
Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.
Following are some techniques that can be used for designing black box tests.
Equivalence partitioning: It is a software test design technique that involves dividing input values
into valid and invalid partitions and selecting representative values from each partition as test
data.
Boundary Value Analysis: It is a software test design technique that involves determination of
boundaries for input values and selecting values that are at the boundaries and just inside/
outside of the boundaries as test data.
Cause Effect Graphing: It is a software test design technique that involves identifying the cases
(input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating
test cases accordingly.
Decision Table: Decision tables are a precise and compact way to model complicated logic. They
are ideal for describing situations in which a number of combinations of actions are taken under
varying sets of conditions.
State transition: State transition testing is used where some aspect of the system can be
described in what is called a ‘finite state machine’. This simply means that the system can be in a
(finite) number of different states, and the transitions from one state to another are determined by
the rules of the ‘machine’. This is the model on which the system and the tests are based.
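As an illustration, the boundary value analysis technique above can be sketched in Python. The function under test and the 18-to-60 age rule are invented for this example; the point is the choice of test data at, just inside, and just outside each boundary.

```python
def is_eligible_age(age):
    """Hypothetical unit under test: valid ages are 18 through 60 inclusive."""
    return 18 <= age <= 60

# Boundary value analysis: pick values at, just inside, and just outside
# the boundaries of the valid partition [18, 60].
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for age, expected in boundary_cases.items():
    assert is_eligible_age(age) == expected
```

Equivalence partitioning would reduce this further by taking one representative value from each partition (for example 17, 40, and 61).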
Advantages:
Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.
Testers need not know programming languages or how the software has been implemented.
Tests can be conducted by a body independent from the developers, allowing for an objective
perspective and the avoidance of developer-bias.
Test cases can be designed as soon as the specifications are complete.
Disadvantages:
Only a small number of possible inputs can be tested, and many program paths will be left untested.
Without clear specifications, which is the situation in many projects, test cases will be difficult to
design.
Tests can be redundant if the software designer/ developer has already run a test case.
Ever wondered why a soothsayer closes their eyes when foretelling events? That is almost the case in Black Box Testing.
White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing,
Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in which
the internal structure/ design/ implementation of the item being tested is known to the tester. The tester
chooses inputs to exercise paths through the code and determines the appropriate outputs. Programming know-how and implementation knowledge are essential. White box testing goes beyond the user interface and into the nitty-gritty of a system.
This method is named so because the software program, in the eyes of the tester, is like a white/
transparent box; inside which one clearly sees.
EXAMPLE
A tester, usually a developer as well, studies the implementation code of a certain field on a webpage,
determines all legal (valid and invalid) and illegal inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car is not
moving.
LEVELS APPLICABLE TO
White Box Testing method is applicable to the following levels of software testing:
Unit Testing
Integration Testing
System Testing
Following are some techniques that can be used for designing white box tests:
Control flow testing
Data flow testing
Branch testing
Statement coverage
Decision coverage
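For instance, branch testing chooses inputs so that every outcome of each decision in the code executes at least once. A minimal Python sketch (the function under test is invented for this example):

```python
def classify(n):
    # Hypothetical unit under test containing one decision with two branches.
    if n < 0:
        return "negative"
    return "non-negative"

# Branch testing: one input per branch, so both outcomes of the decision run.
assert classify(-1) == "negative"      # the "true" branch
assert classify(0) == "non-negative"   # the "false" branch
```

Statement coverage would already be satisfied by these two inputs as well, since together they execute every line of the function.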
Advantages:
Testing can be commenced at an earlier stage; one need not wait for the GUI to be available.
Testing is more thorough, with the possibility of covering most paths.
Disadvantages:
Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation.
Test script maintenance can be a burden if the implementation changes too frequently.
Since this method of testing is closely tied to the application being tested, tools to cater to every kind of implementation/platform may not be readily available.
5 DATABASE TESTING
Computer applications are more complex these days, with technologies like Android and a profusion of smartphone apps. And the more complex the front ends become, the more intricate the back ends are. So it is all the more important to learn about DB testing and to be able to validate databases effectively to ensure secure, high-quality databases.
5.1.1 Objective
1) Data Mapping: In software systems, data often travels back and forth from the UI (user interface) to the backend DB and vice versa. So the following are the aspects to look for:
Check whether the fields in the UI/front end forms are mapped consistently with the corresponding DB table (and also the fields within it). Typically this mapping information is defined in the requirements documents.
Whenever a certain action is performed in the front end of an application, a corresponding CRUD (Create, Retrieve, Update and Delete) action gets invoked at the back end. A tester has to check that the right action is invoked and that the invoked action itself is successful.
2) ACID properties validation: atomicity, consistency, isolation and durability. Every transaction a DB
performs has to adhere to these four properties.
Atomicity means that a transaction either fails or passes as a whole: if even a single part of the transaction fails, the entire transaction fails. This is usually called the “all-or-nothing” rule.
Consistency: A transaction will always result in a valid state of the DB.
Isolation: If there are multiple transactions executed all at once, the result/state of the DB should be the same as if they were executed one after the other.
Durability: Once a transaction is done and committed, no external factor such as power loss or a crash should be able to change it.
3) Data integrity:
This means that following any of the CRUD operations (create, read, update and delete), the updated and most recent values/status of shared data should appear on all forms and screens. A value should not be updated on one screen while an older value is displayed on another. So devise your DB test cases to include checking the data in all the places it appears, to see that it is consistently the same.
4) Business rule conformity: More complex databases mean more complicated components like relational constraints, triggers, stored procedures, etc. So testers have to come up with appropriate SQL queries to validate these complex objects.
1) Transactions:
When testing transactions it is important to make sure that they satisfy the ACID properties. The statements commonly used are BEGIN TRANSACTION, COMMIT (or END) TRANSACTION, and ROLLBACK TRANSACTION.
After these statements are executed, use a SELECT to make sure the changes have been reflected.
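The check described above, rolling back a transaction and then using a SELECT to confirm that no partial change survived, can be sketched with Python's built-in sqlite3 module. The table and account names are invented for this example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER NOT NULL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # the connection context manager commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 80 WHERE name = 'alice'")
        # Simulate a failure before the matching credit is applied: the
        # whole transfer must be undone (the "all-or-nothing" rule).
        raise RuntimeError("simulated mid-transaction failure")
except RuntimeError:
    pass

# Verify atomicity with a SELECT: the partial debit must have been rolled back.
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
assert balance == 100
```

The same pattern applies to any DB-API database: force a rollback, then query the state and compare it with the pre-transaction expected values.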
2) Database schema:
A database schema is nothing but a formal definition of how the data is going to be organized in a DB. To test it:
Identify the requirements based on which the database operates. Sample requirements: primary keys are to be created before any other fields; foreign keys should be completely indexed for easy retrieval and search; field names should follow the agreed naming conventions.
3) Trigger:
When a certain event takes place on a certain table, a piece of code (a trigger) can be automatically executed.
For example, a new student joins a school. The student is taking two classes: math and science. The student is added to the “student” table. A trigger could then add the student to the corresponding subject tables once he is added to the student table.
The common method to test this is to first execute the SQL query embedded in the trigger independently and record the result, then execute the trigger as a whole, and finally compare the two results.
These are tested during both the black box and white box testing phases.
White box testing: Stubs and drivers are used to insert, update or delete data in a way that causes the trigger to be invoked. The basic idea is to test the DB alone, even before the integration with the front end (UI) is made.
Black box testing:
a) Since the UI and DB integration is now available, we can insert/delete/update data from the front end in a way that the trigger gets invoked. SELECT statements can then be used to retrieve the DB data and see whether the trigger was successful in performing the intended operation.
b) The second way to test this is to directly load data that would invoke the trigger and see if it works as intended.
4) Stored Procedures:
Stored procedures are more or less similar to user-defined functions. They can be invoked by a call procedure/execute procedure statement, and the output is usually in the form of result sets.
They are stored in the RDBMS and are available to applications.
White box testing: Stubs are used to invoke the stored procedures and then the results are
validated against the expected values.
Black box testing: Perform an operation from the front end (UI) of the application and check for the execution of the stored procedure and its results.
Automation VB script code can be (the first lines of the function, lost in extraction, are reconstructed here; the pattern is a placeholder for the expected default value):

Function VBScriptRegularexpressionvlaidation(string_to_match)
    Set newregexp = New RegExp
    newregexp.Pattern = "default_value_pattern"   ' placeholder: the expected default value
    newregexp.IgnoreCase = True
    newregexp.Global = True
    VBScriptRegularexpressionvlaidation = newregexp.Test(string_to_match)
End Function

The result of the above code is True if the default value exists, or False if it doesn't.
Checking a unique value constraint can be done exactly the way we did for the default values. Try entering values from the UI that will violate this rule and see if an error gets displayed.
Automation VB script code can be (again reconstructed; the pattern is a placeholder for the uniqueness rule):

Function VBScriptRegularexpressionvlaidation(string_to_match)
    Set newregexp = New RegExp
    newregexp.Pattern = "unique_value_pattern"   ' placeholder: the uniqueness rule
    newregexp.IgnoreCase = True
    newregexp.Global = True
    VBScriptRegularexpressionvlaidation = newregexp.Test(string_to_match)
End Function
For foreign key constraint validation, use data loads that directly input data violating the constraint and see whether the application restricts it. Along with the back end data load, also perform front end UI operations that are going to violate the constraints, and see if the relevant error is displayed.
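The back end half of this check, a direct data load that violates a foreign key constraint, can be sketched with Python's built-in sqlite3 module. The table names are invented for this example; note that SQLite enforces foreign keys only when the pragma is enabled.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE subject (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE enrollment (
    student TEXT,
    subject_id INTEGER REFERENCES subject(id)
);
""")

# A direct data load that violates the constraint must be rejected by the DB.
try:
    conn.execute("INSERT INTO enrollment VALUES ('jo', 999)")  # no subject with id 999
    violated = False
except sqlite3.IntegrityError:
    violated = True

assert violated  # the database restricted the invalid load
```

The equivalent front end test would attempt the same invalid enrollment through the UI and check that the relevant error message is displayed.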
The general test process for DB testing is not very different from any other application. The following are
the steps:
An important part of writing database tests is the creation of test data. You have several strategies for
doing so:
1. Have source test data. You can maintain an external definition of the test data, perhaps in flat
files, XML files, or a secondary set of tables. This data would be loaded in from the external
source as needed.
2. Test data creation scripts. You develop and maintain scripts, perhaps using data manipulation language (DML) SQL code or simply application source code (e.g. Java or C#), which do the necessary deletions, insertions, and/or updates required to create the test data.
3. Self-contained test cases. Each individual test case puts the database into a known state
required for the test.
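The third strategy, self-contained test cases, can be sketched with Python's built-in unittest and sqlite3 modules. The table and test names are invented for this example; each test builds the known database state it needs in setUp.

```python
import sqlite3
import unittest

class SelfContainedDbTest(unittest.TestCase):
    """Strategy 3: each test case puts the database into a known state."""

    def setUp(self):
        # A fresh in-memory database per test keeps test cases independent.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
        self.conn.executemany("INSERT INTO orders (total) VALUES (?)",
                              [(10.0,), (20.0,)])

    def tearDown(self):
        self.conn.close()

    def test_order_count(self):
        # The expected result follows directly from the known state above.
        count = self.conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(count, 2)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SelfContainedDbTest))
```

With an external source or creation scripts (strategies 1 and 2), setUp would instead load the prepared data rather than define it inline.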
Other database features worth testing include:
View definitions
Incoming data values
Referential integrity (RI) rules
With all these features, factors and processes to test in a database, there is an increasing demand for testers to be technically strong in the key DB concepts. Despite some negative beliefs that DB testing creates new bottlenecks and a lot of additional expenditure, this is a realm of testing that is gaining obvious attention and demand.
5.1.4 Tools
Load testing tools (Empirix, Mercury Interactive testing tools, RadView, Web Performance): these tools simulate high usage loads on your database, enabling you to determine whether your system's architecture will stand up to your true production needs.
Test data generators (Data Factory, Datatect, DTM Data Generator, Turbo Data): developers need test data against which to validate their systems. Test data generators can be particularly useful when you need large amounts of data, perhaps for stress and load testing.
Test data management tools (IBM Optim Test Data Management tools): your test data needs to be managed. It should be defined, either manually or automatically (or both), and then maintained under version control. You need to define the expected results of tests and then automatically compare them with the actual results. You may even want to retain the results of previous test runs (perhaps due to regulatory compliance concerns).
Unit testing tools (AnyDbTest, SQLUnit, TSQLUnit for testing T-SQL in MS SQL Server, Visual Studio Team Edition for Database Professionals which includes testing capabilities, XTUnit): tools which enable you to regression test your database.
6 LOGGING TESTING
Treating logs as data gives us greater insight into the operational activity of the systems we
test. Structured logging, which is using a consistent, predetermined message format containing semantic
information, builds on this technique.
Logging levels
DEBUG level messages give highly detailed and/or specific information, only useful for tracking down problems.
INFORMATION messages give general information about what the system is doing. (e.g.
processing file X)
WARNING messages warn the user about things which are not ideal, but should not affect the
system. (e.g. configuration X missed out, using default value)
ERROR messages inform the user that something has gone wrong, but the system should be
able to cope. (e.g. connection lost, but will try again)
CRITICAL messages inform the user when an unrecoverable error occurs (i.e. the system is about to abort the current task or crash).
If logging is a deployed feature of an application, then it too needs testing. But since log output is an integration point, it does not fall under “unit” testing. If log files can contain security flaws, convey data, impact support, and impair performance, then they should be tested to confirm they conform to standards.
Log output can be tested using the appropriate xUnit framework, such as JUnit. During development of a project, the log output changes rapidly as the code changes, so selecting where in the software development life cycle (SDLC) to test logging, or even to specify what logs should contain, is difficult. One approach is that the deployed system will not do any application logging that was not approved by the stakeholders. These approved messages must be “unit” tested, and all development-support logging is removed or disabled except for use in a development environment.
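As a sketch of testing approved log output with an xUnit framework, Python's built-in unittest provides assertLogs for capturing log records. The function under test and its message format are invented stand-ins for stakeholder-approved log output.

```python
import logging
import unittest

def process_file(name, log=logging.getLogger("app")):
    # Hypothetical unit under test; this INFO message stands in for an
    # approved, stakeholder-agreed log line.
    log.info("processing file %s", name)

class LogOutputTest(unittest.TestCase):
    def test_approved_info_message(self):
        # assertLogs captures records emitted under the "app" logger so the
        # approved message text and level can be verified exactly.
        with self.assertLogs("app", level="INFO") as captured:
            process_file("X")
        self.assertEqual(len(captured.records), 1)
        self.assertEqual(captured.records[0].getMessage(), "processing file X")

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(LogOutputTest))
```

Structured logging makes such assertions stronger still, since the test can check individual semantic fields rather than match a free-form string.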