
Confidential

WI-080 Testing Manual

Testing Manual
Process: Software Testing


Content

1 Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions and Acronyms
1.3.1 Terms and Definitions
1.3.2 Acronyms
2 CHAPTER I: STATIC TESTING
2.1 Objective
2.2 What should be tested?
2.3 How to test it?
2.4 Testing Checklist
3 CHAPTER II: TESTING LEVELS
3.1 Unit Testing
3.1.1 Objective
3.1.2 What should be tested?
3.1.3 How to test it?
3.1.4 Tools
3.1.5 Testing Checklist
3.2 Integration Testing
3.2.1 Objective
3.2.2 What should be tested?
3.2.3 How to test it?
3.2.4 Tools
3.3 System Testing
3.3.1 Functional Testing
3.3.1.1 Smoke and Sanity Tests
3.3.1.1.1 Objective
3.3.1.1.2 What should be tested?
3.3.1.1.3 How to test it?
3.3.1.1.4 Tools
3.3.1.2 Regression Tests
3.3.1.2.1 Objective
3.3.1.2.2 What should be tested?
3.3.1.2.3 How to test it?
3.3.1.2.4 Tools
3.3.1.3 Functional Tests
3.3.1.3.1 Functional Checklist
3.3.1.3.1.1 Files
3.3.1.3.1.2 Filenames
3.3.1.3.1.3 Filename Invalid Characters and Error Cases
3.3.1.3.1.4 File Operations
3.3.1.3.1.5 Alerts
3.3.1.3.1.6 Accessibility
3.3.1.3.1.7 Text Accessibility
3.3.1.3.1.8 Menus and Command Bars
3.3.1.3.1.9 Dialog Box Behavior
3.3.1.3.1.10 Dialog Box Interactivity
3.3.1.3.1.11 Dialog Box Look and Feel
3.3.1.3.1.12 Text Entry Fields
3.3.1.3.1.13 Undo and Redo
3.3.1.3.1.14 Printing
3.3.1.3.1.15 Special Modes and States
3.3.1.3.1.16 Dates and Y2K (Year 2000 Bug or Millennium Bug)
3.3.1.3.1.17 Window Interactions
3.3.1.3.1.18 Input Methods
3.3.1.3.2 GUI (Graphical User Interface) Tests
3.3.1.3.3 Functional Heuristics
3.3.2 Non-Functional Testing
3.3.2.1 Installation
3.3.2.1.1 Objective
3.3.2.1.2 What should be tested?
3.3.2.1.3 How will you test it?
3.3.2.1.4 Tools
3.3.2.1.5 You are not done yet
3.3.2.1.5.1 Setup
3.3.2.1.5.2 Upgrades
3.3.2.2 Performance
3.3.2.2.1 Objective
3.3.2.2.2 What should be tested?
3.3.2.2.3 How will you test it?
3.3.2.2.4 Tools
3.3.2.2.5 Testing checklist
3.3.2.3 Volume/Load
3.3.2.3.1 Objective
3.3.2.3.2 What should be tested?
3.3.2.3.3 How will you test it?
3.3.2.3.4 Tools
3.3.2.4 Stress
3.3.2.4.1 Objective
3.3.2.4.2 What should be tested?
3.3.2.4.3 How will you test it?
3.3.2.4.4 Tools
3.3.2.4.5 Testing checklist
3.3.2.5 Usability
3.3.2.5.1 Objective
3.3.2.5.2 What should be tested?
3.3.2.5.3 How will you test it?
3.3.2.5.4 Tools
3.3.2.6 Security
3.3.2.6.1 Objective
3.3.2.6.2 What should be tested?
3.3.2.6.3 How will you test it?
3.3.2.6.4 Tools
3.3.2.6.5 Testing checklist
3.3.2.7 Internationalization and localization
3.3.2.7.1 Objective
3.3.2.7.2 What should be tested?
3.3.2.7.3 How will you test it?
3.3.2.7.4 Tools
3.3.2.7.5 Testing checklist
3.3.2.8 Accessibility
3.3.2.8.1 Objective
3.3.2.8.2 What should be tested?
3.3.2.8.3 How will you test it?
3.3.2.8.4 Tools
3.3.2.9 Compatibility
3.3.2.9.1 Objective
3.3.2.9.2 What should be tested?
3.3.2.9.3 How will you test it?
3.3.2.9.4 Tools
3.3.2.9.5 Testing Checklist
3.3.2.9.5.1 Network Connectivity
3.3.2.9.5.2 Platform
3.3.2.9.5.3 CPU Configurations
3.3.2.9.5.4 Hardware Configurations
3.3.2.9.5.5 Application Configuration and Interoperability
3.3.2.9.5.6 Configuration
3.3.2.9.5.7 Interoperability
3.4 User Acceptance Testing
3.4.1 Objective
3.4.2 What should be tested?
3.4.3 How to test it?
3.4.4 Tools
4 TESTING METHODS
4.1 BLACK-BOX TESTING
4.2 WHITE BOX TESTING
5 DATABASE TESTING
5.1.1 Objective
5.1.2 What should be tested?
5.1.3 How to test it?
5.1.4 Tools
6 LOGGING TESTING


1 Introduction

1.1 Purpose

This document is designed to present the classifications and categories of tests performed within
the testing department.

Standards Mapping:

Standard          Chapters
CMMI-DEV v1.3     GP 2.1
ISO 9001:2008     4.1, 4.2.1, 5.3

1.2 Scope

The scope of this work instruction is to present, in a structured form, all testing types performed at
the different testing levels when testing a software solution.

Each chapter regarding testing types/testing levels will present the following information:
objectives, what should be tested, how to test it, frequently used tools, and a testing checklist.

1.3 Definitions and Acronyms

In addition to the definitions and acronyms described below, the terms, definitions, and acronyms
described in QS-002_Definitions and Acronyms also apply.

1.3.1 Terms and Definitions

Test Case         a sequence of actions performed in order to verify that a relevant aspect of a
                  software product, within its validation process, complies with predefined rules

Unit Test         a test, usually implemented in the language used to develop the application,
                  whose purpose is to test a code entity (unit) from a functional standpoint

Code Inspection   the procedure for verifying how the source code is written, with the purpose of
                  checking compliance with the general code-writing regulations and with the
                  writing conventions specific to each development language

Requirement       a specific function to be respected or implemented by an application

Tool              a software application

Procedure         a set of actions to be carried out in the specific situations of implementing a
                  process


Developer         the person responsible for developing a project, or a part of it, by using a
                  programming language and writing the specific command sequences for the
                  functionalities to be developed

Tester            the person responsible for executing a Test Case

Test Developer    the person responsible for developing Test Cases, as sequences of verification
                  actions and checkpoints, in order to ensure that a functionality of a software
                  product is tested

Automated testing a testing method employing a specialised tool, consisting of the automated
                  execution of a set of actions and verifications; the execution of automated
                  testing is also referred to as "PlayBack"

1.3.2 Acronyms

Req Requirement
PM Project Manager
TM Test Manager
QM Quality Manager
Dev Developer
SD Software Developer
TD Test Developer
QD Quality Developer
SDLC Software Development Life Cycle
UI User Interface
GUI Graphical User Interface

2 CHAPTER I: STATIC TESTING

Static testing means reviewing documentation and product deliverables.

2.1 Objective

By reviewing the above-mentioned items, we should be able to find and fix defects early in the
software development lifecycle (SDLC).


2.2 What should be tested?

Almost anything and everything can be reviewed, for example: requirements, system and program
specifications, code, and other deliverables.

During the SDLC, various deliverables are created and each of them contributes to the delivered
solution. It is essential that all significant omissions are discovered early in the SDLC, so that the final
delivered product will function according to the client’s specifications.

When reviewing documents, we can check the following list:

 It is fit for purpose
 It is complete
 It meets all its objectives
 It is ready to become input to the next stage of the process

2.3 How to test it?

The purpose of testing documents is to verify their integrity and correctness. This category includes
the following deliverables of the process associated with a project:

 Testing the analysis and design documents (including Use Cases)

These documents are usually tested during the training phase and the System Test Plan creation
phase as they are basically the input documents for these phases.

 Testing the documents associated with the final product

The documents associated with the final product include: the installation manual (if created by the
testing department), user guides, and other associated manuals, such as support and training materials
for the final user (if created by the testing department).

 Testing the help topics

2.4 Testing Checklist

You are not done testing unless you have reviewed all documentation,
a) to ensure that it is correct, and
b) to help generate test cases.
There have been many cases of documentation depicting UI that does not exist in the actual
application, and of UI in the application that is nowhere to be found in the documentation.
Other collateral can be useful to review as well - product support calls for the previous version of the
application, for example.
Source code reviews are a simple way to find those places where supposed-to-be-temporary
message boxes and other functionalities are about to be shipped to paying customers.


 Review postponed and otherwise not-fixed bugs from previous releases
 Review product support issues from previous releases
 Review error reports submitted by customers for previous releases
 Verify that each error message which can be presented to your customer is accurate and easily
understood
 Verify that each input validation error message refers to the correct problem
 Verify all tutorials are correct: the steps are correct, UI matches the actual UI, and so on
 Review every help topic for technical accuracy
 Verify each piece of context sensitive help is correct
 Verify every code sample functions correctly
 Verify every code sample follows appropriate coding standards and guidelines
 Review all source code for:
 Correctness
 Lines of code which have not been executed by a test case
 Security issues (see the Security Testing checklist for more details)
 Potential memory leaks
 Dead code
 Correct error handling
 Use of obsolete and banned function calls
 Compliance with appropriate coding standards and guidelines
 Inappropriate user-facing messages
 Verify you have discussed the feature design and target users with your feature team
 Verify you have asked your developer which areas they think could use special attention
 Verify you have discussed your feature with your product support team
 Verify you have brainstormed and reviewed your test cases with your feature team and with your Test
team
 Verify you have discussed cross-feature implications with your feature team and with your Test team
 Verify you have completed all appropriate feature-specific testing
 Verify you have completed all appropriate cross-feature integration testing
 Verify you have completed all appropriate real-world “use-it-the-way-your-user-will” testing

3 CHAPTER II: TESTING LEVELS

The testing levels described in this chapter are essential stages in the development life cycle. Their
purpose is to ensure the quality of the product to be delivered:

 Unit Tests,
 Integration Tests,
 System Tests and
 (User) Acceptance Tests.


3.1 Unit Testing

Unit testing, also known as component testing, refers to tests that verify the functionality of a
specific section of code, usually at the function level. In an object-oriented environment, this is usually at
the class level, and the minimal unit tests include the constructors and destructors. During this phase the
following tests might be included:

o static code analysis
o data flow analysis
o metrics analysis
o peer code reviews
o code coverage analysis
o other software verification practices.

Unit tests must be performed by the project's development team, for all builds released to testing.

3.1.1 Objective

The objective of unit testing is to isolate each part of the program and validate its correctness.

Unit testing also helps to find problems early in the project, facilitates changes to the solution, and
simplifies the integration of the different parts of the program.

3.1.2 What should be tested?

Broadly speaking, you should test your custom business logic. You might choose to implement just
a few tests that only cover the code paths that you believe are most likely to contain a bug. Or, you might
choose to implement a large suite of unit tests that are incredibly thorough and test a wide variety of
scenarios. You should be sure to write unit tests that verify your code behaves as expected in "normal"
scenarios as well as in more "unexpected" scenarios, like boundary conditions or error conditions.

3.1.3 How to test it?

For unit tests, start with testing that it does what it is designed to do. Typically, each unit test sends
a specific input to a method and verifies that the method returns the expected value, or takes the expected
action. Unit tests prove that the code you are testing does in fact do what you expect it to do.

A unit test should:

 Set up all conditions for testing.
 Call the method (or Trigger) being tested.
 Verify that the results are correct.
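
As a concrete illustration of this set-up/call/verify structure, here is a minimal sketch of a unit test
written with JUnit (the first tool listed below). The ShoppingCart class is hypothetical and is included
inline only so the example is self-contained:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class ShoppingCartTest {

        // Minimal, hypothetical class under test, included so the example compiles.
        static class ShoppingCart {
            private int totalInCents = 0;

            void addItem(String name, int priceInCents) {
                totalInCents += priceInCents;
            }

            int totalInCents() {
                return totalInCents;
            }
        }

        @Test
        void addingAnItemIncreasesTheTotal() {
            // 1. Set up all conditions for testing.
            ShoppingCart cart = new ShoppingCart();

            // 2. Call the method being tested.
            cart.addItem("book", 2500); // price in cents

            // 3. Verify that the results are correct.
            assertEquals(2500, cart.totalInCents());
        }
    }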


3.1.4 Tools

JUnit: a regression testing framework used by developers who implement unit tests in Java
(freeware).

C++Test: a C/C++ unit testing tool that automatically tests any C/C++ class, function, or component.

Rational Test RealTime Unit Testing: performs black-box/functional testing, i.e. verifies that all units
behave according to their specifications without regard to how that functionality is implemented. The
Unit Testing feature has the flexibility to naturally fit any development process by matching and
automating developers' and testers' work patterns, allowing them to focus on value-added tasks.

JsUnit (Hieatt): a unit testing framework for client-side (in-browser) JavaScript; essentially a port of
JUnit to JavaScript.

VectorCAST: a software module test system.

Check: a unit test framework for C (freeware).

CppUnit: a C++ unit test tool (freeware).

HtmlUnit: a Java unit testing framework for web applications (freeware).

Mock Objects: a framework for developing unit tests in the mock object style (freeware).

PerlUnit: a unit test framework for Perl (freeware).

vbUnit3 Basic: a unit test tool for Visual Basic and COM objects (freeware).

.TEST: automatic static analysis and unit testing for .NET.


3.1.5 Testing Checklist

 Test Around Your Change. Consider what it might affect beyond its immediate intended target. Think
about related functionality that might have similar issues. If fixing these surrounding problems is not
relevant to your change, log bugs for them.
 Use Code Coverage. Code coverage can tell you what functionality has not yet been tested. Don't,
however, just write a test case to hit the code. Instead, let the uncovered code help you determine
what classes of testing and which test cases you are missing.
 Consider Testability. Hopefully you have considered testability throughout your design and
implementation process. If not, think about what someone else will have to do to test your code. What
can you do/do you need to do in order to allow proper, authorized verification? (Test Driven Design)
 Ways To Find Common Bugs:
 Reset to default values after testing other values (e.g., pairwise tests, boundary condition tests)
 Look for hard coded data (e.g., "c:\temp" rather than using system APIs to retrieve the temporary
folder), run the application from unusual locations, open documents from and save to unusual
locations
 Run under different locales and language packs
 Run under different accessibility schemes (e.g., large fonts, high contrast)
 Save/Close/Reopen after any edit
 Undo, Redo after any edit
 Test Boundary Conditions: Determine the boundary conditions and equivalency classes, and then test
just below, at, in the middle of, and just above each condition. If multiple data types can be used,
repeat this for each option (even if your change is to handle a specific type). For numbers, common
boundaries include:
 smallest valid value
 at, just below, and just above the smallest possible value
 -1
 0
 1
 some
 many
 at, just below, and just above the largest possible value
 largest valid value
 invalid values
 different-but-similar datatypes (e.g., unsigned values where signed values are expected)
 for objects, remember to test with null and invalid instances
 Other Helpful Techniques:
 Do a variety of smallish pairwise tests to mix-and-match parameters, boundary conditions, etc.
One axis that often brings results is testing both before and after resetting to default values.
 Repeat the same action over and over and over, both doing exactly the same thing and changing
things up.
 Verify that every last bit of functionality you have implemented is discussed in the specification
and matches what the specification describes should happen. Then look past the specification
and think about what is not happening and should.
 "But a user would never do that!": To quote Jerry Weinberg, When a developer says, "a user
would never do that," we say, "Okay, then it won't be a problem to any user if you write a little
code to catch that circumstance and stop some user from doing it by accident, giving a clear
message of what happened and why it can't be done." If it doesn't make sense to do it, no user
will ever complain about being stopped.
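
To make the boundary-condition items above concrete, here is a minimal sketch using JUnit
parameterized tests. The validation rule is hypothetical (a valid quantity lies between 1 and 100
inclusive); the values cover just below, at, in the middle of, and just above each boundary, plus the
extreme representable values:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    class QuantityValidatorTest {

        // Hypothetical rule under test: a valid quantity is 1..100 inclusive.
        static boolean isValidQuantity(int quantity) {
            return quantity >= 1 && quantity <= 100;
        }

        @ParameterizedTest
        @ValueSource(ints = {1, 2, 50, 99, 100}) // smallest valid, middle, largest valid
        void acceptsValuesInsideTheBoundaries(int quantity) {
            assertTrue(isValidQuantity(quantity));
        }

        @ParameterizedTest
        @ValueSource(ints = {Integer.MIN_VALUE, -1, 0, 101, Integer.MAX_VALUE})
        void rejectsValuesOutsideTheBoundaries(int quantity) {
            assertFalse(isValidQuantity(quantity));
        }
    }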

3.2 Integration Testing

Integration testing is any type of software testing that seeks to verify the interfaces between
components against a software design. Software components may be integrated in an iterative way or all
together ("big bang"). Normally the former is considered a better practice since it allows interface issues to
be located more quickly and fixed.

Integration testing works to expose defects in the interfaces and interaction between integrated
components (modules). Progressively larger groups of tested software components corresponding to
elements of the architectural design are integrated and tested until the software works as a system.

 Integration in the small is bringing together individual components (modules/units) that have already
been tested in isolation. We are trying to find faults that couldn’t be found at an individual component
testing level. Integration testing in the small makes sure that the things that are communicated are
correct from both sides, not just from one side of the interface.
 Integration in the large: This stage of testing usually occurs between ‘System’ and ‘Acceptance’
testing and tests the inputs and outputs from a system to another system or other systems.

3.2.1 Objective

Integration in the small: The objective is to test that the ‘set’ of components function together correctly
by concentrating on the interfaces between the components.

Integration in the large: The objective is to test that the ‘set’ of systems/modules function together
correctly.

3.2.2 What should be tested?

WEB SERVICES TESTING


It is usually stress-free for a tester to migrate from one technology to another, but at times it is
more difficult to move from one methodology to another. Jumping from a custom application to
Commercial Off-The-Shelf (COTS) software is still an easy transition: the tester has a good idea of
which parts they need to focus on and which modules are more susceptible to defects than others.
Web services come into the picture with the dispersed architecture of loosely coupled systems,
which may be technically separate but frequently need to communicate with each other to exchange
data.
A web service provides a simple communication interface between these systems, using a
standard data transfer mechanism.

Levels in Web service testing:


 Include what is expected from a Web service with respect to business requirements
 Gather and understand the requirements and the data transfer standards
 Design test cases keeping business requirements in mind; the more data scenarios you
have, the better the quality of the deliverable
 It is difficult to test complete end-to-end business flows with all the possible data
scenarios. The trick is to have an automated tool which can shorten the testing of web services, such
as Optimyz, WebInject, or SoapUI.
What should be tested?
 Functionality: A key to testing Web services is ensuring their functional quality, because when you
string together a set of services, you introduce many more opportunities for error or failure. You can
take into consideration:
 Specification Review (SR)
 Test Case Development (TCD)
 Test Execution, examination of requests & responses of web services
 Performance: Testing web services performance may be complicated. A key point is to know the
performance requirements in the most accurate manner. For example:
 A good requirement: This service has been identified as serving 50,000 concurrent users
with 10 second average response time
 A bad requirement: This service should serve > 4000 concurrent users, and the response
should be fast
 Security: Web Services are wide-open in a network. This element opens up a host of
vulnerabilities, such as penetration, Denial-of-Service (DOS) attacks, and great volumes of spam
data, etc. Distinctive security policies have to be imposed at the network level to create a sound
Service Oriented Architecture (SOA). There are certain security policies which are enforced during
data transfer, and user tokens or certificates are common sights where data is protected with a
password. Precise test cases aimed at exercising these policies need to be designed to completely
test the Web service security:
 Authentication – The process of assuring that the request actually originated from an
authorized source. In addition to authenticating the source, the service provider may need
to prove the message origin to other consumers
 Authorization – This provides assurance that only authorized requesters are allowed to
access the service. This goes hand in hand with authentication to ensure that malicious
parties cannot mimic a valid client
 Penetration – A Penetration Test simulates an attack by a malicious party. This testing
attempts to find and exploit vulnerabilities to determine what information and access is
able to be gained. This is designed to mimic the actions of an attacker exploiting
weaknesses in network security without the usual risks
 Protocol / encryption standards testing – this provides assurance that the service
transactions are encrypted using the defined encryption techniques. Secure encryption
standards should prevent attempts to decrypt traffic, known as encryption attacks

Taking up web services tosses many challenges at testers; it is very important to know what needs
to be done up front, rather than diving in first and learning costly lessons later.


3.2.3 How to test it?

One way you can test web services is by calling web methods from unit tests. It is much like
testing other code by using unit tests, using Assert statements. The same range of results is produced.
There are two ways to test web services with unit tests:
 The web service runs on an active web server. Testing a web service that runs on a local or remote
web server, such as IIS, has no special requirements. Simply add a web reference and call the web
methods of the web service from your development solution.
 The web service is not hosted in an active web server. You can test a web service that runs on your
local computer and not in a web server, such as IIS. Just use an attribute provided by the Team
System testing tools to start ASP.NET Development Server, which creates a temporary server that
hosts the web service you are testing.
Applications need to be tested considering the following aspects:
 End to end from the requester perspective
 At the unit level during development
 At service level
 Interface validation
 To ensure functionality under boundary load conditions
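
The same pattern applies outside the Microsoft stack. As a hedged sketch, the JUnit test below calls
a hypothetical REST endpoint using the HTTP client built into Java 11+ and asserts on both the status
code and the payload; the URL and JSON field are invented for the example:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;

    class OrderServiceTest {

        @Test
        void getOrderReturnsOkAndJsonBody() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/orders/42")) // hypothetical endpoint
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // Assert on both the transport-level status and the payload.
            assertEquals(200, response.statusCode());
            assertTrue(response.body().contains("\"orderId\": 42")); // hypothetical JSON field
        }
    }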

3.2.4 Tools

Soap UI: SoapUI is the world's leading open-source functional testing tool, mainly used for API
testing. SoapUI supports multiple protocols such as SOAP, REST, HTTP, JMS, AMF, and JDBC, and
enables you to create advanced performance tests very quickly and run automated functional tests.

LoadUI: offers real-time, drag-and-drop test creation; interactive load testing; drag-and-drop
distributed testing; cloud load testing; advanced reporting; and detailed analysis.

WCF Test Client: Windows Communication Foundation (WCF) Test Client (WcfTestClient.exe) is a
GUI tool that enables users to input test parameters, submit that input to the service, and view the
response that the service sends back. It provides a seamless service testing experience when
combined with WCF Service Host. You can find WcfTestClient.exe in the following location:
C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\

JMeter: Apache JMeter may be used to test performance on both static and dynamic resources
(files, web dynamic languages such as PHP, Java, and ASP.NET, Java objects, databases and
queries, FTP servers, and more). It can be used to simulate a heavy load on a server, group of
servers, network, or object, to test its strength or to analyze overall performance under different load
types. You can use it to make a graphical analysis of performance or to test your
server/script/object behavior under heavy concurrent load.

RESTClient: supports all HTTP methods from RFC 2616 (HTTP/1.1) and RFC 2518 (WebDAV). You
can construct a custom HTTP request (custom method, resource URI, and HTTP request body) to
test requests directly against a server.

Optimyz: WebServiceTester is an end-to-end product offering automatic test generation; functional,
regression, and load testing; conformance testing against WS-I Profiles; BPEL-based orchestration
testing; secure web services testing; and debugging and diagnostics.

Mercury (now HP): an "end to end" solution for web services testing in the form of three offerings:
LoadRunner, QuickTest Professional, and Business Process Testing, its newest tool that sits on top
of LoadRunner.

3.3 System Testing

System testing concentrates on a completely integrated system to verify that it meets all its
requirements. System Tests will be performed for all builds received from development.
The following types of tests can be performed during System Test Phase:

• Functional
o Smoke and Sanity
o Regression
• Non-Functional
o Installation
o Performance
o Volume
o Stress
o Usability
o Security
o Internationalization and localization
o Accessibility
o Compatibility


3.3.1 Functional Testing

Functional testing ensures that the application was developed in accordance with all requirements
stated in the Requirements Specification.

Other specific types of functional tests are:

 Smoke and Sanity Tests
 Regression Tests

3.3.1.1 Smoke and Sanity Tests


Smoke testing consists of minimal attempts to operate the software, designed to determine
whether there are any basic problems that will prevent it from working at all. Smoke Testing is performed
right after the build is ready for testing, to determine whether the critical functionalities of the program are
working fine. It is performed before any detailed functional or regression tests are executed on the build.
A sanity test is a very brief run-through of a functionality, to assure that part of the system or
methodology works roughly as expected, often prior to a more exhaustive round of testing.
After receiving a build with minor changes in the source code, sanity testing is performed to
assure that the bugs have been fixed and that no further issues have been introduced by these
changes. The goal is to determine that the proposed functionality works as expected. If the sanity test
fails, the build is rejected, to save the time and cost of more rigorous testing. Sanity testing is a
subset of regression testing.
Smoke and Sanity Testing must be performed on every build received from development for
testing. If the smoke and sanity test results are FAIL, the system is considered inoperable.

3.3.1.1.1 Objective
The objective of Smoke testing is to verify the "stability" of the system in order to proceed with
more rigorous testing.
The objective of Sanity testing is to verify the "rationality" of the system in order to proceed with
more rigorous testing.

3.3.1.1.2 What should be tested?


Smoke testing: the critical functionalities of the application/system under test.
Sanity testing: positive testing; verify that no further issues have been introduced with the fixing of
bugs. Sanity testing exercises only the particular component of the entire system.

3.3.1.1.3 How to test it?


Smoke testing: manually test the critical functionalities, or use scripts that automatically check the
expected results of the major functionalities.
Sanity testing: manual or automated testing.
Both smoke and sanity tests can be executed manually or using an automation tool. When
automated tools are used, the tests can be initiated by the same process that generates the build itself.
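
For example, an automated smoke test can be scripted with Selenium WebDriver (listed in the Tools
table below). This is only a sketch: the URL, element IDs, and expected text are hypothetical, and it
assumes a ChromeDriver binary is available on the test machine:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class LoginSmokeTest {

        private WebDriver driver;

        @BeforeEach
        void startBrowser() {
            driver = new ChromeDriver(); // assumes a ChromeDriver binary is installed
        }

        @AfterEach
        void stopBrowser() {
            driver.quit();
        }

        @Test
        void userCanLogIn() {
            driver.get("http://localhost:8080/login"); // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // A single coarse check: the critical path works at all.
            assertTrue(driver.getPageSource().contains("Welcome"));
        }
    }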


3.3.1.1.4 Tools

OpenScript: an updated scripting platform for creating automated, extensible test scripts in Java.
Combining an intuitive graphical interface with the robust Java language, OpenScript serves needs
ranging from novice testers to advanced QA automation experts.

Selenium: has the support of some of the largest browser vendors, who have taken (or are taking)
steps to make Selenium a native part of their browser. It is also the core technology in countless
other browser automation tools, APIs, and frameworks.

3.3.1.2 Regression Tests


Regression testing focuses on finding defects after a major code change has occurred.
Regression tests can be performed whenever a software functionality that was previously
working correctly generates results that differ from the expected ones. Typically, regressions occur
as an unintended consequence of program changes, when a newly developed part of the software
collides with previously existing code. Common methods of regression testing include re-running
previously run tests and checking whether previously fixed faults have re-emerged.
The depth of testing depends on the phase of the release process and the risk of the added
features. Regression tests can either be complete, for changes added late in the release or deemed
risky, or very shallow, consisting of positive tests on each feature, for changes made early in the
release or deemed low risk.

3.3.1.2.1 Objective
The objective of this test cycle is to ensure that new functionalities do not cause problems with the
existing software. This usually involves executing a set of repeatable tests to ensure that the new
software produces the same set of results as the original tests.

3.3.1.2.2 What should be tested?


The functionalities that could have been affected by the current build.
In order to determine which functionalities could have been affected, you need to perform a risk
analysis, taking as input the functionalities delivered in the current build/release.

3.3.1.2.3 How to test it?


Common methods of regression testing include rerunning previously completed tests and
checking whether program behavior has changed and whether previously fixed faults have re-emerged.
Regression testing can be performed to test a system efficiently by systematically selecting the
appropriate minimum set of tests needed to adequately cover a particular change.
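
One lightweight way to select such a minimum set is to tag each test case with the feature areas it
covers and run only the tags touched by the current change. A sketch using JUnit 5 tags (the tag
names and the scenario names are hypothetical):

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class CheckoutRegressionTest {

        @Test
        @Tag("checkout")
        void discountIsAppliedOnlyOnce() {
            // Previously passing test; re-run whenever checkout code changes.
        }

        @Test
        @Tag("checkout")
        @Tag("pricing")
        void previouslyFixedRoundingFaultHasNotReEmerged() {
            // Guards a previously fixed fault against re-emerging.
        }
    }

With Maven Surefire, for instance, such a run can then be restricted to the affected area with
mvn test -Dgroups=checkout.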

3.3.1.2.4 Tools

OpenScript: an updated scripting platform for creating automated, extensible test scripts in Java.
Combining an intuitive graphical interface with the robust Java language, OpenScript serves needs
ranging from novice testers to advanced QA automation experts.

Selenium: has the support of some of the largest browser vendors, who have taken (or are taking)
steps to make Selenium a native part of their browser. It is also the core technology in countless
other browser automation tools, APIs, and frameworks.

3.3.1.3 Functional Tests

3.3.1.3.1 Functional Checklist

3.3.1.3.1.1 Files
You are not done testing unless you have looked at each and every file that makes up your
application, for they are full of information which is often ignored.
 Verify that the version number of each file is correct.
 Verify that the assembly version number of each managed assembly is correct. Generally the
assembly version number and the file version number should match. They are specified via
different mechanisms however, and must explicitly be kept in sync.
 Verify that the copyright information for each file is correct.
 Verify that each file is digitally signed - or not, as appropriate. Verify that its digital signature is
correct.
 Verify that each file is installed to the correct location. (Also see the Setup Checklist.)
 Verify you know the dependencies of each file. Verify each dependency is either installed by your
setup or guaranteed to be on the machine.
 Check what happens when each file - and each of its dependencies - is missing.
 Check each file for recognizable words and phrases. Determine whether each word or phrase you
find is something you are comfortable with your customers seeing.

3.3.1.3.1.2 Filenames
You are not done testing yet unless you have tested the following test cases for filenames:
 Single character filenames
 Short filenames
 Long filenames
 Extra-long filenames
 Filenames using text test cases
 Filenames containing reserved words
 Just the filename (file.ext)
 The complete path to the file (c:\My\Directory\Structure\file.ext)
 A relative path into a subfolder (Sub\Folder\file.ext)
 A relative path into the current folder (.\file.ext)
 A relative path into a parent folder (..\Parent\file.ext)
 A deeply nested path
(Some\Very\Very\Very\Very\Very\Deeply\Nested\File\That\You\Will\Never\Find\Again\file.ext)
 UNC network paths (\\server\share\Parent\file.ext)
 Mapped drive network paths (Z:\Parent\file.ext)


Filenames are interesting and a common source of bugs. Microsoft Windows applications that
don't guard against reserved words set themselves up for a Denial Of Service attack. Applications on any
operating system that allow any old file to be opened/saved/modified leave a gaping hole into "secured"
files. Some users stuff every document they've ever created into their user folder. Other users create a
unique folder for each document. Certain characters are allowed in filenames that aren't allowed
elsewhere, and vice versa. Spending some focused time in this area will be well worth your while.

3.3.1.3.1.3 Filename Invalid Characters and Error Cases


You are not done testing yet unless you have checked for invalid characters in filenames, and for
reserved filenames. Operating systems tend to throw an alert if you try to use wildcards (e.g., '*') in
filenames. They may also treat certain filenames specially. For example, Microsoft Windows provides a
single API for creating/opening files, communication ports, and various other cross-process
communication mechanisms. Well-known communication ports (e.g., COM1) are addressed by "filename"
just as though they were a file - this means that you can't use "COM1" for a physical file on disk.
Testing for this is easy: brainstorm a list of interesting test cases, then execute each one against each
of your application's dialog boxes, command-line arguments, and APIs that take a filename. Illegal
characters will probably throw an error, but trying to open a reserved filename is likely to block your app.
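
As a sketch of what such a brainstormed list can look like when automated, the parameterized JUnit
test below drives both the valid filename shapes from the checklist above and reserved or wildcard
names through a single entry point. The canOpen method is a naive, hypothetical placeholder for the
application's real file-handling API, included only so the example is self-contained:

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    class FilenameHandlingTest {

        // Naive placeholder for the application's real file-handling API.
        static boolean canOpen(String name) {
            String base = name.substring(name.lastIndexOf('\\') + 1);
            String stem = base.contains(".") ? base.substring(0, base.indexOf('.')) : base;
            boolean reserved = stem.equalsIgnoreCase("COM1")
                    || stem.equalsIgnoreCase("LPT1")
                    || stem.equalsIgnoreCase("NUL");
            boolean wildcard = base.contains("*") || base.contains("?");
            return !reserved && !wildcard && !base.isEmpty();
        }

        @ParameterizedTest
        @ValueSource(strings = {
                "a.ext",                                  // single character filename
                "file.ext",                               // just the filename
                "c:\\My\\Directory\\Structure\\file.ext", // complete path
                "Sub\\Folder\\file.ext",                  // relative path into a subfolder
                ".\\file.ext",                            // relative path into the current folder
                "..\\Parent\\file.ext",                   // relative path into a parent folder
                "\\\\server\\share\\Parent\\file.ext",    // UNC network path
                "Z:\\Parent\\file.ext"                    // mapped drive network path
        })
        void acceptsValidFilenames(String name) {
            assertTrue(canOpen(name));
        }

        @ParameterizedTest
        @ValueSource(strings = {"COM1", "COM1.txt", "LPT1", "NUL", "file*.ext", "file?.ext"})
        void rejectsReservedAndWildcardNames(String name) {
            assertFalse(canOpen(name));
        }
    }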

3.3.1.3.1.4 File Operations


You are not done testing unless you have thoroughly tested your application's Open, Save, and
Save As functionality. It is important to verify the correct thing happens under the following conditions:
 Open each supported file type and version and Save As each supported file type and version.
Especially important is to open from and save as the previous version of your native format.
 Open each supported file type and version and Save. If the file type and version can be selected during
a Save operation (as opposed to a Save As operation), Save to each supported file type and version.
More usually, Save saves to the current version only.
 Roundtrip from each supported version to the current version and back to the previous version. Open
the resulting file in that version of your application. Does it open correctly? Are new features correctly
converted to something the previous version understands? How are embedded objects of previous
versions handled?
 Open files saved in the current version of your application in previous versions of your application. If
the document opens, how are features added in the new version handled? If the document does not
open, is the resulting error message clear and understandable?
 Open from and Save and Save As to different file systems (e.g., FAT and NTFS) and protocols (e.g.,
local disk, UNC network share, http://). The operating system generally hides any differences between
types of file systems; your application probably has different code paths for different protocols
however.
 Open, Save, and Save As via the following mechanisms (as appropriate):
 Menu item
 Toolbar item
 Hot key (e.g., Control+S for Save)
 Most Recently Used list
 Microsoft SharePoint document library
 Context menu(s)
 The application’s Most Recently Used list
 The operating system’s Most Recently Used list
 Drag-and-drop from the file system explorer
 Drag-and-drop from your desktop

 Drag-and-drop from another application
 Command line
 Double-click a shortcut on your desktop
 Double-click a shortcut in an email or other document
 Embedded object
 Open from and Save and Save As to the following locations:
 Writable files
 Read-only files
 Files to which you do not have access (e.g., files whose security is set such that you cannot
access them)
 Writable folders
 Read-only folders
 Folders to which you do not have access
 Floppy drive
 Hard drive
 Removable drive
 USB drive
 CD-ROM
 CD-RW
 DVD-ROM
 DVD-RW
 Open from and Save and Save As to various types and speeds of network connections. Dial-up and
even broadband connections have different characteristics than a fast local network.
 Open files created on (and Save and Save As to as appropriate):
 A different operating system
 An OS using a different system locale
 An OS using a different user locale
 A different language version of your application
 Open from and Save and Save As to filenames containing
 The Text Entry Field Checklist, as appropriate
 The Filenames Checklist list, as appropriate
 The Invalid Filenames Checklist list
 Spaces
 Cause the following to occur during Open, Save, and Save As operations:
 Drop all network connections
 Fail over to a different network connection
 Reboot the application
 Reboot the machine
 Sleep the machine
 Hibernate the machine
 Put AutoSave through its paces. What happens when you AutoSave every zero minutes? Every
minute? With a very big document? If the AutoSave timer is per document, what happens when
multiple AutoSaves kick off simultaneously, or while another AutoSave is in progress? Does file
recovery from AutoSave work as you expect? What happens if the application crashes during an
AutoSave? During recovery of an AutoSaved document?
 Save and Save as in the following conditions:
 No documents are dirty (i.e., contain unsaved changes)
 One document is dirty
 Multiple documents are dirty and the user chooses to save all of them
 Multiple documents are dirty and the user chooses to save none of them
 Multiple documents are dirty and the user chooses to save only some of them
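
Many of the Open/Save cases above reduce to a roundtrip check: save a document, reopen it, and
compare. A minimal sketch with JUnit follows; the Document type and its save/open/content methods
are hypothetical stand-ins for the application's API, implemented inline so the example runs:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.nio.file.Files;
    import java.nio.file.Path;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.io.TempDir;

    class SaveOpenRoundtripTest {

        // Minimal, hypothetical placeholder for the application's document API.
        record Document(String content) {
            void save(Path file) throws Exception {
                Files.writeString(file, content);
            }

            static Document open(Path file) throws Exception {
                return new Document(Files.readString(file));
            }
        }

        @Test
        void savedDocumentReopensUnchanged(@TempDir Path tempFolder) throws Exception {
            Path file = tempFolder.resolve("roundtrip.ext");

            Document original = new Document("some content");
            original.save(file);

            Document reopened = Document.open(file);
            assertEquals(original.content(), reopened.content());
        }
    }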

3.3.1.3.1.5 Alerts
You are not done testing yet unless you have searched out every alert, warning, and error
message and dialog box your application can display and checked the following:

3.3.1.3.1.5.1 Content
 Verify that you understand every condition that can cause the alert to display, and that you have test
cases for each condition (or have explicitly decided to *not* test specific conditions).
 Verify that the alert is in fact needed. For example, if the user can easily undo the action, asking them
whether they really want to do it is not necessary.
 Verify that the alert first identifies the problem and then presents the solution. Basically, treat your
customers like smart, knowledgeable people and help them understand what the problem is and what
they can do about it.
 Verify that the alert text does not use an accusatory tone but rather is polite and helpful. Again, let
them know what happened, what the application is doing to remedy the situation, and what they can do
to prevent it from happening in the future.
 Verify the alert text is correct and appropriate for the situation.
 Verify the alert text is consistent in its wording and style, both to itself as well as to each other alert.
 Verify the alert text is as succinct as possible but no more succinct. Hint: If the alert text is longer than
three lines, it's probably too long.
 Verify the alert text contains complete sentences which are properly capitalized and punctuated.
 Verify the alert text does not use abbreviations or acronyms. (Discipline-specific acronyms may be OK,
if you are confident that all of your users will know what they mean.)
 Verify the alert text uses the product's name, not pronouns such as "we" or "I".

3.3.1.3.1.5.2 Functionality
 Verify the alert's title bar contains the name of the product (e.g., "Acme Word Processor").
 Verify each button works correctly.
 Verify each button has a unique access key.
 Verify the buttons are centered below the message text.
 Verify any graphics on the alert are appropriate and correctly placed. For Microsoft Windows
applications, there are standard icons for Informational, Warning, and Critical alerts, and these icons
are typically displayed to the left of the alert text.

3.3.1.3.1.6 Accessibility
You are not done testing yet unless you have verified your application integrates with the
accessibility features of your operating system. Accessibility features are vital to customers who are blind,


deaf, or use assistive input devices, but they are also extremely useful to many other people as well. For
example, comprehensive large font support will be much appreciated by people with failing eyesight
and/or high DPI screens.
Some of the following terms and utilities are specific to Microsoft Windows; other operating
systems likely have something similar.
 Verify that every control on every dialog and other user interface widget supports at least the following
Microsoft Active Accessibility (MSAA) properties:
 Name - its identifier
 Role - a description of what the widget does, e.g., is it invokable, does it take a value
 State - a description of its current status
 Value - a textual representation of its current value
 KeyboardShortcut - the key combination that can be used to set focus to that control
 DefaultAction - a description of what will happen if the user invokes the control; e.g., a checked
check box would have a Default Action of "Uncheck", and a button would have a Default Action of
"Press"
 Verify that changing the value of each control updates its MSAA State and Value properties.
 Run in high contrast mode, where rather than a full color palette you have only a very few colors. Is
your application still functional? Are all status flags and other UI widgets visible? Are your toolbars and
other UI still legible? Does any part of your UI not honor this mode?
 Run in large font mode, where the system fonts are all extra large. Verify that your menus, dialogs, and
other widgets all respect this mode, and are still readable. Especially pay attention to text that is
truncated horizontally or vertically.
 Run with Sound Sentry, which displays a message box, flashes the screen, or otherwise notifies the
user anytime an application plays a sound. Verify that any alert or other sound your application may
play activates Sound Sentry.
 Run with sticky keys, which enables the user to press key chords in sequence rather than all at once.
The operating system will hide much of these details from your application, but if your app ever directly
inspects key state it may need to explicitly handle this state.
 Run with mouse keys, which enables the user to control the mouse pointer and buttons via the numeric
keypad. Again, the operating system will hide much of these details from your application, but if your
app ever directly inspects mouse state it may need to explicitly handle this state.
 Run with no mouse and verify that every last bit of your UI can be accessed and interacted with solely
through the keyboard. Any test case you can execute with a mouse should be executable in this mode
as well.
 Run with a text reader on and your monitor turned off. Again, you should be able to execute each of
your test cases in this state.
 Verify focus events are sent when each control loses and receives focus.
 Verify the tabbing order for each dialog and other tab-navigable UI component is sensible.
 Verify that any actionable color item (e.g., that red squiggly line Microsoft Word displays underneath
misspelled words) can have its color customized.
 Verify that any object which flashes does so at the system cursor blink rate.

How completely you support these various accessibility features is of course a business decision
your team must make. Drawing programs and other applications which incorporate graphics, for example,
may decide to require a mouse for the drawing bits. As is also the case with testability, however,


accessibility-specific features are often useful in other scenarios as well. (The ability to use the keyboard
to nudge objects in drawing programs tends to be popular with customers of all abilities, for example.)

3.3.1.3.1.7 Text Accessibility


You are not done testing yet unless you have checked that all text is actually text and not a bitmap
or video. Text rendered as a graphic is problematic for two reasons. First, accessibility clients such as
screen readers can't see into bitmaps, videos, and animations, so any text embedded in such a graphic is
invisible to anyone using accessibility features. Second, graphics with embedded text vastly complicate
the localization process. Translating text simply requires modifying the application's resource files, but
translating bitmaps and videos requires recompositing them.
If you must place text in bitmaps, you can mitigate your localization pain by creating the bitmaps
dynamically at runtime using resourced text strings. Videos and animations may be runtime creatable as
well depending on the toolset you use.
As for accessibility, ensure that the relevant information is available some other way: in the
supporting text, by closed captioning, ALT tags in HTML, and so on.

3.3.1.3.1.8 Menus and Command Bars


You are not done testing yet unless you have put your menus and command bars through their
paces. There used to be a distinct difference between menus and command bars: menus could have
submenus and were always text (perhaps with an optional icon) while command bars were never nested
and were only graphics. Nowadays, however, menus and toolbars are more-or-less the same animal and
can be mixed-and-matched, so that the only real difference is that command bars are typically always
visible whereas menus are transient.
 Verify all commands work from menus and from command bars
 Verify each keyboard shortcut works correctly
 Verify built-in commands work correctly from a custom menu
 Verify built-in commands work correctly from a custom command bar
 Verify custom commands work correctly from a custom menu
 Verify custom commands work correctly from a custom command bar
 Verify custom commands work correctly from a built-in menu
 Verify custom commands work correctly from a built-in command bar
 Verify custom menus and command bars persist correctly
 Verify customizations to built-in menus and command bars persist correctly
 Verify commands hide/disable and show/enable as and only when appropriate
 Verify command captions are correct and consistent with similar terms used elsewhere
 Verify menu and command bar item context menus work correctly
 Verify status bar text is correct
 Verify status bar text is not truncated

3.3.1.3.1.9 Dialog Box Behavior


You are not done testing yet unless you have checked the following points for each and every
dialog box in your application:
 Verify that each command (e.g., menu item, shortcut key) that is supposed to launch the dialog box
does in fact launch it.


 Verify that its title is correct.


 Verify that all terms used by the dialog are consistent with those used by the rest of the application.
 Verify that accepting the dialog box updates application state correctly.
 Verify that canceling the dialog box causes no change to application state.
 Verify that the dialog is sticky - displays in the position from which it was last dismissed. Or that it
always displays in the same location, if it is not supposed to be sticky.
 Verify that the dialog's contents are initialized from the current state of the application. Or that it always
starts with default values, if it is not supposed to initialize from application state.
 Verify that invoking help (e.g., pressing F1) links to the correct help topic. Note that you may need to
check this for each individual control as some dialog boxes have control-specific context-sensitive help.

3.3.1.3.1.10 Dialog Box Interactivity


You are not done testing yet unless you have checked the following points for each and every
dialog box in your application:
 Verify that the correct system controls display on the title bar (e.g., some dialog boxes can be
maximized while others cannot) and work correctly.
 Verify that the default edit-focused control and default button-focused control are correct.
 Verify that the dialog can be canceled by
 Pressing the Escape key (regardless of which control has focus)
 Pressing the system close button (the 'x' button on Microsoft Windows dialog boxes)
 Pressing the cancel button on the dialog
 Verify that the dialog can be closed and its contents accepted by
 Pressing the Enter key
 Pressing the accept or OK button on the dialog
 Verify that the keyboard navigation order is correct. (Microsoft Windows apps often refer to this as "tab
order" as the convention on that operating system is that pressing Tab moves you through the dialog
box.)
 Verify that every control has a shortcut letter, that every shortcut works, and that each shortcut is
unique within the dialog box.
 Verify that each control's tooltip is correct.
 Verify that any mutually exclusive controls work together correctly.
 Verify all dialog states, such as different sets of controls being visible due to application state or "more
details" and "less details" buttons on the dialog box being invoked.
 Verify that all ninchable controls (controls which can be in an indeterminate state; for example, the Bold
button would be ninched if the current selection contains some text that is bolded and some text that is
not bolded) do in fact ninch as appropriate. (NINCH is an acronym for "no input no change")
 Verify that editing a ninched value has the correct effect (i.e., applies the new value(s) to all items
which should be affected; to revisit the text example, all text should be bolded).
 Verify that each control responds correctly to valid input and invalid input, including appropriate
boundary cases. For example, invalid input might cause a message box to be displayed, or highlight
the control in some fashion.
 Verify that the dialog displays and functions correctly
 With different color settings
 With different font settings


 In high contrast mode


 In high DPI mode
 Verify that all images and other media in the dialog box are localized correctly.

3.3.1.3.1.11 Dialog Box Look And Feel


You are not done testing yet unless you have checked the following points for each and every
dialog box in your application:
 Verify that the menu command which launches the dialog ends with an ellipsis (e.g., "Create New
Document..."). This is the convention on Microsoft Windows at least; for other operating systems check
your style guide.
 Verify that the size of each control and spacing between each pair of controls matches that specified
by your style guide.
 Verify that the dialog box is sized correctly relative to its controls.
 Verify that ninchable controls display correctly (per your style guide) when they are ninched.
(Generally, they should grey out or otherwise make obvious their ninched state.)
 Verify that any samples in the dialog reflect the actual contents and formatting of the current document.
Or reconsider showing samples! Dialogs which affect document formatting often purport to preview the
effect its settings will have on the active document. Bringing the actual document (or an appropriate
piece, such as the current selection) into the preview greatly enhances the value of the preview. If your
preview simply presents some preset, made up content that hopefully looks somewhat like the real
document, you might as well not have a preview at all.

Although this fit-and-finish stuff can seem like a waste of time, it matters. Customers likely
aren't conscious of it, but these details affect their evaluation of your product's quality just as much as how
often it crashes does. In fact, if the first impression a potential customer has is that your application is
unpolished, they will tend to view the rest of their experience through that lens as well.

3.3.1.3.1.12 Text Entry Fields


You are not done testing yet unless you have covered the following boundary conditions for every
text entry field in your application. (Don't forget about editable combo boxes!)
 Null (if you are testing an API)
 Zero characters
 One character
 Two characters
 Some characters
 Many characters
 One less than the maximum allowed number of characters
 The maximum allowed number of characters
 One more than the maximum allowed number of characters
 Spaces in the text
 Symbols (e.g., colon, underscore) in the text
 Punctuation in the text
 ASCII characters
 High ASCII characters


 German characters
 Japanese characters
 Hebrew characters
 Arabic characters
 Unicode characters from multiple character ranges
 Control characters

Text handling can be loaded with errors. If your application is one hundred percent Unicode, count
yourself lucky. Even then, however, you may have to import to or export from non-Unicode encodings. If
your application handles ASCII text then you get the fun of testing across multiple code pages (try
switching code pages while entering text and see what happens!). And if your application uses double-
byte or multi-byte encodings, you may find yourself thinking about switching careers!
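
Where it helps, these boundary strings can be generated rather than typed by hand. The following is a
minimal sketch, assuming a hypothetical max_len for the field under test; each value would be fed to the
field through whatever UI automation or API harness the project already uses.

    # Sketch: generate the boundary cases from the checklist above; max_len is
    # an assumed placeholder for the field's documented maximum length.
    def boundary_strings(max_len=255):
        yield "zero chars", ""
        yield "one char", "a"
        yield "two chars", "ab"
        yield "some chars", "hello"
        yield "many chars", "a" * 1000
        yield "max - 1", "a" * (max_len - 1)
        yield "max", "a" * max_len
        yield "max + 1", "a" * (max_len + 1)
        yield "spaces", "hello world"
        yield "symbols", "a:b_c"
        yield "punctuation", "end."
        yield "high ascii", "café"
        yield "german", "straße äöü"
        yield "japanese", "テスト"
        yield "hebrew", "בדיקה"
        yield "arabic", "اختبار"
        yield "multiple unicode ranges", "AαДあ中"
        yield "control chars", "a\tb\x07c"

    for label, value in boundary_strings():
        print(f"{label}: {value!r}")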

3.3.1.3.1.13 Undo and Redo


You are not done testing yet unless you have tested undo and redo. If your application doesn't
support undo then you're off the hook. Otherwise, be sure you've done the following:
 Considered whether each user action should be undoable.
 Considered whether each user action should be redoable.
 Tested one level of undo
 Tested multiple levels of undo
 Tested one level of redo
 Tested multiple levels of redo
 Redo more times than you've undone. In some applications redo is more of a "do again".
 Tested intermixed undos and redos
 Verified that each undoable and redoable command is listed correctly in the undo and redo UI
 Tested undo and redo across document saves (some applications toss their undo and redo stacks
when you save)
 Tested undo and redo across document close+reopen
 Tested undo and redo across builds, if your application builds code or uses built code (such as
allowing the user to reference custom control libraries). The issue here is that the contents of that built
code might change - how do you redo an addition of a custom control that no longer exists in the
library?

Simple undo/redo testing is easily done manually and will usually find bugs. These bugs are
typically simple programmer errors which are easy to fix. The really interesting bugs are usually found by
intermixing undos and redos. This can certainly be done manually, but this is one case where automated
test monkeys can add value.
You can decide to have one person test undo and redo across your entire application. From
experience, it works best to have each person test undo and redo for their areas.
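
A minimal sketch of such a monkey follows; app is a hypothetical driver object exposing
do_random_edit(), undo(), redo(), and state(), which in practice would wrap the project's UI automation
layer. The fixed seed makes any failure reproducible.

    import random

    def undo_redo_monkey(app, steps=1000, seed=42):
        # Intermix random edits, undos, and redos; spot-check that an undo
        # immediately followed by a redo restores the exact prior state.
        rng = random.Random(seed)  # fixed seed so failures can be replayed
        for step in range(steps):
            action = rng.choice(["edit", "undo", "redo", "roundtrip"])
            if action == "edit":
                app.do_random_edit(rng)
            elif action == "undo":
                app.undo()  # assumed to be a no-op on an empty undo stack
            elif action == "redo":
                app.redo()  # assumed to be a no-op on an empty redo stack
            else:
                before = app.state()
                app.undo()
                app.redo()
                assert app.state() == before, f"round-trip broke at step {step}"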

3.3.1.3.1.14 Printing
You are not done testing yet unless you have checked how your application handles printing.
If you remember the days before your operating system abstracted away (most of) the
differences between printers, when each application had to know intimate details about every printer it
might be used with, you surely know how good you have it. That just gives you more time to worry about
the following issues.
 Verify changing orientation works properly. Try doing this for a brand new document and for an in-
progress document. Also try doing this by launching your app's equivalent of a page setup dialog box
both directly (e.g., from a menu item) and from within the print dialog.
 Verify printing to a local printer works properly.
 Verify printing to a network printer works properly.
 Verify printing to a file works properly. Every operating system I know of allows you to create a print file
for any printer you have installed.
 Verify printing to a PCL printer works properly. PCL started out as the control language for Hewlett-
Packard printers but has since become somewhat of a standard.
 Verify printing to a PostScript printer works properly. This printer control language was created by
Adobe and has also become somewhat of a standard. PostScript is semi-human readable, so you can
do some printer testing by inspecting the output file and thus avoid killing any trees.
 Verify printing to a PDF file works properly. There are a number of free and low-cost PDF creators
available; also consider purchasing a copy of Adobe Acrobat in order to test the "official" way to create
PDFs.
 Verify canceling an in-progress print job works properly.
 Verify setting each print option your application supports has the proper effect; number of copies,
collation, and page numbering, for example.
 Verify setting printer-specific options works properly. These settings should be orthogonal to your
application's print settings, but you never know. Although it may seem that some of this testing should
be taken care of by your operating system’s testers, I find that developers seem to always have some
little customization they make to these dialogs, and so even though it appears to be a standard dialog
something is different. These little tweaks can turn out to be bug farms, I think in part precisely
because the developer is thinking that it's such a small thing nothing can go wrong.
Even when we really do have a standard dialog box, we should give it a once-over, just as a
sanity check. The same applies to any printer-specific options. Everything *should* work correctly, but we
are a lot happier when we *know* it does!
In the general case, it's a risk assessment you and your feature team have to make. Bugs *could*
be anywhere; where do you think they most likely are? Hit those areas first, and then cover the next most
likely, and then the next most likely, and so on. Mix in some exploratory testing too, since bugs have a
penchant for cropping up in places you wouldn't think to look for them!

3.3.1.3.1.15 Special Modes And States


You are not done testing yet unless you have tested your application in the following special
modes and states. Ideally you would run each of your tests in each of these special cases, but I haven't
yet met anyone who has that much time. More typical is to pick one case each day as the context in which
to run your tests that day.
 Different zoom levels, as appropriate.
 Safe Mode. Microsoft Windows has a special mode where just the essentials are loaded - the most
basic display driver, a bare-bones network stack, and no start-on-boot services or applications. How
does your app handle being run under these conditions?
 Sharing documents between multiple users and/or multiple machines, simultaneously and sequentially.
This is especially important for programs that access a database (what happens when someone else


makes a change to the record you are editing?), but if you can open documents off a network share, or
you can open documents from a shared location on the local machine, someone else can do so as well
- potentially the very same document you are editing.
 No file open, dirty file open, dirty-but-auto-saved file open, saved file open.
 Full screen and other view modes.
 Different application window sizes (document window sizes too, if your app has a multi-document
interface); especially: default launch size, minimized, maximized, not-maximized-but-sized-to-fill-the-
screen, and sized very small.
 Invoke standby, hibernation, and other power-saving modes whilst an operation is in progress.
 Resume your computer out of various sleep modes. Do in-progress operations continue where they
stopped? Or do they restart? Or do they hang?
 Modified system settings. Set your mouse to move faster or slower. Change your keystroke repeat
duration. Mess with your system colors. Does your application pick up the new values when it starts?
Does it pick up values that change while it's running?
 Object Linking and Embedding (OLE). Does embedding other OLE objects in your app's documents
work correctly? What about embedding your app's documents in other OLE-enabled applications? Do
embedded applications activate and deactivate correctly? Do linked OLE documents update when the
source of the link is modified? How does your app handle the linked document's application not being
available?
 Multiple selection. What happens if you apply text formatting when you have three different text ranges
selected? Or you paste when several different items are selected? What should happen?

The last two special states are not contexts in which to execute your test cases but rather
additional tests to run at the end of each of your test cases:

 Send To. Many applications today have a handy menu item that lets you send the current document to
someone as an email.
 Cut, copy, and delete. To and from the same document, a different document, competing applications,
targets that support a less-rich or more-rich version of the data (e.g., copying from a word processor
and pasting into a text editor), targets that don't support any version of the data (what happens if you
copy from your file explorer and paste into your application?).

3.3.1.3.1.16 Dates and Y2K (Year 2000 Bug or Millennium Bug)


You are not done testing unless you have vetted your application for Year 2000 issues. Even though
we are now well past that date, Y2K issues can crop up with the least provocation. Those of you on some
form of Unix have another Y2K-ish situation coming up in 2038 when that platform's 32-bit time data
structure rolls over. Oh, and as long as you're looking at date-related functionality, you may as well look
for other date-related defects as well, such as general leap year handling.
 Verify dates entered with a two digit year from 1 Jan 00 through 31 Dec 29 are interpreted as 1 Jan
2000 through 31 Dec 2029
 Verify dates entered with a two digit year from 1 Jan 30 through 31 Dec 99 are interpreted as 1 Jan
1930 through 31 Dec 1999
 Verify dates at least through 2035 are supported
 Verify dates in leap years are correctly interpreted:


 29 Feb 1900 should fail


 29 Feb 1996 should work
 29 Feb 2000 should work
 31 Dec 2000 should work and be identified as day 366
 29 Feb 2001 should fail
 Verify other interesting dates are correctly interpreted and represented, including:
 31 Dec 1999 should work
 1 Jan 2000 should be unambiguously represented
 10 Jan 2000 (first seven digit date)
 10 Oct 2000 (first eight digit date)
 Verify entering "13" for the month in year 2000 fails
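
Many of these checks can be encoded once against a known-good calendar implementation and then
compared with the application's own date handling. A minimal sketch using Python's standard library as
the oracle:

    import calendar
    from datetime import date

    def two_digit_window(yy):
        # The pivot rule above: 00-29 maps to 2000-2029, 30-99 to 1930-1999.
        return 2000 + yy if yy <= 29 else 1900 + yy

    assert two_digit_window(0) == 2000 and two_digit_window(29) == 2029
    assert two_digit_window(30) == 1930 and two_digit_window(99) == 1999

    assert not calendar.isleap(1900)  # 29 Feb 1900 should fail (century rule)
    assert calendar.isleap(1996)      # 29 Feb 1996 should work
    assert calendar.isleap(2000)      # 29 Feb 2000 should work
    assert not calendar.isleap(2001)  # 29 Feb 2001 should fail
    assert date(2000, 12, 31).timetuple().tm_yday == 366  # day 366 of 2000
    print("all date oracle checks passed")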

3.3.1.3.1.17 Window Interactions


 Verify z-order works correctly, especially with respect to Always On Top windows (e.g., online help)
 Verify all modal dialog boxes block access to the rest of the application, and all modeless dialog boxes
do not
 Verify window and dialog box focus is correct in all scenarios
 Verify window size is correct after restore-from-minimize and restore-from-maximize
 Verify window size is correct on first launch
 Verify window size is correct on subsequent launches
 Verify window size is maintained after a manual resize
 Verify multiple window (i.e., MDI) scenarios work correctly
 Verify window arrangement commands (e.g., Cascade All) work correctly
 Verify multiple instances of the application work correctly across all scenarios (e.g., that a modal dialog
box in one instance of the application does not disable interactivity in another instance)

3.3.1.3.1.18 Input Methods


You are not done testing yet unless you have tested the following input methods:
 Keyboard. It's important to remember that testing keyboard input doesn't just mean verifying you can
type into text boxes. Scour your application for every different control that accepts text - not just as a
value, but also shortcut key sequences and navigation. (Yes, there's some overlap here with Dialog
Box Navigation and Accessibility.) If your application uses any custom controls, pay them especial
attention as they are likely to use custom keystroke processing.
 Mouse. It's so obvious that it's easy to miss. And again, pay especial attention to custom controls as they
are likely to do custom mouse handling.
 Pen input. Depending on your target platform(s), this could mean pen input direct to your application,
filtered through the operating system (e.g., the Tablet Input Panel on Microsoft Windows), and/or
filtered through third-party input panels. Each input source has its own quirks that just might collide with
your application's own quirks.
 Speech input. Depending on your target platform(s), this could mean speech input direct to your
application, filtered through the operating system, and/or filtered through third-party speech
processors.
 Foreign language input. On Microsoft Windows this usually means an Input Method Editor (IME), either
the one that comes with the operating system or one provided by a third party. These can be
troublesome even for applications that do not do any custom keystroke processing. For example, a


Japanese-language input processor likely traps all keystrokes, combines multiple keystrokes into a
single Japanese character, and then sends that single character on to the application. Shortcut key
sequences should bypass this extra layer of processing, but oftentimes they don't. (Note: turning off the
IME is one solution to this quandary, but it is almost never the right answer!)
 Assistive input devices such as puff tubes. The operating system generally abstracts these into a
standard keyboard or mouse, but they may introduce unusual conditions your application needs to
handle, such as extra-long waits between keystrokes.
 Random other input sources. For example, games where you control the action by placing one or more
sensors on your finger(s) and then thinking what you want the program to do. Some of these devices
simply show up as a joystick or mouse. What happens if someone tries to use such a device in your
application?
 Multiple keyboards and/or mice. Microsoft Windows supports multiple mice and keyboards
simultaneously. You only ever get a single insertion point and mouse pointer, so you don't have to
figure out how to handle multiple input streams. You may, however, need to deal with large jumps in
e.g., mouse coordinates.

3.3.1.3.2 GUI (Graphical User Interface) Tests


1. images existing on the interface

• should be clear at all graphic resolution ranges indicated for the operation of the
application;
• they should intuitively suggest the associated function

2. the controls (buttons, edit boxes, text boxes, labels) should be:

• visible,
• correctly aligned,
• showing texts that are correct and fully visible,
• correctly active/inactive,
• compliant with the internally agreed standards (default colours/denominations),
• compliant with the specifications

3. the menus should operate correctly

• they should call the specified functions


• the component items should be active/inactive according to the specifications

4. The ToolTips should be present and correctly written


5. Pop-ups:

• they should operate correctly: they must close/open correctly in different graphic sub-
domains of the interface (context);
• they should call the specified functions;


• the component items should be active/inactive according to the specifications;

6. Toolbars:

• shortcuts should exist for the main functions of the menu;


• they should call the specified functions
• all items should have tooltips (optionally)
• the toolbars should be opened/closed without errors
• the images (if any) should operate correctly in the context (active/inactive);

7. The graphical elements should work properly:

• close / open toolbar


• close / open menu
• close / open other areas of the graphical interface
• frame windows on the screen; the correct and sufficient presence of scrollbars;
• frame child windows in the application screen;
• graphic interfaces access

8. all the above tests should be performed to guarantee correct operation at all indicated
graphical resolutions, also using both font states (small/large). For this, it is recommended to
test the minimum and maximum resolutions and an intermediate value and, if something does not
work properly, to localise the limits at which it breaks. A single non-operational graphical element at
a certain resolution suffices to declare the respective resolution non-operational or partially
operational (to be specified in the Release Notes).
9. correct navigability:
a. with the mouse
b. with the keyboard
10. Shortcut Keys and correct operation (if there are any specifications in this regard)
11. correct operation of shortcut keys in the context (according to focus)
12. warning and error windows should exist.

3.3.1.3.3 Functional Heuristics

Data Type Attacks

Paths/Files:
• Long Name (>255 chars)
• Special Characters in Name (space * ? / \ | < > , . ( ) [ ] { } ; : ' " ! @ # $ % ^ &)
• Non-Existent
• Already Exists
• No Space
• Minimal Space
• Write-Protected
• Unavailable
• Locked
• On Remote Machine
• Corrupted

Time and Date:
• Timeouts
• Time Difference between Machines
• Crossing Time Zones
• Leap Days
• Always Invalid Days (Feb 30, Sept 31)
• Feb 29 in Non-Leap Years
• Different Formats (June 5, 2001; 06/05/2001; 06/05/01; 06-05-01; 6/5/2001 12:34)
• Daylight Saving Changeover
• Reset Clock Backward or Forward

Numbers:
• 0
• 32768 (2^15)
• 32769 (2^15 + 1)
• 65536 (2^16)
• 65537 (2^16 + 1)
• 2147483648 (2^31)
• 2147483649 (2^31 + 1)
• 4294967296 (2^32)
• 4294967297 (2^32 + 1)
• Scientific Notation (1E-16)
• Negative
• Floating Point/Decimal (0.0001)
• With Commas (1,234,567)
• European Style (1.234.567,89)
• All the Above in Calculations

Strings:
• Long (255, 256, 257, 1000, 1024, 2000, 2048 or more characters)
• Accented Chars (àáâãäåçèéêëìíîðñòôõöö, etc.)
• Asian Chars
• Common Delimiters and Special Characters ( " ' ` | / \ , ; : & < > ^ * ? Tab )
• Leave Blank
• Single Space
• Multiple Spaces
• Leading Spaces
• End-of-Line Characters (^M)
• SQL Injection ( 'select * from customer )
• With All Actions (Entering, Searching, Updating, etc.)

General:
• Violates Domain-Specific Rules (an IP address of 999.999.999.999, an email address with no "@",
an age of -1)
• Violates Uniqueness Constraint
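
The numeric attack values above follow a simple pattern (values at and just past common integer-width
limits, plus parser-hostile formats), so they can be generated for data-driven tests. A minimal sketch; how
each value is fed into the application (UI, API, import file) is left to the test harness:

    # Generate the numeric attack values from the Numbers list above.
    WIDTH_LIMITS = [2**15, 2**16, 2**31, 2**32]

    def numeric_attack_values():
        values = [0, -1, 1e-16, 0.0001]
        for limit in WIDTH_LIMITS:
            values += [limit, limit + 1]  # at and just past each width limit
        # locale-style strings attack the parser rather than the arithmetic
        strings = ["1,234,567", "1.234.567,89", "1E-16"]
        return values, strings

    values, strings = numeric_attack_values()
    print(values)
    print(strings)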
Web Tests

Navigation:
• Back (watch for 'Expired' messages and double-posted transactions)
• Refresh
• Bookmark the URL
• Select Bookmark when Logged Out
• Hack the URL (change/remove parameters; see also Data Type Attacks)
• Multiple Browser Instances Open

Input (see also Data Type Attacks):
• HTML/JavaScript Injection (allowing the user to enter arbitrary HTML tags and JavaScript
commands can lead to security vulnerabilities; a probe sketch follows these Web Tests lists)
• Check Max Length Defined on Text Inputs
• > 5000 Chars in TextAreas

Syntax:
• HTML Syntax Checker (http://validator.w3.org/)
• CSS Syntax Checker (http://jigsaw.w3.org/css-validator/)

Preferences:
• JavaScript Off
• Cookies Off
• Security High
• Resize Browser Window
• Change Font Size
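
The HTML/JavaScript injection item lends itself to a small probe harness. In the sketch below, render()
is only a stand-in showing the escaping a correct application performs; a real test would submit each
probe through the site and inspect the page that echoes it back.

    import html

    PROBES = [
        "<script>alert(1)</script>",     # classic reflected-XSS probe
        "<img src=x onerror=alert(1)>",  # event-handler variant
        '"><b>broke out</b>',            # attribute break-out attempt
    ]

    def render(value):
        # Stand-in for the application: correct code escapes user input.
        return f"<p>{html.escape(value)}</p>"

    for probe in PROBES:
        page = render(probe)
        assert probe not in page, f"probe echoed back as live markup: {probe!r}"
        print("escaped:", page)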
Testing Wisdom
• A test is an experiment designed to reveal information or answer a specific question about the
software or system.
• Stakeholders have questions; testers have answers.
• Don't confuse speed with progress.
• Take a contrary approach.
• Observation is exploratory.
• The narrower the view, the wider the ignorance.
• Big bugs are often found by coincidence.
• Bugs cluster.
• Vary sequences, configurations, and data to increase the probability that, if there is a problem,
testing will find it.
• It's all about the variables.
Heuristics
• Variable Analysis: Identify anything whose value can change. Variables can be obvious, subtle, or
hidden.
• Touch Points: Identify any public or private interface that provides visibility or control. Provides
places to provoke, monitor, and verify the system.
• Boundaries: Approaching the Boundary (almost too big, almost too small); At the Boundary.
• Goldilocks: Too Big, Too Small, Just Right.
• CRUD: Create, Read, Update, Delete.
• Follow the Data: Perform a sequence of actions involving data, verifying the data integrity at each
step. (Example: Enter → Search → Report → Export → Import → Update → View)
• Configurations: Vary the variables related to configuration (Screen Resolution; Network Speed,
Latency, Signal Strength; Memory; Disk Availability; the Count heuristic applied to any peripheral,
such as 0, 1, Many Monitors, Mice, or Printers).
• Interruptions: Log Off, Shut Down, Reboot, Kill Process, Disconnect, Hibernate, Timeout, Cancel.
• Starvation: CPU, Memory, Network, or Disk at maximum capacity.
• Position: Beginning, Middle, End (edit at the beginning of the line, middle of the line, end of the
line).
• Selection: Some, None, All (some permissions, no permissions, all permissions).
• Count: 0, 1, Many (0 transactions, 1 transaction, many simultaneous transactions).
• Multi-User: Simultaneous create, update, delete from two accounts or the same account logged in
twice.
• Flood: Multiple simultaneous transactions or requests flooding the queue.
• Dependencies: Identify "has a" relationships (a Customer has an Invoice; an Invoice has multiple
Line Items). Apply the CRUD, Count, Position, and/or Selection heuristics (Customer has 0, 1,
many Invoices; Invoice has 0, 1, many Line Items; delete the last Line Item then Read; update the
first Line Item; Some, None, All Line Items are taxable; delete a Customer with 0, 1, many
Invoices).
• Constraints: Violate constraints (leave required fields blank, enter invalid combinations in
dependent fields, enter duplicate IDs or names). Apply with the Input Method heuristic.
• Input Method: Typing, Copy/Paste, Import, Drag/Drop, Various Interfaces (GUI v. API).
• Sequences: Vary Order of Operations; Undo/Redo; Reverse; Combine; Invert; Simultaneous.
• Sorting: Alpha v. Numeric; Across Multiple Pages.
• State Analysis: Identify states and events/transitions, then represent them in a picture or table.
Works with the Sequences and Interruptions heuristics.
• Map Making: Identify a "base" or "home" state. Pick a direction and take one step. Return to base.
Repeat.
• Users & Scenarios: Use Cases, Soap Operas, Personae, Extreme Personalities.
Frameworks
• Judgment: Inconsistencies, Absences, and Extras with respect to Internal, External-Specific, or
External-Cultural reference points.
• Observations: Input/Output/Linkage.
• Flow: Input/Processing/Output.
• Requirements: Users/Functions/Attributes/Constraints.
• Nouns & Verbs: The objects or data in the system and the ways in which the system manipulates
them. Also Adjectives (attributes) such as Visible, Identical, Verbose, and Adverbs (action
descriptors) such as Quickly, Slowly, Repeatedly, Precisely, Randomly. Good for creating random
scenarios.
• Deming's Cycle: Plan, Do, Check, Act.

3.3.2 Non-Functional Testing

Non-Functional System Tests ensure that the application was developed according to the Non-
Functional requirements set out in the Requirements Specification.

The type of tests classified as non-functional are:

 Installation
 Performance
 Volume/Load
 Stress
 Usability
 Security
 Internationalization and localization
 Accessibility

3.3.2.1 Installation
An installation test assures that the system/software application is installed correctly and works on
the actual customer's hardware.

3.3.2.1.1 Objective
Installation testing follows the objectives:

• To verify whether the application can be appropriately installed/uninstalled, for all indicated
hardware OS/conditions;
• To verify the reaction of the systems (configurations) in case of overload or upgrade;

• To verify the reaction of the installation on the target system if it does not have enough memory
(hard-disk full) or it is too slow; the reaction should be a graceful one, returning a notification that the
application cannot be installed;
• To verify that, once the system is installed, the application runs accordingly.

3.3.2.1.2 What should be tested?


Both for server and monitoring clients, the test method shall test:

• A new installation, on a computer on which the application was never installed;


• The upgrade of a version already existing on a computer;
• The installation on a computer on which the application was previously installed and uninstalled by
file deletion only.

3.3.2.1.3 How will you test it?


The recommended approach is to have a test environment with the hardware platform(s) and
software platform set up to look exactly like the intended production environment. Then the test is to
execute the installation procedure as written with the files provided to validate successful installation.

3.3.2.1.4 Tools
Just the software application installation kit.

3.3.2.1.5 You are not done yet

3.3.2.1.5.1 Setup
You are not done testing yet unless you have tested your program's setup process under the following
conditions. Although some of these terms are specific to Microsoft Windows other operating systems
generally have similar concepts.
 Installing from a CD-ROM/DVD-ROM
 Installing from a network share
 Installing from a local hard drive
 Installing to a network share
 Installing from an advertised install, where icons and other launch points for the application are created
(i.e., the app is "advertised" to the user), but the application isn't actually installed until the first time the
user launches the program. Also known as "install on demand" or "install on first use".
 Unattended installs (so-called because no user intervention is required to e.g., answer message
boxes), aka command line installs. This can become quite complicated, as the OS's installation
mechanism supports multiple command-line options, and your application may support yet more.
 Mass installs, via an enterprise deployment process such as Microsoft Systems Management Server.
 Upgrading from previous versions. This can also become quite complicated depending on how many
versions of your app you have shipped and from which of those you support upgrades. If all of your
customers always upgrade right away, then you're in good shape. But if you have customers on five or
six previous versions, plus various service packs and hotfixes, you have a chore ahead of you!
 Uninstall. Be sure that not only are all application-specific and shared files removed, but that registry
and other configuration changes are undone as well. Verify components which are shared with other


applications are or are not uninstalled depending on whether any of the sharing apps are still installed. Try out-
of-order uninstalls: install app A and then app B, then uninstall app A and then uninstall app B.
 Reinstall after uninstalling the new and previous versions of your application
 Installing on all supported operating systems and SKUs. For Microsoft Windows applications, this may
mean going as far back as Windows 95; for Linux apps, consider which distros you will be supporting.
 Minimum, Typical, Full, and Custom installs. Verify that each installs the correct files, enables the
correct functionality, and sets the correct registry and configuration settings. Also try
upgrading/downgrading between these types - from a minimum to complete install, for example, or
remove a feature - and verify that the correct files etc. are un/installed and functionality is correctly
dis/enabled.
 Install Locally, Run From Network, Install On First Use, and Not Available installs. Depending on how
the setup was created, a custom install may allow the individual components to be installed locally, or
to be run from a shared network location, or to be installed on demand, or to not be installed at all.
Verify that each component supports the correct install types - your application's core probably
shouldn't support Not Available, for example. Mix-and-match install types - if you install one component
locally, run another from the network, and set a third to Install on First Use, does everything work
correctly?
 Install On First Use installs. Check whether components are installed when they need to be (and not
before), and that they are installed to the correct location (what happens if the destination folder has
been deleted?), and that they get registered correctly.
 Run From Network installs. Check whether your app actually runs - some apps won't, especially if the
network share is read-only. What happens if the network is unavailable when you try to launch your
app? What happens if the network goes down while the application is running?
 Verify installs to deeply nested folder structures work correctly.
 Verify that all checks made by the installer (e.g., for sufficient disk space) work correctly.
 Verify that all errors handled by the installer (e.g., for insufficient disk space) work correctly.
 Verify that "normal" or limited-access (i.e., non-admin) users can run the application when it was
installed by an administrator. Especially likely to be troublesome here are Install On First Use
scenarios.
 Verify the application works correctly under remoted (e.g., Microsoft Remote Desktop or Terminal
Server), and virtual (e.g., Microsoft Virtual PC and Virtual Server) scenarios. Graphics apps tend to
struggle in these cases.
 Perform a Typical install followed by a Modify operation to add additional features.
 Perform a Custom install followed by a Modify operation to remove features.
 Perform a Typical install, delete one or more of the installed files, then perform a Repair operation.
 Perform a Custom installation that includes non-Typical features, delete one or more of the installed
files, then perform a Repair operation.
 Patch previous versions. Patching is different from an upgrade in that an upgrade typically replaces all
of the application's installed files, whereas a patch usually overwrites only a few files.
 Perform a Minor Upgrade on a previously patched version.
 Patch on a previously upgraded version.
 Upgrade a previously installed-then-modified install.
 Patch a previously installed-then-modified install.

3.3.2.1.5.1.1 Setup Special Cases


Beyond the standard setup cases above, also consider some more specialized conditions. As
before, although some of these terms are specific to Microsoft Windows other operating systems generally
have similar concepts.


3.3.2.1.5.1.2 Local Caching


Depending how the setup program was authored, it may allow setup files to be cached on the
local hard drive, which speeds up subsequent repair and other setup operations.
 Verify the correct files/archives are cached
 Verify all files shared with another feature or application are handled correctly across installs,
uninstalls, and reinstalls
 Verify setups for multiple programs and multiple versions of individual programs share the cache
correctly. This is especially important for shared files - imagine the havoc that might ensue if
uninstalling one application removed from the cache a shared file other installed applications require!
 Verify deleting a file/archive from the cache causes a reinstall and recaches the file/archive

3.3.2.1.5.1.3 Setup Authoring


Also known as: Test your setup program.
 Verify every possible path through the setup program (including canceling at every cancel point) works
correctly
 Verify the setup program includes the correct components, files, and registry settings
 Verify any custom actions or conditions, creation of shortcuts, and other special authoring works
correctly
 Verify the correct install states are available
 Verify canceling an in-progress install in fact cancels and leaves no trace of the unfinished install

3.3.2.1.5.1.4 Multi-User Setup


What happens when multiple users modify the setup configuration of your application?
 Verify your application works correctly for User2 after User1 installs/modifies/damages it
 Verify per-user features must be installed by User2 even after User1 has installed them
 Verify User2's per-user settings do not change when User1 changes them

3.3.2.1.5.1.5 Network Setup


Can you install your app from the network rather than a local CD?
 Verify your feature uninstalls cleanly and correctly when it was installed from the network (sometimes
called a post-admin install)
 Verify the correct files are installed a) on the network share, and b) locally (as appropriate), when the
application is installed as Run From Network

3.3.2.1.5.2 Upgrades
You are not done testing unless you understand how your application handles being installed over
previous versions of your application, and having the operating system upgraded out from under it. You
may want to test installing a previous version over, or side-by-side to, the current version as well. Consider
whether to cover all three combinations: upgrading just your application, upgrading just your operating
system, and upgrading both the operating system and your application.


3.3.2.1.5.2.1 Application Upgrade


 Verify upgrading over a previous version replaces appropriate files and no others
 Verify installing this version side-by-side to previous versions works correctly
 Verify the correct files do and do not exist after an upgrade, and that their versions are also correct
 Verify default settings are correct
 Verify previously existing settings and files are maintained or modified, as appropriate
 Verify all functionality works correctly when the previous version(s) and/or the new version is set to
Run From Network
 Verify any features and applications dependent on files or functionality affected by the upgrade work
correctly

3.3.2.1.5.2.2 Operating System Upgrade


 Verify upgrading over a previous version replaces appropriate files and no others
 Verify all functionality works correctly
 Verify any features and applications dependent on operating system files or functionality affected by
the upgrade work correctly

3.3.2.2 Performance
Performance testing is executed to determine how a system or sub-system performs in terms of
responsiveness and stability under a particular workload.

It can also serve to investigate, measure, validate or verify other quality attributes of the system,
such as scalability, reliability and resource usage.

3.3.2.2.1 Objective
The main objective is to determine or validate speed, scalability, and/or stability.
Performance tests must measure the response time under normal operating conditions, in order to
establish the application's usability degree. Performance tests must also identify the risk elements related
to meeting the application's operational criteria. They must run on application modules, in order to identify
critical processes early, and also after application integration in order to determine performance while the
integrated modules run and processes execute concurrently.

3.3.2.2.2 What should be tested?


A performance test is a technical investigation done to determine or validate the responsiveness,
speed, scalability, and/or stability characteristics of the product under test.

3.3.2.2.3 How will you test it?


Run the developed use cases and measure their execution time while the workstation where
the measurements are made runs only the tested application.
For client-server applications it is recommended that, during the first phase of the tests, the server
runs only the application under test.
For these tests, a first pass is run with a single user performing different operations in the
application (the operations most likely to be used, or those specified in the project documentation).


Afterwards, the test shall be redone by running an estimated number of users, as provided in the
specifications.
Along with the tested application, it is recommended to gradually open concurrent processes, in
terms of network connection, client machine, server, workstation where the database is implemented and
to progressively re-execute the response time measurements.
For each specified loading, the execution time should not exceed the limits set and the processes
should run without affecting the operations in the application or in the involved systems.
For each use case, the identified execution time should comply with the accepted range, in terms
of operation after application integration. Thus, it is recommended to establish time targets for the different
processes and to optimise them in order to meet those targets.
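
As an illustration of the measurement step, the following is a minimal sketch using only the Python
standard library; the URL, the user count, and the request mix are placeholders to be replaced by the
project's real use cases and tooling.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/app/use-case"  # hypothetical endpoint under test
    USERS = 10                                  # concurrent simulated users

    def timed_request(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()
        return time.perf_counter() - start

    # Each simulated user issues ten requests; the pool runs USERS at a time.
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        timings = sorted(pool.map(timed_request, range(USERS * 10)))

    print(f"median {statistics.median(timings):.3f}s, "
          f"95th percentile {timings[int(len(timings) * 0.95)]:.3f}s")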

3.3.2.2.4 Tools

• Apache JMeter: a 100% pure Java desktop application designed to load test functional behavior and
measure performance. It was originally designed for testing Web Applications but has since expanded
to other test functions. Apache JMeter may be used to test performance both on static and dynamic
resources (files, Servlets, Perl scripts, Java Objects, Databases and Queries, FTP Servers and more).
It can be used to simulate a heavy load on a server, network or object to test its strength or to analyze
overall performance under different load types. You can use it to make a graphical analysis of
performance or to test your server/script/object behavior under heavy concurrent load.
• Allmon: the main goal of the project is to create a distributed generic system collecting and storing
various runtime metrics used for continuous system performance, health, quality and availability
monitoring purposes. Allmon agents are designed to harvest a range of metrics values coming from
many areas of the monitored infrastructure (application instrumentation, JMX, HTTP health checks,
SNMP). Collected data are the base for quantitative and qualitative performance and availability
analysis. Allmon collaborates with other analytical tools for OLAP analysis and Data Mining processing.
• Grinder: a Java load-testing framework making it easy to orchestrate the activities of a test script in
many processes across many machines, using a graphical console application.
• loadUI: a tool for load testing numerous protocols, such as Web Services, REST, AMF, JMS, JDBC, as
well as Web Sites. Tests can be distributed to any number of runners and be modified in real time.
loadUI is tightly integrated with soapUI. loadUI uses a highly graphical interface, making load testing
fast and fun.

3.3.2.2.5 Testing checklist


You are not done testing unless you understand the performance characteristics of your
application and the manner in which your product deforms under stress. Performance testing can seem
straightforward: verify the times required to complete typical user scenarios are acceptable - what's so
hard about that? Simulating those scenarios sufficiently realistically can be difficult, however. Even more
so when it comes to stress testing! For example, say you are testing a web site which you expect to
become wildly popular. How do you simulate millions of users hitting your site simultaneously?
 Verify performance tests exist for each performance scenario, and are being executed on a sufficiently
regular basis


 Verify performance targets exist for each performance scenario, and are being met
 Verify the performance tests are targeting the correct scenarios and data points
 Verify performance optimizations have the intended effect
 Verify performance with and without various options enabled, such as ClearType and menu
animations, as appropriate
 Compare performance to previous versions
 Compare performance to similar applications

3.3.2.3 Volume/Load
Volume/Load testing is primarily focused on testing the system's ability to continue to
operate under a specific load, whether it is a large data load or a large number of users.

This is generally referred to as software scalability. Volume testing is a way to test software
functions even when certain components (for example a file or database) increase radically in size.

3.3.2.3.1 Objective
The objectives of this type of testing are:

- Finding problems with the maximum amounts of data
- Defining the maximum amount of work a system can handle without significant performance
degradation
- Observing that system performance or usability often degrades when large amounts of data must
be sorted or imported, or when specific information is searched for within the data volume;
performance is also affected by large numbers of users accessing the application.

3.3.2.3.2 What should be tested?


Verify that the system’s performance is not affected by the following scenarios that imply large data
volume:

• maximum (actual or physically capable) number of clients connected (or simulated) all performing
the same, worst case (performance) business function for an extended period of time.
• maximum database size has been reached (actual or scaled) and multiple queries / report
transactions are executed simultaneously.
• processing large files (import, export, upload, etc.)

3.3.2.3.3 How will you test it?


Use tests developed for Performance Testing.
Multiple clients should be used, either running the same tests or complementary tests to produce
the worst case transaction volume / mix (see stress test section next) for an extended period of time.
Maximum database size is created (actual, scaled, or filled with representative data) and multiple
clients used to run queries / report transactions simultaneously for extended periods.
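
For the large-file and large-database scenarios, the test data itself can be generated. Below is a minimal
sketch that produces a large, representative CSV for import/upload testing; the row count and column
layout are placeholders for the project's real data model.

    import csv
    import random
    import string

    def make_big_csv(path="volume_test.csv", rows=1_000_000):
        # Write a CSV big enough to stress import, sorting, and search paths.
        rng = random.Random(7)  # fixed seed so the data set is reproducible
        with open(path, "w", newline="") as handle:
            writer = csv.writer(handle)
            writer.writerow(["id", "name", "amount"])
            for i in range(rows):
                name = "".join(rng.choices(string.ascii_letters, k=12))
                writer.writerow([i, name, round(rng.uniform(0, 1e6), 2)])

    make_big_csv()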


3.3.2.3.4 Tools

• Apache JMeter: Java desktop application for load testing and performance measurement.
• LoadRunner: performance testing tool primarily used for executing large numbers of tests (or a large
number of virtual users) concurrently. Can be used for unit and integration testing as well. Licensed.
• Visual Studio Ultimate Edition: includes a load test tool which enables a developer to execute a variety
of tests (web, unit, etc.) with a combination of configurations to simulate real user load.
• Rational Performance Tester: Eclipse-based large-scale performance testing tool primarily used for
executing large-volume performance tests to measure system response time for server-based
applications. Licensed.

3.3.2.4 Stress
Stress testing is a way to test reliability under unexpected or rare workloads.

It involves testing beyond normal operational capacity, often to a breaking point, in order to observe
the results.

3.3.2.4.1 Objective
Verifying the range within which the system (or different components) operates normally.

3.3.2.4.2 What should be tested?


These tests aim to exercise the application with reduced resources, such as RAM or
hard-disk space. They also test operation under shared resource use, such as a shared network
connection, or with other operations running in parallel.

3.3.2.4.3 How will you test it?


To test limited resources, tests should be run on a single machine, with RAM and disk (DASD) on
the server reduced (or limited).
To test unavailable / constrained resources, external subsystems should either be taken down / off-
line or simulated as being in various states.
For remaining stress tests, multiple clients should be used, either running the same tests or
complementary tests to produce the worst case transaction volume / mix.

3.3.2.4.4 Tools
Tools used are the same as for the performance testing.

3.3.2.4.5 Testing checklist


 Run under low memory conditions
 Run under low disk space conditions


 Run under out-of-memory caused via automation (e.g., a use-up-all-available-memory utility)


 Run under out-of-memory caused by real world scenarios (e.g., running multiple other applications
each having multiple documents open)
 Run under a heavy user load
 Run over a network which frequently drops out
 Run over a network with a large amount of traffic
 Run over a network with low bandwidth
 Run on a minimum requirements machine
 Open, save, and execute (as appropriate) from floppies and other removable disks

As you do all of this performance and stress testing, also check for memory and other resource leaks.
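
The use-up-all-available-memory utility mentioned in the checklist can be as simple as the sketch below:
run it alongside the application under test, then exercise your scenarios while memory pressure is high.
The chunk size and hold time are arbitrary placeholders.

    import time

    CHUNK = 64 * 1024 * 1024  # allocate in 64 MiB chunks
    hog = []
    try:
        while True:
            hog.append(bytearray(CHUNK))  # bytearray zero-fills, touching the pages
    except MemoryError:
        held_mib = len(hog) * CHUNK // 2**20
        print(f"holding {held_mib} MiB; exercise the application under test now")
        time.sleep(600)  # keep the pressure on for ten minutes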

3.3.2.5 Usability
Usability testing is needed to check if the user interface is easy to use and understand. It is
concerned mainly with the use of the application.

3.3.2.5.1 Objective
The purpose of the practice is to discover any missed requirements or any functionality that was
thought to be intuitive but ends up confusing new users. By testing user needs and how users interact
with the product, designers are able to assess the product's capacity to meet its intended purpose.

Usability testing also reveals whether users feel comfortable with your application or Web site
according to different parameters - the flow, navigation and layout, speed and content - especially in
comparison to prior or similar applications.

3.3.2.5.2 What should be tested?


Usability testing is a practice used within the field of user-centered design and user experience that
allows designers to interact with users directly about the product and to make any necessary
modifications to the prototype, whether it is software, a device, or a website.

3.3.2.5.3 How will you test it?

Use the test cases from the embedded template Usability_General_TC v 3.1.xlsx.

3.3.2.5.4 Tools
N/A

3.3.2.6 Security
Security testing is essential for software that processes confidential data, to prevent system intrusion
by hackers. Security tests can be performed at different levels, such as Web, infrastructure, and
wireless LANs.


3.3.2.6.1 Objective
Security testing is a type of software testing performed to check whether the application or
product is secure. It checks whether the application is vulnerable to attacks, and whether anyone can
hack the system or log in to the application without authorization.

It is a process to determine that an information system protects data and maintains functionality as
intended.

3.3.2.6.2 What should be tested?


Cryptography

 Check if data which should be encrypted is not
 Check for wrong algorithm usage depending on context
 Check for weak algorithm usage
 Check for proper use of salting (see the sketch after this list)
 Check randomness functions
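
As a minimal sketch of what "proper use of salting" can mean in a test, hashing the same password twice must produce different stored digests, because a fresh random salt is drawn each time. The function below is illustrative and uses only the Python standard library.

    # Sketch: verify that password hashing is salted (same input -> different stored hashes).
    import hashlib, secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = secrets.token_bytes(16)  # fresh random salt per call
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    salt1, d1 = hash_password("correct horse battery staple")
    salt2, d2 = hash_password("correct horse battery staple")
    assert salt1 != salt2 and d1 != d2, "salting is broken: identical hashes for same password"
    print("salting OK: identical passwords produce distinct hashes")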

Risky Functionality - File Uploads

 Test that acceptable file types are whitelisted (see the sketch after this list)
 Test that file size limits, upload frequency, and total file counts are defined and enforced
 Test that file contents match the defined file type
 Test that all file uploads have anti-virus scanning in place
 Test that unsafe filenames are sanitised
 Test that uploaded files are not directly accessible within the web root
 Test that uploaded files are not served on the same hostname/port
 Test that files and other media are integrated with the authentication and authorisation schemas
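
A minimal sketch of the first three upload checks, assuming hypothetical limits and a PNG-only content probe (a real validator would cover every whitelisted type):

    # Hypothetical upload validator illustrating whitelist, size, and content checks.
    ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}   # whitelist, not blacklist
    MAX_SIZE = 5 * 1024 * 1024                      # 5 MB example limit
    PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

    def validate_upload(filename: str, data: bytes) -> list[str]:
        errors = []
        ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
        if ext not in ALLOWED_EXTENSIONS:
            errors.append(f"extension {ext!r} is not whitelisted")
        if len(data) > MAX_SIZE:
            errors.append("file exceeds the size limit")
        # Content check: for .png, the bytes must actually start with the PNG signature.
        if ext == ".png" and not data.startswith(PNG_MAGIC):
            errors.append("contents do not match the declared file type")
        return errors

    print(validate_upload("evil.png", b"MZ...not a png"))  # -> reports content mismatch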

Risky Functionality - Card Payment

 Test for known vulnerabilities and configuration issues on the Web server and Web application
 Test for default or guessable passwords
 Test for non-production data in the live environment, and vice versa
 Test for Injection vulnerabilities (see the sketch after this list)
 Test for Buffer Overflows
 Test for Insecure Cryptographic Storage
 Test for Insufficient Transport Layer Protection
 Test for Improper Error Handling
 Test for all vulnerabilities with a CVSS v2 score > 4.0
 Test for Authentication and Authorization issues
 Test for CSRF
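
For the injection item above, a deliberately naive probe can be sketched as follows; the login URL, form fields, and success heuristic are all assumptions for illustration, and serious testing would use the tools described later in this section:

    # Naive injection probe against a hypothetical login form (illustration only).
    import requests

    LOGIN_URL = "http://test-server.example/login"  # hypothetical endpoint
    PAYLOADS = ["' OR '1'='1", "'; --", "\" OR \"\"=\""]

    for payload in PAYLOADS:
        resp = requests.post(LOGIN_URL, data={"user": payload, "password": payload}, timeout=10)
        # A login success (or a database error leaking into the body) on a tautology
        # payload is a strong hint of an injection vulnerability.
        suspicious = resp.status_code == 200 and "welcome" in resp.text.lower()
        print(f"payload {payload!r}: status={resp.status_code} suspicious={suspicious}")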


HTML 5

 Test Web Messaging
 Test for Web Storage SQL injection
 Check CORS implementation
 Check Offline Web Application

Error Handling

 Check for Error Codes
 Check for Stack Traces

3.3.2.6.3 How will you test it?


Successfully testing an application for security vulnerabilities requires thinking "outside of the
box." Normal use cases will test the normal behavior of the application when a user is using it in the
manner that is expected. Good security testing requires going beyond what is expected and thinking like
an attacker who is trying to break the application. Creative thinking can help to determine what
unexpected data may cause an application to fail in an insecure manner. It can also help find what
assumptions made by web developers are not always true and how they can be subverted. One of the
reasons why automated tools are actually bad at automatically testing for vulnerabilities is that this
creative thinking must be done on a case-by-case basis as most web applications are being developed in
a unique way (even when using common frameworks).

One of the first major initiatives in any good security program should be to require accurate
documentation of the application. The architecture, data-flow diagrams, use cases, etc., should be written
in formal documents and made available for review. The technical specification and application documents
should include information that lists not only the desired use cases but also any specifically disallowed
use cases.

Finally, it is good to have at least a basic security infrastructure that allows the monitoring and
trending of attacks against an organization's applications and network (e.g., IDS systems). 

Estimation phase:

Estimation should cover all phases of the S-SDLC:

 Requirements Gathering (OWASP: Phase 1: Before Development Begins)
o Security Requirements
o Setting up Phase Gates
o Risk Assessment
 Design (OWASP: Phase 2: During Definition and Design)
o Identify Design Requirements from a security perspective
o Architecture & Design Reviews
o Threat Modeling
 Coding (OWASP: Phase 3: During Development)
o Coding Best Practices
o Perform Static Analysis
 Testing (OWASP: Phase 3: During Development)
o Vulnerability Assessment
o Fuzzing
 Deployment (OWASP: Phase 4: During Deployment)
o Server Configuration Review
o Network Configuration Review

No. | Activity | Instructions | Resource used / Outcome

1. Initial estimate
Instructions: As guidance, and depending on project size (development estimate), complexity, and familiarity with the proposed solution, the estimate should be 3-5% of the development estimate for large projects (above 1000 md), a standard 20 md for medium projects (500 to 1000 md), and at most 14 md for projects under 500 md.
Resource used / Outcome: The estimate is sent via email to the manual TL.

2. Re-estimate
Instructions: Based on needs, a re-estimate may be necessary as more is discovered about the technologies used during the project lifetime.
Resource used / Outcome: The estimate is sent via email to the manual TL.

Security criteria (this should be limited only to application layer as infrastructure assessment is out of the
scope of the testing team unless stated otherwise):

 Assessment of OWASP TOP 10 vulnerabilities:
o Injection
o Broken authentication and session management
o Cross-site scripting (XSS)
o Insecure direct object references
o Security misconfigurations (define secure settings and emphasize application-level assessments)
o Sensitive data exposure
o Missing function-level access control
o Cross-site request forgery (CSRF)
o Using components with known vulnerabilities
o Unvalidated redirects and forwards
 Test tools to be used
 Manual steps to be employed in the evaluation

The system test plan for all projects should include two areas under security testing:
- manual security testing using the Manual_Security_Tests.docx available on QMS;
- types of testing (OWASP Top 10) and tools to be used for automated assessment and pen-testing.
Both are subject to tailoring to the specifics of the project.


Execution phase:

Execution of security testing happens throughout the project lifetime. The activities that should be
covered to achieve secure software products are described below.

All activities are subject to tailoring and use OWASP principles as guidance.

No. | Activity | Instructions | Resource used / Outcome

1. Security Requirements
Instructions: Go over existing documentation to identify the security requirements. As required, set up meetings with BA / customer / PM / software security professionals.
Outcome: Document outlining the specific security requirements.

2. Risk Assessment (exposes financial implications)
Instructions: Based on the security requirements document, identify threats from both a technology and a business perspective. The TPM, architect, BAs, and developers should be part of the team doing the risk assessment, along with security professionals.
Outcome: Document ranking the risks in terms of probability and impact, as well as proposals to mitigate them.

3. Architecture & Design Reviews
Instructions: Meet with the architecture and development TLs to present the issues identified by the security team.
Outcome: MoM with proposals to address the identified security issues at architecture and design level.

4. Threat Modeling (exposes technology- and business-related implications)
Instructions: Iterative process used to identify the threats that can affect the project from a technology and business-role perspective.
Outcome: Threat modeling document with a ranking of threats as well as the vulnerabilities. This document serves as input for the security analysis phase (activities 5 and 6 below).

5. Perform Static Analysis
Instructions: Use tools like Checkmarx / Contrast for assessing secure coding on the project source code.
Outcome: Bugs filed to outline the existing vulnerabilities. Bugs have to include recommendations for how to fix the issue.

6. Perform Dynamic Analysis
Instructions: Use specific tools (e.g., Kali tools / Acunetix, Netsparker, WebInspect, Burp, ZAP, AppScan, etc.) as well as manual inspections to detect security bugs.
Outcome: Bugs filed to outline the existing vulnerabilities. Bugs have to include recommendations for how to fix the issue.

7. Vulnerability Assessment
Instructions: Monitor the vulnerabilities by tracking the existing bugs, and interact with the development team as well as the TPM to fix the vulnerabilities.
Outcome: Secure software and verified bugs with status fixed and closed.

Reporting will be done using the Security_Assessment_Report_v1.xls template.

This document will have to be updated as additional tests are performed and bugs are fixed. The last
version of this document should ideally contain only bugs that are resolved and verified and should be
provided to PM/TPM.


3.3.2.6.4 Tools
While we have already stated that there is no silver-bullet tool, tools do play a critical role in the
overall security program. There is a range of open-source and commercial tools that can automate many
routine security tasks. These tools can simplify and speed up the security process by assisting security
personnel in their tasks. However, it is important to understand exactly what these tools can and cannot
do, so that they are not oversold or used incorrectly.

OWASP ZAP – Zed Attack Proxy Project: The Zed Attack Proxy (ZAP) is an easy-to-use integrated penetration testing tool for finding vulnerabilities in web applications. It is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who are new to penetration testing.

BeEF – The Browser Exploitation Framework Project: BeEF is short for The Browser Exploitation Framework. It is a penetration testing tool that focuses on the web browser. Amid growing concerns about web-borne attacks against clients, including mobile clients, BeEF allows the professional penetration tester to assess the actual security posture of a target environment by using client-side attack vectors. Unlike other security frameworks, BeEF looks past the hardened network perimeter and client system, and examines exploitability within the context of the one open door: the web browser. BeEF will hook one or more web browsers and use them as beachheads for launching directed command modules and further attacks against the system from within the browser context.

Burp Suite: Burp Suite is an integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application's attack surface, through to finding and exploiting security vulnerabilities.

PEStudio: PeStudio is a free tool performing static investigation of any Windows executable binary. A file being analyzed with PeStudio is never launched, so you can evaluate unknown executables and even malware with no risk. PeStudio runs on any Windows platform and is fully portable; no installation is required. PeStudio does not change the system or leave anything behind.

OWASP Xenotix XSS: OWASP Xenotix XSS Exploit Framework is an advanced Cross Site Scripting (XSS) vulnerability detection and exploitation framework. It provides zero-false-positive scan results with its unique Triple Browser Engine (Trident, WebKit, and Gecko) embedded scanner. It is claimed to have the world's second-largest collection of XSS payloads, with about 1500+ distinctive payloads for effective XSS vulnerability detection and WAF bypass. It incorporates a feature-rich Information Gathering module for target reconnaissance. The Exploit Framework includes highly offensive XSS exploitation modules for penetration testing and proof-of-concept creation.


3.3.2.6.5 Testing checklist


You are not done testing unless you have thought hard about security testing and made explicit
decisions about which testing to do and not to do. Back in the day, when even corporate computers were
unlikely to be connected to a network, security testing didn't seem that big of a deal. After all, even if a
computer did get infected by a virus it couldn't do much damage! Security testing is now officially a Big
Deal.
 Pore through your source code, APIs, and user interface looking for potential:
o Buffer overrun attacks
o Denial of service attacks
o SQL injection attacks
o Virus attacks
o User privacy violations (e.g., including user-identifying data in saved files)
 On Microsoft Windows OSs, use Application Verifier to ensure no NULL DACLs are created or used, and to check for many other potential security issues
 Verify security for links and macros is sufficient and works correctly
 Verify relative filenames (e.g., "..\..\file") are handled correctly
 Verify temporary files are created in appropriate locations and have appropriate permissions
 Verify your application functions correctly under different user rights and roles
 Verify your application functions correctly under partial-trust scenarios
 Verify every input is bounds-checked
 Verify known attack vectors are disabled

3.3.2.7 Internationalization and localization


The general ability of software to be internationalized and localized can be automatically tested
without actual translation by using pseudo-localization. This verifies that the application still works, even
after it has been translated into a new language or adapted for a new culture (such as different currencies
or time zones).
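
A pseudo-localization transform can be as small as the sketch below; the bracket markers, accent substitutions, and padding ratio are arbitrary choices for illustration.

    # Minimal pseudo-localization transform (marker and padding choices are arbitrary).
    ACCENTED = str.maketrans({"a": "å", "e": "é", "i": "ï", "o": "ö", "u": "ü"})

    def pseudo_localize(s: str) -> str:
        body = s.translate(ACCENTED)
        padding = "~" * max(2, len(s) // 3)   # simulate 30%+ text expansion
        return f"[!!{body}{padding}!!]"       # markers expose truncated or hard-coded strings

    print(pseudo_localize("Save document"))   # -> [!!Såvé döcümént~~~~!!]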

3.3.2.7.1 Objective
The user interface (UI), documentation, and content can be in multiple languages, currencies, date
formats, and units of measurement. With such complexities, organizations need to ensure that their
applications are relevant to the regions they serve. Internationalization and localization testing ensures
reliability, usability, acceptability, and above all relevance to audience and users worldwide. Products
need to be localized and then tested on many counts like language/copy context, consistent functionality,
compatibility, and interoperability.

3.3.2.7.2 What should be tested?

 Language

o Computer-encoded text – one of the most common ways to know if a product is ready to be localized is the use of Unicode. This allows the system to support a wide range of characters and avoid encoding issues.
o Different number systems – some countries use counting systems different from the usual 1, 2, 3 digits English uses.
o Writing direction – some languages run left to right (German, English, French), some right to left (Arabic and other Middle Eastern languages).
o Spelling variants where the same language is spoken (Localization vs Localisation, colour vs color).
o Capitalization rules and sorting rules can be different as well.
o Input – keyboard shortcuts and keyboard layouts may be different.

 Culture

o Images and colors: issues of comprehensibility and cultural acceptance
o Names and titles
o Government-assigned numbers (Social Security Number in the USA, SIN in Canada) and passports
o Telephone numbers, addresses, postal codes
o Currencies (symbols, position of currency markers)
o Weights and measures
o Paper sizes (though not as common)

 Writing conventions

o Date and time formats, calendar use (Gregorian vs. lunar, etc.)
o Time zones (usually internationalized products use UTC time)
o Number format (decimal separators, digit groupings) (a short sketch follows this list)
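
Several of these conventions can be observed by formatting the same values under different locales. The sketch below uses Python's standard locale module; the locale names are platform-dependent assumptions and must be installed on the test machine.

    # Same number and date, rendered under two locales (locale names are platform-dependent).
    import locale
    from datetime import date

    sample_number = 1234567.89
    sample_date = date(2024, 3, 31)

    for loc in ("en_US.UTF-8", "de_DE.UTF-8"):
        locale.setlocale(locale.LC_ALL, loc)  # raises locale.Error if not installed
        print(loc,
              locale.format_string("%.2f", sample_number, grouping=True),
              sample_date.strftime("%x"))
    # en_US: 1,234,567.89  03/31/2024
    # de_DE: 1.234.567,89  31.03.2024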

3.3.2.7.3 How will you test it?


The main goal of Internationalization and localization testing is to ensure compatibility and
consistency across all localized versions. We do this by defining a baseline standard, usually the native
version of the application under test. Additional tests are planned and executed based on any specific
changes needed to achieve proper localization.

You should then also run all your automated acceptance tests in your new locale to ensure that all
new functionality is internationalized as it is developed.

3.3.2.7.4 Tools
N/A

3.3.2.7.5 Testing checklist


You are not done testing yet unless you have vetted your application's readiness for use in
international locales. Even if you are positive that your application will never be used anywhere other than
your own country, these issues are worth at least investigating. Your team may decide not to fix problems
you find, but at least you will know where they are.
 Look for culture-specific images and terminology. For example, the color red is synonymous with
danger in many Western cultures, and it symbolizes happiness or good luck in others.
 Look for geo-political sensitivity issues. One common example is maps that cover areas whose
ownership or exact boundaries are disputed between countries. If your application contains or displays
maps in any fashion, be prepared for all kinds of pain the moment anyone outside your country starts
using them!
 Verify that your application correctly handles switching to different system locales, language packs,
and code pages, both before your application has started and while it is running.
 Verify that your application correctly handles switching to different regional settings, both before your
application has started and while it is running: date and time formats, currency symbols and formats,
and sort orders, to name just a few. Some or all of these settings will vary across locales and language
packs; most modern operating systems allow you to customize all of this separately from changing
languages or locales as well. (On Microsoft Windows, do this via the Regional Settings control panel
applet.) For example, if your application works with currency, see what happens when you change your
currency symbol to "abc".
 Verify that your application correctly handles multi-byte (e.g., Japanese), complex script (e.g., Arabic)
and right-to-left (e.g., Hebrew) languages. Can you cursor around this text correctly? What happens if
you mix right-to-left and left-to-right text?
 Verify that all controls correctly interact with Input Method Editors (IMEs). This is especially important if
you intend to sell into East Asian countries.
 Verify that your application correctly handles different keyboard mappings. As with regional settings,
certain locales and language packs will apply special keyboard mappings, but operating systems
usually allow you to directly modify your keyboard map as well.
 Verify your application correctly handles ANSI, multi-byte, and Unicode text, extended characters, and
non-standard characters on input, display, edit, and output.
 Verify that the correct sorting order is used. Sorting correctly is hard! Just ask anyone who has run into
the infamous Turkish "i" sort order bug. If you rely on operating system-provided sort routines then you
should be in good shape, but if your application does any custom sorting it probably does it wrong.
 Verify that the system, user, and invariant locales are used as appropriate: use the user locale when
displaying data to the user; use the system locale when working with non-Unicode strings, and use the
invariant locale when formatting data for storage.
 Verify that any language-dependent features work correctly.
 Verify that your test cases correctly take into account all of these issues. In my experience, testers make all the same mistakes in this area as do developers - and won't you be embarrassed if your developer logs a bug against your test case!

International sufficiency testing is important for just about any application, but localization testing only matters if you are localizing your application into other languages. The distinction can be hard to remember, but it's really quite simple: international sufficiency testing verifies that your application does not have any locale-specific assumptions (like expecting the decimal separator to be a decimal point), whereas localization testing verifies your application can be localized into different languages. Although similar, the two are completely orthogonal.

The simplest way to get started with localization testing is with a pseudo-localized (aka pseudoloc) build. A pseudoloc build takes your native-language build and pseudo-localizes it by adding interesting stuff to the beginning and end of each localized string (where "interesting stuff" is determined by the languages to which your product will be translated, but might include, e.g., double-byte or right-to-left characters). This process can vastly simplify your localization testing:
 It allows every build to be localized via an automated process, which is vastly faster and cheaper than is the case when a human hand-localizes.
 It allows people who may not read a foreign language to test localized builds.
 Strings that should be localized, but are not, are immediately obvious, as they don't have extra characters pre- and post-pended.
 Strings that should not be localized but in fact are, do have extra characters pre- and post-pended and thus are also immediately obvious.
 Double-byte bugs are more easily found.


 UI problems such as truncated strings and layout issues become highly noticeable.
 If you can, treat pseudoloc as your primary language and do most of your testing on pseudoloc builds. This lets you combine loc testing and functionality testing into one. Testing on actual localized builds - functionality testing as well as localization testing - is still important but should be trivial. If you do find major localization bugs on a localized build, find a way to move that discovery into your pseudoloc testing next time!

Beyond all that, there are a few specific items to keep in mind as you test (hopefully pseudo-) localized builds:
 Verify each control throughout your user interface (don't forget all those dialog boxes!) is aligned
correctly and sized correctly. Common bugs here are auto-sizing controls moving out of alignment with
each other, and non-auto-sizing controls truncating their contents.
 Verify all data is ordered/sorted correctly.
 Verify tab order is correct. (No, this shouldn't be affected by the localization process. But weirder things
have happened.)
 Verify all strings that should have been localized were. A should-have-been-localized-but-was-not
string is likely hard-coded.
 Verify no strings, that should not have been localized, were.
 Verify all accelerator key sequences were localized.
 Verify each accelerator key sequence is unique.
 Verify all hot key combination were localized.
 Verify each hot key combination is unique.

APIs: if your application installs EXEs, DLLs, LIBs, or any other kind of file - which covers every application I've ever encountered - you have APIs to test. Possibly the number of APIs *should* be zero - as in the case of for-the-app's-use-only DLLs - or one - as in the case of an EXE which does not support command line arguments. But - as every tester knows - what should be the case is not always what is the case.
 Verify that all publicly exposed APIs should in fact be public. Reviewing source code is one way to do
this. Alternatively, tools exist for every language to help with this - Lutz Roeder's .Net Reflector is de
rigueur for anyone working in Microsoft .Net, for example. For executables, start by invoking the
application with "-<command>", ":<command>", "/<command>", "\<command>" and "<command>"
command line arguments, replacing "<command>" with "?" or "help" or a filename. If one of the help
commands works you know a) that the application does in fact process command line arguments, and
b) the format which it expects command line arguments to take.
 Verify that no non-public API can cause harm if accessed "illegally". Just because an API isn't public
does not mean it can't be called. Managed code languages often allow anyone who cares to reflect into
non-public methods and properties, and vtables can be hacked. For the most part anyone resorting to
such tricks has no cause for complaint if they shoot themselves in their foot, but do be sure that
confidential information cannot be exposed through such trickery. Simply making your decryption key
or license-checking code private is not sufficient to keep it from prying eyes, for example.
 Review all internal and external APIs for your areas of ownership. Should they have the visibility they
have? Do they make sense? Do their names make clear their use and intent?
 Verify that every public object, method, property, and routine has been reviewed and tested.
 Verify that all optional arguments work correctly when they are and are not specified.
 Verify that all return values and uncaught exceptions are correct and helpful.
 Verify that all objects, methods, properties, and routines which claim to be thread safe in fact are.
 Verify that each API can be used from all supported languages. For example, ActiveX controls should
be usable (at a minimum) from C++, VB, VB.Net, and C#.
 Verify that documentation exists for every public object, method, property, and routine, and that said
documentation is correct. Ensure that any code samples in the docs compile cleanly and run correctly.


3.3.2.8 Accessibility
Accessibility testing is a subset of usability testing where the users under consideration have
disabilities that affect how they use the web. The end goal, in both usability and accessibility, is to discover
how easily people can use a web site and feed that information back into improving future designs and
implementations.

Accessibility testing may include compliance with known standards like Web Accessibility Initiative
(WAI) of the World Wide Web Consortium (W3C) created especially for people with disabilities.

3.3.2.8.1 Objective
Web accessibility is a goal, not a yes/no setting. It is a nexus of human needs and technology. As
our understanding of human needs evolves and as technology adapts to those needs, accessibility
requirements will change as well and current standards will be outdated. Different websites, and different
webs, serve different needs with different technology. Voice chat like Skype is great for the blind,
whereas video chat is a boon for sign language users.

Disabilities pose special challenges when working out how easy a product is to use, because they
can introduce additional experience gaps between users and evaluators. Accessibility evaluation must
take account of what it is like to experience the web with different senses and cognitive abilities and of the
various unusual configuration options and specialist software that enable web access to people with
particular disabilities.

If you are trying to evaluate the usability or accessibility of your web site, putting yourself in the
place of a film-loving teenager or a 50-year-old bank manager using your site is difficult, even before
disabilities are considered. But what if the film-loving teenager is deaf and needs captions for the films she
watches? What if the 50-year-old bank manager is blind and uses special technology (like a screen
reader), unfamiliar to the evaluator, in order to interact with his desktop environment and web
browser?

3.3.2.8.2 What should be tested?


The development team can make sure that their product is partially accessibility-compliant through
code inspection and unit testing. The test team needs to certify that the product is accessibility-compliant
during the functional testing phase. In most cases, an accessibility checklist is used to certify the
accessibility compliance. This checklist can contain information on what should be tested, how it should
be tested, and the status of the product for different access-related problems. A template of this checklist
is available here.

For accessibility testing to succeed, the test team should plan a separate cycle for accessibility
testing. Management should make sure that the test team has information on what to test and that all the
tools they need to test accessibility are available to them.

Typical test cases for accessibility might look similar to the following examples:

 Make sure that all functions are available via keyboard only (do not use the mouse)
 Make sure that information is visible when the display setting is changed to High Contrast mode.
 Make sure that screen-reading tools can read all the text available and that every picture/image has corresponding alternate text associated with it.
 Make sure that product-defined keyboard actions do not affect accessibility keyboard shortcuts.
 Etc.


3.3.2.8.3 How will you test it?


There are four components to accessibility testing:

 Tool-guided evaluation: where a tool looks for accessibility problems and presents them to the
evaluator (this would include accessibility checkers and code linters).
 Screening: where the expert simulates an end-user experience of the web site. Often you don’t
need to look very far to find accessibility problems. You might do no more than load the page in your
browser and notice the text is very hard to read.
 Tool-based inspection: where the evaluator uses a tool to probe how the various bits of a web site
are working together.
 Code review: where the evaluator looks directly at the code and assets of a web site to scour for
problems.
While beginners may be especially dependent on tool-guided evaluation, evaluators of all levels of
experience can benefit from each component. Even beginners can spot img elements without text
equivalents in HTML markup, and as you get more experienced, you will get quicker at spotting problems
before you progress to more rigorous testing. For experts on larger projects, it may not be feasible to
manually review all client-side code or inspect all parts of a website, but a tool-guided evaluation can find
areas of particular trouble that deserve a closer look. Also, human evaluators may overlook things that a
machine evaluation would have caught.
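
As a small example of tool-guided evaluation, the sketch below scans markup for img elements lacking a text equivalent; it assumes the third-party beautifulsoup4 library and uses inline sample HTML.

    # Tool-guided sketch: flag <img> elements lacking a text equivalent (assumes beautifulsoup4).
    from bs4 import BeautifulSoup

    html = """
    <html><body>
      <img src="logo.png" alt="Company logo">
      <img src="decorative.png" alt="">
      <img src="chart.png">
    </body></html>
    """

    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        alt = img.get("alt")
        if alt is None:
            print(f"missing alt attribute: {img.get('src')}")
        elif alt == "":
            print(f"empty alt (acceptable only if purely decorative): {img.get('src')}")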

3.3.2.8.4 Tools

Accessibility Valet: a tool that allows you to check web pages against either Section 508 or W3C Web Content Accessibility Guidelines (WCAG) accessibility compliance. One URL at a time may be checked with this online tool in free mode, or unlimited use with a paid subscription. All the HTML reporting options display your markup in a normalized form, highlighting valid, deprecated and bogus markup, as well as elements which are misplaced. Any accessibility warnings are shown in a generated report.

AChecker - Accessibility Checker: an open-source accessibility evaluation tool that was developed in 2009 by the Inclusive Design Research Centre (formerly known as the Adaptive Technology Resource Centre) of the University of Toronto. Using this tool, the user can submit a web page via its URL or by uploading its HTML file, and can subsequently select which guidelines to evaluate it against, namely the HTML Validator, BITV, Section 508, Stanca Act, WCAG 1.0 and WCAG 2.0.

EvalAccess: developed by the University of the Basque Country in Spain, EvalAccess is one of the few tools that lets you evaluate an entire website for WCAG 1.0 compliance. It displays the results in an easy-to-read report, whilst describing each error detected. Whilst it may not be the most user-friendly access tool, it can be sufficient to help most designers and developers clean up their sites.

FAE – Functional Accessibility Evaluator: the FAE evaluates a web page for its accessibility by referencing the ITAA Web Accessibility Standards, which are based on the WCAG 1.0 and Section 508 guidelines. The results of the evaluation are broken into 5 categories: Navigation and Orientation, Text Equivalents, Scripting, Styling and HTML Standards. The judging of the overall performance in each category is a percentage, divided between Pass, Warn and Fail – thus enabling you to focus on the specific areas with most problems.

MAGENTA – Multi-Analysis of Guidelines by an ENhanced Tool for Accessibility: MAGENTA is a web-based accessibility tool developed by the Human Interface in Information Systems (HIIS) Laboratory within the Human Computer Interaction Group. In addition to the WCAG 1.0 guidelines, it evaluates the accessibility of web sites according to their conformance to guidelines for the visually impaired and guidelines included in the Stanca Act.

OCAWA: developed by Urbilog and Orange, OCAWA references the WCAG 1.0 and France's accessibility law, the RGAA. Users can submit the URL of the web site or else upload an HTML file, and the tool displays an accessibility audit report with links to the discovered violations.

WAVE – Web Accessibility Versatile Evaluator: WAVE is a tool developed by WebAIM that is available both online and as a Firefox add-on. It reports accessibility violations by annotating a copy of the page that was evaluated and, at the same time, providing recommendations on how to repair them. Rather than providing a complex technical report, WAVE shows the original web page with embedded icons and indicators that reveal the accessibility information within your page.

Web Accessibility Checker: developed by the University of Stanford's Online Accessibility Program (SOAP), Web Accessibility Checker is a tool that can analyze individual web pages for their accessibility. Any detected problems are listed by the tool in a report that it outputs at the end of the evaluation. The user can choose to evaluate against multiple guidelines, which include WCAG 1.0 and 2.0, Section 508, BITV and the Stanca Act.

3.3.2.9 Compatibility
A common cause of software failure (real or perceived) is a lack of compatibility with other
application software, operating systems (or operating system versions, old or new), or target environments
that differ greatly from the original. The purpose of this type of testing is to test the software in most of
the client environments.

3.3.2.9.1 Objective
Compatibility testing is a type of software testing used to ensure the compatibility of the
system/application/website with various other objects, such as web browsers, hardware platforms,
users (in the case of very specific requirements, such as a user who speaks and can read only a
particular language), operating systems, etc. This type of testing helps find out how well a system
performs in a particular environment that includes hardware, network, operating system, and other
software.

3.3.2.9.2 What should be tested?

 Hardware: checks that the software is compatible with different hardware configurations.
 Operating Systems: checks that your software is compatible with different operating systems like Windows, Unix, Mac OS, etc.
 Software: checks that your developed software is compatible with other software. For example, MS Word should be compatible with other software like MS Outlook, MS Excel, VBA, etc.
 Network: evaluation of the performance of the system in a network with varying parameters such as bandwidth, operating speed, and capacity. It also checks the application in different networks with all the parameters mentioned earlier.
 Browser: checks the compatibility of your website with different browsers like Firefox, Google Chrome, Internet Explorer, Safari, etc.
 Devices: checks the compatibility of your software with different devices like USB port devices, printers and scanners, other media devices, and Bluetooth.
 Mobile: checks that your software is compatible with mobile platforms like Android, iOS, etc.
 Versions of the software: verifies that your software application is compatible with different versions of the software. For instance, checking your Microsoft Word to be compatible with Windows 7, Windows 7 SP1, Windows 7 SP2, Windows 7 SP3.

3.3.2.9.3 How will you test it?

Common issues associated with cross-browser testing:

1. Inconsistency in page layout
2. Inconsistency in grid
3. Page validation does not work in a certain browser
4. Transaction is not posted to the database on clicking the submit button or link
5. SSL certificate errors, noticed mainly with lower versions of browsers
6. Inconsistency in tab flow
7. Sometimes, pagination errors also occur

But before starting browser compatibility testing, it is the tester's responsibility to ask the
developer certain things about the application or website:

 For which browsers the application has been developed
 Which style sheets are used, because not every browser supports certain stylesheet versions

Things on a webpage that should remain the same in every browser:

 Image size should remain the same in all browsers
 Font color should remain the same as mentioned in the style sheet
 Text padding should also remain the same, but be careful if you are using an old version of IE
 Background should also remain the same


For better testing results, we should check these steps for browser compatibility testing:

 CSS, HTML, and XHTML validation: this is done to ensure that the pages that have been developed are free from HTML errors and follow the standards set by the W3 Consortium.
 Page validation: checked by enabling and disabling JavaScript in the browser.
 Font size validation: some browsers overwrite this with their default, or the font may not be available on the system.
 All image alignment: to ensure the proper alignment of images on the page.
 Header and footer: should be verified with care, and all text and its spacing alignment should be taken into account for testing.
 Page alignment should be tested (center, LHS, and RHS).
 Control alignment: alignment of controls, especially 1) bullets, 2) radio buttons, 3) check boxes, should be checked in various browsers.
 Page zoom in and out should be tested properly.
 Verification of information submitted to the database: if there are forms that interact with the database, they should be tested/verified with priority; it should be verified that information is being saved correctly in the database.
 HTML video format: video formats should be verified, because not all browsers support all video formats; for example, IE9 supports only .mp4, while Firefox supports .mp4 and .webm, and Chrome supports almost all formats (.mp4, .webm, .ogm, and some others).
 Text alignment: should be verified, especially in drop-downs.
 Flash content should be tested.
 Pages should be tested with cookies and JavaScript turned off, and tested again with both turned on.
 Verification should be done on Ajax and jQuery requests.
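
A minimal cross-browser smoke check can be automated along the following lines; the sketch assumes Selenium WebDriver with Firefox and Chrome drivers installed, and the URL and title assertion are placeholders.

    # Run the same smoke check in two browsers (assumes selenium + browser drivers installed).
    from selenium import webdriver

    URL = "http://test-server.example/"  # hypothetical application under test

    for make_driver in (webdriver.Firefox, webdriver.Chrome):
        driver = make_driver()
        try:
            driver.get(URL)
            # The title check is a placeholder; real cases would verify layout, forms, JS, etc.
            assert "Welcome" in driver.title, f"unexpected title in {driver.name}: {driver.title}"
            print(driver.name, "OK")
        finally:
            driver.quit()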

Browser Compatibility Checklist

(Record a pass/fail status for each issue in every browser under test.)

1. CSS Rendering: verify that fonts are consistent throughout the browser.
2. JavaScript Interpreting: verify that JavaScript is usable by the browsers under test; execute the functionality which invokes popups.
3. Browser Hi-jacking: verify that clicking a link does not open a new instance of the browser. If a link opens a new instance of the browser, then the link should open a new instance in each browser.
4. Functioning of Buttons and Links:
4.1 Verify all the buttons (OK, Cancel, Submit, Reset, etc.) on the form page are working.
4.2 Verify pagination.
4.3 Verify there are no broken internal links, outgoing links, or email links.
4.4 Verify check boxes and radio buttons are working.
5. Resolution Issues: verify the application at different resolutions in different browsers.
6. Cookies Issues: verify that the application can work if cookies are deleted/disabled.
7. Session Storage:
7.1 Verify session storage in different browsers by closing the browser.
7.2 Verify session storage in different browsers by closing tabs in the browser.
8. Flash and Shockwave Issues: verify that the Flash player plays animations. Generally you don't need to install a Flash player to run these types of files, but in Mozilla Firefox these files will not run if the Flash player add-on is not installed.
9. Image Display Issues: verify images display correctly.
10. HTML Version Issues: verify the compatibility of the HTML version being used.
11. Performance Issues Across Browsers: do performance profiling of the web application in various browsers.
12. UI Issues:
12.1 Verify the alignment of texts.
12.2 Verify options in all drop-downs, and that texts do not get truncated.
12.3 Verify options in all drop-downs are displayed over the window.
12.4 Verify the menu items are showing properly; check whether there is any overlapping.
12.5 Verify that texts do not get trimmed in row-height text (e.g., g, y, p, j, q).
12.6 Verify that tooltips appear properly, meaning no overlapping and correct alignment.
12.7 Verify that there are no line breaks (for headings and underlines on the page).
12.8 Verify web content does not need a horizontal scroll bar, as it is not preferable in any web application.
13. Printing Issues:
13.1 Verify text and image alignments.
13.2 Verify colors of text, foreground, and background.
13.3 Verify scalability to fit paper size.
13.4 Verify tables and borders.
13.5 Verify pages print legibly without trimming text.
14. Input Forms:
14.1 Verify all the validations on each field.
14.2 Verify the default values of fields in the input form.
14.3 Verify wrong inputs to the fields in the forms.
14.4 Verify behavior when auto-complete is turned on/off.
15. Usability:
15.1 Verify navigation with Tab/arrow keys.
15.2 Verify background color is consistent throughout the application.
15.3 Verify that the appearance of buttons, drop-downs, links, and menus is consistent throughout the application.
15.4 Verify that links on the previous page appear correctly when pressing the Back button on the next page.


3.3.2.9.4 Tools

Ghostlab: Ghostlab offers synchronized testing of scrolls, clicks, reloads, and form input across all your connected devices, meaning you can test the entire user experience, not just a single page. Using the built-in inspector, you can discover and fix problems quickly, connected to the DOM or JavaScript output on any device. Ghostlab is available for both Windows and Mac OS X, with no setup required, as it can instantly connect to any JavaScript-enabled client. Using the Ghostlab server, you can sync pages from your local directory, your localhost Apache setup, or any server in the world, with automatic reloading to keep track of file changes. The workspace feature lets you create a custom browser setup and adapt Ghostlab's features to exactly what you require.

BrowserStack: BrowserStack provides live, web-based browser testing with instant access to every desktop and mobile browser (currently more than 300), with the ability to test local and internal servers, providing a secure setup. The cloud-based access means no installation is required, and the pre-installed developer tools (including Firebug Lite, Microsoft Script Debugger, and many more) are useful for quick cross-browser testing and debugging.

Sauce Labs: Sauce Labs allows you to run tests in the cloud on more than 260 different browser platforms and devices, providing a comprehensive test infrastructure including Selenium, JavaScript, mobile, and manual testing facilities. There's no VM setup or maintenance required, with access to live breakpoints while the tests are running, so you can jump in and take control to investigate a problem manually.

CrossBrowserTesting: CrossBrowserTesting offers a live testing environment with access to more than 130 browsers across 25 different operating systems and mobile devices, so you can interactively verify your layout and test AJAX, HTML forms, JavaScript, and Flash. The impressive layout comparison feature lets you choose a "base" browser for comparisons and get a summary of rendering differences, along with side-by-side screenshots to catch and debug layout issues effectively. You can test local development websites even behind firewalls and logins, with the ability to change browser, cache, and cookie settings, and turn JavaScript on or off.

Browsershots: Browsershots is a free, open-source web app providing a convenient way to test your website's browser compatibility in one place. Browsershots uniquely champions the idea of distributing the work of making screenshots among community members, who set up "factories" on their own machines to get jobs from the server using a fully automatic unattended script. It's simple to use: simply enter the URL and choose the browser setup you require. There are several presets to choose from, including screen size, color depth, JavaScript, Java, and Flash. You will then have to wait a specified period of time until your request is processed. When the screenshots are ready, they'll appear, and you can bookmark your processing page to come back to it later.

3.3.2.9.5 Testing Checklist

3.3.2.9.5.1 Network Connectivity


You are not done testing yet unless you have verified how your application handles various
network configurations and events. In times past, you could more or less count on stability in the network:
if the computer was connected to a network when your application started, it would almost certainly
remain connected to that network while your application was open. The chances of something
catastrophic happening were low enough that bugs of the form "Disconnect your computer from
the network while the application is opening a twenty-megabyte file from a network share" tended to be
closed Won't Fix with dispatch, under the premise that "no user is going to do that".
Users these days are more likely than not to be connected to a wireless network which drops out
on a regular basis. Users who start out connected to a wired connection may undock their computer and
thus disconnect from that network at any time. Net-over-cell-phone is becoming ever more prevalent. And
so it is important to check the following:
 Connecting over a network which supports only IPv4
 Connecting over a network which supports only IPv6
 Connecting over a network which supports both IPv4 and IPv6
 Connecting over an 802.11a wireless network
 Connecting over an 802.11b wireless network
 Connecting over an 802.11g wireless network
 Connecting over an 802.11n wireless network
 Connecting over a GPRS (cell phone) network
 Connecting from a multi-homed machine (i.e., one which is connected to multiple networks)
 Connecting via a 28.8 modem
 Connecting via a 56k modem
 Connecting over a network other than the one inside your corporate firewall
 Connecting over a network which requires user authentication on first access
 Connecting over a network which requires user authentication on every access
 Passing through a software firewall
 Passing through a hardware firewall
 Passing through Network Address Translation
 Losing its connection to the network
 Losing its authority to connect to the network
 Joined to a workgroup
 Joined to a domain
 Accessing documents from a network location which requires user authentication
 Performing a Print Preview to a network printer which is disconnected or otherwise unavailable

3.3.2.9.5.2 Platform
You are not done testing unless you have considered which platforms to include in and which to omit
from your test matrix. The set of supported platforms tends to vary widely across contexts - a consumer
application will likely have a different set of supported platforms than does an enterprise line-of-business
application. Even if you officially support only a few specific platforms, it can be useful to understand what
will happen if your application is installed or executed on other platforms. Platforms to consider include:
 All supported versions of Windows; at a minimum: Windows XP SP2, Windows XP SP<latest>,
Windows Server 2003 SP<latest>, Windows Vista SP<latest>
 Apple OS X.<latest>
 Your favorite distribution of Linux
 Your favorite flavor of Unix
 32-bit version of the operating system running on 32-bit hardware
 32-bit version of the operating system running on 64-bit hardware
 64-bit version of the operating system running on 64-bit hardware
 The various SKUs of the operating system
 Interoperability between different SKUs of the operating system
 Interoperability between different operating systems (e.g., using a Windows Vista machine to open a
file which is stored on a Linux network share)
 All supported browsers and browser versions; at a minimum: Internet Explorer 6, Internet Explorer 7, Opera, Firefox
 With and without anti-virus software installed
 With and without firewall software installed

Also peruse the Windows Logo requirements. Even if you aren't pursuing logo compliance (or
your application doesn't run on Windows) these are a useful jumping off point for brainstorming test cases!

3.3.2.9.5.3 CPU Configurations


You are not done testing yet unless you have tested across multiple CPU configurations. A
common pitfall we often see is having every last developer and tester use exactly the same make, model,
and configuration of computer. This is especially prevalent in computer labs. Yes, having every machine
be exactly the same simplifies setup, troubleshooting, etc. etc. But this is not the world in which your
customers live!
This holds true even if you're writing line of business applications for a corporation where the
computing environment is tightly controlled and locked down. That "standard" configuration will change on
a regular basis, and it will take months or years to complete the switch over. (By which time the standard
of course has moved on!)
So be sure to sprinkle the following throughout your development organization (including dev, test,
PM, docs, and anyone else that helps create your app):
 Processors from multiple vendors (i.e., Intel and AMD if you're building Windows applications)
 Multiple versions of each brand of processor (e.g., for Intel, mobile and desktop Celerons and
Pentiums)
 Single-, dual-, and multi-processors
 Single-, dual-, and multi-core processors
 Hyperthreaded and non-hyperthreaded processors
 Desktop and laptop configurations
 32-bit and 64-bit configurations

3.3.2.9.5.4 Hardware Configurations


You are not done testing yet unless you have consciously decided which of the following
hardware configurations to include in and which to exclude from your test matrix:
 Low end desktop


 High end desktop


 Low end laptop
 High end laptop
 Minimum expected screen resolution (this may be 640x480 in some cases, but depending on your
customers you may be able to expect a higher resolution)
 Super-high screen resolution
 Other relevant form factors (e.g., for Microsoft Windows: convertible Tablet PC, slate Tablet PC,
UMPC/Origami, Pocket PC, Smartphone)
 Maxed-out super-high-end-everything machine
 Minimum configuration machine
 Laptop with power management settings disabled
 Laptop with power management settings set to maximize power
 Laptop with power management settings set to maximize battery life
 CPU configurations

The exact definition of "low end" and "high end" and such will vary across applications and use
cases and user scenarios - the minimum configuration for a high-end CAD program will probably be rather
different than that for a mainstream consumer-focused house design application, for example. Also
carefully consider which chipsets, CPUs, system manufacturers, and such you need to cover. The full
matrix is probably rather larger than you have time or budget to handle!

3.3.2.9.5.5 Application Configuration and Interoperability


You are not done testing unless you have given your application's configurability options a
thorough going-over. Applications can be configured via many different avenues: global and per-user
configuration files and global and per-user registry settings, for example. Users tend to appreciate being
able to customize an application to look and work exactly the ways they want it to look and work; they tend
to get grumpy if all those customizations are nowhere to be seen the next time they launch the application.
Interoperability with other instances of the application and with other applications fall into this same boat:
most users expect a certain level of interoperability between applications, and they often get grumpy if
your application does not meet that bar. Window interactions show up here too. Here are a few items to
kick off your configuration and interoperability brainstorming session:

3.3.2.9.5.6 Configuration
 Verify settings which should modify behavior do
 Verify settings are or are not available for administrators to set via policies, as appropriate
 Verify user-specific settings roam with the user
 Verify registry keys are set correctly, and that no other registry keys are modified
 Verify user-specific configuration settings are not written to machine or global registry settings or
configuration files
 Brainstorm how backward compatibility problems might occur because a setting has moved or
changed and thus broken functionality in a previous version or changed default values from a previous
version

3.3.2.9.5.7 Interoperability
 Verify clipboard cut, copy, and paste operations within your application
 Verify clipboard cut, copy, and paste operations between your application and other applications
 Verify drag and drop operations within your application
 Verify drag and drop operations between your application and other applications

3.4 User Acceptance Testing

The definition of acceptance testing in standard BS7925 is: “acceptance testing is formal testing
conducted to enable a user, customer or other authorized entity to determine whether to accept a system
or component”. This is the final stage of validation in the software development lifecycle (SDLC).

We perform this activity together with the customer, and the main objective is to ensure that the final
system matches the original requirements defined by the business or the project sponsor. Testing team
members may choose to perform any test that is needed, based on the usual business process. Testing
will be carried out against the user requirements documentation in an environment as close to production
as achievable.

Test cases will be generated as detailed scenarios for each requirement (business and technical)
described in the project documentation. Additional test cases can be defined during the test execution
phases.

The User Acceptance Testing process comprises several types of tests: functional, non-functional
(already detailed in the System Testing chapter above), and end-to-end tests.

The following users can be part of the testing team: business users, Testing & Review members,
members of support teams and members from the customer.

3.4.1 Objective

The key purpose of UAT is not to see that a program or system works according to the specification
but to check that it will work in the context of a business or organization. Many UAT testers are not aware
of this and spend their time running tests which should have been properly done in the functional testing
part of the System Testing.

UAT tests the integration of a computer system into a much larger system called the business
or organization. It is a form of interface testing and is concerned with checking communication between
the system and its users. This does not mean it is a form of usability testing, which checks how easy it is
to work with a computer system. Instead it is about whether a business or organization can input the
information they need and get back usable results which will enable the business to go forward.

3.4.2 What should be tested?

Testing generally involves running a suite of tests on the completed system. Each individual test,
known as a test case, exercises a particular operating condition of the user's environment or feature of the
system, and will result in a pass or fail outcome.

There is generally no degree of success or failure: each case either passes or fails. The test
environment is usually designed to be identical, or as close as possible, to the anticipated user's
environment, including its extremes. Each test case must be accompanied by test case input data and/or
a formal description of the operational activities to be performed; the intention is to thoroughly describe
the specific test case and the expected results.

3.4.3 How to test it?

The acceptance test suite is run against the supplied input data, or an acceptance test script is used
to direct the testers. The results obtained are then compared with the expected results. If there is a correct
match for every case, the test suite is said to pass. If not, the system may either be rejected or accepted
on conditions previously agreed between the sponsor and the manufacturer.

The objective is to provide confidence that the delivered system meets the business requirements
of both sponsors and users. The acceptance phase may also act as the final quality gateway, where any
quality defects not previously detected may be uncovered.

A principal purpose of acceptance testing is that, once completed successfully, and provided certain
additional (contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system
as satisfying the contract (previously agreed between sponsor and manufacturer), and deliver final
payment.

The UAT acts as a final verification of the required business functionality and proper functioning of
the system, emulating real-world usage conditions on behalf of the paying client or a specific large
customer. If the software works as required and without issues during normal use, one can reasonably
extrapolate the same level of stability in production.

3.4.4 Tools

Acceptance Testing Framework – Description

 Cucumber – a behavior-driven development (BDD) acceptance test framework. Related tools:
Behat, a BDD acceptance framework for PHP; Lettuce, a BDD acceptance framework for Python.
 Fabasoft app.test – for automated acceptance tests.
 FitNesse – a fork of Fit.
 iMacros
 ItsNat – a Java Ajax web framework with built-in, server-based functional web testing capabilities.
 Mocha – a popular web acceptance test framework based on JavaScript and Node.js.
 Ranorex
 Robot Framework
 Selenium
 Specs2 – specification by example.
 Watir

4 TESTING METHODS

4.1 BLACK-BOX TESTING

Black Box Testing, also known as Behavioral Testing, is a software testing method in which the
internal structure/design/implementation of the item being tested is not known to the tester. These tests
can be functional or non-functional, though they are usually functional.

The method is so named because, in the eyes of the tester, the software program is like a black box:
one cannot see inside it. This method attempts to find errors in the following categories:

 Incorrect or missing functions
 Interface errors
 Errors in data structures or external database access
 Behavior or performance errors
 Initialization and termination errors

Definition by ISTQB

 black box testing: Testing, either functional or non-functional, without reference to the internal
structure of the component or system.
 black box test design technique: Procedure to derive and/or select test cases based on an
analysis of the specification, either functional or non-functional, of a component or system
without reference to its internal structure.

 LEVELS APPLICABLE TO

Black Box Testing method is applicable to the following levels of software testing:

 Integration Testing
 System Testing
 Acceptance Testing

The higher the level, and hence the bigger and more complex the box, the more the black box testing
method comes into use.

 BLACK BOX TESTING TECHNIQUES

Following are some techniques that can be used for designing black box tests.

 Equivalence partitioning: a software test design technique that involves dividing input values
into valid and invalid partitions and selecting representative values from each partition as test
data.
 Boundary Value Analysis: a software test design technique that involves determining the
boundaries for input values and selecting values that are at the boundaries and just inside/
outside of the boundaries as test data (see the sketch after this list).
 Cause-Effect Graphing: a software test design technique that involves identifying the causes
(input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating
test cases accordingly.
 Decision Table: decision tables are a precise and compact way to model complicated logic. They
are ideal for describing situations in which a number of combinations of actions are taken under
varying sets of conditions.
 State transition: state transition testing is used where some aspect of the system can be
described in what is called a ‘finite state machine’. This simply means that the system can be in a
(finite) number of different states, and the transitions from one state to another are determined by
the rules of the ‘machine’. This is the model on which the system and the tests are based.
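
To make boundary value analysis concrete, here is a minimal JUnit 4 sketch; the validator under test and
its valid range of 18 to 65 are hypothetical examples invented for illustration, not drawn from any
specification in this manual.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class AgeBoundaryTest {

    // Hypothetical unit under test: accepts ages in the range [18, 65].
    private boolean isValidAge(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    public void testValuesAtAndAroundBoundaries() {
        assertFalse(isValidAge(17)); // just outside the lower boundary
        assertTrue(isValidAge(18));  // on the lower boundary
        assertTrue(isValidAge(19));  // just inside the lower boundary
        assertTrue(isValidAge(64));  // just inside the upper boundary
        assertTrue(isValidAge(65));  // on the upper boundary
        assertFalse(isValidAge(66)); // just outside the upper boundary
    }
}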

 BLACK BOX TESTING ADVANTAGES

 Tests are done from a user’s point of view and will help in exposing discrepancies in the
specifications.
 Tester need not know programming languages or how the software has been implemented.
 Tests can be conducted by a body independent of the developers, allowing for an objective
perspective and the avoidance of developer bias.
 Test cases can be designed as soon as the specifications are complete.

 BLACK BOX TESTING DISADVANTAGES


 Only a small number of possible inputs can be tested, and many program paths will be left
untested.
 Without clear specifications, which is the situation on many projects, test cases will be difficult to
design.
 Tests can be redundant if the software designer/developer has already run a test case.
 Ever wondered why a soothsayer closes the eyes when foretelling events? The black box tester
works under much the same constraint.

4.2 WHITE BOX TESTING

White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing,
Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in which
the internal structure/design/implementation of the item being tested is known to the tester. The tester
chooses inputs to exercise paths through the code and determines the appropriate outputs. Programming
know-how and implementation knowledge are essential. White box testing is testing beyond the user
interface and into the nitty-gritty of a system.

The method is so named because, in the eyes of the tester, the software program is like a white/
transparent box: one can clearly see inside it.

Definition by ISTQB

 white-box testing: Testing based on an analysis of the internal structure of the component or
system.
 white-box test design technique: Procedure to derive and/or select test cases based on an
analysis of the internal structure of a component or system.

 EXAMPLE

A tester, usually a developer as well, studies the implementation code of a certain field on a webpage,
determines all legal (valid and invalid) and illegal inputs, and verifies the outputs against the expected
outcomes, which are also determined by studying the implementation code.

White Box Testing is like the work of a mechanic who examines the engine to see why the car is not
moving.

 LEVELS APPLICABLE TO

White Box Testing method is applicable to the following levels of software testing:


 Unit Testing: For testing paths within a unit.
 Integration Testing: For testing paths between units.
 System Testing: For testing paths between subsystems.

However, it is mainly applied to Unit Testing.

 WHITE BOX TESTING TECHNIQUES

 Control flow testing
 Data flow testing
 Branch testing
 Statement coverage
 Decision coverage (statement and decision coverage are illustrated in the sketch after this list)

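To illustrate the difference between statement coverage and decision (branch) coverage, consider the
following Java fragment; the function and the test values are invented for the example.

// Hypothetical unit under test.
static int absoluteValue(int x) {
    if (x < 0) {
        x = -x;
    }
    return x;
}

// A single call, absoluteValue(-5), executes every statement above
// (100% statement coverage) but exercises only the true branch of the
// decision. Adding a second call, absoluteValue(5), also exercises the
// false branch and so achieves 100% decision coverage.
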
 WHITE BOX TESTING ADVANTAGES

 Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
 Testing is more thorough, with the possibility of covering most paths.

 WHITE BOX TESTING DISADVANTAGES

 Since tests can be very complex, highly skilled resources are required, with thorough knowledge
of programming and implementation.
 Test script maintenance can be a burden if the implementation changes too frequently.
 Since this method of testing is closely tied to the application being tested, tools that cater to every
kind of implementation/platform may not be readily available.

5 DATABASE TESTING

Computer applications are more complex these days, with technologies like Android and a profusion
of smartphone apps. And the more complex the front end, the more intricate the back end tends to be. So it
is all the more important to learn about DB testing and to be able to validate databases effectively, to
ensure secure, high-quality databases.

5.1.1 Objective


1) Data Mapping: In software systems, data often travels back and forth from the UI (user interface) to
the backend DB and vice versa. The following are the aspects to look for:
 Check whether the fields in the UI/front-end forms are mapped consistently with the
corresponding DB table (and the fields within it). Typically this mapping information is defined
in the requirements documents.
 Whenever a certain action is performed in the front end of an application, a corresponding CRUD
(Create, Retrieve, Update and Delete) action gets invoked at the back end. A tester will have to
check that the right action is invoked and that the invoked action itself succeeds.

2) ACID properties validation: atomicity, consistency, isolation and durability. Every transaction a DB
performs has to adhere to these four properties.

 Atomicity means that a transaction either fails or passes as a whole: even if a single part of the
transaction fails, the entire transaction has failed. This is usually called the "all-or-nothing" rule.
 Consistency: a transaction will always result in a valid state of the DB.
 Isolation: if there are multiple transactions and they are executed all at once, the result/state of
the DB should be the same as if they were executed one after the other.
 Durability: once a transaction is done and committed, no external factor such as a power loss or
crash should be able to change it.

3) Data integrity:
This means that following any of the CRUD operations (create, read, update and delete), the updated and
most recent values/status of shared data should appear on all forms and screens. A value should not be
updated on one screen while an older value is displayed on another. So devise your DB test cases so that
they check the data in all the places it appears, to see whether it is consistently the same.

4) Business rule conformity: more complex databases mean more complicated components, such as
relational constraints, triggers and stored procedures. So testers will have to come up with appropriate
SQL queries in order to validate these complex objects.

5.1.2 What should be tested?

1) Transactions:
When testing transactions it is important to make sure that they satisfy the ACID properties. The
following statements are commonly used:

 BEGIN TRANSACTION TRANSACTION#
 END TRANSACTION TRANSACTION#

The rollback statement ensures that the database remains in a consistent state:

 ROLLBACK TRANSACTION#

After these statements are executed, use a select to make sure the changes have been reflected
(a JDBC sketch of the full check follows):

 SELECT * FROM TABLENAME <tables which involve the transactions>
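
As a minimal sketch of this check in Java via JDBC, assuming a hypothetical accounts table and an
in-memory H2 database (both invented for the example); the mid-transaction failure is simulated with a
statement against a non-existent table:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AtomicityCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:testdb")) {
            con.setAutoCommit(false); // start an explicit transaction
            try (Statement st = con.createStatement()) {
                st.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
                st.executeUpdate("UPDATE no_such_table SET x = 1"); // simulated failure
                con.commit();
            } catch (SQLException e) {
                con.rollback(); // all-or-nothing: the first update must be undone too
            }
            // Use a select to make sure the change was NOT reflected.
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT balance FROM accounts WHERE id = 1")) {
                // Compare the balance read here with the original value.
            }
        }
    }
}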


2) Database schema:
A database schema is nothing but a formal definition of how the data is going to be organized in a DB.
To test it (a metadata-based sketch follows the list):

 Identify the requirements based on which the database operates. Sample requirements:
 Primary keys to be created before any other fields are created.
 Foreign keys to be completely indexed for easy retrieval and searching.
 Field names starting or ending with certain characters.
 Fields with a constraint that certain values can or cannot be inserted.

 Use one of the following ways, according to relevance:
 The SQL query DESC <table name> to validate the schema.
 Regular expressions for validating the names of the individual fields and their values.
 Tools like SchemaCrawler.
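
A further option, sketched below, is to validate the schema programmatically through JDBC metadata;
the STUDENTS table and the in-memory H2 URL are hypothetical examples.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SchemaCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:testdb")) {
            DatabaseMetaData md = con.getMetaData();
            // Existence test: is the table defined at all?
            try (ResultSet tables = md.getTables(null, null, "STUDENTS", null)) {
                System.out.println("STUDENTS table present: " + tables.next());
            }
            // Field-level test: list each column name and its declared type.
            try (ResultSet cols = md.getColumns(null, null, "STUDENTS", null)) {
                while (cols.next()) {
                    System.out.println(cols.getString("COLUMN_NAME")
                            + " : " + cols.getString("TYPE_NAME"));
                }
            }
        }
    }
}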

3) Trigger:
When a certain event takes place on a certain table, a piece of code (a trigger) can be auto-
instructed to be executed.

For example, a new student joins a school. The student is taking two classes: math and science. The
student is added to the "student" table. A trigger could add the student to the corresponding subject
tables once he is added to the student table.

The common method of testing is to execute the SQL query embedded in the trigger independently
first and record the result, then execute the trigger as a whole and compare the results.

Triggers are tested during both the black box and white box testing phases.

 White box testing: stubs and drivers are used to insert, update or delete data in a way that causes
the trigger to be invoked. The basic idea is to test the DB alone, even before the integration
with the front end (UI) is made.
 Black box testing:
a) Since the UI and DB integration is now available, we can insert/delete/update data from
the front end in a way that invokes the trigger. Select statements can then be used to retrieve
the DB data to see whether the trigger was successful in performing the intended operation.
b) A second way to test this is to directly load the data that would invoke the trigger and see
whether it works as intended (see the sketch below).
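
Following the student example, here is a sketch of the second (data-load) approach in Java via JDBC;
the student and math_class tables, their columns, and the trigger itself are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TriggerCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:school");
             Statement st = con.createStatement()) {
            // Directly load data that should invoke the trigger.
            st.executeUpdate("INSERT INTO student (id, name) VALUES (42, 'New Student')");
            // Select to verify the trigger performed the intended operation.
            try (ResultSet rs = st.executeQuery(
                    "SELECT COUNT(*) FROM math_class WHERE student_id = 42")) {
                rs.next();
                System.out.println("trigger added the row: " + (rs.getInt(1) == 1));
            }
        }
    }
}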

4) Stored Procedures:
Stored procedures are more or less similar to user-defined functions. They can be invoked by
CALL PROCEDURE or EXECUTE PROCEDURE statements, and the output is usually in the form of
result sets.

They are stored in the RDBMS and are available to applications.


Stored procedures are also tested during:

 White box testing: stubs are used to invoke the stored procedures, and the results are
validated against the expected values (see the sketch below).
 Black box testing: perform an operation from the front end (UI) of the application and check for
the execution of the stored procedure and its results.
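
As a white-box-style sketch, a stored procedure can be invoked directly through JDBC and its result set
validated; the procedure name get_students_by_class and its parameter are hypothetical.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StoredProcedureCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:school");
             CallableStatement cs = con.prepareCall("{call get_students_by_class(?)}")) {
            cs.setString(1, "math");
            try (ResultSet rs = cs.executeQuery()) {
                while (rs.next()) {
                    // Validate each row against the expected values here.
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}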

5) Field constraints – default value, unique value and foreign key:

 Perform a front-end operation that violates the database object constraint.
 Validate the results with a SQL query.

Checking the default value for a certain field is quite simple. It is a part of business rule validation. You can
do it manually or you can use a tool like QTP. Manually, you can perform an action that adds a value other
than the default value into the field from the front end and see whether it results in an error.
The following is a sample VBScript function:

Function RegularExpressionValidation(pattern, string_to_match)
    Dim newregexp
    ' Pattern holds the default value required by the business requirements
    Set newregexp = New RegExp
    newregexp.Pattern = pattern
    newregexp.IgnoreCase = True
    newregexp.Global = True
    RegularExpressionValidation = newregexp.Test(string_to_match)
End Function

MsgBox RegularExpressionValidation("<default value as required by the business requirements>", string_to_match)

The result of the above code is True if the default value exists and False if it doesn't.

Checking a unique value can be done exactly the way we did for the default value: try entering values
from the UI that violate this rule and see whether an error gets displayed. The automation VBScript code
is identical to the function above, with the pattern set to the unique value required by the business
requirements:

newregexp.Pattern = "<unique value as required by the business requirements>"

For foreign key constraint validation, use data loads that directly input data violating the constraint
and see whether the application restricts it. Along with the back-end data load, perform front-end UI
operations in a way that violates the constraints and see whether the relevant error is displayed
(a back-end sketch follows).
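
A back-end sketch of the foreign key check via JDBC; the math_class table and its student_id foreign
key are hypothetical, and the check passes when the database rejects the violating insert.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ForeignKeyCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:school");
             Statement st = con.createStatement()) {
            try {
                // id 999 does not exist in the parent student table.
                st.executeUpdate("INSERT INTO math_class (student_id) VALUES (999)");
                System.out.println("FAIL: the constraint was not enforced");
            } catch (SQLException expected) {
                System.out.println("PASS: violating insert rejected: " + expected.getMessage());
            }
        }
    }
}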

5.1.3 How to test it?

The general test process for DB testing is not very different from that of any other application. The
following are the steps:

Step #1) Prepare the environment
Step #2) Run a test
Step #3) Check the test result
Step #4) Validate according to the expected results
Step #5) Report the findings to the respective stakeholders

An important part of writing database tests is the creation of test data. You have several strategies for
doing so:

1. Have source test data. You can maintain an external definition of the test data, perhaps in flat
files, XML files, or a secondary set of tables. This data would be loaded in from the external
source as needed.
2. Test data creation scripts. You develop and maintain scripts, perhaps using data manipulation
language (DML) SQL code or simply application source code (e.g. Java or C#), which perform the
necessary deletions, insertions, and/or updates required to create the test data.
3. Self-contained test cases. Each individual test case puts the database into a known state
required for the test (see the sketch after this list).
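
Here is a minimal JUnit 4 sketch of strategy 3; the student table and the in-memory H2 database are
hypothetical, and the test creates the state it needs, runs an operation, and validates the result.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.Assert;
import org.junit.Test;

public class SelfContainedDbTest {

    @Test
    public void updateIsVisibleOnReRead() throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:testdb");
             Statement st = con.createStatement()) {
            // Step 1: put the database into the known state this test requires.
            st.executeUpdate("CREATE TABLE student (id INT PRIMARY KEY, name VARCHAR(50))");
            st.executeUpdate("INSERT INTO student VALUES (1, 'Initial Name')");

            // Step 2: run the operation under test.
            st.executeUpdate("UPDATE student SET name = 'Updated Name' WHERE id = 1");

            // Steps 3-4: check the result and validate it against expectations.
            try (ResultSet rs = st.executeQuery("SELECT name FROM student WHERE id = 1")) {
                Assert.assertTrue(rs.next());
                Assert.assertEquals("Updated Name", rs.getString("name"));
            }
        }
    }
}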

Black-box testing at the interface:

 O/R mappings (including the metadata)
 Incoming data values
 Outgoing data values (from queries, stored functions, views, ...)

White/clear-box testing internally within the database:

 Scaffolding code (e.g. triggers or updateable views) which supports refactorings
 Typical unit tests for your stored procedures, functions, and triggers
 Existence tests for database schema elements (tables, procedures, ...)
 View definitions
 Referential integrity (RI) rules
 Default values for a column
 Data invariants for a single column
 Data invariants involving several columns

With all these features, factors and processes to test on a database, there is an increasing demand
for testers to be technically strong in the key DB concepts. Despite the negative belief that DB testing
creates new bottlenecks and a lot of additional expenditure, this is a realm of testing that is gaining
obvious attention and demand.

5.1.4 Tools

Category: Data Privacy Tools
Description: Data privacy, or more generally information privacy, is a critical issue for many
organizations. Many organizations must safeguard data by law due to regulatory compliance concerns.
Examples: IBM Optim Data Privacy tools

Category: Tools for load testing
Description: These tools simulate high usage loads on your database, enabling you to determine whether
your system's architecture will stand up to your true production needs.
Examples: Empirix, Mercury Interactive, RadView, Web Performance

Category: Test Data Generators
Description: Developers need test data against which to validate their systems. Test data generators can
be particularly useful when you need large amounts of data, perhaps for stress and load testing.
Examples: Data Factory, Datatect, DTM Data Generator, Turbo Data

Category: Test Data Management
Description: Your test data needs to be managed. It should be defined, either manually or automatically
(or both), and then maintained under version control. You need to define the expected results of tests and
then automatically compare them with the actual results. You may even want to retain the results of
previous test runs (perhaps due to regulatory compliance concerns).
Examples: IBM Optim Test Data Management tools

Category: Unit testing tools
Description: Tools which enable you to regression test your database.
Examples: AnyDbTest, SQLUnit, TSQLUnit (for testing T-SQL in MS SQL Server), Visual Studio Team
Edition for Database Professionals (includes testing capabilities), XTUnit


6 LOGGING TESTING
Treating logs as data gives us greater insight into the operational activity of the systems we
test. Structured logging, which uses a consistent, predetermined message format containing semantic
information, builds on this technique.

Logging levels

 DEBUG-level messages give highly detailed and/or specific information, useful only for tracking
down problems.

 INFORMATION messages give general information about what the system is doing (e.g.
processing file X).

 WARNING messages warn the user about things which are not ideal but should not affect the
system (e.g. configuration X missed out, using default value).

 ERROR messages inform the user that something has gone wrong, but the system should be
able to cope (e.g. connection lost, but will try again).

 CRITICAL messages inform the user when an unrecoverable error occurs (i.e. the system is
about to abort the current task or crash).
If logging is a deployed feature of an application, then it too needs testing. But since log output is an
integration point, it does not fall under "unit" testing. If log files can contain security flaws, convey data,
impact support, and impair performance, then they should be tested for conformance to standards.

Log output can be tested using an appropriate xUnit framework, such as JUnit. During development of a
project, the log output changes rapidly as the code changes, so selecting where in the software
development life cycle (SDLC) to test logging, or even to specify what logs should contain, is difficult. One
approach is that the deployed system does no application logging that was not approved by the
stakeholders; these approved messages are "unit" tested, and all development-support logging is
removed or disabled except for use in a development environment.
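
As one possible approach, here is a minimal JUnit 4 sketch that "unit" tests log output using
java.util.logging; the logger name and the expected WARNING message are hypothetical examples.

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import org.junit.Assert;
import org.junit.Test;

public class LoggingTest {

    // In-memory handler that records every log record it receives.
    static class CapturingHandler extends Handler {
        final List<LogRecord> records = new ArrayList<>();
        @Override public void publish(LogRecord record) { records.add(record); }
        @Override public void flush() { }
        @Override public void close() { }
    }

    @Test
    public void warnsWhenConfigurationIsMissing() {
        Logger logger = Logger.getLogger("app.config");
        CapturingHandler handler = new CapturingHandler();
        logger.addHandler(handler);
        logger.setLevel(Level.ALL);

        // The code under test would log here; simulated for the sketch.
        logger.warning("configuration X missed out, using default value");

        Assert.assertEquals(1, handler.records.size());
        Assert.assertEquals(Level.WARNING, handler.records.get(0).getLevel());
        Assert.assertEquals("configuration X missed out, using default value",
                handler.records.get(0).getMessage());
    }
}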
