Testing Quality Dimensions-I
Testing WebApps is more challenging than testing conventional software: networks, protocols, operating systems, hardware, and other infrastructure must be checked as well.
Testing Quality Dimensions-II
Usability is tested to ensure that each category of user is supported by the interface and can learn and apply all required navigation syntax and semantics.
Navigability is tested to ensure that all navigation syntax and semantics are exercised to uncover any navigation errors (e.g., dead links, improper links, erroneous links).
Performance is tested under a variety of operating conditions, configurations, and loading to ensure that the system is responsive to user interaction and handles extreme loading without unacceptable operational degradation.
Testing Quality Dimensions-III
Compatibility is tested by executing the WebApp in a variety of different host configurations on both the client and server sides. The intent is to find errors that are specific to a unique host configuration.
Interoperability is tested to ensure that the WebApp properly interfaces with other applications and/or databases.
Security is tested by assessing potential vulnerabilities and attempting to exploit each. Any successful penetration attempt is deemed a security failure.
Error Characteristics in a WebApp
Because many types of WebApp tests uncover problems that are first evidenced on the client side, you often see a symptom of the error, not the error itself.
Because a WebApp is implemented in a number of different configurations and within different environments, it may be difficult or impossible to reproduce an error outside the environment in which the error was originally encountered.
Although some errors are the result of incorrect design or improper HTML (or other programming language) coding, many errors can be traced to the WebApp configuration.
Because WebApps reside within a client/server architecture, errors can be difficult to trace across three architectural layers: the client, the server, or the network itself.
Some errors are due to the static operating environment (i.e., the specific configuration in which testing is conducted), while others are attributable to the dynamic operating environment (i.e., instantaneous resource loading or time-related errors).
The Testing Process
Content Testing
Content testing has three important objectives:
to uncover syntactic errors (e.g., typos, grammar mistakes) in text-based documents, graphical representations, and other media (a small automation sketch follows this list);
to uncover semantic errors (i.e., errors in the accuracy or completeness of information) in any content object presented as navigation occurs; and
to find errors in the organization or structure of content that is presented to the end user.
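The syntactic objective lends itself to partial automation. A minimal sketch, which extracts the visible text of a page and flags words missing from a vocabulary; the vocabulary and the sample page fragment are stand-ins, not part of any real tool:

from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Accumulate visible text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.words = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False
    def handle_data(self, data):
        if not self.skip:
            self.words.extend(data.split())

# In practice the vocabulary would come from a dictionary file (assumption).
vocabulary = {"welcome", "to", "our", "catalog"}

parser = TextExtractor()
parser.feed("<p>Welcom to our catalog</p>")   # sample page fragment
for word in parser.words:
    if word.lower().strip(".,!?") not in vocabulary:
        print("possible typo:", word)          # flags "Welcom"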
Assessing Content Semantics
Is the information factually accurate?
Is the information concise and to the point?
Is the layout of the content object easy for the user to understand?
Can information embedded within a content object be found easily?
Have proper references been provided for all information derived from other sources?
Is the information presented consistent internally and consistent with information presented in other content objects?
Is the content offensive or misleading, or does it open the door to litigation?
Does the content infringe on existing copyrights or trademarks?
Does the content contain internal links that supplement existing content? Are the links correct?
Does the aesthetic style of the content conflict with the aesthetic style of the interface?
Database Testing
User Interface Testing
Interface features are tested to ensure that design rules, aesthetics, and related visual content are available for the user without error.
Individual interface mechanisms are tested in a manner that is analogous to unit testing.
The complete interface is tested against selected use cases and navigation semantic units (NSUs) to uncover errors in the semantics of the interface.
The interface is tested within a variety of environments (e.g., browsers) to ensure that it will be compatible.
Testing Interface Mechanisms-I
Links—navigation mechanisms that link the user to some other content object or function.
Forms—structured documents containing blank fields that are filled in by the user; the data contained in the fields are used as input to one or more WebApp functions (a form-submission sketch follows this list).
Client-side scripting—a list of programmed commands in a scripting language (e.g., JavaScript) that handles information input via forms or other user interactions.
Dynamic HTML—leads to content objects that are manipulated on the client side using scripting or cascading style sheets (CSS).
Client-side pop-up windows—small windows that pop up without user interaction; these windows can be content-oriented and may require some form of user interaction.
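A minimal sketch of exercising a form from a test script, using only the standard library; the URL and the field name "q" are hypothetical placeholders:

import urllib.parse
import urllib.request

def submit_form(url, fields):
    """POST url-encoded form fields and return the HTTP status code."""
    data = urllib.parse.urlencode(fields).encode("utf-8")
    with urllib.request.urlopen(url, data=data, timeout=10) as resp:
        return resp.status

# Exercise the form with a nominal value, an empty field, and an
# oversized input to probe boundary handling.
for query in ["widgets", "", "x" * 10_000]:
    status = submit_form("https://example.com/search", {"q": query})
    print(repr(query[:20]), "->", status)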
Testing Interface Mechanisms-II
CGI scripts—a common gateway interface (CGI) script implements a standard method that allows a Web server to interact dynamically with users (e.g., a WebApp that contains forms may use a CGI script to process the data contained in the form once it is submitted by the user).
Streaming content—rather than waiting for a request from the client side, content objects are downloaded automatically from the server side. This approach is sometimes called "push" technology because the server pushes data to the client.
Cookies—a block of data sent by the server and stored by a browser as a consequence of a specific user interaction. The content of the data is WebApp-specific (e.g., user identification data or a list of items that have been selected for purchase by the user). A cookie-inspection sketch follows this list.
Application-specific interface mechanisms—include one or more "macro" interface mechanisms such as a shopping cart, credit card processing, or a shipping cost calculator.
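A minimal sketch of verifying that a server actually sets a cookie and of inspecting its attributes; the login URL is a hypothetical placeholder:

import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
opener.open("https://example.com/login", timeout=10)  # placeholder URL

# Verify that the server set at least one cookie, then examine each one.
assert len(jar) > 0, "server did not set a cookie"
for cookie in jar:
    print(cookie.name, cookie.domain, cookie.secure, cookie.expires)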
Usability Tests
Usability tests are designed by the WebE team but executed by end users.
Testing sequence:
Define a set of usability testing categories and identify goals for each.
Design tests that will enable each goal to be evaluated.
Select participants who will conduct the tests.
Instrument participants' interaction with the WebApp while testing is conducted.
Develop a mechanism for assessing the usability of the WebApp.
Usability can be assessed at different levels of abstraction:
the usability of a specific interface mechanism (e.g., a form) can be assessed;
the usability of a complete Web page (encompassing interface mechanisms, data objects, and related functions) can be evaluated;
the usability of the complete WebApp can be considered.
Compatibility Testing
Compatibility testing begins by defining a set of "commonly encountered" client-side computing configurations and their variants.
Create a tree structure identifying:
each computing platform
typical display devices
the operating systems supported on the platform
the browsers available
likely Internet connection speeds
Derive a series of compatibility validation tests from existing interface tests, navigation tests, performance tests, and security tests. The intent of these tests is to uncover errors or execution problems that can be traced to configuration differences. (A sketch for enumerating the configuration matrix follows.)
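The tree of variants expands into a test matrix. A minimal sketch of enumerating it; the dimension values are illustrative assumptions, not an inventory:

import itertools

# Illustrative configuration dimensions (assumed values).
platforms = ["Windows", "macOS", "Linux", "Android"]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
speeds = ["broadband", "mobile-3G"]

# Each tuple is one client-side configuration to run the
# compatibility validation tests against.
matrix = list(itertools.product(platforms, browsers, speeds))
print(len(matrix), "configurations")     # 4 x 4 x 2 = 32
for platform, browser, speed in matrix[:3]:
    print(platform, browser, speed)

In practice the matrix is pruned to the most likely combinations, which is the same reduction to "a manageable number" discussed under client-side configuration testing below.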
Component-Level Testing
Focuses on a set of tests that attempt to uncover errors in WebApp functions.
Test cases are derived from forms-level input.
Conventional black-box and white-box test-case design methods can be used (a black-box sketch follows).
Database testing is often an integral part of the component-testing regime.
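A minimal black-box sketch, assuming a hypothetical form-validation component validate_quantity that accepts integer quantities from 1 to 99; nominal, boundary, and invalid inputs each get a test case:

import unittest

def validate_quantity(raw):
    """Hypothetical component under test: parse a form field into
    an order quantity, accepting integers in the range 1..99."""
    value = int(raw)          # raises ValueError on non-numeric input
    if not 1 <= value <= 99:
        raise ValueError("quantity out of range")
    return value

class QuantityFieldTest(unittest.TestCase):
    def test_nominal_value(self):
        self.assertEqual(validate_quantity("5"), 5)

    def test_boundary_values(self):
        self.assertEqual(validate_quantity("1"), 1)
        self.assertEqual(validate_quantity("99"), 99)

    def test_rejects_out_of_range_and_garbage(self):
        for bad in ["0", "100", "-3", "abc", ""]:
            with self.assertRaises(ValueError):
                validate_quantity(bad)

if __name__ == "__main__":
    unittest.main()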
Navigation Testing
A user travels through a WebApp much as a visitor walks through a museum. The following navigation mechanisms should be tested:
Navigation links—these mechanisms include internal links within the WebApp, external links to other WebApps, and anchors within a specific Web page (a link-check sketch follows this list).
Redirects—these links come into play when a user requests a nonexistent URL or selects a link whose destination has been removed or whose name has changed.
Bookmarks—although bookmarks are a browser function, the WebApp should be tested to ensure that a meaningful page title can be extracted as the bookmark is created.
Frames and framesets—tested for correct content, proper layout and sizing, download performance, and browser compatibility.
Site maps—each site map entry should be tested to ensure that the link takes the user to the proper content or functionality.
Internal search engines—search engine testing validates the accuracy and completeness of the search, the error-handling properties of the search engine, and advanced search features.
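A minimal sketch of automating part of link and redirect testing: collect the anchors on a page, follow each one, and report dead links; the starting URL is a placeholder:

from html.parser import HTMLParser
from urllib.parse import urljoin
import urllib.error
import urllib.request

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

base = "https://example.com/"            # placeholder starting page
with urllib.request.urlopen(base, timeout=10) as resp:
    parser = LinkCollector()
    parser.feed(resp.read().decode("utf-8", "replace"))

for href in parser.links:
    url = urljoin(base, href)
    if not url.startswith("http"):
        continue                          # skip mailto:, javascript:, etc.
    try:
        with urllib.request.urlopen(url, timeout=10) as r:
            # r.url differs from url when a redirect was followed.
            print(r.status, url, "->", r.url)
    except urllib.error.HTTPError as err:
        print(err.code, url, "DEAD LINK")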
Configuration Testing
Server-side
Is the WebApp fully compatible with the server OS?
Are system files, directories, and related system data created correctly when the WebApp is operational?
Do system security measures (e.g., firewalls or encryption) allow the WebApp to execute and service users without interference or performance degradation?
Has the WebApp been tested with the distributed server configuration (if one exists) that has been chosen?
Is the WebApp properly integrated with database software? Is the WebApp sensitive to different versions of database software?
Do server-side WebApp scripts execute properly?
Have system administrator errors been examined for their effect on WebApp operations?
If proxy servers are used, have differences in their configuration been addressed with on-site testing?
Configuration Testing
Client-side
Hardware—CPU, memory, storage, and printing devices
Operating systems—Linux, Macintosh OS, Microsoft Windows, a mobile-based OS
Browser software—Internet Explorer, Mozilla/Netscape, Opera, Safari, and others
User interface components—ActiveX, Java applets, and others
Plug-ins—QuickTime, RealPlayer, and many others
Connectivity—cable, DSL, regular modem, T1
The number of configuration variables must be reduced to a manageable number.
Security Testing
Security tests are designed to probe vulnerabilities of the client-side environment, the network communications that occur as data are passed from client to server and back again, and the server-side environment.
On the client side, vulnerabilities can often be traced to pre-existing bugs in browsers, e-mail programs, or communication software.
On the server side, vulnerabilities include denial-of-service attacks and malicious scripts that can be passed along to the client side or used to disable server operations.
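One small, scriptable slice of a vulnerability assessment is checking that responses carry common protective HTTP headers. A minimal sketch; the URL is a placeholder and the header list is an assumption of this sketch, not a complete security review:

import urllib.request

# Headers commonly recommended for hardening HTTP responses (assumed list).
EXPECTED = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

with urllib.request.urlopen("https://example.com/", timeout=10) as resp:
    for header in EXPECTED:
        value = resp.headers.get(header)
        print(header, "->", value if value else "MISSING")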
Performance Testing
Does the server response time degrade to a point where it is noticeable and unacceptable?
At what point (in terms of users, transactions, or data loading) does performance become unacceptable?
What system components are responsible for performance degradation?
What is the average response time for users under a variety of loading conditions? (A timing sketch follows this list.)
Does performance degradation have an impact on system security?
Is WebApp reliability or accuracy affected as the load on the system grows?
What happens when loads greater than maximum server capacity are applied?
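A minimal sketch of measuring mean response time at a few concurrency levels, using only the standard library; the target URL and the load levels are assumptions:

import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # placeholder target

def timed_request(_):
    """Fetch URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

# Measure mean response time as simulated concurrency grows.
for users in (1, 10, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users)))
    print(f"{users:3d} concurrent users: mean {statistics.mean(times):.3f}s")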
Load Testing
The intent is to determine how the WebApp and its server-side environment will respond to various loading conditions:
N, the number of concurrent users
T, the number of on-line transactions per user per unit of time
D, the data load processed by the server per transaction
Overall throughput, P, is computed as P = N × T × D.
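A worked example under assumed values: 200 concurrent users, each generating 2 transactions per minute, with 5 KB of data processed per transaction.

# Illustrative throughput calculation; all values are assumptions.
N = 200        # concurrent users
T = 2          # transactions per user per minute
D = 5 * 1024   # bytes processed per transaction (5 KB)

P = N * T * D  # overall throughput the server must sustain
print(P, "bytes/minute")   # 2,048,000 bytes/minute, i.e., 2000 KB/minute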
Stress Testing
Does the system degrade "gently," or does the server shut down as capacity is exceeded?
Does server software generate "server not available" messages? More generally, are users aware that they cannot reach the server?
Does the server queue requests for resources and empty the queue once capacity demands diminish?
Are transactions lost as capacity is exceeded?
Is data integrity affected as capacity is exceeded?
What values of N, T, and D force the server environment to fail? How does failure manifest itself? Are automated notifications sent to technical support staff at the server site? (A load-ramp sketch follows this list.)
If the system does fail, how long will it take to come back on-line?
Are certain WebApp functions (e.g., compute-intensive functionality, data streaming capabilities) discontinued as capacity reaches the 80 or 90 percent level?
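A minimal sketch of a load ramp that keeps doubling the simulated user count until the error rate crosses a threshold, approximating the search for the N that forces failure; the URL, ramp limits, and threshold are assumptions:

import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # placeholder target

def hit(_):
    """Return True if one request succeeds, False on any failure."""
    try:
        with urllib.request.urlopen(URL, timeout=15) as resp:
            resp.read()
        return True
    except (urllib.error.URLError, OSError):
        return False

users = 10
while users <= 640:            # safety cap on the ramp (assumed)
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(hit, range(users)))
    error_rate = 1 - sum(results) / len(results)
    print(f"{users:4d} users -> error rate {error_rate:.1%}")
    if error_rate > 0.05:      # assumed failure threshold
        break
    users *= 2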