Unit IV
• Graphical user interfaces (GUIs) have helped to eliminate many of the most
horrific interface problems
• User interface analysis and design has to do with the study of people and
how they relate to technology
A Spiral Process
• User interface development follows a spiral process encompassing
– Interface analysis – focuses on the profile of the users who will interact with the system
– Interface design
– Interface construction
• Define interaction modes in a way that does not force a user into
unnecessary or undesired actions
– The user shall be able to enter and exit a mode with little or no effort
(e.g., spell check → edit text → spell check)
• Provide for flexible interaction
• Four different models come into play when a user interface is analyzed and
designed
– User profile model – Established by a human engineer or software
engineer
– Design model – Created by a software engineer
– Implementation model – Created by the software implementers
– User's mental model – Developed by the user when interacting with
the application
• The role of the interface designer is to reconcile these differences and derive
a consistent representation of the interface
Implementation Model
• Consists of the look and feel of the interface combined with all supporting
information (books, videos, help files) that describe system syntax and
semantics
• Strives to agree with the user's mental model; users then feel comfortable
with the software and use it effectively
• Serves as a translation of the design model by providing a realization of the
information contained in the user profile model and the user’s mental model
User Analysis
• The analyst strives to get the end user's mental model and the design model
to converge by understanding
– The users themselves
– How these people use the system
• Information can be obtained from
– User interviews with the end users
– Sales input from the sales people who interact with customers and
users on a regular basis
– Marketing input based on a market analysis to understand how
different population segments might use the software
– Support input from the support staff who are aware of what works and
what doesn't, what users like and dislike, what features generate
questions, and what features are easy to use
Content Analysis
• Interface objects and actions are obtained from a grammatical parse of the
use cases and the software problem statement
• Interface objects are categorized into types: source, target, and application
– A source object (e.g., a report icon) is dragged and dropped onto a target
object (e.g., a printer icon), such as to create a hardcopy of the report
– An application object represents application-specific data that are not
directly manipulated as part of screen interaction such as a list
Error Messages
• The message should describe the problem in plain language that a typical
user can understand
• The message should provide constructive advice for recovering from the
error
• The message should indicate any negative consequences of the error (e.g.,
potentially corrupted data files) so that the user can check to ensure that they
have not occurred (or correct them if they have)
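A minimal sketch of how these three guidelines might look in code; the function name, file name, and wording are illustrative assumptions rather than anything prescribed in the notes.

# Hypothetical example: a save-failure message that follows the three guidelines above.
def report_save_failure(filename, cause):
    """Build a user-facing message: plain language, recovery advice, consequences."""
    return (
        f"Your document could not be saved to '{filename}' because {cause}.\n"  # plain language
        "Try saving to a different folder or freeing up disk space.\n"          # constructive advice
        "Changes made since your last successful save are not yet on disk, "    # negative consequences
        "so keep the application open until the save succeeds."
    )

print(report_save_failure("report.docx", "the disk is full"))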
Software Testing
• Unit testing
– Concentrates on each component/function of the software as implemented
in the source code
• Integration testing
– Focuses on the design and construction of the software architecture
• Validation testing
– Requirements are validated against the constructed software
• System testing
– The software and other system elements are tested as a whole
• Unit testing
– Exercises specific paths in a component's control structure to ensure
complete coverage and maximum error detection
– Components are then assembled and integrated
• Integration testing
– Focuses on inputs and outputs, and how well the components fit
together and work together
• Validation testing
– Provides final assurance that the software meets all functional,
behavioral, and performance requirements
• System testing
– Verifies that all system elements (software, hardware, people,
databases) mesh properly and that overall system function and
performance is achieved
1. Unit Testing
• Module interface
– Ensure that information flows properly into and out of the module
• Local data structures
– Ensure that data stored temporarily maintains its integrity during all
steps in an algorithm execution
• Boundary conditions
– Ensure that the module operates properly at boundary values
established to limit or restrict processing
• Independent paths (basis paths)
– Paths are exercised to ensure that all statements in a module have been
executed at least once
• Error handling paths
– Ensure that the algorithms respond correctly to specific error
conditions
• Driver
– A simple main program that accepts test case data, passes such data to
the component being tested, and prints the returned results
• Stubs
– Serve to replace modules that are subordinate to (called by) the
component to be tested
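A minimal unit-testing sketch in Python tying the ideas above together; all names (compute_average, audit_stub, driver) are illustrative assumptions. The stub stands in for a subordinate module, and the driver feeds test-case data covering the module interface, a boundary condition, and an error-handling path.

# Minimal unit-testing sketch (all names are illustrative assumptions).
# compute_average is the component under test; it calls a subordinate
# audit-log module, which is replaced here by a stub. The driver is the
# simple "main" that feeds test-case data and checks the results.

def audit_stub(message):
    """Stub: stands in for the subordinate audit-log module (does nothing)."""
    pass

def compute_average(values, audit=audit_stub):
    """Component under test: mean of a non-empty list; reports via the audit module."""
    if not values:                                      # error-handling path
        raise ValueError("values must not be empty")
    result = sum(values) / len(values)
    audit(f"averaged {len(values)} values")
    return result

def driver():
    """Driver: exercises the module interface, a boundary condition, and error handling."""
    assert compute_average([2, 4, 6]) == 4              # interface: data in, result out
    assert compute_average([7]) == 7                    # boundary: smallest valid input
    try:
        compute_average([])                             # error-handling path
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
    print("all unit tests passed")

if __name__ == "__main__":":
    driver()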
2. Integration Testing
• Three kinds
– Top-down integration
– Bottom-up integration
– Sandwich integration
• The program is constructed and tested in small increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested completely
• A systematic test approach is applied
Bottom-up Integration
• Integration and testing starts with the most atomic modules in the control
hierarchy
• Advantages
– This approach verifies low-level data processing early in the testing
process
– Need for stubs is eliminated
• Disadvantages
– Driver modules need to be built to test the lower-level modules; this
code is later discarded or expanded into a full-featured version
– Drivers inherently do not contain the complete algorithms that will
eventually use the services of the lower-level modules; consequently,
testing may be incomplete or more testing may be needed later when
the upper level modules are available
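A small bottom-up sketch under assumed module names: the atomic modules are exercised first through a throwaway driver, then the higher-level module is integrated and the cluster is retested.

# Bottom-up integration sketch (module names are assumptions): the low-level
# modules parse_record and grade are tested first via a throwaway driver, then
# the real higher-level module report replaces the driver and the cluster is retested.

def parse_record(line):                  # low-level module 1
    name, score = line.split(",")
    return name.strip(), int(score)

def grade(score):                        # low-level module 2
    return "pass" if score >= 50 else "fail"

def low_level_driver():
    """Throwaway driver exercising the low-level modules before integration."""
    assert parse_record("alice, 70") == ("alice", 70)
    assert grade(49) == "fail" and grade(50) == "pass"

def report(lines):                       # higher-level module integrated afterwards
    return {name: grade(score) for name, score in map(parse_record, lines)}

low_level_driver()                                                             # step 1: low-level cluster
assert report(["alice, 70", "bob, 40"]) == {"alice": "pass", "bob": "fail"}    # step 2: integrated cluster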
Object-Oriented Integration Testing
– Thread-based testing
• Integrates the set of classes required to respond to one input or
event for the system
• Each thread is integrated and tested individually
• Regression testing is applied to ensure that no side effects occur
– Use-based testing
• First tests the independent classes that use very few, if any,
server classes
• Then the next layer of classes, called dependent classes, are
integrated
• This sequence of testing layers of dependent classes continues
until the entire system is constructed
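A use-based integration sketch with assumed classes: the independent class Account (which uses no other application classes) is tested first, and the dependent class Bank is integrated in the next layer.

# Use-based integration sketch (classes are assumptions): Account is independent
# and tested first; Bank depends on Account and is integrated in the next layer.

class Account:                           # independent class: uses no other application classes
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

class Bank:                              # dependent class: uses Account
    def __init__(self):
        self.accounts = {}
    def open(self, owner):
        self.accounts[owner] = Account()
        return self.accounts[owner]

# Layer 1: test the independent class on its own
a = Account()
a.deposit(100)
assert a.balance == 100

# Layer 2: integrate the dependent class with the already-tested Account
b = Bank()
b.open("alice").deposit(50)
assert b.accounts["alice"].balance == 50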
Alpha and Beta Testing
• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that
cannot be controlled by the developer
System Testing
• Recovery testing
– Tests for recovery from system faults
– Forces the software to fail in a variety of ways and verifies that
recovery is properly performed
– Tests reinitialization, checkpointing mechanisms, data recovery, and
restart for correctness
• Security testing
– Verifies that protection mechanisms built into a system will, in fact,
protect it from improper access
• Stress testing
– Executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume
• Performance testing
– Tests the run-time performance of software within the context of an
integrated system
– Often coupled with stress testing and usually requires both hardware
and software instrumentation
– Can uncover situations that lead to degradation and possible system
failure
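A rough stress/performance probe, assuming a hypothetical index_records component and an arbitrary time limit: the test drives the component with an abnormally large input volume, times it, and fails if data is lost or run-time performance degrades past the limit.

# Stress/performance probe (workload size and time limit are assumptions):
# feed the component an abnormally large volume of data, time it, and fail
# if data is lost or run-time performance degrades past the chosen limit.
import time

def index_records(records):
    """Component under test: builds a lookup table from (key, value) pairs."""
    return {key: value for key, value in records}

def stress_test(volume=1_000_000, limit_seconds=2.0):
    records = [(i, str(i)) for i in range(volume)]       # abnormal input volume
    start = time.perf_counter()
    table = index_records(records)
    elapsed = time.perf_counter() - start
    assert len(table) == volume, "data lost under load"
    assert elapsed < limit_seconds, f"degraded under load: {elapsed:.2f}s"

if __name__ == "__main__":
    stress_test()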
Testability
• Operable
– The better it works (i.e., better quality), the easier it is to test
• Observable
– Incorrect output is easily identified; internal errors are automatically
detected
• Controllable
– The states and variables of the software can be controlled directly by
the tester
• Decomposable
– The software is built from independent modules that can be tested
independently
• Simple
– The program should exhibit functional, structural, and code simplicity
• Stable
– Changes to the software during testing are infrequent and do not
invalidate existing tests
• Understandable
– The architectural design is well understood; documentation is
available and organized
Test Characteristics
• Black-box testing
– Knowing the specified function that a product has been designed to
perform, test to see if that function is fully operational and error free
– Includes tests that are conducted at the software interface
– Not concerned with internal logical structure of the software
• White-box testing
– Knowing the internal workings of a product, test that all internal
operations are performed according to specifications and all internal
components have been exercised
– Involves tests that concentrate on close examination of procedural
detail
– Logical paths through the software are tested
– Test cases exercise specific sets of conditions and loops
White-box Testing
• Uses the control structure part of component-level design to derive the test
cases
• These test cases
– Guarantee that all independent paths within a module have been
exercised at least once
– Exercise all logical decisions on their true and false sides
– Execute all loops at their boundaries and within their operational
bounds
– Exercise internal data structures to ensure their validity
• An independent path is defined as a path through the program from the start
node to the end node that introduces at least one new set of processing
statements or a new condition (i.e., new nodes)
• An independent path must move along at least one edge that has not been
traversed before by a previous path
• Basis set for the flow graph on the previous slide
– Path 1: 0-1-11
– Path 2: 0-1-2-3-4-5-10-1-11
– Path 3: 0-1-2-3-6-8-9-10-1-11
– Path 4: 0-1-2-3-6-7-9-10-1-11
• The number of paths in the basis set is determined by the cyclomatic
complexity
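A self-contained basis-path sketch (a different, assumed example, not the flow graph referenced above): the function has two decision predicates, so its cyclomatic complexity is 2 + 1 = 3, and each of the three basis paths is exercised by one test case.

# Basis-path sketch for a small assumed function: two decision predicates give
# cyclomatic complexity 2 + 1 = 3, so the basis set contains three independent
# paths, each exercised by one test case below.

def classify(temp_c):
    if temp_c < 0:                 # decision 1
        return "freezing"
    elif temp_c > 30:              # decision 2
        return "hot"
    else:
        return "mild"

assert classify(-5) == "freezing"  # path taking decision 1 true
assert classify(35) == "hot"       # path taking decision 1 false, decision 2 true
assert classify(20) == "mild"      # path taking both decisions false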
Black-box Testing
Equivalence Partitioning
• A black-box testing method that divides the input domain of a program into
classes of data from which test cases are derived
• An ideal test case single-handedly uncovers a complete class of errors,
thereby reducing the total number of test cases that must be developed
• Test case design is based on an evaluation of equivalence classes for an
input condition
• An equivalence class represents a set of valid or invalid states for input
conditions
• From each equivalence class, test cases are selected so that the largest
number of attributes of an equivalence class are exercised at once
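A small equivalence-partitioning sketch with an assumed input condition ("age must be between 18 and 65"): one valid class and two invalid classes give three representative test cases.

# Equivalence-partitioning sketch (input condition and values are assumptions):
# the condition "age must be between 18 and 65" yields one valid class and two
# invalid classes, so three representative test cases cover the input domain.

def is_eligible(age):
    """Component under test: accepts ages in the range 18..65 inclusive."""
    return 18 <= age <= 65

assert is_eligible(40) is True     # valid class: 18 <= age <= 65
assert is_eligible(10) is False    # invalid class: age below the range
assert is_eligible(80) is False    # invalid class: age above the range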
State-based Testing
• The state diagram for a class can be used to derive a sequence of tests that
will exercise the dynamic behavior of the class and the classes that
collaborate with it
• The test cases should be designed to achieve coverage of all states
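A state-based sketch with an assumed Order class: the tests walk the object through every state of its implied state diagram (created, paid, shipped) and also exercise one illegal transition.

# State-based sketch (the Order class and its states are assumptions): the tests
# drive the object through every state and check that an illegal transition is rejected.

class Order:
    def __init__(self):
        self.state = "created"
    def pay(self):
        if self.state != "created":
            raise RuntimeError("can only pay a newly created order")
        self.state = "paid"
    def ship(self):
        if self.state != "paid":
            raise RuntimeError("can only ship a paid order")
        self.state = "shipped"

o = Order()
assert o.state == "created"
o.pay()
assert o.state == "paid"
o.ship()
assert o.state == "shipped"

try:
    Order().ship()                 # illegal transition: created directly to shipped
    raise AssertionError("expected RuntimeError")
except RuntimeError:
    pass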