Introduction To User Centered Design
Interface design focuses on three areas of concern: (1) the design of interfaces between
software components, (2) the design of interfaces between the software and other nonhuman
producers and consumers of information (i.e., other external entities), and (3) the design of the
interface between a human (i.e., the user) and the computer. In this chapter we focus exclusively
on the third interface design category—user interface design. In the preface to his classic book on
user interface design, Ben Shneiderman [SHN90] states:
Frustration and anxiety are part of daily life for many users of computerized information
systems. They struggle to learn command language or menu selection systems that are supposed
to help them do their job. Some people encounter such serious cases of computer shock, terminal
terror, or network neurosis that they avoid using computerized systems.
The problems to which Shneiderman alludes are real. It is true that graphical user
interfaces, windows, icons, and mouse picks have eliminated many of the most horrific interface
problems. But even in a “Windows world,” we all have encountered user interfaces that are
difficult to learn, difficult to use, confusing, unforgiving, and in many cases, totally frustrating.
Yet, someone spent time and energy building each of these interfaces, and it is not likely that the
builder created these problems purposely.
User interface design has as much to do with the study of people as it does with
technology issues. Who is the user? How does the user learn to interact with a new computer-
based system? How does the user interpret information produced by the system? What will the
user expect of the system? These are only a few of the many questions that must be asked and
answered as part of user interface design.
In his book on interface design, Theo Mandel [MAN97] coins three “golden rules”:
1. Place the user in control.
2. Reduce the user’s memory load.
3. Make the interface consistent.
These golden rules actually form the basis for a set of user interface design principles that guide
this important software design activity.
During a requirements-gathering session for a major new information system, a key user
was asked about the attributes of the window-oriented graphical interface. “What I really would
like,” said the user solemnly, “is a system that reads my mind. It knows what I want to do before
I need to do it and makes it very easy for me to get it done. That’s all, just that.”
My first reaction was to shake my head and smile, but I paused for a moment. There was
absolutely nothing wrong with the user’s request. She wanted a system that reacted to her needs
and helped her get things done. She wanted to control the computer, not have the computer
control her.
Most interface constraints and restrictions that are imposed by a designer are intended to
simplify the mode of interaction. But for whom? In many cases, the designer might introduce
constraints and limitations to simplify the implementation of the interface. The result may be an
interface that is easy to build, but frustrating to use. Mandel [MAN97] defines a number of
design principles that allow the user to maintain control:
Define interaction modes in a way that does not force a user into unnecessary or
undesired actions. An interaction mode is the current state of the interface. For example, if spell
check is selected in a word-processor menu, the software moves to a spell checking mode. There
is no reason to force the user to remain in spell checking mode if the user desires to make a small
text edit along the way. The user should be able to enter and exit the mode with little or
no effort.
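As a small illustration of this principle, the sketch below (in Python) models an editor whose spell-checking mode can be entered and left without losing the user's place; the class name, method names and sample text are invented for this example and are not taken from any particular product.

# Hypothetical sketch: a mode the user can enter and leave with little effort,
# without losing the surrounding editing context.
class Editor:
    def __init__(self, text):
        self.text = text
        self.mode = "edit"            # current interaction mode
        self.spell_position = 0       # progress through the spell check

    def start_spell_check(self):
        self.mode = "spell_check"

    def edit(self, position, insertion):
        # A small text edit is allowed even while spell checking; the mode is
        # left and re-entered transparently instead of blocking the user.
        previous_mode = self.mode
        self.mode = "edit"
        self.text = self.text[:position] + insertion + self.text[position:]
        self.mode = previous_mode     # resume where the user left off

    def finish_spell_check(self):
        self.mode = "edit"

doc = Editor("Ther is a typo here.")
doc.start_spell_check()
doc.edit(4, "e")                      # quick fix ("There") without leaving the mode
doc.finish_spell_check()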
Provide for flexible interaction. Because different users have different interaction
preferences, choices should be provided. For example, software might allow a user to interact via
keyboard commands, mouse movement, a digitizer pen, or voice recognition commands. But
every action is not amenable to every interaction mechanism. Consider, for example, the
difficulty of using keyboard commands (or voice input) to draw a complex shape.
Hide technical internals from the casual user. The user interface should move the user
into the virtual world of the application. The user should not be aware of the operating system,
file management functions, or other arcane computing technology. In essence, the interface
should never require that the user interact at a level that is “inside” the machine (e.g., a user
should never be required to type operating system commands from within application software).
Design for direct interaction with objects that appear on the screen. The user feels a sense
of control when able to manipulate the objects that are necessary to perform a task in a manner
similar to what would occur if the object were a physical thing. For example, an application
interface that allows a user to “stretch” an object (scale it in size) is an implementation of direct
manipulation.
The more a user has to remember, the more error-prone will be the interaction with the
system. It is for this reason that a well-designed user interface does not tax the user’s memory.
Whenever possible, the system should “remember” pertinent information and assist the user with
an interaction scenario that assists recall. Mandel [MAN97] defines design principles that enable
an interface to reduce the user’s memory load:
Reduce demand on short-term memory. When users are involved in complex tasks, the
demand on short-term memory can be significant. The interface should be designed to reduce the
requirement to remember past actions and results. This can be accomplished by providing visual
cues that enable a user to recognize past actions, rather than having to recall them.
Establish meaningful defaults. The initial set of defaults should make sense for the average
user, but a user should be able to specify individual preferences. However, a “reset” option
should be available, enabling the redefinition of original default values.
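A minimal sketch of this idea follows; the preference names and default values below are invented for illustration.

# Hypothetical sketch: meaningful defaults, individual overrides, and a "reset" option.
DEFAULTS = {"font_size": 12, "autosave_minutes": 10, "theme": "light"}

class Preferences:
    def __init__(self):
        self.values = dict(DEFAULTS)      # start from defaults that suit the average user

    def set(self, name, value):
        self.values[name] = value         # individual preference

    def reset(self):
        self.values = dict(DEFAULTS)      # re-establish the original default values

prefs = Preferences()
prefs.set("theme", "dark")
prefs.reset()
assert prefs.values == DEFAULTS           # back to the defaults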
Define shortcuts that are intuitive. When mnemonics are used to accomplish a system function
(e.g., alt-P to invoke the print function), the mnemonic should be tied to the action in a way that
is easy to remember (e.g., first letter of the task to be invoked).
The visual layout of the interface should be based on a real world metaphor. For example, a
bill payment system should use a check book and check register metaphor to guide the user
through the bill paying process. This enables the user to rely on well-understood visual cues,
rather than memorizing an arcane interaction sequence.
The interface should present and acquire information in a consistent fashion. This implies
that (1) all visual information is organized according to a design standard that is maintained
throughout all screen displays, (2) input mechanisms are constrained to a limited set that are used
consistently throughout the application, and (3) mechanisms for navigating from task to task are
consistently defined and implemented. Mandel [MAN97] defines a set of design principles
that help make the interface consistent:
Allow the user to put the current task into a meaningful context. Many interfaces
implement complex layers of interactions with dozens of screen images. It is important to
provide indicators (e.g., window titles, graphical icons, consistent color coding) that enable the
user to know the context of the work at hand. In addition, the user should be able to determine
where he has come from and what alternatives exist for a transition to a new task.
Maintain consistency across a family of applications. A set of applications (or products) should all implement the same design rules so that consistency is maintained for all interaction.
If past interactive models have created user expectations, do not make changes unless
there is a compelling reason to do so. Once a particular interactive sequence has become a de
facto standard (e.g., the use of alt-S to save a file), the user expects this in every application he
encounters. A change (e.g., using alt-S to invoke scaling) will cause confusion.
The interface design principles discussed in this and the preceding sections provide basic
guidance for a software engineer. In the sections that follow, we examine the interface design
process itself.
The overall process for designing a user interface begins with the creation of different
models of system function (as perceived from the outside). The human- and computer-oriented
tasks that are required to achieve system function are then delineated; design issues that apply to
all interface designs are considered; tools are used to prototype and ultimately implement the
design model; and the result is evaluated for quality.
Four different models come into play when a user interface is to be designed. The
software engineer creates a design model, a human engineer (or the software engineer)
establishes a user model, the end-user develops a mental image that is often called the user's
model or the system perception, and the implementers of the system create a system image
[RUB88]. Unfortunately, each of these models may differ significantly. The role of the interface designer is to reconcile these differences and derive a consistent representation of the interface.
A design model of the entire system incorporates data, architectural, interface, and
procedural representations of the software. The requirements specification may establish certain
constraints that help to define the user of the system, but the interface design is often only
incidental to the design model. The user model establishes the profile of end-users of the system.
To build an effective user interface, "all design should begin with an understanding of the
intended users, including profiles of their age, sex, physical abilities, education, cultural or ethnic
background, motivation, goals and personality" [SHN90]. In addition, users can be categorized
as:
• Novices. No syntactic knowledge of the system and little semantic knowledge of the
application or computer usage in general.
• Knowledgeable, intermittent users. Reasonable semantic knowledge of the application but
relatively low recall of syntactic information necessary to use the interface.
• Knowledgeable, frequent users. Good semantic and syntactic knowledge that often leads to the
"power-user syndrome"; that is, individuals who look for shortcuts and abbreviated modes of
interaction.
The system perception (user's model) is the image of the system that end-users carry in
their heads. For example, if the user of a particular word processor were asked to describe its
operation, the system perception would guide the response. The accuracy of the description will
depend upon the user's profile (e.g., novices would provide a sketchy response at best) and
overall familiarity with software in the application domain. A user who understands word
processors fully but has worked with the specific word processor only once might actually be
able to provide a more complete description of its function than the novice who has spent weeks
trying to learn the system.
The system image combines the outward manifestation of the computer-based system (the look and feel of the interface), coupled with all supporting information (books, manuals, videotapes, help files) that describes system syntax and semantics. When the system image and the system perception are coincident, users generally feel comfortable with the software and use it effectively. To accomplish this "melding" of the models, the design model must have been developed to accommodate the information contained in the user model, and the system image must accurately reflect syntactic and semantic information about the interface.

Fig: The user interface design process.

The models described in this section are "abstractions of what the user is doing or thinks he is doing or what somebody else thinks he ought to be doing when he uses an interactive system" [MON84]. In essence, these models enable the interface designer to satisfy a key element of the most important principle of user interface design: "Know the user, know the tasks."
To provide a brief illustration of the design steps noted previously, we consider a user scenario for an advanced version of the SafeHome system (discussed in earlier chapters). In the advanced version, SafeHome can be accessed via modem or through the Internet. A PC application allows the homeowner to check the status of the house from a remote location, reset the SafeHome configuration, arm and disarm the system, and (using an extra-cost video option) monitor rooms within the house visually. A preliminary user scenario for the interface follows:
Scenario: The homeowner wishes to gain access to the SafeHome system installed in his house.
Using software operating on a remote PC (e.g., a notebook computer carried by the homeowner
while at work or traveling), the homeowner determines the status of the alarm system, arms or
disarms the system, reconfigures security zones, and views different rooms within the house via
preinstalled video cameras. To access SafeHome from a remote location, the homeowner provides
an identifier and a password. These define levels of access (e.g., all users may not be able to
reconfigure the system) and provide security. Once validated, the user (with full access
privileges) checks the status of the system and changes status by arming or disarming SafeHome.
The user reconfigures the system by displaying a floor plan of the house, viewing each of the
security sensors, displaying each currently configured zone, and modifying zones as
required. The user views the interior of the house via strategically placed video cameras. The user
can pan and zoom each camera to provide different views of the interior.
Homeowner tasks:
• accesses the SafeHome system
• enters an ID and password to allow remote access
• checks system status
• arms or disarms SafeHome system
• displays floor plan and sensor locations
• displays zones on floor plan
• changes zones on floor plan
• displays video camera locations on floor plan
• selects video camera for viewing
• views video images (4 frames per second)
• pans or zooms the video camera
Objects (boldface) and actions (italics) are extracted from this list of homeowner tasks. The
majority of objects noted are application objects. However, video camera location (a source
object) is dragged and dropped onto video camera (a target object) to create a video image (a
window with video display). A preliminary sketch of the screen layout for video monitoring is created (Figure 15.2). To invoke the video image, a video camera location icon, C, located in the floor plan displayed in the monitoring window is selected. In this case a camera location in the
living room, LR, is then dragged and dropped onto the video camera icon in the upper left-hand portion of the screen. The video image window appears, displaying streaming video from the camera located in the living room (LR). The zoom and pan control slides are used to control the magnification and direction of the video image. To select a view from another camera, the user simply drags and drops a different camera location icon onto the camera icon in the upper left-hand corner of the screen. The layout sketch shown would have to be supplemented with an expansion of each menu item within the menu bar, indicating what actions are available for the video monitoring mode.
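To connect the task list with the object and action analysis, the sketch below (Python) models the drag-and-drop interaction in code; the class names and method signatures are assumptions made for this illustration and are not part of the SafeHome design itself.

# Hypothetical sketch of the object/action model: dropping a camera location
# (source object) onto the camera icon (target object) creates a video image window.
class CameraLocation:
    def __init__(self, room):
        self.room = room                  # e.g., "LR" for the living room

class VideoWindow:
    def __init__(self, room, frames_per_second=4):
        self.room = room
        self.frames_per_second = frames_per_second

    def pan(self, degrees): ...           # direction of the video image
    def zoom(self, factor): ...           # magnification of the video image

class CameraIcon:
    def drop(self, location):
        # The drag-and-drop action: selecting a camera opens its video stream.
        return VideoWindow(location.room)

window = CameraIcon().drop(CameraLocation("LR"))
window.zoom(2.0)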
Design Issues
As the design of a user interface evolves, four common design issues almost always surface:
system response time, user help facilities, error information handling, and command labeling.
Unfortunately, many designers do not address these issues until relatively late in the design
process (sometimes the first inkling of a problem doesn't occur until an operational prototype is
available). Unnecessary iteration, project delays, and customer frustration often result. It is far
better to establish each as a design issue to be considered at the beginning of software design,
when changes are easy and costs are low.
System response time is the primary complaint for many interactive applications. In general,
system response time is measured from the point at which the user performs some control action
(e.g., hits the return key or clicks a mouse) until the software responds with desired output or
action.System response time has two important characteristics: length and variability. If the
length of system response is too long, user frustration and stress is the inevitable result.
However, a very brief response time can also be detrimental if the user is being paced by the
interface. A rapid response may force the user to rush and therefore make mistakes.
Variability refers to the deviation from average response time, and in many ways, it is the most
important response time characteristic. Low variability enables the user to establish an
interaction rhythm, even if response time is relatively long. For example, a 1-second response to a command is preferable to a response that varies from 0.1 to 2.5 seconds. When response time is highly variable, the user is always off balance, always wondering whether something "different" has occurred behind the scenes.
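Both characteristics can be measured directly. The sketch below times a hypothetical command handler and reports the mean response (length) and the standard deviation (variability); the handler and the sample count are invented for this example.

# Sketch: measuring response-time length and variability for one command.
import statistics
import time

def handle_command():
    time.sleep(0.05)                      # placeholder for the real command processing

samples = []
for _ in range(20):
    start = time.perf_counter()           # user performs the control action
    handle_command()
    samples.append(time.perf_counter() - start)   # software responds

length = statistics.mean(samples)          # average response time
variability = statistics.stdev(samples)    # deviation from the average
print(f"mean response: {length:.3f} s, variability: {variability:.3f} s")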
Almost every user of an interactive, computer-based system requires help now and then. In some
cases, a simple question addressed to a knowledgeable colleague can do the trick. In others,
detailed research in a multivolume set of "user manuals" may be the only option. In many cases,
however, modern software provides on-line help facilities that enable a user to get a question
answered or resolve a problem without leaving the interface.
Two different types of help facilities are encountered: integrated and add-on [RUB88]. An
integrated help facility is designed into the software from the beginning. It is often context
sensitive, enabling the user to select from those topics that are relevant to the actions currently
being performed. Obviously, this reduces the time required for the user to obtain help and
increases the "friendliness" of the interface. An add-on help facility is added to the software after
the system has been built. In many ways, it is really an on-line user's manual with limited query
capability. The user may have to search through a list of hundreds of topics to find appropriate
guidance, often making many false starts and receiving much irrelevant information. There is
little doubt that the integrated help facility is preferable to the add-on approach. A number of
design issues [RUB88] must be addressed when a help facility is considered:
• Will help be available for all system functions and at all times during system interaction?
Options include help for only a subset of all functions and actions or help for all functions.
• How will the user request help? Options include a help menu, a special function key, or a
HELP command.
• How will help be represented? Options include a separate window, a reference to a printed
document (less than ideal), or a one- or two-line suggestion produced in a fixed screen location.
• How will the user return to normal interaction? Options include a return button displayed on
the screen, a function key, or control sequence.
• How will help information be structured? Options include a "flat" structure in which all information is accessed through a keyword, a layered hierarchy of information that provides increasing detail as the user proceeds into the structure, or the use of hypertext.

Error messages and warnings are "bad news" delivered to users of interactive systems when something has gone
awry. At their worst, error messages and warnings impart useless or misleading information and serve only to increase user frustration. There are few computer users who have not encountered a cryptic or unhelpful error message of this kind.
Command labeling raises a similar concern. Consider, for example, a convention in which alt-D means that a graphics object is to be duplicated in one application and alt-D means that a graphics object is to be deleted in another. The potential for error is obvious.
IMPLEMENTATION TOOLS
Once a design model is created, it is implemented as a prototype, examined by users (who fit
the user model described earlier) and modified based on their comments. To accommodate this
iterative design approach, a broad class of interface design and prototyping tools has evolved.
Called user-interface toolkits or user-interface development systems (UIDS), these tools provide
components or objects that facilitate creation of windows, menus, device interaction, error
messages, commands, and many other elements of an interactive environment. Using prepackaged software components to create a user interface, a UIDS provides built-in mechanisms [MYE89] for
• managing input devices (such as a mouse or keyboard)
• validating user input
• handling errors and displaying error messages
• providing feedback (e.g., automatic input echo)
• providing help and prompts
• handling windows and fields, scrolling within windows
• establishing connections between application software and the interface
• insulating the application from interface management functions
• allowing the user to customize the interface
These functions can be implemented using either a language-based or graphical approach.
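As one possible illustration of a language-based approach, the sketch below uses Python's Tkinter toolkit to show a few of the mechanisms listed above (window management, input validation, error messages and feedback); the window contents and the digits-only rule are invented for this example.

# Sketch: a few UIDS-style built-in mechanisms expressed with Tkinter.
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.title("Zone configuration")                      # window management

def only_digits(proposed):                            # input validation
    return proposed.isdigit() or proposed == ""

vcmd = (root.register(only_digits), "%P")
entry = tk.Entry(root, validate="key", validatecommand=vcmd)
entry.pack(padx=10, pady=5)

status = tk.Label(root, text="Enter a zone number")   # prompts and feedback
status.pack()

def apply_zone():
    value = entry.get()
    if not value:
        messagebox.showerror("Error", "Zone number must not be empty.")  # error message
        return
    status.config(text=f"Zone {value} configured")    # feedback on success

tk.Button(root, text="Apply", command=apply_zone).pack(pady=5)
root.mainloop()

A graphical UIDS would offer the same mechanisms through a visual builder rather than through code like this.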
Design Evaluation
Once an operational user interface prototype has been created, it must be evaluated to determine
whether it meets the needs of the user. Evaluation can span a formality spectrum that ranges
from an informal "test drive," in which a user provides impromptu feedback to a formally
designed study that uses statistical methods for the evaluation of questionnaires completed by a
population of end-users. The user interface evaluation cycle takes the form shown in Figure 15.3.
After the design model has been completed, a first-level prototype is created. The prototype is
evaluated by the user, who provides the designer with direct comments about the efficacy of the
interface. In addition, if formal evaluation techniques are used (e.g., questionnaires, rating
sheets), the designer may extract information from these data (e.g., 80 percent of all users did not
like the mechanism for saving data files). Design modifications are made based on user input and
the next level prototype is created. The evaluation cycle continues until no further modifications
to the interface design are necessary.
The prototyping approach is effective, but is it possible to evaluate the quality of a user interface
before a prototype is built? If potential problems can be uncovered and corrected early, the
number of loops through the evaluation cycle will be reduced and development time will shorten.
If a design model of the interface has been created, a number of evaluation criteria [MOR81] can
be applied during early design reviews:
1. The length and complexity of the written specification of the system and its interface provide
an indication of the amount of learning required by users of the system.
2. The number of user tasks specified and the average number of actions per task provide an
indication of interaction time and the overall efficiency of the system.
3. The number of actions, tasks, and system states indicated by the design model imply the
memory load on users of the system.
4. Interface style, help facilities, and error handling protocol provide a general indication of the
complexity of the interface and the degree to which it will be accepted by the user.
Once the first prototype is built, the designer can collect a variety of qualitative and quantitative
data that will assist in evaluating the interface. To collect qualitative data, questionnaires can be
distributed to users of the prototype. Questions can request (1) a simple yes/no response, (2) a numeric response, (3) a scaled (subjective) response, or (4) a percentage (subjective) response. Examples are
1. Were the icons self-explanatory? If not, which icons were unclear?
2. Were the actions easy to remember and to invoke?
3. How many different actions did you use?
4. How easy was it to learn basic system operations (scale 1 to 5)?
5. Compared to other interfaces you've used, how would this rate—top 1%, top 10%, top 25%,
top 50%, bottom 50%?
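The sketch below shows one way scaled responses such as question 4 might be summarized; the question labels and scores are invented sample data.

# Sketch: summarizing scaled (1 to 5) questionnaire responses from prototype users.
import statistics

responses = {
    "ease_of_learning": [4, 5, 3, 4, 4],   # hypothetical answers from five users
    "icon_clarity":     [2, 3, 2, 1, 3],
}

for question, scores in responses.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    print(f"{question}: mean={mean:.1f}, spread={spread:.1f}")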
If quantitative data are desired, a form of time study analysis can be conducted. Users are
observed during interaction, and data—such as number of tasks correctly completed over a
standard time period, frequency of actions, sequence of actions, time spent "looking" at the
display, number and types of errors, error recovery time, time spent using help, and number of
help references per standard time period—are collected and used as a guide for interface
modification. A complete discussion of user interface evaluation methods is beyond the scope of
this book. For further information, see [LEA88] and [MAN97].
INTERFACE EVALUATION
Interface evaluation is the process of assessing the usability of an interface and checking that it
meets user requirements. Therefore, it should be part of the normal verification and validation
process for software systems. Nielsen (Nielsen, 1993) includes a good chapter on this topic in his
book on usability engineering. Ideally, an evaluation should be conducted against a usability
specification based on usability attributes, as shown in Figure 16.17. Metrics for these usability
attributes can be devised. For example, in a learnability specification, you might state that an
operator who is familiar with the work supported should be able to use 80% of the system
functionality after a three-hour training session. However, it is more common to specify
usability (if it is specified at all) qualitatively rather than using metrics. You therefore usually
have to use your judgement and experience in interface evaluation.
Systematic evaluation of a user interface design can be an expensive process involving cognitive scientists and graphics designers. You may have to design and carry out a statistically significant number of experiments with typical users. You may need to use specially constructed laboratories fitted with monitoring equipment. A user interface evaluation of this kind is economically unrealistic for systems developed by small organisations with limited resources. There are a number of simpler, less expensive techniques of user interface evaluation that can identify particular user interface design deficiencies:
1. Questionnaires that collect information about what users thought of the interface;
2. Observation of users at work with the system and ‘thinking aloud’ about how they are trying
to use the system to accomplish some task;
3. Video ‘snapshots’ of typical system use;
4. The inclusion in the software of code which collects information about the most-used facilities and the most common errors (a sketch of such instrumentation follows this list).
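Technique 4 can be as simple as counting how often each facility is invoked and which errors occur. One possible sketch is shown below; the decorator and the facility name are invented for illustration.

# Sketch: instrumenting an application to record the most-used facilities
# and the most common errors.
from collections import Counter
import functools

usage = Counter()
errors = Counter()

def instrumented(name):
    """Count calls to a facility and the errors it raises."""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            usage[name] += 1
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                errors[type(exc).__name__] += 1
                raise
        return inner
    return wrap

@instrumented("save_file")                 # hypothetical facility
def save_file(path):
    if not path:
        raise ValueError("empty path")

save_file("report.txt")
try:
    save_file("")
except ValueError:
    pass
print(usage.most_common(), errors.most_common())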
Surveying users by questionnaire is a relatively cheap way to evaluate an interface. The
questions should be precise rather than general. It is no use asking questions such as ‘Please
comment on the usability of the interface’ as the responses will probably vary so much that you
won’t see any common trend. Rather, specific questions such as ‘Please rate the
understandability of the error messages on a scale from 1 to 5. A rating of 1 means very clear and
5 means incomprehensible’ are better. They are both easier to answer and more likely to provide
useful information to improve the interface. Users should be asked to rate their own experience
and background when filling in the questionnaire. This allows the designer to find out whether
users from any particular background have problems with the interface. Questionnaires can even
be used before any executable system is available if a paper mock-up of the interface is
constructed and evaluated.
Observation-based evaluation simply involves watching users as they use a system, looking at the
facilities used, the errors made and so on. This can be supplemented by ‘think aloud’ sessions
where users talk about what they are trying to achieve, how they understand the system and how
they are trying to use the system to accomplish their objectives.
Relatively low-cost video equipment means that you can record user sessions for later analysis. Complete video analysis is expensive and requires a specially equipped evaluation suite with several cameras focused on the user and on the screen. However, video recording of selected user operations can be helpful in detecting problems. Other evaluation methods must be used to find out which operations cause users difficulty.
Deciding to do a review
Context: You are designing a new software part or a change in an existing part, and you are wondering how much effort you should put into the review.
Risk over Size
Problem: Decisions on reviews are often taken against the background of practical reasons. The availability of time and expertise, or the mere size of a software module, determines the content and the intensity of reviews. So a rather common algorithm might be looked at very carefully while a very innovative software part might be more or less neglected.
Reasons: In software development, not only the final product counts but also the development time and effort. In most cases, developers have to make a trade-off between quality and adherence to deadlines. The developer of new software usually has some influence on the intensity and the extent of the review that is done on the design.
However, there is often not enough time to review everything with the same intensity. If you set the focus improperly, big risks may be overlooked.
Solution: Find out where the biggest risk is and put most of your effort into that part.
Implementation: A design resulting in a high number of lines of code is not automatically
riskier than a small change in an existing software part. You should watch out for changes with a
high degree of innovation or changes in complex interfaces.
If you have used a commonly known solution or you have developed a module similar to one you have developed before, the risk is not as high as for completely new algorithms or changes in interfaces with cross connections to many other parts of the software.
Of course it is difficult to find an impartial technique leading to decisions comparable to decisions in other reviews. It is hard to measure risk, which depends not only on the properties of the software but also on the properties of the developer and his or her environment. Through discussions with colleagues about your development and how it impacts the system, you will develop a sense of how intense the review should be.
Examples:
1. You are using a filter algorithm that looks very complicated at first sight. The code for it is rather big. But in fact you have a lot of experience with this algorithm because you have used it many times before. Moreover, the algorithm does not have many boundary conditions or dependencies. Therefore the review of this part of the design does not have to be elaborate.
2. You are changing an interface used as an input for a timer from a 32-bit variable to a 16-bit variable. Maybe you have tested the new function in the target environment and you have shown that the behaviour is the same as before. Unfortunately, you did not consider the fact that the variable now overflows much sooner and that the timer will then fail. This example shows that especially when there are changes in interfaces, it is good to review with a rather high effort.
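The overflow in example 2 is easy to demonstrate with a little arithmetic; the 10-millisecond tick period below is an assumed value chosen only to make the numbers concrete.

# Sketch: why narrowing a timer variable from 32 to 16 bits causes early overflow.
def seconds_until_wraparound(bits, tick_ms=10):   # assumed 10 ms timer tick
    ticks = 2 ** bits                              # counter wraps back to zero here
    return ticks * tick_ms / 1000.0

print(f"16-bit timer wraps after about {seconds_until_wraparound(16):.0f} s (~11 minutes)")
print(f"32-bit timer wraps after about {seconds_until_wraparound(32) / 86400:.0f} days")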
Problem: A badly timed review meeting can lead to a hurried meeting where the developer tries to dismiss as many points as possible in order to avoid rework.
Reasons: 1. If you hold the review meeting too early, there is a danger that not all the necessary information will be available or that some details are not even defined yet. You will have to review alternatives or theoretical concepts. This can result in endless discussions.
2. On the other hand, if the review meeting is held too late, you will not have the time to implement the measures you have decided on in the meeting. Maybe you will even try to dismiss some findings in the meeting because you know that you will not have the time to do the rework identified to reduce the risks.
Solution: Hold the review meeting only after you have collected all the facts the review
participants need to examine the design but keep enough time after the review to change
the design where necessary.
Implementation: As you CONTINUOUSLY PREPARE THE REVIEW, you can look for the best time for the review meeting from the beginning of the development. You can already talk with the colleagues you would like to participate in the meeting, so that you find out when people who are very important for the review's success are on vacation, et cetera.
As the delivery date is normally fixed, you can estimate how much rework will be necessary after the review in the worst case, and then you can calculate the latest date at which the meeting can be held. The review participants also need some time to prepare for the review. Taking this into consideration, you know when your design has to be finished. So by calculating backwards from the delivery date you make sure that you have enough time for the review and the rework, and that you also go into the review with a real design and not only a design idea.
Example: Here you can look at the same example as in CONTINUOUSLY PREPARE THE
REVIEW. Imagine you are still not sure whether you should take the filtered or the unfiltered signal as an input value. Therefore you do not put NEng_filt or NEng_unfilt into your design; you just write down NEng. If you are doing the review now, there is a high risk that some of the
participants will have NEng_filt in mind and some will think about NEng_unfilt. They will be
talking about different things without noticing the problem. If they notice this open point, the
risk is high that the review will not be a design review on the complete design but a discussion
about which of the two variables would be the better one for your application. Of course this
might be a helpful discussion but it is not the goal of the review.
Reasons: 1. Details that do not seem to affect the functionality are easily ignored. However, things that seem unimportant to you might affect the overall system or parts of the system interacting with your module.
2. You want to avoid drowning important changes in unimportant "stuff", and you think that a comprehensive list of changes might overwhelm the review participants. So your idea is to make a selection of the most important aspects in advance.
Solution: When preparing for the review, describe every part of your solution, even if it
seems unimportant to you. Avoid any judgement on the importance or the risks of
individual aspects. Try to be as neutral as possible.
Implementation: A checklist can be really helpful to identify all the changes that were made. If
you have filled out a checklist before, you can show this checklist to the review participants. If
there is a very high number of changes, you can, later in the review meeting, suggest putting some details aside or summarizing some minor changes under one common heading. But you have to justify to the review participants why it is safe to do so.
If safety is a concern, you should rather err on the side of caution and mention too many changes
rather than too few.
Example: The precision of the output value of a function is increased while the functionality
itself remains unchanged. To the developer, this change seems to be unimportant, as his function
is now providing a more precise value than before. Unfortunately, there is another function
which uses this value as an input and performs a modulo operation on it. As now the resolution is
higher, the results of the modulo operation will be completely different. The function fails.
If in the review this change had been mentioned, the potentially fatal consequences could have
been found.
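The failure described in this example can be reproduced in a few lines; the ring size and the scale factor below are invented values used only to show the effect.

# Sketch: a modulo-based consumer breaks when the producer's resolution changes.
RING_SIZE = 100

def slot_before(raw_position):
    value = raw_position            # old producer: value in whole units
    return value % RING_SIZE

def slot_after(raw_position):
    value = raw_position * 10       # new producer: ten times the resolution
    return value % RING_SIZE        # consumer still assumes whole units

print(slot_before(123))   # 23, the expected slot
print(slot_after(123))    # 30 (1230 % 100): a completely different result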
Solution: Specify clearly what you think the requirements of the product are, even if these requirements were not mentioned explicitly by the customer.
Implementation: Put yourself in the place of the customer and think of what he or she wants the product to do or not to do. What are the different possibilities of using the product and how can they be combined? List all the aspects and circumstances of use. Think about what might be obvious to the customer but not to you. Then, think about what might be obvious to you but not to the customer.
You can take these assumptions as a basis for discussion. Chances are high that in the review
meeting there is an expert who can confirm or correct these assumptions. When you are
wondering which requirements are obvious to the customer but not to you, of course it is a good
idea to simply ask the customer. But even then, there will be implicit requirements remaining
implicit if you do not find them and make them explicit.
When looking for implicit requirements it might be helpful to use checklists. In DRBFM, for example, you are invited to find, for each module, the "basic function" which describes the actual purpose of the module, "additional functions" which focus on the look and feel of a product, "harm prevention" which keeps the product from causing harm to people or the environment, and "self protection" which keeps the module itself from being damaged or destroyed.
Example: The customer ordered a new application for a mobile phone: Photos taken with the
camera integrated into the phone can be sorted into different albums. He has given you a
specification where all the different possibilities of sorting, opening and modifying the albums
are described. As it seems obvious to him, he has not specified the fact that the basic function of the phone, namely the possibility to place and receive calls, must not be disabled by the new application. If you do not integrate this requirement into your review, you might only find out
that your design is good for sorting photos and so fulfils the requirements you limited the module to.
Look at the System
Problem: When you are developing only a small part of a complex technical product, you tend to find a solution fitting this small part but perhaps carrying risks for the whole product. If you do not consider this in the review, you will not find these risks.
Reasons: 1. You know best about all the obstacles you had to overcome and about the reasons why you have chosen this specific solution. Digging deeply into the subject for a long time, you have probably lost an overall view of the product. As a consequence, there is a risk that you have created a locally optimized solution which works well if you regard it by itself but which does not really fit into the greater context. In software development, especially in large projects, there is a risk that a developer who is working on a specific part of the software for a long time loses sight of the real function of the product.
2. In embedded systems, software is not an end in itself. Often there are many interfaces of one software function to other functions and to sensor signals or device drivers. It is difficult to keep all interactions in mind all the time during development.
Solution: Ask yourself what the function of your development is for the whole product and document this function during review preparation.
Implementation: When preparing for the review it is important to step back and look at the whole system and at all the interfaces that your software module has with the overall system. These are inputs and outputs as well as environmental conditions.
Find answers to the following questions:
1. Why was there a need for this development?
2. Why did the requirement come up right now?
3. Have there been other attempts to fulfill the requirements? Why were they successful or why did they fail?
Example: The software is getting a sensor signal as an input and calculates some output value based on it. In order to find out if the output value meets the accuracy requirements you have to get some information about the accuracy of the input value. You read the technical data sheet of the sensor. But this is not the only information you should take into consideration. You should
also ask yourself: Is it possible, in certain circumstances, that the sensor signal does not have the specified accuracy, although the sensor works properly? When might this be possible? For example, due to the signal processing from the raw signal to the signal received by your module. Maybe there is some filtering or some hysteresis applied that you did not take into consideration. Also: what do I do (in my module) if the accuracy is not as good as I expect? Can my module deal with that?
of the subparts you might say “signal range check of sensor input S1”. That means you are looking into your top-level black box “temperature calculation” and you are delimiting the solutions to those using S1 as an input. The key point is that for this granularity level you must not say “signal range check with algorithm a” but only “signal range check with boundary conditions c1, c2 and c3”.
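One possible reading of a signal range check delimited by boundary conditions is sketched below; the limits, the plausibility rule and the units are assumptions made for this illustration, not values from any real specification.

# Sketch: range check for sensor input S1 expressed through boundary conditions
# c1, c2 and c3 rather than through a specific algorithm (all limits assumed).
C1_MIN_VALID = -40.0      # c1: lowest physically plausible reading
C2_MAX_VALID = 150.0      # c2: highest physically plausible reading
C3_MAX_STEP = 5.0         # c3: largest credible change between two samples

def s1_in_range(current, previous=None):
    if not (C1_MIN_VALID <= current <= C2_MAX_VALID):
        return False                      # outside the absolute range
    if previous is not None and abs(current - previous) > C3_MAX_STEP:
        return False                      # implausible jump between samples
    return True

print(s1_in_range(25.0, 24.0))    # True
print(s1_in_range(200.0, 24.0))   # False: above c2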
2. Even if your colleagues come to the review meeting with positive expectations, they can become unmotivated very quickly if you are presenting and moderating the review in a boring or confusing manner.
Solution: Make the review fun! During the review meeting, you are the entertainer! You are responsible for creating an atmosphere where all the review participants are comfortable and really want to contribute to the analysis of the product.
Implementation: At the beginning, you should make sure that all the participants know each other and know you. This helps everybody to get some orientation in the group.
Second, you should give a short overview of the review so that everybody knows how long the meeting will take, what the steps will be and, very important, what the outcome of the meeting should be. When you are presenting your work, tell a story to the audience. Give information about the motivation, about your ideas for a solution, about which obstacles you had to overcome, which actions you took in order to make the outcome robust and what the final result is. This does not mean that you should manipulate the information in order to create a breathtaking story or to show what a great developer you are. The story does not have to be thrilling and it does not need to have a happy ending. But it has to be clear, logical and complete. By this you make sure that the review participants can follow. It also creates an open atmosphere and motivates people to account for the story in order to make it a successful one. Use the fact that IT IS NOT ABOUT FORM in order to arrange the meeting so that the participants remember it as some very interesting and motivating hours.
Example: You have designed a software module for an automobile engine control. For the development you had to do measurements at very low temperatures in a climate chamber in the middle of summer. So you came out of the climate chamber with a complete arctic dress in July. In the review meeting, show a photo of you in your winter dress to the colleagues. Instantly they will pay attention and they will be inspired to be creative.
support for each of the identified attributes. The results of the literature survey showed that there are a number of different measurable attributes for software test planning and test design processes. The study partitioned these attributes into multiple categories for the software test planning and test design processes. For each of these attributes, different existing measurements are studied. A consolidation of these measurements is presented in this thesis, which is intended to provide an opportunity for management to consider improvement in these processes.
The test design process is very broad and includes critical activities like determining the test objectives (i.e. broad categories of things to test), selection of test case design techniques, preparing test data, developing test procedures, setting up the test environment and supporting tools. Brian Marick points out the importance of test case design: paying more attention to running tests than to designing them is a classic mistake [8]. Determination of test objectives is a fundamental activity which leads to the creation of a testing matrix reflecting the fundamental elements that need to be tested to satisfy an objective. This requires gathering reference materials like the software requirements specification and design documentation. Then, on the basis of the reference materials, a team of experts (e.g. test analyst and business analyst) meets in a brainstorming session to compile a list of test objectives. For example, while doing system testing, the test objectives that can be
scenarios and in reducing redundant test cases. This mapping also identifies the absence of a test case for a particular objective in the list; therefore, the testing team needs to create those test cases. After this, each item in the list is evaluated to assess the adequacy of coverage. This is done by using the tester's experience and judgment. More test cases should be developed if an item is not adequately covered. The mapping in the form of a matrix should be maintained throughout the system development.
While designing test cases, there are two broad categories, namely black box testing and white box testing. Black box test case design techniques generate test cases without knowing the internal working of the system. White box test case design techniques examine the structure of the code to see how the system works. Due to time and cost constraints, the challenge in designing test cases is to find the subset of all possible test cases that has the highest probability of detecting the most errors [2]. Rather than focusing on one technique, designing test cases using multiple techniques is recommended. [11] recommends a combination of functional analysis, equivalence partitioning, path analysis, boundary value analysis and orthogonal array testing. The tendency is that structural (white box) test case design techniques are applied at lower levels of abstraction and functional (black box) test case design techniques are likely to be used at higher levels of abstraction [15]. According to Tsuneo Yamaura, there is only one rule in designing test cases: cover all features, but do not make too many test cases [8]. The testing objectives identified earlier are used to create test cases. Normally one test case is prepared per objective; this helps in maintaining test cases when a change occurs. The test cases created become part of a document called the test design specification [5]. The purpose of the test design specification is to group similar test cases together. There might be a single test design specification for each feature or a single test design specification for all the features.
The test design specification documents the input specifications, output specifications, environmental needs and other procedural requirements for the test case. The hierarchy of documentation is shown in Figure 11 by taking an example from system testing [6]. After the creation of the test case specification, the next artifact is called the Test Procedure Specification [5]. It is a description of how the tests will be run. The test procedure describes the sequencing of individual test cases and the switch-over from one test run to another [15]. Figure 12 shows the test design process applicable at the system level [6].
Preparation of the test environment also comes under test design. The test environment includes e.g. test data, hardware configurations, testers, interfaces, operating systems and manuals. The test environment more closely matches the user environment as one moves higher up in the testing levels. Preparation of test data is an essential activity that will result in identification of faults. Data can take the form of e.g. messages, transactions and records. According to [11], the test data should be reviewed for several data concerns:
Depth: The test team must consider the quantity and size of database records needed to support tests.
Data integrity during testing: The test data for one person should not adversely affect data required for others.
Conditions: Test data should match specific conditions in the domain of application.
It is important that an organization follows some test design standards, so that everyone conforms to the design guidelines and required information is produced [16]. The test procedure and test design specification documents should be treated as living documents and they should be updated as changes are made to the requirements and design. These updated test cases and procedures become useful for reuse in later projects. If, during the course of test execution, a new scenario evolves, then it must be made part of the test design documents. After test design, the next activity is test execution, as described next.
WALK THROUGHS
A design walkthrough is a quality practice that allows designers to obtain an early validation of
design decisions related to the development and treatment of content, design of the graphical
user interface, and the elements of product functionality. Design walkthroughs provide designers
with a way to identify and assess early on whether the proposed design meets the requirements
and addresses the project's goal.
For a design walkthrough to be effective, it needs to include specific components. The following
guidelines highlight these key components. Use these guidelines to plan, conduct, and participate
in design walkthroughs and increase their effectiveness.
A design walkthrough should be scheduled when detailing the micro-level tasks of a project.
Time and effort of every participant should be built into the project plan so that participants can
schedule their personal work plans accordingly. The plan should include time for individual
preparation, the design walkthrough (meeting), and the likely rework.
All participants in the design walkthrough should clearly understand their role and
responsibilities so that they can consistently practice effective and efficient reviews.
Besides planning, all participants need to prepare for the design walkthrough. One cannot
possibly find all high-impact mistakes in a work product that one has looked at only 10
minutes before the meeting. If all participants are adequately prepared as per their
responsibilities, the design walkthrough is likely to be more effective.
A design walkthrough should follow a well-structured, documented process. This process should
help define the key purpose of the walkthrough and should provide systematic practices and rules
of conduct that can help participants collaborate with one another and add value to the review.
The design walkthrough should be used as a means to review and critique the product, not the
person who created the design. Use the collective wisdom to improve the quality of the product,
add value to the interactions, and encourage participants to submit their products for a design
walkthrough.
A design walkthrough has only one purpose: to find defects. There may, however, be times when participants drift from the main purpose. A moderator needs to prevent this from happening and ensure that the walkthrough focuses on identifying defects or weaknesses rather than on fixes or resolutions.
In addition to these guidelines, there are a few best practices that can help you work towards
effective design walkthroughs:
The document or work product for the design walkthrough should be complete in all respects, including all the necessary reviews/filters.
Plan for a design walkthrough in a time-box mode. A session should be scheduled for a
minimum of one hour and should not stretch beyond two and a half hours. When
walkthroughs last more than three hours, the effectiveness of the design walkthrough and
the review process decreases dramatically.
It is best to work with 5-10 participants to add different perspectives to the design
walkthrough. However, with more than 15 participants, the process becomes slow and
each participant may not be able to contribute to their full capacity.
Design walkthroughs planned for morning sessions work better than afternoon sessions.
A design walkthrough should definitely include the instructional designers, graphic
artists, course architects, and any other roles that have been instrumental in creating the
design. You may also want to invite designers from other projects to add a fresh and
independent perspective to the review process.
Involving senior management or business decision makers in a design walkthrough may
not always be a good idea as it can intimidate the designers and they may feel that the
senior management is judging their competencies in design. With senior management in
the room, other participants and reviewers may also be hesitant in sharing problems with
the design.
Effective design walkthroughs rely on a 'moderator' who is a strong Lead Reviewer and is
in charge of the review process. It is critical that the group remains focused on the task at
hand. The Lead Reviewer can help in this process by curbing unnecessary discussions and leading the group in the right direction.
Design walkthroughs are more effective if the reviewers use specific checklists for
reviewing various aspects of the work product.
It is a good practice to involve the potential end users in the design walkthrough.
However, in most situations it is difficult to get access to the end users. Therefore, you
may request reviewer(s) to take on the role of the end user and review the product from
the end-user perspective. These reviewers may be Subject Matter Experts or practitioners
in the same field/industry who have an understanding of the audience profile for the
product.
The effectiveness of a design walkthrough depends on what happens after the defects
have been identified in the meeting and how the defects are addressed and closed in the
work product. The team needs to prioritise the defects based on their impact and assign
responsibility for closing the defects.
Design walkthroughs, if done correctly, provide immediate short-term benefits, like early defect
detection and correction within the current project and offer important long-term returns. From a
long-term perspective, design walkthroughs help designers identify their mistakes and learn from
them, therefore moving towards continuous improvement. During the process, designers are also
able to unravel the basic principles of design and the key mistakes that violate these principles.
By participating in walkthroughs, reviewers are able to create a mental 'catalogue of mistakes'
that are likely to happen and are therefore more equipped to detect these early in any product. By
analysing the kind of defects made by designers, over time, reviewers can use this information to
support root-cause analysis and participate in organisation-wide improvement initiatives.
Effective design walkthroughs are one of the most powerful quality tools that can be leveraged
by designers to detect defects early and promote steps towards continuous improvement.
Routine kind, name, parameters and their types, return type, pre- and post-
condition, usage protocol with respect to other routines.
File name, format, permissions.
Socket number and protocol.
Shared variables, synchronization primitives (locks).
o Have features of the target programming language been used where appropriate?
o Have implementation details been avoided? (No details of internal classes.)
Are the relationships between the components explicitly documented?
o Preferably use a diagram
Is the proposed solution achievable?
o Can the components be implemented or bought, and then integrated together?
o Possibly introduce a second layer of decomposition to get a better grip on
achievability.
Are all relevant architectural views documented?
o Logical (Structural) view (class diagram per component expresses functionality).
o Process view (how control threads are set up, interact, evolve, and die).
o Physical view (deployment diagram relates components to equipment).
o Development view (how code is organized in files).
Are cross-cutting issues clearly and generally resolved?
o Exception handling.
o Initialization and reset.
o Memory management.
o Security.
o Internationalization.
o Built-in help.
o Built-in test facilities.
Is all formalized material and diagrammatic material accompanied by sufficient
explanatory text in natural language?
Are design decisions documented explicitly and motivated?
o Restrictions on developer freedom with respect to the requirements.
Has an evaluation of the software architecture been documented?
o Have alternative architectures been considered?
o Have non-functional requirements also been considered?
o Negative indicators:
High complexity: a component has a complex interface or functionality.
Low cohesion: a component contains unrelated functionality.
High coupling: two or more components have many (mutual) connections.
High fan-in: a component is needed by many other components.
High fan-out: a component depends on many other components (see the sketch after this checklist).
Is the flexibility of the architecture demonstrated?
o How can it cope with likely changes in the requirements?
o Have the most relevant change scenarios been documented?
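The fan-in and fan-out indicators above can be computed mechanically from a component dependency list; the component names below are invented for this sketch.

# Sketch: computing fan-in and fan-out from a component dependency mapping.
from collections import defaultdict

depends_on = {                      # hypothetical architecture
    "ui": ["logic", "logging"],
    "logic": ["storage", "logging"],
    "storage": ["logging"],
    "logging": [],
}

fan_out = {name: len(deps) for name, deps in depends_on.items()}
fan_in = defaultdict(int)
for deps in depends_on.values():
    for target in deps:
        fan_in[target] += 1

for name in depends_on:
    print(f"{name}: fan-in={fan_in[name]}, fan-out={fan_out[name]}")
# A component with high fan-in (here: logging) is needed by many others;
# one with high fan-out depends on many others.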