
UNIT – 4

USER INTERFACE DESIGN:


- The user interface portion of a software product is responsible for all interactions with the user.
CHARACTERISTICS OF A GOOD USER INTERFACE:
- In the following subsections, we identify a few important characteristics of a good user interface:
- Speed of learning:
 A good user interface should be easy to learn. Speed of learning is hampered by complex
syntax and semantics of the command issue procedures.
 A good user interface should not require its users to memorise commands.
Use of metaphors and intuitive command names:
 A popular metaphor is a shopping cart. Everyone knows how a shopping cart is used to make
choices while purchasing items in a supermarket.
 If a user interface uses the shopping cart metaphor for designing the interaction style for a
situation where similar types of choices have to be made, then the users can easily understand and
learn to use the interface.
Consistency:
 Once a user learns about a command, he should be able to use similar commands in
different circumstances for carrying out similar actions.
 This makes it easier to learn the interface since the user can extend his knowledge about
one part of the interface to the other parts.
Component-based interface:
 Users can learn an interface faster if the interaction style of the interface is very similar to
the interface of other applications with which the user is already familiar.
 This can be achieved if the interfaces of different applications are developed using some
standard user interface components.
- Speed of use:
 Speed of use of a user interface is determined by the time and user effort necessary to initiate and
execute different commands.
 This characteristic of the interface is sometimes referred to as the productivity support of the
interface. It indicates how fast the users can perform their intended tasks.
 The time and user effort necessary to initiate and execute different commands should be minimal.
- Speed of recall:
 Once users learn how to use an interface, the speed with which they can recall the command issue
procedure should be maximized.

- Error prevention:
 A good user interface should minimise the scope of committing errors while initiating different
commands.
 The error rate of an interface can be easily determined by monitoring the errors committed by
average users while using the interface.
- Aesthetic and attractive:
 A good user interface should be attractive to use. An attractive user interface catches user
attention and fancy.
- Consistency:
 The commands supported by a user interface should be consistent.
 The basic purpose of consistency is to allow users to generalise the knowledge about aspects of
the interface from one part to another.
- Feedback:
 A good user interface must provide feedback to various user actions.
 Especially, if any user request takes more than a few seconds to process, the user should be
informed about the state of the processing of his request.
- Support for multiple skill levels:
 A good user interface should support multiple levels of sophistication of command issue procedure
for different categories of users.
 This is necessary because users with different levels of experience in using an application prefer
different types of user interfaces.
- Error recovery (undo facility):
 While issuing commands, even the expert users can commit errors.
 Therefore, a good user interface should allow a user to undo a mistake committed by him while
using the interface.
- User guidance and on-line help:
 Users seek guidance and on-line help when they either forget a command or are unaware of some
features of the software.
BASIC CONCEPTS:
- We first discuss some basic concepts in user guidance and on-line help system.
1.User Guidance and On-line Help:
 Users may seek help about the operation of the software any time while using the software.
This is provided by the on-line help system.
 This is different from the guidance and error messages which are flashed automatically
without the user asking for them.

On-line help system:
 Users expect the on-line help messages to be tailored to the context in which they invoke the
“help system”.
 Therefore, a good online help system should keep track of what a user is doing while invoking the
help system and provide the output message in a context-dependent way.
Guidance messages:
 The guidance messages should be carefully designed to prompt the user about the next actions he
might pursue, the current status of the system, the progress so far made in processing his last
command, etc.
Error messages:
 Error messages are generated by a system either when the user commits some error or when some
errors are encountered by the system during processing due to exceptional conditions, such as
out of memory, a broken communication link, etc.
 Users do not like error messages that are either ambiguous or too general, such as “invalid input or
system error”.
2. Mode-based versus Modeless Interface:
 A mode is a state or collection of states in which only a subset of all user interaction tasks can be
performed. In a modeless interface, the same set of commands can be invoked at any time during
the running of the software.
 Thus, a modeless interface has only a single mode and all the commands are available all the time
during the operation of the software.
 On the other hand, in a mode-based interface, different sets of commands can be invoked
depending on the mode in which the system is, i.e., the mode at any instant is determined by the
sequence of commands already issued by the user.
3. Graphical User Interface (GUI) versus Text-based User Interface:
 In a GUI multiple windows with different information can simultaneously be displayed on the
user screen. This is perhaps one of the biggest advantages of GUI over text-based interfaces,
since the user has the flexibility to simultaneously interact with several related items at any time
and can have access to different system information displayed in different windows.
 Iconic information representation and symbolic information manipulation is possible in a GUI.
Symbolic information manipulation such as dragging an icon representing a file to a trash for
deleting is intuitively very appealing and the user can instantly remember it.
 A GUI usually supports command selection using an attractive and user-friendly menu selection
system.

 In a GUI, a pointing device such as a mouse or a light pen can be used for issuing commands. The
use of a pointing device increases the efficacy of command issue procedure.
 On the flip side, a GUI requires special terminals with graphics capabilities for running and also
requires special input devices such as a mouse. On the other hand, a text-based user interface can be
implemented even on a cheap alphanumeric display terminal. Graphics terminals are usually
much more expensive than alphanumeric terminals.
TYPES OF USER INTERFACES:
 User interfaces can be classified into the following three categories:
1. Command language-based interfaces
2. Menu-based interfaces
3. Direct manipulation interfaces
 Each of these categories of interfaces has its own characteristic advantages and disadvantages.
1. Command Language-based Interface:
 A command language-based interface, as the name itself suggests, is based on designing a
command language which the user can use to issue commands.
 The user is expected to frame the appropriate commands in the language and type them
appropriately whenever required.
 A simple command language-based interface might simply assign unique names to the different
commands.
 However, a more sophisticated command language-based interface may allow users to compose
complex commands by using a set of primitive commands.
 Thus, a command language-based interface can be made concise, requiring minimal typing by the
user.
 Command language-based interfaces allow fast interaction with the computer and simplify the
input of complex commands.
 Command language-based interfaces suffer from several drawbacks. Usually, command
language-based interfaces are difficult to learn and require the user to memorise the set of
primitive commands.
 Also, most users make errors while formulating commands in the command language and also
while typing them.
 Further, in a command language-based interface, all interactions with the system are through the
keyboard and cannot take advantage of effective interaction devices such as a mouse.
Issues in designing a command language-based interface:
 The designer has to decide what mnemonics (command names) to use for the different
commands. The designer should try to develop meaningful mnemonics and yet be concise to

minimise the amount of typing required. For example, the shortest mnemonic should be assigned
to the most frequently used commands.
 The designer has to decide whether the users will be allowed to redefine the command names to
suit their own preferences. Letting a user define his own mnemonics for various commands is a
useful feature, but it increases the complexity of user interface development.
 The designer has to decide whether it should be possible to compose primitive commands to
form more complex commands. A sophisticated command composition facility would require the
syntax and semantics of the various command composition options to be clearly and
unambiguously specified.
2. Menu-based Interface:
 An important advantage of a menu-based interface over a command language-based interface is
that a menu-based interface does not require the users to remember the exact syntax of the
commands.
 A menu-based interface is based on recognition of the command names, rather than recollection.
Humans are much better at recognising something than at recollecting it.
 Experienced users find a menu-based user interface to be slower than a command language-based
interface because an experienced user can type fast and can get speed advantage by composing
different primitive commands to express complex commands.
 Composing commands in a menu-based interface is not possible. This is because of the fact that
actions involving logical connectives (and, or, etc.) are awkward to specify in a menu-based
system.
 In the following, we discuss some of the techniques available to structure a large number of menu
items:
Scrolling menu: When the full choice list is too large to be displayed within the menu
area, scrolling of the menu items is required. This enables the user to view and select the
menu items that cannot be accommodated on the screen.

Walking menu: Walking menu is very commonly used to structure a large collection of menu
items. In this technique, when a menu item is selected, it causes further menu items to be
displayed adjacent to it in a sub-menu.

Hierarchical menu: This type of menu is suitable for small screens with limited display area
such as that in mobile phones. In a hierarchical menu, the menu items are organised in a hierarchy
or tree structure. Selecting a menu item causes the current menu display to be replaced by an
appropriate sub-menu.
3. Direct Manipulation Interfaces:
 Direct manipulation interfaces present the interface to the user in the form of visual models (i.e.,
icons or objects).
 For this reason, direct manipulation interfaces are sometimes called iconic interfaces.

 In this type of interface, the user issues commands by performing actions on the visual
representations of the objects, e.g., pull an icon representing a file into an icon representing a
trash box, for deleting the file.
 Important advantages of iconic interfaces include the fact that the icons can be recognised by the
users very easily, and that icons are language independent.
FUNDAMENTALS OF COMPONENT-BASED GUI DEVELOPMENT:
 Graphical user interfaces became popular in the 1980s.
 The main reason why there were very few GUI-based applications prior to the eighties is that
graphics terminals were too expensive.
 For example, the price of a graphics terminal those days was much more than what a high-end
personal computer costs these days.
 One of the first computers to support GUI-based applications was the Apple Macintosh computer.
 In fact, the popularity of the Apple Macintosh computer in the early eighties is directly
attributable to its GUI. In those early days of GUI design, the user interface programmer typically
started his interface development from scratch. Starting from simple pixel display
routines, he would write programs to draw lines, circles, text, etc.
 The current style of user interface development is component-based.
 It recognises that every user interface can easily be built from a handful of predefined
components such as menus, dialog boxes, forms, etc.
1. Window System:
 A window system can generate displays through a set of windows. Since a window is the basic
entity in such a graphical user interface, we need to first discuss what exactly a window is.
 Window: A window is a rectangular area on the screen. A window can be considered to be a
virtual screen, in the sense that it provides an interface to the user for carrying out independent
activities, e.g., one window can be used for editing a program and another for drawing pictures,
etc.

 A window can be divided into two parts: the client part and the non-client part.
 The client area makes up the whole of the window, except for the borders and scroll bars.
 The client area is the area available to a client application for display.
 The non-client-part of the window determines the look and feel of the window.
 The look and feel defines a basic behaviour for all windows, such as creating, moving, resizing,
and iconifying the windows.
Window management system (WMS):
 A graphical user interface typically consists of a large number of windows.
 Therefore, it is necessary to have some systematic way to manage these windows.
 Most graphical user interface development environments do this through a window management
system (WMS).
 A window management system is primarily a resource manager. It keeps track of the screen area
resource and allocates it to the different windows that seek to use the screen.
 From a broader perspective, a WMS can be considered as a user interface management system
(UIMS), which not only does resource management, but also provides the basic behaviour to the
windows and provides several utility routines to the application programmer for user interface
development.
 A WMS simplifies the task of a GUI designer to a great extent by providing the basic behaviour
to the various windows such as move, resize, iconify, etc. as soon as they are created and by
providing the basic routines to manipulate the windows from the application program such as
creating, destroying, changing different attributes of the windows, and drawing text, lines, etc.
 A WMS consists of two parts
o a window manager, and
o a window system.

Window manager and window system:


 The window manager is built on the top of the window system in the sense that it makes use of
various services provided by the window system.

 The window manager and not the window system determines how the windows look and behave.
In fact, several kinds of window managers can be developed based on the same window system.
 The window manager can be considered as a special kind of client that makes use of the services
(function calls) supported by the window system.
 A widget is the short form of a window object.
 We know that an object is essentially a collection of related data with several operations defined
on these data which are available externally to operate on these data.
 The data of a window object are its geometric attributes (such as size, location, etc.) and other
attributes such as its background and foreground colour, etc.
 The operations that are defined on these data include resize, move, draw, etc.
Component-based development:
 A development style based on widgets is called the component-based (or widget-based) GUI
development style.
 There are several important advantages of using a widget-based design style.
 One of the most important reasons to use widgets as building blocks is because they help users
learn an interface fast.
 In this style of development, the user interfaces for different applications are built from the same
basic components.
 The component-based user interface development style reduces the application programmer’s
work significantly as he is more of a user interface component integrator than a programmer in
the traditional sense.
Visual programming:
 Visual programming is the drag and drop style of program development.
 In this style of user interface development, a number of visual objects (icons) representing the
GUI components are provided by the programming environment.
 The application programmer can easily develop the user interface by dragging the required
component types (e.g., menu, forms, etc.) from the displayed icons and placing them wherever
required.
 Thus, visual programming can be considered as program development through manipulation of
several visual objects.
2. Types of Widgets:
 Different interface programming packages support different widget sets. However, a few important
types of widgets are common to most widget sets and are described below.
 Label widget: This is probably one of the simplest widgets. A label widget does nothing except
to display a label, i.e., it does not have any other interaction capabilities and is not sensitive to
mouse clicks. A label widget is often used as a part of other widgets.

 Container widget: These widgets do not stand by themselves, but exist merely to contain other
widgets. Other widgets are created as children of the container widget. When the container widget
is moved or resized, its children widgets also get moved or resized. A container widget has no
callback routines associated with it.
 Pop-up menu: These are transient and task specific. A pop-up menu appears upon pressing the
mouse button, irrespective of the mouse position.
 Pull-down menu: These are more permanent and general. You have to move the cursor to a
specific location and pull down this type of menu.
 Dialog boxes: We often need to select multiple elements from a selection list. A dialog box
remains visible until explicitly dismissed by the user. A dialog box can include areas for entering
text as well as values.
 Push button: A push button contains key words or pictures that describe the action that is
triggered when you activate the button. Usually, the action related to a push button occurs
immediately when you click a push button, unless it contains an ellipsis (...).
 Radio buttons: A set of radio buttons are used when only one option has to be selected out of
many options. A radio button is a hollow circle followed by text describing the option it stands
for. When a radio button is selected, it appears filled and the previously selected radio button
from the group is unselected.
 Combo boxes: A combo box looks like a button until the user interacts with it. When the user
presses or clicks it, the combo box displays a menu of items to choose from. Normally a combo
box is used to display one-of-many choices when space is limited, when the number of choices is
large, or when the menu items are computed at run-time.
3. An Overview of X-Window/MOTIF:
 One of the important reasons behind the extreme popularity of the X-window system is
that it allows development of portable GUIs.
 Applications developed using the X-window system are device independent.
 Network-independent GUI operation has been schematically represented in Figure 9.5.

 A is the computer on which the application is running. B can be any computer on the
network from which you can interact with the application.
 Network independent GUI was pioneered by the X-window system in the mid-eighties at MIT
(Massachusetts Institute of Technology) with support from DEC (Digital Equipment
Corporation).
 Built on top of X-windows are higher level functions collectively called Xtoolkit, which consists
of a set of basic widgets and a set of routines to manipulate these widgets.
 One of the most widely used widget sets is X/Motif. Digital Equipment Corporation (DEC) used
the basic X-window functions to develop its own look and feel for interface designs called
DECWindows.
4. X Architecture:
 The X architecture is pictorially depicted in Figure 9.6. The different terms used in this diagram
are explained as follows:

 Xserver: The X server runs on the hardware to which the display and the keyboard are attached.
The X server performs low-level graphics, window management, and user input functions. The X
server controls access to a bit-mapped graphics display resource and manages it.
 X protocol: The X protocol defines the format of the requests between client applications and
display servers over the network. The X protocol is designed to be independent of hardware,
operating systems, underlying network protocol, and the programming language used.
 X library (Xlib): The Xlib provides a set of about 300 utility routines for applications to call.
These routines convert procedure calls into requests that are transmitted to the server. Xlib
provides low-level primitives for developing a user interface, such as displaying a window,
drawing characters and graphics on the window, waiting for specific events, etc.
 Xtoolkit (Xt): The Xtoolkit consists of two parts: the intrinsics and the widgets. We have
already seen that widgets are predefined user interface components such as scroll bars, menu
bars, push buttons, etc., for designing GUIs. Intrinsics are a set of about a dozen library routines
that allow a programmer to combine a set of widgets into a user interface.
5. Size Measurement of a Component-based GUI:
 Lines of code (LOC) is not an appropriate metric to estimate and measure the size of a
component-based GUI.
 This is because the interface is developed by integrating several pre-built components.
 A way to measure the size of a modern user interface is widget points (wp).
 The size of a user interface (in wp units) is simply the total number of widgets used in the
interface.
 The size of an interface in wp units is a measure of the intricacy of the interface and is more or
less independent of the implementation environment.
 The wp measure opens up the possibility of contracting for a measured amount of user interface
functionality, instead of a vague definition of a complete system.
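 As a simple hypothetical illustration, a user interface built from two label widgets, three push
buttons, and one combo box would measure 2 + 3 + 1 = 6 wp, irrespective of how many lines of
toolkit code are generated behind the scenes.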
Coding and Testing
 Coding is undertaken once the design phase is complete and the design documents have been
successfully reviewed.
 After all the modules of a system have been coded and unit tested, the integration and system
testing phase is undertaken.
CODING:
 Coding is the process of transforming the design of a system into a computer language format.
The coding phase of software development is concerned with translating the design
specification into source code. ... Coding is done by coders or programmers, who are often
different people from the designers.

 The objective of the coding phase is to transform the design of a system into code in a high-level
language, and then to unit test this code.
 The input to the coding phase is the design document produced at the end of the design phase.
 Please recollect that the design document contains not only the high-level design of the system in
the form of a module structure (e.g., a structure chart), but also the detailed design.
 The main advantages of adhering to a standard style of coding are the following:
o A coding standard gives a uniform appearance to the codes written by different
engineers.
o It facilitates code understanding and code reuse.
o It promotes good programming practices.
 After a module has been coded, usually code review is carried out to ensure that the coding
standards are followed and also to detect as many errors as possible before testing.
1. Coding Standards and Guidelines :
 Good software development organizations usually develop their own coding standards and
guidelines depending on what suits their organization best and based on the specific types of
software they develop.
 The following gives an idea of the types of coding standards that are typically used (a small
illustrative sketch of these standards is given after the list).
Representative coding standards:
o Rules for limiting the use of globals: These rules list what types of data can be declared
global and what cannot, with a view to limit the data that needs to be defined with global
scope.
o Standard headers for different modules: The header of different modules should have
standard format and information for ease of understanding and maintenance. The
following is an example of header format that is being used in some companies:
 Name of the module.
 Date on which the module was created.
 Author’s name.
 Modification history.
 Synopsis of the module (a small write-up about what the module does).
 Different functions supported in the module, along with their input/output parameters.
 Global variables accessed/modified by the module.
o Naming conventions for global variables, local variables, and constant identifiers: A
popular naming convention is that variables are named using mixed case lettering. Global variable
names would always start with a capital letter (e.g., GlobalData) and local variable names start with
small letters (e.g., localData). Constant names should be formed using capital letters only (e.g.,
CONSTDATA).

o Conventions regarding error return values and exception handling mechanisms: The
way error conditions are reported by different functions in a program should be standard within an
organisation. For example, all functions should consistently return either a 0 or a 1 on encountering an
error condition.
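The following small sketch in C (with illustrative module, function, and variable names, not taken
from any particular organisation's standard) shows how the header, naming, and error-return
conventions above might look in practice:

/*
 * Module  : stack.c (illustrative name)
 * Created : 10-Jan-2024
 * Author  : A. Programmer
 * Modification history : 12-Jan-2024 - fixed overflow check in push().
 * Synopsis : implements a bounded integer stack.
 * Functions : push(int value) returns 0 on success, 1 on error.
 * Globals : StackTop (modified), StackData (modified)
 */
#define MAXSTACKSIZE 100              /* constant: capital letters only        */
int StackData[MAXSTACKSIZE];          /* globals: start with a capital letter  */
int StackTop = 0;

int push(int value)                   /* error reporting: 0 = success, 1 = error */
{
    int newTop = StackTop + 1;        /* local: starts with a small letter     */
    if (newTop > MAXSTACKSIZE) return 1;
    StackData[StackTop] = value;
    StackTop = newTop;
    return 0;
}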
Representative coding guidelines: The following are some representative coding guidelines
that are recommended by many software development organisations. Wherever necessary, the
rationale behind these guidelines is also mentioned.
o Do not use a coding style that is too clever or too difficult to understand: Code should be
easy to understand. Many inexperienced engineers actually take pride in writing cryptic and
incomprehensible code.
o Avoid obscure side effects: The side effects of a function call include modifications to the
parameters passed by reference, modification of global variables, and I/O operations. An obscure side
effect is one that is not obvious from a casual examination of the code.
o Do not use an identifier for multiple purposes: Programmers often use the same identifier
to denote several temporary entities. For example, some programmers make use of a temporary loop
variable for also computing and storing the final result. The rationale that they give for such multiple
use of variables is memory efficiency, e.g., three variables use up three memory locations, whereas
when the same variable is used for three different purposes, only one memory location is used.
o Some of the problems caused by the use of a variable for multiple purposes are as follows (a small
illustrative sketch is given after this list of guidelines):
 Each variable should be given a descriptive name indicating its purpose. This is not
possible if an identifier is used for multiple purposes. Use of a variable for multiple
purposes can lead to confusion and make it difficult for somebody trying to read and
understand the code.
 Use of variables for multiple purposes usually makes future enhancements more difficult.
For example, while changing the final computed result from integer to float type, the
programmer might subsequently notice that it has also been used as a temporary loop
variable that cannot be a float type.
o Code should be well-documented: As a rule of thumb, there should be at least one comment line
on average for every three source lines of code.
o Length of any function should not exceed 10 source lines: A lengthy function is usually very
difficult to understand as it probably has a large number of variables and carries out many
different types of computations. For the same reason, lengthy functions are likely to have a
disproportionately large number of bugs.
o Do not use GO TO statements: Use of GO TO statements makes a program unstructured. This
makes the program very difficult to understand, debug, and maintain.
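As a small illustrative sketch (hypothetical code, not taken from the text), the guideline on not using
an identifier for multiple purposes could look as follows:

/* Problematic: the loop variable i is also reused to hold the final sum. */
int sumBad(const int values[], int count)
{
    int i, sum = 0;
    for (i = 0; i < count; i++)
        sum = sum + values[i];
    i = sum;            /* reusing i for a second purpose saves one variable  */
    return i;           /* but obscures the intent and hinders later changes  */
}

/* Better: each variable has a single, descriptive purpose. */
int sumGood(const int values[], int count)
{
    int index, sum = 0;
    for (index = 0; index < count; index++)
        sum = sum + values[index];
    return sum;
}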
TESTING:

 The aim of program testing is to help identify all defects in a program.
 However , in practice, even after satisfactory completion of the testing phase, it is not possible to
guarantee that a program is error free.
 This is because the input data domain of most programs is very large, and it is not practical to test
the program exhaustively with respect to each value that the input can assume.
 Consider a function taking a floating point number as argument.
 If a tester takes 1 second to type in a value, then even a million testers would not be able to
test it exhaustively even after trying for a million years.
1. Basic Concepts and Terminologies:
i).How to test a program? :
- Testing a program involves executing the program with a set of test inputs and observing if the
program behaves as expected.
- If the program fails to behave as expected, then the input data and the conditions under which it fails
are noted for later debugging and error correction.

ii).Terminologies:
 A mistake is essentially any programmer action that later shows up as an incorrect result during
program execution. A programmer may commit a mistake in almost any development activity; for
example, a division by zero in an arithmetic operation is a mistake that can lead to an incorrect result.
 An error is the result of a mistake committed by a developer in any of the development activities.
Among the extremely large variety of errors that can exist in a program, one example of an error
is a call made to a wrong function.
 A failure of a program essentially denotes an incorrect behaviour exhibited by the program
during its execution. An incorrect behaviour is observed either as an incorrect result produced or
as an inappropriate activity carried out by the program. Every failure is caused by some bugs
present in the program.
 In the following, we give three randomly selected examples of failures:
o The result computed by a program is 0, when the correct result is 10.
o A program crashes on an input.
o A robot fails to avoid an obstacle and collides with it.

 A test case is a triplet [I, S, R], where I is the data input to the program under test, S is the state
of the program at which the data is to be input, and R is the result expected to be produced by the
program. The state of a program is also called its execution mode.
 A test scenario is an abstract test case in the sense that it only identifies the aspects of the
program that are to be tested without identifying the input, state, or output. A test case can be said
to be an implementation of a test scenario. In the test case, the input, output, and the state at which
the input would be applied is designed such that the scenario can be executed.
 A test script is an encoding of a test case as a short program. Test scripts are developed for
automated execution of the test cases.
 A test case is said to be a positive test case if it is designed to test whether the software correctly
performs a required functionality.
 A test case is said to be a negative test case if it is designed to test whether the software carries out
something that is not required of the system.
 A test suite is the set of all tests that have been designed by a tester to test a given program.
 Testability of a requirement denotes the extent to which it is possible to determine whether an
implementation of the requirement conforms to it in both functionality and performance.
 A failure mode of a software denotes an observable way in which it can fail. In other words, all
failures that have similar observable symptoms constitute a failure mode. As an example of the
failure modes of a software, consider a railway ticket booking software that has three failure
modes—failing to book an available seat, incorrect seat booking (e.g., booking an already booked
seat), and system crash.
 Equivalent faults denote two or more bugs that result in the system failing in the same failure
mode. As an example of equivalent faults, consider the following two faults in C language—
division by zero and illegal memory access errors. These two are equivalent faults, since each of
these leads to a program crash.
Verification versus validation:
 The objectives of both verification and validation techniques are very similar since both these
techniques are designed to help remove errors in a software.
 Verification is the process of determining whether the output of one phase of software
development conforms to that of its previous phase; whereas validation is the process of
determining whether a fully developed software conforms to its requirements specification.
 The primary techniques used for verification include review, simulation, formal verification, and
testing. Review, simulation, and testing are usually considered as informal verification
techniques. Formal verification usually involves use of theorem proving techniques or use of
automated tools such as a model checker. On the other hand, validation techniques are primarily
based on product testing.

 Verification does not require execution of the software, whereas validation requires execution of
the software.
 Verification is carried out during the development process to check if the development activities
are proceeding alright, whereas validation is carried out to check whether the right software, as
required by the customer, has been developed.
 Verification techniques can be viewed as an attempt to achieve phase containment of errors.
Phase containment of errors has been acknowledged to be a cost-effective way to eliminate
program bugs, and is an important software engineering principle.
 While verification is concerned with phase containment of errors, the aim of validation is to
check whether the deliverable software is error free.
2 . Testing Activities:
 Test suite design: The set of test cases using which a program is to be tested is designed possibly
using several test case design techniques.
 Running test cases and checking the results to detect failures: Each test case is run and the
results are compared with the expected results. A mismatch between the actual result and
expected results indicates a failure. The test cases for which the system fails are noted down for
later debugging.
 Locate error: In this activity, the failure symptoms are analysed to locate the errors. For each
failure observed during the previous activity, the statements that are in error are identified.
 Error correction: After the error is located during debugging, the code is appropriately changed
to correct the error.

3. Why Design Test Cases?:


 When test cases are designed based on random input data, many of the test cases do not contribute
to the significance of the test suite. That is, they do not help detect any additional defects not
already being detected by other test cases in the suite.

 Testing a software using a large collection of randomly selected test cases does not guarantee that
all (or even most) of the errors in the system will be uncovered.
 Let us try to understand why the number of random test cases in a test suite, in general, is not an
indicator of the effectiveness of testing. Consider the following example code segment which
determines the greater of two integer values x and y.
 This code segment has a simple programming error:
if (x > y) max = x;
else max = x; /* error: should have been max = y */
 For the given code segment, the test suite {(x=3,y=2);(x=2,y=3)} can detect the error, whereas a
larger test suite {(x=3,y=2);(x=4,y=3); (x=5,y=1)} does not detect the error.
 To satisfactorily test a software with minimum cost, we must design a minimal test suite that is of
reasonable size and can uncover as many existing errors in the system as possible.
 To reduce testing cost and at the same time to make testing more effective, systematic approaches
have been developed to design a small test suite that can detect most, if not all failures.
 A minimal test suite is a carefully designed set of test cases such that each test case helps detect
different errors. This is in contrast to testing using some random input values.
 There are essentially two main approaches to systematically design test cases:
 Black-box approach
 White-box (or glass-box) approach
 In the black-box approach, test cases are designed using only the functional specification of the
software. That is, test cases are designed solely based on an analysis of the input/output behaviour
(that is, functional behaviour) and do not require any knowledge of the internal structure of a
program.
 For this reason, black-box testing is also known as functional testing.
 Designing white-box test cases requires a thorough knowledge of the internal structure of a program, and
therefore white-box testing is also called structural testing.
 Black- box test cases are designed solely based on the input-output behaviour of a program. In
contrast, white-box test cases are based on an analysis of the code.
4. Testing in the Large versus Testing in the Small:
 A software product is normally tested in three levels or stages:
1. Unit testing
2. Integration testing
3. System testing
 During unit testing, the individual functions (or units) of a program are tested.
 Unit testing is referred to as testing in the small, whereas integration and system testing are
referred to as testing in the large.

 After testing all the units individually, the units are slowly integrated and tested after each step of
integration (integration testing). Finally, the fully integrated system is tested (system testing).
Integration and system testing are known as testing in the large.
UNIT TESTING:
 Unit testing is undertaken after a module has been coded and reviewed.
 UNIT TESTING is a level of software testing where individual units/components of the software are
tested.

1.Driver and stub modules:


 In order to test a single module, we need a complete environment to provide all relevant code that
is necessary for execution of the module. That is, besides the module under test, the following are
needed to test the module:
 The procedures belonging to other modules that the module under test calls.
 Non-local data structures that the module accesses.
 A procedure to call the functions of the module under test with appropriate parameters.
 Stub: The role of stub and driver modules is pictorially shown in Figure 10.3. A stub procedure is
a dummy procedure that has the same I/O parameters as the function called by the unit under test
but has a highly simplified behaviour.

 Driver: A driver module should contain the non-local data structures accessed by the module
under test. Additionally, it should also have the code to call the different functions of the unit
under test with appropriate parameter values for testing.
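A minimal sketch of a stub and a driver in C is shown below. The module under test, the function
names, and the fixed value returned by the stub are all hypothetical, chosen only to illustrate the roles
described above:

#include <stdio.h>

/* Stub: stands in for a function of another (not yet integrated) module.
   It has the same interface as the real getInterestRate() but a trivial body. */
double getInterestRate(int accountType)
{
    (void) accountType;        /* parameter ignored in the stub                */
    return 0.05;               /* fixed, simplified behaviour                  */
}

/* Unit under test: calls the stubbed function. */
double computeInterest(double principal, int accountType)
{
    return principal * getInterestRate(accountType);
}

/* Driver: supplies the test data and calls the unit under test. */
int main(void)
{
    double interest = computeInterest(1000.0, 1);
    if (interest > 49.99 && interest < 50.01)
        printf("test passed: interest = %f\n", interest);
    else
        printf("test FAILED: interest = %f\n", interest);
    return 0;
}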
BLACK-BOX TESTING:
 In black-box testing, test cases are designed from an examination of the input/output values only
and no knowledge of design or code is required.
 BLACK BOX TESTING, also known as Behavioral Testing, is a software testing method in
which the internal structure/design/implementation of the item being tested is not known to the
tester. 

This method attempts to find errors in the following categories:


 Incorrect or missing functions
 Interface errors
 Errors in data structures or external database access
 Behavior or performance errors
 Initialization and termination errors

 The following are the two main approaches available to design black box test cases:

 Equivalence class partitioning


 Boundary value analysis

1. Equivalence Class Partitioning:


 In the equivalence class partitioning approach, the domain of input values to the program under
test is partitioned into a set of equivalence classes.
 The partitioning is done such that for every input data belonging to the same equivalence class,
the program behaves similarly.
 The main idea behind defining equivalence classes of input data is that testing the code with any
one value belonging to an equivalence class is as good as testing the code with any other value
belonging to the same equivalence class.
 The following are two general guidelines for designing the equivalence classes:
1. If the input data values to a system can be specified by a range of values, then one valid
and two invalid equivalence classes need to be defined. For example, if the equivalence
class is the set of integers in the range 1 to 10 (i.e., [1,10]), then the invalid equivalence
classes are [−∞,0], [11,+∞].
2. If the input data assumes values from a set of discrete members of some domain, then one
equivalence class for the valid input values and another equivalence class for the invalid
input values should be defined. For example, if the valid equivalence class is {A,B,C},
then the invalid equivalence class is U − {A,B,C}, where U is the universe of possible input
values.
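As a small illustrative example, consider a hypothetical function that accepts an integer level in the
range 1 to 10. Following guideline 1 above, one test value can be picked from the valid class [1,10]
and one from each invalid class:

#include <stdio.h>

/* Hypothetical unit under test: only the integers 1 to 10 are valid levels. */
int isValidLevel(int level)
{
    return (level >= 1 && level <= 10);
}

int main(void)
{
    /* One representative value per equivalence class:
       valid class [1,10]      -> 5
       invalid class [-inf, 0] -> -3
       invalid class [11, +inf]-> 15                                          */
    printf("%d %d %d\n", isValidLevel(5), isValidLevel(-3), isValidLevel(15));
    return 0;
}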
2. Boundary Value Analysis:

 A type of programming error that is frequently committed by programmers is missing out on the
special consideration that should be given to the values at the boundaries of different equivalence
classes of inputs.
 For example, programmers may improperly use < instead of <=, or conversely <= for <, etc.
 Boundary value analysis-based test suite design involves designing test cases using the values at
the boundaries of different equivalence classes.
 To design boundary value test cases, it is required to examine the equivalence classes to check if
any of the equivalence classes contains a range of values.
 For example, if an equivalence class contains the integers in the range 1 to 10, then the boundary
value test suite is {0,1,10,11}.
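Continuing the hypothetical range check used above, a boundary value test driver for the valid range
[1,10] might look as follows:

#include <stdio.h>

static int isValidLevel(int level) { return level >= 1 && level <= 10; }

int main(void)
{
    int inputs[]   = {0, 1, 10, 11};   /* boundary value test suite           */
    int expected[] = {0, 1, 1, 0};     /* expected validity of each input     */
    for (int i = 0; i < 4; i++)
        printf("input %2d -> %d (expected %d)\n",
               inputs[i], isValidLevel(inputs[i]), expected[i]);
    return 0;
}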
3. Summary of the Black-box Test Suite Design Approach:
 We now summarise the important steps in the black-box test suite design approach:
 Examine the input and output values of the program.
 Identify the equivalence classes.
 Design equivalence class test cases by picking one representative value from each
equivalence class.
 Design the boundary value test cases as follows: examine if any equivalence class is a
range of values, and include values at the boundaries of such ranges in the test suite.
WHITE-BOX TESTING:
 White-box testing is an important type of unit testing. A large number of white-box testing
strategies exist.
 Each testing strategy essentially designs test cases based on analysis of some aspect of source
code and is based on some heuristic.
 White Box Testing is defined as the testing of a software solution's internal structure, design, and
coding. In this type of testing, the code is visible to the tester.

1. Basic Concepts:
A white-box testing strategy can either be coverage-based or fault-based.

 Fault-based testing: A fault-based testing strategy aims to detect certain types of faults. The
faults that a test strategy focuses on constitute the fault model of the strategy. An example of a
fault-based strategy is mutation testing.
 Coverage-based testing: A coverage-based testing strategy attempts to execute (or cover) certain
elements of a program. Popular examples of coverage-based testing strategies are statement
coverage, branch coverage, multiple condition coverage, and path coverage-based testing.
 Testing criterion for coverage-based testing: A coverage-based testing strategy typically targets
to execute (i.e., cover) certain program elements for discovering failures.
 Stronger versus weaker testing :
- The concepts of stronger, weaker, and complementary testing are schematically
illustrated in Figure 10.6.
- Observe in Figure 10.6(a) that testing strategy A is stronger than B since B covers only a
proper subset of the elements covered by A.
- On the other hand, Figure 10.6(b) shows A and B are complementary testing strategies
since some elements of A are not covered by B and vice versa.
- If a stronger testing has been performed, then a weaker testing need not be carried out.

2. Statement Coverage:
 The principal idea governing the statement coverage strategy is that unless a statement is
executed, there is no way to determine whether an error exists in that statement.

 It is obvious that without executing a statement, it is difficult to determine whether it causes a
failure due to illegal memory access, wrong result computation due to improper arithmetic
operation, etc.
 It can however be pointed out that a weakness of the statement-coverage strategy is that
executing a statement once and observing that it behaves properly for one input value is no
guarantee that it will behave correctly for all input values.
 Example 10.11 :Design statement coverage-based test suite for the following Euclid’s GCD
computation program:
int computeGCD(int x, int y)
{
    /* 1 */ while (x != y) {
    /* 2 */     if (x > y)
    /* 3 */         x = x - y;
    /* 4 */     else y = y - x;
    /* 5 */ }
    /* 6 */ return x;
}
 To design the test cases for the statement coverage, the conditional expression of the while
statement needs to be made true and the conditional expression of the if statement needs to be
made both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y = 4)},
all statements of the program would be executed at least once.
3. Branch Coverage:
 A test suite satisfies branch coverage, if it makes each branch condition in the program to assume
true and false values in turn.
 In other words, for branch coverage each branch in the CFG representation of the program must
be taken at least once, when the test suite is executed.
 Branch testing is also known as edge testing, since in this testing scheme, each edge of a
program’s control flow graph is traversed at least once.
Example 10.12 For the program of Example 10.11, determine a test suite to achieve branch coverage.
Answer: The test suite {(x = 3, y = 3), (x = 3, y = 2), (x = 4, y = 3), (x = 3, y = 4)} achieves branch
coverage.
4. Multiple Condition Coverage:
 In the multiple condition (MC) coverage-based testing, test cases are designed to make each
component of a composite conditional expression to assume both true and false values.
 For example, consider the composite conditional expression ((c1 .and. c2) .or. c3).

 A test suite would achieve MC coverage, if all the component conditions c1, c2 and c3 are each
made to assume both true and false values.
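A small sketch of what multiple condition coverage means for this expression is given below; the
function and the chosen test values are illustrative only:

#include <stdio.h>

/* Composite condition ((c1 && c2) || c3), written as a C function. */
static int composite(int c1, int c2, int c3)
{
    return (c1 && c2) || c3;
}

int main(void)
{
    /* The two test cases below already make each of c1, c2 and c3 assume
       both true (1) and false (0) values, as required for MC coverage.      */
    printf("%d\n", composite(1, 1, 1));    /* every component condition true  */
    printf("%d\n", composite(0, 0, 0));    /* every component condition false */
    return 0;
}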
5. Path Coverage:
 A test suite achieves path coverage if it executes each linearly independent path (or basis path)
at least once.
 A linearly independent path can be defined in terms of the control flow graph (CFG) of a
program.
Control flow graph (CFG):
- A control flow graph describes the sequence in which the different instructions of a program
get executed.
- In order to draw the control flow graph of a program, we need to first number all the
statements of a program. The different numbered statements serve as nodes of the control flow
graph (see Figure 10.5).

- We can define a CFG as follows. A CFG is a directed graph consisting of a set of nodes and
edges (N, E), such that each node n ∈ N corresponds to a unique program statement and an
edge exists between two nodes if control can transfer from one node to the other.
Path:
- A path through a program is any node and edge sequence from the start node to a terminal
node of the control flow graph of a program.
- Please note that a program can have more than one terminal node when it contains
multiple exit or return type of statements.
- Writing test cases to cover all paths of a typical program is impractical since there can be an
infinite number of paths through a program in presence of loops.

Linearly independent set of paths (or basis path set):
- A set of paths for a given program is called linearly independent set of paths (or the set of
basis paths or simply the basis set), if each path in the set introduces at least one new edge that
is not included in any other path in the set.
- If a set of paths is linearly independent of each other, then no path in the set can be obtained
through any linear operations (i.e., additions or subtractions) on the other paths in the set.
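- As an illustration, taking the statement numbers of the computeGCD() program given earlier as
CFG nodes, one possible linearly independent set of paths is 1-6, 1-2-3-5-1-6, and 1-2-4-5-1-6;
each of these paths introduces at least one edge that does not appear in the previously listed paths.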
6. Data Flow-based Testing:
 Data flow based testing method selects test paths of a program according to the definitions and
uses of different variables in a program.
 Data flow testing is a family of test strategies based on selecting paths through the program's
control flow in order to explore sequences of events related to the status of variables
or data objects. 
 Dataflow Testing focuses on the points at which variables receive values and the points at which
these values are used.
 Data Flow Testing uses the control flow graph to find the situations that can interrupt the flow of
the program.
 Reference or define anomalies in the flow of data are detected by examining the associations
between values and variables.
 These anomalies are:
o A variable is defined but not used or referenced,
o A variable is used but never defined,
o A variable is defined twice before it is used.
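A small hypothetical C fragment containing all three anomalies is sketched below (the variable names
are illustrative):

#include <stdio.h>

void anomalies(void)
{
    int a = 10;          /* 'a' is defined but never used or referenced         */
    int b;               /* 'b' is used below but never defined (uninitialised) */
    int c = b + 1;
    int d = 1;           /* 'd' is defined twice before it is used              */
    d = 2;
    printf("%d %d\n", c, d);
}

int main(void)
{
    anomalies();
    return 0;
}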
Advantages of Data Flow Testing:
Data Flow Testing is used to find the following issues-
 To find a variable that is used but never defined,
 To find a variable that is defined but never used,
 To find a variable that is defined multiple times before it is used,
 To find a variable that is deallocated before it is used.
Disadvantages of Data Flow Testing
 Time consuming and costly process
 Requires knowledge of programming languages
8. Mutation Testing:
 Mutation Testing is a type of software testing where we mutate (change) certain statements in the
source code and check if the test cases are able to find the errors.
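A minimal sketch of the idea, using a hypothetical function and a single hand-made mutant, is shown
below; in practice the mutants are generated automatically by a mutation testing tool:

#include <stdio.h>

/* Original (hypothetical) function under test. */
int maxOriginal(int x, int y) { return (x > y) ? x : y; }

/* A mutant: the relational operator '>' has been changed to '<'. */
int maxMutant(int x, int y)   { return (x < y) ? x : y; }

int main(void)
{
    /* The test case (x = 4, y = 3) "kills" this mutant, because the original
       and the mutant produce different results for it; a test suite that
       cannot kill a mutant is considered inadequate and is strengthened.     */
    printf("original: %d, mutant: %d\n", maxOriginal(4, 3), maxMutant(4, 3));
    return 0;
}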

DEBUGGING:
 After a failure has been detected, it is necessary to first identify the program statement(s) that are
in error and are responsible for the failure; the error can then be fixed.
1. Debugging Approaches:
I).Brute force method:
 This is the most common method of debugging but is the least efficient method.
 In this approach, print statements are inserted throughout the program to print the intermediate
values with the hope that some of the printed values will help to identify the statement in error.
 This approach becomes more systematic with the use of a symbolic debugger (also called a source
code debugger), because the values of different variables can be easily checked, and breakpoints and
watchpoints can be set to observe the values of variables.
II).Backtracking:
 This is also a fairly common approach. In this approach, starting from the statement at which
an error symptom has been observed, the source code is traced backwards until the error is
discovered.
 Unfortunately, as the number of source lines to be traced back increases, the number of
potential backward paths increases and may become unmanageably large for complex
programs, limiting the use of this approach.
III).Cause elimination method:
 In this approach, once a failure is observed, the symptoms of the failure (i.e., certain variable
is having a negative value though it should be positive, etc.) are noted.
Based on the failure symptoms, the causes which could possibly have contributed to the
symptom are developed and tests are conducted to eliminate each.
 A related technique of identification of the error from the error symptom is the software fault
tree analysis.
IV).Program slicing:
 This technique is similar to backtracking. In the backtracking approach, one often has to
examine a large number of statements.
 However, the search space is reduced by defining slices.
 A slice of a program for a particular variable and at a particular statement is the set of source
lines preceding this statement that can influence the value of that variable.

 Program slicing makes use of the fact that an error in the value of a variable can be caused by
the statements on which it is data dependent.
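A small hypothetical fragment illustrating a slice is given below; the slice for the variable sum at the
first printf statement contains only the lines that can influence the value of sum:

#include <stdio.h>

void demo(int n)
{
    int i, sum = 0, prod = 1;     /* 'sum = 0' is in the slice for sum         */
    for (i = 1; i <= n; i++) {    /* the loop header is in the slice           */
        sum  = sum + i;           /* in the slice: defines sum                 */
        prod = prod * i;          /* NOT in the slice: influences prod only    */
    }
    printf("sum = %d\n", sum);    /* slicing criterion: variable sum, here     */
    printf("prod = %d\n", prod);
}

int main(void)
{
    demo(5);
    return 0;
}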
2. Debugging Guidelines:
 The following are some general guidelines for effective debugging:
 Many times debugging requires a thorough understanding of the program design. Trying
to debug based on a partial understanding of the program design may require an
inordinate amount of effort to be put into debugging even for simple problems.
 Debugging may sometimes even require full redesign of the system. In such cases, a
common mistake that novice programmers often make is attempting not to fix the error
but only its symptoms.
 One must beware of the possibility that an error correction may introduce new errors.
Therefore, after every round of error-fixing, regression testing must be carried out.
INTEGRATION TESTING:
 Integration testing is carried out after all (or at least some of ) the modules have been unit tested.
 Successful completion of unit testing, to a large extent, ensures that the unit (or module) as a whole
works satisfactorily.
 In this context, the objective of integration testing is to detect the errors at the module interfaces
(call parameters).
 For example, it is checked that no parameter mismatch occurs when one module invokes the
functionality of another module.
 The objective of integration testing is to check whether the different modules of a program
interface with each other properly.
 During integration testing, different modules of a system are integrated in a planned manner using
an integration plan.
 The integration plan specifies the steps and the order in which modules are combined to realise the
full system.
 After each integration step, the partially integrated system is tested.
 Any one (or a mixture) of the following approaches can be used to develop the test plan:
 Big-bang approach to integration testing
 Top-down approach to integration testing
 Bottom-up approach to integration testing
 Mixed (also called sandwiched ) approach to integration testing
I).Big-bang approach to integration testing:
- Big-bang testing is the most obvious approach to integration testing.
- In this approach, all the modules making up a system are integrated in a single step.
- In simple words, all the unit tested modules of the system are simply linked together and tested.
- However , this technique can meaningfully be used only for very small systems.
- The main problem with this approach is that once a failure has been detected during integration
testing, it is very difficult to localise the error as the error may potentially lie in any of the
modules.
II).Bottom-up approach to integration testing:
- Large software products are often made up of several subsystems.
- A subsystem might consist of many modules which communicate among each other through
well-defined interfaces.
- In bottom-up integration testing, first the modules for each subsystem are integrated.
- The primary purpose of carrying out the integration testing of a subsystem is to test whether the
interfaces among the various modules making up the subsystem work satisfactorily.
- The principal advantage of bottom- up integration testing is that several disjoint subsystems
can be tested simultaneously.
- Another advantage of bottom-up testing is that the low-level modules get tested thoroughly,
since they are exercised in each integration step.
- A disadvantage of bottom-up testing is the complexity that occurs when the system is made up
of a large number of small subsystems that are at the same level.
III). Top-down approach to integration testing:
- Top-down integration testing starts with the root module in the structure chart and one or two
subordinate modules of the root module.
- After the top-level ‘skeleton’ has been tested, the modules that are at the immediately lower layer
of the ‘skeleton’ are combined with it and tested.
- Top-down integration testing approach requires the use of program stubs to simulate the effect of
lower-level routines that are called by the routines under test.
- An advantage of top-down integration testing is that it requires writing only stubs, and stubs are
simpler to write compared to drivers.
- A disadvantage of the top-down integration testing approach is that in the absence of lower-level
routines, it becomes difficult to exercise the top-level routines in the desired manner since the
lower level routines usually perform input/output (I/O) operations.
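- As noted above, top-down integration relies on stubs for the lower-level routines that are not yet ready. The following is a minimal Python sketch of this idea; the routine names (compute_interest, fetch_rate_stub) are hypothetical and stand in for a top-level routine and a not-yet-integrated lower-level routine.

def fetch_rate_stub(account_type):
    # Stub: simulates the lower-level routine that has not yet been
    # integrated; it simply returns a fixed, representative value.
    return 0.05

def compute_interest(balance, account_type, fetch_rate=fetch_rate_stub):
    # Top-level routine under test; its call interface to the lower-level
    # routine is exercised even though the real routine is replaced by a stub.
    return balance * fetch_rate(account_type)

# Simple check exercising the top-level routine through the stub.
assert compute_interest(1000, "savings") == 50.0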
IV). Mixed approach to integration testing:
- The mixed (also called sandwiched ) integration testing follows a combination of top-down
and bottom-up testing approaches.
- In the top-down approach, testing can start only after the top-level modules have been coded and unit tested.
- Similarly, bottom-up testing can start only after the bottom level modules are ready.
- The mixed approach overcomes this shortcoming of the top-down and bottom-up approaches.
- In the mixed testing approach, testing can start as and when modules become available after
unit testing.
1. Phased versus Incremental Integration Testing:
- Big-bang integration testing is carried out in a single step of integration.
- In contrast, in the other strategies, integration is carried out over several steps.
- In these later strategies, modules can be integrated either in a phased or incremental manner .
- A comparison of these two strategies is as follows:
 In incremental integration testing, only one new module is added to the partially integrated
system each time.
 In phased integration, a group of related modules are added to the partial system each
time.
SYSTEM TESTING:
 After all the units of a program have been integrated together and tested, system testing is taken
up.
 System tests are designed to validate a fully developed system to assure that it meets its
requirements. The test cases are therefore designed solely based on the SRS document.
 There are essentially three main kinds of system testing depending on who carries out testing:
1.Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the
developing organisation.
2. Beta Testing: Beta testing is the system testing performed by a select group of friendly
customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the customer to
determine whether to accept the delivery of the system.
1. Smoke Testing:
- Smoke testing is carried out before initiating system testing, to determine whether system testing would be meaningful or whether many parts of the software would fail even the basic tests.
- The idea behind smoke testing is that if the integrated program cannot pass even the basic tests, it is not yet ready for vigorous testing.
- For smoke testing, a few test cases are designed to check whether the basic functionalities are
working.
- For example, for a library automation system, the smoke tests may check whether books can be
created and deleted, whether member records can be created and deleted, and whether books can
be loaned and returned.
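- A minimal sketch of such smoke tests is shown below; the in-memory Library class is assumed here purely for illustration and stands in for the real library automation system.

class Library:
    # Tiny in-memory stand-in for the real system, used only to keep the
    # smoke-test sketch self-contained and runnable.
    def __init__(self):
        self.books, self.members, self.loans = set(), set(), set()
    def add_book(self, b): self.books.add(b)
    def delete_book(self, b): self.books.remove(b)
    def add_member(self, m): self.members.add(m)
    def delete_member(self, m): self.members.remove(m)
    def issue_book(self, b, m): self.loans.add((b, m))
    def return_book(self, b, m): self.loans.remove((b, m))

def smoke_tests(lib):
    # Each test exercises one basic functionality; if any fails,
    # the build is not ready for full system testing.
    lib.add_book("SE-101"); lib.delete_book("SE-101")
    lib.add_member("M-001"); lib.delete_member("M-001")
    lib.add_book("SE-102"); lib.add_member("M-002")
    lib.issue_book("SE-102", "M-002"); lib.return_book("SE-102", "M-002")

smoke_tests(Library())
print("Smoke tests passed: build is ready for system testing")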
2. Performance Testing:
- Performance testing is carried out to check whether the system meets the nonfunctional
requirements identified in the SRS document.
- There are several types of performance testing corresponding to various types of non-functional
requirements.
I). Stress testing:
- Stress testing is also known as endurance testing.
- Stress testing evaluates system performance when it is stressed for short periods of time.
- Stress tests are black-box tests which are designed to impose a range of abnormal and even
illegal input conditions so as to stress the capabilities of the software.
- Input data volume, input data rate, processing time, utilisation of memory, etc., are tested
beyond the designed capacity.
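- A small sketch of how input can be driven beyond the designed capacity for a short period is given below; process_request and the capacity figure are assumptions made only for illustration, not part of any particular system.

import time

DESIGNED_CAPACITY = 100   # requests per second the system is designed for (assumed)
DURATION = 5              # stress the system for a short period (seconds)

def process_request(payload):
    # Placeholder for the real black-box operation under test.
    return len(payload)

handled = failures = 0
start = time.time()
while time.time() - start < DURATION:
    try:
        process_request("x" * 10_000)   # abnormally large input data volume
        handled += 1
    except Exception:
        failures += 1

offered_rate = handled / DURATION
print(f"Offered {offered_rate:.0f} req/s against a designed capacity of "
      f"{DESIGNED_CAPACITY} req/s; failures observed: {failures}")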
II). Volume testing:
- Volume testing checks whether the data structures (buffers, arrays, queues, stacks, etc.) have
been designed to successfully handle extraordinary situations.
- For example, the volume testing for a compiler might be to check whether the symbol table
overflows when a very large program is compiled.
III). Configuration testing:
- Configuration testing is used to test system behaviour in various hardware and software
configurations specified in the requirements.
- Sometimes systems are built to work in different configurations for different users.
- For instance, a minimal configuration might be required to serve a single user, and extended configurations may be required to serve additional users; configuration testing checks the system behaviour in each such configuration.
IV). Compatibility testing:
- This type of testing is required when the system interfaces with external systems (e.g., databases,
servers, etc.).
- Compatibility testing aims to check whether the interfaces with the external systems perform as required.
- For instance, if the system needs to communicate with a large database system to retrieve
information, compatibility testing is required to test the speed and accuracy of data retrieval.
V). Regression testing:
- This type of testing is required when a software is maintained to fix some bugs or enhance
functionality, performance, etc.
VI). Recovery testing:
- Recovery testing tests the response of the system to the presence of faults, or loss of power ,
devices, services, data, etc.
- The system is subjected to the loss of the mentioned resources (as discussed in the SRS document)
and it is checked if the system recovers satisfactorily.
VII). Maintenance testing:
- This addresses testing the diagnostic programs, and other procedures that are required to help
maintenance of the system.
- It is verified that the artifacts exist and they perform properly.
VIII). Documentation testing:
- It is checked whether the required user manual, maintenance manuals, and technical manuals exist
and are consistent.
- If the requirements specify the types of audience for which a specific manual should be designed,
then the manual is checked for compliance of this requirement.
IX). Usability testing:
- Usability testing concerns checking the user interface to see if it meets all user requirements
concerning the user interface.
- During usability testing, the display screens, messages, report formats, and other aspects relating to
the user interface requirements are tested.
X). Security testing:
- Security testing is essential for software that handles or processes confidential data that is to be guarded against pilfering.
- It needs to be tested whether the system is fool-proof from security attacks such as intrusion by
hackers.
- Over the last few years, a large number of security testing techniques have been proposed, and
these include password cracking, penetration testing, and attacks on specific ports, etc.
3. Error Seeding:
- Sometimes customers specify the maximum number of residual errors that can be present in the
delivered software.
- These requirements are often expressed in terms of maximum number of allowable errors per line
of source code.
- The error seeding technique can be used to estimate the number of residual errors in a software.
- Error seeding, as the name implies, involves seeding the code with some known errors.
- In other words, some artificial errors are introduced (seeded) into the program.
- The number of these seeded errors that are detected in the course of standard testing is
determined.
- These values in conjunction with the number of unseeded errors detected during testing can be
used to predict the following aspects of a program:
1. The number of errors remaining in the product.
2. The effectiveness of the testing strategy.
- Let N be the total number of defects in the system, and let n of these defects be found by testing.
- Let S be the total number of seeded defects, and let s of these defects be found during testing.
Therefore, we get: n/N = s/S, so the total number of defects can be estimated as N = (S × n)/s.
Defects still remaining in the program after testing can be given by:
N − n = n × (S − s)/s
- Error seeding works satisfactorily only if the kind of seeded errors and their frequency of occurrence match closely with the kind of defects that actually exist.
- However , it is difficult to predict the types of errors that exist in a software.
- To some extent, the different categories of errors that are latent and their frequency of occurrence can be estimated by analysing historical data collected from similar projects.
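- A minimal sketch of this estimation, using the quantities defined above (S seeded defects, s seeded defects found, n unseeded defects found), is shown below; the sample figures are illustrative only.

def estimate_residual_defects(S, s, n):
    """Error seeding estimate.

    S : number of errors seeded into the program
    s : number of seeded errors detected during testing
    n : number of unseeded (real) errors detected during testing
    Returns (estimated total real defects N, estimated defects remaining).
    """
    if s == 0:
        raise ValueError("No seeded errors were detected; estimate not possible")
    N = S * n / s              # from n/N = s/S
    remaining = N - n          # = n * (S - s) / s
    return N, remaining

# Example: 100 errors seeded, 80 of them found, 40 real errors found.
N, remaining = estimate_residual_defects(S=100, s=80, n=40)
print(N, remaining)            # 50.0 real defects estimated, 10.0 remaining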
UNIT – 5
SOFTWARE RELIABILITY:
- The reliability of a software product essentially denotes its trustworthiness or dependability.
- Alternatively, the reliability of a software product can also be defined as the probability of the
product working “correctly” over a given period of time.
- Software Reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment.
- Software Reliability is hard to achieve, because the complexity of software tends to be high.
- We can summarise the main reasons that make software reliability more difficult to measure than
hardware reliability:
 The reliability improvement due to fixing a single bug depends on where the bug is
located in the code.
 The perceived reliability of a software product is observer-dependent.
 The reliability of a product keeps changing as errors are detected and fixed.
1. Hardware versus Software Reliability:
- An important characteristic feature that sets hardware and software reliability issues apart is the
difference between their failure patterns.
- Hardware components fail due to very different reasons as compared to software components.
Hardware components fail mostly due to wear and tear , whereas software components fail due to
bugs.
- A logic gate may be stuck at 1 or 0, or a resistor might short circuit. To fix a hardware fault, one
has to either replace or repair the failed part.
- In contrast, a software product would continue to fail until the error is tracked down and either the
design or the code is changed to fix the bug.
- For this reason, when a hardware part is repaired its reliability would be maintained at the level
that existed before the failure occurred; whereas when a software failure is repaired, the reliability
may either increase or decrease.
- A comparison of the changes in failure rate over the product life time for a typical hardware
product as well as a software product are sketched in Figure 11.1.
- The failure rate plot for a hardware product (Figure 11.1(a)) appears like a “bath tub”. For a hardware product, the failure rate is initially high, but decreases as the faulty components are identified and either repaired or replaced.
- The system then enters its useful life, where the rate of failure is almost constant.
- After some time (called product life time ) the major components wear out, and the failure rate
increases.
- The initial failures are usually covered through manufacturer’s warranty.
- In contrast to hardware products, software products show the highest failure rate just after purchase and installation (see the initial portion of the plot in Figure 11.1(b)).
- As the system is used, more and more errors are identified and removed resulting in reduced
failure rate.
- This error removal continues at a slower pace during the useful life of the product.
- As the software becomes obsolete no more error correction occurs and the failure rate remains
unchanged.
2 .Reliability Metrics of Software Products:
 Rate of occurrence of failure (ROCOF): ROCOF measures the frequency of occurrence of
failures. ROCOF measure of a software product can be obtained by observing the behaviour of a
software product in operation over a specified time interval and then calculating the ROCOF
value as the ratio of the total number of failures observed and the duration of observation.
 Mean time to failure (MTTF): MTTF is the time between two successive failures, averaged over
a large number of failures. To measure MTTF , we can record the failure data for n failures. Let
the failures occur at the time instants t1, t2, ..., tn. Then, MTTF can be calculated as
MTTF = [(t2 − t1) + (t3 − t2) + … + (tn − t(n−1))] / (n − 1)
 Mean time to repair (MTTR): Once failure occurs, some time is required to fix the error . MTTR
measures the average time it takes to track the errors causing the failure and to fix them.
 Mean time between failure (MTBF): The MTTF and MTTR metrics can be combined to get the
MTBF metric: MTBF=MTTF+MTTR. Thus, MTBF of 300 hours indicates that once a failure
occurs, the next failure is expected after 300 hours. In this case, the time measurements are real
time and not the execution time as in MTTF
 Probability of failure on demand (POFOD): Unlike the other metrics discussed, this metric does
not explicitly involve time measurements. POFOD measures the likelihood of the system failing
when a service request is made.
 Availability: Availability of a system is a measure of how likely the system would be available for
use over a given period of time. This metric not only considers the number of failures occurring
during a time interval, but also takes into account the repair time (down time) of a system when a
failure occurs.
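- A small sketch of how some of these metrics can be computed from a log of failure instants and repair times is given below; the data values are illustrative, and the availability expression MTTF/(MTTF + MTTR) is one common way of estimating availability.

# Failure instants (hours of operation) and the repair time after each failure.
failure_times = [90, 210, 320, 450, 600]     # t1 ... tn (illustrative data)
repair_times  = [4, 6, 5, 7, 8]              # hours taken to fix each failure

n = len(failure_times)

# MTTF: time between two successive failures, averaged over all failures.
mttf = sum(failure_times[i + 1] - failure_times[i] for i in range(n - 1)) / (n - 1)

# MTTR: average time taken to track down and fix the error causing a failure.
mttr = sum(repair_times) / n

# MTBF combines the two metrics.
mtbf = mttf + mttr

# Availability: fraction of time the system is up.
availability = mttf / (mttf + mttr)

print(mttf, mttr, mtbf, round(availability, 3))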
Shortcomings of reliability metrics of software products: A shortcoming of the above metrics is that they treat all failures alike, although in practice failures differ widely in their consequences. Failures can be classified as follows:
 Transient: Transient failures occur only for certain input values while invoking a function of the
system.
 Permanent: Permanent failures occur for all input values while invoking a function of the system.
 Recoverable: When a recoverable failure occurs, the system can recover without having to
shutdown and restart the system (with or without operator intervention).
 Unrecoverable: In unrecoverable failures, the system may need to be restarted.
 Cosmetic: These classes of failures cause only minor irritations, and do not lead to incorrect
results.
3. Reliability Growth Modeling:
- A reliability growth model is a mathematical model of how software reliability improves as errors
are detected and repaired.
- A reliability growth model can be used to predict when (or if at all) a particular level of reliability
is likely to be attained.
- Thus, reliability growth modeling can be used to determine when to stop testing to attain a given
reliability level.
Jelinski and Moranda model:
- The simplest reliability growth model is a step function model where it is assumed that the
reliability increases by a constant increment each time an error is detected and repaired.
Littlewood and Verrall’s model:
- This model allows for negative reliability growth to reflect the fact that when a repair is carried
out, it may introduce additional errors.
- It also models the fact that as errors are repaired, the average improvement to the product
reliability per repair decreases.
- It treats an error’s contribution to reliability improvement to be an independent random variable
having Gamma distribution.
STATISTICAL TESTING:
- Statistical testing is a testing process whose objective is to determine the reliability of the product
rather than discovering errors.
- The test cases for statistical testing are designed with an entirely different objective from those of conventional testing.
- To carry out statistical testing, we need to first define the operation profile of the product.
- Operation profile: Different categories of users may use a software product for very different
purposes. For example, a librarian might use the Library Automation Software to create member
records, delete member records, add books to the library, etc., whereas a library member might
use software to query about the availability of a book, and to issue and return books.
How to define the operation profile for a product?:
- We need to divide the input data into a number of input classes.
- For example, for a graphical editor software, we might divide the input into data associated with
the edit, print, and file operations.
- We then need to assign a probability value to each input class; to signify the probability for an
input value from that class to be selected.
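- The following sketch shows how test inputs can be drawn according to such an operation profile; the input classes and probability values are hypothetical and correspond to the graphical editor example above.

import random

# Hypothetical operation profile for a graphical editor: each input class
# is assigned the probability of being exercised by users.
operation_profile = {
    "edit":  0.60,
    "print": 0.25,
    "file":  0.15,
}

def generate_test_inputs(profile, count):
    classes = list(profile.keys())
    weights = list(profile.values())
    # Draw input classes with the frequencies given by the operation profile.
    return random.choices(classes, weights=weights, k=count)

test_inputs = generate_test_inputs(operation_profile, count=1000)
print(test_inputs[:10])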
1. Steps in Statistical Testing:
- The first step is to determine the operation profile of the software.
- The next step is to generate a set of test data corresponding to the determined operation profile.
- The third step is to apply the test cases to the software and record the time between each failure.
- After a statistically significant number of failures have been observed, the reliability can be
computed.
- For accurate results, statistical testing requires some fundamental assumptions to be satisfied.
- It requires a statistically significant number of test cases to be used.
- It further requires that a small percentage of test inputs that are likely to cause system failure be included.
- Now let us discuss the implications of these assumptions.
Pros and cons of statistical testing:
- Statistical testing allows one to concentrate on testing parts of the system that are most likely to be
used.
- Therefore, it results in a system that the users perceive to be more reliable (than it actually is!).
- Also, the reliability estimation arrived at by using statistical testing is more accurate compared to those of other methods discussed.
- However , it is not easy to perform the statistical testing satisfactorily due to the following two
reasons.
- There is no simple and repeatable way of defining operation profiles.
- Also, the number of test cases with which the system is to be tested should be statistically significant.
SOFTWARE QUALITY:
- The quality of a product is defined in terms of its fitness of purpose.
- That is, a good quality product does exactly what the users want it to do. For software products, fitness of purpose is usually interpreted in terms of satisfaction of the requirements laid down in the SRS document.
- Although “fitness of purpose” is a satisfactory definition of quality for many products such as a car
, a table fan, a grinding machine, etc.—“fitness of purpose” is not a wholly satisfactory definition
of quality for software products.
- The modern view of quality associates several quality factors (or attributes) with a software product, such as the following:
- Portability: A software product is said to be portable, if it can be easily made to work in different
hardware and operating system environments, and easily interface with external hardware devices
and software products.
- Usability: A software product has good usability, if different categories of users (i.e., both expert
and novice users) can easily invoke the functions of the product.
- Reusability: A software product has good reusability, if different modules of the product can
easily be reused to develop new products.
- Correctness: A software product is correct, if different requirements as specified in the SRS
document have been correctly implemented.
- Maintainability: A software product is maintainable, if errors can be easily corrected as and when
they show up, new functions can be easily added to the product, and the functionalities of the
product can be easily modified, etc.
McCall’s quality factors:
- McCall distinguishes two levels of quality attributes [McCall].
- The higher-level attributes, known as quality factors or external attributes, can only be measured indirectly.
- The second-level quality attributes are called quality criteria.
- Quality criteria can be measured directly, either objectively or subjectively.
- By combining the ratings of several criteria, we can either obtain a rating for the quality factors, or
the extent to which they are satisfied.
ISO 9126:
- ISO 9126 defines a set of hierarchical quality characteristics. Each subcharacteristic in this is
related to exactly one quality characteristic.
- This is in contrast to the McCall’s quality attributes that are heavily interrelated.
- Another difference is that the ISO characteristic strictly refers to a software product, whereas
McCall’s attributes capture process quality issues as well.
SOFTWARE QUALITY MANAGEMENT SYSTEM:
- A quality management system (often referred to as quality system) is the principal methodology
used by organisations to ensure that the products they develop have the desired quality.
Managerial structure and individual responsibilities:
- A quality system is the responsibility of the organisation as a whole.
- However , every organisation has a separate quality department to perform several quality system
activities.
- The quality system of an organisation should have the full support of the top management.
Quality system activities:
The quality system activities encompass the following:
 Auditing of projects to check if the processes are being followed.
 Collect process and product metrics and analyse them to check if quality goals are being met.
 Review of the quality system to make it more effective.
 Development of standards, procedures, and guidelines.
 Produce reports for the top management summarising the effectiveness of the quality system in the
organisation.
1. Evolution of Quality Systems:
- Quality systems have rapidly evolved over the last six decades.
- Prior to World War II, the usual method to produce quality products was to inspect the finished
products to eliminate defective products.
- For example, a company manufacturing nuts and bolts would inspect its finished goods and would
reject those nuts and bolts that are outside certain specified tolerance range.
- Since that time, quality systems of organisations have undergone four stages of evolution as shown
in Figure 11.3.
- The initial product inspection method gave way to quality control (QC) principles.
- Thus, quality control aims at correcting the causes of errors and not just rejecting the defective
products.
- The next breakthrough in quality systems, was the development of the quality assurance (QA)
principles.
2. Product Metrics versus Process Metrics:
All modern quality systems lay emphasis on collection of certain product and process metrics during
product development.
- Let us first understand the basic difference between the two: product metrics (e.g., size, number of defects) measure the characteristics of the product being developed, whereas process metrics (e.g., review effectiveness, productivity, average defect correction time) measure the characteristics of the process being used to develop the product.
ISO 9000:
- International standards organisation (ISO) is a consortium of 63 countries established to formulate
and foster standardisation.
- ISO published its 9000 series of standards in 1987.
1 What is ISO 9000 Certification?:
- ISO 9000 certification serves as a reference for contract between independent parties.
- In particular, a company awarding a development contract can form its opinion about the possible vendor performance based on whether the vendor has obtained ISO 9000 certification or not.
- The ISO 9000 standard specifies the guidelines for maintaining a quality system.
- The types of software companies to which the different ISO standards apply are as follows:
- ISO 9001: This standard applies to the organisations engaged in design, development, production,
and servicing of goods. This is the standard that is applicable to most software development
organisations.
- ISO 9002: This standard applies to those organisations which do not design products but are only
involved in production. Examples of this category of industries include steel and car
manufacturing industries who buy the product and plant designs from external sources and are
involved in only manufacturing those products. Therefore, ISO 9002 is not applicable to software
development organisations.
- ISO 9003: This standard applies to organisations involved only in installation and testing of
products.
2.ISO 9000 for Software Industry:
- ISO 9000 is a generic standard that is applicable to a large gamut of industries, starting from a
steel manufacturing industry to a service rendering company.
- Therefore, many of the clauses of the ISO 9000 documents are written using generic
terminologies and it is very difficult to interpret them in the context of software development
organisations.
- Two major differences between software development and development of other kinds of
products are as follows:
 Software is intangible and therefore difficult to control. It means that software would not be
visible to the user until the development is complete and the software is up and running. It is
difficult to control and manage anything that you cannot see and feel. In contrast, in any other
type of product manufacturing such as car manufacturing, you can see a product being developed
through various stages such as fitting engine, fitting doors, etc.
 During software development, the only raw material consumed is data. In contrast, large
quantities of raw materials are consumed during the development of any other product. As an
example, consider a steel making company.
3.Why Get ISO 9000 Certification?:
- Some of the benefits that accrue to organisations obtaining ISO certification are the following:
 Confidence of customers in an organisation increases when the organisation qualifies for
ISO 9001 certification. This is especially true in the international market. In fact, many
organisations awarding international software development contracts insist that the
development organisation have ISO 9000 certification. For this reason, it is vital for
software organisations involved in software export to obtain ISO 9000 certification.
 ISO 9000 requires a well-documented software production process to be in place. A well-
documented software production process contributes to repeatable and higher quality of
the developed software.
 ISO 9000 makes the development process focused, efficient, and cost effective.
 ISO 9000 certification points out the weak points of an organisation and recommends remedial action.
 ISO 9000 sets the basic framework for the development of an optimal process and TQM.
4 How to Get ISO 9000 Certification?:
- An organisation intending to obtain ISO 9000 certification applies to an ISO 9000 registrar for registration. The ISO 9000 registration process consists of the following stages:
1. Application stage: Once an organisation decides to go for ISO 9000 certification, it applies to a
registrar for registration.
2. Pre-assessment: During this stage the registrar makes a rough assessment of the organisation.
3. Document review and adequacy audit: During this stage, the registrar reviews the documents
submitted by the organisation and makes suggestions for possible improvements.
4. Compliance audit: During this stage, the registrar checks whether the suggestions made by it
during review have been complied to by the organisation or not.
5. Registration: The registrar awards the ISO 9000 certificate after successful completion of all
previous phases.
6. Continued surveillance: The registrar continues monitoring the organisation periodically.
5. Salient Features of ISO 9001 Requirements:
1. Document control: All documents concerned with the development of a software product
should be properly managed, authorised, and controlled. This requires a configuration
management system to be in place.
2. Planning: Proper plans should be prepared and then progress against these plans should be
monitored.
3. Review: Important documents across all phases should be independently checked and
reviewed for effectiveness and correctness.
4. Testing: The product should be tested against specification.
5. Organisational aspects: Several organisational aspects should be addressed e.g., management
reporting of the quality team.
6.ISO 9000-2000:
- ISO revised the quality standards in the year 2000 to fine tune the standards.
- The major changes include a mechanism for continuous process improvement.
- There is also an increased emphasis on the role of the top management, including establishing measurable objectives for various roles and levels of the organisation.
- The new standard recognises that there can be many processes in an organisation.
7. Shortcomings of ISO 9000 Certification:
- ISO 9000 requires a software production process to be adhered to, but does not guarantee the
process to be of high quality. It also does not give any guideline for defining an appropriate
process.
- The ISO 9000 certification process is not fool-proof and no international accreditation agency exists. Therefore it is likely that variations in the norms of awarding certificates can exist among the different accreditation agencies and also among the registrars.
- Organisations getting ISO 9000 certification often tend to downplay domain expertise and the
ingenuity of the developers. These organisations start to believe that since a good process is in place, the development results are truly person-independent.
- ISO 9000 does not automatically lead to continuous process improvement. In other words, it does
not automatically lead to TQM.
COMPUTER AIDED SOFTWARE ENGINEERING:
CASE ENVIRONMENT:
- Although individual CASE tools are useful, the true power of a tool set can be realised only when
these set of tools are integrated into a common framework or environment.
- If the different CASE tools are not integrated, then the data generated by one tool would have to be input to the other tools.
- CASE tools are characterised by the stage or stages of software development life cycle on which
they focus.
- CASE environment facilitates the automation of the step-by-step methodologies for software
development.
- In contrast to a CASE environment, a programming environment is an integrated collection of
tools to support only the coding phase of software development.
- The tools commonly integrated in a programming environment are a text editor, a compiler, and a
debugger.
- The different tools are integrated to the extent that once the compiler detects an error, the editor automatically takes the user to the statements in error and the error statements are highlighted.
- Examples of popular programming environments are Turbo C environment, Visual Basic, Visual
C++, etc.
1.Benefits of CASE:
- A key benefit arising out of the use of a CASE environment is cost saving through all
developmental phases. Different studies carried out to measure the impact of CASE put the effort reduction at between 30 per cent and 40 per cent.
- Use of CASE tools leads to considerable improvements in quality. This is mainly due to the fact that one can effortlessly iterate through the different phases of software development, and the chances of human error are considerably reduced.
- CASE tools help produce high quality and consistent documents. Since the important data relating
to a software product are maintained in a central repository, redundancy in the stored data is
reduced, and therefore, chances of inconsistent documentation are reduced to a great extent.
- CASE tools take out most of the drudgery in a software engineer’s work. For example, they need
not check meticulously the balancing of the DFDs, but can do it effortlessly through the press of a
button.
- CASE tools have led to revolutionary cost saving in software maintenance efforts. This arises not
only due to the tremendous value of a CASE environment in traceability and consistency checks,
but also due to the systematic information capture during the various phases of software
development as a result of adhering to a CASE environment.
- Introduction of a CASE environment has an impact on the style of working of a company, and
makes it oriented towards the structured and orderly approach.
CASE SUPPORT IN SOFTWARE LIFE CYCLE:
1.Prototyping Support:
- We have already seen that prototyping is useful to understand the requirements of complex
software products, to demonstrate a concept, to market new ideas, and so on. The prototyping
CASE tool’s requirements are as follows:
o Define user interaction.
o Define the system control flow.
o Store and retrieve data required by the system.
o Incorporate some processing logic.
- A good prototyping tool should support the following features:
- Since one of the main uses of a prototyping CASE tool is graphical user interface (GUI)
development, a prototyping CASE tool should support the user to create a GUI using a graphics
editor. The user should be allowed to define all data entry forms, menus and controls.
- It should integrate with the data dictionary of a CASE environment.
- If possible, it should be able to integrate with external user defined modules written in C or some
popular high level programming languages.
- The user should be able to define the sequence of states through which a created prototype can
run. The user should also be allowed to control the running of the prototype.
- The run time system of prototype should support mock up run of the actual system and
management of the input and output data.
2. Structured Analysis and Design:
- A CASE tool should support one or more of the structured analysis and design techniques. The CASE tool should support effortless drawing of analysis and design diagrams.
- The CASE tool should support drawing fairly complex diagrams and preferably through a
hierarchy of levels.
- It should provide easy navigation through different levels and through design and analysis.
- The tool must support completeness and consistency checking across the design and analysis and
through all levels of analysis hierarchy.
3. Code Generation:
- More pragmatic support expected from a CASE tool during code generation phase are the
following:
- The CASE tool should support generation of module skeletons or templates in one or more
popular languages. It should be possible to include copyright message, brief description of the
module, author name and the date of creation in some selectable format.
- The tool should generate records, structures, class definition automatically from the contents of
the data dictionary in one or more popular programming languages.
- It should generate database tables for relational database management systems.
- The tool should generate code for user interface from prototype definition for X window and MS
window based applications.
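- A toy sketch of how a CASE tool might emit such a module skeleton with a copyright message, module description, author name and creation date is shown below; the template format, function and parameter names are purely illustrative assumptions.

from datetime import date

SKELETON = '''\
/* Copyright (c) {year} {organisation}. All rights reserved. */
/* Module : {module}
   Author : {author}
   Created: {created}
   Purpose: {description} */

void {module}_init(void);
void {module}_run(void);
'''

def generate_module_skeleton(module, author, description,
                             organisation="Example Org"):
    # Fill the template with details that would normally be kept in the
    # CASE environment's data dictionary.
    return SKELETON.format(year=date.today().year, organisation=organisation,
                           module=module, author=author,
                           created=date.today().isoformat(),
                           description=description)

print(generate_module_skeleton("billing", "S. Kumar", "Prepares customer bills"))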
4. Test Case Generator:
- The CASE tool for test case generation should have the following features:
- It should support both design and requirement testing.
- It should generate test set reports in ASCII format which can be directly imported into the test
plan document.
CHARACTERISTICS OF CASE TOOLS
1. Hardware and Environmental Requirements:
- In most cases, it is the existing hardware that would place constraints upon the CASE tool
selection.
- Thus, instead of defining hardware requirements for a CASE tool, the task at hand becomes to fit
in an optimal configuration of CASE tool in the existing hardware capabilities.
- Therefore, we have to emphasise on selecting the most optimal CASE tool configuration for a
given hardware configuration.
- The heterogeneous network is one instance of distributed environment and we choose this for
illustration as it is more popular due to its machine independent features.
- The CASE tool implementation in heterogeneous network makes use of client-server paradigm.
- The multiple clients that run different modules access the data dictionary through this server.
2 Documentation Support:
- The deliverable documents should be organized graphically and should be able to incorporate text
and diagrams from the central repository.
- This helps in producing up-to-date documentation. The CASE tool should integrate with one or
more of the commercially available desktop publishing packages.
- It should be possible to export text, graphics, tables, data dictionary reports to the DTP package
in standard forms such as PostScript.
3 Project Management:
- It should support collecting, storing, and analysing information on the software project’s progress
such as the estimated task duration, scheduled and actual task start, completion date, dates and
results of the reviews, etc.
4 External Interface:
- The tool should allow exchange of information for reusability of design.
- The information which is to be exported by the tool should be preferably in ASCII format and
support open architecture.
- Similarly, the data dictionary should provide a programming interface to access information.
- It is required for integration of custom utilities, building new techniques, or populating the data
dictionary.
5 Reverse Engineering Support:
- The tool should support generation of structure charts and data dictionaries from the existing
source codes.
- It should populate the data dictionary from the source code.
- If the tool is used for re-engineering information systems, it should contain conversion tool from
indexed sequential file structure, hierarchical and network database to relational database systems.
6 Data Dictionary Interface:
- The data dictionary interface should provide view and update access to the entities and relations
stored in it. It should have print facility to obtain hard copy of the viewed screens.
- It should provide analysis reports like cross-referencing, impact analysis, etc. Ideally, it should
support a query language to view its contents.
7. Tutorial and Help:
- The application of a CASE tool and thereby its success depends on the users’ capability to effectively use all the features supported.
- Therefore, for the uninitiated users, a tutorial is very important. The tutorial should not be limited
to teaching the user interface part only, but should comprehensively cover the following points:
1. The tutorial should cover all techniques and facilities through logically classified sections.
2. The tutorial should be supported by proper documentation.
ARCHITECTURE OF A CASE ENVIRONMENT:
- The important components of a modern CASE environment are user interface, tool set, object management system (OMS), and a repository.
- We have already seen the characteristics of the tool set. Let us examine the other components of a
CASE environment.
User interface:
- The user interface provides a consistent framework for accessing the different tools thus making it
easier for the users to interact with the different tools and reducing the overhead of learning how
the different tools are used.
Object management system and repository:
- Different CASE tools represent the software product as a set of entities such as specification,
design, text data, project plan, etc.
- The object management system maps these logical entities into the underlying storage
management system (repository).
- The commercial relational database management systems are geared towards supporting large
volumes of information structured as simple relatively short records.
- There are only a few types of entities but a large number of instances.
- By contrast, CASE tools create a large number of entity and relation types with perhaps a few
instances of each.
Software Maintenance:
CHARACTERISTICS OF SOFTWARE MAINTENANCE:
- Software maintenance is becoming an important activity of a large number of organisations.
- This is no surprise, given the rate of hardware obsolescence, the immortality of a software product
per se, and the demand of the user community to see the existing software products run on newer
platforms, run in newer environments, and/or with enhanced features.
- When the hardware platform changes, and a software product performs some low-level functions,
maintenance is necessary.
Types of Software Maintenance:
1. Corrective: Corrective maintenance of a software product is necessary to rectify the bugs observed while the system is in use.
2. Adaptive: A software product might need maintenance when the customers need the product to
run on new platforms, on new operating systems, or when they need the product to interface with
new hardware or software.
3. Perfective: A software product needs maintenance to support the new features that users want it
to support, to change different functionalities of the system according to customer demands, or to
enhance the performance of the system.
1. Characteristics of Software Evolution:
- Lehman and Belady have studied the characteristics of evolution of several software products [1980]. They have expressed their observations in the form of laws.
- Lehman’s first law: A software product must change continually or become progressively less
useful. Every software product continues to evolve after its development through maintenance
efforts. Larger products stay in operation for longer times because of higher replacement costs
and therefore tend to incur higher maintenance efforts.
- Lehman’s second law: The structure of a program tends to degrade as more and more
maintenance is carried out on it. The reason for the degraded structure is that when you add a
function during maintenance, you build on top of an existing program, often in a way that the
existing program was not intended to support.
- Lehman’s third law: Over a program’s lifetime, its rate of development is approximately
constant. The rate of development can be quantified in terms of the lines of code written or
modified. Therefore this law states that the rate at which code is written or modified is
approximately the same during development and maintenance.
2 Special Problems Associated with Software Maintenance:
- Software maintenance work currently is typically much more expensive than what it should be
and takes more time than required. The reasons for this situation are the following:
 Software maintenance work in organisations is mostly carried out using ad hoc techniques
 Software maintenance has a very poor image in industry.
 Another problem associated with maintenance work is that the majority of software products
needing maintenance are legacy products.
SOFTWARE REVERSE ENGINEERING:
- Software reverse engineering is the process of recovering the design and the requirements
specification of a product from an analysis of its code.
- The purpose of reverse engineering is to facilitate maintenance work by improving the
understandability of a system and to produce the necessary documents for a legacy system.
- Reverse engineering is becoming important, since legacy software products lack proper
documentation, and are highly unstructured.
- The first stage of reverse engineering usually focuses on carrying out cosmetic changes to the
code to improve its readability, structure, and understandability, without changing any of its
functionalities.
- A way to carry out these cosmetic changes is shown schematically in Figure 13.1.
- After the cosmetic changes have been carried out on a legacy software, the process of extracting the code, design, and the requirements specification can begin. These activities are schematically shown in Figure 13.2.
SOFTWARE MAINTENANCE PROCESS MODELS:
- Before discussing process models for software maintenance, we need to analyse various activities
involved in a typical software maintenance project.
- The activities involved in a software maintenance project are not unique and depend on several
factors such as:
- (i) the extent of modification to the product required,
- (ii) the resources available to the maintenance team,
- (iii) the conditions of the existing product (e.g., how structured it is, how well documented it is,
etc.),
- (iv) the expected project risks, etc.
- Since the scope (activities required) for different maintenance projects vary widely, no single
maintenance process model can be developed to suit every kind of maintenance project. However,
two broad categories of process models can be proposed.
First model:
- The first model is preferred for projects involving small reworks where the code is changed
directly and the changes are reflected in the relevant documents later.
- This maintenance process is graphically presented in Figure 13.3.
- In this approach, the project starts by gathering the requirements for changes.
- The requirements are next analysed to formulate the strategies to be adopted for code change.
- At this stage, the association of at least a few members of the original development team goes a
long way in reducing the cycle time, especially for projects involving unstructured and
inadequately documented code.
- The availability of a working old system to the maintenance engineers at the maintenance site
greatly facilitates the task of the maintenance team as they get a good insight into the working of
the old system and also can compare the working of their modified system with the old system.
- Also, debugging of the reengineered system becomes easier as the program traces of both the
systems can be compared to localise the bugs.
Second model
- The second model is preferred for projects where the amount of rework required is significant.
This approach can be represented by a reverse engineering cycle followed by a forward
engineering cycle.
- Such an approach is also known as software re-engineering.
- An important advantage of this approach is that it produces a more structured design compared to
what the original product had, produces good documentation, and very often results in increased
efficiency.
- The efficiency improvements are brought about by a more efficient design. However, this
approach is more costly than the first approach.
- An empirical study indicates that process 1 is preferable when the amount of rework is no more
than 15 per cent (see Figure 13.5).
- Besides the amount of rework, several other factors might affect the decision regarding using
process model 1 over process model 2 as follows:
o Re-engineering might be preferable for products which exhibit a high failure rate.
o Re-engineering might also be preferable for legacy products having poor design and code
structure.
ESTIMATION OF MAINTENANCE COST:
- Boehm [1981] proposed a formula for estimating maintenance costs as part of his COCOMO cost
estimation model.
- Boehm’s maintenance cost estimation is made in terms of a quantity called the annual change
traffic (ACT).
- Boehm defined ACT as the fraction of a software product’s source instructions which undergo
change during a typical year either through addition or deletion.
ACT = (KLOCadded + KLOCdeleted) / KLOCtotal
- where, KLOCadded is the total kilo lines of source code added during maintenance and KLOCdeleted is the total kilo lines of source code deleted during maintenance.
- The annual change traffic (ACT) is multiplied with the total development cost to arrive at the
maintenance cost:
Maintenance cost = ACT × Development cost
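- A small sketch of this estimation is shown below; the KLOC figures and the development cost are illustrative values only.

def annual_change_traffic(kloc_added, kloc_deleted, kloc_total):
    # Fraction of the source instructions that change in a typical year.
    return (kloc_added + kloc_deleted) / kloc_total

def maintenance_cost(act, development_cost):
    # Boehm's estimate: maintenance cost = ACT x development cost (per year).
    return act * development_cost

act = annual_change_traffic(kloc_added=10, kloc_deleted=5, kloc_total=100)
print(act)                                                  # 0.15
print(maintenance_cost(act, development_cost=2_000_000))    # 300000.0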
SOFTWARE REUSE:
ISSUES IN ANY REUSE PROGRAM:
- The following are some of the basic issues that must be clearly understood for starting any reuse
program:
1. Component creation.
2. Component indexing and storing.
3. Component search.
4. Component understanding.
5. Component adaptation.
6. Repository maintenance.
 Component creation: For component creation, the reusable components have to be first
identified. Selection of the right kind of components having potential for reuse is important.
 Component indexing and storing: Indexing requires classification of the reusable components
so that they can be easily searched when we look for a component for reuse. The components
need to be stored in a relational database management system (RDBMS) or an object-oriented
database system (ODBMS) for efficient access when the number of components becomes large.
 Component searching: The programmers need to search for right components matching their
requirements in a database of components.
 Component understanding: The programmers need a precise and sufficiently complete
understanding of what the component does to be able to decide whether they can reuse the
component. To facilitate understanding, the components should be well documented and should
do something simple.
 Component adaptation: Often, the components may need adaptation before they can be reused,
since a selected component may not exactly fit the problem at hand. However, tinkering with the
code is also not a satisfactory solution because this is very likely to be a source of bugs.
 Repository maintenance: A component repository, once created, requires continuous maintenance. New components, as and when created, have to be entered into the repository.
A REUSE APPROACH:
- A promising approach that is being adopted by many organisations is to introduce a building
block approach into the software development process.
- For this, the reusable components need to be identified after every development project is
completed.
- The reusability of the identified components has to be enhanced and these have to be cataloged
into a component library.
- Domain analysis is a promising approach to identify reusable components.
- In the following subsections, we discuss the domain analysis approach to create reusable
components.
1. Domain Analysis:
- The aim of domain analysis is to identify the reusable components for a problem domain.
Reuse domain:
 A reuse domain is a technically related set of application areas.
 A body of information is considered to be a problem domain for reuse, if a deep and
comprehensive relationship exists among the information items as characterised by patterns of
similarity among the development components of the software product.
 A reuse domain is a shared understanding of some community, characterised by concepts,
techniques, and terminologies that show some coherence.
 Examples of domains are accounting software domain, banking software domain, business
software domain, manufacturing automation software domain, telecommunication software
domain, etc
 During domain analysis, a specific community of software developers get together to discuss
community-wide solutions.
 Analysis of the application domain is required to identify the reusable components.
 The actual construction of the reusable components for a domain is called domain engineering.
Evolution of a reuse domain:
 The ultimate result of domain analysis is the development of problem-oriented languages. The problem-oriented languages are also known as application generators.
 These application generators, once developed form application development standards.
 The domains slowly develop. As a domain develops, we may distinguish the various stages it
undergoes:
 Stage 1 : There is no clear and consistent set of notations. Obviously, no reusable components are
available. All software is written from scratch.
 Stage 2: Here, only experience from similar projects is used in a development effort. This means that there is only knowledge reuse.
 Stage 3: At this stage, the domain is ripe for reuse. The set of concepts are stabilised and the
notations standardised. Standard solutions to standard problems are available. There is both
knowledge and component reuse.
 Stage 4: The domain has been fully explored. The software development for the domain can largely be automated. Programs are not written in the traditional sense any more. Programs are written using a domain specific language, which is also known as an application generator.
2 Component Classification:
 Components need to be properly classified in order to develop an effective indexing and storage
scheme.
 We have already remarked that hardware reuse has been very successful. If we look at the classification of hardware components for a clue, then we can observe that hardware components are classified using a multilevel hierarchy.
 At the lowest level, the components are described in several forms—natural language
description, logic schema, timing information, etc.
 The higher the level at which a component is described, the more is the ambiguity. This has
motivated the Prieto-Diaz’s classification scheme.
Prieto-Diaz’s classification scheme:
- Each component is best described using a number of different characteristics or facets. For
example, objects can be classified using the following:
1. Actions they embody.
2. Objects they manipulate.
3. Data structures used.
4. Systems they are part of, etc.
- Prieto-Diaz’s faceted classification scheme requires choosing an n-tuple that best fits a
component.
- Faceted classification has advantages over enumerative classification. Strictly enumerative
schemes use a pre-defined hierarchy.
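- A minimal sketch of faceted classification and facet-based lookup is shown below; the facet names, component entries, and the find_components helper are hypothetical and only illustrate the idea of describing each component by an n-tuple of facets.

# Each reusable component is described by an n-tuple of facets.
components = {
    "sort_records":  {"action": "sort",   "object": "record", "structure": "array",  "system": "payroll"},
    "search_index":  {"action": "search", "object": "index",  "structure": "b-tree", "system": "library"},
    "queue_manager": {"action": "queue",  "object": "job",    "structure": "queue",  "system": "spooler"},
}

def find_components(**facets):
    # Return the components whose facet values match all the given facets.
    return [name for name, desc in components.items()
            if all(desc.get(f) == v for f, v in facets.items())]

print(find_components(action="search"))                      # ['search_index']
print(find_components(object="record", structure="array"))   # ['sort_records']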
3 Searching:
- The domain repository may contain thousands of reuse items. In such large domains, what is the
most efficient way to search an item that one is looking for?
- A popular search technique that has proved to be very effective is one that provides a web
interface to the repository.
- Using such a web interface, one would first locate an item using an approximate automated keyword search, and then browse from these results using the links provided to look up related items.
- The approximate automated search locates products that appear to fulfill some of the specified
requirements.
- The items located through the approximate search serve as a starting point for browsing the
repository.
4 Repository Maintenance:
- Repository maintenance involves entering new items, retiring those items which are no more
necessary, and modifying the search attributes of items to improve the effectiveness of search.
- Also, the links relating the different items may need to be modified to improve the effectiveness
of search.
- The software industry is always trying to implement something that has not been quite done
before.
5. Reuse without Modifications:
- Once standard solutions emerge, no modifications to the program parts may be necessary. One
can directly plug in the parts to develop his application.
- Reuse without modification is much more useful than the classical program libraries. These can
be supported by compilers through linkage to run-time support routines (application generators).
- Application generators translate specifications into application programs.
- The specification usually is written using 4GL. The specification might also be in a visual form.
The programmer would create a graphical drawing using some standard available symbols.
- Defining what is variant and what is invariant corresponds to parameterising a subroutine to make
it reusable.
- Application generators have been applied successfully to data processing applications, user interface development, and compiler development.
- Application generators are less successful with the development of applications with close
interaction with hardware such as real-time systems.