Software Engineering
MCA
Second Semester
Bharathidasan University
Centre for Distance and Online Education
Chairman:
Dr. M. Selvam
Vice-Chancellor
Bharathidasan University
Tiruchirappalli-620 024
Tamil Nadu
Co-Chairman:
Dr. G. Gopinath
Registrar
Bharathidasan University
Tiruchirappalli-620 024
Tamil Nadu
Course Co-Ordinator:
Dr. A. Edward William Benjamin
Director-Centre for Distance and Online Education
Bharathidasan University
Tiruchirappalli-620 024
Tamil Nadu
The Syllabus is Revised from 2021-22 onwards
Reviewer
Mrs. T. Lucia Agnes Beena, Asst. Professor & Head, Dept of Information Technology, St. Joseph's College, Trichy – 620 002
Author:
Dr. T. Dheepak, Asst. Professor, Dept of Computer Science, CDOE, Bharathidasan University, Trichy
Author:
Rohit Khurana, CEO, ITL Education Solutions Ltd.
Units (1-14)
Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has been obtained by its Authors from sources believed to be reliable and is correct to the best of their knowledge. However, the Publisher and its Authors shall in no event be liable for any errors, omissions or damages arising out of use of this information and specifically disclaim any implied warranties of merchantability or fitness for any particular use.
BLOCK I: INTRODUCTION
Unit 1: Software: Role of software, Software myths. Generic view of process: A layered technology, a process framework, The Capability Maturity Model Integration (CMMI)
Unit 2: Process patterns, Process assessment, Personal and Team process models
Unit 3: Process model: The waterfall model, Incremental process models, Evolutionary process models, The Unified process

BLOCK II: REQUIREMENT ENGINEERING
Unit 4: Design and Construction, Requirement Engineering Tasks, Requirements Engineering Process, Validating Requirements
Unit 5: Building the Analysis Model: Requirement analysis, Data Modeling concepts, Object-Oriented Analysis
Unit 6: Modeling: Scenario-Based Modeling, Flow-Oriented Modeling, Class-Based Modeling, Creating a Behavioral Model
The notion of software engineering was first proposed in 1968. Since then, software
engineering has evolved as a full-fledged engineering discipline that is accepted as
a field involving in-depth study and research. Software engineering methods and
tools have been successfully implemented in various applications spread across
different walks of life. Software engineering has been defined as a systematic
approach to develop software within a specified time and budget. It provides
methods to handle complexities in a software system and enables the development
of reliable software systems that maximize productivity.
The Institute of Electrical and Electronic Engineers (IEEE) defines software
as ‘a collection of computer programs, procedures, rules and associated
documentation and data’. Software is responsible for managing, controlling and
integrating the hardware components of a computer system in order to accomplish
a specific task. It tells the computer what to do and how to do it.
This book discusses the phases of software development, the design process,
the coding methodology and problems that might occur during the development
procedure of software. Software engineering offers ways and means to deal with
intricacies in a software system and facilitates the development of dependable
software systems, which maximize productivity. Software development comprises
two phases: Requirement Analysis and Planning. The requirement analysis phase
measures the requirements of the end-user, while planning chalks out the strategy
to achieve those requirements through the software to be produced. Cost estimation
establishes an estimate of the budget required to develop a software or project
and also helps in effective use of resources and time. This is done at the beginning
of the project and also between various stages of development to capture any
fluctuation of expenditure.
Once the requirements are specified, the design process begins. In this
process, the software engineer collates the customer’s business requirements and
technical considerations to model the product to be built. After successful completion
of the design phase, the specifications thus created are translated into source code.
This is the implementation phase wherein the language that meets user’s requirements
in the best way is selected for coding.
The software is tested after the implementation phase culminates. Testing
brings forward any kind of errors and bugs that exist, evaluates the capability of
the system and ensures whether the system is built to meet the target users’
requirements. The more stringent the testing, the better will be the quality of the
product. As the product is used in the market, the users’ requirements keep on
changing. This calls for additions of new features and functionalities. Software
maintenance modifies the system according to the user’s needs and also eliminates
errors, as and when they arise.
This book, Software Engineering, follows the SIM format or the self-
instructional mode wherein each unit begins with an ‘Introduction’ to the topic
followed by an outline of the ‘Objectives’. The detailed content is then presented
in a simple and organized manner, interspersed with 'Check Your Progress' questions
to test the understanding of the students. A ‘Summary’ along with a list of ‘Key
Words' and a set of 'Self Assessment Questions and Exercises' is also provided at the end of each unit for effective recapitulation.
BLOCK - I
INTRODUCTION
UNIT 1 SOFTWARE
Structure
1.0 Introduction
1.1 Objectives
1.2 Role of Software
1.2.1 Software Myths
1.3 Generic View of Process
1.3.1 Process Framework
1.3.2 Capability Maturity Model Integration (CMMI)
1.4 Answers to Check Your Progress Questions
1.5 Summary
1.6 Key Words
1.7 Self Assessment Questions and Exercises
1.8 Further Readings
1.9 Learning Outcomes
1.0 INTRODUCTION
In this unit, you will learn about software and software development projects.
In earlier times, software was simple in nature and hence, software development
was a simple activity. However, as technology improved, software became more
complex and software projects grew larger. Software development now
necessitated the presence of a team, which could prepare detailed plans and designs,
carry out testing, develop intuitive user interfaces, and integrate all these activities
into a system. This new approach led to the emergence of a discipline known as
software engineering.
Software engineering provides methods to handle complexities in a software
system and enables the development of reliable software systems, which maximize
productivity. In addition to the technical aspects of the software development, it
also covers management activities which include guiding the team, budgeting,
preparing schedules, etc. The notion of software engineering was first proposed in
1968. Since then, software engineering has evolved as a full-fledged engineering
discipline, which is accepted as a field involving in-depth study and research.
Software engineering methods and tools have been successfully implemented in
various applications spread across different walks of life.
1.1 OBJECTIVES
Business software: This class of software is widely used in areas where
management and control of financial activities is of utmost importance. The
fundamental component of a business system comprises payroll, inventory,
and accounting software that permit the user to access relevant data from
the database. These activities are usually performed with the help of specialized business software that provides an efficient framework for business operations and management decisions.
Engineering and scientific software: This class of software has emerged
as a powerful tool in the research and development of next generation
technology. Applications such as the study of celestial bodies, under-surface
activities, and programming of an orbital path for space shuttles are heavily
dependent on engineering and scientific software. This software is designed
to perform precise calculations on complex numerical data that are obtained
in a real-time environment.
Artificial intelligence (AI) software: This class of software is used where
the problem-solving technique is non-algorithmic in nature. The solutions of
such problems are generally not amenable to computation or
straightforward analysis. Instead, these problems require specific problem-
solving strategies that include expert system, pattern recognition, and game-
playing techniques. In addition, they involve different kinds of search
techniques which include the use of heuristics. The role of artificial intelligence
software is to add certain degrees of intelligence to the mechanical hardware
in order to get the desired work done in an agile manner.
Web-based software: This class of software acts as an interface between
the user and the Internet. Data on the Internet is in the form of text, audio,
or video format, linked with hyperlinks. A web browser is software that retrieves web pages from the Internet. The software incorporates executable instructions written in special scripting languages such as CGI or ASP. Apart
from providing navigation on the Web, this software also supports additional
features that are useful while surfing the Internet.
Personal computer (PC) software: This class of software is used for
both official and personal use. The personal computer software market has grown over the last two decades, from the normal text editor to the word processor
and from simple paintbrush to advanced image-editing software. This
software is used predominantly in almost every field, whether it is database
management system, financial accounting package, or multimedia-based
software. It has emerged as a versatile tool for routine applications.
1.2.1 Software Myths
The development of software requires dedication and understanding on the
developers’ part. Many software problems arise due to myths that are formed
during the initial stages of software development. Unlike ancient folklore that often
provides valuable lessons, software myths propagate false beliefs and confusion
in the minds of management, users and developers.
Management Myths
User Myths
In most cases, users tend to believe myths about the software because software
managers and developers do not try to correct the false beliefs. These myths lead
to false expectations and ultimately develop dissatisfaction among the users.
Common user myths are listed in Table 1.2.
Table 1.2 User Myths
Developer Myths
In the early days of software development, programming was viewed as an art,
but now software development has gradually become an engineering discipline.
However, developers still believe in some myths. Some of the common developer
myths are listed in Table 1.3.
Table 1.3 Developer Myths
Software Crisis
In the late 1960s, it became clear that the development of software is different
from manufacturing other products. This is because employing more manpower
(programmers) later in the software development does not always help speed up
the development process. Instead, sometimes it may have negative impacts like
delay in achieving the scheduled targets, degradation of software quality, etc. Though
software has been an important element of many systems for a long time,
developing software within a certain schedule and maintaining its quality is still
difficult.
History has seen that delivering software after the scheduled date or with
errors has caused large scale financial losses as well as inconvenience to many.
Disasters such as the Y2K problem affected economic, political, and administrative
systems of various countries around the world. This situation, where catastrophic
failures have occurred, is known as software crisis. The major causes of software
crisis are the problems associated with poor quality software such as malfunctioning
of software systems, inefficient development of software, and the most important,
process, the software project can be easily developed. The activities in software
project comprise various tasks for managing resources and developing products.
Figure 1.2 shows that software project involves people (developers, project
manager, end-users, and so on) also referred to as participants who use software
processes to create a product according to the user’s requirements. The participants
play a major role in the development of the project and select the appropriate
process for the project. In addition, a project is efficient if it is developed within
the time constraint. The outcome or the result of the software project is known as
a product. Thus, a software project uses software processes to create a product.
Process framework (see Figure 1.5) determines the processes which are essential for completing a complex software project. This framework identifies certain activities, known as framework activities, which are applicable to all software projects regardless of their type and complexity. Some of the framework activities are listed below, followed by a short illustrative sketch.
Communication: It involves communication with the user so that the
requirements are easily understood.
Planning: It establishes a plan for accomplishing the project. It describes
the schedule for the project, the technical tasks involved, expected risks,
and the required resources.
Modeling: It encompasses creation of models, which allow the developer
and the user to understand software requirements and the designs to achieve
those requirements.
Construction: It combines generation of code with testing to uncover errors
in the code.
Deployment: It implies that the final product (software) is delivered to the
user. The user evaluates the delivered product and provides feedback based
on the evaluation.
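The five activities above can be read as an ordered pipeline that every project instance passes through, whatever its size. The following Python sketch is purely illustrative; the dictionary-based project record and the function bodies are invented here, not part of any published framework.

```python
# Illustrative sketch only: the five generic framework activities applied in
# order to a project record. All names and artifacts are invented.

def communication(project):
    project["requirements"] = "user needs gathered and understood"

def planning(project):
    project["plan"] = "schedule, technical tasks, risks, resources"

def modeling(project):
    project["models"] = "requirement and design models"

def construction(project):
    project["code"] = "generated code, tested to uncover errors"

def deployment(project):
    project["feedback"] = "user evaluation of the delivered software"

FRAMEWORK_ACTIVITIES = [communication, planning, modeling,
                        construction, deployment]

project = {}
for activity in FRAMEWORK_ACTIVITIES:   # applicable to all projects
    activity(project)
print(project)
```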
1.5 SUMMARY
UNIT 2 PROCESS PATTERNS
2.0 INTRODUCTION
In this unit, you will learn about process assessment. To accomplish a set of
tasks, it is important to go through a sequence of predictable steps. This sequence
of steps refers to a road map, which helps in developing a timely, high quality, and
highly efficient product or system. Road map, commonly referred to as software
process, comprises activities, constraints, and resources that are used to produce
an intended system. Software process helps to maintain a level of consistency and
quality in products or services that are produced by different people. The process
needs to be assessed in order to ensure that it meets a set of basic process criteria,
which is essential for implementing the principles of software engineering in an
efficient manner. You will also learn about the personal and team process models.
2.1 OBJECTIVES
The existence of software process does not guarantee the timely delivery of the
software and its ability to meet the user’s expectations. The process needs to be
assessed in order to ensure that it meets a set of basic process criteria, which is
essential for implementing the principles of software engineering in an efficient
manner. The process is assessed to evaluate methods, tools, and practices, which
are used to develop and test the software. The aim of process assessment is to
identify the areas for improvement and suggest a plan for making that improvement.
The main focus areas of process assessment are listed below.
Obtaining guidance for improving software development and test processes
Obtaining an independent and unbiased review of the process
Obtaining a baseline (defined as a set of software components and
documents that have been formerly reviewed and accepted; that serves as
the basis for further development) for improving quality and productivity of
processes.
As shown in Figure 2.1, software process assessment examines whether
the software processes are effective and efficient in accomplishing the goals. This
is determined by the capability of selected software processes. The capability of a
process determines whether a process with some variations is capable of meeting
user’s requirements. In addition, it measures the extent to which the software
process meets the user’s requirements. Process assessment is useful to the
organization as it helps in improving the existing processes. In addition, it determines
the strengths, weaknesses and the risks involved in the processes.
Figure 2.1 also shows that process assessment leads to process capability
determination and process improvement. Process capability determination is an
organized assessment, which analyzes the software processes in an organization.
In addition, process capability determination identifies the capabilities of a process
and the risks involved in it. The process improvement identifies the changes to be
made in the software processes. The software capability determination motivates
the organization to perform software process improvement.
Different approaches are used for assessing software process. These
approaches are SPICE (ISO/IEC15504), ISO 9001:2000, standard CMMI
assessment method for process improvement, CMM-based appraisal for
internal process improvement, and Bootstrap.
SPICE (ISO/IEC 15504)
ISO 9001:2000
ISO (International Organization for Standardization) established a standard known
as ISO 9001:2000 to determine the requirements of quality management systems.
A quality management system refers to the activities within an organization,
which satisfies the quality related expectations of customers. Organizations ensure
that they have a quality management system by demonstrating their conformance
to the ISO 9001:2000 standard. The major advantage of this standard is that it
achieves a better understanding and consistency of all quality practices throughout
the organization. In addition, it strengthens the customer’s confidence in the product.
This standard follows a plan-do-check-act (PDCA) cycle (see Figure 2.3), which includes a set of activities that are listed below; a toy sketch of the cycle follows the list.
Plan: Determines the processes and resources which are required to develop
a quality product according to the user’s satisfaction.
Do: Performs activities according to the plan to create the desired product.
Check: Measures whether the activities for establishing quality management
according to the requirements are accomplished. In addition, it monitors
the processes and takes corrective actions to improve them.
Act: Initiates activities which constantly improve processes in the
organization.
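Read procedurally, the PDCA cycle is a loop that repeats until the quality objectives are met. The sketch below is a toy under stated assumptions: the functions, the quality scores, and the 0.9 threshold are all invented for illustration and are not part of the ISO 9001:2000 standard.

```python
# Toy PDCA loop; every name and number here is an illustrative assumption.

def plan():
    return {"processes": "defined", "resources": "allocated"}

def do(the_plan):
    return {"product": "built according to " + the_plan["processes"] + " processes"}

def check(product, cycle):
    # pretend each cycle's corrective actions raise the measured quality
    return min(0.6 + 0.1 * cycle, 1.0)

def act(score):
    print(f"cycle ends: initiating improvement actions (quality={score:.1f})")

score, cycle = 0.0, 1
while score < 0.9:            # repeat until quality objectives are met
    p = plan()
    product = do(p)
    score = check(product, cycle)
    act(score)
    cycle += 1
```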
Class ‘C’ appraisal methods are inexpensive and used for a short duration.
In addition, they provide quick feedback to the result of the assessment. In other
words, these appraisal methods are useful for periodic assessment of the projects.
Class ‘B’ and Class ‘C’ appraisals are useful for organizations that do not
require generation of ratings. The primary reason for all appraisal methods should
be to identify the strengths and weaknesses of the processes for their improvements.
CMM-based Appraisal for Internal Process Improvement (CBA IPI)
The CBA IPI tool is used in an organization to gain insight into the software development
capability. For this, the strengths and weaknesses of the existing process are
identified in order to prioritize software improvement plans and focus on software
improvements, which are beneficial to the organization. The organization’s software
process capability is assessed by a group of individuals known as the assessment
team, which generates findings and provides ratings according to the CMM
(Capability Maturity Model). These findings are collected from questionnaires,
document reviews and interviews with the managers of the organization. Thus, the
primary goal of CBA IPI is to provide an actual picture of the existing processes in
an organization. To achieve this, the assessment team performs the following
functions.
Provides data as a baseline to the organization in order to check its software
capability
Identifies issues that have an impact on the process improvement
Provides sufficiently complete findings to the organization. These are used
The CBA IPI method is similar to SCAMPI as both are used for process
assessment in an organization. However, differences do exist between the two
approaches. These differences are listed in Table 2.4.
Table 2.4 Differences between CBA IPI and SCAMPI
Issue           CBA IPI                                  SCAMPI
Model based     Capability Maturity Model (CMM)          Capability Maturity Model Integration (CMMI)
Licensing       No                                       Yes
Authorization   Through training in assessor program     Through training in appraisal program
Cost            Less external cost due to internal       Costly due to model scope, appraisal
                resource usage                           complexity, and training
Performance     Less rigorous                            More rigorous
Training        Authorized lead assessors                Licensed and with authorized lead appraisers
Note: The CMMI appraisal method provides a consistent rating for organizations to convert their
appraisal ratings into a maturity level.
Bootstrap
Bootstrap is an improvement on SEI approaches for process assessment and
improvement and covers the requirements laid down by ISO 9000. This approach evaluates and improves the quality of software development and management
process of an organization. It defines a framework for assessing and promoting
process improvement. The basic objectives of this approach are listed below.
To support evaluation of the process capability
To identify the strengths and weaknesses of the processes in the organization
being assessed
To support the accomplishment of goals in an organization by planning
improved actions
To increase the effectiveness of the processes while implementing standard
requirements in the organization.
The main feature of the bootstrap approach is the assessment process, which
leads to an improvement in the software development processes. During the
assessment, the organizational processes are evaluated to define each process.
Note that after the assessment is done, data is collected in a central database. In
addition, it provides two questionnaires (forms containing a set of questions, which
are distributed to people to gain statistical information): the first to gather data
about the organization that develops the software and the second to gather data
about the projects.
2.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The aim of process assessment is to identify the areas for improvement and
suggest a plan for making that improvement.
2. The main feature of the bootstrap approach is the assessment process‚
which leads to an improvement in the software development processes.
3. SPICE stands for Software Process Improvement and Capability
Determination.
2.5 SUMMARY
2.6 KEY WORDS
UNIT 3 PROCESS MODEL
3.0 INTRODUCTION
A process model can be defined as a strategy (also known as software engineering
paradigm), comprising process, methods, and tools layers as well as the general
phases for developing the software. It provides a basis for controlling various
activities required to develop and maintain the software. In addition, it helps the
software development team in facilitating and understanding the activities involved
in the project.
A process model for software engineering depends on the nature and
application of the software project. Thus, it is essential to define process models
for each software project. IEEE defines a process model as ‘a framework containing
the processes, activities, and tasks involved in the development, operation, and
maintenance of a software product, spanning the life of the system from the definition
of its requirements to the termination of its use.' A process model reflects the goals
of software development such as developing a high quality product and meeting
the schedule on time. In addition, it provides a flexible framework for enhancing
the processes. Other advantages of the software process model are listed below.
Enables effective communication: It enhances understanding and
provides a specific basis for process execution.
Facilitates process reuse: Process development is a time consuming
and expensive activity. Thus, the software development team utilizes the
existing processes for different projects.
Effective: Since process models can be used again and again, reusable
processes provide an effective means for implementing processes for
software development.
Facilitates process management: Process models provide a
framework for defining process status criteria and measures for software
development. Thus, effective management is essential to provide a clear
description of the plans for the software project.
3.1 OBJECTIVES
As stated earlier, the waterfall model comprises several phases, which are listed below; a small sketch of the sequential flow follows the list.
System/information engineering modeling: This phase establishes the
requirements for all parts of the system. Software being a part of the larger
system, a subset of these requirements is allocated to it. This system view is
necessary when software interacts with other parts of the system including
hardware, databases, and people. System engineering includes collecting
requirements at the system level while information engineering includes
collecting requirements at a level where all decisions regarding business
strategies are taken.
Requirements analysis: This phase focuses on the requirements of the
software to be developed. It determines the processes that are to be
incorporated during the development of the software. To specify the
requirements, users’ specifications should be clearly understood and their
requirements be analyzed. This phase involves interaction between the users
and the software engineers and produces a document known as Software
Requirements Specification (SRS).
Design: This phase determines the detailed process of developing the
software after the requirements have been analyzed. It utilizes software
requirements defined by the user and translates them into software
representation. In this phase, the emphasis is on finding solutions to the
problems defined in the requirements analysis phase. The software engineer
is mainly concerned with the data structure, algorithmic detail and interface
representations.
Coding: This phase emphasizes translation of design into a programming
language using the coding style and guidelines. The programs created should
be easy to read and understand. All the programs written are documented
according to the specification.
Testing: This phase ensures that the software is developed as per the user's
requirements. Testing is done to check that the software is running efficiently
and with minimum errors. It focuses on the internal logic and external functions
of the software and ensures that all the statements have been exercised
(tested). Note that testing is a multistage activity, which emphasizes
verification and validation of the software.
Implementation and maintenance: This phase delivers fully functioning
operational software to the user. Once the software is accepted and deployed
at the user’s end, various changes occur due to changes in the external
environment (these include upgrading a new operating system or addition
of a new peripheral device). The changes also occur due to changing
requirements of the user and changes occurring in the field of technology.
This phase focuses on modifying software, correcting errors, and improving
the performance of the software.
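Because each waterfall phase starts only after the previous one finishes and consumes only its predecessor's output, the model can be pictured as a chain of function calls. The phase names follow the list above; the string artifacts in this Python sketch are invented for illustration.

```python
# Waterfall as a strictly linear pipeline: each phase consumes only the
# output of the phase before it, and no phase is revisited. Illustrative only.

def system_engineering(need):    return f"system-level requirements for '{need}'"
def requirements_analysis(req):  return f"SRS derived from [{req}]"
def design(srs):                 return f"design translating [{srs}]"
def coding(design_doc):          return f"documented code implementing [{design_doc}]"
def testing(code):               return f"verified and validated [{code}]"
def maintenance(tested):         return f"deployed and maintained [{tested}]"

phases = [system_engineering, requirements_analysis, design,
          coding, testing, maintenance]

artifact = "library management system"   # invented example
for phase in phases:
    artifact = phase(artifact)
print(artifact)
```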
Various advantages and disadvantages associated with the waterfall model
are listed in Table 3.2.
Table 3.2 Advantages and Disadvantages of Waterfall Model
Advantages
• Relatively simple to understand.
• Each phase of development proceeds sequentially.
• Allows managerial control where a schedule with deadlines is set for each stage of development.
• Helps in controlling schedules, budgets, and documentation.

Disadvantages
• Requirements need to be specified before the development proceeds.
• Changes of requirements in later phases of the waterfall model cannot be accommodated. This implies that once the software enters the testing phase, it becomes difficult to incorporate changes at such a late stage.
• There is no user involvement, and no working version of the software is available while the software is being developed.
• Does not involve risk management.
• Assumes that requirements are stable and are frozen across the project span.
Evolutionary models are iterative in nature. These models help software engineers to develop increasingly advanced versions of the software: initially, a rapid version of the product is developed, and the product is then refined into more accurate versions with the help of reviewers, who review the product after each release and suggest improvements. The two main evolutionary models are:
1. Incremental model
2. Spiral model
3.3.1 Incremental Process Model
The incremental model (also known as iterative enhancement model) comprises
the features of waterfall model in an iterative manner. The waterfall model performs
each phase for developing complete software whereas the incremental model has
phases similar to the linear sequential model and has an iterative nature of
prototyping. As shown in Figure 3.2, during the implementation phase, the project
is divided into small subsets known as increments that are implemented individually.
This model comprises several phases where each phase produces an increment.
These increments are identified in the beginning of the development process and
the entire process from requirements gathering to delivery of the product is carried
out for each increment.
The basic idea of this model is to start the process with requirements and
iteratively enhance the requirements until the final software is implemented. In
addition, as in prototyping, the increment provides feedback from the user specifying
the requirements of the software. This approach is useful as it simplifies the software
development process as implementation of smaller increments is easier than
implementing the entire system.
As shown in Figure 3.2, each stage of incremental model adds some
functionality to the product and passes it on to the next stage. The first increment
is generally known as a core product and is used by the user for a detailed
evaluation. This process results in creation of a plan for the next increment. This
plan determines the modifications (features or functions) of the product in order to
accomplish user requirements. The iteration process, which includes the delivery
of the increments to the user, continues until the software is completely developed.
The increments result in implementations, which are assessed in order to measure
the progress of the product.
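The delivery pattern described above, a core product first and then successive increments, each planned using the user's evaluation of the previous release, can be sketched as a simple loop. The increment names and the feedback step in this Python sketch are invented assumptions.

```python
# Illustrative incremental delivery loop; increment names are invented.

planned_increments = ["core product", "search and reports", "administration"]
delivered = []

for increment in planned_increments:
    # the full cycle, requirements gathering to delivery, runs per increment
    delivered.append(increment)
    print("release:", " + ".join(delivered))
    feedback = f"user evaluation of '{increment}'"
    # in practice, this feedback would revise the plan for the next increment
```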
Various advantages and disadvantages associated with the incremental model
are listed in Table 3.3.
Table 3.3 Advantages and Disadvantages of Incremental Model
Advantages
• Avoids the problems resulting from a risk-driven approach in the software.
• Understanding increases through successive refinements.
• Performs cost-benefit analysis before enhancing the software with capabilities.
• Incrementally grows into an effective solution after every iteration.
• Does not involve a high complexity rate.
• Early feedback is generated because implementation occurs rapidly for a small subset of the software.

Disadvantages
• Requires planning at the management and technical level.
• Becomes invalid when there is a time constraint on the project schedule or when the users cannot accept the phased deliverables.
Table 3.4 Advantages and Disadvantages of Prototyping Model
Advantages
• Provides a working model to the user early in the process, enabling early assessment and increasing the user's confidence.
• The developer gains experience and insight by developing a prototype, thereby resulting in better implementation of requirements.
• The prototyping model serves to clarify requirements which are not clear, hence reducing ambiguity and improving communication between the developers and users.
• There is a great involvement of users in software development. Hence, the requirements of the users are met to the greatest extent.
• Helps in reducing risks associated with the software.

Disadvantages
• If the user is not satisfied by the developed prototype, then a new prototype is developed. This process goes on until a perfect prototype is developed. Thus, this model is time consuming and expensive.
• The developer may lose focus of the real purpose of the prototype and hence compromise the quality of the software. For example, developers may use inefficient algorithms or inappropriate programming languages while developing the prototype.
• Prototyping can lead to false expectations. For example, a situation may be created where the user believes that the development of the system is finished when it is not.
• The primary goal of prototyping is speedy development; thus, the system design can suffer as it is developed in series without considering integration of all other components.
The spiral model is also similar to the prototyping model as one of the key
features of prototyping is to develop a prototype until the user requirements are
accomplished. The second step of the spiral model functions similarly. The prototype
is developed to clearly understand and achieve the user requirements. If the user
is not satisfied with the prototype, a new prototype known as operational
prototype is developed.
Various advantages and disadvantages associated with the spiral model are
listed in Table 3.5.
Table 3.5 Advantages and Disadvantages of Spiral Model
3.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
3.6 SUMMARY
In the waterfall model (also known as the classical life cycle model), the development of software proceeds linearly and sequentially from requirement analysis to design, coding, testing, integration, implementation, and maintenance.
The waterfall model performs each phase for developing complete software
whereas the incremental model has phases similar to the linear sequential
model and has an iterative nature of prototyping.
Evolutionary models are iterative in nature.
The incremental model (also known as iterative enhancement model)
comprises the features of waterfall model in an iterative manner. The basic
idea of this model is to start the process with requirements and iteratively
enhance the requirements until the final software is implemented. In addition,
as in prototyping, the increment provides feedback from the user specifying
the requirements of the software. This approach is useful as it simplifies the
software development process as implementation of smaller increments is
easier than implementing the entire system.
IEEE defines the spiral model as ‘a model of the software development
process in which the constituent activities, typically requirements analysis,
preliminary and detailed design, coding, integration, and testing, are
performed iteratively until the software is complete.’
One of the key features of the spiral model is that each cycle is completed
3.9 FURTHER READINGS
BLOCK - II
REQUIREMENT ENGINEERING
UNIT 4 DESIGN AND CONSTRUCTION
Structure
4.0 Introduction
4.1 Objectives
4.2 Requirements Engineering Task
4.2.1 Requirements Engineering Process
4.3 Requirements Validation
4.4 Answers to Check Your Progress Questions
4.5 Summary
4.6 Key Words
4.7 Self Assessment Questions and Exercises
4.8 Further Readings
4.9 Learning Outcomes
4.0 INTRODUCTION
In the software development process, requirement phase is the first software
engineering activity. This phase is a user-dominated phase and translates the ideas
or views into a requirements document. Note that defining and documenting the
user requirements in a concise and unambiguous manner is the first major step to
achieve a high-quality product.
The requirement phase encompasses a set of tasks, which help to specify
the impact of the software on the organization, customers’ needs, and how users
will interact with the developed software. The requirements are the basis of the
system design. If requirements are not correct, the end product will also contain errors. Note that the requirements activity, like all other software engineering activities, should be adapted to the needs of the process, the project, the product and the people involved in the activity. Also, the requirements should be specified at different levels of detail. This is because requirements are meant for people such as users, business managers, system engineers, and so on. For example, business managers are interested in knowing which features can be implemented within the allocated budget whereas end-users are interested in knowing how easy it is to use the features of the software.
4.1 OBJECTIVES
Functional Requirements
IEEE defines functional requirements as ‘a function that a system or component
must be able to perform.’ These requirements describe the interaction of software
with its environment and specify the inputs, outputs, external interfaces, and the
functions that should be included in the software. Also, the services provided by
functional requirements specify the procedure by which the software should react
to particular inputs or behave in particular situations.
To understand functional requirements properly, let us consider the following
example of an online banking system.
The user of the bank should be able to search the desired services from the
available ones.
There should be appropriate documents for users to read. This implies that
when a user wants to open an account in the bank, the forms must be
available so that the user can open an account.
After registration, the user should be provided with a unique
acknowledgement number so that he can later be given an account number.
The above mentioned functional requirements describe the specific services
provided by the online banking system. These requirements indicate user
requirements and specify that functional requirements may be described at different
levels of detail in an online banking system. With the help of these functional
requirements, users can easily view, search and download registration forms and
other information about the bank. On the other hand, if requirements are not stated
properly, they are misinterpreted by software engineers and user requirements are
not met.
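Functional requirements such as the three banking requirements above are commonly recorded in a structured form so that later checks (for completeness and consistency, discussed next) have something concrete to examine. The field names and requirement IDs in this sketch are invented for illustration.

```python
# Minimal, invented record structure for the banking requirements above.
from dataclasses import dataclass

@dataclass
class FunctionalRequirement:
    req_id: str
    description: str
    source: str = "end-user"

requirements = [
    FunctionalRequirement("FR-1", "User can search the desired services "
                                  "from the available ones."),
    FunctionalRequirement("FR-2", "Account-opening forms are available for "
                                  "the user to read and fill in."),
    FunctionalRequirement("FR-3", "After registration, the user receives a "
                                  "unique acknowledgement number."),
]

# a trivial completeness-style check: every requirement has a description
assert all(r.description for r in requirements)
for r in requirements:
    print(r.req_id, "-", r.description)
```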
The functional requirements should be complete and consistent.
Completeness implies that all the user requirements are defined. Consistency implies
that all requirements are specified clearly without any contradictory definition.
Generally, it is observed that completeness and consistency cannot be achieved in
large software or in a complex system due to the problems that arise while defining
the functional requirements of these systems. The different needs of stakeholders
also prevent the achievement of completeness and consistency. Due to these
reasons, requirements may not be obvious when they are first specified and may
further lead to inconsistencies in the requirements specification.
Non-functional Requirements
The non-functional requirements (also known as quality requirements) are related
to system attributes such as reliability and response time. Non-functional
requirements arise due to user requirements, budget constraints, organizational
policies, and so on. These requirements are not related directly to any particular
function provided by the system.
Non-functional requirements should be accomplished in software to make
it perform efficiently. For example, if an aeroplane is unable to fulfill reliability
requirements, it is not approved for safe operation. Similarly, if a real time control
system is ineffective in accomplishing non-functional requirements, the control
functions cannot operate correctly. Different types of non-functional requirements
are shown in Figure 4.2.
The description of different types of non-functional requirements is listed
below.
Product requirements: These requirements specify how the software product performs. Product requirements comprise the following.
o Efficiency requirements: Describe the extent to which the software
makes optimal use of resources, the speed with which the system
executes, and the memory it consumes for its operation. For example,
the system should be able to operate at least three times faster than the
existing system.
o Reliability requirements: Describe the acceptable failure rate of the
software. For example, the software should be able to operate even if a
hazard occurs.
Domain Requirements
Requirements which are derived from the application domain of the system, instead of from the needs of the users, are known as domain requirements. These
requirements may be new functional requirements or specify a method to perform
some particular computations. In addition, these requirements include any constraint
that may be present in the existing functional requirements.As domain requirements
reflect the fundamentals of the application domain, it is important to understand
these requirements. Also, if these requirements are not fulfilled, it may be difficult
to make the system work as desired.
A system can include a number of domain requirements. For example, it
may comprise a design constraint that describes the user interface, which is capable
of accessing all the databases used in a system. It is important for a development
Technical Feasibility
Technical feasibility assesses the current resources (such as hardware and software) and technology, which are required to accomplish user requirements in the software within the allocated time and budget. For this, the software development team
ascertains whether the current resources and technology can be upgraded or added
in the software to accomplish specified user requirements. Technical feasibility
also performs the following tasks.
Analyzes the technical skills and capabilities of the software development
team members
Determines whether the relevant technology is stable and established
Ascertains that the technology chosen for software development has a large
number of users so that they can be consulted when problems arise or
improvements are required.
Operational Feasibility
Operational feasibility assesses the extent to which the required software performs
a series of steps to solve business problems and user requirements. This feasibility
is dependent on human resources (software development team) and involves
visualizing whether the software will operate after it is developed and be operative
once it is installed. Operational feasibility also performs the following tasks.
Determines whether the problems anticipated in user requirements are of
high priority
Determines whether the solution suggested by the software development
team is acceptable
Analyzes whether users will adapt to a new software
Determines whether the organization is satisfied by the alternative solutions
proposed by the software development team.
Economic Feasibility
Economic feasibility determines whether the required software is capable of
generating financial gains for an organization. It involves the cost incurred on the
software development team, estimated cost of hardware and software, cost of
performing feasibility study, and so on. For this, it is essential to consider expenses
made on purchases (such as hardware purchase) and activities required to carry
out software development. In addition, it is necessary to consider the benefits that
can be achieved by developing the software. Software is said to be economically
feasible if it focuses on the issues listed below.
Cost incurred on software development to produce long-term gains for an
organization
Cost required to conduct full software investigation (such as requirements
elicitation and requirements analysis)
Cost of hardware, software, development team, and training.
Feasibility Study Process
Feasibility study comprises the following steps.
Information assessment: Identifies information about whether the system
helps in achieving the objectives of the organization. It also verifies that the
system can be implemented using new technology and within the budget
and whether the system can be integrated with the existing system.
Information collection: Specifies the sources from where information about
software can be obtained. Generally, these sources include users (who will
operate the software), organization (where the software will be used), and
the software development team (which understands user requirements and
knows how to fulfill them in software).
Report writing: Uses a feasibility report, which is the conclusion of the
feasibility study by the software development team. It includes the
recommendations whether the software development should continue. This
report may also include information about changes in the software scope,
budget, and schedule and suggestions of any requirements in the system.
Figure 4.5 shows the feasibility study plan, which comprises the following
sections.
General information: Describes the purpose and scope of feasibility study.
It also describes system overview, project references, acronyms and
abbreviations, and points of contact to be used. System overview
provides description about the name of the organization responsible for the
software development, system name or title, system category, operational
status, and so on. Project references provide a list of the references used
to prepare this document such as documents relating to the project or
previously developed documents that are related to the project. Acronyms
and abbreviations provide a list of the terms that are used in this document
along with their meanings. Points of contact provide a list of points of
organizational contact with users for information and coordination. For
example, users require assistance to solve problems (such as troubleshooting)
and collect information such as contact number, e-mail address, and so on.
Management summary: Provides the following information.
o Environment: Identifies the individuals responsible for software
development. It provides information about input and output
requirements, processing requirements of the software and the interaction
of the software with other software. It also identifies system security
requirements and the system's processing requirements.
o Current functional procedures: Describes the current functional
procedures of the existing system, whether automated or manual. It also
includes the data-flow of the current system and the number of team
members required to operate and maintain the software.
o Functional objective: Provides information about functions of the
system such as new services, increased capacity, and so on.
o Performance objective: Provides information about performance
objectives such as reduced staff and equipment costs, increased
processing speeds of software, and improved controls.
o Assumptions and constraints: Provides information about assumptions
and constraints such as operational life of the proposed software, financial
constraints, changing hardware, software and operating environment,
and availability of information and sources.
o Methodology: Describes the methods that are applied to evaluate the
proposed software in order to reach a feasible alternative. These methods
include survey, modeling, benchmarking, etc.
o Evaluation criteria: Identifies criteria such as cost, priority, development
time, and ease of system use, which are applicable for the development
process to determine the most suitable system option.
o Recommendation: Describes a recommendation for the proposed
system. This includes the delays and acceptable risks.
Proposed software: Describes the overall concept of the system as well
as the procedure to be used to meet user requirements. In addition, it
provides information about improvements, time and resource costs, and
impacts. Improvements are performed to enhance the functionality and
performance of the existing software. Time and resource costs include the
costs associated with software development from its requirements to its
maintenance and staff training. Impacts describe the possibility of future
happenings and include various types of impacts as listed below.
o Equipment impacts: Determine new equipment requirements and
changes to be made in the currently available equipment requirements.
o Software impacts: Specify any additions or modifications required in
the existing software and supporting software to adapt to the proposed
software.
o Organizational impacts: Describe any changes in organization, staff
and skills requirement.
o Operational impacts: Describe effects on operations such as user-
operating procedures, data processing, data entry procedures, and so
on.
o Developmental impacts: Specify developmental impacts such as
resources required to develop databases, resources required to develop
and test the software, and specific activities to be performed by users
during software development.
o Security impacts: Describe security factors that may influence the
development, design, and continued operation of the proposed software.
Alternative systems: Provide description of alternative systems, which
are considered in a feasibility study. This also describes the reasons for
choosing a particular alternative system to develop the proposed software
and the reason for rejecting alternative systems.
Note: Economic feasibility uses several methods to perform cost-benefit analysis such as
payback analysis, return on investment (ROI) and present value analysis.
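As a hedged, worked illustration of the three methods named in the note (all figures, including the cost, the yearly benefits, and the 10 per cent discount rate, are invented):

```python
# Toy cost-benefit arithmetic for the three methods named above.
# Every figure here is an invented assumption.

cost = 100_000                        # initial development cost
benefits = [40_000, 40_000, 40_000]   # net benefit in years 1..3
rate = 0.10                           # assumed discount rate

# Payback analysis: first year in which cumulative benefits recover the cost.
cumulative, payback_year = 0, None
for year, b in enumerate(benefits, start=1):
    cumulative += b
    if payback_year is None and cumulative >= cost:
        payback_year = year

# Return on investment: total net gain relative to the cost.
roi = (sum(benefits) - cost) / cost

# Present value analysis: discount each year's benefit to today's money.
npv = sum(b / (1 + rate) ** y for y, b in enumerate(benefits, start=1)) - cost

print(f"payback in year {payback_year}; ROI = {roi:.0%}; NPV = {npv:,.0f}")
```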
In the validation phase, the work products produced as a consequence of
Requirements Review
Requirements validation determines whether the requirements are substantial to
design the system. The problems encountered during requirements validation are
listed below.
Unclearly stated requirements
Conflicting requirements are not detected during requirements analysis
Errors in the requirements elicitation and analysis
Lack of conformance to quality standards.
To avoid the problems stated above, a requirements review is conducted,
which consists of a review team that performs a systematic analysis of the
requirements. The review team consists of software engineers, users, and other
stakeholders who examine the specification to ensure that the problems associated
with consistency, omissions, and errors are detected and corrected. In addition,
the review team checks whether the work products produced during the
requirements phase conform to the standards specified for the process, project,
and the product.
At the review meeting, each participant goes over the requirements before the meeting starts and marks the items which are dubious or need clarification. Checklists are often used for identifying such items. Checklists ensure that no source of errors, whether major or minor, is overlooked by the reviewers. A 'good' checklist consists of the following (a sketch showing how such a checklist can be recorded follows the list).
Is the initial state of the system defined?
Is there a conflict between one requirement and the other?
Are all requirements specified at the appropriate level of abstraction?
Is the requirement necessary or does it represent an add-on feature that
may not be essentially implemented?
Is the requirement bounded and has a clear defined meaning?
Is each requirement feasible in the technical environment where the product
or system is to be used?
Is testing possible once the requirement is implemented?
Are requirements associated with performance, behavior, and operational
characteristics clearly stated?
Are requirements patterns used to simplify the requirements model?
Are the requirements consistent with the overall objective specified for the
system/product?
Have all hardware resources been defined?
Is the provision for possible future modifications specified?
Are functions included as desired by the user (and stakeholder)?
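A checklist like the one above is straightforward to keep as data, so the review meeting can record verdicts and tally unresolved items. The three sample entries and the verdict values in this sketch are illustrative only.

```python
# Illustrative encoding of a requirements-review checklist: each question
# receives a reviewer verdict, and open items are tallied for follow-up.

checklist = {
    "Is there a conflict between one requirement and the other?": "no issue",
    "Is the requirement bounded and has a clear defined meaning?": "open",
    "Have all hardware resources been defined?": "no issue",
}

open_items = [q for q, verdict in checklist.items() if verdict == "open"]
print(f"{len(open_items)} item(s) need clarification before sign-off:")
for question in open_items:
    print(" -", question)
```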
4.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
4.5 SUMMARY
Requirement is a condition or capability possessed by the software or system
component in order to solve a real world problem.
The non-functional requirements (also known as quality requirements) are
related to system attributes such as reliability and response time.
Requirements which are derived from the application domain of the system
instead of from the needs of the users are known as domain requirements.
Operational feasibility assesses the extent to which the required software
performs a series of steps to solve business problems and user requirements.
The development of software begins once the requirements document is
‘ready’.
The requirements document should be formulated and organized according
to the standards of the organization.
Economic feasibility determines whether the required software is capable
of generating financial gains for an organization. It involves the cost incurred
on the software development team, estimated cost of hardware and software,
cost of performing feasibility study.
4.6 KEY WORDS
Functional Requirements: The requirements which describe the
UNIT 5 BUILDING THE ANALYSIS MODEL
5.0 INTRODUCTION
In this unit, you will learn about requirement elicitation, analysis and data
modeling concepts. Requirement elicitation is a process of collecting information
about software requirements from different individuals. Requirements analysis helps
to understand, interpret, classify, and organize the software requirements in order
to assess the feasibility, completeness, and consistency of the requirements. You
will also learn about the object oriented analysis which is used to describe the
system requirements using prototypes.
5.1 OBJECTIVES
The guidelines followed while creating an analysis model are listed below.
The model should concentrate on requirements in the problem domain that
are to be accomplished. However, it should not describe the procedure to
accomplish requirements in the system.
Every element of the analysis model should help in understanding the software
requirements. This model should also describe the information domain,
function and behavior of the system.
The analysis model should be useful to all stakeholders because every
stakeholder uses this model in his own manner. For example, business
stakeholders use this model to validate requirements whereas software
designers view this model as a basis for design.
The analysis model should be as simple as possible. For this, additional diagrams that depict no new information or contain unnecessary detail should be avoided. Also, abbreviations and acronyms should be used instead of complete notations.
Elements of requirements model
Requirements for a computer-based system can be represented in different ways.
Some people believe that it is better to choose one mode of representation whereas
others believe that requirements model should be described using different modes
as different modes of representation allow considering the requirements from
different perspectives.
Each requirements model consists of some set of elements which are specific
to that model only. However, there exists a set of generic elements, which are
common to most of the analysis models. These elements are as follows:
Scenario-based elements: Scenario based elements form the first part of
the analysis model that is developed. These elements act as input for the
creation of other modeling elements. Scenario-based modeling gives a high
level idea of the system from the user’s point of view. The basic use cases,
use-case diagrams and activity diagrams are examples of scenario-based
elements.
Class-based elements: A class is a collection of things that have similar
attributes and common behavior. It forms the basis for object-oriented
software development. Thus, class-based elements are basically used in
OO software. They give a static view of the system and how the different
parts (or classes) are related. UML class diagrams, analysis packages, CRC
models and collaboration diagrams are examples of class-based elements.
The UML class diagram lists the attributes of a class and the operations that
can be applied to modify these attributes. (A small code sketch of this
notation follows this list.)
Behavioral elements: As the name suggests, the behavioral elements depict
the behavior of the system. They describe how the system responds to
external stimuli. State diagrams and activity diagrams are examples of
behavioral elements. The state transition diagram depicts the various possible
states of a system and the events that cause transition from one state to another.
It also lists the actions to be taken in response to a particular event.
Flow-oriented elements: A computer-based system accepts input in
different forms, transforms them using functions and produces output in
different forms. The flow-oriented elements depict how the information flows
throughout the system. Data-flow diagrams (DFD) and control-flow
diagrams are examples of flow-oriented elements.
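As promised above for the class-based elements, the following Python sketch renders the kind of information a UML class diagram records: a class name, its attributes, and the operations that modify those attributes. This is only an illustrative sketch; the ‘Account’ class, its attribute names and its operations are assumptions made for this example, not taken from any diagram in this unit.

class Account:
    """Corresponds to one box in a UML class diagram."""
    def __init__(self, number, balance=0.0):
        self.number = number      # attribute (middle section of the box)
        self.balance = balance    # attribute (middle section of the box)

    def deposit(self, amount):
        """Operation (bottom section) that modifies the 'balance' attribute."""
        self.balance += amount

    def withdraw(self, amount):
        """Operation (bottom section) that modifies the 'balance' attribute."""
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount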
Requirements Modeling Approaches
Requirements modeling is a technical representation of the system and there exists
a variety of approaches for building requirements models. Two common
approaches include structured analysis and object-oriented modeling. Each of
these describes a different manner to represent the data, functional, and behavioral
information of the system.
Structured analysis
Structured analysis is a top-down approach, which focuses on refining the problem
with the help of functions performed in the problem domain and data produced by
these functions. This approach facilitates the software engineer to determine the
information received during analysis and to organize the information in order to
avoid the complexity of the problem. The purpose of structured analysis is to
provide a graphical representation to develop new software or enhance the existing
software.
Object-oriented modeling will be discussed later in this unit.
5.3 DATA MODELING CONCEPTS
The data model depicts three interrelated aspects of the system: the data objects that
are to be processed, the attributes that characterize the data objects, and the
relationships that link different data objects. The entity relationship (ER) model/diagram
is the most commonly used model for data modeling.
Entity relationship diagram
IEEE defines ER diagram as ‘a diagram that depicts a set of real-world entities
and the logical relationships among them’. This diagram depicts entities, the
relationships between them, and the attributes pictorially in order to provide a
high-level description of conceptual data models. An ER diagram is used in different
phases of software development.
Once an ER diagram is created, the information represented by it is stored
in the database. Note that the information depicted in an ER diagram is independent
of the type of database and can later be used to create database of any kind, such
as relational database, network database, or hierarchical database. An ER diagram
comprises data objects and entities, data attributes, relationships, and
cardinality and modality.
Data objects and entities
A data object is a representation of composite information used by the software.
Composite information refers to different features or attributes of a data object
and this object can be in any of the following forms:
External entity: describes the data that produces or accepts information.
For example, a report.
Occurrence: describes an action of a process. For example, a telephone
call.
Event: describes a happening that occurs at a specific place or time. For
example, an alarm.
Role: describes the actions or activities assigned to an individual or object.
For example, a systems analyst.
Place: describes the location of objects or storage area. For example, a
wardrobe.
Structure: describes the arrangement and composition of objects. For
example, a file.
An entity is the data that stores information about the system in a database.
Examples of an entity include real world objects, transactions, and persons.
Data attributes
Data attributes describe the properties of a data object. Attributes that identify
entities are known as key attributes. On the other hand, attributes that describe
an entity are known as non-key attributes. Generally, a data attribute is used to
perform the following functions:
Naming an instance (occurrence) of data object.
Description of the instance.
Making reference to another instance in another table.
Data attributes help to identify and classify an occurrence of an entity or a
relationship. These attributes represent the information required to develop software
and there can be several attributes for a single entity. For example, attributes of
‘account’ entity are ‘number’, ‘balance’, and so on. Similarly, attributes of ‘user’
entity are ‘name’, ‘address’, and ‘age’. However, it is important to consider the
maximum number of attributes during requirements elicitation because with more
attributes, it is easier for a software development team to develop the software.
In case some of the data attributes are not applicable, they can be discarded at a
later stage.
Relationships
Entities are linked to each other in different ways. This link or connection of data
objects or entities with each other is known as relationship. Note that there
should be at least two entities to establish a relationship between them. Once the
entities are identified, the software development team checks whether a relationship
exists between them or not. Each relationship has a name, modality or optionality
(whether the relationship is optional or mandatory), and degree (number of entities
participating in the relationship). These attributes confirm the validity of a given
relationship.
To understand entities, data attributes, and relationship, let us consider
an example. Suppose in a computerized banking system, one of the processes is
to use saving account, which includes two entities, namely, ‘user’ and ‘account’.
Each ‘user’ has a unique ‘account number’, which makes it easy for the bank to
refer to a particular registered user. On the other hand, ‘account’ entity is used to
deposit cash and cheque and to withdraw cash from the saving account. Depending
upon the type and nature of transactions, an account can be of various types such
as current account, saving account, or over-draft account. The relationship between
the user and the account can be described as ‘user has account in a bank’.
In Figure 5.2, entities are represented by rectangles, attributes are
represented by ellipses, and relationships are represented by diamond symbols. A
key attribute is also depicted by an ellipse but with a line below it. This line below
the text in the ellipse indicates the uniqueness of each entity.
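A minimal code sketch may help relate entities, attributes, and relationships to program structures. Here the ‘user’ and ‘account’ entities of the banking example are approximated with Python dataclasses; the field names and sample values are illustrative assumptions, and a real system would enforce key uniqueness and cardinality in the database rather than in program objects.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Account:
    number: str              # key attribute: uniquely identifies the entity
    balance: float = 0.0     # non-key attribute: describes the entity

@dataclass
class User:
    name: str
    address: str
    age: int
    accounts: List[Account] = field(default_factory=list)  # the 'has' relationship

u = User(name="R. Kumar", address="Trichy", age=30)
u.accounts.append(Account(number="SB-1001"))  # 'user has account in a bank'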
The basic terminology used in object-oriented analysis is summarized below.
Object: An instance of a class used to describe the entity.
Class: A collection of similar objects, which encapsulate data and procedural abstractions in order to describe their states and the operations to be performed by them.
Attribute: A collection of data values that describe the state of a class.
Operation: Also known as methods and services, it provides a means to modify the state of a class.
Superclass: Also known as base class, it is a generalization of a collection of classes related to it.
Subclass: A specialization of a superclass that inherits the attributes and operations from the superclass.
Inheritance: A process in which an object inherits some or all the features of a superclass.
Polymorphism: The ability of objects to be used in more than one form in one or more classes.
Self-Instructional
Material 63
Generally, it is considered that object-oriented systems are easier to develop and
maintain. Also, it is considered that the transition from object-oriented analysis to
object-oriented design can be done easily. This is because object-oriented analysis
is resilient to changes, as objects are more stable than the functions that are used in
structured analysis. Note that object-oriented analysis comprises a number of
steps, which include identifying objects, identifying structures, identifying attributes,
identifying associations and defining services (see Figure 5.4).
Identifying objects
While performing an analysis, an object encapsulates the attributes on which it
provides the services. Note that an object represents entities in a problem domain.
The identification of the objects starts by viewing the problem space and its
description. Then, a summary of the problem space is gathered to consider the
‘nouns’. Nouns indicate the entities used in the problem space, which will further
be modelled as objects. Some examples of nouns that can be modelled as objects
are structures, events, roles and locations.
Identifying structures
Structures depict the hierarchies that exist between the objects. Object modelling
applies the concept of generalization and specialization to define hierarchies and
to represent the relationships between the objects. As mentioned earlier, superclass
is a collection of classes that can further be refined into one or more subclasses.
Note that a subclass can have its own attributes and services apart from the
attributes and services inherited from its superclass. To understand generalization
and specialization, consider an example of class ‘car’. Here, ‘car’ is a superclass,
which has attributes, such as wheels, doors and windows. There may be one or
more subclasses of a superclass. For instance, a superclass ‘car’ has subclasses
‘Mercedes’ and ‘Toyota’, which have the inherited attributes along with their own
attributes, such as comfort, locking system, and so on.
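The generalization–specialization hierarchy described above maps directly onto class inheritance in code. The following Python sketch mirrors the ‘car’ example; the attribute names and values are assumptions made for illustration only.

class Car:                          # superclass (generalization)
    def __init__(self):
        self.wheels = 4             # attributes shared by all cars
        self.doors = 4
        self.windows = 4

class Mercedes(Car):                # subclass (specialization)
    def __init__(self):
        super().__init__()          # inherits wheels, doors, windows
        self.locking_system = "central"   # its own attribute

class Toyota(Car):                  # another subclass
    def __init__(self):
        super().__init__()
        self.comfort = "standard"         # its own attribute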
It is essential to consider the objects that can be identified as generalizations or specializations of other objects while defining the hierarchies.
Identifying attributes
Attributes add details about an object and store the data for the object. For
example, the class ‘book’ has attributes, such as author name, ISBN and publication
house. The data about these attributes is stored in the form of values and is
hidden from outside the objects. However, these attributes are accessed and
manipulated by the service functions used for that object. The attributes to be
considered for an object depend on the problem and the requirement for that
attribute. For example, while modelling the student admission system, attributes
such as age and qualification are required for the object ‘student’. On the other
hand, while modelling a hospital management system, the attribute ‘qualification’
is unnecessary, whereas other attributes of the class ‘student’, such as gender,
height and weight, are required. In short, it can be said that while using an object,
only the attributes that are relevant and required by the problem domain should be
considered.
Identifying associations
Associations describe the relationship between the instances of several classes.
For example, an instance of class ‘university’ is related to an instance of class
‘person’ by ‘educates’ relationship. Note that there is no relationship between the
class ‘university’ and class ‘person’. However, only the instance(s) of class ‘person’
(that is, student) is related to class ‘university’. This is similar to entity–relationship
modelling, where one instance can be related by 1:1, 1:M, and M:M relationships.
An association may have its own attributes, which may or may not be present
in other objects. Depending on the requirement, the attributes of the association
can be ‘forced’ to belong to one or more objects without losing the information.
However, this should not be done unless the attribute itself belongs to that object.
Defining services
As mentioned earlier, an object performs some services. These services are carried
out when the object receives a message for it. Services are a medium to change
the state of an object or carry out a process. These services describe the tasks
and processes provided by a system. It is important to consider the ‘occur’ services
in order to create, destroy and maintain the instances of an object. To identify the
services, the system states are defined and then the external events and the required
responses are described. For this, the services provided by objects should be
considered.
5.6 SUMMARY
An object-oriented approach is used to describe system requirements using prototypes.
Pressman, Roger S. 1997. Software Engineering, a Practitioner’s Approach. New Delhi: Tata McGraw-Hill.
Sommerville, Ian. 2001. Software Engineering. New Delhi: Pearson Education.
Ghezzi, Carlo, Mehdi Jazayeri, and Dino Mandrioli. 1991. Fundamentals of Software Engineering. New Delhi: Prentice-Hall of India.
Jawadekar, Waman S. 2004. Software Engineering: Principles and Practice. New Delhi: Tata McGraw-Hill.
Hughes, Bob and Mike Cotterell. 2017. Software Project Management. McGraw-Hill Education.
UNIT 6 MODELING
Structure
6.0 Introduction
6.1 Objectives
6.2 Scenario-Based Modeling
6.3 Flow Modeling
6.4 Class Modeling
6.5 Behavioral Modeling
6.6 Answers to Check Your Progress Questions
6.7 Summary
6.8 Key Words
6.9 Self Assessment Questions and Exercises
6.10 Further Readings
6.11 Learning Outcomes
6.0 INTRODUCTION
In this unit, you will learn about scenario-based modeling, flow modeling,
class modeling and behavioral modeling. Software models are ways of expressing
a software design. Usually some sort of abstract language or pictures are used to
express the software design.
6.1 OBJECTIVES
Use-case diagrams
Use-cases describe the tasks or series of tasks in which the users will use the
software under a specific set of conditions. Each use-case provides one or more
scenarios in order to understand how a system should interact with another system
to accomplish the required task. Note that use-cases do not provide description
about the implementation of software.
Use-cases are represented with the help of a use-case diagram, which depicts
the relationships among actors and use-cases within a system. A use-case diagram
describes what exists outside the system (actors) and what should be performed
by the system (use-cases). The notations used to represent a use-case diagram
are listed in Table 6.1.
Table 6.1 Use-Case Notations
While creating a DFD, certain guidelines are followed to depict the data-
flow of system requirements effectively. These guidelines help to create DFD in an
understandable manner. The commonly followed guidelines for creating DFD are:
DFD notations should be given meaningful names. For example, verbs should
be used for naming a process whereas nouns should be used for naming
external entity, data store, and data-flow.
Abbreviations should be avoided in DFD notations.
Each process should be numbered uniquely but the numbering should be
consistent.
A DFD should be created in an organized manner so that it is easily
understood.
Unnecessary notations should be avoided in DFD in order to avoid
complexity.
A DFD should be logically consistent. For this, processes that lack either
inputs or outputs, and inputs that are never used to produce an output, should be avoided.
There should be no loops in a DFD.
A DFD should be refined until each process performs a simple function so
that it can be easily represented as a program component.
A DFD should be organized in a series of levels so that each level provides
more detail than the previous level.
The name of a process should be carried to the next level of DFD.
Each DFD should not have more than six processes and related data stores.
The data store should be depicted at the context level where it first describes
an interface between two or more processes. Then, the data store should
be depicted again in the next level of DFD that describes the related
processes.
There are various levels of DFD, which provide details about the input,
processes, and output of a system. Note that the level of detail of the processes
increases as the level increases. However, these levels do not describe the system’s internal
structure or behaviour. These levels are:
Level 0 DFD (also known as context diagram): This shows an overall
view of the system.
Level 1 DFD: This elaborates level 0 DFD and splits the process into a
detailed form.
Level 2 DFD: This elaborates level 1 DFD and displays the process(s) in
a more detailed form.
Level 3 DFD: This elaborates level 2 DFD and displays the process(s) in
a detailed form.
To understand various levels of DFD, let us consider an example of banking
system. In Figure 6.2, level 0 DFD is drawn. This DFD represents how a ‘user’
entity interacts with a ‘banking system’ process and avails its services. The level 0
DFD depicts the entire banking system as a single process. There are various
tasks performed in a bank, such as transaction processing, pass book entry,
registration, demand draft creation, and online help. The data-flow indicates that
these tasks are performed by both the user and the bank. Once the user performs
a transaction, the bank verifies whether the user is registered in the bank or not.
The level 0 DFD is expanded in level 1 DFD (see Figure 6.3). In this DFD,
the ‘user’ entity is related to several processes in the bank, which include ‘register’,
‘user support’, and ‘provide cash’. Transaction can be performed if the user is
already registered in the bank. Once the user is registered, he can perform
transaction by the processes, namely, ‘deposit cheque’, ‘deposit cash’, and
‘withdraw cash’. Note that the line in the process symbol indicates the level of
process and contains a unique identifier in the form of a number. If the user is
performing transaction to deposit cheque, the user needs to provide a cheque to
the bank. The user’s information, such as name, address, and account number is
stored in ‘user_detail’ data store, which is a database. If cash is to be deposited
and withdrawn, then, the information about the deposited cash is stored in
‘cash_detail’ data store. User can get a demand draft created by providing cash
to the bank. It is not necessary for the user to be registered in that bank to have a
demand draft. The details of amount of cash and date are stored in ‘DD_detail’
data store. Once the demand draft is prepared, its receipt is provided to the user.
The ‘user support’ process helps users by providing answers to their queries related
to the services available in the bank.
Level 1 DFD can be further refined into level 2 DFD for any process of
banking system that has detailed tasks to perform. For instance, level 2 DFD can
be prepared to deposit a cheque, deposit cash, withdraw cash, provide user
support, and to create a demand draft. However, it is important to maintain the
continuity of information between the previous levels (level 0 and level 1) and level
2 DFD. As mentioned earlier, the DFD is refined until each process performs a
simple function, which is easy to implement.
Let us consider the ‘withdraw cash’ process (as shown in Figure 6.3) to
illustrate the level 2 DFD. The information collected from level 1 DFD acts as an
input to level 2 DFD. Note that ‘withdraw cash’ process is numbered as ‘3’ in
Figure 6.3 and contains further processes, which are numbered as ‘3.1’, ‘3.2’,
‘3.3’, and ‘3.4’ in Figure 6.4. These numbers represent the sublevels of ‘withdraw
cash’ process. To withdraw cash, the bank checks the status of balance in the
user’s account (as shown by ‘check account status’ process) and then allots a
token (shown as ‘allot token’ process). After the user withdraws cash, the balance
in user’s account is updated in the ‘user_detail’ data store and a statement is
provided to the user.
Data dictionary
Although data-flow diagrams contain meaningful names of notations, they do not
provide complete information about the structure of data-flows. For this, a data
dictionary is used, which is a repository that stores description of data objects to
be used by the software. A data dictionary stores an organized collection of
information about data and their relationships, data-flows, data types, data stores,
processes, and so on. In addition, it helps users to understand the data types and
processes defined along with their uses. It also facilitates the validation of data by
avoiding duplication of entries and provides online access to definitions to the
users.
A data dictionary comprises the source of data, which are data objects and
entities as well as the elements listed here:
Name: Provides information about the primary name of the data store,
external entity, and data-flow.
Alias: Describes different names of data objects and entities used.
Where-used/how-used: Lists all the processes that use data objects and
entities and how they are used in the system. For this, it describes the inputs
to the process, output from the process, and the data store.
Content description: Provides information about the content with the help
of data dictionary notations (such as ‘=’, ‘+’, and ‘* *’).
Supplementary information: Provides information about data types, values
used in variables, and limitations of these values.
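As an illustration, one entry of such a data dictionary could be held in a simple Python dictionary. The entry below is a hypothetical one for the ‘cheque’ data-flow of the banking example discussed earlier; the field values are assumptions, not taken from any actual repository.

# Hypothetical data dictionary entry for the 'cheque' data-flow.
cheque_entry = {
    "name": "cheque",                               # primary name of the data-flow
    "alias": ["bank cheque"],                       # other names in use
    "where_used": ["deposit cheque (input)"],       # processes that use it, and how
    "content": "account_number + amount + date + signature",  # '+' denotes 'and'
    "supplementary": "amount is a positive decimal value",    # types and limits
}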
6.4 CLASS MODELING

Once all the objects in the problem domain are identified, the objects that have the
same characteristics are grouped together to form an object class or simply a
class. A class is a type definition that defines what attributes each object of that
class encapsulates and what services it provides. Note that a class is just a definition;
it does not create any object and cannot hold any value. When objects of a
class are created, memory for them is allocated.
During class modeling, all the classes are identified and represented using
UML class diagrams. The notation for a class is a rectangular box divided into
three sections. The top section contains the class name, middle section contains
the attributes that belong to the class, and the bottom section lists all the services
(operations) provided by the objects of this class. The UML convention is to use
boldface for class name and to capitalize the initial letter of each word in the class
name (see Figure 6.6).
which is not-empty-not-full. Further insertions will keep the queue in this state until
the maximum size is reached. At this point, the state of the queue changes and it
becomes full. An insertion on the full queue results in an error. The delete operation
can be performed on full as well as not-empty-not-full queue. When an element is
deleted from the full queue, the queue becomes not-empty-not-full. Further deletions
keep the queue in this state, however, when the last element is deleted from the
queue, the queue becomes empty.
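The queue behaviour described above can be sketched as a small Python class whose state is derived from its contents. This is an illustrative sketch of the state transition rules only, not a production queue; the class and method names are assumptions.

class BoundedQueue:
    """States: 'empty', 'not-empty-not-full', 'full'."""
    def __init__(self, max_size):
        self.items = []
        self.max_size = max_size

    @property
    def state(self):
        if not self.items:
            return "empty"
        if len(self.items) == self.max_size:
            return "full"
        return "not-empty-not-full"

    def insert(self, x):
        if self.state == "full":
            raise OverflowError("insertion on a full queue is an error")
        self.items.append(x)       # may move empty -> not-empty-not-full -> full

    def delete(self):
        if self.state == "empty":
            raise IndexError("deletion from an empty queue is an error")
        return self.items.pop(0)   # may move full -> not-empty-not-full -> empty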
6.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. DFD is a diagram that depicts data sources, data sinks, data storage, and
processes performed on data as nodes, and logical flow of data as links
between the nodes.
2. A data dictionary stores an organized collection of information about data
and their relationships, data-flows, data types, data stores, processes.
3. A class is a type definition that defines what attributes each object of that
class encapsulate and what services it provides.
6.7 SUMMARY
Use-cases describe the tasks or series of tasks in which the users will use
the software under a specific set of conditions.
Use-cases are represented with the help of a use-case diagram, which depicts
the relationships among actors and use-cases within a system. A use-case
diagram describes what exists outside the system (actors) and what should
be performed by the system (use-cases).
A DFD depicts the flow of data within a system and considers a system that
transforms inputs into the required outputs. When there is complexity in a
system, data needs to be transformed using various steps to produce an
output.
DFD is a diagram that depicts data sources, data sinks, data storage, and
processes performed on data as nodes, and logical flow of data as links
between the nodes.
A data dictionary stores an organized collection of information about data
and their relationships, data-flows, data types, data stores, processes.
A class is a type definition that defines what attributes each object of that
class encapsulates and what services it provides. Note that a class is just a
definition; it does not create any object and cannot hold any value.
Behavioral models depict the overall behavior of the system. A system always
exists in some state (an observable mode of behavior) and it may change
its state in response to some event occurring in the external environment.
The state transition diagram shows the system behavior by depicting its
states and the events that cause the change in system state. To better
understand the design of state transition diagram, consider an object that
represents a queue.
Long Answer Questions
BLOCK - III
SYSTEM DESIGN
UNIT 7 DESIGN ENGINEERING
Structure
7.0 Introduction
7.1 Objectives
7.2 Basics of Software Design
7.2.1 Software Design Concepts
7.2.2 Types of Design Patterns
7.2.3 Developing a Design Model
7.3 Answers to Check Your Progress Questions
7.4 Summary
7.5 Key Words
7.6 Self Assessment Questions and Exercises
7.7 Further Readings
7.8 Learning Outcomes
7.0 INTRODUCTION
Once the requirements document for the software to be developed is available,
the software design phase begins. While the requirement specification activity deals
entirely with the problem domain, design is the first phase of transforming the
problem into a solution. In the design phase, the customer and business requirements
and technical considerations all come together to formulate a product or a system.
The design process comprises a set of principles, concepts and practices,
which allow a software engineer to model the system or product that is to be built.
This model, known as design model, is assessed for quality and reviewed before
a code is generated and tests are conducted. The design model provides details
about software data structures, architecture, interfaces and components which
are required to implement the system.
7.1 OBJECTIVES
7.2 BASICS OF SOFTWARE DESIGN
Note that design principles are often constrained by the existing hardware
configuration, the implementation language, the existing file and data structures,
and the existing organizational practices. Also, the evolution of each software design
should be meticulously documented for future evaluations, references and maintenance.
7.2.1 Software Design Concepts
Every software process is characterized by basic concepts along with certain
practices or methods. Methods represent the manner through which the concepts
are applied. As new technology replaces older technology, many changes occur in
the methods that are used to apply the concepts for the development of software.
However, the fundamental concepts underlying the software design process remain
the same, some of which are described here.
Abstraction
Abstraction refers to a powerful design tool, which allows software designers to
consider components at an abstract level, while neglecting the implementation
details of the components. IEEE defines abstraction as ‘a view of a problem
that extracts the essential information relevant to a particular purpose and
ignores the remainder of the information.’ The concept of abstraction can be
used in two ways: as a process and as an entity. As a process, it refers to a
mechanism of hiding irrelevant details and representing only the essential features
of an item so that one can focus on important things at a time. As an entity, it
refers to a model or view of an item.
Each step in the software process is accomplished through various levels of
abstraction. At the highest level, an outline of the solution to the problem is presented
whereas at the lower levels, the solution to the problem is presented in detail. For
example, in the requirements analysis phase, a solution to the problem is presented
using the language of problem environment and as we proceed through the software
process, the abstraction level reduces and at the lowest level, source code of the
software is produced.
There are three commonly used abstraction mechanisms in software design,
namely, functional abstraction, data abstraction and control abstraction. All
these mechanisms allow us to control the complexity of the design process by
proceeding from the abstract design model to concrete design model in a systematic
manner.
Functional abstraction: This involves the use of parameterized
subprograms. Functional abstraction can be generalized as collections of
subprograms referred to as ‘groups’. Within these groups there exist routines
which may be visible or hidden. Visible routines can be used within the
containing groups as well as within other groups, whereas hidden routines
are hidden from other groups and can be used within the containing group
only.
Data abstraction: This involves specifying data that describes a data object.
For example, the data object window encompasses a set of attributes
(window type, window dimension) that describe the window object clearly.
In this abstraction mechanism, representation and manipulation details are
ignored.
Control abstraction: This states the desired effect, without stating the exact
mechanism of control. For example, if and while statements in programming
languages (like C and C++) are abstractions of machine code
implementations, which involve conditional instructions. In the architectural
design level, this abstraction mechanism permits specifications of sequential
subprogram and exception handlers without the concern for exact details of
implementation.
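The following Python sketch illustrates two of these mechanisms under stated assumptions: the ‘Window’ class (data abstraction) hides how the window is represented, and the while statement (control abstraction) states the desired repetition without exposing the conditional jumps that implement it. All names here are hypothetical.

# Data abstraction: callers see what a window is, not how it is stored.
class Window:
    def __init__(self, window_type, width, height):
        self._type = window_type            # representation detail, kept internal
        self._dimension = (width, height)   # representation detail, kept internal

    def resize(self, width, height):
        self._dimension = (width, height)   # manipulation detail hidden from callers

# Control abstraction: 'while' states the desired effect (repeat until valid)
# without stating the machine-level conditional instructions behind it.
def read_grade():
    grade = -1
    while grade < 0 or grade > 100:
        grade = int(input("Enter a grade (0-100): "))
    return grade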
Architecture
Software architecture refers to the structure of the system, which is composed of
various components of a program/system, the attributes (properties) of those
components and the relationship amongst them. The software architecture enables
the software engineers to analyze the software design efficiently. In addition, it also
helps them in decision-making and handling risks. The software architecture does
the following.
Provides an insight to all the interested stakeholders that enables them to
communicate with each other
Highlights early design decisions, which have great impact on the software
engineering activities (like coding and testing) that follow the design phase
Creates intellectual models of how the system is organized into components
and how these components interact with each other.
Currently, software architecture is represented in an informal and unplanned
manner. Though the architectural concepts are often represented in the infrastructure
(for supporting particular architectural styles) and the initial stages of a system
configuration, the lack of an explicit independent characterization of architecture
restricts the advantages of this design concept in the present scenario. Note that
software architecture comprises two elements of design model, namely, data design
and architectural design. Both these elements have been discussed later in this
unit.
Patterns
A pattern provides a description of the solution to a recurring design problem of
some specific domain in such a way that the solution can be used again and again.
The objective of each pattern is to provide an insight to a designer who can determine
the following.
Whether the pattern can be reused
Whether the pattern is applicable to the current project
Information hiding is of immense use when modifications are required during
the testing and maintenance phase. Some of the advantages associated with
information hiding are listed below.
Leads to low coupling
Emphasizes communication through controlled interfaces
Decreases the probability of adverse effects
Restricts the effects of changes in one component on others
Results in higher quality software.
Stepwise Refinement
Stepwise refinement is a top-down design strategy used for decomposing a system
from a high level of abstraction into a more detailed level (lower level) of abstraction.
At the highest level of abstraction, function or information is defined conceptually
without providing any information about the internal workings of the function or
internal structure of the data. As we proceed towards the lower levels of abstraction,
more and more details are available.
Software designers start the stepwise refinement process by creating a
sequence of compositions for the system being designed. Each composition is
more detailed than the previous one and contains more components and interactions.
The earlier compositions represent the significant interactions within the system,
while the later compositions show in detail how these interactions are achieved.
To have a clear understanding of the concept, let us consider an example of
stepwise refinement. Every computer program comprises input, process, and
output.
1. INPUT
Get user’s name (string) through a prompt.
Get user’s grade (integer from 0 to 100) through a prompt and validate.
2. PROCESS
3. OUTPUT
This is the first step in refinement. The input phase can be refined further as
given here.
1. INPUT
Get user’s name through a prompt.
Get user’s grade through a prompt.
While (invalid grade)
Ask again:
2. PROCESS
3. OUTPUT
Note: Stepwise refinement can also be performed for the PROCESS and OUTPUT phases.
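For illustration, the refined INPUT phase can be carried one step further, to executable code. The following Python sketch is one possible realization of the refinement above; the prompt texts are assumptions, while the 0 to 100 validation rule comes from the example itself.

def input_phase():
    # Get user's name (string) through a prompt.
    name = input("Enter your name: ")
    # Get user's grade through a prompt; while (invalid grade) ask again.
    grade = int(input("Enter your grade (0-100): "))
    while grade < 0 or grade > 100:       # validate
        grade = int(input("Invalid grade, enter again (0-100): "))
    return name, grade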
Refactoring
Refactoring is an important design activity that reduces the complexity of module
design keeping its behaviour or function unchanged. Refactoring can be defined as
a process of modifying a software system to improve the internal structure of the
design without changing its external behavior. During the refactoring process, the
existing design is checked for any type of flaws like redundancy, poorly constructed
algorithms and data structures, etc., in order to improve the design. For example,
a design model might yield a component which exhibits low cohesion (like a
component performs four functions that have a limited relationship with one another).
Software designers may decide to refactor the component into four different
components, each exhibiting high cohesion. This leads to easier integration, testing,
and maintenance of the software components.
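A minimal sketch of such a refactoring in Python is shown below. The component and method names are hypothetical; the point is that the four loosely related functions move into four cohesive components while the external behaviour (produce and deliver a report) stays the same.

# Before: one component performs four loosely related functions (low cohesion).
class ReportModule:
    def fetch_data(self): ...
    def format_report(self, data): ...
    def print_report(self, report): ...
    def email_report(self, report): ...

# After: each concern moves into its own component (high cohesion), which
# makes integration, testing, and maintenance easier.
class DataFetcher:
    def fetch(self): ...

class ReportFormatter:
    def format(self, data): ...

class ReportPrinter:
    def print(self, report): ...

class ReportMailer:
    def email(self, report): ...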
Structural Partitioning
When the architectural style of a design follows a hierarchical nature, the structure
of the program can be partitioned either horizontally or vertically. In horizontal
partitioning, the control modules (as depicted in the shaded boxes in Figure
7.4(a)) are used to communicate between functions and execute the functions.
Structural partitioning provides the following benefits.
The testing and maintenance of software becomes easier.
The negative impacts spread slowly.
The software can be extended easily.
Besides these advantages, horizontal partitioning has some disadvantages as
well. It requires more data to be passed across the module interface, which makes the
control-flow of the problem more complex. This usually happens in cases where
data moves rapidly from one function to another.
Concurrency
A computer has limited resources and they must be utilized as efficiently as
possible. To utilize these resources efficiently, multiple tasks must be executed
concurrently. This requirement makes concurrency one of the major concepts of
software design. Every system must be designed to allow multiple processes to
execute concurrently, whenever possible. For example, if the current process is
waiting for some event to occur, the system must execute some other process in
the mean time.
However, concurrent execution of multiple processes sometimes may result
in undesirable situations such as an inconsistent state, deadlock, etc. For example,
consider two processes A and B and a data item Q1 with the value ‘200’. Further,
suppose A and B are being executed concurrently and firstly A reads the value of
Q1 (which is ‘200’) to add ‘100’ to it. However, before A updates the value of
Q1, B reads the value of Q1 (which is still ‘200’) to add ‘50’ to it. In this situation,
whether A or B first updates the value of Q1, the value of Q1 would definitely be
wrong resulting in an inconsistent state of the system. This is because the actions
of A and B are not synchronized with each other. Thus, the system must control
the concurrent execution and synchronize the actions of concurrent processes.
One way to achieve synchronization is mutual exclusion, which ensures that
two concurrent processes do not interfere with the actions of each other. To ensure
this, mutual exclusion may use locking technique. In this technique, the processes
need to lock the data item to be read or updated. The data item locked by some
process cannot be accessed by other processes until it is unlocked. It implies that
the process, that needs to access the data item locked by some other process, has
to wait.
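The locking technique can be sketched in Python using a mutual-exclusion lock from the standard threading module. The sketch mirrors the A/B example above; the variable and thread names are assumptions made for illustration.

import threading

q1 = 200                       # shared data item
q1_lock = threading.Lock()     # lock guarding q1

def add(amount):
    global q1
    with q1_lock:              # a process must lock q1 before reading/updating it
        value = q1             # read
        q1 = value + amount    # update; no other process can interleave here

a = threading.Thread(target=add, args=(100,))   # process A adds 100
b = threading.Thread(target=add, args=(50,))    # process B adds 50
a.start(); b.start()
a.join(); b.join()
print(q1)   # with the lock, always 350; without it, an update could be lost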
7.2.3 Developing a Design Model
To develop a complete specification of design (design model), four design models
are needed (see Figure 7.5). These models are listed below.
Data design: This specifies the data structures for implementing the software
by converting data objects and their relationships identified during the analysis
phase. Various studies suggest that design engineering should begin with
data design, since this design lays the foundation for all other design models.
Architectural design: This specifies the relationship between the structural
elements of the software, design patterns, architectural styles, and the factors
affecting the ways in which architecture can be implemented.
Component-level design: This provides the detailed description of how
structural elements of software will actually be implemented.
Interface design: This depicts how the software communicates with the
system that interoperates with it and with the end-users.
7.4 SUMMARY
UNIT 8 ARCHITECTURAL DESIGN
8.0 INTRODUCTION
In this unit‚ you will learn about the architectural design of software. Software
architecture refers to the structure of the system, which is composed of various
components of a program/system, the attributes (properties) of those components
and the relationship amongst them. The software architecture enables the software
engineers to analyze the software design efficiently.
8.1 OBJECTIVES
Data design is the first design activity, which results in less complex, modular, and
efficient program structure. The information domain model developed during
analysis phase is transformed into data structures needed for implementing the
software. The data objects, attributes, and relationships depicted in entity
relationship diagrams and the information stored in data dictionary provide a base
for data design activity. During the data design process, data types are specified
along with the integrity rules required for the data. For specifying and designing
efficient data structures, some principles should be followed. These principles are
listed below.
The data structures needed for implementing the software as well as the
operations that can be applied on them should be identified.
A data dictionary should be developed to depict how different data objects
interact with each other and what constraints are to be imposed on the
elements of the data structure.
Stepwise refinement should be used in data design process and detailed
design decisions should be made later in the process.
Only those modules that need to access data stored in a data structure
directly should be aware of the representation of the data structure.
A library containing the set of useful data structures along with the operations
that can be performed on them should be maintained.
Language used for developing the system should support abstract data types.
The structure of data can be viewed at three levels, namely, program
component level, application level, and business level. At the program
component level, the design of data structures and the algorithms required to
manipulate them is necessary, if high-quality software is desired. At the application
level, it is crucial to convert the data model into a database so that the specific
business objectives of a system could be achieved. At the business level, the
collection of information stored in different databases should be reorganized into
data warehouse, which enables data mining that has an influential impact on the
business.
Note: Data design helps to represent the data component in the conventional systems and
class definitions in object-oriented systems.
It develops and documents top-level design for the external and internal
interfaces.
It develops preliminary versions of user documentation.
It defines and documents preliminary test requirements and the schedule for
software integration.
The sources of architectural design are listed below.
Information regarding the application domain for the software to be
developed
Data-flow diagrams developed during requirements analysis
Availability of architectural patterns and architectural styles.
Architectural design is of crucial importance in software engineering during
which the essential requirements like reliability, cost, and performance are dealt
with. This task is cumbersome as the software engineering paradigm is shifting
from monolithic, stand-alone, built-from-scratch systems to componentized,
evolvable, standards-based, and product line-oriented systems. Also, a key
challenge for designers is to know precisely how to proceed from requirements to
architectural design. To avoid these problems, designers adopt strategies such as
reusability, componentization, platform-based, standards-based, and so on.
Though the architectural design is the responsibility of developers, some
other people like user representatives, systems engineers, hardware engineers,
and operations personnel are also involved. All these stakeholders must also be
consulted while reviewing the architectural design in order to minimize the risks
and errors.
Architectural Design Representation
Architectural design can be represented using the following models.
Structural model: Illustrates architecture as an ordered collection of
program components
Dynamic model: Specifies the behavioral aspect of the software architecture
and indicates how the structure or system configuration changes as the
function changes due to change in the external environment
Process model: Focuses on the design of the business or technical process,
which must be implemented in the system
Functional model: Represents the functional hierarchy of a system
Framework model: Attempts to identify repeatable architectural design
patterns encountered in similar types of application. This leads to an increase
in the level of abstraction.
Architectural Design Output
The architectural design process results in an Architectural Design Document
(ADD). This document consists of a number of graphical representations that
comprise software models along with associated descriptive text. The software
models include the static model, interface model, relationship model, and dynamic
process model. They show how the system is organized into processes at run-time.
Architectural design document gives the developers a solution to the problem
stated in the Software Requirements Specification (SRS). Note that it considers
only those requirements in detail that affect the program structure. In addition to
ADD, other outputs of the architectural design are listed below.
Various reports including audit report, progress report, and configuration
status accounts report
Various plans for detailed design phase, which include the following
o Software verification and validation plan
o Software configuration management plan
o Software quality assurance plan
o Software project management plan.
8.2.2 Architectural Styles
Architectural styles define a group of interlinked systems that share structural and
semantic properties. In short, the objective of using architectural styles is to establish
a structure for all the components present in a system. If an existing architecture is
to be re-engineered, then imposition of an architectural style results in fundamental
changes in the structure of the system. This change also includes re-assignment of
the functionality performed by the components.
By applying certain constraints on the design space, we can make different
style-specific analysis from an architectural style. In addition, if conventional
structures are used for an architectural style, the other stakeholders can easily
understand the organization of the system.
A computer-based system (software is part of this system) exhibits one of
the many available architectural styles. Every architectural style describes a system
category that includes the following.
Computational components such as clients, servers, filters, and databases to
execute the desired system function
A set of connectors such as procedure calls, event broadcasts, database
protocols, and pipes to provide communication among the computational
components
Constraints to define integration of components to form a system
A semantic model, which enables the software designer to understand the
overall properties of the system
It is maintainable and modifiable.
It supports concurrent execution.
Some disadvantages associated with the data-flow architecture are listed below.
It often degenerates to a batch sequential system.
It does not provide enough support for applications that require user interaction.
It is difficult to synchronize two different but related streams.
Object-oriented Architecture
In object-oriented architectural style, components of a system encapsulate data
and operations, which are applied to manipulate the data. In this style, components
are represented as objects and they interact with each other through methods
(connectors). This architectural style has two important characteristics, which are
listed below.
Objects maintain the integrity of the system.
An object is not aware of the representation of other objects.
Some of the advantages associated with the object-oriented architecture
are listed below.
It allows designers to decompose a problem into a collection of independent
objects.
The implementation detail of objects is hidden from each other and hence,
they can be changed without affecting other objects.
Layered Architecture
In layered architecture, several layers (components) are defined with each layer
performing a well-defined set of operations. These layers are arranged in a
hierarchical manner, each one built upon the one below it (see Figure 8.2). Each
layer provides a set of services to the layer above it and acts as a client to the layer
below it. The interaction between layers is provided through protocols (connectors)
that define a set of rules to be followed during interaction. One common example
of this architectural style is OSI-ISO (Open Systems Interconnection-International
Organization for Standardization) communication system (see Figure 8.2).
Data-centered Architecture
Data coupling: Two modules are said to be ‘data coupled’ if they use
parameter list to pass data items for communication. In Figure 8.6, Module
1 and Module 3 are data coupled.
Stamp coupling: Two modules are said to be ‘stamp coupled’ if they
communicate by passing a data structure that stores additional information
than what is required to perform their functions. In Figure 8.6, data structure
is passed between Modules 1 and 4. Therefore, Module 1 and Module 4
are stamp coupled.
Control coupling: Two modules are said to be ‘control coupled’ if they
communicate (pass a piece of information intended to control the internal
logic) using at least one ‘control flag’. The control flag is a variable whose
value is used by the dependent modules to make decisions. In Figure 8.7,
when Module 1 passes the control flag to Module 2, Module 1 and Module
2 are said to be control coupled.
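The three kinds of coupling described above can be contrasted in a short Python sketch. All function and field names here are hypothetical; each variant passes information across the interface in the manner its comment describes.

# Data coupling: only the needed data items cross the interface.
def compute_interest(balance, rate):
    return balance * rate

# Stamp coupling: a whole data structure is passed although only one
# field is needed; 'account' carries additional information.
def compute_interest_stamped(account, rate):
    return account["balance"] * rate

# Control coupling: a control flag directs the internal logic of the callee.
def render(report, flag_print):
    if flag_print:                       # the caller controls the callee's logic
        print(report)
    else:
        save_to_file(report)             # hypothetical helper, defined below

def save_to_file(report):
    with open("report.txt", "w") as f:
        f.write(report)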
Communicational cohesion: In this, the elements within the modules
perform different functions, yet each function references the same input or
output information.
Procedural cohesion: In this, the elements within the modules are involved
in different and possibly unrelated activities that are executed in a specific
sequence.
Temporal cohesion: In this, the elements within the modules contain
unrelated activities that can be carried out at the same time.
Logical cohesion: In this, the elements within the modules perform similar
activities, which are executed from outside the module.
Coincidental cohesion: In this, the elements within the modules perform
activities with no meaningful relationship to one another.
After having discussed various types of cohesions, Figure 8.10 illustrates
the procedure which can be used in determining the types of module cohesion for
software design.
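To contrast the strongest and weakest kinds listed above, the following hypothetical Python sketch sets a communicationally cohesive module against a coincidentally cohesive one; the function names and data are illustrative assumptions.

# Communicational cohesion: both activities reference the same input data.
def summarize_transactions(transactions):
    total = sum(t["amount"] for t in transactions)   # activity 1: total amount
    count = len(transactions)                        # activity 2: count entries
    return total, count

# Coincidental cohesion: activities with no meaningful relationship are
# grouped together, a sign that the module should be split.
def misc_utilities(text, numbers):
    print(text.upper())          # string formatting
    return sorted(numbers)       # sorting, unrelated to the printing above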
8.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Data design is the first design activity, which results in a less complex, modular,
and efficient program structure. In software design, coupling refers to the degree
of interdependence among modules.
2. Data-centered architecture is an architectural style in which a central data
store is accessed frequently by the other components, which operate on the
data. Data-centered architecture is also known as database-centric architecture.
3. Coupling measures the degree of interdependence among the modules.
Several factors like interface complexity, type of data that pass across the
interface, type of communication, number of interfaces per module, etc.
influence the strength of coupling between two modules.
4. Cohesion measures the relative functional strength of a module. It represents
the strength of bond between the internal elements of the modules.
8.4 SUMMARY
8.6 SELF ASSESSMENT QUESTIONS AND EXERCISES
UNIT 9 USER INTERFACE DESIGN
9.0 INTRODUCTION
In this unit, you will learn about user interface design. It is the design of user
interfaces for software and machines. The goal of designing is to make the interaction
as simple and efficient as possible, in order to accomplish user goals.
9.1 OBJECTIVES
User interfaces determine the way in which users interact with the software. The
user interface design creates an effective communication medium between a human
and a computing machine. It provides easy and intuitive access to information as
well as efficient interaction and control of software functionality. For this, it is
necessary for the designers to understand what the user requires from the user
interface.
Since the software is developed for the user, the interface through which the
user interacts with the software should also be given prime importance. The
developer should interact with the person (user) for whom the interface is being
designed before designing it. Direct interaction with end-users helps the developers
to improve the user interface design because it helps designers to know the user’s
goals, skills and needs. Figure 9.1 shows an example of a simple user interface
design.
its operating systems so that the users are able to use it in a user-friendly manner.
9.2.3 User Interface Design Process Steps
The user interface design, like all other software design elements, is an iterative
process. Each step in this design occurs a number of times, each time elaborating
and refining information developed in the previous step. Although many different
user interface design models have been proposed, all have the following steps in
common.
Analysis and modelling: Initially, the profile of end-users for whom the
interface is to be designed is analyzed, to determine their skill level and
background. On this basis, users are classified into different categories, and
for each category, the requirements are collected. After identifying the
requirements, a detailed task analysis is made to identify the tasks that are
to be performed by each class of users in order to achieve the system
objectives. Finally, the environment in which the user has to work is analyzed.
Using the information gathered, an analysis model of the interface is designed.
Interface design: Using the analysis model developed in the previous step,
the interface object and the series of actions to accomplish the defined
tasks are specified. In addition, a complete list of events (user actions),
which causes the state of the user interface to change is also defined.
Interface implementation and validation: A prototype of the interface
is developed and is provided to the user to experiment with it. Once the
user evaluates the prototype, it is modified according to their requirements.
This process is repeated iteratively until all the requirements specified by
the user are met.
To sum up, the user interface design activity starts with the identification of
the user, task, and environmental requirements. After this, user states are created
and analyzed to define a set of interface objects and actions. These objects then
form the basis for the creation of screen layout, menus, icons, and much more.
While designing the user interface, the following points must be considered.
Follow the rules stated earlier. If an interface does not follow any of these
rules to a reasonable degree, then it needs to be redesigned.
Determine how interface will be implemented.
Consider the environment (like operating system, development tools, display
properties, and so on).
9.2.4 Evaluating a User Interface Design
Although the interface design process results in a useful interface, a designer
cannot be expected to design an interface of high quality in the first attempt. Each
iteration of the steps involved in the user interface design process leads to the development
of a prototype. The objective of developing a prototype is to capture the ‘essence’
of the user interface. The prototype is evaluated, discrepancies are detected, and
accordingly redesigning takes place. This process carries on until a good interface
evolves.
Evaluating a user interface requires its prototype to include the look and feel
of the actual interface and should offer a range of options. However, it is not
essential for the prototype to support the whole range of software behaviour.
Choosing an appropriate evaluation technique helps in knowing whether a prototype
is able to achieve the desired user interface.
Evaluation Techniques
Evaluation of an interface must be done in such a way that it can provide feedback to
the next iteration. Each iteration of evaluation must specify what is good about the
interface and where there is scope for improvement.
Some well-known techniques for evaluating a user interface are: use it yourself,
colleague evaluation, user testing, and heuristic evaluation. Each technique
has its own advantages and disadvantages, and the emphasis each technique places on
issues like ease of learning and efficiency of use varies. Aesthetic
appeal largely varies from user to user and can be evaluated well by observing
what attracts people.
Use it yourself: This is the first technique of evaluation; in it, the designer
himself uses the interface for a period of time to determine its good and bad
features. It helps the designer to remove the bad features of
the interface. This technique also helps in identifying the missing components
of the interface and its efficiency.
Colleague evaluation: Since the designers are aware of the functionality
of the software, it is possible that they may miss out on issues of ease of
learning and efficiency. Showing the interface to a colleague may help in
solving these issues. Note that if the prototype is not useful in the current
state, colleagues might not spend sufficient time in using it to identify many
efficiency issues.
User testing: This testing is considered to be the most practical approach
of evaluation. In this technique, users test the prototype, and deviations from
the expected behavior are recorded as feedback. The communication among the users
while testing the interface provides the most useful feedback. Before allowing
the users to start testing, the designers choose some tasks expected to be
performed by the users. In addition, they should prepare the necessary
background details in advance. Users should spend sufficient time to
understand these details before performing the test. This testing is considered
the best way to evaluate ease of learning. However, it does not help much in
identifying the inefficiency issues.
Heuristic evaluation: In this technique of evaluation, a checklist of known
usability issues (heuristics) is prepared, and the interface is evaluated against it.

9.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. User interfaces determine the way in which users interact with the software.
The user interface design creates an effective communication medium
between a human and a computing machine.
2. Software designers strive to achieve a good user interface by following
three rules, namely, ease of learning, efficiency of use, and aesthetic
appeal.
3. Some well-known techniques for evaluating user interface are: use it
yourself, colleague evaluation, user testing, and heuristic evaluation.
9.4 SUMMARY
User interfaces determine the way in which users interact with the software.
The user interface design creates an effective communication medium
between a human and a computing machine.
The response time of a system is the time from when the user issues some
request to the time the system responds to that request.
Sometimes during processing, errors occur due to some exceptional
conditions such as out of memory, failure of communication link, etc. The
system represents these errors to the users using error messages.
Designing a good and efficient user interface is a common objective among
software designers.
The user interface design, like all other software design elements, is an
iterative process.
Evaluation of interface must be done in such a way that it can provide
feedback to the next iteration.
Error messages should contain proper suggestions to recover from the errors. The
users may be asked to invoke the online help system to find out more about the
error situation.
9.5 KEY WORDS
User Interface: It determines the way in which users interact with the
software.
Consistency: Designers apply the principle of coherence to keep the
interface consistent internally as well as externally.
BLOCK - IV
SYSTEM TESTING
UNIT 10 TESTING STRATEGIES
Structure
10.0 Introduction
10.1 Objectives
10.2 Software Testing Fundamentals
10.2.1 Test Plan
10.2.2 Software Testing Strategies
10.2.3 Levels of Testing
10.2.4 Unit Testing
10.2.5 Integration Testing
10.2.6 Validation Testing
10.2.7 System Testing
10.3 Testing Conventional Applications
10.3.1 White Box Testing
10.3.2 Black Box Testing
10.4 Debugging
10.4.1 The Debugging Process
10.4.2 Induction Strategy
10.5 Answers to Check Your Progress Questions
10.6 Summary
10.7 Key Words
10.8 Self Assessment Questions and Exercises
10.9 Further Readings
10.10 Learning Outcomes
10.0 INTRODUCTION
Testing of software is critical, since testing determines the correctness, completeness
and quality of the software being developed. Its main objective is to detect errors
in the software. Errors prevent software from producing outputs according to user
requirements. They occur if some part of the developed system is found to be
incorrect, incomplete, or inconsistent. Errors can broadly be classified into three
types, namely, requirements errors, design errors, and programming errors. To
avoid these errors, it is necessary that: requirements are examined for conformance
to user needs, software design is consistent with the requirements and notational
convention, and the source code is examined for conformance to the requirements
specification, design documentation and user expectations. All this can be
accomplished through efficacious means of software testing.
The activities involved in the testing phase basically evaluate the capability of
the developed system and ensure that the system meets the desired requirements.
It should be noted that testing is fruitful only if it is performed in the correct manner.
Through effective software testing, the software can be examined for correctness,
comprehensiveness, consistency and adherence to standards. This helps in delivering
high-quality software products and lowering maintenance costs, thus leading to
more contented users.
10.1 OBJECTIVES
After going through this unit, you will be able to:
Discuss the guidelines that are required to perform efficient and effective
testing
Design a test plan, which specifies the purpose, scope, and method of software
testing
Understand various levels of testing including unit testing, integration testing,
system testing, and acceptance testing
Explain white box testing and black box testing techniques
Explain how testing is performed in the object-oriented environment
Explain how to perform debugging.
Testability
The ease with which a program is tested is known as testability. Testability should
always be considered while designing and implementing a software system so that
the errors (if any) in the system can be detected with minimum effort. There are
several characteristics of testability, which are listed below (also see Figure 10.3).
Easy to operate: High-quality software can be tested in a better manner.
This is because if software is designed and implemented considering quality,
then comparatively fewer errors will be detected during the execution of
tests.
Stability: Software becomes stable when changes made to the software
are controlled and when the existing tests can still be performed.
Observability: Testers can easily identify whether the output generated
for certain input is accurate simply by observing it.
Easy to understand: Software that is easy to understand can be tested in
an efficient manner. Software can be properly understood by gathering
maximum information about it. For example, to have a proper knowledge
of software, its documentation can be used, which provides complete
information of the software code, thereby increasing its clarity and making testing
easier.
Decomposability: By breaking software into independent modules,
problems can be easily isolated and the modules can be easily tested.
Characteristics of a Software Test
There are several tests (such as unit and integration tests) used for testing
software. Each test has its own characteristics. The following points, however,
should be noted.
High probability of detecting errors: To detect maximum errors, the tester
should understand the software thoroughly and try to find the possible ways in
which the software can fail. For example, in a program to divide two numbers,
the possible way in which the program can fail is when 2 and 0 are given as
inputs and 2 is to be divided by 0. In this case, a set of tests should be developed
that can demonstrate an error in the division operator.
No redundancy: Resources and testing time are limited in software
development process. Thus, it is not beneficial to develop several tests,
which have the same intended purpose. Every test should have a distinct
purpose.
Choose the most appropriate test: There can be different tests that have
the same intent but due to certain limitations, such as time and resource
constraint, only few of them are used. In such a case, the tests which are
likely to find more errors should be considered.
Moderate: A test is considered good if it is neither too simple nor too
complex. Many tests can be combined to form one test case. However, this
can increase the complexity and leave many errors undetected. Hence, all
tests should be performed separately.
10.2.1 Test Plan
A test plan describes how testing would be accomplished. It is a document that
specifies the purpose, scope, and method of software testing. It determines the
testing tasks and the persons involved in executing those tasks, test items, and the
features to be tested. It also describes the environment for testing and the test
design and measurement techniques to be used. Note that a properly defined test
plan is an agreement between testers and users describing the role of testing in
software.
A complete test plan helps the people who are not involved in the test group
to understand why product validation is needed and how it is to be performed.
However, if the test plan is not complete, it might not be possible to check
how the software operates when installed on different operating systems or
when used with other software. To avoid this problem, IEEE states some
components that should be covered in a test plan. These components are
listed in Table 10.1.
Table 10.1 Components of a Test Plan

Component          Purpose
Responsibilities   Assigns responsibilities to different people and keeps them focused.
Assumptions        Avoids any misinterpretation of schedules.
Test               Provides an abstract of the entire process and outlines specific
                   tests. The testing scope, schedule, and duration are also outlined.
Communication      A communication plan (who, what, when, how about the people) is
                   developed.
Risk analysis      Identifies areas that are critical for success.
Defect reporting   Specifies the way in which a defect should be documented so that
                   it can be reproduced, retested, and fixed.
Environment        Describes the data, interfaces, work area, and the technical
                   environment used in testing. All this is specified to reduce or
                   eliminate misunderstandings and sources of potential delay.
10.2.2 Software Testing Strategies
Verification and Validation
Software testing is often used in association with the terms ‘verification’ and
‘validation’. Verification refers to the process of ensuring that the software is
developed according to its specifications. For verification, techniques like reviews,
analysis, inspections, and walkthroughs are commonly used. Validation, on the other
hand, refers to the process of checking that the developed software meets the
requirements specified by the user. Verification and validation can be summarized
as follows.
Verification: Is the software being developed in the right way?
Validation: Is the right software being developed?
Types of Software Testing Strategies
There are different types of software testing strategies, which are selected by the
testers depending upon the nature and size of the software. The commonly used
software testing strategies are listed below (also see Figure 10.4).
Analytic testing strategy: This uses formal and informal techniques to
assess and prioritize risks that arise during software testing. It takes a
complete overview of requirements, design, and implementation of objects
to determine the motive of testing. In addition, it gathers complete information
about the software, targets to be achieved, and the data required for testing
the software.
Model-based testing strategy: This strategy tests the functionality of
software according to the real world scenario (like software functioning in
an organization). It recognizes the domain of data and selects suitable test
cases according to the probability of errors in that domain.
Methodical testing strategy: It tests the functions and status of software
according to the checklist, which is based on user requirements. This strategy
is also used to test the functionality, reliability, usability, and performance of
software.
Process-oriented testing strategy: It tests the software according to
already existing standards, such as IEEE standards. In addition, it checks
the functionality of software by using automated testing tools.
Dynamic testing strategy: This tests the software after having a collective
decision of the testing team. Along with testing, this strategy provides
information about the software, such as test cases used for testing the errors
present in it.
Philosophical testing strategy: It tests software assuming that any
component of software can stop functioning anytime. It takes help from
software developers, users and systems analysts to test the software.
Advantages of an independent test group (ITG):
ITG can more efficiently find defects related to interaction among different
modules, system usability and performance, and many other special cases.
ITG serves as a better solution than leaving testing to the developers. This is
because the developers have neither training nor any motivation for testing.
Test groups can have a better perception of how reliable the software is before
delivering it to the user.

Disadvantages of an independent test group (ITG):
ITG may perform some tests that have already been performed by the developers.
This results in duplication of effort as well as wastage of time.
It is essential for the test group to be physically collocated with the design
group; otherwise, problems may arise.
Keeping a separate group for testing results in extra cost to the organization.
Note: Along with software testers, customers, end-users, and management also play an
important role in software testing.
Integration testing: Once the individual units are tested, they are integrated
and checked for interfaces between them. The integration testing focuses
on issues associated with verification and program construction as
components begin interacting with one another.
Validation testing: This testing provides the assurance that the software
constructed validates all the functional, behavioral, and performance
requirements established during requirements analysis.
System testing: This testing tests the entire software and the system elements
as a whole. It ensures that the overall system functions according to the user
requirements.
Strategic Issues
There are certain issues that need to be addressed for the successful implementation
of software testing strategy. These issues are listed below.
In addition to detecting errors, a good testing strategy should also assess
portability and usability of the software.
It should specify software requirements in a quantifiable manner, such as
outputs expected from the software, test effectiveness, and mean time to failure,
all of which should be clearly stated in the test plan.
It should improve testing method continuously to make it more effective.
Test plans that support rapid cycle testing should be developed. The feedback
from rapid cycle testing can be used to control the corresponding strategies.
It should develop robust software, which is able to test itself using debugging
techniques.
It should conduct formal technical reviews to evaluate the test cases and
test strategy. The formal technical reviews can detect errors and
inconsistencies present in the testing process.
Test Strategies for Conventional Software
The test strategies chosen by most software teams for testing conventional software
generally fall between two extremes. At one extreme is unit testing, where the
individual components of the software are tested. At the other extreme is
integration testing, which facilitates the integration of components into a system
and ends with tests that examine the integrated system.
10.2.4 Unit Testing
Unit testing is performed to test the individual units of software. Since the software
comprises various units/modules, detecting errors in these units is simple and
consumes less time, as they are small in size. However, it is possible that the
outputs produced by one unit become input for another unit. Hence, if incorrect
output produced by one unit is provided as input to the second unit then it also
produces wrong output. If this process is not corrected, the entire software may
produce unexpected outputs. To avoid this, all the units in the software are tested
independently using unit testing (see Figure 10.5).
Unit testing is not just performed once during the software development,
but repeated whenever the software is modified or used in a new environment.
Some other points noted about unit testing are listed below.
Each unit is tested separately regardless of other units of software.
The developers themselves perform this testing.
The methods of white box testing are used in this testing.
Unit testing is used to verify the code produced during software coding and
is responsible for assessing the correctness of a particular unit of source code. In
addition, unit testing performs the following functions.
It tests all control paths to uncover maximum errors that occur during the
execution of conditions present in the unit being tested.
It ensures that all statements in the unit have been executed at least once.
It tests data structures (like stacks, queues) that represent relationships among
individual data elements.
It checks the range of inputs given to units. This is because every input
range has a maximum and minimum value and the input given should be
within the range of these values.
It ensures that the data entered in variables is of the same data type as
defined in the unit.
It checks all arithmetic calculations present in the unit with all possible
combinations of input values.
Unit testing methods
Unit testing is performed by conducting a number of unit tests where each unit test
checks an individual component that is either new or modified. A unit test is also
referred to as a module test as it examines the individual units of code that constitute
the program and eventually the software. In a conventional structured programming
language such as C, the basic unit is a function or subroutine while in object-
oriented language such as C++, the basic unit is a class.
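To make the idea concrete, the following is a minimal sketch of a unit test written
in Python (the language is chosen here purely for illustration); the divide
function and the test names are hypothetical, echoing the division-by-zero example
given earlier in this unit.

    import unittest

    # A hypothetical unit under test: divides two numbers.
    def divide(a, b):
        if b == 0:
            raise ValueError('division by zero')
        return a / b

    class DivideTest(unittest.TestCase):
        def test_normal_division(self):
            # Exercises the ordinary control path.
            self.assertEqual(divide(10, 2), 5)

        def test_division_by_zero(self):
            # Exercises the error-handling path (2 divided by 0).
            with self.assertRaises(ValueError):
                divide(2, 0)

    if __name__ == '__main__':
        unittest.main()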
Various tests that are performed as a part of unit testing are listed below
(also see Figure 10.6).
Module interface: This is tested to check whether information flows in a
proper manner in and out of the ‘unit’ being tested. Note that test of data-
flow (across a module interface) is required before any other test is initiated.
Local data structure: This is tested to check whether temporarily stored
data maintains its integrity while an algorithm is being executed.
Boundary conditions: These are tested to check whether the module
provides the desired functionality within the specified boundaries.
Independent paths: These are tested to check whether all statements in a
module are executed at least once. Note that in this testing, the entire control
structure should be exercised.
Error-handling paths: After successful completion of various tests, error-
handling paths are tested.
a test case is developed to determine whether the unit generates errors in
Note: Drivers and stubs are not delivered with the final software product. Thus, they
represent an overhead.
The big bang approach and incremental integration approach are used
to integrate modules of a program. In the big bang approach, initially all modules
are integrated and then the entire program is tested. However, when the entire
program is tested, it is possible that a set of errors is detected. It is difficult to
correct these errors since it is difficult to isolate the exact cause of the errors when
the program is very large. In addition, when one set of errors is corrected, new
sets of errors arise and this process continues indefinitely.
To overcome this problem, incremental integration is followed. The
incremental integration approach tests the program in small increments. It is easier
to detect errors in this approach because only a small segment of software code is
tested at a given instance of time. Moreover, interfaces can be tested completely
if this approach is used. Various kinds of approaches are used for performing
incremental integration testing, namely, top-down integration testing, bottom-
up integration testing, regression testing, and smoke testing.
Top-down integration testing
In this testing, the software is developed and tested by integrating the individual
modules, moving downwards in the control hierarchy. In top-down integration
testing, initially only one module known as the main control module is tested.
After this, all the modules called by it are combined with it and tested. This
process continues till all the modules in the software are integrated and tested.
It is also possible that a module being tested calls some of its subordinate
modules. To simulate the activity of these subordinate modules, a stub is written.
A stub replaces modules that are subordinate to the module being tested. Once the
control is passed to the stub, it manipulates the data as little as possible, verifies
the entry, and passes the control back to the module under test (see Figure 10.9).
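As an illustration, the sketch below (in Python, with hypothetical module and
function names) shows a stub standing in for a subordinate module during top-down
integration: it verifies the entry, manipulates the data minimally, and returns
control to the module under test.

    # Module under test: a hypothetical report module that calls a
    # subordinate data-access module.
    def generate_report(fetch_records):
        records = fetch_records()            # call into the subordinate module
        return 'REPORT: {} records'.format(len(records))

    # Stub replacing the (not yet integrated) subordinate module.
    def fetch_records_stub():
        print('stub: fetch_records was called')  # verify the entry
        return [('dummy', 1)]                    # minimal canned data

    # Top-down test: the main module runs with the subordinate stubbed out.
    print(generate_report(fetch_records_stub))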
To perform top-down integration testing, the following steps are used.
1. The main control module is used as a test driver and all the modules that are
directly subordinate to the main control module are replaced with stubs.
2. The subordinate stubs are then replaced with actual modules, one stub at a
time. The way of replacing stubs with modules depends on the approach
(depth first or breadth first) used for integration.
3. As each new module is integrated, tests are conducted.
4. After each set of tests is complete, it is time to replace another stub with an
actual module.
5. In order to ensure no new errors have been introduced, regression testing
may be performed.
Regression testing
Software undergoes changes every time a new module is integrated with the existing
subsystem (Figure 10.12). Changes can occur in the control logic or input/output
media, and so on. It is possible that new data-flow paths are established as a
result of these changes, which may cause problems in the functioning of some
parts of the software that was previously working perfectly. In addition, it is also
possible that new errors may surface during the process of correcting existing
errors. To avoid these problems, regression testing is used.
Smoke testing
Smoke testing is defined as an approach to integration testing in which a subset of
test cases designed to check the main functionality of the software is used to test
whether the vital functions of the software work correctly. This testing is best
suited for time-critical software as it permits the testers to evaluate the
software frequently.
Smoke testing is performed when the software is under development. As
the modules of the software are developed, they are integrated to form a ‘cluster’.
After the cluster is formed, certain tests are designed to detect errors that prevent
the cluster from performing its function. Next, the cluster is integrated with other
clusters, thereby leading to the development of the entire software, which is smoke
tested frequently. A smoke test should possess the following characteristics.
It should run quickly.
It should try to cover a large part of the software and if possible the entire
software.
It should be easy for testers to perform smoke testing on the software.
It should be able to detect all errors present in the cluster being tested.
It should try to find showstopper errors.
Generally, smoke testing is conducted every time a new cluster is developed
and integrated with the existing cluster. Smoke testing takes minimum time to detect
errors that occur due to integration of clusters. This reduces the risk associated
with the occurrence of problems such as introduction of new errors in the software.
A cluster cannot be sent for further testing unless smoke testing is performed on it.
Thus, smoke testing determines whether the cluster is suitable to be sent for further
testing. Other benefits associated with smoke testing are listed below.
Minimizes the risks, which are caused due to integration of different
modules: Since smoke testing is performed frequently on the software, it
allows the testers to uncover errors as early as possible, thereby reducing
the chance of causing severe impact on the schedule when there is delay in
uncovering errors.
Improves quality of the final software: Since smoke testing detects both
functional and architectural errors as early as possible, they are corrected
early, thereby resulting in high-quality software.
Simplifies detection and correction of errors: As smoke testing is
performed almost every time a new code is added, it becomes clear that
the probable cause of errors is the new code.
Assesses progress easily: Since smoke testing is performed frequently,
it keeps track of the continuous integration of modules, that is, the
progress of software development. This boosts the morale of software
developers.
Integration test documentation
To understand the overall procedure of software integration, a document known
as test specification is prepared. This document provides information in the
form of a test plan, test procedure, and actual test results.
Figure 10.13 shows the test specification document, which comprises the
following sections.
Scope of testing: Outlines the specific design, functional, and performance
characteristics of the software that need to be tested. In addition, it describes
the completion criteria for each test phase and keeps track of the constraints
that occur in the schedule.
Test plan: Describes the testing strategy to be used for integrating the
software. Testing is classified into two parts, namely, phases and builds.
Phases describe distinct tasks that involve various subtasks. On the other
hand, builds are groups of modules that correspond to each phase. Some
of the common test phases that require integration testing include user
interaction, data manipulation and analysis, display outputs, database
management, and so on. Every test phase consists of a functional category
within the software. Generally, these phases can be related to a specific
domain within the architecture of the software. The criteria commonly
considered for all test phases include interface integrity, functional validity,
information content, and performance.
In addition to test phases and builds, a test plan should also include the
following.
o A schedule for integration, which specifies the start and end date for
each phase.
o A description of overhead software that focuses on the characteristics
for which extra effort may be required.
o A description of the environment and resources required for the testing.
Test procedure ‘n’: Describes the order of integration and the
corresponding unit tests for modules. Order of integration provides
information about the purpose and the modules that are to be tested. Unit
tests are performed for the developed modules along with the description
of tests for these modules. In addition, test procedure describes the
development of overhead software, expected results during integration
testing, and description of test case data. The test environment and tools or
techniques used for testing are also mentioned in a test procedure.
Actual test results: Provides information about actual test results and
problems that are recorded in the test report. With the help of this information,
it is easy to carry out software maintenance.
References: Describes the list of references that are used for preparing
user documentation. Generally, references include books and websites.
Appendices: Provides information about the test specification document.
Appendices serve as a supplementary material that is provided at the end
of the document.
Test Strategies for Object-Oriented Software
Like conventional software, software testing in object-oriented (OO) software
also aims to uncover maximum errors with minimum effort. However, as the nature
of object-oriented software is different from that of conventional software, the
test strategies as well as the testing techniques used for object-oriented software
also differ.
Unit testing in OO context
In object-oriented environment, the concept of unit is different. Here, the focus of
unit testing is the class (or an instance of a class, usually called object), which is an
encapsulated package binding the data and the operations that can be performed
on these data together. But the smallest testable units in object-oriented software
are the operations defined inside the class. Since in an OO environment a single
operation may belong to many classes, it is ineffective to test any operation in a
standalone fashion, as we do in the conventional unit testing approach; rather, an
operation needs to be tested as a part of the class. The class testing for object-oriented
software is equivalent to the unit testing of conventional software. However, unlike
unit testing of conventional software, it is not driven by the details of modules and
data across module interfaces. Rather, it focuses on the operations defined inside
the class and the state behavior of class.
Integration testing in OO context
The object-oriented software do not necessarily follow a hierarchical structure
due to which the conventional top-down and bottom-up integration approaches
are of little use for them. Moreover, conventional incremental integration approach
(which means integrating operations one at a time into a class) also seems impossible
because the operation being integrated into a class may need to interact directly or
indirectly with other operations that form the class. To avoid such problems, two
different integration approaches, including thread-based testing and use-based
testing, are adopted for the integration testing of OO software.
In the thread-based testing approach, the set of classes that need to respond
to an input or an event are determined. Each such set of classes is said to form a
thread. After determining the sets of classes forming threads, each thread is
integrated and tested individually. Regression testing is also performed to ensure
that no errors occur as a result of integration. On the other hand, in the use-based
testing approach, the integration process starts with the independent classes
(the classes that have little or no collaboration with other classes).
After the independent classes have been integrated and tested, the integration
testing proceeds to next layer of classes called dependent classes which make
use of independent classes. This integration procedure continues until the entire
system has been integrated and tested.
Test Strategies for Web Applications
The test strategy for Web applications (WebApps) conforms to the basic principles
used for all software testing and follows the strategy and testing tactics recommended
for object-oriented software. The steps followed in the strategic approach used
for WebApps are summarized below.
1. Examine the content model for the WebApp to reveal errors.
2. Examine the interface model for the WebApp to ascertain whether all the
use-cases can be reconciled.
3. Examine the design model for the WebApp to find out navigation errors.
4. Test the user interface to disclose errors in the presentation.
5. Perform unit testing for each functional component.
6. Test the navigation across the architecture.
7. Implement the WebApp in many different environmental configurations and
check whether it is compatible with each environmental configuration.
8. Conduct the security testing with an aim to exploit vulnerabilities within the
WebApp or in its environment.
9. Conduct the performance testing.
10. Have the WebApp tested by end users and evaluate the results obtained
from them for content and navigation errors, performance and reliability of the
WebApp, and compatibility and usability concerns.
10.2.6 Validation Testing
After the individual components of a system have been unit tested, assembled as a
complete system, and the interfacing errors have been detected as well as corrected,
the validation testing begins. This testing is performed to ensure that the functional,
behavioral and performance requirements of the software are met. IEEE defines
validation testing as a ‘formal testing with respect to user needs, requirements,
and business processes conducted to determine whether or not a system
satisfies the validation criteria and to enable the user, customers or other
authorized entity to determine whether or not to accept the system’.
During validation testing, the software is tested and evaluated by a group of
users either at the developer’s site or user’s site. This enables the users to test the
software themselves and analyze whether it is meeting their requirements. To perform
acceptance testing, a predetermined set of data is given to the software as input. It
is important to know the expected output before performing acceptance testing so
that outputs produced by the software as a result of testing can be compared with
them. Based on the results of tests, users decide whether to accept or reject the
software. That is, if both outputs (expected and produced) match, the software is
considered to be correct and is accepted; otherwise, it is rejected.
The various advantages and disadvantages associated with validation testing
are listed in Table 10.6.
Table 10.6 Advantages and Disadvantages of Validation Testing
Beta testing assesses the performance of the software at user’s site. This testing is
‘live’ testing and is conducted in an environment, which is not controlled by the
developer. That is, this testing is performed without any interference from the
developer (see Figure 10.15). Beta testing is performed to know whether the
developed software satisfies the user requirements and fits within the business
processes.
Recovery testing
Recovery testing is a type of system testing in which the system is forced to
fail in different ways to check whether the software recovers from the failures
without any data loss. The events that lead to failure include system crashes,
hardware failures, unexpected loss of communication, and other catastrophic
problems.
To recover from any type of failure, a system should be fault-tolerant. A
fault-tolerant system can be defined as a system which continues to perform the
intended functions even when errors are present in it. In case the system is not
fault-tolerant, it needs to be corrected within a specified time limit after failure has
occurred so that the software performs its functions in a desired manner.
Test cases generated for recovery testing not only show the presence of
errors in a system, but also provide information about the data lost due to problems
such as power failure and improper shutting down of computer system. Recovery
testing also ensures that appropriate methods are used to restore the lost data.
Other advantages of recovery testing are listed below.
It checks whether the backup data is saved properly.
It ensures that the backup data is stored in a secure location.
It ensures that proper detail of recovery procedures is maintained.
Security testing
Systems with sensitive information are generally the target of improper or illegal
use. Therefore, protection mechanisms are required to restrict unauthorized access
to the system. To avoid any improper usage, security testing is performed which
identifies and removes the flaws from software (if any) that can be exploited by the
intruders and thus result in security violations. To find such flaws, the
tester, like an intruder, tries to penetrate the system by performing tasks such as
cracking the password, attacking the system with custom software, intentionally
producing errors in the system, etc. Security testing focuses on the following
areas of security.
Application security: To check whether the user can access only those
data and functions for which the system developer or user of system has
given permission. This security is referred to as authorization.
System security: To check whether only the users, who have permission
to access the system, are accessing it. This security is referred to as
authentication.
Generally, the disgruntled/dishonest employees or other individuals outside
the organization make an attempt to gain unauthorized access to the system. If
such people succeed in gaining access to the system, there is a possibility that a
large amount of important data can be lost resulting in huge loss to the organization
or individuals.
Security testing verifies that the system accomplishes all the security
requirements and validates the effectiveness of these security measures. Other
advantages associated with security testing are listed below.
It determines whether proper techniques are used to identify security risks.
It verifies that appropriate protection techniques are followed to secure the
system.
It ensures that the system is able to protect its data and maintain its
functionality.
It conducts tests to ensure that the implemented security measures are
working properly.
Stress testing
Stress testing is designed to determine the behavior of the software under abnormal
situations. In this testing, the test cases are designed to execute the system in such
a way that abnormal conditions arise. Some examples of test cases that may be
designed for stress testing are listed below.
Test cases that generate interrupts at a much higher rate than the average rate
Test cases that demand excessive use of memory as well as other resources
Test cases that cause ‘thrashing’ by causing excessive disk accessing.
IEEE defines stress testing as ‘testing conducted to evaluate a system or
component at or beyond the limits of its specified requirements.’ For example,
if a software system is developed to execute 100 statements at a time, then stress
testing may generate 110 statements to be executed. This load may increase until
the software fails. Thus, stress testing specifies the way in which a system reacts
when it is made to operate beyond its performance and capacity limits.
Using white box testing methods, the software engineer can derive test cases
that guarantee the following.
All independent paths within the program have been exercised at least once.
All internal data structures have been exercised.
All loops (simple loops, concatenated loops, and nested loops) have been
executed at and within their specific boundaries.
All segments present between the control structures (like ‘switch’ statement)
have been executed at least once.
Each branch (like ‘case’ statement) has been exercised at least once.
All the logical conditions as well as their combinations have been executed
at least once for both true and false paths.
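The sketch below (Python, with a hypothetical one-decision function) illustrates the
branch criterion from the list above: two inputs are enough to execute both the true
and false outcomes of a single logical condition.

    # A hypothetical unit containing one decision statement.
    def classify(n):
        if n >= 0:
            return 'non-negative'
        return 'negative'

    # Two test inputs exercise both independent paths through the unit:
    assert classify(5) == 'non-negative'   # condition true
    assert classify(-3) == 'negative'      # condition false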
The various advantages and disadvantages of white box testing are listed in
Table 10.7.
Table 10.7 Advantages and Disadvantages of White Box Testing

Advantages:
Covers the larger part of the program code while testing.
Uncovers typographical errors.
Detects design errors that occur when incorrect assumptions are made about
execution paths.

Disadvantages:
Tests that cover most of the program code may not be good for assessing the
functionality of surprise (unexpected) behaviors and other testing goals.
Tests based on design may miss other system problems.
Test cases need to be changed if the implementation changes.
A flow graph uses different symbols, namely, circles and arrows, to represent
various statements and flow of control within the program. Circles represent nodes,
which are used to depict the procedural statements present in the program. A
sequence of process boxes and a decision box used in a flowchart can be easily
mapped into a single node. Arrows represent edges or links, which are used to
depict the flow of control within the program. It is necessary for every edge to end
in a node irrespective of whether it represents a procedural statement. In a flow
graph, the area bounded by edges and nodes is known as a region. In addition,
the area outside the graph is also counted as a region while counting regions. A
flow graph can be easily understood with the help of a diagram. For example, in
Figure 10.21 (a) a flowchart has been depicted, which has been represented as a
flow graph in Figure 10.21 (b).
Here, P1, P2, P3, and P4 represent different independent paths present
in the program.
To determine the number of independent paths through a program, the
cyclomatic complexity metric is used that provides a quantitative measure of
the logical complexity of a program. The value of this metric defines the number of
test cases that should be developed to ensure that all statements in the program
get exercised at least once during testing.
Cyclomatic complexity of a program can be computed by using any of the
following three methods.
By counting the total number of regions in the flow graph of a program. For
example, in Figure 10.21 (b), there are four regions represented by R1,
R2, R3, and R4; hence, the cyclomatic complexity is four.
By using the following formula.
CC = E - N + 2
Where
CC = the cyclomatic complexity of the program
E = the number of edges in the flow graph
N = the number of nodes in the flow graph.
For example, in Figure 10.21 (b), E = 11, N = 9. Therefore, CC = 11 - 9 +
2 = 4.
By using the following formula.
CC = P + 1
Where
P = the number of predicate nodes in the flow graph.
For example, in Figure 10.21 (b), P = 3. Therefore, CC = 3 + 1 = 4.
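The two formulas can be checked mechanically. The short Python sketch below is an
illustration only; it uses the values quoted above from Figure 10.21 (b)
(E = 11, N = 9, P = 3).

    # Cyclomatic complexity from edges and nodes: CC = E - N + 2
    def cc_from_graph(num_edges, num_nodes):
        return num_edges - num_nodes + 2

    # Cyclomatic complexity from predicate nodes: CC = P + 1
    def cc_from_predicates(num_predicates):
        return num_predicates + 1

    print(cc_from_graph(11, 9))       # prints 4
    print(cc_from_predicates(3))      # prints 4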
Note: Cyclomatic complexity can be calculated either manually (generally for small program
suites) or using automated tools. However, for most operational environments, automated
tools are preferred.
Fig. 10.22 Flow Graph to Find the Greater between Two Numbers
Generating graph matrix
A graph matrix is used to develop a software tool that in turn helps in carrying out
basis path testing. It is defined as a data structure used to represent the flow graph
of a program in a tabular form. This matrix is also used to evaluate the control
structures present in the program during testing.
Graph matrix is a square matrix of the size N×N, where N is the number of
nodes in the flow graph. An entry is made in the matrix at the intersection of ith
row and jth column if there exists an edge between ith and jth node in the flow
graph. Every entry in the graph matrix is assigned some value known as link
weight. Adding link weights to each entry makes the graph matrix a useful tool for
evaluating the control structure of the program during testing. Figure 10.23 (b)
shows the graph matrix generated for the flow graph depicted in Figure 10.23 (a).
In the flow graph shown in Figure 10.23 (a), numbers and letters are used
to identify each node and edge respectively. In Figure 10.23 (b), a letter entry is
made if there is an edge between two nodes of the flow graph. For example, node
3 is connected to the node 6 by edge d and node 4 is connected to node 2 by
edge c, and so on.
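A graph matrix is easy to hold in a program. The Python sketch below is illustrative
only: it builds an N x N matrix (the size N = 6 is assumed) and records the two
example edges mentioned above, using the edge letter as the link weight.

    # Graph matrix: entry [i][j] holds the link weight if an edge runs
    # from node i+1 to node j+1; 0 means no edge exists.
    N = 6
    matrix = [[0] * N for _ in range(N)]

    matrix[2][5] = 'd'   # node 3 is connected to node 6 by edge d
    matrix[3][1] = 'c'   # node 4 is connected to node 2 by edge c

    # Replacing each letter by a link weight of 1 would let row sums be
    # used when evaluating the program's control structure during testing.
    print(matrix)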
Control structure testing
Control structure testing is used to enhance the coverage area by testing various
control structures (which include logical structures and loops) present in the
program. Note that basis path testing is used as one of the techniques for control
structure testing. Various types of testing performed under control structure testing
are condition testing, data-flow testing, and loop testing.
Condition testing
In condition testing, the test cases are derived to determine whether the logical
conditions and decision statements are free from errors. The errors present in
logical conditions can be incorrect Boolean operators, missing parentheses in a
Boolean expression, and so on.
Mutation testing
Mutation testing is a white box method where errors are ‘purposely’ inserted into
a program (under test) to verify whether the existing test case is able to detect the
error. In this testing, mutants of the program are created by making some changes
in the original program. The objective is to check whether each mutant produces
an output that is different from the output produced by the original program (see
Figure 10.25).
In mutation testing, test cases that are able to ‘kill’ all the mutants should be
developed. This is accomplished by testing mutants with the developed set of test
cases. There can be two possible outcomes when the test cases test the program—
either the test case detects the faults or fails to detect faults. If faults are detected,
then necessary measures are taken to correct them.
When no faults are detected, it implies that either the program is absolutely
correct or the test case is inefficient to detect the faults. Therefore, it can be
concluded that mutation testing is conducted to determine the effectiveness of a
test case. That is, if a test case is able to detect these ‘small’ faults (minor changes)
in a program, then it is likely that the same test case will be equally effective in
finding real faults.
To perform mutation testing, a number of steps are followed, which are
listed below.
1. Create mutants of a program.
2. Check both program and its mutants using test cases.
3. Find the mutants that are different from the main program. A mutant is said
to be different from the main program if it produces an output, which is
different from the output produced by the main program.
4. Find mutants that are equivalent to the program. A mutant is said to be
equivalent to the main program if it produces the same output as that of the
main program.
5. Compute the mutation score using the formula given below.
M = D / (N - E)
Where M = Mutation score
N = Total number of mutants of the program
D = Number of mutants different from the main program
E = Total number of mutants that are equivalent to the main program.
6. Repeat steps 1 to 5 till the mutation score is ‘1’.
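The mutation score of step 5 is straightforward to compute. The Python sketch below
uses made-up counts purely for illustration.

    # Mutation score: M = D / (N - E)
    def mutation_score(total_mutants, differing, equivalent):
        # total_mutants = N, differing = D, equivalent = E
        return differing / (total_mutants - equivalent)

    # Illustrative values: 50 mutants, 45 killed (different), 5 equivalent.
    print(mutation_score(50, 45, 5))   # prints 1.0, the goal of step 6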
However, mutation testing is very expensive to run on large programs.
Thus, certain tools are used to run mutation tests on large programs. For
example, ‘Jester’ is used to run mutation tests on Java code. This tool targets
the specific areas of the program code, such as changing constants and Boolean
values.
10.3.2 Black Box Testing
Black box (or functional) testing checks the functional requirements and examines
the input and output data of these requirements (see Figure 10.26). When black
box testing is performed, only the sets of ‘legal’ input and corresponding outputs
should be known to the tester and not the internal logic of the program to produce
that output. Hence to determine the functionality, the outputs produced for the
given sets of input are observed.
The black box testing is used to find the following errors (see Figure 10.27).
Interface errors, such as functions, which are unable to send or receive data
to/from other software.
Incorrect functions that lead to undesired output when executed.
Missing functions and erroneous data structures.
Erroneous databases, which lead to incorrect outputs when software uses
the data present in these databases for processing.
Incorrect conditions due to which the functions produce incorrect outputs
when they are executed.
Termination errors, such as certain conditions due to which a function enters
a loop that forces it to execute indefinitely.
In this testing, the tester derives various test cases to exercise the functional
requirements of the software without considering the implementation details of the
code. Then, the software is run for the specified sets of input, and the output
produced for each input set is compared against the specifications to confirm the
correctness. If the outputs are as specified by the user, the software is considered
to be correct; otherwise, it is tested for the presence of errors in it. The advantages
and disadvantages associated with black box testing are listed in Table 10.8.
Table 10.8 Advantages and Disadvantages of Black Box Testing
Various methods used in black box testing are equivalence class partitioning,
boundary value analysis, and cause-effect graphing (see Figure 10.28). In
equivalence class partitioning, the test inputs are classified into equivalence
classes such that one input checks (validates) all the input values in that class. In
boundary value analysis, the boundary values of the equivalence classes are
considered and tested. In cause-effect graphing, cause-effect graphs are used
to design test cases, which provide all the possible combinations of inputs to the
program.
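As a small illustration of the first two methods, the Python sketch below tests a
hypothetical routine that accepts ages in the range 18 to 60: one representative
value per equivalence class, then values at and just beyond each boundary.

    # Hypothetical routine under test.
    def is_valid_age(age):
        return 18 <= age <= 60

    # Equivalence class partitioning: one input validates each class.
    assert is_valid_age(35) is True     # valid class: 18..60
    assert is_valid_age(5) is False     # invalid class: below the range
    assert is_valid_age(70) is False    # invalid class: above the range

    # Boundary value analysis: test at and around each boundary.
    for age, expected in [(17, False), (18, True), (60, True), (61, False)]:
        assert is_valid_age(age) == expected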
An equivalence class depicts valid or invalid states for the input condition.
An input condition can be either a specific numeric value, a range of values, a
Boolean condition, or a set of values. The general guidelines that are followed for
generating the equivalence classes are listed in Table 10.9.
Table 10.9 Guidelines for Generating Equivalence Classes
various conditions that make the effect ‘true’ are recognized. A condition has two
states, ‘true’ and ‘false’. A condition is ‘true’ if it causes the effect to occur;
otherwise, it is ‘false’. The conditions are combined using Boolean operators such
as ‘AND’ (&), ‘OR’ (|), and ‘NOT’ (~). Finally, a test case is generated for all
possible combinations of conditions.
Various symbols are used in the cause-effect graph (see Figure 10.31). The
figure depicts various logical associations among causes ci and effects ei. The
dashed notation on the right side in the figure indicates various constraint associations
that can be applied to either causes or effects.
Causes:
C1: side x is less than the sum of sides y and z.
C2: sides x, y, and z are equal.
C3: side x is equal to side y.
C4: side y is equal to side z.
C5: side x is equal to side z.

Effects:
E1: no triangle is formed.
E2: an equilateral triangle is formed.
E3: an isosceles triangle is formed.
2. The cause-effect graph is generated as shown in Figure 10.32.
3. A decision table (a table that shows a set of conditions and the actions
resulting from them) is drawn as shown in Table 10.11.
Table 10.11 Decision Table
Conditions                    R1   R2   R3   R4   R5
C1: x < y + z                  0    X    X    X    X
C2: x = y = z                  X    1    X    X    X
C3: x = y                      X    X    1    X    X
C4: y = z                      X    X    X    1    X
C5: x = z                      X    X    X    X    1
Actions
E1: not a triangle             1
E2: equilateral triangle            1
E3: isosceles triangle                   1    1    1
(1 = condition true/action taken, 0 = condition false, X = don’t care)
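The decision table translates directly into code. The Python sketch below is
illustrative only and mirrors rules R1 to R5; note that, as in the table, only side
x is compared against the sum of the other two, and the scalene case (outside the
three listed effects) is added merely to make the function total.

    # Mirrors the decision table: causes C1..C5 map to effects E1..E3.
    def classify_triangle(x, y, z):
        if not (x < y + z):                 # C1 false -> E1 (rule R1)
            return 'not a triangle'
        if x == y == z:                     # C2 -> E2 (rule R2)
            return 'equilateral triangle'
        if x == y or y == z or x == z:      # C3, C4 or C5 -> E3 (R3..R5)
            return 'isosceles triangle'
        return 'scalene triangle'           # outside the table's effects

    print(classify_triangle(3, 3, 3))   # equilateral triangle
    print(classify_triangle(3, 3, 5))   # isosceles triangle
    print(classify_triangle(9, 2, 1))   # not a triangle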
10.4 DEBUGGING
Guidelines for Debugging
Some guidelines that are followed while performing debugging are discussed here.
Debugging is the process of solving a problem. Hence, individuals involved
in debugging should understand all the causes of an error before starting
with debugging.
No experimentation should be done while performing debugging. The
experimental changes instead of removing errors often increase the problem
by adding new errors in it.
When there is an error in one segment of a program, there is a high possibility
that some other errors also exist in the program. Hence, if an error is found
in one segment of a program, rest of the program should be properly
examined for errors.
It should be ensured that the new code added in a program to fix errors is
correct and is not introducing any new error in it. Thus, to verify the
correctness of a new code and to ensure that no new errors are introduced,
regression testing should be performed.
10.4.1 The Debugging Process
During debugging, errors are encountered that range from less damaging (like
input of an incorrect function) to catastrophic (like system failure, which may lead
to economic or physical damage). Various levels of errors and their damaging effects
are shown in Figure 10.33. Note that with the increase in number of errors, the
amount of effort to find their causes also increases.
Once errors are identified in a software system, to debug the problem, a
number of steps are followed, which are listed below.
1. Defect confirmation/identification: A problem is identified in a system
and a defect report is created. A software engineer maintains and analyzes
this error report and finds solutions to the following questions.
Does a defect exist in the system?
Can the defect be reproduced?
What is the expected/desired behavior of the system?
What is the actual behavior?
Debugging Strategies
As debugging is a difficult and time-consuming task, it is essential to develop a
proper debugging strategy. This strategy helps in performing the process of
debugging easily and efficiently. The commonly-used debugging strategies are
debugging by brute force, induction strategy, deduction strategy, backtracking
strategy, and debugging by testing (see Figure 10.35).
5. Mutation testing is a white box method where errors are ‘purposely’ inserted
into a program (under test) to verify whether the existing test case is able to
detect the error.
6. Debugging is defined as a process of analyzing and removing the error. It is
considered necessary in most of the newly developed software or hardware
and in commercial products/personal application programs.
10.6 SUMMARY
10.9 FURTHER READINGS

Schach, Stephen R. 2005. Object Oriented and Classical Software Engineering.
New Delhi: Tata McGraw-Hill.
Pressman, Roger S. 1997. Software Engineering, a Practitioner’s Approach.
New Delhi: Tata McGraw-Hill.
Sommerville, Ian. 2001. Software Engineering. New Delhi: Pearson Education.
Ghezzi, Carlo, Mehdi Jazayeri, and Dino Mandrioli. 1991. Fundamentals of
Software Engineering. New Delhi: Prentice-Hall of India.
Jawadekar, Waman S. 2004. Software Engineering: Principles and Practice.
New Delhi: Tata McGraw-Hill.
Hughes, Bob, and Mike Cotterell. 2017. Software Project Management. New Delhi:
McGraw-Hill Education.
UNIT 11 PRODUCT METRICS
11.0 INTRODUCTION
To achieve an accurate schedule and cost estimate, better quality products, and
higher productivity, effective software management is required, which in turn
can be attained through the use of software metrics. A metric is a derived unit of
measurement that cannot be directly observed, but is created by combining or
relating two or more measures. Product metrics are the measurement of work products
produced during different phases of software development.
Various studies suggest that careful implementation and application of
software metrics help in achieving better management results, both in the short
run (a given project) and the long run (improving productivity of future projects).
Effective metrics not only describe the models that are capable of predicting process
or product parameters, but also facilitate the development of these models. An
ideal metric should be simple and precisely defined, easily obtainable, valid, and
robust.
11.1 OBJECTIVES
Describe process, product and project metrics
Discuss the issues in software metrics
To assess the quality of the engineered product or system and to better understand
the models that are created, some measures are used. These measures are collected
throughout the software development life cycle with an intention to improve the
software process on a continuous basis. Measurement helps in estimation, quality
control, productivity assessment and project control throughout a software project.
Also, measurement is used by software engineers to gain insight into the design
and development of the work products. In addition, measurement assists in strategic
decision-making as a project proceeds.
Software measurements fall into two categories, namely, direct measures and indirect measures. Direct measures include process attributes like cost and effort applied, and product attributes like lines of code produced, execution speed, and defects reported. Indirect measures include product attributes like functionality, quality, complexity, reliability, and maintainability.
Generally, software measurement is considered as a management tool which
if conducted in an effective manner, helps the project manager and the entire
software team to take decisions that lead to successful completion of the project.
Measurement process is characterized by a set of five activities, which are listed
below.
Formulation: Develops appropriate measures and metrics for the software under consideration.
Collection: Collects the data needed to derive the formulated metrics.
Analysis: Computes the metrics by applying mathematical tools.
Interpretation: Analyzes the metrics to attain insight into the quality of the representation.
Feedback: Communicates recommendations derived from product metrics to the software team.
Note that collection and analysis activities drive the measurement process.
In order to perform these activities effectively, it is recommended to automate
data collection and analysis, establish guidelines and recommendations for each
metric, and use statistical techniques to interrelate external quality features and
internal product attributes.
Once measures are collected, they are converted into metrics for use. IEEE defines metric as ‘a quantitative measure of the degree to which a system, component, or process possesses a given attribute.’ The goal of software metrics is to identify
and control essential parameters that affect software development. Other objectives
of using software metrics are listed below.
Measuring the size of the software quantitatively.
Assessing the level of complexity involved.
Assessing the strength of the module by measuring coupling.
Assessing the testing techniques.
Specifying when to stop testing.
Determining the date of release of the software.
Estimating cost of resources and project schedule.
Software metrics help project managers to gain an insight into the efficiency
of the software process, project, and product. This is possible by collecting quality
and productivity data and then analyzing and comparing these data with past
averages in order to know whether quality improvements have occurred. Also, when metrics are applied in a consistent manner, they help in project planning and project management activities. For example, schedule-based resource allocation can be effectively enhanced with the help of metrics.
Differences between Measures, Metrics, and Indicators
Metrics is often used interchangeably with measure and measurement. However,
it is important to note the differences between them. Measure can be defined as
quantitative indication of amount, dimension, capacity, or size of product and process
attributes. Measurement can be defined as the process of determining the
measure. Metrics can be defined as quantitative measures that allow software
engineers to identify the efficiency and improve the quality of software process,
project, and product.
To understand the difference, let us consider an example. A measure is established when a single data point is collected, for example, the number of errors detected in one software component. Measurement is the process of collecting one or more data points; it is established when many components are reviewed and tested individually to collect the measure of the number of errors in all of them. A metric relates the individual measures in some manner, for example, the number of errors found per review or the average number of errors found per unit test.
Once measures and metrics have been developed, indicators are obtained.
These indicators provide a detailed insight into the software process, software
project, or intermediate product. Indicators also enable software engineers or
project managers to adjust software processes and improve software products, if
required. For example, measurement dashboards or key indicators are used to
monitor progress and initiate change. Arranged together, indicators provide snapshots of the system’s performance.
Measured Data
Before data is collected and used, it is necessary to know the type of data involved
in the software metrics. Table 11.1 lists different types of data, which are identified
in metrics along with their description and the possible operations that can be
performed on them.
Table 11.1 Type of Data Measured

Type of Data    Possible Operations    Description of Data
Nominal         =, ≠                   Categories
Interval        +, -                   Differences
Simple and computable: Derivation of software metrics should be easy.
Is the information practical?
Does it provide the desired information?
3. Establish counting criteria: The model is broken down into its lowest-
level metric entities and the counting criteria (which are used to measure
each entity) are defined. This specifies the method for the measurement of
each metric primitive. For example, to estimate the size of a software project,
line of code (LOC) is a commonly used metric. Before measuring size in
LOC, clear and specific counting criteria should be defined.
4. Decide what is good: Once it is decided what to measure and how to
measure, it is necessary to determine whether action is needed. For example,
if software is meeting the quality standards, no corrective action is necessary.
However, if this is not true, then goals can be established to help the software
conform to the quality standards laid down. Note that the goals should be
reasonable, within the time frame, and based on supporting actions.
5. Metrics reporting: Once all the data for a metric are collected, they should be reported to the person concerned. This involves defining the report format, data extraction and reporting cycle, reporting mechanisms, and so on.
6. Additional qualifiers: Additional metric qualifiers that are ‘generic’ in nature should be determined. In other words, metrics that are valid for several additional extraction qualifiers should be identified.
The selection and development of software metrics is not complete until the
effect of measurement and people on each other is known. The success of metrics
in an organization depends on the attitudes of the people involved in collecting the
data, calculating and reporting the metrics, and people involved in using these
metrics. Also, metrics should focus on process, projects, and products and not on
the individuals involved in this activity.
11.6 PROCESS METRICS
In addition, various other metrics like simple morphology metrics are also used. These metrics allow comparison of different program architectures using a set of straightforward dimensions. A metric can be developed by referring to the call-and-return architecture shown in Figure 11.2. This metric can be defined by the following equation.
Size = n + a
Where
n = number of nodes
a = number of arcs.
For example, in Figure 11.2, there are 11 nodes and 10 arcs. Here, Size
can be calculated by the following equation.
Size = n + a = 11 + 10 = 21.
Depth is defined as the longest path from the top node (root) to the leaf
node and width is defined as the maximum number of nodes at any one level. In
Figure 11.2, the depth is 3 and the width is 6.
Coupling of the architecture is indicated by arc-to-node ratio. This ratio
also measures the connectivity density of the architecture and is calculated by the
following equation.
r = a/n.
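To make the arithmetic concrete, here is a minimal Python sketch of these structural measures, using the node and arc counts quoted above from Figure 11.2:

    # Simple morphology metrics for a call-and-return architecture.
    # The counts are the example values from Figure 11.2.
    n = 11                 # number of nodes (modules)
    a = 10                 # number of arcs (connections between modules)

    size = n + a           # Size = n + a
    r = a / n              # arc-to-node ratio: connectivity density

    print(f"Size = {size}")    # Size = 21
    print(f"r = {r:.2f}")      # r = 0.91

A higher arc-to-node ratio indicates a more densely connected, and hence more tightly coupled, architecture.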
Quality of software design also plays an important role in determining the
overall quality of the software. Many software quality indicators that are based on
measurable design characteristics of a computer program have been proposed.
One of them is Design Structural Quality Index (DSQI), which is derived
from the information obtained from data and architectural design. To calculate
DSQI, a number of steps are followed, which are listed below.
1. To calculate DSQI, the following values must be determined.
Number of components in program architecture (S1)
Number of components whose correct function is determined by the
source of input data (S2)
Number of components whose correct function depends on previous
processing (S3)
Number of database items (S4)
Number of different database items (S5)
Number of database segments (S6)
Number of components having single entry and exit (S7).
2. Once all the values from S1 to S7 are known, some intermediate values are
calculated, which are listed below.
Program structure (D1): If discrete methods are used for developing
architectural design then D1=1, else D1 = 0
Module independence (D2): D2 = 1-(S2/S1)
Modules not dependent on prior processing (D3): D3 = 1-(S3/S1)
Database size (D4): D4 = 1-(S5/S4)
Database compartmentalization (D5): D5 = 1-(S6/S4)
Module entrance/exit characteristic (D6): D6 = 1-(S7/S1).
3. Once all the intermediate values are calculated, DSQI is calculated by the
following equation.
DSQI = Σ (wi × Di)
Where
i = 1 to 6
Σ wi = 1 (wi is the weighting of the importance of each intermediate value).
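As an illustration, the following Python sketch computes DSQI; all the counts S1 to S7 are hypothetical, and equal weights summing to 1 are assumed:

    # DSQI sketch; the counts s1..s7 are hypothetical.
    s1, s2, s3, s4, s5, s6, s7 = 50, 10, 5, 40, 30, 8, 45

    d1 = 1.0               # discrete methods used for architectural design
    d2 = 1 - s2 / s1       # module independence
    d3 = 1 - s3 / s1       # modules not dependent on prior processing
    d4 = 1 - s5 / s4       # database size
    d5 = 1 - s6 / s4       # database compartmentalization
    d6 = 1 - s7 / s1       # single-entry/single-exit modules

    w = [1 / 6] * 6        # weights wi; they must sum to 1
    dsqi = sum(wi * di for wi, di in zip(w, [d1, d2, d3, d4, d5, d6]))
    print(f"DSQI = {dsqi:.3f}")

Comparing DSQI across revisions of a design shows whether changes improve or degrade its structure.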
Component-level Design Metrics
In conventional software, the focus of component-level design metrics is on the
internal characteristics of the software components. The software engineer can
judge the quality of the component-level design by measuring module cohesion,
coupling and complexity. Component-level design metrics are applied after
procedural design is final. Various metrics developed for component-level design
are listed below.
Cohesion metrics: Cohesiveness of a module can be indicated by the
definitions of the following five concepts and measures.
o Data slice: Defined as a backward walk through a module, which
looks for values of data that affect the state of the module as the walk
starts
o Data tokens: Defined as a set of variables defined for a module
o Glue tokens: Defined as the set of data tokens that lie on one or more data slices
o Superglue tokens: Defined as tokens, which are present in every data
slice in the module
o Stickiness: Defined as the stickiness of the glue token, which depends
on the number of data slices that it binds.
Coupling metrics: This metric indicates the degree to which a module is
connected to other modules, global data and the outside environment. A
metric for module coupling has been proposed, which includes data and
control flow coupling, global coupling, and environmental coupling.
o Measures defined for data and control flow coupling are listed below.
di = total number of input data parameters
ci = total number of input control parameters
d0 = total number of output data parameters
c0 = total number of output control parameters
o Measures defined for global coupling are listed below.
gd = number of global variables utilized as data
gc = number of global variables utilized as control
o Measures defined for environmental coupling are listed below.
w = number of modules called
r = number of modules calling the modules under consideration
By using the above-mentioned measures, the module-coupling indicator (mc) is calculated by using the following equation (a computational sketch follows this list).
mc = K/M
Where
K = proportionality constant
M = di + (a*ci) + d0 + (b*c0) + gd + (c*gc) + w + r.
Note that K, a, b, and c are empirically derived. The values of mc and overall module coupling are inversely proportional to each other. In other words, as the value of mc increases, the overall module coupling decreases.
Complexity metrics: Different types of software metrics can be calculated
to ascertain the complexity of program control flow. One of the most widely
used complexity metrics for ascertaining the complexity of the program is
cyclomatic complexity, which has already been discussed in Chapter 6.
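As mentioned in the coupling bullet above, here is a minimal Python sketch of the module-coupling indicator; every count, as well as the constants K, a, b, and c, is hypothetical (the text notes they are derived empirically):

    # Module-coupling indicator mc = K/M; all values are hypothetical.
    di_, ci_ = 4, 2        # input data / input control parameters
    do_, co_ = 3, 1        # output data / output control parameters
    gd, gc = 2, 1          # global variables used as data / as control
    w, r = 3, 2            # modules called / modules calling this module

    K = 1.0                # proportionality constant (empirically derived)
    a, b, c = 2, 3, 2      # empirically derived multipliers

    M = di_ + a * ci_ + do_ + b * co_ + gd + c * gc + w + r
    mc = K / M             # as mc increases, overall coupling decreases
    print(f"mc = {mc:.4f}")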
Complexity: Determined by assessing how classes are related to each other
Coupling: Defined as the physical connection between OO design elements
Sufficiency: Defined as the degree to which an abstraction possesses the
features required of it
Cohesion: Determined by analyzing the degree to which a set of properties
that the class possesses is part of the problem domain or design domain
Primitiveness: Indicates the degree to which the operation is atomic
Similarity: Indicates similarity between two or more classes in terms of
their structure, function, behavior, or purpose
Volatility: Defined as the probability of occurrence of change in the OO
design
Size: Defined with the help of four different views, namely, population,
volume, length, and functionality. Population is measured by calculating
the total number of OO entities, which can be in the form of classes or
operations. Volume measures are collected dynamically at any given point
of time. Length is a measure of interconnected designs such as depth of
inheritance tree. Functionality indicates the value rendered to the user by
the OO application.
Metrics for Coding
Halstead proposed the first analytic laws for computer science by using a set of
primitive measures, which can be derived once the design phase is complete and
code is generated. These measures are listed below.
n1 = number of distinct operators in a program
n2 = number of distinct operands in a program
N1 = total number of operators
N2 = total number of operands.
By using these measures, Halstead developed an expression for overall
program length, program volume, program difficulty, development effort,
and so on.
Program length (N) can be calculated by using the following equation.
N = n1 log2 n1 + n2 log2 n2.
Program volume (V) can be calculated by using the following equation.
V = N log2 (n1+n2).
Note that program volume depends on the programming language used
and represents the volume of information (in bits) required to specify a program.
Volume ratio (L) can be calculated by using the following equation.
L = (Volume of the most compact form of a program) / (Volume of the actual program)
Where, the value of L must be less than 1. Volume ratio can also be calculated by using the following equation.
L = (2/n1) * (n2/N2).
Program difficulty level (D) and effort (E) can be calculated by using the
following equations.
D = (n1/2)*(N2/n2).
E = D * V.
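A short Python sketch of these calculations follows; the primitive counts are hypothetical, and N is the estimated program length as defined above:

    import math

    # Halstead measures; the primitive counts are hypothetical.
    n1, n2 = 10, 20        # distinct operators / distinct operands
    N2 = 180               # total number of operands

    N = n1 * math.log2(n1) + n2 * math.log2(n2)   # program length
    V = N * math.log2(n1 + n2)                    # program volume (bits)
    L = (2 / n1) * (n2 / N2)                      # volume ratio (< 1)
    D = (n1 / 2) * (N2 / n2)                      # difficulty level
    E = D * V                                     # development effort

    print(f"N={N:.1f} V={V:.1f} L={L:.3f} D={D:.1f} E={E:.1f}")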
Metrics for Software Testing
The majority of the metrics used for testing focus on the testing process rather than on the technical characteristics of the tests. Generally, testers use metrics for analysis, design, and coding to guide them in the design and execution of test cases.
Function point can be effectively used to estimate testing effort. Various
characteristics like errors discovered, number of test cases needed, testing effort,
and so on can be determined by estimating the number of function points in the
current project and comparing them with any previous project.
Metrics used for architectural design can be used to indicate how integration
testing can be carried out. In addition, cyclomatic complexity can be used effectively
as a metric in the basis-path testing to determine the number of test cases needed.
Halstead measures can be used to derive metrics for testing effort. By using
program volume (V) and program level (PL), Halstead effort (e) can be calculated
by the following equations.
e = V/PL
Where
PL = 1/[(n1/2) × (N2/n2)] ...(1)
For a particular module (z), the percentage of overall testing effort allocated
can be calculated by the following equation.
Percentage of testing effort (z) = e(z)/Σe(i)
Where, e(z) is calculated for module z with the help of equation (1).
Summation in the denominator is the sum of Halstead effort (e) in all the modules
of the system.
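The allocation can be sketched in Python as follows; the module names and their primitive counts are hypothetical:

    import math

    # Percentage of testing effort per module, using e = V/PL.
    modules = {
        "parser":  {"n1": 12, "n2": 25, "N2": 200},
        "planner": {"n1": 8,  "n2": 15, "N2": 110},
    }

    def halstead_effort(m):
        N = m["n1"] * math.log2(m["n1"]) + m["n2"] * math.log2(m["n2"])
        V = N * math.log2(m["n1"] + m["n2"])             # program volume
        PL = 1 / ((m["n1"] / 2) * (m["N2"] / m["n2"]))   # equation (1)
        return V / PL                                    # Halstead effort e

    e = {name: halstead_effort(m) for name, m in modules.items()}
    total = sum(e.values())            # sum of e(i) over all modules
    for name, ez in e.items():
        print(f"{name}: {100 * ez / total:.1f}% of testing effort")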
Metrics for Object-oriented Testing
For developing metrics for object-oriented (OO) testing, different types of design
metrics that have a direct impact on the testability of object-oriented system are
considered. While developing metrics for OO testing, inheritance and encapsulation
are also considered. A set of metrics proposed for OO testing is listed below.
Lack of cohesion in methods (LCOM): This indicates the number of states to be tested. LCOM indicates the number of methods that access one or more of the same attributes; its value is 0 if no methods access the same attributes. As the value of LCOM increases, more states need to be tested (a computational sketch follows this list).
Percent public and protected (PAP): This shows the number of class
attributes, which are public or protected. Probability of adverse effects
among classes increases with increase in value of PAP as public and protected
attributes lead to potentially higher coupling.
Public access to data members (PAD): This shows the number of classes
that can access attributes of another class. Adverse effects among classes
increase as the value of PAD increases.
Number of root classes (NOR): This specifies the number of different
class hierarchies, which are described in the design model. Testing effort
increases with increase in NOR.
Fan-in (FIN): This indicates multiple inheritance. If the value of FIN is greater than 1, the class inherits its attributes and operations from more than one root class. Note that this situation (FIN > 1) should be avoided.
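As mentioned in the LCOM bullet above, here is a minimal Python sketch of LCOM as described there; the class methods and the attributes they access are hypothetical:

    # LCOM as described above: the number of methods that access one or
    # more attributes also accessed by another method (0 means no sharing).
    methods = {
        "deposit":  {"balance"},
        "withdraw": {"balance"},
        "report":   {"owner"},
    }

    def lcom(method_attrs):
        count = 0
        for name, attrs in method_attrs.items():
            # Attributes accessed by all the other methods.
            others = set().union(
                *(a for m, a in method_attrs.items() if m != name)
            )
            if attrs & others:      # shares at least one attribute
                count += 1
        return count

    print(lcom(methods))            # 2: deposit and withdraw share 'balance'

Note that several variants of LCOM exist in the literature; this sketch follows the informal description given above.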
Metrics for Software Maintenance
Metrics have been designed explicitly for maintenance activities. IEEE has proposed the Software Maturity Index (SMI), which provides indications relating to the stability of a software product. For calculating SMI, the following parameters are considered.
Number of modules in current release (MT)
Number of modules that have been changed in the current release (Fc)
Number of modules that have been added in the current release (Fa)
Number of modules that have been deleted from the current release (Fd).
Once all the parameters are known, SMI can be calculated by using the
following equation.
SMI = [MT – (Fa + Fc + Fd)]/MT.
Note that a product begins to stabilize as SMI reaches 1.0. SMI can also
be used as a metric for planning software maintenance activities by developing
empirical models in order to know the effort required for maintenance.
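For example, the following Python sketch computes SMI for a hypothetical release:

    # Software Maturity Index; the release figures are hypothetical.
    MT = 940    # modules in the current release
    Fc = 40     # modules changed in the current release
    Fa = 10     # modules added in the current release
    Fd = 12     # modules deleted from the current release

    SMI = (MT - (Fa + Fc + Fd)) / MT
    print(f"SMI = {SMI:.3f}")   # approaches 1.0 as the product stabilizes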
Project metrics enable the project managers to assess current projects, track
potential risks, identify problem areas, adjust workflow, and evaluate the project
team’s ability to control the quality of work products. Note that project metrics
are used for tactical purposes, whereas process metrics are used for strategic purposes.
Project metrics serve two purposes. One, they help to minimize the
development schedule by making necessary adjustments in order to avoid delays
and alleviate potential risks and problems. Two, these metrics are used to assess
the product quality on a regular basis and to resolve technical issues if required. As the quality of the project improves, the number of errors and defects is reduced, which in turn leads to a decrease in the overall cost of a software project.
Applying Project Metrics
Often, the first application of project metrics occurs during estimation. Here, metrics
collected from previous projects act as a base from which effort and time estimates
for the current project are calculated. As the project proceeds, original estimates
of effort and time are compared with the new measures of effort and time. This
comparison helps the project manager to monitor (supervise) and control the
progress of the project.
As the process of development proceeds, project metrics are used to track
the errors detected during each development phase. For example, as software
evolves from design to coding, project metrics are collected to assess quality of
the design and obtain indicators that in turn affect the approach chosen for coding
and testing. Also, project metrics are used to measure production rate, which is
measured in terms of models developed, function points, and delivered lines of
code.
Lines of code and function point metrics can be used for estimating object-oriented software projects. However, these metrics are not appropriate for incremental software development as they do not provide adequate details for effort and schedule estimation. Thus, for object-oriented projects, different sets of metrics have been proposed. These are listed below.
Number of scenario scripts: Scenario scripts are sequences of steps that depict the interaction between the user and the application. The number of scenarios is directly related to application size and to the number of test cases that must be developed to test the software. Note that scenario scripts are analogous to use-cases.
Number of key classes: Key classes are independent components, which
are defined in object-oriented analysis. As key classes form the core of the
problem domain, they indicate the effort required to develop the software and the amount of reuse to be applied during the development process.
Number of support classes: Classes, which are required to implement
the system but are indirectly related to the problem domain, are known as
support classes. For example, user interface classes and computation class
are support classes. It is possible to develop a support class for each key class. Like key classes, support classes indicate the effort required to develop the software and the amount of reuse to be applied during the development process.
Average number of support classes per key class: Key classes are
defined early in the software project while support classes are defined
throughout the project. The estimation process is simplified if the average
number of support classes per key class is already known.
Number of subsystems: A collection of classes that supports a function
visible to the user is known as a subsystem. Identifying subsystems makes
it easier to prepare a reasonable schedule in which work on subsystems is
divided among project members.
The aforementioned metrics are collected along with other project metrics like effort used, errors and defects detected, and so on. After an organization completes a number of projects, a database is developed, which shows the relationship between object-oriented measures and project measures. This relationship provides metrics that help in project estimation.
11.13 SUMMARY
BLOCK - V
RISK AND QUALITY MANAGEMENT
UNIT 12 RISK STRATEGIES
Structure
12.0 Introduction
12.1 Objectives
12.2 Reactive vs Proactive Risk Strategies
12.3 Software Risk and Risk Identification
12.4 Answers to Check Your Progress Questions
12.5 Summary
12.6 Key Words
12.7 Self Assessment Questions and Exercises
12.8 Further Readings
12.9 Learning Outcomes
12.0 INTRODUCTION
In this unit‚ you will learn about the risk strategies and risk identification. Risk is an
expectation of loss, a potential problem that may or may not occur in the future.
Risk strategy is a structured and coherent plan to identify, assess and manage risk.
12.1 OBJECTIVES
Difference between Proactive and Reactive Strategies

Reactive Strategy                                Proactive Strategy
Responds to the risk after its occurrence.       Helps in eliminating the risk before it occurs.
Once a hazard occurs, employees take action      Hazard mechanisms and threats are identified
to prevent an accident.                          before the hazard occurs.
uncertainty is reduced. Note that it is difficult to achieve software in which all the
risks are eliminated. Hence, it is essential to minimize the effect of risks as they
cannot be eliminated completely. For this purpose, effective risk management is
required.
12.5 SUMMARY
UNIT 13 RISK PROJECTION AND REFINEMENT
Structure
13.0 Introduction
13.1 Objectives
13.2 Risk Assessment: Risk Projection and Refinement
13.3 Risk Control, Mitigation and Monitoring
13.3.1 RMMM Plan
13.4 Answers to Check Your Progress Questions
13.5 Summary
13.6 Key Words
13.7 Self Assessment Questions and Exercises
13.8 Further Readings
13.9 Learning Outcomes
13.0 INTRODUCTION
In this unit‚ you will learn about the risk assessment and RMMM plan. Risk
assessment is a method or process to identify the risk factors that have the potential
to cause harm or loss. Risk management‚ mitigation and monitoring (RMMM)
plan documents all work executed as a part of risk analysis and used by the project
manager as a part of the overall project plan.
13.1 OBJECTIVES

13.2 RISK ASSESSMENT: RISK PROJECTION AND REFINEMENT

Risk assessment comprises the following three functions (see Figure 13.1).
Risk identification: Identifies the events that have an adverse impact on
the project. These events can be a change in user requirements, new
development technologies, and so on. In order to identify the risks, inputs
from project management team members, users, and management are
considered. The project plan including the sub-plans should also be carefully
analyzed in order to identify the areas of uncertainty before starting the
project. There are various kinds of risks that occur during software
development. Some of the common types of risks are listed below.
o Project risks: These risks are harmful for project plan as they affect
the project schedule and result in increased costs. They identify the
budgetary, project schedule, personnel, stakeholder, and requirements
problems.
o Technical risks: These risks are derived from the software or hardware technologies that are being used as a part of the software under development. They are harmful for the quality of a project and result in difficulty in the implementation of the software. They identify problems in design, implementation, interface, maintenance, and much more.
o Business risks: These risks are derived from the organizational environment where the software is being developed. They are harmful for the viability of a project and can make it difficult for users to accept the software. They identify market risk, sales risk, management risk, and so on.
o Known risks: These risks are harmful as they are detected only after a careful examination of the project plan and result in technical problems. These risks include unrealistic delivery date of software, lack of project scope, and much more.
o Requirements risks: These risks are derived from the change in user
requirements. In addition, these risks occur when new user requirements
are being managed.
o Estimation risks: These risks are derived from the management
estimations, which include the features required in the software and the
resources required to complete the software.
Risk analysis: Discovers possible risks using various techniques. These
techniques include decision analysis, cost-risk analysis, schedule analysis,
reliability analysis, and many more. After a risk is identified, it is evaluated in
order to assess its impact on the project. Once the evaluation is done, risk
is ‘ranked’ according to its probability of occurrence. After analyzing the areas of uncertainty, a description is made of how these areas affect the performance of the project.
Risk prioritization: Ranks the risks according to their capability to impact
the project, which is known as risk exposure. Determination of risk
exposure is done with the help of statistically-based decision mechanisms.
These mechanisms specify how to manage the risk. The risks that impact
the project the most should be specified so that they can be eliminated
completely. On the other hand, the risks that are minor and have little or no impact on the project can simply be accepted. A simple exposure computation is sketched below.
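Risk exposure is commonly computed as the probability of occurrence multiplied by the cost of the loss should the risk occur; the following Python sketch ranks some hypothetical risks under that assumption:

    # Risk prioritization: exposure = probability x cost (values hypothetical).
    risks = [
        ("key designer leaves",       0.30, 80_000),
        ("requirements change late",  0.50, 50_000),
        ("vendor component unstable", 0.10, 20_000),
    ]

    exposures = [(name, p * cost) for name, p, cost in risks]
    for name, exposure in sorted(exposures, key=lambda t: t[1], reverse=True):
        print(f"{name}: exposure = {exposure:,.0f}")

Ranking by exposure makes the statistically-based decision mechanism mentioned above concrete: the risks at the top of the list deserve explicit mitigation plans.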
13.5 SUMMARY
Risk assessment concentrates on the determination of risks. The occurrence of risks depends on their nature, scope, and timing. The nature of a risk specifies the problems that arise when the risk occurs.
Risk control concentrates on the management of risks in order to minimize
or eliminate their effect.
Project risks are harmful for project plan as they affect the project schedule
and result in increased costs.
Risk mitigation minimizes the impact of risks. For this purpose, risk mitigation techniques are used, which are based on the occurrence of risks and their level of impact. Risk mitigation techniques incur additional costs in the form of the extra resources and time required to complete the project.
Risk monitoring includes milestone tracking, tracking the risks that have the greatest impact, continual risk re-assessment, and so on.
UNIT 14 QUALITY MANAGEMENT
14.0 INTRODUCTION
Nowadays, quality has become an important factor to be considered while developing software. This is because users want quality software that conforms to their requirements and is delivered within a specified time. Furthermore, users require software that is maintainable and minimizes
the time and cost for correcting problems in it. The objective of the software
development team is to design the software with minimum errors and required
functionality according to user specified requirements. Software quality depends
on various characteristics such as correctness, reliability, and efficiency. These
characteristics are used as a checklist to implement quality in software.
14.1 OBJECTIVES
14.2 QUALITY CONCEPTS
Comparison of product quality procedures with the established standards
while dealing with user complaints. Failure costs are further divided into the
following categories (see Figure 14.1).
o Internal failure costs: These costs are incurred before the software is
delivered to the user. When software is unable to perform properly due
to errors in it, internal failure costs are incurred for missed milestones
and the overtime done to complete these milestones within the specified
schedule. For example, internal failure costs are incurred in fixing errors
that are detected in regression testing.
o External failure costs: These costs are incurred after the software is
delivered to the user. When errors are detected after software is
delivered, a lot of time and money is consumed to fix the errors. For
example, external failure costs are incurred on the modification of
software, technical support calls, warranty costs, and other costs imposed
by law.
Software quality assurance (SQA) comprises various tasks that are responsible for ensuring quality. These tasks are assigned to software engineers and the SQA group. The common task of software engineers and the SQA group is to verify that the software is developed according to the user requirements and established standards.
SQA group comprises the quality head, quality analyst, and quality
control staff. This group is concerned with keeping track of responsibilities of
each team member and ensuring that tasks are being performed properly so that
high-quality software is developed. The quality head is responsible for checking that all the individuals of the SQA group are performing the assigned tasks correctly. The
quality analyst is responsible for establishing plans and procedures for evaluating
software quality. Furthermore, the quality analyst reviews the software project
and ensures that software quality is according to the established standards. Quality
control staff is responsible for checking the functionality and performance of
software by executing it and conducting different kinds of testing to detect errors.
Various functions performed by the SQA group are discussed here.
Preparation of SQA plan: This plan is made in the initial stages of
development (during process planning) and is reviewed by the management
who is responsible for making decisions and policies in an organization as
well as assigning and controlling tasks to be performed. SQA plan determines
the tasks that are to be carried out by software engineers and the SQA
group. In addition, it identifies the audits and reviews to be performed and
standards and documents to be produced by the SQA group.
Participation in the software development process description: A
process is chosen by the software developers while developing the software.
The SQA group assists the software development team by reviewing the
process description and checking whether it is in accordance with the user
requirements and established standards. The choice of process depends
on the complexity of the software project.
Review of activities for compliance with the defined software
process: The SQA group examines the process in order to record the
changes made to it. In addition, it ensures that project documentation is
updated to accommodate accurate records, to further ensure proper
understanding during software maintenance. With incorrect records, the
process cannot be evaluated accurately.
Verification of compliance with defined standards: The SQA group
assesses the process and compares it with established procedures and
standards so that quality is incorporated in the software during each phase
of software development.
Software Quality Assurance Plan
Quality planning is a structured process for defining the procedures and methods,
which are used to develop software. Quality planning starts in the early phases of
software development. The SQA plan defines the activities for the SQA group.
The objective of the SQA plan is to identify the quality assurance activities that are
followed by the SQA group. Generally, the plan comprises documentation such as
project plan, test plans and user documents. Figure 14.5 shows an SQA plan
designed by IEEE.
Software process improvement activities: Describes the activities
required for Software Process Improvement (SPI). It also outlines the
objectives and tasks of SPI.
Software configuration management (SCM) overview: Provides an overview of the SCM plan. This plan provides information such as a description of the configuration management process and procedures to perform SCM.
SQA tools, techniques, and methods: Describes the SQA tools,
techniques, and methods that are used by the SQA group.
Appendix: Provides additional information about SQA plan.
Submitting a profile to the management about the adherence to the defined software process
A review team performs software reviews (see Figure 14.7). The team
members that constitute a review team are listed below.
Author or producer: Develops the product and is responsible for making corrections in it after the review is over. The author raises issues concerning the product during the review meeting.
Moderator or review leader: The review leader performs the following
activities.
o Ensures that review procedures are performed throughout the review
meeting
o Ensures that the review team is performing their specified responsibilities
o Verifies the product for review
o Assembles an effective team and keeps review meetings on schedule
o Organizes a schedule of the product to be reviewed.
Recorder: Records important issues of the product, which arise during a
review meeting. A document is prepared containing the results of the review
meeting. This document includes type and source of errors.
Reviewer or inspector: Analyzes the product and prepares notes before
the review meeting begins. There can be more than one reviewer in the
team depending upon the size of the product to be reviewed.
Formal Technical Review (FTR)
A formal technical review (FTR) is a formal review that is performed by a review
team. It is performed in any phase of software development for any work product,
which may include requirements specification, design document, code, and test
plan. Each FTR is conducted as a meeting and is considered successful only if it is
properly planned, controlled, and attended. The objectives of FTR are listed below.
To detect errors in functioning of software and errors occurring due to
incorrect logic in software code
To check whether the product being reviewed accomplishes user
requirements
To ensure that a product is developed using established standards.
Review Guidelines
To conduct a formal technical review, there are some review guidelines. These
guidelines are established before a review meeting begins. The commonly followed
review guidelines are listed below.
Review the product: The focus of formal review should be to detect errors
in the product instead of pointing out mistakes (if any) committed by a team
member. The aim should be to conduct the review in harmony among all the
team members.
Set the agenda: Formal reviews should keep a track of the schedule. The
moderator is responsible for maintaining this schedule. For this purpose, he
ensures that each review team member is performing the assigned task
properly.
Keep track of discussion: During the review meeting, various issues arise
and it is possible that each review team member has a different view on an
issue. Such issues should be recorded for further discussion.
Advance preparation: Reviewers should make an advance preparation
for the product to be reviewed. For this purpose, the reviewers should note
the issues that can arise during the review meeting. Then, it is easy for them
to discuss the issues during the review meeting.
Indicate problems in the product: The objective of a review meeting
should be only to indicate the problems or errors. In case there are no
proper suggestions for the problems, a review meeting should be conducted
again.
Categorize the error: The errors detected in the software should be
classified according to the following categories.
o Critical errors: Refer to errors that bring the execution of the entire
software to a halt. Thus, critical errors need to be ‘fixed’ before the software is delivered.
o Major errors: Refer to errors that affect the functionality of programs during their execution. Like critical errors, major errors need to be fixed before the software is delivered.
o Minor errors: Refer to errors that do not affect the usability of the software
o No errors: Indicates that there are no errors in the software.
Prepare notes: The recorder, who is one of the reviewers, should keep a record of the issues in order to set priorities for the other reviewers as the information is recorded.
Specify the number of people: There should be a limited number of
individuals in the meeting that should be specified before the meeting begins.
Develop a checklist: A checklist should be maintained at the end of the
meeting. The checklist helps the reviewer to focus on important issues that
are discussed at the meeting. Generally, a checklist should be prepared for
analysis, design, and code documents.
Review Meeting
A review meeting is conducted to review the product in order to validate its quality.
Review team members examine the product to identify errors in it. As shown in
Figure 14.8, a successful review consists of a number of stages, which are described
here.
Planning: In this stage, the author studies the product that requires a review
and informs the moderator about it. The moderator verifies the product that
is to be examined. The verification is essential to determine whether the
product requires a review. After verification, the moderator assigns tasks to
the review team members. The objective of planning is to ensure that the
review process is followed in an efficient manner. In addition, it ensures that
a proper schedule is made to conduct an effective review.
Overview: In this stage, the product is analyzed in order to detect errors.
For this purpose, knowledge of the product is essential. In case reviewers
do not have proper knowledge of the product, the author explains the
functionality and the techniques used in the product.
Preparation: In this stage, each review team member examines the product
individually to detect and record problems (such as errors and defects) in
the product. There are several types of problems that are identified during
the preparation stage. Generally, the problems considered at this stage are
listed below.
o Clarity: User requirements are not understood properly
o Completeness: Details of user requirements are incomplete
o Consistency: Names of data structures and functions are used illogically
o Functionality: Functions, inputs, and outputs are not specified properly
o Feasibility: Constraints such as time, resources, and techniques are
not specified correctly.
Meeting: In this stage, the moderator reviews the agenda and issues related
to the product. The problems described in the overview stage are discussed
among review team members. Then, the recorder records the problems in
a defect list, which is used to detect and correct the errors later by the
author. The defect list is divided into the following categories.
o Accept the product: In this category, the author accepts the product
without the need for any further verification. This is because there are
no such problems that halt the execution of the product.
o Conditionally accept the product: In this category, the author accepts
the product, which requires verification. If there are problems in the
product, the next stage (rework stage) is followed.
o Re-examine the product: In this category, the author re-examines the
product to understand the problems in it. After rework, the product is
sent to the moderator again to verify that problems are eliminated.
Rework: In this stage, the author revises the problems that are identified
during the review meeting. He determines the problems in the product and
their causes with the help of a defect list. Then, he resolves the problems in
the product and brings it back to the moderator for the follow-up stage.
Follow-up: In this stage, the product is verified after the author has performed
rework on it. This is due to the fact that he may have introduced new errors
in the product during rework. In case there are still some errors in the
product, a review meeting is conducted again.
While the review is being conducted, the recorder records all the issues raised during the review meeting. After this task is complete, the review issues are summarized. These issues are recorded in two kinds of documents,
which are listed below.
Review issue list: This document is concerned with the identification of
problems in the product. It also acts as a checklist that informs the author
about the corrections made in the product.
Review summary report: This document focuses on information such as
the phase of software development that is reviewed, the review team member
who reviewed it, and conclusions of the review meeting. Generally, a review
summary report comprises a single page and is advantageous for future
reference by the software development team.
Cost Impact of Software Errors
Formal technical reviews are used for detecting errors during the development
process of the software. This implies that reviews detect errors and thereby reduce
the cost of software and increase its efficiency. Thus, the formal technical reviews
provide cost-effective software.
For example, Figure 14.9 (a) shows the errors present in software before
conducting a formal technical review. When FTR is conducted, there is reduction
in errors, as shown in Figure 14.9 (b). Here, approximately half of the errors are fixed, while some parts of the software still contain errors; new errors may also be introduced while existing errors are being fixed.
It is observed that FTR conducted in early stages of the software development
process reduces the time and cost incurred in detecting errors in software.
DMADV: It stands for define, measure, analyze, design, and verify. This approach is used while a software process is being developed in the organization. Various attributes and functions present in this approach are listed in Table 14.3.
Table 14.3 DMADV Methodology

Attributes    Functions
Define        Specify goals of the project and the user requirements.
Measure       Evaluate and determine the user requirements.
Analyze       Study process options to meet the user requirements.
Design        Design the process to avoid the root cause of defects.
Verify        Ascertain the design performance so that the user requirements are accomplished.

Note: In order to achieve Six Sigma, the errors should not be more than 3.4 per million opportunities.
next phase, which is useful life. As compared to the burn in phase, the useful life
phase is constant. The failure rate increases in the wear out phase. This is because
the hardware components wear out with time. As shown in Figure 14.10(b), the
rate of software failure as compared to Figure 14.10(a) is less and software failure
occurs largely in the integration and testing phase. As the software is tested, errors
are detected and removed. This results in decreasing the failure rate for the next
phase of the software known as useful life. The software does not wear out, but
becomes obsolete due to factors such as changes in the technology or the version,
increase in complexity, and so on.
Three approaches are used to improve the reliability of software. These
approaches are listed below.
Fault avoidance: The design and implementation phase of the software
development uses the process that minimizes the probability of faults before
the software is delivered to the user.
Fault detection and removal: Verification and validation techniques are
used to detect and remove faults. In addition, testing and debugging can
also remove faults.
Fault tolerance: The designed software manages the faults in such a way
that software failure does not occur. There are three aspects of fault tolerance.
o Damage assessment: This detects the parts of software affected due
to occurrence of faults.
o Fault recovery: This restores the software to the last known safe state.
Safe state can be defined as the state where the software functions as
desired.
o Fault repair: This involves modifying the software in such a way that
faults do not recur.
Software Reliability Models
A software reliability model is a mathematical analysis model used for measuring and assessing software quality and reliability quantitatively.
As shown in Figure 14.11, detection and removal of errors incur huge costs. When
the probability of software failure occurrence decreases, software reliability
increases. A model used to describe software reliability is known as Software
Reliability Growth Model (SRGM). The software reliability model evaluates
the level of software quality before the software is delivered to the user. The
objectives of software reliability models are listed below.
To evaluate the software quantitatively
To provide development status, test status, and schedule status
To monitor reliability performance and changes in reliability performance of
the software
To evaluate the maintenance cost for faults that are not detected during the
testing phase.
The software reliability models can be divided into two classes. The first
class deals with the designing and coding phases of software development by
analyzing the reliability factors of the product. The second class deals with the
testing phase. It describes the software failure-occurrence phenomenon or software
fault-detection phenomenon by applying statistics theories that estimate software
reliability. The software reliability models are broadly classified into two categories,
namely, dynamic models and static models.
rather than a direct measure. A measure similar to MTBF is ‘mean time to repair’ (MTTR), which is the average amount of time taken to repair the machine after a failure occurs. It can be combined with ‘mean time to failure’ (MTTF), which describes how long the software can be used, to calculate MTBF.
MTBF = MTTF + MTTR.
In addition to assessment of reliability, software availability is also calculated.
Availability of software is defined as the probability of software to operate and
deliver the desired request. Availability is an indirect measure to determine
maintainability. This is because it determines the probability that software is working
in accordance with the user requirements at a specified time. It is calculated from
probabilistic measures MTBF and MTTR. In the mathematical notation, availability
is defined as follows:
Availability = (MTTF / (MTTF + MTTR)) × 100% = (1 – MTTR/MTBF) × 100%.
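For instance, the following Python sketch computes MTBF and availability from hypothetical failure and repair times:

    # Reliability and availability; the times (in hours) are hypothetical.
    MTTF = 900    # mean time to failure
    MTTR = 100    # mean time to repair

    MTBF = MTTF + MTTR
    availability = (MTTF / (MTTF + MTTR)) * 100
    print(f"MTBF = {MTBF} hours, availability = {availability:.1f}%")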
Software Safety
Software safety is a software quality assurance activity that follows a systematic
approach to identify, analyze, record, and control software hazards to ensure that
software operates in an intended manner. A hazard is defined as a set of conditions that can lead to an accident. These conditions are often beyond the control of the software developer, that is, the developer may be unable to predict the conditions that can cause the software to stop functioning.
A software hazard occurs when a wrong input is given. Some hazards are
avoidable and can be eliminated by changing the design of the system while others
cannot be avoided and must be handled by the software. A technique known as fault-tree analysis is used to prevent hazards in the developed software. In this analysis, a detailed study is carried out to detect conditions that cause hazards.
Once the hazards are analyzed, the requirements for the software are specified.
It is important to note the difference between software safety and software
reliability. Software reliability uses statistical and mathematical methods to identify
software failure and it is not necessary that this occurrence of failure leads to a
hazardous situation. Software safety identifies the conditions in which these failures
lead to a hazardous situation.
14.8 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Software quality assurance is concerned with process quality and refers to
planned and systematic sets of activities, which ensure that software life
cycle processes and products conform to requirements, standards, and
procedures.
2. Costs incurred in performing quality-related activities are referred to as
quality costs.
3. Software reviews are systematic evaluation of the software in order to detect
errors.
4. A software reliability model is a mathematical analysis model used for measuring and assessing software quality and reliability quantitatively.
5. A formal technical review (FTR) is a formal review that is performed by a
review team.
14.9 SUMMARY
Quality refers to the features and characteristics of a product or service,
which define its ability to satisfy user requirements.
Software quality control is concerned with product quality and checks,
whether the product meets user requirements and is developed in accordance
with the established standards and procedures.
Software quality assurance is concerned with process quality and refers to
planned and systematic sets of activities, which ensure that software life
cycle processes and products conform to requirements, standards, and
procedures.
Costs incurred in performing quality-related activities are referred to as
quality costs.
Software quality assurance (SQA) comprises various tasks that are
responsible for ensuring quality. These tasks are assigned to software
engineers and the SQA group.
Quality planning is a structured process for defining the procedures and
methods, which are used to develop software. Quality planning starts in the
early phases of software development.
Software reviews are systematic evaluation of the software in order to detect
errors. These reviews are conducted in various phases of the software
development process such as analysis, design, and coding.
A formal technical review (FTR) is a formal review that is performed by a review team.
Long Answer Questions
1. What is software quality assurance? Explain.
2. Explain the various activities of SQA.
3. What do you understand by software reviews? Discuss their significance.
4. Explain the commonly used software quality approaches.
5. Explain the term software reliability.