UNIT-2 Requirement Engineering
Requirement Engineering
Requirements engineering (RE) refers to the process of defining, documenting, and maintaining requirements
in the engineering design process. Requirements engineering provides the appropriate mechanism to understand
what the customer desires, analyze the need, assess feasibility, negotiate a reasonable solution,
specify the solution clearly, validate the specification, and manage the requirements as they are
transformed into a working system. Thus, requirements engineering is the disciplined application of proven
principles, methods, tools, and notations to describe a proposed system's intended behavior and its associated
constraints. The requirements engineering process consists of the following activities:
1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management
1. Feasibility Study:
The objective of the feasibility study is to establish the reasons for developing software that is acceptable
to users, flexible to change, and conformable to established standards.
Types of Feasibility:
1. Technical Feasibility - Technical feasibility evaluates the current technologies, which are needed to
accomplish customer requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the extent to which the required software
will perform the steps needed to solve business problems and satisfy customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the necessary software can generate
financial profits for an organization.
2. Requirement Elicitation and Analysis:
This is also known as the gathering of requirements. Here, requirements are identified with the help of
customers and existing system processes, if available.
Analysis of requirements starts with requirement elicitation. The requirements are analyzed to identify
inconsistencies, defects, omissions, etc. We describe requirements in terms of relationships and also resolve
conflicts, if any.
The models used at this stage include ER diagrams, data flow diagrams (DFDs), function decomposition
diagrams (FDDs), data dictionaries, etc.
o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for modeling the requirements. DFD shows
the flow of data through a system. The system may be a company, an organization, a set of procedures, a
computer hardware system, a software system, or any combination of the preceding. The DFD is also known as a
data flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to store information about all data items defined in
DFDs. At the requirements stage, the data dictionary should at least define customer data items, to ensure that the
customer and developers use the same definition and terminologies.
o Entity-Relationship Diagrams: Another tool for requirement specification is the entity-relationship diagram,
often called an "E-R diagram." It is a detailed logical representation of the data for the organization and uses
three main constructs i.e. data entities, relationships, and their associated attributes.
New requirements emerge during the process as business needs change and a better understanding of the
system is developed.
The priority of requirements from different viewpoints changes during the development process.
The business and technical environment of the system changes during the development.
Requirements should be:
Clear
Correct
Consistent
Coherent
Comprehensible
Modifiable
Verifiable
Prioritized
Unambiguous
Traceable
Credible source
Software Requirements: Software requirements are largely categorized into two categories:
1. Functional Requirements: Functional requirements define a function that a system or system element
must be able to perform, and they must be documented in an appropriate form. Functional requirements
describe the behavior of the system as it relates to the system's functionality.
2. Non-functional Requirements: These are requirements that specify criteria for judging the operation
of a system, rather than specific behaviors of the system.
Non-functional requirements are divided into two main categories:
o Execution qualities like security and usability, which are observable at run time.
o Evolution qualities like testability, maintainability, extensibility, and scalability, which are embodied
in the static structure of the software system.
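The distinction between the two categories can be sketched in code. The login check below is a hypothetical example; the function name, credentials, and the 50 ms threshold are illustrative assumptions, not taken from any real system.

```python
import time

def authenticate(username, password, users):
    """Functional requirement: the system shall grant access only when
    the supplied credentials match a registered user."""
    return users.get(username) == password

users = {"alice": "s3cret"}

# Functional behavior: WHAT the system must do.
assert authenticate("alice", "s3cret", users) is True
assert authenticate("alice", "wrong", users) is False

# Non-functional requirement (an execution quality): a criterion on HOW
# well the operation performs, e.g. the check must finish within 50 ms.
start = time.perf_counter()
authenticate("alice", "s3cret", users)
elapsed_ms = (time.perf_counter() - start) * 1000
assert elapsed_ms < 50, "response-time (non-functional) requirement violated"
```

Note that the functional assertions test observable behavior, while the non-functional one tests a quality of the execution itself.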
Analysis Model is a technical representation of the system. It acts as a link between system description and
design model. In Analysis Modeling, information, behavior, and functions of the system are defined and
translated into the architecture, component, and interface level design in the design modeling.
Objectives of Analysis Modeling:
It must establish a way of creating software design.
It must describe the requirements of the customer.
It must define a set of requirements that can be validated, once the software is built.
Data Dictionary:
It is a repository that consists of a description of all data objects used or produced by the software.
It stores the collection of data present in the software. It is a very crucial element of the analysis
model. It acts as a centralized repository and also helps in modeling data objects defined during
software requirements.
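As a rough illustration, a data dictionary can be thought of as a lookup table of agreed definitions. The items below describe a hypothetical order-processing system and are assumptions for the sketch; in practice such repositories are usually maintained in CASE tools rather than code.

```python
# Hypothetical data dictionary entries: one agreed definition per data item,
# so that customer and developers share the same terminology.

data_dictionary = {
    "customer_id": {
        "type": "integer",
        "description": "Unique identifier assigned to each customer",
        "defined_in": "Customer Registration process",
        "used_by": ["Place Order", "Generate Invoice"],
    },
    "order_total": {
        "type": "decimal(10, 2)",
        "description": "Sum of line-item prices including tax",
        "defined_in": "Place Order process",
        "used_by": ["Generate Invoice", "Sales Report"],
    },
}

def lookup(item):
    """Return the single agreed definition of a data item."""
    if item not in data_dictionary:
        raise KeyError(f"'{item}' is not defined in the data dictionary")
    return data_dictionary[item]

print(lookup("customer_id")["description"])
```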
Entity Relationship Diagram (ERD):
It depicts the relationship between data objects and is used in conducting data modeling activities.
The attributes of each object in the Entity-Relationship Diagram can be described using Data
object description. It provides the basis for activity related to data design.
Data Flow Diagram (DFD):
It depicts the functions that transform the data flow, and it shows how data is transformed when
moving from input to output. It provides additional information that is used during the
analysis of the information domain and serves as a basis for the modeling of function. It also
enables the engineer to develop models of the functional and information domains at the same time.
State Transition Diagram:
It shows the various modes of behavior (states) of the system and the transitions from one
state to another. It provides the details of how the system behaves as a consequence of external
events: it represents the behavior of a system by presenting its states and the events that cause
the system to change state, and it describes what actions are taken as a result of a particular event.
Process Specification:
It stores the description of each function present in the data flow diagram. It describes the input to
a function, the algorithm that is applied for the transformation of the input, and the output that is
produced. It also shows the restrictions and constraints imposed on the performance characteristics
that are applicable to the process, and the layout constraints that could influence the way in which the
process will be implemented.
Control Specification:
It stores additional information about the control aspects of the software. It is used to indicate how
the software behaves when an event occurs and which processes are invoked due to the
occurrence of the event. It also provides the details of the processes which are executed to manage
events.
Data Object Description:
It stores and provides complete knowledge about a data object present and used in the software. It
also gives the details of the attributes of the data object present in the Entity Relationship Diagram.
Hence, it incorporates all the data objects and their attributes.
A Data Flow Diagram (DFD) is a traditional visual representation of the information flows within a system. A
neat and clear DFD can depict the right amount of the system requirement graphically. It can be manual,
automated, or a combination of both.
It shows how data enters and leaves the system, what changes the information, and where data is stored.
The objective of a DFD is to show the scope and boundaries of a system as a whole. It may be used as a
communication tool between a system analyst and any person who plays a part in the system, and it acts as a
starting point for redesigning a system. The DFD is also called a data flow graph or bubble chart.
1. All names should be unique. This makes it easier to refer to elements in the DFD.
2. Remember that a DFD is not a flow chart. Arrows in a flow chart represent the order of events;
arrows in a DFD represent flowing data. A DFD does not involve any order of events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a DFD, suppress
that urge! A diamond-shaped box is used in flow charts to represent decision points with multiple exit
paths, of which only one is taken. This implies an ordering of events, which makes no sense in a
DFD.
4. Do not become bogged down with details. Defer error conditions and error handling until the end of the
analysis.
Standard symbols for DFDs are derived from the electric circuit diagram analysis and are shown in fig:
Circle: A circle (bubble) shows a process that transforms data inputs into data outputs.
Data Flow: A curved line shows the flow of data into or out of a process or data store.
Data Store: A set of parallel lines shows a place for the collection of data items. A data store indicates that
the data is stored and can be used at a later stage or by other processes. The data store can contain an
element or a group of elements.
Source or Sink: Source or Sink is an external entity and acts as a source of system inputs or sink of system
outputs.
0-level DFD
It is also known as the fundamental system model or context diagram. It represents the entire software
requirement as a single bubble, with input and output data denoted by incoming and outgoing arrows. The system
is then decomposed and described as a DFD with multiple bubbles. Parts of the system represented by each of
these bubbles are then decomposed and documented as more and more detailed DFDs. This process may be repeated
at as many levels as necessary until the program at hand is well understood. It is essential to preserve the
number of inputs and outputs between levels; this concept is called leveling by DeMarco. Thus, if bubble "A"
has two inputs x1 and x2 and one output y, then the expanded DFD that represents "A" should have exactly two
external inputs and one external output, as shown in fig:
The Level-0 DFD, also called the context diagram, of the result management system is shown in fig. As the
bubbles are decomposed into less and less abstract bubbles, the corresponding data flows may also need to be
decomposed.
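DeMarco's leveling rule can be checked mechanically: the expanded DFD for a bubble must keep the same external inputs and outputs as the bubble it refines. The bubble and flow names below are hypothetical.

```python
# Parent bubble "A" from the higher-level DFD: two inputs, one output.
parent_bubble = {"inputs": {"x1", "x2"}, "outputs": {"y"}}

# Expanded (child) DFD for bubble "A": internal flows between sub-bubbles
# plus the flows that cross the diagram boundary.
child_dfd = {
    "external_inputs":  {"x1", "x2"},
    "external_outputs": {"y"},
    "internal_flows":   {("A1", "A2"), ("A2", "A3")},
}

def is_balanced(parent, child):
    """A leveled (balanced) decomposition preserves the external interface."""
    return (parent["inputs"] == child["external_inputs"]
            and parent["outputs"] == child["external_outputs"])

assert is_balanced(parent_bubble, child_dfd)
```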
1-level DFD
In a 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this level, we highlight
the main objectives of the system and break down the high-level process of the 0-level DFD into subprocesses.
2-Level DFD
2-level DFD goes one process deeper into parts of 1-level DFD. It can be used to project or record the
specific/necessary detail about the system's functioning.
Decision Table
A decision table is a brief visual representation for specifying which actions to perform depending on
given conditions. The information represented in decision tables can also be represented as decision
trees or in a programming language using if-then-else and switch-case statements.
A decision table is a good way to deal with different combinations of inputs and their corresponding
outputs, and it is also called a cause-effect table. It is called a cause-effect table because of a related
logical diagramming technique called cause-effect graphing that is basically used to derive the decision
table.
Any complex business flow can be easily converted into test scenarios & test cases using this
technique.
Decision tables work iteratively, which means the table created in the first iteration is used as
input for the next tables. The iteration is done only if the initial table is not satisfactory.
They are simple to understand, and everyone can use this method to design test scenarios and test
cases.
They provide complete coverage of test cases, which helps reduce the rework on writing test
scenarios and test cases.
These tables guarantee that we consider every possible combination of condition values. This
is known as the completeness property.
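As the text notes, a decision table maps directly onto if-then-else logic; it can equally be represented as a lookup table, which also makes the completeness property easy to check. The loan-approval conditions and actions below are hypothetical.

```python
from itertools import product

decision_table = {
    # (has_steady_income, credit_score_ok): action
    (True,  True):  "approve",
    (True,  False): "manual_review",
    (False, True):  "manual_review",
    (False, False): "reject",
}

def decide(has_steady_income, credit_score_ok):
    """Table lookup replaces a nest of if-then-else statements."""
    return decision_table[(has_steady_income, credit_score_ok)]

# Completeness property: every possible combination of condition values
# must appear in the table.
assert set(decision_table) == set(product([True, False], repeat=2))

print(decide(True, False))  # manual_review
```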
The product of the requirements phase of the software development process is the Software Requirements
Specification (SRS) (also called a requirements document). This report lays a foundation for software
engineering activities and is constructed once the entire set of requirements has been elicited and analyzed.
The SRS is a formal report that acts as a representation of the software, enabling the customers to review
whether it (the SRS) is according to their requirements. It also comprises the user requirements for a system
as well as detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of applications that perform
particular functions in a specific environment. It serves several goals depending on who is writing it. First,
the SRS could be written by the client of a system. Second, the SRS could be written by a developer of the
system. The two cases create entirely different situations and establish different purposes for the document.
In the first case, the SRS is used to define the needs and expectations of the users. In the second case, the
SRS is written for various purposes and serves as a contract document between customer and developer.
Characteristics of good SRS
(1). Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS. The SRS is
said to be correct if it covers all the needs that are truly expected from the system.
(2).Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). All essential requirements, whether relating to functionality, performance, design, constraints, attributes, or
external interfaces.
(2). Definition of the responses of the software to all realizable classes of input data in all available
categories of situations.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all terms and
units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it
conflicts. There are three types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in another as textual.
(b) One condition may state that all lights shall be green while another states that all lights shall be blue.
(2). There may be a logical or temporal conflict between two specified actions. For example,
(a) One requirement may determine that the program will add two inputs, and another may determine that the
program will multiply them.
(b) One condition may state that "A" must always follow "B," while another requires that "A" and "B" occur simultaneously.
(3). Two or more requirements may define the same real-world object but use different terms for that object. For
example, a program's request for user input may be called a "prompt" in one requirement and a "cue" in
another. The use of standard terminology and descriptions promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This
suggests that each element is uniquely interpreted. If a term is used with multiple meanings, the
requirements document should clarify its intended meaning in the SRS so that it is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if each requirement
in it has an identifier to indicate either the significance or stability of that particular requirement.
Typically, all requirements are not equally important. Some requirements may be essential, especially for
life-critical applications, while others may be desirable. Each requirement should be identified to make these
differences clear and explicit. Another way to rank requirements is to distinguish classes of requirements as
essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be capable of quickly
incorporating changes to the system to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a cost-effective
process to determine whether the final software meets those requirements. The requirements are verified with
the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the
referencing of each condition in future development or enhancement documentation.
1. Backward Traceability: This depends upon each requirement explicitly referencing its source in earlier
documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name or reference
number.
The forward traceability of the SRS is especially crucial when the software product enters the operation and
maintenance phase. As code and design documents are modified, it is necessary to be able to ascertain the
complete set of requirements that may be affected by those modifications.
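A traceability record can be sketched as a simple mapping: each requirement carries a unique ID and a reference to its source (backward traceability), plus links forward to the artifacts that implement and test it. The requirement IDs, file names, and test-case IDs below are hypothetical.

```python
# Hypothetical traceability records for two requirements.
requirements = {
    "REQ-01": {"source": "Customer interview notes",
               "implemented_in": ["login.py"], "tested_by": ["TC-101"]},
    "REQ-02": {"source": "Feasibility report",
               "implemented_in": ["report.py"], "tested_by": []},
}

def impact_of_change(artifact):
    """Forward traceability: which requirements are affected if this
    artifact is modified during maintenance?"""
    return [rid for rid, r in requirements.items()
            if artifact in r["implemented_in"]]

def untested():
    """Requirements with no linked test case yet."""
    return [rid for rid, r in requirements.items() if not r["tested_by"]]

print(impact_of_change("login.py"))  # ['REQ-01']
print(untested())                    # ['REQ-02']
```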
9. Design Independence: There should be an option to select from multiple design alternatives for the final
system. More specifically, the SRS should not contain any implementation details.
10. Testability: An SRS should be written in such a method that it is simple to generate test cases and test plans
from the report.
11. Understandable by the customer: An end user may be an expert in his/her own domain but might not
be trained in computer science. Hence, the use of formal notations and symbols should be avoided as far
as possible. The language should be kept simple and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details should be
explained explicitly. Whereas, for a feasibility study, less detail can be used. Hence, the level of abstraction
varies according to the objective of the SRS.
Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and complete.
Verbose and irrelevant descriptions decrease readability and also increase error possibilities.
Black-box view: It should only define what the system should do and refrain from stating how to do it. This
means that the SRS document should specify the external behavior of the system and not discuss the
implementation issues. The SRS report should view the system to be developed as a black box and should
specify the externally visible behavior of the system. For this reason, the SRS report is also known as the
black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: It should characterize acceptable responses to unwanted events. These are called
system response to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be correct. This means
that it should be possible to decide whether or not requirements have been met in an implementation.
Disadvantage of SQA:
There are a number of disadvantages of quality assurance. These include the need to add more resources and
to employ more workers to help maintain quality.
Verification is the process of checking that software achieves its goal without any bugs. It is the process to
ensure whether the product that is developed is right or not. It verifies whether the developed product fulfills
the requirements that we have. Verification is static testing.
Verification:
- It includes checking documents, design, code, and programs.
- It does not include the execution of the code.
- Methods used in verification are reviews, walkthroughs, inspections, and desk-checking.
- It can find bugs in the early stages of development.
Validation:
- It includes testing and validating the actual product.
- It includes the execution of the code.
- Methods used in validation are Black Box Testing, White Box Testing, and non-functional testing.
- It can only find the bugs that could not be found by the verification process.
Software Quality Framework is a model for software quality by connecting and integrating the
different views of software quality. This framework connects the customer view with the developer
view of software quality and it treats software as a product. The software product view describes the
characteristics of a product that bear on its ability to satisfy stated and implied needs.
This is a framework that describes all the different concepts relating to quality in a common way,
measured on a qualitative scale that can be understood and interpreted in a common way. Therefore, the
most influential factor for the developers is the customer's perception. This framework connects the
developer with the customer to derive a common interpretation of quality.
1. Developers View:
Validation and verification are two independent methods used together for checking that a software
product meets the requirements and that it fulfills its intended purpose. Validation checks that the
product design satisfies the purposeful usage and verification checks for errors in the software. The
primary concern for developers is in the design and engineering processes involved in producing
software. Quality can be measured by the degree of conformance to predetermined requirements and
standards, and deviations from these standards can lead to poor quality and low reliability. While
validation and verification are used by the developers to improve the software, the two methods don’t
represent a quantifiable quality measurement.
The developer's view of software quality and the customer's view of software quality are different
things.
For example the customer understands or describes the quality of operation as meeting the
requirement while the developers use different factors to describe the software quality.
The developer view of quality in the software is influenced by many factors.
This model stresses three primary ones:
1. The code:
It is measured by its correctness and reliability.
2. The data:
It is measured by the application integrity.
3. Maintainability:
It has different measures; the simplest is the mean time to change.
2. Users View:
When the user acquires software, he/she always expects high-quality software. When end users develop their
own software, quality is different. End-user programming is programming done to achieve the result of a
program primarily for personal, rather than public, use. The important distinction here is that the software
itself is not primarily intended for use by a large number of users with varying needs.
For example, a teacher may write a spreadsheet to track students' test scores. In these end-user programming
situations, the program is a means to an end that could be used to accomplish a goal. In contrast to end-user
programming, professional programming has the goal of producing software for others to use.
For example, the moment a novice Web developer moves from designing a web page for himself to designing
a Web page for others, the nature of this activity has changed.
Users see software quality as a fit between their goals and the software's functionality. The better the
quality, the more likely the user will be satisfied with the software. When the quality is bad, developers
must meet user needs or face diminishing demand for their software. Therefore, the user understands quality
as fitness for purpose. Avoiding complexity and keeping software simple considerably lessens the
implementation risk of software. In some instances, users have abandoned the implementation of complex
software because the software developers expected the users to change their business to go with the way the
software works.
Product View:
The product view describes quality as correlated to the inherent characteristics of the product. Product
quality is defined as the set of characteristics and features of a product that contribute to its ability to
fulfill given requirements. Product quality can be measured by the value-based view, which sees quality as
dependent on the amount a customer is willing to pay for it. According to users, a high-quality product is
one that satisfies their expectations and preferences while meeting their requirements. Satisfied end users
find the product easy to learn, use, and upgrade, and give it a positive rating when asked to rate the
product.
ISO 9000 is defined as a set of international standards on quality management and quality assurance developed
to help companies effectively document the quality system elements needed to maintain an efficient quality
system. They are not specific to any one industry and can be applied to organizations of any size.
ISO 9000 can help a company satisfy its customers, meet regulatory requirements, and achieve continual
improvement. It should be considered to be a first step or the base level of a quality system.
ASQ is the only place where organizations can obtain the American National Standard Institute
(ANSI) versions of these standards in the ISO 9000 family.
ISO 9000 history and revisions: ISO 9000:2000, 2008, and 2015
ISO 9000 was first published in 1987 by the International Organization for Standardization (ISO), a specialized
international agency for standardization composed of the national standards bodies of more than 160 countries.
The standards underwent revisions in 2000 and 2008. The most recent versions of the standard, ISO
9000:2015 and ISO 9001:2015, were published in September 2015.
ASQ administers the U.S. Technical Advisory Groups and subcommittees that are responsible for developing
the ISO 9000 family of standards. In its standards development work, ASQ is accredited by ANSI.
ISO 9000:2000
ISO 9000:2000 refers to the ISO 9000 update released in the year 2000. The ISO 9000:2000 revision had five
goals.
ISO 9000:2000 was again updated in 2008 and 2015. ISO 9000:2015 is the most current version.
ISO 9000:2015 principles of Quality Management
The ISO 9000:2015 and ISO 9001:2015 standards are based on seven quality management principles that senior
management can apply to promote organizational improvement.
1. Customer focus
2. Leadership
3. Engagement of people
4. Process approach
5. Improvement
6. Evidence-based decision making
7. Relationship management
Identify and select suppliers to manage costs, optimize resources, and create value
Establish relationships considering both the short and long term
Share expertise, resources, information, and plans with partners
Collaborate on improvement and development activities
Recognize supplier successes
Software Engineering Institute Capability Maturity Model (SEICMM)
The Capability Maturity Model (CMM) is a procedure used to develop and refine an organization's software
development process.
The model defines a five-level evolutionary stage of increasingly organized and consistently more mature
processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development
center sponsored by the U.S. Department of Defense (DOD).
Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software
process.
Methods of SEICMM
There are two methods of SEICMM:
Capability Evaluation: Capability evaluation provides a way to assess the software process capability of an
organization. The results of a capability evaluation indicate the likely contractor performance if the
contractor is awarded the work. Therefore, the results of the software process capability assessment can be
used to select a contractor.
Software Process Assessment: Software process assessment is used by an organization to improve its process
capability. Thus, this type of evaluation is for purely internal use.
SEI CMM categorizes software development organizations into the following five maturity levels. The various
levels of SEI CMM have been designed so that it is easy for an organization to slowly build its quality
system starting from scratch.
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no processes
are defined and followed. Since the software production processes are not defined, different engineers follow
their own processes and, as a result, development efforts become chaotic. Therefore, it is also called the
chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are established.
Size and cost estimation methods, such as function point analysis, COCOMO, etc., are used.
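The COCOMO estimation mentioned here can be sketched briefly. The coefficients below are the standard basic-COCOMO constants; the 32-KLOC project size is a hypothetical input chosen for illustration.

```python
# Basic COCOMO: effort = a * (KLOC)^b person-months, tdev = c * effort^d months.
COCOMO_CONSTANTS = {
    # mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COCOMO_CONSTANTS[mode]
    effort = a * kloc ** b
    tdev = c * effort ** d
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} person-months, Tdev: {tdev:.1f} months")
```

For a 32-KLOC organic-mode project this gives roughly 91 person-months of effort over about 14 months.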
Level 3: Defined
At this level, the methods for both management and development activities are defined and documented. There
is a common organization-wide understanding of activities, roles, and responsibilities. Though the processes
are defined, the process and product qualities are not measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the features of the product being developed, such as its size, reliability, time
complexity, understandability, etc.
Process metrics reflect the effectiveness of the process being used, such as the average defect correction
time, productivity, the average number of defects found per hour of inspection, the average number of
failures detected during testing per LOC, etc. The software process and product quality are measured, and
quantitative quality requirements for the product are met. Various tools like Pareto charts, fishbone
diagrams, etc. are used to measure product and process quality. The process metrics are used to analyze
whether a project performed satisfactorily. Thus, the outcome of process measurements is used to evaluate
project performance rather than improve the process.
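The Level-4 process metrics above are simple ratios over project records; the inspection and testing figures below are hypothetical data invented for the sketch.

```python
# Hypothetical project records.
defect_fix_hours = [4.0, 2.5, 6.0, 3.5]   # hours spent correcting each defect
defects_found = 18                        # defects found during inspection
inspection_hours = 6                      # total hours spent inspecting
failures_in_testing = 9
lines_of_code = 4500

# Average defect correction time (hours per defect).
avg_defect_correction_time = sum(defect_fix_hours) / len(defect_fix_hours)

# Average number of defects found per hour of inspection.
defects_per_inspection_hour = defects_found / inspection_hours

# Failures detected during testing, normalized per KLOC.
failures_per_kloc = failures_in_testing / (lines_of_code / 1000)

print(avg_defect_correction_time)    # 4.0
print(defects_per_inspection_hour)   # 3.0
print(failures_per_kloc)             # 2.0
```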
Level 5: Optimizing
At this phase, process and product metrics are collected. Process and product measurement data are evaluated
for continuous process improvement.
Except for SEI CMM level 1, each maturity level is characterized by several Key Process Areas (KPAs) that
identify the areas an organization should focus on to improve its software process to the next level. The
focus of each level and the corresponding key process areas are shown in the fig.
SEI CMM provides a series of key areas on which to focus to take an organization from one level of maturity to
the next. Thus, it provides a method for gradual quality improvement over various stages. Each step has been
carefully designed such that one step enhances the capability already built up.