SE Comp Software Engg Insem Notes by Imp Solution Hub

The document provides an overview of software engineering, defining it as the study of designing, developing, and maintaining software to ensure quality and efficiency. It discusses the layered technology of software development, outlining key processes, methods, and tools, as well as various software process models like Waterfall, V-Model, Incremental, and RAD. Additionally, it addresses the software crisis, common myths in software development, and the importance of quality assurance and project management.


2019 PATTERN

Prepared by: Prof. V. K. Wani, SNJB's KBJ CoE, Chandwad

Unit-1
Introduction to Software engineering
What is Software engineering?
Definition: Software engineering is the application of engineering principles to the design, development, and
maintenance of software. Software engineering was introduced to address the issues of low-quality software
projects. Problems arise when software exceeds timelines and budgets or is delivered with reduced quality. Software
engineering ensures that the application is built consistently, correctly, on time, on budget, and within requirements.

Software Engineering is a Layered Technology

Software engineering is a layered technology: to develop software, one has to move from one layer to the next.
The layers are related, and each layer rests on the fulfillment of the layer below it. The figure below shows the
layers of software development, from the bottom up.

1. A Quality Focus: Software engineering must rest on an organizational commitment to quality. Total quality
management and similar philosophies foster a continuous process improvement culture, and this culture
ultimately leads to the development of increasingly more mature approaches to software engineering. The
bedrock that supports software engineering is a quality focus.
2. Process: The foundation for software engineering is the process layer. Process defines a framework for a set
of Key Process Areas (KPAs) that must be established for effective delivery of software engineering technology.
This establishes the context in which technical methods are applied, work products such as models, documents,
data, reports, forms, etc. are produced, milestones are established, quality is ensured, and change is properly
managed.
3. Methods: Software engineering methods provide the technical how-to's for building software. Methods
include requirements analysis, design, program construction, testing, and support. They rely on a set of basic
principles that govern each area of the technology and include modeling activities and other descriptive
techniques.
4. Tools: Software engineering tools provide automated or semi-automated support for the process and the
methods. When tools are integrated so that information created by one tool can be used by another, a system
for the support of software development, called computer-aided software engineering, is established.

Software Process:
Process defines a framework for a set of Key Process Areas (KPAs) that must be established for effective
delivery of software engineering technology. This establishes the context in which technical methods are
applied, work products such as models, documents, data, reports, forms, etc. are produced, milestones are
established, quality is ensured, and change is properly managed.

Software Process Framework:
A process framework establishes the foundation for a complete software process by identifying a small number
of framework activities that are applicable to all software projects, regardless of size or complexity. It also
includes a set of umbrella activities that are applicable across the entire software process. The most commonly
applied framework activities are described below.

1. Communication: This activity involves heavy communication with customers and other stakeholders in
order to gather requirements and other related activities.
2. Planning: Here a plan to be followed will be created which will describe the technical tasks to be
conducted, risks, required resources, work schedule etc.
3. Modeling: A model will be created to better understand the requirements and design to achieve these
requirements.
4. Construction: Here the code will be generated and tested.
5. Deployment: Here, a complete or partially complete version of the software is delivered to the
customers, who evaluate it and provide feedback based on that evaluation.
These five activities can be used in any kind of software development. The details of the
software development process may differ somewhat, but the framework activities remain the same.

Umbrella activities

1. Software project tracking and control : Tracking and Control is the dual process of detecting when a
project is drifting off-plan, and taking corrective action to bring the project back on track. But a successful
project manager will also be able to tell when the plan itself is faulty, and even re-plan the project and its goals if
necessary.
2. Formal technical reviews : Work products and the techniques used in the project are reviewed to uncover
and remove errors before they propagate to the next activity.
3. Software quality assurance : Activities required to define and maintain the quality of each part of the software.
4. Software configuration management : In software engineering, software configuration management (SCM
or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary
field of configuration management. SCM practices include revision control and the establishment of baselines.
5. Document preparation and production : Work products such as plans, models, documents, logs, forms, and
lists are prepared and produced as part of every project activity.
6. Reusability management : This defines criteria for reusing work products (including software components)
and establishes mechanisms so that parts of the software can be corrected, supported, updated, or upgraded
later at user/time demand.
7. Measurement : This defines and collects measures of every aspect of the software process and product that
assist the team in delivering software that meets the customers' needs.
8. Risk management : Risk management is a series of steps that help a software team to understand and
manage uncertainty. For each risk, the team identifies it, assesses its probability of occurrence, estimates its impact,
and establishes a contingency plan should the problem actually occur.

Software crisis
Software crisis is a term used in the early days of computing science for the difficulty of writing useful and
efficient computer programs in the required time. The software crisis was due to the rapid increases in
computer power and the complexity of the problems that could now be tackled. With the increase in the
complexity of the software, many software problems arose because existing methods were insufficient.
The causes of the software crisis were linked to the overall complexity of hardware and the software
development process. The crisis manifested itself in several ways:

 Projects running over-budget
 Projects running over-time
 Software was very inefficient
 Software was of low quality
 Software often did not meet requirements
 Projects were unmanageable and code difficult to maintain
 Software was never delivered

Software Myths:
The development of software requires dedication and understanding on the developers' part. Many software
problems arise due to myths that are formed during the initial stages of software development. Unlike ancient
folklore that often provides valuable lessons, software myths propagate false beliefs and confusion in the minds
of management, users and developers.

Table: Management Myths

Myth: The members of an organization can acquire all the information they require from a manual that contains
standards, procedures, and principles.
Reality: Standards are often incomplete, inadaptable, and outdated. Developers are often unaware of all the
established standards, and they rarely follow all the known standards because not all the standards tend to
decrease the delivery time of software while maintaining its quality.

Myth: If the project is behind schedule, increasing the number of programmers can reduce the time gap.
Reality: Adding more manpower to a project that is already behind schedule delays the project further. New
workers take longer to learn about the project as compared to those already working on the project.

Myth: If the project is outsourced to a third party, the management can relax and let the other firm develop the
software for them.
Reality: Outsourcing software to a third party does not help an organization that is incompetent in managing
and controlling the software project internally. The organization invariably suffers when it outsources the
software project.

In most cases, users tend to believe myths about the software because software managers and developers do
not try to correct the false beliefs. These myths lead to false expectations and ultimately develop dissatisfaction
among the users. Common user myths are listed in the table below.

Table: User Myths

Myth: A brief statement of requirements in the initial process is enough to start development; detailed
requirements can be added at the later stages.
Reality: Starting development with incomplete and ambiguous requirements often leads to software failure.
Instead, a complete and formal description of requirements is essential before starting development. Adding
requirements at a later stage often requires repeating the entire development process.

Myth: Software is flexible; hence software requirement changes can be added during any phase of the
development process.
Reality: Incorporating change requests earlier in the development process costs less than incorporating them at
later stages, because incorporating changes later may require redesigning and extra resources.

In the early days of software development, programming was viewed as an art, but now software development
has gradually become an engineering discipline. However, developers still believe in some myths. Some of the
common developer myths are listed in the table below.

Table: Developer Myths

Myth: Software development is considered complete when the code is delivered.
Reality: 50% to 70% of all the effort is expended after the software is delivered to the user.

Myth: The success of a software project depends on the quality of the product produced.
Reality: The quality of the programs is not the only factor that makes the project successful; the documentation
and software configuration also play a crucial role.

Myth: Software engineering requires unnecessary documentation, which slows down the project.
Reality: Software engineering is about creating quality at every level of the software project. Proper
documentation enhances quality, which results in reducing the amount of rework.

Myth: The only product that is delivered after the completion of a project is the working program(s).
Reality: The deliverables of a successful project include not only the working program but also the
documentation to guide the users in using the software.

Myth: Software quality can be assessed only after the program is executed.
Reality: The quality of software can be measured during any phase of the development process by applying a
quality assurance mechanism. One such mechanism is the formal technical review, which can be used effectively
during each phase of development to uncover certain errors.

Software Process Model :

To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a
development strategy that encompasses the process, methods, and tools layers and the generic phases. It gives a
workflow. A process model describes the sequence of phases for the entire lifetime of a product; therefore it is
sometimes also called the product life cycle. This covers everything from the initial commercial idea until the final
de-installation or disassembly of the product after its use.

1. Waterfall Model

The waterfall model is a sequential approach in which each fundamental activity of a process is represented as a
separate phase, arranged in linear order. In the waterfall model, you must plan and schedule all of the activities
before starting to work on them (a plan-driven process). A plan-driven process is a process where all the activities
are planned first, and the progress is measured against the plan. In an agile process, by contrast, planning is
incremental and it's easier to change the plan to reflect requirement changes. The phases of the waterfall
model are: Requirements, Design, Implementation, Testing, and Maintenance.

The Nature of Waterfall Phases

In principle, the result of each phase is one or more documents that should be approved and the next phase
shouldn’t be started until the previous phase has completely been finished.
In practice, however, these phases overlap and feed information to each other. For example, during design,
problems with requirements can be identified, and during coding, some of the design problems can be found,
etc. The software process is therefore not simply linear but involves feedback from one phase to another. So,
documents produced in each phase may then have to be modified to reflect the changes made.

2. V- model
It is the Verification and Validation model. Just like the waterfall model, the V-shaped life cycle is a sequential path of
execution of processes. Each phase must be completed before the next phase begins. The V-Model is one of the
many software development models. Testing of the product is planned in parallel with a corresponding phase of
development in the V-model.

1. Requirements like BRS and SRS begin the life cycle model just like the waterfall model. But, in this model
before development is started, a system test plan is created. The test plan focuses on meeting the
functionality specified in the requirements gathering.
2. The high-level design (HLD) phase focuses on system architecture and design. It provides an overview of the
solution, platform, system, product and service/process. An integration test plan is created in this phase as
well, in order to test the ability of the pieces of the software system to work together.
3. The low-level design (LLD) phase is where the actual software components are designed. It defines the
actual logic for each and every component of the system. Class diagram with all the methods and relation
between classes comes under LLD. Component tests are created in this phase as well.
4. The implementation phase is, again, where all coding takes place. Once coding is complete, the path of
execution continues up the right side of the V where the test plans developed earlier are now put to use.
5. Coding: This is at the bottom of the V-Shape model. Module design is converted into code by developers.
Unit Testing is performed by the developers on the code written by them.

The Testing included in V Model are

1. Integration testing: Integration Test Plans are developed during the Architectural Design Phase. These
tests verify that units created and tested independently can coexist and communicate among themselves.
Test results are shared with customer's team.
2. Unit testing: In the V-Model, Unit Test Plans (UTPs) are developed during module design phase. These
UTPs are executed to eliminate bugs at code level or unit level. A unit is the smallest entity which can
independently exist, e.g. a program module. Unit testing verifies that the smallest entity can function
correctly when isolated from the rest of the codes/units.

Figure: V Model

3. System testing: System Test Plans are developed during the System Design Phase. Unlike Unit and Integration
Test Plans, System Test Plans are composed by the client's business team. System testing ensures that
expectations from the developed application are met. The whole application is tested for its functionality,
interdependency and communication. System Testing verifies that functional and non-functional
requirements have been met. Load and performance testing, stress testing, regression testing, etc., are
subsets of system testing.
4. User acceptance testing: User Acceptance Test (UAT) Plans are developed during the Requirements
Analysis phase. Test Plans are composed by business users. UAT is performed in a user environment that
resembles the production environment, using realistic data. UAT verifies that the delivered system meets the user's
requirements and that the system is ready for use in real time.

3. Incremental Model

The incremental model applies the waterfall model incrementally. The series of releases is referred to as
“increments”, with each increment providing more functionality to the customers. After the first increment, a
core product is delivered, which can already be used by the customer. Based on customer feedback, a plan is
developed for the next increments, and modifications are made accordingly. This process continues, with
increments being delivered until the complete product is delivered. The incremental philosophy is also used in
the agile process model (see agile modeling).

Characteristics of Incremental Model

1. The system is broken down into many mini development projects.
2. Partial systems are built to produce the final system.
3. The highest-priority requirements are tackled first.
4. The requirements of an increment are frozen once that increment is developed.

4. RAD Model
Rapid Application Development (RAD) is an incremental software development process model which is a “high-
speed” adaptation of the linear sequential model in which rapid development is achieved by using component-
based construction. If requirements are well understood and project scope is constrained, the RAD process
enables a development team to create a “fully functional system” within very short time periods, such as in 60 to
90 days.

1. Communication :
This step works to understand the business problems and the information characteristics that the software
must accommodate.
2. Planning : This is very important as multiple teams work on different systems.
3. Modeling : Modeling includes the major phases, like business, data, process modeling and establishes
design representation that serves as the basis for RAD’s construction activity.
4. Construction : This includes the use of preexisting software components and the application of automatic
code generation.
5. Deployment : Deployment establishes a basis for the subsequent repetitions, if required.

Problems with the RAD model : Like other models, the RAD approach has drawbacks.
1. For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD
teams.
2. If developers and customers are not committed to the rapid fire activities, RAD projects will fail.
3. RAD may not be appropriate when technical risks are high.
4. If a system can not be properly modularized, building the components will be problematic.
5. If high performance is to be achieved through tuning the interfaces to system components, the RAD
approach may not work.


5. Prototyping

A prototype is a version of a system or part of the system that’s developed quickly to check the customer’s
requirements or feasibility of some design decisions. So, a prototype is useful when a customer or developer is
not sure of the requirements, or of algorithms, efficiency, business rules, response time, etc. In prototyping, the
client is involved throughout the development process, which increases the likelihood of client acceptance of
the final implementation. While some prototypes are developed with the expectation that they will be
discarded, it is possible in some cases to evolve from prototype to working system.

A software prototype can be used:

[1] In the requirements engineering, a prototype can help with the elicitation and validation of system
requirements. It allows the users to experiment with the system, and so, refine the requirements. They may get
new ideas for requirements, and find areas of strength and weakness in the software. Furthermore, as the
prototype is developed, it may reveal errors and omissions in the requirements. The specification may then be modified to
reflect the changes.
[2] In the system design, a prototype can help to carry out design experiments to check the feasibility of a
proposed design.
For example, a database design may be prototyped and tested to check that it supports efficient data access for the
most common user queries.

1. Quick Plan : Based on the requirements and the other outcomes of the communication activity, a quick plan is made to
design the software.
2. Modeling Quick Design : Based on the quick plan, a quick design occurs. The quick design focuses on a
representation of those aspects of the software that will be visible to the customer/user, such as input approaches and
output formats.
3. Construction of Prototype : The quick design leads to the construction of a prototype.
4. Deployment, delivery and feedback : The prototype is evaluated by the customer/user and used to refine
requirements for the software to be developed. All these steps are repeated to tune the prototype to satisfy the user's
needs, and at the same time they enable the developer to better understand what needs to be done.

6. Spiral Model :

The spiral model is an evolutionary software process model that combines the iterative nature of prototyping
with the controlled and systematic aspects of the linear sequential model. Using the spiral model, software is
developed in a series of incremental releases. During early iterations, the incremental release might be a paper
model or prototype. During later iterations, increasingly more complete versions of the engineered system are
produced. A spiral model is divided into a number of framework activities, also called task regions. Typically,
there are between three and six task regions. The figure shows a spiral model that contains five task regions:

1. Customer communication — Tasks required to establish effective communication between developer and
customer.
2. Planning: Tasks required to define resources, timelines, and other project related information.
3. Modeling: Tasks required in building one or more representations of the application.
4. Construction and release: Tasks required to construct, test, and install the software.
5. Deployment : Tasks required to deliver the software, getting feedbacks etc.

Software engineering team moves around the spiral in a clockwise direction, beginning at the center. The first
circuit around the spiral might result in the development of a product specification; subsequent passes around
the spiral might be used to develop a prototype and then progressively more sophisticated versions of the
software. Each pass through the planning region results in adjustments to the project plan. Cost and schedule
are adjusted based on feedback derived from customer evaluation. In addition, the project manager adjusts the
planned number of iterations required to complete the software.
The spiral model is a realistic approach to the development of large-scale systems and software. The spiral
model enables the developer to apply the prototyping approach at any stage in the evolution of the product. The
spiral model demands a direct consideration of technical risks at all stages of the project and, if properly
applied, should reduce risks before they become problematic. It demands considerable risk assessment
expertise and relies on this expertise for success.


7. Concurrent Development Model


The concurrent development model, sometimes called concurrent engineering, allows a software team to
represent iterative and concurrent elements of any of the process models.
For example, the modeling activity defined for the spiral model is accomplished by invoking one or more of the
software engineering actions: prototyping, analysis, and design. The activity—modeling—may be in any one of
the states noted at any given time. Similarly, other activities, actions, or tasks (e.g., communication or
construction) can be represented in a similar manner. All software engineering activities exist concurrently
but reside in different states.

For example, early in a project the communication activity (not shown in the figure) has completed its first
iteration and exists in the awaiting changes state. The modeling activity, which existed in the inactive state
while initial communication was being completed, now makes a transition into the under development state. If,
however, the customer indicates that changes in requirements must be made, the modeling activity moves from
the under development state into the awaiting changes state. Concurrent modeling defines a series of events
that will trigger transitions from state to state for each of the software engineering activities, actions, or tasks

Cleanroom Software Engineering


Cleanroom software engineering is a process for developing high-quality software with certified reliability.
Originally developed by Harlan Mills, the "cleanroom" name was borrowed from the electronics industry, where
clean rooms help prevent defects during fabrication. In that sense, cleanroom software engineering focuses on
defect prevention, rather than defect removal, and on formal verification of a program's correctness. In the
Cleanroom reference model, Cleanroom differs from other formal methods in that it doesn't require
mathematically defined requirements—those stated in plain English are adequate. These requirements are
divided into tagged statements for traceability. The process of tagging requirements in small verifiable
statements allows for tracing and verification of each requirement throughout the process. Moreover, since
attempts to document requirements are likely to have errors, inconsistencies, and omissions, Cleanroom refines
many of these through the "Box Structure Development Method," a process that treats software as a set of
communicating state machines that separate behavioral and implementation concerns

About Clean Room Engineering

 An approach to build correctness into the software being developed.
 Removes dependencies on costly processes.
 The main concept is to write correct code right from the first line.
 It takes software development to the next level.
 In the traditional approach, software failures can lead to hazards.
 Cleanroom software engineering aims to guarantee that the software will not lead to hazards.
 Cost effective and time effective.
 A systematic and disciplined approach.
 Removes possible errors at an early stage.
 Proposed by Mills, Dyer and Linger, but it gained little popularity.
 Further research suggests three possible reasons for this lack of popularity:
1. It is theoretical and mathematical.
2. It offers no testing.
3. It imposes a rigorous process in its development stages.

Software quality assurance (SQA)

It consists of a means of monitoring the software engineering processes and methods used to ensure quality.
The methods by which this is accomplished are many and varied, and may include ensuring conformance to one
or more standards, such as ISO 9000 or a model such as CMMI. SQA encompasses the entire software
development process, which includes processes such as requirements definition, software design, coding,
source code control, code reviews, software configuration management, testing, release management, and
product integration. SQA is organized into goals, commitments, abilities, activities, measurements, and
verifications. SQA is a planned and systematic approach to the evaluation of the quality of, and adherence to,
software product standards, processes, and procedures. It also assures that standards and procedures are
established and followed throughout the software development process. Evaluation is done using methods such as
continuous monitoring, product evaluation, and auditing.

Types of Standard
1. Documentation Standard
2. Design Standard

3. Code Standard

Documentation Standard:
 Specify Form and Content for Planning, Control, and Product.
 Documentation Provide Consistency throughout a System
 Documentation can be written in any form
 Each Practice should be Documented so it can be Repeated or Changed later if needed
Design Standards
 Specify the Content and Form of how Design Documents are Developed
 Provide Rules and Methods to Transfer:
1. Software Requirements to Software Design
2. Software Design into Software Design Documentation
 Many Major Companies have Design Development Software to aid in the Process
Code Standards
 Specify what Language the Code is written in and Define any Restrictions on Language Features
 Code Standards Define:
• Legal Language Structures
• Style Conventions
• Rules for Data Structures and Interfaces
• Internal Code Documentation
 Using Methods such as “Peer Reviews”, “Buddy Checks”, and Code Analysis can Enforce Standards
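
As a small illustration (a hypothetical check, not a description of any particular tool), the following Python sketch shows how two simple code standards — a maximum line length and a snake_case naming rule for functions — could be enforced automatically alongside peer reviews and code analysis:

# Hypothetical code-standard checker: flags over-long lines and function names
# that do not follow a snake_case naming convention.
import re

MAX_LINE_LENGTH = 79                              # assumed style rule
FUNC_DEF = re.compile(r"^\s*def\s+([A-Za-z_]\w*)\s*\(")
SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_source(lines):
    violations = []
    for number, line in enumerate(lines, start=1):
        if len(line.rstrip("\n")) > MAX_LINE_LENGTH:
            violations.append((number, "line exceeds maximum length"))
        match = FUNC_DEF.match(line)
        if match and not SNAKE_CASE.match(match.group(1)):
            violations.append((number, f"function '{match.group(1)}' is not snake_case"))
    return violations

sample = ["def ProcessOrder(order):\n", "    return order\n"]
for number, message in check_source(sample):
    print(f"line {number}: {message}")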

Software Quality Framework


 McCall's Quality Factor
 ISO 9126 Quality Factor

1. McCall's Quality Factor

A quality factor represents a behavioral characteristic of a system. The following is a list of quality factors:

1.Correctness:
 A software system is expected to meet the explicitly specified functional requirements and the
implicitly expected non-functional requirements.
 If a software system satisfies all the functional requirements, the system is said to be correct.
2.Reliability
 Customers may still consider an incorrect system to be reliable if the failure rate is very small and it
does not adversely affect their mission objectives.

 Reliability is a customer perception, and an incorrect software can still be considered to be reliable.
3.Efficiency:
 Efficiency concerns to what extent a software system utilizes resources, such as computing power,
memory, disk space, communication bandwidth, and energy.
 A software system must utilize as few resources as possible to perform its functionalities.
4.Integrity:
 A system’s integrity refers to its ability to withstand attacks to its security.
 In other words, integrity refers to the extent to which access to software or data by unauthorized
persons or programs can be controlled.
5.Usability:
 Software is considered to be usable if human users find it easy to use.
 Without a good user interface a software system may fizzle out even if it possesses many desired
qualities.
6.Maintainability:
 Maintenance refers to the upkeep of products in response to deterioration of their components due to
continuous use of the products.
 Maintainability refers to how easily and inexpensively the maintenance tasks can be performed.
 For software products, there are three categories of maintenance activities : corrective, adaptive and
perfective maintenance.
7.Testability:
 Testability means the ability to verify requirements. At every stage of software development, it is
necessary to consider the testability aspect of a product.
 To make a product testable, designers may have to instrument a design with functionalities not available
to the customer.
8.Flexibility:
 Flexibility is reflected in the cost of modifying an operational system.
 In order to measure the flexibility of a system, one has to find an answer to the question: how easily can
one add a new feature to the system?
9.Portability
 Portability of a software system refers to how easily it can be adapted to run in a different execution
environment.
 Portability gives customers an option to easily move from one execution environment to another to best
utilize emerging technologies in furthering their business.
10.Reusability
 Reusability means that a significant portion of one product can be reused, possibly with minor
modifications, in another product.
 Reusability saves the cost and time to develop and test the component being reused.
11.Interoperability :
 Interoperability refers to whether or not the output of one system is acceptable as input to another
system; typically, the two systems run on different computers interconnected by a network.
 An example of interoperability is the ability to roam from one cellular phone network in one country to
another cellular network in another country.

ISO 9126 Quality Factor

This section describes the ISO 9126 standard and gives a detailed description of the software quality model used by it.
ISO 9126 is an international standard for the evaluation of software. The standard is divided into four parts
which address, respectively, the following subjects: the quality model, external metrics, internal metrics, and
quality-in-use metrics.
These characteristics are broken down into sub-characteristics; it is at the sub-characteristic level that
measurement for software process improvement (SPI) will occur. The main characteristics of the ISO 9126-1 quality
model can be defined as follows.

1. Functionality: Functionality is the essential purpose of any product or service. For certain items this is
relatively easy to define, for example a ship's anchor has the function of holding a ship at a given location. The
more functions a product has (e.g. an ATM), the more complicated it becomes to define its
functionality. For software, a list of functions can be specified; e.g. a sales order processing system should be
able to record customer information so that it can be used to reference a sales order. A sales order system
should also provide the following functions:
 Record sales order product, price and quantity.
 Calculate total price.
 Calculate appropriate sales tax.
 Calculate date available to ship, based on inventory.
 Generate purchase orders when stock falls below a given threshold.
2. Reliability : Once a software system is functioning as specified and has been delivered, the reliability characteristic
defines the capability of the system to maintain its service provision under defined conditions for defined
periods of time. One aspect of this characteristic is fault tolerance that is the ability of a system to withstand
component failure. For example if the network goes down for 20 seconds then comes back the system should be
able to recover and continue functioning.
3. Usability : Usability only exists with regard to functionality and refers to the ease of use for a given function.
For example a function of an ATM machine is to dispense cash as requested. Placing common amounts on the
screen for selection, i.e. $20.00, $40.00, $100.00 etc, does not impact the function of the ATM but addresses the
Usability of the function. The ability to learn how to use a system (learnability) is also a major sub-
characteristic of usability.
4. Efficiency: This characteristic is concerned with the system resources used when providing the required
functionality. The amount of disk space, memory, network etc. provides a good indication of this characteristic.
As with a number of these characteristics, there are overlaps. For example the usability of a system is influenced
by the system's Performance, in that if a system takes 3 hours to respond the system would not be easy to use
although the essential issue is a performance or efficiency characteristic.
5. Maintainability: The ability to identify and fix a fault within a software component is what the
maintainability characteristic addresses. In other software quality models this characteristic is referenced as
supportability. Maintainability is impacted by code readability or complexity as well as modularization.
Anything that helps with identifying the cause of a fault and then fixing the fault is the concern of
maintainability. Also the ability to verify (or test) a system, i.e. testability, is one of the sub characteristics of
maintainability.
6. Portability : This characteristic refers to how well the software can adapt to changes in its environment or
with its requirements. The sub characteristics of this characteristic include adaptability. Object oriented design
and implementation practices can contribute to the extent to which this characteristic is present in a given
system.


Unit-2
Requirement Analysis

Requirements Engineering
Requirements engineering helps software engineers better understand the problems they are
trying to solve. The requirements engineering process begins with inception, moves on to elicitation, elaboration,
negotiation, and specification, and ends with review or validation of the specification.

1. Inception The overriding goal of the inception phase is to achieve concurrence among all
stakeholders on the lifecycle objectives for the project. This includes establishing the project's software scope
and boundary conditions, an operational vision, acceptance criteria, and what is intended
to be in the product and what is not. Software engineers use context-free questions to establish a
basic understanding of the problem, the people who want a solution, the nature of the solution, and
the effectiveness of the collaboration between customers and developers.
2. Elicitation Requirement elicitation is the process of collecting the requirements of a system or
requirement gathering from user, customers and stakeholders by conducting meetings, interviews,
questionnaires, brainstorming sessions, prototyping etc. It finds out from customers what the
product objectives are, what is to be done, how the product fits into business needs, and how the
product is used on a day-to-day basis.
3. Elaboration During the Elaboration phase the project team is expected to capture a healthy
majority of the system requirements. However, the primary goals of Elaboration are to address
known risk factors and to establish and validate the system architecture. It focuses on developing a
refined technical model of software function, behavior, and information
4. Negotiation This includes discussion to rank the requirements and to decide priority, risk, project cost, etc.
The requirements are categorized and organized into subsets, relations
among requirements are identified, requirements are reviewed for correctness, and requirements are prioritized
based on customer needs.
5. Specification The goal of the specification phase is to produce a specification document,
graphical model, or mathematical model that will serve as the contract between the system
builder and the client. This document must be understandable by both sides. It should be precise
enough for the developer to use. It must contain acceptance criteria, and it also includes the written work
products produced describing the function, performance, and development constraints for a
computer-based system.
6. Requirements validation Formal technical reviews are used to examine the specification work
products to ensure requirement quality and that all work products conform to agreed-upon
standards for the process, project, and products. In
software project management, software testing, and software engineering, verification and
validation (V&V) is the process of checking that a software system meets specifications and that it
fulfills its intended purpose. It may also be referred to as software quality control.
7. Requirements management During software development, new requirements emerge and existing
requirements change during all the stages of the SDLC (Software Development Life Cycle). System
developers and managers want to minimize these negative impacts to maintain the quality of the
software system. To minimize these negative impacts, a process is needed for documenting the
changes and controlling the effects of changes; this process is called requirement
management. Requirement management is a process in which changes to requirements are
documented and controlled. It comprises activities that help the project team identify, control, and track
requirements and changes as the project proceeds, similar to software configuration management
(SCM) techniques. Teams also use a traceability table, which shows how a change in one particular
requirement will affect other parts of the system.

Figure: Traceability Table
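
As a minimal sketch of the idea behind a traceability table (the requirement IDs, design elements, and test cases below are hypothetical), the following Python fragment maps each requirement to the work products that depend on it, so the impact of changing one requirement can be looked up:

# Hypothetical traceability table: requirement -> dependent work products.
traceability = {
    "REQ-01": {"design": ["LoginModule"], "tests": ["TC-01", "TC-02"]},
    "REQ-02": {"design": ["ReportModule"], "tests": ["TC-05"]},
}

def impact_of_change(requirement_id):
    """Return every work product that must be reviewed if this requirement changes."""
    entry = traceability.get(requirement_id, {})
    return entry.get("design", []) + entry.get("tests", [])

print(impact_of_change("REQ-01"))   # ['LoginModule', 'TC-01', 'TC-02']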

Prioritizing Requirements
Requirement prioritization is used in Software product management for determining which candidate
requirements of a software product should be included in a certain release. Requirements are also prioritized to
minimize risk during development so that the most important or high risk requirements are implemented first.

Cost Value Approach: A good and relatively easy to use method for prioritizing software product requirements
is the cost-value approach. This approach was created by Joachim Karlsson and Kevin Ryan. The approach was
then further developed and commercialized in the company Focal Point (that was acquired by Telelogic in
2005). Their basic idea was to determine for each individual candidate requirement what the cost of
implementing the requirement would be and how much value the requirement has. The assessment of values
and costs for the requirements was performed using the Analytic Hierarchy Process (AHP). This method was
created by Thomas Saaty. Its basic idea is that for all pairs of (candidate) requirements a person assesses a
value or a cost comparing the one requirement of a pair with the other. For example, a value of 3 for (Req1,
Req2) indicates that requirement 1 is valued three times as high as requirement 2. Trivially, this indicates that

(Req2, Req1) has value ⅓. In the approach of Karlsson and Ryan, five steps for reviewing candidate
requirements and determining a priority among them are identified. These are summed up below. [3]

1. Requirement engineers carefully review candidate requirements for completeness and to ensure that
they are stated in an unambiguous way.
2. Customers and users (or suitable substitutes) apply AHP’s pairwise comparison method to assess the
relative value of the candidate requirements.
3. Experienced software engineers use AHP’s pairwise comparison to estimate the relative cost of
implementing each candidate requirement.
4. A software engineer uses AHP to calculate each candidate requirement’s relative value and
implementation cost, and plots these on a cost-value diagram. Value is depicted on the y axis of this
diagram and estimated cost on the x-axis.
5. The stakeholders use the cost-value diagram as a conceptual map for analyzing and discussing the
candidate requirements. Now software managers prioritize the requirements and decide which will be
implemented.
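
To make steps 2–4 concrete, the following Python sketch (a minimal illustration; the requirement names and judgment values are hypothetical) builds an AHP-style pairwise comparison matrix for three candidate requirements and approximates the priority vector by normalizing each column and averaging each row:

# AHP-style pairwise comparison sketch (hypothetical requirements and values).
# A judgment of 3.0 for (Req1, Req2) means Req1 is valued three times as high as Req2.
requirements = ["Req1", "Req2", "Req3"]
comparisons = {
    ("Req1", "Req2"): 3.0,
    ("Req1", "Req3"): 5.0,
    ("Req2", "Req3"): 2.0,
}

n = len(requirements)
# Build the full reciprocal matrix: A[i][i] = 1 and A[j][i] = 1 / A[i][j].
matrix = [[1.0] * n for _ in range(n)]
for (a, b), value in comparisons.items():
    i, j = requirements.index(a), requirements.index(b)
    matrix[i][j] = value
    matrix[j][i] = 1.0 / value

# Approximate the AHP priority vector: normalize each column, then average each row.
col_sums = [sum(matrix[r][c] for r in range(n)) for c in range(n)]
priorities = [sum(matrix[r][c] / col_sums[c] for c in range(n)) / n for r in range(n)]

for req, p in sorted(zip(requirements, priorities), key=lambda pair: -pair[1]):
    print(f"{req}: relative value {p:.2f}")

The same calculation would be repeated with cost judgments to obtain relative implementation costs, and the two resulting vectors would then be plotted against each other in the cost-value diagram used in step 5.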

Now, the cost-value approach and the prioritizing of requirements in general can be placed in its
context of Software product management. As mentioned earlier, release planning is part of this
process. Prioritization of software requirements is a sub process of the release planning process.

The release planning process consists of the sub processes:

1. Prioritize requirements
2. Select requirements
3. Define release requirements
4. Validate release requirements
5. Prepare launch

Kano Model: Noriaki Kano developed the Kano analysis model in the late 1980s to identify and contrast
essential customer requirements from incremental requirements. One of his goals was to initiate critical
thinking about the nature of requirements. His characterization approach can be used to drive prioritization of
software requirements.
Kano analysis allows us to prioritize requirements as a function of customer satisfaction.
Kano defined four categories into which each feature or requirement can be classified (an Apple®
iPod® is used for examples in each of the following four requirement categories):
1.Surprise and delight. Capabilities that differentiate a product from its competition (e.g. the iPod
nav-wheel).
2.More is better. Dimensions along a continuum with a clear direction of increasing utility (e.g.
battery life or song capacity).
3.Must be. Functional barriers to entry—without these capabilities, customers will not use the
product (e.g. UL approval).
4.Better not be. Represents things that dissatisfy customers (e.g. inability to increase song capacity
via upgrades).

Surprise and delight requirements: For just a moment, think about software as a user, not an accountant. We
want software that is interesting and fun to use. Affordances in the user interface that allow us to just “do what
comes naturally” and have the software do exactly what we want. New ideas that make software better. We’re
not talking about a button that pops up dancing squirrels when clicked, rather valuable features that make
software great.

Great examples of valuable features include:


• The nav-wheel on the iPod, as a good hardware example.

• Google’s Gmail™ use of labels instead of folders for organizing email, as a good software example.

• Contextual help buttons that open to exactly the right page in a Help file.

More is better requirements: These are the most easily grasped concepts—bigger, faster, better, stronger. The
challenge in writing a more is better requirement is in knowing when enough is enough. Requirements such as
“minimize” or “maximize” are ambiguous. What is the theoretical minimum response time for a search engine?
Does it take a few hundred micro-seconds for the bits to travel from the server to the user, plus a few micro-
seconds for switch latency, plus a few nano-seconds for a CPU to find the answer? It would be completely
impractical to unambiguously request that our developers minimize search time. Specifying precise objectives
can be very difficult as well. The law of diminishing returns comes into play. There is a concept in economics
called utility which represents the tangible and intangible benefits of something. We can consider the utility of a
feature with respect to the target users.
Must be requirements: Must be requirements are the easiest to elicit and are the ones that most people
consider when they talk about requirements. Stakeholders can usually tell us what they must have in the
software. In the Am I hot or not? post on requirements prioritization, the company 37signals focuses on this as
its primary criterion for inclusion in a software initial release. They choose to only put essential, or must be
requirements, into the initial release of software.
Better not be requirements : This is just the opposite of surprise and delight. If dreamers think about what
makes something great, then critics complain about what holds it back. This bucket does not have a place in
Kano’s analysis. Saying, “Users don’t like confusing navigation,” does not provide any benefit relative to saying,
“Users prefer intuitive navigation.” We suggest not using this category at all.

Figure Kano Model
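
The four categories can be roughly operationalized in code. The Python sketch below is an illustration only: the scoring scale, the thresholds, and the example features are assumptions, and a real Kano analysis would normally be based on paired functional/dysfunctional survey questions rather than two ad-hoc scores.

# Hypothetical Kano-style classification sketch.
# Each feature gets two assumed scores:
#   present - satisfaction when the capability is present (negative = it annoys users)
#   absent  - dissatisfaction when the capability is missing (0-5)
def kano_category(present, absent):
    if present < 0:                       # actively dissatisfies users
        return "Better not be"
    if present >= 4 and absent <= 1:      # delights users, but nobody misses it yet
        return "Surprise and delight"
    if present <= 1 and absent >= 4:      # taken for granted, painful if missing
        return "Must be"
    return "More is better"               # satisfaction grows with the capability

features = {
    "One-click playlist shuffle": (5, 1),
    "Battery life": (3, 3),
    "Plays standard audio files": (1, 5),
    "Unskippable startup ad": (-2, 0),
}

for name, (present, absent) in features.items():
    print(f"{name}: {kano_category(present, absent)}")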

What is UML?
UML can be described as a general purpose visual modelling language to visualize, specify, construct, and
document software system. UML was created by the Object Management Group (OMG) and UML 1.0
specification draft, was proposed to the OMG in January 1997. OMG is continuously making efforts to create a

truly industry standard. UML is a pictorial language used to make software blueprints. The main types of UML
diagram are:

1. A structure diagram is a conceptual modeling tool used to document the different structures that
make up a system such as a database or an application. It shows the hierarchy or structure of the
different components or modules of the system and shows how they connect and interact with
each other.
2. Behavioral diagrams visualize, specify, construct, and document the dynamic aspects of a system.
The behavioral diagrams are categorized as follows: use case diagrams, interaction diagrams, state
chart diagrams, and activity diagrams.
3. Interaction diagrams illustrate how objects interact via messages. They are used for dynamic
object modeling. There are two common types: sequence and communication interaction diagrams.

Based on the above three types, UML diagrams are further classified into the specific diagrams described below.

Relationship in UML Diagrams : Relationship is another most important building block of UML. It shows how
the elements are associated with each other and this association describes the functionality of an application

1. Dependency
2. Association
3. Generalization
4. Realization
5. Aggregation
6. Composition

1. Aggregation: An aggregation relationship depicts a classifier as a part of, or as subordinate to, another
classifier.
2. Association : An association relationship is a structural relationship between two model elements that
shows that objects of one classifier (actor, use case, class, interface, node, or component) connect and can
navigate to objects of another classifier. Even in bidirectional relationships, an association connects two
classifiers, the primary (supplier) and secondary (client).
a. Association Role: Appears near the end of the association on the diagram to show the role of the object.
b. Association Name: Identifies the association. Also appears on the diagram near the mid-point of the
association.

c. Multiplicity: Allows one to specify cardinality, i.e. the number of elements of some collection of elements. A
multiplicity element defines some collection of elements, and includes both the multiplicity as well as the
specification of the order and uniqueness of the collection elements.
3. Composition: A composition relationship represents a whole–part relationship and is a type of aggregation.
A composition relationship specifies that the lifetime of the part classifier is dependent on the lifetime of the
whole classifier.
4. Dependency: A dependency relationship indicates that changes to one model element (the supplier or
independent model element) can cause changes in another model element (the client or dependent model
element). The supplier model element is independent because a change in the client does not affect it. The
client model element depends on the supplier because a change to the supplier affects the client.
5. Generalization: A generalization relationship indicates that a specialized (child) model element is based on
a general (parent) model element. Although the parent model element can have one or more children, and
any child model element can have one or more parents, typically a single parent has multiple children. In
UML 2.0, several classes can constitute a generalization set of another class. Generalization relationships
appear in class, component, and use-case diagrams.
6. Realization: A realization relationship exists between two model elements when one of them must realize,
or implement, the behavior that the other specifies. The model element that specifies the behavior is the
supplier, and the model element that implements the behavior is the client. In UML 2.0, this relationship is
normally used to specify those elements that realize or implement the behavior of a component.
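
These relationships can be related to ordinary code. The following Python sketch (the class names are hypothetical and chosen only for illustration) shows how composition, aggregation, generalization, and dependency typically appear in an implementation:

# Hypothetical classes illustrating how common UML relationships map to code.
class Engine:                       # part in a composition
    def start(self):
        print("engine started")

class Wheel:                        # part in an aggregation
    pass

class Navigator:                    # used only transiently (dependency)
    def next_waypoint(self):
        return "depot"

class Vehicle:                      # generalization: the general (parent) classifier
    def __init__(self):
        # Composition: the Engine's lifetime is tied to the Vehicle that owns it.
        self.engine = Engine()
        self.wheels = []

    def attach_wheels(self, wheels):
        # Aggregation: Wheels exist independently and are merely referenced here.
        self.wheels = list(wheels)

class Car(Vehicle):                 # generalization: Car is a specialized Vehicle
    def drive(self, navigator):
        # Dependency: Car uses a Navigator object only within this method.
        self.engine.start()
        print("heading to", navigator.next_waypoint())

spare_wheels = [Wheel() for _ in range(4)]   # created outside the Vehicle
car = Car()
car.attach_wheels(spare_wheels)
car.drive(Navigator())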

Use Case Diagram


A use case represents user interaction with the system. It is used to gather the requirements of a
system and to get an outside view of a system. It can identify the external and internal factors
influencing the system and can show the interaction between the requirements and the actors.

Activity Diagram: Activity diagram is another important diagram in UML to describe the dynamic
aspects of the system. Activity diagram is basically a flowchart to represent the flow from one activity
to another activity. The activity can be described as an operation of the system. The control flow is
drawn from one operation to another. An activity in Unified Modeling Language (UML) is a major task
that must take place in order to fulfill an operation contract. Activities can be represented in activity
diagrams. An activity can represent: The invocation of an operation. A step in a business process.

Swim Lane Diagram: Swim lane diagrams are similar to activity diagrams with some variation. They show the
flow of activities described by use cases and also describe which actors are involved in
specific functions; the activities of different users are separated into different lanes.

Class Diagram:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static
structure diagram that describes the structure of a system by showing the system's classes, their
attributes, operations (or methods), and the relationships among objects. Class diagram describes the
attributes and operations of a class and also the constraints imposed on the system. The class
diagrams are widely used in the modeling of object-oriented systems because they are the only UML
diagrams which can be mapped directly to object-oriented languages.
Class. A class represents a relevant concept from the domain, a set of persons, objects, or ideas that
are depicted in the IT system.

Package diagram
A package diagram is used to simplify complex class diagrams: you can group classes into packages. A
package is a collection of logically related UML elements. The diagram below is a business model in
which the classes are grouped into packages: Packages appear as rectangles with small tabs at the top.
A package in the Unified Modeling Language is used "to group elements, and to provide a namespace
for the grouped elements". A package may contain other packages, thus providing for a hierarchical
organization of packages. Pretty much all UML elements can be grouped into packages.

State Chart Diagram
A state diagram, also called a state machine diagram or statechart diagram, is an illustration of the
states an object can attain as well as the transitions between those states in the Unified Modeling
Language (UML). Purpose of Statechart Diagrams. Statechart diagram is one of the five UML diagrams
used to model the dynamic nature of a system. They define different states of an object during its
lifetime and these states are changed by events. Statechart diagrams are useful to model the reactive
systems.
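
Because a statechart is essentially a set of states plus event-driven transitions, it maps naturally onto code. The minimal Python sketch below uses a hypothetical Order object with made-up states and events to show one way such a diagram can be encoded:

# Hypothetical statechart sketch: an Order object whose state changes on events.
class Order:
    # Allowed transitions: (current_state, event) -> next_state
    TRANSITIONS = {
        ("created", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("shipped", "deliver"): "delivered",
        ("created", "cancel"): "cancelled",
        ("paid", "cancel"): "cancelled",
    }

    def __init__(self):
        self.state = "created"            # initial state

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event '{event}' not allowed in state '{self.state}'")
        self.state = self.TRANSITIONS[key]

order = Order()
for event in ["pay", "ship", "deliver"]:
    order.handle(event)
    print(order.state)                    # paid, then shipped, then delivered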

Sequence Diagram
Sequence diagrams are sometimes called event diagrams or event scenarios. A sequence diagram shows, as parallel vertical lines (lifelines), the different processes or objects that live simultaneously and, as horizontal arrows, the messages exchanged between them in the order in which they occur. The sequence diagram is a good diagram for documenting a system's requirements and for fleshing out a system's design. The reason the sequence diagram is so useful is that it shows the interaction logic between the objects in the system in the time order in which the interactions take place. Sequence diagrams can also be useful references for businesses and other organizations.

Try drawing a sequence diagram to:

 Represent the details of a UML use case.
 Model the logic of a sophisticated procedure, function, or operation.
 See how objects and components interact with each other to complete a process.
 Plan and understand the detailed functionality of an existing or future scenario.
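
As a rough code analogue (an illustration, not a UML artifact), the time-ordered messages of a sequence diagram correspond to the order of calls between collaborating objects. The sketch below assumes a hypothetical cash-withdrawal scenario with Customer, ATM and Bank lifelines; all names and the authorization rule are made up for illustration.

# Minimal sketch: the message order of a hypothetical "withdraw cash" sequence.
# Each print marks a message arrow between lifelines (Customer, ATM, Bank).
class Bank:
    def authorize(self, account, amount):
        print("ATM -> Bank : authorize(account, amount)")
        return amount <= 5000                      # assumed limit, for illustration

class ATM:
    def __init__(self, bank):
        self.bank = bank

    def withdraw(self, account, amount):
        print("Customer -> ATM : withdraw(account, amount)")
        if self.bank.authorize(account, amount):
            print("ATM -> Customer : dispense(amount)")
        else:
            print("ATM -> Customer : reject()")

atm = ATM(Bank())
atm.withdraw("AC-101", 2000)   # the printed messages appear in the same time order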


State Machine Diagram


A state machine diagram is a behavior diagram that shows the discrete behavior of a part of a designed system through finite state transitions. State machine diagrams can also be used to express the usage protocol of part of a system.

Data Modeling
Data modeling is a modeling technique used in software engineering to produce a conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion. It includes the study of all data objects, their attributes, the relationships that hold between the data, and so on.
1. Data Object: A data object is a region of storage that contains a value or group of values. Each value can be accessed using its identifier or a more complex expression that refers to the object. In addition, each object has a unique data type.
2. Data Attribute: In computing, an attribute is a specification that defines a property of an object, element, or file. It may also refer to, or set, the specific value for a given instance of such. For clarity, attributes should more correctly be considered metadata. An attribute is frequently and generally a property of a property.
3. Relationship: A relationship represents how data objects are related to one another.
4. Cardinality and Modality: Cardinality is similar to multiplicity; it specifies the number of occurrences of one object that can be related to occurrences of another (for example, one-to-one, one-to-many or many-to-many). Modality indicates whether the relationship is optional or mandatory: if an occurrence of the relationship is optional, the modality is 0; if it is mandatory, the modality is 1.
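
The sketch below expresses these ideas in code for a hypothetical Customer-places-Order model (the object names, attributes and the one-to-many rule are assumptions for illustration): each class is a data object, its fields are attributes, and the list of orders held by a customer captures the relationship along with its cardinality and modality.

# Minimal sketch of a data model: data objects, attributes, relationship,
# cardinality and modality for a hypothetical "Customer places Order" model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:                      # data object
    order_id: int                 # attribute (also the identifier)
    amount: float                 # attribute

@dataclass
class Customer:                   # data object
    customer_id: int              # attribute (identifier)
    name: str                     # attribute
    # Relationship "places": one customer places many orders.
    # Cardinality: one-to-many; modality 0 on the Order side (a customer may
    # exist with no orders, so the relationship is optional here).
    orders: List[Order] = field(default_factory=list)

c = Customer(customer_id=1, name="Asha")
c.orders.append(Order(order_id=101, amount=1499.0))
print(c.name, "has", len(c.orders), "order(s)")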

Data & Control Flow Modeling

Flow-oriented modeling represents how data objects are transformed as they move through the system, and a data flow diagram (DFD) is the diagrammatic form that is used. Although considered by many to be an 'old school' approach, flow-oriented modeling continues to provide a view of the system that is unique, and it should be used to supplement the other analysis model elements.

Data Flow Diagram: A data flow diagram (DFD) maps out the flow of information for any process or
system. It uses defined symbols like rectangles, circles and arrows, plus short text labels, to show data
inputs, outputs, storage points and the routes between each destination. Data flowcharts can range
from simple, even hand-drawn process overviews, to in-depth, multi-level DFDs that dig progressively
deeper into how the data is handled. They can be used to analyze an existing system or model a new
one. Like all the best diagrams and charts, a DFD can often visually “say” things that would be hard to
explain in words, and they work for both technical and nontechnical audiences, from developer to CEO.

That’s why DFDs remain so popular after all these years. While they work well for data flow software
and systems, they are less applicable nowadays to visualizing interactive, real-time or database-
oriented software or systems.
A level 0 data flow diagram (DFD), also known as a context diagram, shows a data system as a whole and emphasizes the way it interacts with external entities; a level 0 DFD can show, for example, how such a system functions within a typical retail business. The level 1 DFD then shows how the system is divided into sub-systems (processes), each of which deals with one or more of the data flows to or from an external agent, and which together provide all of the functionality of the system as a whole. A DFD can be extended up to level 7 if needed.

Using any convention's DFD rules or guidelines, the symbols depict the four components of data flow diagrams; a small code analogue of these components is sketched after the list below.

1. External entity: an outside system that sends or receives data, communicating with the system
being diagrammed. They are the sources and destinations of information entering or leaving
the system. They might be an outside organization or person, a computer system or a business
system. They are also known as terminators, sources and sinks or actors. They are typically
drawn on the edges of the diagram.
2. Process: any process that changes the data, producing an output. It might perform
computations, or sort data based on logic, or direct the data flow based on business rules. A
short label is used to describe the process, such as “Submit payment.”
3. Data store: files or repositories that hold information for later use, such as a database table or
a membership form. Each data store receives a simple label, such as “Orders.”
4. Data flow: the route that data takes between the external entities, processes and data stores. It
portrays the interface between the other components and is shown with arrows, typically
labeled with a short data name, like “Billing details.”
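
The four components above can also be read as a tiny program, which is how the sketch below presents them; the "Submit payment" process, the "Orders" store and the data names are taken from the examples above, while the actual behavior is an assumption for illustration.

# Minimal sketch (an illustration, not a DFD tool): the four DFD components
# expressed in code for the "Submit payment" example.
orders_store = []                                # data store: "Orders"

def submit_payment(billing_details):             # process: "Submit payment"
    # data flow in: "Billing details" arriving from the external entity
    record = dict(billing_details, status="paid")    # the transformation performed by the process
    orders_store.append(record)                  # data flow out: stored in "Orders"
    return {"receipt_for": record["id"]}         # data flow out: receipt back to the entity

# External entity: a customer outside the system sending data in
receipt = submit_payment({"id": 42, "amount": 999.0})
print("Data store contents:", orders_store)
print("Data flow back to the external entity:", receipt)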

Figure: DFD Level 0 (context diagram) and DFD Level 1 examples.

Control Flow Diagram & Control Specification
A control flow diagram (CFD) is a diagram to describe the control flow of a business process, process
or review. Control flow diagrams are widely used in multiple engineering disciplines. A control flow
diagram is a very helpful tool for both systems developers and stakeholders. It shows where control
begins and ends, and where it branches on all points in between. Symbols used in the diagram show us
the flow of control and the inputs and outputs to various processes.
 The control specification (CSPEC) represents the behavior of the system.
 The behavior of the system is represented using a state chart, also called a state transition diagram.

Control specifications may refer to the parameters of a physical production process, or to change
management and result measurement methods employed in the corporate environment to improve
business processes.

Process Specification:

The process specification (PSPEC) is used to describe all flow model processes. The process specification includes program design language (PDL), pseudocode or structured English sentences, together with all other information related to the process.

 Example: consider a process that analyzes a triangle and classifies it by its side lengths; a sketch is given below.
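
A process specification for this example might be written as pseudocode or structured English; the sketch below gives one plausible version in code, and the classification rules used are assumptions for illustration.

# Minimal sketch of a PSPEC for the "analyze triangle" process:
# input: three side lengths; output: the type of triangle (or "not a triangle").
def analyze_triangle(a, b, c):
    # A valid triangle requires every pair of sides to exceed the remaining side.
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

print(analyze_triangle(3, 3, 3))   # equilateral
print(analyze_triangle(3, 4, 5))   # scalene
print(analyze_triangle(1, 2, 8))   # not a triangle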

Software Requirements Specification (SRS)


A software requirements specification (SRS) is a description of a software system to be developed.
The software requirements specification lays out functional and non-functional requirements, and
it may include a set of use cases that describe the user interactions that the software must provide. In short, the purpose of an SRS document is to provide a detailed overview of the software product, its parameters and goals. The document describes the project's target audience and its user interface, hardware and software requirements.

Characteristics of requirements and SRS

1. Complete: The requirements must be complete. What does completeness mean? It means that the requirement contains all the information required to implement (i.e., code) it, so there is no need to assume anything in order to implement it. One important aspect of completeness is to also include measuring units, where applicable.

2. Consistent: Consistency is an important aspect of requirements. It means that all inputs to any process must be processed similarly; it should not happen that processes produce different outputs for equivalent inputs coming from different sources. Consistent requirements also mean that you will not find contradictory information in the SRS document.
3. Feasible: This is one of the crucial parts of requirements capturing. All the requirements included in the SRS must be feasible to implement. To be feasible, a requirement must be:
 Implementable within the given time frame and budget
 Implementable using the existing and chosen technology platform
 A feature, which will be used by the end users
4. Modifiable: Every SRS document must be modifiable. In modern software projects, requirements are never static and do not stop coming after the SRS document is signed off. We cannot expect customers to stop altering requirements or adding new ones, as business needs also have to be considered, so the best way to manage the requirements is to manage these changes. To do so, we must have an SRS that clearly identifies each and every requirement in a systematic manner. In case of any changes, the specific requirements and the ones that depend on them can then be modified accordingly without impacting the others.
5. Unambiguous: Unambiguous means a single interpretation. If a requirement is defined in such a
manner that it can only be interpreted in one way, it means that the requirement is unambiguous.
All subjective words or statements must be eliminated from the requirements.
6. Testable: A testable requirement is one that can be tested and validated using any of the following methods: inspection, walkthrough, demonstration or testing.

SRS Template:
