Software Engineering
Stage 1: Planning and Requirement Analysis. Requirement analysis is the most important and fundamental stage in the SDLC. It is performed by the senior members of the team with inputs from the customer, the sales department, market surveys and domain experts in the industry. This information is then used to plan the basic project approach and to conduct a product feasibility study in the economic, operational and technical areas. The outcome of the technical feasibility study is a definition of the various technical approaches that can be followed to implement the project successfully with minimum risk.
Stage 2: Defining Requirements. Once the requirement analysis is done, the next step is to clearly define and document the product requirements and get them approved by the customer or the market analysts. This is done through an SRS (Software Requirement Specification) document, which consists of all the product requirements to be designed and developed during the project life cycle.
Stage 3: Designing the Product Architecture. The SRS is the reference for product architects to come up with the best architecture for the product to be developed. Based on the requirements specified in the SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS (Design Document Specification). A design approach clearly defines all the architectural modules of the product along with their communication and data flow representation with the external and third-party modules (if any). The internal design of all the modules of the proposed architecture should be clearly defined, down to the minutest detail, in the DDS.
Stage 4: Building or Developing the Product. In this stage of the SDLC the actual development starts and the product is built. The programming code is generated as per the DDS during this stage. Developers must follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages such as C, C++, Pascal, Java and PHP are used for coding. The programming language is chosen with respect to the type of software being developed.
Stage 5: Testing the Product. This stage is usually a subset of all the stages, since in modern SDLC models the testing activities are involved in all stages of the SDLC. However, this stage refers to the testing-only stage of the product, where product defects are reported, tracked, fixed and retested until the product reaches the quality standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance. Once the product is tested and ready to be deployed, it is released formally in the appropriate market. The product may first be released in a limited segment and tested in the real business environment (UAT, user acceptance testing). Then, based on the feedback, the product may be released as it is or with the suggested enhancements in the targeted market segment. After the product is released in the market, its maintenance is done for the existing customer base.
Classical Waterfall Model / Waterfall Model
The classical waterfall model is the basic software development life cycle model. It divides the life cycle into a set of phases and assumes that a phase can be started only after completion of the previous phase; that is, the output of one phase is the input to the next phase. The development process can thus be considered a sequential flow, like a waterfall, in which the phases do not overlap. The classical waterfall model divides the life cycle into six sequential phases, and its diagrammatic representation resembles a multi-level waterfall; this resemblance justifies the name of the model.
Phases of the classical waterfall model
1. Feasibility Study: The main goal of this phase is to determine whether it would be financially and technically feasible to develop the software. The feasibility study involves carrying out several activities, such as collecting basic information about the software: the different data items that would be input to the system, the processing required to be carried out on these data, and the output data required to be produced by the system.
2. Requirements Analysis and Specification: The aim of the requirements analysis and specification phase is to understand the exact requirements of the customer and to document them properly. This phase consists of two distinct activities. Requirement gathering and analysis: first, all the requirements regarding the software are gathered from the customer, and then the gathered requirements are analyzed. Requirement specification: the analyzed requirements are documented in a software requirements specification (SRS) document. The SRS document serves as a contract between the development team and the customer.
3. System Design: The aim of the design phase is to transform the requirements specified in the SRS document into a structure that is suitable for implementation in some programming language.
4. Coding and Unit Testing: In the coding phase, the software design is translated into source code using a suitable programming language. The coding phase is therefore sometimes called the implementation phase. The end product of this phase is a set of program modules that have been individually unit tested. The aim of unit testing is to check whether each module is working properly.
5. Integration and System Testing: Integration of the different modules is undertaken soon after they have been coded and unit tested. Integration of the various modules is carried out incrementally over a number of steps. During each integration step, previously planned modules are added to the partially integrated system and the resultant system is tested. System testing usually consists of three kinds of testing activities: alpha testing, beta testing and acceptance testing.
6. Maintenance: Maintenance is the most important phase of the software life cycle. The effort spent on maintenance is about 60% of the total effort spent to develop the full software. There are basically three types of maintenance:
Corrective maintenance: this type of maintenance is carried out to correct errors that were not discovered during the product development phase.
Perfective maintenance: this type of maintenance is carried out to enhance the functionalities of the system based on the customer's request.
Adaptive maintenance: adaptive maintenance is usually required for porting the software to work in a new environment, such as a new computer platform or a new operating system.
Advantages of the waterfall model: This model is very simple and easy to understand. Phases in this model are processed one at a time. Each stage in the model is clearly defined. This model works well for smaller projects and for projects where the requirements are well understood.
Drawbacks of the classical waterfall model. No feedback path: in the classical waterfall model, the evolution of the software from one phase to the next is like a waterfall. It assumes that no error is ever committed by the developers during any phase; therefore, it does not incorporate any mechanism for error correction. Difficult to accommodate change requests: this model assumes that all the customer requirements can be completely and correctly defined at the beginning of the project, but in practice customers' requirements keep changing with time, and it is difficult to accommodate any change requests after the requirements specification phase is complete. No overlapping of phases: this model requires that a new phase can start only after the completion of the previous phase, but in real projects this cannot be maintained; to increase efficiency and reduce cost, phases may overlap.
Iterative Waterfall Model
The iterative waterfall model was proposed to overcome the shortcomings of the classical waterfall model. The main drawback of the classical waterfall model was that any error committed in any of the phases was detected only at the end of the entire life cycle. To overcome this problem, an enhanced version of the classical waterfall model was introduced. As the name suggests, in this model iterations are allowed: developers are free to go back into previous phases of development to make modifications or changes. This is possible because feedback paths are provided for every phase, so no matter which phase you are working in, you can always go back to a previous phase and make the necessary changes. When errors are detected at some later phase, these feedback paths allow the errors committed during an earlier phase to be corrected: the phase in which the errors were committed is reworked and the changes are reflected in the later phases. There is, however, no feedback path to the feasibility study stage, because once a project has been taken up, the organization does not give it up easily. It is best to detect errors in the same phase in which they are committed, since this reduces the effort and time required to correct them.
Advantages of the Iterative Waterfall Model
Feedback path: the iterative waterfall model provides a mechanism for error correction because there is a feedback path from each phase to its preceding phase, which the classical waterfall model lacks. Simple: the iterative waterfall model is simple to understand and use; it is the most widely used software development model evolved so far. Parallel development: parallel development can be done.
Disadvantages of the Iterative Waterfall Model
More resources: more resources may be required to implement the iterative waterfall model. Difficult to include change requests: in the iterative waterfall model all the requirements must be clearly defined before the development phase starts, but customer requirements sometimes change, and it is difficult to incorporate change requests made after development has started. No intermediate delivery: the project has to be fully completed before it is delivered to the customer. No risk handling: the project is prone to many types of risk, but there is no risk-handling mechanism. Not suitable for small projects.
Agile Development Models. In earlier days the iterative waterfall model was very popular for completing a project, but nowadays developers face various problems while using it to develop software. The main difficulties include handling change requests from customers during project development and the high cost and time required to incorporate these changes. To overcome these drawbacks of the waterfall model, the Agile software development model was proposed. The Agile model is a combination of iterative and incremental process models. The steps involved in Agile SDLC models are:
• Requirement gathering • Requirement Analysis • Design
• Coding • Unit testing • Acceptance testing
The time to complete an iteration is known as a Time Box. Time-box refers to the maximum amount of
time needed to deliver an iteration to customers. The central principle of the Agile model is the delivery of
an increment to the customer after each Time-box.
Advantages: It reduces the total development time of the whole project. The customer representative gets an idea of the updated software product after each iteration, so it is easy for them to request a change in requirements if needed.
Disadvantages: Due to the lack of formal documentation, confusion can arise, and important decisions taken during different phases can be misinterpreted at any time by different team members.
SPIRAL MODEL
The spiral model is one of the most important software development life cycle models and provides support for risk handling. In its diagrammatic representation it looks like a spiral with many loops; the exact number of loops is not fixed and can vary from project to project. Each loop of the spiral is called a phase of the software development process. Each phase of the spiral model is divided into four quadrants, whose functions are discussed below.
Objectives determination and identification of alternative solutions: requirements are gathered from the customers, and the objectives are identified, elaborated and analyzed at the start of every phase. Alternative solutions possible for the phase are then proposed in this quadrant.
Identify and resolve risks: during the second quadrant, all the possible solutions are evaluated to select the best one. The risks associated with that solution are then identified and resolved using the best possible strategy. At the end of this quadrant, a prototype is built for the best possible solution.
Develop the next version of the product: during the third quadrant, the identified features are developed and verified through testing. At the end of the third quadrant, the next version of the software is available.
Review and plan for the next phase: in the fourth quadrant, the customers evaluate the version of the software developed so far. At the end, planning for the next phase is started.
Risk handling in the spiral model. A risk is any adverse situation that might affect the successful completion of a software project. The most important feature of the spiral model is handling these unknown risks after the project has started; such risk resolution is easier when a prototype is developed. The spiral model supports coping with risks by providing scope to build a prototype at every phase of the software development.
The prototyping model also supports risk handling, but there the risks must be identified completely before the start of the development work of the project. In real life, however, project risks may arise after the development work starts, in which case the prototyping model cannot be used. In each phase of the spiral model, the features of the product are elaborated and analyzed, and the risks at that point of time are identified and resolved through prototyping. Thus, this model is much more flexible compared to other SDLC models.
Phases of the spiral model. Each phase in this model is split into four sectors (or quadrants). In the first quadrant, a few features of the software are identified to be taken up for immediate development; with each iteration around the spiral, progressively more complete versions of the software are built.
Quadrant 1: the objectives are investigated, elaborated and analyzed, and based on this the risks involved in the phase are identified. Alternative solutions possible for the phase are proposed in this quadrant. Quadrant 2: during the second quadrant, the alternative solutions are evaluated to select the best possible solution. Quadrant 3: activities during the third quadrant consist of developing and verifying the next level of the software. At the end of the third quadrant, the identified features have been implemented and the next version of the software is available. Quadrant 4: activities during the fourth quadrant include reviewing the results of the stages traversed so far with the customer and planning the next iteration of the spiral.
Advantages of the spiral model. Risk handling: for projects with many unknown risks that arise as the development proceeds, the spiral model is the best development model to follow, due to the risk analysis and risk handling done in every phase. Good for large projects: it is recommended to use the spiral model for large and complex projects. Flexibility in requirements: change requests in the requirements at a later phase can be incorporated accurately by using this model. Customer satisfaction: customers can see the development of the product at an early phase of the software development and thus become habituated with the system by using it before completion of the total product.
Disadvantages of the spiral model. Complex: the spiral model is much more complex than other SDLC models. Expensive: the spiral model is not suitable for small projects, as it is expensive. Too dependent on risk analysis: the successful completion of the project is very much dependent on risk analysis; without highly experienced expertise, a project developed using this model is likely to fail. Difficulty in time management: as the number of phases is unknown at the start of the project, time estimation is very difficult.
The prototyping model is defined as the process of developing a working model of a product or system that has to be engineered. It is one of the most popularly used software development life cycle models and is used when the customers do not know the exact project requirements beforehand. In this model, a prototype of the end product is first developed, then tested and refined as per customer feedback repeatedly, till a final acceptable prototype is achieved which forms the basis for developing the final product. Types of Prototyping Models:
Rapid throwaway prototyping
Evolutionary prototyping
Incremental prototyping
Extreme prototyping
PROJECT PLANNING. Once a project has been found to be feasible, software project managers undertake project planning. Initial project planning is undertaken and completed before any development activity starts. Project planning requires utmost care and attention, since commitment to unrealistic time and resource estimates results in schedule slippage. Schedule delays can cause customer dissatisfaction, adversely affect team morale, and can even cause project failure. For this reason, project planning is undertaken by the project managers with utmost care and attention. For effective project planning, in addition to a thorough knowledge of the various estimation techniques, past experience is crucial. During project planning, the project manager performs the following activities (a brief description of each activity is given below).
Estimation: The following project attributes are estimated.
Cost: How much is it going to cost to develop the software product?
Duration: How long is it going to take to develop the product?
Effort: How much effort would be necessary to develop the product?
The effectiveness of all later planning activities, such as scheduling and staffing, depends on the accuracy with which these three estimations have been made.
Scheduling: After all the necessary project parameters have been estimated, the schedules for manpower and other resources are developed.
Staffing: Staff organisation and staffing plans are made.
Risk management: This includes risk identification, analysis, and planning.
Miscellaneous plans: This includes making several other plans, such as the quality assurance plan and the configuration management plan.
The basic COCOMO model COnstructive COst estimation MOdel (COCOMO) was proposed by Boehm.
COCOMO prescribes a three stage process for project estimation. In the first stage, an initial estimate is
arrived at. Over the next two stages, the initial estimate is refined to arrive at a more accurate estimate.
COCOMO uses both single and multivariable estimation models at different stages of estimation. The three
stages of COCOMO estimation technique are— basic COCOMO, intermediate COCOMO, and complete
COCOMO. Boehm postulated that any software development project can be classified into one of the
following three categories based on the development complexity— organic, semidetached, embedded.
The basic COCOMO model is a single variable heuristic model that gives an approximate estimate of the
project parameters. The basic COCOMO estimation model is given by expressions of the following forms:
Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 months
where KLOC is the estimated size of the software product expressed in kilo lines of code; a1, a2, b1, b2 are constants for each category of software product; Tdev is the estimated time to develop the software, expressed in months; and Effort is the total effort required to develop the software product, expressed in person-months (PM).
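As a minimal illustration of the above expressions, the following Python sketch computes Effort and Tdev for a hypothetical project size. The constants used for the organic, semidetached and embedded categories are the commonly quoted basic COCOMO values attributed to Boehm, and the 32 KLOC input is an arbitrary example, not a figure taken from this text.

# Basic COCOMO sketch: Effort = a1 * KLOC^a2 (person-months),
# Tdev = b1 * Effort^b2 (months). Constants are the commonly quoted
# basic COCOMO values; treat them as illustrative.
COCOMO_CONSTANTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category="organic"):
    a1, a2, b1, b2 = COCOMO_CONSTANTS[category]
    effort = a1 * (kloc ** a2)   # estimated effort in person-months
    tdev = b1 * (effort ** b2)   # estimated development time in months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")   # hypothetical 32 KLOC organic project
print(f"Effort: {effort:.1f} PM, Tdev: {tdev:.1f} months")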
SOFTWARE REQUIREMENTS SPECIFICATION (SRS). The SRS document usually contains all the user requirements in a structured, though informal, form. Of all the documents produced during a software development life cycle, the SRS document is probably the most important and the toughest to write.
Users of SRS Document Usually, a large number of different people need the SRS document for very
different purposes. Some of the important categories of users of the SRS document and their needs for use
are as follows:
Users, customers, and marketing personnel
Software developers
Test engineers
User documentation writers
Project managers
Maintenance engineers
The important uses of a well-formulated SRS document: Forms an agreement between the customers and
the developers: A good SRS document sets the stage for the customers to form their expectation about
the software and the developers about what is expected from the software.
Reduces future reworks: Careful review of the SRS document can reveal omissions, misunderstandings,
and inconsistencies early in the development cycle.
Provides a basis for estimating costs and schedules:
Project managers usually estimate the size of the software from an analysis of the SRS document.
Provides a baseline for validation and verification:
The SRS document provides a baseline against which compliance of the developed software can be
checked. It is also used by the test engineers to create the test plan.
Facilitates future extensions: The SRS document usually serves as a basis for planning future
enhancements.
Characteristics of a Good SRS Document. Some of the identified desirable qualities of an SRS document are the following.
Concise: The SRS document should be concise and at the same time unambiguous, consistent, and complete. Verbose and irrelevant descriptions reduce readability and also increase the possibility of errors in the document.
Implementation-independent:
The SRS should be free of design and implementation decisions unless those decisions reflect actual requirements. It should specify only what the system should do and refrain from stating how to do it. This means that the SRS document should specify the externally visible behaviour of the system and not discuss implementation issues.
Traceable: It should be possible to trace a specific requirement to the design elements that implement it and vice versa.
Modifiable: Customers frequently change the requirements during the software development due to a
variety of reasons. Therefore, in practice the SRS document undergoes several revisions during software
development.
Identification of response to undesired events: The SRS document should discuss the system responses to
various undesired events and exceptional conditions that may arise.
Verifiable: All requirements of the system as documented in the SRS document should be verifiable.
Attributes of Bad SRS Documents
Over-specification: It occurs when the analyst tries to address the “how to” aspects in the SRS document.
Forward references: One should not refer to aspects that are discussed much later in the SRS document.
Forward referencing seriously reduces readability of the specification.
Wishful thinking: This type of problem concerns descriptions of aspects that would be difficult to implement.
Noise: The term noise refers to the presence of material not directly relevant to the software development process.
An SRS document should clearly document the following aspects of a software:
Functional requirements
Non-functional requirements
Design and implementation constraints
External interfaces required
Other non-functional requirements
Goals of implementation
Coupling The coupling between two modules indicates the degree of interdependence between them.
Intuitively, if two modules interchange large amounts of data, then they are highly interdependent or
coupled. The degree of coupling between two modules depends on their interface complexity. The
interface complexity is determined based on the number of parameters and the complexity of the
parameters that are interchanged while one module invokes the functions of the other module.
Classification of coupling:
Data coupling
Stamp coupling
Control coupling
Common coupling
Content coupling
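The difference between a loose and a tight form of coupling can be sketched in a few lines of Python. In this hypothetical example, compute_interest and report are data coupled because they communicate only through elementary parameters, while apply_interest_global and print_balance are common coupled because both depend on a shared global variable; the function and variable names are illustrative, not taken from the text above.

# Data coupling: modules exchange only elementary data items through parameters.
def compute_interest(principal, rate, years):
    return principal * rate * years

def report(principal, rate, years):
    interest = compute_interest(principal, rate, years)  # explicit, narrow interface
    print(f"Interest due: {interest:.2f}")

# Common coupling: modules communicate through shared global data, so their
# interdependence is implicit and harder to trace.
account_balance = 1000.0   # shared global state

def apply_interest_global(rate):
    global account_balance
    account_balance += account_balance * rate   # hidden side effect on the global

def print_balance():
    print(f"Balance: {account_balance:.2f}")    # result depends on whoever changed the global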
Cohesion: Cohesion is a measure of the functional strength of a module, whereas coupling between two modules is a measure of the degree of interaction (or interdependence) between them. The cohesiveness of a module is the degree to which the different functions of the module co-operate to work towards a single objective. The different modules of a design can possess different degrees of cohesion.
Different classes of cohesion:
Coincidental cohesion
Logical cohesion
Temporal cohesion
Procedural cohesion
Communicational cohesion
Sequential cohesion
Functional cohesion
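A similar sketch, with hypothetical function names, contrasts the strongest and weakest classes listed above: factorial shows functional cohesion because every statement contributes to a single task, while misc_utilities shows coincidental cohesion because it bundles unrelated actions into one module.

# Functional cohesion: all parts of the module co-operate towards one objective.
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Coincidental cohesion: unrelated responsibilities grouped together by accident.
def misc_utilities(text, numbers, path):
    print(text.upper())             # string formatting
    total = sum(numbers)            # arithmetic
    with open(path, "w") as f:      # file I/O
        f.write(str(total))
    return total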
Data Flow Diagrams (DFDs) The DFD (also known as the bubble chart) is a simple graphical formalism that
can be used to represent a system in terms of the input data to the system, various processing carried out
on those data, and the output data generated by the system. It is simple to understand and use. A DFD
model uses a very limited number of primitive symbols to represent the functions performed by a system
and the data flow among these functions. Starting with a very abstract model of the system, various details of the system are slowly introduced through different levels of the hierarchy.
Different concepts associated with building a DFD model of a system.
Primitive symbols used for constructing DFDs There are essentially five different types of symbols used for
constructing DFDs. These primitive symbols are depicted below
A function is represented using a circle. This symbol is called a process or a bubble. Bubbles are annotated
with the names of the corresponding functions
An external entity such as a librarian, a library member, etc. is represented by a rectangle. The external
entities are essentially those physical entities external to the software system which interact with the
system by inputting data to the system or by consuming the data produced by the system. In addition to
the human users, the external entity symbols can be used to represent external hardware and software
such as another application software that would interact with the software being modelled.
Synchronous operation: If two bubbles are directly connected by a data flow arrow, then they are synchronous.
Here, the validate-number bubble can start processing only after the read-number bubble has supplied
data to it; and the read-number bubble has to wait until the validate-number bubble has consumed its
data.
Asynchronous operation: If two bubbles are connected through a data store, the data produced by the producer bubble gets stored in the data store. It is therefore possible that the producer bubble stores several pieces of data before the consumer bubble consumes any of them.
BLACK-BOX TESTING In black-box testing, test cases are designed from an examination of the
input/output values only and no knowledge of design or code is required. Black Box Testing is a software
testing method in which the functionalities of software applications are tested without having knowledge
of internal code structure, implementation details and internal paths. Black Box Testing mainly focuses on
input and output of software applications and it is entirely based on software requirements and
specifications. It is also known as behavioral testing.
How to do black-box testing: Here are the generic steps followed to carry out any type of black-box testing. Initially, the requirements and specifications of the system are examined. The tester chooses valid inputs (positive test scenarios) to check whether the system processes them correctly, and also some invalid inputs (negative test scenarios) to verify that the system is able to detect them. The tester determines the expected outputs for all these inputs. The software tester then constructs test cases with the selected inputs, the test cases are executed, and the actual outputs are compared with the expected outputs. Defects, if any, are fixed and re-tested.
The following are the two main approaches available to design black box test cases:
Equivalence class partitioning In the equivalence class partitioning approach, the domain of input values
to the unit under test is partitioned into a set of equivalence classes. The partitioning is done such that for
every input data belonging to the same equivalence class, the program behaves similarly. Equivalence
classes for a unit under test can be designed by examining the input data and output data.
Boundary value analysis Boundary value analysis-based test suite design involves designing test cases
using the values at the boundaries of different equivalence classes. To design boundary value test cases, it
is required to examine the equivalence classes to check if any of the equivalence classes contains a range
of values. For those equivalence classes that are not a range of values (i.e., consist of a discrete collection
of values) no boundary value test cases can be defined. For an equivalence class that is a range of values,
the boundary values need to be included in the test suite. For example, if an equivalence class contains the
integers in the range 1 to 10, then the boundary value test suite is {0,1,10,11}. The important steps in the
black-box test suite design approach:
Examine the input and output values of the program. Identify the equivalence classes. Design equivalence
class test cases by picking one representative value from each equivalence class. Design the boundary
value test cases as follows. Examine if any equivalence class is a range of values. Include the values at the
boundaries of such equivalence classes in the test suite.
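To make these steps concrete, the sketch below designs test cases for a hypothetical unit accept_score whose specification is assumed to be: accept only integer scores in the range 1 to 10 (the same range used in the boundary value example above). One representative value is picked from each equivalence class, and the boundary value suite {0, 1, 10, 11} is added; the test data are chosen purely from the specified input and output values, not from the code.

# Unit under test (hypothetical): valid input is an integer in the range 1..10.
def accept_score(score):
    return 1 <= score <= 10

# Equivalence classes: below the valid range, inside it, and above it.
representative_cases = [(-5, False), (5, True), (20, False)]

# Boundary value analysis on the valid range 1..10 gives the suite {0, 1, 10, 11}.
boundary_cases = [(0, False), (1, True), (10, True), (11, False)]

for value, expected in representative_cases + boundary_cases:
    actual = accept_score(value)
    assert actual == expected, f"accept_score({value}) returned {actual}, expected {expected}"
print("All black-box test cases passed")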
WHITE-BOX TESTING White-box testing is an important type of unit testing. A large number of white-box
testing strategies exist. Each testing strategy essentially designs test cases based on analysis of some
aspect of source code and is based on some heuristic. We first discuss some basic concepts associated with
white-box testing, and follow it up with a discussion on specific testing strategies. White Box Testing is
software testing technique in which internal structure, design and coding of software are tested to verify
flow of input-output and to improve design, usability and security. In white box testing, code is visible to
testers so it is also called Clear box testing, Open box testing, Transparent box testing, Code-based testing
and glass box testing. White-box testing involves testing the software code for: internal security holes; broken or poorly structured paths in the coding process; the flow of specific inputs through the code; expected outputs; the functionality of conditional loops; and testing of each statement, object, and function on an individual basis. A white-box testing strategy can either be coverage-based or fault-based.
Fault-based testing A fault-based testing strategy targets to detect certain types of faults. An example of a
fault-based strategy is mutation testing, which is discussed later in this section.
Coverage-based testing A coverage-based testing strategy attempts to execute (or cover) certain
elements of a program. Popular examples of coverage-based testing strategies are statement coverage,
branch coverage, multiple condition coverage, and path coverage-based testing.
Statement Coverage Statement coverage is a metric to measure the percentage of statements that are
executed by a test suite in a program at least once.
Branch Coverage: Branch coverage is also called decision coverage (DC) and is sometimes referred to as all-edge coverage. A test suite achieves branch coverage if it makes the decision expression at each branch point in the program assume both true and false values. In other words, for branch coverage each branch in the CFG representation of the program must be taken at least once when the test suite is executed. Branch testing is also known as all-edge testing.
Condition Coverage: Condition coverage testing is also known as basic condition coverage (BCC) testing. A test suite is said to achieve basic condition coverage if each basic condition in every conditional expression assumes both true and false values during testing. For example, for the decision statement if(A||B && C), the basic conditions A, B, and C must each assume both true and false values.
Condition and Decision Coverage A test suite is said to achieve condition and decision coverage, if it
achieves condition coverage as well as decision (that is, branch) coverage. Obviously, condition and
decision coverage is stronger than both condition coverage and decision coverage.
Multiple Condition Coverage Multiple condition coverage (MCC) is achieved, if the test cases make the
component conditions of a composite conditional expression to assume all possible combinations of true
and false values.
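The relative strength of these coverage criteria can be seen on a small, hypothetical decision of the same shape as if(A||B && C). In the Python sketch below, grant_access and its parameter names are made up for illustration: the branch suite only forces the whole decision to be true once and false once, the condition suite forces each of the three basic conditions to take both truth values, and the multiple condition suite exercises all eight combinations.

from itertools import product

# Hypothetical unit with a composite condition equivalent to A or (B and C).
def grant_access(is_admin, has_token, token_valid):
    if is_admin or (has_token and token_valid):
        return "granted"
    return "denied"

# Branch (decision) coverage: the decision as a whole is true once and false once.
branch_suite = [(True, False, False), (False, False, False)]

# Basic condition coverage: each of the three conditions takes both true and false.
condition_suite = [(True, False, True), (False, True, False)]

# Multiple condition coverage: all 2^3 combinations of the three conditions.
mcc_suite = list(product([True, False], repeat=3))

for name, suite in [("branch", branch_suite),
                    ("condition", condition_suite),
                    ("multiple condition", mcc_suite)]:
    outcomes = {grant_access(*case) for case in suite}
    print(f"{name} coverage suite: {len(suite)} cases, outcomes exercised: {outcomes}")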
Advantages of White-Box Testing: Code optimization by finding hidden errors. White-box test cases can be easily automated. Testing is more thorough, as all code paths are usually covered. Testing can start early in the SDLC, even if the GUI is not available.
Disadvantages of White-Box Testing: White-box testing can be quite complex and expensive. Developers, who usually execute white-box test cases, often detest it. White-box testing by developers may not be detailed enough, which can lead to production errors. White-box testing requires professional resources with a detailed understanding of programming and implementation. White-box testing is time-consuming; bigger applications take a long time to test fully.