Se Module

Uploaded by Prachi Kulkarni

Content

• Software Engineering-process framework, the Capability Maturity Model (CMM), Advanced Trends in
Software Engineering
• Prescriptive Process Models: The Waterfall, Incremental Process Models, Evolutionary Process Models:
RAD & Spiral
• Agile process model: Extreme Programming (XP), Scrum, KANBAN
Introduction to Software Engineering
• Software is defined as:
Instructions + Data Structures + Documents
• Engineering is the branch of science and technology concerned with the design, building, and use of engines,
machines, and structures. It is the application of science, tools and methods to find cost effective solution to
simple and complex problems.
• SOFTWARE ENGINEERING is defined as a systematic, disciplined and quantifiable approach for the
development, operation and maintenance of software.
• The Evolving role of software:
• The dual role of Software is as follows:
• A Product - an information transformer producing, managing and displaying information.
• A Vessel for delivering a product - control of the computer (operating system), the communication of
information (networks) and the creation of other programs.
Introduction to Software Engineering (Cont…)

• Characteristics of software:
• Software is developed or engineered, but it is not manufactured in the classical sense.
• Software does not wear out, but it deteriorates due to change.
• Software is custom built rather than assembled from existing components.
Introduction to Software Engineering (Cont…)
• The changing nature of software:
• The various categories of software are-
• System software: a collection of programs written to service other programs.
• Application software: performs specific functions.
• Engineering and scientific software: have been characterized by "number crunching" algorithms.
• Embedded software: resides in read-only memory and is used to control products and systems for the consumer
and industrial markets.
• Product-line software: software engineering methods, tools and techniques for creating a collection of similar
software systems from a shared set of software assets using a common means of production.
• Web-applications: an application program that is stored on a remote server and delivered over the Internet through
a browser interface.
• Artificial intelligence software: makes use of nonnumeric algorithms to solve complex problems that are not
amenable to computation or straightforward analysis.
Introduction to Software Engineering (Cont…)
• Software Engineering- A layered technology:

Fig: Software Engineering-A layered technology

• Quality focus - Bedrock that supports Software Engineering.


• Process - Foundation for software Engineering
• Methods - Provide technical How-to’s for building software
• Tools - Provide semi-automatic and automatic support to methods
Software Engineering-process framework
• A Process Framework:
• Establishes the foundation for a complete software process
• Identifies a number of framework activities applicable to all software projects
• It also includes a set of umbrella activities that are applicable across the entire software process.
Software Engineering-process framework (Cont…)
• A Process Framework:
• A process framework comprises:
• Common process framework
• Umbrella activities
• Framework activities
• Tasks
• Milestones, deliverables
• SQA points
Software Engineering-process framework (Cont…)

• A Process Framework:
It is used as a basis for the description of process models.
Generic process activities:
• Communication
• Planning
• Modeling
• Construction
• Deployment
Software Engineering-process framework (Cont…)
• A Process Framework:
Generic view of engineering complemented by a number of umbrella activities:
• Software project tracking and control
• Formal technical reviews
• Software quality assurance
• Software configuration management
• Document preparation and production
• Reusability management
• Measurement
• Risk management
The Capability Maturity Model (CMM)

• In recent years, there has been a significant emphasis on “process maturity.”


• The Software Engineering Institute (SEI) has developed a comprehensive model predicated on a set of
software engineering capabilities that should be present as organizations reach different levels of process
maturity.
• To determine an organization’s current state of process maturity, the SEI uses an assessment that results in a
five-point grading scheme.
• The grading scheme determines compliance with a capability maturity model (CMM) that defines key
activities required at different levels of process maturity.
The Capability Maturity Model (CMM) (Cont…)
• The SEI approach provides a measure of the global effectiveness of a company's software engineering
practices and establishes five process maturity levels that are defined in the following manner:
• Level 1: Initial: The software process is characterized as ad hoc and occasionally even chaotic. Few processes
are defined, and success depends on individual effort.
• Level 2: Repeatable: Basic project management processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar
applications.
• Level 3: Defined: The software process for both management and engineering activities is documented,
standardized, and integrated into an organization-wide software process. All projects use a documented and
approved version of the organization's process for developing and supporting software. This level includes all
characteristics defined for level 2.
The Capability Maturity Model (CMM) (Cont…)

• Level 4: Managed: Detailed measures of the software process and product quality are collected. Both the
software process and products are quantitatively understood and controlled using detailed measures. This
level includes all characteristics defined for level 3.
• Level 5: Optimizing: Continuous process improvement is enabled by quantitative feedback from the process
and from testing innovative ideas and technologies. This level includes all characteristics defined for level 4.
• The five levels defined by the SEI were derived as a consequence of evaluating responses to the SEI
assessment questionnaire that is based on the CMM.
• The results of the questionnaire are distilled to a single numerical grade that provides an indication of an
organization's process maturity.
The Capability Maturity Model (CMM) (Cont…)
• The SEI has associated key process areas (KPAs) with each of the maturity levels.
• The KPAs describe those software engineering functions (e.g., software project planning, requirements
management) that must be present to satisfy good practice at a particular level. Each KPA is described by
identifying the following characteristics:
• Goals—the overall objectives that the KPA must achieve.
• Commitments—requirements (imposed on the organization) that must be met to achieve the goals or provide
proof of intent to comply with the goals.
• Abilities—those things that must be in place (organizationally and technically) to enable the organization to meet
the commitments.
• Activities—the specific tasks required to achieve the KPA function.
• Methods for monitoring implementation—the manner in which the activities are monitored as they are put into
place.
• Methods for verifying implementation—the manner in which proper practice for the KPA can be verified.
Advanced Trends in Software Engineering
• Cross-Platform Development Tools: write apps that work on nearly every major desktop and mobile
platform.
• Artificial Intelligence and Machine Learning
• IoT
• Blockchain
• Continuous Delivery and Deployment: produces software in shorter cycles of feature development, bug
fixing and experimentation, with an aim to release software as quickly as possible. With continuous delivery,
apps are pushed into production for manual download, whereas continuous deployment updates software
through automated deployment.
• Progressive Web Apps: offer app-like experiences in the browser.
• Low-Code Development: lets developers build applications through graphical, drag-and-drop user interfaces
instead of hand-written code in complex programming languages.
Process Models
• To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a
development strategy that encompasses the process, methods, and tools. This strategy is known as a process
model or a software engineering paradigm.
• A process model for software engineering is chosen based on the nature of the project and application, the
methods and tools to be used, and the controls and deliverables that are required.
• All software development can be characterized as a problem solving loop
in which four distinct stages are encountered: status quo, problem
definition, technical development, and solution integration.
• Status quo: “represents the current state of affairs” .
• Problem definition: identifies the specific problem to be solved.
• Technical development: solves the problem through the application of
some technology.
• Solution integration: delivers the results (e.g., documents, programs,
data, new business function, new product) to those who requested the
solution in the first place.
• This problem solving loop applies to software engineering work at many
different levels of resolution.

Fig: The phases of a problem solving loop


Prescriptive Process Models
• The name 'prescriptive' is given because the model prescribes a set of activities, actions, tasks, quality assurance
points, and change-control mechanisms for every project.
• The following framework activities are carried out irrespective of the process model chosen by the organization.
• Communication
• Planning
• Modeling
• Construction
• Deployment

• There are three types of prescriptive process models. They are:


• The Waterfall Model
• Incremental Process model
• RAD model
Prescriptive Process Models: Waterfall Model
• The Waterfall Model:
• The waterfall model is also called the 'linear sequential model' or 'classic life cycle model'.
• In this model, each phase is fully completed before the beginning of the next phase.
• This model is typically used for small projects.
• This model suggests a systematic, sequential approach to software development that begins at the system level
and progresses through analysis, design, coding, testing, and support/Maintenance.

Fig: The Waterfall Model


Prescriptive Process Models: Waterfall Model (Cont…)
• Feasibility study - The main aim of the feasibility study is to determine whether it would be financially and
technically feasible to develop the product. The team forms a rough, shared understanding of the client's
requirements and a cost estimate, and after overall discussion explores a variety of candidate solutions on the
basis of the resources, time, and other requirements of each.
• Requirements analysis and specification- The aim of the requirements analysis and specification phase is to understand
the exact requirements of the customer and to document them properly. This phase consists of two distinct activities,
namely
• Requirements gathering and analysis, and Requirements specification.
• The goal of the requirements gathering activity is to collect all relevant information from the customer regarding the
product to be developed. This is done to clearly understand the customer requirements so that incompleteness and
inconsistencies are removed.
• After all ambiguities, inconsistencies, and incompleteness have been resolved and all the requirements are
properly understood, the requirements specification activity can start.
• During this activity, the user requirements are systematically organized into a Software Requirements Specification (SRS)
document.
Prescriptive Process Models: Waterfall Model (Cont…)
• Design - The goal of the design phase is to transform the requirements specified in the SRS document into a
structure that is suitable for implementation in some programming language. During the design phase the
software architecture is derived from the SRS document. Two distinctly different approaches are available:
• the traditional design approach
• the object-oriented design approach
• Code generation - The design must be translated into a machine-readable form. The code generation step
performs this task. If design is performed in a detailed manner, code generation can be accomplished
mechanistically.
• Testing - Once code has been generated, program testing begins. The testing process focuses on the logical
internals of the software, ensuring that all statements have been tested, and on the functional externals; that is,
conducting tests to uncover errors and ensure that defined input will produce actual results that agree with
required results.
Prescriptive Process Models: Waterfall Model (Cont…)
• Support - Software will undoubtedly undergo change after it is delivered to the customer (a possible exception is
embedded software). Change will occur because errors have been encountered, because the software must be
adapted to accommodate changes in its external environment or because the customer requires functional or
performance enhancements. Software support/maintenance reapplies each of the preceding phases to an existing
program rather than a new one.
• Maintenance - Maintenance of a typical software product requires much more effort than the effort necessary to
develop the product itself. Maintenance involves performing any one or more of the following three kinds of
activities:
• Correcting errors that were not discovered during the product development phase. This is called corrective
maintenance.
• Improving the implementation of the system, and enhancing the functionalities of the system according to the
customer’s requirements. It is called perfective maintenance.
• Porting the software to work in a new environment. For example, porting may be required to get the software to
work on a new computer platform or with a new operating system. It is called adaptive maintenance.
Prescriptive Process Models: Waterfall Model (Cont…)
• Disadvantages:
• Real projects rarely follow the sequential flow that the model proposes. Although the linear model can
accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team
proceeds.
• It is often difficult for the customer to state all requirements explicitly. The linear sequential model requires this
and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
• The customer must have patience. A working version of the program(s) will not be available until late in the
project time-span. A major blunder, if undetected until the working program is reviewed, can be disastrous.
• The linear nature of the classic life cycle leads to “blocking states” in which some project team members must wait
for other members of the team to complete dependent tasks. In fact, the time spent waiting can exceed the time
spent on productive work. The blocking state tends to be more prevalent at the beginning and end of a linear
sequential process.
Evolutionary Process Models: RAD
• Rapid application development (RAD) is an incremental software development process model that emphasizes an
extremely short development cycle.
• The RAD model is a “high-speed” adaptation of the linear sequential model in which rapid development is
achieved by using component-based construction.
• If requirements are well understood and project scope is constrained, the RAD process enables a development
team to create a “fully functional system” within very short time periods (e.g., 60 to 90 days). Used primarily for
information systems applications, the RAD approach encompasses the following phases:
• Business modeling: The information flow among business functions is modeled in a way that answers the following
questions: What information drives the business process? What information is generated? Who generates it?
Where does the information go? Who processes it?
• Data modeling: The information flow defined as part of the business modeling phase is refined into a set of data
objects that are needed to support the business. The characteristics (called attributes) of each object are identified
and the relationships between these objects defined.
Evolutionary Process Models: RAD (Cont…)
• Process modeling: The data objects defined in the data modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions are created for adding,
modifying, deleting, or retrieving a data object.
• Application generation: RAD assumes the use of fourth generation techniques. Rather than creating software
using conventional third generation programming languages the RAD process works to reuse existing program
components (when possible) or create reusable components (when necessary). In all cases, automated tools are
used to facilitate construction of the software.
• Testing and turnover: Since the RAD process emphasizes reuse, many of the program components have already
been tested. This reduces overall testing time. However, new components must be tested and all interfaces must
be fully exercised.
• If a business application can be modularized in a way that enables each major function to be completed in less
than three months (using the approach described previously), it is a candidate for RAD. Each major function can be
addressed by a separate RAD team and then integrated to form a whole.
Evolutionary Process Models: RAD (Cont…)

FIG: The RAD model


Evolutionary Process Models: RAD (Cont…)
• Disadvantages:
• For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD
teams.
• RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a
system complete in a much abbreviated time frame. If commitment is lacking from either constituency, RAD
projects will fail.
• Not all types of applications are appropriate for RAD. If a system cannot be properly modularized, building
the components necessary for RAD will be problematic. If high performance is an issue and performance is to
be achieved through tuning the interfaces to system components, the RAD approach may not work.
• RAD is not appropriate when technical risks are high. This occurs when a new application makes heavy use of
new technology or when the new software requires a high degree of interoperability with existing computer
programs.
Evolutionary Process Models: Spiral

• The spiral model, originally proposed by Boehm, is an evolutionary software process model that couples the
iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model.
• The spiral model is called a meta model since it encompasses all other life cycle models.
• It provides the potential for rapid development of incremental versions of the software. Using the spiral
model, software is developed in a series of incremental releases.
• During early iterations, the incremental release might be a paper model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced.
• A spiral model is divided into a number of framework activities, also called task regions.
• Typically, there are between three and six task regions.
Evolutionary Process Models: Spiral (Cont…)
• Six task regions of Spiral Model:
• Customer communication—tasks required to establish effective communication between developer and
customer.
• Planning—tasks required to define resources, timelines, and other project related information.
• Risk analysis—tasks required to assess both technical and management risks.
• Engineering—tasks required to build one or more representations of the application.
• Construction and release—tasks required to construct, test, install, and provide user support (e.g.,
documentation and training).
• Customer evaluation—tasks required to obtain customer feedback based on evaluation of the software
representations created during the engineering stage and implemented during the installation stage.
Evolutionary Process Models: Spiral (Cont…)
• Each of the regions is populated by a set of work tasks, called a
task set, that are adapted to the characteristics of the project to
be undertaken.
• For small projects, the number of work tasks and their formality
is low.
• For larger, more critical projects, each task region contains more
work tasks that are defined to achieve a higher level of
formality.
• As this evolutionary process begins, the software engineering
team moves around the spiral in a clockwise direction,
beginning at the center.
• The first circuit around the spiral might result in the
development of a product specification; subsequent passes
around the spiral might be used to develop a prototype and
then progressively more sophisticated versions of the software.
• Each pass through the planning region results in adjustments to
the project plan.
• Cost and schedule are adjusted based on feedback derived from
customer evaluation.
Fig: A typical spiral model
Evolutionary Process Models: Spiral (Cont…)

• Risk handling is inherently built into this model.


• The spiral model is suitable for development of technically challenging software products that are prone to
several kinds of risks.
• However, this model is much more complex than the other models – this is probably a factor deterring its use
in ordinary projects.
Agile process model
• The meaning of Agile is swift or versatile.
• "Agile process model" refers to a software development approach based on iterative development.
• Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning.
• The project scope and requirements are laid down at the beginning of the development process. Plans
regarding the number of iterations, the duration and the scope of each iteration are clearly defined in
advance.
• Each iteration is considered as a short time "frame" in the Agile process model, which typically lasts from one
to four weeks.
• The division of the entire project into smaller parts helps to minimize the project risk and to reduce the
overall project delivery time requirements.
• Each iteration involves a team working through a full software development life cycle including planning,
requirements analysis, design, coding, and testing before a working product is demonstrated to the client.
Agile process model (Cont…)
Advantages of Agile Method:
• Frequent Delivery
• Face-to-Face Communication with clients.
• Efficient design and fulfils the business requirement.
• Anytime changes are acceptable.
• It reduces total development time.
Disadvantages of Agile Model:
• Due to the shortage of formal documents, it creates confusion and crucial decisions taken throughout various
phases can be misinterpreted at any time by different team members.
• Due to the lack of proper documentation, once the project completes and the developers allotted to another
project, maintenance of the finished project can become a difficulty.
Agile process model: Extreme Programming (XP)
• Extreme programming (XP) is one of the most important software development frameworks of Agile models. It is
used to improve software quality and responsiveness to customer requirements.
• This type of methodology is used when customers' demands or requirements are constantly changing, or when
they are not sure about the system's performance.
• The eXtreme programming model recommends taking the best practices that have worked well in the past in
program development projects to extreme levels.
• Good practices need to be practiced in extreme programming: Some of the good practices that have been
recognized in the extreme programming model and suggested to maximize their use are given below (XP Values):
• Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which the
coding and the reviewing of written code are carried out by a pair of programmers who switch roles every hour.
• Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-driven development (TDD)
to continually write and execute test cases. In the TDD approach test cases are written even before any code is
written.
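The TDD cycle just described can be sketched in a few lines of Python. The `add` function and its test cases are invented purely for this illustration and are not part of XP or of any particular project:

```python
import unittest

# TDD step 1: write the test cases BEFORE any production code exists.
# Running them at this point would fail, since add() is not yet written.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)

# TDD step 2: write just enough production code to make the tests pass.
def add(a, b):
    return a + b

# Running `python -m unittest` then executes the tests and shows them passing.
```

In practice the failing test is run first ("red"), the minimal code is added to make it pass ("green"), and the code is then refactored while the tests continue to pass.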
Agile process model: Extreme Programming (XP) (Cont…)
• Incremental development: Incremental development is very good because customer feedback is gained, and based
on it the development team comes up with a new increment every few days, after each iteration.
• Simplicity: Simplicity makes it easier to develop good quality code as well as to test and debug it.
• Design: Good quality design is important to develop good quality software. So, everybody should design daily.
• Integration testing: It helps to identify bugs at the interfaces of different functionalities. Extreme programming
suggests that the developers should achieve continuous integration by building and performing integration testing
several times a day.

Fig: XP Methodology
Agile process model: Extreme Programming (XP) (Cont…)
• XP is based on the frequent iteration through which the developers implement User Stories.
• User stories are simple and informal statements of the customer about the functionalities needed.
• A User Story is a conventional description by the user of a feature of the required system.
• It does not mention finer details such as the different scenarios that can occur.
• Based on User stories, the project team proposes Metaphors. Metaphors are a common vision of how the
system would work.
• The development team may decide to build a Spike for some features. A Spike is a very simple program that is
constructed to explore the suitability of a solution being proposed.
• It can be considered similar to a prototype.
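For illustration only, the informal "As a <role>, I want <feature>, so that <benefit>" shape of a user story can be captured as a simple record. The class and field names here are invented for this sketch; XP itself prescribes no such structure, and real teams often just use index cards:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    role: str       # who wants the feature
    feature: str    # what they want
    benefit: str    # why they want it

    def __str__(self):
        return f"As a {self.role}, I want {self.feature}, so that {self.benefit}."

# Deliberately omits finer details such as the different scenarios that can occur.
story = UserStory("registered user", "to reset my password",
                  "I can regain access to my account")
print(story)
```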
Agile process model: Extreme Programming (XP) (Cont…)
• XP Principles:
• Coding: The concept of coding which is used in the XP model is slightly different from traditional coding. Here, the coding activity
includes drawing diagrams (modeling) that will be transformed into code, scripting a web-based system, and choosing among
several alternative solutions.
• Testing: XP model gives high importance to testing and considers it to be the primary factor to develop fault-free software.
• Listening: The developers need to carefully listen to the customers if they are to develop good quality software.
Programmers may not have in-depth knowledge of the system to be developed, so they should listen to the
customers in order to properly understand the functionality of the system.
• Designing: Without a proper design, a system implementation becomes too complex, the solution becomes very
difficult to understand, and maintenance becomes expensive. A good design results in the elimination of complex
dependencies within a system. So, effective use of suitable design is emphasized.
• Feedback: One of the most important aspects of the XP model is to gain feedback to understand the exact customer needs.
Frequent contact with the customer makes the development effective.
• Simplicity: The main principle of the XP model is to develop a simple system that will work efficiently in the present time, rather
than trying to build something that would take time and may never be used. It focuses on some specific features that are
immediately needed, rather than engaging time and effort on speculations of future requirements.
Agile process model: SCRUM
• Scrum is a subset of Agile. It is a lightweight process framework for agile development, and the most widely used
one.
• A “process framework” is a particular set of practices that must be followed in order for a process to be consistent
with the framework. (For example, the Scrum process framework requires the use of development cycles called
Sprints, the XP framework requires pair programming, and so forth.)
• “Lightweight” means that the overhead of the process is kept as small as possible, to maximize the amount of
productive time available for getting useful work done.
• It focuses primarily on ways to manage tasks in team-based development conditions.
• There are three roles in it, and their responsibilities are:
• Scrum Master: The Scrum Master sets up the team, arranges the meetings, and removes obstacles in the process.
• Product Owner: The Product Owner creates the product backlog, prioritizes it, and is responsible for the
delivery of functionality at each iteration.
• Scrum Team: The team manages and organizes its own work to complete the sprint or cycle.
Agile process model: SCRUM (Cont…)
• Basically, Scrum is derived from activity that occurs during a rugby match.
• Scrum believes in empowering the development team and advocates working in small teams (say- 7 to 9
members).
Agile process model: SCRUM (Cont…)
• Product Backlog
• This is a repository where requirements are tracked, with details on the number of requirements (user stories) to be
completed for each release.
• It should be maintained and prioritized by the Product Owner, and it should be distributed to the scrum team. The
team can also request the addition, modification, or deletion of a requirement.
• Process flow of Scrum Methodologies:
• Each iteration of a scrum is known as Sprint.
• Product backlog is a list where all details are entered to get the end-product.
• During each Sprint, the top user stories of the Product backlog are selected and turned into the Sprint backlog.
• The team works on the defined sprint backlog.
• The team checks progress daily.
• At the end of the sprint, the team delivers the product functionality.
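This process flow can be sketched as follows. The story texts, priority numbers, and the `plan_sprint` helper are all invented for this example, not part of Scrum itself:

```python
# Product backlog: maintained and prioritized by the Product Owner.
# Lower number = higher priority.
product_backlog = [
    (1, "As a user, I can log in"),
    (2, "As a user, I can reset my password"),
    (3, "As a user, I can view my profile"),
    (4, "As a user, I can export my data"),
]

def plan_sprint(backlog, capacity):
    """Turn the top user stories of the product backlog into a sprint backlog."""
    ordered = sorted(backlog)  # sort by priority number
    return [story for _, story in ordered[:capacity]]

# Sprint planning: the team pulls as many top stories as its capacity allows,
# works through the sprint backlog with daily checks, and delivers the
# completed functionality at the end of the sprint.
sprint_backlog = plan_sprint(product_backlog, capacity=2)
print(sprint_backlog)
```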
Agile process model: SCRUM (Cont…)
• SCRUM Practices:
Agile process model: KANBAN
• Kanban is a popular framework used to implement agile and DevOps software development.
• It requires real-time communication of capacity and full transparency of work.
• Work items are represented visually on a kanban board, allowing team members to see the state of every piece of
work at any time.
• The Kanban board is normally put up on a wall in the project room. The status and progress of the story
development tasks is tracked visually on the Kanban board with flowing Kanban cards.
• Kanban Board:
• Kanban board is used to depict the flow of tasks across the value stream. The Kanban board −
• Provides easy access to everyone involved in the project.
• Facilitates communication as and when necessary.
• Progress of the tasks is visually displayed.
• Bottlenecks are visible as soon as they occur.
Agile process model: KANBAN (Cont…)
• Kanban cards:
• In Japanese, Kanban translates to "visual signal." For Kanban teams, every work item is represented as a separate card on the
board.
• The tasks and stories are represented by Kanban cards. The current status of each task is known by displaying the cards in
separate columns on the board. The columns are labeled as To Do, Doing, and Done. Each task moves from To Do to Doing and
then to Done.
• The main purpose of representing work as a card on the Kanban board is to allow team members to track the progress of work
through its workflow in a highly visual manner.
• Kanban cards feature critical information about that particular work item, giving the entire team full visibility into who is
responsible for that item of work, a brief description of the job being done, how long that piece of work is estimated to take, and
so on.
• Cards on virtual Kanban boards will often also feature screenshots and other technical details that are valuable to the assignee.
• Allowing team members to see the state of every work item at any given point in time, as well as all of the associated details,
ensures increased focus, full traceability, and fast identification of blockers and dependencies.
Agile process model: KANBAN (Cont…)
Advantages of Kanban board:
• The major advantages of using a Kanban board are −
• Empowerment of Team − This means −
• Team is allowed to take decisions as and when required.
• Team collaboratively resolves the bottlenecks.
• Team has access to the relevant information.
• Team continually communicates with customer.
• Continuous Delivery − This means −
• Focus on work completion.
• Limited requirements at any point of time.
• Focus on delivering value to the customer.
• Emphasis on whole project.
Agile process model: KANBAN (Cont…)
• WIP Limit:
• The label in the Doing column also contains a number, which represents the maximum number of tasks that can be in that column at any point of time; i.e.,
the number associated with the Doing column is the WIP (Work-In-Progress) Limit.
• Pull Approach:
• Pull approach is used as and when a task is completed in the Doing column. Another card is pulled from the To Do column.
• Self-directing:
• In Agile Development, the team is responsible for planning, tracking, reporting and communicating in the project. Team is allowed to make decisions and is
accountable for the completion of the development and product quality. This is aligned to the characteristic of empowerment of the team in Kanban.
• Continuous Flow:
• In Agile development, there is no gate approach and the work flows across the different functions without wait-time. This contributes in minimizing the
cycle time characteristic of Kanban.
• Visual Metrics:
• In Agile Kanban, the metrics are tracked visually using −
• Kanban Board
• Burndown Chart
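The WIP limit and pull approach above can be sketched in a few lines of code. The following Python sketch is illustrative only; the class, column names, and stories are invented and do not correspond to any real Kanban tool:

```python
# Minimal Kanban board sketch: three columns with a WIP limit on "Doing".
# All names here are invented for illustration.

class KanbanBoard:
    def __init__(self, wip_limit):
        self.columns = {"To Do": [], "Doing": [], "Done": []}
        self.wip_limit = wip_limit  # max cards allowed in "Doing" at once

    def add(self, card):
        self.columns["To Do"].append(card)

    def pull(self):
        """Pull the next card from To Do into Doing, respecting the WIP limit."""
        if len(self.columns["Doing"]) >= self.wip_limit:
            return None  # WIP limit reached; finish something first
        if not self.columns["To Do"]:
            return None
        card = self.columns["To Do"].pop(0)
        self.columns["Doing"].append(card)
        return card

    def finish(self, card):
        """Move a completed card from Doing to Done, then pull the next one."""
        self.columns["Doing"].remove(card)
        self.columns["Done"].append(card)
        self.pull()  # pull approach: completing work triggers the next pull

board = KanbanBoard(wip_limit=2)
for story in ["login page", "search API", "cart UI", "checkout"]:
    board.add(story)
board.pull()
board.pull()
print(board.columns["Doing"])   # ['login page', 'search API']
assert board.pull() is None      # WIP limit of 2 blocks a third pull
board.finish("login page")
print(board.columns["Doing"])   # ['search API', 'cart UI']
```

Note how `finish()` immediately pulls the next card: work is pulled into Doing only when capacity frees up, which is exactly the pull approach described above.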
Agile process model: KANBAN (Cont…)
• Uses of Kanban board:
• Measure cycle times, which can be used to optimize the average cycle time.
• Track WIP limit to eliminate waste.
• Track resource utilization to eliminate waste.
• Uses of Burndown chart:
• Shows the current status of the tasks and stories.
• Shows the rate of progress in completing the remaining tasks.
• As Kanban Board is updated daily, it contains all the information that is required by the Burndown charts.
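Because the board is updated daily, burndown data can be derived from daily remaining-task counts. A minimal sketch, with invented numbers:

```python
# Sketch: deriving burndown-chart data points from daily remaining-task
# counts read off the Kanban board. The numbers are invented for illustration.

total_tasks = 20
remaining_by_day = [20, 18, 15, 14, 10, 7, 3, 0]  # one board reading per day

# Rate of progress: tasks completed per day, averaged so far
for day, remaining in enumerate(remaining_by_day):
    completed = total_tasks - remaining
    rate = completed / day if day else 0.0
    print(f"day {day}: remaining={remaining}, avg rate={rate:.1f} tasks/day")
```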
Thank You…!!!
Data Flow Diagrams
 A Data Flow Diagram (DFD) is a traditional visual representation of
the information flows within a system. A neat and clear DFD can
depict the right amount of the system requirement graphically. It can
be manual, automated, or a combination of both.
 It shows how data enters and leaves the system, what changes the
information, and where data is stored.
 The objective of a DFD is to show the scope and boundaries of a
system as a whole. It may be used as a communication tool between
a system analyst and any person who plays a part in the order that
acts as a starting point for redesigning a system. The DFD is also
called as a data flow graph or bubble chart.
Data Flow Diagrams
The following observations about DFDs are essential:
 All names should be unique. This makes it easier to refer to elements in the
DFD.
 Remember that a DFD is not a flow chart. Arrows in a flow chart represent
the order of events; arrows in a DFD represent flowing data. A DFD does not
involve any order of events.
 Suppress logical decisions. If we ever have the urge to draw a diamond-shaped
box in a DFD, suppress that urge! A diamond-shaped box is used in flow charts
to represent decision points with multiple exit paths, of which only one is
taken. This implies an ordering of events, which makes no sense in a DFD.
 Do not become bogged down with details. Defer error conditions and error
handling until the end of the analysis.
Data Flow Diagrams
Standard symbols for DFDs are derived from the electric circuit diagram analysis
and are shown in fig:
Symbols
Circle: A circle (bubble) shows a process that transforms data inputs
into data outputs.
Data Flow: A curved line shows the flow of data into or out of a
process or data store.
Data Store: A set of parallel lines shows a place for the collection of
data items. A data store indicates that the data is stored which can be
used at a later stage or by the other processes in a different order. The
data store can have an element or group of elements.
Source or Sink: Source or Sink is an external entity and acts as a
source of system inputs or sink of system outputs.
Levels in Data Flow
Diagrams (DFD)
 The DFD may be used to represent a system or
software at any level of abstraction. In fact, DFDs
may be partitioned into levels that represent
increasing information flow and functional detail.
Levels in a DFD are numbered 0, 1, 2, or beyond.
 Here, we will see primarily three levels in the data
flow diagram, which are: 0-level DFD, 1-level
DFD, and 2-level DFD.
0-level DFD
 It is also known as fundamental system model, or context diagram represents the
entire software requirement as a single bubble with input and output data
denoted by incoming and outgoing arrows. Then the system is decomposed and
described as a DFD with multiple bubbles. Parts of the system represented by
each of these bubbles are then decomposed and documented as more and more
detailed DFDs. This process may be repeated at as many levels as necessary
until the program at hand is well understood. It is essential to preserve the
number of inputs and outputs between levels; this concept is called leveling by
DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one output y, then
the expanded DFD, that represents "A" should have exactly two external inputs
and one external output as shown in fig:
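The leveling rule can be checked mechanically: the expanded DFD of a bubble must expose exactly the bubble's external inputs and outputs. A sketch, assuming a simple set-based representation of flows (not a standard notation):

```python
# Sketch: checking DeMarco's leveling (balancing) rule for a DFD bubble.
# Flows are modelled as plain sets of names; this representation is an
# assumption for illustration only.

def is_balanced(parent_inputs, parent_outputs, expanded_inputs, expanded_outputs):
    """The expanded DFD must expose exactly the parent's inputs and outputs."""
    return (set(parent_inputs) == set(expanded_inputs)
            and set(parent_outputs) == set(expanded_outputs))

# Bubble "A" has two inputs x1, x2 and one output y, as in the text above.
print(is_balanced({"x1", "x2"}, {"y"}, {"x1", "x2"}, {"y"}))  # True: leveled
print(is_balanced({"x1", "x2"}, {"y"}, {"x1"}, {"y"}))        # False: x2 lost
```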
0-level DFD
 The Level-0 DFD, also called the context diagram, of the result management system
is shown in fig. As the bubbles are decomposed into less and less abstract
bubbles, the corresponding data flows may also need to be decomposed.
1-level DFD
 In 1-level DFD, a context diagram is decomposed into multiple
bubbles/processes. In this level, we highlight the main objectives of the system
and breakdown the high-level process of 0-level DFD into subprocesses.
2-Level DFD
 2-level DFD goes one process deeper into parts of 1-level DFD. It can be used to
project or record the specific/necessary detail about the system's functioning.
Activity Diagram
 The activity diagram is used to demonstrate the flow of
control within the system rather than the implementation. It
models the concurrent and sequential activities.
 The activity diagram helps in envisioning the workflow from
one activity to another. It puts emphasis on the condition of
flow and the order in which it occurs. The flow can be
sequential, branched, or concurrent, and to deal with such
kinds of flows, the activity diagram has come up with a fork,
join, etc.
 It is also termed as an object-oriented flowchart. It
encompasses activities composed of a set of actions or
operations that are applied to model the behavioral diagram.
Components of an Activity
Diagram
Activities
The categorization of behavior into one or more actions is termed
as an activity. In other words, it can be said that an activity is a
network of nodes that are connected by edges. The edges depict
the flow of execution. It may contain action nodes, control nodes,
or object nodes.
The control flow of an activity is represented by control nodes;
object nodes illustrate the objects used within an activity.
The activities are initiated at the initial node and are terminated at
the final node.
Components of an Activity
Diagram
Activity partition /swimlane
The swimlane is used to cluster all the related activities in one
column or one row. It can be either vertical or horizontal. It is used
to add modularity to the activity diagram. It is not necessary to
incorporate swimlanes in an activity diagram, but they add more
transparency to it.
Components of an Activity
Diagram
Forks
Forks and join nodes generate the concurrent flow inside the
activity. A fork node consists of one inward edge and several
outward edges. Whenever data is received at the inward edge, it gets
copied and split across the various outward edges. It splits a single
inward flow into multiple parallel flows.
Components of an Activity
Diagram
Join Nodes
Join nodes are the opposite of fork nodes. A Logical AND
operation is performed on all of the inward edges as it
synchronizes the flow of input across one single output (outward)
edge.
Components of an Activity
Diagram
Pins
It is a small rectangle, which is attached to the action rectangle. It
tidies up the otherwise messy and complicated management of the
execution flow of activities. It is an object node that precisely
represents one input to or output from the action.
Notation of an Activity
diagram
Activity diagram constitutes following notations:
Initial State: It depicts the initial stage or beginning of the set of
actions.
Final State: It is the stage where all the control flows and object
flows end.
Decision Box: It makes sure that the control flow or object flow
will follow only one path.
Action Box: It represents the set of actions that are to be
performed.
Notation of an Activity
diagram
Why use Activity
Diagram?
 An activity diagram encompasses a group of nodes
connected by edges. To model the behavior of activities, they can be
attached to any modeling element. It can model use cases, classes,
interfaces, components, and collaborations.
 It mainly models processes and workflows. It envisions the dynamic
behavior of the system as well as constructs a runnable system that
incorporates forward and reverse engineering. It does not include the
message part, which means message flow is not represented in an activity
diagram.
 It is the same as that of a flowchart but not exactly a flowchart itself. It is
used to depict the flow between several activities.
How to draw an Activity
Diagram?
 An activity diagram is a flowchart of activities, as it represents the
workflow among various activities. They are similar to flowcharts,
but they themselves are not exactly flowcharts. In other words, it can be
said that an activity diagram is an enhancement of the flowchart, which
encompasses several unique skills.
 Since it incorporates swimlanes, branching, parallel flows, join nodes,
control nodes, and forks, it supports exception handling. A system must be
explored as a whole before drawing an activity diagram to provide a
clearer view of the user. All of the activities are explored after they are
properly analyzed for finding out the constraints applied to the activities.
Each and every activity, condition, and association must be recognized.
How to draw an Activity
Diagram?
After gathering all the essential information, an abstract or a
prototype is built, which is then transformed into the actual
diagram.
Following are the rules that are to be followed for drawing an
activity diagram:
 A meaningful name should be given to each and every activity.
 Identify all of the constraints.
 Acknowledge the activity associations.
Example of an Activity
Diagram
 An example of an activity diagram showing the business flow
activity of order processing is given below.
 Here the input parameter is the Requested order, and once the
order is accepted, all of the required information is then filled,
payment is also accepted, and then the order is shipped. It
permits order shipment before an invoice is sent or payment is
completed.
Example of an Activity
Diagram
When to use an Activity
Diagram?
An activity diagram can be used to portray business processes and workflows. It is also used
for modeling the business as well as the software. An activity diagram is utilized for the
followings:
1. To graphically model the workflow in an easier and understandable way.
2. To model the execution flow among several activities.
3. To model comprehensive information of a function or an algorithm employed within the
system.
4. To model the business process and its workflow.
5. To envision the dynamic aspect of a system.
6. To generate the top-level flowcharts for representing the workflow of an application.
7. To represent a high-level view of a distributed or an object-oriented system.
Use Case Diagram
 A use case diagram is used to represent the dynamic behavior
of a system.
 It encapsulates the system's functionality by incorporating use
cases, actors, and their relationships.
 It models the tasks, services, and functions required by a
system/subsystem of an application.
 It depicts the high-level functionality of a system and also
tells how the user handles a system.
Purpose of Use Case
Diagrams
The main purpose of a use case diagram is to portray the dynamic aspect of a
system. It accumulates the system's requirements, which include both
internal as well as external influences. It involves persons, use cases, and
several other elements accountable for the
implementation of the system. It represents how an entity from the
external environment can interact with a part of the system.
Following are the purposes of a use case diagram given below:
 It gathers the system's needs.
 It depicts the external view of the system.
 It recognizes the internal as well as external factors that influence the
system.
 It represents the interaction between the actors.
How to draw a Use Case
diagram?
 It is essential to analyze the whole system before starting with
drawing a use case diagram, and then the system's
functionalities are found. And once every single functionality
is identified, they are then transformed into the use cases to be
used in the use case diagram.
 After that, we will enlist the actors that will interact with the
system. An actor is a person or a thing that invokes the
functionality of a system. It may be another system or a private
entity; the only requirement is that the entity be pertinent to the
functionalities of the system with which it is going to interact.
How to draw a Use Case
diagram?
Once both the actors and use cases are enlisted, the relation between the actor and use case/
system is inspected. It identifies the number of times an actor communicates with the system.
Basically, an actor can interact multiple times with a use case or system at a particular
instance of time.
Following are some rules that must be followed while drawing a use case diagram:
 A pertinent and meaningful name should be assigned to the actor or a use case of a
system.
 The communication of an actor with a use case must be defined in an understandable
way.
 Specified notations to be used as and when required.
 The most significant interactions should be represented among the many
interactions between the use cases and actors.
Example of a Use Case
Diagram
A use case diagram depicting the Online Shopping website is given below.
Here the Web Customer actor makes use of any online shopping website to
purchase online. The top-level uses are as follows; View Items, Make
Purchase, Checkout, Client Register.
The View Items use case is utilized by the customer who searches and view
products.
The Client Register use case allows the customer to register with the
website for availing gift vouchers, coupons, or getting a private sale
invitation. It is to be noted that the Checkout is an included use case, which
is part of Making Purchase, and it is not available by itself.
Example of a Use Case
Diagram
Example of a Use Case
Diagram
The View Items use case is further extended by several use cases
such as: Search Items, Browse Items, View Recommended Items,
Add to Shopping Cart, Add to Wish List.
All of these extended use cases provide some functions to
customers, which allow them to search for an item.
Example of a Use Case
Diagram
Both View Recommended Item and Add to Wish List include
the Customer Authentication use case, as they necessitate
authenticated customers, while an item can be added
to the shopping cart without any user authentication.
Example of a Use Case
Diagram
Similarly, the Checkout use case also includes the following use
cases, as shown below. It requires an authenticated Web
Customer, which can be done by login page, user authentication
cookie ("Remember me"), or Single Sign-On (SSO). SSO needs
an external identity provider's participation, while Web site
authentication service is utilized in all these use cases.
The Checkout use case involves the Payment use case, which can be
done either by credit card with external credit payment
services or with PayPal.
Example of a Use Case
Diagram
Important tips for drawing a Use Case
diagram
 Following are some important tips that are to be kept in mind while
drawing a use case diagram:
 A simple and complete use case diagram should be articulated.
 A use case diagram should represent the most significant interaction
among the multiple interactions.
 At least one module of a system should be represented by the use case
diagram.
 If the use case diagram is large and complex, then it should be
drawn in a more generalized way.
Content
• Requirement Engineering
• Requirement Modeling
• Data flow diagram
• Scenario based model
• Software Requirement Specification document format(IEEE)
Requirement Engineering
• To ensure that specified system properly meets the customer’s needs and satisfies the customer’s expectations, a
solid requirements engineering process is the best solution.
• Requirements engineering provides the appropriate mechanism for understanding what the customer wants,
analyzing need, assessing feasibility, negotiating a reasonable solution, specifying the solution unambiguously,
validating the specification, and managing the requirements as they are transformed into an operational system.
• The requirements engineering process can be described in six distinct steps:
• requirements elicitation
• requirements analysis and negotiation
• requirements specification
• system modeling
• requirements validation
• requirements management
Requirement Engineering (Cont…)
• Requirement Elicitation:
• Sommerville and Sawyer suggest a set of detailed guidelines for requirements elicitation, which are summarized in the following
steps:
• Assess the business and technical feasibility for the proposed system.
• Identify the people who will help specify requirements and understand their organizational bias.
• Define the technical environment (e.g., computing architecture, operating system, telecommunications needs) into which the
system or product will be placed.
• Identify “domain constraints” (i.e., characteristics of the business environment specific to the application domain) that limit the
functionality or performance of the system or product to be built.
• Define one or more requirements elicitation methods (e.g., interviews, focus groups, team meetings).
• Solicit participation from many people so that requirements are defined from different points of view; be sure to identify the
rationale for each requirement that is recorded.
• Identify ambiguous requirements as candidates for prototyping.
• Create usage scenarios to help customers/users better identify key requirements.
Requirement Engineering (Cont…)
• Requirements analysis and negotiation:
• Analysis categorizes requirements and organizes them into related subsets; explores each requirement in relationship to others;
examines requirements for consistency, omissions, and ambiguity; and ranks requirements based on the needs of
customers/users.
• As the requirements analysis activity commences, the following questions are asked and answered:
• Is each requirement consistent with the overall objective for the system/product?
• Have all requirements been specified at the proper level of abstraction? That is, do some requirements provide a level of
technical detail that is inappropriate at this stage?
• Is the requirement really necessary or does it represent an add-on feature that may not be essential to the objective of the
system?
• Is each requirement bounded and unambiguous?
• Does each requirement have attribution? That is, is a source (generally, a specific individual) noted for each requirement?
• Do any requirements conflict with other requirements?
Requirement Engineering (Cont…)
• Requirements analysis and negotiation:
• Is each requirement achievable in the technical environment that will house the system or product?
• Is each requirement testable, once implemented?
• Customers, users and stakeholders are asked to rank requirements and then discuss conflicts in priority.
• Risks associated with each requirement are identified and analyzed.
• Rough guestimates of development effort are made and used to assess the impact of each requirement on
project cost and delivery time.
• Using an iterative approach, requirements are eliminated, combined, and/or modified so that each party
achieves some measure of satisfaction.
Requirement Engineering (Cont…)
• Requirements specification:
• The System Specification is the final work product produced by the system and requirements engineer.
• It serves as the foundation for hardware engineering, software engineering, database engineering, and
human engineering.
• It describes the function and performance of a computer-based system and the constraints that will govern
its development.
• The specification bounds each allocated system element.
• The System Specification also describes the information (data and control) that is input to and output from
the system.
• A specification can be a written document, a graphical model, a formal mathematical model, a collection of
usage scenarios, a prototype, or any combination of these.
Requirement Engineering (Cont…)
• System Modelling:
• In order to fully specify what is to be built, there is a need for a meaningful model, a blueprint or three-dimensional rendering.
• It is important to evaluate the system’s components in relationship to one another, to determine how requirements fit into this
picture, and to assess the “aesthetics” of the system as it has been conceived.
• Requirements Validation:
• The work products produced as a consequence of requirements engineering are assessed for quality during a validation step.
• Requirements validation examines the specification to ensure that all system requirements have been stated unambiguously;
that inconsistencies, omissions, and errors have been detected and corrected; and that the work products conform to the
standards established for the process, the project, and the product.
• The primary requirements validation mechanism is the formal technical review. The review team includes system engineers,
customers, users, and other stakeholders who examine the system specification looking for errors in content or interpretation,
areas where clarification may be required, missing information, inconsistencies, conflicting requirements, or unrealistic
(unachievable) requirements.
Requirement Engineering (Cont…)
• Requirements Management:
• Requirements management is a set of activities that help the project team to identify, control, and track
requirements and changes to requirements at any time as the project proceeds.
• Once requirements have been identified, traceability tables are developed. Each traceability table relates identified
requirements to one or more aspects of the system or its environment. Among many possible traceability tables are
the following:
• Features traceability table: Shows how requirements relate to important customer observable system/product
features.
• Source traceability table: Identifies the source of each requirement.
• Dependency traceability table: Indicates how requirements are related to one another.
• Subsystem traceability table: Categorizes requirements by the subsystem(s) that they govern.
• Interface traceability table: Shows how requirements relate to both internal and external system interfaces.
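A traceability table is essentially a mapping from requirements to related items. A minimal sketch of a dependency traceability table, with invented requirement IDs:

```python
# Sketch: a dependency traceability table as a dict from requirement ID
# to the requirement IDs it depends on. All IDs are invented for illustration.

dependency_table = {
    "R1": [],            # R1: standalone requirement
    "R2": ["R1"],        # R2 depends on R1
    "R3": ["R1", "R2"],  # R3 depends on both
}

def impacted_by(req, table):
    """Requirements that directly or transitively depend on `req` —
    useful when assessing the impact of changing `req`."""
    impacted = set()
    for other, deps in table.items():
        if req in deps:
            impacted.add(other)
            impacted |= impacted_by(other, table)
    return impacted

print(sorted(impacted_by("R1", dependency_table)))  # ['R2', 'R3']
```

Such a table is how requirements management tools answer "what else must change if this requirement changes?"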
Requirement Engineering (Cont…)
• Before software development is started, it becomes quite essential to understand and document the exact requirement of the customer.
• Experienced members of the development team carry out this job. They are called analysts.
• The analyst starts requirements gathering and analysis activity by collecting all information from the customer which could be used to
develop the requirements of the system.
• He then analyzes the collected information to obtain a clear and thorough understanding of the product to be developed, with a view to
remove all ambiguities and inconsistencies from the initial customer perception of the problem.
• The following basic questions pertaining to the project should be clearly understood by the analyst in order to obtain a good grasp of the
problem:
• What is the problem?
• Why is it important to solve the problem?
• What are the possible solutions to the problem?
• What exactly are the data input to the system and what exactly are the data output by the system?
• If there are external software or hardware with which the developed software has to interface, then what exactly would the data
interchange formats with the external system be?
Requirement Modeling
• Requirements Modeling:
• Requirements modeling in software engineering is essentially the planning stage of a software application or system. Generally,
the process will begin when a business or an entity (for example, an educational institution) approaches a software development
team to create an application or system from scratch or update an existing one.
• Requirements modeling comprises several stages, or 'patterns': scenario-based modeling, data modeling, flow-oriented
modeling, class-based modeling and behavioral modeling. Each of these stages/patterns examines the same problem from a
different perspective.
• Identifying Requirements:
• Requirements in this context are the conditions that a proposed solution or application must meet in order to solve the business
problem.
• Identifying requirements is not an exclusively technical process, and initially involves all the stakeholders, like the representatives
of the entity that has commissioned the software project, who may not necessarily be from a technical background, as well as
the software developers, who are not necessarily the technical team.
• Together, they discuss and brainstorm about the problem, and decide what functions the proposed application or system must
perform in order to solve it.
Requirement Modeling (Cont…)
• Functional vs. Non-Functional Requirements:
• A functional requirement specifies something that the application or system should do. Often, this is defined
as a behavior of the system that takes input and provides output. For example, a traveler fills out a form in an
airline's mobile application with his/her name and passport details (input), submits the form, and the
application generates a boarding pass with the traveler's details (output).
• Non-functional requirements, sometimes also called quality requirements, describe how the system should
be, as opposed to what it should do. Non-functional requirements of a system include performance (e.g.,
response time), maintainability and scalability, among many others. In the airline application example, the
requirement that the application must display the boarding pass after a maximum of five seconds from the
time the traveler presses the 'submit' button would be a non-functional requirement.
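The five-second response-time requirement from the airline example can be expressed as an automated check. In this sketch, generate_boarding_pass() is a hypothetical stand-in for the real operation, not an actual airline API:

```python
# Sketch: checking a non-functional (response-time) requirement.
# generate_boarding_pass() is a hypothetical stand-in for the real operation.

import time

MAX_RESPONSE_SECONDS = 5.0  # the non-functional requirement from the example

def generate_boarding_pass(name, passport):
    time.sleep(0.01)  # stand-in for the real work of building the pass
    return f"Boarding pass for {name} ({passport})"

start = time.perf_counter()
result = generate_boarding_pass("A. Traveler", "X1234567")
elapsed = time.perf_counter() - start

print(result)
print(f"responded in {elapsed:.3f}s")
assert elapsed <= MAX_RESPONSE_SECONDS, "non-functional requirement violated"
```

The functional requirement is satisfied by what the function returns; the non-functional one is satisfied by how quickly it returns.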
Requirement Modeling (Cont…)
• Requirement modeling strategies:
• Following are the requirement modeling strategies:
• Flow Oriented Modeling
• Class-based Modeling
• Scenario-based Modeling
• Behavior-based Modeling
Flow Oriented Modeling
• It shows how data objects are transformed by processing functions. The flow-oriented elements are:
Requirement Modeling (Cont…)
• Requirement modeling strategies: (Cont…)
i. Data flow model
• It is a graphical technique. It is used to represent information flow.
• The data objects are flowing within the software and transformed by processing the elements.
• The data objects are represented by labeled arrows. Transformations are represented by circles called bubbles.
• A DFD is drawn in a hierarchical fashion and split into different levels; the topmost level is also called the 'context level diagram'.
ii. Control flow model
• Large class applications require a control flow modeling.
• The application creates control information instead of reports or displays.
• The applications process the information in specified time.
• An event is implemented as a Boolean value.
• For example, the Boolean values are true or false, on or off, 1 or 0.
Requirement Modeling (Cont…)
• Requirement modeling strategies: (Cont…)
Behavior-based Modeling:
• It represents the behavior of the system.
• The state diagram in behavior-based modeling is a sequential specification of the behavior.
• The state diagram includes states, transitions, events and activities.
• State diagram shows the transition from one state to another state if a particular event has occurred.
Class-based Modeling:
• Class-based modeling represents the objects that the system manipulates and the operations applied to them.
• The elements of the class-based model consist of classes and objects, attributes, operations, and class-
responsibility-collaborator (CRC) models.
Requirement Modeling (Cont…)
• Requirement modeling strategies: (Cont…)
Class-based Modeling:
• Classes: Classes are determined by underlining each noun or noun clause and entering it into a simple table.
• Classes are found in following forms:
• External entities: The system, people or the device generates the information that is used by the computer based system.
• Things: The reports, displays, letter, signal are the part of the information domain or the problem.
• Occurrences or events: A property transfer or the completion of a series or robot movements occurs in the context of the system
operation.
• Roles: The people like manager, engineer, salesperson are interacting with the system.
• Organizational units: The division, group, team are suitable for an application.
• Places: The manufacturing floor or loading dock from the context of the problem and the overall function of the system.
• Structures: The sensors and computers define a class of objects or related classes of objects.
Requirement Modeling (Cont…)
• Requirement modeling strategies: (Cont…)
Class-based Modeling:
• Attributes:
• Attributes are the set of data objects that define a complete class within the context of the problem.
• For example, 'employee' is a class; the name, ID, department, designation, and salary of the employee
are its attributes.
• Operations:
• The operations define the behavior of an object.
• The operations are characterized into following types:
• The operations manipulate data, e.g., adding, modifying, deleting, and displaying.
• The operations perform a computation.
• The operations monitor an object for the occurrence of a controlling event.
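The 'employee' example above can be written as a class whose attributes follow the description in the text; the method names are illustrative assumptions, not part of the source.

```python
class Employee:
    """Class with the attributes listed above: name, Id, department,
    designation, and salary. Method names are illustrative."""

    def __init__(self, name, emp_id, department, designation, salary):
        self.name = name
        self.emp_id = emp_id
        self.department = department
        self.designation = designation
        self.salary = salary

    def modify_salary(self, new_salary):
        # an operation that manipulates the object's data
        self.salary = new_salary

    def display(self):
        # an operation that displays the object's data
        return f"{self.emp_id}: {self.name}, {self.designation} ({self.department})"
```

For example, `Employee("Asha", 101, "IT", "Engineer", 50000)` creates an object whose `modify_salary` and `display` operations act on those attributes.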
Requirement Modeling (Cont…)
• Requirement modeling strategies: (Cont…)
Class-based Modeling:
• CRC Modeling:
• CRC stands for Class-Responsibility-Collaborator.
• It provides a simple means for identifying and organizing the classes that are relevant to the system or product requirements.
• Class is an object-oriented class name, together with information about its subclasses and superclass.
• Responsibilities are the attributes and operations that are relevant to the class.
• Collaborations are identified by determining whether a class can fulfill each of its responsibilities on its own. If it cannot, it needs to interact with another class.
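A CRC card can be represented as a simple record. The concrete class, responsibilities, and collaborator below are illustrative assumptions, not taken from the source.

```python
# A CRC (Class-Responsibility-Collaborator) card as a simple record.
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    class_name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

card = CRCCard(
    class_name="Employee",
    responsibilities=["maintain name, id, salary", "compute payslip"],
    # the class cannot compute a payslip alone, so it collaborates:
    collaborators=["PayrollCalendar"],
)
```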
Requirement Modeling (Cont…)
• Requirement modeling strategies: (Cont…)
Behavioral patterns modeling:
• Behavioral model shows the response of software to an external event.
• Steps for creating behavioral patterns for requirement modeling as follows:
• Evaluate all the use cases to completely understand the sequence, interaction within the system.
• Identify the event and understand the relation between the specific event.
• Generate a sequence for each use case.
• Construct a state diagram for the system.
• To verify the accuracy and consistency review the behavioral model.
Requirement Modeling (Cont…)
Scenario-based modeling:
• While developing a computer-based system, customer satisfaction can be improved by presenting scenario-based models during the software design process.
• Scenario-based modeling is done by developing scenarios in the form of use cases, activity diagrams, and swimlane diagrams.
• The use case diagram is intended to capture the interaction between the producers and consumers of the system. All the required functionality can be exposed by creating use case diagrams.
• Activity Diagram: The activity diagram is a graphical representation of the flow of interaction within a specific scenario. It is similar to a flowchart in which the various activities that can be performed in the system are represented. The diagram is read from top to bottom and contains forks and branches. A fork indicates that several activities can be carried out in parallel. The diagram also contains merges, where multiple branches combine.
Requirement Modeling (Cont…)

Scenario-based modeling:
• Swimlane Diagram: The activity diagram shows the various activities performed, but it does not tell you who is responsible for each activity. In a swimlane diagram, the activity diagram is partitioned according to the class responsible for carrying out each activity.
• Use case diagram: The use case diagram is the primary form of system/software requirements for a new software program under development. Use cases specify the expected behavior (what), not the exact method of achieving it (how). Once specified, use cases can have both textual and visual representations (i.e., the use case diagram). A key idea of use case modeling is that it helps us design a system from the end user's perspective. It is an effective technique for communicating system behavior in the user's terms by specifying all externally visible system behavior.
Requirement Modeling (Cont…)

Software Requirement Specification (SRS)


• The requirements are specified in a specific format known as the SRS.
• This document is created before development work starts.
• The software requirement specification is an official document.
• It gives details about the expected performance of the system.
• The SRS indicates to both the developer and the customer what will be implemented in the software.
• The SRS is especially useful when the software system is developed by an outside contractor.
• The SRS must cover interfaces, functional capabilities, quality, reliability, privacy, etc.
Requirement Modeling (Cont…)

Characteristics of SRS
• The SRS should be complete and consistent.
• The SRS should be structured (e.g., logically and hierarchically) so that modifications can be made easily.
• The requirements should be feasible to implement.
• Each requirement should be uniquely identified.
• Every statement in the SRS must be unambiguous, i.e., it should have only one interpretation.
• All requirements must be valid for the specified project.
Estimation for Software Projects

Software Project Planning
The overall goal of project planning is to establish a pragmatic strategy for controlling, tracking, and monitoring a complex technical project.

Why? So the end result gets done on time, with quality!

Project Planning Task Set-I
 Establish project scope
 Determine feasibility
 Analyze risks
 Risk analysis is considered.
 Define required resources
 Determine required human resources
 Define reusable software resources
 Identify environmental resources

Project Planning Task Set-II
 Estimate cost and effort
 Decompose the problem
 Develop two or more estimates using size, function points,
process tasks or use-cases
 Reconcile the estimates
 Develop a project schedule
 Scheduling is considered.
 Establish a meaningful task set
 Define a task network
 Use scheduling tools to develop a timeline chart
 Define schedule tracking mechanisms

Estimation
 Estimation of resources, cost, and schedule for a
software engineering effort requires
 experience
 access to good historical information (metrics)
 the courage to commit to quantitative predictions when
qualitative information is all that exists
 Estimation carries inherent risk and this risk leads to
uncertainty

Write it Down!

[Figure: project scope, estimates, risks, schedule, and control strategy are all written down to form the Software Project Plan.]
To Understand Scope ...
 Understand the customer's needs
 understand the business context
 understand the project boundaries
 understand the customer’s motivation
 understand the likely paths for change
 understand that ...

Even when you understand,


nothing is guaranteed!

What is Scope?
 Software scope describes
 the functions and features that are to be delivered to end-users
 the data that are input and output
 the “content” that is presented to users as a consequence of
using the software
 the performance, constraints, interfaces, and reliability that
bound the system.
 Scope is defined using one of two techniques:
 A narrative description of software scope is developed after
communication with all stakeholders.
 A set of use-cases is developed by end-users.

Resources

[Figure: three categories of project resources —
• People: number, skills, location
• Environment: software tools, hardware, network resources
• Reusable software: off-the-shelf (OTS) components, full-experience components, partial-experience components, new components]
Project Estimation
 Project scope must be understood
 Elaboration (decomposition) is necessary
 Historical metrics are very helpful
 At least two different techniques should be
used
 Uncertainty is inherent in the process

Estimation Techniques
 Past (similar) project experience
 Conventional estimation techniques
 task breakdown and effort estimates
 size (e.g., FP) estimates
 Empirical models
 Automated tools

Estimation Accuracy
 Predicated on …
 the degree to which the planner has properly estimated the size
of the product to be built
 the ability to translate the size estimate into human effort,
calendar time, and dollars (a function of the availability of reliable
software metrics from past projects)
 the degree to which the project plan reflects the abilities of the
software team
 the stability of product requirements and the environment that
supports the software engineering effort.

Functional Decomposition

[Figure: perform a grammatical "parse" of the statement of scope to obtain the functional decomposition.]
Conventional Methods: LOC/FP Approach
 compute LOC/FP using estimates of information domain values
 use historical data to build estimates for the project
Example: LOC Approach

Average productivity for systems of this type = 620 LOC/pm.
With a burdened labor rate of $8000 per month, the cost per line of code is approximately $13.
Based on the LOC estimate and the historical productivity data, the total estimated project cost is $431,000 and the estimated effort is 54 person-months.
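The slide's figures can be reproduced with a short script. Note that the size estimate (33,200 LOC) is assumed here: the slide omits it, and this is the value consistent with 54 person-months at 620 LOC/pm.

```python
# LOC-based estimate, reproducing the slide's arithmetic.
loc_estimate = 33_200      # assumed size in LOC (not stated on the slide)
productivity = 620         # LOC per person-month (from the slide)
labor_rate = 8_000         # dollars per person-month (from the slide)

cost_per_loc = labor_rate / productivity          # ~ $12.9, rounded to $13
effort_pm = loc_estimate / productivity           # ~ 54 person-months
total_cost = loc_estimate * round(cost_per_loc)   # ~ $431,600, quoted as $431,000
```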

Example: FP Approach

The estimated number of FP is derived:
FP_estimated = count-total × [0.65 + 0.01 × Σ(Fi)]
FP_estimated = 375
Organizational average productivity = 6.5 FP/pm.
With a burdened labor rate of $8000 per month, the cost per FP is approximately $1230.
Based on the FP estimate and the historical productivity data, the total estimated project cost is $461,000 and the estimated effort is 58 person-months.
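The FP figures can be checked the same way; only the values given on the slide are used.

```python
# FP-based estimate, reproducing the slide's arithmetic.
fp_estimated = 375     # from the slide
productivity = 6.5     # FP per person-month
labor_rate = 8_000     # dollars per person-month

cost_per_fp = labor_rate / productivity    # ~ $1230.77, quoted as ~$1230
effort_pm = fp_estimated / productivity    # ~ 57.7, quoted as 58 person-months
total_cost = effort_pm * labor_rate        # ~ $461,538, quoted as $461,000
```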

Process-Based Estimation
Obtained from the "process framework": for each application function, estimate the effort required to accomplish each framework activity.
Process-Based Estimation Example

Effort (person-months) per function and engineering/construction task:

Function   analysis   design   code    test    CE     Totals
UICF         0.50      2.50    0.40    5.00    n/a     8.40
2DGA         0.75      4.00    0.60    2.00    n/a     7.35
3DGA         0.50      4.00    1.00    3.00    n/a     8.50
CGDF         0.50      3.00    1.00    1.50    n/a     6.00
DSM          0.50      3.00    0.75    1.50    n/a     5.75
PCF          0.25      2.00    0.50    1.50    n/a     4.25
DAM          0.50      2.00    0.50    2.00    n/a     5.00

Activity totals: CC 0.25, Planning 0.25, Risk Analysis 0.25, analysis 3.50, design 20.50, code 4.50, test 16.50; grand total 46.00 person-months.
% effort: CC 1%, Planning 1%, Risk Analysis 1%, analysis 8%, design 45%, code 10%, test 36%.

CC = customer communication, CE = customer evaluation

Based on an average burdened labor rate of $8,000 per month, the total estimated project cost is $368,000 and the estimated effort is 46 person-months.

Tool-Based Estimation
Project characteristics, calibration factors, and LOC/FP data are fed into an automated estimation tool.
Estimation with Use-Cases

Subsystem                        use cases  scenarios  pages  scenarios  pages   LOC   LOC estimate
User interface subsystem             6         10        6       12        5      560      3,366
Engineering subsystem group         10         20        8       16        8     3100     31,233
Infrastructure subsystem group       5          6        5       10        6     1650      7,970
Total LOC estimate                                                                        42,568

Using 620 LOC/pm as the average productivity for systems of this type and a burdened labor rate of $8000 per month, the cost per line of code is approximately $13. Based on the use-case estimate and the historical productivity data, the total estimated project cost is $552,000 and the estimated effort is 68 person-months.

Empirical Estimation Models
General form:

effort = tuning coefficient × size^exponent

where:
• effort is usually expressed as person-months of effort required
• the tuning coefficient is either a constant or a number derived based on the complexity of the project
• size is usually LOC but may also be function points
• the exponent is empirically derived
COCOMO-II
 COCOMO II is actually a hierarchy of estimation models
that address the following areas:
 Application composition model. Used during the early stages of
software engineering, when prototyping of user interfaces,
consideration of software and system interaction, assessment of
performance, and evaluation of technology maturity are paramount.
 Early design stage model. Used once requirements have been
stabilized and basic software architecture has been established.
 Post-architecture-stage model. Used during the construction of the
software.

The Software Equation
A dynamic multivariable model:

E = [LOC × B^0.333 / P]^3 × (1/t^4)

where
E = effort in person-months or person-years
t = project duration in months or years
B = "special skills factor"
P = "productivity parameter"
Estimation for OO Projects-I
 Develop estimates using effort decomposition, FP analysis, and any other
method that is applicable for conventional applications.
 Using object-oriented analysis modeling, develop use-cases and determine
a count.
 From the analysis model, determine the number of key classes (called
analysis classes).
 Categorize the type of interface for the application and develop a multiplier
for support classes:
 Interface type Multiplier
 No GUI 2.0
 Text-based user interface 2.25
 GUI 2.5
 Complex GUI 3.0

Estimation for OO Projects-II
 Multiply the number of key classes (step 3) by the multiplier to obtain
an estimate for the number of support classes.
 Multiply the total number of classes (key + support) by the average
number of work-units per class. Lorenz and Kidd suggest 15 to 20
person-days per class.
 Cross-check the class-based estimate by multiplying the total number of use-cases by the average number of work-units per use-case
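The steps above can be put together in a short sketch. The number of key classes and the work-units value chosen below are illustrative assumptions; the multiplier comes from the interface-type table on the previous slide.

```python
# OO estimation sketch following the steps above.
key_classes = 20             # assumed count of analysis (key) classes
multiplier = 2.5             # GUI interface type (from the table)
work_units_per_class = 18    # person-days, within Lorenz & Kidd's 15-20 range

support_classes = key_classes * multiplier                  # 50 support classes
total_classes = key_classes + support_classes               # 70 classes
effort_person_days = total_classes * work_units_per_class   # 1260 person-days
```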

Estimation for Agile Projects
 Each user scenario (a mini-use-case) is considered separately for
estimation purposes.
 The scenario is decomposed into the set of software engineering tasks that
will be required to develop it.
 Each task is estimated separately. Note: estimation can be based on
historical data, an empirical model, or “experience.”
 Alternatively, the ‘volume’ of the scenario can be estimated in LOC, FP or some
other volume-oriented measure (e.g., use-case count).
 Estimates for each task are summed to create an estimate for the scenario.
 Alternatively, the volume estimate for the scenario is translated into effort using
historical data.
 The effort estimates for all scenarios that are to be implemented for a given
software increment are summed to develop the effort estimate for the
increment.
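The roll-up described above can be sketched as a pair of summations; the scenario names and task estimates below are assumed for illustration.

```python
# Agile estimation sketch: per-task estimates roll up to scenarios,
# and scenario estimates roll up to the increment. Values are assumed.
scenarios = {
    "login": [0.5, 1.0, 0.5],      # task estimates in person-days
    "search": [1.0, 2.0, 1.5],
}

scenario_effort = {name: sum(tasks) for name, tasks in scenarios.items()}
increment_effort = sum(scenario_effort.values())   # effort for the increment
```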

The Make-Buy Decision

Computing Expected Cost

expected cost = Σ (path probability)_i × (estimated path cost)_i

For example, the expected cost to build is:
expected cost (build) = 0.30($380K) + 0.70($450K) = $429K

Similarly,
expected cost (reuse) = $382K
expected cost (buy) = $267K
expected cost (contract) = $410K
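The same computation applies to any branch of the decision tree; the code below reproduces the build figure from the slide.

```python
# Expected cost along a decision path:
# sum over paths of (path probability) * (estimated path cost).
def expected_cost(paths):
    return sum(prob * cost for prob, cost in paths)

# The "build" branch from the slide: 0.30 * $380K + 0.70 * $450K = $429K
build = expected_cost([(0.30, 380), (0.70, 450)])   # in $K
```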
Software Engineering | Project size
estimation techniques
Estimation of the size of the software is an essential part of Software
Project Management. It helps the project manager to further predict the
effort and time which will be needed to build the project. Various measures
are used in project size estimation. Some of these are:

 Lines of Code
 Number of entities in ER diagram
 Total number of processes in detailed data flow diagram
 Function points
Lines of Code (LOC)
As the name suggests, LOC counts the total number of lines of
source code in a project. The units of LOC are:

KLOC- Thousand lines of code


NLOC- Non-comment lines of code
KDSI- Thousands of delivered source instruction
The size is estimated by comparing it with the existing systems
of the same kind. The experts use it to predict the required size
of various components of software and then add them to get the
total size.
Lines of Code (LOC)
It’s tough to estimate LOC by analyzing the problem
definition. Only after the whole code has been developed
can accurate LOC be estimated. This statistic is of little
utility to project managers because project planning must
be completed before development activity can begin.
Lines of Code (LOC)
Two separate source files having a similar number of
lines may not require the same effort. A file with
complicated logic would take longer to create than one
with simple logic. Proper estimation may not be
attainable based on LOC.
The effort to solve a problem, when measured in LOC, will
differ greatly from one programmer to the next: a seasoned
programmer can write the same logic in fewer lines than a
novice coder.
Lines of Code (LOC)
Advantages:
Universally accepted and used in many models like COCOMO.
Estimation is closer to the developer’s perspective.
Simple to use.
Disadvantages:
Different programming languages contain a different number of lines.
No proper industry standard exists for this technique.
It is difficult to estimate the size using this technique in the early stages of
the project.
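As a small illustration of the NLOC variant listed above, a counter that skips blank and comment lines can be written as follows. Python-style '#' comments are assumed; a real counter would handle each language's comment syntax.

```python
def count_nloc(source: str) -> int:
    """Count non-comment, non-blank lines (NLOC). Assumes '#' comments."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = "# compute area\n\nwidth = 3\nheight = 4\narea = width * height\n"
# count_nloc(sample) counts only the three assignment lines
```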
COCOMO MODEL
(COnstructive COst MOdel)
The most widely used software estimation model.
COCOMO predicts the effort and schedule of a software product.
COCOMO Models
• COCOMO is defined in terms of three different
models:
– the Basic model,
– the Intermediate model, and
– the Detailed model.
• The more complex models account for more
factors that influence software projects, and
make more accurate estimates.

SEG3300 A&B W2004 R.L. Probert 2


The Development mode
• One of the most important factors contributing to a project's duration and cost is the Development Mode:
• Organic Mode: The project is developed in a familiar,
stable environment, and the product is similar to
previously developed products. The product is
relatively small, and requires little innovation.
• Semidetached Mode: The project's characteristics are
intermediate between Organic and Embedded.



The Development mode
• One of the most important factors contributing to a project's duration and cost is the Development Mode:
• Embedded Mode: The project is characterized by tight,
inflexible constraints and interface requirements. An
embedded mode project will require a great deal of
innovation.



Basic COCOMO model

• Computes software development effort (and


cost) as function of program size expressed in
estimated lines of code
• Model:
Category         a_b    b_b    c_b    d_b
Organic          2.4    1.05   2.5    0.38
Semi-detached    3.0    1.12   2.5    0.35
Embedded         3.6    1.20   2.5    0.32

TCS2411 Software Engineering 5


Basic COCOMO Equations

E = a_b (kLOC)^(b_b)
D = c_b (E)^(d_b)

where
• E is effort in person-months
• D is development time in months
• kLOC is the estimated number of lines of code


P = E/D
where P is the total number of persons required to accomplish the project.
Merits
• Good for quick, early, rough-order estimates
Limitations:
• Accuracy is limited
• Does not consider certain factors (hardware constraints, personnel quality, experience, tools)
Example
• Consider a software project using the semi-detached mode with 30,000 lines of code. We obtain the estimates for this project as follows:

• Effort estimation
E = 3.0 × (30)^1.12 = 135 person-months
• Duration estimation
D = 2.5 × (135)^0.35 = 14 months
• Person estimation
P = E/D = 135/14 = 10 persons approximately
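The example can be checked directly with the basic COCOMO equations and the semi-detached coefficients from the table:

```python
# Basic COCOMO, semi-detached mode: a_b=3.0, b_b=1.12, c_b=2.5, d_b=0.35.
kloc = 30

E = 3.0 * kloc ** 1.12    # effort, ~ 135 person-months
D = 2.5 * E ** 0.35       # duration, ~ 14 months
P = E / D                 # ~ 10 persons
```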
Intermediate COCOMO

• computes software development effort as a


function of program size and a set of “cost
drivers” that include subjective assessments of
product, hardware, personnel, and project
attributes
• Give a rating to 15 attributes, from “very low” to “extra high”, find the effort multiplier for each (from a table); the product of all effort multipliers gives an effort adjustment factor (EAF)
Cost Driver Attributes
• Product attributes
– Required reliability
– Database size
– Product complexity
• Computer attributes
– Execution time constraint
– Main storage constraint
– Virtual machine volatility
– Computer turnaround time
Cost Driver Attributes (Continued)
• Personnel attributes
– Analyst capability, Programmer capability
– Applications experience
– Virtual machine experience
– Programming language experience
• Project attributes
– Use of modern programming practices
– Use of software tools
– Required development schedule
Intermediate COCOMO Equation
Category         a_i    b_i
Organic          3.2    1.05
Semi-detached    3.0    1.12
Embedded         2.8    1.20

E = a_i (kLOC)^(b_i) × EAF

where
• E is effort in person-months
• kLOC is the estimated number of lines of code
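A sketch of the intermediate model; the EAF value below is an assumed product of cost-driver multipliers, not a value from the slides.

```python
# Intermediate COCOMO, semi-detached mode: a_i=3.0, b_i=1.12.
kloc = 30
eaf = 1.10   # assumed effort adjustment factor (product of 15 multipliers)

E = 3.0 * kloc ** 1.12 * eaf   # ~ 149 person-months
```

With EAF = 1.0 this reduces to the basic-model estimate of about 135 person-months.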
Merits
• Can be applied to almost the entire software product for easy and rough cost estimation
• Can be applied at the software product component level
Limitations:
• Many components are difficult to estimate
Advanced COCOMO
• Incorporates all characteristics of intermediate COCOMO with an assessment of the cost drivers' impact on each step of the software engineering process



COCOMO 2 models
• COCOMO 2 incorporates a range of sub-models that produce
increasingly detailed software estimates.
• The sub-models in COCOMO 2 are:
– Application composition model. Used when software is composed
from existing parts.
– Early design model. Used when requirements are available but design
has not yet started.
– Reuse model. Used to compute the effort of integrating reusable
components.
– Post-architecture model. Used once the system architecture has been
designed and more information about the system is available.
Use of COCOMO 2 models
Application composition model
• Supports prototyping projects and projects where there is
extensive reuse.
• Based on standard estimates of developer productivity in
application (object) points/month.
• Takes CASE tool use into account.
• Formula is
– PM = (NAP × (1 − %reuse/100)) / PROD
– PM is the effort in person-months, NAP is the number of application points, and PROD is the productivity.
Object point productivity

Developer's experience and capability:   Very low | Low | Nominal | High | Very high
ICASE maturity and capability:           Very low | Low | Nominal | High | Very high
PROD (NOP/month):                            4    |  7  |   13    |  25  |    50
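Using the nominal productivity from the table, the formula can be evaluated directly; the NAP and %reuse values below are assumed for illustration.

```python
# COCOMO II application composition model: PM = (NAP * (1 - reuse)) / PROD.
nap = 100        # assumed number of application points
reuse = 0.20     # assumed fraction of reused application points
prod = 13        # NOP/month at the nominal rating (from the table)

pm = nap * (1 - reuse) / prod   # ~ 6.2 person-months
```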
Early design model
• Estimates can be made after the requirements
have been agreed.
• Based on a standard formula for algorithmic models
– PM = A × Size^B × M, where
– M = PERS × RCPX × RUSE × PDIF × PREX × FCIL × SCED;
– A = 2.94 in initial calibration, Size in KLOC, B varies from 1.1 to 1.24 depending on the novelty of the project, development flexibility, risk management approaches and the process maturity.
Multipliers
• Multipliers reflect the capability of the
developers, the non-functional requirements,
the familiarity with the development platform,
etc.
– RCPX - product reliability and complexity;
– RUSE - the reuse required;
– PDIF - platform difficulty;
– PREX - personnel experience;
– PERS - personnel capability;
– SCED - required schedule;
– FCIL - the team support facilities.
The reuse model
• Takes into account black-box code that is
reused without change and code that has to
be adapted to integrate it with new code.
• There are two versions:
– Black-box reuse where code is not modified. An
effort estimate (PM) is computed.
– White-box reuse where code is modified. A size
estimate equivalent to the number of lines of new
source code is computed. This then adjusts the
size estimate for new code.
Reuse model estimates 1
• For generated code:
– PM = (ASLOC × AT/100) / ATPROD
– ASLOC is the number of lines of generated code
– AT is the percentage of code automatically generated
– ATPROD is the productivity of engineers in integrating this code.
Reuse model estimates 2
• When code has to be understood and
integrated:
– ESLOC = ASLOC × (1 − AT/100) × AAM.
– ASLOC and AT as before.
– AAM is the adaptation adjustment multiplier
computed from the costs of changing the reused
code, the costs of understanding how to integrate
the code and the costs of reuse decision making.
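The white-box formula above can be evaluated directly; all input values below are assumed for illustration.

```python
# COCOMO II reuse model (white-box path): equivalent new-code size.
asloc = 10_000   # assumed lines of adapted (reused) code
at = 30          # assumed % of code automatically generated
aam = 0.5        # assumed adaptation adjustment multiplier

esloc = asloc * (1 - at / 100) * aam   # ~ 3500 equivalent new lines
```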
Post-architecture level
• Uses the same formula as the early design model
but with 17 rather than 7 associated multipliers.
• The code size is estimated as:
– Number of lines of new code to be developed;
– Estimate of equivalent number of lines of new code
computed using the reuse model;
– An estimate of the number of lines of code that have
to be modified according to requirements changes.
The exponent term
• This depends on 5 scale factors (see next slide). Their sum/100 is added to 1.01.
• Example: a company takes on a project in a new domain. The client has not defined the process to be used and has not allowed time for risk analysis. The company has a CMM level 2 rating.
– Precedentedness - new project (4)
– Development flexibility - no client involvement - Very high (1)
– Architecture/risk resolution - no risk analysis - Very low (5)
– Team cohesion - new team - nominal (3)
– Process maturity - some control - nominal (3)
• The scale factor is therefore 1.01 + (4+1+5+3+3)/100 = 1.17.
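The slide's arithmetic, written out:

```python
# Exponent B = 1.01 + (sum of the five scale factors) / 100.
scale_factors = {
    "precedentedness": 4,          # new project
    "development_flexibility": 1,  # no client involvement - very high
    "architecture_risk": 5,        # no risk analysis - very low
    "team_cohesion": 3,            # new team - nominal
    "process_maturity": 3,         # some control - nominal
}

B = 1.01 + sum(scale_factors.values()) / 100   # 1.17
```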
Exponent scale factors
Precedentedness: Reflects the previous experience of the organisation with this type of project. Very low means no previous experience; Extra high means that the organisation is completely familiar with this application domain.
Development flexibility: Reflects the degree of flexibility in the development process. Very low means a prescribed process is used; Extra high means that the client only sets general goals.
Architecture/risk resolution: Reflects the extent of risk analysis carried out. Very low means little analysis; Extra high means a complete and thorough risk analysis.
Team cohesion: Reflects how well the development team know each other and work together. Very low means very difficult interactions; Extra high means an integrated and effective team with no communication problems.
Process maturity: Reflects the process maturity of the organisation. The computation of this value depends on the CMM Maturity Questionnaire, but an estimate can be achieved by subtracting the CMM process maturity level from 5.
Estimation Issues
• Historical Data
• Accuracy
• Estimation Technique
• Automation
• Improving the Estimate



References
• “Software Engineering: A Practitioner's Approach”, 5th Ed., Roger S. Pressman, McGraw-Hill, 2001
• “Software Engineering”, Ian Sommerville, Addison-Wesley, 2001



Content

• Software Design
• Design Principles & Concepts
• Effective Modular Design, Cohesion and Coupling, Architectural design
Software Design
• Software design is the first of three technical activities—design, code generation, and test—that are
required to build and verify the software.

During design we make decisions that will ultimately affect the success of software construction and, as important, the ease with which software can be maintained.
Software Design (Cont…)

• The flow of information during software design is illustrated in Figure above.


• Software requirements, manifested by the data, functional, and behavioral models, feed the design task.
Using one of a number of design methods, the design task produces a data design, an architectural design,
an interface design, and a component design.
• The data design transforms the information domain model created during analysis into the data
structures that will be required to implement the software. The data objects and relationships defined in
the entity relationship diagram and the detailed data content depicted in the data dictionary provide the
basis for the data design activity. Part of data design may occur in conjunction with the design of software
architecture. More detailed data design occurs as each software component is designed.
Software Design (Cont…)
• The architectural design defines the relationship between major structural elements of the software, the
“design patterns” that can be used to achieve the requirements that have been defined for the system,
and the constraints that affect the way in which architectural design patterns can be applied. The
architectural design representation—the framework of a computer-based system—can be derived from
the system specification, the analysis model, and the interaction of subsystems defined within the analysis
model.
• The interface design describes how the software communicates within itself, with systems that
interoperate with it, and with humans who use it. An interface implies a flow of information (e.g., data
and/or control) and a specific type of behavior. Therefore, data and control flow diagrams provide much
of the information required for interface design.
• The component-level design transforms structural elements of the software architecture into a
procedural description of software components. Information obtained from the PSPEC, CSPEC, and STD
serve as the basis for component design.
Design Principle
• The design process should not suffer from “tunnel vision.” A good designer should consider alternative
approaches, judging each based on the requirements of the problem, the resources available to do the job.
• The design should be traceable to the analysis model. Because a single element of the design model often
traces to multiple requirements, it is necessary to have a means for tracking how requirements have been
satisfied by the design model.
• The design should not reinvent the wheel. Systems are constructed using a set of design patterns, many of
which have likely been encountered before. These patterns should always be chosen as an alternative to
reinvention. Time is short and resources are limited! Design time should be invested in representing truly new
ideas and integrating those patterns that already exist.
• The design should “minimize the intellectual distance” between the software and the problem as it exists in
the real world. That is, the structure of the software design should (whenever possible) mimic the structure of
the problem domain.
• The design should exhibit uniformity and integration. A design is uniform if it appears that one person
developed the entire thing. Rules of style and format should be defined for a design team before design work
begins. A design is integrated if care is taken in defining interfaces between design components.
Design Principle (Cont…)
• The design should be structured to accommodate change. The design concepts discussed in the next
section enable a design to achieve this principle.
• The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered. Well-designed software should never “bomb.” It should be designed to accommodate unusual circumstances, and if it must terminate processing, it should do so in a graceful manner.
• Design is not coding, coding is not design. Even when detailed procedural designs are created for program
components, the level of abstraction of the design model is higher than source code. The only design
decisions made at the coding level address the small implementation details that enable the procedural
design to be coded.
• The design should be assessed for quality as it is being created, not after the fact. A variety of design
concepts and design measures are available to assist the designer in assessing quality.
Design Concept

• Abstraction
• Refinement
• Modularity
Design Concept (Cont…)

• Abstraction:
• There are three basic types of abstraction.
• A procedural abstraction is a named sequence of instructions that has a specific and limited function. An
example of a procedural abstraction would be the word open for a door. Open implies a long sequence of
procedural steps (e.g., walk to the door, reach out and grasp knob, turn knob and pull door, step away
from moving door, etc.).
• A data abstraction is a named collection of data that describes a data object. In the context of the
procedural abstraction open, we can define a data abstraction called door. Like any data object, the data
abstraction for door would encompass a set of attributes that describe the door (e.g., door type, swing
direction, opening mechanism, weight, dimensions). It follows that the procedural abstraction open would
make use of information contained in the attributes of the data abstraction door.
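The door example above can be sketched in code. This is an illustrative sketch only; the `Door` class and `open_door` function are hypothetical names chosen to mirror the text:

```python
from dataclasses import dataclass

# Data abstraction: a named collection of attributes describing a door.
@dataclass
class Door:
    door_type: str
    swing_direction: str
    opening_mechanism: str
    weight_kg: float

# Procedural abstraction: "open" names a sequence of steps without
# exposing those steps to the caller.
def open_door(door: Door) -> str:
    steps = [
        "walk to the door",
        f"grasp the {door.opening_mechanism}",
        "turn and pull",
        f"step away from the door swinging {door.swing_direction}",
    ]
    return "; ".join(steps)

front = Door("hinged", "inward", "knob", 20.0)
print(open_door(front))
```

Note how the procedural abstraction `open_door` uses information held in the attributes of the data abstraction `Door`, just as the text describes.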
Design Concept (Cont…)

• Abstraction:
• Control abstraction is the third form of abstraction used in software design. Like procedural and data
abstraction, control abstraction implies a program control mechanism without specifying internal details.
An example of a control abstraction is the synchronization semaphore used to coordinate activities in an
operating system.
• Refinement:
• Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth.
• A program is developed by successively refining levels of procedural detail.
• A hierarchy is developed by decomposing a macroscopic statement of function (a procedural abstraction)
in a stepwise fashion until programming language statements are reached.
Design Concept (Cont…)

• Refinement:
• In each step (of the refinement), one or several instructions of the given program are decomposed into
more detailed instructions.
• This successive decomposition or refinement of specifications terminates when all instructions are
expressed in terms of any underlying computer or programming language .
• As tasks are refined, so the data may have to be refined, decomposed, or structured, and it is natural to
refine the program and the data specifications in parallel.
• Abstraction and refinement are complementary concepts. Abstraction enables a designer to specify
procedure and data and yet suppress low-level details. Refinement helps the designer to reveal low-level
details as design progresses. Both concepts aid the designer in creating a complete design model as the
design evolves.
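Stepwise refinement can be sketched as follows. The "clean the room" task is a hypothetical example: a macroscopic statement of function is decomposed level by level until each step is an executable statement:

```python
# Level 1: a macroscopic statement of function (a procedural abstraction):
#   "clean the room"

# Level 2: decompose the statement into subtasks.
def clean_room():
    return [pick_up_items(), dust_surfaces(), vacuum_floor()]

# Level 3: refine each subtask until it is expressed in
# programming-language statements.
def pick_up_items():
    return "items picked up"

def dust_surfaces():
    return "surfaces dusted"

def vacuum_floor():
    return "floor vacuumed"

print(clean_room())
```

Each refinement step elaborates one instruction into more detailed instructions, terminating when everything is expressed in the programming language.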
Design Concept (Cont…)
• Modularity:
• Modularity means software is divided into separately named and addressable components, often called
modules, that are integrated to satisfy problem requirements.
• It has been stated that "modularity is the single attribute of software that allows a program to be
intellectually manageable".
• Monolithic software (i.e., a large program composed of a single module) cannot be easily grasped by a
reader. The number of control paths, span of reference, number of variables, and overall complexity
would make understanding close to impossible.
• How can we evaluate a design method's ability to define effective modules?
• Modular decomposability. If a design method provides a systematic mechanism for decomposing the
problem into subproblems, it will reduce the complexity of the overall problem, thereby achieving an
effective modular solution.
Design Concept (Cont…)

• Modularity:
• How can we evaluate a design method's ability to define effective modules?
• Modular composability. If a design method enables existing (reusable) design components to be
assembled into a new system, it will yield a modular solution that does not reinvent the wheel.
• Modular understandability. If a module can be understood as a standalone unit (without reference to
other modules), it will be easier to build and easier to change.
• Modular continuity. If small changes to the system requirements result in changes to individual modules,
rather than system-wide changes, the impact of change-induced side effects will be minimized.
• Modular protection. If an aberrant condition occurs within a module and its effects are constrained within
that module, the impact of error-induced side effects will be minimized.
Effective Modular Design

• Over the years, modularity has become an accepted approach in all engineering disciplines.
• A modular design reduces complexity, facilitates change, and results in easier implementation by
encouraging parallel development of different parts of a system.
• Functional Independence:
• The concept of functional independence is a direct outgrowth of modularity and the concepts of
abstraction and information hiding.
• Functional independence is achieved by developing modules with "single-minded" function and an
"aversion" to excessive interaction with other modules.
• In this, each module addresses a specific sub-function of requirements and has a simple interface when
viewed from other parts of the program structure.
Effective Modular Design (Cont…)

• Functional Independence:
• Software with effective modularity i.e. independent modules, is easier to develop because function may
be compartmentalized and interfaces are simplified.
• Independent modules are easier to maintain (and test) because secondary effects caused by design or
code modification are limited, error propagation is reduced, and reusable modules are possible.
• To summarize, functional independence is a key to good design, and design is the key to software quality.
• Independence is measured using two qualitative criteria: cohesion and coupling.
• Cohesion is a measure of the relative functional strength of a module.
• Coupling is a measure of the relative interdependence among modules.
Effective Modular Design (Cont…)

• Cohesion:
• A cohesive module performs a single task within a software procedure, requiring little interaction with
procedures being performed in other parts of a program.
• A cohesive module should (ideally) do just one thing.
• Cohesion is represented as a "spectrum." The scale for cohesion is nonlinear. That is, low-end cohesiveness
is much "worse" than middle range, which is nearly as "good" as high-end cohesion.
• A designer need not be concerned with categorizing cohesion in a specific module. Rather, the overall
concept should be understood and low levels of cohesion should be avoided when modules are designed.
• At the low (undesirable) end of the spectrum, we encounter a module that performs a set of tasks that
relate to each other loosely. Such modules are termed coincidentally cohesive.
Effective Modular Design (Cont…)

• Cohesion:
• A module that performs tasks that are related logically (e.g., a module that produces all output regardless
of type) is logically cohesive.
• When a module contains tasks that are related by the fact that all must be executed with the same span of
time, the module exhibits temporal cohesion.
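The contrast between low and high cohesion can be sketched in code. This is an illustrative example with hypothetical function names, not from the source:

```python
import datetime

# Coincidentally cohesive (undesirable): unrelated tasks lumped together.
def misc(x):
    stamp = datetime.date.today()      # date handling...
    return x * 1.08, stamp             # ...and an unrelated tax computation

# Functionally cohesive (desirable): each module does just one thing.
def apply_tax(amount, rate=0.08):
    return amount * (1 + rate)

def today():
    return datetime.date.today()
```

Splitting `misc` into `apply_tax` and `today` gives each module a single, "single-minded" function that is easier to test and reuse.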
Effective Modular Design (Cont…)

• Coupling:
• Coupling is a measure of interconnection among modules in a software structure.
• Coupling depends on the interface complexity between modules, the point at which entry or reference is made to a module, and what data pass across the interface.

Fig: Types of Coupling


Effective Modular Design (Cont…)
• Coupling:
• Figure above provides examples of different types of module coupling.
• Modules a and d are subordinate to different modules. Each is unrelated and therefore no direct coupling
occurs.
• Module c is subordinate to module a and is accessed via a conventional argument list, through which data
are passed.
• As long as a simple argument list is present, low coupling (called data coupling) is exhibited in this portion
of structure.
• A variation of data coupling, called stamp coupling, is found when a portion of a data structure (rather
than simple arguments) is passed via a module interface. This occurs between modules b and a.
Effective Modular Design (Cont…)
• Coupling:
• At moderate levels, coupling is characterized by passage of control between modules.
• Control coupling is very common in most software designs and is shown in Figure above where a “control
flag” (a variable that controls decisions in a subordinate or superordinate module) is passed between
modules d and e.
• Relatively high levels of coupling occur when modules are tied to an environment external to software. For
example, I/O couples a module to specific devices, formats, and communication protocols.
• External coupling is essential, but should be limited to a small number of modules within a structure. High
coupling also occurs when a number of modules reference a global data area. Common coupling, as this
mode is called, is shown in Figure above. Modules c, g, and k each access a data item in a global data area.
• The highest degree of coupling, content coupling, occurs when one module makes use of data or control
information maintained within the boundary of another module. Secondarily, content coupling occurs
when branches are made into the middle of a module. This mode of coupling can and should be avoided.
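Three of these coupling levels can be sketched in code. The functions below are hypothetical illustrations of the interface styles the text describes:

```python
# Data coupling (low): only simple arguments cross the interface.
def area(width, height):
    return width * height

# Stamp coupling: a whole data structure is passed across the interface,
# though only part of it is used.
def label(order):               # 'order' carries many fields
    return f"Order #{order['id']}"

# Control coupling (moderate): a flag passed in controls decisions
# inside the called module.
def report(data, as_csv):       # 'as_csv' is a control flag
    return ",".join(data) if as_csv else "\n".join(data)
```

Reducing `label` to take only the fields it needs, and splitting `report` into two flag-free functions, would move both toward data coupling.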
Effective Modular Design (Cont…)

• Architectural Design:
• The software architecture of a program or computing system is the structure or structures of the system,
which comprise software components, the externally visible properties of those components, and the
relationships among them.
• The architecture is not the operational software. Rather, it is a representation that enables a software
engineer to
• (1) analyze the effectiveness of the design in meeting its stated requirements
• (2) consider architectural alternatives at a stage when making design changes is still relatively easy
• (3) reduce the risks associated with the construction of the software.
Effective Modular Design (Cont…)
• Architectural Design:
• Data centered architectures:
• A data store will reside at the center of this architecture and is accessed frequently by the other components
that update, add, delete or modify the data present within the store.
• The figure illustrates a typical data-centered style. The client software accesses a central repository. In a variation of this approach, the repository is transformed into a "blackboard" that sends notifications to client software when data of interest to a client changes.
• This data-centered architecture promotes integrability: existing components can be changed and new client components can be added to the architecture without affecting other clients.
• Data can be passed among clients using the blackboard mechanism.
Effective Modular Design (Cont…)
• Architectural Design:
• Data flow architectures:
• This kind of architecture is used when input data are to be transformed into output data through a series of computational or manipulative components.
• The figure represents a pipe-and-filter architecture: it has a set of components, called filters, connected by pipes.
• Pipes are used to transmit data from one component to the next.
• Each filter will work independently and is designed to take data input of a certain form and produces data output
to the next filter of a specified form. The filters don’t require any knowledge of the working of neighboring filters.
• If the data flow degenerates into a single line of transforms, then it is termed as batch sequential. This structure
accepts the batch of data and then applies a series of sequential components to transform it.
Effective Modular Design (Cont…)

• Architectural Design:
Call and Return architectures: It is used to create a program that is easy to scale and modify. Many sub-styles
exist within this category. Two of them are explained below.

● Remote procedure call architecture: The components of a main program/subprogram architecture are distributed among multiple computers on a network.
● Main program or subprogram architectures: This structure decomposes function into a control hierarchy in which a main program invokes a number of subprograms, which may in turn invoke other components.
Effective Modular Design (Cont…)
• Architectural Design:
● Object Oriented architecture: The components of a system encapsulate data and the operations that must be
applied to manipulate the data. The coordination and communication between the components are established
via the message passing.
● Layered architecture:
a. A number of different layers are defined, with each layer performing a well-defined set of operations. Moving inward, the operations of each layer become progressively closer to the machine instruction set.
b. At the outer layer, components service user interface operations; at the inner layer, components perform operating system interfacing (communication and coordination with the OS).
c. Intermediate layers provide utility services and application software functions.
Effective Modular Design (Cont…)
• Architectural Design:
• Layered pattern

• Client-server pattern

• Master-slave pattern

• Pipe-filter pattern

• Broker pattern

• Peer-to-peer pattern

• Event-bus pattern

• Model-view-controller pattern

• Blackboard pattern

• Interpreter pattern
Effective Modular Design (Cont…)
• Architectural Design:
• Layered pattern:
• This pattern can be used to structure programs that can be decomposed into groups of subtasks, each of which is
at a particular level of abstraction. Each layer provides services to the next higher layer.
• Client-server pattern:
• This pattern consists of two parties; a server and multiple clients. The server component will provide services to
multiple client components. Clients request services from the server and the server provides relevant services to
those clients. Furthermore, the server continues to listen to client requests.
• Master-slave pattern:
• This pattern consists of two parties; master and slaves. The master component distributes the work among
identical slave components, and computes a final result from the results which the slaves return.
• Pipe-filter pattern:
• This pattern can be used to structure systems which produce and process a stream of data. Each processing step
is enclosed within a filter component. Data to be processed is passed through pipes. These pipes can be used for
buffering or for synchronization purposes.
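A minimal pipe-and-filter sketch, assuming Python generators stand in for pipes and each generator function is one filter (the filter names here are hypothetical):

```python
# Each filter works independently, transforming a stream of data;
# function composition plays the role of the pipes.
def strip_blanks(lines):
    return (ln for ln in lines if ln.strip())

def upper(lines):
    return (ln.upper() for ln in lines)

def number(lines):
    return (f"{i}: {ln}" for i, ln in enumerate(lines, 1))

raw = ["hello", "", "world"]
pipeline = number(upper(strip_blanks(raw)))
print(list(pipeline))   # -> ['1: HELLO', '2: WORLD']
```

No filter needs any knowledge of the workings of its neighbors; each only agrees on the form of the data flowing through the pipe.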
Effective Modular Design (Cont…)
• Architectural Design:
• Broker pattern:
• This pattern is used to structure distributed systems with decoupled components. These components can interact
with each other by remote service invocations. A broker component is responsible for the coordination of
communication among components. Servers publish their capabilities (services and characteristics) to a broker.
Clients request a service from the broker, and the broker then redirects the client to a suitable service from its
registry.
• Peer-to-peer pattern:
• In this pattern, individual components are known as peers. Peers may function both as a client, requesting
services from other peers, and as a server, providing services to other peers. A peer may act as a client or as a
server or as both, and it can change its role dynamically with time.
• Event-bus pattern:
• This pattern primarily deals with events and has 4 major components; event source, event listener, channel and
event bus. Sources publish messages to particular channels on an event bus. Listeners subscribe to particular
channels. Listeners are notified of messages that are published to a channel to which they have subscribed
before.
Effective Modular Design (Cont…)

• Architectural Design:
• Model-view-controller pattern:
• This pattern, also known as the MVC pattern, divides an interactive application into 3 parts:
1. model — contains the core functionality and data
2. view — displays the information to the user (more than one view may be defined)
3. controller — handles the input from the user
• This is done to separate internal representations of information from the ways information is presented to, and
accepted from, the user. It decouples components and allows efficient code reuse.
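The three MVC parts can be sketched with a toy counter application (the class names below are hypothetical illustrations):

```python
# model -- contains the core functionality and data
class CounterModel:
    def __init__(self):
        self.count = 0

# view -- displays the model's information to the user
class CounterView:
    def render(self, model):
        return f"count = {model.count}"

# controller -- handles input from the user and updates the model
class CounterController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def increment(self):
        self.model.count += 1
        return self.view.render(self.model)

app = CounterController(CounterModel(), CounterView())
print(app.increment())   # -> count = 1
```

Because the view only reads the model, a second view (say, a graphical gauge) could be added without touching the model or controller logic.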
• Interpreter pattern:
• This pattern is used for designing a component that interprets programs written in a dedicated language. It mainly
specifies how to evaluate lines of programs, known as sentences or expressions written in a particular language. The
basic idea is to have a class for each symbol of the language.
Effective Modular Design (Cont…)

• Architectural Design:
• Blackboard pattern
• This pattern is useful for problems for which no deterministic solution strategies are known. The blackboard
pattern consists of 3 main components.
● blackboard — a structured global memory containing objects from the solution space
● knowledge source — specialized modules with their own representation
● control component — selects, configures and executes modules.
● All the components have access to the blackboard. Components may produce new data objects that are added to the
blackboard. Components look for particular kinds of data on the blackboard, and may find these by pattern matching
with the existing knowledge source.
Content

• Unit testing, Integration testing,Validation testing, System testing


• Testing Techniques, white-box testing: Basis path, Control structure testing, black-box testing: Graph based,
Equivalence, Boundary Value
• Types of Software Maintenance, Re-Engineering, Reverse Engineering
Introduction
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
• Testing Principles:
• All tests should be traceable to customer requirements.
• Tests should be planned long before testing begins.
• The Pareto principle applies to software testing: Stated simply, the Pareto principle implies that 80 percent
of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The
problem, of course, is to isolate these suspect components and to thoroughly test them.
• Testing should begin “in the small” and progress toward testing “in the large.”
• Exhaustive testing is not possible.
Introduction
• To be most effective, testing should be conducted by an independent third party.

Test Case Template:


Introduction
Test Case Example:
White Box Testing

• White-box testing, sometimes called glass-box testing, is a test case design method
that uses the control structure of the procedural design to derive test cases.
• Using white-box testing methods, the software engineer can derive test cases that
• (1) guarantee that all independent paths within a module have been exercised at
least once
• (2) exercise all logical decisions on their true and false sides
• (3) execute all loops at their boundaries and within their operational bounds
• (4) exercise internal data structures to ensure their validity.
White Box Testing (Cont…)
• Basic Path Method:
• The basis path method enables the test case designer to derive a logical complexity measure of a
procedural design and use this measure as a guide for defining a basis set of execution paths.
• 1. Flow Graph Notation: Before the basis path method can be introduced, a simple notation for the
representation of control flow, called a flow graph (or program graph) must be introduced.
• The flow graph depicts logical control flow using the notation illustrated in Figure below. Each structured
construct has a corresponding flow graph symbol.
White Box Testing (Cont…)
• Basic Path Method:
• 1. Flow Graph Notation:
• Each circle, called a flow graph node, represents one or more procedural statements. A sequence of process boxes and a decision diamond can map into a single node.
• The arrows on the flow graph, called edges or links, represent flow of control and are analogous to
flowchart arrows.
• An edge must terminate at a node, even if the node does not represent any procedural statements.
• Areas bounded by edges and nodes are called regions. When counting regions, we include the area outside
the graph as a region.
• Each node that contains a condition is called a predicate node and is characterized by two or more edges
emanating from it.
White Box Testing (Cont…)
• Basic Path Method:
• 2. Cyclomatic Complexity:
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity
of a program.
• When used in the context of the basis path testing method, the value computed for cyclomatic complexity
defines the number of independent paths in the basis set of a program and provides us with an upper
bound for the number of tests that must be conducted to ensure that all statements have been executed at
least once.
• An independent path is any path through the program that introduces at least one new set of processing
statements or a new condition.
• Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software
metric.
White Box Testing (Cont…)
• Basic Path Method:
• 2. Cyclomatic Complexity:
• Complexity is computed in one of three ways:

• 1. The number of regions of the flow graph corresponds to the cyclomatic complexity.

• 2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2

where E is the number of flow graph edges, N is the number of flow graph nodes.

• 3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as V(G) = P + 1

where P is the number of predicate nodes contained in the flow graph


White Box Testing (Cont…)
• Basic Path Method:
• 2. Cyclomatic Complexity:
• Q. Compute the cyclomatic complexity for graph
besides.
• Solution:

• 1. The flow graph has four regions.

• 2. V(G) = 11 edges − 9 nodes + 2 = 4.

• 3. V(G) = 3 predicate nodes + 1 = 4.

• So, cyclomatic complexity for graph is 4.
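The formula V(G) = E − N + 2 is easy to compute from an edge list. The graph below is an illustrative one with 11 edges and 9 nodes (not necessarily the graph in the figure):

```python
# Cyclomatic complexity V(G) = E - N + 2, computed from an edge list.
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# An example flow graph with 11 edges over 9 nodes.
g = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 2), (3, 6),
     (6, 7), (7, 8), (8, 6), (7, 9), (9, 1)]
print(cyclomatic_complexity(g))   # -> 4
```

The result is the number of independent paths in the basis set, i.e. the upper bound on the number of tests needed to execute every statement at least once.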


White Box Testing (Cont…)
• Basic Path Method:
• 3. Deriving Test Cases:
• Using the design or code as a foundation, draw a corresponding flow graph.
• Determine the cyclomatic complexity of the resultant flow graph.
• Determine a basis set of linearly independent paths.
• Prepare test cases that will force execution of each path in the basis set.
• 4. Graph Matrices:
• A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of
nodes on the flow graph.
• Each row and column corresponds to an identified node, and matrix entries correspond to connections (an edge) between nodes.
White Box Testing (Cont…)
• Basic Path Method:
• 4. Graph Matrices:
• Each node on the flow graph is identified by numbers, while each edge is identified by letters. A letter entry
is made in the matrix to correspond to a connection between two nodes.
• The graph matrix is nothing more than a tabular representation of a flow graph.
• By adding a link weight to each matrix entry, the graph matrix can become a powerful tool for evaluating
program control structure during testing. The link weight provides additional information about control
flow.
White Box Testing (Cont…)
• Control Structure Testing:
• Condition Testing:
• Condition testing is a test case design method that exercises the logical conditions contained in a program
module.
• A simple condition is a Boolean variable or a relational expression, possibly preceded with one NOT (¬)
operator.
• A relational expression takes the form

E1 <relational-operator> E2

• where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following: <, ≤, =, ≠ (nonequality), >, or ≥. A
compound condition is composed of two or more simple conditions, Boolean operators, and parentheses.
White Box Testing (Cont…)
• Control Structure Testing:
• Data Flow Testing:
• The data flow testing method selects test paths of a program according to the locations of definitions and
uses of variables in the program.
• To illustrate the data flow testing approach, assume that each statement in a program is assigned a unique
statement number and that each function does not modify its parameters or global variables. For a
statement with S as its statement number,
• DEF(S) = {X | statement S contains a definition of X}

• USE(S) = {X | statement S contains a use of X}


• If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of
statement S. The definition of variable X at statement S is said to be live at statement S' if there exists a path
from statement S to statement S' that contains no other definition of X.
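The DEF/USE sets can be made concrete with a small numbered program. The statements and sets below are an illustrative example, not from the source:

```python
# Program fragment (statement numbers S1..S3):
#   S1: x = 10          DEF(S1) = {x},  USE(S1) = {}
#   S2: if x > 5:       DEF(S2) = {},   USE(S2) = {x}   (DEF empty: 'if')
#   S3:     y = x + 1   DEF(S3) = {y},  USE(S3) = {x}
#
# The definition of x at S1 is live at S3, because the path
# S1 -> S2 -> S3 contains no other definition of x.
defs = {"S1": {"x"}, "S2": set(), "S3": {"y"}}
uses = {"S1": set(), "S2": {"x"}, "S3": {"x"}}
```

A data flow testing strategy would then require test paths that cover each such definition-use pair at least once.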
White Box Testing (Cont…)
• Control Structure Testing:
• Loop Testing:
• Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Four
different classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured
loops.
• Simple loops: The following set of tests can be applied to simple loops, where n is the maximum number of
allowable passes through the loop.
• 1. Skip the loop entirely.
• 2. Only one pass through the loop.
• 3. Two passes through the loop.
• 4. m passes through the loop where m < n.
• 5. n-1, n, n + 1 passes through the loop.
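The five simple-loop tests can be sketched against a toy component. `sum_first` is a hypothetical function whose loop makes at most `n` passes:

```python
# Component under test: a simple loop with at most n passes.
def sum_first(values, n):
    total = 0
    for i, v in enumerate(values):
        if i >= n:
            break
        total += v
    return total

data, n = [1, 2, 3, 4, 5], 5
assert sum_first(data, 0) == 0           # 1. skip the loop entirely
assert sum_first(data, 1) == 1           # 2. only one pass through the loop
assert sum_first(data, 2) == 3           # 3. two passes through the loop
assert sum_first(data, 3) == 6           # 4. m passes, where m < n
for k in (n - 1, n, n + 1):              # 5. n-1, n, and n+1 passes
    assert sum_first(data, k) == sum(data[:k])
```

The n+1 case probes the boundary just beyond the maximum number of allowable passes, a common source of off-by-one errors.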
White Box Testing (Cont…)
• Control Structure Testing:
• Loop Testing:
• Nested loops: If we were to extend the test approach for simple loops to nested loops, the number of
possible tests would grow geometrically as the level of nesting increases. This would result in an impractical
number of tests.
• To reduce the number of tests:
• 1. Start at the innermost loop. Set all other loops to minimum values.
• 2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum
iteration parameter (e.g., loop counter) values. Add other tests for out-of-range or excluded values.
• 3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values
and other nested loops to "typical" values.
• 4. Continue until all loops have been tested.
White Box Testing (Cont…)

• Control Structure Testing:


• Loop Testing:
• Concatenated loops: Concatenated loops can be tested using the approach defined for simple loops, if each
of the loops is independent of the other. However, if two loops are concatenated and the loop counter for
loop 1 is used as the initial value for loop 2, then the loops are not independent. When the loops are not
independent, the approach applied to nested loops is recommended.
• Unstructured loops: Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.
Black Box Testing
• Black-box testing, also called behavioral testing, focuses on the functional requirements of the
software.
• Black-box testing enables the software engineer to derive sets of input conditions that will fully
exercise all functional requirements for a program. Black-box testing is not an alternative to white-
box techniques.
• Black-box testing attempts to find errors in the following categories:
• (1) incorrect or missing functions
• (2) interface errors
• (3) errors in data structures or external database access
• (4) behavior or performance errors
• (5) initialization and termination errors
Black Box Testing (Cont…)
• Graph-Based Testing:
• Software testing using black box approach begins by creating a graph of important objects and
their relationships and then devising a series of tests that will cover the graph so that each object
and relationship is exercised and errors are uncovered.
• To accomplish these steps, the software engineer begins by creating a graph—a collection of
nodes that represent objects; links that represent the relationships between objects; node
weights that describe the properties of a node (e.g., a specific data value or state behavior); and
link weights that describe some characteristic of a link.
Black Box Testing (Cont…)
• Equivalence Partitioning:
• Equivalence partitioning is a black-box testing method that divides the input domain of a program
into classes of data from which test cases can be derived.
• An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all
character data) that might otherwise require many cases to be executed before the general error
is observed.
• Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby
reducing the total number of test cases that must be developed.
• Test case design for equivalence partitioning is based on an evaluation of equivalence classes for
an input condition.
• Equivalence classes may be defined according to the following guidelines:
• 1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
Black Box Testing (Cont…)
• Equivalence Partitioning:
• 2. If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
• 3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined.
• 4. If an input condition is Boolean, one valid and one invalid class are defined.
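Guideline 1 can be sketched with a hypothetical input condition (an age field that must lie in the range 18 to 65):

```python
# Input condition: a range, 18 <= age <= 65.
# Guideline 1 gives one valid and two invalid equivalence classes.
def accepts(age):
    return 18 <= age <= 65

valid      = 30    # representative of the valid class (in range)
invalid_lo = 10    # representative of the invalid class below the range
invalid_hi = 70    # representative of the invalid class above the range

assert accepts(valid)
assert not accepts(invalid_lo)
assert not accepts(invalid_hi)
```

One test case per class suffices: any other member of a class should, by assumption, exercise the same processing and uncover the same class of errors.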
• Boundary Value Analysis:
• Boundary value analysis is a test case design technique that complements equivalence
partitioning.
• Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases
at the "edges" of the class.
Black Box Testing (Cont…)
• Boundary Value Analysis:
• Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
• 1. If an input condition specifies a range bounded by values a and b, test cases should be designed
with values a and b and just above and just below a and b.
• 2. If an input condition specifies a number of values, test cases should be developed that exercise
the minimum and maximum numbers. Values just above and below minimum and maximum are
also tested.
• 3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature vs.
pressure table is required as output from an engineering analysis program. Test cases should be
designed to create an output report that produces the maximum (and minimum) allowable
number of table entries.
• 4. If internal program data structures have prescribed boundaries (e.g., an array has a defined
limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
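Applying BVA guideline 1 to the same hypothetical range (a = 18, b = 65) selects test cases at and just around each boundary:

```python
# Range bounded by a = 18 and b = 65: test a and b themselves plus
# values just above and just below each boundary.
def accepts(age):
    return 18 <= age <= 65

cases = {
    17: False,   # just below a
    18: True,    # a
    19: True,    # just above a
    64: True,    # just below b
    65: True,    # b
    66: False,   # just above b
}
for age, expected in cases.items():
    assert accepts(age) == expected
```

Where equivalence partitioning picks any member of a class, BVA deliberately picks the "edges," where experience shows errors cluster.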
Testing Strategies:
• The software engineering process may be viewed as the spiral illustrated in the figure below. Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behavior, performance, constraints, and validation criteria for software are established.
• Moving inward along the spiral, we come to design and finally to coding.
• Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software
as implemented in source code.
● Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture.
Testing Strategies:
• Taking another turn outward on the spiral, we encounter validation testing, where requirements established
as part of software requirements analysis are validated against the software that has been constructed.

• Finally, at system testing, the software and other system elements are tested as a whole.
Unit Testing
• Unit testing focuses verification effort on the smallest unit of software design—the software component or
module.
• Unit testing is normally considered as an adjunct to the coding step.
• After source level code has been developed, reviewed, and verified for correspondence to component-level
design, unit test case design begins.

• A review of design information provides guidance for establishing test cases that are likely to uncover errors.
• Each test case should be coupled with a set of expected results.
• Because a component is not a stand-alone program, driver and/or stub software must be developed for each
unit test.
Unit Testing (Cont…)
• The unit test environment is illustrated in Figure besides.
• In most applications a driver is nothing more than a "main program"
that accepts test case data, passes such data to the component (to be
tested), and prints relevant results.
• Stubs serve to replace modules that are subordinate to (called by) the
component to be tested. A stub or "dummy subprogram" uses the
subordinate module's interface, may do minimal data manipulation,
prints verification of entry, and returns control to the module
undergoing testing.
• Drivers and stubs represent overhead. That is, both are software that
must be written but that is not delivered with the final software product.
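A minimal driver/stub sketch, assuming a component `convert` whose subordinate module (a rate-lookup function) is not yet available; all names here are hypothetical:

```python
# Component under test: depends on a subordinate rate-lookup module.
def convert(amount, fetch_rate):
    return amount * fetch_rate("USD")

# Stub ("dummy subprogram"): uses the subordinate module's interface,
# prints verification of entry, and returns a fixed value.
def stub_rate(currency):
    print(f"stub entered with {currency}")
    return 2.0

# Driver: a minimal "main program" that feeds test-case data to the
# component and prints relevant results.
if __name__ == "__main__":
    result = convert(10, stub_rate)
    print("result:", result)   # -> result: 20.0
```

Both the driver and the stub are overhead: they must be written, but neither ships with the delivered product.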
Unit Testing (Cont…)
• If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many components cannot be
adequately unit tested with "simple" overhead software.
• In such cases, complete testing can be postponed until the integration test step.
• Unit testing is simplified when a component with high cohesion is designed. When only one function is addressed by a
component, the number of test cases is reduced and errors can be more easily predicted and uncovered.

Advantages:

● It helps to write better code.
● It helps to catch bugs earlier.
● It helps to detect regression bugs.
● It makes code easy to refactor.
● It makes developers more efficient at writing code.

Disadvantages:

● It takes time to write test cases.
● It’s difficult to write tests for legacy code.
● Tests require a lot of time for maintenance.
● It can be challenging to test GUI code.
● Unit testing can’t catch all errors.
Integration Testing

• Integration testing is a systematic technique for constructing the program structure while at the same time
conducting tests to uncover errors associated with interfacing.
• The objective is to take unit tested components and build a program structure that has been dictated by
design.

You might also like