Software Engineering
UNIT I
1.0 INTRODUCTION
SOFTWARE:
ENGINEERING:
SOFTWARE ENGINEERING:
Methods
Process
Quality
1. Analysis
2. Design
3. Implementation
4. Testing
5. Maintenance
This view opposed the uniqueness and "magic" of programming, in an effort to move the
development of software from "magic" (which only a select few can do) to "art" (which the
talented can do) to "science" (which supposedly anyone can do!). There have been numerous
definitions given for software engineering.
People developing systems were consistently wrong in their estimates of time, effort, and
costs
Reliability and maintainability were difficult to achieve
Delivered systems frequently did not work
A 1979 study of a small number of government projects showed that:
2% worked
3% could work after some corrections
45% delivered but never successfully used
20% used but extensively reworked or abandoned
30% paid for but never delivered
Fixing bugs in delivered software produced more bugs
Increase in size of software systems
NASA
StarWars Defense Initiative
Social Security Administration
financial transaction systems
Changes in the ratio of hardware to software costs
early 60's - 80% hardware costs
middle 60's - 40-50% software costs
today - less than 20% hardware costs
Increasingly important role of maintenance
Fixing errors, modification, adding options
Cost is often twice that of developing the software
Advances in hardware (lower costs)
Advances in software techniques (e.g., user interaction)
Increased demands for software
Medicine, Manufacturing, Entertainment, Publishing
Demand for larger and more complex software systems
Airplanes (crashes), NASA (aborted space shuttle launches),
Paradigms refer to particular approaches or philosophies for designing, building and maintaining
software. Different paradigms each have their own advantages and disadvantages, which make
one more appropriate in a given situation than another.
A method (also referred to as a technique) is heavily dependent on a selected paradigm and may
be seen as a procedure for producing some result. Methods generally involve some formal
notation and process(es).
Thus, the following phases are heavily affected by selected software paradigms
Design
Implementation
Integration
Maintenance
The software development cycle involves the activities in the production of a software system.
Generally the software development cycle can be divided into the following phases:
Validation:
"Are we building the right product"
o The system is executed with test data and its operational behaviour is observed
V & V goals
Verification and validation should establish confidence that the software is fit for purpose
Rather, it must be good enough for its intended use and the type of use will determine the
degree of confidence that is needed
V & V planning
Careful planning is required to get the most out of testing and inspection processes
The plan should identify the balance between static verification and testing
Test planning is about defining standards for the testing process rather than describing
product tests
Software validation
o Evolutionary development
Specification, development and validation are interleaved.
o There are many variants of these models e.g. formal development where a
waterfall-like process is used but the specification is a formal specification that is
refined through several stages to an implementable design.
Waterfall model phases
o Exploratory development
o Throw-away prototyping
Objective is to understand the system requirements. Should start with
poorly understood requirements to clarify what is really needed.
Evolutionary development
o Problems
Lack of process visibility;
o Applicability
For small or medium-size interactive systems;
For parts of large systems (e.g. the user interface);
For short-lifetime systems.
Incremental development
Spiral development
RAD MODEL:
(Figure: RAD model – parallel teams each perform Business Modeling, Data Modeling, Process Modeling, and Application Generation within a 60-90 day cycle.)
PROTOTYPE MODEL:
Requirement collection
Quick Design
Prototype creation or modification
Assessment
Prototype refinement
(Figure: prototyping paradigm – the customer test-drives the mock-up.)
Re-engineering approaches
o Usually follows a ‘waterfall’ model because of the need for parallel development
of different parts of the system
Little scope for iteration between phases because hardware changes are
very expensive. Software may have to compensate for hardware problems
o Inevitably involves engineers from different disciplines who must work together
Much scope for misunderstanding here. Different disciplines use a
different vocabulary and much negotiation is required. Engineers may
have personal agendas to fulfil
A set or arrangement of elements that are organized to accomplish some predefined goal
by processing information.
The goal may be to support some business function or to develop a product that can be
sold to generate business revenue. To accomplish the goal, a computer-based system
makes use of a variety of system elements:
Software. Computer programs, data structures, and related documentation that serve to
effect the logical method, procedure, or control that is required.
Hardware. Electronic devices that provide computing capability, interconnectivity devices that
enable the flow of data, and electromechanical devices (e.g., sensors, motors, pumps) that
provide external world function.
Procedures. The steps that define the specific use of each system element
or the procedural context in which the system resides.
Product Engineering
Product engineering is a crucial term in the sphere of software development. It is through product
engineering that the future of a product is decided. The purpose of software Product Engineering
is to consistently and innovatively perform a well-defined engineering process that integrates all
software engineering activities to effectively and efficiently develop correct, consistent software
products. Software Engineering tasks include analyzing the system requirements allocated to
software, developing software architecture, designing the software, implementing the software in
the code, integrating software components, and testing the software to verify that it satisfies the
specified requirements.
UNIT II
SOFTWARE REQUIREMENTS
Requirement engineering provides the appropriate mechanism for understanding what the
customer wants, analyzing need, assessing feasibility, negotiating a reasonable solution,
specifying the solution unambiguously, validating the specification, and managing the
requirements as they are transformed into an operational system. The requirement engineering
process can be described in six distinct steps:
requirement elicitation
requirement analysis and negotiation
requirement specification
system modeling
requirement validation
requirement management
* The close-ended approach is also called throw-away prototyping. Using this, a prototype serves as
a rough demonstration of requirements.
* The open-ended approach, called evolutionary prototyping, uses the prototype as the first part
of an analysis activity that will be continued into design and construction.
* Before a close-ended or open-ended approach can be chosen, it is necessary to determine
whether the system to be built is amenable to prototyping.
* Prototyping factors include application area, application complexity, customer characteristics and
project characteristics.
* In common with other types of s/w development, the prototyping process follows a defined s/w
process model.
* This model indicates the processes and tasks which have to be performed during development
of the prototype.
* The process model devised for this particular approach comprises the following stages:
1. Requirements analysis: This involves the developer understanding the content and nature of
the customer's initial requirements.
2. Prototype design: Here the developer chooses a suitable implementation approach with
which to develop the prototype. A design is then derived for the prototype based upon the results
of the analysis phase.
Rapid prototyping:
* Each succeeding version of the prototype is produced based upon an analysis of the customer’s
reaction to the demonstration of the previous version.
* Delivered products are derived from the prototypes that are accepted by the customers via an
optimization process.
* Maintenance activities are sparked by new customer requirements, which restart the
prototyping process and extend the series of prototypes until a new stable point is reached.
* The spiral model is well known because it combines the waterfall model, incremental
development and process modelling into an attractive notation, even with less
specific variation of the process.
* Some advantages of this approach are that prototypes of different aspects of the system can be
developed concurrently and independently, that each fragment is relatively small, simple and
easy to change, and that different tools and environments can be used for different aspects. The
last property can be important in the short term, if tools are available for solving different parts
of the problem but have not been integrated together into a comprehensive
prototyping environment.
o Functional requirements
• Statements of services the system should provide, how the system should
react to particular inputs and how the system should behave in particular
situations.
o Non-functional requirements
o Domain requirements
• Requirements that come from the application domain of the system and that
reflect characteristics of that domain
Functional requirements
The user shall be able to search either all of the initial set of databases or select a
subset from it.
The system shall provide appropriate viewers for the user to read documents in the
document store.
Every order shall be allocated a unique identifier (ORDER_ID) which the user shall
be able to copy to the account’s permanent storage area.
Non-functional requirements
Define system properties and constraints e.g. reliability, response time and storage
requirements. Constraints are I/O device capability, system representations, etc.
Non-functional classifications
Product requirements
• Requirements which specify that the delivered product must behave in a
particular way e.g. execution speed, reliability, etc.
Organisational requirements
• Requirements which are a consequence of organisational policies and
procedures e.g. process standards used, implementation requirements, etc.
External requirements
• Requirements which arise from factors which are external to the system and
its development process e.g. interoperability requirements, legislative
requirements, etc.
Product requirement
• 4.C.8 It shall be possible for all necessary communication between the APSE
and the user to be expressed in the standard Ada character set
Organisational requirement
• 9.3.2 The system development process and deliverable documents shall
conform to the process and deliverables defined in XYZCo-SP-STAN-95
External requirement
• 7.6.5 The system shall not disclose any personal information about
customers apart from their name and reference number to the operators of
the system
The requirements document is the official statement of what is required of the
system developers
Should include both a definition and a specification of requirements
It is NOT a design document. As far as possible, it should set out WHAT the system
should do rather than HOW it should do it
Document requirements
Introduction
Glossary
User requirements definition
System architecture
System requirements specification
System models
System evolution
Appendices
Index
Common Factors
Technical feasibility: An assessment of whether the company has the capability, in terms of
software, hardware, personnel and expertise, to handle the completion of the project.
Economic feasibility
Economic analysis is the most frequently used method for evaluating the effectiveness of
a new system. More commonly known as cost/benefit analysis, the procedure is to determine the
benefits and savings that are expected from a candidate system and compare them with costs. If
benefits outweigh costs, then the decision is made to design and implement the system. An
entrepreneur must accurately weigh the cost versus benefits before taking an action.
Cost Based Study: It is important to identify cost and benefit factors, which can be
categorized as follows: 1. Development costs; and 2. Operating costs. This is an analysis of the
costs to be incurred in the system and the benefits derivable out of the system.
Time Based Study: This is an analysis of the time required to achieve a return on
investment from the benefits derived from the system. The future value of a project is also a factor.
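To make the time-based study concrete, here is a minimal payback-period sketch in Python; all cost and benefit figures are hypothetical illustrations, not data from these notes.

development_cost = 120_000      # one-time development cost (hypothetical)
annual_operating_cost = 10_000  # recurring operating cost per year (hypothetical)
annual_benefit = 50_000         # yearly savings/revenue from the system (hypothetical)

# Payback period: years until cumulative net benefits repay the development cost.
net_annual_benefit = annual_benefit - annual_operating_cost
payback_years = development_cost / net_annual_benefit
print(f"Payback period: {payback_years:.1f} years")  # -> 3.0 years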
Legal feasibility
Determines whether the proposed system conflicts with legal requirements, e.g. a data
processing system must comply with the local Data Protection Acts.
Operational feasibility
Is a measure of how well a proposed system solves the problems, and takes advantage of
the opportunities identified during scope definition and how it satisfies the requirements
identified in the requirements analysis phase of system development.[1]
Schedule feasibility
A project will fail if it takes too long to be completed before it is useful. Typically this
means estimating how long the system will take to develop, and if it can be completed in a given
time period using some methods like payback period. Schedule feasibility is a measure of how
reasonable the project timetable is. Given our technical expertise, are the project deadlines
reasonable? Some projects are initiated with specific deadlines. You need to determine whether
the deadlines are mandatory or desirable.
Evolutionary prototyping
• An approach to system development where an initial prototype is produced
and refined through a number of stages to the final system
Throw-away prototyping
• A prototype which is usually a practical implementation of the system is
produced to help discover requirements problems and then discarded. The
system is then developed using some other development process
Approaches to prototyping
Evolutionary prototyping
Evolutionary prototyping
Evolutionary prototyping
The system is developed as a series of increments that are delivered to the customer
Techniques for rapid system development are used such as CASE tools and 4GLs
Throw-away prototyping
• Database programming
These are not exclusive techniques - they are often used together
Visual programming is an inherent part of most prototype development systems
DATA MODEL
Data flow diagrams are used to model the system’s data processing
Tracking and documenting how the data associated with a process move through the system
helps to develop an overall understanding of the system
Data flow diagrams may also be used in showing the data exchange between a
system and other systems in its environment
(Figure: example data flow diagram for a student loan balance enquiry, showing a scheduler, an enquiry reply, and a daily lister.)
FUNCTIONAL MODEL
The Functional Flow Block Diagram (FFBD) is a multi-tier, time-sequenced, step-by-step flow
diagram of the system's functional flow. The notation was developed in the 1950s and is widely used
in classical systems engineering. The Functional Flow Block Diagram is also referred to as
Functional Flow Diagram, functional block diagram, and functional flow.
Functional Flow Block Diagrams (FFBD) usually define the detailed, step-by-step operational
and support sequences for systems, but they are also used effectively to define processes in
developing and producing systems. The software development processes also use FFBDs
extensively. In the system context, the functional flow steps may include combinations of
hardware, software, personnel, facilities, and/or procedures. In the FFBD method, the functions
are organized and depicted by their logical order of execution. Each function is shown with
respect to its logical relationship to the execution and completion of other functions. A node
labeled with the function name depicts each function. Arrows from left to right show the order of
execution of the functions. Logic symbols represent sequential or parallel execution of functions.
Structured Analysis and Design Technique (SADT) is a software engineering methodology for
describing systems as a hierarchy of functions, a diagrammatic notation for constructing a sketch
for a software application. It offers building blocks to represent entities and activities, and a
variety of arrows to relate boxes. These boxes and arrows have an associated informal
semantics.[19] SADT can be used as a functional analysis tool of a given process, using
successive levels of detail. The SADT method allows one to define user needs for IT developments;
it is used in industrial information systems, and also to explain and present an activity's
manufacturing processes and procedures.
DATA DICTIONARIES
Data dictionaries are lists of all of the names used in the system models.
Descriptions of the entities, relationships and attributes are also included
Advantages
Support name management and avoid duplication
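As a small illustration (not part of the original notes), a data dictionary can be kept as a keyed structure so that every name used in the system models has exactly one authoritative definition; the entries below are hypothetical.

# Each name used in the system models maps to one authoritative definition.
data_dictionary = {
    "ORDER_ID": {
        "kind": "attribute",
        "description": "Unique identifier allocated to every order.",
        "used_in": ["Order entity", "Place-order process"],
    },
    "places": {
        "kind": "relationship",
        "description": "Associates a Customer with the Orders they create.",
    },
}

def define(name, entry):
    # Name management: refuse duplicates instead of silently overwriting.
    if name in data_dictionary:
        raise KeyError(f"duplicate name: {name}")
    data_dictionary[name] = entry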
UNIT III
Data design – created by transforming the analysis information model (data dictionary and ERD)
into the data structures required to implement the software.
Architectural design – defines the relationships among the major structural elements of the
software; it is derived from the system specification, the analysis model, and the subsystem
interactions defined in the analysis model (DFD).
Interface design – describes how the software elements communicate with each other, with other
systems, and with human users; the data flow and control flow diagrams provide much of the
necessary information.
Design Guidelines:
A design should:
Lead to interfaces that reduce the complexity of connections between modules and with the
external environment
Be derived using a repeatable method that is driven by information obtained during software
requirements analysis
Design Principles:
The design
Information hiding – information (data and procedure) contained within a module is inaccessible
to modules that have no need for such information.
Abstraction
Abstraction allows one to deal with concepts apart from the particular
instances of those concepts.
Functional abstraction:
-The number and type of parameters to a routine can be made dynamic, and this ability to use
the appropriate parameters during a particular invocation of the sub-program is functional abstraction.
Data Abstraction:
This involves specifying or creating a new data type or a data object by specifying
valid operations on the object.
-Other details, such as the representation and manipulation of the data, are not specified.
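A minimal sketch of data abstraction in Python, using a hypothetical Stack type: the new data object is defined by its valid operations, while its representation (a list here) stays hidden.

class Stack:
    """Abstract data type defined by its valid operations, not its representation."""

    def __init__(self):
        self._items = []  # representation detail, hidden from clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        return self._items[-1]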
Control Abstraction:
MODULARITY
Modularity derives from the architecture. Modularity is a logical partitioning of the software
design that allows complex software to be manageable for purpose of implementation and
maintenance. The logic of partitioning may be based on related functions, implementation
considerations, data links, or other criteria. Modularity does imply interface overhead related to
information exchange between modules and the execution of modules.
COHESION:
A coincidentally cohesive module performs a set of tasks that relate to each other only loosely, if at all.
A logically cohesive module performs tasks that are logically related to each other.
Method cohesion, like functional cohesion, means that a method should carry out only one function.
Coupling:
Types of coupling:
Common Coupling:
The objects access a global data space for both read and write operations on attributes.
Content Coupling:
It is the highest (least desirable) degree of coupling. It occurs when one object or module makes
use of data or control information maintained within the boundary of another object or module.
Control coupling:
One object or module passes control information, such as a flag, that directs the internal logic of another.
Stamp Coupling:
The connection involves passing an aggregate data structure to another object, which uses
only a portion of the components of the data structure.
It is found when a portion of a data structure is passed via a module or object interface.
Data Coupling:
The connection involves passing only the elementary data items that are needed; this is the
lowest and most desirable degree of coupling.
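A hedged sketch contrasting stamp coupling with data coupling; the Order record and the functions are hypothetical examples, not from the text.

from dataclasses import dataclass

@dataclass
class Order:                      # hypothetical aggregate data structure
    order_id: str
    items: list
    total: float

# Stamp coupling: the whole aggregate is passed although only one
# component (total) is actually used by the callee.
def print_total_stamp(order: Order) -> None:
    print(f"Total: {order.total}")

# Data coupling: only the elementary data item that is needed is passed.
def print_total_data(total: float) -> None:
    print(f"Total: {total}")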
(Figure: common coupling – modules B, C, and F all access a shared global data area.)
Concurrency:
Concurrent systems are those systems in which there are multiple independent
processes, which can be activated simultaneously.
On a multi-processor system, such tasks can be shared across the processors.
On a single processor system, concurrency can be achieved by the process of
interleaving.
The problems encountered in a concurrent system are deadlock, mutual exclusion
and process synchronization.
Verification:
"Are we building the product right"
Step 2: Review and refine data flow diagrams for the software.
Step 4: Identify the transaction center and the flow characteristics along each of the action paths.
Step 6: Factor and refine the transaction structure and the structure of each action path.
Step 7: Refine the first-iteration architecture using design heuristics for improved software
quality.
By isolating the details of certain activities within procedures, we obtain a program that is expressed
more clearly than if it had all activities included inline. Modularity in a software system is where
modules take the form of objects or units, each with an internal structure independent of other objects
or units. The reason for the popularity of the object-oriented approach is its modularity: when
modifying certain parts, it can be done with less effect on the rest of the program.
Any good design requires the entire software product to be split into several modules or smaller
units.
Advantages of modularization:
Meyer defines five criteria that enable us to evaluate a design method with respect to its ability to
define an effective modular system:
Modular Continuity: If small changes to the system requirements result in changes to individual
modules, rather than system wide changes, the impact of change-induced side effects will be
minimized.
Modular Protection: If an aberrant condition occurs within a module and its effects are
constrained within that module, the impact of error-induced side effects will be minimized.
Modularity Design:
A module can be defined in many ways. Generally, a module is a work allocation for a
programmer. Fortran and Ada each define modules in their own manner.
However, modularity is the concept of breaking the entire system into well-defined
manageable units with well-defined interfaces between these units.
A modular system follows the concept of abstraction also.
Modular programming enhances clarity in design and reduces complexity, and hence enables
ease of implementation, testing, documentation and maintenance.
Evaluate the first iteration of the program structure to reduce coupling and improve
cohesion.
Attempt to minimize structures with high fan-out; strive for fan-in as structure depth
increases.
Keep the scope of effect of a module within the scope of control for that module.
Evaluate module interfaces to reduce complexity, reduce redundancy, and improve
consistency.
Define modules whose function is predictable and not overly restrictive (e.g. a module
that only implements a single sub function).
Strive for controlled entry modules, avoid pathological connection (e.g. branches into the
middle of another module)
Software Architecture:
While refinement is about the level of detail, architecture is about structure. The architecture of
the procedural and data elements of a design represents a software solution for the real-world
problem defined by the requirements analysis. It is unlikely that there will be one obvious
candidate architecture.
Software systems have had architectures, and programmers have been responsible for the
interactions among the modules and the global properties of assemblage.
Effective software architecture and its explicit representation and design have become
dominant themes in software engineering.
The software architecture of a program or computing system is the structure or structures
of the system, which comprise software components, the externally visible properties of
those components and the relationships among them.
The architecture is not the operational software.
Rather, it is a representation that enables a software engineer to (1) analyze the
effectiveness of the design in meeting its stated requirements, (2) consider architectural
alternatives at a stage when making design changes is still relatively easy, and (3) reduce
the risks associated with the construction of the software.
Importance of Architecture.
The architecture highlights early design decisions that will have a profound impact on all
software engineering work that follows and, as important, on the ultimate success of the
system as an operational entity.
Architecture “constitutes a relatively small, intellectually graspable model of how the system
is structured and how its components work together”
The design process for identifying the sub-systems making up a system and the
framework for sub-system control and communication is architectural design
The output of this design process is a description of the software architecture
Architectural design
System structuring
Control modelling
o A model of the control relationships between the different parts of the system
is established
Modular decomposition
A sub-system is a system in its own right whose operation is independent of the services
provided by other sub-systems.
A module is a system component that provides services to other components but would
not normally be considered as a separate system
Architectural models
Dynamic process model that shows the process structure of the system
Architectural styles
An awareness of these styles can simplify the problem of defining system architectures
However, most large systems are heterogeneous and do not follow a single architectural
style
Architecture attributes
Performance
Security
Safety
Availability
Maintainability
System structuring
More specific models showing how sub-systems share data, are distributed and interface
with each other may also be developed
o Shared data is held in a central database or repository and may be accessed by all
sub-systems
o Each sub-system maintains its own database and passes data explicitly to other
sub-systems
When large amounts of data are to be shared, the repository model of sharing is most
commonly used
Advantages
Disadvantages
Client-server architecture
Distributed system model which shows how data and processing are distributed across a
range of components
Set of stand-alone servers which provide specific services such as printing, data
management, etc.
Client-server characteristics
Advantages
Disadvantages
o No central register of names and services - it may be hard to find out what servers
and services are available
There are many popular methodologies, such as top-down and structured design, and supporting
concepts such as inheritance, data hiding, abstraction, and modularity. Real-time systems require
these concepts too, but with a higher degree of precision and clarity.
A distributed system consists of multiple processors operating in parallel, with shared memory and/or
their own memory, communicating through a network. Real-time systems and distributed
systems are used in process control and other mission-critical applications, so they require many
more trade-offs and considerations than normal software systems.
In a real-time system, timing constraints must be met for the applications to be correct. A
computing system is real-time to the degree that time constraints must be met for the applications
to be correct.
This is a consequence of the system interacting with its physical environment. The environment
produces stimuli, which must be accepted by the real-time system within the time constraints.
For instance, in air traffic control system the environment consists of aircraft that must be
monitored.
The environment stimuli are received by the system through sensors such as radars. The
environment further requires control outputs, which must be produced within time constraints. In
the air traffic control example, signals to the aircraft and displays to the human operators have
time constraints that must be met. Time-constrained behaviour can obviously be critical not
just to mission success, but even to the safety of property and human life.
In distributed real-time systems, many of these time constraints are end-to-end and often require
the scheduling of different resources (e.g. processors on each node and the communication
facilities between them)
One of the things that makes real-time resource management so much more difficult than non-
real-time resource management is that the real-time performance requirements of acceptable
predictability of timeliness must be met along with other requirements such as synchronization
and resource utilization.
Other issues, like scheduling and safe recovery from loss of a network link or failure of a
processing node, have to be considered in the design of distributed systems.
A fine example of a requirements aiding tool for real-time systems is the SREM system. Features of SREM:
Provision of a simulation package for evaluation of system designs and choosing between design
alternatives.
System Elements:
Sensor control: Collects information from sensors. May buffer information collected in response
to sensor stimuli.
Data processor:
Carries out processing of collected information and computes the system response.
Actuator control:
Generates the control signals that cause the system's actuators to operate.
System design:
Design both the hardware and the software associated with the system. Partition
functions between hardware and software.
Design decisions should be made on the basis of non-functional system
requirements.
Hardware delivers better performance but entails potentially longer development and
less scope for change.
Monitoring and control systems:
Monitoring systems are systems which take action when some exceptional sensor value is
detected.
Control systems are systems which continuously control hardware actuators depending
on the value of associated sensors.
These systems obviously have a great deal in common and differ only in the way in
which system actuators are initiated.
Important class of real-time systems.
Control systems take sensor values and control hardware actuators.
Example:
A system is required to monitor sensors on doors and windows to detect the presence of intruders
in a building.
When a sensor indicates a break-in, the system switches on lights around that area and calls
police automatically.
The system should include provision for operation without mains power supply.
Sensors:
A system may include software, mechanical, electrical and electronic hardware and be
operated by people.
Emergent properties
Properties of the system as a whole rather than properties that can be derived from the
properties of components of a system
They can therefore only be assessed and measured once the components have been
integrated into a system
o This is a complex property which is not simply dependent on the system hardware
and software but also depends on the system operators and the environment where
it is used.
Functional properties
o These appear when all the parts of a system work together to achieve some
objective. For example, a bicycle has the functional property of being a
transportation device once it has been assembled from its components.
• Examples are reliability, performance, safety, and security. These relate to the
behaviour of the system in its operational environment. They are often critical for
computer-based systems as failure to achieve some minimal defined level in these
properties may make the system unusable.
Partition requirements
Identify sub-systems
o Identify a set of sub-systems which collectively can meet the system requirements
System users often judge a system by its interface rather than its functionality
Poor user interface design is the reason why so many software systems are never used
Most users of business systems interact with these systems through graphical interfaces
although, in some cases, legacy text-based interfaces are still used
GUI advantages
Fast, full-screen interaction is possible with immediate access to anywhere on the screen
User-centred design
The aim of this chapter is to sensitise software engineers to key issues underlying the
design rather than the implementation of user interfaces
User-centred design is an approach to UI design where the needs of the user are
paramount and where the user is involved in the design process
UI design principles
UI design must take account of the needs, experience and capabilities of the system users
Designers should be aware of people’s physical and mental limitations (e.g. limited short-
term memory) and should recognise that people make mistakes
UI design principles underlie interface designs although not all principles are applicable
to all designs
UI Design Principles
User familiarity – The interface should use terms and concepts which are drawn from the
experience of the people who will make most use of the system.
Consistency – The interface should be consistent in that, wherever possible, comparable
operations should be activated in the same way.
Design principles
User familiarity
Consistency
Minimal surprise
Recoverability
User guidance
o Some user guidance such as help systems, on-line manuals, etc. should be
supplied
User diversity
o Interaction facilities for different types of user should be supported. For example,
some users have seeing difficulties and so larger text should be available
User-system interaction
o How should information from the user be provided to the computer system?
o How should information from the computer system be presented to the user?
Interaction styles
Direct manipulation
Menu selection
Form fill-in
Command language
Natural language
Menu systems
Users make a selection from a list of possibilities presented to them by the system
The selection may be made by pointing and clicking with a mouse, using cursor keys or
by typing the name of the selection
Users need not remember command names as they are always presented with a list of
valid commands
Context-dependent help can be provided. The user’s context is indicated by the current
menu selection
Actions which involve logical conjunction (and) or disjunction (or) are awkward to
represent
Menu systems are best suited to presenting a small number of choices. If there are many
choices, some menu structuring facility must be used
Form-based interface
(Figure: a form-based interface for a NEW BOOK record, with fields for Title, ISBN, Author, Price, Publisher, Publication date, Edition, Number of copies, Classification, Loan status, Date of purchase, and Order status.)
Command interfaces
Users have to learn and remember a command language. Command interfaces are
therefore unsuitable for occasional users
Users make errors in commands. An error detection and recovery system is required
Command languages
Often preferred by experienced users because they allow for faster interaction with the
system
Natural language interfaces
The user types a command in a natural language. Generally, the vocabulary is limited and
these systems are confined to specific application domains (e.g. timetable enquiries)
NL processing technology is now good enough to make these interfaces effective for
casual users but experienced users find that they require too much typing
Information presentation
Static information
Dynamic information
How quickly do information values change? Must the change be indicated immediately?
Digital presentation
Analogue presentation
(Figure: error messages generated by the application pass through a message presentation system before reaching the user.)
Error messages
Error message design is critically important. Poor error messages can mean that a user
rejects rather than accepts a system
The background and experience of users should be the determining factor in message
design
Both of these requirements have to be taken into account in help system design
Help information
Multiple entry points should be provided so that the user can get into the help system
from different places.
Some indication of where the user is positioned in the help system is valuable.
Facilities should be provided to allow the user to navigate and traverse the help system.
User documentation
As well as manuals, other easy-to-use documentation such as a quick reference card may
be provided
Some evaluation of a user interface design should be carried out to assess its suitability
Full scale evaluation is very expensive and impractical for most systems
Difficult design problems are often assumed to be readily solved using software
Sub-system development
Bureaucratic and slow mechanism for proposing system changes means that the
development schedule may be extended because of the need for rework
System integration
The process of putting hardware, software and people together to make a system
Real-time executives are specialised operating systems which manage the processes in
the RTS
Responsible for process management and resource (processor and memory) allocation
May be based on a standard RTE kernel which is used unchanged or modified for a
particular application
Executive components
Real-time clock
Interrupt handler
Scheduler
Resource manager
Despatcher
Data collection processes and processing processes may have different periods and
deadlines.
Data collection may be faster than processing e.g. collecting information about an
explosion.
A system collects data from a set of sensors monitoring the neutron flux from a nuclear
reactor.
The ring buffer is itself implemented as a concurrent process so that the collection and
processing processes may be synchronized
A ring buffer
Mutual exclusion
Producer processes collect data and add it to the buffer. Consumer processes take data
from the buffer and make elements available
Producer and consumer processes must be mutually excluded from accessing the same
element.
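Below is a minimal sketch of such a ring buffer in Python, assuming threads rather than full processes; a lock and condition variables enforce the mutual exclusion described above. Names and sizes are illustrative, not from the text.

import threading

class RingBuffer:
    """Bounded circular buffer with mutual exclusion between
    producer (data collection) and consumer (data processing) threads."""

    def __init__(self, size):
        self._buf = [None] * size
        self._size = size
        self._head = 0        # next slot to read
        self._tail = 0        # next slot to write
        self._count = 0
        lock = threading.Lock()
        self._not_full = threading.Condition(lock)
        self._not_empty = threading.Condition(lock)

    def put(self, item):      # called by the producer process
        with self._not_full:
            while self._count == self._size:
                self._not_full.wait()          # buffer full: wait
            self._buf[self._tail] = item
            self._tail = (self._tail + 1) % self._size
            self._count += 1
            self._not_empty.notify()           # wake a waiting consumer

    def get(self):            # called by the consumer process
        with self._not_empty:
            while self._count == 0:
                self._not_empty.wait()         # buffer empty: wait
            item = self._buf[self._head]
            self._head = (self._head + 1) % self._size
            self._count -= 1
            self._not_full.notify()            # wake a waiting producer
            return item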
DATA DESIGN
DFD are directed graphs in which the nodes specify processing activities and the arcs specify
data items transmitted between processing nodes.
Data flow diagram (DFD) – provides an indication of how data are transformed as they move
through the system; it also depicts the functions that transform the data flow (a function is
represented in a DFD using a process specification or PSPEC).
Shows the relationships among external entities, processes (or transforms), data items and data
stores
DFDs cannot show procedural detail (e.g. conditionals or loops), only the flow of data
through the software.
Refinement from one DFD level to the next should follow approximately a 1:5 ratio (this
ratio will reduce as the refinement proceeds)
To model real-time systems, structured analysis notation must be available for time-
continuous data and event processing (e.g. Ward and Mellor or Hatley and Pirbhai)
Level 0 data flow diagram should depict the system as a single bubble.
Primary input and output should be carefully noted.
Refinement should begin by consolidating candidate processes, data objects, and data stores
to be represented at the next level.
Label all arrows with meaningful names
Information flow must be maintained from one level to the next
Refine one bubble at a time
Write a PSPEC (a “mini-spec” written using English or another natural language or a
program design language) for each bubble in the final DFD.
UNIT IV
TESTING
Testing involves exercising the program using data like the real data
processed by the program; defects are revealed by unexpected system outputs.
Testing may be carried out during the implementation phase to verify that
the software behaves as intended by its designer, and again after the implementation is
complete. This later phase checks conformance with the requirements.
Statistical testing may be used to test the program's performance and reliability;
defect testing is intended to find the areas where the program does not
conform to its specification
Testing Vs Debugging:
1. Testing starts with known conditions, uses predefined procedures, and has
predictable outcomes; only whether or not the program passes the test is
unpredictable.
2. Testing can and should be planned, designed and scheduled; the procedures for,
and duration of, debugging cannot be so constrained.
Testing Objectives:
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
Testing Principles:
1. All tests should be traceable to customer requirements. The most severe defects
are those that cause the program to fail to meet its requirements.
2. Tests should be planned long before testing begins. All tests can be planned and
designed before any code has been generated.
3. Testing should begin “in the small” and progress towards testing “in the large”.
The first tests planned and executed generally focus on the individual components. As
testing progresses, focus shifts in an attempt to find errors in integrated clusters of
components and ultimately in the entire system.
3. In a group of tests that have a similar intent, time and resource limitations mean that the
test that has the highest likelihood of uncovering a whole class of errors should be used.
4. A good test should be neither too simple nor too complex. Each test should be
executed separately.
(Figure: the stages of the testing process – unit, module, sub-system, system, and acceptance testing.)
1. unit testing
2. module testing
3. sub-system testing
4. system testing
5. acceptance testing
Here individual components are tested to ensure that they operate correctly. Each
component is tested separately.
This phase involves testing collections of modules which have been
integrated into sub-systems. Sub-systems may be independently designed and
implemented. The most common problems which arise in large s/w systems are sub-
system interface mismatches.
The sub-systems are integrated to make up the entire system. The testing process is
concerned with finding errors which result from unanticipated interactions between
sub-systems and system components. It is also concerned with validating that the
system meets its functional and non-functional requirements.
This is the final stage in the testing process before the system is accepted
for operational use. The system is tested with data supplied by the system procurer
rather than simulated test data. Acceptance testing may reveal errors and omissions
in the system requirements definition because the real data exercise the system in
different ways from the test data. Acceptance testing may also reveal requirements
problems where the systems facilities do not really meet the user’s needs or the
system performance is unacceptable.
Top-down testing tests the high levels of a system before testing its
detailed components. The program is represented as a single abstract component
with sub-components represented by stubs. Stubs have the same interface as the
component but very limited functionality.
After the top-level components have been tested, their sub-components
are implemented and tested in the same way. This process continues recursively until
the bottom-level components are implemented. The whole system may then be
completely tested.
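A small hedged illustration of a stub: it offers the same interface as the real component but only canned behaviour, so the higher-level component can be tested first. All names are hypothetical.

def read_sensor_stub(sensor_id):
    """Stub: same interface as the real sensor-access component,
    but returns a fixed canned value instead of real functionality."""
    return 20.0

def check_temperature(read_sensor):
    """Higher-level component under test; sensor access is injected."""
    return "ALARM" if read_sensor(1) > 100.0 else "OK"

# Top-down test of the high-level logic using the stub.
assert check_temperature(read_sensor_stub) == "OK"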
1. Unnoticed design errors may be detected at an early stage in the testing process.
As these errors are mainly structural errors, early detection means that they can be
corrected without undue costs
2. Test output may be difficult to observe. In many systems, the higher levels of the
system do not generate output but, to test these levels, they must be made to do so. The
tester must create an artificial environment to generate the test results.
When using bottom-up testing, test drivers must be written to exercise the
lower-level components. These test drivers simulate the components' environment and are
valuable components in their own right.
If the components being tested are reusable components, the test drivers
and test data should be distributed with the component.
Potential re-users can then run these tests to satisfy themselves that the
component behaves as expected in their environment.
4.4 Unit testing
It begins at the vortex of the spiral and concentrates on each unit of the s/w as
implemented in source code.
Finally system testing is conducted. In this the software and other system
elements are tested as a whole.
1. Unit Testing:
The module interface is tested to ensure that information properly flows into
and out of the program unit under test.
The local data structures are examined to ensure that data stored temporarily
maintains its integrity during all steps in an algorithm's execution.
Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
All independent paths through the control structure are exercised to ensure
that all statements in a module have been executed at least once.
Finally, all error handling paths are tested.
This testing is the re-execution of some subset of tests that have already
been conducted to ensure that changes have not propagated unintended side
effects.
Regression testing is the activity that helps to ensure that changes do not
introduce unintended behavior or additional errors.
The regression test suite contains three different classes of test cases: a representative
sample of tests that will exercise all software functions; additional tests that focus on
software functions that are likely to be affected by the change; and tests that focus on
the software components that have been changed.
4.6 Validation testing:
After each validation test case has been conducted, one of two possible
conditions exists:
Configuration review:
1. Recovery testing
2. Security testing
3. Stress testing
4. Performance testing
Recovery testing:
Many computer based systems must recover from faults and resume
processing within a prespecified time.
Recovery testing is a system test that forces the s/w to fail in a variety of
ways and verifies that recovery is properly performed.
Security testing:
During security testing, the tester plays the role of the individual who
desires to penetrate the system.
Stress testing:
Performance testing
For real time and embedded systems, software that provides required function but does not
conform to performance requirements is unacceptable. Performance testing is designed to test the run-
time performance of s/w within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process. Even at the unit level,
the performance of an individual module may be assessed.
Performance tests are often coupled with stress testing and usually require both hardware and
s/w instrumentation.
Debugging involves formulating a hypothesis about program behaviour and then testing this
hypothesis to find the system error
White box testing is also called glass box testing. It is a test case design method that uses the
control structure of the procedural design to derive test cases.
Focused testing: The programmer can test the program in pieces. It's much easier to give an
individual suspect module a thorough workout in glass box testing than in black box testing.
Testing coverage: The programmer can also find out which parts of the program are exercised
by any test. It is possible to find out which lines of code, which branches, or which paths haven’t
yet been tested.
Control flow: The programmer knows what the program is supposed to do next, as a function
of its current state.
Data integrity: The programmer knows which parts of the program modify any item of data,
and can track a data item through the system.
Internal boundaries: The programmer can see internal boundaries in the code that are
completely invisible to the outside tester.
Algorithmic specific: The programmer can apply standard numerical analysis techniques to
predict the results.
The basis path method enables the test case designer to derive a logical complexity measure of
a procedural design and use this measure as a guide for defining a basis set of execution paths.
Flow graph is a simple notation for the representation of control flow. Each structured construct
has a corresponding flow graph symbol.
Each node that contains a condition is called a predicate node and is characterized by two or
more edges emanating from it.
Cyclomatic complexity:
Cyclomatic complexity is a software metric that provide a quantitative measure of the logical
complexity of a program.
The value computed for cyclomatic complexity defines the number of independent paths in the
basis set of a program and provides us with an upper bound for the number of tests that must be
conducted to ensure that all statements have been executed at least once.
Cyclomatic complexity can be computed in three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. V(G) = P + 1, where P is the number of predicate nodes in the flow graph.
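A small sketch (hypothetical flow graph, not from the text) checking that the formulas agree for a loop whose body contains an if-else:

# Flow graph of a loop containing an if-else, given as directed edges.
edges = [(1, 2), (1, 6),   # node 1: loop predicate (enter body or exit)
         (2, 3), (2, 4),   # node 2: if predicate (then / else)
         (3, 5), (4, 5),   # branches rejoin at node 5
         (5, 1)]           # back edge to the loop test

nodes = {n for edge in edges for n in edge}
predicate_nodes = {1, 2}   # nodes with two or more outgoing edges

v_by_edges = len(edges) - len(nodes) + 2    # V(G) = E - N + 2
v_by_predicates = len(predicate_nodes) + 1  # V(G) = P + 1
assert v_by_edges == v_by_predicates == 3   # 3 independent paths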
The basis path testing method can be applied to procedural design or to source code.
1. Using the design or code as a foundation draw a corresponding flow graph. A flow graph is
created using the symbols and construction rules.
2. Determine the cyclomatic complexity of the resultant flow graph. V (G) is determined by
applying the above algorithms.
3. Determine a basis set of linearly independent paths. The value of V (G) provides the number of
linearly independent paths through the program control structure.
Condition testing is a test case design method that exercises the logical conditions contained in
a program module.
The condition testing method focuses on testing each condition in the program.
2. The test coverage of conditions in a program provides guidance for the generation of additional
tests for the program.
Branch testing: This is the simplest condition testing strategy. For a compound condition C,
the true and false branches of C and every simple condition in C need to be executed at
least once.
Domain testing: This requires three or four tests to be derived for a
relational expression.
BRO (branch and relational operator) testing: This technique guarantees the detection of
branch and relational operator errors in a condition provided that all Boolean variable and
relational operators in condition occur only once.
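A hedged illustration of branch testing for a hypothetical compound condition C = (a > 0) and (b < 10): the tests execute the true and false branches of C, and of each simple condition, at least once.

def accept(a, b):
    # Compound condition C: (a > 0) and (b < 10)
    return a > 0 and b < 10

assert accept(1, 5) is True    # C true: both simple conditions true
assert accept(-1, 5) is False  # C false via (a > 0) being false
assert accept(1, 20) is False  # C false via (b < 10) being false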
The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variable in the program.
If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the
condition of statement S.
A definition-use (DU) chain of variable X is of the form [X, S, S'], where S and S' are statement
numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'.
One simple data flow testing strategy is to require that every DU chain be covered at least once.
Data flow testing strategies are useful for selecting test paths for a program containing nested if
and loop statements.
Since the statements in a program are related to each other according to the definitions and
uses of variable the data flow testing approach is effective for error detection.
Problem:
Measuring test coverage and selecting test paths for data flow testing are more difficult.
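A small annotated fragment (with hypothetical statement numbers) showing DEF and USE sets and one DU chain for the variable x:

def scale(values):
    x = 0                # S1: DEF(S1) = {x}
    for v in values:     # S2: USE(S2) = {values}; DEF(S2) = {v}
        x = x + v        # S3: DEF(S3) = {x}; USE(S3) = {x, v}
    return x * 2         # S4: USE(S4) = {x}

# DU chain [x, S3, S4]: the definition of x at S3 is live at its use at
# S4, so a data-flow-adequate test suite must execute a path covering it.
assert scale([1, 2, 3]) == 12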
Loop testing is a white box testing technique that focuses exclusively on the validity of loop
constructs.
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
Simple loops:
Nested loops: The number of possible tests would grow geometrically as the level of nesting
increases.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum
value and other nested loops to typical values.
Concatenated loops:
Concatenated loops can be tested using the approach defined for simple loops, if each of the
loops is independent of the other. However, if two loops are concatenated and the loop counter for
loop 1 is used as the initial value for loop 2, the loops are not independent and the approach
applied to nested loops is recommended.
Unstructured loops:
Whenever possible, this class of loops should be redesigned to reflect the use of the structured
programming constructs.
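As an illustration of the simple-loop strategy (a sketch under the usual guideline of testing zero, one, two, a typical number, and close to the maximum number of passes; the function is hypothetical):

def sum_first(values, limit):
    """Hypothetical unit whose loop makes at most `limit` passes."""
    total = 0
    for i, v in enumerate(values):
        if i == limit:
            break
        total += v
    return total

data = list(range(1, 11))            # n = 10 loop passes at most
for passes in (0, 1, 2, 5, 9, 10):   # skip, 1, 2, typical, n-1, n passes
    assert sum_first(data, passes) == sum(data[:passes])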
Black box testing is also called behavioral testing. This focuses on the functional requirements
of the s/w. Black box testing enables the s/w engineer to derive sets of input conditions that will
fully exercise all functional requirements for a program.
Black box testing attempts to find errors in the following categories: 1. incorrect or missing
functions; 2. interface errors; 3. errors in data structures or external database access;
4. behaviour or performance errors; and 5. initialization and termination errors.
1. Equivalence partitioning
2. Boundary value analysis
3. Comparison testing
4. Orthogonal array testing
1. EQUIVALENCE PARTITIONING:
It is a black box testing method that divides the inputs domain of a program into classes of data
from which test cases can be derived.
Test case design for equivalence partitioning is based on an evaluation of equivalence classes for
an input condition.
The input data to a program usually fall into a number of different classes. These classes have
common characteristics, for example positive numbers, negative numbers, strings without blanks
and so on. Programs normally behave in a comparable way for all members of a class. Because of
this equivalent behavior, these classes are sometimes called equivalent partitions or domains.
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
Test cases for each input domain data item can be developed and executed by applying the
guidelines for the derivation of equivalence classes.
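For example, a sketch for a hypothetical input field that must be an integer in the range 1 to 100 (guideline 1: one valid and two invalid classes):

    # Hypothetical input condition: an integer field that must lie in 1..100.
    def accepts(x):
        return 1 <= x <= 100

    # One representative test case per equivalence class:
    cases = {
        "valid: 1 <= x <= 100": 50,
        "invalid: x < 1": -5,
        "invalid: x > 100": 250,
    }
    for label, value in cases.items():
        print(label, "->", accepts(value))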
When reliability of software is absolutely critical, redundant hardware and s/w are often used to
minimize the possibility of error. In such situations, each version can be tested with the same test data
to ensure that all provide identical output. Those independent versions form the basis of a black box
testing technique called comparison testing.
If the output from each version is the same, it is assumed that all implementations are
correct. If the output is different, each of the applications is investigated to determine if a defect in one
or more versions is responsible for the difference.
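A minimal sketch of comparison testing, assuming two independently developed implementations of the same sorting specification (both functions hypothetical):

    def sort_v1(xs):
        # Version produced by one team: delegates to the library sort.
        return sorted(xs)

    def sort_v2(xs):
        # Independently developed version: insertion sort.
        out = list(xs)
        for i in range(1, len(out)):
            j = i
            while j > 0 and out[j - 1] > out[j]:
                out[j - 1], out[j] = out[j], out[j - 1]
                j -= 1
        return out

    # Both versions are run on the same test data; any disagreement signals
    # a defect in at least one version (identical wrong answers escape).
    for case in [[3, 1, 2], [], [5, 5, 1], [-1, 0, 7]]:
        assert sort_v1(case) == sort_v2(case), case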
1. Comparison testing is not foolproof. If the specification from which all versions have been
developed is in error, all versions will likely reflect the error.
2. If each of the independent versions produces identical but incorrect results, comparison testing
will fail to detect the error.
Orthogonal array testing can be applied to problems in which the input domain is relatively small but
too large to accommodate exhaustive testing. The orthogonal array testing method is particularly
useful in finding errors associated with region faults, an error category associated with faulty logic
within a software component.
When orthogonal array testing is applied, an L9 orthogonal array of test cases can be created. The L9
orthogonal array has a balancing property: test cases are dispersed uniformly throughout the
test domain.
The orthogonal array testing approach enables us to provide good test coverage with far fewer test
cases than the exhaustive strategy.
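For illustration, the standard L9 array accommodates four parameters, each with three levels, in nine test cases (instead of the 3^4 = 81 needed for exhaustive coverage), with every pair of levels for any two parameters occurring together exactly once:

    # Standard L9 orthogonal array: 9 test cases, 4 parameters x 3 levels.
    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]

    # Map the abstract levels onto (hypothetical) concrete input values.
    level = {1: "low", 2: "nominal", 3: "high"}
    for row in L9:
        print(tuple(level[x] for x in row))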
A great number of errors tend to occur at the boundaries of the input domain rather than in its
center. Boundary value analysis (BVA) therefore selects test cases at the edges of each equivalence
class, and it derives test cases from the output domain as well as the input domain.
1. If an input condition specifies a range bounded by values a and b, test cases should be designed
with values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise
the minimum and maximum numbers.
3. Apply guidelines 1 and 2 to output conditions, e.g., design test cases that produce the minimum
and maximum output values.
4. If internal program data structures have prescribed boundaries, be certain to design a test case
to exercise the data structure at its boundary.
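A sketch of guideline 1 for a hypothetical range bounded by a = 1 and b = 100:

    # BVA for an input range [a, b]: test a, b, and values just
    # above and just below each bound.
    a, b = 1, 100
    bva_cases = [a - 1, a, a + 1, b - 1, b, b + 1]  # 0, 1, 2, 99, 100, 101

    def in_range(x):
        return a <= x <= b

    for x in bva_cases:
        print(x, "->", in_range(x))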
(Figure: a component testing workbench - tests derive test data, which is run against the component code to produce test outputs.)
Static measurement – tools that analyze source code without executing test cases.
Test management – tools that assist in the planning, development, and control of testing.
Cross functional tools – tools that cross the bounds of the preceding categories.
UNIT V
SOFTWARE PROJECT MANAGEMENT
5.1 SOFTWARE COST ESTIMATION:
*Effort costs
*Estimates are made to discover the cost, to the developer, of producing a software system.
*There is not a simple relationship between the development cost and the price charged to the
customer.
PROGRAMMER PRODUCTIVITY:
*A measure of the rate at which individual engineers involved in software development produce
software and associated documentation.
PRODUCTIVITY MEASURES:
*Size related measures based on some output from the software process. This may be lines of
delivered source code, object code instructions etc.,
* Function related measures based on an estimate of the functionality of the delivered software.
MEASUREMENT PROBLEMS:
LINES OF CODE:
The measure was first proposed when programs were typed on cards with one line per card.
*How does this correspond to statements, as in Java, which can span several lines or where there
can be several statements on one line?
PRODUCTIVITY COMPARISONS:
*The lower the level of the language, the more productive the programmer appears, because the
same functionality takes more code to implement in a lower-level language than in a high-level
language.
FUNCTION POINTS:
*External inputs and outputs
*User interactions
*External interfaces
*Files used by the system
*FPs can be used to estimate LOC depending on the average number of LOC per FP for a given
language.
*The basic COCOMO model is a static single-valued model that computes software development
effort as a function of program size expressed in estimated lines of code (a sketch of the
computation follows the list of modes below).
COCOMO 2 LEVELS:
*The COCOMO 2 model is a three-level model that allows increasingly detailed estimates to be
prepared as development progresses.
*At the first level, estimates are based on object points, and a simple formula is used for effort
estimation.
MODEL 3:
The advanced COCOMO model incorporates all characteristics of the intermediate version with
an assessment of the cost drivers' impact on each step of the software engineering process.
The COCOMO models are defined for the three classes of software projects: Using Boehm’s
terminology these are:
1. Organic mode: Relatively small, simple software projects in which small teams with good
application experience work to a set of less-than-rigid requirements.
2. Semi-detached mode: An intermediate s/w project in which teams with mixed experience
levels must meet a blend of rigid and less-than-rigid requirements.
3. Embedded mode: A s/w project that must be developed within a set of tight h/w, s/w, and
operational constraints.
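As a sketch of the basic model, effort is computed as E = a x (KLOC)^b person-months, with coefficients depending on the mode (the values below are the standard Boehm coefficients for basic COCOMO):

    # Basic COCOMO: effort E = a * (KLOC ** b) person-months.
    MODES = {
        "organic":       (2.4, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (3.6, 1.20),
    }

    def basic_cocomo_effort(kloc, mode):
        a, b = MODES[mode]
        return a * kloc ** b

    # e.g. a 32-KLOC organic-mode project:
    print(round(basic_cocomo_effort(32, "organic"), 1), "person-months")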
The intermediate model adds cost drivers, grouped into four categories:
(i) product attributes:
complexity of the product
(ii) hardware attributes
(iii) personnel attributes:
analyst capability
software engineer capability
application experience
virtual machine experience
(iv) project attributes:
Multipliers:
Post-architectural level:
Multipliers:
Project Planning:
Algorithmic cost models provide a basis for project planning as they allow
alternative strategies to be compared.
Example: an embedded spacecraft system
must be reliable
Cost components
Target hardware
Development platform
Effort required
As well as effort estimation, managers must estimate the calendar time required
to complete a project and when staff will be required.
Calendar time can be estimated using the COCOMO 2 formula:
TDEV = 3 x (PM)^(0.33 + 0.2 x (B - 1.01))
where PM is the effort computation and B is the exponent computed as discussed above.
The time required is independent of the number of people working on the project.
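A sketch of the schedule computation (the effort and exponent values below are purely illustrative):

    # Calendar time (months) from effort PM and scale exponent B,
    # using the COCOMO 2 schedule formula given above.
    def cocomo2_schedule(pm, b):
        return 3 * pm ** (0.33 + 0.2 * (b - 1.01))

    print(round(cocomo2_schedule(50, 1.17), 1), "months")  # illustrative values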
Staffing requirements:
Key Points:
Level of technology:
A better level of technology yields higher productivity and hence lower cost, because the
time taken to complete the project is less and fewer resources are used.
This technique was developed at the Rand Corporation in 1948 to gain expert consensus
without introducing the adverse side effects of group meetings.
The Delphi technique can be adopted to software cost estimation in the
following manner:
1. A coordinator provides each estimator with the system definition
document and a form for recording their estimate.
2. The estimators study the definition and complete their estimates anonymously. They
may ask questions of the coordinator but do not discuss their estimates with one another.
3. The coordinator prepares a summary and includes any unusual rationales
noted by the estimators.
4. Estimators complete another estimate, again anonymously, using the
results of the previous estimates. Estimators whose estimates differ
sharply from the group may be asked to justify their answers, anonymously.
5. The process is iterated for as many rounds as required. No group
discussion is allowed during the entire process.
It is possible that several rounds of estimates will not lead to a
consensus estimate. In this case, the coordinator must discuss the issues
involved with each estimator to determine the reasons for the
differences.
The coordinator may have to gather additional information and present it
to the estimators in order to resolve the differences in viewpoint.
Scheduling is the process of building and monitoring schedules for software
development systems. Many engineering tasks need to occur in parallel with one
another to complete the project on time, and the output from one task often
determines when another may begin. It is difficult to ensure that a team is
working on the most appropriate tasks without building a detailed schedule and
sticking to it.
Software Project scheduling Principle:
Compartmentalization:
Interdependency
Time allocation
Effort validation
Define responsibilities
Defined outcomes
Defined milestones
*adding people to the project after it is behind schedule often causes the schedule
to slip further.
* The main reasons for using more than one person on a project are to get the job
done more rapidly and to improve s/w quality.
1. Planning: 2-3%
2. Requirements analysis: 10-25%
3. Design: 22-25%
4. Coding: 15-25%
5. Testing and debugging: 30-40%
*Casual: All framework activities applied, only minimum task set required.
Size of project
Number of potential users
Mission criticality
Application longevity
Requirements stability
Customer/developer communication
Maturity of applicable technology
Performance constraints
Project staffing
Reengineering factors
Scheduling:
* PERT and CPM: quantitative techniques that allow s/w planners to identify the chain of
dependent tasks in the project work breakdown structure that determines the project duration.
* Time line or Gantt chart: enables s/w planners to determine what tasks will be conducted
at a given point in time.
* Time boxing: the practice of deciding, a priori, the fixed amount of time that can be spent on
each task.
Productivity measures
Size related measures based on some output from the software process. This may be lines
of delivered source code, object code instructions, etc.
Measurement problems
Estimating the size of the measure
Estimating the total number of programmer months which have elapsed
Lines of code
What's a line of code?
o The measure was first proposed when programs were typed on cards with one line
per card
o How does this correspond to statements as in Java which can span several lines or
where there can be several statements on one line
Function points
Based on a combination of program characteristics:
o external inputs and outputs
o user interactions
o external interfaces
o files used by the system
The function point count is computed by multiplying each raw count by the weight and
summing all values
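A minimal sketch of an unadjusted function point computation; the raw counts are hypothetical and the weights shown are typical average-complexity values:

    # Unadjusted FP count = sum of (raw count x complexity weight).
    raw_counts = {"inputs": 10, "outputs": 7, "interactions": 5,
                  "interfaces": 2, "files": 4}    # hypothetical counts
    weights    = {"inputs": 4, "outputs": 5, "interactions": 4,
                  "interfaces": 7, "files": 10}   # average-complexity weights

    ufp = sum(raw_counts[c] * weights[c] for c in raw_counts)
    print("Unadjusted function points:", ufp)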
Object points
Object points are an alternative function-related measure to function points when 4GLs or
similar languages are used for development
Productivity estimates
Real-time embedded systems, 40-160 LOC/P-month
In object points, productivity has been measured between 4 and 50 object points/month
depending on tool support and developer capability
5.8 ZIPF’s LAW
Zipf's law, an empirical law formulated using mathematical statistics, refers to the fact
that many types of data studied in the physical and social sciences can be approximated with a
Zipfian distribution.
Zipf's law is most easily observed by plotting the data on a log-log graph, with the axes
being log(rank order) and log(frequency). For example, the most common word in the Brown
Corpus of American English, "the" (with 69,971 occurrences), would appear at x = log(1),
y = log(69971). The data conform to Zipf's law to the extent that the
plot is linear.
Formally, let:
N be the number of elements;
k be their rank;
s be the value of the exponent characterizing the distribution.
Zipf's law then predicts that out of a population of N elements, the frequency of elements of
rank k, f(k; s, N), is:
f(k; s, N) = (1/k^s) / (sum from n = 1 to N of 1/n^s)
Zipf's law holds if the number of occurrences of each element are independent and
identically distributed random variables with power law distribution p(f) = α·f^(-1 - 1/s).
In the example of the frequency of words in the English language, N is the number of words
in the English language and, if we use the classic version of Zipf's law, the exponent s is
1. f(k; s,N) will then be the fraction of the time the kth most common word occurs.
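A small sketch of this prediction:

    # Predicted relative frequency of the rank-k element under Zipf's law.
    def zipf_frequency(k, s, n):
        normaliser = sum(1 / i ** s for i in range(1, n + 1))
        return (1 / k ** s) / normaliser

    # With s = 1 and a 1000-element population, rank 2 occurs about half as
    # often as rank 1, rank 3 about a third as often, and so on.
    for k in (1, 2, 3):
        print(k, round(zipf_frequency(k, 1.0, 1000), 4))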
The simplest case of Zipf's law is a "1⁄f function". Given a set of Zipfian distributed
frequencies, sorted from most common to least common, the second most common frequency
will occur ½ as often as the first. The third most common frequency will occur ⅓ as often as the
first. The nth most common frequency will occur 1⁄n as often as the first. However, this cannot
hold exactly, because items must occur an integer number of times; there cannot be 2.5
occurrences of a word. Nevertheless, over fairly wide ranges, and to a fairly good approximation,
many natural phenomena obey Zipf's law.
Mathematically, it is impossible for the classic version of Zipf's law to hold exactly if
there are infinitely many words in a language, since the sum of all relative frequencies in the
denominator above is equal to the harmonic series, which diverges:
sum from n = 1 to infinity of 1/n = infinity
In English, the word frequencies have a very heavy-tailed distribution and can therefore
be modeled reasonably well by a Zipf distribution with an s close to 1.
As long as the exponent s exceeds 1, it is possible for such a law to hold with infinitely
many words, since if s > 1 then
sum from n = 1 to infinity of 1/n^s = ζ(s) < infinity
where ζ is the Riemann zeta function.
It compares the PLANNED amount of work with what has actually been COMPLETED,
to determine if COST , SCHEDULE, and WORK ACCOMPLISHED are progressing as
planned.
Need to “roll up” progress of many tasks into an overall project status
BCWS: "Budgeted Cost of Work Scheduled"
Planned cost of the total amount of work scheduled to be performed by the milestone
date.
ACWP: “Actual Cost of Work Performed”
Cost incurred to accomplish the work that has been done to date.
BCWP: Budgeted Cost of Work Performed
o The planned (not actual) cost to complete the work that has been done.
Some Derived Metrics
CSI (cost-schedule index): CPI x SPI. The further CSI is from 1.0, the less likely project
recovery becomes.
Performance Metrics
SPI: BCWP/BCWS (schedule performance index)
CPI: BCWP/ACWP (cost performance index)
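A minimal sketch of these computations (all monetary values hypothetical):

    # Earned value metrics from the three base quantities.
    bcws = 120_000.0  # budgeted cost of work scheduled to date
    acwp = 110_000.0  # actual cost of work performed to date
    bcwp = 100_000.0  # budgeted (planned) cost of the work actually done

    spi = bcwp / bcws  # schedule performance index
    cpi = bcwp / acwp  # cost performance index
    csi = spi * cpi    # cost-schedule index; recovery is less likely
                       # the further this drifts from 1.0
    print(f"SPI={spi:.2f} CPI={cpi:.2f} CSI={csi:.2f}")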
Forecasting
Baseline Schedule
Time required for data measurement, input, and manipulation can be considerable.
o CM aims to control the costs and effort involved in making changes to a system
When released to CM, software systems are sometimes called baselines as they are a
starting point for further development
CM standards
Standards should define how items are identified, how changes are controlled and how
new versions are managed
Standards may be based on external CM standards (e.g. IEEE standard for CM)
Existing standards are based on a waterfall process model - new standards are needed for
evolutionary development
o Specifications
o Designs
o Programs
o Test data
o User manuals
CM planning
Must define the documents or document classes which are to be managed (Formal
documents)
Documents which might be required for future system maintenance should be identified
and specified as managed documents
The CM plan
Defines who takes responsibility for the CM procedures and creation of baselines
Describes the tools which should be used to assist the CM process and any limitations on
their use
May include information such as the CM of external software, process auditing, etc.
Some of these documents must be maintained for the lifetime of the software
Document naming scheme should be defined so that related documents have related
names.
A hierarchical scheme with multi-level names is probably the most flexible approach
Change management
Change requests come from many sources:
o From users
o From developers
Change management is concerned with keeping track of these changes and ensuring
that they are implemented in the most cost-effective way
A change request form records the change required, the suggester of the change, the reason
why the change was suggested, and the urgency of the change (from the requestor of the change)
It also records the change evaluation, impact analysis, change cost and recommendations (from
system maintenance staff)
Change tracking tools keep track of the status of each change request and automatically
ensure that change requests are sent to the right people at the right time.
Changes should be reviewed by an external group who decide whether or not they are
cost-effective from a strategic and organizational viewpoint rather than a technical
viewpoint
The group should be independent of the project responsible for the system; it is sometimes
called a change control board
Ensure that version management procedures and tools are properly applied
Versions/variants/releases
Version: an instance of a system which is functionally distinct in some way from other
system instances
Version identification
Simple naming scheme uses a linear derivation e.g. V1, V1.1, V1.2, V2.1, V2.2 etc.
Release management
Releases must incorporate changes forced on the system by errors discovered by users
and by hardware changes
System releases
Release problems
o Customers may not want a new release; they may be happy with their current system
as the new version may provide unwanted functionality
Release management must not assume that all previous releases have been accepted. All
files required for a release should be re-created when a new release is installed
Factors such as the technical quality of the system, competition, marketing requirements
and customer change requests should all influence the decision of when to issue a new
system release
System building
The process of compiling and linking software components into an executable system
System representation
Systems are normally represented for building by specifying the file name to be
processed by building tools. Dependencies between these are described to the building
tools
Mistakes can be made as users lose track of which objects are stored in which files
A system modelling language addresses this problem by using a logical rather than a physical
system representation.
5.11 PROGRAM EVOLUTION DYNAMICS
After major empirical study, Lehman and Belady proposed that there were a number of
‘laws’ which applied to all systems as they evolved
These are sensible observations rather than laws. They are applicable to large systems
developed by large organisations, and perhaps less applicable in other cases.
Lehman’s laws
Prof. Meir M. Lehman, who worked at Imperial College London from 1972 to 2002, and his
colleagues have identified a set of behaviours in the evolution of proprietary software. These
behaviours (or observations) are known as Lehman's Laws, and there are eight of them:
1. Continuing Change
2. Increasing Complexity
3. Large Program Evolution
4. Invariant Work-Rate
5. Conservation of Familiarity
6. Continuing Growth
7. Declining Quality
8. Feedback System
The laws predict that change is inevitable and not a consequence of bad programming and
that there are limits to what a software evolution team can achieve in terms of safely
implementing changes and new functionality.
Maturity models specific to software evolution have been developed to help improve
processes and ensure continuous rejuvenation as the software evolves iteratively.
The "global process" that is made by the many stakeholders (e.g. developers, users, their
managers) has many feedback loops. The evolution speed is a function of the feedback loop
structure and other characteristics of the global system. Process simulation techniques, such
as system dynamics, can be useful in understanding and managing such a global process.
They are generally applicable to large, tailored systems developed by large organisations;
it is less clear how they should be modified for other cases, e.g.:
o Small organisations
Maintenance does not normally involve major changes to the system’s architecture
Maintenance is inevitable
The system requirements are likely to change while the system is being developed because
the environment is changing. Therefore a delivered system won't meet its requirements!
Systems are tightly coupled with their environment. When a system is installed in an
environment it changes that environment and therefore changes the system requirements.
Types of maintenance
Maintenance costs
Usually greater than development costs (2x to 100x depending on the application)
Ageing software can have high support costs (e.g. old languages, compilers etc.)
Team stability
o Maintenance costs are reduced if the same staff are involved with them for some
time
Contractual responsibility
Staff skills
o Maintenance staff are often inexperienced and have limited domain knowledge
Concerned with the activities involved in ensuring that software is delivered on time and
within budget, and in accordance with the requirements of the organisations developing
and procuring the software
Project management is needed because software development is always subject to budget and
schedule constraints that are set by the organisation developing the software
Management activities
Proposal writing
Project costing
Various different types of plan may be developed to support the main software project
plan that is concerned with schedule and budget
Project organisation
Risk analysis
Work breakdown
Project schedule
Activity organization
The waterfall process allows for the straightforward definition of progress milestones
Split project into tasks and estimate time and resources required to complete each task
Scheduling problems
Estimating the difficulty of problems and hence the cost of developing a solution is hard
Risks are potential problems that affect the successful completion of a project, and they
involve uncertainty and potential loss. Risk analysis and management help the s/w team to
overcome the problems caused by risks.
The work product is called a Risk Mitigation, Monitoring and Management Plan (RMMMP).
* Reactive strategies: also known as fire fighting; the project team sets resources aside to deal
with problems and does nothing until a risk becomes a problem.
* Proactive strategies: risk management begins long before technical work starts; risks are
identified and prioritized by importance, and the team builds a plan to avoid such risks.
* Project risk:
* Known risk: predictable from careful evaluation of current project plan and those
extrapolated from past project experience.
* Product-specific risk: the project plan and the s/w statement of scope are examined to identify
any specific characteristics.
The risk drivers affecting each risk component are classified according to their impact category
and potential consequence of each undetected s/w fault.
* Factors affecting risk consequences: nature, scope, and timing of the risk.
* If costs are associated with each risk table entry, a risk exposure measure (probability x cost)
can be computed and added to the risk table.
* Define referent levels for each project risk that can cause project termination.
* Predict the set of referent points that define a region of termination, bounded by a curve or
areas of uncertainty.
* Risk refinement: the process of restating the risks as a set of more detailed risks that will be
easier to mitigate, monitor, and manage.
* The CTC (condition-transition-consequence) format may be a good representation for the
detailed risks.
* Risk monitoring: assessing whether predicted risks occur or not; collecting information for
further risk analysis.
* Risk management and contingency planning: actions to be taken in the event that mitigation
steps have failed and the risk has become a live problem.
1. Risk identification:
A systematic attempt to specify threats to the project plan.
Both generic and product-specific risks are identified.
Risk identification checklist:
(i) people / staff
(ii) customer / user
(iii) business/business impact
(iv) application, product size, technology
(v) process maturity
2. Risk projection:
Establish a scale that reflects the perceived likelihood of each risk.
Define the consequences of the risks.
Estimate the impact of the risk on the project and/or the product.
3. Risk assessment:
Recording risks by building the risk table:
(i) estimate the probability of the occurrence
(ii) estimate the impact on the project
(iii) add RMMM plan
(iv) sort the table by probability and impact
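A minimal sketch of building and sorting a risk table (the risks, probabilities, and impact values are hypothetical; impact uses a 1 = catastrophic to 4 = negligible scale):

    # Risk table rows: (risk, probability, impact 1=catastrophic..4=negligible).
    risks = [
        ("Size estimate significantly low", 0.50, 2),
        ("Key staff leave the project",     0.30, 2),
        ("End users resist the system",     0.40, 3),
        ("Delivery deadline tightened",     0.50, 2),
    ]

    # Sort by probability (descending), then by severity, so the risks most
    # in need of RMMM planning rise to the top of the table.
    for name, prob, impact in sorted(risks, key=lambda r: (-r[1], r[2])):
        print(f"{name:32s} p={prob:.2f} impact={impact}")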
Monitoring: what factors can be tracked that will enable us to determine the causes of risks.
Management: what contingency plans do we have if the risks become real.
What is CASE?
Computer-aided software engineering (CASE) is automated support for the
software engineering process.
The workshop for software engineering has been called an integrated project support
environment and the tools that fill the workshop are collectively called CASE.
CASE provides the software engineer with the ability to automate manual activities and to
improve engineering insight. CASE tools help to ensure that quality is designed in before the
product is built.
1. Documentation tools.
These tools provide an important opportunity to improve productivity.
These tools for CASE are evolving from relational database management systems to
object-oriented database management systems.