Software Engg. 1-4

The document discusses the evolving role of software engineering and introduces some key concepts. It explains that software engineering aims to apply a systematic approach to software development to address issues like increasing complexity, costs and bugs. It also summarizes some common software process models like the waterfall model and iterative development. Finally, it discusses the importance of a well-defined software process and introduces some generic process framework activities.

UNIT-I

Introduction to Software Engineering : The Evolving role of Software.


Software characteristics and applications, Evolution of Software
Engineering, Software crisis. The Software Engineering challenges, The
Software Engineering approach. Software development life cycle.
Software Development Process Models (Paradigms): Waterfall
Model. Prototyping, Iterative Development, Spiral Model. Software
Project: Planning a Software Project. Effort Estimation: (COCOMO
and Function Points Model), Project Scheduling, Staffing and Personnel
Planning, Software Configuration Management Plan, Quality Assurance
Plans, Project Monitoring Plans, Risk Management.
1
Software is: (1) instructions (computer programs) that
when executed provide desired features, function, and
performance; (2) data structures that enable the
programs to adequately manipulate information and (3)
documentation that describes the operation and use of
the programs.

Why Software Engineering ?

• Change in nature & complexity of software


• Ready for change
• Concept of one “guru” is over

• We all want improvement

6
The Evolving Role of Software

Managers and Technical Persons are asked:


✓ Why does it take so long to get the program
finished?
✓ Why are costs so high?
✓ Why can't we find all the errors before release?
✓ Why do we have difficulty in measuring progress of
software development?

9
Software Characteristics

 Software must be adapted to meet the needs of new
computing environments or technology.
 Software must be enhanced to implement new business
requirements.
 Software must be extended to make it interoperable with
other, more modern systems or databases.
 Software must be re-architected to make it viable
within a network environment.
Software Applications
 System software
 Application software
 Engineering/scientific software
 Embedded software
 WebApps (Web applications)
 AI software
 etc.

11
Factors Contributing to the Software Crisis

 Larger problems,
 Lack of adequate training in software
engineering,
 Increasing skill shortage,
 Low productivity improvements.

12
Some Software Failures
Windows XP
✓ Microsoft released Windows XP on October 25, 2001.
✓ On the same day, the company posted 18 MB of
compatibility patches on its website for bug fixes,
compatibility updates, and enhancements.
✓ Two patches fixed important security holes.

This is Software Engineering.


13
Software engineering is an engineering discipline which
is concerned with all aspects of software production.
Software engineers should
– adopt a systematic and organised approach to their
work
– use appropriate tools and techniques depending on
• the problem to be solved,
• the development constraints and
• the resources available
Software Engineering

 A rigorous effort should be made to
understand the problem before a software
solution is developed
 Design becomes an essential activity
 Software should exhibit high quality
 Software should be maintainable

15
The IEEE definition:
Software Engineering: (1) The application of a
systematic, disciplined, quantifiable approach to
the development, operation, and maintenance of
software; that is, the application of engineering to
software.
“State of the art of developing quality software on
time and within budget”

16
Software engineering is a field of engineering,
for designing and writing programs for computers or
other electronic devices. A software engineer,
or programmer, writes software or changes existing
software and compiles software using methods that
improve it.

17
Definition of Software Process
 It is a framework for the activities, actions, and tasks
that are required to build high-quality software.
 It defines the approach that is taken as software is
engineered.
 Software engineering is a layered technology: a process,
technical methods, and automated tools.

18
A Layered Technology

Software engineering as a layered technology (top to bottom):

tools
methods
process model
a "quality" focus
19
A Layered Technology
Methods: Methods provide the technical "how-to" for
building software. They encompass a broad array of tasks,
including communication, requirements analysis, design
modeling, program construction, testing, and support.
Tools: Tools are developed to provide automated or
semi-automated support for the process and the methods.

20
Software engineers use many tools and practices in
making software. Some of the most common are:
•Flowcharts
•UML diagram
•Debugging tools
•Compiler
•Text editor etc.
•Computer Aided Software Engineering (CASE) is an
example of an integrated tool.
21
A Process Framework

Process framework
  Framework activities
    work tasks
    work products
    milestones & deliverables
    QA checkpoints
  Umbrella activities
22
Framework Activities

 Communication
 Planning
 Modeling
   Analysis of requirements
   Design
 Construction
   Code generation
   Testing
 Deployment
23
Umbrella Activities
 Software Project Management
 Formal Technical Reviews
 Software Quality Assurance
 Software Configuration Management
 Work Product Preparation and Production
 Reusability Management
 Measurement
 Risk management
24
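As a rough illustration, the framework hierarchy described above (framework activities, each carrying a task set of work tasks, work products, QA checkpoints, and milestones, plus the cross-cutting umbrella activities) can be sketched as nested data. The names below are ours, chosen for the sketch; they are not a standard API.

```python
from dataclasses import dataclass

# Illustrative sketch only: a task set bundles what the process
# framework attaches to each framework activity.
@dataclass
class TaskSet:
    work_tasks: list[str]       # the actual work to be done
    work_products: list[str]    # artifacts to be produced
    qa_checkpoints: list[str]   # quality assurance filters to apply
    milestones: list[str]       # progress markers and deliverables

@dataclass
class FrameworkActivity:
    name: str
    task_set: TaskSet

# The five generic framework activities, each with an empty task set here;
# a real project would populate them to suit the problem's characteristics.
framework = [
    FrameworkActivity(n, TaskSet([], [], [], []))
    for n in ("communication", "planning", "modeling",
              "construction", "deployment")
]

# Umbrella activities cut across every framework activity.
umbrella = ["project tracking and control", "risk management",
            "quality assurance", "configuration management",
            "technical reviews", "measurement"]
```

The point of the structure is that the activities stay fixed while each task set is adapted per project, which is exactly how the generic framework is described below.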
Software Process Models
✓ A software process model is an abstract representation
of a process. It presents a description of a process from
some particular viewpoint, such as:
1. Specification
2. Design
3. Validation
4. Evolution

25
Process models

 Help in the software development
 Guide the software team through a set of framework
activities
 Process models may be linear, incremental or
evolutionary

26
Software Development Activities

Requirements Collection: Establish customer's needs
Analysis: Model and specify the requirements ("what")
Design: Model and specify a solution ("how")
Implementation: Construct a solution in software
Testing: Validate the solution against the requirements
Maintenance: Repair defects and adapt the solution to new requirements

NB: these are ongoing activities, not sequential phases


A Generic Process Model

28
A Generic Process Model
◼ A generic process framework for software engineering
defines five framework activities: communication,
planning, modeling, construction, and deployment.
◼ In addition, a set of umbrella activities (project tracking
and control, risk management, quality assurance,
configuration management, technical reviews, and
others) are applied throughout the process.

29
Identifying a Task Set
◼ A task set defines the actual work to be done to
accomplish the objectives of a software engineering
action.
◼ A list of the tasks to be accomplished
◼ A list of the work products to be produced
◼ A list of the quality assurance filters to be applied

Which actions are appropriate for a framework activity depends on
the nature of the problem, the characteristics of the people, and the
stakeholders.
◼ For example, for a small software project requested by one
person with simple requirements, the communication
activity might consist of a phone call with the stakeholder.
Therefore, the only necessary action is a phone
conversation, and the work tasks of this action are:
◼ 1. Make contact with the stakeholder via telephone.
◼ 2. Discuss requirements and take notes.
◼ 3. Organize notes into a brief written statement of
requirements.
◼ 4. E-mail it to the stakeholder for review and approval.
Example of a Task Set for Elicitation
◼ The task sets for Requirements gathering action for a
simple project may include:
1. Make a list of stakeholders for the project.

2. Invite all stakeholders to an informal meeting.

3. Ask each stakeholder to make a list of features and

functions required.
4. Discuss requirements and build a final list.

5. Prioritize requirements.

6. Note areas of uncertainty.


◼ The task sets for the requirements gathering action for a big project may
include:
1. Make a list of stakeholders for the project.
2. Interview each stakeholder separately to determine overall wants
and needs.
3. Build a preliminary list of functions and features based on
stakeholder input.
4. Schedule a series of facilitated application specification meetings.
5. Conduct meetings.
6. Produce informal user scenarios as part of each meeting.
7. Refine user scenarios based on stakeholder feedback.
8. Build a revised list of stakeholder requirements.
9. Prioritize requirements.
10. Package requirements so that they can be delivered incrementally.
11. Note constraints and restrictions that will be placed on the system.
12. Discuss methods for validating the system.

33
The Waterfall Model

2
•It is the oldest paradigm for SE. When requirements are
well defined and reasonably stable, development can
proceed in a linear fashion.
•The classic life cycle suggests a systematic, sequential
approach to software development.
Problems: 1. Projects are rarely linear; iteration is needed.
2. It is hard to state all requirements explicitly up front.
3. Blocking states occur. 4. Working code is not released
until very late in the project.

3
Waterfall model
✓ Classical model of software engineering

✓ Sequential development approach

Basic Principles
✓ Project is divided into sequential phases, with some overlap and
splashback acceptable between phases.
✓ Tight control is maintained over the life of the project via
extensive written documentation, formal reviews, and
approval/signoff by the user and information technology
management occurring at the end of most phases before beginning
the next phase.


When to use the waterfall model:
✓Requirements are very well known, clear and fixed.
✓Product definition is stable.
✓Technology is understood.
✓There are no ambiguous requirements
✓Sufficient resources with the required knowledge are
freely available
✓The project is short.

5
The following list details the steps for using the waterfall model:
System requirements
✓ Establishes the components for building the system
✓ Includes hardware requirements, software tools, and other necessary
components.
Software requirements
✓ Requirements analysis includes determining interaction needed with
other applications and databases, performance requirements, user
interface requirements, and so on.
Architectural design
✓ Defines the major components and the interaction of those components,
but it does not define the structure of each component.
Detailed design
✓ Defines the specification of each component in detail
Coding
✓ Implements the detailed design specification
Testing
✓ Determines whether the software meets the specified
requirements and finds any errors present in the code
Maintenance
✓ Addresses problems and enhancement requests after the
software releases

7
Waterfall Model - Advantages
✓ Simple and easy to understand and use.
✓ Easy to manage due to the rigidity of the model.
✓ Phases are processed and completed one at a time.
✓ Works well for smaller projects where requirements are very
well understood.
✓ Clearly defined stages.
✓ Well understood milestones.
✓ Easy to arrange tasks.
✓ Reinforces good habits: define-before-design,
design-before-code
8
Disadvantages
✓ Idealized; doesn't match reality well.
✓ It does not allow much reflection or revision.
✓ Once an application is in the testing stage, it is very difficult to
go back and change something that was not well documented or
thought out in the concept stage.
✓ Software is delivered late in the project, which delays discovery of
serious errors.
✓ Difficult to integrate risk management.
✓ Difficult and expensive to make changes.

9
Incremental Process Models

 1. The Incremental Model


 2. The RAD Model

10
The Incremental Model

11
Basic Principles
✓ Iterative model; the project is divided into small parts.
✓ Allows the development team to deliver visible results
earlier in the process and obtain valuable feedback from
system users.
✓ Each iteration is actually a mini-waterfall process, with
the feedback from one phase providing critical
information for the design of the next phase.
12
The Incremental model
 Software is released in increments.
 The 1st increment constitutes the core product.
 Basic requirements are addressed.
 The core product undergoes detailed evaluation by the
customer.
 As a result, a plan is developed for the next increment.
The plan addresses the modification of the core product to
better meet the needs of the customer.
 The process is repeated until the complete product is produced.

13
When to use iterative model:
✓ Requirements of the complete system are clearly
defined and understood.
✓ When the project is big.
✓ Major requirements must be defined; however, some
details can evolve with time.

14
Advantages
✓ Generates working software quickly and early during the
software life cycle.
✓ More flexible
✓ less costly to change scope and requirements
✓ Easier to test and debug during a smaller iteration.
✓ Easier to manage risk because risky pieces are identified and
handled during its iteration.
✓ Allows feedback to preceding stages
✓ Can be used where the requirements are not well understood

15
Disadvantages
✓ Needs good planning and design.
✓ Needs a clear and complete definition of the whole
system before it can be broken down and
built incrementally.
✓ Total cost is higher than waterfall.
✓ Not easy to manage this model.
✓ No clear milestones in the development process.

16
THE RAD MODEL
(Rapid Application Development)

 An incremental software process model
 Having a short development cycle
 A high-speed adaptation of the waterfall model, using a
component-based construction approach
 Creates a fully functional system within a very short
time span of 60 to 90 days

17
When to use RAD Methodology?
•When a system needs to be produced in a short span of time
(2-3 months)
•When the requirements are known
•When the user will be involved all through the life cycle
•When technical risk is less
•When there is a necessity to create a system that can be
modularized in 2-3 months of time
•When the budget is high enough to afford designers for
modeling along with the cost of automated tools for code
generation
The RAD Model (diagram)

Communication → Planning → multiple teams (Team #1, Team #2, … Team #n)
working in parallel, each performing:
  Modeling (business modeling, data modeling, process modeling)
  Construction (component reuse, automatic code generation, testing)
followed by Deployment (integration, delivery, feedback).
THE RAD MODEL
 Multiple software teams work in parallel on different functions
 Modeling encompasses three major phases: Business modeling,
Data modeling and process modeling
 Construction uses reusable components, automatic code generation
and testing
 Problems in RAD
 Requires a number of RAD teams
 Requires commitment from both developer and customer for rapid-
fire completion of activities
 Requires modularity
 Not suited when technical risks are high
Phases of RAD model Activities performed in RAD Model

➢ Business Modeling: On basis of the flow of


information and distribution between various business
channels, the product is designed

➢ Data Modeling: The information collected from


business modeling is refined into a set of data objects
that are significant for the business

21
➢ Process Modeling: The data object that is declared in
the data modeling phase is transformed to achieve the
information flow necessary to implement a business
function

➢ Application Generation: Automated tools are used for


the construction of the software, to convert process and
data models into prototypes

➢ Testing and Turnover: As prototypes are
individually tested during every iteration, the overall
testing time is reduced
Advantages of RAD Model

✓ Flexible and adaptable to changes


✓ It is useful when you have to reduce the overall project risk
✓ Due to code generators and code reuse, there is a reduction
of manual coding
✓ Due to prototyping in nature, there is a possibility of lesser
defects
✓ Each phase in RAD delivers highest priority functionality to
client
✓ With fewer people, productivity can be increased in a
short time
23
Disadvantages
✓ It can't be used for smaller projects.
✓ Not all applications are compatible with RAD.
✓ When technical risk is high, it is not suitable.
✓ If developers are not committed to delivering software
on time, RAD projects can fail.
✓ Reduced features due to time boxing, where features are
pushed to a later version to finish a release in short
period.
✓ Requires highly skilled designers or developers.
24
EVOLUTIONARY PROCESS MODEL
 Evolutionary model is a combination of Iterative and
Incremental model of software development life cycle.
 Software evolves over a period of time.
 Business and product requirements often change as
development proceeds making a straight-line path to an
end product unrealistic.
 Types of evolutionary models
 Prototyping
 Spiral model
2
PROTOTYPE Model
✓ The prototype model is a systems development method in
which a prototype is built, tested and then reworked as
necessary until an acceptable outcome is achieved from
which the complete system or product can be developed.
Basic Principles
✓ More traditional development methodology.
✓ User is involved throughout the development process.
✓ Increases the chance of user acceptance of the final
implementation.
3
PROTOTYPING
 A prototype is an elementary working sample, model,
mock-up or just a simulation of the actual product, based
on which the other forms (final product, and variations)
are developed.
 The main motive behind prototyping is to validate the
design of the actual product.
 Used when the customer defines a set of objectives but does
not identify input, output, or processing requirements.
When to use Prototype model:
✓Used when the desired system needs to have a lot of
interaction with the end users.
✓Typically, online systems, web interfaces have a very
high amount of interaction with end users, are best suited
for Prototype model.
✓Prototyping ensures that the end users constantly work
with the system and provide a feedback which is
incorporated in the prototype to result in a useable
system.
5
Steps In Prototyping
 Begins with requirement gathering
 Identify whatever requirements are known
 Outline areas where further definition is mandatory
 A quick design occurs
 Quick design leads to the construction of prototype
 Prototype is evaluated by the customer
 Requirements are refined
 Prototype is tuned to satisfy the needs of the customer
6
7
✓ The Prototyping Model should be used when the
requirements of the product are not clearly understood
or are unstable.
✓ It can also be used if requirements are changing quickly.
This model can be successfully used for developing user
interfaces, high technology software-intensive systems,
and systems with complex algorithms and interfaces.
✓ It is also a very good choice to demonstrate the
technical feasibility of the product.
Advantages –
✓ The customers get to see the partial product early in the life cycle.
This ensures a greater level of customer satisfaction and comfort.
✓ New requirements can be easily accommodated as there is scope
for refinement.
✓ Missing functionalities can be easily figured out.
✓ Errors can be detected much earlier thereby saving a lot of effort
and cost, besides enhancing the quality of the software.
✓ The developed prototype can be reused by the developer for more
complicated projects in the future.
✓ Flexibility in design.
9
Disadvantages –
✓ Costly w.r.t time as well as money.
✓ Poor Documentation due to continuously changing customer
requirements.
✓ It is very difficult for the developers to accommodate all the
changes demanded by the customer.
✓ There is uncertainty in determining the number of iterations that
would be required before the prototype is finally accepted by the
customer.
✓ Developers in a hurry to build prototypes may end up with sub-
optimal solutions.
10
Limitations of Prototyping
 In a rush to get it working, overall software quality or
long-term maintainability is generally overlooked.
 Use of an inappropriate operating system or programming language.
 Use of inefficient algorithms.

11
The Spiral Model
 An evolutionary model which combines the best
features of the classical life cycle and the iterative
nature of the prototype model.
 Includes a new element: the risk element.
12
13
Basic Principles
✓ Focus is on risk assessment.
✓ Minimizing project risk by breaking a project into smaller
segments.
✓ Providing more ease-of-change during the development
process.
✓ Each cycle involves a progression through the same sequence
of steps.
✓ Begin each cycle with an identification of stakeholders and
their win conditions, and end each cycle with review and
assurance.
The Spiral Model
 A realistic approach to the development of large-scale
systems and software.
 Software evolves as the process progresses.
 Better understanding between developer and customer.
 The first circuit around the spiral might result in the
development of a product specification.
 Subsequent circuits develop a prototype,
 and then progressively more sophisticated versions of the software.
15
When to use Spiral model:
✓ When costs and risk evaluation is important
✓ For medium to high-risk projects
✓ Long-term project commitment
✓ Users are unsure of their needs
✓ Requirements are complex

16
Advantages
✓ High amount of risk analysis.
✓ Good for large and mission-critical projects.
✓ Software is produced early in the software life cycle.

Disadvantages
✓ Can be a costly model to use.
✓ Risk analysis requires highly specific expertise.
✓ Project's success is highly dependent on the risk analysis phase.
✓ Doesn't work well for smaller projects.
Comparative Analysis of Four Models

Features                       | Waterfall         | Incremental         | Prototyping         | Spiral
Requirement specification      | Beginning         | Beginning           | Frequently changed  | Beginning
Understanding of requirements  | Understood well   | Not understood well | Not understood well | Understood well
Cost                           | Low               | Low                 | High                | Expensive
Availability of reusable       | No                | Yes                 | Yes                 | Yes
components                     |                   |                     |                     |
Complexity of system           | Simple            | Simple              | Complex             | Complex
Risk analysis                  | Only at beginning | No risk analysis    | No risk analysis    | Yes
User involvement in all phases | Only at beginning | Intermediate        | High                | High
of SDLC                        |                   |                     |                     |
Guarantee of success           | Less              | High                | Good                | High
Overlapping phases             | No                | No                  | Yes                 | Yes
Cost control                   | Yes               | No                  | No                  | Yes
Conclusion
✓ The waterfall model is commonly used for software process
modeling.
✓ The iterative waterfall model overcomes the drawback of the
original waterfall model: it allows feedback to preceding stages.
✓ The prototype model is used to develop online systems for
transaction processing.
✓ The spiral model is used for the development of large, complicated
and expensive projects, such as scientific projects.
✓ Each model has advantages and disadvantages for the
development of systems, so each model tries to eliminate the
disadvantages of the previous model.
Project management
 People — the most important element of a
successful project.
 Product — the software to be built.
 Process — the set of framework activities and
software engineering tasks to get the job done.
 Project — all work required to make the product a
reality.
 Focus of Project Management
◦ To assure that information system projects meet
customer expectations.
 Delivered in a timely manner.
 Meet constraints and requirements.
➢ Project Management: A controlled process of
initiating, planning, executing and closing down a
project.
 Project Manager
◦ Systems Analyst responsible for:
 Project initiation
 Planning
 Execution
 Closing down
➢ Requires diverse set of skills
• Management
• Leadership
• Technical
• Conflict management
• Customer relations
 Project
◦ Planned undertaking of related activities to reach an
objective that has a beginning and an end.
 Four Phases
◦ Initiating
◦ Planning
◦ Executing
◦ Closing down
1. Project Initiation: The first phase of project
management process in which activities are performed
to assess the size, scope, and complexity of the project
and to establish procedures to support later project
activities.
2. Planning the project: It is the second phase of
the project management process, which focuses on
defining clear, discrete activities and the work needed
to complete each activity within a single project.
Following steps are performed:
1. Describe project scope, alternatives and feasibility
2. Divide the project into manageable tasks
3. Estimate resources and create a resource plan.
4. Develop a preliminary schedule.
5. Develop a communication plan.
6. Determine project standards and procedures.
7. Identify and assess risk.
8. Create a preliminary budget.
9. Develop a project scope statement.
10. Set a baseline project plan.
• It is the third phase of the project management process,
in which the plans created in the previous phases (initiation
and planning) are put into action. Following are the five
project execution activities:
• Executing the baseline project
• Monitoring project progress against the baseline project
plan.
• Managing changes to the baseline project plan.
• Maintaining the project workbook.
• Communicating the project status.
1. Closing down the project (Termination)
› Types of termination:
• Natural
• Requirements have been met
• Unnatural
• Project stopped
› Documentation
› Personnel Appraisal: assessment of each team member
2. Conduct post-project reviews
› Determine strengths and weaknesses of
▪ Project deliverables
▪ Project management process
▪ Development process
3. Closing the customer contract: closing papers are
signed by both parties; once signed, the project is
officially closed.
 Senior managers who define the business issues that often have
significant influence on the project.
 Project (technical) managers who must plan, motivate, organize,
and control the practitioners who do software work.
 Practitioners who deliver the technical skills that are necessary to
engineer a product or application.
 Customers who specify the requirements for the software to be
engineered and other stakeholders who have a peripheral interest in
the outcome.
 End-users who interact with the software once it is released for
production use.
 The MOI Model
◦ Motivation. The ability to encourage (by “push or pull”)
technical people to produce to their best ability.
◦ Organization. The ability to mold existing processes (or
invent new ones) that will enable the initial concept to be
translated into a final product.
◦ Ideas or innovation. The ability to encourage people to create
and feel creative even when they must work within bounds
established for a particular software product or application.

14
The following factors must be considered when selecting a software project team
structure ...
 the difficulty of the problem to be solved.
 the size of the resultant program(s) in lines of code or function
points.
 the time that the team will stay together (team lifetime).
 the degree to which the problem can be modularized.
 the required quality and reliability of the system to be built.
 the rigidity of the delivery date.
 the degree of sociability (communication) required for the
project.
 Scope (answers to the following questions)
 Context. How does the software to be built fit into a larger system,
product, or business context and what constraints are imposed as a
result of the context?
 Information objectives. What customer-visible data objects are
produced as output from the software? What data objects are
required for input?
 Function and performance. What function does the software
perform to transform input data into output? Are any special
performance characteristics to be addressed?
✓Software project scope must be unambiguous and
understandable at the management and technical levels.
✓A statement of software scope must be bounded.
✓That is, quantitative data (e.g., number of simultaneous
users, target environment, maximum allowable response
time) are stated explicitly, and
✓constraints and/or limitations (e.g., product cost restricts
memory size) are noted

17
 Sometimes called partitioning or problem elaboration
 Once scope is defined …
◦ It is decomposed into essential functions
◦ It is decomposed into user-visible data objects
or
◦ It is decomposed into a set of problem classes
 Decomposition process continues until all functions or
problem classes have been defined.

18
 Once a process framework has been established
◦ Consider project characteristics
◦ Determine the degree of strictness required
◦ Define a task set for each software engineering activity
 Task set =
 Software engineering tasks
 Work products
 Quality assurance points
 Milestones
19
 Projects get into trouble when …
◦ Software people don’t understand their customer’s needs.
◦ The product scope is poorly defined.
◦ Changes are managed poorly.
◦ The chosen technology changes.
◦ Business needs change [or are unclear].
◦ Deadlines are unrealistic.
◦ Users are resistant.
◦ Sponsorship is lost [or was never properly obtained].
◦ The project team lacks people with appropriate skills.
◦ Managers [and practitioners] avoid best practices and lessons
learned.
20
Software Project Planning
Cost Estimation
A number of estimation techniques have been developed, and they have the following
attributes in common:
✓ Project scope must be established in advance
✓ Software metrics are used as a basis from which estimates are made
✓ The project is broken into small pieces which are estimated individually
To achieve reliable cost and schedule estimates, a number of options arise:
✓ Delay estimation until late in the project
✓ Use simple decomposition techniques to generate project cost and schedule estimates
✓ Develop empirical models for estimation
✓ Acquire one or more automated estimation tools
Example: Suppose that a project was estimated to be 400 KLOC.
Calculate the effort and development time for each of the three
modes i.e., organic, semidetached and embedded.
E = a(KLOC)^b person-months (effort)
D = c(E)^d months (development time)
where the coefficients a, b, c, d depend on the mode (organic,
semidetached or embedded).
Example: A project size of 200 KLOC is to be developed. Software
development team has average experience on similar type of
projects. The project schedule is not very tight. Calculate the effort,
development time, average staff size and productivity of the project.
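The two worked examples above can be checked with a short script. This is a sketch of basic COCOMO using Boehm's published coefficients for the three modes, with effort E = a(KLOC)^b in person-months and duration D = c(E)^d in months:

```python
# Basic COCOMO: effort E = a * KLOC**b (person-months),
# development time D = c * E**d (months).
# Coefficients are Boehm's basic-COCOMO constants per mode.
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # months
    return effort, duration

# Example 1: 400 KLOC in each of the three modes.
for mode in COEFFS:
    e, d = basic_cocomo(400, mode)
    print(f"{mode:12s} E = {e:7.1f} PM, D = {d:4.1f} months")

# Example 2: 200 KLOC, average experience, relaxed schedule,
# which fits the semidetached mode.
e, d = basic_cocomo(200, "semidetached")
print(f"staff = {e / d:.1f} persons, "
      f"productivity = {200 / e:.3f} KLOC/PM")
```

For the 400 KLOC project this gives roughly 1295 PM (organic), 2463 PM (semidetached) and 4773 PM (embedded); the 200 KLOC semidetached project comes to about 1133 PM over 29.3 months.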
Intermediate Model
Cost drivers
(i) Product Attributes
✓ Required s/w reliability
✓ Size of application database
✓ Complexity of the product
(ii) Hardware Attributes
✓ Run time performance constraints
✓ Memory constraints
✓ Virtual machine volatility
✓ Turnaround time
(iii) Personnel Attributes
✓ Analyst capability
✓ Application experience
✓ Programmer capability
✓Virtual m/c experience
✓ Programming language experience
(iv) Project Attributes
✓ Modern programming practices
✓ Use of software tools
✓ Required development Schedule
Example: A new project with estimated 400 KLOC embedded
system has to be developed. Project manager has a choice of hiring
from two pools of developers: Very high quality (Application
experience) with very little experience in the programming
language being used
Or
Developers of low quality (Application experience) but a lot of
experience with the programming language. What is the impact of
hiring all developers from one or the other pool?
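A sketch of how intermediate COCOMO resolves this choice: nominal effort is multiplied by an effort adjustment factor (EAF), the product of the selected cost-driver multipliers. The multiplier values used here (application experience very high = 0.82, low = 1.13; language experience very low = 1.14, high = 0.95) are the commonly quoted table values and should be treated as assumptions of this sketch.

```python
# Intermediate COCOMO: E = a * KLOC**b * EAF, where EAF is the
# product of the chosen cost-driver multipliers.
def intermediate_effort(kloc, a, b, multipliers):
    eaf = 1.0
    for m in multipliers:
        eaf *= m
    return a * kloc ** b * eaf

A, B = 3.6, 1.20      # embedded-mode constants
pool1 = [0.82, 1.14]  # high application experience, little language experience
pool2 = [1.13, 0.95]  # low application experience, much language experience
e1 = intermediate_effort(400, A, B, pool1)
e2 = intermediate_effort(400, A, B, pool2)
print(f"pool 1 (strong domain, weak language): {e1:.0f} PM")
print(f"pool 2 (weak domain, strong language): {e2:.0f} PM")
```

Pool 1 needs roughly 4462 PM against roughly 5124 PM for pool 2, so with these multipliers domain expertise outweighs language familiarity for this project.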
Risk Management
Software Risk Management
✓ We software developers are extreme optimists.
✓ We assume everything will go exactly as planned.
Other view
✓ It is not possible to predict what is going to happen.
✓ Software surprises are never good news.
Risk management is required to reduce this surprise factor
✓ Dealing with concern before it becomes a crisis.
✓Quantify probability of failure & consequences of failure.
What is risk ?
Tomorrow’s problems are today’s risks.
“Risk is a problem that may cause some loss or threaten the success of the
project, but which has not happened yet”.
Risk management is the process of identifying, addressing and eliminating
these problems before they can damage the project.
Typical Software Risk
Capers Jones has identified the top five risk factors that
threaten projects in different applications.
1. Dependencies on outside agencies or factors.
• Availability of trained, experienced persons
• Inter group dependencies
• Customer-Furnished items or information
• Internal & external subcontractor relationships
Either situation results in unpleasant surprises and unhappy customers.
2. Requirements issues
• Lack of clear product vision
• Unprioritized requirements
• Lack of agreement on product requirements
• New market with uncertain needs
• Rapidly changing requirements
• Inadequate Impact analysis of requirements changes
3. Management Issues
Project managers usually write the risk management plans, and
most people do not wish to air their weaknesses in public.
• Inadequate planning
• Inadequate visibility into actual project status
• Unclear project ownership and decision making
• Staff personality conflicts
• Unrealistic expectation
• Poor communication
4. Lack of knowledge
• Inadequate training
• Poor understanding of methods, tools, and techniques
• Inadequate application domain experience
• New Technologies
• Ineffective, poorly documented or neglected processes
5. Other risk categories
• Unavailability of adequate testing facilities
• Turnover of essential personnel
• Unachievable performance requirements
• Technical approaches that may not work
Risk Assessment
Risk Identification
Risk analysis involves examining how project outcomes might change
with modification of risk input variables.
Risk prioritization focuses effort on the most severe risks.
Risk exposure: It is the product of the probability of incurring a loss
due to the risk and the potential magnitude of that loss.
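As a small illustration, risk exposure can be computed and used to rank risks; the risk names, probabilities and loss figures below are hypothetical:

```python
# Risk exposure: RE = probability of loss * magnitude of loss.
def risk_exposure(probability, loss):
    return probability * loss

risks = {
    "key developer leaves":      risk_exposure(0.30, 20_000),
    "requirements change late":  risk_exposure(0.50, 8_000),
    "test hardware unavailable": risk_exposure(0.10, 50_000),
}

# Highest exposure first: where risk-management effort should go.
for name, re_ in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name:27s} RE = {re_:7.0f}")
```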
Risk Control
Risk Management Planning produces a plan for dealing with each
significant risk.
Record the decisions in the plan.
Risk resolution is the execution of the plans for dealing with each risk.
Software Configuration
Management
Why Software Configuration Management ?
➢ The problem:
 Multiple people have to work on software that is changing
 More than one version of the software has to be supported:
 Released systems
 Custom configured systems (different functionality)
 System(s) under development
 Software must run on different machines and operating systems
➢ Need for coordination
 Software Configuration Management
 manages evolving software systems
 controls the costs involved in making changes to a system
Configuration management
Definition:
A set of management disciplines within the software
engineering process to develop a baseline.
Description:
Software Configuration Management includes the
disciplines and techniques of initiating, evaluating and
controlling change to software products during and after the
software engineering process.
SCM Activities
 Software Configuration Management (SCM) Activities:
 Configuration item identification
 Promotion management
 Release management
 Branch management
 Variant management
 Change management
 No fixed rules:
 SCM functions are usually performed in different ways (formally,
informally) depending on the project type and life-cycle phase
(research, development, maintenance).
SCM Activities (continued)
 Configuration item identification
 modeling of the system as a set of evolving components
 Promotion management
 is the creation of versions for other developers
 Release management
 is the creation of versions for the clients and users
 Branch management
 is the management of concurrent development
 Variant management
 is the management of versions intended to coexist
 Change management
 is the handling, approval and tracking of change requests
SCM Roles
 Configuration Manager
 Responsible for identifying configuration items. The configuration
manager can also be responsible for defining the procedures for
creating promotions and releases.
 Change control board member
 Responsible for approving or rejecting change requests
 Developer
 Creates promotions triggered by change requests or the normal
activities of development. The developer checks in changes and
resolves conflicts.
 Auditor
 Responsible for the selection and evaluation of promotions for release
and for ensuring the consistency and completeness of this release.
Terminology and Methodology
 What are
 Configuration Items
 Baselines
 SCM Directories
 Versions, Revisions and Releases
✓ The usage of the terminology presented here is not strict but
varies for different configuration management systems.
Terminology: Configuration Item
“Configuration item is defined as a combination of hardware,
software, or both, that is designated for configuration management
and treated as a single entity in the configuration management
process.”
❖ Software configuration items are not only program code segments
but all types of documents produced during development, e.g.
 all types of code files
 drivers for tests
 analysis or design documents
 user or developer manuals
 system configurations (e.g. version of compiler used)
Terminology: Baseline
“Baseline: A specification or product that has been formally
reviewed and agreed to by responsible management, that
thereafter serves as the basis for further development, and can be
changed only through formal change control procedures.”
Examples:
Baseline A: The API of a program is completely defined; the
bodies of the methods are empty.
Baseline B: All data access methods are implemented and
tested; programming of the GUI can start.
Baseline C: GUI is implemented, test-phase can start.
More on Baselines
 As systems are developed, a series of baselines is developed,
usually after a review (analysis review, design review, code
review, system testing, client acceptance, ...)
 Developmental baseline (RAD, SDD, Integration Test, ...)
 Goal: Coordinate engineering activities.
 Functional baseline (first prototype, alpha release, beta release)
 Goal: Get first customer experiences with functional system.
 Product baseline (product)
 Goal: Coordinate sales and customer support.
Baselines in SCM
Baseline A (developmental): all changes relative to baseline A
Baseline B (functional): all changes relative to baseline B
Baseline C (beta test): all changes relative to baseline C
Official Release
SCM Directories
 Programmer’s Directory (IEEE: Dynamic Library)
 Library for holding newly created or modified software entities.
The programmer’s workspace is controlled by the programmer
only.
 Master Directory (IEEE: Controlled Library)
 Manages the current baseline(s) and controls changes made
to them. Entry is controlled, usually after verification. Changes
must be authorized.
 Software Repository (IEEE: Static Library)
 Archive for the various baselines released for general use. Copies
of these baselines may be made available to requesting
organizations.
Change management
 Change management is the handling of change requests
 A change request leads to the creation of a new release
 General change process
 The change is requested (this can be done by anyone including users and
developers)
 The change request is assessed against project goals
 Following the assessment, the change is accepted or rejected
 If it is accepted, the change is assigned to a developer and implemented
 The implemented change is audited.
 The complexity of the change management process varies with the project.
Small projects can perform change requests informally and fast, while
complex projects require detailed change request forms and official
approval by one or more managers.
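The general change process above can be sketched as one function; the state names and the request text are hypothetical, not taken from any standard:

```python
# Sketch of the general change process: request -> assess ->
# approve/reject -> implement -> validate.
def process_change_request(request, consistent_with_goals, implement, audit):
    if not consistent_with_goals(request):   # assessed against project goals
        return "rejected"
    implement(request)                       # assigned to a developer
    if not audit(request):                   # the implemented change is audited
        return "failed-audit"
    return "closed"

state = process_change_request(
    "rename login button",                   # hypothetical change request
    consistent_with_goals=lambda r: True,
    implement=lambda r: None,
    audit=lambda r: True,
)
print(state)
```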
Version vs. Revision vs. Release
 Version:
 An initial release or re-release of a configuration item
associated with a complete compilation or recompilation of
the item. Different versions have different functionality.
 Revision:
 Change to a version that corrects only errors in the
design/code, but does not affect the documented functionality.
 Release:
 The formal distribution of an approved version.
SCM planning
 Software configuration management planning starts during the
early phases of a project.
 The outcome of the SCM planning phase is the
Software Configuration Management Plan (SCMP),
which might be extended or revised during the rest of the
project.
 The SCMP can either follow a public standard like IEEE 828,
or an internal (e.g. company-specific) standard.
The Software Configuration Management Plan
 Defines the types of documents to be managed and a document
naming scheme.
 Defines who takes responsibility for the CM procedures and
creation of baselines.
 Defines policies for change control and version management.
 Describes the tools which should be used to assist the CM
process and any limitations on their use.
 Defines the configuration management database used to record
configuration information.
Outline of a Software Configuration
Management Plan (SCMP, IEEE 828-1990)
 1. Introduction
 Describes purpose, scope of application, key terms and references
 2. Management (WHO?)
 Identifies the responsibilities and authorities for accomplishing the
planned configuration management activities
 3. Activities (WHAT?)
 Identifies the activities to be performed in applying to the project.
 4. Schedule (WHEN?)
 Establishes the sequence and coordination of the SCM activities
with project milestones.
 5. Resources (HOW?)
 Identifies tools and techniques required for the implementation of
the SCMP
 6. Maintenance
 Identifies activities and responsibilities on how the SCMP will be
kept current during the life-cycle of the project.
Tools for Software Configuration Management
 Software configuration management is normally supported by tools
with different functionality.
 Examples:
 RCS
 very old but still in use; only a version control system
 CVS
 based on RCS, allows concurrent working without locking
 Perforce
 repository server; keeps track of developers' activities
 ClearCase
 multiple servers, process modeling, policy check mechanisms
An example of change management process
(activity flow across four roles: Anybody, Control Board, Developer,
Quality Control Team)
Request change (Anybody)
Assess request (Control Board)
 [inconsistent with goals] Reject request
 [consistent with goals] Approve request
Assign change (Control Board)
Implement change (Developer)
Validate change (Quality Control Team)
UNIT-II Software Requirement Analysis And Specification: Need
for SRS, Problem Analysis, Requirements Specification. Software
Design: Design objectives and principles. Module level concepts,
Coupling and Cohesion. Design Notations and specifications.
Structured Design Methodology, Object Oriented Design. Detailed
Design: Detailed Design, Verification (Design Walkthroughs, Critical
Design Review, Consistency Checkers), Metrics.
Requirements describe What not How
Requirements analysis produces one large document, written in
natural language, that contains a description of what the system
will do without describing how it will do it.
Types
Functional requirements describe what the software has
to do. They are often called product features.
Non-functional requirements are mostly quality
requirements that stipulate how well the software does
what it has to do.
User and system requirements
• User requirements are written for the users and include
functional and non-functional requirements.
• System requirements are derived from user
requirements.
• The user and system requirements are both part of the software
requirements specification (SRS) document.
Requirements Documentation
SRS Should
-- Correctly define all requirements
-- not describe any design details
-- not impose any additional constraints
Characteristics of a good SRS: An SRS Should be
✓ Correct
✓ Unambiguous
✓ Complete
✓ Consistent
✓ Ranked for importance and/or stability
✓ Verifiable
✓ Modifiable
✓ Traceable
Requirements Documentation
✓Correct
An SRS is correct if and only if every requirement
stated therein is one that the software shall meet.
✓Unambiguous
An SRS is unambiguous if and only if, every
requirement stated therein has only one interpretation.
✓Complete
An SRS is complete if and only if, it includes the
following elements
i) All significant requirements, whether related to
functionality, performance, design constraints,
attributes or external interfaces.
ii) Responses to both valid & invalid inputs.
iii) Full labels and references to all figures, tables and
diagrams in the SRS, and definitions of all terms and units
of measure.
✓ Consistent
An SRS is consistent if and only if, no subset of individual
requirements described in it conflict.
✓Verifiable
An SRS is verifiable, if and only if, every requirement stated
therein is verifiable.
✓Modifiable
An SRS is modifiable, if and only if, its structure and style are such
that any changes to the requirements can be made easily,
completely, and consistently while retaining structure and style.
✓Traceable
An SRS is traceable, if the origin of each of the requirements is
clear and if it facilitates the referencing of each requirement in
future development or enhancement documentation.
Organization of the SRS
IEEE has published guidelines and standards to
organize an SRS. The first two sections are the same for
all projects; the specific tailoring occurs in section 3.
Software Design
• More creative than analysis
• Problem-solving activity
Conceptual design answers :
✓ Where will the data come from ?
✓ What will happen to data in the system?
✓ How will the system look to users?
✓ What choices will be offered to users?
✓ What is the timings of events?
✓ How will the reports & screens look like?
Technical design describes :
✓ Hardware configuration
✓ Software needs
✓ Communication interfaces
✓ I/O of the system
✓ Software architecture
✓ Network architecture
✓ Any other thing that translates the requirements in to a
solution to the customer’s problem.
The design needs to be:
✓ Correct & complete
✓ Understandable
✓ At the right level
✓ Maintainable
MODULARITY
A modular system consists of well-defined manageable units with
well-defined interfaces among the units.
Properties :
i. Well defined subsystem
ii. Well defined purpose
iii. Can be separately compiled and stored in a
library.
iv. Module can use other modules
v. Module should be easier to use than to build
vi. Simpler from outside than from the inside.
Modularity is the single attribute of software that allows
a program to be intellectually manageable. It enhances
design clarity, which in turn eases implementation,
debugging, testing, documenting, and maintenance of
software product.
This can be achieved as:
✓Controlling the number of parameters passed amongst
modules.
✓ Avoid passing undesired data to calling module.
✓ Maintain parent / child relationship between calling
& called modules.
✓ Pass data, not the control information.
Coupling
✓ “The degree of interdependence between two modules”
✓ We aim to minimise coupling - to make modules as
independent as possible.
Low coupling can be achieved by:
✓ eliminating unnecessary relationships
✓ reducing the number of necessary relationships
✓ easing the ‘tightness’ of necessary relationships
Given two procedures A & B, we can identify a number
of ways in which they can be coupled.
1. Data Coupling: The dependency between module A and
B is said to be data coupled if their dependency is based on
the fact they communicate by only passing of data. Other
than communicating through data, the two modules are
independent.
✓ Modules communicate by parameters
✓ Each parameter is an elementary piece of data
✓ Each parameter is necessary to the communication
✓ Nothing extra is needed.
2. Stamp coupling: Stamp coupling occurs between
module A and B when complete data structure is passed
from one module to another.
✓ A composite data is passed between modules
✓ Internal structure contains data not used
✓ Bundling - grouping of unrelated data into an artificial
structure
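A small sketch contrasting the two forms (the account record and function names are hypothetical):

```python
# Data coupling: only the elementary values actually needed are passed.
def monthly_interest(balance, annual_rate):
    return balance * annual_rate / 12

# Stamp coupling: the whole record is passed although only two fields
# are used, so the callee now depends on the record's structure.
def monthly_interest_stamp(account):
    return account["balance"] * account["annual_rate"] / 12

account = {"id": 42, "owner": "A. Borrower",
           "balance": 1200.0, "annual_rate": 0.06}
print(monthly_interest(account["balance"], account["annual_rate"]))
print(monthly_interest_stamp(account))
```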
3. Control coupling
Module A and B are said to be control coupled if they
communicate by passing of control information. This is
usually accomplished by means of flags that are set by
one module and reacted upon by the dependent module.
✓ A module controls the logic of another module
through the parameter.
✓ Controlling module needs to know how the other
module works.
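A minimal sketch of control coupling (the report module and its flag are hypothetical), followed by the lower-coupled alternative of splitting the module:

```python
# Control coupling: the caller steers the callee's internal logic
# through a flag, so it must know how the callee works inside.
def print_report(data, kind):
    if kind == "summary":
        return f"{len(data)} items"
    elif kind == "detail":
        return ", ".join(data)

# Lower coupling: two single-purpose functions, no flag to pass.
def print_summary(data):
    return f"{len(data)} items"

def print_detail(data):
    return ", ".join(data)

items = ["a", "b", "c"]
print(print_report(items, "summary"))   # "3 items"
print(print_detail(items))              # "a, b, c"
```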
4. Common coupling
A form of coupling in which multiple modules
share the same global data. Global data areas are
commonly found in programming languages.
Making a change to the common data means
tracing back to all the modules which access that
data to evaluate the effect of changes.
✓ Use of global data as communication between
modules
• So common coupling has got disadvantages like
difficulty in reusing modules, reduced ability to control
data accesses and reduced maintainability.
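A sketch of common coupling through a shared global (the parser and counter are hypothetical); any change to the meaning of the global forces re-checking every function that touches it:

```python
# Common coupling: two functions communicate through shared global data.
error_count = 0          # global shared by parse_line and report

def parse_line(line):
    global error_count
    if not line.strip():
        error_count += 1  # side effect on the shared global

def report():
    return f"{error_count} bad lines"

for line in ["ok", "", "   ", "ok"]:
    parse_line(line)
print(report())          # "2 bad lines"
```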
5. Content coupling
In a content coupling, one module can modify the data of
another module or control flow is passed from one module
to the other module. This is the worst form of coupling
and should be avoided.
When a module can directly access or modify or refer to
the content of another module, it is called content level
coupling. One module directly references contents of the
other.
1. Example of content coupling: part of a program handles
lookup for a customer. When the customer is not found, the component
adds the customer by directly modifying the contents of the data
structure containing customer data.
2. Example of content coupling: occurs when one component
modifies an internal data item in another component, or when one
component branches into the middle of another component.
✓ A module refers to the inside of another
module
✓ Branch into another module
✓ Refers to data within another module
✓ Changes the internal workings of another
module
✓ Mostly by low-level languages
Module Cohesion
Cohesion is a measure of the degree to which the elements
of a module are functionally related.
Features Of Cohesion In Software Engineering
1. Elements that contribute to cohesion are : instructions,
groups of instructions, data definition, call of another
module.
2. We aim for strongly cohesive modules.
3. Everything in module should be related to one another -
focus on the task.
4. Strong cohesion will reduce relations between modules -
minimize coupling.
Types of cohesion
✓ Functional cohesion
✓ Sequential cohesion
✓ Communicational cohesion
✓ Procedural cohesion
✓ Temporal cohesion
✓ Logical cohesion
✓ Coincidental cohesion
1. Functional cohesion: A and B are part of a single
functional task. This is very good reason for them to be
contained in the same procedure.
✓ All elements contribute to the execution of one and only
one problem-related task
✓ Focussed - strong, single-minded purpose
✓ No elements doing unrelated activities
Examples of functional cohesive modules:
•Compute cosine of angle
•Read transaction record
•Assign seat to airline passenger
2. Sequential Cohesion: Module A outputs some data
which forms the input to B. This is the reason for them to
be contained in the same procedure.
✓ Elements are involved in activities such that output data from
one activity becomes input data to the next
✓ Usually has good coupling and is easily maintained
✓ Not so readily reusable.
Example of Sequential Cohesion
Module format and cross-validate record
• use raw record
• format raw record
• cross-validate fields in raw record
• return formatted cross-validated record
3. Communicational Cohesion:
✓ Two elements operate on the same input data or
contribute towards the same output data.
✓ Communicational cohesion is when parts of a module
are grouped because they operate on the same data
(e.g., a module which operates on the same record of
information).
✓ Elements contribute to activities that use the same
input or output data.
Example of Communicational Cohesion
Module determine customer details:
• use customer account no
• find customer name
• find customer loan balance
• return customer name, loan balance
4. Procedural cohesion: Procedural Cohesion occurs in
modules whose instructions although accomplish different
tasks yet have been combined because there is a specific order
in which the tasks are to be completed.
✓ Elements are related only by sequence, otherwise the
activities are unrelated
✓ Similar to sequential cohesion, except for the fact that
elements are unrelated
✓ Commonly found at the top of hierarchy, such as the main
program module.
Example of Procedural Cohesion
Module write read and edit something
• use out record
• write out record
• read in record
• pad numeric fields with zeros
• return in record
5. Temporal Cohesion: When a module includes functions
that are associated by the fact that all the methods must be
executed in the same time, the module is said to exhibit
temporal cohesion.
✓ Elements are involved in activities that are related in time
Commonly found in initialisation and termination
modules.
✓ Elements are basically unrelated, so the module will be
difficult to reuse.
✓ Good practice is to initialise as late as possible and
terminate as early as possible.
Example of Temporal Cohesion
Module initialize
•set counter to 0
•open student file
•clear error message variable
•initialize array
6. Logical Cohesion
Logical cohesion occurs in modules that contain
instructions that appear to be related because they fall
into the same logical class of functions.
✓ Elements contribute to activities of the same general
category type.
✓ For example, a report module, display module or I/O
module,
✓ Usually have control coupling, since one of the
activities will be selected.
Example of Logical Cohesion
Module display record
• use record-type, record
if record-type is student then
display student record
else if record-type is staff then
display staff record
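The display record module above, sketched in Python (the field names are hypothetical); the record-type flag that selects the activity is also control coupling with every caller:

```python
# Logical cohesion: one module handles both record types, selected
# by a flag, although the two display activities are independent.
def display_record(record_type, record):
    if record_type == "student":
        return f"student: {record['name']} ({record['course']})"
    elif record_type == "staff":
        return f"staff: {record['name']} ({record['department']})"

print(display_record("student", {"name": "Rao", "course": "CS"}))
print(display_record("staff", {"name": "Iyer", "department": "EE"}))
```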
7. Coincidental Cohesion: Coincidental cohesion exists in
modules that contain instructions that have little or no
relationship to one another.
✓ Elements contribute to activities with no meaningful
relationship to one another.
✓ Similar to logical cohesion, except the activities may not
even be the same type.
✓ Mixture of activities.
✓ Difficult to understand and maintain, with strong
possibilities of causing ‘side effects’ every time the
module is modified.
Example of Coincidental Cohesion
Module miscellaneous functions
use customer record
display customer record
calculate total sales
read transaction record
return transaction record
Determining Module Cohesion
(decision-tree figure not reproduced)
Relationship between Cohesion & Coupling
If the software is not properly modularized, changes will
result in the death of the project. Therefore, a software
engineer must design the modules with the goal of high
cohesion and low coupling.
Differentiate between Coupling and Cohesion
• Coupling is also called Inter-Module Binding; Cohesion is also
called Intra-Module Binding.
• Coupling shows the relationships between modules; Cohesion
shows the relationships within the module.
• Coupling shows the relative independence between the modules;
Cohesion shows the module's relative functional strength.
• While creating, aim for low coupling, i.e., dependency among
modules should be less; aim for high cohesion, i.e., a cohesive
component/module focuses on a single function (single-mindedness)
with little interaction with other modules of the system.
• In coupling, modules are linked to the other modules; in cohesion,
the module focuses on a single thing.
✓ Software Design is the process to transform the user
requirements into some suitable form, which helps
the programmer in software coding and
implementation.
✓ During the software design phase, the design
document is produced, based on the customer
requirements as documented in the SRS document.
✓ The aim of this phase is to transform the SRS
document into the design document.
The following items are designed and documented
during the design phase:
✓ Different modules required.
✓ Control relationships among modules.
✓ Interface among different modules.
✓ Data structure among the different modules.
✓ Algorithms required to implement among the
individual modules.
Objectives of Software Design:
1.Correctness: A good design should be correct i.e. it
should correctly implement all the functionalities of the
system.
2.Efficiency: A good software design should address the
resources, time and cost optimization issues.
3.Understandability: A good design should be easily
understandable, for which it should be modular and all the
modules are arranged in layers.
4. Completeness: The design should have all the
components like data structures, modules, and external
interfaces, etc.
5. Maintainability: A good software design should be
easily manageable to change whenever a change
request is made from the customer side.
Software Design Principles
✓ Software design principles are concerned with
providing means to handle the complexity of the
design process effectively.
✓ Effectively managing the complexity will not
only reduce the effort needed for design but can
also reduce the scope of introducing errors
during design.
Following are the principles of Software Design:

1. Problem Partitioning
2. Abstraction
3. Modularity
4. Strategy of Design: Top-down Approach & Bottom-
up Approach
1. Problem Partitioning:
• For small problem, we can handle the entire problem at once
but for the significant problem, divide the problems and
conquer the problem it means to divide the problem into
smaller pieces so that each piece can be captured separately.
• For software design, the goal is to divide the problem into
manageable pieces.
• These pieces cannot be entirely independent of each other as
they together form the system. They have to cooperate and
communicate to solve the problem. This communication
adds complexity.
Structured Design
✓ Structured design is a conceptualization of problem into
several well-organized elements of solution.
✓ Structured design is mostly based on ‘divide and conquer’
strategy where a problem is broken into several small
problems and each small problem is individually solved
until the whole problem is solved.
✓ A good structured design always follows some rules for
communication among multiple modules, namely -
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
Function Oriented Design
✓ Function oriented design inherits some properties of
structured design where divide and conquer methodology
is used.
✓ In function-oriented design, the system is comprised of
many smaller sub-systems known as functions.
✓ These functions are capable of performing significant task
in the system.
✓ The system is considered as top view of all functions.
2. Abstraction
An abstraction is a tool that enables a designer to
consider a component at an abstract level without
bothering about the internal details of the
implementation. There are two common abstraction
mechanisms
1.Functional Abstraction
2.Data Abstraction
Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the function are
not visible to the user of the function.
Functional abstraction forms the basis for function-oriented
design approaches.
Data Abstraction
Details of the data elements are not visible to the users of
data. Data Abstraction forms the basis for Object Oriented
design approaches.
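A minimal sketch of data abstraction (a hypothetical Stack class): callers use only push and pop and never see the internal list, so the representation could be replaced without affecting them.

```python
# Data abstraction: the representation (_items) is hidden behind a
# small interface; users of Stack depend only on push/pop/is_empty.
class Stack:
    def __init__(self):
        self._items = []          # hidden internal representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())       # 2 (last in, first out)
print(s.is_empty())  # False
```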
3. Modularity
Modularity refers to the division of software into
separate modules which are differently named and
addressed and are integrated later to obtain the
completely functional software. It is the only property
that allows a program to be intellectually manageable.
Single large programs are difficult to understand and
read due to a large number of reference variables, control
paths, global variables, etc.
The desirable properties of a modular system are:
•Each module is a well-defined system that can be
used with other applications.
•Each module has single specified objectives.
•Modules can be separately compiled and saved in the
library.
•Modules should be easier to use than to build.
•Modules are simpler from outside than inside.
4. Strategy of Design
A good system design strategy is to organize the program
modules in such a way that they are easy to develop and, later,
easy to change. Structured design methods help developers deal
with the size and complexity of programs. Analysts generate
instructions for the developers about how code should be
written and how pieces of code should fit together to form a
program.
To design a system, there are two possible approaches:
1.Top-down Approach
2.Bottom-up Approach
1. Top-down Approach: This approach starts with the
identification of the main components and then
decomposes them into their more detailed sub-
components.
2. Bottom-up Approach: A bottom-up approach begins
with the lowest-level details and moves up the hierarchy,
as shown in the figure. This approach is suitable in the
case of an existing system.
Different levels of Software Design:
The software design process can be divided into the
following three levels of phases of design:
1.Interface Design
2.Architectural Design
3.Detailed Design
1. Interface Design: Interface design is the
specification of the interaction between a system and its
environment. This phase proceeds at a high level of
abstraction with respect to the inner workings of the
system; i.e., during interface design, the internals of the
system are completely ignored and the system is
treated as a black box.
Interface design should include the following details:
✓ Precise description of events in the environment, or
messages from agents to which the system must respond.
✓ Precise description of the events or messages that the
system must produce.
✓ Specification on the data, and the formats of the data
coming into and going out of the system.
✓ Specification of the ordering and timing relationships
between incoming events or messages, and outgoing
events or outputs.
2. Architectural Design: Architectural design is the
specification of the major components of a system,
their responsibilities, properties, interfaces, and the
relationships and interactions between them. In
architectural design, the overall structure of the system
is chosen, but the internal details of major components
are ignored.
Issues in architectural design include:
✓ Gross decomposition of the systems into major
components.
✓ Allocation of functional responsibilities to
components.
✓ Component Interfaces.
✓ Communication and interaction between components.
3. Detailed Design: Detailed design is the specification of the
internal elements of all major system components: their properties,
relationships, processing, and often their algorithms and data
structures.
The detailed design may include:
•Decomposition of major system components into program
units.
•Allocation of functional responsibilities to units.
•User interfaces
•Data and control interaction between units
•Algorithms and data structures
UNIT-II Software Requirement Analysis And Specification:
Need for SRS, Problem Analysis, Requirements Specification.
Software Design: Design objectives and principles. Module
level concepts, Coupling and Cohesion. Design Notations and
specifications. Structured Design Methodology, Object Oriented
Design. Detailed Design: Detailed Design, Verification (Design
Walkthroughs, Critical Design Review, Consistency Checkers),
Metrics.
Design Notations
Design notations are largely meant to be used during the
process of design and are used to represent design or
design decisions. For a function oriented design, the
design can be represented graphically or mathematically
by the following:
✓ Data flow diagrams
✓ Data Dictionaries
✓ Structure Charts
✓ Pseudocode
Data Dictionaries
✓ A data dictionary is a file or a set of files that includes
a database's metadata.
✓ The data dictionary holds records about other objects in
the database, such as data ownership, data
relationships to other objects, and other data.
✓ Typically, only database administrators interact with
the data dictionary.
The data dictionary, in general, includes information
about the following:
✓ Name of the data item
✓ Aliases
✓ Description/purpose
✓ Related data items
✓ Range of values
✓ Data structure definition/Forms
1. The name of the data item is self-explanatory.
2. Aliases include other names by which this data item is
known, e.g., DEO for Data Entry Operator and DR for Deputy
Registrar.
3. Description/purpose is a textual description of what the
data item is used for or why it exists.
4. Related data items capture relationships between data
items, e.g., total_marks must always equal
internal_marks plus external_marks.
5. Range of values records all possible values, e.g., total
marks must be positive and between 0 and 100.
6. Data structure Forms: Data flows capture the name of
processes that generate or receive the data items.
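A single data-dictionary entry carrying the attributes listed above can be sketched as a simple record (the field names mirror the list above; the alias shown is illustrative):

```python
# One data-dictionary entry for the total_marks item described above.
total_marks_entry = {
    "name": "total_marks",
    "aliases": ["TM"],  # illustrative alias, not from the text
    "description": "Total marks obtained by a student in a course",
    "related_items": "total_marks = internal_marks + external_marks",
    "range_of_values": (0, 100),  # must be between 0 and 100
    "data_structure": "integer field of the STUDENT record",
}


def in_range(entry, value):
    """Check a candidate value against the entry's recorded range."""
    low, high = entry["range_of_values"]
    return low <= value <= high
```

A tool built around such entries can then validate data values against the recorded range, as `in_range` does.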
The scheme of organizing related information is known as ‘data structure’.
The types of data structure are:
Lists: A group of similar items with connectivity to the previous or/and next
data items.
Arrays: A set of homogeneous values
Records: A set of fields, where each field consists of data belongs to one
data type.
Trees: A data structure where the data is organized in a hierarchical
structure. This type of data structure follows the sorted order of insertion,
deletion and modification of data items.
Tables: Data is persisted in the form of rows and columns. These are similar
to records, where the result or manipulation of data is reflected for the whole
table.
Structure Charts
✓ A structure chart (SC) in software engineering and
organizational theory is a chart which shows the
breakdown of a system to its lowest manageable levels.
✓ They are used in structured programming to arrange
program modules into a tree.
✓ Each module is represented by a box, which contains
the module's name.
✓ Structure Chart partitions the system into black boxes
(functionality of the system is known to the users but
inner details are unknown).
✓ Structure Chart represent hierarchical structure of
modules.
✓ Inputs are given to the black boxes and appropriate
outputs are generated.
✓ Modules at the top level call modules at the lower levels.
✓ Components are read from top to bottom and left to
right.
Symbols used in construction of structured chart
1.Module
It represents the process or task of the system. It is of three
types.
1.Control Module: A control module branches to more
than one sub module.
2.Sub Module: Sub Module is a module which is the
part (Child) of another module.
3.Library Module: Library Module are reusable and
invokable from any module.
2. Conditional Call: It represents that control module
can select any of the sub module on the basis of some
condition.
3. Loop (Repetitive call of module): It represents the
repetitive invocation of one or more sub-modules by a module.
A curved arrow represents the loop; all the sub-modules
covered by the loop are executed repeatedly.
4. Data Flow: It represents the flow of data between the
modules. It is represented by directed arrow with empty
circle at the end.
5. Control Flow: It represents the flow of control
between the modules. It is represented by directed arrow
with filled circle at the end.
6. Physical Storage: Physical storage is where all the
information is stored.
Example : Structure chart for an Email server
Pseudocode
✓ Pseudo code is a term which is often used in
programming and algorithm based fields.
✓ It is a methodology that allows the programmer to
represent the implementation of an algorithm.
✓ Pseudocode is a "text-based" detail (algorithmic)
design tool.
✓ It is simply a representation of an
algorithm in the form of notations and informative
text written in plain English.
✓ It has no syntax like any of the programming
language and thus can’t be compiled or interpreted by
the computer.
✓ Pseudocode often uses structural conventions of a
normal programming language, but is intended for
human reading rather than machine reading.
How to write a Pseudo-code?
1. Arrange the sequence of tasks and write the pseudocode
accordingly.
2.Start with the statement of a pseudo code which
establishes the main goal or the aim.
Example:
This program will allow the user to check the number
whether it's even or odd.
3. The way the if-else, for, while loops are indented in a
program, indent the statements likewise, as it helps to
comprehend the decision control and execution
mechanism. They also improve the readability to a great
extent. Example:
if "1"
    print response "I am case 1"
if "2"
    print response "I am case 2"
4. Use appropriate naming conventions. The naming
must be simple and distinct.
5. Elaborate everything which is going to happen in the
actual code. Don’t make the pseudo code abstract.
6. Use standard programming structures such as ‘if-
then’, ‘for’, ‘while’, ‘cases’ the way we use it in
programming.
7. Check whether all the sections of a pseudo code are
complete, finite, and clear to understand and
comprehend.
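As an illustration of these guidelines, the even/odd example stated as the goal in step 2 can be written out directly in code (a minimal sketch; the function name is illustrative):

```python
# Direct rendering of the even/odd pseudocode example:
# the indented if-else mirrors the decision control described above.
def classify_number(number):
    """Return 'even' or 'odd' for the given integer."""
    if number % 2 == 0:
        return "even"
    else:
        return "odd"
```

Note how the code keeps the same if/else structure the pseudocode would use, so the translation from pseudocode to program is mechanical.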
Advantages of Pseudocode
✓ Improves the readability of any approach. It’s one of the
best approaches to start implementation of an algorithm.
✓ Acts as a bridge between the program and the algorithm
or flowchart. It works as a rough documentation, so the
program of one developer can be understood easily
when a pseudo code is written out.
✓ The main goal of a pseudo code is to explain what
exactly each line of a program should do, hence making
the code construction phase easier for the programmer.
Purpose of an SDD
The SDD (Software Design Document) shows how the software
system will be structured to satisfy the requirements identified
in the SRS. It is basically the translation of requirements into a
description of the software structure, software components,
interfaces, and data necessary for the implementation phase.
Hence, the SDD becomes the blueprint for the implementation
activity.
Different Verification Methods Used for Detailed Design
✓ Design Verification is a method to confirm if the output
of a designed software product meets the input
specifications by examining and providing evidence.
✓ The goal of the design verification process
during software development is ensuring that
the designed software product is the same as specified.
The three verification methods we consider are design
walkthrough, critical design review, and consistency
checkers.
1. DESIGN WALKTHROUGH
✓ A design walkthrough is a manual method of verification.
✓ The definition and use of walkthroughs change from
organization to organization.
✓ A design walkthrough is done in an informal meeting
called by the designer or the leader of the designer’s
group.
✓ The walkthrough group is usually small and contains,
along with the designer, the group leader and/or another
designer of the group.
✓ Design walkthroughs provide designers with a way
to identify and assess early on whether the proposed
design meets the requirements and addresses the
project's goal.
✓ A design walkthrough is a quality practice that allows
designers to obtain an early validation of design
decisions.
The following guidelines help to plan, conduct, and participate
in design walkthroughs and increase their effectiveness.
1. Plan for a Design Walkthrough: A design
walkthrough should be scheduled when detailing the
micro-level tasks of a project.
2. Get the Right Participants: It is important to invite the
right participants to a design walkthrough. The
reviewers/experts should have the appropriate skills and
knowledge to make the walkthrough meaningful for
all.
3. Understand Key Roles and Responsibilities: All
participants in the design walkthrough should clearly
understand their role and responsibilities so that they can
consistently practice effective and efficient reviews.
4. Prepare for a Design Walkthrough: Besides
planning, all participants need to prepare for the design
walkthrough. If all participants are adequately prepared
as per their responsibilities, the design walkthrough is
likely to be more effective.
5. Use a Well-Structured Process: A design walkthrough
should follow a well-structured, documented process. This
process should help to define the key purpose of the
walkthrough and should provide systematic practices and
rules of conduct that can help participants.
6. Review and Critique the Product, Not the Designer:
The design walkthrough should be used as a means to
review and critique the product, not the person who
created the design.
7. Review, Do Not Solve Problems: A design
walkthrough has only one purpose: to find defects.
Participants often drift into discussing fixes; a
moderator needs to prevent this from happening and
ensure that the walkthrough focuses on the defects or
weaknesses rather than on identifying fixes or resolutions.
2. Critical Design Review
✓ A Critical Design Review (CDR) is a multi-disciplined
technical review to ensure that a system can proceed into
construction, demonstration, and test and can meet stated
performance requirements within cost, schedule, and risk.
✓ The Critical Design Review (CDR) closes the critical
design phase of the project.
✓ A CDR must be held and signed off before design freeze
and before any significant production begins. The design
at CDR should be complete and comprehensive.
A CDR should:
✓ Determine that detail design of the configuration item
under review satisfies cost (for cost type contracts),
schedule, and performance requirements.
✓ Establish detail design compatibility among the
configuration item and other items of equipment, facilities,
computer software and personnel.
✓ Assess configuration item risk areas (on a technical, cost,
and schedule basis).
✓ Review preliminary hardware product specifications.
✓ Determine the acceptability of the detailed design,
performance, and test characteristics of the design solution,
and on the adequacy of the operation and support
documents.
Completion of CDR should provide:
✓ An initial system Product Baseline.
✓ An updated risk assessment.
✓ An updated Cost Analysis Requirements Description
(CARD) based on the system product baseline.
✓ An updated program development schedule including
construction, test and evaluation, and software coding.
3. Consistency Checkers
✓ Design reviews and walkthroughs are manual
processes; the people involved in the review and
walkthrough find the errors in the design.
✓ If the design is specified in PDL or some other
formally defined design language, it is possible to
detect some design defects by using consistency
checkers.
✓ Consistency checkers are essentially compilers that take as
input the design specified in a design language (PDL).
Clearly, they cannot produce executable code, because the
inner syntax of PDL (program design language) allows
natural language, and many activities are specified in
natural language.
✓ A consistency checker can ensure that any modules invoked or
used by a given module actually exist in the design.
✓ It can also check whether the interface used by the
calling module is consistent with the interface definition of
the called module.
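As a sketch of the first check described above, assume the design is available as a mapping from each module to the modules it invokes (this representation and the function name are illustrative, not actual PDL tooling):

```python
def check_call_consistency(design):
    """Report calls to modules that do not exist in the design.

    design: dict mapping module name -> list of module names it invokes.
    Returns a list of (caller, missing_callee) pairs; an empty list
    means every invoked module is defined in the design.
    """
    defined = set(design)
    errors = []
    for caller, callees in design.items():
        for callee in callees:
            if callee not in defined:
                errors.append((caller, callee))
    return errors
```

A real consistency checker would additionally compare each call's argument list against the interface definition of the called module.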
STRUCTURED ANALYSIS vs. OBJECT-ORIENTED ANALYSIS
1. Structured analysis: the main focus is on the processes and
procedures of the system.
   Object-oriented analysis: the main focus is on data structures and
the real-world objects that are important.
2. Structured analysis uses the System Development Life Cycle
(SDLC) methodology for different purposes like planning, analyzing,
designing, implementing, and supporting an information system.
   Object-oriented analysis uses an Incremental or Iterative
methodology to refine and extend the design.
3. Structured analysis is suitable for well-defined projects with
stable user requirements.
   Object-oriented analysis is suitable for large projects with
changing user requirements.
4. With structured analysis, risk is high and reusability is low.
   With object-oriented analysis, risk is low and reusability is high.
5. In structured analysis, structuring requirements includes DFDs
(Data Flow Diagrams), Structured English, ER (Entity Relationship)
diagrams, CFDs (Control Flow Diagrams), the Data Dictionary,
decision tables/trees, and state transition diagrams.
   In object-oriented analysis, requirement engineering includes the
Use case model (find Use cases, Flow of events, Activity Diagram),
the Object model (find Classes and class relations, Object
interaction, Object to ER mapping), the State chart Diagram, and the
deployment diagram.
6. Structured analysis is the older technique and is not usually
preferred.
   Object-oriented analysis is newer and is mostly preferred.
UNIT-III
Software Implementation: Implementation issues, Coding.
Programming Practices: Structured coding and object
oriented coding techniques, Modern programming language
features. Verification and Validation techniques (Code reading,
Static Analysis, Symbolic Execution, Proving Correctness,
Code Inspections or Reviews, Unit Testing). Coding:
Programming Principles and guidelines, Coding Process
Metrics: Size Measures, Complexity Metrics, Style Metrics.
Documentation: Internal and External Documentation.
Verification and Validation is the process of investigating
whether a software system satisfies specifications and
standards and fulfils its required purpose.
Barry Boehm described verification and validation as the
following:
Verification: Are we building the product right?
(Process Review)
Validation: Are we building the right product?
(Product Review)
Verification: It is a process of checking documents, design, code,
and program in order to check if the software has been built according
to the requirements or not. The main goal of verification process is to
ensure quality of software application, design, architecture etc. The
verification process involves activities like reviews, walk-throughs
and inspections. Verification is static testing.
Activities involved in verification:
1.Inspections
2.Reviews
3.Walkthroughs
4.Desk-checking
Validation: It is a dynamic mechanism of testing and validating if the
software product actually meets the exact needs of the customer or
not. The process helps to ensure that the software fulfils the desired
use in an appropriate environment. The validation process involves
activities like unit testing, integration testing, system testing and user
acceptance testing. Validation is the Dynamic Testing.
Activities involved in validation:
1.Black box testing
2.White box testing
3.Unit testing
4.Integration testing
Example of verification and validation
•In Software Engineering, consider the following specification
A clickable button with name Submet
•Verification would check the design document and catch the
spelling mistake.
•Otherwise, the development team would build a button labeled
"Submet", faithfully implementing the misspelled specification.
VERIFICATION vs. VALIDATION
1. Verification includes checking documents, design, code, and
programs; validation includes testing and validating the actual
product.
2. Verification is static testing; validation is dynamic testing.
3. Verification does not include execution of the code; validation
includes execution of the code.
4. Methods used in verification are reviews, walkthroughs,
inspections, and desk-checking; methods used in validation are
black box testing, white box testing, and non-functional testing.
5. Verification checks whether the software conforms to
specifications or not; validation checks whether the software meets
the requirements and expectations of a customer or not.
6. Verification can find bugs in the early stages of development;
validation can only find the bugs that could not be found by the
verification process.
7. The goal of verification is the application and software
architecture and specification; the goal of validation is the actual
product.
8. The quality assurance team does verification; validation is
executed on the software code with the help of the testing team.
9. Verification comes before validation; validation comes after
verification.
10. Verification consists of checking documents/files and is
performed by humans; validation consists of execution of the
program and is performed by computer.
Software Measurement and Metrics
 Measurement is the action of measuring something.
 It is the assignment of a number to a characteristic of
an object or event, which can be compared with other
objects or events.
 Formally it can be defined as, the process by which
numbers or symbols are assigned to attributes of
entities in the real world, in such a way as to describe
them according to clearly defined rules.
Need of Software Measurement: Software is
measured to:
1. Assess the quality of the current product or process.
2.Anticipate future qualities of the product or process.
3.Enhance the quality of a product or process.
4.Regulate the state of the project in relation to budget
and schedule.
Classification of Software Measurement:
There are 2 types of software measurement:
Direct Measurement: These are the measurements that can be
measured without the involvement of any other entity or
attribute. The following direct measures are commonly used in
software engineering.
•Length of source code by LOC
•Duration of testing
•Number of defects discovered during the testing process by
counting defects
•The time a programmer spends on a program
Indirect Measurement: These are measurements that can be
measured in terms of any other entity or attribute. The
following indirect measures are commonly used in software
engineering.
• Programmer Productivity = LOC produced / Person
months of effort
• Requirement Stability = Number of initial requirements /
Total number of requirements
• Module Defect Density=Number of defects / Module size
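The three indirect measures above are simple ratios and can be computed directly; a minimal sketch:

```python
def programmer_productivity(loc_produced, person_months):
    """Programmer productivity = LOC produced / person-months of effort."""
    return loc_produced / person_months


def requirement_stability(initial_reqs, total_reqs):
    """Requirement stability = initial requirements / total requirements."""
    return initial_reqs / total_reqs


def module_defect_density(defects, module_size_kloc):
    """Module defect density = number of defects / module size (KLOC)."""
    return defects / module_size_kloc
```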
Metrics:
A metrics is a measurement of the level that any attribute
belongs to a system product or process. There are 4
functions related to software metrics:
1.Planning
2.Organizing
3.Controlling
4. Improving
Characteristics of Software Metrics:
1. Quantitative: Metrics must possess a quantitative
nature, i.e., they can be expressed in numerical values.
2. Understandable: Metric computation should be easily
understood; the method of computing the metric should be
clearly defined.
3. Applicability: Metrics should be applicable in the
initial phases of development of the software.
4. Repeatable: The metric values should be same when
measured repeatedly and consistent in nature.
5. Economical: Computation of metric should be
economical.
6. Language Independent: Metrics should not depend
on any programming language.
Opportunities for measurement during the software life cycle
Subjective and Objective Measures
 A subjective measure requires human judgment. There is no
guarantee that two different people making a subjective
measure will arrive at the same value.
 Example: defect severity, function points, readability and
usability.
 An objective measure requires no human judgment. There
are precise rules for quantifying an objective measure. When
applied to the same attribute, two different people will arrive
at the same answer.
 Example: effort, cost and LOC.
 A measure provides a quantitative indication of the
extent, dimension, size, capacity, efficiency,
productivity or reliability of some attributes of a
product or process.
 Measurement is the act of evaluating a measure.
 A metric is a quantitative measure of the degree to
which a system, component or process possesses a
given attribute.
Static metrics: Static metrics are obtainable in the early phases
of the software development life cycle and deal with the structural
features of software. These metrics do not deal with object-oriented
features or real-time systems. Static complexity metrics
estimate the amount of effort needed to develop and maintain
the code.
Dynamic metrics: Dynamic metrics are accessible at the late
stages of the software development life cycle. These metrics
capture the dynamic behaviour of the system, are very hard to
obtain, and are derived from execution traces of the code. Dynamic
metrics support all object-oriented features and real-time systems.
Software metrics is a standard of measure that contains many activities
which involve some degree of measurement. It can be classified into three
categories: product metrics, process metrics, and project metrics.
•Product metrics describe the characteristics of the product such as size,
complexity, design features, performance, and quality level.
•Process metrics can be used to improve software development and
maintenance. Examples include the effectiveness of defect removal during
development, the pattern of testing defect arrival, and the response time of
the fix process.
•Project metrics describe the project characteristics and execution.
Examples include the number of software developers, the staffing pattern
over the life cycle of the software, cost, schedule, and productivity.
Software Project Planning
 In order to conduct a successful software project, we must
understand:
 Scope of work to be done
 The risk to be incurred
 The resources required
 The task to be accomplished
 The cost to be expended
 The schedule to be followed
Software planning begins before technical work starts,
continues as the software evolves from concept to reality,
and ends only when the software is retired.
Lines of Code: It is one of the earliest and simplest metrics
for calculating the size of a computer program. It is
generally used in calculating and comparing the
productivity of programmers. These metrics are derived
by normalizing the quality and productivity measures by
considering the size of the product as a metric.
Following are the points regarding LOC measures:
1.In size-oriented metrics, LOC is considered to be the
normalization value.
2.Size-oriented metrics depend on the programming
language used.
3.Productivity is defined as KLOC / EFFORT, where
effort is measured in person-months.
4.LOC measure requires a level of detail which may not
be practically achievable.
5. The more expressive the programming language, the lower
the apparent LOC-based productivity, since the same
functionality takes fewer lines.
6. LOC method of measurement does not apply to projects that
deal with visual (GUI-based) programming. As already
explained, Graphical User Interfaces (GUIs) use forms
basically. LOC metric is not applicable here.
7. It requires that all organizations use the same method
for counting LOC. This is because some organizations count
only executable statements, some include comments, and some
do not. Thus, a standard needs to be established.
8. These metrics are not universally accepted.
Lines of Code (LOC)
If LOC is simply a count of the number of lines, then the
figure shown below contains 18 LOC. When comments and
blank lines are ignored, the program in figure 2 contains
17 LOC.
“A line of code is any line of program text that is not a
comment or blank line, regardless of the number of
statements or fragments of statements on the line. This
specifically includes all lines containing program
header, declaration, and executable and non-executable
statements”.
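A rough counter following this definition can be sketched as follows (an illustrative sketch; it treats only whole-line comments as comments and assumes a single comment prefix):

```python
def count_loc(source_text, comment_prefix="#"):
    """Count lines that are neither blank nor whole-line comments.

    Follows the definition quoted above: a line with code followed by a
    trailing comment still counts as a line of code.
    """
    loc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            loc += 1
    return loc
```

For example, a six-line file with one header comment, two blank lines, and three statements counts as 3 LOC under this rule.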
Productivity and Effort
Productivity is defined as the rate of output, or
production per unit of effort, i.e. the output achieved
with regard to the time taken but irrespective of the cost
incurred.
Hence the most appropriate unit of effort is Person-Months
(PMs), meaning the number of persons involved for a
specified number of months. So productivity may be measured
as LOC/PM (lines of code produced per person-month).
LOC-based metrics:
 productivity = KLOC/person-month
 quality = faults/KLOC
 cost = $$/KLOC
 documentation = doc_pages/KLOC
Halstead's Software Metrics
 According to Halstead, "A computer program is an
implementation of an algorithm considered to be a collection of
tokens which can be classified as either operators or operands."
 Halstead's metrics are included in a number of current
commercial tools that count tokens and determine which are
operators and which are operands. The following base measures
can be collected:
n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrence of operands.
 Size of program (N): In terms of the total tokens
used, the size of the program can be expressed as:
N = N1 + N2.
 Size of Vocabulary (n)
The size of the vocabulary of a program, which consists
of the number of unique tokens used to build a program,
is defined as:
n = n1+n2
where
n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands
 Program Volume (V): The unit of measurement of volume
is the standard unit for size, "bits". It is the actual size
of a program.
V = N * log2(n)
Example: Consider the sorting program as shown in fig: List out the
operators and operands and also calculate the value of software
science measure like n, N, V.
Operators Occurrences Operands Occurrences
int 4 SORT 1
() 5 x 7
, 4 n 3
[] 7 i 8
if 2 j 7
< 2 save 3
; 11 im1 3
for 2 2 2
= 6 1 3
- 1 0 1
<= 2 - -
++ 2 - -
return 2 - -
{} 3 - -
n1=14 N1=53 n2=10 N2=38
Here N1=53 and N2=38. The program length
N=N1+N2
N=53+38 =91
Vocabulary of the program
n=n1+n2
n=14+10=24
Volume V= N * log2 n
V=91 × log2 24
=417 bits.
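The calculation above can be done mechanically from the four base counts; a small sketch (function name illustrative), which reproduces the sorting-program result:

```python
import math


def halstead(n1, n2, N1, N2):
    """Compute Halstead's length N, vocabulary n, and volume V (in bits).

    n1, n2: counts of unique operators and operands.
    N1, N2: counts of total occurrences of operators and operands.
    """
    N = N1 + N2            # program length
    n = n1 + n2            # vocabulary
    V = N * math.log2(n)   # program volume, in bits
    return N, n, V
```

For the sorting program above, `halstead(14, 10, 53, 38)` yields N = 91, n = 24, and V ≈ 417 bits, matching the worked example.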
Function Count
Alan Albrecht while working for IBM, recognized the
problem in size measurement in the 1970s, and
developed a technique (which he called Function Point
Analysis), which appeared to be a solution to the size
measurement problem.
The principle of Albrecht’s function point analysis
(FPA) is that a system is decomposed into functional
units.
 The basic and primary purpose of the functional point
analysis is to measure and provide the software
application functional size to the client, customer, and
the stakeholder on their request.
 FPs of an application is found out by counting the
number and types of functions used in the
applications. Various functions used in an application
can be put under five types, as shown in Table:
Types of FP Attributes
Measurement Parameter — Examples
1. Number of External Inputs (EI) — Input screens and tables
2. Number of External Outputs (EO) — Output screens and reports
3. Number of External Inquiries (EQ) — Prompts and interrupts
4. Number of Internal Logical Files (ILF) — Databases and directories
5. Number of External Interface Files (EIF) — Shared databases and
shared routines
1. All these parameters are then individually assessed for
complexity.
2.The effort required to develop the project depends on what
the software does.
3. FP is programming language independent.
4. FP method is used for data processing systems, business
systems like information systems.
5. The five parameters mentioned above are also known as
information domain characteristics.
6. LOCs of an application can be estimated from FPs.
That is, they are interconvertible. This process is known
as backfiring. For example, 1 FP is equal to about 100
lines of COBOL code.
7. FP metrics is used mostly for measuring the size of
Management Information System (MIS) software.
Special features
The function point approach is independent of the languages, tools, or methodologies used for implementation; i.e., it does not take into consideration programming languages, database management systems, processing hardware, or any other database technology.
Function points can be estimated from requirement
specification or design specification, thus making it
possible to estimate development efforts in early phases of
development.
Function points are directly linked to the statement of
requirements; any change of requirements can easily be
followed by a re-estimate.
Function points are based on the system user’s
external view of the system, non-technical users of the
software system have a better understanding of what
function points are measuring.
The weighting factors are identified for all functional units and
multiplied with the functional units accordingly. The procedure for
the calculation of Unadjusted Function Point (UFP) is given in
table shown above.
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
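The UFP-to-FP arithmetic can be sketched in Python. The average-complexity weights shown are the commonly cited Albrecht/IFPUG values (organizations may calibrate them), and the function counts and the 14 general-system-characteristic ratings below are hypothetical:

```python
# Average-complexity weights commonly cited for Albrecht's FPA
# (organizations may calibrate these); the function counts and the
# 14 characteristic ratings below are hypothetical.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts  = {"EI": 5, "EO": 4, "EQ": 3, "ILF": 2, "EIF": 1}

ufp = sum(counts[k] * WEIGHTS[k] for k in counts)   # Unadjusted Function Points

gsc = [3] * 14                       # ratings (0..5) for the 14 characteristics
vaf = 0.65 + 0.01 * sum(gsc)         # value adjustment factor
fp = ufp * vaf                       # adjusted function points
print(ufp, round(fp, 2))             # 79 84.53
```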
Function Oriented Metrics
 Productivity = FP / person-month
 Quality = faults / FP
 Cost = $$ / FP
 Documentation = doc_pages / FP
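These ratios are simple divisions; a toy illustration with hypothetical project numbers:

```python
fp = 120                      # hypothetical function point count
effort_pm = 10                # person-months spent
faults, cost, pages = 36, 96000, 540

print(fp / effort_pm)         # productivity: 12.0 FP/person-month
print(faults / fp)            # quality: 0.3 faults/FP
print(cost / fp)              # cost: 800.0 $/FP
print(pages / fp)             # documentation: 4.5 pages/FP
```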
Differentiate between FP and LOC

FP LOC
1. FP is specification 1. LOC is an analogy based.
based.
2. FP is language 2. LOC is language dependent.
independent.
3. FP is user-oriented. 3. LOC is design-oriented.
4. It is extendible to LOC. 4. It is convertible to FP
(backfiring)
UNIT-III
Software Implementation: Implementation issues, Coding.
Programming Practices: Structured coding and object
oriented coding techniques, Modern programming language
features. Verification and Validation techniques (Code reading,
Static Analysis, Symbolic Execution, Proving Correctness,
Code Inspections or Reviews, Unit Testing). Coding:
Programming Principles and guidelines, Coding Process
Metrics: Size Measures, Complexity Metrics, Style Metrics.
Documentation: Internal and External Documentation.
Information Flow Metrics
Information Flow metrics measure the information flowing among the modules of a system. They are sensitive to the complexity due to interconnections among system components. This measure also defines the complexity of a software module as the sum of the complexities of the procedures present in the module.
A procedure contributes complexity due to the following two factors:
 The complexity of the procedure code itself.
 The complexity due to the procedure's linkage to other procedures.
The effect of the first factor has been included through the LOC (Lines Of Code) measure. For estimating the second, Henry and Kafura have described two terms: FAN-IN and FAN-OUT.
FAN-IN: FAN-IN of a component is a count of the number of
other components that can call or pass information to that
component.
FAN-OUT: FAN-OUT of a component is the number of components that are called by or receive information from that component.
The figure given below shows a fragment of a system design
having component 'A,' for which we can define three measures:
‘FAN-IN’ is a count of the number of other components calling or
passing control to A.
‘FAN-OUT’ is the number of components called by A.
3. The information flow index of component A, abbreviated as IF(A), is derived from the first two measures using the following formula:
IF(A) = [FAN-IN(A) × FAN-OUT(A)]²
Ques. Consider the following system. Calculate FAN-IN and
FAN-OUT of A, and what do they indicate?
FAN-IN(A) = 3, FAN-OUT(A) = 2
1. High FAN-IN indicates this module has
been used heavily. This shows the
reusability of modules and thus reduces
redundancy in the coding.
2. High FAN-OUT indicates a highly
coupled module, thus more dependency
on other modules.
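The example can be checked mechanically. A minimal sketch, assuming the call graph is stored as an adjacency mapping (the component names are hypothetical):

```python
# Hypothetical call graph: calls[X] lists the components called by X.
# Here P, Q and R all call A, and A calls S and T.
calls = {"P": ["A"], "Q": ["A"], "R": ["A"],
         "A": ["S", "T"], "S": [], "T": []}

def fan_in(m):
    # number of components that call m
    return sum(m in callees for callees in calls.values())

def fan_out(m):
    # number of components called by m
    return len(calls[m])

def if_index(m):
    # simple information flow index: (FAN-IN * FAN-OUT)^2
    return (fan_in(m) * fan_out(m)) ** 2

print(fan_in("A"), fan_out("A"), if_index("A"))  # 3 2 36
```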
 Information Flow metrics are applied to the
Components of a system design.
 This metric is based on the measurement of the information flow among system modules.
 It is sensitive to the complexity due to interconnections among system components.
 This measure defines the complexity of a software module to be the sum of the complexities of the procedures included in the module.
Fig. shows a fragment of such a design, and for
component ‘A’ we can define three measures, but these
are the simplest models of IF.
1. ‘FAN IN’ is simply a count of the number of other
Components that can call, or pass control, to
Component A.
2. ‘FANOUT’ is the number of Components that are
called by Component A.
3. The third is derived from the first two by using the following formula. We will call this measure the INFORMATION FLOW index of Component A, abbreviated as IF(A):
IF(A) = [FAN IN(A) × FAN OUT(A)]²
The following is a step-by-step guide to calculate IF metrics.
1. Note the level of each Component in the system design.
2. For each Component, count the number of calls to that Component – this is the FAN IN of that Component. Some organizations allow more than one Component at the highest level of the design, so Components at the highest level, which would otherwise have a FAN IN of zero, are assigned a FAN IN of one. Also note that a simple model of FAN IN can penalize reused Components.
3. For each Component, count the number of calls made from the Component. For Components that call no others, assign a FAN OUT value of one.
4. Calculate the IF value for each Component using the
above formula.
5. Sum the IF value for all Components within each level
which is called as the LEVEL SUM.
6. Sum the IF values for the total system design which is
called the SYSTEM SUM.
A More Sophisticated Information Flow Model
a = the number of components that call A.
b = the number of parameters passed to A from
components higher in the hierarchy.
c = the number of parameters passed to A from
components lower in the hierarchy.
d = the number of data elements read by component A.
Then:
FAN IN(A)= a + b + c + d
Also let:
e = the number of components called by A;
f = the number of parameters passed from A to components
higher in the hierarchy;
g = the number of parameters passed from A to
components lower in the hierarchy;
h = the number of data elements written to by A.
Then:
FAN OUT(A)= e + f + g + h
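A small sketch of the sophisticated model, with hypothetical counts for a component A:

```python
def fan_in(a, b, c, d):
    """a: components that call A; b/c: parameters passed to A from
    higher/lower components; d: data elements read by A."""
    return a + b + c + d

def fan_out(e, f, g, h):
    """e: components called by A; f/g: parameters passed from A to
    higher/lower components; h: data elements written by A."""
    return e + f + g + h

# Hypothetical counts for a component A
fi = fan_in(a=3, b=2, c=1, d=4)     # 10
fo = fan_out(e=2, f=1, g=2, h=3)    # 8
print((fi * fo) ** 2)               # IF(A) = 6400
```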
Cyclomatic Complexity

Flow Graph
The control flow of a program can be analysed using a graphical representation known as a flow graph. The flow graph is a directed graph in which nodes are either entire statements or fragments of a statement, and edges represent the flow of control.
Fig.: The basic constructs of the flow graph
Cyclomatic Complexity may be defined as-
• It is a software metric that measures the logical complexity of the program code.
• It counts the number of decisions in the given program code.
• It measures the number of linearly independent paths through the program code.

Cyclomatic Complexity   Meaning
1 – 10                  Structured and well written code; high testability; less cost and effort
10 – 20                 Complex code; medium testability; medium cost and effort
20 – 40                 Very complex code; low testability; high cost and effort
> 40                    Highly complex code; not at all testable; very high cost and effort
Importance of Cyclomatic Complexity-
• It helps in determining the software quality.
• It is an important indicator of program code’s readability,
maintainability and portability.
• It helps the developers and testers to determine independent
path executions.
• It helps to focus more on the uncovered paths.
• It evaluates the risk associated with the application or
program.
• It provides assurance to the developers that all the paths
have been tested at least once.
Properties of Cyclomatic Complexity-
• It is the maximum number of independent paths
through the program code.
• It depends only on the number of decisions in the
program code.
• Insertion or deletion of functional statements from the
code does not affect its cyclomatic complexity.
• It is always greater than or equal to 1.
Calculating Cyclomatic Complexity-
• Cyclomatic complexity is calculated using the control flow representation of the program code.
• In the control flow representation of the program code,
• Nodes represent parts of the code having no branches.
• Edges represent possible control flow transfers during program execution.
There are 3 commonly used methods for calculating the cyclomatic complexity-
Method-01:
Cyclomatic Complexity = Total number of closed regions in the
control flow graph + 1
Method-02:
Cyclomatic Complexity = E – N + 2
Here-
E = Total number of edges in the control flow graph
N = Total number of nodes in the control flow graph
Method-03:
Cyclomatic Complexity = P + 1
Here, P = Total number of predicate nodes contained in the control
flow graph.
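All three methods are one-line formulas; a minimal sketch, using a hypothetical flow graph with 2 closed regions, 8 edges, 7 nodes, and 2 predicate nodes so that the three methods can be seen to agree:

```python
def cc_regions(closed_regions):
    # Method-01: closed regions in the control flow graph + 1
    return closed_regions + 1

def cc_edges_nodes(E, N):
    # Method-02: E - N + 2
    return E - N + 2

def cc_predicates(P):
    # Method-03: predicate nodes + 1
    return P + 1

# Hypothetical flow graph: 2 closed regions, 8 edges, 7 nodes, 2 predicates
print(cc_regions(2), cc_edges_nodes(8, 7), cc_predicates(2))  # 3 3 3
```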
Calculate cyclomatic complexity for the given code-

IF A = 354
THEN IF B > C
THEN A = B
ELSE A = C
END IF
END IF
PRINT A

We draw the control flow graph for the given code. Using this control flow graph, the cyclomatic complexity may be calculated as-

Method-01:
Cyclomatic Complexity = Total number of closed regions in the control flow graph + 1 = 2 + 1 = 3

Method-02:
Cyclomatic Complexity = E – N + 2 = 8 – 7 + 2 = 3

Method-03:
Cyclomatic Complexity = P + 1 = 2 + 1 = 3
Imagine a main program M and two called subroutines A and B having flow graphs as shown in Fig. below.
On a flow graph: arrows called edges represent flow of control; circles called nodes represent one or more actions; areas bounded by edges and nodes are called regions. A predicate node is a node containing a condition.
 Let us denote the total graph above, with 3 connected components, as:
Example:
if A = 10 then
if B > C
A = B
else A = C
endif
endif
print A, B, C

The Control Flow Graph of the above example will be:

Using the above control flow graph, the cyclomatic complexity may be calculated as-
Method-01:
Cyclomatic Complexity = Total number of closed regions in the control flow graph + 1 = 2 + 1 = 3
Method-02:
Cyclomatic Complexity = E – N + 2 = 8 – 7 + 2 = 3
Method-03:
Cyclomatic Complexity = P + 1 = 2 + 1 = 3
Software Implementation Issues
The important software implementation issues are the following:
✓ Operating Environment
✓ Installation of the System
✓ Code Conversion
✓ Change Over
✓ Training
✓ Marketing of the Software
Structured Programming
✓ In structured programming, we sub-divide the whole program
into small modules so that the program becomes easy to
understand.
✓ The purpose of structured programming is to linearize control
flow through a computer program so that the execution
sequence follows the sequence in which the code is written.
✓ This enhances the readability, testability, and modifiability of
the program.
✓ This linear flow of control can be managed by restricting the set of allowed constructs to single-entry, single-exit formats.
Rules of Structured Coding

Rule 1 of Structured Programming:
✓ A code block with a single entry point and a single exit point is structured.
✓ Structured programming is a method of making it evident that the program is correct.
Rule 2 of Structured Programming: Sequence
✓ A sequence of blocks is correct if the
exit conditions of each block match
the entry conditions of the following
block.
✓ Execution enters each block at the
block's entry point and leaves through
the block's exit point.
✓ The whole series can be regarded as a
single block, with an entry point and
an exit point.
Rule 3 of Structured Programming: Alternation

✓ If-then-else is frequently
called alternation (because
there are alternative options).
✓ In structured programming,
each choice is a code block.
✓ The alternation of two code
blocks is structured.
Rule 4 of Structured Programming: Iteration

✓ The iteration of a code block is


structured.
✓ It also has one entry point and
one exit point.
✓ The entry point has conditions
that must be satisfied, and the
exit point has requirements
that will be fulfilled
Rule 5 of Structured Programming: Nested Structures
✓ A structure (of any size) that has a single entry point and a single exit point is equivalent to a code block.
✓ The nested structure of a
code block is structured.
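The rules above can be illustrated with a short function that uses only single-entry, single-exit constructs (the function itself is a hypothetical example):

```python
def classify(n):
    # Sequence: statements execute one after another
    total = 0
    # Iteration: one entry (loop header) and one exit (loop condition fails)
    for digit in str(abs(n)):
        total += int(digit)
    # Alternation: if/else, each branch a block, joining at a single point
    if total % 2 == 0:
        label = "even digit sum"
    else:
        label = "odd digit sum"
    return label  # single exit point of the whole nested structure

print(classify(123))  # digit sum 6 -> "even digit sum"
```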
Object-oriented programming
✓ Object-oriented programming is about creating objects that contain both data and functions.
✓ The main aim of OOP is to bind together the data and
the functions that operate on them so that no other part
of the code can access this data except that function.
✓ Object-oriented programming aims to implement real-
world entities like inheritance, hiding, polymorphism,
etc in programming.
Building blocks of OOP
✓ A class is a template for objects, and an object is an instance of a class.
✓ When the individual objects are created, they inherit all the variables and functions from the class.
✓ Methods are functions that belong to the class.
✓ Encapsulation is used to make sure that "sensitive" data is hidden from users. To achieve this, you must declare class variables/attributes as private (they cannot be accessed from outside the class).
✓ Inheritance means to inherit attributes and methods
from one class to another.
✓ The "inheritance concept“ has two categories:
➢ derived class (child) - the class that inherits from
another class.
➢ base class (parent) - the class being inherited from.
✓ Polymorphism means "many forms", and it occurs
when we have many classes that are related to each
other by inheritance.
✓ Inheritance inherits attributes and methods from another class.
✓ Polymorphism uses those methods to perform different
tasks.
✓ This allows us to perform a single action in different ways.
✓ For example, think of a base class called Animal that has a
method called animalSound(). Derived classes of Animals
could be Pigs, Cats, Dogs, Birds - And they also have their
own implementation of an animal sound (the pig oinks, and
the cat meows, etc.):
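The Animal example described above can be sketched in Python (method and class names adapted to Python conventions):

```python
class Animal:
    def animal_sound(self):
        return "some generic sound"

class Pig(Animal):
    def animal_sound(self):        # overrides the base class method
        return "oink"

class Cat(Animal):
    def animal_sound(self):
        return "meow"

# Polymorphism: one call site, many behaviours -- the same method name
# performs a different task depending on the object's class.
for a in (Pig(), Cat(), Animal()):
    print(a.animal_sound())        # oink / meow / some generic sound
```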
Characteristics of an Object Oriented Programming
language
Coding Standards and Guidelines
✓ The main goal of the coding phase is to code, in a high-level language, from the design document prepared after the design phase, and then to unit test this code.
✓ Good software development organizations want their programmers to adhere to a well-defined, standard style of coding called coding standards.
✓ It is very important for programmers to maintain the coding standards; otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
✓ A coding standard gives a uniform appearance to the
codes written by different engineers.
✓ It improves readability, and maintainability of the code
and it reduces complexity also.
✓ It helps in code reuse and helps to detect error easily.
✓ It promotes sound programming practices and
increases efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals: These rules tell which types of data can be declared global and which cannot.
2. Standard headers for different modules:
For better understanding and maintenance of the code, the headers of different modules should follow a standard format. The header format used in various companies typically contains the following:
✓ Name of the module
✓ Date of module creation
✓ Author of the module
✓ Modification history
✓ Synopsis of the module about what the module does
✓ Different functions supported in the module along with
their input output parameters
✓ Global variables accessed or modified by the module
3. Naming conventions for local variables, global variables,
constants and functions: Some of the naming conventions are
given below:
✓ Meaningful and understandable variables name helps anyone
to understand the reason of using it.
✓ Local variables should be named using camel case lettering
starting with small letter (e.g. localData) whereas Global
variables names should start with a capital letter
(e.g. GlobalData). Constant names should be formed using
capital letters only (e.g. CONSDATA).
✓ It is better to avoid the use of digits in variable names.
✓The name of the function must describe the reason of
using the function clearly and briefly.
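A hedged illustration of these naming conventions (all identifiers below are hypothetical):

```python
MAX_RETRIES = 3            # constant: capital letters only

GlobalCounter = 0          # global variable: starts with a capital letter

def compute_total_price(items):   # function name states its purpose briefly
    localTotal = 0.0       # local variable: camel case, starts lowercase
    for price in items:
        localTotal += price
    return localTotal

print(compute_total_price([1.5, 2.5]))  # 4.0
```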
4.Indentation: Proper indentation is very important to
increase the readability of the code. For making the code
readable, programmers should use White spaces properly.
Some of the spacing conventions are given below:
✓ There must be a space after giving a comma between
two function arguments.
✓ Each nested block should be properly indented and
spaced.
✓ Proper Indentation should be there at the beginning
and at the end of each block in the program.
✓ All braces should start from a new line and the code
following the end of braces also start from a new line.
5. Error return values and exception handling conventions: All functions that encounter an error condition should return either a 0 or a 1, to simplify debugging.
6. Avoid using a coding style that is too difficult to
understand: Code should be easily understandable. The
complex code makes maintenance and debugging difficult
and expensive.
7. Avoid using an identifier for multiple purposes: Each variable should be given a descriptive and meaningful name indicating the reason for using it. This is not possible if an identifier is used for multiple purposes, which can confuse the reader and leads to more difficulty during future enhancements.
8. Code should be well documented:
The code should be properly commented for understanding
easily. Comments regarding the statements increase the
understandability of the code.
9. Length of functions should not be very large:
Lengthy functions are very difficult to understand. That’s
why functions should be small enough to carry out small
work and lengthy functions should be broken into small
ones for completing small tasks.
10. Try not to use GOTO statement: GOTO statement
makes the program unstructured, thus it reduces the
understandability of the program and also debugging
becomes difficult.
Advantages of Coding Guidelines:
✓ Coding guidelines increase the efficiency of the software
and reduces the development time.
✓ Coding guidelines help in detecting errors in the early
phases, so it helps to reduce the extra cost incurred by the
software project.
✓ If coding guidelines are maintained properly, then the
software code increases readability and understandability
thus it reduces the complexity of the code.
✓ It reduces the hidden cost for developing the software.
Internal and External documentation
✓ Internal documentation is written in a program as
comments.
✓ Documentation which focuses on
the information that is used to determine the
software code is known as internal documentation.
✓ It describes the data structures, algorithms, and
control flow in the programs.
Generally, internal documentation comprises the following information:
1. Name, type, and purpose of each variable and data structure used in the code
2. Brief description of algorithms, logic, and error-handling techniques
3. Information about the required input and expected output of the program
4. Assistance on how to test the software
5. Information on upgrades and enhancements to the program
✓ Documentation which focuses on general description of the
software code and is not concerned with its detail is known
as external documentation.
✓ It includes information such as:
✓ function of code
✓ name of the software developer who has written the code
✓ algorithms used in the software code
✓ format of the output produced by the software code
✓ structure charts for providing an outline of the program and
describing the design of the program.
UNIT- 4
Software Testing and Maintenance
UNIT-IV Software Testing and Maintenance: Testing Fundamentals: Error,
Fault and Failure, Test Oracles, Test Cases and Test Criteria, Psychology of
Testing. Testing Objectives and Principles. Approaches to Software Testing:
Black Box and White Box testing. Black Box Testing: Equivalence Class
Partitioning, Boundary Value Analysis, Cause Effect Graphing, Special
Cases. White Box Testing: Mutation Testing, Test Case Generation and Tool
Support. Testing Process: Comparison of Different Techniques, Levels of
Testing, Test Plan, Test Case Specifications, Test Case Execution and
Analysis. Software Maintenance, The Road Ahead.
✓ Software Testing is evaluation of the software against requirements
gathered from users and system specifications.
✓ Testing is conducted at the phase level in software development life cycle
or at module level in program code.
✓ Software testing comprises Verification and Validation:
✓ Testing = Verification + Validation
Targets of the test are -
Error - An actual coding mistake made by a developer. In addition, a difference between the output of the software and the desired output is considered an error.
Fault - When an error exists, a fault occurs. A fault, also known as a bug, is the result of an error and can cause the system to fail.
Failure - Failure is the inability of the system to perform the desired task. A failure occurs when a fault exists in the system.
Error, fault, failure
 Error: it is the developer mistake that produce a fault. Often, it has been
caused by human activities such as the typing errors.
 Fault: (commonly named “bug/defect”) it is a defect in a system. A failure
may be caused by the presence of one or more faults on a given system.
However, the presence of a fault in a system may or may not lead to a
failure, e.g., a system may contain a fault in its code but on a fragment of
code that is never exercised so this kind of fault do not lead to a software
failure.
Error, fault, failure
 Failure: it is an observable incorrect behavior or state of a given system. In this
case, the system displays a behavior that is contrary to its
specifications/requirements. Thus, a failure is tied (only) to system
executions/behaviors and it occurs at runtime when some part of the system enters
an unexpected state.
 an error is a human activity resulting in software containing a fault
 a fault is the manifestation of an error
 a fault may result in a failure
Error, fault, failure

Code (LOC numbered):
1 program double ();
2 var x, y : integer;
3 begin
4   read(x);
5   y := x * x;
6   write(y)
7 end

 Failure: x = 3 gives y = 9 → Failure! This is a failure of the system, since the correct output would be 6.
 Fault: The fault that causes the failure is in line 5: the * operator is used instead of +.
 Error: The error that leads to this fault may be:
• a typing error (the developer has written * instead of +)
• a conceptual error (e.g., the developer doesn't know how to double a number)
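The same fault can be reproduced in Python; note how the fault stays hidden for inputs where x * x happens to equal x + x, which is exactly why a fault may or may not lead to a failure:

```python
def double_faulty(x):
    return x * x        # fault: '*' written instead of '+' (the error)

def double_fixed(x):
    return x + x        # correct doubling

# Failure is observed only when the fault is exercised by a revealing input:
print(double_faulty(3), double_fixed(3))  # 9 6 -> failure for x = 3
print(double_faulty(2), double_fixed(2))  # 4 4 -> fault hidden for x = 2
```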
Manual Vs Automated Testing
Testing can either be done manually or using an automated testing tool:
✓ Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests, and reports the results to the manager. A major portion of testing involves manual testing.
✓ Automated - This testing is a testing procedure done with the aid of automated testing tools. The limitations of manual testing can be overcome using automated test tools.
Testing Approaches
Tests can be conducted based on two approaches –
✓ Functionality testing
✓ Implementation testing
When functionality is being tested without taking the actual implementation
in concern it is known as black-box testing. The other side is known as
white-box testing where not only functionality is tested but the way it is
implemented is also analyzed.
Most Common Software problems
 Incorrect calculation.
 Incorrect data edits & ineffective data edits.
 Incorrect matching and merging of data.
 Data searches that yield incorrect results.
 Incorrect processing of data relationship.
 Incorrect coding / implementation of business rules.
 Inadequate software performance.
Most Common Software problems
 Confusing or misleading data.
 Software usability by end users & Obsolete Software.
 Inconsistent processing.
 Unreliable results or performance.
 Inadequate support of business needs.
 Incorrect or inadequate interfaces with other systems.
 Inadequate performance and security controls.
 Incorrect file handling.
Who does Software Testing
 Test manager
 manage and control a software test project

 supervise test engineers

 define and specify a test plan

 Software Test Engineers and Testers


 define test cases, write test specifications, run tests
Who does Software Testing (cont.…)
 Independent Test Group
 Development Engineers
 Only perform unit tests and integration tests

 Quality Assurance Group and Engineers


 Perform system testing

 Define software testing standards and quality control

process
Objective of a Software Tester
 Find bugs as early as possible and make sure they get fixed.
 To understand the application well.
 Study the functionality in detail to find where the bugs are likely to occur.
 Study the code to ensure that each and every line of code is tested.
A Test Oracle is a mechanism, different from the program itself, that can be used to check the correctness of a program's output on test cases. Conceptually, we can consider testing as a process in which test cases are given to the program under test; the program's output is then compared with the oracle's output to determine whether the program behaves correctly on those test cases.
✓ Testing oracles are required for testing.
✓ Ideally, we want an automated oracle, which always gives the correct answer.
However, often oracles are human beings, who mostly calculate by hand what the
output of the program should be.
✓ The human oracles typically use the program’s specifications to decide what the
correct behaviour of the program should be.
✓ A complete oracle would have three capabilities and would carry them out
perfectly:
✓ A generator, to provide predicted or expected results for each test.
✓ A comparator, to compare predicted and obtained results.
✓ An evaluator, to determine whether the comparison results are sufficiently close to
be a pass.
Test Oracle

Apply input Observe output


Software

Oracle
Validate the observed output against the expected output

Is the observed output the same as the expected output?


Oracle: Example
 A tester often assumes the role of an oracle and thus serves as human oracle.
 How to verify the output of a matrix multiplication?
 Hand calculation: the tester might input two matrices and check if the output
of the program matches the results of hand calculation.
 Oracles can also be programs. For example, one might use matrix multiplication to check whether a matrix inversion program has produced the correct result: A × A⁻¹ = I
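The matrix-inversion oracle can be sketched for 2×2 matrices in plain Python (the function names are ours):

```python
def matmul2(A, B):
    # 2x2 matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def invert2(A):
    # 2x2 matrix inversion via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Oracle: the multiplication program checks the inversion program,
# since A * A^-1 must equal the identity matrix I.
A = [[4.0, 7.0], [2.0, 6.0]]
I = matmul2(A, invert2(A))
ok = all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-9
         for i in range(2) for j in range(2))
print(ok)  # True
```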
Test case
Information to include in a Formal Test case
 Identification and classification:
 Each test case should have a number, title (optional).

 Indicate system, subsystem or module being tested

 Test case importance – indicate this.

 Instructions:
 Tell the tester exactly what to do.

 Tester should not normally have to refer to any other documentation in

order to execute the instructions.


A test case describes an input description and an expected output description. A set of test cases is called a test suite; hence any combination of test cases may form a test suite.
Psychology of Testing
 A program is its programmer’s baby!
 Trying to find errors in one’s own program is like trying to find defects

in one’s own baby.


 It is best to have someone other than the programmer doing the testing.

 Tester must be highly skilled, experienced professional.


 Testing achievements depend a lot on what are the goals.
Myers says
 If your goal is to show absence of errors, you will not discover

many.
 If you are trying to show the program correct, your subconscious

will manufacture safe test cases.


 If your goal is to show presence of errors, you will discover large

percentage of them.
Limitations of Testing

 Testing can be used to show the presence of bugs, but never their absence
 Testing is successful if the program fails
 Testing cannot guarantee the correctness of software but can be effectively
used to find errors (of certain types)
Testing Objectives
 The Major Objectives of Software Testing
 Uncover as many as errors (or bugs) as possible in a given timeline.

 Demonstrate a given software product matching its requirement

specifications.
 Validate the quality of a software testing using the minimum cost and

efforts.
 Generate high quality test cases, perform effective tests, and issue correct

and helpful problem reports.


Testing Objectives
 Major goals
 uncover the errors (defects) in the software, including errors in

 requirements from requirement analysis


 design documented in design specifications
 coding (implementation)
 system resources and system environment
 hardware problems and their interfaces to software
Software Testing Principles
 Principle #1: Complete testing is impossible.
 Principle #2: Software testing is not simple activity.
 Reasons:

 Quality testing requires testers to understand a system/product


completely
 Quality testing needs adequate test set, and efficient testing methods
 A very tight schedule and lack of test tools.
 Principle #3: Testing is risk-based.
 Principle #4: Testing must be planned.
 Principle #5: Testing requires independence (SQA team).
 Principle #6: Quality software testing depends on:
 Good understanding of software products and related domain

application
 Cost-effective testing methodology, coverage, test methods, and tools.

 Good engineers with creativity, and solid software testing experience


Types of Testing
1. Unit Testing
It focuses on the smallest unit of software design. In this we test an individual unit or a group of interrelated units. It is often done by the programmer by using sample inputs and observing the corresponding outputs.
Example:
a) In a program, checking whether a loop, method, or function is working fine
b) Misunderstood or incorrect arithmetic precedence
c) Incorrect initialization
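A minimal unit-test sketch (the function under test is hypothetical; bare assertions stand in for a framework such as unittest or pytest):

```python
def interest(principal, rate, years):
    # unit under test: simple interest
    return principal * rate * years

def test_interest():
    # typical case
    assert abs(interest(1000, 0.05, 2) - 100.0) < 1e-9
    # boundary cases: zero principal, zero rate
    assert interest(0, 0.05, 2) == 0
    assert interest(1000, 0.0, 5) == 0

test_interest()
print("unit tests passed")
```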
2. Integration Testing
The objective is to take unit tested components and build a program structure
that has been dictated by design. Integration testing is testing in which a group
of components are combined to produce output.
Integration testing is of four types: (i) Top down (ii) Bottom up (iii) Sandwich
(iv) Big-Bang
Stubs and Drivers are dummy programs used in integration testing to
facilitate the software testing activity. These programs act as substitutes
for the missing modules in the testing. They do not implement the entire
programming logic of the software module, but they simulate data
communication with the calling module while testing.
Stub: Is called by the Module under Test.
Driver: Calls the Module to be tested.
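The stub/driver roles can be sketched as follows. All names here (`compute_total`, `tax_rate_stub`) are hypothetical, invented for illustration only:

```python
# Hypothetical module under test: computes an order total, but depends on a
# lower-level tax-rate module that has not been written yet.
def compute_total(amount, tax_rate_provider):
    rate = tax_rate_provider(amount)   # call into the (missing) lower module
    return round(amount * (1 + rate), 2)

# STUB: substitute for the missing lower-level module. It is *called by* the
# module under test and returns canned data instead of real logic.
def tax_rate_stub(amount):
    return 0.10   # fixed 10% rate, enough to exercise compute_total

# DRIVER: substitute for the missing higher-level module. It *calls* the
# module under test with sample inputs and checks the observed outputs.
def driver():
    result = compute_total(100.0, tax_rate_stub)
    assert result == 110.0, f"expected 110.0, got {result}"
    return result

driver()
```

In top-down integration the stub stands in for unfinished lower modules; in bottom-up integration the driver stands in for unfinished upper modules.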
Big Bang Integration Testing is an approach in which all software
components (modules) are combined at once to form the complete system. This
combination of modules is then tested as a single entity. With this
approach, the integration process is not executed until all components are
completed.
In Incremental Integration Testing, the developers integrate the modules
one by one, using stubs or drivers to uncover defects.
Top Down Integration Testing is a method in which integration testing takes
place from top to bottom, following the control flow of the software system.
The higher-level modules are tested first, and then the lower-level modules
are tested and integrated in order to check the software functionality.
Stubs are used for testing if some modules are not ready.
Bottom-up Integration Testing is a strategy in which the lower-level modules
are tested first. These tested modules are then used to facilitate the
testing of higher-level modules. The process continues until all modules at
the top level are tested: once the lower-level modules are tested and
integrated, the next level of modules is formed.
Sandwich Testing is the combination of the bottom-up and top-down
approaches, so it uses the advantages of both. Initially it uses stubs and
drivers, where stubs simulate the behaviour of missing components. It is
also known as Hybrid Integration Testing.
How to perform Sandwich Testing: There are 3 simple steps, given below.
1. Test the user interface in isolation using stubs.
2. Test the very lowest-level functions by using drivers.
3. When the complete system is integrated, only the main target (middle)
layer remains for the final test.
3. Regression Testing
✓ Software maintenance is an activity which includes enhancements, error
corrections, optimization and deletion of existing features. These
modifications may cause the system to work incorrectly. Therefore,
Regression Testing becomes necessary.
✓ This testing is done to make sure that new code changes should not have
side effects on the existing functionalities.
✓ It ensures that the old code still works once the latest code changes are
done.
✓ Regression Testing is nothing but a full or partial selection of already
executed test cases which are re-executed to ensure existing functionalities
work fine.
✓ Regression testing differs from re-testing in the following ways:
✓ Regression testing is performed for passed test cases while Retesting is
done only for failed test cases.
✓ Regression testing checks for unexpected side-effects while Re-testing
makes sure that the original fault has been corrected.
✓ Regression Testing doesn’t include defect verification whereas Re-testing
includes defect verification.
✓ Regression testing is known as generic testing whereas Re-testing is
planned testing.
✓ Regression Testing is possible with the use of automation whereas Re-
testing is not possible with automation.
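The idea of re-executing already-passed test cases can be sketched in miniature. The function `format_price` and its test data are hypothetical; the latest change (adding thousands separators) must not break the previously passing cases:

```python
# Hypothetical module under maintenance: the latest change added thousands
# separators to format_price, so the old cases are re-executed (regression)
# alongside the case covering the new behaviour.
def format_price(amount):
    return f"${amount:,.2f}"

regression_suite = [
    (0,      "$0.00"),      # previously passing case, re-executed
    (9.5,    "$9.50"),      # previously passing case, re-executed
    (1234.5, "$1,234.50"),  # case covering the new change
]

failures = [(value, expected, format_price(value))
            for value, expected in regression_suite
            if format_price(value) != expected]
assert not failures, f"regressions detected: {failures}"
print(f"{len(regression_suite)} regression cases passed")
```

Because the suite is just data plus a loop, it is trivial to automate and re-run after every code change, which is why regression testing pairs naturally with automation.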
4. Smoke Testing: Also known as "Build Verification Testing", it is a type
of software testing that comprises a non-exhaustive set of tests aimed at
ensuring that the most important functions work. The result of this testing
is used to decide if a build is stable enough to proceed with further
testing. It can also be used to decide whether to announce a production
release or to revert. Example:
T.ID: 1
  Test Scenario: Valid login credentials
  Description: Test the login functionality of the web application to ensure
  that a registered user is allowed to log in with a username and password.
  Test Steps: 1. Launch the application  2. Navigate to the login page
  3. Enter a valid username  4. Enter a valid password  5. Click on the
  login button
  Expected Result: Login should be successful.
  Actual Result: Logged in as expected.
  Status: Pass

T.ID: 2
  Test Scenario: Adding item functionality
  Description: Able to add an item to the cart.
  Test Steps: 1. Select the categories list  2. Add the item to the cart
  Expected Result: Item should get added to the cart.
  Actual Result: Item is not getting added to the cart.
  Status: Fail

T.ID: 3
  Test Scenario: Sign out functionality
  Description: Check the sign out functionality.
  Test Steps: 1. Select the sign out button
  Expected Result: The user should be able to sign out.
  Actual Result: User is not able to sign out.
  Status: Fail
5. Alpha Testing
Alpha testing is a type of acceptance (validation) testing which is done
before the product is released to customers. It is typically done by QA
people. Example: When software testing is performed internally within the
organization.
6. Beta Testing
The beta test is conducted at one or more customer sites by the end-users of
the software. This version is released to a limited number of users for
testing in a real-time environment. Example: When software testing is
performed by a limited number of external people.
7. System Testing
In system testing, the complete, integrated software is tested to verify
that it works correctly in different environments (e.g., on different
operating systems). It is covered under the black box testing technique: we
focus only on the required inputs and outputs, without looking at the
internal working. It includes security testing, recovery testing, stress
testing, and performance testing.
Example: It includes functional as well as non-functional testing.
8. Stress Testing
✓ It is a type of non-functional testing.
✓ It involves testing beyond normal operational capacity, often to a breaking point,
in order to observe the results.
✓ It is a form of software testing used to determine the stability of a given system.
✓ It puts greater emphasis on robustness, availability, and error handling under a heavy
load, rather than on what would be considered correct behavior under normal
circumstances.
✓ The goal of such tests may be to ensure the software does not crash in conditions of
insufficient computational resources (such as memory, disk space, or network
capacity).
Stress Testing Example
✓ In order to perform stress testing of an e-commerce application,
an extremely large number of visitors hitting the application is simulated
using a stress testing tool (such as those listed below).
✓ The number of visitors being simulated would be exponentially
higher compared to the average number of visitors expected to visit the
website on a day to day basis.
✓ These virtual users are programmed to execute common activities like
viewing products, adding, removing items from cart and purchasing the
product etc.
✓ The number of users is increased suddenly to the point of failure, until the
website crashes and is no longer able to handle additional traffic.
✓ Additional points that are noted include how the website behaves at this
time and whether it recovers gracefully.
✓ The results of the tests are used to identify performance improvement areas,
recovery / failover mechanisms etc.
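The ramp-up described above can be sketched in miniature. This is an illustrative simulation only: the `handle_request` function is hypothetical, and real stress tests use dedicated tools such as those listed next:

```python
import concurrent.futures
import time

def handle_request(_):
    """Hypothetical request handler standing in for the website under test."""
    time.sleep(0.001)  # simulate a small amount of work per request
    return "ok"

# Increase the number of simulated concurrent virtual users step by step and
# record how the system keeps up -- the essence of a stress-test ramp-up.
for users in (10, 50, 100):
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        responses = list(pool.map(handle_request, range(users)))
    elapsed = time.time() - start
    print(f"{users:>3} virtual users: {len(responses)} responses in {elapsed:.3f}s")
```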
Stress testing tools
1. LoadRunner: LoadRunner from HP is a widely used tool to perform stress
testing, and the results provided by LoadRunner are considered a benchmark.
2. JMeter: An open-source tool that is available free of charge. It is a Java
application intended to conduct all performance testing types, including
stress testing.
3. NeoLoad: This tool is used to perform stress testing on web as well as mobile
applications. It has numerous advantages: it supports all major servers
available in the market and can be used to conduct stress testing on ERP,
CRM (Customer Relationship Management), and Business Intelligence
applications.
9. Performance Testing
It is designed to test the run-time performance of software within the context
of an integrated system. It is used to test the speed and effectiveness of a
program. It is also called load testing; in it, we check the performance of
the system under a given load.
Example: Checking the number of processor cycles.
Testing Approaches
 Black box (functional) vs. white box (structural) testing
 Functional testing: generating test cases based on the functionality of the software
 Structural testing: generating test cases based on the structure of the program
 Black box testing and white box testing are synonyms for functional and structural testing, respectively.
 In black box testing, the internal structure of the program is hidden from the testing process.
 In white box testing, the internal structure of the program is taken into account.
Parameter: Definition
  Black Box: A testing approach used to test the software without knowledge
  of the internal structure of the program or application.
  White Box: A testing approach in which the internal structure is known to
  the tester.

Parameter: Alias
  Black Box: Also known as data-driven testing, box testing, and functional
  testing.
  White Box: Also called structural testing, clear box testing, code-based
  testing, or glass box testing.

Parameter: Base of testing
  Black Box: Testing is based on external expectations; the internal
  behavior of the application is unknown.
  White Box: The internal working is known, and the tester can test
  accordingly.

Parameter: Usage
  Black Box: Ideal for higher levels of testing such as system testing and
  acceptance testing.
  White Box: Best suited for lower levels of testing such as unit testing
  and integration testing.

Parameter: Programming knowledge
  Black Box: Not needed to perform black box testing.
  White Box: Required to perform white box testing.

Parameter: Implementation knowledge
  Black Box: Not required for black box testing.
  White Box: A complete understanding of the implementation is needed.

Parameter: Automation
  Black Box: Tester and programmer are dependent on each other, so it is
  tough to automate.
  White Box: Easy to automate.

Parameter: Objective
  Black Box: To check the functionality of the system under test.
  White Box: To check the quality of the code.

Parameter: Basis for test cases
  Black Box: Testing can start after preparing the requirement specification
  document.
  White Box: Testing can start after preparing the detailed design document.

Parameter: Tested by
  Black Box: Performed by the end user, developer, and tester.
  White Box: Usually done by testers and developers.

Parameter: Testing method
  Black Box: Based on trial and error.
  White Box: The data domain and internal boundaries can be tested.

Parameter: Time
  Black Box: Less exhaustive and less time-consuming.
  White Box: An exhaustive and time-consuming method.

Parameter: Algorithm testing
  Black Box: Not the best method for algorithm testing.
  White Box: Best suited for algorithm testing.

Parameter: Code access
  Black Box: Code access is not required.
  White Box: Code access is required; the code could therefore be stolen if
  testing is outsourced.

Parameter: Benefit
  Black Box: Well suited and efficient for large code segments.
  White Box: Allows removing extra lines of code, which can bring in hidden
  defects.

Parameter: Skill level
  Black Box: Low-skilled testers can test the application with no knowledge
  of the implementation, programming language, or operating system.
  White Box: Needs an expert tester with vast experience to perform white
  box testing.

Parameter: Techniques
  Black Box: Equivalence partitioning and boundary value analysis.
  Equivalence partitioning divides input values into valid and invalid
  partitions and selects representative values from each partition as test
  data. Boundary value analysis checks the boundaries of input values.
  White Box: Statement coverage, branch coverage, and path coverage.
  Statement coverage validates whether every line of code is executed at
  least once; branch coverage validates whether each branch is executed at
  least once; path coverage tests all the paths of the program.
Equivalence Partitioning or Equivalence Class Partitioning is a black box
testing technique that can be applied at all levels of software testing:
unit, integration, system, etc.
 In this technique, input data units are divided into equivalent partitions
from which test cases can be derived; this reduces the time required for
testing because of the small number of test cases.
 It divides the input data of the software into different equivalence data classes.
 We can apply this technique wherever there is a range in the input field.
Example 1: Equivalence and Boundary Value
 Let's consider the behavior of an "Order Burger" text box.
 Burger values 1 to 10 are considered valid; a success message is shown.
 Values 11 to 99 are considered invalid for an order, and an error message
will appear: "Only 10 burgers can be ordered".
Here are the test conditions:
 Any number greater than 10 entered in the Order Burger field (say 11) is considered
invalid.
 Any number less than 1, that is 0 or below, is considered invalid.
 Numbers 1 to 10 are considered valid.
 Any 3-digit number, say -100, is invalid.
We cannot test all the possible values, because if we did, the number of
test cases would be more than 100. To address this problem, we use the
equivalence partitioning hypothesis, where we divide the possible input
values into groups (sets), as shown below, within which the system behavior
can be considered the same.
Two steps are required to implement this method:
1. The equivalence classes are identified by taking each input condition and
partitioning it into valid and invalid classes. For example, if an input condition
specifies a range of values from 1 to 999, we identify one valid equivalence
class [1 ≤ item ≤ 999] and two invalid equivalence classes [item < 1] and
[item > 999].
2. Generate the test cases using the equivalence classes identified in the previous
step. This is performed by writing test cases covering all the valid equivalence
classes and invalid classes.
The divided sets are called Equivalence Partitions or Equivalence Classes. We
then pick only one value from each partition for testing. The hypothesis
behind this technique is that if one condition/value in a partition passes,
all others will also pass. Likewise, if one condition in a partition fails,
all other conditions in that partition will fail.
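The two steps above can be sketched for the 1-to-999 range. The validator `accepts_item` is hypothetical, standing in for the program under test; one representative value is chosen per class:

```python
def accepts_item(item):
    """Hypothetical validator for the 1..999 range used in the steps above."""
    return 1 <= item <= 999

# One representative value per equivalence class; by the partitioning
# hypothesis, every other value in the same class should behave the same.
partitions = [
    ("invalid class: item < 1",   -5,   False),
    ("valid class: 1..999",       500,  True),
    ("invalid class: item > 999", 1200, False),
]

for name, value, expected in partitions:
    actual = accepts_item(value)
    assert actual == expected, f"{name} failed for {value}"
    print(f"PASS  {name}: accepts_item({value}) -> {actual}")
```

Three test cases stand in for almost a thousand possible inputs, which is exactly the reduction the technique promises.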
Boundary testing is the process of testing between extreme ends or boundaries
between partitions of the input values.
 These extreme ends, such as start-end, lower-upper, maximum-minimum, and
just inside-just outside values, are called boundary values, and the testing
is called "boundary testing".
 The basic idea in boundary value testing is to select input variable values at their:
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
 Equivalence partitioning and boundary value analysis(BVA) are
closely related and can be used together at all levels of testing.
 In Boundary Value Analysis, we test boundaries between
equivalence partitions.
Why Equivalence & Boundary Analysis Testing: This testing is used to
reduce a very large number of test cases to manageable chunks.
Example 2: Input box should accept the numbers 1 to 10
Here we will see the Boundary Value Test Cases
Test Scenario Description    Expected Outcome
Boundary Value = 0 System should NOT accept
Boundary Value = 1 System should accept
Boundary Value = 2 System should accept
Boundary Value = 9 System should accept
Boundary Value = 10 System should accept
Boundary Value = 11 System should NOT accept
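The boundary picks in the table above can be generated mechanically. The field validator `accepts_order` is a hypothetical stand-in for the Order Burger field:

```python
def accepts_order(burgers):
    """Hypothetical 'Order Burger' field: 1 to 10 burgers is a valid order."""
    return 1 <= burgers <= 10

MIN, MAX = 1, 10
# Classic boundary picks: just below min, min, just above min, a nominal
# value, just below max, max, just above max.
boundary_values = [MIN - 1, MIN, MIN + 1, (MIN + MAX) // 2, MAX - 1, MAX, MAX + 1]

for value in boundary_values:
    expected = MIN <= value <= MAX
    actual = accepts_order(value)
    assert actual == expected
    print(f"value={value:>2}: system should {'accept' if expected else 'NOT accept'}")
```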
Example
Consider a simple program to classify a triangle. Its inputs are a triple of positive
integers (say x, y, z), and the data type for the input parameters ensures that these
will be integers greater than 0 and less than or equal to 100. The program output may
be one of the following words: [Scalene; Isosceles; Equilateral; Not a triangle].
Design the equivalence partitioning test cases.
A triangle is valid if the sum of any two of its sides is greater than the third side.
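One possible solution, sketched in code: the classifier below is a hypothetical implementation (the exercise only specifies its behaviour), and one equivalence-class test case is chosen per output word:

```python
def classify_triangle(x, y, z):
    """One possible implementation of the classifier described above; inputs
    are assumed to already satisfy 1 <= x, y, z <= 100 (enforced by the type)."""
    # a valid triangle requires the sum of any two sides to exceed the third
    if not (x + y > z and y + z > x and x + z > y):
        return "Not a triangle"
    if x == y == z:
        return "Equilateral"
    if x == y or y == z or x == z:
        return "Isosceles"
    return "Scalene"

# One equivalence-class test case per possible output word.
test_cases = [
    ((3, 4, 5),  "Scalene"),        # all sides different, valid triangle
    ((5, 5, 8),  "Isosceles"),      # exactly two sides equal
    ((7, 7, 7),  "Equilateral"),    # all sides equal
    ((1, 2, 10), "Not a triangle"), # violates the triangle inequality
]
for (x, y, z), expected in test_cases:
    assert classify_triangle(x, y, z) == expected
print("all equivalence-class cases pass")
```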