Software Engineering Course File
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
Department of Computer Science & Engineering
CS-403 (CBGS)
Maximum Marks Allotted
Theory: End Sem 70, Mid Sem (MST) 20, Quiz/Assignment 10
Practical: End Sem 30, Lab Work 10, Assignment/Quiz/Term Paper 10
Total Marks: 150
NOTE: MST: A minimum of two mid-semester tests are to be conducted.
Index
S. No. Particulars
1. Vision and Mission of the Institute
2. Vision of the Institute and the Department
3. Mission of the Institute and the Department
4. PEOs and POs
5. Course Objective and Course Outcome
6. Mapping of COs with POs
7. Academic Calendar
8. Course Syllabus and GATE Syllabus
9. Time Table
10. Student List
11. Lecture Plan and Course Plan
12. Assignment Sheets
13. Tutorial Sheets
14. Mid Sem Question Papers
15.* Old End Semester Exam (Final Exam) Question Papers
16.* Question Bank
17.* PPT
18.* Lecture Notes
19. Reference Materials
20. Content beyond syllabus
21. Research Article
22. Results
23. Result Analysis
24. Quality Measurement Sheets (Attainment)
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
Department of Computer Science & Engg.
To motivate and mould students into world class professionals who will excel in their fields and
effectively meet challenges of the dynamic global scenario.
To produce computer professionals for industry, academia and research to meet the requirements of the dynamic global scenario.
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
Working towards being the best by incorporating the principles of total quality management
(TQM) and excellence. Adopting IT based knowledge management to meet global challenges.
To provide high-quality IT-based education and develop professionals in the field of computer science and engineering.
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
Department of Computer Science & Engg.
PEO 3: To prepare the students for higher studies, research and development.
The objective of the course is to teach techniques for building software effectively and to increase proficiency in software engineering. Different problem-solving paradigms will be used to illustrate clever and efficient ways to solve a given problem. In addition, the course covers the analysis, design, development and implementation of software.
CO2 Analyze a problem and identify the computing requirements appropriate for its
solution
CO4 Apply mathematical foundations, algorithmic principles, and computer science theory
to the modeling and design of computer-based systems in a way that demonstrates
comprehension of the trade-offs involved in design choices
CO vs. PO Mapping
PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
CO1 3 2 - - - - - - 1 - - 2
CO2 3 2 2 - - 1 - - 1 - - 2
CO3 3 2 2 - - 1 - - 1 - - 2
CO4 3 2 2 - - 1 - - 1 - - 2
CO5 3 2 2 - - 1 - - 1 - - 2
AVG 3 1.8 2 - - 0.8 - - 1 - - 2
Course  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
CS-403  3   1.8 2   -   -   0.8 -   -   1   -    -    2
Academic Calendar
Syllabus
CS-403
RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA, BHOPAL
New Scheme Based On AICTE Flexible Curricula
Computer Science and Engineering, IV-Semester
PREREQUISITE:-
The students should have at least one year of experience in programming in a high-level language and with databases. In addition, familiarity with the software development life cycle will be useful in studying this subject.
Unit I:
The Software Product and Software Process, Software Product and Process Characteristics, Software Process Models: Linear Sequential Model, Prototyping Model, RAD Model, Evolutionary Process Models like Incremental Model, Spiral Model, Component Assembly Model, RUP and Agile processes. Software Process customization and improvement, CMM, Product and Process Metrics.
Unit II:
Requirement Elicitation, Analysis, and Specification, Functional and Non-functional requirements, Requirement Sources and Elicitation Techniques, Analysis Modeling for Function-oriented and Object-oriented software development, Use case Modeling, System and Software Requirement Specifications, Requirement Validation, Traceability.
Unit III:
Software Design
The Software Design Process, Design Concepts and Principles, Software Modeling and UML, Architectural Design, Architectural Views and Styles, User Interface Design, Function-oriented Design, SA/SD Component Based Design, Design Metrics.
Unit IV:
Software Analysis and Testing, Software Static and Dynamic Analysis, Code Inspections, Software Testing Fundamentals, Software Test Process, Testing Levels, Test Criteria, Test Case Design, Test Oracles, Test Techniques, Black-Box Testing, White-Box Unit Testing and Unit Testing Frameworks, Integration Testing, System Testing and other Specialized Testing, Test Plan, Test Metrics, Testing Tools. Introduction to Object-oriented analysis, design and comparison with structured Software Engineering.
Unit V:
Need and Types of Maintenance, Software Configuration Management (SCM), Software Change Management, Version Control, Change Control and Reporting, Program Comprehension Techniques, Re-engineering, Reverse Engineering, Tool Support. Project Management Concepts, Feasibility Analysis, Project and Process Planning, Resource Allocation, Software Effort, Schedule and Cost Estimation, Project Scheduling and Tracking, Risk Assessment and Mitigation, Software Quality Assurance (SQA), Project Plan, Project Metrics.
Practical and Lab Work: Lab work should include a running case study problem for which the different deliverables at the end of each phase of a software development life cycle are to be developed. This will include modeling the requirements, architecture and detailed design. Subsequently the design models will be coded and tested. For modeling, tools like Rational Rose can be used; for coding and testing, IDEs like Eclipse, NetBeans and Visual Studio can be used.
References
1. Pankaj Jalote, "An Integrated Approach to Software Engineering", Narosa Publishing House, 2005.
2. Rajib Mall, "Fundamentals of Software Engineering", Second Edition, PHI Learning.
3. R. S. Pressman, "Software Engineering: A Practitioner's Approach", Sixth Edition, 2006, McGraw-Hill.
4. Sommerville, "Software Engineering", Pearson Education.
5. Richard H. Thayer, "Software Engineering & Project Management", Wiley India.
6. Waman S. Jawadekar, "Software Engineering", TMH.
7. Bob Hughes, M. Cotterell, Rajib Mall, "Software Project Management", McGraw-Hill.
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
Assignment-I
Semester: IV
Subject Name: Software Engineering    Subject Code: CS-403
S. No.  Question  (Course Outcome)
1. Explain the meaning of software danger and its importance in the context of software engineering. (CO1)
2. Describe the importance of software engineering. What steps should be taken in the process of developing a software system? (CO1)
3. Explain the principles which play a major role in the development of software. (CO1)
4. Explain the design principles of software engineering. (CO1)
5. Describe the components and qualities which are necessary for a software specification document. (CO1)
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
Assignment-II
Semester: IV
Subject Name: Software Engineering    Subject Code: CS-403
S. No.  Question  (Course Outcome)
1. What are the benefits of metrics in software engineering? (CO2)
2. Explain the term configuration management. (CO2)
3. Explain the concept of a data flow diagram. (CO2)
4. Write a short note on the review process. (CO2)
5. Define the blueprint methodology. Give your views about what is more important: the product or the process. (CO2)
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
Assignment-III
Semester: IV
Subject Name: Software Engineering    Subject Code: CS-403
S. No.  Question  (Course Outcome)
Assignment-IV
Semester: IV
Subject Name: Software Engineering    Subject Code: CS-403
S. No.  Question  (Course Outcome)
1. Define the meaning of quality assurance. Explain the role of testing in quality assurance. (CO4)
2. What are the differences between alpha testing and beta testing? (CO4)
3. What are the differences between white-box testing and black-box testing techniques? (CO4)
4. Explain software reliability and describe how software and hardware reliability are related to each other. (CO4)
5. Write short notes on software failure, black-box testing, white-box testing and stress testing. (CO4)
Assignment-V
S. No.  Question  (Course Outcome)
1. What are test cases in software engineering? (CO5)
2. Explain the generic views of software engineering. (CO5)
3. What is a coding standard? Explain the objectives of (a) coding and (b) structured programming. (CO5)
4. Explain the waterfall model in detail. Give a description of the prototyping model. (CO5)
5. What is the process of implementation of a software system? Explain the term software maintenance. (CO5)
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
TUTORIAL-I
Semester: IV Session: Jan-June 2020
Subject: Software Engineering Sub. Code: CS-403
Max. Marks: 20
NOTE: Attempt all questions.
Q. No.  Question  (Bloom's Level, CO)
1. Determine what kind of real-life situation is best suited for the "waterfall model" of developing software. (L3, CO1)
2. If the development team has less experience on similar projects, compare the different life cycle models and choose the one which suits that situation best. (L4, CO1)
Bloom's Taxonomy Levels: R (L1): Remember, U (L2): Understand, A (L3): Apply, An (L4): Analyze, E (L5): Evaluate, C (L6): Create
Sagar Institute of Research and Technology
Department of Computer Science Engineering
TUTORIAL-II
Semester: IV Session: Jan-June 2020
Subject: Software Engineering Sub. Code: CS-403
Max. Marks: 20
NOTE: Attempt all questions.
Q. No.  Question  (Bloom's Level, CO)
1. Explain what a use case is. Why are use case scenarios considered important during requirement gathering? Discuss with an example. (L4, CO2)
2. Prepare an SRS for a Students Attendance System. (L3, CO2)
Bloom's Taxonomy Levels: R (L1): Remember, U (L2): Understand, A (L3): Apply, An (L4): Analyze, E (L5): Evaluate, C (L6): Create
Sagar Institute of Research and Technology
Department of Computer Science Engineering
TUTORIAL-III
Semester: IV Session: Jan-June 2020
Subject: Software Engineering Sub. Code: CS-403
Max. Marks: 20
NOTE: Attempt all questions.
Q. No.  Question  (Bloom's Level, CO)
1. Define design methodologies. How are they different and how are they applied in designing software? (L1, CO3)
2. Express how cohesion and coupling are related, giving an example where cohesion increases and coupling decreases. (L2, CO3)
Bloom's Taxonomy Levels: R (L1): Remember, U (L2): Understand, A (L3): Apply, An (L4): Analyze, E (L5): Evaluate, C (L6): Create
Sagar Institute of Research and Technology
Department of Computer Science Engineering
TUTORIAL-IV
Semester: IV Session: Jan-June 2020
Subject: Software Engineering Sub. Code: CS-403
Max. Marks: 20
NOTE: Attempt all questions.
Consider a ticketing system where children under age 6 are allowed to travel for free, women are given a 30% discount on the ticket, while other adults need to pay Rs 20.
Q. No.  Question  (Bloom's Level, CO)
1. Illustrate the testing principles the software engineer must apply while performing software testing. (L3, CO4)
2. Distinguish between verification and validation with examples. (L4, CO4)
Bloom's Taxonomy Levels: R (L1): Remember, U (L2): Understand, A (L3): Apply, An (L4): Analyze, E (L5): Evaluate, C (L6): Create
Sagar Institute of Research and Technology
Department of Computer Science Engineering
TUTORIAL-V
Semester: IV Session: Jan-June 2020
Subject: Software Engineering Sub. Code: CS-403
Max. Marks: 20
NOTE: Attempt all questions.
Q. No.  Question  (Bloom's Level, CO)
1. Consider an office automation system with 4 major modules: (L3, CO5)
   Data entry = 0.6 KLOC
   Data update = 0.6 KLOC
   Query = 0.8 KLOC
   Reports = 1.0 KLOC
   Cost driver attributes:
   Complexity = High = 1.15
   Storage = High = 1.06
   Capability = Low = 1.17
   Experience = Low = 1.13
   Programming capability = Low = 1.17
   Use the COCOMO model to estimate the total effort and development time. (An illustrative estimation sketch is given after this tutorial.)
Bloom's Taxonomy Levels: R (L1): Remember, U (L2): Understand, A (L3): Apply, An (L4): Analyze, E (L5): Evaluate, C (L6): Create
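As a worked illustration of how such a question can be approached, the following Python sketch applies the basic COCOMO effort and schedule equations, E = a*(KLOC)^b person-months and D = c*E^d months, and then scales the nominal effort by the product of the cost-driver multipliers listed above. The organic-mode constants (a = 2.4, b = 1.05, c = 2.5, d = 0.38) are an assumption made only for illustration; the module sizes and multipliers are taken as given in the tutorial.

# Illustrative basic/intermediate COCOMO estimate (organic mode assumed).
def cocomo_estimate(kloc_modules, multipliers, a=2.4, b=1.05, c=2.5, d=0.38):
    size = sum(kloc_modules.values())              # total size in KLOC
    nominal_effort = a * size ** b                 # nominal effort (person-months)
    eaf = 1.0
    for m in multipliers.values():                 # effort adjustment factor
        eaf *= m
    effort = nominal_effort * eaf                  # adjusted effort
    duration = c * effort ** d                     # development time (months)
    return size, effort, duration

modules = {"data entry": 0.6, "data update": 0.6, "query": 0.8, "reports": 1.0}
drivers = {"complexity": 1.15, "storage": 1.06, "capability": 1.17,
           "experience": 1.13, "programming capability": 1.17}
size, effort, months = cocomo_estimate(modules, drivers)
print(f"Size = {size:.1f} KLOC, Effort = {effort:.2f} PM, Time = {months:.2f} months")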
SAGAR INSTITUTE OF RESEARCH TECHNOLOGY & SCIENCE, Bhopal
UNIT 1
Software: -
Software is nothing but a collection of computer programs and related documents that are planned to provide desired features, functionality and better performance.
Software is more than just program code. A program is an executable code, which serves some computational purpose. Software is considered to be a collection of executable programming code, associated libraries and documentation. Software, when made for a specific requirement, is called a software product.
Engineering on the other hand, is all about developing products, using well-defined, scientific principles and
methods.
Characteristics of software: -
1. Software is developed or engineered; it is not manufactured in the classical sense:
Although some similarities exist between software development and hardware manufacturing, the two activities are fundamentally different.
In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that do not arise (or are easily corrected) for software.
2. Software doesn’t “wear out.”
Hardware components suffer from the growing effects of dust, vibration, abuse, temperature extremes,
and many other environmental maladies. Stated simply, the hardware begins to wear out.
Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory,
therefore, the failure rate curve for software should take the form of the “idealized curve”.
When a hardware component wears out, it is replaced by a spare part.
There are no software spare parts.
Every software failure indicates an error in design or in the process through which design was translated
into machine executable code. Therefore, the software maintenance tasks that accommodate requests for
change involve considerably more complexity than hardware maintenance.
However, the implication is clear—software doesn’t wear out. But it does deteriorate.
Figure 1.1: Hardware Failure Curve; Figure 1.2: Software Failure Curve
3. Although the industry is moving toward component-based construction, most software continues to be
custom built.
A software component should be designed and implemented so that it can be reused in many different programs.
Modern reusable components encapsulate both data and the processing that is applied to the data, enabling the software engineer to create new applications from reusable parts.
In the hardware world, component reuse is a natural part of the engineering process
Transitional: -
This aspect is important when the software is moved from one platform to another:
Portability
Interoperability
Reusability
Adaptability
Maintenance: -
This aspect describes how well the software can maintain itself in an ever-changing environment:
Modularity
Maintainability
Flexibility
Scalability
In short, Software engineering is a branch of computer science, which uses well-defined engineering concepts
required to produce efficient, durable, scalable, in-budget and on-time software products.
Software Engineering: The application of a systematic, disciplined, quantifiable approach to the development,
operation and maintenance of software; that is, the application of engineering to software.
Software products are classified into 2 classes:
1. Generic software: Developed as a solution whose requirements are very common, fairly stable and well understood by the software engineer.
2. Custom software: Developed for a single customer according to their specification.
A Layered Technology:
A Quality Focus:
Every organization rests on its commitment to quality.
Total quality management, Six Sigma, or a similar continuous improvement culture ultimately leads to the development of increasingly more effective approaches to software engineering.
The foundation that supports software engineering is a quality focus.
Process:
The software engineering process is the glue that holds the technology layers together and enables
rational and timely development of computer software.
Process defines a framework that must be established for effective delivery of software engineering
technology.
The software process forms the basis for management control of software projects and establishes the
context in which technical methods are applied, work products are produced, milestones are established,
quality is ensured, and change is properly managed.
Methods:
Software engineering methods provide the technical aspects for building software.
Methods encompass a broad array of tasks that include communication, requirements analysis, design
modeling, program construction, testing, and support.
Software engineering methods rely on the set of modeling activities and other descriptive techniques.
Tools:
Software engineering tools provide automated or semi-automated support for the process and the methods.
When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called CASE (Computer-Aided Software Engineering), is established.
Software Product: -
For a software product, the user interface must be carefully designed and implemented because the developers of the product and its users are totally different people. In the case of a program, very little documentation is expected, but a software product must be well documented. A program can be developed according to the programmer's individual style of development, but a software product must be developed using accepted software engineering principles.
Various Operational Characteristics of software are:
Correctness: The software which we are making should meet all the specifications stated by the customer.
Usability/Learnability: The amount of effort or time required to learn how to use the software should be less. This makes the software user-friendly even for IT-illiterate people.
Integrity: Just like medicines have side-effects, software may have a side-effect, i.e. it may affect the working of another application. Quality software should not have such side effects.
Reliability: The software product should not have any defects. Not only this, it shouldn't fail while
execution.
Efficiency: This characteristic relates to the way software uses the available resources. The software should
make effective use of the storage space and execute command as per desired timing requirements.
Security: With the increase in security threats nowadays, this factor is gaining importance. The software
shouldn't have ill effects on data / hardware. Proper measures should be taken to keep data secure from external
threats.
Safety: The software should not be hazardous to the environment/life.
PROTOTYPING MODEL:
A prototype is a toy implementation of the system. A prototype usually exhibits limited functional capabilities,
low reliability, and inefficient performance compared to the actual software. A prototype is usually built using
several shortcuts. The shortcuts might involve using inefficient, inaccurate, or dummy functions. The shortcut
implementation of a function, for example, may produce the desired results by using a table look-up instead of
performing the actual computations. A prototype usually turns out to be a very crude version of the actual
system.
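To make the idea of a shortcut implementation concrete, here is a minimal, hypothetical Python sketch (the function name and table values are invented for illustration): the prototype answers fare queries from a small hard-coded look-up table instead of performing the real computation that the production system would need.

# Prototype shortcut: a hard-coded look-up table stands in for the real computation.
FARE_TABLE = {("city-a", "city-b"): 20.0, ("city-a", "city-c"): 35.0}  # dummy data

def prototype_fare(source, destination):
    # The real system would compute distance, tariffs, discounts, taxes, etc.
    # The prototype simply looks the answer up, which is enough to demo the flow.
    return FARE_TABLE.get((source, destination), 25.0)   # crude default fare

print(prototype_fare("city-a", "city-b"))   # 20.0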
RAD MODEL:
Business modeling: The information flow among business functions is modeled in a way that answers the
following questions: What information drives the business process? What information is generated? Who
generates it? Where does the information go? Who processes it?
Data modeling: The information flow defined as part of the business modeling phase is refined into a set of
data objects that are needed to support the business. The characteristics (called attributes) of each object are
identified and the relationships between these objects defined.
Process modeling: The data objects defined in the data modeling phase are transformed to achieve the
information flow necessary to implement a business function. Processing descriptions are created for adding,
modifying, deleting, or retrieving a data object.
Application generation: RAD assumes the use of fourth generation techniques. Rather than creating software
using conventional third generation programming languages the RAD process works to reuse existing program
components (when possible) or create reusable components (when necessary). In all cases, automated tools are
used to facilitate construction of the software.
Testing and turnover: Since the RAD process emphasizes reuse, many of the program components have
already been tested. This reduces overall testing time. However, new components must be tested and all
interfaces must be fully exercised.
INCREMENTAL MODEL:
The incremental model combines the elements of waterfall model and they are applied in an iterative
fashion.
The first increment in this model is generally a core product.
Each increment builds the product and submits it to the customer for any suggested modifications.
The next increment implements the customer's suggestions and adds additional requirements to the previous increment.
This process is repeated until the product is finished.
For example, the word-processing software is developed using the incremental model.
Disadvantages of the Agile model:
There is lack of emphasis on necessary designing and documentation.
The project can easily get taken off track if the customer representative is not clear about the final outcome they want.
Only senior programmers are capable of taking the kind of decisions required during the development
process. Hence it has no place for new programmers, unless combined with experienced resources.
Extreme Programming
Extreme Programming (XP) is an agile software development framework that aims to produce higher quality
software, and higher quality of life for the development team. XP is the most specific of the agile frameworks
regarding appropriate engineering practices for software development.
Extreme Programming is based on the following values-
Communication
Simplicity
Feedback
Courage
Respect
Extreme Programming takes the effective principles and practices to extreme levels.
Code reviews are effective as the code is reviewed all the time.
Testing is effective as there is continuous regression and testing.
Design is effective as everybody needs to do refactoring daily.
Integration testing is effective as teams integrate and test several times a day.
Short iterations are effective because the planning game drives release planning and iteration planning.
Capability Maturity Model (CMM):
The Software Engineering Institute (SEI) has developed a capability maturity model (CMM) that defines key activities required at different levels of process maturity. The SEI approach provides a measure of the global effectiveness of a company's software engineering practices and establishes five process maturity levels that are defined in the following manner:
Level 1: Initial. The software process is characterized as ad hoc and occasionally even chaotic. Few processes
are defined, and success depends on individual effort.
Level 2: Repeatable. Basic project management processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar
applications.
Level 3: Defined. The software process for both management and engineering activities is documented,
standardized, and integrated into an organization wide software process. All projects use a documented and
approved version of the organization's process for developing and supporting software. This level includes all
characteristics defined for level 2.
Level 4: Managed. Detailed measures of the software process and product quality are collected. Both the
software process and products are quantitatively understood and controlled using detailed measures. This level
includes all characteristics defined for level 3.
Level 5: Optimizing. Continuous process improvement is enabled by quantitative feedback from the process
and from testing innovative ideas and technologies. This level includes all characteristics defined for level 4.
The five levels defined by the SEI were derived as a consequence of evaluating responses to the SEI assessment
questionnaire that is based on the CMM. The results of the questionnaire are distilled to a single numerical
grade that provides an indication of an organization's process maturity.
Key process areas at Level 3 (Defined) include:
Software product engineering
Integrated software management
Training program
Organization process definition
Organization process focus
People: -
The primary element of any project is the people. People gather requirements, people interview users (people),
people design software, and people write software for people. No people -- no software. I'll leave the discussion
of people to the other articles in this special issue, except for one comment. The best thing that can happen to
any software project is to have people who know what they are doing and have the courage and self-discipline
to do it. Knowledgeable people do what is right and avoid what is wrong. Courageous people tell the truth when
others want to hear something else. Disciplined people work through projects and don't cut corners. Find people
who know the product and can work in the process.
Process: -
Process is how we go from the beginning to the end of a project. All projects use a process. Many project
managers, however, do not choose a process based on the people and product at hand. They simply use the same
process they've always used or misused. Let's focus on two points regarding process: (1) process improvement
and (2) using the right process for the people and product at hand.
Product: -
The product is the result of a project. The desired product satisfies the customers and keeps them coming back
for more. Sometimes, however, the actual product is something less. The product pays the bills and ultimately
allows people to work together in a process and build software. Always keep the product in focus.
Process sits at the center of a triangle connecting three factors that have a profound influence on software quality and organizational performance:
The skill and motivation of the people is the most influential factor in quality and performance.
The complexity of the product has an impact on quality and team performance.
The technology (the software engineering methods) used. The process triangle exists within a circle of environmental conditions that include the development environment, business conditions and customer characteristics.
UNIT 2
The process to gather the software requirements from client, analyze and document them is known as
requirement engineering.
The goal of requirement engineering is to develop and maintain a sophisticated and descriptive 'System Requirements Specification' document.
Types of Requirements: -
User Requirements: A collection of statements in natural language describing the services the system provides and its operational limitations. It is written for the customer.
System Requirements: A structured document that gives a detailed description of the system services. It is written as a contract between client and contractor.
FUNCTIONAL REQUIREMENTS:
These describe all the required functionality or system services. The customer should provide a statement of service; it should be clear how the system reacts to particular inputs and how it behaves in particular situations. Functional requirements depend heavily upon the type of software, the expected users and the type of system in which the software is used. They describe system services in detail.
NON-FUNCTIONAL REQUIREMENTS:
Requirements which are not related to the functional aspect of the software fall into this category. They are implicit or expected characteristics of the software which users assume. Non-functional requirements can be more critical than functional requirements: if the non-functional requirements are not met, the complete system may be of no use.
Some typical non-functional requirements are:
Product requirement-
Elicitation techniques: -
When the requirements sources have been identified the requirements, engineer can start eliciting requirements
from them. It also means requirement discovery. This subtopic concentrates on techniques for getting human
stakeholders to articulate their requirements. This is a very difficult area and the requirements engineer needs
to be sensitized to the fact that (for example) users may have difficulty describing their tasks, may leave
important information unstated, or may be unwilling or unable to cooperate. It is particularly important to
understand that elicitation is not a passive activity and that even if cooperative and articulate stakeholders are
available, the requirements engineer has to work hard to elicit the right information. A number of techniques
will be covered, but the principal ones are:
Interviews:-Interviews are a ‘traditional’ means of eliciting requirements. It is important to understand the
advantages and limitations of interviews and how they should be conducted.
Scenarios: - Scenarios are valuable for providing context to the elicitation of users’ requirements. They allow
the requirements engineer to provide a framework for questions about users’ tasks by permitting ‘what if?’
and ‘how is this done?’ questions to be asked. (Conceptual modeling) because recent modeling notations have
attempted to integrate scenario notations with object-oriented analysis techniques.
Prototypes: -Prototypes are a valuable tool for clarifying unclear requirements. They can act in a similar way
to scenarios by providing a context within which users better understand what information they need to
provide. There is a wide range of prototyping techniques, which range from paper mock-ups of screen designs
to beta-test versions of software products. There is a strong overlap with the use of prototypes for
requirements validation.
Facilitated meetings: -The purpose of these is to try to achieve a summative effect whereby a group of
people can bring more insight to their requirements than by working individually. They can brainstorm and
refine ideas that may be difficult to surface using (e.g.) interviews.
Observation: -The importance of systems’ context within the organizational environment has led to the
adaptation of observational techniques for requirements elicitation. The requirements engineer learns about
users’ tasks by immersing themselves in the environment and observing how users interact with their systems
and each other. These techniques are relatively new and expensive but are instructive because they illustrate
that many user tasks and business processes are too subtle and complex for their actors to describe easily.
Figure: The requirements elicitation and analysis cycle — 1. Requirements discovery, 2. Requirements classification and organization, 3. Requirements prioritization and negotiation, 4. Requirements specification.
Analysis model: -
The analysis model must achieve three primary objectives:
To describe what the customer requires (analysis);
To establish a basis for the creation of the software design (a combination of text and diagrams is used to represent the software requirements);
To define a set of requirements that can be validated once the software is built.
The elements of analysis model: -
At the core of the model lies the data dictionary—a repository that contains descriptions of all data objects
consumed or produced by the software. Three different diagrams surround the core. The entity relation diagram
(ERD) depicts relationships between data objects. The ERD is the notation that is used to conduct the data
modeling activity. The attributes of each data object noted in the ERD can be described using a data object
description.
The data flow diagram (DFD) serves two purposes: (1) to provide an indication of how data are transformed as
they move through the system and (2) to depict the functions (and sub functions) that transform the data flow.
The DFD provides additional information that is used during the analysis of the information domain and serves
as a basis for the modeling of function. A description of each function presented in the DFD is contained in a
process specification (PSPEC).
The state transition diagram (STD) indicates how the system behaves as a consequence of external events. To
accomplish this, the STD represents the various modes of behavior (called states) of the system and the manner
in which transitions are made from state to state. The STD serves as the basis for behavioral modeling.
Additional information about the control flow is contained in the control specification (CSPEC). A process specification describes each function in the DFD, and a data object description describes each data object used.
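As a small, hypothetical illustration of the kind of behavior an STD captures, the following Python sketch encodes states and transitions of a simple device as a transition table; the state and event names are invented for illustration.

# Hypothetical state transition table: (current_state, event) -> next_state
TRANSITIONS = {
    ("idle", "start"): "copying",
    ("copying", "paper_jam"): "jammed",
    ("copying", "done"): "idle",
    ("jammed", "jam_cleared"): "idle",
}

def next_state(state, event):
    # Unknown events leave the system in its current state.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "paper_jam", "jam_cleared"]:
    state = next_state(state, event)
print(state)   # idle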
Figure: The structure of the analysis model — the data dictionary at the core, surrounded by the entity relationship diagram (with data object descriptions), the data flow diagram (with process specifications) and the state transition diagram (with the control specification).
Levels of DFD
Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which depicts the entire
information system as one diagram concealing all the underlying details. Level 0 DFDs are also known as
context level DFDs.
Figure: Level 0 (context-level) DFD of a Library Management System — the Library Management System process exchanges data (user registration, book request/return, viewing book details) with the User/Student and Administrator external entities and reads and writes the Library Database.
Level 1 - The Level 0 DFD is broken down into more specific Level 1 DFDs. A Level 1 DFD depicts the basic modules in the system and the flow of data among the various modules. Level 1 DFDs also mention basic processes and sources of information.
Figure: Level 1 DFD fragment showing modules such as account, finance and verification.
Behavior Modeling:
Structure Charts:
Structure chart is a chart derived from Data Flow Diagram. It represents the system in more detail than DFD. It
breaks down the entire system into lowest functional modules, describes functions and sub-functions of each
module of the system to a greater detail than DFD.
A structure chart represents the software architecture, i.e. the various modules making up the system, the
dependency (which module calls which other modules), and the parameters that are passed among the different
modules. The basic building blocks which are used to design structure charts are the following:
Rectangular boxes: Represents a module.
Module invocation arrows: Control is passed from one module to another module in the direction of
the connecting arrow.
Data flow arrows: Arrows are annotated with data name; named data passes from one module to
another module in the direction of the arrow.
Library modules: Represented by a rectangle with double edges.
Selection: Represented by a diamond symbol.
Repetition: Represented by a loop around the control flow arrow.
Transform Analysis: -
Transform analysis identifies the primary functional components (modules) and the high-level inputs and
outputs for these components. The first step in transform analysis is to divide the DFD into 3 types of parts:
Input
Logical processing
Output
The input portion of the DFD includes processes that transform input data from physical (e.g. character from
terminal) to logical forms (e.g. internal tables, lists etc.). Each input portion is called an afferent branch. The
output portion of a DFD transforms output data from logical to physical form. Each output portion is called an
efferent branch. The remaining portion of a DFD is called the central transform.
Example: Structure chart for the RMS software
For this example, the context diagram was drawn earlier.
UML Diagrams:
UML can be used to construct nine different types of diagrams to capture five different views of a system. Just
as a building can be modeled from several views (or perspectives) such as ventilation perspective, electrical
perspective, lighting perspective, heating perspective, etc.; the different UML diagrams provide different
perspectives of the software system to be developed and facilitate a comprehensive understanding of the system.
Such models can be refined to get the actual implementation of the system.
The UML diagrams can capture the following five views of a system:
User’s view
Structural view
Behavioral view
Implementation view
Environmental view
User’s view: This view defines the functionalities (facilities) made available by the system to its users. The
users’ view captures the external user’s view of the system in terms of the functionalities offered by the system.
The user’s view is a black-box view of the system where the internal structure, the dynamic behavior of
different system components, the implementation etc. are not visible.
Structural view: The structural view defines the kinds of objects (classes) important to the understanding of
the working of a system and to its implementation. It also captures the relationships among the classes (objects).
The structural model is also called the static model, since the structure of a system does not change with time.
Behavioral view: The behavioral view captures how objects interact with each other to realize the system
behavior. The system behavior captures the time-dependent (dynamic) behavior of the system.
Implementation view: This view captures the important components of the system and their dependencies.
Environmental view: This view models how the different components are implemented on different pieces of
hardware.
USE-CASE MODELING:
Use case modeling was originally developed by Jacobson et al. (1993) in the 1990s and was incorporated into
the first release of the UML (Rumbaugh et al., 1999). Use case modeling is widely used to support
requirements elicitation. A use case can be taken as a simple scenario that describes what a user expects from a
system.
Use-case Model: -
The use case model for any system consists of a set of “use cases”. Intuitively, use cases represent the different
ways in which a system can be used by the users. A simple way to find all the use cases of a system is to ask the
question: “What the users can do using the system?”
Thus, for the Library Information System (LIS), the use cases could be:
issue-book
query-book
return-book
create-member
add-book, etc.
Figure 2.5: Use Case Diagram — the actors User and Librarian interact with use cases such as Login, Renew book, Search book, Check availability, Reserve book, Update book record, Check account and Return book.
Use cases correspond to the high-level functional requirements. The use cases partition the system behavior into transactions, such that each transaction performs some useful action from the user's point of view. Completing a transaction may involve either a single message or multiple message exchanges between the user and the system.
Text Description: -
Each ellipse on the use case diagram should be accompanied by a text description. The text description should
define the details of the interaction between the user and the computer and other aspects of the use case. It
should include all the behavior associated with the use case in terms of the mainline sequence, different
variations to the normal behavior, the system responses associated with the use case, the exceptional conditions
that may occur in the behavior, etc.
Contact persons: This section lists the personnel of the client organization with whom the use case was
discussed, date and time of the meeting, etc.
Actors: In addition to identifying the actors, some information about actors using this use case which may help
the implementation of the use case may be recorded.
Pre-condition: The preconditions would describe the state of the system before the use case execution starts.
Post-condition: This captures the state of the system after the use case has successfully completed.
Non-functional requirements: This could contain the important constraints for the design and implementation,
such as platform and environment conditions, qualitative statements, response time requirements, etc.
Exceptions, error situations: This contains only the domain-related errors such as lack of user’s access rights,
invalid entry in the input fields, etc. Obviously, errors that are not domain related, such as software errors, need
not be discussed here.
Sample dialogs: These serve as examples illustrating the use case.
Specific user interface requirements: These contain specific requirements for the user interface of the use
case. For example, it may contain forms to be used, screen shots, interaction style, etc.
Document references: This part contains references to specific domain related documents which may be useful
to understand the system operation.
Characteristics of SRS:
Correct: Requirements must be correctly stated and realistic in nature.
Unambiguous: A transparent and plain SRS must be written.
Complete: To make the SRS complete, it should specify everything the software is required to do.
Consistent: If there are no conflicts among the specified requirements, the SRS is said to be consistent.
Stability: The SRS must contain all the essential requirements. Each requirement must be clear and explicit.
Verifiable: The SRS should be written in such a manner that every requirement specified within it can be verified against the delivered software.
Modifiable: It can easily be modified according to user requirements.
Traceable: If the origin of each requirement is properly recorded, the requirements are called traceable.
REQUIREMENTS VALIDATION:
The work products produced as a consequence of requirements engineering are assessed for quality during a
validation step. Requirements validation examines the specification to ensure that all system requirements have
been stated unambiguously; that inconsistencies, omissions, and errors have been detected and corrected; and
that the work products conform to the standards established for the process, the project, and the product.
The primary requirements validation mechanism is the formal technical review. The review team includes system engineers, customers, users, and other stakeholders who examine the system specification looking for errors in content or interpretation, areas where clarification may be required, missing information, inconsistencies, conflicting requirements, or unrealistic (unachievable) requirements.
Although the requirements validation review can be conducted in any manner that results in the discovery of
requirements errors, it is useful to examine each requirement against a set of checklist questions. The following
questions represent a small subset of those that might be asked:
Are requirements stated clearly? Can they be misinterpreted?
Is the source (e.g., a person, a regulation, a document) of the requirement identified? Has the final
statement of the requirement been examined by or against the original source?
Is the requirement bounded in quantitative terms?
What other requirements relate to this requirement? Are they clearly noted via a cross-reference
matrix or other mechanism?
Does the requirement violate any domain constraints?
Is the requirement testable? If so, can we specify tests (sometimes called validation criteria) to exercise
the requirement?
Is the requirement traceable to any system model that has been created?
Is the requirement traceable to overall system/product objectives?
Is the system specification structured in a way that leads to easy understanding, easy reference, and
easy translation into more technical work products?
Has an index for the specification been created?
Have requirements associated with system performance, behavior, and operational characteristics been
clearly stated? What requirements appear to be implicit?
Requirements Management: -
Requirements management is a set of activities that help the project team to identify, control, and track
requirements and changes to requirements at any time as the project proceeds
Once requirements have been identified, traceability tables are developed. Shown schematically in the figure, each traceability table relates identified requirements to one or more aspects of the system or its environment. Among many possible traceability tables are the following:
Features traceability table. Shows how requirements relate to important customer observable
system/product features.
Source traceability table. Identifies the source of each requirement
Dependency traceability table. Indicates how requirements are related to one another.
Subsystem traceability table. Categorizes requirements by the subsystem(s) that they govern.
Interface traceability table. Shows how requirements relate to both internal and external system interfaces.
In many cases, these traceability tables are maintained as part of a requirements database so that they may be
quickly searched to understand how a change in one requirement will affect different aspects of the system to be
built.
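As a small, hypothetical sketch of how such a traceability database can be queried, the following Python fragment stores a dependency traceability table as a mapping from each requirement to the requirements that depend on it, and computes the impact set of a proposed change (the requirement IDs are invented for illustration).

# Hypothetical dependency traceability table: requirement -> dependent requirements.
DEPENDENTS = {
    "R1": ["R3", "R4"],
    "R3": ["R7"],
    "R4": [],
    "R7": [],
}

def impact_set(requirement):
    # Transitively collect every requirement affected by changing `requirement`.
    affected, stack = set(), [requirement]
    while stack:
        for dep in DEPENDENTS.get(stack.pop(), []):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected

print(sorted(impact_set("R1")))   # ['R3', 'R4', 'R7']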
TRACEABILITY:
Traceability is a property of an element of documentation or code that indicates the degree to which it can be
traced to its origin or "reason for being". Traceability also indicates the ability to establish a predecessor-
successor relationship between one work product and another.
A work product is said to be traceable if it can be proved that it complies with its specification. For example, a
software design is said to be traceable if it satisfies all the requirements stated in the software requirements
specification. Examples of traceability include:
External source to system requirements
System requirements to software requirements
Software requirements to high level design
High level design to detailed design
Detailed design to code
Software requirement to test case.
Figure: Forward traceability from the Software Requirements Specification (SRS) to the architecture, and backward traceability from the architecture to the SRS.
SOFTWARE DESIGN PROCESS:
Software design is a process to transform user requirements into some suitable form, which helps the
programmer in software coding and implementation.
For assessing user requirements, an SRS (Software Requirement Specification) document is created whereas for
coding and implementation, there is a need of more specific and detailed requirements in software terms. The
output of this process can directly be used into implementation in programming languages.
The architectural design defines the relationship between major structural elements of the software, the
“design patterns” that can be used to achieve the requirements that have been defined for the system.
Figure 3.1: Software Design and Software Engineering — the design model (architectural design, interface design and related design elements) is derived from analysis-model elements such as the data dictionary and the control specification.
The Design Process:
Software design is an iterative process through which requirements are translated into a "blueprint" for constructing the software. Initially, the blueprint depicts a holistic view of the software. That is, the design is represented at a high level of abstraction, at a level that can be directly traced to the specific system objectives and to more detailed data, functional and behavioral requirements. As design iterations occur, subsequent refinement leads to design representations at much lower levels of abstraction. These can still be traced back to the requirements, but the connection is subtler.
DESIGN CONCEPTS AND PRINCIPLES:
Design Principles: -
Software design is both a process and a model. The design process is a sequence of steps that enable the
designer to describe all aspects of the software to be built. It is important to note, however, that the design
process is not simply a cookbook. Creative skill, past experience, a sense of what makes “good” software, and
an overall commitment to quality are critical success factors for a competent design.
The design model is the equivalent of an architect’s plans for a house. It begins by representing the totality of
the thing to be built (e.g., a three-dimensional rendering of the house) and slowly refines the thing to provide
guidance for constructing each detail (e.g., the plumbing layout). Similarly, the design model that is created for
software provides a variety of different views of the computer software. Basic design principles enable the
software engineer to navigate the design process.
Principles for software design, which have been adapted and extended in the following list:
The design process should not suffer from “tunnel vision.”
The design should be traceable to the analysis model.
The design should not reinvent the wheel.
The design should “minimize the intellectual distance” between the software and the problem as it
exists in the real world.
That is, the structure of the software design should (whenever possible) mimic the structure of the
problem domain.
The design should exhibit uniformity and integration.
The design should be structured to accommodate change.
The design should be structured to degrade gently.
The design should be assessed for quality as it is being created, not after the fact.
The design should be reviewed to minimize conceptual (semantic) errors.
Design concepts:
Following issues are considered while designing the software.
Abstraction: Abstraction permits one to concentrate on a problem at some level of abstraction without regard to low-level detail. At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels, a procedural orientation is taken. At the lowest level of abstraction, the solution is stated in a manner that can be directly implemented.
Types of abstraction: 1. Procedural Abstraction 2. Data Abstraction
Refinement: A process of elaboration. Refinement takes a function defined at the abstract level and decomposes the statement of the function in a stepwise fashion until programming language statements are reached.
Modularity: Software is divided into separately named and addressable components called modules. This follows the "divide and conquer" concept: a complex problem is broken down into several manageable pieces (a small sketch follows this list).
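The following minimal Python sketch, written only for illustration, shows procedural abstraction (a named operation whose internal steps the caller need not know) and data abstraction (a class that hides its representation behind operations); all names used are invented.

# Procedural abstraction: callers use the named operation, not its internal steps.
def compute_gross_salary(basic, allowance_rate=0.4):
    return basic * (1 + allowance_rate)

# Data abstraction: the account's representation is hidden behind its operations.
class Account:
    def __init__(self, balance=0.0):
        self._balance = balance          # internal detail, not exposed directly

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

acct = Account()
acct.deposit(compute_gross_salary(1000))
print(acct.balance())   # 1400.0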
Architectural Styles:
The builder has used an architectural style as a descriptive mechanism to differentiate the house from other
styles (e.g., A-frame, raised ranch, Cape Cod). But more important, the architectural style is also a pattern for
construction. Further details of the house must be defined, its final dimensions must be specified, customized
features may be added, building materials are to be determined, but the pattern—a “center hall colonial”—
guides the builder in his work.
The software that is built for computer-based systems also exhibits one of many architectural styles.
Each style describes a system category that encompasses
A set of components (e.g., a database, computational modules) that perform a function required by a
system;
A set of connectors that enable “communication, co-ordinations and cooperation” among components.
Constraints that define how components can be integrated to form the system; and
Semantic models that enable a designer to understand the overall properties of a system by analyzing the known properties of its constituent parts. In the sections that follow, we consider commonly used architectural patterns for software.
The commonly used architectural styles are:
Data-centered Architectures: A data store (e.g., a file or database) resides at the center of this architecture and is accessed frequently by other components that update, add, delete, or otherwise modify data within the store. In a typical data-centered style, client software accesses a central repository. In some cases the data repository is passive; that is, client software accesses the data independent of any changes to the data or the actions of other client software. A variation on this approach transforms the repository into a "blackboard" that sends notifications to client software when data of interest to the client change.
Data-centered architectures promote integrity. That is, existing components can be changed and new
client components can be added to the architecture without concern about other clients (because the
client components operate independently). In addition, data can be passed among clients using the
blackboard mechanism (i.e., the blackboard component serves to coordinate the transfer of information
between clients). Client components independently execute processes.
Figure: Data-centered architecture — multiple client software components surround and access a central data store (repository).
Data-flow Architectures: This style applies when input data are to be transformed, through a series of computational components, into output data. A pipe-and-filter pattern has a set of components, called filters, connected by pipes that transmit data from one component to the next; each filter works independently of its neighboring filters.
If the data flow degenerates into a single line of transforms, it is termed batch sequential. This pattern, Figure 3.3(b), accepts a batch of data and then applies a series of sequential components (filters) to transform it.
Figure 3.3: Pipe-and-filter architecture — pipes connect a network of filter components; fan-in and fan-out of flows are also shown in the original figure.
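A minimal sketch of the pipe-and-filter idea in Python (the filter functions are invented for illustration): each filter consumes the output of the previous one and knows nothing about its neighbours.

# Each filter is an independent transformation; the pipeline just chains them.
def strip_blanks(lines):
    return [line.strip() for line in lines if line.strip()]

def to_upper(lines):
    return [line.upper() for line in lines]

def number(lines):
    return [f"{i}: {line}" for i, line in enumerate(lines, start=1)]

def run_pipeline(data, filters):
    for f in filters:                 # batch-sequential: one filter after another
        data = f(data)
    return data

print(run_pipeline(["  hello ", "", "world"], [strip_blanks, to_upper, number]))
# ['1: HELLO', '2: WORLD']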
Figure 3.5: Object-oriented Architecture — components (classes) encapsulate attributes and operations and communicate with one another via messages.
Layered Architectures: The basic structure of a layered architecture is illustrated in Figure 3.6. A
number of different layers are defined, each accomplishing operation that progressively become closer
to the machine instruction set.
At the outer layer, components service user interface operations. At the inner layer, components perform
operating system interfacing. Intermediate layers provide utility services and application software
functions.
These architectural styles are only a small subset of those available to the software designer. Once
requirements engineering uncovers the characteristics and constraints of the system to be built, the
architectural pattern (style) or combination of patterns (styles) that best fits those characteristics and
constraints can be chosen. In many cases, more than one pattern might be appropriate and alternative
architectural styles might be designed and evaluated.
Figure 3.6: Layered Architecture — a user interface layer, an application layer, a utility layer and a core layer.
Figure: Architectural views — components, logical view, development view and scenarios.
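A minimal, hypothetical Python sketch of the layered style (all names invented): each layer only calls the layer directly beneath it, so user-interface code never touches the core directly.

# Core layer: closest to the "machine"; here, raw data storage.
class CoreLayer:
    def __init__(self):
        self._records = {}
    def store(self, key, value):
        self._records[key] = value
    def fetch(self, key):
        return self._records.get(key)

# Application layer: business rules, built on top of the core layer.
class ApplicationLayer:
    def __init__(self, core):
        self._core = core
    def register_student(self, roll_no, name):
        self._core.store(roll_no, name)
    def lookup_student(self, roll_no):
        return self._core.fetch(roll_no)

# User interface layer: talks only to the application layer.
class UILayer:
    def __init__(self, app):
        self._app = app
    def handle_command(self, command, *args):
        if command == "add":
            self._app.register_student(*args)
            return "added"
        if command == "show":
            return self._app.lookup_student(*args)

ui = UILayer(ApplicationLayer(CoreLayer()))
ui.handle_command("add", "CS403-01", "Asha")
print(ui.handle_command("show", "CS403-01"))   # Asha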
The structure principle: Design should organize the user interface purposefully, in meaningful and
useful ways based on clear, consistent models that are apparent and recognizable to users, putting related
things together and separating unrelated things, differentiating dissimilar things and making similar
things resemble one another. The structure principle is concerned with overall user interface
architecture.
The simplicity principle: The design should make simple, common tasks easy, communicating clearly
and simply in the user's own language, and providing good shortcuts that are meaningfully related to
longer procedures.
The visibility principle: The design should make all needed options and materials for a given task
visible without distracting the user with extraneous or redundant information. Good designs don't
overwhelm users with alternatives or confuse them with unneeded information.
The feedback principle: The design should keep users informed of actions or interpretations, changes of
state or condition, and errors or exceptions that are relevant and of interest to the user through clear,
concise, and unambiguous language familiar to users.
The tolerance principle: The design should be flexible and tolerant, reducing the cost of mistakes and
misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating
varied inputs and sequences and by interpreting all reasonable actions.
The reuse principle: The design should reuse internal and external components and behaviors,
maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need
for users to rethink and remember.
Figure: The user interface design process: environment analysis and modelling, interface design, implementation, and interface validation, repeated across successive releases.
FUNCTION-ORIENTED DESIGN:
In function-oriented design, the system is composed of many smaller sub-systems known as functions. These
functions are capable of performing significant tasks in the system. The system is considered as the top-level
view of all these functions.
Function-oriented design inherits some properties of structured design, where a divide-and-conquer
methodology is used.
This design mechanism divides the whole system into smaller functions, which provides a means of abstraction
by concealing the information and its operation. These functional modules can share information among
themselves by means of information passing and by using information available globally.
Another characteristic of functions is that when a program calls a function, the function changes the state of the
program, which is sometimes not acceptable to other modules. Function-oriented design works well where the
system state does not matter and programs/functions work on input rather than on a state.
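A minimal sketch of the point made above, using hypothetical functions: the first function works only on its inputs (state does not matter), while the second changes global program state, which other modules may not expect.

# Function-oriented decomposition: the system is a set of functions.
# compute_total works purely on its inputs; apply_discount mutates shared
# (global) state, which is the behaviour the text warns about.

order_total = 0.0                     # information available globally

def compute_total(prices):
    """Stateless: the result depends only on the input passed in."""
    return sum(prices)

def apply_discount(rate):
    """Changes program state; other modules must be aware of this."""
    global order_total
    order_total = order_total * (1.0 - rate)

order_total = compute_total([10.0, 25.5, 4.5])
apply_discount(0.10)
print(order_total)                    # 36.0 after the 10% discount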
SA/SD COMPONENT BASED DESIGN:
Structured Analysis and Structured Design: Structured analysis is a set of techniques and graphical tools that
allow the analyst to develop a new kind of system specification that is easily understandable to the user.
Goals of SASD
Improve Quality and reduce the risk of system failure
Establish concrete requirements specifications and complete requirements documentation
Focus on Reliability, Flexibility, and Maintainability of system
Extract the business process entities that can exist independently, without any associated dependency on
other entities.
Recognize and discover these independent entities as new components.
Use infrastructure component names that reflect their implementation-specific meaning.
Model any dependencies from left to right and inheritance from top (base class) to bottom (derived
classes).
Model any component dependencies as interfaces rather than representing them as direct
component-to-component dependencies.
DESIGN METRICS
In software development, a metric is the measurement of a particular characteristic of a program's performance
or efficiency. Design metrics measure the efficiency of the design aspects of the software. The design model is
considered from three aspects:
Architectural design
Object oriented design
User interface design
Static Analysis:
Static analysis involves no dynamic execution of the software under test and can detect
possible defects in an early stage, before running the program.
Static analysis is done after coding and before executing unit tests.
Static analysis can be done by a machine that automatically “walks through” the source
code and detects violations of coding rules. The classic example is a compiler, which finds
lexical, syntactic and even some semantic mistakes.
Static analysis can also be performed by a person who would review the code to ensure
proper coding standards and conventions are used to construct the program.
Static code analysis advantages:
It can find weaknesses in the code at the exact location.
It can be conducted by trained software assurance developers who fully understand
the code.
Source code can be easily understood by other and future developers.
It allows a quicker turnaround for fixes.
Weaknesses are found earlier in the development life cycle, reducing the cost to fix.
Fewer defects appear in later tests.
Defects are detected that cannot, or can only with difficulty, be detected using dynamic tests
(illustrated in the sketch after this list), for example:
o Unreachable code
o Variable use (undeclared, unused)
o Uncalled functions
o Boundary value violations
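A small illustrative function containing the kinds of defects listed above; a static analysis tool such as pylint or flake8 would flag these without ever running the code (the specific warning messages vary by tool, and the function names here are made up).

# Example code with defects that static analysis can report without execution.

def average(values):
    unused_total = 0                # unused variable: assigned but never read
    if len(values) == 0:
        return 0
    return sum(values) / len(values)
    print("done")                   # unreachable code: placed after the return

def never_called():                 # uncalled function: defined but never used
    return 42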
Dynamic Analysis:
Dynamic analysis is based on the system execution, often using tools.
Dynamic program analysis is the analysis of computer software that is performed by
executing programs built from that software on a real or virtual processor (analysis
performed without executing programs is known as static code analysis). Dynamic
program analysis tools may require loading of special libraries or even recompilation of
program code.
The most common dynamic analysis practice is executing Unit Tests against the code to find
any errors in code.
Automated tools are only as good as the rules they are using to scan with.
It is more difficult to trace a vulnerability back to the exact location in the code,
so fixing the problem takes longer.
CODE INSPECTIONS:
Code Inspection is the most formal type of review, which is a kind of static testing to
avoid the defect multiplication at a later stage.
The main purpose of code inspection is to find defects, and it can also identify any
needed process improvements.
An inspection report lists the findings, which include metrics that can be used to
aid improvements to the process as well as correcting defects in the document
under review.
Preparation before the meeting is essential, which includes reading of any source
documents to ensure consistency.
Inspections are often led by a trained moderator, who is not the author of the code.
The inspection process is the most formal type of review based on rules and
checklists and makes use of entry and exit criteria.
It usually involves peer examination of the code and each one has a defined set of
roles.
After the meeting, a formal follow-up process is used to ensure that corrective
action is completed in a timely manner.
Testing principles:
The following basic principles and fundamentals are general guidelines applicable for all
types of real-time testing:
Testing shows the presence of defects, not their absence. A test that reveals defects is
generally considered more valuable than one that finds none.
Testing of the product should be accomplished considering the risk factors and priorities.
Early testing helps identify issues early in the development life cycle, which eases
error correction and helps reduce cost.
Normally, defects cluster around a particular set of modules or functionalities. Once
these are identified, testing can be focused on the defective areas while continuing
to find defects in other modules.
Testing becomes less effective and efficient if the same kinds of tests are
repeated over a long duration.
Testing has to be performed in different ways; not all modules can be tested in the
same way. Every tester has their own individuality, and likewise the system
under test.
Just identifying and fixing issues does not by itself set user expectations correctly.
Even if testing is performed to showcase the software's reliability, it is better to
assume that no software product is bug-free.
Figure 4.1: The testing process (test cases, test data, test results, test reports).
TESTING LEVELS:
The testing can be typically carried out in levels. In software development process at
each phase some faults may get introduced. These faults are eliminated in the next
software development phase but at the same time some new faults may get introduced.
Each level of testing performs some typical activity. Levels of testing include different
methodologies that can be used while conducting software testing. The main levels of
software testing are:
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Unit Testing: In this type of testing errors are detected from each software component
individually.
Integration Testing: In this type of testing technique interacting component are
verified and the interface errors are detected.
System Testing: In this type of testing, all the elements forming the system are
tested as a whole.
Acceptance Testing: Acceptance testing is conducted to ensure that the software works
correctly in the user's working environment.
Figure: Levels of testing mapped to development phases: client needs (acceptance testing), requirements (system testing), design (integration testing), and coding (unit testing).
TEST ORACLES:
Test Oracles is a mechanism for determining whether a test has passed or failed. The
use of oracles involves comparing the output(s) of the system under test, for a given
test-case input, to the output(s) that the oracle determines that product should have.
Suppose the same test case is run against both the program under test and the test oracle.
If the outputs of both are the same, the program behaves correctly; otherwise there is some
fault in the program.
Figure: A test oracle compares the output of the program under test with the expected output for each test case.
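A minimal sketch of the oracle idea described above: a hypothetical fast square-root routine is the program under test, the math library acts as the trusted oracle, and each test case passes only if the two outputs agree within a tolerance.

import math

def fast_sqrt(x, iterations=20):
    """Program under test: Newton's method square root (illustrative)."""
    guess = x if x > 1 else 1.0
    for _ in range(iterations):
        guess = 0.5 * (guess + x / guess)
    return guess

def oracle_sqrt(x):
    """Test oracle: trusted reference implementation."""
    return math.sqrt(x)

def run_test_case(x, tolerance=1e-9):
    actual = fast_sqrt(x)
    expected = oracle_sqrt(x)
    verdict = "PASS" if abs(actual - expected) <= tolerance else "FAIL"
    print(f"input={x}: program={actual}, oracle={expected} -> {verdict}")

for test_input in (1.0, 2.0, 144.0, 0.25):
    run_test_case(test_input)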
Advantages of white-box testing:
As the tester has knowledge of the source code, it becomes very easy to find out which type of data can
help in testing the application effectively.
It helps in optimizing the code.
Extra lines of code can be removed, which can bring in hidden defects.
Due to the tester's knowledge about the code, maximum coverage is attained during test scenario writing.
Disadvantages of white-box testing:
Due to the fact that a skilled tester is needed to perform white-box testing, the costs are increased.
Sometimes it is impossible to look into every nook and corner to find out hidden errors that may create
problems, as many paths will go untested.
It is difficult to maintain white-box testing, as it requires specialized tools like code analyzers and
debugging tools.
UNIT TESTING:
Unit testing is a testing technique in which individual modules are tested by the developer to
determine whether there are any issues. It is concerned with the functional
correctness of standalone modules.
The main aim is to isolate each unit of the system to identify, analyze and fix the defects.
Advantages:
Reduces Defects in the newly developed features or reduces bugs when changing
the existing functionality.
Reduces Cost of Testing as defects are captured in very early phase.
Improves design and allows better refactoring of code.
Unit tests, when integrated with the build, also give an indication of the quality of the build.
Figure: Unit testing of a module: its interface, local data structures, boundary conditions and independent paths are exercised by test cases.
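A minimal unit-test sketch using Python's built-in unittest framework: a single module function is tested in isolation, including a boundary condition, along the lines of the figure above (the function and test names are illustrative).

import unittest

def classify_triangle(a, b, c):
    """Module under test (illustrative): classify a triangle by its sides."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_boundary_condition(self):
        # degenerate "triangle": the sides lie on a line, so it is invalid
        self.assertEqual(classify_triangle(1, 2, 3), "invalid")

if __name__ == "__main__":
    unittest.main()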
TESTING FRAMEWORKS:
Testing frameworks are an essential part of any successful automated testing process.
They can reduce maintenance costs and testing efforts and will provide a higher return
on investment (ROI) for QA teams looking to optimize their agile processes.
A testing framework is a set of guidelines or rules used for creating and designing test
cases. A framework comprises a combination of practices and tools that are
designed to help QA professionals test more efficiently.
INTEGRATION TESTING:
In integration testing, individual software modules are integrated logically and tested as
a group. A typical software project consists of multiple software modules, coded by
different programmers. Integration Testing focuses on checking data communication
amongst these modules.
Need of integration testing:
Although each software module is unit tested, defects still exist for various reasons like:
A Module in general is designed by an individual software developer whose
understanding and programming logic may differ from other programmers.
Integration testing becomes necessary to verify that the software modules work in
unison.
At the time of module development, there is a strong chance of changes in the clients'
requirements. These new requirements may not have been unit tested, and
hence system integration testing becomes necessary.
Interfaces of the software modules with the database could be erroneous
External Hardware interfaces, if any, could be erroneous
Inadequate exception handling could cause issues.
Regression testing
Smoke testing
Top-down integration:
In this testing, the highest-level modules are tested first and progressively, lower-level
modules are tested thereafter.
In a comprehensive software development environment, bottom-up testing is usually
done first, followed by top-down testing. The process concludes with multiple tests of the
complete application, preferably in scenarios designed to mimic actual situations.
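A minimal sketch of top-down integration: the top-level module is tested first while a lower-level module that is not yet ready is replaced by a stub (all names here are hypothetical).

# Top-down integration sketch: the high-level order processing module is
# integrated and tested first; the real payment module is not ready yet,
# so a stub stands in for it.

def payment_gateway_stub(amount):
    """Stub for the lower-level payment module (returns a canned result)."""
    return {"status": "approved", "amount": amount}

def process_order(items, charge=payment_gateway_stub):
    """Top-level module under test; the lower-level dependency is injected."""
    total = sum(price for _, price in items)
    receipt = charge(total)
    return receipt["status"] == "approved", total

ok, total = process_order([("book", 250.0), ("pen", 20.0)])
print(ok, total)   # True 270.0 -- top level verified before the real payment code exists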
Regression Testing:
Regression testing is used to check for defects propagated to other modules by changes
made to the existing program. Thus regression testing is used to reduce the side effects of
such changes.
Smoke Testing:
Smoke testing is a type of software testing which ensures that the major functionalities of
the application are
working fine. This testing is also known as ‘Build Verification testing’. It is a non-exhaustive
testing with very limited test cases to ensure that the important features are working fine
and we are good to proceed with the detailed testing.
SYSTEM TESTING:
System testing tests the system as a whole. Once all the components are integrated, the
application as a whole is tested rigorously to see that it meets the specified Quality
Standards. This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons:
System testing is the first level of testing at which
the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical
specifications.
The application is tested in an environment that is very close to the production
environment where the application will be deployed.
System testing enables us to test, verify, and validate both the business
requirements as well as the application architecture.
Stress Testing: Stress testing is the process of determining the ability of a computer,
network, program or device to maintain a certain level of effectiveness under
unfavorable conditions. It is used to test the stability and reliability of the system. This test
mainly evaluates the system's robustness and error handling under
extremely heavy load conditions.
TEST PLAN:
Test planning is the most important activity for ensuring that there is initially a list of tasks
and milestones in a baseline plan against which the progress of the project can be tracked. It also defines the
size of the test effort. The test plan is the main document, often called the master test plan or the
project test plan, and is usually developed during the early phase of the project.
S.No. Parameter Description
1. Test plan identifier: Unique identifying reference.
2. Introduction: A brief introduction about the project and to the document.
3. Test items: A test item is a software item that is the application under test.
4. Features to be tested: A feature that needs to be tested on the testware.
5. Features not to be tested: Identify the features, and the reasons for not including them as part of testing.
6. Approach: Details about the overall approach to testing.
7. Item pass/fail criteria: Documents whether a software item has passed or failed its test.
8. Test deliverables: The deliverables that are delivered as part of the testing process, such as test plans, test specifications and test summary reports.
9. Testing tasks: All tasks for planning and executing the testing.
10. Environmental needs: Defines the environmental requirements such as hardware, software, OS, network configurations and tools required.
11. Responsibilities: Lists the roles and responsibilities of the team members.
12. Staffing and training needs: Captures the actual staffing requirements and any specific skills and training requirements.
13. Schedule: States the important project delivery dates and key milestones.
14. Risks and Mitigation: High-level project risks and assumptions, and a mitigation plan for each identified risk.
15. Approvals: Captures all approvers of the document, their titles and the sign-off date.
TEST METRICS:
In software testing, Metric is a quantitative measure of the degree to which a system,
system component, or process possesses a given attribute. Measurement is nothing but
quantitative indication of size / dimension / capacity of an attribute of a product /
process. Software metric is defined as a quantitative measure of an attribute a
software system possesses with respect to Cost, Quality, Size and Schedule. Example-
Measure - No. of Errors
Metrics - No. of Errors found per person
The most commonly used metrics are cyclomatic complexity and Halstead complexity. Cyclomatic
complexity is computed from the flow graph of a program as V(G) = E - N + 2, where E is the number of
edges and N is the number of nodes in the graph.
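A small worked example of the formula above, assuming a hypothetical flow graph with 9 edges and 7 nodes: the result gives the number of independent paths and hence a minimum number of test cases.

# Cyclomatic complexity V(G) = E - N + 2 for a single connected flow graph.
def cyclomatic_complexity(edges, nodes):
    return edges - nodes + 2

# Hypothetical flow graph: 9 edges and 7 nodes.
print(cyclomatic_complexity(edges=9, nodes=7))   # 4 independent paths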
TESTING TOOLS:
Tools from a software testing context can be defined as a product that supports one or
more test activities right from planning, requirements, creating a build, test execution,
defect logging and test analysis.
Classification of Tools
Tools can be classified based on several parameters. They include:
The purpose of the tool
The Activities that are supported within the tool
The Type/level of testing it supports
The kind of licensing (open source, freeware, commercial)
The technology used
S. No. Tool Type: Used for
1. Test Management Tools: Test managing, scheduling, defect logging, tracking and analysis.
2. Configuration Management Tools: For implementation, execution, tracking changes.
3. Static Analysis Tools: Static testing.
4. Test Data Preparation Tools: Analysis, design and test data generation.
5. Test Execution Tools: Implementation, regression.
6. Test Comparators: Comparing expected and actual results.
7. Coverage Measurement Tools: Provide structural coverage.
8. Performance Testing Tools: Monitoring the performance and response time.
9. Project Planning and Tracking Tools: For planning.
10. Incident Management Tools: For managing the tests.
Table 4.2: Testing tools
Tools Implementation process:
Analyze the problem carefully to identify strengths, weaknesses and opportunities.
Note the constraints such as budgets, time and other requirements.
Evaluate the options and shortlist the ones that meet the requirements.
Develop a proof of concept which captures the pros and cons.
Create a pilot project using the selected tool within a specified team.
Roll out the tool phase-wise across the organization.
Object-Oriented Analysis:
Object-Oriented Analysis (OOA) is the procedure of identifying software engineering
requirements and developing software specifications in terms of a software system's
object model, which comprises interacting objects.
The primary tasks in object-oriented analysis (OOA) are −
Identifying objects
Organizing the objects by creating object model diagram
Defining the internals of the objects, or object attributes
Defining the behavior of the objects, i.e., object actions
Describing how the objects interact
The common models used in OOA are use cases and object models.
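A minimal sketch of an object model as produced by OOA, with hypothetical classes: objects are identified (Library, Book), their attributes and behaviour are defined, and their interaction is shown by one object sending a message to another.

# Illustrative object model: two interacting objects with attributes and behaviour.

class Book:
    def __init__(self, title):
        self.title = title            # attribute
        self.is_issued = False        # attribute

    def issue(self):                  # behaviour (operation)
        self.is_issued = True

class Library:
    def __init__(self):
        self.catalogue = []           # attribute: a collection of Book objects

    def add_book(self, book):
        self.catalogue.append(book)

    def issue_book(self, title):      # interaction: Library sends a message to Book
        for book in self.catalogue:
            if book.title == title and not book.is_issued:
                book.issue()
                return True
        return False

library = Library()
library.add_book(Book("Software Engineering"))
print(library.issue_book("Software Engineering"))   # True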
Types of maintenance:
Various types of software maintenance are:
Corrective Maintenance: Means the maintenance for correcting the software faults.
Adaptive Maintenance: Means maintenance for adapting the change in environment (different system
or operating systems).
Perfective Maintenance: Means modifying or enhancing the system to meet the new requirements.
Preventive Maintenance: This includes modifications and updating to prevent future problems of the
software.
Change Request
Software Configuration Items (SCIs): Information that is created as part of the software engineering process.
Baselines: A baseline is a software configuration management concept that helps us to control change. It signals
a point of departure from one activity to the start of another, and it helps control change without impeding
justifiable change.
Elements of SCM
There are four elements of SCM:
1. Software Configuration Identification
2. Software Configuration Control
3. Software Configuration Auditing
4. Software Configuration Status Reporting
VERSION CONTROL:
Version Control is a system or tool that captures the changes to a source code element: file, folder, image or
binary. This is beneficial for many reasons, but the most fundamental reason is it allows you to track changes
on a per file basis.
RE-ENGINEERING:
Software re-engineering means re-structuring or re-writing part or all of a software system. It is needed
for applications that require frequent maintenance.
Software re-engineering is a process of software development which is done to improve the maintainability of a
software system.
Re-engineering a software system has two key advantages:
Reduced risk: As the software already exists, the risk is less as compared to developing new software.
Reduced cost: The cost of re-engineering is significantly less than the costs of developing new
software.
Figure: The re-engineering process: source code translation, reverse engineering, program structure improvement and program modularization, producing a structured program, restructured code and a final specification.
Reverse Engineering vs. Forward Engineering:
2. Reverse engineering is usually done when documentation is not appropriate or is missing. In forward
engineering, modification of the system is done, e.g. (1) use of a different programming language,
(2) introduction of a new DBMS, (3) transfer of software to a new hardware platform.
3. Reverse engineering is a process in which dirty or unstructured code is taken, processed and
restructured. Forward engineering is a process in which theories, methods and tools are applied to
develop a professional software product.
4. Reverse engineering tries to recreate the source code from the compiled code, that is, to figure out how
a piece of software works given only the final system. Forward engineering is normal engineering: it
builds devices that can do certain useful things for us.
5. Reverse engineering is complex because cleaning the dirty or unstructured code requires more effort.
Forward engineering is a simple and straightforward approach.
6. In reverse engineering, documentation or specification of the product is useful to the developer. In
forward engineering, documentation or specification of the product is useful to the end user.
7. Reverse engineering starts by understanding the existing unstructured code. Forward engineering starts
by understanding user requirements.
TOOL SUPPORT:
Figure: Upper CASE tools (analysis, design), lower CASE tools (testing, maintenance) and integrated CASE tools spanning analysis, design, implementation, testing and maintenance.
Figure 5.7: Block diagram for CASE: a central repository (data dictionary) surrounded by project management tools, programming tools, prototyping and simulation tools, consistency and completeness tools, software configuration management tools, documentation tools, analysis and design tools, requirement tracing tools, database management and report generation tools, and transferring tools for exchanging data in different formats.
FEASIBILITY ANALYSIS:
When the client approaches the organization for getting the desired product developed, it comes up with a rough
idea about what all functions of the software must perform and which all features are expected from the
software.
With reference to this information, the analysts do a detailed study of whether the desired system and its
functionality are feasible to develop.
This feasibility study is focused towards the goals of the organization. It analyses whether the software
product can be practically materialized in terms of implementation, contribution of project to organization, cost
constraints, and as per values and objectives of the organization. It explores technical aspects of the project and
product such as usability, maintainability, productivity, and integration ability.
The output of this phase should be a feasibility study report that should contain adequate comments and
recommendations for management about whether or not the project should be undertaken.
RESOURCE ALLOCATIONS:
Once the objectives of the project are defined, the project manager estimates the
resources required for the project. Various resources of the project are:
• Human or people
• Reusable software components
• Hardware or software components
The resources are available in limited quantity and stay in the organization as a pool of assets. The shortage of
resources hampers development of the project and it can lag behind the schedule. Allocating extra resources
increases development cost in the end. It is therefore necessary to estimate and allocate adequate resources for
the project.
Resource management includes:
•Defining a proper project organization by creating a project team and allocating responsibilities to each team
member.
•Determining resources required at a particular stage and their availability.
•Managing resources by generating resource requests when they are required and de-allocating them when
they are no longer needed.
SOFTWARE EFFORTS:
Project Estimation
For an effective management, accurate estimation of various measures is a must. With the correct estimation,
managers can manage and control the project more efficiently and effectively.
Project estimation may involve the following:
•Software size estimation: Software size may be estimated either in terms of KLOC (Kilo Line of Code)
or by calculating number of function points in the software. Lines of code depend upon coding practices.
Function points vary according to the user or software requirement.
•Effort estimation: The manager estimates efforts in terms of personnel requirement and man-hour
required to produce the software. For effort estimation software size should be known. This can either be
derived by manager’s experience, historical data of organization, or software size can be converted into
efforts by using some standard formula.
•Time estimation: Once size and efforts are estimated, the time required to produce the software can be
estimated. An effort required is segregated into sub categories as per the requirement specifications and
interdependency of various components of software. Software tasks are divided into smaller tasks,
activities or events by a Work Breakdown Structure (WBS). The tasks are scheduled on a day-to-day
basis or in calendar months. The sum of time required to complete all tasks in hours or days is the total
time invested to complete the project.
•Cost estimation: This might be considered as the most difficult of all because it depends on more
elements than any of the previous ones. For estimating project cost, it is required to consider –
o Size of the software
o Software quality
o Hardware
o Additional software or tools, licenses etc.
o Skilled personnel with task-specific skills
o Travel involved
o Communication
o Training and support
PROJECT SCHEDULING:
Project Scheduling in a project refers to roadmap of all activities to be done with specified order and within
time slot allotted to each activity. Project managers tend to define various tasks and project milestones and then
arrange them keeping various factors in mind. They look for tasks lying on the critical path in the schedule, which are
necessary to complete in specific manner (because of task interdependency) and strictly within the time
allocated. During the project scheduling the total work is separated into various small activities.
COST ESTIMATIONS:
Cost estimation can be defined as the approximate judgments of the costs for project. Cost estimation is usually
measured in terms of effort. The effort is the amount of time for one person to work for a certain period of time.
COCOMO is one of the most widely used software estimation models in the world. The Constructive Cost Model
(COCOMO) is a procedural software cost estimation model. COCOMO is used to estimate the effort, cost and
duration of software development based on the size of the software.
COCOMO predicts the effort and schedule for a software product development based on inputs relating to the
size of the software and a number of cost drivers that affect productivity.
COCOMO has three different models that reflect the complexities:
Basic Model: this model would be applied early in a project's development. It provides a rough
estimate early on that should be refined later with one of the other models.
Intermediate Model: this model would be used after you have more detailed requirements for a
project.
Detailed Model: when design of the project is complete you can apply this model to further refine
your estimate.
Within each of these models there are also three different modes. The mode you choose will depend on your
work environment, and the size and constraints of the project itself. The modes are:
Organic: this mode is used for “relatively small software teams developing software in a highly familiar,
in-house environment”.
Embedded: operating within tight constraints where the product is strongly tied to a “complex of
hardware, software, regulations and operational procedures”.
Semi-detached: an intermediate stage somewhere in between organic and embedded. Projects are
usually of moderate size of up to 300,000 lines of code.
Basic Model: The basic COCOMO model estimates the software development effort using only Lines Of Code
(LOC). Various equations in this model are:
Effort Applied (E) = ab * (KLOC)^bb [man-months]
Development Time (D) = cb * (Effort Applied)^db [months]
People Required (P) = Effort Applied / Development Time [count]
where KLOC is the estimated number of delivered lines of code for the project (expressed in thousands). The
coefficients ab, bb, cb and db depend on the mode and are given in the following table:
Software Project    ab    bb    cb    db
Organic             2.4   1.05  2.5   0.38
Semi-detached       3.0   1.12  2.5   0.35
Embedded            3.6   1.20  2.5   0.32
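A small sketch applying the basic-model equations and the coefficient table above; the 32 KLOC project size used here is purely illustrative.

# Basic COCOMO: effort, development time and people from size (KLOC) and mode.

COEFFICIENTS = {                      # mode: (ab, bb, cb, db)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    ab, bb, cb, db = COEFFICIENTS[mode]
    effort = ab * (kloc ** bb)        # man-months
    time = cb * (effort ** db)        # months
    people = effort / time            # count
    return effort, time, people

effort, time, people = basic_cocomo(32, "organic")
print(f"effort = {effort:.1f} man-months, time = {time:.1f} months, people = {people:.1f}")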
Experienced staff leaving the project and new staff coming in.
Change in organizational management.
Requirement change or misinterpreting requirement.
Under-estimation of required time and resources.
Technological changes, environmental changes, business competition.
RMMM Plan:
It is a part of the software development plan or a separate document.
The RMMM plan documents all work executed as a part of risk analysis and used by the project
manager as a part of the overall project plan.
The risk mitigation and monitoring starts after the project is started and the documentation of RMMM is
completed.
Quality Control:
Quality control (QC) is a procedure or set of procedures intended to ensure that a manufactured product or
performed service adheres to a defined set of quality criteria or meets the requirements of the client or customer.
Quality Assurance:
It is planned and systematic pattern of activities necessary to provide a high degree of confidence in the quality
of a product. It provides quality assessment of the quality control activities and determines the validity of the
data or procedures for determining quality.
PROJECT PLAN:
A project plan is a formal document designed to guide the control and execution of a project. A project plan is
the key to a successful project and is the most important document that needs to be created when starting any
business project.
A project plan is used for the following purposes:
To document and communicate stakeholder products and project expectations
To control schedule and delivery
To calculate and manage associated risks
PROJECT METRICS:
Metrics is a quantitative measure of the degree to which a system, component, or process possesses a given
attribute.
Project metrics are quantitative measures that enable software engineers to gain insight into the efficiency of
the software process and the projects conducted using the process framework. Project metrics are used by a
project manager and a software team to adapt project work flow and technical activities.
o Productivity=KLOC/ Person-month
o Quality= number of faults/KLOC
o Cost=$/KLOC
o Documentation= Pages of documentation/KLOC
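A short sketch evaluating the ratios above for a hypothetical project (the input figures are made up for illustration).

# Project metrics computed from hypothetical project data.
kloc = 20.0             # delivered size in thousands of lines of code
person_months = 10.0    # total effort
faults = 40             # faults found
cost_dollars = 50000.0  # total cost
doc_pages = 400         # pages of documentation

print("Productivity  =", kloc / person_months, "KLOC per person-month")   # 2.0
print("Quality       =", faults / kloc, "faults per KLOC")                # 2.0
print("Cost          =", cost_dollars / kloc, "$ per KLOC")               # 2500.0
print("Documentation =", doc_pages / kloc, "pages per KLOC")              # 20.0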
Number of user inputs. Each user input that provides distinct application oriented data to the software is
counted. Inputs should be distinguished from inquiries, which are counted separately.
Number of user outputs. Each user output that provides application oriented information to the user is
counted. In this context output refers to reports, screens, error messages, etc. Individual data items within a
report are not counted separately.
Number of user inquiries. An inquiry is defined as an on-line input that results in the generation of some
immediate software response in the form of an on-line output. Each distinct inquiry is counted.
Number of files. Each logical master file (i.e., a logical grouping of data that may be one part of a large
database or a separate file) is counted.
Number of external interfaces. All machine readable interfaces (e.g., data files on storage media) that are used
to transmit information to another system are counted.
FP = count total * [0.65 + 0.01 * Σ(Fi)], where count total is the sum of all FP entries.
The Fi (i = 1 to 14) are "complexity adjustment values" based on responses to fourteen questions about
general system characteristics.
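A short worked sketch of the FP formula above, using hypothetical counts; the weighting factors shown are the commonly used "average" weights for the five measurement parameters, and the Fi answers are assumed values.

# Function point calculation: FP = count_total * (0.65 + 0.01 * sum(Fi)).
# Counts are hypothetical; the weights are typical "average" weighting factors.

counts  = {"inputs": 24, "outputs": 16, "inquiries": 22, "files": 4, "interfaces": 2}
weights = {"inputs": 4,  "outputs": 5,  "inquiries": 4,  "files": 10, "interfaces": 7}

count_total = sum(counts[p] * weights[p] for p in counts)

fi = [3] * 14                        # assumed answers (0-5) to the 14 adjustment questions
fp = count_total * (0.65 + 0.01 * sum(fi))

print(count_total)                   # 318
print(round(fp, 2))                  # 318 * 1.07 = 340.26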
References
1. Pankaj Jalote, "An Integrated Approach to Software Engineering", Narosa Publishing House, 2005.
2. Rajib Mall, "Fundamentals of Software Engineering", Second Edition, PHI Learning.
3. R. S. Pressman, "Software Engineering: A Practitioner's Approach", Sixth Edition, 2006, McGraw-Hill.
4. Sommerville, "Software Engineering", Pearson Education.
5. Richard H. Thayer, "Software Engineering & Project Management", Wiley India.
6. Waman S. Jawadekar, "Software Engineering", TMH.
7. Bob Hughes, M. Cotterell, Rajib Mall, "Software Project Management", McGraw-Hill.