
DMI COLLEGE OF ENGINEERING

PALANCHUR, CHENNAI – 123.


DEPARTMENT OF INFORMATION TECHNOLOGY

SUBJECT NAME : CCS356 OBJECT ORIENTED SOFTWARE ENGINEERING

ACADEMIC YEAR : 2024 – 2025

SEMESTER : EVEN SEM

CLASS : III YEAR IT


CCS356 OBJECT ORIENTED SOFTWARE ENGINEERING
L T P C 3 0 2 4

UNIT I SOFTWARE PROCESS AND AGILE DEVELOPMENT 9

Introduction to Software Engineering, Software Process, Perspective and Specialized Process Models – Introduction to Agility – Agile process – Extreme programming – XP Process – Case Study.

UNIT II REQUIREMENTS ANALYSIS AND SPECIFICATION 9

Requirement analysis and specification – Requirements gathering and analysis – Software Requirement Specification – Formal system specification – Finite State Machines – Petri nets – Object modelling using UML – Use case Model – Class diagrams – Interaction diagrams – Activity diagrams – State chart diagrams – Functional modelling – Data Flow Diagram – CASE tools.

UNIT III SOFTWARE DESIGN 9

Software design – Design process – Design concepts – Coupling – Cohesion – Functional independence – Design patterns – Model-view-controller – Publish-subscribe – Adapter – Command – Strategy – Observer – Proxy – Facade – Architectural styles – Layered – Client-Server – Tiered – Pipe and filter – User interface design – Case Study.

UNIT IV SOFTWARE TESTING AND MAINTENANCE 9

Testing – Unit testing – Black box testing – White box testing – Integration and System testing – Regression testing – Debugging – Program analysis – Symbolic execution – Model Checking – Case Study.

UNIT V PROJECT MANAGEMENT 9

Software Project Management – Software Configuration Management – Project Scheduling – DevOps: Motivation – Cloud as a platform – Operations – Deployment Pipeline: Overall Architecture, Building and Testing – Deployment – Tools – Case Study.
UNIT-I

SOFTWARE PROCESS AND AGILE DEVELOPMENT


1. INTRODUCTION TO SOFTWARE ENGINEERING

Software: Software is
(1) Instructions (computer programs) that provide desired features, function, and performance when executed,
(2) Data structures that enable the programs to adequately manipulate information, and
(3) Documents that describe the operation and use of the programs.
There are three components of the software:
 Program: A program is a combination of source code and object code.
 Documentation: Documentation consists of different types of manuals. Examples of documentation manuals are: Data Flow Diagrams, Flow Charts, ER diagrams, etc.
 Operating Procedures: Operating procedures consist of instructions to set up and use the software system and instructions on how to react to system failure. Examples of operating procedure manuals are: installation guide, beginner's guide, reference guide, system administration guide, etc.
Characteristics of Software:
(1) Software is developed or engineered; it is not manufactured in the classical sense.
(2) Software does not “wear out”.
(3) Although the industry is moving toward component-based construction, most software continues to be custom built.
Software Engineering:
(1) The systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.

Need of Software Engineering

o Huge Programming: It is easier to build a wall than a house or a building. Similarly, as the size of software becomes large, engineering has to step in to give it a scientific process.
o Adaptability: If the software procedure were not based on scientific and engineering ideas, it would be easier to re-create new software than to scale up an existing one.
o Cost: The hardware industry has demonstrated its skills, and mass manufacturing has brought down the cost of computer and electronic hardware. But the cost of software remains high if the proper process is not followed.
o Dynamic Nature: The continually growing and adapting nature of software hugely depends upon the environment in which the user works. If that environment keeps changing, new upgrades need to be made to the existing software.
o Quality Management: A better procedure of software development provides a better quality software product.

Characteristics of a good software engineer

The features that good software engineers should possess are as follows:

 Good technical knowledge of the project range (domain knowledge).
 Good programming abilities.
 Good communication skills. These skills comprise oral, written, and interpersonal skills.
 High motivation.
 Sound knowledge of the fundamentals of computer science.
 Intelligence.
 Ability to work in a team.
 Discipline.

Importance of Software Engineering

The importance of Software engineering is as follows:

1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering offers a good solution to reduce the complication of any project. It divides big problems into various small issues, which are then solved one by one. All these small problems are solved independently of each other.
2. To minimize software cost: Software development needs a lot of hard work, and software engineers are highly paid experts. A lot of manpower is required to develop software with a large amount of code. In software engineering, programmers plan everything and cut out the things that are not needed. In turn, the cost of software production becomes lower compared to software that does not use a software engineering method.
3. To decrease time: Anything that is not made according to a plan always wastes time. If you are making large software, you may need to run many versions of the code to get a definitive running version. This is a very time-consuming procedure, and if it is not well handled, it can take a lot of time. If you make your software according to a software engineering method, it will decrease the time considerably.
4. Handling big projects: Big projects are not done in a couple of days; they need lots of patience, planning, and management. Investing six or seven months of a company's time requires heaps of planning, direction, testing, and maintenance. No one can say that he has taken four months of a company's time and the project is still in its first stage, because the company has committed many resources to the plan and it should be completed. So to handle a big project without any problem, the company has to adopt a software engineering method.
5. Reliable software: Software should be reliable, meaning that once delivered, it should work for at least its given time or subscription period. If any bugs appear in the software, the company is responsible for fixing them. Because testing and maintenance are built into software engineering, there is no worry about its reliability.
6. Effectiveness: Effectiveness comes from building things according to standards. Meeting software standards is a big target for companies, so software becomes more effective in practice with the help of software engineering.

2. Software Processes

The term software refers to the set of computer programs, procedures and associated documents (flowcharts, manuals, etc.) that describe the programs and how they are to be used.

A software process is the set of activities and associated outcomes that produce a software product. Software engineers mostly carry out these activities. There are four key process activities, which are common to all software processes. These activities are:

1. Software specification: The functionality of the software and constraints on its operation must be defined.
2. Software development: The software that meets the requirements must be produced.
3. Software validation: The software must be validated to ensure that it does what the customer wants.
4. Software evolution: The software must evolve to meet changing client needs.

A software process, also known as a software development process or software engineering process, is a set of activities, methods, practices, and transformations that are used to develop and maintain software systems. The goal of a software process is to produce high-quality software that meets the requirements of the users or customers in a timely and cost-effective manner.

Key components of a software process include:

Requirement Analysis: Understanding and defining the needs and expectations of the
users or customers. This involves gathering and documenting requirements for the
software.

Design: Creating a blueprint or plan for the software system based on the
requirements. This phase involves architectural design, detailed design, and often
includes decisions about data structures, algorithms, and user interfaces.

Implementation or Coding: Writing the actual code for the software based on the design specifications. This is the phase where the software is built.

Testing: Verifying that the software behaves as intended and meets the specified requirements. Testing can include various levels such as unit testing, integration testing, system testing, and acceptance testing.

Deployment: Installing and configuring the software in the target environment. This may also involve creating documentation and training materials for end-users.

Maintenance: After deployment, the software requires ongoing maintenance to fix bugs, address issues, and implement updates or enhancements.

Software processes can be categorized into different models or methodologies, each with its own set of principles and practices. Some common software development methodologies include:

Waterfall Model: Sequential and linear, where each phase must be completed before moving on to the next.

Agile Model: Iterative and incremental, with a focus on flexibility and responsiveness to change. Agile methodologies include Scrum, Kanban, and Extreme Programming (XP).

Incremental Model: Similar to the waterfall model but divides the project into small, manageable parts, or increments, with each increment building upon the previous one.

Spiral Model: Combines elements of both the waterfall and iterative models, emphasizing risk assessment and adaptation to changes.

DevOps: An approach that emphasizes collaboration and communication between development and operations teams, with a focus on automation and continuous delivery.

3. Perspective and Specialized Process Models

3.1 Perspective Models


A perspective process model is a model that describes “how to do” software development according to a certain software process system.

It prescribes how a new software system should be developed. A perspective model is used as a guideline or framework to organize and structure how software development activities should be performed and in what order.

Perspective Models

o Waterfall Model
o Incremental Process Model
 Incremental Model
 RAD Model
o Evolutionary Model
 Prototyping
 Spiral model
 Concurrent model

3.2 Specialized Process Models


o Component-Based Development
o The Formal Methods Model
o Aspect-Oriented Software Development

3.1.1 Waterfall Model:

The Waterfall Model was the first Process Model to be introduced. It is also
referred to as a linear-sequential life cycle model. It is very simple to understand
and use. In a waterfall model, each phase must be completed before the next
phase can begin and there is no overlapping in the phases. The Waterfall model is
the earliest SDLC approach that was used for software development.

The waterfall Model illustrates the software development process in a linear


sequential flow. This means that any phase in the development process begins only
if the previous phase is complete. In this waterfall model, the phases do not
overlap.

The different phases of the waterfall model are:

Requirement Gathering and analysis: All possible requirements of the system to


be developed are captured in this phase and documented in a requirement
specification document.

System Design: The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and in defining the overall system architecture.

Implementation: With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next phase.
Each unit is developed and tested for its functionality, which is referred to as Unit
Testing.
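
As an illustration of unit testing, here is a minimal sketch in Java using JUnit 5. The FareCalculator class and its fare rule are hypothetical, invented only to show a single unit being tested in isolation before it is integrated with other units.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class FareCalculator {
        // The unit under test: computes a fare from a distance in km.
        int fare(int km) {
            if (km <= 0) throw new IllegalArgumentException("distance must be positive");
            return 10 + 5 * km;   // flat fee plus a per-km charge
        }
    }

    class FareCalculatorTest {
        @Test
        void normalDistanceIsCharged() {
            assertEquals(25, new FareCalculator().fare(3));   // 10 + 5*3
        }

        @Test
        void invalidDistanceIsRejected() {
            assertThrows(IllegalArgumentException.class,
                         () -> new FareCalculator().fare(-1));
        }
    }

Each unit is verified this way on its own; only after its tests pass is it integrated with the other units in the next phase.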

Integration and Testing: All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire
system is tested for any faults and failures.

Deployment of System: Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.

Maintenance: There are some issues which come up in the client environment. To
fix those issues, patches are released. Also to enhance the product some better
versions are released. Maintenance is done to deliver these changes in the customer
environment.
Advantages:

 Easy and simple to understand
 Phases are processed and completed one at a time
 It has clearly defined stages and milestones
 Tests are easily arranged

Disadvantages:

 Practically, no project follows a perfect sequential flow. A change realized at one stage causes confusion and re-tracking of the previous steps, all leading to loss of time and money.
 This model is not suitable for complex and object-oriented projects.
 This model cannot accommodate changing requirements.
 Customers see the software only on delivery; they need to wait from the requirement gathering stage until that time.
3.1.2.1 Incremental Model

The Incremental Model is a process of software development where requirements are divided into multiple standalone modules of the software development cycle. In this model, each module goes through the requirements, design, implementation and testing phases. Every subsequent release of a module adds function to the previous release. The process continues until the complete system is achieved.
The various phases of incremental model are as follows:

1. Requirement analysis: In the first phase of the incremental model, product analysis experts identify the requirements, and the system's functional requirements are understood by the requirement analysis team. This phase plays a crucial role in developing software under the incremental model.

2. Design & Development: In this phase of the incremental model of the SDLC, the design of the system functionality and the development method are completed successfully. Whenever new functionality is added, the incremental model goes through the design and development phase again.

3. Testing: In the incremental model, the testing phase checks the performance of each existing function as well as the additional functionality. In the testing phase, various methods are used to test the behaviour of each task.

4. Implementation: The implementation phase enables the coding of the development system. It involves the final coding of the design produced in the design and development phase and testing of the functionality from the testing phase. After completion of this phase, the working of the product is enhanced and upgraded up to the final system product.

When do we use the Incremental Model?

o When the major requirements are known up front.
o When a project has a lengthy development schedule.
o When the software team is not very well skilled or trained.
o When the customer demands a quick release of the product.
o When you can develop prioritized requirements first.

Advantage of Incremental Model

o Errors are easy to recognize.
o Easier to test and debug.
o More flexible.
o Simple to manage risk because it is handled during each iteration.
o The client gets important functionality early.

Disadvantage of Incremental Model

o Need for good planning


o Total Cost is high.
o Well defined module interfaces are needed.

3.1.2.2 RAD Model:

The RAD (Rapid Application Development) model is based on prototyping and iterative development with no specific planning involved. The process of writing the software itself involves the planning required for developing the product. RAD uses predefined prototyping techniques and tools to produce software applications.

Rapid Application Development focuses on gathering customer requirements through workshops or focus groups, early testing of the prototypes by the customer using an iterative concept, reuse of the existing prototypes (components), continuous integration and rapid delivery.

The RAD model distributes the analysis, design, build and test phases into a series of short, iterative development cycles.
The phases in the rapid application development (RAD) model are:
Business modelling: The information flow is identified between various business functions.
Data modelling: Information gathered from business modelling is used to define the data objects that are needed for the business.
Process modelling: The data objects defined in data modelling are converted to achieve the business information flow needed to achieve specific business objectives. Processing descriptions are identified and created for the CRUD (create, read, update, delete) operations on the data objects; a small sketch follows this list.
Application generation: Automated tools are used to convert the process models into code and the actual system.
Testing and turnover: Test the new components and all the interfaces.
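
To make the CRUD idea concrete, here is a minimal Java sketch of the four operations on a data object. The Customer type and repository interface are hypothetical, invented for illustration; they are not part of any particular RAD tool.

    // A data object identified during data modelling (hypothetical).
    record Customer(String id, String name) { }

    // The CRUD operations that process modelling describes for that object.
    interface CustomerRepository {
        void create(Customer c);      // Create a new customer record
        Customer read(String id);     // Read an existing record by its id
        void update(Customer c);      // Update a stored record
        void delete(String id);       // Delete a record by its id
    }

An automated tool in the application-generation phase would typically generate implementations of such interfaces.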
Advantages:

 Reduced development time.
 Increases reusability of components.
 Quick initial reviews occur.
 Encourages customer feedback.
 Large projects can be done easily.
 Modularity makes the development task easier.

Disadvantages:
 A proper time-frame has to be maintained by both the end customer and the developers for completing the system.

 RAD model-based software development fails because of a lack of commitment and dedication.

 A slight complexity in modularizing in the RAD model can lead to failure of the entire project.

 It requires highly skilled developers/designers.

3.1.3 Evolutionary Model

3.1.3.1 Prototyping model

Software prototyping refers to building a working model of software, i.e., application prototypes which display the functionality of the product under development but may not actually hold the exact logic of the original software. Software prototyping is becoming very popular as a software development model, as it enables developers to understand customer requirements at an early stage of development. It helps get valuable feedback from the customer and helps software designers and developers understand what exactly is expected from the product under development.

Prototyping is a process model which is used to develop software. The main purpose of the prototyping model is to satisfy the customer’s needs. To achieve this, developers implement a prototype and present it to the customer for evaluation. After evaluation, the customer suggests modifications to the prototype. The suggested modifications are then implemented in the prototype, and it is again presented to the customer for evaluation.

Software Prototyping Types:

Throwaway/Rapid Prototyping: Throwaway prototyping is also called rapid or close-ended prototyping. This type of prototyping uses very little effort and minimal requirement analysis to build a prototype. Once the actual requirements are understood, the prototype is discarded and the actual system is developed with a much clearer understanding of user requirements.

Evolutionary Prototyping: Evolutionary prototyping, also called breadboard prototyping, is based on building actual functional prototypes with minimal functionality in the beginning. The prototype developed forms the heart of the future prototypes, on top of which the entire system is built. In evolutionary prototyping, only well-understood requirements are included in the prototype, and further requirements are added as and when they are understood.

Incremental Prototyping: Incremental prototyping refers to building multiple functional prototypes of the various subsystems and then integrating all the available prototypes to form a complete system.

Extreme Prototyping: Extreme prototyping is used in the web development domain. It consists of three sequential phases. First, a basic prototype with all the existing pages is presented in HTML format. Then the data processing is simulated using a prototype services layer. Finally, the services are implemented and integrated into the final prototype. The name Extreme Prototyping draws attention to the second phase of the process, where a fully functional UI is developed with very little regard to the actual services.

The various phases of the prototyping model are:

Communication:

At this stage, the developers communicate with the customer to gather the customer’s requirements. The objectives of the software and the areas where the definitions are still unclear are outlined. The requirements which are clear and perfectly known are also outlined. Analyzing the customer requirements, the developers proceed to construct the prototype.
Construct Prototype:

While constructing the prototype, the developers establish objectives such as: what will be the use of the prototype? What features of the final system should the prototype reflect? It is taken into consideration that the cost of the developed prototype should be low and the speed of prototype development should be fast.

The speed and cost of the prototype are maintained by ignoring the requirements that have nothing to do with the customer’s interest. Generally, prototypes are developed based on the requirements of customer interest, like the user interface, unclear functions and so on.

Customer Evaluation:

Once the prototype of the final software is developed, it is demonstrated to the customer for evaluation. The customer evaluates the prototype against the requirements they specified in the communication phase. If the customers are satisfied, the developers start developing the complete version of the software. In case the customer is not satisfied with the prototype, they are expected to suggest modifications.

Iterate Prototype:

Based on the modifications suggested by the customer, the developers start modifying the prototype, and the modified prototype is again demonstrated to the customer for evaluation.

In this way, the prototype is iterated until the customer is satisfied with it. Once the customer is satisfied with the prototype, the developers get engaged in developing the complete version of the software.

Deploy Software:

Once the objective of the prototype is served, it is thrown away and the software is developed using other process models. The main objective of the prototype is to understand the customer’s requirements properly and completely.

As all the requirements are now understood, the developers develop the software and deliver it to the customer with the expectation that the developed software meets all the requirements specified by the customer.

Advantages:
It helps the developer to understand the certain and uncertain requirements of the customer.
It helps the customer to easily realize the required modifications before the final implementation of the system.
The customer does not have to wait long to see a working model of the final system.
Customer satisfaction is achieved.
This model is flexible in design.

Disadvantages:

It has poor documentation because of continuously changing customer requirements.
Customers sometimes demand the actual product to be delivered soon after seeing an early prototype.
There is uncertainty in determining the number of iterations.
There may be incomplete or inadequate problem analysis.
It may increase the complexity of the system.

3.1.3.2 Spiral Model


The spiral model is one of the most important Software Development Life Cycle models, which provides support for risk handling. In its diagrammatic representation, it looks like a spiral with many loops. The exact number of loops of the spiral is unknown and can vary from project to project. Each loop of the spiral is called a phase of the software development process. The exact number of phases needed to develop the product can be varied by the project manager depending upon the project risks. As the project manager dynamically determines the number of phases, the project manager has an important role in developing a product using the spiral model.
The radius of the spiral at any point represents the cost of the project so far, and the angular dimension represents the progress made so far in the current phase.
The different phases of the spiral model are:

Requirements gathering:

Requirements are gathered during the planning phase. Requirements like the ‘BRS’ (Business Requirement Specification) and the ‘SRS’ (System Requirement Specification) are prepared. All the needed requirements are collected from customers.

Risk Analysis:

In the risk analysis phase, a process is undertaken to identify risks and alternative solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found during the risk analysis, then alternate solutions are suggested and implemented.

Engineering:

In this phase the software is developed, along with testing at the end of the phase. Hence in this phase both development and testing are done.
Evaluation:
This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
Advantages:
Changing requirements can be accommodated
Requirements can be captured accurately.
Planning and estimation happens at each stage
Prototyping at each stage helps to reduce risk
Disadvantages:
 Management is more complex
 Process is complex
 Spiral may go indefinitely
 This model is not suitable for small low risk projects
 If the customer keeps changing requirements, the number of spirals increases and the software project manager may not be able to close the project at all.
Specialized Process Models
These models tend to be applied when a specialized or narrowly defined software engineering approach is chosen.
o Component-Based Development
Commercial off-the-shelf (COTS) software components, developed by vendors who offer them as products, provide targeted functionality with well-defined interfaces that enable the components to be integrated into the software that is to be built. The component-based development model incorporates many of the characteristics of the spiral model; it constructs applications from pre-packaged software components.
The model incorporates the following steps:
1. Available component-based products are researched and evaluated for the application domain in question.
2. Component integration issues are considered.
3. A software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality.
o The Formal Methods Model
This encompasses a set of activities that lead to a formal mathematical specification of computer software. Formal methods enable engineers to specify, develop and verify a computer-based system by applying a rigorous mathematical notation. A variation on this approach is called cleanroom software engineering. Developing a formal model is currently quite time consuming because few software developers have the necessary background to apply formal methods; extensive training is required, and it is difficult to use the model as a communication mechanism.
o Aspect-Oriented Software Development
It provides a process and methodological approach for defining, specifying, designing and constructing aspects – “mechanisms beyond subroutines and inheritance for localizing the expression of a crosscutting concern”. Common systemic aspects include user interfaces, collaborative work, distribution, persistency, memory management, transaction processing, security, integrity and so on.
4. Introduction to Agility

4.1 What is Agility?

o Effective (rapid and adaptive) response to change
o Effective communication among all stakeholders
o Drawing the customer onto the team
o Organizing a team so that it is in control of the work performed
o Rapid, incremental delivery of software

4.2 AGILITY PRINCIPLES:

1. Our highest priority is to satisfy the customer through early and continuous delivery of software.

2. Welcome changing requirements, even late in development.

3. Deliver working software frequently, with a preference to the shorter timescale.

4. Business people and developers must work together daily throughout the project.

5. Build projects around motivated individuals. Give them the environment and support
they need.

6. The most efficient and effective method of conveying information is face-to-face conversation.

7. Working software is the primary measure of progress.

8. Agile processes promote sustainable development.


5. AGILE METHODS
The agile method is an alternative to traditional software development methods like the waterfall model. This method is designed to overcome the disadvantages of the traditional implementation methods. Agile methodology is based on iterative and incremental development, where requirements and solutions evolve through collaboration between cross-functional teams. Every increment is called a “Sprint” and the team consists of both the customer (i.e., system user) and the contractor (i.e., system developer).

1. Frequent interaction: Every team member is motivated to interact with other team members very frequently.

2. Working software: Working software is more important than producing many types of documents.

3. Customer collaboration: The customer is put on the team so that requirements are defined and refined on a day-to-day basis.

4. Responding to change: Quick response to change and continuous development are emphasized.
Features of agile methods are:
 In the agile method, the feature requirements are divided into several small parts. Each part is developed in an iteration.
 Each iteration is taken as an easily manageable, short-term plan. The time taken to complete an iteration is called a “time-box”.
 The agile method prefers face-to-face communication over written documents. The team size is small and consists of only 5-9 members, which provides effective communication.
 Contact between team members may also be made through e-mail, video conferencing, telephone, etc.
 In the agile method, a customer representative is present to review the progress made, re-evaluate the requirements and provide suitable feedback to the development team.
 Agile methods usually follow pair programming, i.e., two programmers work together at one workstation: one person types the code and the other person reviews it. The two persons can change their roles every hour or so. This helps to reduce errors.
The advantages of agile development process are:
 Shorter Development Cycles
 Wider Market
 Early Customer Feedback
 Continuous Improvement
There are various agile approaches; the most popular among these are:
 Extreme Programming (XP)
 SCRUM
EXTREME PROGRAMMING (XP):
Extreme Programming is one of the agile software development methodologies. It provides values and principles to guide team behavior, and it supports software development practices that improve software quality by responding positively to changing customer requirements.
XP, an agile technology, introduces the concept of constant involvement of the customer with the development team and its manager. This way, the customer-programmer contact, and thereby their mutual understanding, improves. Only the essential documents are emphasized, such as the software source code, test plan and test data. The customer is involved in continual integration testing. Here, two programmers, a chief programmer and a technical assistant, take the responsibility.
Four core values are presented as the foundation of XP.
1. Communication and feedback:
- The best method of communication is the face-to-face method.
- Documentation is avoided.
2. Simplicity:
- The user’s requirements should be implemented in the simplest possible design. Complex methods should be avoided.
3. Responsibility:
- The developers are responsible for the quality of the software.
4. Courage:
- Try out new ideas, and if they don’t work out, scrap them.
Core Practices of XP:
 Planning exercise:
In XP, code is developed in iterations, periods of one to four weeks' duration, during which specific features of the software are created. These are called “releases”.
 Small releases:
The time between releases of functionality to the users should be short, i.e., it should be a month or two.
 Metaphor:
The system to be built will be software code that reflects things that exist and happen in the real world.
 Simple design:
The practical implementation of the value of simplicity that was described above.
 Testing:
Testing is done at the same time as coding. It should be done to check whether the expected results arrive for the test inputs. Testing is carried out using automated testing tools.
 Refactoring:
Modifying a part of the code as a result of making some change is called refactoring (a small sketch appears after this list). We have to ensure that no bug has been introduced due to refactoring.
 Pair programming:
All software code is written by pairs of developers, one actually doing the typing and the other observing. This is used to reduce errors.
 Collective ownership:
- This is really the corollary of pair programming.
- The team as a whole takes collective responsibility for the code in the system.
 Continuous integration:
- This is another aspect of the testing practice.
- As changes are made to software units, integration tests can be run regularly to ensure the correctness of components.
 Forty-hour weeks:
- Working excessive hours can lead to ill health and be generally counterproductive. The principle is that normally developers should not work more than 40 hours a week.
 On-site customers:
- Fast and effective communication with the users is achieved by having a user domain expert on-site with the developers.
 Coding standards:
- If code is genuinely to be shared, then there must be common, accepted coding standards to support the understanding and ease of modification of the code.
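
To illustrate refactoring, here is a minimal, hypothetical Java sketch: duplicated logic is extracted into a single helper method while the observable behaviour stays the same. The Invoice class, its methods and the discount rate are all invented for illustration.

    class Invoice {
        // Before refactoring, the 5% discount rule was duplicated in both
        // methods below. After refactoring, both call one shared helper,
        // so a change to the rule is made in exactly one place.
        double netPrice(double gross)        { return applyDiscount(gross); }
        double netPriceWithTax(double gross) { return applyDiscount(gross) * 1.18; }

        private double applyDiscount(double amount) {
            return amount * 0.95;   // single definition of the discount
        }
    }

Re-running the existing automated tests after such a change is how the team ensures that no bug has been introduced by the refactoring.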
Advantages
1. Customer-contractor understanding improves.
2. Internal group coordination is better
3. Continual testing improves software quality.
Limitations of XP:
The successful use of XP is based on certain conditions; if these do not exist, then its practice could be difficult. These conditions include the following:
 There must be easy access to users.
 Development staff need to be physically located in the same office.
 Large, complex systems may initially need significant architectural effort.
Scrum

In this model, projects are divided into small units of work. These are delivered over time-boxes which are called “sprints”. Each sprint takes only a couple of weeks to complete. At the end of each sprint, the progress of the project is analysed and suggestions are given to make improvements.
In the Scrum process, similar to the chief programmer approach, a chief architect defines in the initial phase:
 Overall architecture
 Release date
 Desired features
The next phases are called “sprints” (run at full speed for a short distance).
Sprints are carried out by groups, lasting one to four weeks, which develop the specific desired features of the product. Scrum teams work in parallel on different sprints and complete tasks on the same day. Progress on every sprint is reviewed every day in a 15-minute meeting to remove any bottlenecks.
There are three vital members in a scrum model. They are
1. Owner
2. Scrum master
3. Team member
Owner:
The product owner communicates the requirements of the customer to the development team and creates a prioritized wish list called the product backlog.
Scrum master:
The scrum master keeps the team focused on its goal and acts as an interface between the owner and the team.
Team member:
The team is responsible for the development of the project according to the backlog.
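
As a small illustration of the product backlog, here is a hypothetical Java sketch of a prioritized wish list from which the team pulls items for each sprint. The types, fields and stories are invented for illustration.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    record BacklogItem(int priority, String story) { }

    class ProductBacklog {
        // Lowest priority number = most important; taken into the next sprint first.
        private final PriorityQueue<BacklogItem> items =
            new PriorityQueue<>(Comparator.comparingInt(BacklogItem::priority));

        void add(BacklogItem item)  { items.add(item); }
        BacklogItem nextForSprint() { return items.poll(); }
    }

The owner reorders this list as customer needs change; the team commits only to the items it pulls into the current sprint.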
Advantage of Scrum Model:
 It is used to implement complex projects
 It can improve the team work and communication
 Productivity can be improved with daily meetings
 Product can be delivered in a scheduled time.
 Each person’s progress is visible on a day to day basis.
 Release date is predetermined & known to all
Disadvantages:
1. If the task is not well defined, the sprint process will take much time.
2. Team members should be well committed to the task. If they fail, the project will also fail.
3. Inexperienced team members may not be able to complete the project in time.
4. Regression testing should be conducted after each sprint to implement quality management.
UNIT II- REQUIREMENTS ANALYSIS AND SPECIFICATION

2.1. SOFTWARE REQUIREMENTS:


 The process of finding out, analyzing, documenting and checking the services required of a system and the constraints on it is called requirements engineering (RE).
 ‘User requirements’ means the high-level abstract requirements, and ‘system requirements’ means the detailed description of what the system should do.
1. User requirements are statements, in a natural language plus diagrams, of what services the system is expected to provide to system users and the constraints under which it must operate.
2. System requirements are more detailed descriptions of the software system’s functions, services, and operational constraints. The system requirements document (sometimes called a functional specification) should define exactly what is to be implemented. It may be part of the contract between the system buyer and the software developers.

Software system requirements are classified as functional requirements, non-functional requirements and domain requirements:
1. Functional requirements:
These are statements of services the system should provide, how the system should react to
particular inputs and how the system should behave in particular situations. In some cases, the
functional requirements may also explicitly state what the system should not do.
2. Non-functional requirements
These are constraints on the services or functions offered by the system. They include timing
constraints, constraints on the development process and standards. Non-functional requirements
often apply to the system as a whole. They do not usually just apply to individual system features or
services.
3. Domain requirements
These are requirements that come from the application domain of the system and that reflect
characteristics and constraints of that domain. They may be functional or non-functional
requirements
2.2. FUNCTIONAL REQUIREMENTS:
 The functional requirements for a system describe what the system should do. These
requirements depend on the type of software being developed, the expected users of the
software, and the general approach taken by the organization when writing requirements.
 When expressed as user requirements, functional requirements are usually described in an
abstract way that can be understood by system users.
 More specific functional system requirements describe the system functions, its inputs and
outputs, exceptions, etc., in detail.
 The functional requirements part discusses the functionalities required from the system. The
system is considered to perform a set of high level functions {fi}. The functional view of the
system is shown in fig. Each function fi of the system can be considered as a transformation
of a set of input data (ii) to the corresponding set of output data (oi).

output
Fig: View of a system performing a set of functions
 The user can get some meaningful piece of work done using a high-level function.
 The functional requirements specification of a system should be both complete and
consistent.
 Completeness means that all services required by the user should be defined.
 Consistency means that requirements should not have contradictory definitions.
 In practice, for large, complex systems, it is practically impossible to achieve requirements
consistency and completeness.
Reasons are:
1. It is easy to make mistakes and omissions when writing specifications for complex systems.
2. There are many stakeholders in a large system. A stakeholder is a person or role that is
affected by the system in some way. Stakeholders have different— and often inconsistent—
needs. These inconsistencies may not be obvious when the requirements are first specified,
so inconsistent requirements are included in the specification.
Identifying functional requirements from a problem description:
The high-level functional requirements often need to be identified either from an informal problem
description document or from a conceptual understanding of the problem. Each high-level
requirement characterizes a way of system usage by some user to perform some meaningful piece
of work. There can be many types of users of a system and their requirements from the system may
be very different. So, it is often useful to identify the different types of users
who might use the system and then try to identify the requirements from each user’s perspective.
Here we list all functions {fi} that the system performs. Each function fi is considered as a
transformation of a set of input data to some corresponding output data.
Example:
Consider the case of the library system, where:
F1: Search Book function (fig. 3.3)
Input: an author’s name
Output: details of the author’s books and the location of these books in the library

Fig: Book Function


So the function Search Book (F1) takes the author's name and transforms it into
book details.
Functional requirements actually describe a set of high-level requirements, where
each high-level requirement takes some data from the user and provides some
data to the user as an output. Also each high-level requirement might consist of
several other functions.
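
Viewed as code, a high-level function is just such an input-to-output transformation. Here is a minimal, hypothetical Java sketch of F1; the interface, record and field names are invented for illustration.

    import java.util.List;

    // F1: Search Book – transforms an input (author's name) into an output
    // (details of the author's books and their locations in the library).
    interface LibraryCatalogue {
        List<BookDetails> searchBook(String authorName);
    }

    record BookDetails(String title, String author, String shelfLocation) { }

A concrete implementation would query the library's data store, but the requirement itself is fully characterized by this input domain and output domain.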

Documenting functional requirements:


For documenting the functional requirements, we need to specify the set of functionalities supported by the system. A function can be specified by identifying the state at which the data is to be input to the system, its input data domain, the output data domain, and the type of processing to be carried out on the input data to obtain the output data. Let us first try to document the withdraw-cash function of an ATM (Automated Teller Machine) system. Withdraw-cash is a high-level requirement. It has several sub-requirements corresponding to the different user interactions. These different interaction sequences capture the different scenarios.

Example: Withdraw Cash from ATM

R1: withdraw cash
Description: The withdraw cash function first determines the type of account that the user has and
the account number from which the user wishes to withdraw cash. It checks the balance to
determine whether the requested amount is available in the account. If enough balance is available,
it outputs the required cash, otherwise it generates an error message.
R1.1 select withdraw amount option
Input: “withdraw amount” option
Output: user prompted to enter the account type
R1.2: select account type
Input: user option
Output: prompt to enter amount
R1.3: get required amount
Input: amount to be withdrawn in integer values greater than 100 and less than
10,000 in multiples of 100.
Output: The requested cash and printed transaction statement.
Processing: the amount is debited from the user’s account if sufficient balance is available; otherwise an error message is displayed.
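
The sub-requirements above map naturally onto code. Below is a minimal, hypothetical Java sketch of R1; the Account class and method names are invented, and the input domain check follows the R1.3 text literally.

    class Account {
        private int balance;
        Account(int openingBalance) { balance = openingBalance; }
        int balance()               { return balance; }
        void debit(int amount)      { balance -= amount; }
    }

    class Atm {
        // R1.3: get the required amount and process the withdrawal.
        String withdraw(Account account, int amount) {
            // Input domain per R1.3: greater than 100, less than 10,000,
            // in multiples of 100.
            if (amount <= 100 || amount >= 10000 || amount % 100 != 0)
                return "ERROR: invalid amount";
            if (account.balance() < amount)
                return "ERROR: insufficient balance";   // error message per R1
            account.debit(amount);                      // debit per R1.3 processing
            return "Dispensing " + amount + " and printing transaction statement";
        }
    }

The earlier sub-requirements R1.1 and R1.2 (selecting the withdraw option and the account type) would be handled by the user-interface layer that calls this method.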
2.3. NON-FUNCTIONAL REQUIREMENTS:
 Non-functional requirements are requirements that are not directly concerned with the
specific services delivered by the system to its users.
 They may relate to emergent system properties such as reliability, response time, and store
occupancy. Alternatively, they may define constraints on the system implementation such as
the capabilities of I/O devices or the data representations used in interfaces with other
systems.
 Non-functional requirements, such as performance, security, or availability, usually specify
or constrain characteristics of the system as a whole.
 Non-functional requirements are often more critical than individual functional requirements: failing to meet a non-functional requirement can mean that the whole system is unusable.
Example:
 If an aircraft system does not meet its reliability requirements, it will not be certified as safe
for operation; if an embedded control system fails to meet its performance requirements, the
control functions will not operate correctly.
 Although it is often possible to identify which system components implement specific
functional requirements, it is often more difficult to relate components to non-functional
requirements.

Reasons are:
1. Non-functional requirements may affect the overall architecture of a system rather than the
individual components.
2. A single non-functional requirement, such as a security requirement, may generate a number
of related functional requirements that define new system services that are required.
Non-functional requirements arise through user needs, because of budget constraints, organizational
policies, the need for interoperability with other software or hardware systems, or external factors
such as safety regulations or privacy legislation.
Classifications of non-functional requirements are
1. Product requirements:
 These requirements specify or constrain the behavior of the software.
 Examples include performance requirements on how fast the system must execute
and how much memory it requires, reliability requirements that set out the
acceptable failure rate, security requirements, and usability requirements.
2. Organizational requirements
 These requirements are broad system requirements derived from policies and
procedures in the customer’s and developer’s organization.
 Examples include operational process requirements that define how the system will
be used, development process requirements that specify the programming language,
process standards to be used, and environmental requirements that specify the operating
environment of the system.
3. External requirements:
 This broad heading covers all requirements that are derived from factors external to the system and its development process.
 Regulatory requirements set out what must be done for the system to be approved for use by a regulator, such as a central bank;
 Legislative requirements that must be followed to ensure that the system operates within the law;
 Ethical requirements that ensure that the system will be acceptable to its users and the general public.

Metrics for specifying non-functional requirements

Non-functional requirements can be stated quantitatively using measurable properties, for example:
Speed – processed transactions per second; user/event response time; screen refresh time.
Size – megabytes of memory required.
Ease of use – training time; number of help frames.
Reliability – mean time to failure; probability of unavailability; rate of failure occurrence.
Robustness – time to restart after failure; percentage of events causing failure.
Portability – percentage of target-dependent statements; number of target systems.

Identifying non-functional requirements:

Non-functional requirements are the characteristics of the system which cannot be expressed as functions, such as the maintainability of the system, portability of the system, usability of the system, etc.
Nonfunctional requirements may include:
# reliability issues,
# performance issues,
# human - computer interface issues,
# interface with other external systems,
# security and maintainability of the system, etc.

2.4. DOMAIN REQUIREMENTS:


Domain requirements are derived from the application domain of the system rather than from the specific needs of system users.
 They usually include specialised domain terminology or reference to domain concepts. They
may be new functional requirements in their own right, constrain existing functional
requirements or set out how particular computations must be carried out.
 Because these requirements are specialised, software engineers often find it difficult to
understand how they are related to other system requirements.

 Domain requirements are important because they often reflect fundamentals of the
application domain. If these requirements are not satisfied, it may be impossible to make the
system work satisfactorily.

2.5. USER REQUIREMENTS:

 The user requirements for a system should describe the functional and non-functional requirements so that they are understandable by system users without detailed technical knowledge.
 They should only specify the external behavior of the system and should avoid system design characteristics.
 Consequently, if you are writing user requirements, you should not use software jargon,
structured notations or formal notations, or describe the requirement by describing the
system implementation.
 User requirements are written in simple language, with simple tables and forms and intuitive
diagrams.

However, various problems can arise when requirements are written in natural language sentences
in a text document:
1. Lack of clarity: It is sometimes difficult to use language in a precise and unambiguous
way without making the document wordy and difficult to read.
2. Requirements confusion: Functional requirements, non-functional requirements, system
goals and design information may not be clearly distinguished.
3. Requirements amalgamation: Several different requirements may be expressed together
as a single requirement.
It is good practice to separate user requirements from more detailed system requirements in
a requirements document. Otherwise, non-technical readers of the user requirements may be
overwhelmed by details that are really only relevant for technicians.

Guidelines to minimize misunderstandings when writing user requirements are:


1. Invent a standard format and ensure that all requirement definitions adhere to that format.
2. Use language consistently.
3. Use text highlighting (bold, italic or colour) to pick out key parts of the requirement.
4. Avoid the use of computer jargon.

2.6. SYSTEM REQUIREMENTS:


 System requirements are expanded versions of the user requirements that are used by
software engineers as the starting point for the system design. They add detail and explain
how the user requirements should be provided by the system.
 They may be used as part of the contract for the implementation of the system and should
therefore be a complete and consistent specification of the whole system.
 The system requirements should simply describe the external behavior of the system and its
operational constraints. They should not be concerned with how the system should be
designed or implemented.
There are several reasons for this:
1. It may be necessary to design an initial architecture of the system to help structure the requirements specification. The system requirements are then organized according to the different sub-systems that make up the system.
2. In most cases, systems must interoperate with other existing systems. These constrain the design, and these constraints impose requirements on the new system.
3. The use of a specific architecture to satisfy non-functional requirements may be necessary.
4. An external regulator who needs to certify that the system is safe may specify that an architectural design that has already been certified be used.

 Natural language is often used to write system requirements specifications as well as user
requirements.

 However, because system requirements are more detailed than user requirements,
natural language specifications can be confusing and hard to understand:

1. Natural language understanding relies on the specification readers and writers using the
same words for the same concept.
2. A natural language requirements specification is over-flexible.
3. There is no easy way to modularize natural language requirements.
Because of these problems, requirements specifications written in natural language are prone to misunderstandings. These are often not discovered until later phases of the software process and may then be very expensive to resolve.
REQUIREMENT ENGINEERING PROCESS
 Requirements engineering (RE) refers to the process of defining, documenting and
maintaining requirements.
 Requirements engineering emphasizes the use of systematic and repeatable techniques that
ensure the completeness, consistency, and relevance of the system requirements.
 The goal of the requirements engineering process is to create and maintain a system
requirements document.
Requirements engineering process includes four sub-processes.
1) Feasibility study: Assessing whether the system is useful to the business.
2) Elicitation and analysis :
 Requirements elicitation is the process of discovering, reviewing, documenting, and understanding the user's needs and constraints for the system.
 Requirements analysis is the process of refining the user's needs and constraints.
3) Specification: Converting these requirements into some standard form. It is the process of documenting the user's needs and constraints clearly and precisely.
4) Validation: Checking that the requirements actually define the system that the customer wants.

 Figure illustrates the relationship between the activities. It also shows the documents
produced at each stage of the requirements engineering process.
 The activities are concerned with the discovery, documentation and checking of
requirements.
 In all systems, requirements normally change frequently. Reasons for changing requirements:
o The people involved develop a better understanding of what they want the software to do;
o The organisation buying the system changes;
o Modifications are made to the system’s hardware, software and organisational environment.
 The process of managing these changing requirements is called requirements management.

Figure: Spiral model of requirements engineering process


Spiral model of requirements engineering process
 An alternative perspective on the requirements engineering process is the spiral model of requirements engineering. This presents the process as a three-stage activity where the activities are organized as an iterative process around a spiral.
 The amount of time and effort devoted to each activity in an iteration depends on the stage of the overall process and the type of system being developed.
 Early in the process, most effort will be spent on understanding high-level business and non-
functional requirements and the user requirements.

 Later in the process, in the outer rings of the spiral, more effort will be devoted to system
requirements engineering and system modeling.
 This spiral model accommodates approaches to development in which the requirements are
developed to different levels of detail. The number of iterations around the spiral can vary,
so the spiral can be exited after some or all of the user requirements have been elicited.
 If the prototyping activity shown under requirements validation is extended to include
iterative development, this model allows the requirements and the system implementation to
be developed together.
2.9. FEASIBILITY STUDIES:
For all new systems, the requirements engineering process should start with a feasibility study. The
input to the feasibility study is:
 A set of preliminary business requirements, an outline description of the system and how the
system is intended to support business processes.
The results of the feasibility study should be
 A report that recommends whether or not it is worth carrying on with the requirements
engineering and system development process.

A feasibility study is a short, focused study that aims to answer a number of questions:
1. Does the system contribute to the overall objectives of the organisation?
2. Can the system be implemented using current technology and within given cost and schedule
constraints?
3. Can the system be integrated with other systems which are already in place?

Carrying out a feasibility study involves 3 activities


1) Information assessment
2) Information collection
3) Report writing.

1) Information assessment:
 The information assessment phase identifies the information that is required to answer the
three questions set out above.
 Once the information has been identified, talk with the information sources to discover
the answers to these questions.

Some examples of possible questions that may be put are:


a) How would the organization cope if this system were not implemented?
b) What are the problems with current processes and how would a new system help alleviate
these problems?
c) What direct contribution will the system make to the business objectives and requirements?
d) Can information be transferred to and from other organizational systems?
e) Does the system require technology that has not previously been used in the organization?
f) What must be supported by the system and what need not be supported?
2) Information collection :
 Consult with information sources such as the managers of the departments where the system
will be used, software engineers who are familiar with the type of system that is proposed,
technology experts and end-users of the system.
 Feasibility study should be completed in two or three weeks.

3) Report writing:
 Once information is collected, write the feasibility study report. Report can contain a
recommendation about whether or not the system development should continue.

 Report can propose changes to the scope, budget and schedule of the system and suggest
further high-level requirements for the system.
2.10 REQUIREMENTS ELICITATION AND ANALYSIS
 Software engineers work with customers and system end-users to find out about the
application domain, what services the system should provide, the required performance of
the system, hardware constraints, and so on.
 Requirements elicitation and analysis may involve a variety of people in an organisation.
 The term stakeholder is used to refer to any person or group who will be affected by
the system, directly or indirectly.
o Stakeholders include end-users who interact with the system and everyone else in an
organization that may be affected by its installation.
o Other system stakeholders may be engineers who are developing or maintaining
related systems, business managers, domain experts and trade union representatives.

Eliciting and understanding stakeholder requirements is difficult for several reasons:


1) Stakeholders often don’t know what they want from the computer system except in the most general terms.
2) Stakeholders naturally express requirements in their own terms and with implicit knowledge
of their own work.
3) Different stakeholders have different requirements.
4) Political factors may influence the requirements of the system

Figure. The requirements elicitation and analysis process

 The activities are interleaved as the process proceeds from the inner to the outer rings of the
spiral.
The process activities are:
1. Requirements discovery:
This is the process of interacting with stakeholders in the system to collect their
requirements. Domain requirements from stakeholders and documentation are also
discovered during this activity.
2. Requirements classification and organization :
This activity takes the unstructured collection of requirements, groups related requirements
and organizes them into coherent clusters.
3. Requirements prioritization and negotiation:
Inevitably, where multiple stakeholders are involved, requirements will conflict. This
activity is concerned with prioritizing requirements, and finding and resolving requirements
conflicts through negotiation.
4. Requirements documentation:
The requirements are documented and input into the next round of the spiral. Formal or
informal requirements documents may be produced.

2.10.1 Requirements discovery:


Requirements discovery is the process of gathering information about the proposed and existing
systems and distilling the user and system requirements from this information.
Sources of information during the requirements discovery phase include
 documentation,
 System stakeholders and
 Specifications of similar systems.
Interact with stakeholders through interviews and observation, and may use scenarios and
prototypes to help with the requirements discovery.
Stakeholders range from system end-users through managers and external stakeholders such as
regulators who certify the acceptability of the system.

For example: 1. System stakeholders for a bank ATM include:


a) Current bank customers who receive services from the system
b) Representatives from other banks who have reciprocal agreements that allow each other’s
ATMs to be used
c) Managers of bank branches who obtain management information from the system
d) Counter staff at bank branches who are involved in the day-to-day running of the system
e) Database administrators who are responsible for integrating the system with the bank’s
customer database
f) Bank security managers who must ensure that the system will not pose a security hazard
g) The bank’s marketing department who are likely to be interested in using the system as a
means of marketing the bank
h) Hardware and software maintenance engineers who are responsible for maintaining and
upgrading the hardware and software
i) National banking regulators who are responsible for ensuring that the system conforms to
banking regulations
For example: 2. System stakeholders for the mental healthcare patient information system
include:
1. Patients whose information is recorded in the system.
2. Doctors who are responsible for assessing and treating patients.
3. Nurses who coordinate the consultations with doctors and administer some treatments.
4. Medical receptionists who manage patients’ appointments.
5. IT staff who are responsible for installing and maintaining the system.
6. A medical ethics manager who must ensure that the system meets current ethical
guidelines for patient care.
7. Healthcare managers who obtain management information from the system.
8. Medical records staff who are responsible for ensuring that system information can be
maintained and preserved, and that record keeping procedures have been properly implemented.

In addition to system stakeholders, requirements may come from the application domain and
from other systems that interact with the system being specified. All of these must be considered
during the requirements elicitation process.
Techniques used for requirements discovery are
1) Viewpoint
2) Interviewing
3) Scenarios
4) Ethnography

1) Viewpoints:
 The requirements sources (stakeholders, domain, systems) can all be represented as system
viewpoints, where each viewpoint presents a sub-set of the requirements for the system.
 Each viewpoint provides a fresh perspective on the system, but these perspectives are not
completely independent—they usually overlap so that they have common requirements.
 A key strength of viewpoint-oriented analysis is that it recognizes multiple perspectives and
provides a framework for discovering conflicts in the requirements proposed by different
stakeholders.
 Viewpoints can be used as a way of classifying stakeholders and other sources of
requirements.
Three generic types of viewpoint are
a) Interactor viewpoints: It represents people or other systems that interact directly with the
system. In the bank ATM system, examples of interactor viewpoints are the bank’s
customers and the bank’s account database.
b) Indirect viewpoints: It represents stakeholders who do not use the system themselves but
who influence the requirements in some way. In the bank ATM system, examples of
indirect viewpoints are the management of the bank and the bank security staff.
c) Domain viewpoints: It represents domain characteristics and constraints that influence the
system requirements. In the bank ATM system, an example of a domain viewpoint would be
the standards that have been developed for interbank communications.
 Interactor viewpoints provide detailed system requirements covering the system features
and interfaces.
 Indirect viewpoints are more likely to provide higher-level organizational requirements and
constraints.
 Domain viewpoints normally provide domain constraints that apply to the system.

Figure.Viewpoints in LIBSYS

Engineering viewpoints may be important for two reasons.


a) The engineers developing the system may have experience with similar systems and may be
able to suggest requirements from that experience.
b) Technical staff who have to manage and maintain the system may have requirements that
will help simplify system support.
Web-based systems must present a favourable image of the organisation as well as deliver
functionality to the user. For software products, the marketing department should know what
system features will make the system more marketable to potential buyers.
Viewpoints in the same branch are likely to share common requirements.
Once viewpoints have been identified and structured, try to identify the most important viewpoints
and start with them when discovering system requirements.

2) Interviewing:
Formal or informal interviews with system stakeholders are part of most requirements engineering
processes.
In these interviews, the requirements engineering team puts questions to stakeholders about the
system that they use and the system to be developed. Requirements are derived from the
answers to these questions.
Interviews may be of two types:
(1) Closed interviews where the stakeholder answers a predefined set of questions.
(2) Open interviews where there is no predefined agenda.
Interviews are good for getting an overall understanding of what stakeholders do, how they
might interact with the system and the difficulties that they face with current systems.
People like talking about their work and are usually happy to get involved in interviews.
However, interviews are not so good for understanding the requirements from the application
domain.
It is hard to elicit domain knowledge during interviews for two reasons:
(1) All application specialists use terminology and jargon that is specific to a domain.
(2) Some domain knowledge is so familiar to stakeholders that they either find it difficult to
explain or they think it is so fundamental that it isn’t worth mentioning.
Two characteristics of Effective interviewers:
(1) They are open-minded, avoid preconceived ideas about the requirements and are willing
to listen to stakeholders. If the stakeholder comes up with surprising requirements, they are
willing to change their mind about the system.
(2) They prompt the interviewee to start discussions with a question, a requirements
proposal or by suggesting working together on a prototype system. Saying to people ‘tell me
what you want’ is unlikely to result in useful information. Most people find it much easier to
talk in a defined context rather than in general terms.
Interviews should be used alongside other requirements elicitation techniques.
3) Scenarios:
Scenarios can be particularly useful for adding detail to an outline requirements description. They
are descriptions of example interaction sessions.
Each scenario covers one or more possible interactions. Several forms of scenarios have been
developed, each of which provides different types of information at different levels of detail
about the system.
The scenario starts with an outline of the interaction, and, during elicitation, details are added to
create a complete description of that interaction.
A scenario may include:
1. A description of what the system and users expect when the scenario starts
2. A description of the normal flow of events in the scenario
3. A description of what can go wrong and how this is handled
4. Information about other activities that might be going on at the same time
5. A description of the system state when the scenario finishes.
Figure. Scenario for article downloading in LIBSYS

Use-cases:

Figure . A simple use-case for article printing


Use-cases are a scenario-based technique for requirements elicitation which were first introduced in
the Objectory method. They have now become a fundamental feature of the UML notation for
describing object-oriented system models. A use-case identifies the type of interaction and the
actors involved.
Figure . Use cases for the library system
Actors in the process are represented as stick figures, and each class of interaction is
represented as a named ellipse.
The set of use-cases represents all of the possible interactions to be represented in the
system requirements.
Use-cases identify the individual interactions with the system. They can be
documented with text or linked to UML (Unified Modelling Language) models that
develop the scenario in more detail.

Sequence diagrams:
Sequence diagrams are often used to add information to a use-case. These sequence
diagrams show the actors involved in the interaction, the objects they interact with and
the operations associated with these objects.

Figure. System interactions for article printing

Essentially, a user request for an article triggers a request for a copyright form. Once
the user has completed the form, the article is downloaded and sent to the printer. Once
printing is complete, the article is deleted from the LIBSYS workspace.
Scenarios and use-cases are effective techniques for eliciting requirements for interactor
viewpoints, where each type of interaction can be represented as a usecase.
They can also be used in conjunction with some indirect viewpoints where these
viewpoints receive some results from the system.
Drawbacks:
They are not as effective for eliciting constraints or high-level business and non-functional requirements from indirect viewpoints or for discovering domain requirements.
4) Ethnography:
Ethnography is an observational technique that can be used to understand social and organizational requirements.
An analyst immerses him or herself in the working environment where the system will
be used. He or she observes the day-to-day work and notes made of the actual tasks in
which participants are involved.
The value of ethnography is that it helps analysts discover implicit system requirements
that reflect the actual rather than the formal processes in which people are involved.
Social and organizational factors that affect the work but that are not obvious to
individuals may only become clear when noticed by an unbiased observer.
Ethnography is particularly effective at discovering two types of requirements:
1. Requirements that are derived from the way in which people actually work rather than
the way in which process definitions say they ought to work.
2. Requirements that are derived from cooperation and awareness of other people’s activities.

Figure. Ethnography and prototyping for requirements


Ethnography may be combined with prototyping. The ethnography informs the development of the prototype so that fewer prototype refinement cycles are required.
Furthermore, the prototyping focuses the ethnography by identifying problems and
questions that can then be discussed with the ethnographer.
Ethnographic studies can reveal critical process details that are often missed by other
requirements elicitation techniques.
Drawbacks:
(1) This approach is not appropriate for discovering organisational or domain requirements.
(2) Ethnographic studies cannot always identify new features that should be added to a system.
(3) Ethnography is not a complete approach to elicitation on its own, and it should be used to complement other approaches, such as use-case analysis.

2.6 Software Requirement Specification

The software requirements document is the specification of the system. It includes both a definition and a specification of requirements. It is not a design document. As far as possible, it should set out what the system should do rather than how it should do it.

Software Requirements Specification

The software requirements provide a basis for creating the Software Requirements Specification (SRS).
The SRS is useful in estimating cost, planning team activities, performing tasks, and tracking the team's progress throughout the development activity.
Typically software designers use IEEE STD 830-1998 as the basis for writing the Software Requirements Specification. The standard template for writing an SRS is given below.

Document Title

Author(s)
Affiliation
Address
Date
Document Version

1. Introduction
1.1 Purpose of this document
Describes the purpose of the document.
1.2 Scope of this document
Describes the scope of this requirements definition effort. This section also describes any constraints that were placed upon the requirements elicitation process, such as schedules and costs.
1.3 Overview
Provides a brief overview of the product defined as a result of the requirements elicitation process.
2. General description
 Describes the general functionality of the product such as similar system information,
user characteristics, user objective, general constraints placed on design team.
 Describes the features of the user community, including their expected expertise with
software systems and the application domain.
3. Functional requirements
This section lists the functional requirements in ranked order. A functional requirement
describes the possible effects of a software system, in other words, what the system
must accomplish. Each functional requirement should be specified in the following manner:
 Short, imperative sentence stating the highest ranked functional requirement.
1. Description
A full description of the requirement.
2. Criticality
Describes how essential this requirement is to the overall system.
3. Technical issues
Describes any design or implementation issues involved in satisfying this requirement.
4. Cost and schedule
Describes the relative or absolute costs of the system.
5. Risks
Describes the circumstances under which this requirement might not be able to be satisfied.
6. Dependencies with other requirements
Describes interactions with other requirements.
7. Any other appropriate information

4. Interface requirements
This section describes how the software interfaces with other software products or users
for input or output. Examples of such interfaces include library routines, token.
streams, shared memory, data streams, and so forth.

4.1 User Interfaces
4.1.1 GUI
Describes how this product interfaces with the user, including the graphical user interface if present. This section should include a set of screen dumps to illustrate user interface features.
4.1.2 CLI
Describes the command-line interface if present. For each command, a description of all arguments and example values and invocations should be provided.
4.1.3 API
Describes the application programming interface, if present.
4.2 Hardware Interfaces
Describes interfaces to hardware devices
4.3 Communications Interfaces
Describes network
interfaces
4.4 Software Interfaces
Describes any remaining software interfaces not included above.
5. Performance requirements
Specifies speed and memory requirements
6. Design constraints
Specifies any constraints for the design team such as software or hardware limitation
7. Other non-functional attributes
Specifies any other particular non-functional attributes required by the system, such as:

7.1 Security
7.2 Binary Compatibility
7.3 Reliability
7.4 Maintainability
7.5 Portability
7.6 Extensibility
7.7 Reusability
7.8 Application Compatibility
7.9 Resource Utilization
7.10 Serviceability
... others as appropriate

8. Operational scenarios
This section should describe a set of scenarios that illustrate, from the user's perspective,
what will be experienced when utilizing the system under various situations.
9. Preliminary schedule
This section provides an initial version of the project plan, including the major tasks to be
accomplished, their interdependencies, and their tentative start/stop dates
10. Preliminary budget
This section provides an initial budget for the project.
11. Appendices
11.1 Definitions, Acronyms, Abbreviations
Provides definitions of terms, acronyms, and abbreviations used in the document.
11.2 References
Provides complete citations to all documents and meetings referenced.

2.6.1 Characteristics of SRS


Various characteristics of SRS are
 Correct - The SRS should be kept up to date as appropriate requirements are identified.
 Unambiguous - Only when the requirements are correctly understood is it possible to write an unambiguous SRS.
 Complete - To make the SRS complete, everything the software is required to do should be specified.
 Consistent - It should be consistent with reference to the functionalities identified.
 Specific - The requirements should be stated specifically.
 Traceable - The origin and purpose of each requirement should be correctly identified.

2.7 Formal system specification

A formal system specification for software requirements is an unambiguous description of the software system. In a formal system specification, formal methods are used to represent the requirements of the system.
Formal methods provide us with tools to precisely describe a system and show that a system is correctly implemented. We say a system is correctly implemented when it satisfies its given specification.
The specification of a system can be given either as a list of its desirable properties (property-oriented approach) or as an abstract model of the system (model-oriented approach).

2.7.1 Concept of Formal Technique

A formal technique is a mathematical method used to specify a hardware or software system.
Formal methods verify that the implementation satisfies the requirements specification; they prove the properties of the system without necessarily running the system.
A formal specification language consists of a syntactic domain, a semantic domain and a relationship called the satisfaction relation.
Formal techniques can be used at every stage of the software development life cycle (requirement specification, design, coding and implementation) to verify that one stage's output conforms to the previous stage's output.

Semantic domain

Abstract Data Type(ADT) specification languages are used to specify algebras and
programs
The programming languages are used to specify functions from input to output values.
The distributed system specification languages are used to specify state sequences, event sequences, state transition sequences and finite state machines.

Syntactic domain

The syntactic domain of a formal specification language consists of alphabets (symbols) and a set of rules.
These rules are used to construct well-formed formulas from the alphabets. These well-formed formulas are used to specify the system.

Satisfaction relation
For any model of a system, it is important to determine whether elements of the semantic domain satisfy its specification. This satisfaction is determined by a function known as the semantic abstraction function. The semantic abstraction function maps the elements of the semantic domain to equivalence classes.
There are different specifications that are used to describe different aspects of the system, for example, a specification describing system behavior or a specification describing the system structure.

2.7.2 Merits and Limitations of Formal Methods

Merits

1) Formal methods provide a precise and unambiguous way to describe the behavior and
properties of a software system
2) Formal methods can be used to prove the correctness of a software system concerning its
specification. This is particularly valuable for safety-critical and mission-critical systems
3) Formal specification can serve as a clear and comprehensive documentation of system
requirements
4) The formal methods can identify errors in the specification itself. This allows corrections
before implementation.
5) The formal methods allow rigorous analysis of complex systems. Hence formal methods
promote the construction of rigorous specification of the system
6) The mathematical basis of formal methods facilitates automating the analysis of
specifications. The possibility of automatic verification is one of the most important
advantages of formal methods

Limitations

1) Formal methods are difficult to learn and use.


2) It is difficult to check the absolute correctness of systems using theorem- proving
techniques.
3) Formal techniques are not able to handle complex problems.
4) It can be challenging to create complete formal specifications that capture all aspects of a
real-world system.

2.7.3 Model-Oriented Approach and Property-Oriented Approach

• Formal methods are classified using two approaches - the model-oriented approach and the property-oriented approach.
Formal methods
o Model-oriented approach
o Property-oriented approach

In the model-oriented approach, the system's behavior is represented by directly constructing a model of the system with the help of mathematical structures such as tuples, relations, functions, sets and sequences.
In the property-oriented approach, the system's behaviour is defined indirectly by stating its properties. These properties are specified in terms of a set of axioms.
The property-oriented approach is more suitable for requirements specification and the model-oriented approach is more suitable for system design specification.
The property-oriented approach is classified into two categories - axiomatic specification and algebraic specification.

Property oriented approach

Axiomatic specification
Algebraic specification

2.7.4 Axiomatic Specification

 The axiomatic specification is based on formal logic or predicate calculus.


 The main purpose of this type of semantic method is formal program verification
 Axioms or inference rules are defined for each statement type in the language to allow
transformations of logic expressions into more formal logic expressions
 An Axiom is a logical statement that is assumed to be true.
 An Inference Rule is a method of inferring the truth of one assertion based on the values of other
assertions.

S1, S2, S3, …, Sn / S

The rule states that if S1, S2, …, Sn are true, then the truth of S can be inferred.
The top part of an inference rule is called its antecedent; the bottom part is called its consequent.

Concept of assertion
The logic expressions are called assertions
An assertion before the statement or command is called a precondition. This condition
states the relationships and constraints among variables that are true at that point in execution.
An assertion following a statement is called postcondition
<precondition> statement <postcondition>

The axiomatic semantics is specified in the following manner:

{P} statement {Q}

where P is the precondition and Q is the postcondition. For example, for the statement

x = y + 1

one possible precondition is {y > 10}.

Axiomatic specification for assignment statement

The precondition and postcondition of an assignment statement together specify precisely its meaning.
Let x := E be a general assignment statement and N be its postcondition. Then its precondition, M, is defined by the axiom

M = N(x → E)

That means M is computed as N with all instances of x replaced by the expression E.
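As a worked illustration of this axiom (an example added here, built from the x = y + 1 statement above): take the postcondition N = {x > 11}. Replacing every instance of x in N by the expression y + 1 gives the precondition M = {y + 1 > 11}, which simplifies to {y > 10}, matching the precondition quoted earlier. In triple form:

{y > 10} x := y + 1 {x > 11}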

How to develop axiomatic specifications ?

Step 1: Establish the range of input values over which the function should behave correctly

Step 2: Establish the constraints on the input parameters as a predicate

Step 3: Specify a predicate defining the condition which must hold on the output of the
function

Step 4: Establish the changes made to the function's input parameters after executing the function

Step 5: Combine all of the above into pre and post-conditions of the function
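
As a brief illustration of these steps (a hypothetical example, not from the notes), consider specifying a square-root function sqrt:

Steps 1 and 2 (input range and constraints): the input a must be a non-negative real number, giving the precondition {a >= 0}.
Step 3 (condition on the output): the result r must satisfy r * r = a and r >= 0 (within the required numerical tolerance).
Step 4 (changes to input parameters): a is not modified by the function.
Step 5 (combined pre- and post-conditions): {a >= 0} r := sqrt(a) {r * r = a and r >= 0}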

2.7.5 Algebraic Specification

 In the algebraic specification technique, an object class or type is specified in terms of relationships that exist between the operations defined on that type.
 The algebraic specifications define a system as a heterogeneous algebra. A heterogeneous algebra is a collection of different sets on which several operations are defined. Traditional algebras are homogeneous; a homogeneous algebra consists of a single set and several operations.

 The algebraic specification consists of four sections-

1. Type section: In this section, the data types being used are specified.
2. Exception section: In this section, exceptional conditions that may occur during the
operations are defined.
3. Syntax section: This section defines the signatures of the interface procedures. The collection of sets that form the input domain of an operator and the set where the output is produced is called the signature of the operator.
4. Equations section: This section specifies the set of equations or rewrite rules. These rules define the meaning of the interface procedures.

The algebraic specification consists of the following set of operators-

1. Basic construction operators: These operators are used to create or modify entities of a type. For example, create and append are basic construction operators.
2. Extra construction operators: These are construction operators other than the basic construction operators. For example, remove is an extra construction operator.
3. Basic inspection operators: These operators evaluate attributes of a type without modifying them. For example, eval and get are basic inspection operators.
4. Extra inspection operators: These are inspection operators other than the basic inspection operators.

Example of algebraic specification

Following is an algebraic specification that represents Cartesian coordinates. The operations X and Y evaluate the x and y attributes of an entity; IsEq compares two entities for equality.

Types section
defines Coord
uses Integer, Boolean

Syntax section
Create (Integer, Integer) → Coord;
X (Coord) → Integer;
Y (Coord) → Integer;
IsEq (Coord, Coord) → Boolean.

Equations section
X (Create (x, y)) = x
Y (Create (x, y)) = y
IsEq (Create (x1, y1), Create (x2, y2)) = ((x1 = x2) and (y1 = y2))
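To make the specification concrete, here is a minimal Python sketch (added for illustration; the class and method names are our own) that implements the Coord type and checks that it satisfies the equations above:

# Minimal sketch of the Coord algebraic specification in Python.
class Coord:
    def __init__(self, x, y):          # Create : (Integer, Integer) -> Coord
        self._x = x
        self._y = y
    def x(self):                       # X : Coord -> Integer
        return self._x
    def y(self):                       # Y : Coord -> Integer
        return self._y
    def is_eq(self, other):            # IsEq : (Coord, Coord) -> Boolean
        return self._x == other._x and self._y == other._y

# Checking the equations section:
c = Coord(3, 4)
assert c.x() == 3                            # X(Create(x, y)) = x
assert c.y() == 4                            # Y(Create(x, y)) = y
assert Coord(3, 4).is_eq(Coord(3, 4))        # equal coordinates
assert not Coord(3, 4).is_eq(Coord(3, 5))    # unequal coordinates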
Properties of algebraic specification

1) Finite Termination Property: This property of an algebraic specification ensures that any sequence of operations applied to a data structure or system will terminate in a finite number of steps. In other words, it guarantees that there will be no infinite loops or non-terminating computations.
2) Unique termination property: The unique termination property of an algebraic specification
states that any two sequences of operations involving the interface procedures of the
specification will eventually terminate and produce the same result.
3) Completeness: The completeness property of an algebraic specification states that the
specification is sufficient to define the behavior of the system for all possible inputs and
outputs.

Pros and cons of algebraic specification

Pros

1) Algebraic specifications are based on mathematical structures. Hence they are unambiguous and precise.
2) Using algebraic specification, the effect of an arbitrary sequence of operations can be studied.

Cons

1) Algebraic specifications are hard to understand.


2) The algebraic specifications are difficult to integrate with programming languages.

2.8 Finite state machine


 A finite state machine is used to recognize patterns.

o A finite automaton takes a string of symbols as input and changes its state accordingly. When a desired symbol is found in the input, a transition occurs.
o During a transition, the automaton can either move to the next state or stay in the same state.
o An FA has two types of states: accept states and reject states. When the input string has been successfully processed and the automaton has reached a final state, the string is accepted.

A finite automaton consists of the following:

Q : finite set of states
∑ : finite set of input symbols
q0 : initial state
F : set of final states
δ : transition function

The transition function can be defined as

δ : Q x ∑ → Q
FA is characterized in two ways:
1. DFA (deterministic finite automata)
2. NDFA (non-deterministic finite automata)

DFA

DFA stands for Deterministic Finite Automata. Deterministic refers to the uniqueness of the computation. In a DFA, the input character goes to one state only. A DFA doesn't accept the null move, which means the DFA cannot change state without any input character.

DFA has five tuples {Q, ∑, q0, F, δ}:

Q : set of all states
∑ : finite set of input symbols
q0 : initial state
F : set of final states
δ : transition function, where δ : Q x ∑ → Q

Example

An example of a deterministic finite automaton:

1. Q = {q0, q1, q2}
2. ∑ = {0, 1}
3. q0 = q0
4. F = {q2}
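
A minimal Python sketch of how such a DFA can be simulated (the transition table below is an assumed example, a machine over {0, 1} that accepts strings containing the substring "01", since the notes do not give δ explicitly):

# Illustrative DFA simulator; the transition table is an assumed example.
def run_dfa(delta, start, finals, string):
    state = start
    for symbol in string:
        state = delta[(state, symbol)]   # exactly one next state: deterministic
    return state in finals               # accept if we end in a final state

# Accepts binary strings containing the substring "01".
delta = {
    ('q0', '0'): 'q1', ('q0', '1'): 'q0',
    ('q1', '0'): 'q1', ('q1', '1'): 'q2',
    ('q2', '0'): 'q2', ('q2', '1'): 'q2',
}
print(run_dfa(delta, 'q0', {'q2'}, '1001'))   # True
print(run_dfa(delta, 'q0', {'q2'}, '1110'))   # False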

NDFA

NDFA refers to Non-Deterministic Finite Automata. An NDFA can transition to any number of states for a particular input. NDFA accepts the NULL move, which means it can change state without reading a symbol.

NDFA also has five tuples, the same as DFA, but NDFA has a different transition function.

The transition function of NDFA can be defined as:

δ : Q x ∑ → 2^Q
Example

An example of a non-deterministic finite automaton:

1. Q = {q0, q1, q2}
2. ∑ = {0, 1}
3. q0 = q0
4. F = {q2}
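
Because an NDFA may move to several states on one input, a simulator tracks the set of current states. A brief Python sketch (the transition table is again an assumed example; NULL/epsilon moves are omitted for brevity):

# Illustrative NDFA simulator: track the set of reachable states.
def run_nfa(delta, start, finals, string):
    states = {start}
    for symbol in string:
        # Union of all states reachable from any current state on this symbol.
        states = set().union(*(delta.get((s, symbol), set()) for s in states))
    return bool(states & finals)

# Assumed example: accepts binary strings ending in "01".
delta = {
    ('q0', '0'): {'q0', 'q1'},   # nondeterministic choice on '0'
    ('q0', '1'): {'q0'},
    ('q1', '1'): {'q2'},
}
print(run_nfa(delta, 'q0', {'q2'}, '101'))   # True
print(run_nfa(delta, 'q0', {'q2'}, '100'))   # False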

2.9 PETRI NETS


Petri nets are a basic model of parallel and distributed systems. The basic idea is to describe state changes in a system with transitions.
Petri nets: a formal technique for describing concurrent, interrelated activities.
Invented by Carl Adam Petri, 1962.
A Petri net consists of four parts:
(1) A set of places
(2) A set of transitions
(3) An input function
(4) An output function

 Petri nets contain places and transitions that may be connected by directed arcs.
 Transitions symbolise actions; places symbolise states or conditions that need to be met before an action can be carried out.
● Marking of a Petri net
 Assignment of tokens
 Tokens enable transitions
● Petri nets are non-deterministic
Petri nets and their firing rule:
A place may contain several tokens, which may be interpreted as resources.
• There may be several input and output arcs between a place and a transition.
• The number of these arcs is represented as the weight of a single arc.
• A transition is enabled if each of its input places contains at least as many tokens as the corresponding input arc weight indicates.
• When an enabled transition is fired, its input arc weights are subtracted from the input place markings and its output arc weights are added to the output place markings.

Fig. A Petri net. Fig. A marked Petri net.

More formally, a Petri net is a 4-tuple C = (P, T, I, O), where
P = {p1, p2, …, pn} is a finite set of places, n ≥ 0
T = {t1, t2, …, tm} is a finite set of transitions, m ≥ 0, with P and T disjoint
I : T → P∞ is the input function, a mapping from transitions to bags of places
O : T → P∞ is the output function, a mapping from transitions to bags of places

The Petri net in the above figure has:
Set of places P = {p1, p2, p3, p4}
Set of transitions T = {t1, t2}
Input functions: I(t1) = {p2, p4}, I(t2) = {p2}
Output functions: O(t1) = {p1}, O(t2) = {p3, p3}
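
A small Python sketch (added for illustration) that models this example net with bags (multisets) and applies the firing rule described above; the initial marking is assumed:

# Illustrative Petri net firing sketch based on the example above.
from collections import Counter

I = {'t1': Counter(['p2', 'p4']), 't2': Counter(['p2'])}        # input bags
O = {'t1': Counter(['p1']),      't2': Counter(['p3', 'p3'])}   # output bags

def enabled(marking, t):
    # Enabled if each input place holds at least as many tokens as the arc weight.
    return all(marking[p] >= w for p, w in I[t].items())

def fire(marking, t):
    # Subtract input arc weights, add output arc weights.
    assert enabled(marking, t)
    m = Counter(marking)
    m.subtract(I[t])
    m.update(O[t])
    return m

marking = Counter({'p2': 1, 'p4': 1})   # assumed initial marking
print(enabled(marking, 't1'))           # True
print(fire(marking, 't1'))              # a token appears in p1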

Fig. After transition t1 fires. Fig. After transition t2 fires.

Fig. A Petri net with an inhibitor arc

Inhibitor arcs:
An inhibitor arc is marked by a small circle, not an arrowhead. Transition t1 is enabled.
A marked Petri net is then a 5-tuple (P, T, I, O, M).
In general, a transition is enabled if there is at least one token on each (normal) input place, and no tokens on any inhibitor input places.

CASE Tools for Classical Analysis


● Two classes of CASE tools are helpful during classical analysis: graphical tools and data dictionaries
● A graphical tool
 Drawing by hand is a lengthy and time-consuming process
 Changes can result in having to redraw from scratch
● A data dictionary
 A tool for storing name and representation (format) of every component of every data item
● CASE tools to combine graphical tools and data dictionaries
 E.g., Analyst/Designer, Software through Pictures, System Architect
 Incorporate an automatic consistency checker: consistency between specification document and design document
● An analysis technique is unlikely to receive widespread acceptance unless a tool-rich CASE environment supports that technique
Metrics for Classical Analysis
 It is necessary to measure five fundamental metrics: Size, cost, duration, effort, and quality
 Number of pages in specification document
 Fault statistics of specification inspection
 Number of items in data dictionary
Challenges of Classical Analysis
 Resolving the contradiction of the specification document being simultaneously informal enough for the client to understand and formal enough for the development team to use as the sole description of the product to be built
 The boundary line between analysis (“what”) and design (“how”) is all too easy to cross
 Specification document describes what to do, and not how to do it
 List all constraints without stating how to achieve them
Comparison of Classical Analysis Techniques

Natural Language (Category: Informal)
Strengths: easy to learn; easy to use; easy for the client to understand.
Weaknesses: imprecise; the specification can be ambiguous, contradictory or incomplete.

Entity-Relationship modelling; Structured system analysis (Category: Semiformal)
Strengths: can be understood by the client; more precise than informal techniques.
Weaknesses: not as precise as formal techniques; cannot handle timing.

Petri net (Category: Formal)
Strengths: extremely precise; can reduce analysis faults; can reduce development cost and effort; can support correctness proving.
Weaknesses: hard for the development team to learn; hard to use; impossible for most clients to understand.
Object Modelling using UML

Introduction:
In the late 1960s people were concentrating on procedure-oriented languages such as COBOL, FORTRAN, PASCAL, etc. Later on they preferred object-oriented languages. In the mid-1990s three scientists, Booch, Rumbaugh and Jacobson, developed a new language named the Unified Modeling Language. It encompasses the designing of the system/program. It is a de facto standard modeling language.

What is UML?

• Is a language. It is not simply a notation for drawing diagrams, but a complete language for
capturing knowledge (semantics) about a subject and expressing knowledge (syntax) regarding the
subject for
the purpose of communication.

• Applies to modeling and systems. Modeling involves a focus on understanding a subject (system) and capturing and being able to communicate this knowledge.

• It is the result of unifying the information systems and technology industry’s best
engineering practices (principals, techniques, methods and tools).

• Used for both database and software modeling


Overview of the UML

• The UML is a language for

– visualizing

– specifying

– constructing

– documenting
Visual modeling (visualizing)

• A picture is worth a thousand words!


- Uses standard graphical notations
- Semi-formal
- Captures Business Process from enterprise information systems to distributed Web-
based applications and even to hard real time embedded systems
Specifying

• Building models that are: Precise, Unambiguous, Complete

• UML symbols are based on well-defined syntax and semantics.

• UML addresses the specification of all important analysis, design, and implementation decisions.
Constructing

• Models are related to OO programming languages.

• Round-trip engineering requires tool and human intervention to avoid information loss

–Forward engineering — direct mapping of a UML model into code.


–Reverse engineering — reconstruction of a UML model from an implementation.
Documenting
Architecture, Requirements, Tests, Activities (Project planning, Release management)
Conceptual Model of the UML

To understand the UML, you need to form a conceptual model of the language, and this requires learning
three major elements.
Elements:
1. Basic building blocks
2. Rules
3. Common Mechanisms

1. Basic Building Blocks of the UML


The vocabulary of the UML encompasses three kinds of building blocks:
1.1. Things
1.2. Relationships
1.3. Diagrams

1.1 Things in the UML

• There are four kinds of things in the UML:

1.1.1. Structural — nouns of UML models.


1.1.2. Behavioral — dynamic (verbal) parts of UML models.
1.1.3. Grouping — organizational parts of UML models.
1.1.4. Annotational — explanatory parts of UML models.

1.1.1. Structural Things:

• These are the Nouns and Static parts of the model.


• These are representing conceptual or physical
elements. There are seven kinds of structural things:
1. Class
2. Interface
3. Collaboration
4. Use Case
5. Active Class
6. Component
7. Node
1. Class
Is a description of a set of objects that share the same attributes, operations, relationships and semantics.

Class Name

Attributes

Operations
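
Such a class box maps directly to a class in code; a minimal Python sketch (the names are illustrative only):

# Illustrative mapping of a UML class box to code.
class BankAccount:                 # class name compartment
    def __init__(self, owner):
        self.owner = owner         # attributes compartment
        self.balance = 0
    def deposit(self, amount):     # operations compartment
        self.balance += amount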
2. Interface
A collection of operations that specify a service (for a resource or an action) of a class or component.
It describes the externally visible behavior of that element

Interface

3. Collaboration
– Define an interaction among two or more classes.
– Define a society of roles and other elements.
– Provide cooperative behavior.
– Capture structural and behavioral dimensions.
– UML uses ‘pattern’ as a synonym (careful).
It is represented by a dashed ellipse.
4. Use Case
– A sequence of actions that produce an observable result for a specific actor.
– A set of scenarios tied together by a common user goal.
– Provides a structure for behavioral things.
– Realized through a collaboration (usually realized by a set of actors and the system to be built).

Place order

Actor

5. Active Class
– Special class whose objects own one or more processes or threads.
– Can initiate control activity.

(Example class box: an active class Event Manager with attribute Time and operation Suspend())

6. Component

• Replaceable part of a system.


Orderform.java
• Components can be packaged logically.

• Conforms to a set of interfaces.

• Provides the realization of an interface.

• Represents a physical module of code


7. Node

• Element that exists at run time.

• Represents a computational resource. Web Server

• Generally has memory and processing power.


1.1.2. Behavioral Things

• These are Verbs of UML models.

• These are Dynamic parts of UML models: “behavior over time and space”.

• Usually connected to structural things in UML. There are two kinds of Behavioral Things:
1. Interaction

• Is a behavior of a set of objects comprising of a set of messages exchanges within a particular context
to accomplish a specific purpose.

Display
2. State Machine

• Is a behavior that specifies the sequences of states an object or an interaction goes through during
its lifetime in response to events, together with its responses to those events.

(Example states: Idle, Waiting)

1.1.3. Grouping Things

• These are the organizational parts of the UML models.


● There is only one primary kind of group thing:
1. Packages
- General purpose mechanism for organizing elements into groups.
- Purely conceptual; only exists at development time.
- Contains behavioral and structural things.
- Can be nested.
- Variations of packages are: Frameworks, models, & subsystems.
Business rules

1.1.4. Annotational Things

• These are Explanatory parts of UML models

• These are the Comments regarding other UML elements (usually called adornments in
UML) There is only one primary kind of annotational thing:
1. Note
A note is simply a symbol for rendering constraints and comments attached to an element or collection of
elements. Is best expressed in informal or formal text.

1.2. Relationships
There are four kinds of relationships:
1.2.1 Dependency
1.2.2. Association
1.2.3. Generalization
1.2.4. Realization
» These relationships tie things together.
» It is a semantic connection among elements.
» These relationships are the basic relational building blocks of the UML.

1.2.1. Dependency
Is a semantic relationship between two things in which a change to one thing (the independent thing) may affect
the semantics of the other thing (the dependent thing).

1.2.2. Association

Is a structural relationship that describes a set of links, a link being a connection among objects.
(Example association: employer 0..1 to * employee)
Aggregation
» Is a special kind of association. It represents a structural relationship between the whole and its parts.
» Represented by black diamond.

1.2.3. Generalization
Is a specialization/generalization relationship in which objects of the specialized element (the child) are more specific than the objects of the generalized element (the parent).

1.2.4. Realization
A semantic relationship between two elements, wherein one element guarantees to carry out what
is expected by the other element.

Where:
Between interfaces and the classes that realize them.
Between use cases and the collaborations that realize them.
1.3. Diagrams

• A diagram is the graphical presentation of a set of elements.

• Represented by a connected graph: vertices are things; arcs are relationships.
UML includes nine diagrams:
1.3.1. Class Diagram;
1.3.2. Object Diagram
1.3.3. Use case Diagram
1.3.4. Sequence Diagram
1.3.5. Collaboration Diagram
1.3.6. State chart Diagram
1.3.7. Activity Diagram
1.3.8. Component Diagram
1.3.9. Deployment Diagram

Static Modeling: Class Diagram, Object Diagram, Component Diagram, Deployment Diagram
Dynamic Modeling: Use case Diagram, Sequence Diagram, Collaboration Diagram, State chart Diagram, Activity Diagram

Both Sequence and Collaboration diagrams are called Interaction Diagrams.


1.3.1. Class Diagram

• Class Diagrams describe the static structure of a system, or how it is structured rather than how
it behaves.

• A class diagram shows the existence of classes and their relationships in the logical view of a system. These diagrams contain the following elements:
– Classes and their structure and behavior
– Association, aggregation, dependency, and inheritance relationships
– Multiplicity and navigation indicators
– Role names
These diagrams are the most common diagrams found in O-O modeling systems.
Examples: Registration, Student

1.3.2. Object Diagrams

• Shows a set of objects and their relationships.


• A static snapshot of instances.

• Object Diagrams describe the static structure of a system at a particular time. Whereas a class
model describes all possible situations, an object model describes a particular situation.
Object diagrams contain the following elements:
Objects: which represent particular entities. These are instances of classes.
Links: which represent particular relationships between objects. These are instances of associations.

1.3.3. Use case Diagrams


Use Case Diagrams describe the functionality of a system and the users of the system.
These diagrams contain the following elements:
Actors: which represent users of a system, including human users and other systems.
Use Cases: which represent functionality or services provided by a system to users.

(Example: actors Registrar and Student interacting with a Course use case)
1.3.4. Sequence Diagrams
● Sequence Diagrams describe interactions among classes. These interactions are modeled as exchanges
of messages.
● These diagrams focus on classes and the messages they exchange to accomplish some desired behavior.
● Sequence diagrams are a type of interaction diagram. Sequence diagrams contain the following elements:
Class roles: which represent roles that objects may play within the interaction.
Lifelines: which represent the existence of an object over a period of time.
Activations: which represent the time during which an object is performing an
operation. Messages: which represent communication between objects.

1.3.5. Collaboration Diagrams


Collaboration Diagrams describe interactions among classes and associations. These interactions are
modeled as exchanges of messages between classes through their associations. Collaboration diagrams are a
type of interaction diagram.
Collaboration diagrams contain the following elements.
Class roles: which represent roles that objects may play within the interaction.
Association roles: which represent roles that links may play within the interaction.
Message flows: which represent messages sent between objects via links. Links transport or implement the delivery of the message.
1.3.6. Statechart Diagrams
Statechart (or state) diagrams describe the states and responses of a class; they describe the behavior of a class in response to external stimuli. These diagrams contain the following elements:
States: which represent the situations during the life of an object in which it satisfies some condition, performs some activity, or waits for some occurrence.
Transitions: which represent relationships between the different states of an object.

1.3.7. Activity Diagrams

Activity diagrams describe the activities of a class. These diagrams are similar to state chart diagrams
and use similar conventions, but activity diagrams describe the behavior of a class in response to internal
processing rather than external events as in state chart diagram.
Swim lanes: which represent responsibilities of one or more objects for actions within an overall activity; that is, they divide the activity states into groups and assign these groups to the objects that must perform the activities.
Action states: which represent atomic, or non-interruptible, actions of entities or steps in the execution of an algorithm.
Action flows: which represent relationships between the different action states of an entity.
Object flows: which represent the utilization of objects by action states and the influence of action states on objects.
Data Flow Diagram
A Data Flow Diagram (DFD) is a graphical representation of the "flow" of data through an information system.
It is common practice to draw a System Context Diagram first, which shows the interaction between the system and outside entities.
The DFD is designed to show how a system is divided into smaller portions and to highlight the flow of data between those parts.
The Level 0 DFD, i.e. the context-level DFD, should depict the system as a single process.
Primary input and primary output should be carefully identified.
Information flow continuity must be maintained from level to level.
Four basic symbols

Four basic symbols:

External Entity: External entities are objects outside the system, with which the system communicates. External entities are sources and destinations of the system's inputs and outputs. (Drawn as a rectangle.)
Process: A process transforms incoming data flow into outgoing data flow. (Drawn as a circle or rounded rectangle.)
Data Flow: It represents the flow of information from one entity to another. (Drawn as an arrow.)
Data Store: Data stores are repositories of data in the system. They are sometimes also referred to as files or databases. (Drawn as parallel lines.)

Figure. Symbols of structured system analysis

Rules for Designing DFD:


1. No process can have only outputs or only inputs. The process must have both outputs and inputs.
2. The verb phrases in the problem description can be identified as processes in the system.
3. There should not be a direct flow between data stores and external entities. This flow should go through a process.
4. Data store labels should be noun phrases from the problem description.
5. No data should move directly between external entities. The data flow should go through a process.
6. Generally source and sink labels are noun phrases.

Step 1: Draw the Data Flow Diagram (DFD)


 A pictorial representation of all aspects of the logical data flow
 Logical data flow — What happens
 Physical data flow — How it happens
 Any non-trivial product contains many elements
 DFD is developed by stepwise refinement
 For large products a hierarchy of DFDs instead of one DFD
 Constructed by identifying data flows: Within requirements document or rapid
prototype

Step 2: Decide what sections to computerize and how (batch or online)


 Depending on client’s needs and budget limitations
 Cost-benefit analysis is applied
Step 3: Determine details of data flows
 Decide what data items must go into various data flows
 Stepwise refinement of each flow
 For larger products, a data dictionary is generated.
 Data dictionary - keeps track of all data elements.
o A data dictionary is a collection of data about data.
o It maintains information about the definition, structure, and use of each data element that an organization uses.
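
For example, a data dictionary entry for an order record might look like the following (an illustrative entry using the usual structured-analysis notation, where + means composition and 1{...}10 means the component repeats from 1 to 10 times):

order = order_identification + customer_details + 1{item_ordered}10
order_identification = order_number + order_date
item_ordered = item_code + quantity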

Step 4: Define logic of processes


 Determine what happens within each process
 Use of decision trees to consider all cases

Step 5: Define data stores


 Exact contents of each store and its representation (format)

Step 6: Define physical resources


 File names, organization (sequential, indexed, etc.), storage medium, and records
 If a database management system (DBMS) is used: relevant information for each table
Step 7: Determine input-output specifications
 Input forms and screens
 Printed outputs

Step 8: Perform sizing
Computing numerical data to determine hardware requirements:
 Volume of input (daily or hourly)
 Frequency of each printed report and its deadline
 Size and number of records of each type to pass between CPU and mass storage
 Size of each file
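
A small worked example of such a sizing computation (the figures are hypothetical): if the system receives 5,000 orders per day and each order record is 2 KB, the daily input volume is 5,000 x 2 KB = 10 MB; over a 90-day online retention period the order file alone needs roughly 900 MB, before allowing for indexes and backup copies.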

Step 9: Determine hardware requirements


 Use of sizing information to determine mass storage requirements
 Mass storage for backup
 Determine if client’s current hardware system is adequate

After approval by the client, the specification document is handed to the design team, and the software process continues.
UNIT- III SOFTWARE DESIGN

1.1. INTRODUCTION
 Software design encompasses the set of principles, concepts, and practices that
lead to the development of a high-quality system or product.
 Design creates a representation or model of the software. The design model provides detail about the software architecture, data structures, interfaces and components that are necessary to implement the system.
 Software design sits at the technical kernel of software engineering and is applied
regardless of the software process model that is used.
 Beginning once software requirements have been analyzed and modeled, software
design is the last software engineering action within the modeling activity and sets
the stage for construction (code generation and testing).

Figure. Translating the requirements model into the design model

Four design models required for a complete specification of design


1) Data/class design
The data/class design transforms class models into design class realizations and
the requisite data structures required to implement the software.
The objects and relationships defined in the CRC (Class-Responsibility-Collaborator) diagram and the detailed data content depicted by class attributes and other notation provide the basis for the data design action.

2) Architectural design
The architectural design defines the relationship between major structural elements of the software, the architectural styles and design patterns, and the constraints that affect the way in which the architecture can be implemented.
3) Interface design

The interface design describes how the software communicates with systems that
interoperate with it, and with humans who use it. An interface implies a flow
of information and a specific type of behavior. Therefore, usage scenarios and
behavioral models provide much of the information required for interface design.
4) Component-level design
The component-level design transforms structural elements of the software
architecture into a procedural description of software components. Information
obtained from the class-based models, flow models, and behavioral models serve
as the basis for component design.

1.2. DESIGN PROCESS:


Software design is an iterative process through which requirements are translated into a “blueprint” for constructing the software.
Software Quality Guidelines and Attributes
Three characteristics that serve as a guide for the evaluation of a good design: (or) goals of
good design

1. The design must implement all of the explicit requirements contained in the requirements model, and it must accommodate all of the implicit requirements desired by stakeholders.
2. The design must be a readable, understandable guide for those who generate code and for those who test and subsequently support the software.
3. The design should provide a complete picture of the software, addressing the data, functional, and behavioral domains from an implementation perspective.

Quality Guidelines
1. A design should exhibit an architecture that
a. has been created using recognizable architectural styles or patterns,
b. is composed of components that exhibit good design characteristics, and
c. can be implemented in an evolutionary fashion, thereby facilitating implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned into elements or subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and are drawn from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between components and with the external environment.
7. A design should be derived using a repeatable method that is driven by information obtained during software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.

Quality Attributes.
A set of software quality attributes has been given the acronym
FURPS—functionality, usability, reliability, performance, and supportability.

The FURPS quality attributes represent a target for all software design:
 Functionality is assessed by evaluating the feature set and capabilities of the
program, the generality of the functions that are delivered, and the security of the
overall system.
 Usability is assessed by considering human factors, overall aesthetics, consistency,
and documentation.

 Reliability is evaluated by measuring the frequency and severity of failure, the
accuracy of output results, the mean-time-to-failure (MTTF), the ability to
recover from failure, and the predictability of the program.

 Performance is measured by considering processing speed, response time,
resource consumption, throughput, and efficiency.

 Supportability combines the ability to extend the program (extensibility),
adaptability, and serviceability—these three attributes represent a more common
term, maintainability—and in addition, testability, compatibility, configurability
(the ability to organize and control elements of the software configuration), the
ease with which a system can be installed, and the ease with which problems can
be localized.

 Not every software quality attribute is weighted equally as the software design
is developed.
 One application may stress functionality with a special emphasis on security.
 Another may demand performance with particular emphasis on processing speed.
 A third might focus on reliability.
 Regardless of the weighting, it is important to note that these quality attributes
must be considered as design commences, not after the design is complete and
construction has begun.

1.3. DESIGN CONCEPTS:


Design creates a representation or model of the software, the design model provides
detail about software architecture, data structures, interfaces, and components that are
necessary to implement the system. Fundamental software design concepts provide the
necessary framework for “getting it right”.

Important software design concepts


1. Abstraction
2. Architecture
3. Patterns
4. Separation of Concerns
5. Modularity
6. Information Hiding
7. Functional Independence
8. Refinement
9. Aspects
10. Refactoring
11. Object-Oriented Design Concepts
12. Design Classes

1) Abstraction
“Abstraction permits one to concentrate on a problem at some level of abstraction without
regard to low-level details.”
• Procedural Abstraction
– Sequence of instructions that have a specific and limited function.
– Instructions are given in a named sequence
– Each instruction has a limited function
– The name of a procedural abstraction implies these functions, but specific
details are suppressed.
– An example of a procedural abstraction would be the word open for a door.
Open implies a long sequence of procedural steps (e.g., walk to the door,
reach out and grasp the knob, turn the knob and pull the door, step away from
the moving door, etc.)

• Data Abstraction
– This is a named collection of data that describes a data object.
– Data abstraction includes a set of attributes that describe an object.
– The data abstraction for door would encompass set of attributes that
describe the door (e.g., door type, swing direction, opening mechanism,
weight, dimensions). It follows that the procedural abstraction open would
make use of information contained in the attributes of the data abstraction
door.

• Control Abstraction
– A program control mechanism without specifying internal details, e.g., a
semaphore or rendezvous.
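
To make the door example concrete, here is a minimal sketch in Java (class and method names are hypothetical, not from the text): the Door class is a data abstraction bundling the describing attributes, and open() is a procedural abstraction whose internal steps are suppressed from the caller.

// Data abstraction: a named collection of attributes describing a door.
public class Door {
    private String doorType;
    private String swingDirection;
    private double weight;          // e.g., in kilograms
    private boolean isOpen = false;

    // Procedural abstraction: "open" names a limited, specific sequence
    // of steps; callers invoke it without knowing the details.
    public void open() {
        graspKnob();                // walk to the door, reach out, grasp knob
        swingDoor();                // turn knob, pull door, step away
        isOpen = true;
    }

    private void graspKnob() { /* detailed steps suppressed */ }
    private void swingDoor() { /* detailed steps suppressed */ }
}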

2) Architecture
Architecture is the structure or organization of program components (modules), the
manner in which these components interact, and the structure of data that are used by the
components. Components can be generalized to represent major system elements and
their interactions.

Desired properties of an architectural design


• Structural Properties
– This defines the components of a system and the manner in which these
interact with one another.

• Extra Functional Properties


– This addresses how the design architecture achieves requirements for
performance, reliability, capacity, adaptability, and security.
• Families of Related Systems
– The ability to reuse architectural building blocks

Kinds of Models
1) Structural models: represent architecture as an organized collection of components.
2) Framework models: increase the level of design abstraction by identifying
repeatable architectural design frameworks (patterns).
3) Dynamic models: address the behavioral aspects of the program architecture.
4) Process models: focus on the design of the business or technical process.
5) Functional models: can be used to represent the functional hierarchy of a system.

• Horizontal Partitioning
– Easier to test
– Easier to maintain (questionable)
– Propagation of fewer side effects (questionable)
– Easier to add new features
Example: F1 (Input) → F2 (Process) → F3 (Output)

• Vertical Partitioning
– Control and work modules are distributed top down
– Top level modules perform control functions
– Lower modules perform computations
• Less susceptible to side effects
• Also very maintainable

3) Pattern
 A design pattern describes a design structure that solves a particular design
problem within a specific context and amid “forces” that may have an impact on
the manner in which the pattern is applied and used.

The intent of each design pattern is to provide a description that enables a designer to determine
(1) Whether the pattern is applicable to the current work,
(2) Whether the pattern can be reused (hence, saving design time), and
(3) Whether the pattern can serve as a guide for developing a similar, but functionally or
structurally different, pattern.
4) Separation of Concerns
 Separation of concerns is a design concept that suggests that any complex problem
can be more easily handled if it is subdivided into pieces that can each be solved
and/or optimized independently.
 A concern is a feature or behavior that is specified as part of the requirements
model for the software.
 By separating concerns into smaller, and therefore more manageable pieces, a
problem takes less effort and time to solve.
 For two problems, p1 and p2, if the perceived complexity of p1 is greater than the
perceived complexity of p2, it follows that the effort required to solve p1 is greater
than the effort required to solve p2. As a general case, this result is intuitively
obvious. It does take more time to solve a difficult problem.
 It also follows that the perceived complexity of two problems when they are
combined is often greater than the sum of the perceived complexity when each is
taken separately. This leads to a divide-and-conquer strategy

5) Modularity
Software is divided into separately named and addressable components called
modules that are integrated to satisfy problem requirements.
• Follows “divide and conquer” concept, a complex problem is broken down into
several manageable pieces
• Let p1 and p2 be two program parts, and E(p) the effort to solve
problem p. Then E(p1 + p2) > E(p1) + E(p2), and often much greater.
• A need to divide software into optimal sized modules.
• Monolithic software (i.e., a large program composed of a single module) cannot be
easily grasped by a software engineer. The number of control paths, span of
reference, number of variables, and overall complexity would make understanding
more difficult.
Figure. Modularity and software cost

Objectives of modularity in a design method


• Modular Decomposability
– Provide a systematic mechanism to decompose a problem into sub problems

• Modular Composability
– Enable reuse of existing components to be assembled into a new system

• Modular Understandability
– Can the module be understood as a stand-alone unit? If so, it is easier to
understand and change.
• Modular Continuity
– If small changes to the system requirements result in changes to
individual modules, rather than system-wide changes, the impact of side
effects is reduced.

• Modular Protection
– If there is an error in the module, then those errors are localized and not
spread to other modules.

Benefits of modularize a design


• Development can be more easily planned;
• Software increments can be defined and delivered;
• Changes can be more easily accommodated;
• Testing and debugging can be conducted more efficiently; and
• Long-term maintenance can be conducted without serious side effects.

6) Information Hiding
• Modules are characterized by design decisions that are hidden from others.
Modules should be specified and designed so that information (algorithms and
data) contained within a module is inaccessible to other modules that have no
need for such information.
• Modules communicate only through well defined interfaces
• Enforce access constraints to local entities and those visible through interfaces
• Very important for accommodating change and reducing coupling.
• Abstraction helps to define the procedural (or informational) entities that make up
the software.
• Hiding defines and enforces access constraints to both procedural detail within
a module and any local data structure used by the module.

Benefits of Information Hiding:


• Inadvertent errors introduced during modification are less likely to propagate
• Reduces the likelihood of “side effects”
• Limits the global impact of local design decisions
• Emphasizes communication through controlled interfaces
• Discourages the use of global data
• Leads to encapsulation—an attribute of high quality design
• Results in higher quality software
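
A small hedged illustration in Java (the class and method names are illustrative only, not from the text): the stack's representation, an array and a top index, is hidden behind private fields, so clients can use only the public push and pop interface; the representation could later be changed without touching any client module.

import java.util.Arrays;

public class IntStack {
    // Hidden design decisions: the representation and growth policy
    // are inaccessible to other modules.
    private int[] items = new int[16];
    private int top = 0;

    // The well-defined interface through which modules communicate.
    public void push(int value) {
        if (top == items.length) {
            items = Arrays.copyOf(items, items.length * 2); // grows invisibly
        }
        items[top++] = value;
    }

    public int pop() {
        if (top == 0) throw new IllegalStateException("stack is empty");
        return items[--top];
    }
}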

7) Functional Independence
• Functional independence is achieved by developing modules with
“single-minded” function and an “aversion” to excessive interaction with other
modules.
• Each module addresses a specific subset of requirements and has a simple
interface when viewed from other parts of the program structure.
• Critical in dividing system into independently implementable parts
• Measured by two qualitative criteria

– Cohesion : Relative functional strength of a module


– Coupling : Relative interdependence among module
Modular Design – Cohesion
• A cohesive module performs a single task, requiring little interaction with other
components in other parts of a program.
• Different levels of cohesion
– Coincidental, logical, temporal, procedural, communications, sequential,
functional

• Coincidental Cohesion
- The parts of a component are not related but are simply
bundled into a single component.
- Harder to understand and not reusable.

• Logical Cohesion
- Similar functions, such as input, error handling, etc., are put together;
the functions fall in the same logical class. A flag may be passed to
determine which ones are executed.
- The interface is difficult to understand. Code for more than one
function may be intertwined, leading to severe maintenance
problems.
- Difficult to reuse.

• Temporal Cohesion
- All statements activated at a single time, such as start-up
or shut-down, are brought together (e.g., initialization, clean-up).
- The functions are weakly related to one another, but more strongly
related to functions in other modules, so many modules may need to
change during maintenance.

• Procedural cohesion:
- A single control sequence, e.g., a loop or a sequence of decision
statements. It often cuts across functional lines and may contain only part
of a complete function, or parts of several functions.
- The functions are still weakly connected, and again unlikely to
be reusable in another product.

• Communicational cohesion:
- The parts operate on the same input data or produce the same output
data, and may be performing more than one function. Generally
acceptable if alternate structures with higher cohesion cannot be easily
identified.
- Still problems with reusability.

• Sequential cohesion:
- Output from one part serves as input for another part. The module may
contain several functions or parts of different functions.

• Informational cohesion:
- Performs a number of functions, each with its own entry point,
with independent code for each function, all performed on the same
data structure. Different from logical cohesion because the functions
are not intertwined.
• Functional cohesion:
- Each part is necessary for the execution of a single function,
e.g., compute a square root or sort an array.
- Usually reusable in other contexts; maintenance is easier.

• Type cohesion:
- Modules that support a data abstraction.
- The levels are not strictly a linear scale: functional cohesion is much
stronger than the rest, while the first two are much weaker than the
others. Several levels may be applicable when considering two
elements of a module; the cohesion of a module is taken as the highest
level of cohesion that is applicable to all elements in the module.

Modular Design – Coupling


• Coupling describes the interconnection among modules.
• Coupling depends on the interface complexity between modules, the point
at which entry or reference is made to a module, and what data pass across the
interface.

• Data coupling
– Occurs when one module passes local data values to another as parameters
• Stamp coupling
– Occurs when part of a data structure is passed to another module as a
parameter: the entire data structure is passed although only parts of it are
needed.
– Similar to common coupling, except that the data are shared
selectively among routines that require them (e.g., packages in Ada).
More desirable than common coupling because fewer modules will have
to be modified if a shared data structure is modified.

• Control Coupling
– Occurs when control parameters are passed between modules, so that
one module controls the sequence of processing steps in another module.

• Common Coupling
– Occurs when multiple modules access common data areas such as Fortran
Common or C extern

• Content Coupling
– If one module directly references the contents of the other.
– When one module modifies local data values or instructions in another module.
– If one refers to local data in another module.
– If one branches into a local label of another.
• Subclass Coupling
– The coupling between a class and its parent class
Examples of Coupling
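
In place of the missing figure, here is a concrete illustration (a hypothetical Java sketch, not from the text): grossPay is functionally cohesive and data coupled, since its caller passes only the elementary values it needs; payWithFlag shows control coupling, since a flag parameter steers the callee's internal processing steps, tying the caller to the callee's internals.

public class PayrollDemo {
    // Data coupling: only the needed data values cross the interface,
    // and the method performs a single, functionally cohesive task.
    static double grossPay(double hours, double rate) {
        return hours * rate;
    }

    // Control coupling: a control parameter selects the processing steps
    // inside the callee, so the caller must know about them.
    static double payWithFlag(double hours, double rate, boolean overtimeMode) {
        if (overtimeMode) {
            return 40 * rate + (hours - 40) * rate * 1.5;
        }
        return hours * rate;
    }
}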

Difference between Cohesion and Coupling

• Cohesion is the indication of the relationship within a module; coupling is the
indication of the relationships between modules.
• Cohesion shows the module’s relative functional strength; coupling shows the
relative independence among the modules.
• Cohesion is the degree (quality) to which a component/module focuses on a single
thing; coupling is the degree to which a component/module is connected to the
other modules.
• While designing, you should strive for high cohesion, i.e., a cohesive
component/module focuses on a single task (single-mindedness) with little
interaction with other modules of the system; likewise, you should strive for low
coupling, i.e., dependency between modules should be low.
• Cohesion is a kind of natural extension of data hiding, for example, a class having
all members visible within a package having default visibility; making fields
private, methods private, and classes non-public provides loose coupling.
• Cohesion is an intra-module concept; coupling is an inter-module concept.

8) Refinement
• Refinement is actually a process of elaboration.
• Refinement is a process where one or several instructions of the program are
decomposed into more detailed instructions.
• Begin with a statement of function (or description of information) that is defined
at a high level of abstraction and then elaborate on the original statement,
providing more and more detail as each successive refinement (elaboration)
occurs.
• Refinement helps to reveal low-level details as design progresses.
• Stepwise refinement is a top-down strategy:
– The basic architecture is developed iteratively.
– A stepwise hierarchy is developed.
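
A brief sketch of stepwise refinement, under the assumption of a simple grading example (hypothetical, not from the text): the high-level statement "summarize the scores" is elaborated in successive steps until it becomes executable Java.

public class RefinementDemo {
    // Step 1 (highest abstraction): summarize the scores.
    // Step 2 (refined):             read the scores, compute their average.
    // Step 3 (refined to code):     the average is the sum divided by the count.
    static double average(double[] scores) {
        double sum = 0;
        for (double s : scores) {   // "compute the sum" elaborated into a loop
            sum += s;
        }
        return sum / scores.length; // "divide by the count"
    }
}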
9) Aspects
 An aspect is a representation of a crosscutting concern.
 For example, consider a generic security requirement that states that a registered user
must be validated prior to using an application. This requirement is applicable to all
functions that are available to registered users of the system.
 The design representation of the requirement “a registered user must be validated
prior to using the system” is an aspect of the system.
 An aspect is implemented as a separate module (component) rather than as
software fragments that are “scattered” or “tangled” throughout many components.
 The design architecture should support a mechanism for defining an aspect—a
module that enables the concern to be implemented across all other concerns that it
crosscuts.

10) Refactoring
 "Refactoring is the process of changing a software system in such a way that it
does not alter theexternal behavior of the code [design] yet improves its internal
structure.”
 Refactoring is a reorganization technique that simplifies the design (or code) of a
component
without changing its function or behavior.
• When software is refactored, the existing design is examined for
– Redundancy
– Unused design elements
– Inefficient or unnecessary algorithms
– Poorly constructed or inappropriate data structures, or any other design
failure that can be corrected to yield a better design.
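
A minimal before/after sketch (hypothetical Java, not from the text): the refactored version produces identical results for every input, so the external behavior is unchanged, while the magic numbers are named and the duplicated expression is removed, improving only the internal structure.

public class PricingDemo {
    // Before: duplicated expression and unexplained "magic" numbers.
    static double price(int qty) {
        if (qty > 100) return qty * 9.0 * 0.9;
        return qty * 9.0;
    }

    // After refactoring: same external behavior, clearer structure.
    static final double UNIT_PRICE    = 9.0;
    static final double BULK_DISCOUNT = 0.9;   // 10% off above 100 units

    static double priceRefactored(int qty) {
        return qty * UNIT_PRICE * (qty > 100 ? BULK_DISCOUNT : 1.0);
    }
}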

1.4 Design Patterns


Design patterns are basically defined as reusable solutions to the common problems that arise during
software design and development. They are general templates or best practices that guide
developers in creating well-structured, maintainable, and efficient code.
Types of Design Patterns
Creational Design Patterns abstract the instantiation process. They help in making a
system independent of how its objects are created, composed, and represented.
Types of Creational Design Patterns
 Factory Method Design Pattern
 Abstract Factory Method Design Pattern
 Singleton Method Design Pattern
 Prototype Method Design Pattern
 Builder Method Design Pattern

Structural Design Patterns


Structural Design Patterns are concerned with how classes and objects are composed to form
larger structures. Structural class patterns use inheritance to compose interfaces or
implementations.
Types of Structural Design Patterns
 Adapter Method Design Patterns
 Bridge Method Design Patterns
 Composite Method Design Patterns
 Decorator Method Design Patterns
 Facade Method Design Patterns
 Flyweight Method Design Patterns
 Proxy Method Design Patterns

Behavioral Design Patterns


Behavioral Patterns are concerned with algorithms and the assignment of responsibilities
between objects. Behavioral patterns describe not just patterns of objects or classes but also the
patterns of communication between them. These patterns characterize complex control flow
that’s difficult to follow at run-time.

Types of Behavioral Design Patterns


 Chain Of Responsibility Method Design Pattern
 Command Method Design Pattern
 Interpreter Method Design Patterns
 Mediator Method Design Pattern
 Memento Method Design Patterns
 Observer Method Design Pattern
 State Method Design Pattern
 Strategy Method Design Pattern
 Template Method Design Pattern
 Visitor Method Design Pattern

1.5 Model-View-Controller
The Model-View-Controller (MVC) framework is an architectural/design pattern that
separates an application into three main logical components: Model, View, and Controller.
Each architectural component is built to handle specific development aspects of an application.
It isolates the business logic and presentation layer from each other. It was traditionally used
for desktop graphical user interfaces (GUIs). Nowadays, MVC is one of the most frequently
used industry-standard web development frameworks to create scalable and extensible projects.
It is also used for designing mobile apps.
MVC was created by Trygve Reenskaug. The main goal of this design pattern was to solve
the problem of users controlling a large and complex data set by splitting a large application
into specific sections that all have their own purpose.

Features of MVC:
 It provides a clear separation of business logic, UI logic, and input logic.
 It offers full control over your HTML and URLs which makes it easy to design web
application architecture.
 It is a powerful URL-mapping component using which we can build applications that have
comprehensible and searchable URLs.
 It supports Test Driven Development (TDD).


Components of MVC :
The MVC framework includes the following 3 components:
 Controller
 Model
 View

Figure. MVC architecture design

Controller:
The controller is the component that enables the interconnection between the views and the
model, so it acts as an intermediary. The controller does not have to worry about handling data
logic; it just tells the model what to do. It processes all the business logic and incoming
requests, manipulates data using the Model component, and interacts with the View to render
the final output.
View:
The View component is used for all the UI logic of the application. It generates the user
interface for the user. Views are created from the data collected by the model component, but
these data are not taken directly from the model; they reach the view through the controller.
The view interacts only with the controller.
Model:
The Model component corresponds to all the data-related logic that the user works with. This
can represent either the data that is being transferred between the View and Controller
components or any other business logic-related data. It can add or retrieve data from the
database. It responds to the controller’s request because the controller can’t interact with the
database by itself. The model interacts with the database and gives the required data back to
the controller.
Working of the MVC framework with an example:
Let’s imagine an end user sends a request to a server to get the list of students studying in a
class. The server would then forward that request to the particular controller that handles
students. That controller would then ask the model that handles students to return the list of
all students studying in the class.

Figure. The flow of data in MVC components


The model would query the database for the list of all students and then return that list back to
the controller. If the response back from the model was successful, then the controller would
ask the view associated with students to return a presentation of the list of students. This view
would take the list of students from the controller and render the list into HTML that can be
used by the browser.
The controller would then take that presentation and return it back to the user, thus ending
the request. If the model had returned an error earlier, the controller would handle it by
asking the error-handling view to render a presentation for that particular error; that error
presentation would then be returned to the user instead of the student list presentation.
As we can see from the above example, the model handles all of the data, the view handles all
of the presentation, and the controller just tells the model and view what to do. This is the
basic architecture and working of the MVC framework.
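
The student-list example can be sketched in a few lines of Java (class names are hypothetical; a real system would query a database in the model): the model owns the data, the view only renders what it is given, and the controller mediates between them.

import java.util.List;

// Model: all data-related logic; would normally query the database.
class StudentModel {
    List<String> getStudents() { return List.of("Asha", "Ravi", "Meena"); }
}

// View: UI logic only; renders whatever the controller supplies.
class StudentView {
    void render(List<String> students) { students.forEach(System.out::println); }
}

// Controller: the intermediary; tells the model what to do and hands
// the resulting data to the view.
class StudentController {
    private final StudentModel model = new StudentModel();
    private final StudentView view = new StudentView();

    void handleListRequest() { view.render(model.getStudents()); }
}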
The MVC architectural pattern allows us to adhere to the following design principles:
1. Divide and conquer. The three components can be somewhat independently designed.
2. Increase cohesion. The components have stronger layer cohesion than if the view and
controller were together in a single UI layer.
3. Reduce coupling. The communication channels between the three components are minimal
and easy to find.
4. Increase reuse. The view and controller normally make extensive use of reusable
components for various kinds of UI controls. The UI, however, will become
application-specific, therefore it will not be easily reusable.
5. Design for flexibility. It is usually quite easy to change the UI by changing the view, the
controller, or both.

Advantages of MVC:
 Codes are easy to maintain and they can be extended easily.
 The MVC model component can be tested separately.
 The components of MVC can be developed simultaneously.
 It reduces complexity by dividing an application into three units: model, view, and
controller.
 It supports Test Driven Development (TDD).
 It works well for Web apps that are supported by large teams of web designers and
developers.
 This architecture helps to test components independently, as all classes and objects are
independent of each other.
 Search Engine Optimization (SEO) Friendly.

Disadvantages of MVC:
 It is difficult to read, change, test, and reuse this model
 It is not suitable for building small applications.
 The inefficiency of data access in view.
 The framework navigation can be complex as it introduces new layers of abstraction which
requires users to adapt to the decomposition criteria of MVC.
 Increased complexity and inefficiency of data access.

1.6 Publisher-Subscriber Pattern


Consider a scenario of synchronous message passing: you have two components in your
system that communicate with each other. Let’s call them the sender and the receiver. The
receiver asks for a service from the sender, and the sender serves the request and waits for an
acknowledgment from the receiver. Now a second receiver requests a service from the
sender. The sender is blocked since it hasn’t yet received any acknowledgment from the first
receiver, so it isn’t able to serve the second receiver, which can create problems. To solve
this drawback, the Pub-Sub model was introduced.
Publisher-Subscriber, commonly known as Pub-Sub, is an asynchronous message-passing system
that solves the drawback above. The sender is called the publisher, whereas the receiver is called
the subscriber. The main advantage of Pub-Sub is that it decouples the subsystems, which means
all the components can work independently. This is very important when it comes to writing APIs
or working with databases.
The publisher never sends a direct message to the subscribers; messages are sent to a broker, and
the subscribers get them from there. Sending a message to the broker is called publishing, whereas
listening for incoming messages is called subscribing. In simple terms, a subscriber subscribes to a
topic (which acts as the message broker), the publisher pushes messages to the topic, and the topic
then pushes the messages to its subscribers.

What are the Different Components of Pub/Sub Architecture?


Let’s look at the concepts that make up a Pub-Sub system:
1. Topic: The place to which messages are sent.

2. Subscription: A service provided by the publisher. It represents a stream of
messages that is to be sent to a specific subscriber only.

3. Publisher: An entity that sends messages to topics.

4. Subscriber: An entity that receives messages from topics based on a subscription. We can
understand this by analogy with OTT platforms, which allow you to stream content only
if you have a subscription; a subscription here works the same way.

5. Acknowledgment: A message that subscribers send after they receive a message from the
topic.

There are two delivery methods: push and pull. A subscriber receives a message either because
the topic pushes it to the subscriber, or because the subscriber pulls it from the topic.
The diagram below represents the architecture of Pub-Sub.
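
Alongside the diagram, the architecture can be sketched in code. Below is a minimal in-memory broker in Java (the names are hypothetical; real systems use messaging middleware): publishers and subscribers know only the topic name, never each other, which is exactly the decoupling described above.

import java.util.*;
import java.util.function.Consumer;

class Broker {
    private final Map<String, List<Consumer<String>>> topics = new HashMap<>();

    // Subscribing: register interest in a topic.
    void subscribe(String topic, Consumer<String> subscriber) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(subscriber);
    }

    // Publishing: the broker pushes the message to every subscriber.
    void publish(String topic, String message) {
        topics.getOrDefault(topic, List.of()).forEach(s -> s.accept(message));
    }
}

// Usage:
//   Broker broker = new Broker();
//   broker.subscribe("orders", m -> System.out.println("billing got " + m));
//   broker.subscribe("orders", m -> System.out.println("shipping got " + m));
//   broker.publish("orders", "order#42");   // both subscribers receive it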


1.7 Adapter
This pattern is easy to understand, as the real world is full of adapters. For example,
consider a USB-to-Ethernet adapter. We need this when we have an Ethernet interface on one
end and USB on the other; since they are incompatible with each other, we use an adapter that
converts one to the other. This example is pretty analogous to object-oriented adapters. In design,
adapters are used when we have a class (Client) expecting some type of object and we have an
object (Adaptee) offering the same features but exposing a different interface.
To use an adapter:
1. The client makes a request to the adapter by calling a method on it using the target interface.
2. The adapter translates that request onto the adaptee using the adaptee interface.
3. The client receives the results of the call and is unaware of the adapter’s presence.

Definition: The adapter pattern converts the interface of a class into another interface that clients
expect. Adapter lets classes work together that couldn’t otherwise because of incompatible
interfaces.

Class Diagram:
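
In place of the missing diagram, here is a hedged Java sketch of the USB-to-Ethernet analogy (all names are hypothetical): the client talks to the UsbPort target interface, and the adapter translates each call onto the incompatible EthernetCard adaptee.

// Target interface: what the client expects to call.
interface UsbPort {
    void sendOverUsb(byte[] data);
}

// Adaptee: offers the same capability through a different interface.
class EthernetCard {
    void transmitFrame(byte[] frame) { /* sends the frame over Ethernet */ }
}

// Adapter: implements the target interface and translates requests
// onto the adaptee; the client never sees EthernetCard.
class UsbToEthernetAdapter implements UsbPort {
    private final EthernetCard card = new EthernetCard();

    public void sendOverUsb(byte[] data) {
        card.transmitFrame(data);   // the translation step
    }
}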

1.8 Command

Definition: The command pattern encapsulates a request as an object, thereby letting us


parameterize other objects with different requests, queue or log requests, and support undoable
operations.
The definition is a bit confusing at first, but let’s step through it. In the classic remote-control
analogy, the remote control is the client and the stereo, lights, etc. are the receivers. In the
command pattern, there is a Command object that encapsulates a request by binding together a
set of actions on a specific receiver. It does so by exposing just one method, execute(), that
causes some actions to be invoked on the receiver.
Parameterizing other objects with different requests means, in our analogy, that the button used to
turn on the lights can later be used to turn on the stereo or maybe open the garage door.
Queuing or logging requests and supporting undoable operations means that a Command’s
execute operation can store state for reversing its effects in the Command itself. The Command
may have an added unExecute operation that reverses the effects of a previous call to execute. It
may also support logging changes so that they can be reapplied in case of a system crash.
Advantages:
 Makes our code extensible as we can add new commands without changing existing code.
 Reduces coupling between the invoker and receiver of a command.

Disadvantages:
 Increase in the number of classes for each individual command
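
A compact Java sketch of the remote-control analogy (names are hypothetical): the command object binds an action to a specific receiver, exposes execute(), and supports undo through unExecute(); the button (invoker) can be re-parameterized with any command at run time.

interface Command {
    void execute();
    void unExecute();            // reverses a previous execute()
}

// Receiver: knows how to perform the actual operations.
class Light {
    void on()  { System.out.println("light on");  }
    void off() { System.out.println("light off"); }
}

// Concrete command: binds the "turn on" action to a specific light.
class LightOnCommand implements Command {
    private final Light light;
    LightOnCommand(Light light) { this.light = light; }
    public void execute()   { light.on();  }
    public void unExecute() { light.off(); }
}

// Invoker: a button that can be loaded with any command at run time.
class RemoteButton {
    private Command slot;
    void setCommand(Command c) { slot = c; }
    void press()               { slot.execute(); }
}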

1.9 Strategy pattern


 The Strategy pattern is a behavioral design pattern that allows the behavior of an object
to be selected at runtime. It is one of the Gang of Four (GoF) design patterns, which are
widely used in object-oriented programming.
 The Strategy pattern is based on the idea of encapsulating a family of algorithms into
separate classes that implement a common interface. The pattern consists of three main
components: the Context, the Strategy, and the Concrete Strategy.
 The Context is the class that contains the object whose behavior needs to be changed
dynamically. The Strategy is the interface or abstract class that defines the common
methods for all the algorithms that can be used by the Context object. The Concrete
Strategy is the class that implements the Strategy interface and provides the actual
implementation of the algorithm.

Advantages:
1. A family of algorithms can be defined as a class hierarchy and can be used interchangeably
to alter application behavior without changing its architecture.
2. By encapsulating the algorithm separately, new algorithms complying with the same
interface can be easily introduced.
3. The application can switch strategies at run-time.
4. Strategy enables the clients to choose the required algorithm, without using a “switch”
statement or a series of “if-else” statements.
5. Data structures used for implementing the algorithm are completely encapsulated in
Strategy classes. Therefore, the implementation of an algorithm can be changed without
affecting the Context class.

Disadvantages:
1. The application must be aware of all the strategies to select the right one for the right
situation.
2. Context and the Strategy classes normally communicate through the interface specified by
the abstract Strategy base class. Strategy base class must expose interface for all the
required behaviours, which some concrete Strategy classes might not implement.
3. In most cases, the application configures the Context with the required Strategy object.
Therefore, the application needs to create and maintain two objects in place of one.
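
A short Java sketch of the three components (names are hypothetical): SortStrategy is the Strategy interface, the two classes below it are Concrete Strategies, and Sorter is the Context whose behavior can be switched at run time without any if-else chain.

interface SortStrategy {                       // the Strategy
    void sort(int[] data);
}

class BubbleSort implements SortStrategy {     // a Concrete Strategy
    public void sort(int[] data) {
        for (int i = 0; i < data.length; i++)
            for (int j = 0; j + 1 < data.length - i; j++)
                if (data[j] > data[j + 1]) {
                    int t = data[j]; data[j] = data[j + 1]; data[j + 1] = t;
                }
    }
}

class LibrarySort implements SortStrategy {    // another Concrete Strategy
    public void sort(int[] data) { java.util.Arrays.sort(data); }
}

class Sorter {                                 // the Context
    private SortStrategy strategy;
    Sorter(SortStrategy s)           { strategy = s; }
    void setStrategy(SortStrategy s) { strategy = s; }  // switch at run time
    void sort(int[] data)            { strategy.sort(data); }
}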

1.10 Observer pattern


To understand the Observer pattern, you first need to understand the subject and observer objects.
The relation between subject and observer can easily be understood as an analogy to a magazine
subscription.
 A magazine publisher (subject) is in the business and publishes magazines (data).
 If you (the user of the data/observer) are interested in the magazine, you subscribe
(register), and if a new edition is published it gets delivered to you.
 If you unsubscribe (unregister), you stop getting new editions.
 The publisher doesn’t know who you are or how you use the magazine; it just delivers it to
you because you are a subscriber (loose coupling).

Definition:
The Observer pattern defines a one-to-many dependency between objects so that when one object
changes state, all of its dependents are notified and updated automatically.

Explanation:
 The one-to-many dependency is between the Subject (one) and the Observers (many).
 There is a dependency because the Observers themselves don’t have access to the data;
they depend on the Subject to provide it to them.
Class diagram:

Advantages:
Provides a loosely coupled design between objects that interact. Loosely coupled objects are
flexible with changing requirements. Here, loose coupling means that the interacting objects
should have less information about each other. The Observer pattern provides this loose coupling
because:
 The Subject only knows that observers implement the Observer interface, nothing more.
 There is no need to modify the Subject to add or remove observers.
 Subject and observer classes can be reused independently of each other.
Disadvantages:
 Memory leaks caused by the lapsed-listener problem, because observers must be explicitly
registered and unregistered.
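
The magazine analogy translates into a few lines of Java (names are hypothetical): the publisher is the subject, which only knows that its subscribers implement the Observer interface; publishing a new edition notifies all registered observers automatically.

interface Observer {
    void update(String edition);       // how subscribers receive new editions
}

// Subject: maintains its observers and notifies them on a state change.
class MagazinePublisher {
    private final java.util.List<Observer> observers = new java.util.ArrayList<>();

    void register(Observer o)   { observers.add(o); }
    void unregister(Observer o) { observers.remove(o); }   // stop deliveries

    void publish(String edition) {
        for (Observer o : observers) {
            o.update(edition);         // loose coupling: only the interface is known
        }
    }
}

// Usage: publisher.register(e -> System.out.println("received " + e));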

1.11 Proxy
 ‘In place of’, ‘representing’, or ‘on behalf of’ are the literal meanings of proxy, and they
directly explain the Proxy design pattern.
Proxies are also called surrogates, handles, and wrappers. They are closely related in
structure, but not in purpose, to Adapters and Decorators.
 A real-world example: a cheque or credit card is a proxy for what is in our bank
account. It can be used in place of cash, and provides a means of accessing that cash
when required. And that is exactly what the Proxy pattern does—“controls and manages
access to the object it is protecting.”

Benefits:
 One of the advantages of Proxy pattern is security.
 This pattern avoids duplication of objects that might be huge in size and memory intensive;
this in turn increases the performance of the application.
 The remote proxy also ensures security by installing the local code proxy (stub) on the
client machine, which then accesses the server with the help of the remote code.

Drawbacks/Consequences:
This pattern introduces another layer of abstraction, which may sometimes be an issue if some
clients access the RealSubject code directly while others access it through the Proxy classes. This
might cause disparate behaviour.
Interesting points:
 There are a few differences between the related patterns. The Adapter pattern gives a
different interface to its subject, while the Proxy pattern provides the same interface as the
original object, and the Decorator provides an enhanced interface. The Decorator pattern
adds additional behaviour at runtime.
 Proxy used in Java API: java.rmi.*;
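
A hedged Java sketch of the cheque analogy (names are hypothetical): the proxy exposes the same Account interface as the real object but checks a limit before delegating, i.e., it controls and manages access to the object it protects.

interface Account {                                // subject interface
    void withdraw(double amount);
}

class RealAccount implements Account {             // the real subject
    public void withdraw(double amount) {
        System.out.println("dispensing " + amount);
    }
}

// Protection proxy: same interface, but access is checked first.
class ChequeProxy implements Account {
    private final RealAccount account = new RealAccount();
    private final double limit;
    ChequeProxy(double limit) { this.limit = limit; }

    public void withdraw(double amount) {
        if (amount > limit) throw new IllegalArgumentException("over limit");
        account.withdraw(amount);                  // delegate only when allowed
    }
}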
3.13 Facade
The Facade Method Design Pattern is a part of the Gang of Four design patterns and is
categorized under structural design patterns. Before we dive deep into the details, imagine
a building: the facade is the outer wall that people see, but behind it is a complex network of
wires, pipes, and other systems that make the building function. The facade pattern is like that
outer wall. It hides the complexity of the underlying system and provides a simple interface
that clients can use to interact with the system.

Advantages of Facade Method Design Pattern


 Simplified interface
 Reduced coupling
 Encapsulation
 Improved maintainability

Disadvantages of Facade Method Design Pattern


 Increased complexity
 Reduced flexibility
 Over-engineering
 Potential performance overhead
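
A minimal Java sketch of the building analogy applied to a home theater (names are hypothetical): three subsystems sit behind one simple watchMovie() call, so clients never touch the individual components.

// The complex subsystems hidden behind the facade.
class Projector { void on() { /* power up */ } }
class Amplifier { void setVolume(int level) { /* adjust volume */ } }
class Streamer  { void play(String movie) { /* start playback */ } }

// Facade: one simple entry point instead of three subsystem calls.
class HomeTheaterFacade {
    private final Projector projector = new Projector();
    private final Amplifier amplifier = new Amplifier();
    private final Streamer streamer  = new Streamer();

    void watchMovie(String movie) {
        projector.on();
        amplifier.setVolume(5);
        streamer.play(movie);
    }
}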

3.5. ARCHITECTURAL STYLES


 The software that is built for computer-based systems also exhibits one
of many architectural styles. Each style describes a system category that
encompasses:

(1) A set of components (e.g., a database, computational modules) that perform a
function required by a system;
(2) A set of connectors that enable “communication, coordination and
cooperation” among components;
(3) Constraints that define how components can be integrated to form the system;
(4) Semantic models that enable a designer to understand the overall
properties of a system by analyzing the known properties of its parts.

 An architectural style is a transformation that is imposed on the design of an
entire system. The intent is to establish a structure for all components of the system.
A pattern differs from a style in a number of fundamental ways:

(1) The scope of a pattern is less broad, focusing on one aspect of the architecture
rather than the architecture in its entirety;
(2) A pattern imposes a rule on the architecture, describing how the software will
handle some aspect of its functionality at the infrastructure level (e.g., concurrency);
(3) Architectural patterns tend to address specific behavioral issues within the context of
the architecture (e.g., how real-time applications handle synchronization or interrupts).
Patterns can be used in conjunction with an architectural style to shape the overall
structure of a system.

ARCHITECTURAL STYLES ARE:


1) Layered architectures.
 A number of different layers are defined, each accomplishing operations that
progressively become closer to the machine instruction set.
 At the outer layer, components service user interface operations.
 At the inner layer, components perform operating system interfacing.

Figure. Layered architecture

Client-Server style

The client-server model is a distributed application structure that partitions tasks or workloads
between the providers of a resource or service, called servers, and service requesters, called
clients. In the client-server architecture, when the client computer sends a request for data to the
server through the internet, the server accepts the request, processes it, and delivers the requested
data packets back to the client. Clients do not share any of their resources. Examples of the
client-server model are email and the World Wide Web.
How the Client-Server Model Works
 Client: When we use the word client, we mean a person or an organization using a
particular service. Similarly, in the digital world a client is a computer (host) capable of
receiving information or using a particular service from the service providers (servers).

 Server: Similarly, when we use the word server, we mean a person or medium that
serves something. In the digital world a server is a remote computer that
provides information (data) or access to particular services.

So it is basically the client requesting something and the server serving it, as long as it is present
in the database.

Tiered Architecture
In a tiered architecture, there is another layer between the client and the server. The client does
not directly communicate with the server; instead, it interacts with an application server, which
in turn communicates with the database system, where query processing and transaction
management take place. This intermediate layer acts as a medium for the exchange of partially
processed data between the server and the client. This type of architecture is used for large web
applications.

Figure. DBMS 3-tier architecture


Pipe and Filter Software Architecture

This software architecture pattern decomposes a task that performs complex processing into a
series of separate elements that can be reused, where processing is executed sequentially step by
step.
There are four main components:
1. Data Source: The original, unprocessed data
2. Data Sink: The final processed data
3. Filter: Components that perform processing
4. Pipe: Components that pass data from a data source to a filter, or from a filter to another
filter, or from a filter to a data sink

Figure. Generic pipe-and-filter diagram

Advantages
 Suitable for processing that requires clear, systematic steps in order to transform successive
pieces of data, because of the intuitive flow of processing
 Each filter can be modified easily — as long as the data input and data output remain the
same
 Filters are reusable, old filters can be replaced by new ones, or new filters can be inserted
easily into the flow of processing — as long as the data input and output between filters are
compatible
 Each component is implemented as a separate, distinct task, hence having a natural
separation of concerns

Disadvantages
 Inefficient and inconvenient to pass around the full set of data throughout the entire pipe and
filter system, because not every component will require the full set of data
 Reliability may be an issue if data is lost on the way between components
 Having too many filters can slow down your application, introducing bottlenecks or
deadlocks if one particular filter processes slowly or fails
Real-World Example
A classic use of pipes and filters is the Unix shell, where two or more commands are chained so
that the output of one command becomes the input of the next command.
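
For instance, a command such as ls | grep ".java" | sort chains three filters through pipes. The same structure can be sketched with Java streams (a loose analogy with hypothetical data, not from the text): each stream operation acts as a filter, and the stream passing data between operations plays the role of the pipe.

import java.util.List;
import java.util.stream.Collectors;

public class PipeAndFilterDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Main.java", "notes.txt", "App.java");

        List<String> result = names.stream()          // data source
                .filter(n -> n.endsWith(".java"))     // filter 1: select
                .map(String::toUpperCase)             // filter 2: transform
                .sorted()                             // filter 3: order
                .collect(Collectors.toList());        // data sink

        System.out.println(result);                   // [APP.JAVA, MAIN.JAVA]
    }
}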
3.10. USER INTERFACE DESIGN

User interface design creates an effective communication medium between a human and a computer.
1. GOLDEN RULES:
1) Place the user in control.
2) Reduce the user’s memory load.
3) Make the interface consistent.
These golden rules actually form the basis for a set of user interface design principles that guide
this important aspect of software design.

1) Place the User in Control:


Most interface constraints and restrictions that are imposed by a designer are intended to
simplify the mode of interaction. But for whom?
As a designer, you may be tempted to introduce constraints and limitations to simplify the
implementation of the interface. The result may be an interface that is easy to build, but
frustrating to use.
Design principles that allow the user to maintain control are

1. Use modes judiciously (modeless)


2. Allow users to use either the keyboard or mouse (flexible)
3. Allow users to change focus (interruptible)
4. Display descriptive messages and text(Helpful)
5. Provide immediate and reversible actions, and feedback (forgiving)
6. Provide meaningful paths and exits (navigable)
7. Accommodate users with different skill levels (accessible)
8. Make the user interface transparent (facilitative)
9. Allow users to customize the interface (preferences)
10. Allow users to directly manipulate interface objects (interactive)

2) Reduce the User’s Memory Load:

The more a user has to remember, the more error-prone the interaction with the system will be. It
is for this reason that a well-designed user interface does not tax the user’s memory. Whenever
possible, the system should “remember” pertinent information and assist the user with an
interaction scenario that supports recall.
Design principles that enable an interface to reduce the user’s memory load are
1. Relieve short-term memory (remember)
2. Rely on recognition, not recall (recognition)
3. Provide visual cues (inform)
4. Provide defaults, undo, and redo (forgiving)
5. Provide interface shortcuts (frequency)
6. Promote an object-action syntax (intuitive)
7. Use real-world metaphors (transfer)
8. User progressive disclosure (context)
9. Promote visual clarity (organize)
3) Make the Interface Consistent:
The interface should present and acquire information in a consistent fashion. This implies that
(1) All visual information is organized according to design rules that are maintained
throughout all screen displays,
(2) Input mechanisms are constrained to a limited set that is used consistently
throughout the application, and
(3) Mechanisms for navigating from task to task are consistently defined
and implemented.

The set of design principles that help make the interface consistent are:
1. Sustain the context of users’ tasks (continuity)
2. Maintain consistency within and across products (experience)
3. Keep interaction results the same (expectations)
4. Provide aesthetic appeal and integrity (attitude)
5. Encourage exploration (predictable)

2. USER INTERFACE ANALYSIS AND DESIGN


 The overall process for analyzing and designing a user interface begins with the
creation of different models of system function.

1) Interface Analysis and Design Models:


Four different models come into play when a user interface is to be analyzed and designed.
i) User model: Establishes the profile of the end users of the system, based on age, gender,
physical abilities, education, cultural or ethnic background, motivation, goals, and
personality. A human engineer or the software engineer establishes the user model.

ii) Design model: The software engineer creates a design model. Derived from the analysis
model of the requirements. Incorporates data, architectural, interface, and procedural
representationsof the software.

iii) Mental model: The end user develops a mental image, often called the user's system
perception. It consists of the image of the system that users carry in their heads.

iv) Implementation model: The implementers of the system create an implementation
model. It consists of the look and feel of the interface combined with all supporting
information (books, videos, help files) that describes system syntax and semantics.

Users can be categorized as:


i) Novices: No syntactic knowledge of the system and little
semantic knowledge of the application or computer usage in general.

ii) Knowledgeable, intermittent users: Reasonable semantic knowledge of the
application but relatively low recall of the syntactic information necessary to use the interface.

iii) Knowledgeable, frequent users: Good semantic and syntactic knowledge that often
leads to the “power-user syndrome”; that is, individuals who look for shortcuts and abbreviated
modes of interaction.
2) The Process:
The analysis and design process for user interfaces is iterative and can be represented using a
spiral model.

Fig. The user interface design process

Four distinct framework activities are


(1) Interface analysis and modeling
(2) Interface design
(3) Interface construction
(4) Interface validation.
The spiral implies that each of these tasks will occur more than once, with each pass around the
spiral representing additional elaboration of requirements and the resultant design.
In most cases, the construction activity involves prototyping—the only practical way to
validate what has been designed.

(1) Interface analysis focuses on the profile of the users who will interact with the system.
 Skill level, business understanding, and general receptiveness to the new system are
recorded, and different user categories are defined.
 For each user category, requirements are elicited. In essence, understand the system
perception for each class of users.
 Once general requirements have been defined, a more detailed task analysis is conducted.
Those tasks that the user performs to accomplish the goals of the system are identified,
described, and elaborated over a number of iterative passes through the spiral.
 Finally, analysis of the user environment focuses on the physical work environment.
Among the questions to be asked are
 Where will the interface be located physically?
 Will the user be sitting, standing, or performing other tasks unrelated to the interface?
 Does the interface hardware accommodate space, light, or noise constraints?
 Are there special human factors considerations driven by environmental factors?

 The information gathered as part of the analysis action is used to create an analysis model
for the interface. Using this model as a basis, the design action commences.

(2) The goal of interface design is to define a set of interface objects, actions and their screen
representations that enable a user to perform all defined tasks in a manner that meets
every usability goal defined for the system.
(3) Interface construction normally begins with the creation of a prototype that enables
usage scenarios to be evaluated. As the iterative design process continues, a user interface
tool kit may be used to complete the construction of the interface.

(4) Interface validation focuses on


 The ability of the interface to implement every user task correctly, to accommodate
all task variations, and to achieve all general user requirements;
 The degree to which the interface is easy to use and easy to learn
 The users’ acceptance of the interface as a useful tool in their work.
 Subsequent passes through the process elaborate task detail, design information, and the
operational features of the interface.

3.11. INTERFACE ANALYSIS

Understand the problem before you attempt to design a solution. In the case of user interface
design, understanding the problem means understanding:
(1) The people (end users) who will interact with the system through the interface
(2) The tasks that end users must perform to do their work
(3) The content that is presented as part of the interface
(4) The environment in which these tasks will be conducted.

1. User Analysis:
 The phrase “user interface” is probably all the justification needed to spend some time
understanding the user before worrying about technical matters.
 Information from a broad array of sources can be used.

User Interviews.
 The most direct approach, members of the software team meet with end users to better
understand their needs, motivations, work culture, and a myriad of other issues. This can
be accomplished in one-on-one meetings or through focus groups.

Sales input.
 Sales people meet with users on a regular basis and can gather information that will help
the software team to categorize users and better understand their requirements.

Marketing input.
 Market analysis can be invaluable in the definition of market segments and an
understanding of how each segment might use the software in subtly different ways.

Support input.
 Support staff talks with users on a daily basis. They are the most likely source of
information on what works and what doesn’t, what users like and what they dislike, what
features generate questions and what features are easy to use.

The following set of questions will help you to better understand the users of a system:
 Are users trained professionals, technicians, clerical, or manufacturing workers?
 What level of formal education does the average user have?
 Are the users capable of learning from written materials, or have they expressed a
desire for classroom training?
 Are users expert typists or keyboard phobic?
 What is the age range of the user community?
 Will the users be represented predominately by one gender?
 How are users compensated for the work they perform?
 Do users work normal office hours or do they work until the job is done?
 Is the software to be an integral part of the work users do or will it be used only occasionally?
 What is the primary spoken language among users?
 What are the consequences if a user makes a mistake using the system?
 Are users experts in the subject matter that is addressed by the system?
 Do users want to know about the technology that sits behind the interface?

2. Task Analysis and Modeling:


The goal of task analysis is to answer the following questions:
 What work will the user perform in specific circumstances?
 What tasks and subtasks will be performed as the user does the work?
 What specific problem domain objects will the user manipulate as work is performed?
 What is the sequence of work tasks—the workflow?
 What is the hierarchy of tasks?

Use cases.
 The use case describes the manner in which an actor interacts with a system. When used
as part of task analysis, the use case is developed to show how an end user performs some
specific work-related task.
 In most instances, the use case is written in an informal style (a simple paragraph) in the
first- person.
 Use case provides a basic description of one important work task for the computer-aided
design system. From it, you can extract tasks, objects, and the overall flow of the
interaction.

Task elaboration.
 Elaboration is a mechanism for refining the processing tasks that are required for software
to accomplish some desired function.
 Task analysis for interface design uses an elaborative approach to assist in understanding
the human activities the user interface must accommodate.
Task analysis can be applied in two ways.
i) An interactive computer-based system is often used to replace a manual or semi-manual
activity. To understand the tasks that must be performed to accomplish the goal of the
activity, you must understand the tasks that people currently perform and then map
these into a similar set of tasks that are implemented in the context of the user
interface.
ii) Study an existing specification for a computer-based solution and derive a set of user
tasks that will accommodate the user model, the design model, and the system
perception.
 Regardless of the overall approach to task analysis, first define and classify tasks.

Example :
 Observation of an interior designer at work shows that interior design comprises a number
of major activities: furniture layout, fabric and material selection, wall and window coverings
selection, presentation (to the customer), costing, and shopping. Each of these major tasks
can be elaborated into subtasks.

Using information contained in the use case, furniture layout can be refined into the following tasks:
(1) Draw a floor plan based on room dimensions,
(2) Place windows and doors at appropriate locations,
(3a) use furniture templates to draw scaled furniture outlines on the floor
plan,
(3b) use accents templates to draw scaled accents on the floor plan,
(4) Move furniture outlines and accent outlines to get the best placement,
(5) Label all furniture and accent outlines,
(6) Draw dimensions to show location, and
(7) Draw a perspective-rendering view for the customer.

Object elaboration.

 Rather than focusing on the tasks that a user must perform, examine the use case and
other information obtained from the user and extract the physical objects that are used by
the interior designer.
 These objects can be categorized into classes.
 Attributes of each class are defined, and an evaluation of the actions applied to each
object provide a list of operations.
 For example, the furniture template might translate into a class called Furniture with
attributes that might include size, shape, location, and others.
 The interior designer would select the object from the Furniture class, move it to a
position on the floor plan (another object in this context), draw the furniture outline, and
so forth.
 The tasks select, move, and draw are operations. The user interface analysis model would
not provide a literal implementation for each of these operations. However, as the design
is elaborated, the details of each operation are defined.

Workflow analysis.

 When a number of different users, each playing different roles, make use of a user
interface, it is sometimes necessary to go beyond task analysis and object elaboration and
apply workflow analysis. This technique allows you to understand how a work process is
completed when several people (and roles) are involved.
 Consider a company that intends to fully automate the process of prescribing and
delivering prescription drugs. The entire process will revolve around a Web-based
application that is accessible by physicians (or their assistants), pharmacists, and patients.
 Workflow can be represented effectively with a UML swimlane diagram (a variation
on the activity diagram).
 We consider only a small part of the work process: the situation that occurs when a
patient asks for a refill.
 The swimlane diagram indicates the tasks and decisions for each of the three roles noted
earlier. This information may have been elicited via interview or from use cases written by
each actor.
 Regardless, the flow of events enables you to recognize a number of key
interface characteristics.
Hierarchical representation.
 A process of elaboration occurs as you begin to analyze the interface. Once workflow
has been established, a task hierarchy can be defined for each user type.
 The hierarchy is derived by a stepwise elaboration of each task identified for the
user. For example, consider the following user task and subtask hierarchy.

User task: Requests that a prescription be refilled


 Provide identifying information.
   Specify name.
   Specify userid.
   Specify PIN and password.
 Specify prescription number.
 Specify date refill is required.
 To complete the task, three subtasks are defined. One of these subtasks, provide
identifying information, is further elaborated in three additional sub-subtasks.

3. Analysis of Display Content:


 The user tasks identified lead to the presentation of a variety of different types of content.
 For modern applications, display content can range from character-based reports (e.g., a
spreadsheet) to graphical displays (e.g., a histogram, a 3-D model, a picture of a person) to
specialized information (e.g., audio or video files).
 The analysis modeling techniques identify the output data objects that are produced by an
application.

These data objects may be


(1) Generated by components in other parts of an application
(2) Acquired from data stored in a database that is accessible from the application
(3) Transmitted from systems external to the application in question.

 During this interface analysis step, the format and aesthetics of the content are
considered. Among the questions that are asked and answered are:
 Are different types of data assigned to consistent geographic locations on the screen (e.g.,
photos always appear in the upper right-hand corner)?
 Can the user customize the screen location for content?
 Is proper on-screen identification assigned to all content?
 If a large report is to be presented, how should it be partitioned for ease of understanding?
 Will graphical output be scaled to fit within the bounds of the display device that is used?
 How will color be used to enhance understanding?
 How will error messages and warnings be presented to the user?
The answers to these questions will help to establish requirements.

4. Analysis of the Work Environment:


 People do not perform their work in isolation. They are influenced by the activity around
them, the physical characteristics of the workplace, the type of equipment they are using,
and the work relationships they have with other people.
 If the products you design do not fit into the environment, they may be difficult or
frustrating to use.
 In some applications the user interface for a computer-based system is placed in a “user-
friendly location” (e.g., proper lighting, good display height, easy keyboard access), but in
others (e.g., a factory floor or an airplane cockpit), lighting may be suboptimal, noise may be
a factor, a keyboard or mouse may not be an option, display placement may be less than
ideal.
 The interface designer may be constrained by factors that work against ease of use.
 In addition to physical environmental factors, the workplace culture also comes into play.
 Will system interaction be measured in some manner (e.g., time per transaction or
accuracy of a transaction)?
 Will two or more people have to share information before an input can be provided?
 How will support be provided to users of the system? These and many related questions
should be answered before the interface design commences.

3.12. INTERFACE DESIGN STEPS

 Once interface analysis has been completed, all tasks (or objects and actions) required by
the end user have been identified in detail, and the interface design activity commences.
 Interface design is an iterative process. Each user interface design step occurs a number
of times, elaborating and refining information developed in the preceding step.
 Although many different user interface design models have been proposed, all suggest
some combination of the following steps:

1. Using information developed during interface analysis, define interface objects and
actions (operations).
2. Define events (user actions) that will cause the state of the user interface to change.
Model this behavior.
3. Depict each interface state as it will actually look to the end user.
4. Indicate how the user interprets the state of the system from information provided through the
interface.

Regardless of the sequence of design tasks, you should


(1) Always follow the golden rules
(2) Model how the interface will be implemented
(3) Consider the environment (e.g., display technology, operating system, development
tools) that will be used.

1. Applying Interface Design Steps:


 The definition of interface objects and the actions that are applied to them is an important
step in interface design.
 To accomplish this, user scenarios are parsed. That is, a use case is written.
 Nouns (objects) and verbs (actions) are isolated to create a list of objects and actions.
 Once the objects and actions have been defined and elaborated iteratively, they are
categorized by type.
 Target, source, and application objects are identified. A source object (e.g., a report icon)
is dragged and dropped onto a target object (e.g., a printer icon).
 The implication of this action is to create a hard-copy report. An application object
represents application-specific data that are not directly manipulated as part of screen
interaction.
 For example, a mailing list is used to store names for a mailing. The list itself might be
sorted, merged, or purged (menu-based actions), but it is not dragged and dropped via user
interaction.
 When you are satisfied that all important objects and actions have been defined (for one
design iteration), screen layout is performed.
 Like other interface design activities, screen layout is an interactive process in which
graphical design and placement of icons, definition of descriptive screen text,
specification and titling for windows, and definition of major and minor menu items are
conducted.
 If a real-world metaphor is appropriate for the application, it is specified at this time, and
the layout is organized in a manner that complements the metaphor.
 To provide a brief illustration of the design steps noted previously, consider a user
scenario for the SafeHome system (discussed in earlier chapters). A preliminary use case
(written by the homeowner) for the interface follows:

Based on this use case, the following homeowner tasks, objects, and data items are identified:
 accesses the SafeHome system
 enters an ID and password to allow remote access
 checks system status
 arms or disarms SafeHome system
 displays floor plan and sensor locations
 displays zones on floor plan
 changes zones on floor plan
 displays video camera locations on floor plan
 selects video camera for viewing
 views video images (four frames per second)
 pans or zooms the video camera

 Objects (the nouns in the list) and actions (the verbs) are extracted from this list of
homeowner tasks. The majority of objects noted are application objects. However, video
camera location (a source object) is dragged and dropped onto video camera (a target
object) to create a video image (a window with video display).
Fig. Preliminary screen layout

 A preliminary sketch of the screen layout for video monitoring is created. To invoke the
video image, a video camera location icon, C, located in the floor plan displayed in the
monitoring window is selected. In this case a camera location in the living room (LR) is
then dragged and dropped onto the video camera icon in the upper left-hand portion of the
screen.
 The video image window appears, displaying streaming video from the camera located in
the LR. The zoom and pan control slides are used to control the magnification and
direction of the video image.
 To select a view from another camera, the user simply drags and drops a different camera
location icon into the camera icon in the upper left-hand corner of the screen.
 The layout sketch shown would have to be supplemented with an expansion of each menu
item within the menu bar, indicating what actions are available for the video monitoring
mode (state). A complete set of sketches for each homeowner task noted in the user
scenario would be created during the interface design.

2. User Interface Design Patterns:


 Graphical user interfaces have become so common that a wide variety of user interface
design patterns has emerged. As I noted earlier in this book, a design pattern is an
abstraction that prescribes a design solution to a specific, well-bounded design problem.
 As an example of a commonly encountered interface design problem, consider a situation
in which a user must enter one or more calendar dates, sometimes months in advance.
There are many possible solutions to this simple problem, and a number of different
patterns that might be proposed.
Laakso suggests a pattern called CalendarStrip that produces a continuous, scrollable
calendar in which the current date is highlighted and future dates may be selected by
picking them from the calendar. The calendar metaphor is well known to every user and
provides an effective mechanism for placing a future date in context.
Design Issues:
Six common design issues are
i) System response time
ii) User help facilities
iii) Error information handling
iv) Command labeling
v) Application accessibility.
vi) Internationalization.

 Unfortunately, many designers do not address these issues until relatively late in the
design process.
 Unnecessary iteration, project delays, and end-user frustration often result. It is far better
to establish each as a design issue to be considered at the beginning of software design,
when changes are easy and costs are low.
i) Response time.
 System response time is the primary complaint for many interactive applications. In
general, system response time is measured from the point at which the user performs some
control action (e.g., hits the return key or clicks a mouse) until the software responds with
desired output or action.
 System response time has two important characteristics: length and variability. If system
response is too long, user frustration and stress are inevitable.
 Variability refers to the deviation from average response time, and in many ways, it is the
most important response time characteristic. Low variability enables the user to establish
an interaction rhythm, even if response time is relatively long.
 For example, a 1-second response to a command will often be preferable to a response
that varies from 0.1 to 2.5 seconds. When variability is significant, the user is always off
balance, always wondering whether something “different” has occurred behind the scenes.
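To make the two characteristics concrete, a short Python sketch (the sample timings below are hypothetical, chosen only for illustration) computes both the length and the variability of measured response times:

    from statistics import mean, stdev

    # Hypothetical response times (in seconds) measured for one command:
    samples = [0.9, 1.1, 1.0, 1.2, 0.8]

    print("average response time:", mean(samples))   # length
    print("variability:", stdev(samples))            # deviation from average

A low standard deviation here corresponds to the steady interaction rhythm described above, even when the average itself is relatively long.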
ii) Help facilities.
 Almost every user of an interactive, computer-based system requires help now and then.
In some cases, a simple question addressed to a knowledgeable colleague can do the trick.
In others, detailed research in a multivolume set of “user manuals” may be the only
option.
 In most cases, however, modern software provides online help facilities that enable a user
to get a question answered or resolve a problem without leaving the interface.

A number of design issues must be addressed when a help facility is considered:


 Will help be available for all system functions and at all times during system interaction?
Options include help for only a subset of all functions and actions or help for all functions.
 How will the user request help? Options include a help menu, a special function key, or a
HELP command.
 How will help be represented? Options include a separate window, a reference to a
printed document (less than ideal), or a one- or two-line suggestion produced in a fixed
screen location.
 How will the user return to normal interaction? Options include a return button displayed
on the screen, a function key, or a control sequence.
 How will help information be structured?
 Options include a “flat” structure in which all information is accessed through a keyword,
a layered hierarchy of information that provides increasing detail as the user proceeds into
the structure, or the use of hypertext.

iii) Error handling.


 Error messages and warnings are “bad news” delivered to users of interactive systems
when something has gone awry. At their worst, error messages and warnings impart
useless or misleading information and serve only to increase user frustration.
 There are few computer users who have not encountered an error of the form:
“Application XXX has been forced to quit because an error of type 1023 has been
encountered.” Somewhere, an explanation for error 1023 must exist; otherwise, why
would the designers have added the identification?
 Yet, the error message provides no real indication of what went wrong or where to look
to get additional information. An error message presented in this manner does nothing to
assuage user anxiety or to help correct the problem.

In general, every error message or warning produced by an interactive system should have the following
characteristics:
 The message should describe the problem in jargon that the user can understand.
 The message should provide constructive advice for recovering from the error.
 The message should indicate any negative consequences of the error (e.g., potentially
corrupted data files) so that the user can check to ensure that they have not occurred (or
correct them if they have).
 The message should be accompanied by an audible or visual cue. That is, a beep might be
generated to accompany the display of the message, or the message might flash
momentarily or be displayed in a color that is easily recognizable as the “error color.”
 The message should be “nonjudgmental.” That is, the wording should never place blame
on the user.
 Because no one really likes bad news, few users will like an error message no matter how
well designed. But an effective error message philosophy can do much to improve the
quality of an interactive system and will significantly reduce user frustration when
problems do occur.

iv) Menu and command labeling.


 The typed command was once the most common mode of interaction between user and
system software and was commonly used for applications of every type.
 Today, the use of window-oriented, point-and-pick interfaces has reduced reliance on
typed commands, but some power users continue to prefer a command-oriented mode of
interaction. A number of design issues arise when typed commands or menu labels are
provided as a mode of interaction:
 Will every menu option have a corresponding command?
 What form will commands take? Options include a control sequence (e.g., alt-P),
function keys, or a typed word.
 How difficult will it be to learn and remember the commands? What can be done if a
command is forgotten?
 Can commands be customized or abbreviated by the user?
 Are menu labels self-explanatory within the context of the interface?
 Are submenus consistent with the function implied by a master menu item?

v) Application accessibility.
 Accessibility for users who may be physically challenged is an imperative for ethical,
legal, and business reasons.
 A variety of accessibility guidelines (many designed for Web applications but often
applicable to all types of software) provide detailed suggestions for designing interfaces
that achieve varying levels of accessibility.
 Others provide specific guidelines for “assistive technology” that addresses the needs of
those with visual, hearing, mobility, speech, and learning impairments.

vi) Internationalization.
 Software engineers and their managers invariably underestimate the effort and skills
required to create user interfaces that accommodate the needs of different locales and
languages. Too often, interfaces are designed for one locale and language and then
adapted, in makeshift fashion, to work in other countries.
 The challenge for interface designers is to create “globalized” software. That is, user
interfaces should be designed to accommodate a generic core of functionality that can be
delivered to all who use the software. Localization features enable the interface to be
customized for a specific market.
UNIT IV- TESTING AND IMPLEMENTATION

4.1. SOFTWARE TESTING FUNDAMENTALS:

Objective of Testing:
The goal of testing is to find errors, and a good test is one that has a high probability
of finding an error. The tests must exhibit a set of characteristics that achieve the
goal of finding the most errors with a minimum of effort.

Testability.
“Software testability is simply how easily a computer program can be tested.”

Characteristics of testability:
1. Operability - “The better it works, the more efficiently it can be tested.”
2. Observability - “What you see is what you test.”
3. Controllability - “The better we can control the software, the more the testing
can be automated and optimized.”
4. Decomposability - “By controlling the scope of testing, we can more quickly
isolate problems and perform smarter retesting.”
5. Simplicity - “The less there is to test, the more quickly we can test it.”
6. Stability - “The fewer the changes, the fewer the disruptions to testing.”
7. Understandability - “The more information we have, the smarter we will test.”

Test Characteristics.
The following are attributes of a “good” test:
1) A good test has a high probability of finding an error.
2) A good test is not redundant.
3) A good test should be “best of breed”

4.2. INTERNAL AND EXTERNAL VIEWS OF TESTING (OR) WHITE-BOX AND
BLACK-BOX TESTING
Any engineered product can be tested in one of two ways:
The first test approach takes an external view and is called black-box testing. The
second requires an internal view and is termed white-box testing.

1. Black-box testing (External testing):


Black-box tests are conducted at the software interface. A black-box test examines
some fundamental aspect of a system with little regard for the internal logical
structure of the software.

2. White-box testing(Internal Testing):


White-box testing of software is predicated on close examination of procedural detail.
Logical paths through the software and collaborations between components
are tested by exercising specific sets of conditions and/or loops.

4.3. WHITE-BOX TESTING

White-box testing, sometimes called glass-box testing or structural testing, is a test-case
design philosophy that uses the control structure described as part of component-level
design to derive test cases.
Using white-box testing methods, you can derive test cases that
(1) Guarantee that all independent paths within a module have been exercised at least
once
(2) Exercise all logical decisions on their true and false sides
(3) Execute all loops at their boundaries and within their operational bounds
(4) Exercise internal data structures to ensure their validity.

4.3.1 BASIS PATH TESTING


Basis path testing is a white-box testing technique. The basis path method enables
the test-case designer to derive a logical complexity measure of a procedural design and
use this measure as a guide for defining a basis set of execution paths.
Test cases derived to exercise the basis set are guaranteed to execute every
statement in the program at least one time during testing.

4.3.1.1 Flow Graph Notation


The flow graph depicts logical control flow using the notation illustrated in
Figure. Each structured construct has a corresponding flow graph symbol.

Figure. Flow graph Notation


 Each circle, called a flow graph node, represents one or more procedural
statements. A sequence of process boxes and a decision diamond can map into a
single node. The arrows on the flow graph, called edges or links, represent flow of
control and are analogous to flowchart arrows.
 An edge must terminate at a node, even if the node does not represent any procedural
statements.

 Areas bounded by edges and nodes are called regions. When counting regions, we
include the area outside the graph as a region.
(a) Flowchart and (b) flow graph

4.3.1.2. Independent Program Paths


 An independent path is any path through the program that introduces at least one
new set of processing statements or a new condition. When stated in terms of a
flow graph, an independent path must move along at least one edge that has not
been traversed before the path is defined. For example, a set of independent paths
for the flow graph illustrated in Figure (b) is
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11

 Note that each new path introduces a new edge. A path that is simply a combination of
already specified paths and does not traverse any new edges is not considered an
independent path.
 Cyclomatic complexity is a software metric that provides a quantitative measure of
the logical complexity of a program. When used in the context of the basis path
testing method, the value computed for cyclomatic complexity defines the number
of independent paths in the basis set of a program and provides you with an upper
bound for the number of tests that must be conducted to ensure that all statements
have been executed at least once.

Complexity is computed in one of three ways:


1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.

 Referring once more to the flow graph in Figure (b), the cyclomatic complexity
can be computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G)= 3 predicate nodes +1 = 4.
 Therefore, the cyclomatic complexity of the flow graph in Figure (b) is 4.
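The three computations can be checked mechanically. A minimal Python sketch using only the counts quoted above (the function names are illustrative, not part of the notes):

    def cyclomatic_complexity(num_edges, num_nodes):
        # V(G) = E - N + 2 for a connected flow graph
        return num_edges - num_nodes + 2

    def cyclomatic_from_predicates(num_predicates):
        # V(G) = P + 1, where P is the number of predicate nodes
        return num_predicates + 1

    # Counts quoted for the flow graph of Figure (b):
    assert cyclomatic_complexity(11, 9) == 4
    assert cyclomatic_from_predicates(3) == 4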

4.3.1.3. Deriving Test Cases


The basis path testing method can be applied to a procedural design or to source code.
The following steps can be applied to derive the basis set:

1) Using the design or code as a foundation, draw a corresponding flow graph.
A flow graph is created using the symbols and construction rules.

2) Determine the cyclomatic complexity of the resultant flow graph.


The cyclomatic complexity V(G) is determined by applying the algorithms given above. It
should be noted that V(G) can be determined without developing a flow graph, by
counting all conditional statements in the PDL (for the procedure average, compound
conditions count as two) and adding 1.

Compute Cyclomatic Complexity using formulas


V(G) = E - N + 2 = 9 - 9 + 2 = 2
Therefore, we have to find 2 independent paths for basis path testing.
3) Prepare test cases that will force execution of each path in the basis set.
Independent path              X    Y    Expected Result (Z)
Path 1: 2-3-4-5-6-8-9-10      10   5    5 (end program)
Path 2: 2-3-4-5-7-8-9-10      5    10   5 (end program)

4) Determine a basis set of linearly independent paths.


The value of V(G) provides the upper bound on the number of linearly independent paths
through the program control structure. In the case of procedure average, we expect to
specify 2 paths:
Path 1: 2-3-4-5-6-8-9-10
Path 2: 2-3-4-5-7-8-9-10
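The notes do not reproduce the procedure under test, but a hypothetical two-path procedure consistent with the table above (one decision, so V(G) = 2, returning the smaller of two inputs) might look like this in Python:

    def smaller(x, y):
        # One predicate node, so V(G) = 2 and two basis paths exist.
        if x > y:
            z = y      # Path 1 (true branch)
        else:
            z = x      # Path 2 (false branch)
        return z

    # Test cases from the table, one per basis path:
    assert smaller(10, 5) == 5   # forces Path 1
    assert smaller(5, 10) == 5   # forces Path 2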
Data should be chosen so that conditions at the predicate nodes are appropriately set as
each path is tested. Each test case is executed and compared to expected results.

4.3.1.4. Graph Matrices
 A graph matrix is a square matrix whose size (i.e., number of rows and columns)
is equal to the number of nodes on the flow graph. Each row and column
corresponds to an identified node, and matrix entries correspond to connections
(an edge) between nodes.

Figure. Graph matrix

 The link weight provides additional information about control flow. In its simplest
form, the link weight is 1 (a connection exists) or 0 (a connection does not exist).
But link weights can be assigned other, more interesting properties:
 The probability that a link (edge) will be executed.
 The processing time expended during traversal of a link
 The memory required during traversal of a link
 The resources required during traversal of a link.
The analysis required to design test cases can be partially or fully automated.
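As a sketch of how such automation might begin, the following Python fragment encodes a hypothetical four-node flow graph as a connection matrix with 0/1 link weights and derives the cyclomatic complexity from it (each row with more than one outgoing connection marks a predicate node):

    # matrix[i][j] = 1 means an edge from node i+1 to node j+1
    matrix = [
        [0, 1, 0, 0],
        [0, 0, 1, 1],   # node 2 has two outgoing edges: a predicate node
        [0, 0, 0, 1],
        [1, 0, 0, 0],
    ]

    # Each row contributes (connections - 1); V(G) = sum + 1
    v_g = sum(max(sum(row) - 1, 0) for row in matrix) + 1
    print("V(G) =", v_g)   # prints 2 for this graph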

4.3.2 CONTROL STRUCTURE TESTING

The basis path testing technique is one of a number of techniques for control
structure testing. Although basis path testing is simple and highly effective, it is not
sufficient in itself. The following control structure testing broadens testing coverage and
improves the quality of white-box testing.
1) Condition Testing:
Condition testing is a test-case design method that exercises the logical conditions
contained in a program module. A simple condition is a Boolean variable or a relational
expression, possibly preceded with one NOT (¬) operator. A relational expression takes
the form
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following:
<,<=,=,!= (nonequality),>, or >=.
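A small hedged example: for a module containing the simple condition E1 <= E2, condition testing exercises each outcome of the relational operator (the function below is hypothetical):

    def within_limit(used, quota):
        # Module logic built around a single relational expression
        return used <= quota

    assert within_limit(5, 10) is True    # E1 < E2
    assert within_limit(10, 10) is True   # E1 == E2 (the <= boundary)
    assert within_limit(11, 10) is False  # E1 > E2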

2) Data Flow Testing:


 The data flow testing method selects test paths of a program according to the
locations of definitions and uses of variables in the program.
 For a statement with S as its statement number,
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}

 If statement S is an if or loop statement, its DEF set is empty and its USE set is
based on the condition of statement S. The definition of variable X at statement S
is said to be live at statement S’ if there exists a path from statement S to statement
S’ that contains no other definition of X.
 A definition-use (DU) chain of variable X is of the form [X, S, S’], where S and S’
are statement numbers, X is in DEF(S) and USE(S’), and the definition of X in
statement S is live at statement S’.
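A brief sketch of DEF/USE sets on a hypothetical fragment (statement numbers are in the comments):

    def scale(values, factor):
        total = 0                  # S1: DEF(total)
        for v in values:           # S2: USE(values), DEF(v)
            total = total + v      # S3: DEF(total), USE(total), USE(v)
        return total * factor      # S4: USE(total), USE(factor)

    # One DU chain for total is [total, S1, S3]: the definition at S1 is
    # live at S3 on the first loop pass. Data flow testing selects paths
    # that cover every such DU chain at least once.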

3) Loop Testing
 Loops are the cornerstone for the vast majority of all algorithms implemented in
software. And yet, we often pay them little heed while conducting software tests.
Loop testing is a white-box testing technique that focuses exclusively on the
validity of loop constructs. Four different classes of loops can be defined: simple
loops, concatenated loops, nested loops, and unstructured loops.

Figure. Classes of Loops


Simple loops.
 The following set of tests can be applied to simple loops, where n is the maximum
number of allowable passes through the loop.

1. Skip the loop entirely.


2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n -1, n, n +1 passes through the loop.
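A minimal sketch of these tests, assuming n = 10 as the maximum number of allowable passes (the loop under test is hypothetical):

    import unittest

    def count_passes(items):
        passes = 0
        for _ in items:      # simple loop under test
            passes += 1
        return passes

    class SimpleLoopTests(unittest.TestCase):
        n = 10  # assumed maximum number of allowable passes

        def test_pass_counts(self):
            # 0 (skip), 1, 2, m < n, and n-1, n, n+1 passes
            for size in (0, 1, 2, 5, self.n - 1, self.n, self.n + 1):
                self.assertEqual(count_passes(range(size)), size)

    if __name__ == "__main__":
        unittest.main()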

Nested loops.
 If we were to extend the test approach for simple loops to nested loops, the
number of possible tests would grow geometrically as the level of nesting increases.
This would result in an impractical number of tests. An approach that will help to
reduce the number of tests is:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range
or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.

Concatenated loops.
 Concatenated loops can be tested using the approach defined for simple loops, if
each of the loops is independent of the other. However, if two loops are
concatenated and the loop counter for loop 1 is used as the initial value for loop 2,
then the loops are not independent. When the loops are not independent, the
approach applied to nested loops is recommended.

Unstructured loops.
 Whenever possible, this class of loops should be redesigned to reflect the use of
the structured programming constructs.

4.4. BLACK-BOX TESTING


 Black-box testing, also called behavioral testing, focuses on the functional
requirements of the software. That is, black-box testing techniques enable you to
derive sets of input conditions that will fully exercise all functional requirements
for a program.
 Black-box testing is not an alternative to white-box techniques. Rather, it is a
complementary approach that is likely to uncover a different class of errors than
white-box methods.

Black-box testing attempts to find errors in the following categories:


(1) Incorrect or missing functions
(2) Interface errors
(3) Errors in data structures or external database access
(4) Behavior or performance errors
(5) Initialization and termination errors.

By applying black-box techniques, a set of test cases can be derived that satisfy the following criteria:
(1) Test cases that reduce, by a count that is greater than one, the number of additional
test cases that must be designed to achieve reasonable testing
(2) Test cases that tell you something about the presence or absence of classes of
errors, rather than an error associated only with the specific test at hand.
4.4.1. Graph-Based Testing Methods
 The first step in black-box testing is to understand the objects that are modeled in
software and the relationships that connect these objects. Once this has been
accomplished, the next step is to define a series of tests that verify “all objects
have the expected relationship to one another”.
 Stated in another way, software testing begins by creating a graph of important objects
and their relationships and then devising a series of tests that will cover the graph so
that each object and relationship is exercised and errors are uncovered.

Figure. (a) Graph notation; (b) simple example

A number of behavioral testing methods that can make use of graphs are:
1) Transaction flow modeling.
2) Finite state modeling.
3) Data flow modeling.
4) Timing modeling.

4.4.2 .Equivalence Partitioning


 Equivalence partitioning is a black-box testing method that divides the input
domain of a program into classes of data from which test cases can be derived. An
ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing
of all character data) that might otherwise require many test cases to be executed
before the general error is observed.
 Test-case design for equivalence partitioning is based on an evaluation of
equivalence classes for an input condition. Using concepts introduced in the
preceding section, if a set of objects can be linked by relationships that are
symmetric, transitive, and reflexive, an equivalence class is present.
 An equivalence class represents a set of valid or invalid states for input conditions.
Typically, an input condition is either a specific numeric value, a range of values,
a set of related values, or a Boolean condition.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
By applying the guidelines for the derivation of equivalence classes, test cases for each
input domain data item can be developed and executed. Test cases are selected so that the
largest number of attributes of an equivalence class are exercised at once.

Example#1:
For software that computes the square root of an input integer which can assume
values in the range of 0 to 5000, there are three equivalence classes:
the set of negative integers, the set of integers in the range of 0 to 5000, and the integers
larger than 5000. Therefore, the test cases must include representatives for each of the
three equivalence classes, and a possible test set is: {-5, 500, 6000}.

Example#2:
Design the black-box test suite for the following program. The program computes the
intersection point of two straight lines and displays the result. It reads two integer pairs
(m1, c1) and (m2, c2) defining the two straight lines of the form y=mx + c.
The equivalence classes are the following:
•Parallel lines (m1=m2, c1≠c2)
•Intersecting lines (m1≠m2)
•Coincident lines (m1=m2, c1=c2)
Now, selecting one representative value from each equivalence class, the test suite
[(2, 2), (2, 5)], [(5, 5), (7, 7)], [(10, 10), (10, 10)] is obtained.
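A hedged sketch of the program under test and the three test cases (the implementation is illustrative; only the equivalence classes come from the example):

    def intersection(m1, c1, m2, c2):
        # Lines y = m*x + c; classify before computing the point
        if m1 == m2 and c1 == c2:
            return "coincident lines"
        if m1 == m2:
            return "parallel lines"
        x = (c2 - c1) / (m1 - m2)
        return (x, m1 * x + c1)

    print(intersection(2, 2, 2, 5))       # parallel class
    print(intersection(5, 5, 7, 7))       # intersecting class: (-1.0, 0.0)
    print(intersection(10, 10, 10, 10))   # coincident class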

4.4.3. Boundary Value Analysis


 A greater number of errors occurs at the boundaries of the input domain rather
than in the “center.” It is for this reason that boundary value analysis (BVA) has
been developed as a testing technique. Boundary value analysis leads to a selection
of test cases that exercise bounding values.
 Boundary value analysis is a test-case design technique that complements
equivalence partitioning. Rather than selecting any element of an equivalence
class, BVA leads to the selection of test cases at the “edges” of the class. Rather
than focusing solely on input conditions, BVA derives test cases from the output
domain as well.
Guidelines for BVA are similar in many respects to those provided for equivalence
partitioning:
1) If an input condition specifies a range bounded by values a and b, test cases
should be designed with values a and b and just above and just below a and b.
2) If an input condition specifies a number of values, test cases should be
developed that exercise the minimum and maximum numbers. Values just above and
below minimum and maximum are also tested.
3) Apply guidelines 1 and 2 to output conditions. For example, assume that a
temperature versus pressure table is required as output from an engineering analysis
program. Test cases should be designed to create an output report that produces the
maximum (and minimum) allowable number of table entries.
4) If internal program data structures have prescribed boundaries (e.g., a table has
a defined limit of 100 entries), be certain to design a test case to exercise the data
structure at its boundary.
 Most software engineers intuitively perform BVA to some degree. By applying
these guidelines, boundary testing will be more complete, thereby having a higher
likelihood for error detection.
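Guideline 1 can be mechanized. A minimal sketch for an integer input range [a, b]:

    def boundary_values(a, b):
        # Values a and b plus values just above and just below each
        return [a - 1, a, a + 1, b - 1, b, b + 1]

    # For the square-root example's 0..5000 range:
    print(boundary_values(0, 5000))   # [-1, 0, 1, 4999, 5000, 5001]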

4.4.4. Orthogonal Array Testing


 Orthogonal array testing can be applied to problems in which the input
domain is relatively small but too large to accommodate exhaustive testing.
 The orthogonal array testing method is particularly useful in finding region
faults, an error category associated with faulty logic within a software component.
 When orthogonal array testing occurs, an L9 orthogonal array of test cases is
created. The L9 orthogonal array has a “balancing property”.
 Detect all double mode faults.
If there exists a consistent problem when specific levels of two parameters occur
together, it is called a double mode fault. Indeed, a double mode fault is an
indication of pairwise incompatibility or harmful interactions between two test
parameters.
 Multimode faults.
Orthogonal arrays [of the type shown] can assure the detection of only single and
double mode faults. However, many multimode faults are also detected by these
tests.
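For reference, a standard L9 orthogonal array covers four parameters of three levels each in just nine test cases; this is a sketch of the conventional layout (levels written 1 to 3):

    # Any two columns taken together contain each of the nine possible
    # level pairs exactly once: the "balancing property" that guarantees
    # detection of single and double mode faults.
    L9 = [
        (1, 1, 1, 1),
        (1, 2, 2, 2),
        (1, 3, 3, 3),
        (2, 1, 2, 3),
        (2, 2, 3, 1),
        (2, 3, 1, 2),
        (3, 1, 3, 2),
        (3, 2, 1, 3),
        (3, 3, 2, 1),
    ]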

4.5. REGRESSION TESTING:

 Each time a new module is added as part of integration testing, the software
changes. New data flow paths are established, new I/O may occur, and new
control logic is invoked. These changes may cause problems with functions that
previously worked flawlessly.
 In the context of an integration test strategy, regression testing is the reexecution
of some subset of tests that have already been conducted to ensure that changes
have not propagated unintended side effects.
 In a broader context, successful tests (of any kind) result in the discovery of errors,
and errors must be corrected. Whenever software is corrected, some aspect of the
software configuration (the program, its documentation, or the data that support
it) is changed. Regression testing helps to ensure that changes (due to testing or
for other reasons) do not introduce unintended behavior or additional errors.
 Regression testing may be conducted manually, by reexecuting a subset of all test
cases or using automated capture/playback tools. Capture/playback tools enable
the software engineer to capture test cases and results for subsequent playback
and comparison.

The regression test suite (the subset of tests to be executed) contains three different
classes of test cases:
1) A representative sample of tests that will exercise all software functions.
2) Additional tests that focus on software functions that are likely to be affected
by the change.
3) Tests that focus on the software components that have been changed.
 As integration testing proceeds, the number of regression tests can grow quite
large. Therefore, the regression test suite should be designed to include only those
tests that address one or more classes of errors in each of the major program
functions.
 It is impractical and inefficient to reexecute every test for every program function
once a change has occurred.
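One common way to keep such a subset runnable on demand is to tag the chosen tests; for example, with pytest markers (the function under test and the marker name here are illustrative):

    import pytest

    def add(x, y):
        # Function recently changed; its tests belong in the suite
        return x + y

    @pytest.mark.regression
    def test_add_core_behavior():
        # Representative test exercising a core software function
        assert add(2, 3) == 5

Running pytest -m regression then reexecutes only the tagged subset (registering the marker in pytest.ini avoids a warning).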

4.6. UNIT TESTING:

 Unit testing focuses verification effort on the smallest unit of software design—the
software component or module.
 The relative complexity of tests and the errors those tests uncover is limited by the
constrained scope established for unit testing. The unit test focuses on the internal
processing logic and data structures within the boundaries of a component. This
type of testing can be conducted in parallel for multiple components.

Figure. Unit test

Unit-test considerations:
 The module interface is tested to ensure that information properly flows into and
out of the program.
 Local data structures are examined to ensure that integrity is maintained.
 All independent paths are exercised to ensure that all statements in a module
have been executed at least once.
 Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
 All error handling paths should be tested.

Unit-test procedures:
 The design of unit tests can occur before coding begins or after source code has
been generated. A review of design information provides guidance for establishing
test cases that are likely to uncover errors in each of the categories discussed
earlier. Each test case should be coupled with a set of expected results.
 Because a component is not a stand-alone program, driver and/or stub software
must often be developed for each unit test.
 In most applications a driver is nothing more than a “main program” that
accepts test case data, passes such data to the component (to be tested), and prints
relevant results. Stubs serve to replace modules that are subordinate (invoked by)
the component to be tested.
 A stub or “dummy subprogram” uses the subordinate module’s interface, may do
minimal data manipulation, prints verification of entry, and returns control to the
module undergoing testing.
 Drivers and stubs represent testing “overhead.” That is, both are software that
must be written (formal design is not commonly applied) but that is not delivered
with the final software product. If drivers and stubs are kept simple, actual
overhead is relatively low.
 Unfortunately, many components cannot be adequately unit tested with “simple”
overhead software. In such cases, complete testing can be postponed until the
integration test step (where drivers or stubs are also used).
 Unit testing is simplified when a component with high cohesion is designed. When
only one function is addressed by a component, the number of test cases is
reduced and errors can be more easily predicted and uncovered.

Figure. Unit-test environment
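A minimal sketch of this environment in Python's unittest: the test class plays the role of the driver, and a hand-written stub stands in for a subordinate module (all names are hypothetical):

    import unittest

    def classify(value, threshold_lookup):
        # Component under test; depends on a subordinate module
        limit = threshold_lookup("default")
        return "high" if value > limit else "low"

    def stub_lookup(name):
        # Stub: presents the subordinate module's interface but
        # only returns a fixed value
        return 10

    class ClassifyDriver(unittest.TestCase):
        # Driver: feeds test-case data to the component and
        # checks the results
        def test_above_threshold(self):
            self.assertEqual(classify(11, stub_lookup), "high")

        def test_at_threshold(self):
            self.assertEqual(classify(10, stub_lookup), "low")

    if __name__ == "__main__":
        unittest.main()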


4.7. INTEGRATION TESTING:
 Integration testing is a systematic technique for constructing the software
architecture while at the same time conducting tests to uncover errors associated
with interfacing. The objective is to take unit-tested components and build a
program structure that has been dictated by design.
 There is often a tendency to attempt nonincremental integration; that is, to
construct the program using a “big bang” approach. All components are
combined in advance. The entire program is tested as a whole. And chaos usually
results! A set of errors is encountered. Correction is difficult because isolation of
causes is complicated by the vast expanse of the entire program. Once these errors
are corrected, new ones appear and the process continues in a seemingly endless
loop.
4.7.1. Incremental integration:
In incremental integration, the program is constructed and tested in small increments,
where errors are easier to isolate and correct; interfaces are more likely to be tested
completely; and a systematic test approach may be applied.

4.7.2. Top-down integration:


 Top-down integration testing is an incremental approach to construction of the
software architecture. Modules are integrated by moving downward through the
control hierarchy, beginning with the main control module (main program).
Modules subordinate (and ultimately subordinate) to the main control module are
incorporated into the structure in either a depth-first or breadth-first manner.

Figure. Top-down integration


 Referring to Figure, depth-first integration integrates all components on a major
control path of the program structure.
 For example, selecting the left-hand path, components M1, M2, M5 would be
integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would
be integrated. Then, the central and right-hand control paths are built.
 Breadth-first integration incorporates all components directly subordinate at each
level, moving across the structure horizontally.
 From the figure, components M2, M3, and M4 would be integrated first. The next
control level, M5, M6, and so on, follows.
The integration process is performed in a series of five steps:
1) The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
2) Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3) Tests are conducted as each component is integrated.
4) On completion of each set of tests, another stub is replaced with the real component.
5) Regression testing may be conducted to ensure that new errors have not been
introduced.
As a tester, you are left with three choices:

(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module, or
(3) Integrate the software from the bottom of the hierarchy upward.

4.7.3. Bottom-up integration:


Bottom-up integration testing, as its name implies, begins construction and testing
with atomic modules (i.e., components at the lowest levels in the program structure).
Because components are integrated from the bottom up, the functionality provided by
components subordinate to a given level is always available and the need for stubs is
eliminated.
A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters (sometimes called builds) that
perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and
output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.

Figure. Bottom-up Integration


As integration moves upward, the need for separate test drivers lessens. In fact, if
the top two levels of program structure are integrated top down, the number of drivers
can be reduced substantially and integration of clusters is greatly simplified.

Smoke Testing:
Smoke testing is an integration testing approach that is commonly used when
product software is developed. It is designed as a pacing mechanism for time-critical
projects, allowing the software team to assess the project on a frequent basis.

Smoke-testing approach encompasses the following activities:


1. Software components that have been translated into code are integrated into a build. A
build includes all data files, libraries, reusable modules, and engineered components that
are required to implement one or more product functions.

2. A series of tests is designed to expose errors that will keep the build from properly
performing its function.

3. The build is integrated with other builds, and the entire product (in its current form) is
smoke tested daily.
The daily frequency of testing the entire product may surprise some readers.
However, frequent tests give both managers and practitioners a realistic assessment of
integration testing progress.
The smoke test should exercise the entire system from end to end. It does not have
to be exhaustive, but it should be capable of exposing major problems. The smoke test
should be thorough enough that if the build passes, you can assume that it is stable
enough to be tested more thoroughly.
Smoke testing provides a number of benefits when it is applied on complex, time critical
software projects:
 Integration risk is minimized.
 The quality of the end product is improved.
 Error diagnosis and correction are simplified.
 Progress is easier to assess.

4.8 . VALIDATION TESTING:


 Validation testing begins at the culmination of integration testing, when individual
components have been exercised, the software is completely assembled as a
package, and interfacing errors have been uncovered and corrected.
 Validation succeeds when software functions in a manner that can be reasonably
expected by the customer.

1. Validation-Test Criteria:
 Software validation is achieved through a series of tests that demonstrate conformity
with requirements. A test plan outlines the classes of tests to be conducted, and a test
procedure defines specific test cases that are designed to ensure that all functional
requirements are satisfied, all behavioral characteristics are achieved, all content is
accurate and properly presented, all performance requirements are attained,
documentation is correct, and usability and other requirements are met.

After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristic conforms to specification and is accepted or
(2) A deviation from specification is uncovered and a deficiency list is created.
2. Configuration Review:
An important element of the validation process is a configuration review. The
intent of the review is to ensure that all elements of the software configuration have
been properly developed, are cataloged, and have the necessary detail to bolster the
support activities.

3. Alpha and Beta Testing:


It is virtually impossible for a software developer to foresee how the customer will really
use a program. Instructions for use may be misinterpreted; strange combinations of
data may be regularly used; output that seemed clear to the tester may be unintelligible to
a user in the field.
When custom software is built for one customer, a series of acceptance tests are
conducted to enable the customer to validate all requirements.
Conducted by the end user rather than software engineers, an acceptance test can
range from an informal “test drive” to a planned and systematically executed series of
tests.
Acceptance testing can be conducted over a period of weeks or months, thereby
uncovering cumulative errors that might degrade the system over time.

Alpha Test:
The alpha test is conducted at the developer’s site by a representative group of end
users. The software is used in a natural setting with the developer “looking over the
shoulder” of the users and recording errors and usage problems. Alpha tests are
conducted in a controlled environment.

Beta Test:
The beta test is conducted at one or more end-user sites. Unlike alpha testing, the
developer generally is not present. Therefore, the beta test is a “live” application of the
software in an environment that cannot be controlled by the developer. The customer
records all problems (real or imagined) that are encountered during beta testing and
reports these to the developer at regular intervals. As a result of problems reported during
beta tests, you make modifications and then prepare for release of the software product to
the entire customer base.

Acceptance Testing:
A variation on beta testing, called customer acceptance testing, is sometimes
performed when custom software is delivered to a customer under contract. The customer
performs a series of specific tests in an attempt to uncover errors before accepting the
software from the developer. In some cases (e.g., a major corporate or governmental
system) acceptance testing can be very formal and encompass many days or even weeks
of testing.
4.9. SYSTEM TESTING:

 Software is incorporated with other system elements (e.g., hardware, people,


information), and a series of system integration and validation tests are conducted.
 These tests fall outside the scope of the software process and are not conducted
solely by software engineers. However, steps taken during software design and
testing can greatly improve the probability of successful software integration in
the larger system.
 A classic system-testing problem is “finger pointing.” This occurs when an error is
uncovered, and the developers of different system elements blame each other for
the problem.
Rather than indulging in such nonsense, you should anticipate potential interfacing problems and
(1) Design error-handling paths that test all information coming from other elements
of the system,
(2) Conduct a series of tests that simulate bad data or other potential errors at the
software interface,
(3) Record the results of tests to use as “evidence” if finger pointing does occur, and
(4) Participate in planning and design of system tests to ensure that software is adequately tested.

Types of system tests are


1) Recovery Testing
2) Security Testing
3) Stress Testing
4) Performance Testing
5) Deployment Testing

1) Recovery Testing:
Recovery testing is a system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), reinitialization, checkpointing
mechanisms, data recovery, and restart are evaluated for correctness.
If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.

2) Security Testing:
Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration.
“The system’s security must be tested for invulnerability from frontal attack,
but must also be tested for invulnerability from flank or rear attack.”
During security testing, the tester may attempt to acquire passwords through
external clerical means; may attack the system, thereby denying service to others; may
purposely cause system errors, hoping to penetrate during recovery; or may browse
through insecure data, hoping to find the key to system entry.
The role of the system designer is to make penetration cost more than the value
of the information that will be obtained.

3) Stress Testing:
Stress tests are designed to confront programs with abnormal situations. In
essence, the tester who performs stress testing asks: “How high can we crank this up
before it fails?”
Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume.

For example,
(1) Special tests may be designed that generate ten interrupts per second, when one or
two is the average rate.
(2) Input data rates may be increased by an order of magnitude to determine how input
functions will respond.
(3) Test cases that require maximum memory or other resources are executed.
(4) Test cases that may cause thrashing in a virtual operating system are designed.
(5) Test cases that may cause excessive hunting for disk-resident data are created.

Essentially, the tester attempts to break the program.


A variation of stress testing is a technique called sensitivity testing. In some
situations, a very small range of data contained within the bounds of valid data for a
program may cause extreme and even erroneous processing or profound performance
degradation. Sensitivity testing attempts to uncover data combinations within valid input
classes that may cause instability or improper processing.

4) Performance Testing:
Performance testing is designed to test the run-time performance of software
within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process. Even at the
unit level, the performance of an individual module may be assessed as white-box tests
are conducted.
However, it is not until all system elements are fully integrated that the true
performance of a system can be ascertained.
Performance tests are often coupled with stress testing and usually require both
hardware and software instrumentation.

5) Deployment Testing:
In many cases, software must execute on a variety of platforms and under more
than one operating system environment. Deployment testing, sometimes called
configuration testing, exercises the software in each environment in which it is to operate.
In addition, deployment testing examines all installation procedures and specialized
installation software that will be used by customers, and all documentation that will be
used to introduce the software to end users.

4.10. DEBUGGING:
Software testing is a process that can be systematically planned and specified. Test
case design can be conducted, a strategy can be defined, and results can be evaluated
against prescribed expectations.
Debugging occurs as a consequence of successful testing. That is, when a test case
uncovers an error, debugging is the process that results in the removal of the error.

1. The Debugging Process:
The debugging process begins with the execution of a test case. Results are
assessed and a lack of correspondence between expected and actual performance is
encountered.
In many cases, the noncorresponding data are a symptom of an underlying cause
as yet hidden. The debugging process attempts to match symptom with cause, thereby
leading to error correction.

The debugging process will usually have one of two outcomes:


(1) The cause will be found and corrected or
(2) The cause will not be found. In the latter case, the person performing debugging may
suspect a cause, design a test case to help validate that suspicion, and work toward error
correction in an iterative fashion.

Few characteristics of bugs provide some clues:


1. The symptom and the cause may be geographically remote. That is, the symptom may
appear in one part of a program, while the cause may actually be located at a site that is
far removed. Highly coupled components exacerbate this situation.
2. The symptom may disappear (temporarily) when another error is corrected.
3. The symptom may actually be caused by nonerrors (e.g., round-off inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-time
application in which input ordering is indeterminate).
7. The symptom may be intermittent. This is particularly common in embedded
systems that couple hardware and software inextricably.
8. The symptom may be due to causes that are distributed across a number of tasks
running on different processors.
During debugging, you’ll encounter errors that range from mildly annoying (e.g.,
an incorrect output format) to catastrophic (e.g., the system fails, causing serious
economic or physical damage). As the consequences of an error increase, the amount of
pressure to find the cause also increases. Often, pressure forces some software developers
to fix one error and at the same time introduce two more.

2. Psychological Considerations:
Debugging is one of the more frustrating parts of programming. It has elements of
problem solving or brain teasers, coupled with the annoying recognition that you have
made a mistake.
Heightened anxiety and the unwillingness to accept the possibility of errors
increases the task difficulty. Fortunately, there is a great sigh of relief and a lessening of
tension when the bug is ultimately . . . corrected.

3. Debugging Strategies:
Regardless of the approach that is taken, debugging has one overriding objective
— to find and correct the cause of a software error or defect. The objective is realized by
a combination of systematic evaluation, intuition, and luck.

In general, three debugging strategies have been proposed:


(1) Brute force
(2) Backtracking
(3) Cause elimination.
Each of these strategies can be conducted manually, but modern debugging
tools canmake the process much more effective.
Debugging tactics.

1) Brute force:
The brute force category of debugging is probably the most common and least
efficientmethod for isolating the cause of a software error.
Using a “let the computer find the error” philosophy, memory dumps are taken, run-time
traces are invoked, and the program is loaded with output statements.
Although the mass of information produced may ultimately lead to success, it more
frequently leads to wasted effort and time.

2) Backtracking:
Backtracking is a fairly common debugging approach that can be used successfully in
small programs.
Beginning at the site where a symptom has been uncovered, the source code is
traced backward (manually) until the cause is found. Unfortunately, as the number of
source lines increases, the number of potential backward paths may become
unmanageably large.

3) Cause elimination:
The third approach to debugging—cause elimination—is manifested by induction
or deduction and introduces the concept of binary partitioning. Data related to the error
occurrence are organized to isolate potential causes.
Alternatively, a list of all possible causes is developed and tests are conducted to
eliminate each Automated debugging.
Each of these debugging approaches can be supplemented with debugging tools
that can provide you with semiautomated support as debugging strategies are attempted.
Integrated development environments (IDEs) provide a way to capture some of
the language specific predetermined errors (e.g., missing end-of-statement characters,
undefined variables, and so on) without requiring compilation.

Correcting the Error:


Once a bug has been found, it must be corrected. But, as we have already
noted, the correction of a bug can introduce other errors and therefore do more harm than
good.
Three simple questions that you should ask before making the “correction” that
removes the cause of a bug:
1. Is the cause of the bug reproduced in another part of the program?
2. What “next bug” might be introduced by the fix I’m about to make?
3. What could we have done to prevent this bug in the first place?

4.11 Program Analysis


There are two ways by which the program analysis is carried out
1) Static analysis
2) Dynamic analysis

1) Static analysis
Definition: Static testing is a testing technique in which the software is tested without
executing the code. As the code, requirement documents, and design documents are checked
manually in order to find errors, it is called static testing.
This kind of testing is also called verification testing.

Static analysis testing techniques:


1. Informal reviews:
In this technique, the team of reviewers just checks the documents and gives comments.
The purpose is to maintain quality from the initial stage. It is non-documented in nature.
2. Formal reviews:
It is well structured and documented and follows six main steps: planning, kick-off,
preparation, review meeting, rework and follow-up.

3. Technical reviews:
A team of technical experts reviews the software for technical
specifications. The purpose is to point out the differences between the required
specification and the product design, and then correct the flaws. It focuses on technical
documents such as the test strategy, test plan and requirement specification documents.

4. Walk-through:
The author explains the software to the team and teammates can raise questions if they
have any. It is headed by the author and review comments are noted down.

5. Inspection process:
The meeting is headed by a trained moderator. A formal review is done, a record is
maintained for all the errors and the authors are informed to make rectifications on the given
feedbacks.

6. Static code review:


Code is reviewed without execution; it is checked for syntax, coding standards and
code optimization. It is also referred to as white box testing.

Advantages
1) It is a fast and easy technique used to fix errors.
2) It helps in identifying flaws in the code.
3) With the help of automated tools it becomes very easy and convenient to scan and review
the software.
4) With static testing it is possible to find errors at an early stage of the development life cycle.

Disadvantages
1) It takes a lot of time to conduct the testing procedure if it is done manually.
2) Automated tools work for a restricted set of programming languages.
3) Automated tools simply scan the code and cannot test it deeply.

2) Dynamic analysis
Definition: Dynamic testing is a process by which code is executed to check how the software
will perform in a runtime environment. As this type of testing is conducted during code
execution, it is called dynamic. It is also called validation testing.

Dynamic testing techniques

• Unit testing: As the name suggests, individual units or modules are tested. The source code is
tested by the developers.
• Integration testing: Individual modules are combined and tested by the developers. It is
performed in order to ensure that the modules work in the right manner and will continue
to perform flawlessly even after integration.
• System testing: It is performed on the complete system to ensure that the application is
designed according to the requirement specification document.
Advantages
1) It identifies weak areas in the software at runtime
2) It helps in performing detailed analysis of code
3) It can be applied with any application

Disadvantages
1) It is not easy to find a trained software tester to perform dynamic testing
2) It becomes costly to fix errors in dynamic testing

4.12 Symbolic Execution


Symbolic execution is a testing technique in which we use symbols to see what the
output of the code would be without actually running it. This helps to find errors in the
software without performing a real test.
Symbolic execution is a technique for testing software by symbolically evaluating the
program's control flow and data flow: the program is "executed" with symbolic input
values instead of concrete input values, so the code is analyzed without running it on
actual data.

For example-

int divide(int a, int b)
{
    int result;
    if (b != 0)
        result = a / b;   /* path condition: b != 0 */
    else
        result = -1;      /* path condition: b == 0 */
    return result;
}

In the above code, result is effectively a symbolic variable: we can analyze the code's
behaviour in both branches without having specific values for 'a' and 'b' at this stage.
This helps us understand how the code will react to various scenarios.
 Symbolic execution is often used in more complex programs to find major bugs.
 This technique also helps us to understand how the code responds to different inputs
without running it with actual data.
 It is an important technique for debugging and testing software, to identify potential
problems and ensure robustness.
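The idea can be illustrated with a small Python sketch (hypothetical and simplified; real
symbolic executors such as KLEE derive path conditions automatically and hand them to a
constraint solver):

# Sketch of symbolic execution for the divide() example above: instead of
# running the code on concrete numbers, we record, for every path, the
# condition on the symbolic inputs under which that path is taken.
def symbolic_divide():
    a, b = "a", "b"   # symbolic inputs; never given concrete values
    return [
        {"path_condition": f"{b} != 0", "result": f"{a} / {b}"},
        {"path_condition": f"{b} == 0", "result": "-1"},
    ]

for path in symbolic_divide():
    print(f"when {path['path_condition']}: result = {path['result']}")

Running the sketch lists one line per feasible path, which is exactly the information a
tester needs in order to pick concrete test inputs.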

4.13Model Checking
Model checking is a technique for verifying that a software system satisfies the desired
properties. In this technique, a model of the system is created and then, using a model
checker, we exhaustively explore all possible states of the model to check the properties
of the system.
Model checking can be used in both software testing and debugging. In software testing, it
can be used to verify that the system meets its requirements. In software debugging, it can
be used to identify the root cause of a bug.
Once the model and the desired properties have been specified, a model checker can be used
to verify that the model satisfies the properties.

For example, consider an ATM system in which we have to design a withdrawal function,
say withdraw(). This function allows the user to withdraw money from his/her account.
In order to implement this function, the following requirements need to be satisfied -

1. The user must have a positive balance in their account in order to withdraw money.
2. The user cannot withdraw more money than they have in their account.
3. The user's balance must be updated after each withdrawal.
Before actually executing the code, we can use the model checking technique to verify that
the withdraw() function meets these requirements. In this technique, we first create a model
of the withdraw() function and specify the above requirements as properties of that model.
The model checker then explores all possible states of the function to verify that it
satisfies all these properties. For instance, if there is zero balance, then there must not
be a path to the 'withdraw amount' state. Similarly, only if the balance is at least the
withdrawal amount should there be a path to the 'withdraw amount' state. If the model
checker finds a state in which the function does not update the user's balance after a
withdrawal, then it reports a bug. By identifying this issue in our model, we can locate the
corresponding code in our software and correct the logic to prevent this scenario from
happening.
Model checking is particularly valuable for complex systems, where manually testing all
possible combinations and paths would be impractical.
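As a rough illustration of the idea (a toy enumeration, not a real model checker such as
SPIN or NuSMV; the tiny state space and all names are hypothetical), the following Python
sketch exhaustively explores a small finite model of withdraw() and checks the requirements
in every reachable state:

# Toy explicit-state check of the withdraw() requirements: enumerate
# every (balance, amount) state and verify the desired properties.
from itertools import product

BALANCES = range(0, 5)    # small finite state space, explored exhaustively
AMOUNTS = range(1, 5)

def withdraw(balance, amount):
    # Transition relation of the model: returns the next balance.
    if balance >= amount:  # requirement: the user cannot overdraw
        return balance - amount
    return balance         # withdrawal refused; state unchanged

violations = []
for balance, amount in product(BALANCES, AMOUNTS):
    new_balance = withdraw(balance, amount)
    if new_balance < 0:    # property: the balance must never go negative
        violations.append((balance, amount, "negative balance"))
    if balance >= amount and new_balance != balance - amount:
        violations.append((balance, amount, "balance not updated"))

print("all properties hold" if not violations else violations)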
Unit – 5 Project Management

Software Project Management

5.1 Software Project Management-Introduction


Management is an essential activity for computer-based systems and products.
The term project management involves various activities such as planning,
monitoring and control of people, process and the various events that occur during the
development of software.
Building a software system is a complex activity, and many people get
involved in this activity for a relatively long time. Hence, it is necessary to
manage the project. Project management is carried out with the help of the 4
P's, i.e. people, product, process and project.

Hence, we will start our discussion by focusing on these elements.

5.1.1 Management Spectrum

Effective software project management focuses on the four P's, i.e. people, product,
process and project. Successful project management is done with the help
of these four factors, where the order of these elements is not arbitrary.
The project manager has to facilitate communication between the stakeholders.
He should also prepare a project plan for the success of the product.

5.1.2 The People

People factor is an important issue in software industry. There is a strong


need for motivated and highly skilled people for developing the software
product. The Software Engineering Institute (SEI) has developed the People
Management Capability Maturity Model (PM-CMM)

By using the PM-CMM model, software organizations become capable of
undertaking complex applications, which ultimately attracts and motivates
talented people.
Following are some key practice areas for software people -
 Recruitment
 Selection
 Performance management
 Training
 Compensation
 Career development
 Organization and Work design
 Culture development.
5.1.3 The Product
Before planning the project three important tasks need to be done -
 Product objectives and scope must be established.
 Alternative solutions should be considered.
 Technical and management constraints must be identified.

The software developer and customer must communicate with each other in
order to define the objectives and scope of the product. This is done as the
first step in requirement gathering and analysis. The scope of the project
identifies primary data, functions and behaviour of the product.
After establishing the objectives and scope of the product the alternative
solutions are considered.
Finally, the constraints imposed by the delivery deadline, budgetary restrictions and
personnel availability can be identified.

5.1.4 The Process

The software process provides the framework from which the software
development plan can be established.
There are various framework activities that need to be carried out during the
software development process. These activities can be of varying size and
complexity.
Different task sets - tasks, milestones, work products and quality assurance points -
enable the framework activities to adapt to the software requirements and certain
characteristics of the software project.
Finally, umbrella activities such as Software Quality Assurance (SQA) and
Software Configuration Management (SCM) are conducted. These umbrella
activities depend upon the framework activities.

5.1.5 Project Planning Process


The first step to be taken in project management is Project planning. There are five
major activities that are performed in project planning -

[1] Project estimation


[2] Project scheduling
[3] Risk analysis
[4] Quality management planning
[5] Change Management planning

Software estimation begins with a description of the scope of the software product.
For meaningful project development the scope must be bounded. The
problem for which the product is to be built is then decomposed into a set
of smaller problems. Each of these is estimated using historical data
(metrics) and/or previous experience as a guide. Two important issues -
problem complexity and risk - are considered before the final estimate is made.

There are many useful techniques for time and effort estimation. Process and
project metrics can provide a historical perspective and powerful input for the
generation of quantitative estimates.

Estimation of resources, cost and schedule for a software engineering effort
requires -
 Experience,
 Access to good historical information (metrics), and
 The courage to commit to quantitative predictions when only qualitative
information is available.

While estimating the project, both the project planner and the customer should
recognize that variability in the software requirements means instability in cost
and schedule. When the customer changes the requirements, the estimation
needs to be revisited.

5.1.6 Software Scope and Feasibility

Software scope describes four things -


 The function and features that are to be delivered to end-users.
 The data which can be input and output.
 The content of the software that is presented to user.
 Performance, constraints, reliability and interfaces that bounds the System.

There are two ways by which the scope can be defined -


[1] A scope can be defined using the narrative description of the
software obtained aftercommunication with all stakeholders.
[2] Scope can be defined as a set of use cases developed by the end users.

 In the scope description, various functions are described. These
functions are evaluated and refined to provide more details before the
estimation of the project.
 For performance consideration, processing and response time
requirements are analyzed.
 The constraints identify the limitations placed on the software by
external hardware or any other existing system.
After identifying the scope following questions must be asked –
° Can we build the software to meet this scope?
° Is this software project feasible?
That means after identifying the scope of the project, its feasibility must be
ensured.
Following are the four dimensions of software feasibility. To ensure the
feasibility of the software project, a set of questions based on these
dimensions has to be answered. They are given below -
[1] Technology
 Is a project technically feasible?
 Is it within the state of art?
 Are the defects to be reduced to a level that satisfies the application's
need?
[2] Finance
 Is it financially feasible?
 Can development be completed at a cost that the software
organization, its client, or the market can afford?
[3] Time
 Will the project's time to market beat the competition?
[4] Resource
 Does the organization have the resources needed to succeed?
 Putnam and Myers suggest that scoping is not enough. Once the
scope is understood and feasibility has been identified, the
next task is estimation of the resources required to accomplish the
software development effort.

5.2 Estimation
Software project estimation is a form of problem solving. Many times the
problem to be solved is too complex in software engineering. Hence, for solving such
problems, we decompose the given problem into a set of smaller problems.
The decomposition can be done using two approaches - decomposition of the problem
or decomposition of the process. Estimation uses one or both forms of decomposition
(partitioning).

5.2.1 Software Sizing


Following are certain issues based on which accuracy of software project estimate is
predicated -
1. The degree to which planner has properly estimated the size of the product
to be built.
2. The ability to translate the size estimate into human-effort, calendar time
and money
3. The degree to which the project plan reflects the abilities of software team.
4. The stability of product requirements and the environment that
supports the softwareengineering effort.

Sizing represents the project planner's first major challenge. In the context
of project planning, size refers to a quantifiable outcome of the software
project. Sizing can be estimated using two approaches - a direct
approach, in which the lines of code are counted, and an indirect approach, in
which the function points are computed.

Putnam and Myers suggested four different approaches for sizing the problem -

[1] Fuzzy logic sizing


In this approach the planner must identify -
 The type of application,
 Its magnitude, established on a qualitative scale and then refined within the
original range.
 The planner should also have access to a historical database of projects
so that estimates can be compared with actual experience.
[2] Function point sizing

Planner develops estimates of the information domain.


[3] Standard component sizing
There are various standard components used in software. These
components are subsystems, modules, screens, reports, interactive
programs, batch programs, files, LOC and object-level instructions.
The project planner estimates the number of times each standard
component is used. He then uses historical project data to determine the
delivered size per standard component.

[4] Change sizing
This approach is used when existing software has to be modified as per
the requirements of the project. The size of the software is then estimated
by the number and type of reuse, the addition of code, the changes made in the
code and the deletion of code.

The results of each sizing approach must be combined statistically to
create a three-point estimate, which is also known as an expected-value estimate.

FP Based & LOC Based Estimation

5.2.2 Problem based Estimation


Problem based estimation is conducted using LOC based estimation, FP based
estimation, process based estimation and use case based estimation.
LOC and FP based data are used in two ways during software estimation -
1. These are useful to estimate the size of each element of the software.
2. Baseline metrics collected from past projects (LOC and FP data) are used in
conjunction with estimation variables to develop cost and effort values for the
project.
LOC and FP estimation are different estimation techniques, yet both have a
number of characteristics in common.

The project planning process begins with a bounded statement of software scope,
and by using this statement the software problem is decomposed into functions
that can each be estimated individually.

LOC or FP is then estimated for each function.
Baseline productivity metrics are then applied to the appropriate estimation
variable, and the cost or effort for the function is derived.
Function estimates are combined to obtain an overall estimate for the entire
project.
Using historical data, the project planner computes an expected value by considering the
following estimates -
1. Optimistic (Sopt)
2. Most likely (Sm)
3. Pessimistic (Spess)
For example, the following formula weights the "most likely" estimate most heavily,
where S is the expected value of the estimation (size) variable, Sopt is the optimistic
estimate, Sm is the most likely estimate and Spess is the pessimistic estimate:
S = (Sopt + 4 Sm + Spess) / 6
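A quick sketch of this three-point calculation in Python (the sample figures are the ones
used in the LOC example that follows):

# Three-point (expected value) estimate: S = (Sopt + 4*Sm + Spess) / 6
def expected_value(s_opt, s_m, s_pess):
    return (s_opt + 4 * s_m + s_pess) / 6

print(expected_value(4700, 6000, 10000))   # 6450.0 LOC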
5.3 LOC based Estimation

A size-oriented measure is derived by considering the size of the software that has been
produced.
The organization builds a simple record of size measures for its software
projects, based on its past experience.
It is a direct measure of software.

Project   LOC      Effort   Cost ($)   Doc. (pgs.)   Errors   Defects   People
ABC       10,000   20       170        400           100      12        4
PQR       20,000   60       300        1000          129      32        6
XYZ       35,000   65       522        1290          280      87        7
:         :        :        :          :             :        :         :

A simple set of size measure that can be developed is as given below :


 Size = Kilo Lines of Code (KLOC)
 Effort = Person/month
 Productivity = KLOC/person-month
 Quality = Number of faults/KLOC
 Cost = $/KLOC
 Documentation = Pages of documentation/KLOC

The size measure is based on the lines of code computation. A line of code
is defined as one line of text in a source file.

While counting the lines of code the simplest standard is :


 Don't count blank lines.
 Don't count comments.
 Count everything else.
The size-oriented measure is not a universally accepted method.

Advantages
1. Artifact of software development which is easily counted.
2. Many existing methods use LOC as a key input.

3. A large body of literature and data based on LOC already exists.

Disadvantages
1. This measure is dependent upon the programming language.
2. A well-designed but shorter program is penalized by this measure.
3. It does not accommodate non-procedural languages.
4. In the early stages of development it is difficult to estimate LOC.

Example of LOC based Estimation

Consider an ABC project with some important modules such as
1. User interface and control facilities
2. 2D graphics analysis
3. 3D graphics analysis
4. Database management
5. Computer graphics display facility
6. Peripheral control function
7. Design analysis models
Estimate the project based on LOC.

Solution
For estimating the given application, we consider each module as a separate
function and estimate the corresponding lines of code in the following
table.

Function                                        Estimated LOC
User Interface and Control Facilities (UICF)    2500
2D Graphics Analysis (2DGA)                     5600
3D Geometric Analysis (3DGA)                    6450
Database Management (DBM)                       3100
Computer Graphics Display Facility (CGDF)       4740
Peripheral Control Function (PCF)               2250
Design Analysis Modules (DAM)                   7980
Total estimation in LOC                         32620

The expected LOC for the 3D geometric analysis function, based on three-point estimation, is -
 Optimistic estimate: 4700
 Most likely estimate: 6000
 Pessimistic estimate: 10000

Expected value = (4700 + 4 × 6000 + 10000) / 6 = 6450 LOC

A review of historical data indicates -


1. Average productivity is 500 LOC per month
2. Average labor cost is $6000 per month
Then the cost per line of code can be estimated as
Cost per LOC = $6000 / 500 = $12

By considering the total estimated LOC as 32620:
 Total estimated project cost = 32620 × $12 = $391,440
 Total estimated project effort = 32620 / 500 = 65 person-months
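The same arithmetic as a small Python sketch, using the figures from the example above:

# LOC-based estimation: derive cost and effort from the total LOC,
# the average productivity (LOC/person-month) and the labor cost ($/month).
total_loc = 32620
productivity = 500       # LOC per person-month
labor_cost = 6000        # dollars per person-month

cost_per_loc = labor_cost / productivity   # $12 per LOC
project_cost = total_loc * cost_per_loc    # $391,440
effort = total_loc / productivity          # about 65 person-months
print(cost_per_loc, project_cost, round(effort))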

5.4 Function Oriented Metrics

The function point model is based on the functionality of the delivered
application. Function points are generally independent of the programming
language used.
This method was developed by Albrecht in 1979 for IBM.
Function points are derived using:
1. Countable measures of the software's information domain
2. Assessments of the software complexity.

How to calculate function point?


The data for following information domain characteristics are collected :
[1] Number of user inputs
Each user input which provides distinct application data to the software is
counted.
[2] Number of user outputs
Each user output that provides application data to the user is counted, e.g.
screens, reports, error messages.
[3] Number of user inquiries
An on-line input that results in the generation of some immediate software
response in the form of an output.

[4] Number of files
Each logical master file, i.e. a logical grouping of data that may be part of
a database or a separate file, is counted.
[5] Number of external interfaces
All machine-readable interfaces that are used to transmit information
to another system are counted.
The organization needs to develop criteria which determine whether a
particular entry is simple, average or complex.
The weighting factors should be determined by observations or by experiments.
The count total is then computed by multiplying each information domain count
by its weighting factor and summing the results.
Now the software complexity can be computed by answering the following
questions. These are the complexity adjustment values:
1. Does the system need reliable backup and recovery?
2. Are data communications required ?
3. Are there distributed processing functions ?
4. Is performance of the system critical ?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry ?
7. Does the on-line data entry require the input transaction to be built over
multiple screens or operations?
8. Are the master files updated on-line ?
9. Are the inputs, outputs, files or inquiries complex ?
10. Is the internal processing complex ?
11. Is the code which is designed being reusable ?
12. Are conversion and installation included in the design ?
13. Is the system designed for multiple installations in different organizations ?
14. Is the application designed to facilitate change and ease of use by the user?
Rate each of the above factors according to the following scale:

0 - No influence
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
5 - Essential
Once the function point is calculated, we can compute various measures as
follows
 Productivity = FP/person-month
 Quality = Number of faults/FP
 Cost = $/FP
 Documentation = Pages of documentation/FP.

Advantages
 This method is independent of programming languages.
 It is based on data which can be obtained in the early stages of a project.

Disadvantages
 This method is more suitable for business systems and can be developed for
that domain.
 Many aspects of this method are not validated.
 The function point has no significant physical meaning - it is just a numerical value.

Example of FP based Estimation


FP focuses on information domain values rather than software functions. Thus
we create a function point calculation table for the ABC project.

For this example we assume an average complexity weighting factor.

Each of the complexity weighting factors is estimated, and the complexity
adjustment factor is computed using the complexity factor table (based on the
14 questions above). Here the unadjusted function point count works out to 381
and the sum of the complexity adjustment values to 52.

The estimated number of adjusted FP is derived using the following formula:
FP = count total × [0.65 + 0.01 × Σ Fi]
Complexity adjustment factor = [0.65 + (0.01 × 52)] = 1.17
FP estimated = 381 × 1.17 = 446 (function point count adjusted with the
complexity adjustment factor)
1. Average productivity is 6.5 FP/Person month

2. Average labor cost is $6000 per month
Calculations for cost per function point, total estimated project cost and total effort
1. The cost per function point = (6000 / 6.5) = $923
2. Total estimated project cost = (446 * 923) = $411658
3. Total estimated effort = (446 / 6.5) = 69 Person-month.
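The FP arithmetic above can be sketched in Python as follows (the count total of 381 and
the sum of the Fi values, 52, are the example's assumed inputs):

# FP-based estimation: adjust the raw count total by the complexity
# adjustment factor, then derive cost and effort from FP productivity.
count_total = 381
sum_fi = 52
caf = 0.65 + 0.01 * sum_fi                 # 1.17
fp = round(count_total * caf)              # 446 adjusted function points
productivity = 6.5                         # FP per person-month
labor_cost = 6000                          # dollars per person-month

cost_per_fp = round(labor_cost / productivity)   # about $923
project_cost = fp * cost_per_fp                  # about $411,658
effort = round(fp / productivity)                # about 69 person-months
print(fp, cost_per_fp, project_cost, effort)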

5.5 Make/Buy Decision


Software engineering managers are often faced with a make-buy decision when
acquiring computer software. Normally, the following options are used to
acquire the software.

1. Purchase or buy the software.


2. Reuse existing partially built components to construct the system.
3. Build the system from scratch.
4. Contract the software development to an outside vendor.
The decision about acquisition of software is critically based on cost. A tree
structure is built to analyze the costs of the software acquired in
any of the above ways.

For example - Consider the make-buy decision tree for system S.

The expected cost of each branch of the decision tree is computed using the following
formula:
Expected cost = Σ (path probability)i × (estimated path cost)i
For example, the expected cost of the 'build' branch is the probability-weighted sum of
the estimated costs of its possible outcomes. In this way the expected cost at each node
of the tree can be computed and summarized.

From this we may conclude that by purchasing the software we select the
lowest expected-cost option. But cost alone should not be the criterion for
acquiring the software.

During decision making process for software acquisition following factors should
also be considered.
1. Availability of reliable software.
2. Experience of developer or vendor or contractor.
3. Conformance to requirements.
4. Local politics.
5. Likelihood of changes in the software.
These are some criteria which can heavily affect the make-buy decision for software.
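A minimal sketch of the expected-cost computation over such a decision tree. The
probabilities and path costs below are purely illustrative stand-ins for the figures of
system S (which come from the decision-tree figure):

# Expected cost of each acquisition option:
#   expected cost = sum over branches of (path probability * path cost)
options = {
    "build":    [(0.30, 380_000), (0.70, 450_000)],  # simple / difficult
    "reuse":    [(0.40, 275_000), (0.60, 490_000)],
    "buy":      [(0.70, 210_000), (0.30, 400_000)],  # minor / major changes
    "contract": [(0.60, 350_000), (0.40, 500_000)],
}
for name, branches in options.items():
    expected = sum(p * cost for p, cost in branches)
    print(f"{name}: expected cost = ${expected:,.0f}")

With these illustrative numbers the 'buy' option comes out cheapest, matching the
conclusion drawn above; the non-cost criteria listed above still apply.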

5.5.1 Outsourcing

Outsourcing is a process in which software engineering activities are
contracted to a third party who does the work at lower cost with high quality.

At the strategic level, a significant portion of the software work can be contracted
to a third party.
At the tactical level, a project manager determines whether part or all of the
software can be accomplished with good quality by contracting it out.
At the financial level, cost is the prime factor in the decision of outsourcing.

Benefits of outsourcing
[1] Cost savings
If software is outsourced, then people and resource utilization can be
reduced, and thereby the cost of the project can be saved effectively.
[2] Accelerated development
Since some parts of the software get developed simultaneously by a third
party, the overall development process gets accelerated.
Drawbacks of outsourcing
A software company loses some control over the software, as it is developed by
a third party.
The trend of outsourcing will continue in the software industry in order to
survive in a competitive world.

5.6 COCOMO Model


The COCOMO model (Constructive Cost Model) was proposed by Boehm.
This model estimates the total effort in terms of "person-months" of the technical
project staff.
Boehm introduced three forms of COCOMO. It can be applied to three classes of
software projects:
1. Organic mode: Relatively simple, small projects with a small team
are handled. Such a team should have good application experience with
less rigid requirements.
2. Semi-detached mode: For intermediate software projects (a little more
complex compared to organic-mode projects in terms of size). Projects
may have a mix of rigid and less-than-rigid requirements.
3. Embedded mode: When the software project must be developed
within a tight set of hardware and software operational constraints.
Example of a complex project: an air traffic control system.
Forms of the COCOMO model are:
1. Basic COCOMO: Computes software development effort and cost as a
function of program size expressed in terms of lines of code (LOC).

The basic COCOMO model takes the following form:
E = ab (KLOC)^bb person-months
D = cb (E)^db months
where
E stands for the effort applied in terms of person-months,
D is the development time in chronological months, and
KLOC is the kilo lines of code of the project.
ab, bb, cb and db are the coefficients for the three modes (the standard values
given by Boehm):

Mode            ab    bb     cb    db
Organic         2.4   1.05   2.5   0.38
Semi-detached   3.0   1.12   2.5   0.35
Embedded        3.6   1.20   2.5   0.32

From E and D we can compute the number of people required to accomplish the project
as N = E/D.
Merits of the Basic COCOMO model:
The basic COCOMO model is good for quick, early, rough order-of-magnitude estimates
of a software project.
Limitations:
1. The accuracy of this model is limited because it does not consider
certain factors for the cost estimation of software, such as hardware
constraints, personnel quality and experience, and modern techniques and
tools.
2. The estimates of the COCOMO model are within a factor of 1.3 only 29% of
the time, and within a factor of 2 only 60% of the time.
Example:
Consider a software project using the semi-detached mode with 30,000 lines of
code. We obtain the estimates for this project as follows:
(1) Effort estimation:
E = ab (KLOC)^bb person-months
E = 3.0 × (30)^1.12, where lines of code = 30,000 = 30 KLOC
E = 135 person-months
(2) Duration estimation:
D = cb (E)^db months
D = 2.5 × (135)^0.35
D = 14 months
(3) Person estimation:
N = E/D = 135/14
N = 10 persons approx.
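A small Python sketch of the basic COCOMO computation, using the standard coefficient
table given above:

# Basic COCOMO: E = a*(KLOC)^b person-months, D = c*E^d months, N = E/D.
COEFFS = {                     # (a, b, c, d) per mode
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b         # person-months
    duration = c * effort ** d     # months
    return effort, duration, effort / duration

print(basic_cocomo(30, "semidetached"))   # roughly (135 pm, 14 months, 10 people)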
Intermediate COCOMO:
Computes effort as a function of program size and a set of
cost drivers that include subjective assessments of product
attributes, hardware attributes, personnel attributes and project attributes.
The basic model is extended to consider a set of cost driver attributes
grouped into 4 categories (intermediate COCOMO):
(1) Product Attributes:

(a) Required software reliability


(b) Size of application software
(c) Complexity of the product
(2) Hardware Attributes:

(a) Run-time performance constraints


(b) Memory constraints
(c) Required turn around time
(d) Volatility of virtual machine
(3) Personal attributes:

(a) Analyst capability


(b) Software Engineer Capability
(c) Applications Experience
(d) Programming language experience
(e) Virtual machine
Experience
(4) Project Attributes:
(a) Use of software tools
(b) Required development schedule
(c) Application of software engineering methods
Now each of these 15 attributes gets a rating on a 6-point scale ranging from "very low"
to "extra high". These ratings are:

Very low, Low, Nominal, High, Very high, Extra high

Based on the ratings, effort multipliers are determined.
The product of all the effort multipliers results in the "effort adjustment factor" (EAF).
The intermediate COCOMO takes the form
E = ai (KLOC)^bi × EAF
where E is the effort applied in terms of person-months,
KLOC is the kilo lines of code for the project, and
EAF is the effort adjustment factor.
The values of ai and bi for the various classes of software projects (the standard
values) are:

Mode            ai    bi
Organic         3.2   1.05
Semi-detached   3.0   1.12
Embedded        2.8   1.20

The duration and person estimates are the same as in the basic COCOMO model, i.e.
D = cb (E)^db months (using the values of the cb and db coefficients)
N = E/D persons
Merits:
1. This model can be applied to almost the entire software product for easy and
rough cost estimation during the early stages.
2. It can also be applied at the software product component level for
obtaining more accurate cost estimation.
Limitations:
1. The effort multipliers are not dependent on phases.

2. A product with many components is difficult to estimate.


Example: Consider a project having 30,000 lines of code which is embedded
software with a critical area, hence reliability is high. The estimation is:
E = ai (KLOC)^bi × EAF
As reliability is high, EAF = 1.15 (product attribute);
ai = 2.8 and bi = 1.20 for embedded software.
E = 2.8 × (30)^1.20 × 1.15 = 191 person-months
D = cb (E)^db = 2.5 × (191)^0.32 = 13 months approx.
N = E/D = 191/13 = 15 persons approx.
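The intermediate model differs from the basic one only in the coefficients and the EAF
multiplier; a sketch:

# Intermediate COCOMO: E = ai*(KLOC)^bi * EAF; duration as in the basic model.
COEFFS_I = {                   # (ai, bi, db) -- db taken from the basic model
    "organic":      (3.2, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (2.8, 1.20, 0.32),
}

def intermediate_cocomo(kloc, mode, eaf):
    a, b, d = COEFFS_I[mode]
    effort = a * kloc ** b * eaf
    duration = 2.5 * effort ** d
    return effort, duration, effort / duration

print(intermediate_cocomo(30, "embedded", 1.15))  # roughly (191 pm, 13 months, 14-15 people)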

DETAILED COCOMO
The Advanced COCOMO model computes effort as a function of program size
and a set of cost drivers weighted according to each phase of the software
lifecycle. The Advanced model applies the Intermediate model at the
component level, and then a phase-based approach is used to consolidate the
estimate [Fenton, 1997]. The four phases used in the detailed COCOMO
model are: requirements planning and product design (RPD), detailed design
(DD), code and unit test (CUT), and integration and test (IT).

[Figure: analyst capability effort multipliers for detailed COCOMO]
Estimates for each module are combined into subsystems and eventually an overall
project estimate. Using the detailed cost drivers, an estimate is determined for each
phase of the lifecycle.

COCOMO II

5.7 COCOMO II Model

COCOMO II is applied to modern software development practices, addressing
projects of the 1990s and 2000s.

The sub-models of COCOMO II model are -


[1] Application composition model
The application composition model is used for estimating the effort required for
prototyping projects and for projects in which existing software components are
reused.

The estimation in this model is based on the number of application points.
Application points are similar to object points, and the estimation is based on
the level of difficulty of the object points.

Boehm has suggested the object point productivity in the following manner.

Developer's experience and capability   Very low   Low   Nominal   High   Very high
CASE maturity                           Very low   Low   Nominal   High   Very high
Productivity (NOP/month)                4          7     13        25     50

Effort computation in the application composition model can be done as follows:
PM = NAP × (1 − %reuse/100) / PROD
where
PM is the effort required in terms of person-months,
NAP is the number of application points required,
%reuse indicates the amount of reused components in the project (these
reusable components can be screens, reports or modules used in previous
projects), and
PROD is the object point productivity, whose values are given in the table above.
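A one-line sketch of the application-composition estimate, with NAP, %reuse and PROD as
defined above (the sample values are illustrative):

# COCOMO II application composition: PM = NAP * (1 - reuse/100) / PROD
def app_composition_effort(nap, pct_reuse, prod):
    return nap * (1 - pct_reuse / 100) / prod

# e.g. 120 application points, 20% reuse, nominal productivity 13 NOP/month
print(app_composition_effort(120, 20, 13))   # about 7.4 person-months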

[2] An early design model
This model is used in the early stage of project development, that is, after
gathering the user requirements and before the project development actually
starts. Hence an approximate cost estimation can be made in this model.

The estimation can be made based on the functional points.


In the early stage of development, different ways of implementing the user requirements
can be estimated.
The effort estimation (in terms of person-months) in this model is made using the
following formula:
Effort = A × Size^B × M
Boehm has proposed the value of the coefficient A = 2.94.

Size should be in terms of kilo source lines of code, i.e. KSLOC. The lines of
code can be computed with the help of function points.

The value of B varies from 1.1 to 1.24 and depends upon the project. M is based on
characteristics such as
 Product reliability and complexity (RCPX)
 Reuse required (RUSE)
 Platform difficulty (PDIF)
 Personnel capability (PERS)
 Personnel experience (PREX)
 Schedule (SCED)
 Support facilities (FCIL)

These characteristic values are rated on a six-point scale ranging from very low to
extra high. Hence the effort estimation can be given as
Effort = 2.94 × Size^B × (RCPX × RUSE × PDIF × PERS × PREX × SCED × FCIL)

A reuse model

This model considers systems that have a significant amount of code reused
from earlier software systems. The estimation made in the reuse model is
essentially the effort required to integrate the reused modules into the new
system.

There are two types of reusable code: black box code and white box code.
Black box code is code which is simply integrated with the new
system without modifying it. White box code is code that has to
be modified to some extent before integrating it with the new system;
only then can it work correctly.

There is a third category of code used in the reuse model: code which can be
generated automatically. In this form of reuse, standard templates are
integrated into the generator. The system model is given as input to these
generators, from which some additional information about the system is taken,
and the code is generated using the templates.

The effort required for the automatically generated code is
PM = (ASLOC × AT/100) / ATPROD
where
AT is the percentage of automatically generated code, and
ATPROD is the productivity of engineers in integrating such code.

Sometimes in the reuse model some white box code is used along with the
newly developed code, and the size estimate of the newly developed code must
be made equivalent to that of the reused code. The following formula is used to
calculate the effort in such a case:
ESLOC = ASLOC × AAM
where
ESLOC is the equivalent number of lines of new source code,
ASLOC is the number of source lines of code in the components that have to be
adapted, and
AAM is the Adaptation Adjustment Multiplier. This factor is used to take
into account the effort required to reuse the code.
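A sketch combining the two reuse-model formulas above; the numeric values are
illustrative assumptions:

# Reuse model: effort for generated code, and equivalent size of adapted code.
def autogen_effort(asloc, at_pct, atprod):
    # PM = (ASLOC * AT/100) / ATPROD
    return (asloc * at_pct / 100) / atprod

def equivalent_sloc(asloc, aam):
    # ESLOC = ASLOC * AAM
    return asloc * aam

print(autogen_effort(20_000, 30, 2400))   # person-months for generated code
print(equivalent_sloc(14_000, 0.25))      # ESLOC fed into the effort formula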

[3] Post architecture model
This is a detailed model used to compute the effort. The basic formula used in this
model is the same as in the early design model:
Effort = A × Size^B × M
In this model the effort is estimated more accurately. The code size estimate is made
with the help of three components -
1. An estimate of the new lines of code that are added to the program.
2. The equivalent number of source lines of code (ESLOC) from the reuse model.
3. An estimate of the amount of code that gets modified due to changes in the
requirements.

The exponent term B is related to the level of project complexity. The values of B
are continuous rather than discrete; B depends upon five scale factors, each rated
from very low to extra high (i.e. from 5 to 0).

These factors are

Scale factor for exponent B    Description
Precedentedness                Reflects the previous experience of the organization.
                               Very low means no previous experience; extra high means
                               the organization knows the application domain.
Development flexibility        Flexibility in the development process. Very low means
                               a typical, rigidly defined process is used; extra high
                               means the client defines only the process goals.
Architecture/risk resolution   Amount of risk analysis carried out. Very low means
                               little risk analysis is performed; extra high means
                               thorough risk analysis is made.
Team cohesion                  Represents the working environment of the team. Very low
                               cohesion means poor communication or interaction between
                               the team members; extra high means there is no
                               communication problem and the team works in good spirit.
Process maturity               Reflects the process maturity of the organization. Its
                               value can be computed using the Capability Maturity Model
                               (CMM) questionnaire; for computing the estimate, the CMM
                               maturity level can be subtracted from 5.
Add up all these ratings, divide the total by 100, and then add the resultant value
to 1.01 to get the exponent value, i.e. B = 1.01 + (Σ ratings)/100.
This model makes use of 17 cost attributes instead of seven. These attributes
are used to adjust the initial estimate.

Cost attribute   Type        Purpose of attribute
RELY             Product     Required system reliability
CPLX             Product     Complexity of the system modules
DATA             Product     Size of the database used
DOCU             Product     Amount of documentation required
RUSE             Product     Percentage of reusable components
TIME             Computer    Execution time constraint
PVOL             Computer    Volatility of the development platform
STOR             Computer    Memory constraint
ACAP             Personnel   Project analyst's capability to analyse the project
PCAP             Personnel   Programmer capability
PCON             Personnel   Personnel continuity
PEXP             Personnel   Programmer's experience in the project domain
LTEX             Personnel   Experience with the languages and tools used
AEXP             Personnel   Analyst's experience in the project domain
TOOL             Project     Use of software tools
SCED             Project     Project schedule compression
SITE             Project     Quality of inter-site and multi-site working

Scheduling and Tracking

5.8 Scheduling and Tracking

While scheduling the project, the manager has to estimate the time and
resources of the project. All the activities in the project must be arranged in a
coherent sequence. The schedule must be continually updated because some
uncertain problems may occur during the project life cycle. For new projects,
initial estimates can be made optimistically.

During project scheduling, the total work is separated into various small
activities, and the time required for each activity is determined by the
project manager. For efficient performance, some activities are conducted in
parallel.

The project manager should be aware of the fact that every stage of the
project may not be problem-free. Some of the typical problems in the project
development stage are:
 People may leave or remain absent.
 Hardware may get failed.
 Software resource may not be available.

To accomplish the project within the given schedule, the required resources
must be available when needed. The various resources required for the project
are

 Human effort
 Sufficient disk space on server
 Specialized hardware
 Software technology
 Travel allowance required by the project staff.
Project schedules are represented as a set of charts in which the work-breakdown
structure and the dependencies among the various activities are
represented.
5.8.1 Relationship between People and Effort

People work on the software project doing various activities such as
requirements gathering, design, analysis, coding and testing.

There is a common myth among software managers that by adding more
people to the project, the deadline can be met. But this is not true: before newly
added people can contribute, they must be trained in the tools and technologies
used in the project, and only the people already working on it can teach them.
Thus, during this teaching or training, time is spent without progress being made
on the project.

Once the effort required to develop the software has been determined, it is
necessary to determine the people or staff requirement for the project.
Putnam first studied how much staffing is required for software projects; he
extended the work of Norden, who had earlier investigated staffing patterns.
The Putnam-Norden-Rayleigh curve (PNR curve) represents the relationship
between the effort applied and the delivery time for a software project.

The following equation (the Putnam software equation, derived from the PNR curve)
shows project effort as a function of project delivery time:
E = L^3 / (P^3 t^4)
where E is the effort, L is the delivered size in lines of code, P is a productivity
parameter and t is the project duration.
The curve rises sharply to the left of the nominal delivery time td, indicating that
the project delivery time cannot be compressed beyond about 0.75 td; beyond this lies
the failure region, where the risk of failure becomes very high.
Software equation: The software equation represents the nonlinear relationship
between the time to complete the project and the human effort applied to the project.
Because effort varies inversely with the fourth power of the delivery time, it also
implies that by extending the end date, say by six months, we can reduce the number
of people required.
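Because effort in the software equation varies as 1/t^4, a modest extension of the
delivery time cuts the required effort dramatically. A quick Python check with
illustrative values of L and P:

# Software equation: E = L^3 / (P^3 * t^4)
def putnam_effort(loc, p, t_years):
    return loc ** 3 / (p ** 3 * t_years ** 4)

L, P = 33_200, 12_000          # illustrative size and productivity parameter
for t in (1.3, 1.8):           # delivery time in years
    print(t, round(putnam_effort(L, P, t), 1))   # effort drops from ~7.4 to ~2.0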
5.8.2 Task Sets

Definition of task set: A task set is a collection of software engineering
work tasks, milestones, and work products that must be accomplished to
complete a particular project.
Every process model consists of various task sets. Using these task sets, the
software team defines, develops and ultimately supports the computer software.

There is no single task set that is appropriate for all projects, but for
developing large, complex projects, sets of tasks are required. Hence
every effective software process should define a collection of task sets
depending upon the type of the project.
Using task sets, high quality software can be developed and any
unnecessary work can be avoided during software development.
The number of task sets will vary depending upon the type of the project.
The various types of projects are listed below -
[1] Concept development projects
These are projects in which new business ideas or applications based on new
technologies are developed.
[2] New application development projects
These projects are developed to satisfy a specific customer need.
[3] Application upgradation projects
These are projects in which an existing software application needs
a major change. This change can be for performance improvement, or
modifications within the modules and interfaces.
[4] Application maintenance projects
These are projects that correct, adapt or extend existing software applications.
[5] Reengineering projects
These are projects in which legacy systems are rebuilt partly or completely.
Various factors that influence the task sets are -
1. Size of the project
2. Project development staff
3. Number of users of the project
4. Application longevity
5. Complexity of the application
6. Performance constraints
7. Use of technologies

Task set example: Consider the concept development type of project. The various
task sets in this type of project are -

1. Defining scope: This task defines the scope, goal or objective of the project.
2. Planning: It includes the estimates of schedule, cost and people for
completing the desired concept.
3. Evaluation of technology risks: It evaluates the risks associated with
the technology used in the project.
4. Concept implementation: It includes representing the concept in
the manner expected by the end user.

5.8.3 Task Network

The task is a small unit of work.

The task network, or activity network, is a graphical representation with:

 Nodes corresponding to activities.
 Links between tasks or activities if there is a dependency between them.
 The task network for the product development is as shown in the figure below.
The task network definition helps the project manager to understand the project's
work breakdown structure.

The project manager should be aware of the interdependencies among the various tasks,
and of all those tasks which lie on the critical path.

5.8.4 Time Line Chart

In software project scheduling, a timeline chart is created. The purpose
of the timeline chart is to emphasize the scope of each individual task; hence the
set of tasks is given as input to the timeline chart.
 The timeline chart is also called a Gantt chart.
 The timeline chart can be developed for the entire project, or it can be
developed for individual functions.
 In a timeline chart:
1. All the tasks are listed in the leftmost column.
2. The horizontal bars indicate the time required by the corresponding task.
3. When multiple horizontal bars occur at the same time on the
calendar, the corresponding tasks can be performed concurrently.
4. The diamonds indicate the milestones.
In most projects, after generation of the timeline chart the project tables
are prepared. In the project tables, all the tasks are listed along with actual start
and end dates and related information.

5.8.5 Tracking Schedule

Project schedule is the most important factor for software project manager.
It is the duty of projectmanager to decide the project schedule and track the
schedule.

Tracking the schedule means determine the tasks and milestones in the
project as it proceeds.Following are the various ways by which tracking of
the project schedule can be done –
1. Conduct periodic meetings. In these meetings, various problems related to
the project are discussed and the progress of the project is reported to the
project manager.
2. Evaluate the results of all project reviews.
3. Compare the 'actual start date' and 'scheduled start date' of each project
task.
4. Determine whether the milestones of the project are achieved on the scheduled dates.
5. Meet informally with the software practitioners. This helps the project
manager to solve many problems and is also helpful for assessing the project's
progress.
6. Assess the progress of the project quantitatively.

Thus, for tracking the schedule of the project, the project manager should be
an experienced person. In fact, the project manager is the person responsible
for controlling the software project. When problems occur in the project,
additional resources may be demanded, skilled and experienced staff may be
employed, or the project schedule can be redefined.
For handling severe deadlines, the project manager uses a technique called time
boxing. In this technique it is understood that the complete product
cannot be delivered by the given time; the product is delivered to the customer
part by part, i.e. in a series of increments.
When the project manager uses the time-box technique, he is associating each
task with a box: each task is put in a "time box" and must be completed within
that time frame. When the current task reaches the boundary of its time box,
the next task must be started (even if the current task remains incomplete).

Some researchers have argued about leaving a task incomplete when it reaches the
boundary of its time box; the counter-argument is that even if the task remains
incomplete, it has reached an almost-complete stage, and the remaining part of it
can be completed in the next successive increment.
5.9 Earned Value Analysis

Earned Value Analysis (EVA) is a technique for performing quantitative
analysis of a software project.
The earned value system provides a common value scale for every task
of the software project, and EVA acts as a measure of software project
progress.
With the help of the quantitative analysis made in EVA, we can know what
percentage of the project has been completed.
The earned value analysis is made using the following steps.

1) The Budgeted Cost of Work Scheduled (BCWS) is the estimated cost
of the work that has been scheduled. This value is obtained for every
individual task in the software project; in this activity the work of each
software engineering task is planned. BCWSi is the effort planned
for work task i, and at every point in the progress of the project the
BCWSi values are calculated.
At the completion of the project, the BCWS values for all work tasks are summed
to derive the budget of the project, the Budget At Completion:
BAC = Σ (BCWSi) for all tasks i

2) Then the Budgeted Cost of Work Performed (BCWP) is computed. The
value of BCWP is the sum of the BCWS values of all the tasks that have
actually been completed by a point in time on the project schedule.

The difference between BCWS and BCWP is that BCWS represents the values
of the project activities that were planned, while BCWP represents the values
of the project activities that have been completed.
The various types of computations in EVA are as follows:

1) SPI = BCWP / BCWS
where SPI is the schedule performance index. It represents the project's
efficiency; an SPI value of 1.0 indicates that the execution of the project is
very efficient.
2) SV = BCWP − BCWS
where SV indicates the schedule variance.
3) Percent scheduled for completion = BCWS / BAC
indicates the percentage of work which should have been completed by time t.
Percent complete = BCWP / BAC
represents the percentage of the project which has actually been completed by time t.
4) CPI = BCWP / ACWP and CV = BCWP − ACWP
where ACWP refers to the Actual Cost of Work Performed; this value helps in
computing the cost factor of the project. CPI is the cost performance index;
it indicates whether the performance of the project is within the defined budget
or not, and a value of 1.0 indicates that the project is within the defined budget.
CV is the cost variance.
Thus EVA helps in identifying the project's performance, the cost of performance
and project scheduling difficulties. This ultimately helps the project manager
to take appropriate corrective actions.
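A compact sketch of the EVA computations (the per-task BCWS values, the schedule point
and the actual cost are illustrative):

# EVA: SPI = BCWP/BCWS, SV = BCWP - BCWS, CPI = BCWP/ACWP, CV = BCWP - ACWP.
bcws_per_task = {"t1": 5, "t2": 25, "t3": 120, "t4": 300, "t5": 240}
done_by_t = ["t1", "t2", "t3"]   # tasks actually completed by time t
bcws_t = 160                     # work scheduled to be done by time t
acwp = 170                       # actual cost of the work performed so far

bac = sum(bcws_per_task.values())                  # budget at completion
bcwp = sum(bcws_per_task[k] for k in done_by_t)    # earned value = 150

print("SPI =", bcwp / bcws_t)         # < 1.0 means behind schedule
print("SV  =", bcwp - bcws_t)
print("% scheduled =", bcws_t / bac)
print("% complete  =", bcwp / bac)
print("CPI =", bcwp / acwp)           # < 1.0 means over budget
print("CV  =", bcwp - acwp)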

5.9.1 Error Tracking

While developing the software project, many work products such as the SRS,
design documents and source code are created, and along with these work
products many errors may be generated. The project manager has to identify
all these errors to deliver quality software.

Error tracking is a process of assessing the status of the software project.

The software team performs formal technical reviews to test the software
developed. In these reviews, various errors are identified and corrected. Any
errors that remain uncovered and are found in later tasks are called defects.

The defect removal efficiency can be defined as
DRE = E / (E + D)
where
DRE is the defect removal efficiency,
E is the number of errors found before delivery of the work product, and
D is the number of defects found after delivery.

The DRE represents the effectiveness of the quality assurance activities. The
DRE also helps the project manager to assess the progress of the software project
as it gets developed through its scheduled work tasks.
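For instance, a team that finds 120 errors in reviews and testing before delivery, and
whose product shows 8 defects after delivery, has (a two-line illustration):

# DRE = E / (E + D)
E, D = 120, 8
print(E / (E + D))   # ~0.94, i.e. about 94% of problems removed before delivery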

During error tracking activity, following metrics are computed

1. Errors per requirements specification page : denoted by Ereq


2. Errors per component - design level : denoted by Edesign
3. Errors per component - code level : denoted by Ecode
4. DRE - requirement analysis
5. DRE - architectural design
6. DRE - component level design
7. DRE - coding
The project manager calculates the current values for Ereq, Edesign and Ecode.
These values are then compared with those of past projects. If the current result
differs by more than 20% from the average, then there may be cause for concern,
and an investigation needs to be made in this regard.
These error tracking metrics can also be used to better target review and testing
resources.

5.14 DevOps

Definition: DevOps is a practice in which development and operations engineers
participate together in the entire lifecycle of system development, from design and
implementation to product support.

The term DevOps is derived from "software DEVelopment" and "information technology
OPerations".

DevOps promotes a set of processes and methods from three departments -
development, IT operations and quality assurance - that communicate and collaborate
together for the development of a software system.

5.14.1 Why DevOps?

 DevOps enhances the organization's performance and improves the productivity and
efficiency of the development and operations teams.
 Bringing the two teams together centralizes the responsibility on the entire team
and not on specific individuals.
 DevOps is more than just a tool or a process change; it inherently requires an
organizational culture shift.
 This cultural change is especially difficult because of the conflicting nature of
departmental roles:

1. Operations - seeks organizational stability
2. Developers - seek change
3. Testers - seek risk reduction

 Adoption of DevOps is driven by various factors. These factors are


1. Demand for an increased rate of production releases from application and business
unit stakeholders
2. Increased usage of data center automation and configuration management tools.
3. Use of agile and other development processes and methods.
4. Increased focus on test automation and continuous integration methods
5. Wide availability of virtualized and cloud infrastructure

5.14.2 Motivation

The Goals of DevOps are as follows.


1. To make processes increasingly programmable and dynamic.
2. Fast delivery of product.
3. Lower failure rate of new releases
4. Shortened lead time between fixes
5. Faster mean time to recovery.

6. To increase the net profit of the organization.
7. To standardize the development environment.
8. To reduce work in progress.
9. To reduce operating expenses
10. To set up the automated environment.

5.14.3 Benefits
Various benefits of DevOps are

Technical Benefits
1. Continuous software delivery is possible.
2. There is less complexity in managing the project.
3. The problems in the project get resolved faster.

Cultural benefits
1. The productivity of teams get increased
2. There is higher employee engagement.
3. There arise greater professional development opportunities.

Business benefits
1. The faster delivery of the product is possible.
2. The operating environment becomes stable.
3. Communication and collaboration are improved among the team members and
customers
4. More time is available for innovation rather than for fixing and maintaining.

5.14.4 Agility and DevOps

 Basically, Agile and DevOps are similar, but there are some differences.
 DevOps brings more flexibility than Agile. With Continuous Integration (CI) and Continuous Delivery (CD), releases of software products are made often, and it is ensured that these releases actually work and meet customer needs.
 Thus in DevOps there is an increased number of releases.
 One goal of DevOps is to establish an environment where releasing more reliable
applications, faster and more frequently, can occur. This actually brings the
continuous delivery approach.
 DevOps is not a separate concept but a mere extension of Agile to include
operations as well to collaborate different agile teams together and work as ONE
team with an objective to deliver software fully to the customer.

5.14.5 Deployment Pipeline
A Pipeline is a set of automated processes that allow developers and DevOps
professionals to reliably and efficiently compile, build and deploy their code to their
production computing platform.

Various components of a pipeline are


1) Build Automation
2) Test Automation
3) Deploy Automation

• A deployment pipeline is an important concept and practice in DevOps that involves automating and streamlining the process of delivering a software application from development to production.

There are two important concepts used in the deployment pipeline:


1. Continuous Integration(CI) and
2. Continuous Delivery(CD)/Continuous Deployment

1. Continuous Integration:
 The continuous integration in DevOps is a practice where developers regularly
merge their code changes into a central repository or a database after which
automated builds and tests are run.
 Continuous Integration (CI) is the practice of automating the integration of code
changes from multiple developers or testers into a single software project.
 Automated tools are used to assert the new code's correctness before integration
 Continuous integration serves as a prerequisite for the testing, deployment and
release stages of continuous delivery.
 The main benefit of performing continuous integration regularly and testing each
integration is that we can detect errors more quickly and locate them easily.

2. Continuous Delivery or Continuous Deployment:


 Continuous Delivery means a developer's changes to an application are
automatically bug-tested and uploaded to a repository.
 Continuous Deployment (CD) refers to the final stage in the pipeline that refers
to the automatic releasing of any developer changes from the repository to the
production.

The key components of the deployment pipeline are:

 Source Code Management: This is the first step of the deployment pipeline, in which source code is stored in a version control system like Git or GitHub. Developers commit their changes to this repository, and the pipeline is triggered when there are new commits.
 Build: In this process, the code is built into executable entities. During the build
process, the code is compiled, dependencies are packaged and binaries are
created.
 Automated Testing: The pipeline runs a suite of automated tests in order to test
the software system. It includes unit testing, integration testing and performance
testing
 Deployment: Once the code passes testing it is deployed to a staging or testing
environment that closely resembles the production environment. This allows for
further testing and validation in a controlled setting

 User Acceptance Testing (UAT): In this stage, the software is tested by a group
of users or stakeholders to ensure it meets their expectations.
 Final Deployment: If all tests and checks are successful, the software is deployed
to the production environment. This can be done manually, automatically or with
a combination of both.
 Monitoring and Feedback: The application is continuously monitored, if any
issues are raised then these are fixed immediately. Feedback from the production
environment is used to inform future development and improvements.
 Documentation: Finally the comprehensive documents and reports are prepared.
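
The sequence of these components can be sketched, very roughly, as a chain of stage functions in Python (every stage name and action here is illustrative, not a real tool invocation):

# Hedged sketch of a deployment pipeline as an ordered chain of stages.
# Each stage returns True on success; the pipeline stops at the first failure.

def build() -> bool:             # compile code, package dependencies, create binaries
    print("building..."); return True

def automated_tests() -> bool:   # unit, integration and performance test suites
    print("running automated tests..."); return True

def deploy_staging() -> bool:    # deploy to an environment resembling production
    print("deploying to staging..."); return True

def user_acceptance() -> bool:   # stakeholders validate the release candidate
    print("running UAT..."); return True

def deploy_production() -> bool:
    print("deploying to production..."); return True

PIPELINE = [build, automated_tests, deploy_staging, user_acceptance, deploy_production]

def run_pipeline(stages) -> bool:
    for stage in stages:
        if not stage():
            print(f"Pipeline failed at stage: {stage.__name__}")
            return False
    print("Release delivered.")
    return True

run_pipeline(PIPELINE)   # triggered, e.g., by a new commit in version control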

5.14.6 Overall Architecture
DevOps is a combined practice of development and operations.
There are different phases in the DevOps architecture:

1) Plan: In this phase, all the requirements of the project are gathered. The schedule and cost of the project are estimated approximately.

2) Code: In this phase the code is written as per the requirements. The entire project is divided into smaller units, and each unit can be coded as a module.
3) Build: In this phase, the building of all the units is done using tools such as Maven or Gradle, and the code is submitted to a common code repository.

4) Test: At this stage, all the units are tested to find whether there exists any bug in the code. The testing can be done using tools like Selenium, JUnit and pytest. Some important testing techniques, such as acceptance testing, safety testing, integration testing and performance testing, are carried out.
5) Integrate: In this phase, a new feature is added to the existing code and testing is
performed. Continuous Development is achieved only because of continuous integration
and testing.
6) Deploy: In this stage, the code is deployed in the client's environment. Some of the
examples of the tools used for Deployment are AWS, Docker.
7) Operate: At this stage, the version can be utilized by the users. Operations are performed on the code if required. Some examples of the tools used are Kubernetes and OpenShift.
8) Monitor: At this stage, the monitoring of the version at the client's workplace is done. During this phase, developers collect data, monitor each function and spot errors such as low memory or broken server connections. The DevOps workflow is observed at this level depending on data gathered from consumer behaviour, application efficiency and other sources. Some examples of the tools used for monitoring are Nagios and the Elastic Stack.

5.14.7 DevOps Lifecycle

The DevOps lifecycle phases are as follows-


1) Continuous development: In this phase, the planning and coding of software is done.
Version control mechanism is used during this phase.
2) Continuous integration: In this phase, developers are required to commit changes to the source code frequently. The code supporting new functionality is continuously
integrated with the existing code. Therefore, there is continuous development of
software.
3) Continuous testing: In this phase, the software is continuously tested for bugs. Automated testing is often preferred.
4) Continuous monitoring: By continuous monitoring, we can get notified before
anything goes wrong. We can gather many performance measures, including CPU and
memory utilization, network traffic, application response times, error rates and others.
5) Continuous feedback: In this DevOps stage, the software automatically sends out information about performance and issues experienced by the end user. It is also an opportunity for customers to share their experiences and provide feedback.
6) Continuous deployment: In this phase, the code is deployed to the production servers.
Also, it is essential to ensure that the code is correctly used on all the servers. The
deployment process takes place continuously in this DevOps life cycle phase
7) Continuous operations: This is the last phase; it involves automating the release of the application and its subsequent updates, which helps keep cycles short and gives developers more time to focus on development.

5.14.8 Tools
Various tools used in DevOps are as follows
1. Nagios: It is a monitoring solution that gives new features and a modern user
experience.
2. ELK Stack: This tool is used for collecting logs from all services, applications,
networks, tools, servers and more in an environment into a single, centralized location
for processing and analysis
3. Docker: It eases configuration management and control issues.
4. Jenkins: Jenkins is a top tool for DevOps engineers who want to monitor executions
of repeated jobs.
5. Puppet: It handles configuration management and software while making rapid
repeatable changes in it.
6. Ansible: Ansible is a configuration management or DevOps tool that is similar to Puppet and Chef.
7. God: It is a monitoring tool used in DevOps
8. Monit: Monit has everything DevOps engineers need for system monitoring and error
recovery
9. Consul: This tool is used for service discovery and configuration management activities.
10. Loggly: This tool is used for log management in DevOps

5.15 Cloud as a Platform


 "Cloud as a platform is a concept that refers to using cloud computing
38
infrastructure and services as the foundation for developing, deploying and running
applications and services.
 Cloud computing is a term that refers to storing and accessing data over the
internet. It doesn't store any data on the hard disk of our personal computer. In
cloud computing, we can access data from a remote server
 It includes services for storage, databases, analytics, networking, mobile
development tools and enterprise applications.
 AWS manages and maintains hardware and infrastructure, saving organizations and individuals the cost and complexity of purchasing and running resources on site. These resources may be accessed for free or on a pay-per-use basis.
 Popularly used cloud platforms are Amazon Web Services (AWS), Microsoft
Azure and Google Cloud Platform (GCP).

Benefits of using Cloud as a Platform


1) Flexibility:
 The cloud platform always allows users to work with the operating systems, programming languages and web application platforms that they are comfortable with.
 Flexibility means that migrating legacy applications to the cloud should be easy
 Instead of re-writing the applications to adopt new technologies, we just need to
move the applications to the cloud and tap into advanced computing capabilities

2) Cost Effective:
 Instead of purchasing and creating our own expensive servers, we can use cloud
services where we need to pay only for the tools and services that we use
 Cloud platform offers a pay-as-you-go pricing method, which means that we
only pay for the services that are needed and have been used for a period of time

3) Scalable and Elastic:


 Cloud platform is scalable because its Auto Scaling service automatically
increases the capacity of constrained resources as per requirements so that the
application is always available
 Elasticity is one of the advantages. If we use fewer resources and don't need the rest of them, then the cloud platform itself shrinks the resources to fit our requirements. In short, upsizing and downsizing of resources is possible with AWS.

4) Secure:
 Cloud computing maintains confidentiality, integrity and availability of the user's
data.
 Each service provided by the cloud is secure

 Personal and business data can be encrypted to maintain data privacy

5) High Performance:
 High-performance computing is the ability to process massive amounts of data at
high speed
 Cloud computing offers a high-performance computing service so that the
companies need not worry about the speed

5.15.1 Operations

It provides different services of the cloud such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and packaged Software as a Service (SaaS).
 IaaS: Infrastructure as a Service means delivering computing infrastructure on demand. Under this service, the user purchases the cloud infrastructure, including servers, networks, operating systems and storage, using virtualization technology. These services are highly scalable. IaaS is used by network architects. Examples of cloud services: AWS and Microsoft Azure.
 PaaS: Platform as a Service is a service where a third-party provider provides both hardware and software tools to the clients. It provides elastic scaling of applications, which allows developers to build applications and services over the internet; the deployment models include public, private and hybrid. PaaS is used by developers. Examples of cloud services: Facebook and Google Search Engine.
 SaaS: Software as a Service is a model that hosts software to make it available to clients. It is used by end users. Examples of cloud services: Google Apps.

DMI COLLEGE OF ENGINEERING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CCS356 OBJECT ORIENTED SOFTWARE ENGINEERING

SEMESTER / YEAR : VI / III

UNIT 1

1. Write the IEEE definition of software engineering.


Ans. Software engineering is defined as the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software.

2. What is Software ? List the characteristics.


Ans. Software: Software is a collection of computer programs and related documents that are intended to provide desired features, functionalities and better performance.
Characteristics:
1) Software is engineered, not manufactured.
2) Software does not wear out.
3) Most software is custom built rather than being assembled from components.

3. What are the two types of software products?


Ans: There are two types of software products:
1. Generic: These products are developed to be sold to a broad range of customers.
2. Custom: These products are developed and sold to a specific group of customers, as per their requirements.

4. What is software engineering?


Ans.: Software engineering is a discipline in which theories, methods and tools are applied to
develop professional software product.

5. Distinguish between process and methods.


Ans. Software process can be defined as the structured set of activities that are required to
develop the software system. Various activities under software process are-
● Specification
● Design and implementation
● Validation
● Evolution
The term 'method' is used mainly in object-oriented programming; it refers to a piece of code that is exclusively associated either with a class (called class methods) or with an object (called instance methods).
6. Why is software architecture important in software process ?
Ans. The system architecture defines the role of hardware, software, people, databases, procedures and other system elements. The architectural design of the system helps the developer to understand the system as a whole. Hence the system architecture must be built before specifications are completed. Thus architectural design is important in software engineering.

7. What is meant by 'blocking states' in linear sequential model?


Ans. The linear nature of linear sequential model brings a situation in the project that some
project team members have to wait for other members of the team to complete the dependent tasks.
This situation is called "blocking state" in linear sequential model. For example, after performing the
requirement gathering and analysis step the design process can be started. Hence the team working on
design stage has to wait for the gathering of all the necessary requirements. Similarly, the programmers cannot start the coding step unless and until the design of the project is completed.

8. What are the advantages of prototyping model ?


Ans. Advantages of prototyping model-
1. The working model of the system can be quickly designed by construction
of prototype. This gives the idea of final system to the user
2. The prototype is evaluated by the user and the requirements can be refined
during the process of software development
3. This method encourages active participation of developer and user.
4. This type of model is cost effective
5. This model helps to refine the potential risks associated with the delivery of the final system.
6. The system development speed can be increased with this approach.

9. Write any two software engineering challenges


Ans. The key challenges faced by software engineering are:
1. Delivery times challenge
There is increasing pressure for faster delivery of software. As the complexity of the systems that we develop increases, this challenge becomes harder.
2. Heterogeneity challenge
Sometimes systems are distributed and include a mix of hardware and software. This implies that software systems must cleanly integrate with other, different software systems, built by different organizations and teams, which may be using different hardware and software platforms.

10. Identify in which phase of the software life cycle the following documents are delivered.
(a) Architectural design - Design
(b) Test plan - Testing
(c) Cost estimate - Project management and planning
(d) Source code document - Coding
11. Define the terms product and process in software engineering.
Ans.: The product in software engineering is a standalone entity that can be produced by a development organization and sold on the open market to any customer who is able to buy it. The software product consists of computer programs, procedures, and associated documentation (documentation can be in hard copy form or in visual form). Some examples of software products are databases, word processors and drawing tools.

The process in software engineering can be defined as the structured set of activities that
are required to develop software system. Various activities under software process are
• Specification
• Design and implementation
• Validation
• Evolution

12. What are the phases encompassed in the RAD model ?


Various phases in the RAD model are:
1. Business modelling
2. Data modelling
3. Process modelling
4. Application generation
5. Testing and turnover

13. Define a system and computer based system.


Ans. System: A system can be defined as a purposeful collection of inter-related components
working together to achieve some common objectives
Computer based system: The computer based system can be defined as a set or an
arrangement of elements that are organized to accomplish some predefined goal by processing
information

14. Which process model leads to software reuse ? Why?


Ans. The object oriented model is used for software reuse because it is based on the incremental development of the software product. Development in this model is done in one or more iterations.

15. State the benefits of the waterfall life cycle model for software development.
Ans.: 1. The waterfall model is simple to implement.
2. The waterfall model is used for the implementation of small systems.

16. How does "Project Risk" factor affect the spiral model of software development?
Ans.: The spiral model demands considerable risk assessment because if a major risk is not
uncovered and managed, problems will occur in the project and then it will not be acceptable by end
user.
17. Define software.
Ans. Software is a collection of computer programs and related documents that are intended to provide desired features, functionalities and performance. Software can be of two types: 1. Generic software and 2. Custom software.

18. What is a software process model? On what basis is it chosen?


Ans. The software process model can be defined as an abstract representation of a process. It is chosen based on the nature of the software project.
19. What is software process?
Ans. Software process can be defined as the structured set of activities that are required to develop the software system. The fundamental activities are:
1. Specification
2. Design and implementation
3. Validation
4. Evolution

20. Write the process framework and umbrella activities.

Ans. Process Framework: A process framework is required for representing the common process activities. The process framework activities are:
1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment

21. What are the pros and cons of Iterative software development model?
Ans. Pros: 1) The changes in requirements or additions of functionality is possible
2) Risks can be identified and rectified before they get problematic.
Cons: 1) This model is typically based on customer communication. If the communication is
not proper the software product that gets developed will not be exactly as per the requirements.
2) The development process may get continued and never finish.

22. What led to the transition from product oriented development to process oriented development?
Ans. The software process model led to the transition from product oriented development to process oriented development.

23. Mention the characteristics of software contrasting it with characteristics of hardware.


1) Software is engineered and not manufactured.
2) Software does not wear out.
3) Software is custom built rather than being assembled from components.
24. If you have to develop a word processing software product, what process model will
you choose? Justify your answer.
Ans. The incremental process model will be used to develop word processing software
product
Justification: 1) The working software can be generated quickly and early during the
software life cycle
2) The customers can respond to its functionalities after every increment.

25. Depict the relationship between work product, task, activity and system.
Ans. • Each framework activity under the umbrella activities of the software process
framework consists of various task sets.
Each task set consists of work tasks, work products, quality assurance points and project
milestones.
The task sets accommodate the needs of the system being developed.

UNIT II
1. State the characteristics of an SRS document.
A software requirement specification (SRS) is a document that completely describes what the proposed software should do, without describing how the software will do it. The basic goal of the requirements phase is to produce the SRS, which describes the complete behavior of the proposed software. The SRS also helps the clients to understand their own needs.
Characteristics of an SRS:
1. Correct
2. Complete
3. Unambiguous
4. Verifiable
5. Consistent
6. Ranked for importance and/or stability

2. Discuss class-based modeling.
Class-based elements:
Class-based elements
i. Implied by scenarios – each usage scenario implies a set of objects that are
manipulated as an actor interacts with the system. These objects are
categorized into classes.
ii. Class diagrams depicting classes, their attributes and relationships between
classes are developed.
iii. Collaboration diagrams are developed to depict the collaboration of classes

3. Discuss the role of developer in negotiating requirements of the system to be developed.


The objective of this phase is to agree on a deliverable system that is realistic for
developers and customers.
i. Conflicting requirements of customers, end-users and stake holders are reconciled.
ii. An iterative process is followed for requirements prioritization, risk analysis, cost estimation, etc.
The roles are:
i. Identify the key stakeholders
- These are the people who will be involved in the negotiation
ii. Determine each of the stakeholders “win conditions”
- Win conditions are not always obvious
iii. Negotiate
- Work toward a set of requirements that lead to “win-win”

4. Why is scenario-based modeling getting popular in the field of requirements modeling?
Scenario-based elements:
i. Using scenario-based approach the system is described from the
user’s point of view.
ii. Use-case—descriptions of the interaction between an “actor” and
the system are developed.
iii. Activity diagrams and swim-lane diagrams can be developed to
complement use- case diagrams.

5. Discuss analysis patterns of requirement engineering?


i. Certain problems reoccur across all projects within a specific application domain.
These analysis patterns suggest solutions within the application domain that can be
reused when modeling many applications.
ii. Analysis patterns are integrated into the analysis model by reference to the pattern name.
iii. They are also stored in a repository so that requirements engineers can uses
search facilities to find and apply them.
6. Identify goals of elicitation phase?
The goals of the elicitation phase are:
i. to identify the problem
ii. propose elements of the solution
iii. negotiate different approaches, and
iv. specify a preliminary set of solution requirements

7. Explain the process of validating requirements?


In this phase a review mechanism adapted that looks for
i. errors in content or interpretation
ii. areas where clarification may be required
iii. missing information
iv. inconsistencies (a major problem when large products or
systems are engineered)
v. conflicting or unrealistic (unachievable) requirements.

8. Illustrate the 'dependency' relationship of class diagrams.
A dependency is a semantic connection between dependent and independent model elements. It exists between two elements if changes to the definition of one element (the server or target) may cause changes to the other (the client or source). This association is uni-directional.

9. Explain ‘aggregation’ relationship of class diagrams with the help of an example?


Aggregation is a variant of the "has a" association relationship; aggregation is more specific than association. It is an association that represents a part-whole or part-of relationship. For example, a Professor 'has a' class to teach.

10. Explain ‘Association’ relationship of class diagrams with an example?


An association represents a family of links. A binary association (with two ends) is
normally represented as a line. An association can link any number of classes. An association with three ends is called a ternary association. An association can be named,

and the ends of an association can be adorned with role names, ownership indicators,
multiplicity, visibility, and other properties.

11. Illustrate ‘Generalization’ relationship of class diagrams?


The generalization relationship is also known as the inheritance or "is a" relationship. It indicates that one of the two related classes (the subclass) is considered to be a specialized form of the other (the superclass), and the superclass is considered a generalization of the subclass. In practice, this means that any instance of the subclass is also an instance of the superclass.

12. What is Requirement Engineering and Software Requirement Engineering?


Requirements Engineering: Requirements Engineering builds a
bridge to design and construction.
The broad spectrum of tasks and techniques that lead to an understanding of
requirements is called Requirements Engineering.
Software Requirement Engineering: Requirements analysis, also called requirements engineering, is the process of determining user expectations for a new or modified product. These features, called requirements, must be quantifiable, relevant and detailed. In software engineering, such requirements are often called functional specifications.
13. Discuss the role of use cases in UML
Use Case Diagrams: Use case diagrams are usually referred to as behavior diagrams used
to describe a set of actions (use cases) that some system or systems (subject) should or can
perform in collaboration with one or more external users of the system (actors).
Actors are the different people (or devices) that play a role as the system operates. Every actor has one or more goals when using the system.
14. Define case tool in software engineering.
A CASE tool is a product that helps to analyze, model and document business processes.
It is a tool or a toolset that supports the underlying principles and methods of analysis.
Some tools are specifically designed to support a particular technique while other tools are
more general in nature.

15. Write the syntax for presenting an attribute as suggested by UML.
visibility name : type_expression = initial_value
Where visibility is one of the following:
+ public visibility
# protected visibility
- private visibility
type_expression is the type of the attribute.
initial_value is a language-dependent expression for the initial value of a newly created object.
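For example, the (illustrative) declaration "- balance : Double = 0.0" describes a private attribute named balance, of type Double, with an initial value of 0.0.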

16. What is the need of a Class diagram?


A class diagram is used to show the existence of classes and their
relationships in the logical view of a system.
17. What is the behavior of an object?
Behavior is how an object acts and reacts in terms of its state changes and message
passing.

18.Define forward engineering and reverse engineering


Forward engineering means creating a relational schema from an existing object
model
Reverse engineering means creating an object model from an existing relational
database layout (schema).

19. Define DFD.


A data flow diagram (DFD) maps out the flow of information for any process or system.
It uses defined symbols like rectangles, circles and arrows, plus short text labels, to show
data inputs, outputs, storage points and the routes between each destination.
20. Define petrinet.

A Petri Net is a collection of directed arcs connecting places and transitions. Places may hold tokens. The state or marking of a net is its assignment of tokens to places.

21. What are the techniques used for requirements discovery?


1) View point
2) Interviewing
3) Scenarios
4) Ethnography

22. Write the classifications of non-functional requirements?


 Product requirements
 Organizational requirements
 External requirements

23. What is Requirement discovery?


Requirement discovery is the process of gathering information about the proposed and existing
systems and distilling the user and system requirements from this information.

24. Write any three limitations of formal methods.


 Formal methods are difficult to learn and use.
 It is difficult to check the absolute correctness of systems using theorem-proving techniques.
 Formal techniques are not able to handle complex problems

25. Differentiate NFA and DFA.
NFA
NFA stands for Non-deterministic Finite Automaton. An NFA can transition to any number of states for a particular input. An NFA accepts the NULL move, which means it can change state without reading a symbol.
DFA
DFA stands for Deterministic Finite Automaton. Deterministic refers to the uniqueness of the computation. In a DFA, an input character takes the machine to exactly one state. A DFA does not accept the NULL move, which means it cannot change state without reading an input character.
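
A minimal Python sketch of a DFA (the states, alphabet and accepted language here are illustrative): this machine accepts binary strings containing an even number of 1s. Note that each (state, symbol) pair maps to exactly one next state, which is what makes it deterministic.

# Hedged sketch: a DFA as a transition table (illustrative example).

TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
START, ACCEPTING = "even", {"even"}

def dfa_accepts(s: str) -> bool:
    state = START
    for symbol in s:
        state = TRANSITIONS[(state, symbol)]  # exactly one move per input
    return state in ACCEPTING

print(dfa_accepts("1001"))  # True: two 1s (even)
print(dfa_accepts("10"))    # False: one 1 (odd)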

UNIT III

1. List out software quality attributes


a. Functionality: is assessed by evaluating the features set, the generality of
the functions that are delivered, and the security of the overall system.
b. Usability: is assessed by considering human factors, overall
aesthetics, consistency, and documentation.
c. Reliability: is evaluated by measuring the frequency and severity of failure, the
accuracy of output results, the mean-time-to-failure, the ability to recover from
failure, and the predictability of the program.
d. Performance: is measured by processing speed, response time,
resource consumption, throughput, and efficiency.
e. Supportability: combines extensibility, adaptability and serviceability (together these represent maintainability), in addition to testability, compatibility, configurability, etc.

2. List out the components of a Software Design Model.
The components of a software design model are:
i. Data/Class Design Model
ii. Architectural Design Model
iii. Interface Design Model
iv. Component Level Design Model

3. What are the properties that should be exhibited by a good software design according
to Mitch Kapor?
i. Firmness: A program should not have any bugs that inhibit its function.
ii. Commodity: A program should be suitable for the purposes for which it
was intended.
iii. Delight: The experience of using the program should be pleasurable one.

4. Define Abstraction and types of abstraction?


Abstraction is the act of representing essential features without including the background
details or explanations.
A data abstraction is collection of data that describes a data object.
A procedural abstraction refers to a sequence of instructions that have a specific and
limited function. An example of a procedural abstraction would be the word open for
a door.
5. Define Cohesion & Coupling
Cohesion is a measure that defines the degree of intra-dependability within elements of a module.
The greater the cohesion, the better is the program design.
Coupling is a measure that defines the level of inter-dependability among modules of a
program. It tells at what level the modules interfere and interact with each other.
The lower the coupling, the better the program.

6. Define Refactoring and Aspect.


Refactoring – "Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code [design] yet improves its internal structure."
Aspects – a representation of a cross-cutting concern that must be accommodated as refinement and modularization occur.

7. What are the object oriented design concepts?


i. Design classes
a. Entity classes (business classes)
b. Boundary classes (interfaces)
c. Controller classes (controllers)
ii. Inheritance: all responsibilities of a superclass are immediately inherited by all subclasses.
iii. Messages: stimulate some behavior to occur in the receiving object.
iv. Polymorphism: a characteristic that greatly reduces the effort required to extend the design.
8. What are the advantages of Modularization?
i. Smaller components are easier to maintain
ii. Program can be divided based on functional aspects
iii. Desired level of abstraction can be brought in the program
iv. Components with high cohesion can be reused again
v. Concurrent execution can be made possible
vi. Desired from security aspect
9. Define Design pattern concept
a. A design pattern “conveys the essence of a proven design solution to a
recurring problem within a certain context of computing concerns.”
b. The intent of each design pattern is to provide a description that enables a
designer to determine:
i. whether the pattern is applicable to the current work,
ii. whether the pattern can be reused, and
iii. whether the pattern can serve as a guide for developing a similar, but
functionally or structurally different pattern.

10. Define observer .


The observer pattern has subjects and dependent observers; when a subject
changes state, all of its observers are automatically notified and updated. The subject
doesn't have to depend on who the observers actually are; observers can register and
unregister with the subject dynamically.
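
A minimal Python sketch of this pattern (the class names are illustrative):

# Hedged sketch of the Observer pattern: observers register with a subject
# and are notified automatically whenever the subject's state changes.

class Subject:
    def __init__(self):
        self._observers = []

    def register(self, observer):      # observers attach dynamically
        self._observers.append(observer)

    def unregister(self, observer):    # ...and detach dynamically
        self._observers.remove(observer)

    def set_state(self, state):        # a state change triggers notification
        for observer in self._observers:
            observer.update(state)

class LogObserver:                     # illustrative concrete observer
    def update(self, state):
        print(f"Logger saw new state: {state}")

subject = Subject()
subject.register(LogObserver())
subject.set_state("READY")             # prints: Logger saw new state: READY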

11. Define pattern.


A pattern is a named problem/solution pair that can be applied in a new context, with advice on how to apply it in novel situations and discussion of its trade-offs.

12. Define façade pattern


A facade is an object that provides a simplified interface to a larger body of code, such as a class library. A facade can make a software library easier to use, understand and test, since the facade has convenient methods for common tasks; for the same reason, it makes code that uses the library more readable.
● A facade can wrap a poorly designed collection of APIs with a single well-designed API (as per task needs).
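
A minimal Python sketch (the subsystem classes and method names are invented for illustration):

# Hedged sketch of the Facade pattern: one simple method hides several
# lower-level subsystem calls.

class Amplifier:
    def on(self): print("amplifier on")

class Projector:
    def on(self): print("projector on")

class MediaPlayer:
    def play(self, movie): print(f"playing {movie}")

class HomeTheaterFacade:
    """Single convenient entry point for a common task."""
    def __init__(self):
        self.amp, self.projector, self.player = Amplifier(), Projector(), MediaPlayer()

    def watch_movie(self, movie):
        self.amp.on()
        self.projector.on()
        self.player.play(movie)

HomeTheaterFacade().watch_movie("Inception")  # one call instead of three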

13. What is the difference between adapter and proxy pattern?


An Adapter wraps an existing class with a new interface so that it becomes compatible with
the interface needed. The main differences between Adapter and Proxy patterns are: While
proxy provides the same interface, Adapter provides a different interface that's compatible
with its client

14. Define pipe and filter in architectural styles.


Pipe and Filter is another architectural pattern, which has independent entities called filters
(components) which perform transformations on data and process the input they receive, and
pipes, which serve as connectors for the stream of data being transformed, each
connected to the next component in the pipeline.
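
A minimal Python sketch of this style (the filters here are illustrative text transformations):

# Hedged sketch of the pipe-and-filter style: each filter is an independent
# transformation, and the "pipe" simply feeds one filter's output to the next.

def strip_whitespace(lines):
    return (line.strip() for line in lines)

def drop_blank(lines):
    return (line for line in lines if line)

def to_upper(lines):
    return (line.upper() for line in lines)

def pipeline(source, *filters):
    data = source
    for f in filters:          # the pipe: chain filters in order
        data = f(data)
    return data

raw = ["  hello ", "", " pipe and filter  "]
for line in pipeline(raw, strip_whitespace, drop_blank, to_upper):
    print(line)                # HELLO / PIPE AND FILTER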
15. What is User interface design(UID)?
User Interface (UI) Design focuses on anticipating what users might need to do and ensuring
that the interface has elements that are easy to access, understand, and use to facilitate those
actions. UI brings together concepts from interaction design, visual design, and information
architecture.
16. What are architectural styles in Software engineering?
An architectural style is a set of principles and patterns that guide the organization of a software system. It dictates how the components and modules within the system interact and communicate.

17. What is the model view controller in SE?


Model-View-Controller (MVC) is an architectural pattern that separates an application into three interconnected components: the Model, which manages the data and business logic; the View, which presents information to the user; and the Controller, which receives user input and updates the model and the view accordingly. This separation of concerns makes the application easier to maintain, test and extend.
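
A minimal, hedged Python sketch of the three roles (all class names are illustrative):

# Hedged MVC sketch: the controller mediates between a model holding data
# and a view responsible only for presentation.

class CounterModel:                 # Model: data and business logic
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1

class CounterView:                  # View: presentation only
    def render(self, count):
        print(f"Current count: {count}")

class CounterController:            # Controller: handles user input
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_click(self):             # e.g., a button press
        self.model.increment()
        self.view.render(self.model.count)

controller = CounterController(CounterModel(), CounterView())
controller.on_click()               # Current count: 1
controller.on_click()               # Current count: 2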
18. What is the difference between cohesion and coupling in software engineering?
The main difference between coupling and cohesion is that coupling refers to the degree of
interdependence between modules or components in a software system. In contrast, cohesion refers to
the degree to which elements within a single module or component work together to achieve a single
purpose.

19. What is software design engineering?


Software design is the process of envisioning and defining software solutions to one or more
sets of problems. One of the main components of software design is the software requirements
analysis (SRA). SRA is a part of the software development process that lists specifications used in
software engineering.

20. What are the golden rules of user interface design?


The golden rules are divided into three groups: Place Users in Control. Reduce Users' Memory
Load. Make the Interface Consistent.

21. Define Architecture.

Architecture in the context of software engineering refers to the high-level structure of a


software system. It involves defining the overall organization of the system, including its components,
their interactions, and how they fit together to meet the system's functional and non-functional
requirements.

22. What are the architectural design various system models can be used?

In software architecture, various system models represent different ways of structuring and
organizing a software system to address specific requirements, challenges, or objectives. These
architectural models focus on different concerns like scalability, maintainability, security, and
performance.

23. What are certain issues that are considered while designing the software?

When designing software, several critical issues must be considered to ensure that the system
is robust, scalable, maintainable, and meets the needs of its users. These issues span across functional
and non-functional requirements and influence the overall architecture, user experience, and long-term success of the software.

24. What are the Different types of cohesion?

 Functional cohesion

 Sequential cohesion

 Communicational cohesion

 Procedural cohesion
25. Why modularity is important in software projects?

Modularity is a key principle in software engineering that involves dividing a system into
smaller, self-contained units or modules. Each module is designed to perform a specific task, making
the overall system more manageable, maintainable, and flexible.

UNIT IV

1. Who does the software testing and needs it?


1. A strategy for software testing is developed by the project manager, software engineers,
and testing specialists.
2. Testing often accounts for more project effort than any other software engineering
action. If it is conducted haphazardly, time is wasted, unnecessary effort is expended,
and even worse, errors sneak through undetected. It would therefore seem reasonable
to establish
a systematic strategy for testing software.

2. Differentiate between Validation and Verification of a Software Product?


Verification refers to the set of tasks that ensure that software correctly implements a specific function.
Validation refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Boehm states this another way:
a. Verification: "Are we building the product right?"
b. Validation: "Are we building the right product?"

3. Discuss testing strategy for small and large software testing


1. A strategy for software testing must accommodate low-level tests that are necessary to
verify that a small source code segment has been correctly implemented as well as high
level tests that validate major system functions against customer requirements.
2. Testing is conducted by the developer of the software and (for large projects) an
independent test group.
3. To perform effective testing, you should conduct effective technical reviews. By doing this,
many errors will be eliminated before testing commences.

4. Define Software Testing? List out the advantages of Software Testing?


Software Testing: Testing is the process of exercising a program with the specific intent of
finding errors prior to delivery to the end user.
Advantages of software testing: Software testing shows
1. errors
2. requirements conformance
3. performance
4. an indication of quality
5. Explain how Unit testing of a Software System is performed.
Definition: Unit testing is a method by which individual units of source code are tested to
determine if they are fit for use.
GOAL: The goal of unit testing is to segregate each part of the program and test that the
individual parts are working correctly.
The following are activities are performed in unit testing:
i. Module interfaces are tested for proper information flow.
ii. Local data are examined to ensure that integrity is maintained.
iii. Boundary conditions are tested.
iv. Basis (independent) path are tested.
v. All error handling paths should be tested.
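
A minimal sketch of a unit test written for pytest (the function under test and its cases are invented for illustration); it exercises a typical value, boundary conditions and an error-handling path:

# Hedged sketch: unit-testing a single function in isolation with pytest.

import pytest

def discount(price: float, percent: float) -> float:
    """Unit under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")   # error-handling path
    return price * (1 - percent / 100)

def test_typical_value():
    assert discount(100.0, 10) == 90.0

def test_boundary_conditions():
    assert discount(100.0, 0) == 100.0    # lower boundary
    assert discount(100.0, 100) == 0.0    # upper boundary

def test_error_handling_path():
    with pytest.raises(ValueError):
        discount(100.0, 150)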

6. List out the outcome of unit testing


Unit testing focuses verification effort on the smallest unit of software design
—the software component or module

7. What is Integration Testing?


Definition: Integration Testing is a type of software testing where individual units are
combined and tested as a group. Integration Testing exposes defects in the interfaces and
in the interactions between integrated components or systems.

8. Define Smoke Testing.

Smoke testing: Smoke Testing, also known as “Build Verification Testing”, is a type of
software testing that comprises a non-exhaustive set of tests that aim at ensuring that the
most important functions work.
9. Discuss Regression testing.
Regression testing: it is used to check for defects propagated to other modules by
changes made to existing programs. Regression means retesting the unchanged parts of
the application.

10. Define black box testing.


Black Box Testing is a type of software testing, either functional or non-functional, performed without reference to the internal structure of the component or system.

11. Define white box testing.


 White Box Testing is Testing based on an analysis of the internal structure of the
component or system.
 It is also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing; it is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.
12. List out various system testing
a. Recovery Testing
b. Security Testing
c. Stress Testing
d. Performance Testing
e. Deployment Testing

13. What are the general test criteria?
Interface integrity – internal and external module interfaces are tested as each module or cluster is added to the software.
Functional validity – tests to uncover functional defects in the software.
Information content – tests for errors in local or global data structures.
Performance – verifies that specified performance bounds are met.

14. Explain Acceptance testing

Making sure the software works correctly for intended user in his or her normal
work environment.
There are two types of acceptance testing:
a. Alpha test
b. Beta test
Alpha test – version of the complete software is tested by customer under the supervision
of the developer at the developer’s site.
Beta test – a version of the complete software is tested by the customer at his or her own site without the developer being present.

15. Explain Debugging Strategies


Debugging is the process of finding and resolving defects that prevent the correct operation of computer software.
Debugging (removal of a defect) occurs as a consequence of successful testing.

Common approaches (may be partially automated with debugging tools):


a. Brute force – memory dumps and run-time traces are examined for clues to error causes
b. Backtracking – source code is examined by looking backwards
from symptom to potential causes of errors
c. Cause elimination – uses binary partitioning to reduce the number of potential locations where errors can exist

16. What is model checking?


Software model checking is the algorithmic analysis of programs to prove properties of their executions.

17. Define symbolic execution.


Symbolic execution is used to reason about a program path-by-path, which is an advantage over reasoning about a program input-by-input as other testing paradigms do (e.g. dynamic program analysis).
18. What are the benefits of symbolic execution?
Symbolic execution can simultaneously explore multiple paths that a program could take
under different inputs that don't necessarily have to be defined. The analysis is done at either a source
or binary code level

19. Define program analysis.


Program analysis is the process of automatically analyzing the behavior of computer programs
regarding a property such as correctness, robustness, safety and liveness. Program analysis
focuses on two major areas: program optimization and program correctness.

20. What are the five program steps?


Program development is the process of creating application programs. Program development
life cycle (PDLC) The process containing the five phases of program development: analyzing,
designing, coding, debugging and testing, and implementing and maintaining application software.

21. How will you test simple loop?


Testing a simple loop involves verifying that the loop performs as expected under various
conditions, including boundary cases and typical usage.

22. What is static program analysis?


Static Program Analysis refers to the process of analyzing a program's code without actually
executing it. The goal is to gather information about the program's structure, behavior, and
potential issues before it runs. Static analysis examines the source code, bytecode, or intermediate
representations of a program to identify errors, vulnerabilities, inefficiencies, or areas for optimization.
23. List the objectives of testing.
The objectives of testing in software development are to ensure that the software meets its
requirements, functions as expected, and performs optimally. Testing helps identify defects and
improve the quality of the software.

24. Why is debugging so difficult?


Debugging is often considered one of the most challenging aspects of software development,
and there are several reasons why it can be difficult. Debugging involves identifying, isolating, and
fixing errors (bugs) in software, and this process can become complex.

25. Define cyclomatic complexity.


Cyclomatic Complexity is a software metric used to measure the complexity of a program's
control flow. It was introduced by Thomas McCabe in 1976 and provides a quantitative measure of
the number of linearly independent paths through a program's source code. The idea is to quantify
how complex the control structure of a program is, based on the decision points (e.g., if, while, for,
case) in the code.
Formula for Cyclomatic Complexity
The cyclomatic complexity V(G) of a program is calculated using the formula:
V(G) = E - N + 2P
Where:
E = number of edges in the control flow graph (CFG) of the program.
N = number of nodes in the control flow graph.
P = number of connected components (typically 1 for a single program).
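
A small Python sketch applying the formula to a control flow graph given as an adjacency list (the example graph is illustrative):

# Hedged sketch: computing V(G) = E - N + 2P from a control-flow graph.

def cyclomatic_complexity(cfg: dict, components: int = 1) -> int:
    nodes = len(cfg)
    edges = sum(len(successors) for successors in cfg.values())
    return edges - nodes + 2 * components

# CFG of a function with one if/else: entry -> decision -> (then | else) -> exit
cfg = {
    "entry": ["decision"],
    "decision": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}
print(cyclomatic_complexity(cfg))  # E=5, N=5, P=1 -> V(G) = 5 - 5 + 2 = 2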

UNIT V

1. What is software project management?


Software project management is the art and science of planning and leading software projects. It is a sub-discipline of project management in which software projects are planned, monitored and controlled.

2. List the characteristics of the products of software projects.
∙ Invisibility
∙ Complexity
∙ Flexibility

3. What are the steps in risk planning?


∙ Risk identification
∙ Risk analysis and prioritization
∙ Risk planning
∙ Risk monitoring

4. Define risk assessment.


Ans. Risk assessment uses the formula: Risk exposure = (potential damage) * (probability of occurrence).
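For instance (figures illustrative): if a risk could cause potential damage of Rs. 2,00,000 and its probability of occurrence is 0.25, then the risk exposure is 2,00,000 * 0.25 = Rs. 50,000.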
5. Define risk analysis and risk monitoring.
Risk Analysis considers each identified risk and makes a judgment about the probability
and seriousness of it
Risk Monitoring involves regularly assessing each identified risk to decide whether that risk is becoming more or less probable and whether the effect of the risk has changed.

6. Define Risk Identification.


Risk management begins with analyzing the risks involved in the project.
Risk identification is not a one-off activity, since projects are constantly evolving and new risks arise while other risks may dissipate or reduce in importance.

7. What are the risks to business impact?


∙ Effect of this product on company revenue?
∙ Reasonableness of the delivery deadline?
∙ Number of customers who will use this product?
∙ Interoperability constraints?
∙ Sophistication of end users?
∙ Costs associated with a defective product?

8. What are things to be considered in risk management?(Nov/Dec2012)


∙ Risk Identification - performed by organizations and project teams
∙ Risk Analysis - includes decision analysis tools
∙ Risk Planning - assessment is an important part
∙ Risk Monitoring - identifies development environment risks
9. What is configuration management?
Configuration management is a systems engineering process for establishing consistency of a
product's attributes throughout its life. In the technology world, configuration management is an IT
management process that tracks individual configuration items of an IT system.

10.What is the main objective of configuration management?


The objective of Configuration Management is to define and control the components of an IT
service and its infrastructure, and to maintain accurate configuration information. The Configuration
Management process manages service assets to support other Service Management processes.

11.What is project scheduling in software engineering?


Project scheduling consists of assigning start and end dates to individual tasks and allocating
appropriate resources within an estimated budget. This is what allows you to make sure the team can
complete their tasks on time. It only focuses on the tasks, their deadlines and project dependencies.

12.What is DevOps for software?


Definition. DevOps (a portmanteau of “development” and “operations”) is the combination of
practices and tools designed to increase an organization's ability to deliver applications and services
faster than traditional software development processes.
13.Why DevOps is used?
DevOps combines development and operations to increase the efficiency, speed, and security
of software development and delivery compared to traditional processes. A more nimble software
development lifecycle results in a competitive advantage for businesses and their customers.
14.What is cloud in DevOps?
The cloud minimizes latency and enables centralized management via a unified platform for
deploying, testing, integrating, and releasing applications. A cloud platform allows DevOps teams to
adapt to changing requirements and collaborate across distributed enterprise environments.
15.What is a Deployment Pipeline?
A deployment pipeline is a crucial concept in the field of software development and DevOps.
It represents an automated and streamlined process that facilitates the continuous integration, testing,
and deployment of code changes from development to production environments. The primary goal of
a deployment pipeline is to ensure that software releases are efficient, reliable, and maintain
consistent quality.

16.How to Build a Deployment Pipeline?


Building a deployment pipeline is crucial in the software development and DevOps process. It
helps automate and streamline the process of deploying code changes from development to
production environments.
Here’s a step-by-step guide on how to build a deployment pipeline:
1. Define your pipeline requirements
2. Select a version control system (VCS)
3. Choose a build automation tool
4. Set up continuous integration (CI)
5. Automate testing
6. Artifact generation
7. Implement continuous deployment (CD)

17. What are deployment pipeline tools?


There are several popular deployment pipeline tools available that facilitate the automation
and orchestration of the software delivery process. These tools help set up continuous integration,
continuous deployment, and continuous delivery pipelines, ensuring a streamlined and efficient
development workflow.
Here are some widely used deployment pipeline tools:
● Jenkins, GitLab CI/CD, Travis CI, CircleCI and GitHub Actions

18.What do you mean by project scheduling?


Project scheduling consists of assigning start and end dates to individual tasks and allocating
appropriate resources within an estimated budget. This is what allows you to make sure the team can
complete their tasks on time. It only focuses on the tasks, their deadlines and project dependencies

19.What are the five steps in project scheduling?


The 5 basic phases in the project management process are:
● Project Initiation
● Project Planning
● Project Execution
● Project Monitoring and Controlling
● Project Closing

20.What are the steps of project scheduling?


7 steps to create a project schedule:
● Define your project goals.
● Identify all stakeholders.
● Determine your final deadline.
● List each step or task.
● Assign a team member responsible for each task.
● Work backward to set due dates for each task.
● Organize your project schedule in one tool, and share it with your team.

21. What is error tracking?


Error tracking is the process of identifying, recording, managing, and resolving errors or
defects in software applications. It involves tracking and documenting the occurrence of bugs or issues,
ensuring they are addressed in a timely manner to maintain the quality, functionality, and stability of the
software.

22. How to measure the function point (FP)?


Function Point (FP) is a metric used to measure the functional size of software,
representing the functionality provided by the system to the user. The function point analysis (FPA)
method quantifies the software’s functionality by counting the number of "function points" based on the
software's features and its complexity.
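A commonly taught form of the computation (treat this as a sketch of standard function point analysis, not the full method) is: FP = count total x [0.65 + 0.01 x Sum(Fi)], where the count total is the weighted sum of external inputs, outputs, inquiries, internal logical files and external interface files, and the Fi (i = 1 to 14) are value adjustment factors each rated from 0 to 5.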

23. List two advantages of COCOMO model.


Provides Quantitative Estimates for Project Planning
Supports Different Project Development Models

24. Define software measure.


A software measure is a quantitative value or metric used to assess various characteristics
or attributes of a software system, such as its quality, performance, size, complexity, or reliability. Software
measures help in evaluating and improving software development processes, guiding decision-making, and
providing insights into the effectiveness of the software.

25. What are project indicators and how do they help a project manager?
Project indicators are metrics or key performance indicators (KPIs) that provide
measurable insights into the progress, health, and performance of a project. These indicators help project
managers monitor and assess the success of a project, enabling them to make informed decisions and take
corrective actions as needed.
