OOSE Notes
Testing – Unit testing – Black box testing– White box testing – Integration and System
testing– Regression testing – Debugging - Program analysis – Symbolic execution –
Model Checking-Case Study
The features that good software engineers should possess are as follows:
2. Software Processes
The term software refers to the set of computer programs, procedures and associated
documents (flowcharts, manuals, etc.) that describe the programs and how they are to be
used.
A software process is the set of activities and associated outcomes that produce a software
product. Software engineers mostly carry out these activities. The following key process
activities are common to all software processes:
Requirement Analysis: Understanding and defining the needs and expectations of the
users or customers. This involves gathering and documenting requirements for the
software.
Design: Creating a blueprint or plan for the software system based on the
requirements. This phase involves architectural design, detailed design, and often
includes decisions about data structures, algorithms, and user interfaces.
Implementation or Coding: Writing the actual code for the software based on the
design specifications. This is the phase where the software is built.
Testing: Verifying that the software behaves as intended and meets the specified
requirements. Testing can include various levels such as unit testing, integration testing,
system testing, and acceptance testing.
Deployment: Installing and configuring the software in the target environment. This may
also involve creating documentation and training materials for end-users.
Maintenance: After deployment, the software requires ongoing maintenance to fix bugs,
address issues, and implement updates or enhancements.
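As a small illustration of the unit-testing level mentioned above, here is a hedged sketch using Python's built-in unittest module; the add function and its tests are hypothetical, invented only to show the mechanics:

```python
import unittest

def add(a, b):
    # A trivial unit under test (hypothetical example function).
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        # Unit testing verifies one small unit in isolation.
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```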
Software processes can be categorized into different models or methodologies, each with
its own set of principles and practices. Some common software development
methodologies include:
Waterfall Model: Sequential and linear, where each phase must be completed before
moving on to the next.
Agile Model: Iterative and incremental, with a focus on flexibility and responsiveness to
change. Agile methodologies include Scrum, Kanban, and Extreme Programming (XP).
Incremental Model: Similar to the waterfall model but divides the project into small,
manageable parts, or increments, with each increment building upon the previous one.
Spiral Model: Combines elements of both the waterfall and iterative models, emphasizing
risk assessment and adaptation to changes.
Prescriptive Process Models
o Waterfall Model
o Incremental Process Model
Incremental Model
RAD Model
o Evolutionary Model
Prototyping
Spiral model
Concurrent model
The Waterfall Model was the first Process Model to be introduced. It is also
referred to as a linear-sequential life cycle model. It is very simple to understand
and use. In a waterfall model, each phase must be completed before the next
phase can begin and there is no overlapping in the phases. The Waterfall model is
the earliest SDLC approach that was used for software development.
System Design: The requirement specifications from the first phase are studied in this
phase and the system design is prepared. This system design helps in
Specifying hardware and system requirements and helps in defining the overall
system architecture.
Implementation: With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next phase.
Each unit is developed and tested for its functionality, which is referred to as Unit
Testing.
Integration and Testing: All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire
system is tested for any faults and failures.
Maintenance: Some issues come up in the client environment after delivery. To
fix those issues, patches are released. Also, to enhance the product, improved
versions are released. Maintenance is done to deliver these changes in the customer
environment.
Advantages:
Disadvantages:
1. Requirement analysis: In the first phase of the incremental model, product analysis
experts identify the requirements, and the system's functional requirements are
understood by the requirements analysis team. This phase plays a crucial role in
developing software under the incremental model.
2. Design & Development: In this phase of the Incremental model of SDLC, the design of
the system functionality and the development method are finished with success. When
software develops new practicality, the incremental model uses style and development
phase.
3. Testing: In the incremental model, the testing phase checks the performance of each
existing function as well as the additional functionality. In the testing phase, various
methods are used to test the behaviour of each task.
3.1.2 RAD Model:
RAD model distributes the analysis, design, build and test phases into a series of
short, iterative development cycles.
The phases in the rapid application development (RAD) model are:
Business modelling: The information flow is identified between various
business functions.
Data modelling: Information gathered from business modeling is used to
define data objects that are needed for the business.
Process modelling: Data objects defined in data modelling are transformed to
achieve the information flow necessary to implement a specific business
objective. Processing descriptions are created for the CRUD (create, read,
update, delete) operations on data objects.
Application generation: Automated tools are used to convert process
models into code and the actual system.
Testing and turnover: Test new components and all the interfaces.
Advantages:
Disadvantages:
A proper time frame has to be maintained for both the end customer and the
developers for completing the system.
Even slight complexity in modularizing the system in the RAD model can lead to
failure of the entire project.
Communication:
At this stage, the developers communicate with the customer to gather the
customer's requirements. The objectives of the software and the areas where the definitions
are still unclear are outlined. The requirements which are clear and perfectly known are
also outlined. After analyzing the customer requirements, the developers proceed to
construct the prototype.
Construct Prototype:
While constructing the prototype, the developers establish objectives such as: What will
the prototype be used for? Which features of the final system should the prototype reflect?
It is taken into consideration that the cost of the developed prototype should be low and
the speed of prototype development should be fast.
The speed and cost of the prototype are controlled by ignoring the requirements that have
nothing to do with the customer's interest. Generally, prototypes are developed for the
requirements of most interest to the customer, such as the user interface and functions
that are still unclear.
Customer Evaluation:
Once the prototype of a final software is developed it is demonstrated to the customer for
evaluation. Customer evaluates the prototype against the requirements they have specified
in the communication phase. If the customers are satisfied, then the developers start
developing the complete version of the software. In case the customer is not satisfied with
the prototype, they are expected to suggest modifications.
Iterate Prototype:
In this way, the prototype is iterated until the customer is satisfied with the prototype.
Once the customer is satisfied with the prototype the developers get engaged in developing
the complete version of the software.
Deploy Software:
Once the objective of the prototype is served, it is thrown away and the software is
developed using other process models. The main objective of the prototype is to
understand the customer's requirements properly and completely.
As all the requirements are now understood, the developers develop the software and
deliver it to the customer with the expectation that the developed software meets all the
requirements specified by the customer.
Advantages:
It helps the developer to understand the certain and uncertain requirements of the
customer.
It helps the customer to easily identify the required modifications before
final implementation of the system.
The customer does not have to wait long to see a working model of
the final system.
Customer satisfaction is achieved.
This model is flexible in design.
Disadvantages:
Requirements gathering:
Requirements are gathered during the planning phase. Requirements such as the BRS
(Business Requirement Specification) and the SRS (System Requirement
Specification) are prepared. All the needed requirements are collected from customers.
Risk Analysis:
In the risk analysis phase, a process is undertaken to identify risks and alternative
solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found
during the risk analysis, then alternative solutions are suggested and implemented.
Engineering:
In this phase the software is developed, along with testing at the end of the phase.
Hence, in this phase, both development and testing are done.
Evaluation:
This phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.
Advantages:
Changing requirements can be accommodated
Requirements can be captured accurately.
Planning and estimation happens at each stage
Prototyping at each stage helps to reduce risk
Disadvantages:
Management is more complex
Process is complex
Spiral may go indefinitely
This model is not suitable for small low risk projects
If the customer keeps changing requirements, the number of spirals
increases and the software project manager may not be able to close the project at all.
Specialized Process Models
These models tend to be applied when a specialized or narrowly defined software
engineering approach is chosen.
o Component-Based Development
Commercial off-the-shelf (COTS) software components, developed by vendors
who offer them as products, provide targeted functionality with well-defined
interfaces that enable each component to be integrated into the software that is to be
built. The component-based development model incorporates many of the
characteristics of the spiral model. The component-based development model
constructs applications from pre-packaged software components.
Model incorporates the following steps:
1. Available component-based products are researched and evaluated
for the application domain in question.
2. Component integration issues are considered.
3. A software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality.
o The Formal Method Model
Encompasses a set of activities that lead to a formal mathematical specification of
computer software. Formal methods enable one to specify, develop and verify a
computer-based system by applying a rigorous mathematical notation. A
variation on this approach is called cleanroom software engineering. The
development of a formal model is currently quite time consuming because few
software developers have the necessary background to apply
formal methods; extensive training is required, and it is difficult to use the model as a
communication mechanism.
o Aspect-Oriented software Development
It provides a process and methodological approach for defining, specifying,
designing and constructing aspects, "mechanisms beyond subroutines and inheritance
for localizing the expression of a crosscutting concern". Common systemic aspects
include user interfaces, collaborative work, distribution, persistency, memory
management, transaction processing, security, integrity and so on.
4. Introduction to Agility
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support
they need.
In this model, projects are divided into small units of work. These are delivered
over time boxes which are called "sprints". Each sprint takes only a couple of weeks to
complete. At the end of each sprint, the progress of the project is analysed and suggestions
are given to make improvements.
In the scrum process, similar to the chief programmer approach, a chief architect defines in
the initial phase:
Overall architecture
Release date
Desired features
The next phases are called "sprints" (run at full speed for a short distance).
Sprints are carried out by groups and last one to four weeks; each sprint develops
specific desired features of the product. Scrum teams work in parallel on different sprints
and complete tasks on the same day. Every sprint is reviewed daily for 15 minutes to
remove any bottlenecks.
There are three vital members in a scrum model. They are
1. Product owner
2. Scrum master
3. Team member
Product owner:
The product owner communicates the requirements of the customer to the
development team.
Scrum master:
The scrum master keeps the team focused on its goal and acts as an interface between the owner and the team.
Team member:
The team is responsible for development of the project according to the backlog, a
wish list of features known as the product backlog.
Advantage of Scrum Model:
It is used to implement complex projects.
It can improve teamwork and communication.
Productivity can be improved with daily meetings.
The product can be delivered on schedule.
Each person's progress is visible on a day-to-day basis.
Release date is predetermined & known to all
Disadvantages:
1. If the task is not well-defined, the sprint process will take much time.
2. Team members should be well committed to the task. If they
fail, the project will also fail.
3. Inexperienced team members may not be able to complete the project in time.
4. Regression testing should be conducted after each sprint to
implement quality management.
UNIT II- REQUIREMENTS ANALYSIS AND SPECIFICATION
Fig: View of a system performing a set of functions
The user can get some meaningful piece of work done using a high-level function.
The functional requirements specification of a system should be both complete and
consistent.
Completeness means that all services required by the user should be defined.
Consistency means that requirements should not have contradictory definitions.
In practice, for large, complex systems, it is practically impossible to achieve requirements
consistency and completeness.
Reasons are:
1. It is easy to make mistakes and omissions when writing specifications for complex systems.
2. There are many stakeholders in a large system. A stakeholder is a person or role that is
affected by the system in some way. Stakeholders have different— and often inconsistent—
needs. These inconsistencies may not be obvious when the requirements are first specified,
so inconsistent requirements are included in the specification.
Identifying functional requirements from a problem description:
The high-level functional requirements often need to be identified either from an informal problem
description document or from a conceptual understanding of the problem. Each high-level
requirement characterizes a way of system usage by some user to perform some meaningful piece
of work. There can be many types of users of a system and their requirements from the system may
be very different. So, it is often useful to identify the different types of users
who might use the system and then try to identify the requirements from each user’s perspective.
Here we list all functions {fi} that the system performs. Each function fi is considered as a
transformation of a set of input data to some corresponding output data.
Example:-
Consider the case of the library system, where -
F1: Search Book function (fig. 3.3)
Input: an author’s name
Output: details of the author’s books and the location of these books in the library
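To make the input-output view concrete, the following minimal Python sketch models F1 as a transformation from input data to output data; the catalogue contents and function name are hypothetical stand-ins for the real library data:

```python
# Hypothetical in-memory catalogue: author -> list of (title, location) pairs.
CATALOGUE = {
    "Sommerville": [("Software Engineering", "Shelf A3")],
    "Pressman": [("Software Engineering: A Practitioner's Approach", "Shelf B1")],
}

def search_book(author_name):
    # F1: transforms the input (an author's name) into the output
    # (details of the author's books and their locations in the library).
    return CATALOGUE.get(author_name, [])

print(search_book("Sommerville"))  # [('Software Engineering', 'Shelf A3')]
```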
Reasons are:
1. Non-functional requirements may affect the overall architecture of a system rather than the
individual components.
2. A single non-functional requirement, such as a security requirement, may generate a number
of related functional requirements that define new system services that are required.
Non-functional requirements arise through user needs, because of budget constraints, organizational
policies, the need for interoperability with other software or hardware systems, or external factors
such as safety regulations or privacy legislation.
Classifications of non-functional requirements are
1. Product requirements:
These requirements specify or constrain the behavior of the software.
Examples include performance requirements on how fast the system must execute
and how much memory it requires, reliability requirements that set out the
acceptable failure rate, security requirements, and usability requirements.
2. Organizational requirements
These requirements are broad system requirements derived from policies and
procedures in the customer’s and developer’s organization.
Examples include operational process requirements that define how the system will
be used, development process requirements that specify the programming language,
process standards to be used, and environmental requirements that specify the operating
environment of the system.
3. External requirements:
This broad heading covers all requirements that are derived from factors external to
the system and its development process.
Regulatory requirements set out what must be done for the system to be approved for
use by a regulator, such as a central bank;
Legislative requirements that must be followed to ensure that the system operates
within the law;
Ethical requirements that ensure that the system will be acceptable to its users and
the general public.
Nonfunctional requirements are the characteristics of the system which cannot be expressed as
functions, such as the maintainability of the system, portability of the system, usability of the
system, etc.
Nonfunctional requirements may include:
# reliability issues,
# performance issues,
# human - computer interface issues,
# interface with other external systems,
# security and maintainability of the system, etc.
Domain requirements are important because they often reflect fundamentals of the
application domain. If these requirements are not satisfied, it may be impossible to make the
system work satisfactorily.
The user requirements for a system should describe the functional and non-functional
requirements so that they are understandable by system users without detailed technical
knowledge.
They should only specify the external behavior of the system and should avoid system
design characteristics.
Consequently, if you are writing user requirements, you should not use software jargon,
structured notations or formal notations, or describe the requirement by describing the
system implementation.
User requirements are written in simple language, with simple tables and forms and intuitive
diagrams.
However, various problems can arise when requirements are written in natural language sentences
in a text document:
1. Lack of clarity: It is sometimes difficult to use language in a precise and unambiguous
way without making the document wordy and difficult to read.
2. Requirements confusion: Functional requirements, non-functional requirements, system
goals and design information may not be clearly distinguished.
3. Requirements amalgamation: Several different requirements may be expressed together
as a single requirement.
It is good practice to separate user requirements from more detailed system requirements in
a requirements document. Otherwise, non-technical readers of the user requirements may be
overwhelmed by details that are really only relevant for technicians.
5. An external regulator who needs to certify that the system is safe may specify that an
architectural design that has already been certified be used.
Natural language is often used to write system requirements specifications as well as user
requirements.
However, because system requirements are more detailed than user requirements,
natural language specifications can be confusing and hard to understand:
1. Natural language understanding relies on the specification readers and writers using the
same words for the same concept.
2. A natural language requirements specification is over flexible.
3. There is no easy way to modularize natural language requirements.
Because of these problems, requirements specifications written in natural language
are prone to misunderstandings.
These are often not discovered until later phases of the software process and
may then be very expensive to resolve.
REQUIREMENT ENGINEERING PROCESS
Requirements engineering (RE) refers to the process of defining, documenting and
maintaining requirements.
Requirements engineering emphasizes the use of systematic and repeatable techniques that
ensure the completeness, consistency, and relevance of the system requirements.
The goal of the requirements engineering process is to create and maintain a system
requirements document.
Requirements engineering process includes four sub-processes.
1) Feasibility study: Assessing whether the system is useful to the business.
2) Elicitation and analysis:
Requirements elicitation is the process of discovering, reviewing, documenting,
and understanding the user's needs and constraints for the system.
Requirements analysis is the process of refining the user's needs and constraints.
3) Specification: Converting these requirements into some standard form. It is the process
of documenting the user's needs and constraints clearly and precisely.
4) Validation: Checking that the requirements actually define the system that the customer
wants.
Figure illustrates the relationship between the activities. It also shows the documents
produced at each stage of the requirements engineering process.
The activities are concerned with the discovery, documentation and checking of
requirements.
In all systems, normally requirements change frequently.
o Reasons for changing requirements:
o The people involved develop a better understanding of what they want the software
to do;
o The organisation buying the system changes;
o Modifications are made to the system’s hardware, software and organisational
environment.
The process of managing these changing requirements is called requirements management.
Later in the process, in the outer rings of the spiral, more effort will be devoted to system
requirements engineering and system modeling.
This spiral model accommodates approaches to development in which the requirements are
developed to different levels of detail. The number of iterations around the spiral can vary,
so the spiral can be exited after some or all of the user requirements have been elicited.
If the prototyping activity shown under requirements validation is extended to include
iterative development, this model allows the requirements and the system implementation to
be developed together.
2.9. FEASIBILITY STUDIES:
For all new systems, the requirements engineering process should start with a feasibility study. The
input to the feasibility study is:
A set of preliminary business requirements, an outline description of the system and how the
system is intended to support business processes.
The results of the feasibility study should be
A report that recommends whether or not it is worth carrying on with the requirements
engineering and system development process.
A feasibility study is a short, focused study that aims to answer a number of questions:
1. Does the system contribute to the overall objectives of the organisation?
2. Can the system be implemented using current technology and within given cost and schedule
constraints?
3. Can the system be integrated with other systems which are already in place?
1) Information assessment:
The information assessment phase identifies the information that is required to answer the
three questions set out above.
Once the required information has been identified, the analysts talk with information sources
to discover the answers to these questions.
3) Report writing:
Once information is collected, write the feasibility study report. Report can contain a
recommendation about whether or not the system development should continue.
Report can propose changes to the scope, budget and schedule of the system and suggest
further high-level requirements for the system.
REQUIREMENTS ELICITATION AND ANALYSIS
Software engineers work with customers and system end-users to find out about the
application domain, what services the system should provide, the required performance of
the system, hardware constraints, and so on.
Requirements elicitation and analysis may involve a variety of people in an organisation.
The term stakeholder is used to refer to any person or group who will be affected by
the system, directly or indirectly.
o Stakeholders include end-users who interact with the system and everyone else in an
organization that may be affected by its installation.
o Other system stakeholders may be engineers who are developing or maintaining
related systems, business managers, domain experts and trade union representatives.
Figure. The requirements elicitation and analysis process
The activities are interleaved as the process proceeds from the inner to the outer rings of the
spiral.
The process activities are:
1. Requirements discovery:
This is the process of interacting with stakeholders in the system to collect their
requirements. Domain requirements from stakeholders and documentation are also
discovered during this activity.
2. Requirements classification and organization :
This activity takes the unstructured collection of requirements, groups related requirements
and organizes them into coherent clusters.
3. Requirements prioritization:
Inevitably, where multiple stakeholders are involved, requirements will conflict. This
activity is concerned with prioritizing requirements, and finding and resolving requirements
conflicts through negotiation.
4. Requirements documentation:
The requirements are documented and input into the next round of the spiral. Formal or
informal requirements documents may be produced.
In addition to system stakeholders, requirements may come from the application domain and
from other systems that interact with the system being specified. All of these must be considered
during the requirements elicitation process.
Techniques used for requirements discovery are
1) Viewpoint
2) Interviewing
3) Scenarios
4) Ethnography
1) Viewpoints:
The requirements sources (stakeholders, domain, systems) can all be represented as system
viewpoints, where each viewpoint presents a sub-set of the requirements for the system.
Each viewpoint provides a fresh perspective on the system, but these perspectives are not
completely independent—they usually overlap so that they have common requirements.
A key strength of viewpoint-oriented analysis is that it recognizes multiple perspectives and
provides a framework for discovering conflicts in the requirements proposed by different
stakeholders.
Viewpoints can be used as a way of classifying stakeholders and other sources of
requirements.
Three generic types of viewpoint are
a) Interactor viewpoints: It represents people or other systems that interact directly with the
system. In the bank ATM system, examples of interactor viewpoints are the bank’s
customers and the bank’s account database.
b) Indirect viewpoints: It represents stakeholders who do not use the system themselves but
who influence the requirements in some way. In the bank ATM system, examples of
indirect viewpoints are the management of the bank and the bank security staff.
c) Domain viewpoints: It represents domain characteristics and constraints that influence the
system requirements. In the bank ATM system, an example of a domain viewpoint would be
the standards that have been developed for interbank communications.
Interactor viewpoints provide detailed system requirements covering the system features
and interfaces.
Indirect viewpoints are more likely to provide higher-level organizational requirements and
constraints.
Domain viewpoints normally provide domain constraints that apply to the system.
Figure. Viewpoints in LIBSYS
2) Interviewing:
Formal or informal interviews with system stakeholders are part of most requirements engineering
processes.
In these interviews, the requirements engineering team puts questions to stakeholders about the
system that they use and the system to be developed. Requirements are derived from the
answers to these questions.
Interviews may be of two types:
(1) Closed interviews where the stakeholder answers a predefined set of questions.
(2) Open interviews where there is no predefined agenda.
Interviews are good for getting an overall understanding of what stakeholders do, how they
might interact with the system and the difficulties that they face with current systems.
People like talking about their work and are usually happy to get involved in interviews.
However, interviews are not so good for understanding the requirements from the application
domain.
It is hard to elicit domain knowledge during interviews for two reasons:
(1) All application specialists use terminology and jargon that is specific to a domain.
(2) Some domain knowledge is so familiar to stakeholders that they either find it difficult to
explain or they think it is so fundamental that it isn’t worth mentioning.
Two characteristics of Effective interviewers:
(1) They are open-minded, avoid preconceived ideas about the requirements and are willing
to listen to stakeholders. If the stakeholder comes up with surprising requirements, they are
willing to change their mind about the system.
(2) They prompt the interviewee to start discussions with a question, a requirements
proposal or by suggesting working together on a prototype system. Saying to people ‘tell me
what you want’ is unlikely to result in useful information. Most people find it much easier to
talk in a defined context rather than in general terms.
Interviews should be used alongside other requirements elicitation techniques.
3) Scenarios:
Scenarios can be particularly useful for adding detail to an outline requirements description. They
are descriptions of example interaction sessions.
Each scenario covers one or more possible interactions. Several forms of scenarios have been
developed, each of which provides different types of information at different levels of detail
about the system.
The scenario starts with an outline of the interaction, and, during elicitation, details are added to
create a complete description of that interaction.
A scenario may include:
1. A description of what the system and users expect when the scenario starts
2. A description of the normal flow of events in the scenario
3. A description of what can go wrong and how this is handled
4. Information about other activities that might be going on at the same time
5. A description of the system state when the scenario finishes.
Figure. Scenario for article downloading in LIBSYS
Use-cases:
Sequence diagrams:
Sequence diagrams are often used to add information to a use-case. These sequence
diagrams show the actors involved in the interaction, the objects they interact with and
the operations associated with these objects.
Essentially, a user request for an article triggers a request for a copyright form. Once
the user has completed the form, the article is downloaded and sent to the printer. Once
printing is complete, the article is deleted from the LIBSYS workspace.
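The ordering of messages described above can be sketched in Python; every function name below is a hypothetical stub standing in for a LIBSYS service, chosen only to mirror the prose:

```python
# Hypothetical stubs standing in for LIBSYS services.
def request_copyright_form(user):
    return {"user": user, "signed": False}

def complete_form(form):
    form["signed"] = True

def fetch_article(article_id):
    return {"id": article_id, "content": "..."}

def send_to_printer(article):
    print(f"printing article {article['id']}")

def delete_from_workspace(article):
    print(f"deleting article {article['id']} from workspace")

def download_article(user, article_id):
    # The user's request first triggers a copyright form.
    form = request_copyright_form(user)
    complete_form(form)
    # Only after the form is completed is the article downloaded and printed.
    article = fetch_article(article_id)
    send_to_printer(article)
    # Once printing completes, the article is removed from the workspace.
    delete_from_workspace(article)

download_article("alice", "A123")
```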
Scenarios and use-cases are effective techniques for eliciting requirements for interactor
viewpoints, where each type of interaction can be represented as a use-case.
They can also be used in conjunction with some indirect viewpoints where these
viewpoints receive some results from the system.
Drawbacks:
They are not as effective for eliciting constraints or high-level business and non-
functional requirements from indirect viewpoints, or for discovering domain requirements.
4) Ethnography:
Ethnography is an observational technique that can be used to understand social and
organizational requirements.
An analyst immerses himself or herself in the working environment where the system will
be used. He or she observes the day-to-day work, and notes are made of the actual tasks in
which participants are involved.
The value of ethnography is that it helps analysts discover implicit system requirements
that reflect the actual rather than the formal processes in which people are involved.
Social and organizational factors that affect the work but that are not obvious to
individuals may only become clear when noticed by an unbiased observer.
Ethnography is particularly effective at discovering two types of requirements:
1. Requirements that are derived from the way in which people actually work rather than
the way in which process definitions say they ought to work.
2. Requirements that are derived from cooperation and awareness of other people’s activities.
The software requirements document is the specification of the system. It includes both a
definition and a specification of requirements. It is not a design document. As far as
possible, it should set out what the system should do rather than how it should do it.
Document Title
Author(s)
Affiliation
Address
Date
Document Version
1. Introduction
1.1 Purpose of this document
Describes the purpose of the document.
1.2 Scope of this document
Describes the scope of this requirements definition effort. This section also describes any
constraints that were placed upon the requirements elicitation process, such as schedules
and costs.
1.3 Overview
Provides a brief overview of the product defined as a result of the requirements elicitation
process.
2. General description
Describes the general functionality of the product, such as similar system information,
user characteristics, user objectives, and general constraints placed on the design team.
Describes the features of the user community, including their expected expertise with
software systems and the application domain.
3. Functional requirements
This section lists the functional requirements in ranked order. A functional requirement
describes the possible effects of a software system, in other words, what the system
must accomplish. Each functional requirement should be specified in the following
manner:
Short, imperative sentence stating highest ranked functional requirement.
1. Description
A full description of the requirement.
2. Criticality
Describes how essential this requirement is to the overall system.
3. Technical issues
Describes any design or implementation issues involved in satisfying this requirement.
4. Cost and schedule
Describes the relative or absolute costs of the system.
5. Risks
Describes the circumstances under which this requirement might not be satisfied.
6. Dependencies with other requirements
Describes interactions with other requirements.
7. Any other appropriate information
4. Interface requirements
This section describes how the software interfaces with other software products or users
for input or output. Examples of such interfaces include library routines, token
streams, shared memory, data streams, and so forth.
7.1 Security
7.2 Binary Compatibility
7.3 Reliability
7.4 Maintainability
7.5 Portability
7.6 Extensibility
7.7 Reusability
7.8 Application Compatibility
7.9 Resource Utilization
7.10 Serviceability
... others as appropriate
8. Operational scenarios
This section should describe a set of scenarios that illustrate, from the user's perspective,
what will be experienced when utilizing the system under various situations.
9. Preliminary schedule
This section provides an initial version of the project plan, including the major tasks to be
accomplished, their interdependencies, and their tentative start/stop dates
10. Preliminary budget
This section provides an initial budget for the project.
11. Appendices
11.1 Definitions, Acronyms, Abbreviations
Provides definitions terms, and acronyms, can be provided.
11.2 References
Provides complete citations to all documents and meetings referenced.
Semantic domain
Abstract Data Type (ADT) specification languages are used to specify algebras and
programs.
Programming languages are used to specify functions from input to output values.
Distributed system specification languages are used to specify state sequences, event
sequences, state transition sequences and finite state machines.
Syntactic domain
The syntactic domain of a formal specification language consists of alphabets of symbols
and a set of rules.
These rules are used to construct well-formed formulas from the alphabets. These
well-formed formulas are used to specify the system.
Satisfaction relation
For any model of a system, it is important to determine whether elements of the semantic
domain satisfy its specification. This satisfaction is determined by a function known as the
semantic abstraction function. The semantic abstraction function maps the elements of the
semantic domain to equivalence classes.
There are different specifications that are used to describe different aspects of the system,
for example, a specification describing the system behavior or a specification describing
the system structure.
Merits of Formal Methods
1) Formal methods provide a precise and unambiguous way to describe the behavior and
properties of a software system
2) Formal methods can be used to prove the correctness of a software system concerning its
specification. This is particularly valuable for safety-critical and mission-critical systems
3) Formal specification can serve as a clear and comprehensive documentation of system
requirements
4) The formal methods can identify errors in the specification itself. This allows corrections
before implementation.
5) The formal methods allow rigorous analysis of complex systems. Hence formal methods
promote the construction of rigorous specification of the system
6) The mathematical basis of formal methods facilitates automating the analysis of
specifications. The possibility of automatic verification is one of the most important
advantages of formal methods
Limitations
• Formal methods are classified using two approaches: the model-oriented approach and
the property-oriented approach.
Formal methods
o Model-oriented approach
In the model-oriented approach, the system's behavior is represented by directly constructing a model
of the system with the help of mathematical structures such as tuples, relations, functions, sets
and sequences.
In the property-oriented approach, the system's behaviour is defined indirectly by stating its
properties. These properties are specified in terms of a set of axioms.
The property-oriented approach is more suitable for requirements specification, and the
model-oriented approach is more suitable for system design specification.
The property-oriented approach is classified into two categories: axiomatic specification
and algebraic specification.
Axiomatic specification
Algebraic specification
S1, S2, S3, ..., Sn / S
The rule states that if S1, S2, ..., Sn are true, then the truth of S can be inferred.
The top part of an inference rule is called its antecedent; the bottom part is called its consequent.
Concept of assertion
The logic expressions are called assertions
An assertion before the statement or command is called a precondition. This condition
states the relationships and constraints among variables that are true at that point in execution.
An assertion following a statement is called a postcondition.
<precondition> statement <postcondition>
Example: x = y + 1
Step 1: Establish the range of input values over which the function should behave correctly
Step 3: Specify a predicate defining the condition which must hold on the output of the
function
Step 4: Establish the changes made to the function's input parameters after execution of
the function
Step 5: Combine all of the above into pre and post-conditions of the function
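A hedged illustration of this recipe, expressed with Python assert statements for an invented square-root routine (the function name, bounds and tolerance are assumptions, not from the notes):

```python
import math

def safe_sqrt(x):
    # Precondition (steps 1-2): the input must be a non-negative number.
    assert x >= 0, "precondition violated: x must be >= 0"
    result = math.sqrt(x)
    # Postcondition (step 3): the output squared equals x (within a
    # floating-point tolerance); x itself is left unchanged (step 4).
    assert abs(result * result - x) < 1e-9, "postcondition violated"
    return result

print(safe_sqrt(16.0))  # 4.0
```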
1. Type section: In this section, the data types being used are specified.
2. Exception section: In this section, exceptional conditions that may occur during the
operations are defined.
3. Syntax section: This section defines the signatures of the interface procedures. The
collection of sets that form the input domain of an operator, together with the set where its
output is produced, is called the signature of the operator.
4. Equations section: This section specifies a set of equations or rewrite rules. These rules
define the meaning of the interface procedures.
1. Basic construction operators: These operators are used to create or modify entities of a type.
For example, create and append are basic construction operators.
2. Extra construction operators: These are construction operators other than the basic
construction operators. For example, remove is an extra construction operator.
3. Basic inspection operators: These operators evaluate attributes of a type without modifying
them. For example, eval and get are basic inspection operators.
4. Extra inspection operators: These are inspection operators other than the basic
inspection operators.
Following is an algebraic specification that represents Cartesian coordinates. The defined
operations include X and Y coordinate accessors, which evaluate the x and y attributes of
an entity, and IsEq, which compares two entities for equality.
Coord
uses Integer, Boolean
Syntax section
Equations section
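Since the specification above survives only in outline, here is a hedged Python rendering of the same idea; the operation names follow the prose (create, xcoord, ycoord, iseq), but the exact signatures and equations are assumptions:

```python
# A minimal model of the Coord algebraic specification.
# Syntax section (assumed signatures):
#   create : Integer x Integer -> Coord
#   xcoord : Coord -> Integer
#   ycoord : Coord -> Integer
#   iseq   : Coord x Coord -> Boolean
def create(x, y):
    return (x, y)

def xcoord(c):
    return c[0]

def ycoord(c):
    return c[1]

def iseq(c1, c2):
    return xcoord(c1) == xcoord(c2) and ycoord(c1) == ycoord(c2)

# Equations section (assumed rewrite rules), checked on an example:
c = create(2, 3)
assert xcoord(c) == 2          # xcoord(create(x, y)) = x
assert ycoord(c) == 3          # ycoord(create(x, y)) = y
assert iseq(c, create(2, 3))   # iseq holds when both coordinates match
```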
1) Finite termination property: This property of algebraic specification ensures that any
sequence of operations applied to a data structure or system will terminate in a finite number
of steps. In other words, it guarantees that there will be no infinite loops or non-terminating
computations.
2) Unique termination property: The unique termination property of an algebraic specification
states that any two sequences of operations involving the interface procedures of the
specification will eventually terminate and produce the same result.
3) Completeness: The completeness property of an algebraic specification states that the
specification is sufficient to define the behavior of the system for all possible inputs and
outputs.
1) The algebraic specifications are based on mathematical structures. Hence they are
unambiguous and precise.
2) Using algebraic specification, the effect of the arbitrary sequence of operations can be
studied.
Cons
o A finite automata machine takes a string of symbols as input and changes its state accordingly. When a
desired symbol is found in the input, a transition occurs.
o During a transition, the automata can either move to the next state or stay in the same state.
o An FA has two kinds of states: accept states and reject states. When the input string is successfully
processed and the automata reaches a final state, the string is accepted.
Q: finite set of states
∑: finite set of input symbols
q0: initial state
F: set of final states
δ: transition function
δ: Q x ∑ → Q
FA is classified in two ways:
1. DFA (deterministic finite automata)
2. NDFA (non-deterministic finite automata)
DFA
DFA stands for Deterministic Finite Automata. Deterministic refers to the uniqueness of the computation.
In DFA, the input character goes to one state only. DFA doesn't accept the null move that means the DFA cannot
change state without any input character.
NDFA
NDFA refer to the Non Deterministic Finite Automata. It is used to transit the any number of states for a
particular input. NDFA accepts the NULL move that means it can change state without reading the symbols.
NDFA also has five states same as DFA. But NDFA has different transition function.
δ: Q x ∑ →2Q
Example
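The original example figure is not reproduced here; as a stand-in, the following minimal Python sketch implements a hypothetical DFA over {0, 1} that accepts binary strings ending in 1, instantiating the five-tuple defined above:

```python
# DFA = (Q, Sigma, delta, q0, F) accepting binary strings that end in '1'.
Q = {"q0", "q1"}
SIGMA = {"0", "1"}
DELTA = {
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q0", ("q1", "1"): "q1",
}
START = "q0"
FINAL = {"q1"}

def accepts(s):
    state = START
    for symbol in s:
        # In a DFA, each (state, symbol) pair maps to exactly one next state.
        state = DELTA[(state, symbol)]
    # The string is accepted only if the machine halts in a final state.
    return state in FINAL

print(accepts("0101"))  # True: ends in 1, DFA halts in final state q1
print(accepts("0110"))  # False: ends in 0
```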
Classical analysis methods, their categories, strengths and weaknesses:

Natural language (Informal)
Strengths: Easy to learn; easy to use; easy for the client to understand.
Weaknesses: Imprecise; specifications can be ambiguous, contradictory or incomplete.

Entity-relationship modelling, Structured systems analysis (Semiformal)
Strengths: Can be understood by the client; more precise than informal techniques.
Weaknesses: Not as precise as formal techniques; cannot handle timing.

Petri nets (Formal)
Strengths: Extremely precise; can reduce analysis faults; can reduce development cost and
effort; can support correctness proving.
Weaknesses: Hard for the development team to learn; hard to use; impossible for most
clients to understand.
Object Modelling using UML
Introduction:
In the late 1960s people were concentrating on procedure-oriented languages such as COBOL,
FORTRAN, PASCAL, etc. Later on they preferred object-oriented languages. In the mid-1990s
three scientists, Booch, Rumbaugh and Jacobson, created a new language named the Unified
Modeling Language. It encompasses the designing of the system/program. It is a de facto
standard language.
What is UML?
• Is a language. It is not simply a notation for drawing diagrams, but a complete language for
capturing knowledge (semantics) about a subject and expressing knowledge (syntax) regarding the
subject for
the purpose of communication.
• Applies to modeling and systems. Modeling involves a focus on understanding a subject (system)
and capturing and being able to communicate this knowledge.
• It is the result of unifying the information systems and technology industry's best
engineering practices (principles, techniques, methods and tools).
– visualizing
– specifying
– constructing
Visual modeling (visualizing)
• UML addresses the specification of all important analysis, design, and implementation decisions.
Constructing
• Round-trip engineering requires tool and human intervention to avoid information loss
To understand the UML, you need to form a conceptual model of the language, and this requires learning
three major elements.
Elements:
1. Basic building blocks
2. Rules
3. Common Mechanisms
Class Name
Attributes
Operations
2. Interface
A collection of operations that specify a service (for a resource or an action) of a class or component.
It describes the externally visible behavior of that element
Interface
3. Collaboration
– Define an interaction among two or more classes.
– Define a society of roles and other elements.
– Provide cooperative behavior.
– Capture structural and behavioral dimensions.
– UML uses "pattern" as a synonym (be careful). It is represented with a dashed
ellipse.
4. Use Case
– A sequence of actions that produce an observable result for a specific actor.
– A set of scenarios tied together by a common user goal.
– Provides a structure for behavioral things.
– Realized through a collaboration (usually realized by a set of actors and the system to be built).
Place order
Actor
5. Active Class
– Special class whose objects own one or more processes or threads.
– Can initiate control activity.
(Example: an active class box named Event Manager, with attributes Thread and Time, and
an operation suspend().)
6. Component
1.1.2. Behavioral Things
• These are the dynamic parts of UML models: "behavior over time and space".
• Usually connected to structural things in UML. There are two kinds of behavioral things:
1. Interaction
• Is a behavior of a set of objects, comprising a set of message exchanges within a particular context
to accomplish a specific purpose.
Display
2. State Machine
• Is a behavior that specifies the sequences of states an object or an interaction goes through during
its lifetime in response to events, together with its responses to those events.
(Example: a state machine with states Idle and Waiting.)
1.1.3. Grouping Things
1.1.4. Annotational Things
• These are the comments regarding other UML elements (usually called adornments in
UML). There is only one primary kind of annotational thing:
1. Note
A note is simply a symbol for rendering constraints and comments attached to an element or collection of
elements. Is best expressed in informal or formal text.
1.2. Relationships
There are four kinds of relationships:
1.2.1 Dependency
1.2.2. Association
1.2.3. Generalization
1.2.4. Realization
» These relationships tie things together.
» It is a semantic connection among elements.
» These relationships are the basic relational building blocks of the UML.
1.2.1. Dependency
Is a semantic relationship between two things in which a change to one thing (the independent thing) may affect
the semantics of the other thing (the dependent thing).
1.2.2. Association
Is a structural relationship that describes a set of links, a link being a connection among objects.
employer employee
0...1 *
Aggregation
» Is a special kind of association. It represents a structural relationship between the whole and its parts.
» Represented by black diamond.
1.2.3. Generalization
Is a specialization/generalization relationship in which objects of the specialized element (the child) are
more specific than the objects of the generalized element?
1.2.4. Realization
A semantic relationship between two elements, wherein one element guarantees to carry out what
is expected by the other element.
• Class Diagrams describe the static structure of a system, or how it is structured rather than how
it behaves.
• A class diagram shows the existence of classes and their relationships in the logical view of a
system. These diagrams contain the following elements:
– Classes and their structure and behavior
– Association, aggregation, dependency, and inheritance relationships
– Multiplicity and navigation indicators
– Role names
These diagrams are the most common diagrams found in O-O modeling systems.
Examples:
Registration
Student
Object Diagrams
• Object Diagrams describe the static structure of a system at a particular time. Whereas a class
model describes all possible situations, an object model describes a particular situation.
Object diagrams contain the following elements:
Objects which represent particular entities. These are instances of classes.
Links, which represent particular relationships between objects. These are instances of associations.
Course
Registrar Student
1.3.3. Sequence Diagrams
● Sequence Diagrams describe interactions among classes. These interactions are modeled as exchanges
of messages.
● These diagrams focus on classes and the messages they exchange to accomplish some desired behavior.
● Sequence diagrams are a type of interaction diagram.
Sequence diagrams contain the following elements:
Class roles: which represent roles that objects may play within the interaction.
Lifelines: which represent the existence of an object over a period of time.
Activations: which represent the time during which an object is performing an operation.
Messages: which represent communication between objects.
Activity diagrams describe the activities of a class. These diagrams are similar to state chart diagrams
and use similar conventions, but activity diagrams describe the behavior of a class in response to internal
processing rather than external events as in state chart diagram.
Swim lanes: which represent responsibilities of one or more objects for actions within an overall activity;
that is, they divide the activity states into groups and assign these groups to the objects that must perform
the activities.
Action states: which represent atomic, or non-interruptible, actions of entities or steps in the execution of
an algorithm.
Action flows: which represent relationships between the different action states of an entity.
Object flows: which represent the utilization of objects by action states and the influence of action states
on objects.
Data Flow Diagram
A Data Flow Diagram (DFD) is a graphical representation of the "flow" of data
through an information system.
It is common practice to draw a System Context Diagram first, which shows
the interaction between the system and outside entities.
The DFD is designed to show how a system is divided into smaller portions and to
highlight the flow of data between those parts.
The level 0 DFD, i.e. the context-level DFD, should depict the system as a single bubble.
Primary input and primary output should be carefully identified.
Information flow continuity must be maintained from level to level.
Four basic symbols
Symbol and notation:
External Entity: External entities are objects outside the system, with which the
system communicates. External entities are sources and destinations of the
system's inputs and outputs.
Process: A process transforms incoming data flow into outgoing data flow.
2. The verb phrases in the problem description can be identified as processes in the system.
3. There should not be a direct flow between data stores and external entities. This
flow should go through a process.
4. Data store labels should be noun phrases from the problem description.
5. No data should move directly between external entities. The data flow
should go through a process.
6. Generally, source and sink labels are noun phrases.
Step 8: Sizing
Computing numerical data to determine hardware requirements
Volume of input (daily or hourly)
Frequency of each printed report and its deadline
Size and number of records of each type to pass between CPU and mass
storage
Size of each file
After approval by the client, the specification document is handed to the design team, and
the software process continues.
UNIT- III SOFTWARE DESIGN
1.1. INTRODUCTION
Software design encompasses the set of principles, concepts, and practices that
lead to the development of a high-quality system or product.
Design creates representation or model of the software. Design model provides
detail about software architecture, data structure, interfaces and components that
are necessary to implement the system.
Software design sits at the technical kernel of software engineering and is applied
regardless of the software process model that is used.
Beginning once software requirements have been analyzed and modeled, software
design is the last software engineering action within the modeling activity and sets
the stage for construction (code generation and testing).
2) Architectural design
The architectural design defines the relationship between major structural
elements of the software, the architectural styles and design patterns, and the
constraints that affect the way in which the architecture can be implemented.
3) Interface design
The interface design describes how the software communicates with systems that
interoperate with it, and with humans who use it. An interface implies a flow
of information and a specific type of behavior. Therefore, usage scenarios and
behavioral models provide much of the information required for interface design.
4) Component-level design
The component-level design transforms structural elements of the software
architecture into a procedural description of software components. Information
obtained from the class-based models, flow models, and behavioral models serve
as the basis for component design.
1. The design must implement all of the explicit requirements contained in the
requirements model, and it must accommodate all of the implicit requirements
desired by stakeholders.
2. The design must be a readable, understandable guide for those who
generate code and for those who test and subsequently support the software.
3. The design should provide a complete picture of the software, addressing
the data, functional, and behavioral domains from an implementation
perspective.
Quality Guidelines
1. A design should exhibit an architecture that
a. Has been created using recognizable architectural styles or patterns,
b. Is composed of components that exhibit good design characteristics
c. Can be implemented in an evolutionary fashion, thereby facilitating
implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned
into elements or subsystems.
3. A design should contain distinct representations of data, architecture, interfaces,
and components.
4. A design should lead to data structures that are appropriate for the classes to
be implemented and are drawn from recognizable data patterns.
5. A design should lead to components that exhibit independent
functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections
between components and with the external environment.
7. A design should be derived using a repeatable method that is driven by
information obtained during software requirements analysis.
8. A design should be represented using a notation that effectively communicates
its meaning.
Quality Attributes.
A set of software quality attributes has been given the acronym
FURPS: functionality, usability, reliability, performance, and supportability.
The FURPS quality attributes represent a target for all software design:
Functionality is assessed by evaluating the feature set and capabilities of the
program, the generality of the functions that are delivered, and the security of the
overall system.
Usability is assessed by considering human factors, overall aesthetics, consistency,
and documentation.
Not every software quality attribute is weighted equally as the software design
is developed.
One application may stress functionality with a special emphasis on security.
Another may demand performance with particular emphasis on processing speed.
A third might focus on reliability.
Regardless of the weighting, it is important to note that these quality attributes
must be considered as design commences, not after the design is complete and
construction has begun.
1) Abstraction
“Abstraction permits one to concentrate on a problem at some level of abstraction
without regard to low-level details.”
• Procedural Abstraction
– Sequence of instructions that have a specific and limited function.
– Instructions are given in a named sequence
– Each instruction has a limited function
– The name of a procedural abstraction implies these functions, but specific
details are suppressed.
– An example of a procedural abstraction would be the word open for a door.
Open implies a long sequence of procedural steps (e.g., walk to the door,
reach out and grasp the knob, turn the knob and pull the door, step away
from the moving door, etc.).
• Data Abstraction
– This is a named collection of data that describes a data object.
– Data abstraction includes a set of attributes that describe an object.
– The data abstraction for door would encompass a set of attributes that
describe the door (e.g., door type, swing direction, opening mechanism,
weight, dimensions). It follows that the procedural abstraction open would
make use of information contained in the attributes of the data abstraction
door.
• Control Abstraction
– A program control mechanism without specifying internal details, e.g., a
semaphore or rendezvous.
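A minimal Java sketch of the door example above (class and method names are illustrative, not from the notes): the Door class is the data abstraction, and open() is the procedural abstraction whose internal steps are suppressed behind its name.

public class Door {
    // Attributes of the data abstraction
    private String type;            // e.g., "panel", "sliding"
    private String swingDirection;
    private String openingMechanism;
    private double weightKg;

    // Procedural abstraction: the name "open" implies a sequence of
    // steps that callers never see.
    public void open() {
        graspKnob();
        turnKnob();
        pullDoor();
    }

    private void graspKnob() { /* detail suppressed */ }
    private void turnKnob()  { /* detail suppressed */ }
    private void pullDoor()  { /* detail suppressed */ }
}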
2) Architecture
Architecture is the structure or organization of program components (modules), the
manner in which these components interact, and the structure of data that are used by the
components. Components can be generalized to represent major system elements and
their interactions.
Kinds of Models
1) Structural models: represent architecture as an organized collection of components.
2) Framework models: increase the level of design abstraction by identifying
repeatable architectural design frameworks (patterns)
• Horizontal Partitioning
– Easier to test
– Easier to maintain (questionable)
– Propagation of fewer side effects (questionable)
– Easier to add new features
F1 (Ex: Input)  F2 (Process)  F3 (Output)
• Vertical Partitioning
– Control and work modules are distributed top down
– Top level modules perform control functions
– Lower modules perform computations
• Less susceptible to side effects
• Also very maintainable
3) Pattern
A design pattern describes a design structure that solves a particular design
problem within a specific context and amid “forces” that may have an impact on
the manner in which the pattern is applied and used.
The intent of each design pattern is to provide a description that enables a designer to determine
(1) Whether the pattern is applicable to the current work,
(2) Whether the pattern can be reused (hence saving design time), and
(3) Whether the pattern can serve as a guide for developing a similar, but functionally or
structurally different, pattern.
4) Separation of Concerns
Separation of concerns is a design concept that suggests that any complex problem
can be more easily handled if it is subdivided into pieces that can each be solved
and/or optimized independently.
A concern is a feature or behavior that is specified as part of the requirements
model for the software.
By separating concerns into smaller, and therefore more manageable pieces, a
problem takes less effort and time to solve.
For two problems, p1 and p2, if the perceived complexity of p1 is greater than the
perceived complexity of p2, it follows that the effort required to solve p1 is greater
than the effort required to solve p2. As a general case, this result is intuitively
obvious. It does take more time to solve a difficult problem.
It also follows that the perceived complexity of two problems when they are
combined is often greater than the sum of the perceived complexity when each is
taken separately. This leads to a divide-and-conquer strategy
5) Modularity
Software is divided into separately named and addressable components called
modules that are integrated to satisfy problem requirements.
• Follows “divide and conquer” concept, a complex problem is broken down into
several manageable pieces
• Let p1 and p2 be two program parts, and E the effort to solve the
problem. Then, E(p1 + p2) > E(p1) + E(p2), often much greater (>>)
• A need to divide software into optimally sized modules.
• Monolithic software (i.e., a large program composed of a single module) cannot be
easily grasped by a software engineer. The number of control paths, span of
reference, number of variables, and overall complexity would make understanding
more difficult.
Modularity & Software Cost
• Modular Composability
– Enable reuse of existing components to be assembled into a new system
• Modular Understandability
– Can the module be understood as a stand-alone unit? Then it is easier to
understand and change.
• Modular Continuity
– If small changes to the system requirements result in changes to
individual modules, rather than system-wide changes, the impact of side
effects is reduced
• Modular Protection
– If there is an error in the module, those errors are localized and not
spread to other modules
6) Information Hiding
• Modules are characterized by design decisions that are hidden from others.
Modules should be specified and designed so that information (algorithms and
data) contained within a module is inaccessible to other modules that have no
need for such information.
• Modules communicate only through well-defined interfaces
• Enforce access constraints to local entities and those visible through interfaces
• Very important for accommodating change and reducing coupling.
• Abstraction helps to define the procedural (or informational) entities that make up
the software.
• Hiding defines and enforces access constraints to both procedural detail within
a module and any local data structure used by the module
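A minimal Java sketch of information hiding (the stack example is illustrative, not from the notes): the internal representation is private, and other modules communicate with it only through the well-defined push/pop interface.

public class IntStack {
    private int[] items = new int[100];  // hidden local data structure
    private int top = -1;                // hidden implementation detail

    // The interface exposes behavior, not representation; the array
    // could be replaced by a linked list without touching any caller.
    public void push(int value) {
        if (top == items.length - 1) throw new IllegalStateException("full");
        items[++top] = value;
    }

    public int pop() {
        if (top < 0) throw new IllegalStateException("empty");
        return items[top--];
    }
}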
7) Functional Independence
• Functional independence is achieved by developing modules with
“single-minded” function and an “aversion” to excessive interaction with other
modules.
• Each module addresses a specific subset of requirements and has a simple
interface when viewed from other parts of the program structure.
• Critical in dividing the system into independently implementable parts
• Measured by two qualitative criteria: cohesion and coupling
• Coincidental Cohesion
- The parts of a component are not related but are simply bundled
into a single component.
- Harder to understand and not reusable.
• Logical Cohesion
- Similar functions, such as input, error handling, etc., are put together;
the functions fall in the same logical class. A flag may be passed to
determine which one is executed.
- The interface is difficult to understand. Code for more than one
function may be intertwined, leading to severe maintenance
problems.
- Difficult to reuse.
• Temporal Cohesion
- All statements activated at a single time, such as start-up
or shut-down, are brought together (e.g., initialization, clean-up).
- The functions are weakly related to one another but more strongly
related to functions in other modules, so maintenance may require
changing many modules.
• Procedural Cohesion
- A single control sequence, e.g., a loop or a sequence of decision
statements. Often cuts across functional lines; may contain only part
of a complete function, or parts of several functions.
- The functions are still weakly connected, and again unlikely to
be reusable in another product.
• Communicational Cohesion
- The parts operate on the same input data or produce the same output
data, and may be performing more than one function. Generally
acceptable if alternate structures with higher cohesion cannot be
easily identified.
- Still presents problems for reusability.
• Sequential Cohesion
- Output from one part serves as input for another part. May
contain several functions or parts of different functions.
• Informational Cohesion
- Performs a number of functions, each with its own entry point and
independent code, all performed on the same data structure. Differs
from logical cohesion because the functions are not intertwined.
• Functional Cohesion
- Each part is necessary for the execution of a single function,
e.g., compute square root or sort the array.
- Usually reusable in other contexts; maintenance is easier.
• Type Cohesion
- Modules that support a data abstraction.
- The scale is not strictly linear: functional cohesion is much stronger
than the rest, while the first two levels are much weaker than the
others. Several levels may apply when considering two elements of a
module; the cohesion of a module is taken as the highest level of
cohesion that is applicable to all of its elements.
• Data coupling
– Occurs when one module passes local data values to another as parameters
• Stamp coupling
– Occurs when part of a data structure (or the entire structure) is passed to
another module as a parameter, even though the receiving module needs
only some of its fields.
– More desirable than common coupling, because fewer modules will have
to be modified if the shared data structure is modified; related mechanisms
(e.g., packages in Ada) share global data selectively among only the
routines that require it.
• Control Coupling
– Occurs when control parameters are passed between modules, so that
one module controls the sequence of processing steps in another module
• Common Coupling
– Occurs when multiple modules access common data areas such as Fortran
Common or C extern
• Content Coupling
– Occurs when one module directly references the contents of another:
– when one module modifies local data values or instructions in another module,
– when one refers to local data in another module, or
– when one branches into a local label of another.
• Subclass Coupling
– The coupling between a class and its parent class
Fig. Examples of cohesion and coupling
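To make the two ends of this scale concrete, here is a small, hypothetical Java sketch contrasting data coupling with common coupling (all names are illustrative):

// Data coupling: only the values that are needed are passed as parameters.
class Payroll {
    double netPay(double gross, double taxRate) {
        return gross * (1.0 - taxRate);
    }
}

// Common coupling: modules communicate through a shared global data
// area (like Fortran COMMON or C extern), so a change to the globals
// can affect every module that touches them.
class Globals {
    static double gross;
    static double taxRate;
}

class PayrollCommon {
    double netPay() {
        return Globals.gross * (1.0 - Globals.taxRate);
    }
}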
8) Refinement
• Refinement is actually a process of elaboration.
• Refinement is a process where one or several instructions of the program are
decomposed into more detailed instructions.
• Begin with a statement of function (or description of information) that is defined
at a high level of abstraction and then elaborate on the original statement,
providing more and more detail as each successive refinement (elaboration)
occurs.
• Refinement helps to reveal low-level details as design progresses.
• Stepwise refinement is a top-down strategy
– The basic architecture is developed iteratively
– A stepwise hierarchy is developed
9) Aspects
An aspect is a representation of a crosscutting concern.
For example, consider a generic security requirement that states that a registered
user must be validated prior to using an application. This requirement is applicable
to all functions that are available to registered users of the system.
The design representation of the requirement "a registered user must be validated
prior to using the system" is an aspect of the system.
An aspect is implemented as a separate module (component) rather than as
software fragments that are “scattered” or “tangled” throughout many components.
The design architecture should support a mechanism for defining an aspect—a
module that enables the concern to be implemented across all other concerns that it
crosscuts.
10) Refactoring
"Refactoring is the process of changing a software system in such a way that it
does not alter theexternal behavior of the code [design] yet improves its internal
structure.”
Refactoring is a reorganization technique that simplifies the design (or code) of a
component
without changing its function or behavior.
• When software is refactored, the existing design is examined for
– Redundancy
– Unused design elements
– Inefficient or unnecessary algorithms
– Poorly constructed or inappropriate data structures, or any other design
failure that can be corrected to yield a better design.
1.5 Model-View-Controller
The Model-View-Controller (MVC) framework is an architectural/design pattern that
separates an application into three main logical components Model, View, and Controller.
Each architectural component is built to handle specific development aspects of an application.
It isolates the business logic and presentation layer from each other. It was traditionally used
for desktop graphical user interfaces (GUIs). Nowadays, MVC is one of the most frequently
used industry-standard web development frameworks to create scalable and extensible projects.
It is also used for designing mobile apps.
MVC was created by Trygve Reenskaug. The main goal of this design pattern was to solve
the problem of users controlling a large and complex data set by splitting a large application
into specific sections that all have their own purpose.
Features of MVC :
It provides a clear separation of business logic, UI logic, and input logic.
It offers full control over your HTML and URLs which makes it easy to design web
application architecture.
It is a powerful URL-mapping component with which we can build applications that have
comprehensible and searchable URLs.
It supports Test Driven Development (TDD).
Components of MVC :
The MVC framework includes the following 3 components:
Controller
Model
View
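The interplay of the three components can be shown with a minimal, hypothetical Java sketch (the class names and the counter example are illustrative, not a prescribed implementation):

// Model: holds the data and business logic.
class Model {
    private int counter;
    int getCounter() { return counter; }
    void increment() { counter++; }
}

// View: renders the model; it contains no business logic.
class View {
    void render(Model model) {
        System.out.println("Counter = " + model.getCounter());
    }
}

// Controller: translates user input into model updates and view refreshes.
class Controller {
    private final Model model;
    private final View view;
    Controller(Model model, View view) { this.model = model; this.view = view; }

    void onIncrementClicked() {
        model.increment();
        view.render(model);
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        new Controller(new Model(), new View()).onIncrementClicked(); // Counter = 1
    }
}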
Advantages of MVC:
Codes are easy to maintain and they can be extended easily.
The MVC model component can be tested separately.
The components of MVC can be developed simultaneously.
It reduces complexity by dividing an application into three units: Model, View, and
Controller.
It supports Test Driven Development (TDD).
It works well for Web apps that are supported by large teams of web designers and
developers.
This architecture helps to test components independently as all classes and objects are
independent of each other
Search Engine Optimization (SEO) Friendly.
Disadvantages of MVC:
The model can be difficult to read, change, test, and reuse.
It is not suitable for building small applications.
Data access in the view can be inefficient.
Framework navigation can be complex, as it introduces new layers of abstraction and
requires users to adapt to the decomposition criteria of MVC.
Increased complexity and inefficiency of data access.
4. Subscriber: An entity that receives messages from topics based on a subscription. We can
understand it by analogy with OTT platforms: an OTT platform lets you stream its services
only if you have a subscription, and a subscription here works similarly.
5. Acknowledgment: A message that subscribers send after they receive a message from the
topic.
There are two delivery methods, push and pull: either the topic pushes the message to the
subscriber, or the subscriber pulls the message from the topic.
The below diagram represents the architecture of pub-sub.
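Since the diagram is not reproduced here, a minimal push-style sketch in Java may help (all names are hypothetical; a real broker would add multiple topics, persistence, and acknowledgments):

import java.util.ArrayList;
import java.util.List;

interface Subscriber {
    void receive(String message);   // the subscriber would acknowledge after this
}

class Topic {
    private final List<Subscriber> subscribers = new ArrayList<>();

    void subscribe(Subscriber s) { subscribers.add(s); }

    // Push delivery: the topic forwards each published message to all subscribers.
    void publish(String message) {
        for (Subscriber s : subscribers) s.receive(message);
    }
}

public class PubSubDemo {
    public static void main(String[] args) {
        Topic news = new Topic();
        news.subscribe(m -> System.out.println("subscriber A got: " + m));
        news.subscribe(m -> System.out.println("subscriber B got: " + m));
        news.publish("new episode released");
    }
}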
1.7 Adapter
This pattern is easy to understand, as the real world is full of adapters. For example,
consider a USB-to-Ethernet adapter. We need this when we have an Ethernet interface on one
end and USB on the other; since they are incompatible with each other, we use an adapter that
converts one to the other. This example is closely analogous to object-oriented adapters. In design,
adapters are used when we have a class (Client) expecting some type of object and we have an
object (Adaptee) offering the same features but exposing a different interface.
To use an adapter:
1. The client makes a request to the adapter by calling a method on it using the target interface.
2. The adapter translates that request into calls on the adaptee using the adaptee interface.
3. The client receives the results of the call and is unaware of the adapter’s presence.
Definition: The adapter pattern converts the interface of a class into another interface that clients
expect. Adapter lets classes work together that couldn’t otherwise because of incompatible
interfaces. Class Diagram:
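In place of the class diagram, a minimal Java sketch of the USB-to-Ethernet example (all names are illustrative):

// Target interface the client expects.
interface UsbPort {
    void sendOverUsb(byte[] data);
}

// Adaptee offering the same feature through an incompatible interface.
class EthernetCard {
    void transmitFrame(byte[] frame) {
        System.out.println("transmitting " + frame.length + " bytes");
    }
}

// Adapter: implements the target interface and translates each request
// into a call on the adaptee.
class UsbToEthernetAdapter implements UsbPort {
    private final EthernetCard adaptee;
    UsbToEthernetAdapter(EthernetCard adaptee) { this.adaptee = adaptee; }

    public void sendOverUsb(byte[] data) {
        adaptee.transmitFrame(data);
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        UsbPort port = new UsbToEthernetAdapter(new EthernetCard());
        port.sendOverUsb(new byte[]{1, 2, 3});  // the client never sees the adaptee
    }
}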
1.8 Command
Disadvantages:
Increase in the number of classes for each individual command
1.9 Strategy
Advantages:
1. A family of algorithms can be defined as a class hierarchy and can be used interchangeably
to alter application behavior without changing its architecture.
2. By encapsulating the algorithm separately, new algorithms complying with the same
interface can be easily introduced.
3. The application can switch strategies at run-time.
4. Strategy enables the clients to choose the required algorithm, without using a “switch”
statement or a series of “if-else” statements.
5. Data structures used for implementing the algorithm are completely encapsulated in
Strategy classes. Therefore, the implementation of an algorithm can be changed without
affecting the Context class.
Disadvantages:
1. The application must be aware of all the strategies to select the right one for the right
situation.
2. The Context and the Strategy classes normally communicate through the interface specified
by the abstract Strategy base class. The Strategy base class must expose an interface for all
the required behaviours, which some concrete Strategy classes might not implement.
3. In most cases, the application configures the Context with the required Strategy object.
Therefore, the application needs to create and maintain two objects in place of one.
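A minimal Java sketch of this structure (the sorting strategies are an illustrative choice, not from the notes):

import java.util.Arrays;

interface SortStrategy {
    void sort(int[] data);
}

class JdkSort implements SortStrategy {
    public void sort(int[] data) { Arrays.sort(data); }
}

class BubbleSort implements SortStrategy {
    public void sort(int[] data) {
        for (int i = 0; i < data.length; i++)
            for (int j = 0; j + 1 < data.length - i; j++)
                if (data[j] > data[j + 1]) {
                    int t = data[j]; data[j] = data[j + 1]; data[j + 1] = t;
                }
    }
}

// Context: configured with a strategy and unaware of its implementation.
class SortContext {
    private SortStrategy strategy;
    void setStrategy(SortStrategy s) { strategy = s; }  // switchable at run time
    void sort(int[] data) { strategy.sort(data); }
}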
1.10 Observer
Definition:
The Observer Pattern defines a one-to-many dependency between objects so that when one
object changes state, all of its dependents are notified and updated automatically.
Explanation:
The one-to-many dependency is between the Subject (one) and the Observers (many).
There is a dependency because the Observers themselves don’t have access to data; they
depend on the Subject to provide it.
Class diagram:
Advantages:
Provides a loosely coupled design between objects that interact. Loosely coupled objects are flexible
with changing requirements. Here loose coupling means that the interacting objects should have
less information about each other. Observer pattern provides this loose coupling as:
The Subject only knows that observers implement the Observer interface. Nothing more.
There is no need to modify Subject to add or remove observers.
We can reuse subject and observer classes independently of each other.
Disadvantages:
Memory leaks caused by the lapsed listener problem, because observers must be explicitly
registered and unregistered.
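A minimal Java sketch of the Subject/Observer relationship (all names are illustrative); note the explicit detach(), which is what the lapsed listener problem is about:

import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update(int newState);
}

class Subject {
    private final List<Observer> observers = new ArrayList<>();

    void attach(Observer o) { observers.add(o); }
    void detach(Observer o) { observers.remove(o); }  // forgetting this leaks memory

    // On every state change, all dependents are notified automatically.
    void setState(int state) {
        for (Observer o : observers) o.update(state);
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.attach(s -> System.out.println("display A: " + s));
        subject.attach(s -> System.out.println("display B: " + s));
        subject.setState(42);   // both observers are notified
    }
}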
1.11 Proxy
‘In place of’, ‘representing’, or ‘on behalf of’ are literal meanings of proxy, and that
directly explains the Proxy Design Pattern.
Proxies are also called surrogates, handles, and wrappers. They are closely related in
structure, but not purpose, to Adapters and Decorators.
A real-world example is a cheque or credit card: it is a proxy for what is in our bank
account. It can be used in place of cash and provides a means of accessing that cash
when required. And that’s exactly what the Proxy pattern does: it controls and manages
access to the object it protects.
Benefits:
One of the advantages of the Proxy pattern is security.
This pattern avoids duplication of objects that might be huge in size and memory intensive,
which in turn increases the performance of the application.
The remote proxy also ensures security by installing the local code proxy (stub) on the
client machine, which then accesses the server with the help of the remote code.
Drawbacks/Consequences:
This pattern introduces another layer of abstraction, which may be an issue if some clients
access the RealSubject code directly while others access the Proxy classes; this can cause
disparate behaviour.
Interesting points:
There are a few differences between the related patterns: the Adapter pattern gives a
different interface to its subject, the Proxy pattern provides the same interface as the
original object, and the Decorator provides an enhanced interface, adding additional
behaviour at runtime.
Proxy used in Java API: java.rmi.*;
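A minimal Java sketch of a protection proxy, one common variant of the pattern (the account example and authorization check are illustrative):

interface BankAccount {
    void withdraw(double amount);
}

class RealAccount implements BankAccount {
    public void withdraw(double amount) {
        System.out.println("withdrew " + amount);
    }
}

// The proxy exposes the same interface as the real subject and controls
// access to it before delegating.
class AccountProxy implements BankAccount {
    private final RealAccount real = new RealAccount();
    private final boolean authorized;

    AccountProxy(boolean authorized) { this.authorized = authorized; }

    public void withdraw(double amount) {
        if (!authorized) throw new SecurityException("access denied");
        real.withdraw(amount);
    }
}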
1.12 Facade
The Facade Method Design Pattern is a part of the Gang of Four design patterns and is
categorized under structural design patterns. Before we dive deep into the details, imagine
a building: the facade is the outer wall that people see, but behind it is a complex network of
wires, pipes, and other systems that make the building function. The facade pattern is like that
outer wall. It hides the complexity of the underlying system and provides a simple interface
that clients can use to interact with the system.
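A minimal Java sketch of that idea (the home-theater subsystems are an illustrative assumption, not from the notes):

class Lights    { void dim()  { System.out.println("lights dimmed"); } }
class Screen    { void drop() { System.out.println("screen lowered"); } }
class Projector { void on()   { System.out.println("projector on"); } }

// The facade hides the wiring of several subsystems behind one simple call.
class HomeTheaterFacade {
    private final Lights lights = new Lights();
    private final Screen screen = new Screen();
    private final Projector projector = new Projector();

    void watchMovie() {     // one call replaces three subsystem interactions
        lights.dim();
        screen.drop();
        projector.on();
    }
}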
(1) The scope of a pattern is less broad, focusing on one aspect of the architecture
rather than the architecture in its entirety;
(2) A pattern imposes a rule on the architecture, describing how the software will
handle some aspect of its functionality at the infrastructure level (e.g., concurrency);
(3) Architectural patterns tend to address specific behavioral issues within the context of
the architecture (e.g., how real-time applications handle synchronization or interrupts).
Patterns can be used in conjunction with an architectural style to shape the overall
structure of a system.
Client-Server style
The client-server model is a distributed application structure that partitions tasks or workloads
between the providers of a resource or service, called servers, and service requesters, called
clients. In the client-server architecture, when the client computer sends a request for data to the
server through the internet, the server accepts the request, processes it, and delivers the requested
data packets back to the client. Clients do not share any of their resources. Examples of the
client-server model are email, the World Wide Web, etc.
How does the Client-Server Model work?
Client: When we talk of a Client, we mean a person or an organization using a
particular service. Similarly, in the digital world a Client is a computer (host) capable of
receiving information or using a particular service from the service providers (servers).
Server: Similarly, when we talk of a Server, we mean a person or medium that
serves something. In the digital world a Server is a remote computer that
provides information (data) or access to particular services.
So, it is basically the Client requesting something and the Server serving it, as long as it is
present in the database.
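A minimal Java sketch of one request/response cycle using sockets (the port number and the echo protocol are illustrative assumptions; a client would connect with new Socket("localhost", 5000) and exchange lines):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            Socket client = server.accept();              // wait for a client request
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
            out.println("echo: " + in.readLine());        // serve the response
            client.close();
        }
    }
}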
Tiered Architecture
In a tiered architecture, there is another layer between the client and the server. The client does
not directly communicate with the server. Instead, it interacts with an application server which
further communicates with the database system and then the query processing and transaction
management takes place. This intermediate layer acts as a medium for the exchange of partially
processed data between the server and the client. This type of architecture is used in the case of
large web applications.
Pipe and Filter style
This software architecture pattern decomposes a task that performs complex processing into a
series of separate elements that can be reused, where processing is executed sequentially, step by
step.
There are four main components:
1. Data Source: The original, unprocessed data
2. Data Sink: The final processed data
3. Filter: Components that perform processing
4. Pipe: Components that pass data from a data source to a filter, or from a filter to another
filter, or from a filter to a data sink
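A minimal Java sketch of these four components (the string filters are illustrative; the sequential stream hand-offs play the role of the pipes):

import java.util.List;
import java.util.function.Function;

public class PipeAndFilterDemo {
    public static void main(String[] args) {
        Function<String, String> trim = String::trim;          // filter 1
        Function<String, String> upper = String::toUpperCase;  // filter 2

        List<String> source = List.of("  alpha ", " beta");    // data source
        source.stream()
              .map(trim)                        // pipe into filter 1
              .map(upper)                       // pipe into filter 2
              .forEach(System.out::println);    // data sink
    }
}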
Disadvantages
Inefficient and inconvenient to pass the full set of data through the entire pipe-and-filter
system, because not every component will require the full set of data
Reliability may be an issue if data is lost on the way between components
Having too many filters can slow down the application, introducing bottlenecks or
deadlocks if one particular filter processes slowly or fails
Real-World Example
In Unix, pipes and filters are used for chaining two or more commands so that the output of one
command becomes the input of the next command (e.g., cat access.log | grep "error" | sort | uniq).
3.10. USER INTERFACE DESIGN
User interface design creates an effective communication medium between a human and a computer.
1. GOLDEN RULES:
1) Place the user in control.
2) Reduce the user’s memory load.
3) Make the interface consistent.
These golden rules actually form the basis for a set of user interface design principles that guide
this important aspect of software design.
2) Reduce the User’s Memory Load:
The more a user has to remember, the more error-prone the interaction with the system will be. It
is for this reason that a well-designed user interface does not tax the user’s memory. Whenever
possible, the system should “remember” pertinent information and assist the user with an
interaction scenario that assists recall.
Design principles that enable an interface to reduce the user’s memory load are
1. Relieve short-term memory (remember)
2. Rely on recognition, not recall (recognition)
3. Provide visual cues (inform)
4. Provide defaults, undo, and redo (forgiving)
5. Provide interface shortcuts (frequency)
6. Promote an object-action syntax (intuitive)
7. Use real-world metaphors (transfer)
8. Use progressive disclosure (context)
9. Promote visual clarity (organize)
3) Make the Interface Consistent:
The interface should present and acquire information in a consistent fashion. This implies that
(1) All visual information is organized according to design rules that are maintained
throughout all screen displays,
(2) Input mechanisms are constrained to a limited set that is used consistently
throughout the application, and
(3) Mechanisms for navigating from task to task are consistently defined
and implemented.
A set of design principles that help make the interface consistent:
1. Sustain the context of users’ tasks (continuity)
2. Maintain consistency within and across products (experience)
3. Keep interaction results the same (expectations)
4. Provide aesthetic appeal and integrity (attitude)
5. Encourage exploration (predictable)
ii) Design model: The software engineer creates a design model, derived from the analysis
model of the requirements. It incorporates data, architectural, interface, and procedural
representations of the software.
iii) Mental model: The end user develops a mental image, often called the user’s system
perception. It consists of the image of the system that users carry in their heads.
Knowledgeable, frequent users: Good semantic and syntactic knowledge that often
leads to the “power-user syndrome”; that is, individuals who look for shortcuts and abbreviated
modes of interaction.
2) The Process:
The analysis and design process for user interfaces is iterative and can be represented using a
spiral model.
(1) Interface analysis focuses on the profile of the users who will interact with the system.
Skill level, business understanding, and general receptiveness to the new system are
recorded, and different user categories are defined.
For each user category, requirements are elicited. In essence, understand the system
perception for each class of users.
Once general requirements have been defined, a more detailed task analysis is conducted.
Those tasks that the user performs to accomplish the goals of the system are identified,
described, and elaborated over a number of iterative passes through the spiral.
Finally, analysis of the user environment focuses on the physical work environment.
Among the questions to be asked are
Where will the interface be located physically?
Will the user be sitting, standing, or performing other tasks unrelated to the interface?
Does the interface hardware accommodate space, light, or noise constraints?
Are there special human factors considerations driven by environmental factors?
The information gathered as part of the analysis action is used to create an analysis model
for the interface. Using this model as a basis, the design action commences.
(2) The goal of interface design is to define a set of interface objects, actions and their screen
representations that enable a user to perform all defined tasks in a manner that meets
every usability goal defined for the system.
(3) Interface construction normally begins with the creation of a prototype that enables
usage scenarios to be evaluated. As the iterative design process continues, a user interface
tool kit may be used to complete the construction of the interface.
Understand the problem before you attempt to design a solution. In the case of user interface
design, understanding the problem means understanding
(1) The people (end users) who will interact with the system through the interface
(2) The tasks that end users must perform to do their work
(3) The content that is presented as part of the interface
(4) The environment in which these tasks will be conducted.
1. User Analysis:
The phrase “user interface” is probably all the justification needed to spend some time
understanding the user before worrying about technical matters.
Information from a broad array of sources can be used.
User Interviews.
The most direct approach: members of the software team meet with end users to better
understand their needs, motivations, work culture, and a myriad of other issues. This can
be accomplished in one-on-one meetings or through focus groups.
Sales input.
Sales people meet with users on a regular basis and can gather information that will help
the software team to categorize users and better understand their requirements.
Marketing input.
Market analysis can be invaluable in the definition of market segments and an
understanding of how each segment might use the software in subtly different ways.
Support input.
Support staff talks with users on a daily basis. They are the most likely source of
information on what works and what doesn’t, what users like and what they dislike, what
features generate questions and what features are easy to use.
The following set of questions will help you to better understand the users of a system:
Are users trained professionals, technicians, clerical, or manufacturing workers?
What level of formal education does the average user have?
Are the users capable of learning from written materials, or have they expressed a
desire for classroom training?
Are users expert typists or keyboard phobic?
What is the age range of the user community?
Will the users be represented predominately by one gender?
How are users compensated for the work they perform?
Do users work normal office hours or do they work until the job is done?
Is the software to be an integral part of the work users do or will it be used only occasionally?
What is the primary spoken language among users?
What are the consequences if a user makes a mistake using the system?
Are users experts in the subject matter that is addressed by the system?
Do users want to know about the technology that sits behind the interface?
Use cases.
The use case describes the manner in which an actor interacts with a system. When used
as part of task analysis, the use case is developed to show how an end user performs some
specific work-related task.
In most instances, the use case is written in an informal style (a simple paragraph) in the
first person.
The use case provides a basic description of one important work task for the computer-aided
design system. From it, you can extract tasks, objects, and the overall flow of the
interaction.
Task elaboration.
Elaboration is a mechanism for refining the processing tasks that are required for software
to accomplish some desired function.
Task analysis for interface design uses an elaborative approach to assist in understanding
the human activities the user interface must accommodate.
Task analysis can be applied in two ways.
i) An interactive computer-based system is often used to replace a manual or semi-manual
activity. To understand the tasks that must be performed to accomplish the goal of the
activity, you must understand the tasks that people currently perform and then map
these into a similar set of tasks that are implemented in the context of the user
interface.
ii) Study an existing specification for a computer-based solution and derive a set of user
tasks that will accommodate the user model, the design model, and the system
perception.
Regardless of the overall approach to task analysis, first define and classify tasks.
Example :
By observing an interior designer at work, we see that interior design comprises a number
of major activities: furniture layout, fabric and material selection, wall and window
coverings selection, presentation (to the customer), costing, and shopping. Each of these
major tasks can be elaborated into subtasks.
Using information contained in the use case, furniture layout can be refined into the following tasks:
(1) Draw a floor plan based on room dimensions,
(2) Place windows and doors at appropriate locations,
(3a) Use furniture templates to draw scaled furniture outlines on the floor
plan,
(3b) Use accent templates to draw scaled accents on the floor plan,
(4) Move furniture outlines and accent outlines to get the best placement,
(5) Label all furniture and accent outlines,
(6) Draw dimensions to show location, and
(7) Draw a perspective-rendering view for the customer.
Object elaboration.
Rather than focusing on the tasks that a user must perform, examine the use case and
other information obtained from the user and extract the physical objects that are used by
the interior designer.
These objects can be categorized into classes.
Attributes of each class are defined, and an evaluation of the actions applied to each
object provides a list of operations.
For example, the furniture template might translate into a class called Furniture with
attributes that might include size, shape, location, and others.
The interior designer would select the object from the Furniture class, move it to a
position on the floor plan (another object in this context), draw the furniture outline, and
so forth.
The tasks select, move, and draw are operations. The user interface analysis model would
not provide a literal implementation for each of these operations. However, as the design
is elaborated, the details of each operation are defined.
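A minimal Java sketch of the Furniture class suggested above (attribute types and operation bodies are placeholders):

public class Furniture {
    private double width, depth;    // size
    private String shape;
    private double x, y;            // location on the floor plan

    // Operations extracted from the user tasks:
    public void select() { /* highlight this template */ }
    public void move(double newX, double newY) { x = newX; y = newY; }
    public void draw()   { /* render a scaled outline on the floor plan */ }
}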
Workflow analysis.
When a number of different users, each playing different roles, make use of a user
interface, it is sometimes necessary to go beyond task analysis and object elaboration and
apply workflow analysis. This technique allows you to understand how a work process is
completed when several people (and roles) are involved.
Consider a company that intends to fully automate the process of prescribing and
delivering prescription drugs. The entire process will revolve around a Web-based
application that is accessible by physicians (or their assistants), pharmacists, and patients.
Workflow can be represented effectively with a UML swimlane diagram (a variation
on the activity diagram).
We consider only a small part of the work process: the situation that occurs when a
patient asks for a refill.
The swimlane diagram indicates the tasks and decisions for each of the three roles noted
earlier. This information may have been elicited via interviews or from use cases written by
each actor.
Regardless, the flow of events enables you to recognize a number of key
interface characteristics:
Hierarchical representation.
A process of elaboration occurs as you begin to analyze the interface. Once workflow
has been established, a task hierarchy can be defined for each user type.
The hierarchy is derived by a stepwise elaboration of each task identified for the
user. For example, consider the following user task and subtask hierarchy.
During this interface analysis step, the format and aesthetics of the content are
considered. Among the questions that are asked and answered are:
Are different types of data assigned to consistent geographic locations on the screen (e.g.,
photos always appear in the upper right-hand corner)?
Can the user customize the screen location for content?
Is proper on-screen identification assigned to all content?
If a large report is to be presented, how should it be partitioned for ease of understanding?
Will graphical output be scaled to fit within the bounds of the display device that is used?
How will color be used to enhance understanding?
How will error messages and warnings be presented to the user?
The answers to these questions will help to establish requirements.
Once interface analysis has been completed, all tasks (or objects and actions) required by
the end user have been identified in detail, and the interface design activity commences.
Interface design is an iterative process. Each user interface design step occurs a number
of times, elaborating and refining information developed in the preceding step.
Although many different user interface design models have been proposed, all suggest
some combination of the following steps:
1. Using information developed during interface analysis, define interface objects and
actions (operations).
2. Define events (user actions) that will cause the state of the user interface to change.
Model this behavior.
3. Depict each interface state as it will actually look to the end user.
4. Indicate how the user interprets the state of the system from information provided through the
interface.
Based on this use case, the following homeowner tasks, objects, and data items are identified:
accesses the SafeHome system
enters an ID and password to allow remote access
checks system status
arms or disarms SafeHome system
displays floor plan and sensor locations
displays zones on floor plan
changes zones on floor plan
displays video camera locations on floor plan
selects video camera for viewing
views video images (four frames per second)
pans or zooms the video camera
Objects (boldface) and actions (italics) are extracted from this list of homeowner tasks.
The majority of objects noted are application objects. However, video camera location (a
source object) is dragged and dropped onto video camera (a target object) to create a
video image (a window with video display).
Fig. Preliminary screen layout
A preliminary sketch of the screen layout for video monitoring is created . To invoke the
video image, a video camera location icon, C, located in the floor plan displayed in the
monitoring window is selected. In this case a camera location in the living room (LR) is
then dragged and dropped onto the video camera icon in the upper left-hand portion of the
screen.
The video image window appears, displaying streaming video from the camera located in
the LR. The zoom and pan control slides are used to control the magnification and
direction of the video image.
To select a view from another camera, the user simply drags and drops a different camera
location icon into the camera icon in the upper left-hand corner of the screen.
The layout sketch shown would have to be supplemented with an expansion of each menu
item within the menu bar, indicating what actions are available for the video monitoring
mode (state). A complete set of sketches for each homeowner task noted in the user
scenario would be created during the interface design.
Unfortunately, many designers do not address these issues until relatively late in the
design process.
Unnecessary iteration, project delays, and end-user frustration often result. It is far better
to establish each as a design issue to be considered at the beginning of software design,
when changes are easy and costs are low.
i) Response time.
System response time is the primary complaint for many interactive applications. In
general, system response time is measured from the point at which the user performs some
control action (e.g., hits the return key or clicks a mouse) until the software responds with
desired output or action.
System response time has two important characteristics: length and variability. If system
response is too long, user frustration and stress are inevitable.
Variability refers to the deviation from average response time, and in many ways it is the
most important response time characteristic. Low variability enables the user to establish
an interaction rhythm, even if response time is relatively long.
For example, a 1-second response to a command will often be preferable to a response
that varies from 0.1 to 2.5 seconds. When variability is significant, the user is always off
balance, always wondering whether something “different” has occurred behind the scenes.
ii) Help facilities.
Almost every user of an interactive, computer-based system requires help now and then.
In some cases, a simple question addressed to a knowledgeable colleague can do the trick.
In others, detailed research in a multivolume set of “user manuals” may be the only
option.
In most cases, however, modern software provides online help facilities that enable a user
to get a question answered or resolve a problem without leaving the interface.
iii) Error handling.
In general, every error message or warning produced by an interactive system should have the
following characteristics:
The message should describe the problem in jargon that the user can understand.
The message should provide constructive advice for recovering from the error.
The message should indicate any negative consequences of the error (e.g., potentially
corrupted data files) so that the user can check to ensure that they have not occurred (or
correct them if they have).
The message should be accompanied by an audible or visual cue. That is, a beep might be
generated to accompany the display of the message, or the message might flash
momentarily or be displayed in a color that is easily recognizable as the “error color.”
The message should be “nonjudgmental.” That is, the wording should never place blame
on the user.
Because no one really likes bad news, few users will like an error message no matter how
well designed. But an effective error message philosophy can do much to improve the
quality of an interactive system and will significantly reduce user frustration when
problems do occur.
v) Application accessibility.
Accessibility for users who may be physically challenged is an imperative for ethical,
legal, and business reasons.
A variety of accessibility guidelines (many designed for Web applications but often
applicable to all types of software) provide detailed suggestions for designing interfaces
that achieve varying levels of accessibility.
Others provide specific guidelines for “assistive technology” that addresses the needs of
those with visual, hearing, mobility, speech, and learning impairments.
vi) Internationalization.
Software engineers and their managers invariably underestimate the effort and skills
required to create user interfaces that accommodate the needs of different locales and
languages. Too often, interfaces are designed for one locale and language and then
patched in makeshift fashion to work in other countries.
The challenge for interface designers is to create “globalized” software. That is, user
interfaces should be designed to accommodate a generic core of functionality that can be
delivered to all who use the software. Localization features enable the interface to be
customized for a specific market.
UNIT IV- TESTING AND IMPLEMENTATION
Objective of Testing:
The goal of testing is to find errors, and a good test is one that has a high probability
of finding an error. The tests must exhibit a set of characteristics that achieve the
goal of finding the most errors with a minimum of effort.
Testability.
“Software testability is simply how easily a computer program can be tested.”
Characteristics of testability:
1. Operability - “The better it works, the more efficiently it can be tested.”
2. Observability - “What you see is what you test.”
3. Controllability - “The better we can control the software, the more the testing
can be automated and optimized.”
4. Decomposability - “By controlling the scope of testing, we can more quickly
isolate problems and perform smarter retesting.”
5. Simplicity - “The less there is to test, the more quickly we can test it.”
6. Stability - “The fewer the changes, the fewer the disruptions to testing.”
7. Understandability - “The more information we have, the smarter we will test.”
Test Characteristics.
The following are attributes of a “good” test:
1) A good test has a high probability of finding an error.
2) A good test is not redundant.
3) A good test should be “best of breed”
Areas bounded by edges and nodes are called regions. When counting regions, we include
the area outside the graph as a region.
(a) Flowchart and (b) flow graph
Note that each new path introduces a new edge. The path is not considered to be
an independent path because it is simply a combination of already specified paths
and does not traverse any new edges.
Cyclomatic complexity is a software metric that provides a quantitative measure of
the logical complexity of a program. When used in the context of the basis path
testing method, the value computed for cyclomatic complexity defines the number
of independent paths in the basis set of a program and provides you with an upper
bound for the number of tests that must be conducted to ensure that all statements
have been executed at least once.
Referring once more to the flow graph in Figure (b), the cyclomatic complexity
can be computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = E - N + 2 = 11 edges - 9 nodes + 2 = 4.
3. V(G) = P + 1 = 3 predicate nodes + 1 = 4.
Therefore, the cyclomatic complexity of the flow graph in Figure (b) is 4.
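As a further hedged illustration with a different, hypothetical method (not the notes' flow graph): the code below has three predicate nodes, so V(G) = 3 + 1 = 4, and at most four basis-path tests are needed to execute every statement at least once.

public class CyclomaticExample {
    static int countAboveThreshold(int[] a, int threshold) {
        int count = 0;
        int i = 0;
        while (i < a.length) {        // predicate node 1
            if (a[i] > threshold) {   // predicate node 2
                count++;
            }
            i++;
        }
        if (count == 0) {             // predicate node 3
            return -1;
        }
        return count;
    }
}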
The link weight provides additional information about control flow. In its simplest
form, the link weight is 1 (a connection exists) or 0 (a connection does not exist).
But link weights can be assigned other, more interesting properties:
The probability that a link (edge) will be executed,
The processing time expended during traversal of a link,
The memory required during traversal of a link, and
The resources required during traversal of a link.
The analysis required to design test cases can be partially or fully automated.
The basis path testing technique is one of a number of techniques for control
structure testing. Although basis path testing is simple and highly effective, it is not
sufficient in itself. The following control structure testing techniques broaden testing
coverage and improve the quality of white-box testing.
1) Condition Testing:
Condition testing is a test-case design method that exercises the logical conditions
contained in a program module. A simple condition is a Boolean variable or a relational
expression, possibly preceded with one NOT (¬) operator. A relational expression takes
the form
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following:
<, <=, =, != (nonequality), >, or >=.
2) Data Flow Testing:
The data flow testing method selects test paths of a program according to the locations
of definitions and uses of variables in the program. For a statement numbered S, DEF(S)
is the set of variables defined in S, and USE(S) is the set of variables used in S. If
statement S is an if or loop statement, its DEF set is empty and its USE set is
based on the condition of statement S. The definition of variable X at statement S
is said to be live at statement S’ if there exists a path from statement S to statement
S’ that contains no other definition of X.
A definition-use (DU) chain of variable X is of the form [X, S, S’], where S and S’
are statement numbers, X is in DEF(S) and USE(S’), and the definition of X in
statement S is live at statement S’.
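A small, hypothetical Java fragment makes the chain concrete:

public class DataFlowExample {
    // x is defined at S1 and used at S3, and the path from S1 to S3
    // contains no other definition of x, so [x, S1, S3] is a DU chain
    // that a data-flow test should exercise.
    static int duExample(int n) {
        int x = n * 2;       // S1: x is in DEF(S1)
        if (n > 0) {         // S2: an if statement; its DEF set is empty
            return x + 1;    // S3: x is in USE(S3)
        }
        return 0;            // S4
    }
}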
3) Loop Testing
Loops are the cornerstone for the vast majority of all algorithms implemented in
software. And yet, we often pay them little heed while conducting software tests.
Loop testing is a white-box testing technique that focuses exclusively on the
validity of loop constructs. Four different classes of loops can be defined: simple
loops, concatenated loops, nested loops, and unstructured loops.
Nested loops.
If we were to extend the test approach for simple loops to nested loops, the
number of possible tests would grow geometrically as the level of nesting increases.
This would result in an impractical number of tests. An approach that helps to
reduce the number of tests:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range
or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.
Concatenated loops.
Concatenated loops can be tested using the approach defined for simple loops, if
each of the loops is independent of the other. However, if two loops are
concatenated and the loop counter for loop 1 is used as the initial value for loop 2,
then the loops are not independent. When the loops are not independent, the
approach applied to nested loops is recommended.
Unstructured loops.
Whenever possible, this class of loops should be redesigned to reflect the use of
the structured programming constructs.
By applying black-box techniques, a set of test cases can be derived that satisfy the following criteria:
(1) Test cases that reduce, by a count that is greater than one, the number of additional
test cases that must be designed to achieve reasonable testing, and
(2) Test cases that tell you something about the presence or absence of classes of
errors, rather than an error associated only with the specific test at hand.
4.4.1. Graph-Based Testing Methods
The first step in black-box testing is to understand the objects that are modeled in
software and the relationships that connect these objects. Once this has been
accomplished, the next step is to define a series of tests that verify “all objects
have the expected relationship to one another”.
Stated in another way, software testing begins by creating a graph of important objects
and their relationships and then devising a series of tests that will cover the graph so
that each object and relationship is exercised and errors are uncovered.
A number of behavioral testing methods that can make use of graphs are:
1) Transaction flow modeling.
2) Finite state modeling.
3) Data flow modeling.
4) Timing modeling.
Example#1:
For software that computes the square root of an input integer which can assume
values in the range 0 to 5000, there are three equivalence classes:
the set of negative integers, the set of integers in the range 0 to 5000, and the set of
integers larger than 5000. Therefore, the test cases must include representatives of each
of the three equivalence classes, and a possible test set is: {-5, 500, 6000}.
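A minimal Java test sketch for these three classes, one representative value per class (intSqrt is a hypothetical method that returns -1 outside the valid range):

public class SqrtEquivalenceTest {
    public static void main(String[] args) {
        check(intSqrt(-5) == -1, "class 1: negative integers");
        check(intSqrt(500) == 22, "class 2: integers in 0..5000");
        check(intSqrt(6000) == -1, "class 3: integers above 5000");
    }

    static void check(boolean ok, String label) {
        System.out.println((ok ? "PASS " : "FAIL ") + label);
    }

    static int intSqrt(int n) {
        if (n < 0 || n > 5000) return -1;  // both invalid classes rejected
        return (int) Math.sqrt(n);
    }
}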
Example#2:
Design the black-box test suite for the following program. The program computes the
intersection point of two straight lines and displays the result. It reads two integer pairs
(m1, c1) and (m2, c2) defining the two straight lines of the form y=mx + c.
The equivalence classes are the following:
•Parallel lines (m1=m2, c1≠c2)
•Intersecting lines (m1≠m2)
•Coincident lines (m1=m2, c1=c2)
Now, selecting one representative value from each equivalence class, the test suite
(2, 2) (2, 5); (5, 5) (7, 7); (10, 10) (10, 10) is obtained.
Each time a new module is added as part of integration testing, the software
changes. New data flow paths are established, new I/O may occur, and new
control logic is invoked. These changes may cause problems with functions that
previously worked flawlessly.
In the context of an integration test strategy, regression testing is the reexecution
of some subset of tests that have already been conducted to ensure that changes
have not propagated unintended side effects.
In a broader context, successful tests (of any kind) result in the discovery of errors,
and errors must be corrected. Whenever software is corrected, some aspect of the
software configuration (the program, its documentation, or the data that support
it) is changed. Regression testing helps to ensure that changes (due to testing or
for other reasons) do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by reexecuting a subset of all test
cases or using automated capture/playback tools. Capture/playback tools enable
the software engineer to capture test cases and results for subsequent playback
and comparison.
The regression test suite (the subset of tests to be executed) contains three different
classes of test cases:
1) A representative sample of tests that will exercise all software functions.
2) Additional tests that focus on software functions that are likely to be affected
by the change.
3) Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite
large. Therefore, the regression test suite should be designed to include only those
tests that address one or more classes of errors in each of the major program
functions.
It is impractical and inefficient to reexecute every test for every program function
once a change has occurred.
Unit testing focuses verification effort on the smallest unit of software design—the
software component or module.
The relative complexity of tests and the errors those tests uncover is limited by the
constrained scope established for unit testing. The unit test focuses on the internal
processing logic and data structures within the boundaries of a component. This
type of testing can be conducted in parallel for multiple components.
Unit-test considerations:
The module interface is tested to ensure that information properly flows into and
out of the program.
Local data structures are examined to ensure that integrity is maintained.
All independent paths are exercised to ensure that all statements in a module
have been executed at least once.
Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
All error-handling paths should be tested.
Unit-test procedures:
The design of unit tests can occur before coding begins or after source code has
been generated. A review of design information provides guidance for establishing
test cases that are likely to uncover errors in each of the categories discussed
earlier. Each test case should be coupled with a set of expected results.
Because a component is not a stand-alone program, driver and/or stub software
must often be developed for each unit test.
In most applications a driver is nothing more than a “main program” that
accepts test case data, passes such data to the component (to be tested), and prints
relevant results. Stubs serve to replace modules that are subordinate to (invoked by)
the component to be tested.
A stub or “dummy subprogram” uses the subordinate module’s interface, may do
minimal data manipulation, prints verification of entry, and returns control to the
module undergoing testing.
Drivers and stubs represent testing “overhead.” That is, both are software that
must be written (formal design is not commonly applied) but that is not delivered
with the final software product. If drivers and stubs are kept simple, actual
overhead is relatively low.
Unfortunately, many components cannot be adequately unit tested with “simple”
overhead software. In such cases, complete testing can be postponed until the
integration test step (where drivers or stubs are also used).
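A hedged Java sketch of this overhead (all names are hypothetical): the driver is a main program that feeds test data to the component under test, and the stub stands in for a subordinate module that is not yet available.

// Stub: replaces the real subordinate module; does minimal data
// manipulation and prints verification of entry.
class TaxServiceStub {
    double rateFor(String region) {
        System.out.println("stub entered: rateFor(" + region + ")");
        return 0.10;    // canned value
    }
}

// Component under test.
class PayCalculator {
    private final TaxServiceStub taxes;
    PayCalculator(TaxServiceStub taxes) { this.taxes = taxes; }
    double netPay(double gross, String region) {
        return gross * (1.0 - taxes.rateFor(region));
    }
}

// Driver: accepts test case data, passes it to the component, prints results.
public class PayCalculatorDriver {
    public static void main(String[] args) {
        PayCalculator unit = new PayCalculator(new TaxServiceStub());
        System.out.println("net pay = " + unit.netPay(1000.0, "north"));  // 900.0
    }
}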
Unit testing is simplified when a component with high cohesion is designed. When
only one function is addressed by a component, the number of test cases is
reduced and errors can be more easily predicted and uncovered.
(1) Delay many tests until stubs are replaced with actual modules,
(2) Develop stubs that perform limited functions that simulate the actual module, or
(3) Integrate the software from the bottom of the hierarchy upward.
Smoke Testing:
Smoke testing is an integration testing approach that is commonly used when
product software is developed. It is designed as a pacing mechanism for time-critical
projects, allowing the software team to assess the project on a frequent basis.
1. Software components that have been translated into code are integrated into a “build.”
A build includes all data files, libraries, reusable modules, and engineered components
that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function.
3. The build is integrated with other builds, and the entire product (in its current form) is
smoke tested daily.
The daily frequency of testing the entire product may surprise some readers.
However, frequent tests give both managers and practitioners a realistic assessment of
integration testing progress.
The smoke test should exercise the entire system from end to end. It does not have
to be exhaustive, but it should be capable of exposing major problems. The smoke test
should be thorough enough that if the build passes, you can assume that it is stable
enough to be tested more thoroughly.
Smoke testing provides a number of benefits when it is applied on complex, time-critical
software projects:
Integration risk is minimized.
The quality of the end product is improved.
Error diagnosis and correction are simplified.
Progress is easier to assess.
1. Validation-Test Criteria:
Software validation is achieved through a series of tests that demonstrate conformity
with requirements. A test plan outlines the classes of tests to be conducted, and a test
procedure defines specific test cases that are designed to ensure that all functional
requirements are satisfied, all behavioral characteristics are achieved, all content is
accurate and properly presented, all performance requirements are attained,
documentation is correct, and usability and other requirements are met.
After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristic conforms to specification and is accepted or
(2) A deviation from specification is uncovered and a deficiency list is created.
2. Configuration Review:
An important element of the validation process is a configuration review. The
intent of the review is to ensure that all elements of the software configuration have
been properly developed, are cataloged, and have the necessary detail to bolster the
support activities.
Alpha Test:
The alpha test is conducted at the developer’s site by a representative group of end
users. The software is used in a natural setting with the developer “looking over the
shoulder” of the users and recording errors and usage problems. Alpha tests are
conducted in a controlled environment.
Beta Test:
The beta test is conducted at one or more end-user sites. Unlike alpha testing, the
developer generally is not present. Therefore, the beta test is a “live” application of the
software in an environment that cannot be controlled by the developer. The customer
records all problems (real or imagined) that are encountered during beta testing and
reports these to the developer at regular intervals. As a result of problems reported during
beta tests, you make modifications and then prepare for release of the software product to
the entire customer base.
Acceptance Testing:
A variation on beta testing, called customer acceptance testing, is sometimes
performed when custom software is delivered to a customer under contract. The customer
performs a series of specific tests in an attempt to uncover errors before accepting the
software from the developer. In some cases (e.g., a major corporate or governmental
system) acceptance testing can be very formal and encompass many days or even weeks
of testing.
4.9. SYSTEM TESTING:
1) Recovery Testing:
Recovery testing is a system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), reinitialization, checkpointing
mechanisms, data recovery, and restart are evaluated for correctness.
If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.
2) Security Testing:
Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration.
“The system’s security must be tested for invulnerability from frontal attack, but must also be tested for invulnerability from flank or rear attack.”
During security testing, the tester may attempt to acquire passwords through external clerical means; may attack the system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during recovery; or may browse through insecure data, hoping to find the key to system entry.
The role of the system designer is to make penetration cost more than the value of the information that will be obtained.
3) Stress Testing:
Stress tests are designed to confront programs with abnormal situations. In
essence, the tester who performs stress testing asks: “How high can we crank this up
before it fails?”
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
For example,
(1) Special tests may be designed that generate ten interrupts per second, when one or
two is the average rate.
(2) Input data rates may be increased by an order of magnitude to determine how input
functions will respond.
(3) Test cases that require maximum memory or other resources are executed.
(4) Test cases that may cause thrashing in a virtual operating system are designed.
(5) Test cases that may cause excessive hunting for disk-resident data are created.
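A sketch of idea (2) from the list above in Python; process_event() is a hypothetical component, and the point is only to drive it at roughly ten times its normal input rate:

# Stress sketch: demand inputs at an abnormal frequency and record
# how often the component fails rather than degrading gracefully.
import time

def process_event(event_id):               # hypothetical component under stress
    return event_id * 2

def stress(rate_per_sec, duration_sec):
    interval = 1.0 / rate_per_sec
    sent = failures = 0
    deadline = time.monotonic() + duration_sec
    while time.monotonic() < deadline:
        try:
            process_event(sent)
            sent += 1
        except Exception:
            failures += 1
        time.sleep(interval)
    print(f"rate={rate_per_sec}/s sent={sent} failures={failures}")

stress(rate_per_sec=10, duration_sec=1)    # normal rate ~1/s; stressed: 10/s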
4) Performance Testing:
Performance testing is designed to test the run-time performance of software within the context of an integrated system.
Performance testing occurs throughout all steps in the testing process. Even at the
unit level, the performance of an individual module may be assessed as white-box tests
are conducted.
However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and usually require both
hardware and software instrumentation.
5) Deployment Testing:
In many cases, software must execute on a variety of platforms and under more
than one operating system environment. Deployment testing, sometimes called
configuration testing, exercises the software in each environment in which it is to operate.
In addition, deployment testing examines all installation procedures and specialized
installation software that will be used by customers, and all documentation that will be
used to introduce the software to end users.
4.10. DEBUGGING:
Software testing is a process that can be systematically planned and specified. Test
case design can be conducted, a strategy can be defined, and results can be evaluated
against prescribed expectations.
Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error.
1. The Debugging Process:
The debugging process begins with the execution of a test case. Results are
assessed and a lack of correspondence between expected and actual performance is
encountered.
In many cases, the noncorresponding data are a symptom of an underlying cause
as yet hidden. The debugging process attempts to match symptom with cause, thereby
leading to error correction.
3. Debugging Strategies:
Regardless of the approach that is taken, debugging has one overriding objective
— to find and correct the cause of a software error or defect. The objective is realized by
a combination of systematic evaluation, intuition, and luck.
1) Brute force:
The brute force category of debugging is probably the most common and least
efficient method for isolating the cause of a software error.
Using a “let the computer find the error” philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements.
Although the mass of information produced may ultimately lead to success, it
more frequently leads to wasted effort and time.
2) Backtracking:
Backtracking is a fairly common debugging approach that can be used successfully in
small programs.
Beginning at the site where a symptom has been uncovered, the source code is
traced backward (manually) until the cause is found. Unfortunately, as the number of
source lines increases, the number of potential backward paths may become
unmanageably large.
3) Cause elimination:
The third approach to debugging—cause elimination—is manifested by induction
or deduction and introduces the concept of binary partitioning. Data related to the error
occurrence are organized to isolate potential causes.
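Binary partitioning can be sketched in a few lines of Python; the failing predicate below is hypothetical, standing in for "rerun the test on this half of the data":

# Cause elimination by binary partitioning: repeatedly halve the set of
# suspect inputs, keeping whichever half still reproduces the failure.
def bisect_cause(items, still_fails):
    while len(items) > 1:
        mid = len(items) // 2
        first_half = items[:mid]
        items = first_half if still_fails(first_half) else items[mid:]
    return items[0]

# Hypothetical failure: the bug is triggered whenever input 7 is present.
suspects = list(range(16))
print(bisect_cause(suspects, lambda chunk: 7 in chunk))   # -> 7

This sketch assumes a single failure-inducing input; delta-debugging tools generalize the same idea.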
Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.
Automated debugging:
Each of these debugging approaches can be supplemented with debugging tools
that can provide you with semiautomated support as debugging strategies are attempted.
Integrated development environments (IDEs) provide a way to capture some of
the language-specific predetermined errors (e.g., missing end-of-statement characters, undefined variables, and so on) without requiring compilation.
1) Static analysis
Definition: Static testing is a testing technique in which software is tested without executing the code. As the code, requirement documents, and design documents are tested manually in order to find errors, it is called static testing.
This kind of testing is also called verification testing.
Technical reviews:
The team of technical experts will review the software for technical
specifications. The purpose is to point out the differences between the required specification and the product design, and then to correct the flaws. It focuses on technical
documents such as test strategy, test plan and requirement specification documents.
4. Walk-through:
The author explains the software to the team and teammates can raise questions if they
have any. It is headed by the author and review comments are noted down.
5. Inspection process:
The meeting is headed by a trained moderator. A formal review is done, a record is
maintained for all the errors and the authors are informed to make rectifications on the given
feedbacks.
Advantages
1) It is a fast and easy technique used to find and fix errors.
2) It helps in identifying flaws in code
3) With the help of automated tools it becomes very easy and convenient to scan and review the software.
4) With static testing it is possible to find errors at an early stage of the development life cycle
Disadvantages
1) It takes a lot of time to conduct the testing procedure if it is done manually.
2) Automated tools work for a restricted set of programming languages
3) The automated tools simply scan the code and cannot test the code deeply.
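To make the idea of automated scanning concrete, here is a toy static check in Python that inspects source text without ever executing it (the rule chosen is deliberately simple):

# Toy static analysis: walk the abstract syntax tree of some source
# text and flag bare "except:" clauses. The code is only inspected,
# never executed -- the essence of static (verification) testing.
import ast

SOURCE = """
try:
    risky_operation()
except:
    pass
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' silently hides errors")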
2) Dynamic analysis
Definition: Dynamic testing is a process by which code is executed to check how the software will perform in a runtime environment. As this type of testing is conducted during code execution, it is called dynamic. It is also called validation testing.
Unit testing: As the name suggests individual units or modules are tested. The source code is
tested by the developers.
• Integration testing: Individual modules are clubbed together and tested by the developers. It is
performed in order to ensure that modules are working in the right manner and will continue
to perform flawlessly even after integration.
• System testing: It is performed on a complete system to ensure that the application is
designed according to the requirement specification document.
Advantages
1) It identifies weak areas in the runtime environment
2) It helps in performing detailed analysis of code
3) It can be applied to any application
Disadvantages
1) It is not easy to find a trained software tester to perform dynamic testing
2) It becomes costly to fix errors in dynamic testing
For example:
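Here is a minimal dynamic test in Python, where a hypothetical discount() function is actually executed and its observed results are checked against expected values:

# Dynamic testing: discount() is executed with concrete inputs and the
# observed behaviour is compared against the expected results.
def discount(price, percent):
    return price - price * percent / 100

def test_discount():
    assert discount(200, 10) == 180
    assert discount(99.0, 0) == 99.0
    assert discount(50, 100) == 0

test_discount()
print("all dynamic checks passed")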
4.13Model Checking
Model checking is a technique for verifying that a software system satisfies the desired
properties. In this technique, a model of the system is created and then using a model checker
we exhaustively explore all possible states of the model to check the properties of the system.
Model checking can be used in both software testing and debugging. In software testing, it can be used to verify that the system meets its requirements. In software debugging, it can be used to identify the root cause of a bug.
Once the model and the desired properties have been specified, a model checker can be used to verify that the model satisfies the properties.
Consider, for example, a withdraw() function in a banking application with the following requirements:
1. The user must have a positive balance in their account in order to withdraw money.
2. The user cannot withdraw more money than they have in their account.
3. The user's balance must be updated after each withdrawal.
Before actually executing the code, we can use the model checking technique to verify that the withdraw() function meets these requirements. In this technique, we first create a model of the withdraw() function and specify the above requirements as properties of that model.
The model checker would explore all possible states of the function to verify that the function satisfies all these properties. For instance, if the balance is zero, then there must not be a path to the 'withdraw amount' state. Similarly, only if the balance is greater than or equal to the withdrawal amount should there be a path to the 'withdraw amount' state. If the model checker finds a state in which the function does not update the user's balance after a withdrawal, then it should report a bug. By identifying this issue in our model, we can locate the corresponding code in our software and correct the logic to prevent this scenario from happening.
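A minimal sketch of this idea in Python: the withdraw() model below is hypothetical, and the "model checker" simply enumerates every reachable (balance, amount) state in a small finite range and checks the three properties in each:

# Toy explicit-state model checking of a withdraw() model.
def withdraw(balance, amount):             # the model under analysis
    if balance > 0 and amount <= balance:  # guard on the withdrawal
        return balance - amount            # successor state
    return None                            # withdrawal refused

def model_check(max_balance=5, max_amount=5):
    violations = []
    for balance in range(-1, max_balance + 1):      # explore all states
        for amount in range(1, max_amount + 1):
            result = withdraw(balance, amount)
            if result is not None:                  # a withdrawal path exists
                if balance <= 0:
                    violations.append(f"P1 violated at ({balance}, {amount})")
                if amount > balance:
                    violations.append(f"P2 violated at ({balance}, {amount})")
                if result != balance - amount:
                    violations.append(f"P3 violated at ({balance}, {amount})")
    return violations or ["all properties hold over the explored states"]

print(*model_check(), sep="\n")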
Model checking is particularly valuable for complex systems, where manually testing all
possible combinations and paths would be impractical.
Unit – 5 Project Management
Effective software project management focuses on the four P's, i.e., people, product, process and project. Successful project management is done with the help of these four factors, where the order of these elements is not arbitrary.
The project manager has to facilitate communication among the stakeholders.
He should also prepare a project plan for the success of the product.
The software developer and customer must communicate with each other in
order to define the objectives and scope of the product. This is done as the
first step in requirement gathering and analysis. The scope of the project
identifies primary data, functions and behaviour of the product.
After establishing the objectives and scope of the product the alternative
solutions are considered.
Finally, the constraints imposed by the delivery deadline, budgetary restrictions, or personnel availability can be identified.
The software process provides the framework from which the software
development plan can be established.
There are various framework activities that need to be carried out during the software development process. These activities can be of varying sizes and complexities.
Different task sets (tasks, milestones, work products and quality assurance points) enable the framework activities to be adapted to the software requirements and certain characteristics of the software project.
Finally, umbrella activities such as Software Quality Assurance (SQA) and
Software Configuration Management (SCM) are conducted. These umbrella
activities depend upon the framework activities.
For meaningful project development the scope must be bounded. The problem for which the product is to be built is then decomposed into a set of smaller problems. Each of these is estimated using historical data (metrics) and/or previous experience as a guide. The two important issues, problem complexity and risk, are considered before the final estimate is made.
There are many useful techniques for time and effort estimation. Process and project metrics can provide historical perspective and powerful input for the generation of quantitative estimates.
To achieve reliable estimates, the project planner requires:
Experience,
Access to good historical information (metrics), and
The courage to commit to quantitative predictions when only qualitative information is available.
While estimating the project, both the project planner and the customer should recognize that variability in software requirements means instability in cost and schedule. When the customer changes the requirements, the estimation needs to be revisited.
In the scope description, various functions are described. These
functions are evaluated and refined to provide more details before the
estimation of the project.
For performance consideration, processing and response time
requirements are analyzed.
The constraints identify the limitations placed on the software by
external hardware or any other existing system.
After identifying the scope following questions must be asked –
• Can we build the software to meet this scope?
• Is this software project feasible?
That means after identifying the scope of the project its feasibility must be
ensured.
Following are the four dimensions of software feasibility. To ensure the feasibility of the software project, a set of questions based on these dimensions has to be answered, as given below:
[1] Technology
Is a project technically feasible?
Is it within the state of the art?
Can defects be reduced to a level that satisfies the application's needs?
[2] Finance
Is it financially feasible?
Can development be completed at a cost that the software organization, its client, or the market can afford?
[3] Time
Will the project's time to market beat the competition?
[4] Resource
Does the organization have the resources needed to succeed?
Putnam and Myers suggest that scoping is not enough. Once the scope is understood and feasibility has been identified, the next task is estimation of the resources required to accomplish the software development effort.
5.2 Estimation
Software project estimation is a form of problem solving. Many times the problem to be solved is too complex in software engineering. Hence, for solving such problems, we decompose the given problem into a set of smaller problems.
The decomposition can be done using two approaches: decomposition of the problem or decomposition of the process. Estimation uses one or both forms of decomposition (partitioning).
Sizing represents the project planner's first major challenge. In the context
of project planning, size refers to a quantifiable outcome of the software
project. The sizing can be estimated using two approaches: a direct approach, in which lines of code are counted, and an indirect approach, in which the computation of function points is done.
Putnam and Myers suggested four different approaches for sizing the problem -
[1] Fuzzy logic sizing
This approach uses approximate reasoning: the planner identifies the type of application, establishes its size on a qualitative scale, and then refines the estimate within the original range.
[2] Function point sizing
The planner develops estimates of the information domain characteristics, from which function points are computed.
[3] Standard component sizing
The software is composed of a number of standard components (e.g., subsystems, modules, screens, reports), and the number of occurrences of each standard component is estimated.
[4] Change sizing
This approach is used when existing software has to be modified as per
the requirement of the project. The size of the software is then estimated
by the number and type of reuse, addition of code, change made in the
code, deletion of code.
considers for "most likely" estimate where S is the estimation size variable,
represents the optimistic estimate, represents the most likely estimate
and represents the pessimistic estimate values.
5.3 LOC based Estimation
Size oriented measure is derived by considering the size of software that has been
produced.
The organization builds a simple record of size measures for its software projects, based on the past experience of the organization.
It is a direct measure of software size.
The size measure is based on lines-of-code computation, where a line of code is defined as one line of text in a source file.
Advantages
1. Artifact of software development which is easily counted.
2. Many existing methods use LOC as a key input.
3. A large body of literature and data based on LOC already exists.
Disadvantages
1. This measure is dependent upon the programming language.
2. With this measure, well-designed but shorter programs are unfairly penalized.
3. It does not accommodate non procedural languages.
4. In early stage of development it is difficult to estimate LOC.
Solution
For estimating the given application, we consider each module as a separate function, and the corresponding lines of code can be estimated as in the following table.
Expected LOC for the 3D geometric analysis function, based on three-point estimation:
Optimistic estimate: 4700
Most likely estimate: 6000
Pessimistic estimate: 10000
Expected value = (4700 + 4 × 6000 + 10000) / 6 = 6450 LOC
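The same computation expressed in Python, using the values from the table above:

# Three-point ("expected value") size estimate: a weighted average that
# gives the heaviest credence (weight 4) to the most likely value.
def expected_size(s_opt, s_likely, s_pess):
    return (s_opt + 4 * s_likely + s_pess) / 6

print(expected_size(4700, 6000, 10000))    # -> 6450.0 LOC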
[4] Number of files
Each logical master file, i.e. a logical grouping of data that may be part of
a database or a separate file.
[5] Number of external interfaces
All machine-readable interfaces that are used to transmit information
to another system are counted.
The organization needs to develop criteria which determine whether a particular entry is simple, average, or complex.
The weighting factors should be determined by observations or by experiments. The count table can then be computed with the help of the weighting factors.
Now the software complexity can be computed by answering the following questions. These are the complexity adjustment values:
1. Does the system need reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions ?
4. Is performance of the system critical ?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry ?
7. Does the on-line data entry require the input transaction to be built over
multiple screens or operations?
8. Are the master files updated on-line ?
9. Are the inputs, outputs, files or inquiries complex ?
10. Is the internal processing complex ?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design ?
13. Is the system designed for multiple installations in different organizations ?
14. Is the application designed to facilitate change and ease of use by the user?
Rate each of the above factors according to the following scale:
0 - No influence; 1 - Incidental; 2 - Moderate; 3 - Average; 4 - Significant; 5 - Essential
The function point value is then computed as FP = count total × [0.65 + 0.01 × Σ(Fi)], where Σ(Fi) is the sum of the 14 complexity adjustment values rated above. Once the function point value is calculated, we can compute various measures as follows:
Productivity = FP/person-month
Quality = Number of faults/FP
Cost = $/FP
Documentation = Pages of documentation/FP.
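A small sketch of these measures in Python; the formula follows the standard FP adjustment shown above, and all the sample numbers (counts, effort, faults, cost, pages) are hypothetical:

# Function-point measures. count_total is the weighted information-domain
# count; f_sum is the sum of the 14 complexity adjustment values.
def function_points(count_total, f_sum):
    return count_total * (0.65 + 0.01 * f_sum)

fp = function_points(count_total=372, f_sum=46)     # hypothetical inputs
effort_pm, faults, cost, pages = 60, 25, 400_000, 830

print(f"FP            = {fp:.0f}")
print(f"Productivity  = {fp / effort_pm:.2f} FP/person-month")
print(f"Quality       = {faults / fp:.3f} faults/FP")
print(f"Cost          = ${cost / fp:,.0f} per FP")
print(f"Documentation = {pages / fp:.2f} pages/FP")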
Advantages
This method is independent of programming languages.
It is based on data which can be obtained in an early stage of the project.
Disadvantages
This method is more suitable for business systems and was developed for that domain.
Many aspects of this method are not validated.
The function point itself has no direct physical meaning; it is just a numerical value.
Assume: 1. The estimated size of the project is 446 FP and the average productivity is 6.5 FP per person-month. 2. The average labor cost is $6000 per month.
Calculations for cost per function point, total estimated project cost and total effort
1. The cost per function point = (6000 / 6.5) = $923
2. Total estimated project cost = (446 * 923) = $411658
3. Total estimated effort = (446 / 6.5) ≈ 69 person-months.
The expected cost of each option can be computed using the following formula:
Expected cost = Σ (path probability)i × (estimated path cost)i
Thus the expected cost at each node can be computed and the options compared.
From this we can conclude that the option with the lowest expected cost (here, purchasing the software) would be selected. But cost alone should not be the criterion for acquiring the software.
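A sketch of the expected-cost computation in Python; since the original example figures were lost, the probabilities and costs below are hypothetical placeholders:

# Expected cost of each make-buy option: for every branch of the option,
# multiply path probability by estimated path cost, then sum.
def expected_cost(branches):
    return sum(probability * cost for probability, cost in branches)

options = {                                  # hypothetical decision tree
    "build":    [(0.30, 380_000), (0.70, 450_000)],
    "reuse":    [(0.40, 275_000), (0.60, 490_000)],
    "buy":      [(0.70, 210_000), (0.30, 400_000)],
    "contract": [(0.60, 350_000), (0.40, 500_000)],
}

for name, branches in options.items():
    print(f"{name:8s} expected cost = ${expected_cost(branches):,.0f}")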
During decision making process for software acquisition following factors should
also be considered.
1. Availability of reliable software.
2. Experience of developer or vendor or contractor.
3. Conformance to requirements.
4. Local politics.
5. Likelihood of changes in the software.
These are some criteria which can heavily affect the make-or-buy decision for software.
5.5.1 Outsourcing
Benefits of outsourcing
[1] Cost savings
If software is outsourced, then people and resource utilization can be reduced, and thereby the cost of the project can be saved effectively.
[2] Accelerated development
Since some part of software gets developed simultaneously by a third
party, the overall development process gets accelerated.
Drawbacks of outsourcing
A software company loses some control over the software, as it is developed by a third party.
The trend of outsourcing will continue in the software industry as companies strive to survive in a competitive world.
The basic COCOMO model takes the form:
E = ab(KLOC)^bb person-months
D = cb(E)^db chronological months
where KLOC denotes kilo lines of code of the project and ab, bb, cb, db are the coefficients for the three modes (from Boehm's published values):
Organic: ab = 2.4, bb = 1.05, cb = 2.5, db = 0.38
Semi-detached: ab = 3.0, bb = 1.12, cb = 2.5, db = 0.35
Embedded: ab = 3.6, bb = 1.20, cb = 2.5, db = 0.32
From E and D we can compute the number of people required to accomplish the project as N = E/D.
Merits of Basic Cocomo model:
The basic COCOMO model is good for quick, early, rough order-of-magnitude estimates of software projects.
Limitations :
1. The accuracy of this model is limited because it does not consider certain factors for cost estimation of software, such as hardware constraints, personnel quality and experience, and modern techniques and tools.
2. The estimates of the COCOMO model are within a factor of 1.3 of the actual values only 29% of the time, and within a factor of 2 only 60% of the time.
Example:
Consider a software project using the semi-detached mode with 30,000 lines of code. We will obtain the estimates for this project as follows:
(1) Effort estimation:
E = ab(KLOC)^bb person-months
E = 3.0 × (30)^1.12, where lines of code = 30,000 = 30 KLOC
E ≈ 135 person-months
(2) Duration estimation:
D = cb(E)^db chronological months
D = 2.5 × (135)^0.35 ≈ 14 months
Number of people required: N = E/D ≈ 135/14 ≈ 10 people
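The same arithmetic in Python, using Boehm's published basic-COCOMO coefficients for the three modes:

# Basic COCOMO: E = a * KLOC^b (person-months), D = c * E^d (months),
# N = E / D (people). Coefficients per mode from Boehm (1981).
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration, effort / duration

e, d, n = basic_cocomo(30, "semi-detached")
print(f"E = {e:.0f} person-months, D = {d:.1f} months, N = {n:.0f} people")
# -> E = 135 person-months, D = 13.9 months, N = 10 people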
INTERMEDIATE COCOMO
Merits:
1. This model can be applied to almost the entire software product for easy and rough cost estimation during the early stage.
2. It can also be applied at the software product component level for obtaining more accurate cost estimation.
Limitations:
1. The effort multipliers are not dependent on phases.
DETAILED COCOMO
The Advanced COCOMO model computes effort as a function of program size
and a set of cost drivers weighted according to each phase of the software
lifecycle. The Advanced model applies the Intermediate model at the
component level, and then a phase-based approach is used to consolidate the
estimate [Fenton, 1997]. The four phases used in the detailed COCOMO
model are: requirements planning and product design (RPD), detailed design
(DD), code and unit test (CUT), and integration and test (IT).
(Table: analyst capability effort multiplier for detailed COCOMO.)
Estimates for each module are combined into subsystems and eventually into an overall project estimate. Using the detailed cost drivers, an estimate is determined for each phase of the lifecycle.
COCOMO II
[1] An application composition model
The effort is computed as PM = NAP × (1 − %reuse/100) / PROD, where:
PM means the effort required in terms of person-months.
NAP means the number of application points required.
%reuse indicates the amount of reused components in the project. These reusable components can be screens, reports or modules used in previous projects.
PROD is the object-point productivity. These values are given in the above table.
[2] An early design model
This model is used in the early stage of the project development. That is after
gathering the user requirements and before the project development actually
starts, this model is used. Hence approximate cost estimation can be made in
this model.
A reuse model
This model considers the systems that have significant amount of code which
is reused from earlier software systems. The estimate made in the reuse model is nothing but the effort required to integrate the reused modules into the new system.
There are two types of reusable codes : black box code and white box code.
The black box code is a kind of code which is simply integrated with the new
system without modifying it. The white box code is a kind of code that has to
be modified to some extent before integrating it with the new system, and
then only it can work correctly.
There is a third category of code used in the reuse model: code which can be generated automatically. In this form of reuse, standard templates are integrated in the generator. The system model is given as input to these generators, some additional information about the system is taken from it, and the code is generated using the templates.
The effort for such automatically generated code is estimated as PM = (ASLOC × AT/100) / ATPROD, where AT is the percentage of the adapted code that is automatically generated and ATPROD is the productivity of engineers in integrating such code.
Sometimes in the reuse model some white-box code is used along with the newly developed code. The size estimate of the newly developed code is then ESLOC = ASLOC × (1 − AT/100) × AAM, where ESLOC means the equivalent number of lines of new source code, ASLOC means the source lines of code in the component that has to be adapted, and AAM is the adaptation adjustment multiplier.
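A tiny sketch of the reuse-model sizing in Python, under the standard COCOMO II reading of the formula above; all input values are hypothetical:

# Equivalent new lines of code for adapted (white-box) components,
# discounting the automatically generated portion.
def esloc(asloc, at_percent, aam):
    # asloc: source lines in the component to adapt
    # at_percent: percentage of that code generated automatically
    # aam: adaptation adjustment multiplier (relative cost of adapting)
    return asloc * (1 - at_percent / 100) * aam

print(esloc(asloc=10_000, at_percent=30, aam=0.25))   # -> 1750.0 ESLOC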
[3] Post architecture model
This model is a detailed model used to compute the effort. The basic formula used in this model is PM = A × Size^B × M, where A is a constant, Size is the amount of code (in KSLOC), B is the scale exponent and M is the product of the effort multipliers.
In this model, effort is estimated more accurately. The code size estimate is made with the help of three components:
1. The estimate about new lines of code that is added in the program.
2. Equivalent number of source lines of code (ESLOC) used in reuse model.
3. The estimate of the amount of code that gets modified due to changes in the requirements.
The exponent term B has three possible values that are related to the levels of
project complexity. The values of B are continuous rather than discrete. It
depends upon the five scale factors. These scale factors vary from very low
to extra high (i.e. from 5 to 0).
For example, using the Capability Maturity Model (CMM) questionnaire, the CMM maturity level can be subtracted from 5 when computing the estimates.
Add up all these ratings and divide the resulting value by 100. Then add the resultant value to 1.01 to get the exponent value.
This model makes use of 17 cost attributes instead of seven. These attributes
are used to adjust the initial estimate.
5.8 Scheduling and Tracking
While scheduling the project, the manager has to estimate the time and resources required for the project. All the activities in the project must be arranged in a coherent sequence. The schedule must be continually updated, because uncertain problems may occur during the project life cycle. For new projects, initial estimates can be made optimistically.
During project scheduling, the total work is separated into various small activities, and the time required for each activity must be determined by the project manager. For efficient performance, some activities are conducted in parallel.
The project manager should be aware of the fact that every stage of the project may not be problem-free. Some of the typical problems at the project development stage are:
People may leave or remain absent.
Hardware may fail.
Software resource may not be available.
Various resources required for the project include:
Human effort
Sufficient disk space on server
Specialized hardware
Software technology
Travel allowance required by the project staff.
Project schedules are represented as a set of charts in which the work-breakdown structure and the dependencies among various activities are represented.
5.8.1 Relationship between People and Effort
There is a common myth among software managers that by adding more people to a project, the deadline can be achieved. But this is not true: when more people are added to a project, they first need to be trained in the tools and technologies being used, and only the people already working on the project can teach the new ones. Thus, during this teaching or training, time is spent without progress being made on the project.
There is no single task set that is appropriate for all projects; for developing large, complex projects, a set of tasks is required. Hence every effective software process should define a collection of task sets depending upon the type of project.
Using task sets, high-quality software can be developed and any unnecessary work can be avoided during software development.
The number of task sets will vary depending upon the type of the project. Various types of projects are listed below:
[1] Concept Development project
These are projects in which new business ideas or applications based on new technologies are to be developed.
[2] New application development project
These projects are developed for satisfying a specific customer need.
[3] Application upgradation project
These are the kinds of projects in which an existing software application needs a major change. This change can be for performance improvement, or for modifications within the modules and interfaces.
[4] Application maintenance project
These are the kinds of projects that correct, adapt or extend existing software applications.
[5] Reengineering projects
These are the projects in which the legacy systems are rebuilt partly or
completely.
Various factors that influence the task sets are:
1. Size of project
2. Project development staff
3. Number of user of that project
4. Application longevity
5. Complexity of application
6. Performance constraints
7. Use of technologies
Task set example: Consider the concept development type of project. Various task sets in this type of project are:
1. Defining scope: This task is for defining the scope, goal or objective of the
project.
2. Planning: It includes the estimate for schedule, cost and people for
completing the desired concept.
3. Evaluation of technology risks: It evaluates the risk associated with
the technology used in the project.
4. Concept implementation: It includes the concept representation in
the same manner as expected by the end user.
5.8.4 Time Line Chart
5.8.5 Tracking Schedule
The project schedule is a most important factor for the software project manager. It is the duty of the project manager to decide the project schedule and to track it.
Tracking the schedule means determining the tasks and milestones in the project as it proceeds. Following are the various ways by which tracking of the project schedule can be done:
1. Conduct periodic meetings. In these meetings, various problems related to the project are discussed, and the progress of the project is reported to the project manager.
2. Evaluate results of all the project reviews.
3. Compare 'actual start date' and 'scheduled start date' of each of the project
task.
4. Determine whether the milestones of the project are achieved on the scheduled dates.
5. Meet informally with the software practitioners. This will help the project manager to solve many problems. These meetings will also be helpful for assessing the project's progress.
6. Assess the progress of the project quantitatively.
Thus, for tracking the schedule of the project, the project manager should be an experienced person. In fact, the project manager is the person responsible for controlling the software project. When problems occur in the project, additional resources may be demanded, skilled and experienced staff may be employed, or the project schedule may be redefined.
For handling severe deadlines, the project manager uses a technique called time boxing. In this technique, it is understood that the complete product cannot be delivered on the given date; instead, the product is delivered to the customer part by part, i.e., in a series of increments.
Using the time-box technique means the project manager associates each task with a "time box"; within that time frame the task must be completed. When the current task reaches the boundary of its time box, the next task must be started (even if the current task remains incomplete).
Some researchers have argued against leaving a task incomplete when it reaches the boundary of its time box. The counter-argument is that even if the task remains incomplete, it has reached a near-completion stage, and the remaining part can be completed in the next increment.
5.9 Earned Value Analysis
The difference between BCWS (budgeted cost of work scheduled) and BCWP (budgeted cost of work performed) is that BCWS represents the values of the project activities that are planned, while BCWP represents the values of the project activities that have been completed.
Various types of computations in EVA are given as follows:
1) SPI = BCWP / BCWS
where SPI is the schedule performance index. It represents the project's efficiency; an SPI value of 1.0 indicates that execution of the project is very efficient.
2) SV = BCWP − BCWS
where SV indicates the schedule variance.
3) Percent scheduled for completion = BCWS / BAC
where BAC is the budget at completion. The percent scheduled for completion indicates the percentage of work which should have been completed by time t.
4) CPI = BCWP / ACWP
where ACWP refers to the actual cost of work performed; this value helps in computing the cost factor of the project. CPI indicates the cost performance index. This value represents whether the performance of the project is within the defined budget or not; a value of 1.0 indicates that the project is within the defined budget.
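A small numerical sketch of these computations in Python, with hypothetical task values:

# Earned value analysis with hypothetical figures at some time t.
bcws = 1200.0    # budgeted cost of work scheduled (planned so far)
bcwp = 1080.0    # budgeted cost of work performed (earned so far)
acwp = 1350.0    # actual cost of work performed

spi = bcwp / bcws          # schedule performance index
sv = bcwp - bcws           # schedule variance
cpi = bcwp / acwp          # cost performance index

print(f"SPI = {spi:.2f}  (1.0 would mean exactly on schedule)")
print(f"SV  = {sv:+.0f}")
print(f"CPI = {cpi:.2f}  (1.0 would mean exactly on budget)")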
Thus EVA helps in identifying the project performance, cost of performance
and project scheduling difficulties. This ultimately helps the project manager
to take the appropriate corrective actions.
While developing a software project, many work products such as the SRS, design documents and source code are created. Along with these work products, many errors may be generated. The project manager has to identify all these errors in order to deliver quality software.
The defect removal efficiency is computed as DRE = E / (E + D), where DRE is the defect removal efficiency, E is the number of errors found before delivery and D is the number of defects found after delivery.
5.14 DevOps
The term DevOps is derived from software DEVelopment and information technology OPerationS.
DevOps promotes a set of processes and methods from three departments - development, IT operations and quality assurance - that communicate and collaborate together for the development of a software system.
5.14.1 Why DevOps ?
5.14.2 Motivation
6. Increases net profit of organization
7. To standardize the development environment
8. To reduce work in progess
9. To reduce operating expenses
10. To set up the automated environment.
5.14.3 Benefits
Various benefits of DevOps are
Technical Benefits
1. Continuous software delivery is possible.
2. There is less complexity in managing the project.
3. The problems in the project get resolved faster.
Cultural benefits
1. The productivity of teams gets increased.
2. There is higher employee engagement.
3. There arise greater professional development opportunities.
Business benefits
1. The faster delivery of the product is possible.
2. The operating environment becomes stable.
3. Communication and collaboration are improved among the team members and
customers
4. More time is available for innovation rather than for fixing and maintaining.
Basically, Agile and DevOps are similar, but there are some differences:
• DevOps brings more flexibility than Agile. With Continuous Integration (CI) and Continuous Delivery (CD), software products are released often, and it is ensured that these releases actually work and meet customer needs. Thus, in DevOps there is an increased number of releases.
One goal of DevOps is to establish an environment where releasing more reliable
applications, faster and more frequently, can occur. This actually brings the
continuous delivery approach.
DevOps is not a separate concept but a mere extension of Agile to include
operations as well to collaborate different agile teams together and work as ONE
team with an objective to deliver software fully to the customer.
5.14.5 Deployment Pipeline
A Pipeline is a set of automated processes that allow developers and DevOps
professionals to reliably and efficiently compile, build and deploy their code to their
production computing platform.
1. Continuous Integration:
The continuous integration in DevOps is a practice where developers regularly
merge their code changes into a central repository or a database after which
automated builds and tests are run.
Continuous Integration (CI) is the practice of automating the integration of code
changes from multiple developers or testers into a single software project.
Automated tools are used to assert the new code's correctness before integration.
Continuous integration serves as a prerequisite for the testing, deployment and
release stages of continuous delivery.
The main benefit of performing continuous integration regularly and testing each
integration is that we can detect errors more quickly and locate them easily.
The key components of the deployment pipeline are:
Source Code Management: This is the first step of the deployment pipeline, in which source code is stored in a version control system such as Git, hosted on a platform like GitHub. Developers commit their changes to this repository, and the pipeline is triggered when there are new commits.
Build: In this process, the code is built into executable entities. During the build
process, the code is compiled, dependencies are packaged and binaries are
created.
Automated Testing: The pipeline runs a suite of automated tests in order to test
the software system. It includes unit testing, integration testing and performance
testing.
Deployment: Once the code passes testing it is deployed to a staging or testing
environment that closely resembles the production environment. This allows for
further testing and validation in a controlled setting
User Acceptance Testing (UAT): In this stage, the software is tested by a group
of users or stakeholders to ensure it meets their expectations.
Final Deployment: If all tests and checks are successful, the software is deployed
to the production environment. This can be done manually, automatically or with
a combination of both.
Monitoring and Feedback: The application is continuously monitored; if any issues arise, they are fixed immediately. Feedback from the production environment is used to inform future development and improvements.
Documentation: Finally, comprehensive documents and reports are prepared.
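As a toy illustration only, the stages above can be strung together by a small pipeline runner; every stage function here is a hypothetical placeholder for what tools such as Jenkins automate in practice:

# Toy deployment-pipeline runner: stages execute in order, and any
# failing stage stops the pipeline before the software reaches production.
def run_pipeline(stages):
    for name, stage in stages:
        print("stage:", name)
        if not stage():
            print("pipeline stopped:", name, "failed")
            return False
    print("pipeline complete: released to production")
    return True

stages = [                         # hypothetical stage implementations
    ("build",             lambda: True),
    ("automated tests",   lambda: True),
    ("deploy to staging", lambda: True),
    ("user acceptance",   lambda: True),
    ("final deployment",  lambda: True),
]
run_pipeline(stages)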
5.14.6 Overall Architecture
DevOps is a combined practice of development and operations. There are different phases in the DevOps architecture:
1) Plan: In this phase, all the requirements of the project are gathered. The schedule and cost of the project are estimated approximately.
2) Code: In this phase the code is written as per the requirements. Entire project is
divided into smaller units. Each unit can be coded as a module
3) Build: In this phase, the building of all the units is done using tools such as Maven or Gradle, and the built code is submitted to a common code source.
4) Test: At this stage, all the units are tested to find whether any bugs exist in the code. The testing can be done using tools like Selenium, JUnit and pytest. Some important testing techniques, such as acceptance testing, security testing, integration testing and performance testing, are carried out.
5) Integrate: In this phase, a new feature is added to the existing code and testing is
performed. Continuous Development is achieved only because of continuous integration
and testing.
6) Deploy: In this stage, the code is deployed in the client's environment. Some of the
examples of the tools used for Deployment are AWS, Docker.
7) Operate: At this stage, the version can be utilized by the users. Operations are
performed on the code if required. Some examples of the tools used are Kubernetes and OpenShift.
8) Monitor: At this stage, monitoring of the version at the client's workplace is done. During this phase, developers collect data, monitor each function and spot errors such as low memory or broken server connections. The DevOps workflow is observed at this level based on data gathered from consumer behavior, application efficiency and other sources. Some examples of the tools used for monitoring are Nagios and the Elastic Stack.
5.14.8 Tools
Various tools used in DevOps are as follows
1. Nagios: It is a monitoring solution that gives new features and a modern user
experience.
2. ELK Stack: This tool is used for collecting logs from all services, applications,
networks, tools, servers and more in an environment into a single, centralized location
for processing and analysis
3. Docker: It eases configuration management and control issues.
4. Jenkins: Jenkins is a top tool for DevOps engineers who want to monitor executions
of repeated jobs.
5. Puppet: It handles configuration management and software while making rapid
repeatable changes in it.
6. Ansible: Ansible is a configuration management tool or DevOps tool that is similar to
Puppet and Chef.
7. God: It is a monitoring tool used in DevOps.
8. Monit: Monit has everything DevOps engineers need for system monitoring and error
recovery
9. Consul.io: This tool is used for service discovery and configuration management activities.
10. Loggly: This tool is used for log management in DevOps.
2) Cost Effective:
Instead of purchasing and creating our own expensive servers, we can use cloud
services where we need to pay only for the tools and services that we use
Cloud platform offers a pay-as-you-go pricing method, which means that we
only pay for the services that are needed and have been used for a period of time
4) Secure:
Cloud computing maintains confidentiality, integrity and availability of the user's
data.
Each service provided by the cloud is secure
Personal and business data can be encrypted to maintain data privacy
High Performance:
High-performance computing is the ability to process massive amounts of data at
high speed
Cloud computing offers a high-performance computing service so that the
companies need not worry about the speed
5.15.1 Operations
It provides different services of the cloud, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and packaged Software as a Service (SaaS).
IaaS: Infrastructure as a Service means delivering computing infrastructure on demand. Under this service, the user purchases cloud infrastructure including servers, networks, operating systems and storage, using virtualization technology. These services are highly scalable. IaaS is used by network architects. Examples of cloud services: AWS and Microsoft Azure.
PaaS: Platform as a Service means a service where a third-party provider provides both hardware and software tools to the clients. It provides elastic scaling of applications, which allows developers to build applications and services over the internet; the deployment models include public, private and hybrid. PaaS is used by developers. Examples of cloud services are Facebook and the Google search engine.
SaaS: Software as a Service is a model that hosts software to make it available to clients. It is used by end users. Examples of cloud services: Google Apps.
DMI COLLEGE OF ENGINEERING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
UNIT 1
10. Identify in which phase of the software life cycle the following documents are delivered.
(a) Architectural design - Design
(b) Test plan - Testing
(c) Cost estimate - Project management and planning
(d) Source code document - Coding
11. Define the terms product and process in software engineering.
Ans.: The product in software engineering is a standalone entity that can be produced by a development organization and sold on the open market to any customer who is able to buy it. The software product consists of computer programs, procedures and associated documentation (the documentation can be in hard-copy form or in electronic form). Some examples of software products are databases, word processors and drawing tools.
The process in software engineering can be defined as the structured set of activities that are required to develop a software system. The various activities under the software process are:
• Specification
• Design and implementation
• Validation
• Evolution
15. State the benefits of the waterfall life cycle model for software development.
Ans.: 1. The waterfall model is simple to implement. 2. It is suitable for the implementation of small systems.
16. How does "Project Risk" factor affect the spiral model of software development?
Ans.: The spiral model demands considerable risk assessment because if a major risk is not
uncovered and managed, problems will occur in the project and then it will not be acceptable by end
user.
17. Define software.
Ans. Software is a collection of computer programs and related documents that are intended to
provide desired features, functionality and performance. Software can be of two types: 1. generic software and 2. custom software.
14. The software process activities are:
1. Specification
2. Design and Implementation
3. Validation
4. Evolution
21. What are the pros and cons of Iterative software development model?
Ans. Pros: 1) Changes in requirements or additions of functionality are possible.
2) Risks can be identified and rectified before they get problematic.
Cons: 1) This model is typically based on customer communication. If the communication is
not proper the software product that gets developed will not be exactly as per the requirements.
2) The development process may get continued and never finish.
22. What led to the transition from product-oriented development to process-oriented development?
Ans. The software process model led to the transition from product-oriented development to process-oriented development.
25. Depict the relationship between work product, task, activity and system.
Ans. • Each framework activity under the umbrella activities of the software process
framework consists of various task sets.
Each task set consists of work tasks, work products, quality assurance points and project
milestones.
The task sets accommodate the needs of the system being developed.
UNIT II
1. State the characteristics of an SRS document.
Software requirement specification (SRS) is a document that completely describes what the proposed software should do, without describing how the software will do it. The basic goal of the requirements phase is to produce the SRS, which describes the complete behavior of the proposed software. The SRS also helps the clients to understand their own needs.
Characteristics of an SRS:
1. Correct
2. Complete
3. Unambiguous
4. Verifiable
5. Consistent
6. Ranked for importance and/or stability
and the ends of an association can be adorned with role names, ownership indicators,
multiplicity, visibility, and other properties.
15. Write the syntax for presenting an attribute, as suggested by UML:
visibility name : type_expression = initial_value
where visibility is one of the following:
+ public visibility
# protected visibility
- private visibility
type_expression is the type of the attribute.
initial_value is a language-dependent expression for the initial value of a newly created object.
A Petri Net is a collection of directed arcs connecting places and transitions. Places may hold
tokens. The state or marking of a net is its assignment of tokens to places. Here is a simple net
containing all components of a Petri Net:
UNIT III
3. What are the properties that should be exhibited by a good software design according
to Mitch Kapor?
i. Firmness: A program should not have any bugs that inhibit its function.
ii. Commodity: A program should be suitable for the purposes for which it
was intended.
iii. Delight: The experience of using the program should be a pleasurable one.
22. What various system models can be used in architectural design?
In software architecture, various system models represent different ways of structuring and
organizing a software system to address specific requirements, challenges, or objectives. These
architectural models focus on different concerns like scalability, maintainability, security, and
performance.
23. What are certain issues that are considered while designing the software?
When designing software, several critical issues must be considered to ensure that the system
is robust, scalable, maintainable, and meets the needs of its users. These issues span across functional
and non-functional requirements and influence the overall architecture, user experience, and long-
term success of the software.
Functional cohesion
Sequential cohesion
Communicational cohesion
Procedural cohesion
25. Why is modularity important in software projects?
Modularity is a key principle in software engineering that involves dividing a system into
smaller, self-contained units or modules. Each module is designed to perform a specific task, making
the overall system more manageable, maintainable, and flexible.
UNIT IV
Smoke testing: Smoke Testing, also known as “Build Verification Testing”, is a type of
software testing that comprises a non-exhaustive set of tests that aim at ensuring that the
most important functions work.
9. Discuss regression testing.
Regression testing: it is used to check for defects propagated to other modules by
changes made to existing programs. Regression means retesting the unchanged parts of
the application.
Acceptance testing: Making sure the software works correctly for the intended user in his or her normal work environment.
There are two types of acceptance testing:
a. Alpha test
b. Beta test
Alpha test – version of the complete software is tested by customer under the supervision
of the developer at the developer’s site.
Beta test – version of the complete software is tested by customer at his or her own site
without the developer being present.
UNIT V
25. What are project indicators and how do they help a project manager?
Project indicators are metrics or key performance indicators (KPIs) that provide
measurable insights into the progress, health, and performance of a project. These indicators help project
managers monitor and assess the success of a project, enabling them to make informed decisions and take
corrective actions as needed.