Agile Model
The Agile model proceeds through the following phases:
1. Requirements gathering
2. Design the requirements
3. Construction/iteration
4. Testing/Quality assurance
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements. You
should explain business opportunities and plan the time and effort needed to build
the project. Based on this information, you can evaluate technical and economic
feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to define the requirements. You can use a user flow diagram or a high-level UML diagram to show how new features will work and how they will apply to your existing system.
3. Construction/iteration: When the team has defined the requirements, the work begins. Designers and developers start working on the project, with the aim of deploying a working product. Because the product will undergo various stages of improvement, the first iteration includes only simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work
environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team
receives feedback about the product and works through the feedback.
Advantages:
1. Frequent delivery.
Disadvantages:
1. Due to the lack of proper documentation, once the project is completed and the developers are allotted to another project, maintenance of the finished product can become difficult.
Extreme Programming :
Extreme Programming (XP) was conceived and developed to address the specific
needs of software development by small teams in the face of vague and changing
requirements.
Crystal and Scrum are other widely used agile methods. Crystal focuses primarily on people and their interactions rather than on processes and tools; its key concerns are:
• The people involved
• Interaction between the teams
• Community
• Skills of the people involved
• Their talents
• Communication between all the teams
Design results in a number of architectural alternatives that are each assessed to determine which is the
most appropriate for the problem to be solved.
(1) The first approach uses an iterative method to assess design trade-offs.
(2) The second approach applies a pseudo-quantitative technique for assessing design quality.
• The Software Engineering Institute (SEI) has developed an architecture trade-off analysis method
(ATAM) that establishes an iterative evaluation process for software architectures.
• The design analysis activities that follow are performed iteratively.
1. Collect scenarios: A set of use cases is developed to represent the system from the user’s point of
view.
2. Elicit (Bring out) requirements, constraints, and environment description. This information is
determined as part of requirements engineering and is used to be certain that all stakeholder concerns
have been addressed.
3. Describe the architectural styles/patterns that have been chosen to address the scenarios and
requirements.
The architectural style(s) should be described using one of the following architectural views…
• Module view for analysis of work assignments with components and the degree to which
information hiding has been achieved.
• Process view for analysis of system performance.
• Data flow view for analysis of the degree to which the architecture meets functional
requirements.
4. Evaluate quality attributes: Quality attributes for architectural design assessment include reliability, performance, security, maintainability, flexibility, testability, portability, reusability, and interoperability.
5. Identify the sensitivity of quality attributes to various architectural attributes for a specific architectural style. This can be accomplished by making small changes in the architecture and determining how sensitive a quality attribute, say performance, is to the change. Any attributes that are significantly affected by variation in the architecture are termed sensitivity points.
6. Critique (Assess) candidate architectures (developed in step 3) using the sensitivity analysis
conducted in step 5.
• The Software Engineering Institute (SEI) describes this approach in the following manner
• Once the architectural sensitivity points have been determined, finding trade-off points is simply
the identification of architectural elements to which multiple attributes are sensitive. For
example, the performance of a client-server architecture might be highly sensitive to the
number of servers (performance increases, within some range, by increasing the number of
servers). . . . The number of servers, then, is a trade-off point with respect to this architecture.
Architectural Complexity
• A useful technique for assessing the overall complexity of a proposed architecture is to consider
dependencies between components within the architecture.
• These dependencies are driven by information/control flow within the system. Zhao suggests
three types of dependencies:
1. Sharing dependencies represent dependence relationships among consumers who use the same resource or producers who produce for the same consumers. For example, for two components u and v, if u and v refer to the same global data, then there exists a shared dependence relationship between u and v.
2. Flow dependencies represent dependence relationships between producers and consumers of resources.
3. Constrained dependencies represent constraints on the relative flow of control among a set of activities. For example, for two components u and v, if u and v cannot execute at the same time (mutual exclusion), then there exists a constrained dependence relationship between u and v.
(A minimal sketch of tallying these dependency types appears below.)
▪ Architectural Description Language
• The architect of a house has a set of standardized tools and notation that allow the design to be
represented in an unambiguous, understandable fashion.
• Although the software architect can draw on Unified Modeling Language (UML) notation, other diagrammatic forms, and a few related tools, there is a need for a more formal approach to the specification of an architectural design.
• Architectural description language (ADL) provides a semantics and syntax for describing a
software architecture.
• Hofmann and his colleagues suggest that
i. An ADL should provide the designer with the ability to decompose architectural
components,
ii. Compose individual components into larger architectural blocks,
iii. Represent interfaces (connection mechanisms) between components.
• Once descriptive, language-based techniques for architectural design have been established, it is more likely that effective assessment methods for architectures will be established as the design evolves. (A rough, language-neutral sketch of these ideas follows.)
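Real ADLs each have their own syntax; purely as an illustration of the three capabilities Hofmann lists (the classes and component names below are invented, not an actual ADL), the ideas can be modelled roughly like this:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Interface:
    name: str                                        # connection mechanism exposed by a component
    operations: List[str] = field(default_factory=list)

@dataclass
class Component:
    name: str
    provides: List[Interface] = field(default_factory=list)
    requires: List[Interface] = field(default_factory=list)
    parts: List["Component"] = field(default_factory=list)  # decomposition into sub-components

def compose(name: str, *parts: Component) -> Component:
    # Compose individual components into a larger architectural block.
    return Component(name=name, parts=list(parts))

sensor = Component("SensorInput", provides=[Interface("readings", ["poll"])])
alarm = Component("AlarmLogic", requires=[Interface("readings", ["poll"])])
security_block = compose("SecurityFunction", sensor, alarm)
print(security_block.name, [p.name for p in security_block.parts])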
● Inspections are often driven by checklists of errors and heuristics that identify common errors in
different programming languages.
● For some errors and heuristics (an approach to problem solving or self-discovery), it is possible to
automate the process of checking programs against this list, which has resulted in the development of
automated static analyzers for different programming languages.
● Static analyzers are software tools that scan the source text of a program and detect possible faults and
anomalies.
● They parse the program text and thus recognize the types of statements in the program.
● They can then detect whether statements are well formed, make inferences about the control flow in the
program and, in many cases, compute the set of all possible values for program data.
● They complement the error detection facilities provided by the language compiler.
● They can be used as part of the inspection process or as a separate V & V process activity.
● The intention of automatic static analysis is to draw an inspector’s attention to anomalies in the
program, such as variables that are used without initialization, variables that are unused or data whose
value could go out of range.
1. Control flow analysis
▪ This stage identifies and highlights loops with multiple exit or entry points and unreachable code.
▪ Unreachable code is code that is surrounded by unconditional goto statements or that is in a branch of a conditional statement where the guarding condition can never be true.
3. Interface analysis
▪ This analysis checks the consistency of routine and procedure declarations and their use.
▪ It is unnecessary if a strongly typed language such as Java is used for implementation as the
compiler carries out these checks.
▪ Interface analysis can detect type errors in weakly typed languages like FORTRAN and C.
▪ Interface analysis can also detect functions and procedures that are declared and never called or
function results that are never used.
4. Information flow analysis
▪ This phase of the analysis identifies the dependencies between input and output variables.
▪ While it does not detect anomalies, it shows how the value of each program variable is derived
from other variable values.
▪ With this information, a code inspection should be able to find values that have been wrongly
computed.
▪ Information flow analysis can also show the conditions that affect a variable’s value.
5. Path analysis
▪ This phase of semantic analysis identifies all possible paths through the program and sets out the
statements executed in that path.
▪ It essentially unravels the program’s control and allows each possible predicate to be analyzed
individually.
Classes of program faults and anomalies that automated static analysis can check include:
1. Data faults
2. Control faults
▪ Unreachable code
▪ Unconditional branches into loops
3. Input/output faults
4. Interface faults
5. Storage management faults
▪ Unassigned pointers
▪ Pointer arithmetic
CASE
Computer-aided software engineering (CASE) is the implementation of computer-facilitated tools and
methods in software development. CASE is used to ensure high-quality and defect-free software. CASE
ensures a check-pointed and disciplined approach and helps designers, developers, testers, managers, and
others to see the project milestones during development.
CASE can also help as a warehouse for documents related to projects, like business plans, requirements, and
design specifications. One of the major advantages of using CASE is the delivery of the final product, which is
more likely to meet real-world requirements as it ensures that customers remain part of the process.
CASE covers a wide set of labor-saving tools that are used in software development. It generates a framework for organizing projects and helps enhance productivity. There was more interest in the
concept of CASE tools years ago, but less so today, as the tools have morphed into different functions, often in
reaction to software developer needs. The concept of CASE also received a heavy dose of criticism after its
release.
CASE Tools: The essential idea of CASE tools is that in-built programs can help to analyze developing
systems in order to enhance quality and provide better outcomes. Throughout the 1990s, CASE tools became
part of the software lexicon, and big companies like IBM were using these kinds of tools to help create
software.
Various tools are incorporated in CASE and are called CASE tools, which are used to support different stages
and milestones in a software development life cycle.
Types of CASE Tools:
1. Diagramming Tools:
It helps in diagrammatic and graphical representations of the data and system processes. It represents
system elements, control flow and data flow among different software components and system structures in
a pictorial form. For example, Flow Chart Maker tool for making state-of-the-art flowcharts.
2. Computer Display and Report Generators: These help in understanding the data requirements and the
relationships involved.
3. Analysis Tools: These focus on inconsistent or incorrect specifications in diagrams and data flows. They help in collecting requirements and automatically checking for any irregularity or imprecision in the diagrams, data redundancies, or erroneous omissions.
For example:
• (i) Accept 360, Accompa, CaseComplete for requirement analysis.
• (ii) Visible Analyst for total analysis.
4. Central Repository: It provides a single point of storage for data diagrams, reports, and documents related
to project management.
5. Documentation Generators: It helps in generating user and technical documentation as per standards. It
creates documents for technical users and end users.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
6. Code Generators: It aids in the auto-generation of code, including definitions, with the help of designs,
documents, and diagrams.
Advantages of the CASE approach:
• As the special emphasis is placed on the redesign as well as testing, the servicing cost of a product over its
expected lifetime is considerably reduced.
• The overall quality of the product is improved as an organized approach is undertaken during the process of
development.
• Chances to meet real-world requirements are more likely and easier with a computer-aided software
engineering approach.
• CASE indirectly provides an organization with a competitive advantage by helping ensure the development
of high-quality products.
• It provides better documentation.
• It improves accuracy.
• It provides intangible benefits.
• It reduces lifetime maintenance.
• It provides an opportunity for non-programmers.
• It impacts the style of working of the company.
• It reduces the drudgery in a software engineer’s work.
• It increases the speed of processing.
• It makes programming software easier.
Disadvantages of the CASE approach:
• Cost: Using a case tool is very costly. Most firms engaged in software development on a small scale do not
invest in CASE tools because they think that the benefit of CASE is justifiable only in the development of
large systems.
• Learning Curve: In most cases, programmers’ productivity may fall in the initial phase of
implementation, because users need time to learn the technology. Many consultants offer training and on-site services that can be important to accelerate the learning curve and to support the development and use of the CASE tools.
• Tool Mix: It is important to build an appropriate tool mix to get the cost advantage; CASE integration and data integration across all platforms are extremely important.
COCOMO MODEL
What is a Functional Requirement?
A Functional Requirement (FR) is a description of the service that the software
must offer. It describes a software system or its component. A function is nothing but
inputs to the software system, its behavior, and outputs. It can be a calculation, data
manipulation, business process, user interaction, or any other specific functionality
which defines what function a system is likely to perform. Functional Requirements
in Software Engineering are also called Functional Specification.
In software engineering and systems engineering, a Functional Requirement can
range from the high-level abstract statement of the sender’s necessity to detailed
mathematical functional requirement specifications. Functional
software requirements help you to capture the intended behaviour of the system.
• Helps you to check whether the application is providing all the functionalities
that were mentioned in the functional requirement of that application
• A functional requirement document helps you to define the functionality of a
system or one of its subsystems.
• Functional requirements along with requirement analysis help identify missing
requirements. They help clearly define the expected system service and
behavior.
• Errors caught in the Functional requirement gathering stage are the cheapest
to fix.
• Support user goals, tasks, or activities
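As a small, hypothetical illustration (the requirement, the function names, and the figures are invented for this example), a functional requirement such as "the system shall compute an order total as the sum of item prices plus 5% tax" can be captured as an executable acceptance check:

def order_total(prices, tax_rate=0.05):
    # Functional behavior: inputs (item prices) are mapped to an output (total with tax).
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

def test_order_total_meets_functional_requirement():
    # FR example: for items costing 10.00 and 20.00, the total shall be 31.50.
    assert order_total([10.00, 20.00]) == 31.50

test_order_total_meets_functional_requirement()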
Most people don't give a second thought to new technologies, as these technologies make their lives easier and more comfortable. We need software engineering because it is important in daily life. We have technology like Alexa only because we have software engineering. It has made possible things that were once beyond our imagination.
2. Adding structure
Without software engineering, we have people who can code. But software engineering methodology brings structure to everything and makes the lifecycle and business processes easy and reliable.
3. Preventing issues
The software development process has now been formalized to prevent the
software project from running over budget, mismanagement, and poor
planning. The process of quality assurance and user testing is vital as it
helps prevent future issues at lower costs. And this is only possible due to
software engineering. For the success of projects, it becomes vitally
important.
4. Huge Programming
5. Automation & AI
6. Research
It is only through research and development that new technology arises in the industry. This is possible today because software engineering is at the forefront
of new technology research and development. Through each step forward,
other parts of the industry can flourish as we stand on the shoulders of
giants.
The importance of software engineering lies in the fact that a specific piece
of Software is required in almost every industry, every business, and
purpose. As time goes on, it becomes more important for the following
reasons.
1. Reduces Complexity
Big projects need lots of patience, planning, and management. The company will invest its resources, so the project should be completed within the deadline. This is only possible if the company uses software engineering to deal with big projects without problems.
Software engineers are paid highly, since building software requires a lot of hard work and skilled workers. Software is developed with the help of a large amount of code, but software engineers plan everything and remove the things that are not needed. As a result, the cost of producing software becomes lower than it is for software that does not use this approach.
4. To Decrease Time
If things are not done according to proper procedures, a huge amount of time is lost. Complex software must run a lot of code to reach a definitive working version, so it takes a lot of time if not handled properly. If you follow the prescribed software engineering methods, you will save precious time by reducing wasted effort.
5. Effectiveness
6. Reliable Software
Software inspections are a static verification technique for discovering anomalies and defects. An inspection does not require execution of a system, so it may be used before the implementation process. Inspections may be applied to any representation of the system (requirements, design, test data, configuration data, etc.). They have been shown to be an effective technique for discovering program errors. A software inspection is conducted only when the author (i.e., the developer) has made sure that the code is ready for inspection; this is decided by performing some preliminary desk checking and a walkthrough on the code. After passing through these review methods, the work product goes through the following inspection steps:
Step 1: Planning
• Identify the moderator – Has the main responsibility for the inspection.
• Prepare the package for distribution – Work product for review plus supporting
docs.
Step 2: Overview
• Brief meeting – deliver the package, explain the purpose of the review, make introductions, etc.
Step 3: Individual Preparation
• All team members then individually review the work product and list the issues they come across while studying it.
• Ideally this should be done in one sitting, and the issues are recorded in a log.
Step 4: Review Meeting
• The reviewer goes over the product line by line; at any line, all related issues are raised.
• The scribe presents the consolidated list of defects. If there are few defects, the work product is accepted; otherwise it is sent back for rework.
• The group does not propose solutions, though some suggestions may be recorded.
Step 5: Rework
• The author works through the logged defects, and changes are made to repair the discovered errors.
Step 6: Follow-up
• Once fixed, the author gets the corrections OKed by the moderator.
• Once all defects are satisfactorily addressed, the review is completed.
Inspection Roles
Various roles involved in an inspection are as follows:
• Inspector: Provides review comments for the code; finds errors, omissions, and inconsistencies in the program, and may also identify broader issues.
• Moderator: Manages the process and facilitates the inspection, and reports on the effectiveness of the inspection process.
Advantages:
• The goal of this method is to detect all faults, violations, and other side effects during the inspection.
• The reader in the inspection reads out the document sequentially in a structured manner so that all the points and all the code are inspected thoroughly.
Disadvantages:
• Logistics and scheduling can become an issue since multiple people are involved.
• Time-consuming as it needs preparation as well as formal meetings.
• It is not always possible to go through every line of code with several parameters and their combinations to ensure the correctness of the logic and all its side effects.
Error List
Some programming errors which can be checked during software inspection are as
follows:
• Incompatible assignments.
A comprehensive mapping that accomplishes the transition from the requirements model to a variety of
architectural styles does not exist.
A mapping technique, called structured design, is often characterized as a data flow-oriented design method because it provides a convenient transition from a data flow diagram to software architecture. It is accomplished as part of a six-step process; among its steps,
(5) the resultant structure is refined using design measures and heuristics.
In order to perform the mapping, the type of information flow must be determined. One type of information flow is called transform flow. Data flows into the system along an incoming flow path. Then it is processed at a transform center. Finally, it flows out of the system along an outgoing flow path that transforms the data into external-world form.
Transform Mapping
Transform mapping is a set of design steps that allows a DFD with transform flow characteristics to be
mappedinto a specific architectural style. To map these data flow diagrams into a software architecture,
you would initiate the following design steps: (Example Home security System)
Step 1. Review the fundamental system model.
The fundamental system model or context diagram depicts the security function as a single transformation, representing the external producers and consumers of data that flow into and out of the function. The figure depicts a level 0 context model, and Figure 9.11 shows the refined data flow for the security function.
Step 2. Review and refine data flow diagrams for the software.
Information obtained from the requirements model is refined to produce greater detail. For example, the
level 2 DFD for monitor sensors
Step 3. Determine whether the DFD has transform or transaction flow characteristics. Evaluating
the DFD. Input and output should be consistent for a process.
Step 4. Isolate the transform center by specifying incoming and outgoing flow boundaries.
Incoming data flows along a path in which information is converted from external to internal form;
outgoing flowconverts internalized data to external form. Different designers may select slightly different
points in the flow asboundary locations. In fact, alternative design solutions can be derived by varying the
placement of flowboundaries. The emphasis in this design step should be on selecting reasonable
boundaries, rather than lengthy iteration on placement of divisions.
Step 5. Perform “first-level factoring.” This mapping results in a top-down distribution of control. Factoring leads to a program structure in which top-level components perform decision making and low-level components perform most input, computation, and output work.
When transform flow is encountered, a DFD is mapped to a specific structure (a call and return architecture) that provides control for incoming, transform, and outgoing information processing. This first-level factoring for the monitor sensors subsystem is illustrated in Figure 9.14.
A main controller (called monitor sensors executive) resides at the top of the program structure and coordinates the following subordinate control functions:
• An incoming information processing controller, called sensor input controller, coordinates receipt of all incoming data.
• A transform flow controller, called alarm conditions controller, supervises all operations on data in internalized form (e.g., a module that invokes various data transformation procedures).
• An outgoing information processing controller, called alarm output controller, coordinates production of output information. (A rough code sketch of this call-and-return structure follows.)
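In the sketch below, only the controller names follow the text above; the function bodies and the sample event are invented placeholders for illustration:

def sensor_input_controller():
    # Incoming information processing: coordinates receipt of all incoming data.
    return {"sensor_id": 7, "reading": "smoke"}

def alarm_conditions_controller(event):
    # Transform flow control: supervises operations on internalized data.
    return event["reading"] == "smoke"

def alarm_output_controller(alarm_raised):
    # Outgoing information processing: coordinates production of output information.
    if alarm_raised:
        print("ALARM: notify the monitoring service")

def monitor_sensors_executive():
    # Main controller at the top of the call-and-return structure.
    event = sensor_input_controller()
    alarm_raised = alarm_conditions_controller(event)
    alarm_output_controller(alarm_raised)

monitor_sensors_executive()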
Two or even three bubbles can be combined and represented as one component, or a single bubble may be
expanded to two or more components. Review and refinement may lead to changes in this structure, but it
can serve as a “first-iteration” design.
Step 6. Perform “second-level factoring.”
Second-level factoring for incoming flow follows in the same manner. Factoring is again accomplished by moving outward from the transform center boundary on the incoming flow side. The transform center of the monitor sensors subsystem software is mapped. A completed first-iteration architecture is shown in Figure 9.16.
Components are named in a manner that implies function. The processing narrative describes the
component interface, internal data structures, a functional narrative, and a brief discussion of restrictions
and special features.
Step 7. Refine the first-iteration architecture using design heuristics for improved software quality.
Design Concepts
Introduction: Software design encompasses the set of principles, concepts, and practices that lead to
the development of a high-quality system or product. Design principles establish an overriding
philosophy that guides you in the design work you must perform. Design is pivotal to successful software
engineering. The goal of design is to produce a model or representation that exhibits firmness, commodity, and delight. Software design changes continually as new methods, better analysis, and broader understanding evolve.
Software design sits at the technical kernel of software engineering and is applied regardless of the
software process model that is used. Beginning once software requirements have been analyzed and
modeled, software design is the last software engineering action within the modeling activity and sets
the stage for construction (code generation and testing).
Each of the elements of the requirements model provides information that is necessary to create the
four design models required for a complete specification of design. The flow of information during
software design is illustrated in following figure.
The architectural design defines the relationship between major structural elements of the software, the
architectural styles and design patterns that can be used to achieve the requirements defined for the
system, and the constraints that affect the way in which architecture can be implemented. The
architectural design representation—the framework of a computer- based system—is derived from the
requirements model.
The interface design describes how the software communicates with systems that interoperate with it,
and with humans who use it. An interface implies a flow of information (e.g., data and/or control) and a
specific type of behavior. Therefore, usage scenarios and behavioral models provide much of the
information required for interface design.
The component-level design transforms structural elements of the software architecture into a
procedural description of software components. Information obtained from the class-based models,
flow models, and behavioral models serve as the basis for component design.
The importance of software design can be stated with a single word—quality. Design is the place where
quality is fostered in software engineering. Design provides you with representations of software that
can be assessed for quality. Design is the only way that you can accurately translate stakeholder’s
requirements into a finished software product or system. Software design serves as the foundation for
all the software engineering and software support activities that follow.
Software design is an iterative process through which requirements are translated into a “blueprint” for
constructing the software. Initially, the blueprint depicts a holistic view of software. That is, the design is
represented at a high level of abstraction
McGlaughlin suggests three characteristics that serve as a guide for the evaluation of a good design:
• The design must implement all of the explicit requirements contained in the requirements model, and
it must accommodate all of the implicit requirements desired by stakeholders.
• The design must be a readable, understandable guide for those who generate code and for those who
test and subsequently support the software.
• The design should provide a complete picture of the software, addressing the data, functional, and
behavioral domains from an implementation perspective.
Quality Guidelines. In order to evaluate the quality of a design representation, consider the following
guidelines:
1. A design should exhibit an architecture that (1) has been created using recognizable architectural
styles or patterns, (2) is composed of components that exhibit good design characteristics and (3) can be
implemented in an evolutionary fashion,2 thereby facilitating implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned into elements or
subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and
are drawn from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between components and with the external environment.
7. A design should be derived using a repeatable method that is driven by information obtained during
software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.
Quality Attributes. Hewlett-Packard developed a set of software quality attributes that has been given
the acronym FURPS—functionality, usability, reliability, performance, and supportability. The FURPS
quality attributes represent a target for all software design:
• Functionality is assessed by evaluating the feature set and capabilities of the program, the generality of the functions that are delivered, and the security of the overall system.
• Usability is assessed by considering human factors, overall aesthetics, consistency, and documentation.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output
results, the mean-time-to-failure (MTTF), the ability to recover from failure, and the predictability of the
program.
• Performance is measured by evaluating processing speed, response time, resource consumption, throughput, and efficiency.
• Supportability combines the ability to extend the program (extensibility), adaptability, and serviceability (these three attributes represent a more common term, maintainability), as well as testability, compatibility, configurability, the ease with which a system can be installed, and the ease with which problems can be localized.
The evolution of software design is a continuing process that has now spanned almost six decades. Early
design work concentrated on criteria for the development of modular programs and methods for
refining software structures in a top down manner. Procedural aspects of design definition evolved into
a philosophy called structured programming.
A number of design methods, growing out of the work just noted, are being applied throughout the
industry. All of these methods have a number of common characteristics:
(1) a mechanism for the translation of the requirements model into a design representation, (2) a notation for representing functional components and their interfaces, (3) heuristics for refinement and partitioning, and (4) guidelines for quality assessment.
DESIGN CONCEPTS
A set of fundamental software design concepts has evolved over the history of software engineering.
Each provides the software designer with a foundation from which more sophisticated design methods
can be applied. Each helps you answer the following questions:
• What criteria can be used to partition software into individual components?
• How is function or data structure detail separated from a conceptual representation of the software?
The following is a brief overview of important software design concepts that span both traditional and object-oriented software development.
Abstraction
Abstraction is the act of representing essential features without including the background details or
explanations. Abstraction is used to reduce complexity and allow efficient design and implementation of complex software systems. Many levels of abstraction can be posed. At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, a more detailed description of the solution is provided. As different levels of abstraction are developed, you work to create both procedural and data abstractions.
A procedural abstraction refers to a sequence of instructions that have a specific and limited function.
The name of a procedural abstraction implies this function, but specific details are suppressed. A data abstraction, by contrast, is a named collection of data that describes a data object.
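A small Python sketch of both ideas (the door example and its fields are illustrative, not from a specific system):

# Procedural abstraction: the name "open_door" implies a specific, limited
# function; the step-by-step details are suppressed from the caller.
def open_door(door):
    door["locked"] = False
    door["position"] = "open"

# Data abstraction: a named collection of data that describes the object "door".
door = {"type": "panel", "locked": True, "position": "closed"}
open_door(door)
print(door)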
Architecture
Software architecture alludes to “the overall structure of the software and the ways in which that
structure provides conceptual integrity for a system”
Architecture is the structure or organization of program components (modules), the manner in which
these components interact, and the structure of data that are used by the components.
Shaw and Garlan describe a set of properties that should be specified as part of an architectural design:
• Structural properties. This aspect of the architectural design representation defines the
components of a system (e.g., modules, objects, filters) and the manner in which those
components are packaged and interact with one another.
• Extra-functional properties. The architectural design description should address how the design
architecture achieves requirements for performance, capacity, reliability, security, adaptability,
and other system characteristics.
• Families of related systems. The architectural design should draw upon repeatable patterns
that are commonly encountered in the design of families of similar systems. In essence, the
design should have the ability to reuse architectural building blocks.
The architectural design can be represented using one or more of a number of different models.
Structural models: Represent architecture as an organized collection of program components.
Framework models: Increase the level of design abstraction by attempting to identify repeatable
architectural design frameworks that are encountered in similar types of applications.
Dynamic models : Address the behavioral aspects of the program architecture, indicating how the
structure or system configuration may change as a function of external events.
Process models :Focus on the design of the business or technical process that the system must
accommodate.
A number of different architectural description languages (ADLs) have been developed to represent
these models.
Patterns
Brad Appleton defines a design pattern in the following manner: “A pattern is a named nugget of insight
which conveys the essence of a proven solution to a recurring problem within a certain context amidst
competing concerns”
A design pattern describes a design structure that solves a particular design problem within a specific
context and amid “forces” that may have an impact on the manner in which the pattern is applied and
used.
The intent of each design pattern is to provide a description that enables a designer to determine (1)
whether the pattern is applicable to the current work, (2) whether the pattern can be reused (hence,
saving design time), and (3) whether the pattern can serve as a guide for developing a similar, but
functionally or structurally different pattern.
Separation of Concerns
Separation of concerns is a design concept that suggests that any complex problem can be more easily
handled if it is subdivided into pieces that can each be solved and/or optimized independently. A
concern is a feature or behavior that is specified as part of the requirements model for the software.
Separation of concerns is manifested in other related design concepts: modularity, aspects, functional
independence, and refinement. Each will be discussed in the subsections that follow.
Modularity
Modularity is the most common manifestation of separation of concerns. Software is divided into separately named and addressable components, sometimes called modules.
Information Hiding
The principle of information hiding suggests that modules be “characterized by design decisions that
(each) hides from all others.” In other words, modules should be specified and designed so that information
contained within a module is inaccessible to other modules that have no need for such information.The
use of information hiding as a design criterion for modular systems provides the greatest benefits when
modifications are required during testing and later during software maintenance. Because most data
and procedural detail are hidden from other parts of the software, inadvertent errors introduced during
modification are less likely to propagate to other locations within the software.
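A minimal Python sketch of the idea (the class, its attributes, and the calibration detail are invented for illustration):

class TemperatureSensor:
    def __init__(self):
        self._raw_reading = 20.0        # internal detail, hidden from other modules
        self._calibration_offset = 1.5  # design decision hidden behind the interface

    def read_celsius(self):
        # Only this narrow interface is visible to other modules; the raw reading
        # and calibration details can change without affecting any caller.
        return self._raw_reading + self._calibration_offset

print(TemperatureSensor().read_celsius())  # 21.5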
Functional Independence
Independence is assessed using two qualitative criteria: cohesion and coupling. Cohesion is an
indication of the relative functional strength of a module. Coupling is an indication of the relative
interdependence among modules.
Cohesion is a natural extension of the information-hiding concept. A cohesive module performs a single
task, requiring little interaction with other components in other parts of a program. Stated simply, a
cohesive module should do just one thing. Although you should always strive for high cohesion (i.e., single-mindedness), it is often necessary and advisable to have a software component perform multiple functions. Coupling, in turn, should be kept as low as possible: the simpler the connections between modules, the less likely an error in one module is to ripple into others.
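A brief sketch contrasting the two criteria (the module and function names are invented for illustration):

# High cohesion: this module does exactly one thing - formatting report text.
def format_report(lines):
    return "\n".join(line.strip() for line in lines)

# Low coupling: the caller passes plain data and a writer function instead of
# reaching into another module's internals, so the modules can change independently.
def save_report(lines, write=print):
    write(format_report(lines))

save_report(["  first line ", "second line"])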
Refinement
Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth. Refinement is
actually a process of elaboration. You begin with a statement of function that is defined at a high level of
abstraction.
Abstraction and refinement are complementary concepts. Abstraction enables you to specify procedure
and data internally but suppress the need for “outsiders” to have knowledge of low-level details.
Refinement helps you to reveal low-level details as design progresses.
Aspects
Refactoring
An important design activity suggested for many agile methods, refactoring is a reorganization
technique that simplifies the design (or code) of a component without changing its function or behavior.
Fowler defines refactoring in the following manner: “Refactoring is the process of changing a software
system in such a way that it does not alter the external behavior of the code [design] yet improves its
internal structure.”
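A tiny before-and-after sketch (the function and the tax figure are invented for illustration); the external behavior is unchanged while the internal structure improves:

# Before refactoring: duplicated arithmetic and an unclear name.
def calc(prices):
    total = 0
    for price in prices:
        total = total + price
    return total + total * 0.05

# After refactoring: same external behavior, clearer internal structure.
def total_with_tax(prices, tax_rate=0.05):
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

assert round(calc([10, 20]), 2) == round(total_with_tax([10, 20]), 2)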
The object-oriented (OO) paradigm is widely used in modern software engineering. It builds on OO design concepts such as classes and objects, inheritance, messages, and polymorphism, among others.
Design Classes
The requirements model defines a set of analysis classes. Each describes some element of the problem
domain, focusing on aspects of the problem that are user visible. As the design model evolves, a set of design classes is defined that refines the analysis classes by providing design detail that will enable the classes to be implemented, and that implements a software infrastructure supporting the business solution.
Five different types of design classes, each representing a different layer of the design architecture, can
be developed:
• User interface classes define all abstractions that are necessary for human computer interaction (HCI).
The design classes for the interface may be visual representations of the elements of the metaphor.
• Business domain classes are often refinements of the analysis classes defined earlier. The classes
identify the attributes and services (methods) that are required to implement some element of the
business domain.
• Process classes implement lower-level business abstractions required to fully manage the business
domain classes.
• Persistent classes represent data stores (e.g., a database) that will persist beyond the execution of the
software.
• System classes implement software management and control functions that enable the system to
operate and communicate within its computing environment and with the outside world.
Arlow and Neustadt suggest that each design class be reviewed to ensure that it is “well- formed.” They
define four characteristics of a well-formed design class:
• Complete and sufficient. A design class should be the complete encapsulation of all attributes
and methods that can reasonably be expected to exist for the class. Sufficiency ensures that the
design class contains only those methods that are sufficient to achieve the intent of the class, no
more and no less.
• Primitiveness. Methods associated with a design class should be focused on accomplishing one
service for the class. Once the service has been implemented with a method, the class should
not provide another way to accomplish the same thing.
• High cohesion. A cohesive design class has a small, focused set of responsibilities and single-
mindedly applies attributes and methods to implement those responsibilities.
• Low coupling. Within the design model, it is necessary for design classes to collaborate with one
another. If a design model is highly coupled, the system is difficult to implement, to test, and to
maintain over time.
The design model can be viewed in two different dimensions. The process dimension indicates the
evolution of the design model as design tasks are executed as part of the software process. The
abstraction dimension represents the level of detail as each element of the analysis model is
transformed into a design equivalent and then refined iteratively. The design model has four major
elements: data, architecture, components, and interface.
The People Capability Maturity Model (PCMM) is a framework that helps an organization successfully address its critical people issues. Based on the best current practices in fields such as human resources, knowledge management, and organizational development, the PCMM guides organizations in improving their processes for managing and developing their workforces.
The PCMM consists of five maturity levels that lay successive foundations for continuously improving talent, developing effective methods, and successfully directing the people assets of the organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes a level of capability for developing the talent within the organization.
SDLC MODEL
SDLC – Software Development Life Cycle: The Software Development Lifecycle is a systematic process
for building software that ensures the quality and correctness of the software built. The SDLC process aims to produce high-quality software that meets customer expectations, completed within the pre-defined time frame and cost. It consists of a detailed plan describing how to
develop, maintain and replace specific software. Software life cycle models describe phases of the
software cycle and the order in which those phases are executed. Each phase produces deliverables
required by the next phase in the life cycle.
A typical Software Development Life Cycle (SDLC) consists of the following phases:
1. Requirement gathering
2. System Analysis
3. Design
4. Development /Implementation or coding
5. Testing
6. Deployment
7. Maintenance
1. Requirement gathering:
➢ Requirement gathering and analysis is the most important phase in software development
lifecycle. Business Analyst collects the requirement from the Customer/Client as per the client’s
business needs and documents the requirements in the Business Requirement Specification.
➢ This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders, and users are held in order to determine the requirements, such as: who is going to use the system? How will they use the system? What data should be input into the
system? What data should be output by the system?
2. Analysis Phase:
• Once the requirement gathering and analysis is done the next step is to define and
document the product requirements and get them approved by the customer. This is
done through SRS (Software Requirement Specification) document.
• SRS consists of all the product requirements to be designed and developed during the
project life cycle.
• Key people involved in this phase are the Project Manager, Business Analyst, and Senior
members of the Team.
• The outcome of this phase is Software Requirement Specification.
3. Design Phase:
• In this third phase the system and software design is prepared from the requirement
specifications which were studied in the first phase.
• System Design helps in specifying hardware and system requirements and also helps in defining
overall system architecture.
• There are two kinds of design documents developed in this phase:
• High-Level Design (HLD): It gives the architecture of the software product to be developed and is
done by architects and senior developers. It gives a brief description and the name of each module. It
also defines interface relationship and dependencies between modules, database tables
identified along with their key elements
• Low-Level Design (LLD): It is done by senior developers. It describes how each and every feature
in the product should work and how every component should work. Here, only the design will
be there and not the code. It defines the functional logic of the modules, database tables design
with size and type, and complete details of the interface. It addresses all types of dependency issues and lists the error messages.
4. Coding/Implementation Phase:
• In this phase, developers start building the entire system by writing code using the chosen
programming language.
• Here, tasks are divided into units or modules and assigned to the various developers. It is the
longest phase of the Software Development Life Cycle process.
• In this phase, Developer needs to follow certain predefined coding guidelines. They also need to
use programming tools like compilers, interpreters, and debuggers to generate and implement the
code.
• The outcome from this phase is Source Code Document (SCD) and the developed product.
5. Testing Phase:
• After the code is developed it is tested against the requirements to make sure that the product
is actually solving the needs addressed and gathered during the requirements phase.
• The QA team tests the software either manually or using automated testing tools, depending on the process defined in the STLC (Software Testing Life Cycle), and ensures that each and every component of the software works fine. The development team fixes the bugs and sends the build back to QA for a re-test. This
process continues until the software is bug-free, stable, and working according to the business
needs of that system.
6. Deployment: After successful testing the product is delivered / deployed to the customer for their
use. As soon as the product is given to the customers they will first do the beta testing. If any changes
are required or if any bugs are caught, then they will report it to the engineering team. Once those
changes are made or the bugs are fixed then the final deployment will happen.
7. Maintenance: Software maintenance is a vast activity which includes optimization, error correction,
and deletion of discarded features and enhancement of existing features. Since these changes are
necessary, a mechanism must be created for estimation, controlling and making modifications. The
essential part of software maintenance requires preparation of an accurate plan during the
development cycle. Typically, maintenance takes up about 40-80% of the project cost, usually closer to the higher end. Hence, a focus on maintenance definitely helps keep costs down.
PROCESS MODEL
A software process model is an abstraction of the software development process. The models specify
the stages and order of a process. So, think of this as a representation of the order of activities of the
process and the sequence in which they are performed.
1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment
The name 'prescriptive' is given because the model prescribes a set of activities, actions, tasks, quality assurance, and change control mechanisms for every project.
• The waterfall model is also called the 'linear sequential model' or 'classic life cycle model'.
• In this model, each phase is fully completed before the beginning of the next phase.
• This model is used for the small projects.
• In this model, feedback is taken after each phase to ensure that the project is on the right path.
• Testing part starts only after the development is complete.
NOTE: The description of the phases of the waterfall model is the same as that of the generic process model.
• The waterfall model is simple and easy to understand, implement, and use.
• All the requirements are known at the beginning of the project, hence it is easy to manage.
• It avoids overlapping of phases because each phase is completed at once.
• This model works for small projects because the requirements are understood very well.
• This model is preferred for those projects where the quality is more important as compared to
the cost of the project.
Disadvantages of the waterfall model
• This model is not good for complex and object oriented projects.
• It is a poor model for long projects.
• The problems with this model are uncovered, until the software testing.
• The amount of risk is high.
• The incremental model combines the elements of waterfall model and they are applied in an
iterative fashion.
• The first increment in this model is generally a core product.
• Each increment builds the product and submits it to the customer for any suggested
modifications.
• The next increment implements the customer's suggestions and adds additional requirements to the previous increment.
• This process is repeated until the product is finished.
For example, the word-processing software is developed using the incremental model.
• This model is flexible because the cost of development is low and initial product delivery is
faster.
• It is easier to test and debug during the smaller iteration.
• Working software is generated quickly and early in the software life cycle.
• The customers can respond to its functionalities after every increment.
Disadvantages of the incremental model
• The cost of the final product may cross the cost estimated initially.
• This model requires a very clear and complete planning.
• The planning of design is required before the whole system is broken into small increments.
• The customer's demands for additional functionality after every increment can cause problems for the system architecture.
3. RAD model
1. Business Modeling
• Business modeling consists of the flow of information between various functions in the project.
• For example, what type of information is produced by every function, and which functions handle that information?
• A complete business analysis should be performed to get the essential business information.
2. Data modeling
• The information from the business modeling phase is refined into a set of data objects that are essential for the business.
• The attributes of each object are identified, and the relationships between objects are defined.
3. Process modeling
• The data objects defined in the data modeling phase are changed to fulfil the information flow to
implement the business model.
• The process description is created for adding, modifying, deleting or retrieving a data object.
4. Application generation
• The prototypes are independently tested after each iteration so that the overall testing time is
reduced.
• The data flow and the interfaces between all the components are fully tested. Hence, most of the programming components have already been tested.
When concerns cut across multiple system functions, features, and information,
they are often referred to as crosscutting concerns. Aspectual requirements define
those crosscutting concerns that have an impact across the software architecture.
Aspect-oriented software development (AOSD), often referred to as aspect-
oriented programming (AOP), is a relatively new software engineering paradigm
that provides a process and methodological approach for defining, specifying,
designing, and constructing aspects.
A distinct aspect-oriented process has not yet matured. However, it is likely that
such a process will adopt characteristics of both evolutionary and concurrent
process models. The evolutionary model is appropriate as aspects are identified
and then constructed. The parallel nature of concurrent development is essential
because aspects are engineered independently of localized software components
and yet, aspects have a direct impact on these components. It is essential to
instantiate asynchronous communication between the software process activities
applied to the engineering and construction of aspects and components.
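As a loose, illustrative sketch (not an AOP framework; the decorator and function names are invented), a crosscutting concern such as logging can be defined separately from the business components and then applied to them, which is roughly the effect aspects aim for:

import functools

def logged(func):
    # The logging concern is written once, apart from the business logic,
    # and then "woven" onto components - loosely analogous to applying an aspect.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def place_order(item):
    return f"order placed for {item}"

print(place_order("sensor kit"))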
Agile model
Scrum: One of the most popular agile models, Scrum consists of iterations called
sprints. Each sprint is between 2 and 4 weeks long and is preceded by planning. You
cannot make changes after the sprint activities have been defined.
Kanban: Kanban focuses on visualizations, and if any iterations are used they are
kept very short. You use the Kanban Board that has a clear representation of all
project activities and their numbers, responsible people, and progress.
Estimating project duration is like building a life plan. You know where you want to get, you know
something will likely go wrong, but you still need to establish a timeline to reach your goals.
If there is one thing to know about estimating project duration, it will have to be this: there are lots of
traps to look out for.
But with proper preparation you can make this into an easy experience.
Elapsed time is more about the progress — it looks at how long it took from the moment you
assigned someone to a project to the moment they completed it. Eventually, it will also show how
effectively you're working — are you going to meet the promised deadlines?
Top-down estimating
PMBOK explains top-down estimating, also known as analogous estimating, as “a technique for
estimating duration or cost of an activity or a project using historical data from a similar activity or a
project.”
In other words, here you need to look at your historical data and compare the new project to
something similar that has already been completed, assuming that the new project will take
approximately as much time and resources to complete.
Bottom up estimating
With bottom-up estimating, you go from a detailed to a general view, from task to project. The rule is simple: if you cannot make an accurate estimate of a project, dissect it into units which you can estimate properly, like milestones or even individual tasks.
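As a minimal sketch (the task names and hour figures below are invented for the example), the bottom-up estimate is simply the sum of the unit estimates:

# Estimate each unit (milestone or task) separately, then sum them
# to obtain the bottom-up estimate for this slice of the project.
task_estimates_hours = {
    "design login screen": 6,
    "implement login API": 10,
    "write integration tests": 4,
}
project_estimate_hours = sum(task_estimates_hours.values())
print(project_estimate_hours)  # 20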
Parametric estimating
Parametric estimating is basically taking analogous estimating to another level. You also look at historical data, only you get more accurate with the numbers by introducing statistical relationships.
That is to say that you need to find a comparable project in your historical data and then customize
calculations based on the numerical parameters of your new project.
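As a small worked example (the numbers are invented for illustration): if historical data shows that a comparable project took roughly 12 hours per report screen, and the new project needs 5 similar screens, the parametric estimate is 5 × 12 = 60 hours, which can then be adjusted by a complexity factor for the new project.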
Three-point estimating
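Three-point estimating combines an optimistic (O), a most likely (M), and a pessimistic (P) estimate into a single figure; a common weighting is the PERT formula E = (O + 4M + P) / 6. As a worked example with invented numbers: if O = 20 hours, M = 40 hours, and P = 60 hours, then E = (20 + 160 + 60) / 6 = 40 hours.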
As a result, the effort will be 40 hours, but the duration will be longer. For example, if you decide to
devote 5 hours a day to the project it will take you 8 days to complete it (your duration), but if your
colleague comes to help and takes half of the workload off your shoulders, the project will take only
4 days.
But whether you will be working alone or with reinforcements, here are some things you could do to
get more efficient.
Here you will be able to see when your resources are free, full, or overbooked and by how much.
This is your best bet to juggle resources and follow the "less is more" concept.
With Runn's resource scheduling, all you need to do is click, drag, and drop workload when you
need to allocate it to someone specific. You can extend, shorten, transfer, and split work among your
resources to accommodate everyone involved.
With a contingency reserve, you can be prepared for whatever happens and still keep your project
going according to plan (even if it's not the most optimistic one).
4. Don't underestimate
People have a natural tendency to be overoptimistic. In project management, this can lead to project
failure.
Have you ever moved houses? It never takes the time you expect it to take; there is always something causing one delay after another, and you end up sleeping on a mattress for two weeks.
In a way, projects can be the same. This is why leaving space for some wiggle room, scope creep,
ad hoc requests, and the like can help you realistically estimate project duration.
The broad spectrum of tasks and techniques that lead to an understanding of requirements is called
requirements engineering. Requirements engineering is a major software engineering action that begins during the communication activity
and continues into the modeling activity. It must be adapted to the needs of the process, the project, the product, and the people doing the work. It
provides the appropriate mechanism for understanding what the customer wants, analyzing
need, assessing feasibility, negotiating a reasonable solution, specifying the solution
unambiguously, validating the specification, and managing the requirements as they are transformed into an operational system.
a) Inception. In general, most projects begin when a business need is identified or a potential
new market or service is discovered. Stakeholders from the business community define a
business case for the idea, try to identify the breadth and depth of the market, do a rough
feasibility analysis, and identify a working description of the project's scope.
At project inception, you establish a basic understanding of the problem, the people who want a
solution, the nature of the solution that is desired, and the effectiveness of preliminary
communication and collaboration between the other stakeholders and the software team.
b) Elicitation. Ask the customer what the objectives for the system or product are, what is to be
accomplished, how the system or product fits into the needs of the business, and finally, how the
system or product is to be used on a day-to-day basis. Eliciting requirements is difficult for several reasons:
• Problems of scope. The boundary of the system is ill-defined or the customers/users specify
unnecessary technical detail that may confuse, rather than clarify, overall system objectives.
• Problems of understanding. The customers/users are not completely sure of what is needed,
have a poor understanding of the capabilities and limitations of their computing environment,
don’t have a full understanding of the problem domain, have trouble communicating needs to the
system engineer, omit information that is believed to be “obvious,” specify requirements that
conflict with the needs of other customers/users, or specify requirements that are ambiguous or
untestable.
• Problems of volatility. The requirements change over time. To help overcome these
problems, requirements gathering must be approached in an organized manner.
c) Elaboration. The information gathered during inception and elicitation is expanded and refined during elaboration, producing
a refined requirements model that identifies various aspects of software function, behavior, and
information.
Elaboration is driven by the creation and refinement of user scenarios that describe how the end
user (and other actors) will interact with the system. Each user scenario is parsed to extract
analysis classes—business domain entities that are visible to the end user. The attributes of
each analysis class are defined, and the services that are required by each class are identified.
The relationships and collaboration between classes are identified, and a variety of
supplementary diagrams are produced.
d) Negotiation. It isn't unusual for customers and users to ask for more than can be achieved, given limited business resources. It's also relatively
common for different customers or users to propose conflicting requirements, arguing that their version is essential for their special needs.
You have to reconcile these conflicts through a process of negotiation. Customers, users, and
other stakeholders are asked to rank requirements and then discuss conflicts
in priority. Using an iterative approach that prioritizes requirements, assesses their cost and risk,
and addresses internal conflicts, requirements are eliminated, combined, and/or modified so that
each party achieves some measure of satisfaction.
e) Specification. A specification can be a written document, a set of graphical models, a formal mathematical model, a collection of
usage scenarios, a prototype, or any combination of these. Some suggest that a “standard
template” should be developed and used for a specification, arguing that this leads to
requirements that are presented in a consistent and therefore more understandable manner.
For large systems, a written document, combining natural language descriptions and graphical
models may be the best approach.
In an ideal setting, stakeholders and software engineers work together on the same team. In such cases, requirements engineering is simply a matter of conducting meaningful conversations with colleagues who are well-known members of the team. But in reality things are often quite different.
We discuss the steps required to establish the groundwork for an understanding of software
requirements—to get the project started in a way that will keep it moving forward toward a
successful solution.
2.2.1 Identifying Stakeholders: A stakeholder is “anyone who benefits in a direct or indirect way
from the system which is being developed.” The usual stakeholders are: business operations
managers, product managers, marketing people, internal and external customers, end users,
consultants, product engineers, software engineers, support and maintenance engineers. Each
stakeholder has a different view of the system, achieves different benefits when the system is
successfully developed, and is open to different risks if the development effort should fail.
2.2.2 Recognizing Multiple Viewpoints: Because many different stakeholders exist, the
requirements of the system will be explored from many different points of view. Each of these
may conflict with one another. You should categorize all stakeholder information in a way that
will allow decision makers to choose an internally consistent set of requirements for the system.
2.2.3 Working toward Collaboration: If five stakeholders are involved in a software project,
you may have five different opinions about the proper set of requirements. Customers and other stakeholders must collaborate among themselves and with software engineering practitioners if a successful
system is to result. The job of a requirements engineer is to identify areas of commonality and
areas of conflict or inconsistency. Collaboration does not necessarily mean that requirements
are defined by committee. In many cases, stakeholders collaborate by providing their view of
requirements, but a strong “project champion” may make the final decision about which requirements make the cut.
2.2.4 Asking the First Questions: Questions asked at the inception of the project should be
“context free.” The first set of context-free questions focuses on the customer and other
stakeholders, the overall project goals and benefits. You might ask:
These questions help to identify all stakeholders who will have interest in the software to be
built.
In addition, the questions identify the measurable benefit of a successful implementation and possible alternatives to custom software development.
The next set of questions enables you to gain a better understanding of the problem and allows the customer to voice his or her perceptions about a solution:
• How would you characterize “good” output that would be generated by a successful
solution?
• Can you show me (or describe) the business environment in which the solution will be
used?
• Will special performance issues or constraints affect the way the solution is approached?
The final set of questions focuses on the effectiveness of the communication activity itself.
• Are you the right person to answer these questions? Are your answers “official”?
These questions will help to “break the ice” and initiate the communication that is essential to
successful elicitation.
Feasibility Studies
SOFTWARE TESTING
Software testing can be stated as the process of verifying and validating whether a software or
application is bug-free, meets the technical requirements as guided by its design and development, and
meets the user requirements effectively and efficiently by handling all the exceptional and boundary
cases. The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy, and usability.
White Box Testing: White box testing is a testing method based on close examination of
procedural details. Hence it is also called glass box testing. In white box testing, test cases are
derived for:
1. Examining all the independent paths within a module.
2. Exercising all the logical paths with their true and false sides.
3. Executing all the loops within their boundaries and within operational bounds.
4. Exercising internal data structures to ensure their validity.
2. Certain assumptions about the flow of control and data may lead the programmer to make design
errors. To uncover errors on logical paths, white box testing is a must.
3. There may be certain typographical errors that remain undetected even after syntax and
type checking mechanisms. Such errors can be uncovered during white box testing.
Cyclomatic Complexity
Cyclomatic complexity is a software metric that gives the quantitative measure of logical
complexity of the program. The Cyclomatic complexity defines the number of independent
paths in the basis set of the program that provides the upper bound for the number of tests
that must be conducted to ensure that all the statements have been executed at least once. The
cyclomatic complexity can be computed by one of the following ways.
1. The number of regions of the flow graph correspond to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G, is defined as V(G) = E - N + 2,
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. V(G) = P + 1,
where P is the number of predicate nodes contained in the flow graph G (see the sketch below).
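For illustration, a minimal Python sketch that applies V(G) = E - N + 2 to a small hypothetical flow graph (a single if/else decision that rejoins at the exit node); the graph is an assumption made up for this example.

    # Cyclomatic complexity V(G) = E - N + 2 for a flow graph given as an adjacency list.
    flow_graph = {
        0: [1, 2],   # predicate node: True and False edges of an if/else
        1: [3],      # "then" branch
        2: [3],      # "else" branch
        3: [],       # exit node
    }

    nodes = len(flow_graph)
    edges = sum(len(successors) for successors in flow_graph.values())
    print(edges - nodes + 2)   # 4 - 4 + 2 = 2 independent paths (also P + 1 with P = 1)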
Structural Testing
1. Structural testing is sometimes called white-box testing.
2. In structural testing, test cases are derived from the program structure. Hence
knowledge of the program is used to identify additional test cases.
3. Objective of structural testing is to exercise all program statements.
Condition Testing
Condition testing is used to test the logical conditions in a program module. A
condition can be a Boolean condition or a relational expression. A condition is incorrect in the
following situations:
1. Boolean operator is incorrect, missing, or extra.
2. Boolean variable is incorrect.
3. Boolean parenthesis may be missing, incorrect or extra.
4. Error in relational operator.
5. Error in arithmetic expression.
Condition testing focuses on testing each condition in the program.
Branch testing is a condition testing strategy in which, for a compound condition, every
true and false branch is tested (a small sketch follows below).
Domain testing is a testing strategy in which a relational expression is tested using three or
four tests.
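As an illustration of branch testing, here is a minimal Python sketch for a hypothetical compound condition; the eligibility function and its thresholds are made up for this example.

    # Hypothetical unit with a compound condition: (age >= 18) and (score > 60).
    def is_eligible(age, score):
        return age >= 18 and score > 60

    # Branch testing: force the true and false outcome of each sub-condition.
    assert is_eligible(20, 70) is True     # both sub-conditions true
    assert is_eligible(16, 70) is False    # first sub-condition false
    assert is_eligible(20, 50) is False    # second sub-condition false
    assert is_eligible(16, 50) is False    # both sub-conditions false
    print("All branch test cases passed")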
Basis Path Testing
• White-box testing technique proposed by Tom McCabe
• Enables the test case designer to derive a logical complexity measure of a procedural design
• Uses this measure as a guide for defining a basis set of execution paths
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the
program at least one time during testing
Flow Graph Notation
• A circle in a graph represents a node, which stands for a sequence of one or more procedural
statements
• A node containing a simple conditional expression is referred to as a predicate node
– Each compound condition in a conditional expression containing one or more Boolean operators
(e.g., and, or) is represented by a separate predicate node
– A predicate node has two edges leading out from it (True and False)
• An edge, or a link, is an arrow representing flow of control in a specific direction
– An edge must start and terminate at a node
– An edge does not intersect or cross over another edge
• Areas bounded by a set of edges and nodes are called regions
• When counting regions, include the area outside the graph as a region, too
Independent Program Paths
• Defined as a path through the program from the start node until the end node that introduces at least
one new set of processing statements or a new condition (i.e., new nodes)
• Must move along at least one edge that has not been traversed before by a previous path
• Basis set for an example flow graph (the graph itself is not reproduced in these notes)
– Path 1: 0-1-11
– Path 2: 0-1-2-3-4-5-10-1-11
– Path 3: 0-1-2-3-6-8-9-10-1-11
– Path 4: 0-1-2-3-6-7-9-10-1-11
• The number of paths in the basis set is determined by the cyclomatic complexity
Deriving the Basis Set and Test Cases
1) Using the design or code as a foundation, draw a corresponding flow graph
2) Determine the cyclomatic complexity of the resultant flow graph
3) Determine a basis set of linearly independent paths
4) Prepare test cases that will force execution of each path in the basis set (illustrated in the sketch below)
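A hedged Python sketch of these four steps for a small hypothetical function: it contains two predicate nodes (a loop condition and an if), so V(G) = P + 1 = 3 and the basis set needs three test cases.

    # Steps 1-2: the function has two predicates, so cyclomatic complexity is 3.
    def count_negatives(values):
        count = 0
        i = 0
        while i < len(values):   # predicate 1: loop condition
            if values[i] < 0:    # predicate 2: sign check
                count += 1
            i += 1
        return count

    # Steps 3-4: one test case per independent path in the basis set.
    assert count_negatives([]) == 0      # path: loop body never entered
    assert count_negatives([5]) == 0     # path: loop entered, if-branch false
    assert count_negatives([-3]) == 1    # path: loop entered, if-branch true
    print("Basis path test cases passed")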
Black-box testing is a type of software testing in which the tester is not concerned with the internal
knowledge or implementation details of the software, but rather focuses on validating the functionality
based on the provided specifications or requirements.
In a decision table, each column corresponds to a rule which will become a test case for testing. So a
table with four rules yields 4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a software
system.
6. Compatibility testing – The test case result not only depends on the product but is also on the
infrastructure for delivering functionality. When the infrastructure parameters are changed it is still
expected to work properly. Some parameters that generally affect the compatibility of software are:
1. Processor (Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
Black Box Testing Types
Unit Testing is a software testing technique by means of which individual units of software, i.e., groups of
computer program modules, usage procedures, and operating procedures, are tested to determine
whether they are suitable for use or not. It is a testing method in which every independent module is
tested by the developer to determine if there is an issue; it is correlated with the functional
correctness of the independent modules. An individual component may be either an individual function
or a procedure. Unit testing of the software product is carried out during the development of an application.
In the SDLC or V-Model, unit testing is the first level of testing, done before integration testing. It is
usually performed by developers, although, due to the reluctance of developers to test, quality
assurance engineers also do unit testing.
Integration testing is the process of testing the interface between two software units or modules. It
focuses on determining the correctness of the interface. The purpose of integration testing is to expose
faults in the interaction between integrated units. Once all the modules have been unit tested, integration
testing is performed.
Integration testing is a software testing technique that focuses on verifying the interactions and data
exchange between different components or modules of a software application. The goal of integration
testing is to identify any problems or bugs that arise when different components are combined and
interact with each other. Integration testing is typically performed after unit testing and before system
testing. It helps to identify and resolve integration issues early in the development cycle, reducing the
risk of more severe and costly problems later on.
Integration testing can be done module by module, following a proper sequence so that no integration
scenario is missed. The major focus of integration testing is exposing defects at the time of interaction
between the integrated units.
Integration test approaches – There are four types of integration testing approaches. Those
approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the modules
are combined and the functionality is verified after the completion of individual module testing. In simple
words, all the modules of the system are simply put together and tested. This approach is practicable
only for very small systems. If an error is found during the integration testing, it is very difficult to localize
the error as the error may potentially belong to any of the modules being integrated. So, errors
reported during big bang integration testing are very expensive to fix.
Big-Bang integration testing is a software testing approach in which all components or modules of a
software application are combined and tested at once. This approach is typically used when the software
components have a low degree of interdependence or when there are constraints in the development
environment that prevent testing individual components. The goal of big-bang integration testing is to
verify the overall functionality of the system and to identify any integration problems that arise when the
components are combined. While big-bang integration testing can be useful in some situations, it can
also be a high-risk approach, as the complexity of the system and the number of interactions between
components can make it difficult to identify and diagnose problems.
Advantages:
1. It is convenient for small systems.
2. Simple and straightforward approach.
3. Can be completed quickly.
4. Does not require a lot of planning or coordination.
5. May be suitable for small systems or projects with a low degree of interdependence between
components.
Disadvantages:
1. There will be quite a lot of delay because you would have to wait for all the modules to be
integrated.
2. High-risk critical modules are not isolated and tested on priority since all modules are tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
7. May not provide enough visibility into the interactions and data exchange between components.
8. This can result in a lack of confidence in the system’s stability and reliability.
9. This can lead to decreased efficiency and productivity.
10. This may result in a lack of confidence in the development team.
11. This can lead to system failure and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, the lowest-level modules are tested first and then
combined with higher-level modules until all modules are tested. The primary purpose of this integration testing is that each
subsystem tests the interfaces among the various modules making up the subsystem. This integration
testing uses test drivers to drive and pass appropriate data to the lower-level modules.
Advantages:
• In bottom-up testing, no stubs are required.
• A principal advantage of this integration testing is that several disjoint subsystems can be tested
simultaneously.
• It is easy to create the test conditions.
• Best for applications that use a bottom-up design approach.
• It is easy to observe the test results.
Disadvantages:
• Driver modules must be produced.
• In this testing, complexity arises when the system is made up of a large number of small
subsystems.
• Until the higher-level modules are developed, no working model of the system can be demonstrated.
3. Top-Down Integration Testing – In top-down integration testing, stubs are used to simulate
the behaviour of the lower-level modules that are not yet integrated. Testing
takes place from top to bottom: high-level modules are tested first, then lower-level modules, and
finally the lower-level modules are integrated with the high-level ones to ensure the system works as intended.
Advantages:
• Separately debugged module.
• Few or no drivers needed.
• It is more stable and accurate at the aggregate level.
• Easier isolation of interface errors.
• In this, design defects can be found in the early stages.
Disadvantages:
• Needs many Stubs.
• Modules at lower levels are tested inadequately.
• It is difficult to observe the test output.
• Designing stubs can be difficult.
4. Mixed Integration Testing – Mixed integration testing is also called sandwich integration testing.
It follows a combination of the top-down and bottom-up testing approaches. In the top-
down approach, testing can start only after the top-level modules have been coded and unit tested. In the
bottom-up approach, testing can start only after the bottom-level modules are ready. This sandwich or
mixed approach overcomes this shortcoming of the top-down and bottom-up approaches. It is also called
hybrid integration testing. Both stubs and drivers are used in mixed integration testing.
Advantages:
• Mixed approach is useful for very large projects having several sub projects.
• This Sandwich approach overcomes this shortcoming of the top-down and bottom-up approaches.
• Parallel test can be performed in top and bottom layer tests.
Disadvantages:
• Mixed integration testing has a very high cost because one part follows a top-down approach
while another part follows a bottom-up approach.
• This integration testing is not suitable for smaller systems with tight interdependence between
the different modules.
System test falls under the black box testing category of Software testing.
White box testing is the testing of the internal workings or code of a software application.
In contrast, black box or System Testing is the opposite. System test involves the external
workings of the software from the user’s perspective.
• Unit testing performed on each module or block of code during development. Unit
Testing is normally done by the programmer who writes the code.
• Integration testing done before, during and after integration of a new module into the
main software package. This involves testing of each individual code module. One
piece of software can contain several modules which are often created by several
different programmers. It is crucial to test each module’s effect on the entire program
model.
• System testing done by a professional testing agent on the completed software
product before it is introduced to the market.
• Acceptance testing – beta testing of the product done by the actual end users.
1. Usability Testing – mainly focuses on the user’s ease to use the application,
flexibility in handling controls and ability of the system to meet its objectives
2. Load Testing – is necessary to know that a software solution will perform under
real-life loads.
3. Regression Testing – involves testing done to make sure none of the changes
made over the course of the development process have caused new bugs. It also
makes sure no old bugs appear from the addition of new software modules over
time.
4. Recovery Testing – is done to demonstrate a software solution is reliable,
trustworthy and can successfully recoup from possible crashes.
5. Migration Testing – is done to ensure that the software can be moved from older
system infrastructures to current system infrastructures without any issues.
6. Functional Testing – Also known as functional completeness testing, Functional
Testing involves trying to think of any possible missing functions. Testers might
make a list of additional functionalities that a product could have to improve it during
functional testing.
7. Hardware/Software Testing – IBM refers to Hardware/Software testing as “HW/SW
Testing”. This is when the tester focuses his/her attention on the interactions
between the hardware and software during system testing.
• Who the tester works for – This is a major factor in determining the types of system
testing a tester will use. Methods used by large companies are different than that
used by medium and small companies.
• Time available for testing – Ultimately, all 50 testing types could be used. Time is
often what limits us to using only the types that are most relevant for the software
project.
• Resources available to the tester – Of course some testers will not have the
necessary resources to conduct a testing type. For example, if you are a tester
working for a large software development firm, you are likely to have expensive
automated testing software not available to others.
• Software Tester’s Education- There is a certain learning curve for each type of
software testing available. To use some of the software involved, a tester has to
learn how to use it.
• Testing Budget – Money becomes a factor not just for smaller companies and
individual software developers but large companies as well.
Component testing is one of the most frequent black box testing types and is performed by the QA team.
There will be a test strategy and a test plan for component testing, in which
each part of the software or application is considered individually. For
each of these components a test scenario is defined, which is then broken down
into high-level test cases and then low-level detailed test cases with prerequisites.
The usage of the term “Component Testing” varies from domain to domain and
organization to organization.
The most common reason for different perceptions of component testing is the following:
As we know, the Software Test Life Cycle has many test artifacts (documents
made and used during testing activities). Among the many test artifacts, it is the Test Policy and
Test Strategy which define the types of testing and the depth of testing to be performed in a given
project.
Who does Component Testing
Component testing is performed by testers. ‘Unit Testing’ is performed by the developers
where they do the testing of the individual functionality or procedure. After Unit Testing is
performed, the next testing is component testing. Component testing is done by the testers.
Component testing may be done with or without isolation from the rest of the components in the
software or application under test. If it is performed with the other components isolated,
it is referred to as Component Testing in Small.
Example 1: Consider a website which has 5 different web pages; testing each
webpage separately, in isolation from the other components, is referred to as Component
Testing in Small.
Example 2: Consider the home page of the guru99.com website which has many
components like
Home, Testing, SAP, Web, Must Learn!, Big Data, Live Projects, Blog, etc.
Similarly, any software is made of many components, and every component will have
its own subcomponents. Testing each module mentioned in example 2 separately, without
considering integration with other components, is referred to as Component Testing in Small.
Component testing done without isolation of the other components in the software or application
under test is referred to as Component Testing in Large.
The developer has developed the component B and wants it tested. But in order
to completely test the component B, few of its functionalities are dependent on component
A and few on component C.
Functionality Flow: A -> B -> C, which means there is a dependency on B from both A and C;
the stub is the called function and the driver is the calling function.
But component A and component C have not been developed yet. In that case, to test
component B completely, we can replace component A and component C by a driver and a stub
as required. So basically, components A and C are replaced by a driver and a stub,
which act as dummy objects until they are actually developed.
• Stub: A stub is called from the software component to be tested; in the diagram referred to
above (not reproduced here), ‘Stub’ is called by Component A.
• Driver: A driver calls the component to be tested; in that diagram,
‘Component B’ is called by the Driver. A small code sketch of this arrangement follows below.
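A minimal Python sketch of this arrangement, assuming hypothetical components A, B, and C where only B exists: the stub stands in for the component that B calls, and the driver stands in for the component that would call B.

    # Component B is the unit under test; A and C have not been developed yet.
    def stub_for_component_c(amount):
        # Stub: dummy stand-in for component C, returning a canned answer.
        return True

    def component_b(amount):
        # The real component under test; it depends on C for approval.
        if amount <= 0:
            return "rejected"
        return "approved" if stub_for_component_c(amount) else "rejected"

    def driver_for_component_a():
        # Driver: dummy stand-in for component A, which would normally call B.
        assert component_b(100) == "approved"
        assert component_b(-5) == "rejected"
        print("Component B tested via stub and driver")

    driver_for_component_a()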
When the user entered valid user-id and password in the text field and click on submit
button, the web page will be navigating to the home page of demo bank website.
So here login page is one component, and the home page is another. Now testing the
functionality of individual pages separately is called component testing.
• Enter invalid user id and verify if any user-friendly warning pop up is shown to the
end user.
• Enter invalid user id and password and click on ‘reset’ and verify if the data entered
in the text fields user-id and password are cleared out.
• Enter the valid user name and password and click on ‘Login’ button.
• Verify if the “Welcome to manager page of guru99 bank” message is being displayed
on the home page.
• Verify if all the links on the left side of the web page are clickable.
• Verify if the manager id is being displayed in the center of the home page.
• Verify the presence of the 3 different images on the home page as per the diagram.
2.5 BUILDING THE REQUIREMENTS MODEL: The intent of the analysis model is to provide a description
of the required informational, functional, and behavioral domains for a computer-based system. The
model changes dynamically as you learn more about the system to be built, and other stakeholders
understand more about what they really require. For that reason, the analysis model is a snapshot of
requirements at any given time.
2.5.1 Elements of the Requirements Model: There are many different ways to look at the requirements
for a computer-based system. Different modes of representation force you to consider requirements
from different viewpoints—an approach that has a higher probability of uncovering omissions,
inconsistencies, and ambiguity.
Scenario-based elements. The system is described from the user’s point of view using a scenario-based
approach. For example, basic use cases and their corresponding use-case diagrams evolve into more
elaborate template-based use cases. Scenario-based elements of the requirements model are often the
first part of the model that is developed. Three levels of elaboration are shown, culminating in a
scenario-based representation.
Class-based elements. Each usage scenario implies a set of objects that are manipulated as an actor
interacts with the system. These objects are categorized into classes—a collection of things that have
similar attributes and common behaviors.
Behavioral elements. The behavior of a computer-based system can have a profound effect on the
design that is chosen and the implementation approach that is applied. Therefore, the requirements
model must provide modeling elements that depict behavior. The state diagram is one method for
representing the behavior of a system by depicting its states and the events that cause the system to
change state. A state is any externally observable mode of behavior. In addition, the state diagram
indicates actions taken as a consequence of a particular event.
2.5.2 Analysis Patterns: Anyone who has done requirements engineering on more than a few software
projects begins to notice that certain problems reoccur across all projects within a specific application
domain. These analysis patterns suggest solutions (e.g., a class, a function, a behavior) within the
application domain that can be reused when modeling many applications. Analysis patterns are
integrated into the analysis model by reference
to the pattern name. They are also stored in a repository so that requirements engineers can use search
facilities to find and apply them. Information about an analysis pattern (and other types of patterns) is
presented in a standard template.
Boehm [Boe98] defines a set of negotiation activities at the beginning of each software process
iteration.
Rather than a single customer communication activity, the following activities are defined:
1. Identification of the system or subsystem's key stakeholders.
2. Determination of the stakeholders' "win conditions."
3. Negotiation of the stakeholders' win conditions to reconcile them into a set of win-win conditions
for all concerned.
2.7 VALIDATING REQUIREMENTS As each element of the requirements model is created, it is examined
for inconsistency, omissions, and ambiguity. The requirements represented by the model are prioritized
by the stakeholders and grouped within requirements packages that will be implemented as software
increments. A review of the requirements model addresses the following questions:
• Is each requirement consistent with the overall objectives for the system/product?
• Have all requirements been specified at the proper level of abstraction? That is, do some
requirements provide a level of technical detail that is inappropriate at this stage?
• Is the requirement really necessary or does it represent an add-on feature that may not be
essential to the objective of the system?
• Is each requirement bounded and unambiguous?
• Does each requirement have attribution? That is, is a source noted for each requirement?
• Do any requirements conflict with other requirements?
Test automation is the process of using automation tools to maintain test data, execute
tests, and analyze test results to improve software quality.
Here is a safe list of test types that can be automated without a doubt.
1. Unit Testing
Unit testing is when you isolate a single unit of your application from the rest of the
software and test its behavior. These tests don’t depend on external APIs, databases,
or anything else.
If you have a function on which you want to perform a unit test and that function uses
some external library or even another unit from the same app, then these resources will
be mocked.
The main purpose of unit testing is to see how each component of your application will
work, without being impacted by anything else. Unit testing is performed during the
development phase and is considered the first level of testing.
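For example, a minimal unit test sketch using Python's built-in unittest and unittest.mock; the convert function and the rate dependency are hypothetical, made up for this illustration.

    import unittest
    from unittest.mock import Mock

    # Hypothetical unit under test: converts an amount using an external rate service.
    def convert(amount, get_rate):
        return round(amount * get_rate(), 2)

    class ConvertTest(unittest.TestCase):
        def test_convert_uses_mocked_rate(self):
            fake_rate = Mock(return_value=1.5)   # the external dependency is mocked
            self.assertEqual(convert(10, fake_rate), 15.0)
            fake_rate.assert_called_once()

    if __name__ == "__main__":
        unittest.main()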
2. Integration Testing
In integration testing, you test how the units are integrated logically and how they work
as a group.
The main purpose of integration testing is to verify how the modules communicate and
behave together and to evaluate the compliance of a system.
3. Smoke Testing
Smoke testing is performed to examine whether the system build is stable or not. In
short, its purpose is to examine if the main functionalities work properly so that testers
can proceed with further testing.
4. Regression Testing
Regression testing checks that a recent change in code doesn’t affect any existing
features of the app in question. In simple terms, it verifies that changes made to the
system did not break any functionality that was working correctly prior to their
implementation.
There are several types of tests that can be automated. Automated testing is when you
configure a script or program to do the same steps as you would perform to manually test the
software.
In the end, the script will perform whatever you instructed it to and it will show you if the
test result is the same as the one that you expected.
Software development engineers in test (SDETs) can create functional and nonfunctional code-based test automation scripts with
tools like Selenium and Appium, among others. The SDET is always accountable for the
code-based testing.
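A hedged sketch of such a code-based script using Selenium with Python; the URL, element locators, and expected page title below are placeholders and would have to match the real application under test.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Placeholder URL and locators: adjust to the application under test.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "user-id").send_keys("test_user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        assert "Home" in driver.title, "Login did not reach the home page"
    finally:
        driver.quit()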
The software developer tester creates unit and build acceptance tests.
Software developers also operate in code-based testing. They also work in UI and UX
tests, which are manual.
Test automation is a perfect solution for common, repetitive, and high-volume testing.
Coordinating and managing testing are now becoming much easier. You can track and
share testing results from a single, centralized location.
This gives you more thorough test coverage, because more testing can be
accomplished. While there is definitely manual work still involved in testing, using
Perfecto improves the accuracy and coverage of testing for teams competing in an
increasingly fast-paced software market.
It is generally seen that a large number of errors occur at the boundaries of the defined
input values rather than the center. It is also known as BVA and gives a selection of test
cases which exercise bounding values.
This black box testing technique complements equivalence partitioning. It is based on the
principle that, if a system works well for these particular values, then it will work well for
all values that lie between the two boundary values.
• If an input condition is restricted between values x and y, then the test cases should
be designed with values x and y as well as values which are above and below x and
y.
• If an input condition takes a large number of values, test cases should be developed
that exercise the minimum and maximum numbers. Here, values just above
and just below the minimum and maximum values are also tested.
• Apply guidelines 1 and 2 to output conditions. It gives an output which reflects the
minimum and the maximum values expected. It also tests the below or above
values.
• Example: for an input field that accepts values from 1 to 10, BVA would test 0, 1, 2, 9, 10, and 11 (a small sketch follows below).
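A minimal Python sketch that generates and checks boundary values for a field accepting 1 to 10; the validation rule is a stand-in for the real system under test.

    # Boundary value analysis for an input field that accepts values from 1 to 10.
    def boundary_values(minimum, maximum):
        # Values at, just below, and just above each boundary.
        return sorted({minimum - 1, minimum, minimum + 1,
                       maximum - 1, maximum, maximum + 1})

    def accepts(value):
        return 1 <= value <= 10   # hypothetical validation rule under test

    for value in boundary_values(1, 10):
        print(value, "accepted" if accepts(value) else "rejected")
    # 0 and 11 should be rejected; 1, 2, 9, and 10 should be accepted.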
Equivalence Class Partitioning
Equivalence Class Partitioning allows you to divide a set of test conditions into partitions whose
members can be considered the same. This software testing method divides the input domain of a
program into classes of data from which test cases should be designed.
The concept behind this test case design technique is that a test case using a representative
value of a class is equivalent to a test of any other value of the same class. It allows you to
identify valid as well as invalid equivalence classes.
Example: suppose the valid input ranges are 1 to 10 and 20 to 30.
Hence there are five equivalence classes:
below 1 (invalid)
1 to 10 (valid)
11 to 19 (invalid)
20 to 30 (valid)
above 30 (invalid)
You select values from each class, i.e.,
1. Any number greater than 10 entered in the Order Pizza field (say 11) is considered invalid.
2. Any number less than 1, that is 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any three-digit number, say -100, is invalid (a small sketch of these partitions follows below).
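A minimal Python sketch of these partitions, picking one representative value from each class; the validation rule is a stand-in for the real Order Pizza field.

    # Equivalence classes for an order quantity field that accepts 1 to 10.
    partitions = {
        "below range (invalid)": 0,
        "in range (valid)": 5,
        "above range (invalid)": 11,
    }

    def accepts(quantity):
        return 1 <= quantity <= 10   # hypothetical validation rule under test

    for name, representative in partitions.items():
        result = "accepted" if accepts(representative) else "rejected"
        print(f"{name}: {representative} -> {result}")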
We cannot test all the possible values because, if we did, the number of test cases would be
more than 100. To address this problem, we use the equivalence partitioning hypothesis, where
we divide the possible values of tickets into groups or sets, as shown below, where the
system behavior can be considered the same.
The divided sets are called Equivalence Partitions or Equivalence Classes. Then we pick
only one value from each partition for testing. The hypothesis behind this technique is that
if one condition/value in a partition passes all others will also pass. Likewise, if one
condition in a partition fails, all other conditions in that partition will fail.
Boundary Value Analysis– in Boundary Value Analysis, you test boundaries between
equivalence partitions
In our earlier equivalence partitioning example, instead of checking one value for each
partition, you will check the values at the partitions like 0, 1, 10, 11 and so on. As you may
observe, you test values at both valid and invalid boundaries. Boundary Value Analysis
is also called range checking.
Equivalence partitioning and boundary value analysis(BVA) are closely related and can be
used together at all levels of testing.
The first task is to identify functionalities where the output depends on a combination of
inputs. If there is a large set of input combinations, divide it into smaller subsets, which
are helpful for managing a decision table.
For every function, you need to create a table and list down all types of combinations of
inputs and its respective outputs. This helps to identify a condition that is overlooked by the
tester.
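For example, a minimal Python sketch of a decision table for a hypothetical login function; each rule (column) of the table becomes one test case, which matches the four test cases mentioned earlier.

    # Decision table: each rule combines the two input conditions into one test case.
    decision_table = [
        # (valid_user, valid_password, expected_outcome)
        (True,  True,  "login succeeds"),
        (True,  False, "error message shown"),
        (False, True,  "error message shown"),
        (False, False, "error message shown"),
    ]

    def login(valid_user, valid_password):   # hypothetical system under test
        return "login succeeds" if valid_user and valid_password else "error message shown"

    for valid_user, valid_password, expected in decision_table:
        assert login(valid_user, valid_password) == expected
    print("All 4 decision-table rules passed")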
• State transition testing should be used when the testing team is testing the application for a
limited set of input values.
• This test case design technique should be used when the testing team wants to
test the sequence of events which happen in the application under test.
Example:
In the following example, if the user enters a valid password in any of the first three
attempts, the user will be able to log in successfully. If the user enters an invalid password
in the first or second try, the user will be prompted to re-enter the password. When the user
enters the password incorrectly a 3rd time, action is taken and the account will be blocked.
State Transition Diagram
In this diagram, when the user gives the correct PIN, he or she is moved to the Access
Granted state. A state table is then created based on the diagram (neither the diagram nor the
table is reproduced in these notes); a small sketch of the same example follows below.
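A minimal Python sketch of the login/PIN example as a state transition test, assuming the account is blocked after the third consecutive wrong attempt.

    # States: "awaiting PIN", "access granted", "account blocked".
    # Events: correct PIN entered, incorrect PIN entered.
    class Login:
        def __init__(self):
            self.failed_attempts = 0
            self.state = "awaiting PIN"

        def enter_pin(self, correct):
            if self.state in ("access granted", "account blocked"):
                return self.state
            if correct:
                self.state = "access granted"
            else:
                self.failed_attempts += 1
                if self.failed_attempts >= 3:
                    self.state = "account blocked"
            return self.state

    # Test cases derived from the state transition table.
    login = Login()
    assert login.enter_pin(False) == "awaiting PIN"      # 1st wrong attempt
    assert login.enter_pin(False) == "awaiting PIN"      # 2nd wrong attempt
    assert login.enter_pin(False) == "account blocked"   # 3rd wrong attempt blocks
    assert Login().enter_pin(True) == "access granted"   # correct PIN on 1st try
    print("State transition test cases passed")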
Error Guessing
Error Guessing is a software testing technique based on guessing the error which can
prevail in the code. The technique is heavily based on the experience where the test
analysts use their experience to guess the problematic part of the testing application.
Hence, the test analysts must be skilled and experienced for better error guessing.
The technique builds a list of possible errors or error-prone situations. The tester then writes
test cases to expose those errors. To design test cases based on this software testing
technique, the analyst can use past experience to identify the conditions:
• The tester should use previous experience of testing similar applications
• Understanding of the system under test
• Knowledge of typical implementation errors
• Remember previously troubled areas
• Evaluate Historical data & Test results
The unified process uses a use-case-driven approach that follows a set of actions performed by one or more entities. A
use case refers to the process of the team performing the development work from the functional
requirements. The functional requirements are made from the list of requirements that were
specified by the client. For example, an online learning management system can be specified in
terms of use cases such as "add a course," "delete a course," "pay fees," and so on.
An iterative and incremental approach means that the product will be developed in multiple
phases. During these phases, the developers evaluate and test.
Phases
We can represent a unified process model as a series of cycles. Each cycle ends with the
release of a new system version for the customers. We have four phases in every cycle:
• Inception
• Elaboration
• Construction
• Transition
The phases of the unified process
Inception
The main goal of this phase involves delimiting the project scope. This is where we define why
we are making this product in the first place. It should have the following:
Elaboration
We build the system given the requirements, cost, and time constraints and all the risks
involved. It should include the following:
Construction
This phase is where the development, integration, and testing take place. We build the complete
architecture in this phase and hand the final documentation to the client.
Transition
This phase involves the deployment, multiple iterations, beta releases, and improvements of the
software. The users will test the software, which may raise potential issues. The development
team will then fix those errors.
This method allows us to deal with the changing requirements throughout the development
period. The unified process model has various applications which also makes it complex in
nature. Therefore, it's most suitable for smaller projects and should be implemented by a team
of professionals.
System Requirements
System requirements are the configuration that a system must have in order for a
hardware or software application to run smoothly and efficiently. Failure to meet
these requirements can result in installation problems or performance problems.
The former may prevent a device or application from getting installed, whereas
the latter may cause a product to malfunction or perform below expectation or
even to hang or crash.
For packaged products, system requirements are often printed on the packaging.
For downloadable products, the system requirements are often indicated on the
download page. System requirements can be broadly classified as functional
requirements, data requirements, quality requirements and constraints. They are
often provided to consumers in complete detail. System requirements often
indicate the minimum and the recommended configuration. The former is the
most basic requirement, enough for a product to install or run, but performance
is not guaranteed to be optimal. The latter ensures a smooth operation.
❏ Lutz (Lutz, 1993) discovered that many failures experienced by users were a consequence of
specification errors and omissions that could not be detected by formal system specification.
❏ System users rarely understand formal notations so they cannot read the formal specification
directly to find errors and omissions.
❏ Program proofs are large and complex, so, like large and complex programs, they usually
contain errors.
▪ In spite of their disadvantages, formal methods have an important role to play in the development
of critical software systems.
▪ Formal specifications are very effective in discovering specification problems that are the most
common causes of system failure.
▪ Formal verification increases confidence in the most critical components of these systems.
▪ The use of formal approaches is increasing as procurers demand it and as more and more
engineers become familiar with these techniques.
Verification and Validation
Verification and Validation is the process of investigating whether a software system satisfies specifications
and standards and fulfills its required purpose.
Verification:
● Verification is the process of checking that the software achieves its goal without any bugs.
● It is the process of ensuring whether the product being developed is right or not. It verifies whether the
developed product fulfills the requirements that we have.
1. Inspections
2. Reviews
3. Walkthroughs
4. Desk-checking
Validation:
● Validation is the process of checking whether the software product is up to the mark.
● It is the process of checking whether what we are developing is the right product.
● Verification process includes checking of documents, design, code and program whereas the Validation
process includes testing and validation of the actual product.
● Verification does not involve code execution while Validation involves code execution.
● Verification uses methods like reviews, walkthroughs, inspections and deskchecking whereas
Validation uses methods like black box testing, white box testing and non-functional testing.
● Verification checks whether the software conforms to a specification, whereas Validation checks whether
the software meets the requirements and expectations.
● Verification finds the bugs early in the development cycle whereas Validation finds the bugs that
verification can not catch.
● Verification process targets software architecture, design, database, etc. while the Validation process
targets the actual software product.
● Verification is done by the QA team, while Validation is done by the testing team together with the
QA team.
● Verification process comes before validation whereas the Validation process comes after verification.