
Agile model

The agile process model encourages continuous iterations of development and


testing. Each incremental part is developed over an iteration, and each iteration is
designed to be small and manageable so it can be completed within a few weeks.

Each iteration focuses on implementing a small set of features completely. It


involves customers in the development process and minimizes documentation by
using informal communication.

Agile development considers the following:

1. Requirements are assumed to change


2. The system evolves over a series of short iterations
3. Customers are involved during each iteration
4. Documentation is done only when needed

Though agile provides a very realistic approach to software development, it isn’t


great for complex projects. It can also present challenges during transfers as there
is very little documentation. Agile is great for projects with changing requirements.

Phases of Agile model:

1. Requirements gathering

2. Design the requirements

3. Construction/iteration

4. Testing/quality assurance

5. Deployment

6. Feedback

1. Requirements gathering: In this phase, you must define the requirements. You
should explain business opportunities and plan the time and effort needed to build
the project. Based on this information, you can evaluate technical and economic
feasibility.

2. Design the requirements: When you have identified the project, work with
stakeholders to define requirements. You can use the user flow diagram or the
high-level UML diagram to show the work of new features and show how it will
apply to your existing system.

3. Construction/ iteration: When the team defines the requirements, the work
begins. Designers and developers start working on their project, which aims to
deploy a working product. The product will undergo various stages of
improvement, so it includes simple, minimal functionality.

4. Testing: In this phase, the Quality Assurance team examines the product's
performance and looks for bugs.

5. Deployment: In this phase, the team issues a product for the user's work
environment.

6. Feedback: After releasing the product, the last step is feedback. In this, the team
receives feedback about the product and works through the feedback.

Advantages:

1. Frequent delivery.

2. Face-to-face communication with clients.

3. Efficient design that fulfils the business requirement.

4. Changes are acceptable at any time.

5. It reduces total development time.


Disadvantages:

1. Due to the shortage of formal documentation, confusion can arise, and crucial
decisions taken throughout the various phases can be misinterpreted at any time by
different team members.

2. Due to the lack of proper documentation, once the project is completed and the
developers are allotted to another project, maintenance of the finished product can
become difficult.

Some commonly used agile methodologies include:

Extreme Programming:

XP is a lightweight, efficient, low-risk, flexible, predictable, scientific, and fun
way to develop software.

Extreme Programming (XP) was conceived and developed to address the specific
needs of software development by small teams in the face of vague and changing
requirements.

Extreme Programming is one of the Agile software development methodologies. It
provides values and principles to guide team behavior. The team is expected to
self-organize. Extreme Programming provides specific core practices where:

• Each practice is simple and self-complete.

• A combination of practices produces more complex and emergent behavior.


Other process models of Agile Development and Tools

• Crystal

• Scrum

Scrum: Scrum is aimed at sustaining strong collaboration between people
working on complex products whose details are continually changed or added. It is based
upon systematic interactions between three major roles: Scrum Master,
Product Owner, and the Team.

• Scrum Master is a central figure within a project. His principal responsibility is to


eliminate all the obstacles that might prevent the team from working efficiently.

• Product Owner, usually a customer or other stakeholder, is actively involved


throughout the project, conveying the global vision of the product and providing
timely feedback on the job done after every sprint.

• Scrum Team is a cross-functional and self-organizing group of people that is


responsible for the product implementation. It should consist of up to 7 team
members, in order to stay flexible and productive.

Crystal: Crystal is an agile methodology for software development. It places
focus on people over processes, empowering teams to find their own solutions for
each project rather than being constrained by rigid methodologies.
Crystal methods focus on:

• People involved
• Interaction between the teams
• Community
• Skills of people involved
• Their Talents
• Communication between all the teams

Assessing Alternative Architectural Design

Design results in a number of architectural alternatives that are each assessed to determine which is the
most appropriate for the problem to be solved.

There are two different approaches for the assessment of alternative architectural designs.

(1) The first approach uses an iterative method to assess design trade-offs.

(2) The second approach applies a pseudo-quantitative technique for assessing design quality.

An Architecture Trade-Off Analysis Method

• The Software Engineering Institute (SEI) has developed an architecture trade-off analysis method
(ATAM) that establishes an iterative evaluation process for software architectures.
• The design analysis activities that follow are performed iteratively.

1. Collect scenarios : A set of use cases is developed to represent the system from the user’s point of
view.

2. Elicit (Bring out) requirements, constraints, and environment description. This information is
determined as part of requirements engineering and is used to be certain that all stakeholder concerns
have been addressed.

3. Describe the architectural styles/patterns that have been chosen to address the scenarios and
requirements.

The architectural style(s) should be described using one of the following architectural views…

• Module view for analysis of work assignments with components and the degree to which
information hiding has been achieved.
• Process view for analysis of system performance.
• Data flow view for analysis of the degree to which the architecture meets functional
requirements.
4. Evaluate quality attributes : Quality attributes for architectural design assessment include reliability,
performance, security, maintainability, flexibility, testability, portability, reusability, and interoperability.

5. Identify the sensitivity of quality attributes to various architectural attributes for a specific
architectural style. This can be accomplished by making small changes in the architecture and
determining how sensitive a quality attribute, say performance, is to the change. Any attributes that are
significantly affected by variation in the architecture are termed sensitivity points.

6. Critique (Assess) candidate architectures (developed in step 3) using the sensitivity analysis
conducted in step 5.

• The Software Engineering Institute (SEI) describes this approach in the following manner
• Once the architectural sensitivity points have been determined, finding trade-off points is simply
the identification of architectural elements to which multiple attributes are sensitive. For
example, the performance of a client-server architecture might be highly sensitive to the
number of servers (performance increases, within some range, by increasing the number of
servers). . . . The number of servers, then, is a trade-off point with respect to this architecture.

Architectural Complexity

• A useful technique for assessing the overall complexity of a proposed architecture is to consider
dependencies between components within the architecture.
• These dependencies are driven by information/control flow within the system. Zhao suggests
three types of dependencies:
1. Sharing dependencies represent dependence relationships among consumers who
use the same resource or producers who produce for the same consumers. For
example, for two components u and v, if u and v refer to the same global
data, then there exists a shared dependence relationship between u and v.
2. Flow dependencies represent dependence relationships between producers and
consumers of resources.
3. Constrained dependencies represent constraints on the relative flow of control
among a set of activities. For example, for two components u and v, if u and v cannot
execute at the same time (mutual exclusion), then there exists a constrained
dependence relationship between u and v. The sketch below illustrates the three types.
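
To make these dependency types concrete, here is a minimal Python sketch; the component and function names are hypothetical, and only the u/v pattern mirrors the examples above:

import threading

alarm_status = {"armed": False}   # shared global data

def u_arm_system():
    # u writes the shared global data: sharing dependence between u and v
    alarm_status["armed"] = True

def v_report_status():
    # v reads the same global data
    return "armed" if alarm_status["armed"] else "disarmed"

def produce_reading():
    # a producer creates a resource ...
    return {"sensor": "door", "value": 1}

def consume_reading(reading):
    # ... that this consumer uses: flow dependence
    print("logged", reading)

config_lock = threading.Lock()

def u_update_config():
    # u and v below cannot execute at the same time (mutual exclusion):
    # constrained dependence
    with config_lock:
        alarm_status["armed"] = False

def v_run_selftest():
    with config_lock:
        return v_report_status()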
Architectural Description Language
• The architect of a house has a set of standardized tools and notation that allow the design to be
represented in an unambiguous, understandable fashion.
• Although the software architect can draw on Unified Modeling Language (UML) notation, other
diagrammatic forms, and a few related tools, there is a need for a more formal approach to the
specification of an architectural design.
• Architectural description language (ADL) provides a semantics and syntax for describing a
software architecture.
• Hofmann and his colleagues suggest that
i. An ADL should provide the designer with the ability to decompose architectural
components,
ii. Compose individual components into larger architectural blocks,
iii. Represent interfaces (connection mechanisms) between components.
• Once descriptive, language based techniques for architectural design have been established, it is
more likely that effective assessment methods for architectures will be established as the design
evolves.

Automated Static Analysis


● Inspections are one form of static analysis where you examine the program without executing it.

● Inspections are often driven by checklists of errors and heuristics that identify common errors in
different programming languages.

● For some errors and heuristics (an approach to problem solving or self-discovery), it is possible to
automate the process of checking programs against this list, which has resulted in the development of
automated static analyzers for different programming languages.

● Static analyzers are software tools that scan the source text of a program and detect possible faults and
anomalies.
● They parse the program text and thus recognize the types of statements in the program.

● They can then detect whether statements are well formed, make inferences about the control flow in the
program and, in many cases, compute the set of all possible values for program data.

● They complement the error detection facilities provided by the language compiler.

● They can be used as part of the inspection process or as a separate V & V process activity.

● The intention of automatic static analysis is to draw an inspector’s attention to anomalies in the
program, such as variables that are used without initialization, variables that are unused or data whose
value could go out of range.

The stages involved in static analysis include:

1. Control flow analysis

▪ This stage identifies and highlights loops with multiple exit or entry points and unreachable code.
▪ Unreachable code is code that is surrounded by unconditional goto statements or that is in a
branch of a conditional statement where the guarding condition can never be true.

2. Data use analysis

▪ This stage highlights how variables in the program are used.


▪ It detects variables that are used without previous initialization, variables that are written twice
without an intervening use, and variables that are declared but never used.
▪ Data use analysis also discovers ineffective tests where the test condition is redundant. Redundant
conditions are conditions that are either always true or always false.

3. Interface analysis

▪ This analysis checks the consistency of routine and procedure declarations and their use.
▪ It is unnecessary if a strongly typed language such as Java is used for implementation as the
compiler carries out these checks.
▪ Interface analysis can detect type errors in weakly typed languages like FORTRAN and C.
▪ Interface analysis can also detect functions and procedures that are declared and never called or
function results that are never used.

4. Information flow analysis

▪ This phase of the analysis identifies the dependencies between input and output variables.
▪ While it does not detect anomalies, it shows how the value of each program variable is derived
from other variable values.
▪ With this information, a code inspection should be able to find values that have been wrongly
computed.
▪ Information flow analysis can also show the conditions that affect a variable’s value.

5. Path analysis
▪ This phase of semantic analysis identifies all possible paths through the program and sets out the
statements executed in that path.
▪ It essentially unravels the program’s control and allows each possible predicate to be analyzed
individually.

Automated static analysis checks, organized by fault class, are as follows (a short illustrative fragment appears after the list):

1. Data faults

▪ Variables used before initialization


▪ Variables declared but never used
▪ Variables assigned twice but never used between assignments
▪ Possible array bound violations
▪ Undeclared variables

2. Control faults

▪ Unreachable code
▪ Unconditional branches into loops

3. Input/output faults

• Variables output twice with no intervening assignment

4. Interface faults

▪ Parameter type mismatches


▪ Parameter number mismatches
▪ Non-usage of the results of functions
▪ Uncalled functions and procedures

5. Storage management faults

▪ Unassigned pointers
▪ Pointer arithmetic
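
As an illustration, the following deliberately faulty Python fragment (hypothetical function names) contains instances of several of the fault classes above; a static analyzer such as pylint can report them without executing the code:

def summarize(readings):
    scale = 10                      # data fault: variable declared but never used
    total = 0
    total = sum(readings)           # data fault: assigned twice, never used between assignments
    average = total / len(readings)
    return average
    print("summary complete")       # control fault: unreachable code

def audit(values):
    sorted(values)                  # interface fault: the result of the function call is never used
    # an analyzer would also note that audit() itself is never called (uncalled function)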

CASE
Computer-aided software engineering (CASE) is the implementation of computer-facilitated tools and
methods in software development. CASE is used to ensure high-quality and defect-free software. CASE
ensures a check-pointed and disciplined approach and helps designers, developers, testers, managers, and
others to see the project milestones during development.
CASE can also help as a warehouse for documents related to projects, like business plans, requirements, and
design specifications. One of the major advantages of using CASE is the delivery of the final product, which is
more likely to meet real-world requirements as it ensures that customers remain part of the process.
CASE encompasses a wide set of labor-saving tools that are used in software development. It provides a
framework for organizing projects and helps enhance productivity. There was more interest in the
concept of CASE tools years ago, but less so today, as the tools have morphed into different functions, often in
reaction to software developer needs. The concept of CASE also received a heavy dose of criticism after its
release.
CASE Tools: The essential idea of CASE tools is that in-built programs can help to analyze developing
systems in order to enhance quality and provide better outcomes. Throughout the 1990s, CASE tools became
part of the software lexicon, and big companies like IBM were using these kinds of tools to help create
software.
Various tools are incorporated in CASE and are called CASE tools, which are used to support different stages
and milestones in a software development life cycle.
Types of CASE Tools:
1. Diagramming Tools:
It helps in diagrammatic and graphical representations of the data and system processes. It represents
system elements, control flow and data flow among different software components and system structures in
a pictorial form. For example, Flow Chart Maker tool for making state-of-the-art flowcharts.
2. Computer Display and Report Generators: These help in understanding the data requirements and the
relationships involved.
3. Analysis Tools: These focus on inconsistent or incorrect specifications in the diagrams and data flow.
They help in collecting requirements and automatically check for any irregularity or imprecision in the
diagrams, data redundancies, or erroneous omissions.
For example:
• (i) Accept 360, Accompa, CaseComplete for requirement analysis.
• (ii) Visible Analyst for total analysis.

4. Central Repository: It provides a single point of storage for data diagrams, reports, and documents related
to project management.

5. Documentation Generators: It helps in generating user and technical documentation as per standards. It
creates documents for technical users and end users.
For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.
6. Code Generators: It aids in the auto-generation of code, including definitions, with the help of designs,
documents, and diagrams.
Advantages of the CASE approach:
• As the special emphasis is placed on the redesign as well as testing, the servicing cost of a product over its
expected lifetime is considerably reduced.
• The overall quality of the product is improved as an organized approach is undertaken during the process of
development.
• Chances to meet real-world requirements are more likely and easier with a computer-aided software
engineering approach.
• CASE indirectly provides an organization with a competitive advantage by helping ensure the development
of high-quality products.
• It provides better documentation.
• It improves accuracy.
• It provides intangible benefits.
• It reduces lifetime maintenance.
• It provides an opportunity for non-programmers.
• It impacts the style of working of the company.
• It reduces the drudgery in software engineer’s work.
• It increases the speed of processing.
• It makes software easier to program.
Disadvantages of the CASE approach:
• Cost: Using a case tool is very costly. Most firms engaged in software development on a small scale do not
invest in CASE tools because they think that the benefit of CASE is justifiable only in the development of
large systems.
• Learning Curve: In most cases, programmers’ productivity may fall in the initial phase of
implementation, because users need time to learn the technology. Many consultants offer training and on-
site services that can be important to accelerate the learning curve and to the development and use of the
CASE tools.
• Tool Mix: It is important to build an appropriate tool mix to gain a cost advantage. CASE integration
and data integration across all platforms are extremely important.
What is a Functional Requirement?
A Functional Requirement (FR) is a description of the service that the software
must offer. It describes a software system or its component. A function is nothing but
inputs to the software system, its behavior, and outputs. It can be a calculation, data
manipulation, business process, user interaction, or any other specific functionality
which defines what function a system is likely to perform. Functional Requirements
in Software Engineering are also called Functional Specification.
In software engineering and systems engineering, a Functional Requirement can
range from the high-level abstract statement of the sender’s necessity to detailed
mathematical functional requirement specifications. Functional
software requirements help you to capture the intended behaviour of the system.

What should be included in the Functional Requirements Document?

The functional requirements of a system should include the following things:

• Details of operations conducted in every screen


• Data handling logic should be entered into the system
• It should have descriptions of system reports or other outputs
• Complete information about the workflows performed by the system
• It should clearly define who will be allowed to create/modify/delete the data in
the system
• How the system will fulfill applicable regulatory and compliance needs should
be captured in the functional document

Here are the advantages of creating a typical functional requirements document:

• Helps you to check whether the application is providing all the functionalities
that were mentioned in the functional requirement of that application
• A functional requirement document helps you to define the functionality of a
system or one of its subsystems.
• Functional requirements along with requirement analysis help identify missing
requirements. They help clearly define the expected system service and
behavior.
• Errors caught in the Functional requirement gathering stage are the cheapest
to fix.
• Support user goals, tasks, or activities

Example of Functional Requirements

Below are some typical examples of functional requirements (a short sketch of one of them follows the list):

• The software automatically validates customers against the ABC Contact
Management System.
• The sales system should allow users to record customers' sales.
• The background color for all windows in the application will be blue and have
a hexadecimal RGB color value of 0x0000FF.
• Only managerial-level employees have the right to view revenue data.
• The software system should be integrated with the banking API.
• The software system should pass Section 508 accessibility requirements.
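
As an illustration, here is a minimal sketch of how one of the example requirements above (managerial-level access to revenue data) might be realized in code; the Employee class and function names are assumptions, not part of any real system:

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    level: str                      # e.g. "staff" or "manager"

def view_revenue_data(employee: Employee, revenue_report: dict) -> dict:
    # functional requirement: only managerial-level employees may view revenue data
    if employee.level != "manager":
        raise PermissionError(f"{employee.name} is not authorized to view revenue data")
    return revenue_report

print(view_revenue_data(Employee(name="Asha", level="manager"), {"Q1": 120000}))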

Non-Functional vs. Functional Requirements

Here are the key differences between functional and non-functional requirements in Software Engineering:

Parameters        Functional Requirement                               Non-Functional Requirement
What it is        Verb                                                 Attributes
Requirement       It is mandatory                                      It is non-mandatory
Capturing type    It is captured in use cases                          It is captured as a quality attribute
End result        Product features                                     Product properties
Capturing         Easy to capture                                      Hard to capture
Objective         Helps you verify the functionality of the software   Helps you verify the performance of the software
Area of focus     Focuses on user requirements                         Concentrates on the user's expectations
Documentation     Describes what the product does                      Describes how the product works
Type of testing   Functional testing such as system, integration,      Non-functional testing such as performance,
                  end-to-end, and API testing                          stress, usability, and security testing
Test execution    Done before non-functional testing                   Done after functional testing
Product info      Product features                                     Product properties

What is Non-Functional Requirement?


Non-Functional Requirements (NFRs) specify the quality attributes of
a software system. They judge the software system based on
responsiveness, usability, security, portability, and other non-
functional standards that are critical to the success of the software
system. An example of a non-functional requirement is "how fast does the
website load?" Failing to meet non-functional requirements can result
in systems that fail to satisfy user needs.
Non-functional requirements in Software Engineering allow you to
impose constraints or restrictions on the design of the system across
the various agile backlogs. For example, the site should load in 3 seconds
when the number of simultaneous users is greater than 10,000. The description
of non-functional requirements is just as critical as a functional
requirement.
Types of Non-functional Requirement
Below are the main types of non-functional requirements:
• Usability
• Reliability
• Performance
• Security
• Portability
• Scalability
Examples of Non-functional requirements
Here are some examples of non-functional requirements (a sketch of an automated check for one of them follows the list):

1. Users must change the initially assigned login password
immediately after the first successful login. Moreover, the initial
password should never be reused.
2. Employees are never allowed to update their salary information.
Any such attempt should be reported to the security administrator.
3. Every unsuccessful attempt by a user to access an item of data
shall be recorded on an audit trail.
4. A website should be capable enough to handle 20 million users
without affecting its performance.
5. The software should be portable, so moving from one OS to
another OS does not create any problem.
6. Privacy of information, the export of restricted technologies,
intellectual property rights, etc. should be audited.
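
As an illustration, here is a minimal sketch of how the response-time example above could be checked automatically; the URL and the 3-second threshold are placeholders, not part of any real project:

import time
import urllib.request

def measure_load_time(url: str) -> float:
    # time a single page load; a real check would average several runs under load
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

elapsed = measure_load_time("https://example.com/")   # placeholder URL
assert elapsed <= 3.0, f"non-functional requirement violated: page took {elapsed:.2f}s"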


Advantages of Non-Functional Requirements

Benefits/pros of non-functional requirements are:

• The non-functional requirements ensure the software system
follows legal and compliance rules.
• They ensure the reliability, availability, and performance of the
software system.
• They ensure a good user experience and ease of operating the
software.
• They help in formulating the security policy of the software system.

Disadvantages of Non-Functional Requirements

Cons/drawbacks of non-functional requirements are:

• Non-functional requirements may affect the various high-level
software subsystems.
• They require special consideration during the software
architecture/high-level design phase, which increases costs.
• Their implementation does not usually map to a specific
software sub-system.
• It is tough to modify non-functional requirements once you pass the
architecture phase.

INTRODUCTION, ROLE AND IMPORTANCE OF SOFTWARE ENGINEERING

Roles and Importance of Software Engineering

Most people don't give a second thought to new technologies as they make
their lives easier and more comfortable. We need software
engineering because software engineering is important in daily life. We have
technology like Alexa only because we have software engineering. It has
made things possible which were once beyond our imagination.

1. The rise of technology

The rise of technology has catapulted software engineering to the leading
edge of the enterprise world and made it critical. As technology
continues to seep into every component of our lives, we will need software
development more, and it will become even more vital. From
working manually and on an analog basis, engineers have automated every
aspect of life by nurturing software development as an industry.

2. Adding structure

Without software engineering, we would still have people who can code, but software
engineering methodology brings structure to everything and makes the
lifecycle and business process easy and reliable.

3. Preventing issues

The software development process has now been formalized to prevent the
software project from running over budget, mismanagement, and poor
planning. The process of quality assurance and user testing is vital as it
helps prevent future issues at lower costs. And this is only possible due to
software engineering. For the success of projects, it becomes vitally
important.

4. Huge Programming

Huge programming projects are possible because of software engineering, as it provides
a scientific, step-by-step process for building extensive systems.

5. Automation & AI

Currently, automation and AI are hot subjects in the IT industry. Because of
software development, the manufacturing industry has been overhauled by
automation. The number of humans working on factory
floors continues to decrease as automation software improves. As this
trend continues, most engineering disciplines will probably rely upon
software development in some way.

6. Research

New technology arises from the industry only through research and development.
This is possible today because software engineering is at the forefront
of new technology research and development. With each step forward,
other parts of the industry can flourish as we stand on the shoulders of
giants.

Importance of Software Engineering

The importance of software engineering lies in the fact that a specific piece
of Software is required in almost every industry, every business, and
purpose. As time goes on, it becomes more important for the following
reasons.

1. Reduces Complexity

Dealing with big software is very complicated and challenging. Thus, to
reduce the complexity of projects, software engineering offers great
solutions. It simplifies complex problems and solves those issues one by
one.

2. Handling Big Projects

Big projects need a lot of patience, planning, and management. The company
invests its resources, so the project should be completed within the deadline.
This is only possible if the company uses software engineering to deal with big
projects without problems.

3. To Minimize Software Costs

Software engineers are paid highly, as software development requires a lot of hard
work and a large workforce, and systems are built from a large amount of code.
In software engineering, programmers plan everything and remove the things which
are not needed. As a result, the cost of producing software becomes lower than
that of software built without this method.

4. To Decrease Time

If things are not done according to procedure, it becomes a huge loss
of time, because complex software requires a great deal of code before it runs
correctly. So, it takes a lot of time if not handled properly. If you
follow the prescribed software engineering methods, you will save
precious development time.

5. Effectiveness

Standards determine the effectiveness of things. Therefore, a company
always targets software standards to make its products more effective. And
software becomes more effective only with the help of software engineering.

6. Reliable Software

Software will be reliable if proper software engineering, testing, and
maintenance are applied. As a software developer, you must ensure that the
software is secure and will work for the period or subscription you have
agreed upon.
Introduction to Software Inspection
Software inspection involves people examining the source representation with the aim
of discovering anomalies and defects. An inspection does not require execution of a
system, so it may be used before the implementation process. It may be applied to
any representation of the system: requirements, design, test data, configuration data, etc.
Inspections have been shown to be an effective technique for discovering program errors.
The software inspection is conducted only when the author, i.e. the developer, has made
sure that the code is ready for inspection. The author decides this by performing some
preliminary desk checking and walkthroughs on the code. After passing through these
review methods, the code is then sent for group inspection.


Process of Software Inspection
Software inspection involves 6 steps – Planning, Overview, Individual Preparation,

Inspection Meeting, Rework and Follow-up.


Step 1: Planning

• Select the group review team – 3 to 5 people group is best.

• Identify the moderator – Has the main responsibility for the inspection.

• Prepare the package for distribution – Work product for review plus supporting

docs.

• The package should be complete for review.

Step 2: Overview

• Brief meeting – deliver the package, explain the purpose of the review, introductions,

etc.

• All team members then individually review the work product and list the issues they

find in the self-preparation log; checklists and guidelines are used.

• Ideally this should be done in one sitting, and issues are recorded in the log.
Step 3: Individual Preparation


• Each reviewer studies the project individually.

• Notes down the issues that have come across while studying the project.

• Decides how to put up these issues and makes a note of it.

Step 4: Inspection Meeting

• The reviewer goes over the product line by line; at each line, all issues are raised.

• Discussion follows to identify whether each issue is a defect.

• Decisions are recorded at the end of the inspection meeting.

• The scribe presents the list of defects. If there are few defects, the work product is accepted;

else, it might be asked for another review.

• The group does not propose solutions, though some suggestions may be

recorded.

• A summary of the inspection is prepared, which is useful for evaluating

effectiveness.

Step 5 and Step 6: Rework and Follow-up


• Defects in the defect list are fixed later by the author. These modifications are

made to repair the discovered errors. Once fixed, the author gets the fixes approved by the

moderator or goes for another review.

• A reinspection may or may not be required.

• Once all defects are satisfactorily addressed, the review is completed, and

collected data is submitted.

Inspection Roles
Various roles involved in an inspection are as follows:


• Author or owner: The author is a programmer or designer who is responsible for

producing the program or documents. Responsible for fixing defects discovered

during the inspection process.

• Inspector: The inspector provides review comments for the code, finding errors,

omissions, and inconsistencies in the program. The inspector may also identify broader

issues that are outside the scope of the inspection team.

• Moderator or chairman: The moderator formally runs the inspection according to the

process, manages the process, and facilitates the inspection. The moderator also reports

process results to the chief moderator.


• Scribe: Scribe notes the inspection meeting results and circulates them to the

inspection team after the meeting.

• Reader: The reader presents the code or document at an inspection meeting.

• Chief moderator: Chief moderator is responsible for inspection process

improvement, checklist updating, standard development, etc.

Advantages and Disadvantages of Software


Inspection
Given below are the advantages and disadvantages mentioned:

Advantages:
• The goal of this method is to detect all faults, violations, and other side effects.

• Authors and other reviewers do complete preparation before conducting an

inspection.

• A group of people are involved in the inspection procedure; multiple diverse

views are enlisted.

• Every person in the inspection team is assigned a specific role.

• The reader in the inspection reads out the document sequentially in a structured

manner so that all the points and all the code is inspected thoroughly.

Disadvantages:
• Logistics and scheduling can become an issue since multiple people are involved.
• Time-consuming as it needs preparation as well as formal meetings.

• It is not always possible to go through every line of code with several parameters

and their combination to ensure the correctness of the logic, side effects and

appropriate error handling.

Error List
Some programming errors that can be checked during software inspection are as
follows (a short fragment containing several of them appears after the list):

• Use of uninitialized variables.

• Non-terminating loops.

• Jumps into loops.

• Incompatible assignments.

• Array indices out of bounds.

• Mismatches between actual and formal parameters in the procedure call.

• Use of incorrect logical operators or incorrect precedence among operators.

• Improper storage allocation and deallocation.

• Improper modification of loops.

• Comparison of equality of floating-point values, etc.
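
For illustration, the following deliberately flawed Python fragment (hypothetical function names) contains several of the errors from this checklist, marked in comments for an inspector:

def find_first_positive(samples):
    i = 0
    while i < len(samples):
        if samples[i + 1] > 0:      # array index out of bounds on the last iteration
            return samples[i + 1]
        # i is never incremented here: non-terminating loop when no positive value exists

def rates_match(a, b):
    if a / 3 == b / 3:              # comparison of equality of floating-point values
        return True
    return total > 0                # use of an uninitialized (undefined) variable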


MAPPING

A comprehensive mapping that accomplishes the transition from the requirements model to a variety of
architectural styles does not exist.

A mapping technique, called structured design, is often characterized as a data flow-oriented design
method because it provides a convenient transition from a data flow diagram to software architecture. It is
accomplished as part of a six-step process:

(1) The type of information flow is established,

(2) Flow boundaries are indicated,

(3) The DFD is mapped into the program structure,

(4) Control hierarchy is defined,

(5) The resultant structure is refined using design measures and heuristics, and

(6) The architectural description is refined and elaborated.

In order to perform the mapping, the type of information flow must be determined. One type of
information flow is called transform flow. Data flows into the system along an incoming flow path. Then
it is processed at a transform center. Finally, it flows out of the system along an outgoing flow path that
transforms the data into external world form.

Transform Mapping

Transform mapping is a set of design steps that allows a DFD with transform flow characteristics to be
mapped into a specific architectural style. To map these data flow diagrams into a software architecture,
you would initiate the following design steps (example: home security system):

Step 1. Review the fundamental system model

The fundamental system model or context diagram depicts the security function as a single
transformation, representing the external producers and consumers of data that flow into and out of the
function. The figure depicts a level 0 context model, and Figure 9.11 shows refined data flow for the security
function.
Step 2. Review and refine data flow diagrams for the software.

Information obtained from the requirements model is refined to produce greater detail. For example, the
level 2 DFD for monitor sensors
Step 3. Determine whether the DFD has transform or transaction flow characteristics by evaluating
the DFD. Input and output should be consistent for a process.

Step 4. Isolate the transform center by specifying incoming and outgoing flow boundaries.

Incoming data flows along a path in which information is converted from external to internal form;
outgoing flow converts internalized data to external form. Different designers may select slightly different
points in the flow as boundary locations. In fact, alternative design solutions can be derived by varying the
placement of flow boundaries. The emphasis in this design step should be on selecting reasonable
boundaries, rather than lengthy iteration on placement of divisions.

Step 5. Perform “first-level factoring.”

This mapping is top-down distribution of control. Factoring leads to a program structure in which

✓ Top-level components perform decision making and


✓ Low-level components perform most input, computation, and output work.
✓ Middle-level components perform some control and do moderate amounts of work.

When transform flow is encountered, a DFD is mapped to a specific structure (a call and return
architecture) that provides control for incoming, transform, and outgoing information processing. This
first-level factoring for the monitor sensors subsystem is illustrated in Figure 9.14.
A main controller (called monitor sensors executive) resides at the top of the program structure and
coordinates the following subordinate control functions:

• An incoming information processing controller, called sensor input controller, coordinates receipt of
all incoming data.

• A transform flow controller, called alarm conditions controller, supervises all operations on data in
internalized form (e.g., a module that invokes various data transformation procedures).

• An outgoing information processing controller, called alarm output controller, coordinates production of
output information. A minimal code sketch of this control hierarchy follows.
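
The sketch below shows the call-and-return structure produced by first-level factoring; the function bodies are placeholder assumptions, and only the control hierarchy mirrors the factoring described above:

def sensor_input_controller():
    # incoming information processing: coordinates receipt of all incoming data
    return [{"sensor": "door", "value": 1}, {"sensor": "window", "value": 0}]

def alarm_conditions_controller(readings):
    # transform flow: supervises operations on the data in internalized form
    return [r for r in readings if r["value"] > 0]

def alarm_output_controller(alarm_conditions):
    # outgoing information processing: coordinates production of output information
    for condition in alarm_conditions:
        print("ALARM:", condition["sensor"])

def monitor_sensors_executive():
    # top-level component: performs decision making and delegates the detailed work
    readings = sensor_input_controller()
    alarms = alarm_conditions_controller(readings)
    alarm_output_controller(alarms)

monitor_sensors_executive()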

Step 6. Perform “second-level factoring.”

Second-level factoring is accomplished by mapping individual transforms (bubbles) of a DFD into
appropriate modules within the architecture. Beginning at the transform center boundary and moving
outward along incoming and then outgoing paths, transforms are mapped into subordinate levels of the
software structure.

Two or even three bubbles can be combined and represented as one component, or a single bubble may be
expanded to two or more components. Review and refinement may lead to changes in this structure, but it
can serve as a “first-iteration” design.

Second-level factoring for incoming flow follows in the same manner. Factoring is again accomplished
by moving outward from the transform center boundary on the incoming flow side. The transform center
of monitor sensors subsystem software is mapped. A completed first-iteration architecture is shown in
Figure 9.16.
Components are named in a manner that implies function. The processing narrative describes the
component interface, internal data structures, a functional narrative, and a brief discussion of restrictions
and special features.

Step 7. Refine the first-iteration architecture using design heuristics for improved software quality.

A first-iteration architecture can always be refined by applying concepts of functional independence.
Components are exploded or imploded to produce sensible factoring, separation of concerns, good
cohesion, minimal coupling, and most important, a structure that can be implemented without difficulty,
tested without confusion, and maintained without grief.

Design Concepts
Introduction: Software design encompasses the set of principles, concepts, and practices that lead to
the development of a high-quality system or product. Design principles establish an overriding
philosophy that guides you in the design work you must perform. Design is pivotal to successful software
engineering. The goal of design is to produce a model or representation that exhibits firmness,
commodity, and delight. Software design changes continually as new methods, better analysis, and
broader understanding evolve.

DESIGN WITHIN THE CONTEXT OF SOFTWARE ENGINEERING

Software design sits at the technical kernel of software engineering and is applied regardless of the
software process model that is used. Beginning once software requirements have been analyzed and
modeled, software design is the last software engineering action within the modeling activity and sets
the stage for construction (code generation and testing).
Each of the elements of the requirements model provides information that is necessary to create the
four design models required for a complete specification of design. The flow of information during
software design is illustrated in following figure.

The requirements model, manifested by scenario-based, class-based, flow-oriented, and behavioral
elements, feeds the design task. The data/class design transforms class models into design class
realizations and the requisite data structures required to implement the software.

The architectural design defines the relationship between major structural elements of the software, the
architectural styles and design patterns that can be used to achieve the requirements defined for the
system, and the constraints that affect the way in which architecture can be implemented. The
architectural design representation—the framework of a computer-based system—is derived from the
requirements model.

The interface design describes how the software communicates with systems that interoperate with it,
and with humans who use it. An interface implies a flow of information (e.g., data and/or control) and a
specific type of behavior. Therefore, usage scenarios and behavioral models provide much of the
information required for interface design.

The component-level design transforms structural elements of the software architecture into a
procedural description of software components. Information obtained from the class-based models,
flow models, and behavioral models serve as the basis for component design.

The importance of software design can be stated with a single word—quality. Design is the place where
quality is fostered in software engineering. Design provides you with representations of software that
can be assessed for quality. Design is the only way that you can accurately translate stakeholder’s
requirements into a finished software product or system. Software design serves as the foundation for
all the software engineering and software support activities that follow.

THE DESIGN PROCESS

Software design is an iterative process through which requirements are translated into a “blueprint” for
constructing the software. Initially, the blueprint depicts a holistic view of software. That is, the design is
represented at a high level of abstraction

Software Quality Guidelines and Attributes

McGlaughlin suggests three characteristics that serve as a guide for the evaluation of a good design:

• The design must implement all of the explicit requirements contained in the requirements model, and
it must accommodate all of the implicit requirements desired by stakeholders.

• The design must be a readable, understandable guide for those who generate code and for those who
test and subsequently support the software.

• The design should provide a complete picture of the software, addressing the data, functional, and
behavioral domains from an implementation perspective.

Quality Guidelines. In order to evaluate the quality of a design representation, consider the following
guidelines:

1. A design should exhibit an architecture that (1) has been created using recognizable architectural
styles or patterns, (2) is composed of components that exhibit good design characteristics, and (3) can be
implemented in an evolutionary fashion, thereby facilitating implementation and testing.

2. A design should be modular; that is, the software should be logically partitioned into elements or
subsystems.

3. A design should contain distinct representations of data, architecture, interfaces, and components.

4. A design should lead to data structures that are appropriate for the classes to be implemented and
are drawn from recognizable data patterns.

5. A design should lead to components that exhibit independent functional characteristics.

6. A design should lead to interfaces that reduce the complexity of connections between components
and with the external environment.

7. A design should be derived using a repeatable method that is driven by information obtained during
software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.

Quality Attributes. Hewlett-Packard developed a set of software quality attributes that has been given
the acronym FURPS—functionality, usability, reliability, performance, and supportability. The FURPS
quality attributes represent a target for all software design:

• Functionality is assessed by evaluating the feature set and capabilities of the program, the generality
of the functions that are delivered, and the security of the overall system.

• Usability is assessed by considering human factors, overall aesthetics, consistency, and documentation.

• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output
results, the mean-time-to-failure (MTTF), the ability to recover from failure, and the predictability of the
program.

• Performance is measured by considering processing speed, response time, resource consumption,


throughput, and efficiency.

• Supportability combines the ability to extend the program (extensibility), adaptability, and serviceability
(these three attributes represent a more common term, maintainability), in addition to testability,
compatibility, configurability, the ease with which a system can be installed, and the ease with which
problems can be localized.

The Evolution of Software Design

The evolution of software design is a continuing process that has now spanned almost six decades. Early
design work concentrated on criteria for the development of modular programs and methods for
refining software structures in a top down manner. Procedural aspects of design definition evolved into
a philosophy called structured programming.

A number of design methods, growing out of the work just noted, are being applied throughout the
industry. All of these methods have a number of common characteristics:

(1) a mechanism for the translation of the requirements model into a design representation,

(2) a notation for representing functional components and their interfaces,

(3) heuristics for refinement and partitioning, and

(4) guidelines for quality assessment.

DESIGN CONCEPTS

A set of fundamental software design concepts has evolved over the history of software engineering.
Each provides the software designer with a foundation from which more sophisticated design methods
can be applied. Each helps you answer the following questions:
• What criteria can be used to partition software into individual components?

• How is function or data structure detail separated from a conceptual representation of the software?

• What uniform criteria define the technical quality of a software design?

The following brief overview of important software design concepts that span both traditional and
object-oriented software development.

Abstraction

Abstraction is the act of representing essential features without including the background details or
explanations. Abstraction is used to reduce complexity and allow efficient design and
implementation of complex software systems. Many levels of abstraction can be posed. At the highest
level of abstraction, a solution is stated in broad terms using the language of the problem environment.
At lower levels of abstraction, a more detailed description of the solution is provided. As different levels
of abstraction are developed, you work to create both procedural and data abstractions.

A procedural abstraction refers to a sequence of instructions that have a specific and limited function.
The name of a procedural abstraction implies these functions, but specific details are suppressed.

A data abstraction is a named collection of data that describes a data object.
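
A small illustrative sketch of both kinds of abstraction, using a hypothetical door example; the class and function names are assumptions chosen only for illustration:

from dataclasses import dataclass

# Data abstraction: a named collection of data that describes a data object.
@dataclass
class Door:
    door_type: str
    swing_direction: str
    weight: float

# Procedural abstraction: the name "open_door" implies a specific, limited function;
# the details of how the door is unlocked and swung are suppressed from the caller.
def open_door(door: Door) -> None:
    unlock(door)
    swing(door, door.swing_direction)

def unlock(door: Door) -> None:
    print(f"unlocking the {door.door_type} door")

def swing(door: Door, direction: str) -> None:
    print(f"swinging the door to the {direction}")

open_door(Door(door_type="panel", swing_direction="left", weight=25.0))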

Architecture

Software architecture alludes to “the overall structure of the software and the ways in which that
structure provides conceptual integrity for a system”

Architecture is the structure or organization of program components (modules), the manner in which
these components interact, and the structure of data that are used by the components.

Shaw and Garlan describe a set of properties that should be specified as part of an architectural design:

• Structural properties. This aspect of the architectural design representation defines the
components of a system (e.g., modules, objects, filters) and the manner in which those
components are packaged and interact with one another.
• Extra-functional properties. The architectural design description should address how the design
architecture achieves requirements for performance, capacity, reliability, security, adaptability,
and other system characteristics.
• Families of related systems. The architectural design should draw upon repeatable patterns
that are commonly encountered in the design of families of similar systems. In essence, the
design should have the ability to reuse architectural building blocks.

The architectural design can be represented using one or more of a number of different models.
Structural models: Represent architecture as an organized collection of program components.
Framework models: Increase the level of design abstraction by attempting to identify repeatable
architectural design frameworks that are encountered in similar types of applications.

Dynamic models : Address the behavioral aspects of the program architecture, indicating how the
structure or system configuration may change as a function of external events.

Process models :Focus on the design of the business or technical process that the system must
accommodate.

Functional models can be used to represent the functional hierarchy of a system.

A number of different architectural description languages (ADLs) have been developed to represent
these models.

Patterns

Brad Appleton defines a design pattern in the following manner: “A pattern is a named nugget of insight
which conveys the essence of a proven solution to a recurring problem within a certain context amidst
competing concerns”

A design pattern describes a design structure that solves a particular design problem within a specific
context and amid “forces” that may have an impact on the manner in which the pattern is applied and
used.

The intent of each design pattern is to provide a description that enables a designer to determine (1)
whether the pattern is applicable to the current work, (2) whether the pattern can be reused (hence,
saving design time), and (3) whether the pattern can serve as a guide for developing a similar, but
functionally or structurally different pattern.
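
To make the idea concrete, here is a minimal sketch of a well-known pattern (Strategy) applied to a hypothetical payment example; the pattern names a proven structure that can be reused wherever the same design problem recurs:

from typing import Callable

def pay_by_card(amount: float) -> str:
    return f"charged {amount:.2f} to card"

def pay_by_wallet(amount: float) -> str:
    return f"debited {amount:.2f} from wallet"

def checkout(amount: float, payment_strategy: Callable[[float], str]) -> str:
    # the recurring problem (varying payment behavior) is solved by the same reusable structure
    return payment_strategy(amount)

print(checkout(99.0, pay_by_card))
print(checkout(25.0, pay_by_wallet))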

Separation of Concerns

Separation of concerns is a design concept that suggests that any complex problem can be more easily
handled if it is subdivided into pieces that can each be solved and/or optimized independently. A
concern is a feature or behavior that is specified as part of the requirements model for the software.

Separation of concerns is manifested in other related design concepts: modularity, aspects, functional
independence, and refinement. Each will be discussed in the subsections that follow.

8.3.5 Modularity

Modularity is the most common manifestation of separation of concerns. Software is divided into
separately named and addressable components, sometimes called module.

Modularity is the single attribute of software that allows a program to be


intellectually manageable

Information Hiding

The principle of information hiding suggests that modules be "characterized by design decisions that
each hides from all others." In other words, modules should be specified and designed so that information
contained within a module is inaccessible to other modules that have no need for such information. The
use of information hiding as a design criterion for modular systems provides the greatest benefits when
modifications are required during testing and later during software maintenance. Because most data
and procedural detail are hidden from other parts of the software, inadvertent errors introduced during
modification are less likely to propagate to other locations within the software.
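
A minimal sketch of information hiding, with hypothetical class and method names: the internal data structure is hidden behind a small interface, so other modules cannot come to depend on it:

class SensorLog:
    def __init__(self):
        self._entries = []          # hidden internal data structure

    def record(self, reading: float) -> None:
        self._entries.append(reading)

    def latest(self) -> float:
        return self._entries[-1]

# Callers use only record() and latest(); if the internal representation changes
# (e.g. to a ring buffer), no other module needs to be modified.
log = SensorLog()
log.record(21.5)
print(log.latest())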

Functional Independence

The concept of functional independence is a direct outgrowth of separation of concerns, modularity,


and the concepts of abstraction and information hiding. Functional independence is achieved by
developing modules with “single minded” function and an “aversion” to excessive interaction with other
modules.

Independence is assessed using two qualitative criteria: cohesion and coupling. Cohesion is an
indication of the relative functional strength of a module. Coupling is an indication of the relative
interdependence among modules.
Cohesion is a natural extension of the information-hiding concept. A cohesive module performs a single
task, requiring little interaction with other components in other parts of a program. Stated simply, a
cohesive module should do just one thing, and you should always strive for high cohesion (i.e.,
single-mindedness).

Coupling is an indication of interconnection among modules in a software structure. Coupling depends


on the interface complexity between modules, the point at which entry or reference is made to a
module, and what data pass across the interface. In software design, you should strive for the lowest
possible coupling.
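
A small sketch contrasting the two criteria, with hypothetical function names: each function is cohesive (it does one thing), and the functions are loosely coupled because they interact only through simple data values:

def compute_average(values):
    # high cohesion: this function does just one thing
    return sum(values) / len(values)

def format_report(average):
    # high cohesion: formatting only
    return f"average temperature: {average:.1f}"

def daily_report(values):
    # low coupling: modules interact through a simple data interface,
    # not through shared globals or each other's internal details
    return format_report(compute_average(values))

print(daily_report([20.0, 22.5, 21.0]))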

Refinement

Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth. Refinement is
actually a process of elaboration. You begin with a statement of function that is defined at a high level of
abstraction.

Abstraction and refinement are complementary concepts. Abstraction enables you to specify procedure
and data internally but suppress the need for “outsiders” to have knowledge of low-level details.
Refinement helps you to reveal low-level details as design progresses.
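As a hedged illustration (the order-processing task and all names below are invented, not drawn from the text), stepwise refinement can be pictured as starting from a one-line statement of function and progressively exposing procedural detail:

# Level 1: the function stated at a high level of abstraction.
def process_order(order):
    validate(order)
    price = compute_price(order)
    record(order, price)

# Level 2: each step is refined to reveal more low-level detail.
def validate(order):
    if not order.get("items"):
        raise ValueError("order must contain at least one item")

def compute_price(order):
    subtotal = sum(item["qty"] * item["unit_price"] for item in order["items"])
    return round(subtotal * 1.18, 2)  # an assumed 18% tax rate, purely for illustration

def record(order, price):
    print(f"order {order['id']} recorded at {price}")

if __name__ == "__main__":
    process_order({"id": 1, "items": [{"qty": 2, "unit_price": 10.0}]})

Abstraction keeps callers of process_order unaware of the tax rule or the storage step; refinement is the act of filling in those details as the design matures.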

Aspects

An aspect is a representation of a crosscutting concern. A crosscutting concern is some characteristic of


the system that applies across many different requirements.

Refactoring

An important design activity suggested for many agile methods, refactoring is a reorganization
technique that simplifies the design (or code) of a component without changing its function or behavior.
Fowler defines refactoring in the following manner: “Refactoring is the process of changing a software
system in such a way that it does not alter the external behavior of the code [design] yet improves its
internal structure.”
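A small before-and-after sketch in Python (the shipping-cost rules and names are invented for illustration) captures the spirit of this definition: external behavior is unchanged, but the internal structure becomes simpler:

# Before refactoring: duplicated branches and repeated magic numbers.
def shipping_cost_before(weight_kg, express):
    if express:
        if weight_kg > 10:
            return weight_kg * 4.0 + 15
        else:
            return weight_kg * 4.0 + 5
    else:
        if weight_kg > 10:
            return weight_kg * 2.0 + 15
        else:
            return weight_kg * 2.0 + 5

# After refactoring: same inputs produce the same outputs, but each decision appears once.
HEAVY_SURCHARGE = 15
LIGHT_SURCHARGE = 5

def shipping_cost_after(weight_kg, express):
    rate = 4.0 if express else 2.0
    surcharge = HEAVY_SURCHARGE if weight_kg > 10 else LIGHT_SURCHARGE
    return weight_kg * rate + surcharge

# Behavior is preserved, which is the defining property of a refactoring.
assert shipping_cost_before(12, True) == shipping_cost_after(12, True)
assert shipping_cost_before(3, False) == shipping_cost_after(3, False)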

Object-Oriented Design Concepts

The object-oriented (OO) paradigm is widely used in modern software engineering. OO design concepts include classes and objects, inheritance, messages, and polymorphism, among others.

Design Classes

The requirements model defines a set of analysis classes. Each describes some element of the problem domain, focusing on aspects of the problem that are user visible. The design model then defines a set of design classes that refine the analysis classes by providing design detail that will enable the classes to be implemented, and that implement a software infrastructure supporting the business solution.
Five different types of design classes, each representing a different layer of the design architecture, can
be developed:

• User interface classes define all abstractions that are necessary for human computer interaction (HCI).
The design classes for the interface may be visual representations of the elements of the metaphor.

• Business domain classes are often refinements of the analysis classes defined earlier. The classes
identify the attributes and services (methods) that are required to implement some element of the
business domain.

• Process classes implement lower-level business abstractions required to fully manage the business
domain classes.

• Persistent classes represent data stores (e.g., a database) that will persist beyond the execution of the
software.

• System classes implement software management and control functions that enable the system to
operate and communicate within its computing environment and with the outside world.

Arlow and Neustadt suggest that each design class be reviewed to ensure that it is “well- formed.” They
define four characteristics of a well-formed design class:

• Complete and sufficient. A design class should be the complete encapsulation of all attributes
and methods that can reasonably be expected to exist for the class. Sufficiency ensures that the
design class contains only those methods that are sufficient to achieve the intent of the class, no
more and no less.
• Primitiveness. Methods associated with a design class should be focused on accomplishing one
service for the class. Once the service has been implemented with a method, the class should
not provide another way to accomplish the same thing.
• High cohesion. A cohesive design class has a small, focused set of responsibilities and single-
mindedly applies attributes and methods to implement those responsibilities.
• Low coupling. Within the design model, it is necessary for design classes to collaborate with one
another. If a design model is highly coupled, the system is difficult to implement, to test, and to
maintain over time.
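As a rough, hypothetical sketch (the Invoice class below is invented, not taken from the text), these characteristics can be read directly from code: the class encapsulates only invoice-related attributes and services, each method performs one focused service, there is exactly one way to perform it, and collaboration happens through plain data rather than shared internals:

from dataclasses import dataclass, field

@dataclass
class Invoice:
    # Complete and sufficient: only the attributes an invoice reasonably needs.
    invoice_id: str
    line_items: list = field(default_factory=list)

    def add_line(self, description: str, amount: float) -> None:
        # Primitive: one focused service, and the only way to add a line item.
        self.line_items.append((description, amount))

    def total(self) -> float:
        # High cohesion: operates only on the class's own data.
        return sum(amount for _, amount in self.line_items)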

THE DESIGN MODEL

The design model can be viewed in two different dimensions. The process dimension indicates the
evolution of the design model as design tasks are executed as part of the software process. The
abstraction dimension represents the level of detail as each element of the analysis model is
transformed into a design equivalent and then refined iteratively. The design model has four major
elements: data, architecture, components, and interface.

3.4.1. Data Design Elements


Data design (sometimes referred to as data architecting) creates a model of data and/or information
that is represented at a high level of abstraction (the customer/user’s view of data). This data model is
then refined into progressively more implementation-specific representations that can be processed by
the computer-based system. The structure of data has always been an important part of software
design. At the program component level, the design of data structures and the associated algorithms
required to manipulate them is essential to the creation of high- quality applications. At the application
level, the translation of a data model into a database is pivotal to achieving the business objectives of a
system. At the business level, the collection of information stored in disparate databases and
reorganized into a “data warehouse” enables data mining or knowledge discovery that can have an
impact on the success of the business itself.

People Capability Maturity Model (PCMM)


PCMM is a maturity structure that focuses on continuously improving the
management and development of the human assets of an organization.
It defines an evolutionary improvement path from ad hoc, inconsistently performed practices to a mature, disciplined, and continuously improving development of the knowledge, skills, and motivation of the workforce that enhances strategic business performance.

The People Capability Maturity Model (PCMM) is a framework that helps an organization successfully address its critical people issues. Based on the best current practices in fields such as human resources, knowledge management, and organizational development, the PCMM guides organizations in improving their processes for managing and developing their workforce.

The People CMM defines an evolutionary improvement path from ad hoc, inconsistently performed workforce practices to a mature infrastructure of practices for continuously elevating workforce capability.

The PCMM consists of five maturity levels that lay successive foundations for continuously improving talent, developing effective methods, and successfully directing the people assets of the organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes a level of capability for developing the talent within the organization.

The five maturity levels of the People CMM framework are:

Initial Level: Maturity Level 1


The Initial Level of maturity includes no process areas. Although workforce practices implemented in Maturity Level 1 organizations tend to be inconsistent or ritualistic, virtually all of these organizations perform processes that are defined in the Maturity Level 2 process areas.

Managed Level: Maturity Level 2


To achieve the Managed Level, Maturity Level 2, managers start to perform basic people management practices such as staffing, managing performance, and adjusting compensation as a repeatable management discipline. The organization establishes a culture, focused at the unit level, for ensuring that people can meet their work commitments. In achieving Maturity Level 2, the organization develops the capability to manage skills and performance at the unit level. The process areas at Maturity Level 2 are Staffing, Communication and Coordination, Work Environment, Performance Management, Training and Development, and Compensation.

Defined Level: Maturity Level 3


The fundamental objective of the Defined Level is to help an organization gain a competitive benefit from developing the different competencies that must be combined in its workforce to accomplish its business activities. These workforce competencies represent critical pillars supporting the strategic business plan. By tying workforce practices to current and future business objectives, the improved workforce practices implemented at Maturity Level 3 become crucial enablers of business strategy.

Predictable Level: Maturity Level 4


At the Predictable Level, the organization manages and exploits the capability developed by its framework of workforce competencies. The organization is now able to manage its capability and performance quantitatively. The organization can predict its capability for performing work because it can quantify the capability of its workforce and of the competency-based processes they use in performing their assignments.

Optimizing Level: Maturity Level 5


At the Optimizing Level, the entire organization is focused on continual improvement. These improvements are made to the capability of individuals and workgroups, to the performance of competency-based processes, and to workforce practices and activities.

SDLC MODEL

SDLC – Software Development Life Cycle: The Software Development Life Cycle is a systematic process for building software that ensures the quality and correctness of the software built. The SDLC process aims to produce high-quality software that meets customer expectations. The development should be completed within the pre-defined time frame and cost. SDLC consists of a detailed plan describing how to develop, maintain and replace specific software. Software life cycle models describe phases of the software cycle and the order in which those phases are executed. Each phase produces deliverables required by the next phase in the life cycle.

A typical Software Development Life Cycle (SDLC) consists of the following phases:

1. Requirement gathering
2. System Analysis
3. Design
4. Development/Implementation or coding
5. Testing
6. Deployment
7. Maintenance

1. Requirement Gathering Phase:

➢ Requirement gathering and analysis is the most important phase in software development
lifecycle. Business Analyst collects the requirement from the Customer/Client as per the client’s
business needs and documents the requirements in the Business Requirement Specification.
➢ This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements, such as: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system?

2. Analysis Phase:

• Once the requirement gathering and analysis is done the next step is to define and
document the product requirements and get them approved by the customer. This is
done through SRS (Software Requirement Specification) document.
• SRS consists of all the product requirements to be designed and developed during the
project life cycle.
• Key people involved in this phase are the Project Manager, Business Analyst and senior members of the team.
• The outcome of this phase is Software Requirement Specification.

3. Design Phase:
• In this third phase the system and software design is prepared from the requirement
specifications which were studied in the first phase.
• System Design helps in specifying hardware and system requirements and also helps in defining
overall system architecture.
• There are two kinds of design documents developed in this phase:
• High-Level Design (HLD): It gives the architecture of the software product to be developed and is
done by architects and senior developers. It gives brief description and name of each module. It
also defines interface relationship and dependencies between modules, database tables
identified along with their key elements
• Low-Level Design (LLD): It is done by senior developers. It describes how each and every feature in the product should work and how every component should work. Here, only the design is produced, not the code. It defines the functional logic of the modules, the database table designs with sizes and types, and the complete details of the interfaces, and it addresses all types of dependency issues and lists the error messages.

4. Coding/Implementation Phase:

• In this phase, developers start building the entire system by writing code using the chosen programming language.
• Here, tasks are divided into units or modules and assigned to the various developers. It is the
longest phase of the Software Development Life Cycle process.
• In this phase, developers need to follow predefined coding guidelines. They also use programming tools such as compilers, interpreters, and debuggers to generate and implement the code.
• The outcome from this phase is Source Code Document (SCD) and the developed product.

5. Testing Phase:

• After the code is developed it is tested against the requirements to make sure that the product
is actually solving the needs addressed and gathered during the requirements phase.
• The QA team tests the software either manually or using automated testing tools, depending on the process defined in the STLC (Software Testing Life Cycle), and ensures that each and every component of the software works fine. The development team fixes the bugs and sends the build back to QA for a re-test. This process continues until the software is bug-free, stable, and working according to the business needs of that system.

6. Deployment: After successful testing the product is delivered / deployed to the customer for their
use. As soon as the product is given to the customers they will first do the beta testing. If any changes
are required or if any bugs are caught, then they will report it to the engineering team. Once those
changes are made or the bugs are fixed then the final deployment will happen.

7. Maintenance: Software maintenance is a vast activity which includes optimization, error correction,
and deletion of discarded features and enhancement of existing features. Since these changes are
necessary, a mechanism must be created for estimation, controlling and making modifications. The
essential part of software maintenance requires preparation of an accurate plan during the
development cycle. Typically, maintenance takes up about 40-80% of the project cost, usually closer to the higher end. Hence, a focus on maintenance definitely helps keep costs down.

PROCESS MODEL

A software process model is an abstraction of the software development process. The models specify
the stages and order of a process. So, think of this as a representation of the order of activities of the
process and the sequence in which they are performed.

A model will define the following:

• The tasks to be performed


• The input and output of each task
• The pre and post-conditions for each task
• The flow and sequence of each task

Prescriptive Process Models


The following framework activities are carried out irrespective of the process model chosen by
the organization.

1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment

The name 'prescriptive' is given because the model prescribes a set of activities, actions, tasks, quality assurance, and change control mechanisms for every project.

There are three types of prescriptive process models. They are:

1. The Waterfall Model


2. Incremental Process model
3. RAD model

1. The Waterfall Model

• The waterfall model is also called the 'linear sequential model' or 'classic life cycle model'.
• In this model, each phase is fully completed before the beginning of the next phase.
• This model is used for the small projects.
• In this model, feedback is taken after each phase to ensure that the project is on the right path.
• Testing part starts only after the development is complete.

NOTE: The description of the phases of the waterfall model is same as that of the process
model.

An alternative diagram of the 'linear sequential model' exists but is not reproduced here.

Advantages of waterfall model

• The waterfall model is simple and easy to understand, implement, and use.
• All the requirements are known at the beginning of the project, hence it is easy to manage.
• It avoids overlapping of phases because each phase is completed at once.
• This model works for small projects because the requirements are understood very well.
• This model is preferred for those projects where the quality is more important as compared to
the cost of the project.
Disadvantages of the waterfall model

• This model is not good for complex and object oriented projects.
• It is a poor model for long projects.
• Problems with this model are not uncovered until the software testing phase.
• The amount of risk is high.

2. Incremental Process model

• The incremental model combines the elements of waterfall model and they are applied in an
iterative fashion.
• The first increment in this model is generally a core product.
• Each increment builds the product and submits it to the customer for any suggested
modifications.
• The next increment implements the customer's suggestions and adds additional requirements to the previous increment.
• This process is repeated until the product is finished.
For example, the word-processing software is developed using the incremental model.

Advantages of incremental model

• This model is flexible because the cost of development is low and initial product delivery is
faster.
• It is easier to test and debug during the smaller iteration.
• The working software generates quickly and early during the software life cycle.
• The customers can respond to its functionalities after every increment.
Disadvantages of the incremental model

• The cost of the final product may cross the cost estimated initially.
• This model requires a very clear and complete planning.
• The planning of design is required before the whole system is broken into small increments.
• The customer's demands for additional functionality after every increment can cause problems for the system architecture.

3. RAD model

• RAD is a Rapid Application Development model.


• Using the RAD model, software product is developed in a short period of time.
• The initial activity starts with the communication between customer and developer.
• Planning depends upon the initial requirements, and then the requirements are divided into groups.
• Planning is important so that teams can work together on different modules.
The RAD model consists of the following phases:

1. Business Modeling

• Business modeling consists of the flow of information between various functions in the project.
• For example, what type of information is produced by every function, and which functions handle that information.
• A complete business analysis should be performed to get the essential business information.
2. Data modeling

• The information from the business modeling phase is refined into a set of objects that are essential for the business.
• The attributes of each object are identified, and the relationships between objects are defined.
3. Process modeling

• The data objects defined in the data modeling phase are transformed to achieve the information flow needed to implement the business model.
• Process descriptions are created for adding, modifying, deleting, or retrieving a data object.
4. Application generation

• In the application generation phase, the actual system is built.


• Automated tools are used to construct the software.
5. Testing and turnover

• The prototypes are independently tested after each iteration so that the overall testing time is reduced.
• The data flow and the interfaces between all the components are fully tested. Hence, most of the programming components are already tested.

Specialised Process Models: Specialized process models take on many of the characteristics of one or more of the traditional models. However, these models tend to be applied when a specialized or narrowly defined software engineering approach is chosen.

There are 3 types of specialized process models:

1. Component Based Development

2. Formal Methods Model

3. Aspect Oriented Software development

1. Component Based Development : Commercial off-the-shelf (COTS)


software components, developed by vendors who offer them as products, provide
targeted functionality with well-defined interfaces that enable the component to be
integrated into the software that is to be built. The component-based development
model incorporates many of the characteristics of the spiral model. It is
evolutionary in nature, demanding an iterative approach to the creation of software.
However, the component-based development model constructs applications from
prepackaged software components. Modeling and construction activities begin with the identification of candidate components. These components can be designed as either conventional software modules or object-oriented classes or packages of classes. Regardless of the technology that is used to create the components, the
component-based development model incorporates the following steps:

1. Available component-based products are researched and evaluated for


the application domain in question.
2. Component integration issues are considered.
3. A software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality

The component-based development model leads to software reuse, and


reusability provides software engineers with a number of measurable benefits. A software engineering team can achieve a reduction in development cycle time as well as a reduction in project cost if component reuse becomes part of its culture.

2. Formal Methods Model : The formal methods model encompasses a set


of activities that leads to formal mathematical specification of computer software.
Formal methods enable to specify, develop, and verify a computer-based system by
applying a rigorous, mathematical notation. A variation on this approach, called
cleanroom software engineering is currently applied by some software
development organizations. When formal methods are used during development,
they provide a mechanism for eliminating many of the problems that are difficult
to overcome using other software engineering paradigms. Ambiguity,
incompleteness, and inconsistency can be discovered and corrected more easily,
through the application of mathematical analysis. When formal methods are used
during design, they serve as a basis for program verification and therefore enable
you to discover and correct errors that might otherwise go undetected. The formal
methods model offers the promise of defect-free software. However, there are some disadvantages too:

1. The development of formal models is currently quite time consuming


and expensive.
2. Because few software developers have the necessary background to
apply formal methods, extensive training is required.
3. It is difficult to use the models as a communication mechanism for
technically unsophisticated customers
3. Aspect Oriented Software Development : Regardless of the software
process that is chosen, the builders of complex software invariably
implement a set of localized features, functions, and information content.
These localized software characteristics are modeled as components and
then constructed within the context of a system architecture. As modern
computer-based systems become more sophisticated, certain concerns span the entire architecture. Some concerns are high-level properties of a system, other concerns affect functions, while others are systemic.

When concerns cut across multiple system functions, features, and information,
they are often referred to as crosscutting concerns. Aspectual requirements define
those crosscutting concerns that have an impact across the software architecture.
Aspect-oriented software development (AOSD), often referred to as aspect-
oriented programming (AOP), is a relatively new software engineering paradigm
that provides a process and methodological approach for defining, specifying,
designing, and constructing aspects.

A distinct aspect-oriented process has not yet matured. However, it is likely that
such a process will adopt characteristics of both evolutionary and concurrent
process models. The evolutionary model is appropriate as aspects are identified
and then constructed. The parallel nature of concurrent development is essential
because aspects are engineered independently of localized software components
and yet, aspects have a direct impact on these components. It is essential to
instantiate asynchronous communication between the software process activities
applied to the engineering and construction of aspects and components.
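Python has no built-in aspect weaver, but a decorator gives a rough feel for how a crosscutting concern such as logging can be defined once and applied across many localized components. This is only an analogy to AOSD, and all names below are invented:

import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    # The "aspect": a logging concern specified once and woven around any function.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("entering %s", func.__name__)
        result = func(*args, **kwargs)
        logging.info("leaving %s", func.__name__)
        return result
    return wrapper

@logged
def place_order(order_id):   # one localized component
    return f"order {order_id} placed"

@logged
def cancel_order(order_id):  # another component sharing the same crosscutting concern
    return f"order {order_id} cancelled"

if __name__ == "__main__":
    print(place_order(42))
    print(cancel_order(42))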
Agile model

The agile process model encourages continuous iterations of development and


testing. Each incremental part is developed over an iteration, and each iteration is
designed to be small and manageable so it can be completed within a few weeks.

Each iteration focuses on implementing a small set of features completely. It


involves customers in the development process and minimizes documentation by
using informal communication.

Agile development considers the following:

1. Requirements are assumed to change
2. The system evolves over a series of short iterations
3. Customers are involved during each iteration
4. Documentation is done only when needed

Though agile provides a very realistic approach to software development, it isn’t


great for complex projects. It can also present challenges during transfers as there
is very little documentation. Agile is great for projects with changing requirements.

Some commonly used agile methodologies include:

Scrum: One of the most popular agile models, Scrum consists of iterations called
sprints. Each sprint is between 2 and 4 weeks long and is preceded by planning. You cannot make changes after the sprint activities have been defined.

Extreme Programming (XP): With Extreme Programming, an iteration can last


between 1 and 2 weeks. XP uses pair programming, continuous integration, test-
driven development and test automation, small releases, and simple software
design.

Kanban: Kanban focuses on visualizations, and if any iterations are used they are
kept very short. You use the Kanban Board that has a clear representation of all
project activities and their numbers, responsible people, and progress.
Estimating project duration is like building a life plan. You know where you want to get, you know
something will likely go wrong, but you still need to establish a timeline to reach your goals.

If there is one thing to know about estimating project duration, it will have to be this: there are lots of
traps to look out for.

But with proper preparation you can make this into an easy experience.

Here is where you start.

What is duration in project management?


Project duration is the total amount of time it takes to finish a project, which you measure in business days, hours, weeks, or months. The duration can be seen as the timeline for project delivery, whether it is five days or five years. Project duration usually depends on your resource availability.

The difference between effort, duration, and elapsed


time
In simple terms, effort is focused on highlighting the work units (hours) you need to complete a task
or a project, while duration is focused on the time you need to take to complete it. The duration is
usually longer than the estimated hours in effort because your team doesn't work non-stop.

Elapsed time is more about the progress — it looks at how long it took from the moment you
assigned someone to a project to the moment they completed it. Eventually, it will also show how
effectively you're working — are you going to meet the promised deadlines?

How is project duration calculated?

Top-down estimating

PMBOK explains top-down estimating, also known as analogous estimating, as “a technique for estimating duration or cost of an activity or a project using historical data from a similar activity or a project.”
In other words, here you need to look at your historical data and compare the new project to
something similar that has already been completed, assuming that the new project will take
approximately as much time and resources to complete.

Bottom up estimating

With bottom-up estimating, you go from the detailed to the general, from task to project. The rule is simple: if you cannot make an accurate estimate of a project, dissect it into units which you can estimate properly, like milestones or even individual tasks.

Parametric estimating

Parametric estimating is basically taking analogous estimating to another level. You also look at historical data, only you get more accurate when it comes to numbers by introducing statistical relationships.

That is to say that you need to find a comparable project in your historical data and then customize
calculations based on the numerical parameters of your new project.
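A tiny, hypothetical sketch (the unit rate and page count are invented) shows the shape of a parametric estimate: a historically derived rate multiplied by a size parameter of the new project:

HOURS_PER_PAGE = 6  # assumed average from past, similar projects

def parametric_estimate(page_count, hours_per_page=HOURS_PER_PAGE):
    # Estimate = unit rate from historical data x size of the new project.
    return page_count * hours_per_page

print(parametric_estimate(25))  # a 25-page site would be estimated at 150 hours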

Three-point estimating

PMBOK explains three-point estimating as “A technique used to estimate cost or duration by


applying an average or weighted average of optimistic, pessimistic, and most likely estimates when
there is uncertainty with the individual activity estimates.”
Here you get to reduce your risks by accounting for several scenarios your project could end up
following.
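The weighted (PERT-style) form of the three-point estimate is commonly written E = (O + 4M + P) / 6, where O, M, and P are the optimistic, most likely, and pessimistic values. A minimal sketch with invented numbers:

def three_point_estimate(optimistic, most_likely, pessimistic):
    # PERT-style weighted average: the most likely value carries four times the weight.
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Example: a task estimated at 3 days (best case), 5 days (most likely), 10 days (worst case).
print(three_point_estimate(3, 5, 10))  # -> 5.5 days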

Project duration best practices


Suppose you need to build a website and you have estimated that it will take about 40 hours of work
(the effort). However, the website will not be ready in 40 hours for a number of reasons: you might
have other projects running, you cannot devote all of your time to building a website, you need to
take some days off in the middle of the project, etc.

As a result, the effort will be 40 hours, but the duration will be longer. For example, if you decide to
devote 5 hours a day to the project it will take you 8 days to complete it (your duration), but if your
colleague comes to help and takes half of the workload off your shoulders, the project will take only
4 days.

But whether you will be working alone or with reinforcements, here are some things you could do to
get more efficient.

1. Create a resource schedule


When a project involves more than one person working on it, you need to create a resource
schedule for transparency and visibility.

Here you will be able to see when your resources are free, full, or overbooked and by how much.
This is your best bet to juggle resources and follow the "less is more" concept.

With Runn's resource scheduling, all you need to do is click, drag, and drop workload when you
need to allocate it to someone specific. You can extend, shorten, transfer, and split work among your
resources to accommodate everyone involved.

2. Include time off


With a resource schedule, it's wise to account not only for the time when someone is available to
work but also when they are not. People take vacations, days off, sick leave, etc. — all of which can
delay your project delivery if you don't account for them when estimating project duration. But by
adding that time off right into the schedule you can avoid surprises and unneeded headaches.

3. Add a contingency reserve


No project is perfect. There will be risks which may or may not happen, delays that may or may not
stall the project, additional costs that may or may not make you go over the budget.

With a contingency reserve, you can be prepared for whatever happens and still keep your project
going according to plan (even if it's not the most optimistic one).

4. Don't underestimate
People have a natural tendency to be overoptimistic. In project management, this can lead to project
failure.

Have you ever moved houses? It never takes the time you expect it to take; there is always something causing one delay after another, and you end up sleeping on a mattress for two weeks.

In a way, projects can be the same. This is why leaving space for some wiggle room, scope creep,
ad hoc requests, and the like can help you realistically estimate project duration.


2.1 REQUIREMENTS ENGINEERING

The broad spectrum of tasks and techniques that lead to an understanding of requirements is

called requirements engineering. From a software process perspective, requirements

engineering is a major software engineering action that begins during the communication activity

and continues into the modeling activity. It must be adapted to the needs of the process, the

project, the product, and the people doing the work.

Requirements engineering builds a bridge to design and construction. Requirements engineering

provides the appropriate mechanism for understanding what the customer wants, analyzing
need, assessing feasibility, negotiating a reasonable solution, specifying the solution

unambiguously, validating the specification, and managing the requirements as they are

transformed into an operational system. It encompasses seven distinct tasks: inception,

elicitation, elaboration, negotiation, specification, validation, and management.

a) Inception. In general, most projects begin when a business need is identified or a potential

new market or service is discovered. Stakeholders from the business community define a

business case for the idea, try to identify the breadth and depth of the market, do a rough

feasibility analysis, and identify a working description of the project’s scope.

At project inception, you establish a basic understanding of the problem, the people who want a

solution, the nature of the solution that is desired, and the effectiveness of preliminary

communication and collaboration between the other stakeholders and the software team.

b)Elicitation. Ask the customer what the objectives for the system or product are, what is to be

accomplished, how the system or product fits into the needs of the business, and finally, how the

system or product is to be used on a day-to-day basis. A number of problems are encountered as elicitation occurs.

• Problems of scope. The boundary of the system is ill-defined or the customers/users specify

unnecessary technical detail that may confuse, rather than clarify, overall system objectives.

• Problems of understanding. The customers/users are not completely sure of what is needed,

have a poor understanding of the capabilities and limitations of their computing environment,

don’t have a full understanding of the problem domain, have trouble communicating needs to the

system engineer, omit information that is believed to be “obvious,” specify requirements that

conflict with the needs of other customers/users, or specify requirements that are ambiguous or

untestable.

• Problems of volatility. The requirements change over time. To help overcome these

problems, you must approach requirements gathering in an organized manner.


c)Elaboration. The information obtained from the customer during inception and elicitation is

expanded and refined during elaboration. This task focuses on developing

a refined requirements model that identifies various aspects of software function, behavior, and

information.

Elaboration is driven by the creation and refinement of user scenarios that describe how the end

user (and other actors) will interact with the system. Each user scenario is parsed to extract

analysis classes—business domain entities that are visible to the end user. The attributes of

each analysis class are defined, and the services that are required by each class are identified.

The relationships and collaboration between classes are identified, and a variety of

supplementary diagrams are produced.

d)Negotiation. It isn't unusual for customers and users to ask for more than can be achieved, given limited business resources. It’s also relatively common for different customers or users to propose conflicting requirements, arguing that their version is “essential for our special needs.”

You have to reconcile these conflicts through a process of negotiation. Customers, users, and

other stakeholders are asked to rank requirements and then discuss conflicts

in priority. Using an iterative approach that prioritizes requirements, assesses their cost and risk,

and addresses internal conflicts, requirements are eliminated, combined, and/or modified so that

each party achieves some measure of satisfaction.

e)Specification. Specification means different things to different people. A specification can be

a written document, a set of graphical models, a formal mathematical model, a collection of

usage scenarios, a prototype, or any combination of these. Some suggest that a “standard

template” should be developed and used for a specification, arguing that this leads to

requirements that are presented in a consistent and therefore more understandable manner.

However, it is sometimes necessary to remain flexible when a specification is to be developed.

For large systems, a written document, combining natural language descriptions and graphical
models may be the best approach.

2.2 ESTABLISHING THE GROUNDWORK

In an ideal setting, stakeholders and software engineers work together on the same team. In

such cases, requirements engineering is simply a matter of conducting meaningful

conversations with colleagues who are well-known members of the team.

We discuss the steps required to establish the groundwork for an understanding of software

requirements—to get the project started in a way that will keep it moving forward toward a

successful solution.

2.2.1 Identifying Stakeholders: Stakeholder is “anyone who benefits in a direct or indirect way

from the system which is being developed.” The usual stakeholders are: business operations

managers, product managers, marketing people, internal and external customers, end users,

consultants, product engineers, software engineers, support and maintenance engineers. Each

stakeholder has a different view of the system, achieves different benefits when the system is

successfully developed, and is open to different risks if the development effort should fail.

2.2.2 Recognizing Multiple Viewpoints: Because many different stakeholders exist, the

requirements of the system will be explored from many different points of view. Each of these

constituencies will contribute information to the requirements engineering process. As

information from multiple viewpoints is collected, emerging requirements may be inconsistent or

may conflict with one another. You should categorize all stakeholder information in a way that

will allow decision makers to choose an internally consistent set of requirements for the system.

2.2.3 Working toward Collaboration: If five stakeholders are involved in a software project,

you may have five different opinions about the proper set of requirements. Customers must

collaborate among themselves and with software engineering practitioners if a successful

system is to result. The job of a requirements engineer is to identify areas of commonality and

areas of conflict or inconsistency. Collaboration does not necessarily mean that requirements
are defined by committee. In many cases, stakeholders collaborate by providing their view of

requirements, but a strong “project champion” may make the final decision about which

requirements make the cut.

2.2.4 Asking the First Questions: Questions asked at the inception of the project should be

“context free.” The first set of context-free questions focuses on the customer and other

stakeholders, the overall project goals and benefits. You might ask:

• Who is behind the request for this work?

• Who will use the solution?

• What will be the economic benefit of a successful solution?

• Is there another source for the solution that you need?

These questions help to identify all stakeholders who will have interest in the software to be
built.

In addition, the questions identify the measurable benefit of a successful implementation and

possible alternatives to custom software development.

The next set of questions enables you to gain a better understanding of the problem and allows

the customer to voice his or her perceptions about a solution:

• How would you characterize “good” output that would be generated by a successful
solution?

• What problem(s) will this solution address?

• Can you show me (or describe) the business environment in which the solution will be
used?

• Will special performance issues or constraints affect the way the solution is approached?

The final set of questions focuses on the effectiveness of the communication activity itself.

• Are you the right person to answer these questions? Are your answers “official”?

• Are my questions relevant to the problem that you have?

• Am I asking too many questions?


• Can anyone else provide additional information?

• Should I be asking you anything else?

These questions will help to “break the ice” and initiate the communication that is essential to

successful elicitation.

Feasibility Studies
SOFTWARE TESTING

Software testing can be stated as the process of verifying and validating whether a software or
application is bug-free, meets the technical requirements as guided by its design and development, and
meets the user requirements effectively and efficiently by handling all the exceptional and boundary
cases. The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy, and usability. The following sections discuss software testing in detail.

What is Software Testing?


Software Testing is a method to assess the functionality of the software program. The process checks
whether the actual software matches the expected requirements and ensures the software is bug-free.
The purpose of software testing is to identify the errors, faults, or missing requirements in contrast to
actual requirements. It mainly aims at measuring the specification, functionality, and performance of a
software program or application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software correctly implements a
specific function. It means “Are we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements. It means “Are we building the right product?”.
Importance of Software Testing:
• Defects can be identified early: Software testing is important because if there are any bugs they
can be identified early and can be fixed before the delivery of the software.
• Improves quality of software: Software Testing uncovers the defects in the software, and fixing
them improves the quality of the software.
• Increased customer satisfaction: Software testing ensures reliability, security, and high
performance which results in saving time, costs, and customer satisfaction.
• Helps with scalability: Non-functional testing helps to identify scalability issues and the point at which an application might stop working.
• Saves time and money: After the application is launched it will be very difficult to trace and resolve
the issues, as performing this activity will incur more costs and time. Thus, it is better to conduct
software testing at regular intervals during software development.

Need for Software Testing


Software bugs can cause potential monetary and human loss. There are many examples in history that clearly show that, without a testing phase in software development, a lot of damage was incurred. Below are some examples:
• 1985: Canada’s Therac-25 radiation therapy machine malfunctioned due to a software bug and delivered lethal radiation doses to patients, leaving 3 people injured and 3 people dead.
• 1994: China Airlines Airbus A300 crashed due to a software bug killing 264 people.
• 1996: A software bug caused U.S. bank accounts of 823 customers to be credited with 920 million
US dollars.
• 1999: A software bug caused the failure of a $1.2 billion military satellite launch.
• 2015: A software bug in the F-35 fighter plane made it unable to detect targets correctly.
• 2015: Bloomberg terminal in London crashed due to a software bug affecting 300,000 traders on the
financial market and forcing the government to postpone the 3bn pound debt sale.
• Starbucks was forced to close more than 60% of its outlets in the U.S. and Canada due to a software failure in its POS system.
• Nissan cars were forced to recall 1 million cars from the market due to a software failure in the car’s
airbag sensory detectors.

Different Types Of Software Testing


Software Testing can be broadly classified into 3 types:
1. Functional Testing: Functional testing is a type of software testing that validates the software
systems against the functional requirements. It is performed to check whether the application is
working as per the software’s functional requirements or not. Various types of functional testing are
Unit testing, Integration testing, System testing, Smoke testing, and so on.
2. Non-functional Testing: Non-functional testing is a type of software testing that checks the
application for non-functional requirements like performance, scalability, portability, stress, etc.
Various types of non-functional testing are Performance testing, Stress testing, Usability Testing,
and so on.
3. Maintenance Testing: Maintenance testing is the process of changing, modifying, and updating the
software to keep up with the customer’s needs. It involves regression testing that verifies that recent
changes to the code have not adversely affected other previously working parts of the software.
Apart from the above classification software testing can be further divided into 2 more ways of
testing:
1. Manual Testing: Manual testing includes testing software manually, i.e., without using any
automation tool or script. In this type, the tester takes over the role of an end-user and tests the
software to identify any unexpected behavior or bug. There are different stages for manual testing
such as unit testing, integration testing, system testing, and user acceptance testing. Testers use test plans, test cases, or test scenarios to test software to ensure the completeness of testing.
Manual testing also includes exploratory testing, as testers explore the software to identify errors in
it.
2. Automation Testing: Automation testing, which is also known as Test Automation, is when the
tester writes scripts and uses another software to test the product. This process involves the
automation of a manual process. Automation Testing is used to re-run the test scenarios quickly and
repeatedly, that were performed manually in manual testing.
Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases test coverage, improves accuracy, and saves time and money when compared to manual testing.

Different Types of Software Testing Techniques


Software testing techniques can be broadly classified into the following categories:
1. Black Box Testing: A technique in which the tester does not have access to the source code of the software and testing is conducted at the software interface, without any concern for the internal logical structure of the software.
2. White Box Testing: A technique in which the tester is aware of the internal workings of the product, has access to its source code, and conducts testing by making sure that all internal operations are performed according to the specifications.
3. Grey Box Testing: A technique in which the testers have some knowledge of the implementation, but they need not be experts.

White Box Testing: White box testing is a testing method that is based on close examination of procedural detail. Hence it is also called glass box testing. In white box testing, the test cases are derived for:
1. Examining all the independent paths within a module.
2. Exercising all the logical paths with their true and false sides.
3. Executing all the loops within their boundaries and within operational bounds.
4. Exercising internal data structures to ensure their validity.

Why to perform white box testing?


There are three main reasons behind performing the white box testing.
1. Programmers may have some incorrect assumptions while designing or implementing
some functions. Due to this there are chances of having logical errors in the program. To
detect and correct such logical errors procedural details need to be examined.

2. Certain assumptions on flow of control and data may lead programmer to make design
errors. To uncover the errors on logical path, white box testing is must.

3. There may be certain typographical errors that remain undetected even after syntax and
type checking mechanisms. Such errors can be uncovered during white box testing.

Cyclomatic Complexity

Cyclomatic complexity is a software metric that gives a quantitative measure of the logical complexity of a program. The cyclomatic complexity defines the number of independent paths in the basis set of the program, which provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. The cyclomatic complexity can be computed in one of the following ways.

1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
   V(G) = E - N + 2,
   where E is the number of flow graph edges and N is the number of flow graph nodes.
3. V(G) = P + 1,
   where P is the number of predicate nodes contained in the flow graph G.
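A small worked example (the flow graph values are invented for illustration): suppose a module's flow graph has 9 edges, 8 nodes, and 2 binary predicate nodes. The formulas above should then agree:

def v_from_edges_and_nodes(edges, nodes):
    # V(G) = E - N + 2 for a connected flow graph.
    return edges - nodes + 2

def v_from_predicates(predicates):
    # V(G) = P + 1, where P is the number of binary predicate nodes.
    return predicates + 1

assert v_from_edges_and_nodes(9, 8) == 3
assert v_from_predicates(2) == 3
# So at least 3 independent paths must be exercised to execute every statement at least once.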

Structural Testing
1. The structural testing is sometime called as white-box testing.
2. In structural testing derivation of test cases is according to program structure. Hence
knowledge of the program is used to identify additional test cases.
3. Objective of structural testing is to exercise all program statements.

Condition Testing

To test the logical conditions in a program module, condition testing is used. The condition can be a Boolean condition or a relational expression. A condition is incorrect in the following situations:
1. A Boolean operator is incorrect, missing, or extra.
2. A Boolean variable is incorrect.
3. A Boolean parenthesis may be missing, incorrect, or extra.
4. There is an error in a relational operator.
5. There is an error in an arithmetic expression.
Condition testing focuses on each condition in the program.
• Branch testing is a condition testing strategy in which, for a compound condition, each and every true or false branch is tested.
• Domain testing is a testing strategy in which a relational expression can be tested using three or four tests.
Basis Path Testing
• White-box testing technique proposed by Tom McCabe
• Enables the test case designer to derive a logical complexity measure of a procedural design
• Uses this measure as a guide for defining a basis set of execution paths
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing
Flow Graph Notation
• A circle in a graph represents a node, which stands for a sequence of one or more procedural
statements
• A node containing a simple conditional expression is referred to as a predicate node
– Each compound condition in a conditional expression containing one or more Boolean operators
(e.g., and, or) is represented by a separate predicate node
– A predicate node has two edges leading out from it (True and False)
• An edge, or a link, is an arrow representing flow of control in a specific direction
– An edge must start and terminate at a node
– An edge does not intersect or cross over another edge
• Areas bounded by a set of edges and nodes are called regions
• When counting regions, include the area outside the graph as a region, too
Independent Program Paths
• Defined as a path through the program from the start node until the end node that introduces at least
one new set of processing statements or a new condition (i.e., new nodes)
• Must move along at least one edge that has not been traversed before by a previous path
• Basis set for an example flow graph (the figure is not reproduced here):
– Path 1: 0-1-11
– Path 2: 0-1-2-3-4-5-10-1-11
– Path 3: 0-1-2-3-6-8-9-10-1-11
– Path 4: 0-1-2-3-6-7-9-10-1-11
• The number of paths in the basis set is determined by the cyclomatic complexity
Deriving the Basis Set and Test Cases
1) Using the design or code as a foundation, draw a corresponding flow graph
2) Determine the cyclomatic complexity of the resultant flow graph
3) Determine a basis set of linearly independent paths
4) Prepare test cases that will force execution of each path in the basis set
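As a hedged illustration of these four steps (the classify function and its paths are invented, not the flow graph referenced earlier), consider a small function with two predicate nodes, so V(G) = 2 + 1 = 3; three test cases, one per basis path, execute every statement at least once:

import unittest

def classify(value, limit):
    # Two predicate nodes -> cyclomatic complexity 3 -> three basis paths.
    if value < 0:
        return "negative"
    if value > limit:
        return "over limit"
    return "ok"

class BasisPathTests(unittest.TestCase):
    def test_path_negative(self):      # path taking the first True branch
        self.assertEqual(classify(-1, 10), "negative")

    def test_path_over_limit(self):    # first condition False, second True
        self.assertEqual(classify(11, 10), "over limit")

    def test_path_ok(self):            # both conditions False
        self.assertEqual(classify(5, 10), "ok")

if __name__ == "__main__":
    unittest.main()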
Black-box testing is a type of software testing in which the tester is not concerned with the internal
knowledge or implementation details of the software, but rather focuses on validating the functionality
based on the provided specifications or requirements.

Black box testing can be done in the following ways:


1. Syntax-Driven Testing – This type of testing is applied to systems that can be syntactically represented by some language, for example compilers, or languages that can be represented by a context-free grammar. In this, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly so instead of
giving all of them separately we can group them and test only one input of each group. The idea is to
partition the input domain of the system into several equivalence classes such that each member of the
class works similarly, i.e., if a test case in one class results in some error, other members of the class
would also result in the same error.

The technique involves two steps:


1. Identification of equivalence class – Partition any input domain into a minimum of two sets: valid
values and invalid values. For example, if the valid range is 0 to 100 then select one valid input
like 49 and one invalid like 104.
2. Generating test cases – (i) Assign a unique identification number to each valid and invalid class of input. (ii) Write test cases covering all valid and invalid classes, considering that no two invalid inputs mask each other. For example, to calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
• A whole number which is a perfect square – the output will be an integer.
• A whole number which is not a perfect square – the output will be a decimal number.
• Positive decimals.
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers like “a”, “!”, “;”, etc.
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases are designed for boundary values of the input domain, the efficiency of testing improves and the probability of finding errors also increases. For example, if the valid range is 10 to 100, then test for 10 and 100 as well, apart from other valid and invalid inputs. (A combined sketch of equivalence partitioning and boundary value analysis appears after this list.)
4. Cause-effect graphing – This technique establishes a relationship between logical inputs, called causes, and the corresponding actions, called effects. The causes and effects are represented using Boolean graphs. The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.
For example, a cause-effect graph can be converted into a decision table (the example graph and table are not reproduced here). Each column of the decision table corresponds to a rule, and each rule becomes a test case; a table with four rules therefore yields 4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a software
system.
6. Compatibility testing – The test case result depends not only on the product but also on the infrastructure for delivering its functionality. When the infrastructure parameters are changed, the software is still expected to work properly. Some parameters that generally affect the compatibility of software are:
1. Processor type (e.g., Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
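Continuing the range example above (valid values from 10 to 100), the following sketch shows how equivalence classes and boundary values translate directly into black-box test cases; the accept_marks function is a hypothetical system under test invented for this illustration:

import unittest

def accept_marks(marks):
    # Hypothetical system under test: the valid range is 10 to 100 inclusive.
    if not isinstance(marks, (int, float)):
        raise TypeError("marks must be numeric")
    return 10 <= marks <= 100

class BlackBoxTests(unittest.TestCase):
    def test_valid_equivalence_class(self):
        self.assertTrue(accept_marks(49))    # one representative of the valid class

    def test_invalid_equivalence_class(self):
        self.assertFalse(accept_marks(104))  # one representative of the invalid class

    def test_boundary_values(self):
        # Boundaries and their neighbours, where errors tend to cluster.
        self.assertFalse(accept_marks(9))
        self.assertTrue(accept_marks(10))
        self.assertTrue(accept_marks(100))
        self.assertFalse(accept_marks(101))

    def test_non_numeric_input(self):
        with self.assertRaises(TypeError):
            accept_marks("a")                # the character-input class

if __name__ == "__main__":
    unittest.main()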

Black Box Testing Types

The following are the main categories of black box testing:


1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)
Functional Testing: It validates the software against the system’s functional requirements.
Regression Testing: It ensures that newly added code is compatible with the existing code; in other words, a new software update has no impact on the existing functionality of the software. This is carried out after system maintenance operations and upgrades.
Nonfunctional Testing: Nonfunctional testing is also known as NFT. It is not testing of the software’s functionality; it focuses on the software’s performance, usability, and scalability.
Tools Used for Black Box Testing:
1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP.

What can be identified by Black Box Testing


1. Discovers missing functions, incorrect function & interface errors
2. Discover the errors faced in accessing the database
3. Discovers the errors that occur while initiating & terminating any functions.
4. Discovers the errors in performance or behaviour of the software.

Features of black box testing:


1. Independent testing: Black box testing is performed by testers who are not involved in the
development of the application, which helps to ensure that testing is unbiased and impartial.
2. Testing from a user’s perspective: Black box testing is conducted from the perspective of an end
user, which helps to ensure that the application meets user requirements and is easy to use.
3. No knowledge of internal code: Testers performing black box testing do not have access to the
application’s internal code, which allows them to focus on testing the application’s external behavior
and functionality.
4. Requirements-based testing: Black box testing is typically based on the application’s
requirements, which helps to ensure that the application meets the required specifications.
5. Different testing techniques: Black box testing can be performed using various testing techniques,
such as functional testing, usability testing, acceptance testing, and regression testing.
6. Easy to automate: Black box testing is easy to automate using various automation tools, which
helps to reduce the overall testing time and effort.
7. Scalability: Black box testing can be scaled up or down depending on the size and complexity of
the application being tested.
8. Limited knowledge of application: Testers performing black box testing have limited knowledge of
the application being tested, which helps to ensure that testing is more representative of how the
end users will interact with the application.
Advantages of Black Box Testing:
• The tester does not need to have more functional knowledge or programming skills to implement the
Black Box Testing.
• It is efficient for implementing the tests in the larger system.
• Tests are executed from the user’s or client’s point of view.
• Test cases are easily reproducible.
• It is used in finding the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing:
• There is a possibility of repeating the same tests while implementing the testing process.
• Without clear functional specifications, test cases are difficult to implement.
• It is difficult to execute the test cases because of complex inputs at different stages of testing.
• Sometimes, the reason for the test failure cannot be detected.
• Some programs in the application are not tested.
• It does not reveal the errors in the control structure.
• Working with a large sample space of inputs can be exhaustive and consumes a lot of time.
Unit Testing:

Unit Testing is a software testing technique by means of which individual units of software, i.e., groups of
computer program modules, usage procedures, and operating procedures, are tested to determine
whether they are suitable for use. Every independent module is tested by the developer to determine
whether it has any issues, so unit testing is concerned with the functional correctness of the independent
modules. An individual unit may be an individual function or a procedure, and unit testing of the software
product is carried out during the development of an application. In the SDLC or V-Model, unit testing is
the first level of testing, done before integration testing. It is usually performed by developers, although,
due to the reluctance of some developers to test their own code, quality assurance engineers may also
perform it.

Objective of Unit Testing:


The objective of Unit Testing is:
1. To isolate a section of code.
2. To verify the correctness of the code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
5. To help the developers to understand the code base and enable them to make changes quickly.
6. To help with code reuse.

Types of Unit Testing:


There are 2 types of Unit Testing: Manual, and Automated.
Workflow of Unit Testing:

Unit Testing Techniques:


There are 3 types of Unit Testing Techniques. They are
1. Black Box Testing: This testing technique is used in covering the unit tests for input, user interface,
and output parts.
2. White Box Testing: This technique is used in testing the functional behavior of the system by giving
the input and checking the functionality output including the internal design structure and code of the
modules.
3. Gray Box Testing: This technique is used in executing the relevant test cases, test methods, test
functions, and analyzing the code performance for the modules.

Unit Testing Tools:


Here are some commonly used Unit Testing tools:
1. Jtest
2. Junit
3. NUnit
4. EMMA
5. PHPUnit
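
To illustrate how these tools are used in practice, here is a minimal, hypothetical example using Python's
built-in unittest module (the absolute_value function and test names are assumed for the sketch, not
taken from the notes):

import unittest

def absolute_value(x):
    # Unit under test: a simple, self-contained function.
    return -x if x < 0 else x

class TestAbsoluteValue(unittest.TestCase):
    # Each test method exercises the unit in isolation from the rest of the system.
    def test_negative_input(self):
        self.assertEqual(absolute_value(-7), 7)

    def test_positive_input(self):
        self.assertEqual(absolute_value(3), 3)

    def test_zero(self):
        self.assertEqual(absolute_value(0), 0)

if __name__ == "__main__":
    unittest.main()

Running the file executes every test method independently and reports each failure separately, which is
exactly the "isolate a section of code and verify its correctness" objective listed above.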

Advantages of Unit Testing:


1. Unit Testing allows developers to learn what functionality is provided by a unit and how to use it to
gain a basic understanding of the unit API.
2. Unit testing allows the programmer to refine code and make sure the module works properly.
3. Unit testing enables testing parts of the project without waiting for others to be completed.
4. Early Detection of Issues: Unit testing allows developers to detect and fix issues early in the
development process, before they become larger and more difficult to fix.
5. Improved Code Quality: Unit testing helps to ensure that each unit of code works as intended and
meets the requirements, improving the overall quality of the software.
6. Increased Confidence: Unit testing provides developers with confidence in their code, as they can
validate that each unit of the software is functioning as expected.
7. Faster Development: Unit testing enables developers to work faster and more efficiently, as they can
validate changes to the code without having to wait for the full system to be tested.
8. Better Documentation: Unit testing provides clear and concise documentation of the code and its
behavior, making it easier for other developers to understand and maintain the software.
9. Facilitation of Refactoring: Unit testing enables developers to safely make changes to the code, as
they can validate that their changes do not break existing functionality.
10. Reduced Time and Cost: Unit testing can reduce the time and cost required for later testing, as it
helps to identify and fix issues early in the development process.

Disadvantages of Unit Testing:


1. The process is time-consuming for writing the unit test cases.
2. Unit Testing will not cover all the errors in the module because there is a chance of having errors in
the modules while doing integration testing.
3. Unit Testing is not efficient for checking the errors in the UI(User Interface) part of the module.
4. It requires more time for maintenance when the source code is changed frequently.
5. It cannot cover the non-functional testing parameters such as scalability, the performance of the
system, etc.
6. Time and Effort: Unit testing requires a significant investment of time and effort to create and
maintain the test cases, especially for complex systems.
7. Dependence on Developers: The success of unit testing depends on the developers, who must write
clear, concise, and comprehensive test cases to validate the code.
8. Difficulty in Testing Complex Units: Unit testing can be challenging when dealing with complex units,
as it can be difficult to isolate and test individual units in isolation from the rest of the system.
9. Difficulty in Testing Interactions: Unit testing may not be sufficient for testing interactions between
units, as it only focuses on individual units.
10. Difficulty in Testing User Interfaces: Unit testing may not be suitable for testing user interfaces, as it
typically focuses on the functionality of individual units.
11. Over-reliance on Automation: Over-reliance on automated unit tests can lead to a false sense of
security, as automated tests may not uncover all possible issues or bugs.
12. Maintenance Overhead: Unit testing requires ongoing maintenance and updates, as the code and
test cases must be kept up-to-date with changes to the software.

Integration testing is the process of testing the interface between two software units or modules. It
focuses on determining the correctness of the interface. The purpose of integration testing is to expose
faults in the interaction between integrated units. Once all the modules have been unit tested, integration
testing is performed.
Integration testing is a software testing technique that focuses on verifying the interactions and data
exchange between different components or modules of a software application. The goal of integration
testing is to identify any problems or bugs that arise when different components are combined and
interact with each other. Integration testing is typically performed after unit testing and before system
testing. It helps to identify and resolve integration issues early in the development cycle, reducing the
risk of more severe and costly problems later on.
Integration testing can be done by picking module by module. This can be done so that there should be
a proper sequence to be followed. And also if you don’t want to miss out on any integration scenarios
then you have to follow the proper sequence. Exposing the defects is the major focus of the integration
testing and the time of interaction between the integrated units.
Integration test approaches – There are four types of integration testing approaches. Those
approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the modules
are combined and the functionality is verified after the completion of individual module testing. In simple
words, all the modules of the system are simply put together and tested. This approach is practicable
only for very small systems. If an error is found during the integration testing, it is very difficult to localize
the error, as the error may potentially belong to any of the modules being integrated. So, errors
reported during big-bang integration testing are very expensive to debug and fix.
Big-Bang integration testing is a software testing approach in which all components or modules of a
software application are combined and tested at once. This approach is typically used when the software
components have a low degree of interdependence or when there are constraints in the development
environment that prevent testing individual components. The goal of big-bang integration testing is to
verify the overall functionality of the system and to identify any integration problems that arise when the
components are combined. While big-bang integration testing can be useful in some situations, it can
also be a high-risk approach, as the complexity of the system and the number of interactions between
components can make it difficult to identify and diagnose problems.
Advantages:
1. It is convenient for small systems.
2. Simple and straightforward approach.
3. Can be completed quickly.
4. Does not require a lot of planning or coordination.
5. May be suitable for small systems or projects with a low degree of interdependence between
components.
Disadvantages:
1. There will be quite a lot of delay because you would have to wait for all the modules to be
integrated.
2. High-risk critical modules are not isolated and tested on priority since all modules are tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
7. May not provide enough visibility into the interactions and data exchange between components.
8. This can result in a lack of confidence in the system’s stability and reliability.
9. This can lead to decreased efficiency and productivity.
10. This may result in a lack of confidence in the development team.
11. This can lead to system failure and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, the modules at the lower levels are tested first
and then combined with the higher-level modules until all modules have been tested. The primary purpose
of this integration testing is that each subsystem tests the interfaces among the various modules making
up the subsystem. This integration testing uses test drivers to drive the lower-level modules and pass
appropriate data to them.
Advantages:
• In bottom-up testing, no stubs are required.
• A principal advantage of this integration testing is that several disjoint subsystems can be tested
simultaneously.
• It is easy to create the test conditions.
• Best for applications that use a bottom-up design approach.
• It is easy to observe the test results.
Disadvantages:
• Driver modules must be produced.
• Testing becomes complex when the system is made up of a large number of small
subsystems.
• Until the last module has been created and integrated, no working model of the system can be
demonstrated.
3. Top-Down Integration Testing – In top-down integration testing, testing takes place from top to
bottom: high-level modules are tested first, then low-level modules, and finally the low-level modules are
integrated with the high-level ones to ensure the system is working as intended. Stubs are used to
simulate the behaviour of the lower-level modules that are not yet integrated.
Advantages:
• Separately debugged module.
• Few or no drivers needed.
• It is more stable and accurate at the aggregate level.
• Easier isolation of interface errors.
• In this, design defects can be found in the early stages.
Disadvantages:
• Needs many Stubs.
• Modules at the lower levels are tested inadequately.
• It is difficult to observe the test output.
• Stub design is difficult.
4. Mixed Integration Testing – Mixed integration testing is also called sandwiched integration testing.
It follows a combination of the top-down and bottom-up testing approaches. In the top-down approach,
testing can start only after the top-level modules have been coded and unit tested; in the bottom-up
approach, testing can start only after the bottom-level modules are ready. The sandwich or mixed
approach overcomes this shortcoming of the top-down and bottom-up approaches. It is also called
hybrid integration testing. Both stubs and drivers are used in mixed integration testing.
Advantages:
• Mixed approach is useful for very large projects having several sub projects.
• This Sandwich approach overcomes this shortcoming of the top-down and bottom-up approaches.
• Parallel test can be performed in top and bottom layer tests.
Disadvantages:
• Mixed integration testing has a very high cost because one part of the system follows a top-down
approach while another part follows a bottom-up approach.
• This integration testing cannot be used for smaller systems with huge interdependence between
different modules.
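
To make the role of stubs and drivers concrete, the following is a small, hypothetical Python sketch
(module and function names are assumed, not taken from the notes). The lower-level module is replaced
by a stub using unittest.mock, while the test function itself acts as a driver that calls the module under
test:

from unittest import mock

# Lower-level module not yet developed; in top-down integration it is replaced by a stub.
def fetch_exchange_rate(currency):
    raise NotImplementedError("module still under development")

# Higher-level module under test, which depends on the lower-level module.
def convert_to_usd(amount, currency):
    rate = fetch_exchange_rate(currency)
    return round(amount * rate, 2)

def test_convert_to_usd_with_stub():
    # The stub returns canned data in place of the real lower-level module.
    with mock.patch(__name__ + ".fetch_exchange_rate", return_value=1.1):
        assert convert_to_usd(100, "EUR") == 110.0

# The test function itself plays the role of a driver: it calls the module under test.
test_convert_to_usd_with_stub()
print("integration test with stub passed")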

What is System Testing?


System Testing is a level of testing that validates the complete and fully integrated
software product. The purpose of a system test is to evaluate the end-to-end system
specifications. Usually, the software is only one element of a larger computer-based
system. Ultimately, the software is interfaced with other software/hardware systems.
System Testing is defined as a series of different tests whose sole purpose is to exercise
the full computer-based system.

System Testing is Blackbox


Two Category of Software Testing

• Black Box Testing


• White Box Testing

System test falls under the black box testing category of Software testing.

White box testing is the testing of the internal workings or code of a software application.
In contrast, black box or System Testing is the opposite. System test involves the external
workings of the software from the user’s perspective.

What do you verify in System Testing?


System Testing involves testing the software code for following

• Testing the fully integrated applications including external peripherals in order to


check how components interact with one another and with the system as a whole.
This is also called End to End testing scenario.
• Verify thorough testing of every input in the application to check for desired outputs.
• Testing of the user’s experience with the application.
• That is a very basic description of what is involved in system testing. You need to
build detailed test cases and test suites that test each aspect of the application as
seen from the outside without looking at the actual source code. To learn more about
a comprehensive approach to this process, consider reading about end-to-end
testing.
• End To End Testing is a software testing method that validates entire software from
starting to the end along with its integration with external interfaces. The purpose of
end-to-end testing is testing whole software for dependencies, data integrity and
communication with other systems, interfaces and databases to exercise complete
production like scenario.

Why End to End Testing?


End To End Testing verifies complete system flow and increases confidence by detecting
issues and increasing Test Coverage of subsystems. Modern software systems are
complex and interconnected, consisting of multiple subsystems that may differ from one another.
The whole system can collapse if any subsystem fails, which is a major risk that can be
avoided by End-to-End testing.

End to End Testing Process:


The following diagram gives an overview of the End to End testing process.

• Software Testing Hierarchy


As with almost any software engineering process, software testing has a prescribed order in
which things should be done. The following is a list of software testing categories arranged
in chronological order. These are the steps taken to fully test new software in preparation
for marketing it:

• Unit testing performed on each module or block of code during development. Unit
Testing is normally done by the programmer who writes the code.
• Integration testing done before, during and after integration of a new module into the
main software package. This involves testing of each individual code module. One
piece of software can contain several modules which are often created by several
different programmers. It is crucial to test each module’s effect on the entire program
model.
• System testing done by a professional testing agent on the completed software
product before it is introduced to the market.
• Acceptance testing – beta testing of the product done by the actual end users.

Types of System Testing


There are more than 50 types of System Testing. Below we have listed types of system
testing a large software development company would typically use

1. Usability Testing – mainly focuses on the user’s ease to use the application,
flexibility in handling controls and ability of the system to meet its objectives
2. Load Testing – is necessary to know that a software solution will perform under
real-life loads.
3. Regression Testing – involves testing done to make sure none of the changes
made over the course of the development process have caused new bugs. It also
makes sure no old bugs appear from the addition of new software modules over
time.
4. Recovery Testing – is done to demonstrate a software solution is reliable,
trustworthy and can successfully recoup from possible crashes.
5. Migration Testing – is done to ensure that the software can be moved from older
system infrastructures to current system infrastructures without any issues.
6. Functional Testing – Also known as functional completeness testing, Functional
Testing involves trying to think of any possible missing functions. Testers might
make a list of additional functionalities that a product could have to improve it during
functional testing.
7. Hardware/Software Testing – IBM refers to Hardware/Software testing as “HW/SW
Testing”. This is when the tester focuses his/her attention on the interactions
between the hardware and software during system testing.

What Types of System Testing Should Testers Use?


There are over 50 different types of system testing. The specific types used by a tester
depend on several variables. Those variables include:

• Who the tester works for – This is a major factor in determining the types of system
testing a tester will use. Methods used by large companies are different than that
used by medium and small companies.
• Time available for testing – Ultimately, all 50 testing types could be used. Time is
often what limits us to using only the types that are most relevant for the software
project.
• Resources available to the tester – Of course some testers will not have the
necessary resources to conduct a testing type. For example, if you are a tester
working for a large software development firm, you are likely to have expensive
automated testing software not available to others.
• Software Tester’s Education- There is a certain learning curve for each type of
software testing available. To use some of the software involved, a tester has to
learn how to use it.
• Testing Budget – Money becomes a factor not just for smaller companies and
individual software developers but large companies as well.

What is Component Testing?


Component testing is defined as a software testing type, in which the testing is performed
on each individual component separately without integrating with other components. It’s
also referred to as Module Testing when it is viewed from an architecture perspective.
Component Testing is also referred to as Unit Testing, Program Testing or Module Testing.

Generally, any software as a whole is made of several components. Component Level


Testing deals with testing these components individually.

It is one of the most frequent black box testing types, and it is performed by the QA team.

There will be a test strategy and a test plan for component testing, in which each and every part of the
software or application is considered individually. For each of these components, a test scenario is
defined, which is then broken down into high-level test cases and further into low-level, detailed test
cases with prerequisites.
The usage of the term “Component Testing” varies from domain to domain and
organization to organization.

The most common reasons for differing perceptions of component testing are:

1. Type of Development Life Cycle Model Chosen


2. Complexity of the software or application under test
3. Testing with or without isolation from the rest of the components in the software or
application.

As we know, the Software Test Life Cycle has many test artifacts (documents made and used during
testing activities). Among these artifacts, it is the Test Policy and Test Strategy that define the types of
testing and the depth of testing to be performed in a given project.
Who does Component Testing
Component testing is performed by testers. ‘Unit Testing’ is performed by the developers
where they do the testing of the individual functionality or procedure. After Unit Testing is
performed, the next testing is component testing. Component testing is done by the testers.

When to perform Component testing


Component testing is performed soon after unit testing is done by the developers and the build is
released to the testing team. This build is referred to as the UT build (Unit Testing Build). The major
functionality of all the components is tested in this phase.

Entry criteria for component testing

• The minimum number of components to be included in the UT build should be developed and
unit tested.

Exit criteria for component testing

• The functionality of all the components should be working fine.

• There should be no Critical, High, or Medium severity and priority defects present in the
defect log.

Component Testing Techniques


Based on depth of testing levels, Component testing can be categorized as

1. CTIS – Component Testing In Small


2. CTIL – Component Testing In Large

CTIS – Component Testing in Small

Component testing may be done with or without isolation from the rest of the components in the
software or application under test. If it is performed in isolation from the other components, it is
referred to as Component Testing in Small.

Example 1: Consider a website which has 5 different web pages; testing each web page separately
and in isolation from the other components is referred to as Component Testing in Small.

Example 2: Consider the home page of the guru99.com website which has many
components like

Home, Testing, SAP, Web, Must Learn!, Big Data, Live Projects, Blog, etc.
Similarly, any software is made of many components, and every component has its own
subcomponents. Testing each module mentioned in Example 2 separately, without considering
integration with other components, is referred to as Component Testing in Small.

CTIL – Component Testing in Large

Component testing done without isolation from the other components in the software or application
under test is referred to as Component Testing in Large.

Let’s take an example to understand it in a better way. Suppose there is an application


consisting of three components say Component A, Component B, and Component C.

The developer has developed the component B and wants it tested. But in order
to completely test the component B, few of its functionalities are dependent on component
A and few on component C.

Functionality Flow: A -> B -> C, which means component A calls B and B calls C, so B depends on
both A and C; the stub is the called function and the driver is the calling function.

But component A and component C have not been developed yet. In that case, to test component B
completely, we can replace component A and component C with a driver and a stub as required. So
basically, components A and C are replaced by a driver and a stub, which act as dummy objects until
they are actually developed.

• Stub: A stub is called from the software component to be tested; here, component B calls the stub
that stands in for component C.
• Driver: A driver calls the component to be tested; here, the driver stands in for component A and
calls component B.

Example Test Cases for Component Testing


Consider two web pages which are interrelated from a functionality point of view.
1. Web page 1 is the login page of demo.com

When the user enters a valid user ID and password in the text fields and clicks the Submit button,
the web page navigates to the home page of the demo bank website.

So here the login page is one component, and the home page is another. Testing the functionality
of the individual pages separately is called component testing.

Component testing scenarios on web page 1 –

• Enter an invalid user ID and verify whether a user-friendly warning pop-up is shown to the
end user.
• Enter an invalid user ID and password, click 'Reset', and verify that the data entered in the
user-ID and password text fields is cleared.
• Enter a valid user name and password and click the 'Login' button.

Component testing scenarios on web page 2 –

• Verify that the “Welcome to manager page of guru99 bank” message is displayed on the home page.
• Verify that all the links on the left side of the web page are clickable.
• Verify that the manager ID is displayed in the center of the home page.
• Verify the presence of the three different images on the home page.
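
A hedged sketch of how the first web page 1 scenario could be automated with Selenium WebDriver is
shown below; the URL, element IDs, and warning text are assumptions for illustration, not taken from a
real site:

# Illustrative component test for the login page using Selenium WebDriver.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://demo.example.com/login")

# Scenario: an invalid user ID should produce a user-friendly warning.
driver.find_element(By.ID, "user-id").send_keys("invalid-user")
driver.find_element(By.ID, "password").send_keys("wrong-password")
driver.find_element(By.ID, "login").click()
warning = driver.find_element(By.ID, "warning-message").text
assert "invalid" in warning.lower()

driver.quit()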
2.5 BUILDING THE REQUIREMENTS MODEL: The intent of the analysis model is to provide a description
of the required informational, functional, and behavioral domains for a computer-based system. The
model changes dynamically as you learn more about the system to be built, and other stakeholders
understand more about what they really require. For that reason, the analysis model is a snapshot of
requirements at any given time.

2.5.1 Elements of the Requirements Model: There are many different ways to look at the requirements
for a computer-based system. Different modes of representation force you to consider requirements
from different viewpoints—an approach that has a higher probability of uncovering omissions,
inconsistencies, and ambiguity.

Scenario-based elements. The system is described from the user’s point of view using a scenario-based
approach. For example, basic use cases and their corresponding use-case diagrams evolve into more
elaborate template-based use cases. Scenario-based elements of the requirements model are often the
first part of the model that is developed. Three levels of elaboration are shown, culminating in a
scenario-based representation.

Class-based elements. Each usage scenario implies a set of objects that are manipulated as an actor
interacts with the system. These objects are categorized into classes—a collection of things that have
similar attributes and common behaviors.
Behavioral elements. The behavior of a computer-based system can have a profound effect on the
design that is chosen and the implementation approach that is applied. Therefore, the requirements
model must provide modeling elements that depict behavior. The state diagram is one method for
representing the behavior of a system by depicting its states and the events that cause the system to
change state. A state is any externally observable mode of behavior. In addition, the state diagram
indicates actions taken as a consequence of a particular event.

Flow-oriented elements. Information is transformed as it flows through a computer-based system. The


system accepts input in a variety of forms, applies functions to transform it, and produces output in a
variety of forms. Input may be a control signal transmitted by a transducer, a series of numbers typed by
a human operator, a packet of information transmitted on a network link, or a voluminous data file
retrieved from secondary storage. The transform(s) may comprise a single logical comparison, a complex
numerical algorithm, or a rule-inference approach of an expert system.

2.5.2 Analysis Patterns: Anyone who has done requirements engineering on more than a few software
projects begins to notice that certain problems reoccur across all projects within a specific application
domain. These analysis patterns suggest solutions (e.g., a class, a function, a behavior) within the
application domain that can be reused when modeling many applications. Analysis patterns are
integrated into the analysis model by reference
to the pattern name. They are also stored in a repository so that requirements engineers can use search
facilities to find and apply them. Information about an analysis pattern (and other types of patterns) is
presented in a standard template.

2.6 NEGOTIATING REQUIREMENTS In an ideal requirements engineering context, the inception,


elicitation, and elaboration tasks determine customer requirements in sufficient detail to proceed to
subsequent software engineering activities. In reality, however, you may have to enter into a negotiation with one or more
stakeholders. In most cases, stakeholders are asked to balance functionality, performance, and other
product or system characteristics against cost and time-to-market. The intent of this negotiation is to
develop a project plan that meets stakeholder needs while at the same time reflecting the real-world
constraints (e.g., time, people, budget) that have been placed on the software team. The best
negotiations strive for a “win-win” result. That is, stakeholders win by getting the system or product that
satisfies the majority of their needs and you win by working to realistic and achievable budgets and
deadlines.

Boehm [Boe98] defines a set of negotiation activities at the beginning of each software process
iteration.

Rather than a single customer communication activity, the following activities are defined:

1. Identification of the system or subsystem’s key stakeholders.

2. Determination of the stakeholders’ “win conditions.”

3. Negotiation of the stakeholders’ win conditions to reconcile them into a set of win-win conditions
for all concerned.

2.7 VALIDATING REQUIREMENTS As each element of the requirements model is created, it is examined
for inconsistency, omissions, and ambiguity. The requirements represented by the model are prioritized
by the stakeholders and grouped within requirements packages that will be implemented as software
increments. A review of the requirements model addresses the following questions:

• Is each requirement consistent with the overall objectives for the system/product?
• Have all requirements been specified at the proper level of abstraction? That is, do some
requirements provide a level of technical detail that is inappropriate at this stage?
• Is the requirement really necessary or does it represent an add-on feature that may not be
essential to the objective of the system?
• Is each requirement bounded and unambiguous?
• Does each requirement have attribution? That is, is a source noted for each requirement?
• Do any requirements conflict with other requirements?
Test automation is the process of using automation tools to maintain test data, execute
tests, and analyze test results to improve software quality.

Automated testing is also called test automation or automated QA testing. When


executed well, it relieves much of the manual requirements of the testing lifecycle.

Types of Automated Testing


Most tests done manually can be automated: what a user performs manually can be replicated with
automation tools using an automation script. However, not all tests should be automated.

Here is a list of test types that can safely be automated.

1. Unit Testing

Unit testing is when you isolate a single unit of your application from the rest of the
software and test its behavior. These tests don’t depend on external APIs, databases,
or anything else.

If you have a function on which you want to perform a unit test and that function uses
some external library or even another unit from the same app, then these resources will
be mocked.

The main purpose of unit testing is to see how each component of your application will
work, without being impacted by anything else. Unit testing is performed during the
development phase and is considered the first level of testing.

2. Integration Testing

In integration testing, you test how the units are integrated logically and how they work
as a group.

The main purpose of integration testing is to verify how the modules communicate and
behave together and to evaluate the compliance of a system.

3. Smoke Testing

Smoke testing is performed to examine whether the system build is stable or not. In
short, its purpose is to examine if the main functionalities work properly so that testers
can proceed with further testing.

4. Regression Testing
Regression testing checks that a recent change in code doesn’t affect any existing
features of the app in question. In simple terms, it verifies that changes made to the
system did not break any functionality that was working correctly prior to their
implementation.

There are several types of tests that can be automated. Automated testing is when you
configure a script/program to perform the same steps you would perform to test the
software manually.

In the end, the script will perform whatever you instructed it to and it will show you if the
test result is the same as the one that you expected.
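
As an illustration, here is a hedged sketch of a small automated smoke-test script; it assumes the
Python requests library and uses placeholder URLs:

# Smoke-test sketch: checks that the build's main entry points respond at all,
# so that further (deeper) testing can proceed.
import requests

SMOKE_ENDPOINTS = [
    "http://app.example.com/health",
    "http://app.example.com/login",
]

def run_smoke_tests():
    for url in SMOKE_ENDPOINTS:
        response = requests.get(url, timeout=5)
        assert response.status_code == 200, f"smoke test failed for {url}"
    print("build is stable enough for further testing")

if __name__ == "__main__":
    run_smoke_tests()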

A software developer in testing has significant code knowledge and experience in


testing.

They can create functional and nonfunctional code-based test automation scripts with
tools like Selenium and Appium, among others. The SDET is always accountable for the
code-based testing.

The software developer tester creates unit and build acceptance tests.

Software developers also operate in code-based testing. They also work in UI and UX
tests, which are manual.

Test automation is a perfect solution for common, repetitive, and high-volume testing.
Coordinating and managing testing also become much easier. You can track and
share testing results from a single, centralized location.

This gives you more thorough test coverage, because more testing can be
accomplished. While there is definitely manual work still involved in testing, using an
automation platform such as Perfecto improves the accuracy and coverage of testing for teams
competing in an increasingly fast-paced software market.

TEST CASE DESIGN

What is Software Testing Technique?


Software testing techniques help you design better test cases. Since exhaustive testing is
not possible, these techniques help reduce the number of test cases to be
executed while increasing test coverage. They help identify test conditions that are
otherwise difficult to recognize.

Boundary Value Analysis (BVA)


Boundary value analysis is based on testing at the boundaries between partitions. It
includes maximum, minimum, inside or outside boundaries, typical values and error values.

It is generally seen that a large number of errors occur at the boundaries of the defined
input values rather than the center. It is also known as BVA and gives a selection of test
cases which exercise bounding values.

This black box testing technique complements equivalence partitioning. This software
testing technique is based on the principle that, if a system works well for these particular
boundary values, then it will work well for all values that lie between the two boundary
values.

Guidelines for Boundary Value analysis

• If an input condition is restricted between values x and y, then the test cases should
be designed with values x and y as well as values which are above and below x and
y.
• If an input condition can take a large number of values, test cases should be developed
that exercise the minimum and maximum values. Values just above and below the
minimum and maximum are also tested.
• Apply guidelines 1 and 2 to output conditions, so that outputs reflecting the minimum and
maximum expected values, as well as values just below and above them, are exercised.
• Example: if an order quantity field accepts values from 1 to 100, test 0, 1, 2, 99, 100, and 101.
Equivalence Class Partitioning
Equivalence Class Partitioning allows you to divide a set of test conditions into partitions that
can be considered the same. This software testing method divides the input domain of a
program into classes of data from which test cases should be designed.

The concept behind this test case design technique is that a test case using a representative
value from a class is equivalent to a test using any other value of the same class. It allows you to
identify valid as well as invalid equivalence classes.

Example:

Input conditions are valid between

1 to 10 and 20 to 30
Hence there are five equivalence classes

--- to 0 (invalid)
1 to 10 (valid)
11 to 19 (invalid)
20 to 30 (valid)
31 to --- (invalid)
You select values from each class, i.e.,

-2, 3, 15, 25, 45


Example 1: Equivalence and Boundary Value
• Let’s consider the behavior of Order Pizza Text Box Below
• Pizza values 1 to 10 are considered valid. A success message is shown.
• Values 11 to 99 are considered invalid for an order, and an error message will
appear: “Only 10 Pizza can be ordered”
Here is the test condition

1. Any number greater than 10 entered in the Order Pizza field (say 11) is considered invalid.
2. Any number less than 1, that is 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any 3-digit number, say -100, is invalid.

We cannot test all the possible values because, if we did, the number of test cases would be
more than 100. To address this problem, we use the equivalence partitioning hypothesis, where
we divide the possible values of the order quantity into groups or sets, as shown below, where the
system behavior can be considered the same.

The divided sets are called Equivalence Partitions or Equivalence Classes. Then we pick
only one value from each partition for testing. The hypothesis behind this technique is that
if one condition/value in a partition passes all others will also pass. Likewise, if one
condition in a partition fails, all other conditions in that partition will fail.
Boundary Value Analysis– in Boundary Value Analysis, you test boundaries between
equivalence partitions

In our earlier equivalence partitioning example, instead of checking one value for each
partition, you will check the values at the partitions like 0, 1, 10, 11 and so on. As you may
observe, you test values at both valid and invalid boundaries. Boundary Value Analysis
is also called range checking.

Equivalence partitioning and boundary value analysis(BVA) are closely related and can be
used together at all levels of testing.
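
As a hedged sketch of how the Order Pizza example above could be automated, the following uses
pytest's parametrize feature; the order_pizza function and its messages are assumptions made for the
illustration:

import pytest

def order_pizza(quantity):
    # Hypothetical system under test for the Order Pizza example.
    if 1 <= quantity <= 10:
        return "success"
    return "Only 10 Pizza can be ordered"

# Boundary values around the valid partition 1..10, plus one representative per partition.
@pytest.mark.parametrize("quantity,expected", [
    (0, "Only 10 Pizza can be ordered"),    # just below the lower boundary
    (1, "success"),                         # lower boundary
    (5, "success"),                         # representative valid value
    (10, "success"),                        # upper boundary
    (11, "Only 10 Pizza can be ordered"),   # just above the upper boundary
    (-100, "Only 10 Pizza can be ordered"), # invalid 3-digit negative value
])
def test_order_pizza_boundaries(quantity, expected):
    assert order_pizza(quantity) == expected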

Decision Table Based Testing


A decision table is also known as a Cause-Effect table. This software testing technique is
used for functions which respond to a combination of inputs or events. For example, a
submit button should be enabled if the user has entered all required fields.

The first task is to identify functionalities where the output depends on a combination of
inputs. If there are large input set of combinations, then divide it into smaller subsets which
are helpful for managing a decision table.

For every function, you need to create a table and list down all types of combinations of
inputs and its respective outputs. This helps to identify a condition that is overlooked by the
tester.

Following are steps to create a decision table:

• Enlist the inputs in rows


• Enter all the rules in the columns
• Fill the table with the different combinations of inputs
• In the last row, note down the output for each input combination.
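
A minimal sketch (with assumed rule values, not taken from the notes) of representing such a decision
table in code, using the submit-button example above, is shown below; every column of the table
becomes one test case:

# Decision table for a hypothetical Submit button: it is enabled only when
# both required fields (user ID and password) have been entered.
decision_table = [
    # (user_id_entered, password_entered) -> submit_enabled
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def submit_enabled(user_id_entered, password_entered):
    # System under test: the condition behind the Submit button.
    return user_id_entered and password_entered

# Every rule (column) of the decision table becomes one test case.
for (user_id_entered, password_entered), expected in decision_table:
    assert submit_enabled(user_id_entered, password_entered) == expected
print("all decision-table rules passed")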
State Transition
In State Transition technique changes in input conditions change the state of the Application
Under Test (AUT). This testing technique allows the tester to test the behavior of an AUT.
The tester can perform this action by entering various input conditions in a sequence. In
State transition technique, the testing team provides positive as well as negative input test
values for evaluating the system behavior.

Guideline for State Transition:

• State transition should be used when a testing team is testing the application for a
limited set of input values.
• The Test Case Design Technique should be used when the testing team wants to
test sequence of events which happen in the application under test.

Example:

In the following example, if the user enters a valid password in any of the first three
attempts the user will be able to log in successfully. If the user enters the invalid password
in the first or second try, the user will be prompted to re-enter the password. When the user
enters password incorrectly 3rd time, the action has taken, and the account will be blocked.
State Transition Diagram

In this diagram when the user gives the correct PIN number, he or she is moved to Access
granted state. Following Table is created based on the diagram above-

State Transition Table


State                 Correct PIN    Incorrect PIN
S1) Start             S5             S2
S2) 1st attempt       S5             S3
S3) 2nd attempt       S5             S4
S4) 3rd attempt       S5             S6
S5) Access Granted    –              –
S6) Account Blocked   –              –
In the above table, when the user enters the correct PIN, the state transitions to Access
Granted. If the user enters an incorrect PIN, he or she is moved to the next state. If the same
happens a third time, the user reaches the Account Blocked state.
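
A simplified, illustrative Python sketch of this behavior is shown below (the state names, PIN value, and
login function are assumptions for the sketch); the asserts act as state transition test cases covering
both valid and invalid input sequences:

# A correct PIN in any of the first three attempts grants access;
# a third consecutive incorrect PIN blocks the account.
ACCESS_GRANTED = "access_granted"
ACCOUNT_BLOCKED = "account_blocked"

def login(pin_attempts, correct_pin="1234", max_attempts=3):
    failures = 0
    for pin in pin_attempts:
        if pin == correct_pin:
            return ACCESS_GRANTED
        failures += 1
        if failures == max_attempts:
            return ACCOUNT_BLOCKED
    return "awaiting_retry"  # still prompting the user to re-enter the PIN

# State-transition test cases: positive and negative input sequences.
assert login(["1234"]) == ACCESS_GRANTED                   # correct on 1st try
assert login(["0000", "1234"]) == ACCESS_GRANTED           # correct on 2nd try
assert login(["0000", "1111"]) == "awaiting_retry"         # two failures, not blocked yet
assert login(["0000", "1111", "2222"]) == ACCOUNT_BLOCKED  # third failure blocks the account
print("state transition tests passed")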

Error Guessing
Error Guessing is a software testing technique based on guessing the error which can
prevail in the code. The technique is heavily based on the experience where the test
analysts use their experience to guess the problematic part of the testing application.
Hence, the test analysts must be skilled and experienced for better error guessing.
The technique builds a list of possible errors or error-prone situations, and the tester then writes
test cases to expose those errors. To design test cases based on this software testing
technique, the analyst can use past experience to identify the conditions.

Guidelines for Error Guessing:

• The tester should use previous experience of testing similar applications
• Understanding of the system under test
• Knowledge of typical implementation errors
• Remember previously troubled areas
• Evaluate Historical data & Test results

What is a unified process model?


The Unified Process (UP) is a software development framework used for object-oriented
modeling. The framework is also known as Rational Unified Process (RUP) and the Open
Unified Process (Open UP). Some of the key features of this process include:

• It defines the order of phases.


• It is component-based, meaning a software system is built as a set of software
components. There must be well-defined interfaces between the components for smooth
communication.
• It follows an iterative, incremental, architecture-centric, and use-case driven approach

A visual representation of the unified process


Let's have a look at these approaches in detail.

The use-case-driven approach

A use case is a set of actions performed by one or more entities. In a use-case-driven approach, the
team derives its development work from use cases, which are in turn derived from the functional
requirements specified by the client. For example, an online learning management system can be specified in
terms of use cases such as "add a course," "delete a course," "pay fees," and so on.

The architecture-centric approach


The architecture-centric approach defines the form of the system and how it should be
structured to provide a specific functionality whereas the use case defines the functionality.

The iterative and incremental approach

An iterative and incremental approach means that the product will be developed in multiple
phases. During these phases, the developers evaluate and test each increment.

Phases
We can represent a unified process model as a series of cycles. Each cycle ends with the
release of a new system version for the customers. We have four phases in every cycle:

• Inception
• Elaboration
• Construction
• Transition
The phases of the unified process

Inception
The main goal of this phase involves delimiting the project scope. This is where we define why
we are making this product in the first place. It should have the following:

• What are the key features?


• How does this benefit the customers?
• Which methodology will we follow?
• What are the risks involved in executing the project?
• Schedule and cost estimates.

Elaboration

We build the system given the requirements, cost, and time constraints and all the risks
involved. It should include the following:

• Develop with the majority of the functional requirements implemented.


• Finalize the methodology to be used.
• Deal with the significant risks involved.

Construction
This phase is where the development, integration, and testing take place. We build the complete
architecture in this phase and hand the final documentation to the client.

Transition
This phase involves the deployment, multiple iterations, beta releases, and improvements of the
software. The users will test the software, which may raise potential issues. The development
team will then fix those errors.

This method allows us to deal with the changing requirements throughout the development
period. The unified process model has various applications which also makes it complex in
nature. Therefore, it's most suitable for smaller projects and should be implemented by a team
of professionals.

SOFTWARE REQUIREMENT DOCUMENT


In order to form a good SRS, here you will see some points which can be used and should be considered to
form the structure of a good SRS. These are as follows:
1. Introduction
• (i) Purpose of this document
• (ii) Scope of this document
• (iii) Overview
2. General description
3. Functional Requirements
4. Interface Requirements
5. Performance Requirements
6. Design Constraints
7. Non-Functional Attributes
8. Preliminary Schedule and Budget
9. Appendices

The Software Requirement Specification (SRS) format, as the name suggests, is a complete specification and
description of the requirements of the software that need to be fulfilled for successful development of the
software system. These requirements can be functional as well as non-functional, depending upon the type of
requirement. Interaction between the different customers and the contractor takes place because it is
necessary to fully understand the needs of the customers. Depending upon the information gathered after this
interaction, the SRS is developed, which describes the requirements of the software and may include the
changes and modifications needed to increase the quality of the product and to satisfy the customer’s demand.
1. Introduction :
• (i) Purpose of this Document – At first, main aim of why this document is necessary and what’s
purpose of document is explained and described.
• (ii) Scope of this document – In this, overall working and main objective of document and what value
it will provide to customer is described and explained. It also includes a description of development
cost and time required.
• (iii) Overview – In this, description of product is explained. It’s simply summary or overall review of
product.
2. General description : In this, general functions of product which includes objective of user, a user
characteristic, features, benefits, about why its importance is mentioned. It also describes features of user
community.
3. Functional Requirements : In this, possible outcome of software system which includes effects due to
operation of program is fully explained. All functional requirements which may include calculations, data
processing, etc. are placed in a ranked order.
4. Interface Requirements : In this, software interfaces which mean how software program communicates
with each other or users either in form of any language, code, or message are fully described and explained.
Examples can be shared memory, data streams, etc.
5. Performance Requirements : In this, how a software system performs desired functions under specific
condition is explained. It also explains required time, required memory, maximum error rate, etc.
6. Design Constraints : In this, constraints which simply means limitation or restriction are specified and
explained for design team. Examples may include use of a particular algorithm, hardware and software
limitations, etc.
7. Non-Functional Attributes : In this, non-functional attributes are explained that are required by software
system for better performance. An example may include Security, Portability, Reliability, Reusability,
Application compatibility, Data integrity, Scalability capacity, etc.
8. Preliminary Schedule and Budget : In this, initial version and budget of project plan are explained which
include overall time duration required and overall cost required for development of project.
9. Appendices : In this, additional information like references from where information is gathered, definitions
of some specific terms, acronyms, abbreviations, etc. are given and explained.
Uses of an SRS document:
1. The development team requires it to develop the product according to the needs.
2. Test plans are generated by the testing group based on the described external behavior.
3. Maintenance and support staff need it to understand what the software product is supposed to do.
4. Project managers base their plans and estimates of schedule, effort, and resources on it.
5. Customers rely on it to know what product they can expect.
6. It serves as a contract between the developer and the customer.
7. It serves documentation purposes.

System Requirements
System requirements are the configuration that a system must have in order for a
hardware or software application to run smoothly and efficiently. Failure to meet
these requirements can result in installation problems or performance problems.
The former may prevent a device or application from getting installed, whereas
the latter may cause a product to malfunction or perform below expectation or
even to hang or crash.

For packaged products, system requirements are often printed on the packaging.
For downloadable products, the system requirements are often indicated on the
download page. System requirements can be broadly classified as functional
requirements, data requirements, quality requirements and constraints. They are
often provided to consumers in complete detail. System requirements often
indicate the minimum and the recommended configuration. The former is the
most basic requirement, enough for a product to install or run, but performance
is not guaranteed to be optimal. The latter ensures a smooth operation.

Hardware system requirements often specify the operating system version,


processor type, memory size, available disk space and additional peripherals, if
any, needed. Software system requirements, in addition to the aforementioned
requirements, may also specify additional software dependencies (e.g., libraries,
driver version, framework version). Some hardware/software manufacturers
provide an upgrade assistant program that users can download and run to
determine whether their system meets a product’s requirements.

Verification and Formal Methods


▪ Formal methods of software development are based on mathematical representations of the
software, usually as a formal specification.
▪ These formal methods are mainly concerned with a mathematical analysis of the specification
▪ with transforming the specification to a more detailed, semantically equivalent representation; or
▪ with formally verifying that one representation of the system is semantically equivalent to another
representation.
▪ We can use formal methods as the ultimate static verification technique.
▪ They require very detailed analyses of the system specification and the program, and their use is
often time consuming and expensive.
▪ Consequently, the use of formal methods is mostly confined to safety- and security-critical
software development processes.
Formal methods may be used at different stages in the V & V process as shown
below:
▪ A formal specification of the system may be developed and mathematically analyzed for
inconsistency. This technique is effective in discovering specification errors and omissions.
▪ You can formally verify, using mathematical arguments, that the code of a software system is
consistent with its specification. This requires a formal specification and is effective in
discovering programming and some design errors. A transformational development process
where a formal specification is transformed through a series of more detailed representations or a
Cleanroom process may be used to support the formal verification process.
▪ Formal verification demonstrates that the developed program meets its specification so
implementation errors do not compromise dependability.
▪ The argument against the use of formal specification is that it requires specialized notations.
▪ These can only be used by specially trained staff and cannot be understood by domain experts.
▪ Software engineers cannot recognize potential difficulties with the requirements because they
don’t understand the domain; domain experts cannot find these problems because they don’t
understand the specification.
▪ Although the specification may be mathematically consistent, it may not specify the system
properties that are really required.
▪ Many people think that formal verification is not cost-effective.
▪ The same level of confidence in the system can be achieved more cheaply by using other
validation techniques such as inspections and system testing.
▪ It is sometimes claimed that the use of formal methods for system development leads to more
reliable and safer systems.
▪ There is no doubt that a formal system specification is less likely to contain anomalies that must
be resolved by the system designer.
▪ Formal specification and proof do not guarantee that the software will be reliable in practical use.
The reasons for this are:
1. The specification may not reflect the real requirements of system users.

❏ Lutz (Lutz, 1993) discovered that many failures experienced by users were a consequence of
specification errors and omissions that could not be detected by formal system specification.

❏ System users rarely understand formal notations so they cannot read the formal specification
directly to find errors and omissions.

2. The proof may contain errors.

❏ Program proofs are large and complex, so, like large and complex programs, they usually
contain errors.

3. The proof may assume a usage pattern which is incorrect.

❏ If the system is not used as anticipated, the proof may be invalid.

▪ In spite of their disadvantages, formal methods have an important role to play in the development
of critical software systems.
▪ Formal specifications are very effective in discovering specification problems that are the most
common causes of system failure.
▪ Formal verification increases confidence in the most critical components of these systems.
▪ The use of formal approaches is increasing as procurers demand it and as more and more
engineers become familiar with these techniques.
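As a very lightweight illustration of checking code against a specification, the sketch below expresses a
specification as pre- and postconditions checked by runtime assertions. This is not a formal proof (a
formal method would prove the property for all inputs), and the isqrt function is an assumed example:

# Specification (informally): for any non-negative integer n, isqrt(n) returns
# the largest integer r such that r*r <= n. The assertions check the
# precondition and postcondition at run time.
def isqrt(n):
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

for n in (0, 1, 2, 3, 4, 15, 16, 17, 1000):
    isqrt(n)
print("postcondition held for all sampled inputs")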
Verification and Validation
Verification and Validation is the process of investigating that a software system satisfies specifications
and standards and it fulfills the required purpose.

Barry Boehm described verification and validation as the following:

Verification: Are we building the product right?

Validation: Are we building the right product?

Verification:

● Verification is the process of checking that a software achieves its goal without any bugs.

● It is the process to ensure whether the product that is developed is right or not. It verifies whether the
developed product fulfills the requirements that we have.

● Verification is Static Testing.

● Activities involved in verification:

1. Inspections

2. Reviews

3. Walkthroughs

4. Desk-checking

Validation:

● Validation is the process of checking whether the software product is up to the mark.

● In other words, it checks whether the product meets the high-level requirements.

● It is the process of checking the validity of the product, i.e., it checks whether what we are developing is the
right product.

● It is the validation of the actual product against the expected product.

● Validation is the Dynamic Testing.

Verification is followed by Validation.


Diagrammatic representation of Verification and validation model:
KEY DIFFERENCE

● Verification process includes checking of documents, design, code and program whereas the Validation
process includes testing and validation of the actual product.

● Verification does not involve code execution while Validation involves code execution.

● Verification uses methods like reviews, walkthroughs, inspections, and desk-checking, whereas
Validation uses methods like black box testing, white box testing and non-functional testing.
● Verification checks whether the software conforms to a specification, whereas Validation checks whether
the software meets the requirements and expectations.

● Verification finds the bugs early in the development cycle whereas Validation finds the bugs that
verification cannot catch.

● Verification process targets software architecture, design, database, etc. while the Validation process
targets the actual software product.

● Verification is done by the QA team while Validation is done by the involvement of testing team with
QA team.

● Verification process comes before validation whereas the Validation process comes after verification.
