Software Testing Unit 1 2 3 Printed Notes

The document provides a comprehensive overview of software testing, covering its processes, terminologies, and methodologies such as verification and validation. It details various testing techniques including functional, structural, regression, and object-oriented testing, as well as the importance of test cases and test suites. Additionally, it discusses the Software Development Life Cycle (SDLC) and the significance of Software Requirement Specification (SRS) documents in ensuring software quality.


Software Testing

UNIT I
Review of Software Engineering: Overview of Software Evolution, SDLC, Testing Process. Terminologies in Testing: Error, Fault, Failure, Verification, Validation, Difference Between Verification and Validation, Test Cases, Testing Suite, Test Oracles, Impracticality of Testing All Data; Impracticality of Testing All Paths. Verification: Verification Methods, SRS Verification, Source Code Reviews, User Documentation Verification, Software Project Audit, Tailoring Software Quality Assurance Program by Reviews, Walkthrough, Inspection and Configuration Audits

UNIT II
Functional Testing: Boundary Value Analysis, Equivalence Class Testing, Decision Table Based Testing, Cause Effect Graphing Technique. Structural Testing: Control Flow Testing, Path Testing, Independent Paths, Generation of Graph from Program, Identification of Independent Paths, Cyclomatic Complexity, Data Flow Testing, Mutation Testing

UNIT III
Regression Testing: What is Regression Testing? Regression Test Case Selection, Reducing the Number of Test Cases, Code Coverage Prioritization Technique. Reducing the Number of Test Cases: Prioritization Guidelines, Priority Category Scheme, Risk Analysis

UNIT IV
Software Testing Activities: Levels of Testing, Debugging, Testing Techniques and Their Applicability, Exploratory Testing. Automated Test Data Generation: Test Data, Approaches to Test Data Generation, Test Data Generation Using Genetic Algorithm, Test Data Generation Tools, Software Testing Tools, and Software Test Plan

UNIT V
Object Oriented Testing: Definition, Issues, Class Testing, Object Oriented Integration and System Testing. Testing Web Applications: Web Testing, User Interface Testing, Usability Testing, Security Testing, Performance Testing, Database Testing, Post Deployment Testing
Software Testing

Software Testing is a method to assess the functionality of the software program.


The process checks whether the actual software matches the expected requirements
and ensures the software is bug-free.

Software testing can be divided into two steps:


1. Verification: It refers to the set of tasks that ensure that the software correctly
implements a specific function. It means “Are we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the software that has been
built is traceable to customer requirements. It means “Are we building the right product?”

What is Software Evolution?

Software evolution is the process of continuously changing and improving software over
time. This can involve adding new features, fixing bugs, optimising performance, or adapting
the software to work with new technologies or platforms.

Software evolution is an important aspect of both software development and maintenance.

Software Development Life Cycle (SDLC)


Software development life cycle (SDLC) is a structured process that is used to design,
develop, and test good-quality software. SDLC consists of a precise plan that describes a
method for improving the quality of software and the all-around development process.
Testing Process
Software testing is the process of evaluating and verifying that a software
product or application does what it's supposed to do.

1. Test Plan
The test plan should include provisions about the quantity of work to be done, the timelines and milestones to be completed, the testing techniques, and other formalities like contingencies and hazards.

2. Analysis
In the analysis phase, a functional validation matrix is generated. The in-house or offshore testing team evaluates the requirements and decides which test cases will be automated and which will be tested manually.

3. Design
In the design stage, the testing team creates appropriate scripts for automated test
cases and generates test data for both automated and manual test cases.

4. Development
The development stage involves unit testing as well as the creation of performance
and stress test strategies.
5. Execution
The testing team runs unit tests, followed by functionality tests, detects vulnerabilities at a surface level, and reports them to the software developers.

6. Bug Fixes
When the testing team finds an issue, it is forwarded to the development team. If the development team decides to remedy the bugs, the testing team must retest the product to ensure that no new bugs are introduced during the correction.

7. Software Implementation
Software implementation is the last and most important phase of software testing, carried out when all test cases and processes have been finished. The program or software is given to the end users, who use it and report any errors found.

Terminologies in Testing:
Bug: A bug refers to a defect, which means that the software product or application is not working as per the agreed requirements (any type of logical error which causes the code to break).

Error: Errors are generated due to wrong logic, syntax, or loop that
can impact the end-user experience.

Fault: A fault arises from a lack of resources or from the developer not following proper steps, meaning that the logic to handle errors was not incorporated into the software.

Failure: Failure is the accumulation of several defects that ultimately leads to software failure and results in the loss of information. Failure is detected by end users once they face a particular issue in the software.
A comparison of Bug vs Defect vs Fault vs Error vs Failure:

Definition
 Bug: A bug refers to a defect, meaning the software product or application is not working as per the agreed requirements.
 Defect: A defect is a deviation between the actual and the expected output.
 Fault: A fault is a state that causes the software to fail, so that it does not achieve its necessary function.
 Error: An error is a mistake made in the code due to which compilation or execution fails.
 Failure: Failure is the accumulation of several defects that ultimately lead to software failure and the loss of information in critical modules, thereby making the system unresponsive.

Raised by
 Bug: Test engineers.
 Defect: Identified by testers and resolved by developers in the development phase of the SDLC.
 Fault: Human mistakes lead to a fault.
 Error: Developers and automation test engineers.
 Failure: Found by the test engineer during the development cycle of the SDLC.

Different types
 Bug: Logical bugs, algorithmic bugs, resource bugs.
 Defect: Classified based on priority (High, Medium, Low) and based on severity (Critical, Major, Minor, Trivial).
 Fault: Business logic faults, functional and logical faults, graphical user interface (GUI) faults, performance faults, security faults, hardware faults.
 Error: Syntactic errors, UI screen errors, error handling errors, flow control errors, calculation errors, hardware errors.
 Failure: NA.

Reasons behind
 Bug: Missing logic, erroneous logic, redundant code.
 Defect: Wrong design of the data definition process; an irregularity or gap in the software leading to its non-functioning.
 Fault: Receiving or providing incorrect input; a logic or coding error leading to the breakdown of the software; ambiguity in code logic.
 Error: Errors in code; inability to compile or execute a program; misunderstanding of requirements; faulty design and architecture; logical errors.
 Failure: Environment variables, system errors, human error.

Ways to prevent
 Bug: Implementing test-driven development; proper usage of primary and correct software coding practices.
 Defect: Peer review of the test documents and the requirements; adopting enhanced development practices and evaluating the cleanliness of the code.
 Fault: Implementing out-of-the-box programming methods; verifying the correctness of software design and coding.
 Error: Conducting peer reviews and code reviews; carefully reviewing the requirements as well as the specifications; categorizing and evaluating the errors and issues.
 Failure: Confirmation by re-testing the process end to end; validation of bug fixes and enhancing the overall quality of the software.

Verification
It is a process that determines the quality of the software. Verification is a relatively objective process that includes all the activities associated with producing high-quality software: testing, inspection, design analysis, specification analysis, and so on. The various processes and documents are expressed precisely enough that no subjective judgment should be needed in order to verify the software.

Validation
Validation is a process in which the requirements of the customer are actually met by the
software functionality. Validation is done at the end of the development process and takes
place after verifications are completed.

Difference Between Verification and Validation

 Verification is the static process of analyzing documents, visual designs, computer programs, and code. Validation is the dynamic process of checking that the correct product is being built for the user.

 Verification is done by the testers. Validation is done by the product team.

 The execution of code is not included in verification. The execution of code is included in validation.

 Verification is done before validation. Validation is done after verification.

 Verification checks "Are we building the product right?". Validation checks "Are we building the right product?".

 Verification targets internal aspects such as design. Validation targets the end product that is ready to be deployed.

 Verification is used to prevent errors. Validation is used to detect errors.

 Verification testing includes Quality Assurance. Validation testing includes Quality Control.

Test Case
A test case is a defined format for software testing, required to check whether a particular application or piece of software is working or not. It consists of a certain set of conditions that need to be checked to test an application or software (when the conditions are checked, the resultant output is compared with the expected output). Commonly used parameters include ID, condition, steps, input, expected result, actual result, status, and remarks.

Parameters of a Test Case:


 Module Name: Subject or title that defines the functionality of the test.
 Test Case Id: A unique identifier assigned to every single condition in a test
case.
 Tester Name: The name of the person who would be carrying out the test.
 Test scenario: The test scenario provides a brief description to the tester, as
in providing a small overview to know about what needs to be performed
and the small features, and components of the test.
 Test Case Description: The condition required to be checked for a given
software. for eg. Check if only numbers validation is working or not for an
age input box.
 Test Steps: Steps to be performed for the checking of the condition.
 Prerequisite: The conditions required to be fulfilled before the start of the
test process.
 Test Priority: As the name suggests, it assigns priority, distinguishing the test cases that must be performed first, or are more important, from those that can be performed later.
 Test Data: The inputs to be taken while checking for the conditions.
 Test Expected Result: The output which should be expected at the end of
the test.
 Test parameters: Parameters assigned to a particular test case.
 Actual Result: The output that is displayed at the end.
 Environment Information: The environment in which the test is being
performed, such as the operating system, security information, the software
name, software version, etc.
 Status: The status of tests such as pass, fail, NA, etc.
 Comments: Remarks on the test regarding the test for the betterment of the
software.
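Putting these parameters together, a test case can be sketched as a simple record. A minimal, illustrative example in Python follows; the module name, IDs, and the check_age function are assumptions for the sake of the example, not drawn from any real project:

```python
# A minimal sketch of a test case record built from the parameters above.
# All field values and check_age are illustrative assumptions.

def check_age(value):
    """Hypothetical feature under test: accept only integer ages 1-120."""
    return isinstance(value, int) and 1 <= value <= 120

test_case = {
    "module_name": "User Registration",
    "test_case_id": "TC_001",
    "tester_name": "A. Tester",
    "description": "Age input box must accept only numbers in range",
    "prerequisite": "Registration form is open",
    "test_data": 25,            # input taken while checking the condition
    "expected_result": True,    # output expected at the end of the test
    "actual_result": None,
    "status": None,
}

# Execute the test step and record the outcome.
test_case["actual_result"] = check_age(test_case["test_data"])
test_case["status"] = (
    "Pass" if test_case["actual_result"] == test_case["expected_result"] else "Fail"
)
print(test_case["test_case_id"], test_case["status"])  # TC_001 Pass
```

In practice the same record would carry the remaining parameters (environment information, comments, and so on); the point is that a test case pairs concrete test data with an expected result and a recorded status.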

Test Suite
A test suite is a set of tests designed to check the functionality and
performance of the software. It collects individual test cases based on
their specific purpose or characteristics.
A test suite is a collection of test cases grouped according to a specific set of criteria. With such grouping, testers can identify and prioritize the most critical tests, ensuring that the most important aspects of the software are tested first. This helps reduce the risk of missed errors or defects during testing.
 Definition: A test suite is a collection of test cases designed to test a specific feature or functionality of the software; a test case is a set of inputs, preconditions, and expected outcomes designed to test a particular aspect of the software.
 Function: A test suite tests multiple scenarios and functionalities; a test case tests a single scenario or functionality.
 Dependency: A test suite can be dependent on other test suites; test cases, ideally, run independently of each other.
 Priority: A test suite can be prioritized based on the functionality it covers; a test case can be prioritized based on the severity of the issues it uncovers.
 Purpose: A test suite validates broad functional requirements; a test case validates specific detailed scenarios.
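The grouping of test cases into a suite can be sketched with Python's built-in unittest module. The Calculator class and the chosen ordering are illustrative assumptions; the point is that individual test cases are collected into one suite and can be added in priority order:

```python
# A minimal sketch of a test suite grouping individual test cases.
# The Calculator class under test is an illustrative assumption.
import unittest

class Calculator:
    def add(self, a, b):
        return a + b
    def divide(self, a, b):
        return a / b

class AddTests(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(Calculator().add(2, 3), 5)

class DivideTests(unittest.TestCase):
    def test_divide_by_zero(self):
        with self.assertRaises(ZeroDivisionError):
            Calculator().divide(1, 0)

def build_suite():
    # Collect individual test cases into one suite; the most critical
    # check (division errors) is added first, reflecting its priority.
    suite = unittest.TestSuite()
    suite.addTest(DivideTests("test_divide_by_zero"))
    suite.addTest(AddTests("test_add_positive"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

Note the design choice: the suite is rebuilt by a function rather than stored globally, because a unittest.TestSuite discards its tests as it runs them, so a fresh suite is needed for each run.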

Test Oracle (checks whether the software executed correctly for a test case)

A test oracle is a mechanism, different from the program itself, that can be used to check the accuracy of a program's output for test cases. Test oracles act like judges, pronouncing on the correctness of a system's output and determining whether the output produced by the system under test is correct.

A Test Oracle is a mechanism or principle that helps determine the expected


outcome of a test case. In simpler terms, it answers the question, “What
should the correct result be?” Test Oracles serve as a benchmark to evaluate
whether a software system behaves as expected or not.

Types of Test Oracles


1. Explicit Test Oracles: (explicit: something stated directly and easy to understand)
These are well-defined, concrete expectations explicitly documented in
specifications, requirements, or user stories. For example, a requirement stating,
“The login page should display an error message for invalid credentials,” serves
as an explicit oracle.

2. Implicit Test Oracles: (implicit: not expressed in a direct way but understood by the people involved)

In cases where explicit oracles are absent, testers rely on their intuition,
experience, or industry standards to determine the expected outcome. Implicit
oracles are subjective and may vary from one tester to another.

3. Derived Test Oracles:

Testers derive or calculate expected outcomes based on the system’s logic,


mathematical calculations, or algorithms. For instance, calculating the sum of
values in a spreadsheet is a derived oracle.
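A derived oracle like the spreadsheet-sum case can be sketched in a few lines of Python: the expected result is computed by an independent, obviously-correct calculation rather than by the implementation under test. Both functions here are illustrative assumptions:

```python
# A minimal sketch of a derived test oracle: the expected outcome is
# calculated independently of the program under test.

def sum_spreadsheet_column(values):
    """Implementation under test (imagine a faster, more complex version)."""
    total = 0.0
    for v in values:
        total += v
    return total

def oracle_sum(values):
    """Derived oracle: an independent, obviously-correct calculation."""
    return sum(values)

test_inputs = [[1, 2, 3], [], [0.5, 0.25], [-7, 7]]
for data in test_inputs:
    actual = sum_spreadsheet_column(data)
    expected = oracle_sum(data)   # the oracle answers "what should it be?"
    assert actual == expected, f"Oracle disagrees for {data}"
print("all oracle checks passed")
```

The same shape covers explicit oracles (expected values copied from a specification) and implicit ones (assertions encoding the tester's judgment); only the source of the expected value changes.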

Impracticality of testing all data and paths:

Path Testing
Path Testing is a method used to design test cases. In this method, the control flow graph of a program is drawn to find a set of linearly independent paths of execution. Cyclomatic complexity (a metric that gives the number of linearly independent paths through a program, and so indicates its stability and the level of confidence needed to test it) is used to determine the number of independent paths, and then test cases are generated for each path.

Path Testing Process


 Control Flow Graph:
Draw the corresponding control flow graph of the program in which all the
executable paths are to be discovered.
 Cyclomatic Complexity:
After the generation of the control flow graph, calculate the cyclomatic
complexity of the program
 Make Set:
Make a set of all the paths according to the control flow graph and calculate
cyclomatic complexity. The cardinality of the set is equal to the calculated
cyclomatic complexity.
 Create Test Cases:
Create a test case for each path of the set obtained in the above step.
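The steps above can be sketched for a small example program using the standard formula V(G) = E - N + 2P (edges, nodes, connected components of the control flow graph). The function, its graph, and the node names are illustrative assumptions:

```python
# A minimal sketch of the path testing process: build a control flow
# graph, compute cyclomatic complexity, and pick one input per
# independent path. The program and graph are illustrative.

def classify(x):            # program under test
    if x < 0:               # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:          # decision 2
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# Step 1: control flow graph of classify() as nodes and directed edges.
nodes = ["start", "d1", "neg", "nonneg", "d2", "even", "odd", "end"]
edges = [("start", "d1"), ("d1", "neg"), ("d1", "nonneg"),
         ("neg", "d2"), ("nonneg", "d2"),
         ("d2", "even"), ("d2", "odd"),
         ("even", "end"), ("odd", "end")]

# Step 2: cyclomatic complexity V(G) = E - N + 2P.
P = 1                                  # one connected component
V = len(edges) - len(nodes) + 2 * P    # 9 - 8 + 2 = 3
print("V(G) =", V)

# Steps 3-4: the basis set has V independent paths, so one test case
# per path; these inputs exercise neg/even, nonneg/even, nonneg/odd.
for x in (-2, 2, 3):
    print(x, classify(x))
```

Two if-statements give V(G) = 3, matching the rule of thumb that each decision adds one independent path; the cardinality of the basis set equals the computed complexity, as the steps above require.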

Software Requirement Specification (SRS) Document


The SRS document is reviewed by the testing person or a group of
persons by using any verification method (like peer reviews,
walkthroughs, inspections, etc.)

Elements of an SRS document checklist

A Software Requirement Specification (SRS) document is a vital component of software development. It outlines the functional and non-functional requirements of the software and serves as a reference for all stakeholders involved in the project.

 Purpose and Scope: The purpose and scope section of an SRS


document should provide a high-level overview of the software, its
intended audience, and the problem it solves.
 Functional Requirements: Functional requirements (like
software’s features, user interface, and data processing) describe
what the software should do. These requirements should be
specific, measurable, and testable.
 Non-functional Requirements: Non-functional requirements (like
software’s performance, security, reliability, and usability) describe
how the software should perform. These requirements should be
measurable and testable.

 System Architecture: The system architecture section should


describe the high-level design of the software. This section should
include details about the software’s components, interfaces, and
data flow.
 Data Management: The data management section should describe
how the software will handle data. This section should include
details about the software’s database, data storage, and data
backup and recovery processes.
 User Documentation: The user documentation section should
describe how users will interact with the software.
 Testing Requirements
The testing requirements section should describe how the software
will be tested.
 Acceptance Criteria: The acceptance criteria section should
describe how the software will be accepted by the stakeholders.
 Project Timeline: The project timeline section should provide a
timeline for the software’s development.
 Stakeholder List: The stakeholder list section should identify all the
stakeholders involved in the project.

Code Review
Code review is a methodical process in which a group of developers works together to analyze and check another developer's code to detect errors, give suggestions, and confirm that the developed code meets the standards. The objective of code review is to enhance the quality, maintainability, stability, and security of the software, which brings positive results to the project.

Types of Code Review


The types of code reviews are listed below −
Pull Requests
The developers raise a Pull Request to incorporate changes to the code. It should
be reviewed prior to the changes being merged with the base code.
Pair-Programming
In this approach, two developers work on the same computer. One of them writes the code and the other reviews it in real time. It is a highly interactive form of code review.
Over the Shoulder Review
Here, one developer in the team is requested to review the code of another developer by sitting together and going through the code on the computer.
Tool Aided Reviews
This type of review is conducted with tools like GitHub, GitLab, BitBucket, Crucible, etc.
Email Based Reviews
In this type of review, the code changes are sent over email for both review and feedback.
Checklist Reviews
Here, the reviewers follow a list of checklist items for the review process.
Ad Hoc Review
This is an informal way of reviewing. A developer may be requested to have a quick look at the code and provide feedback informally.
Formal Inspection
Here, an already established process is followed. It is mostly done by an inspection team and is guided by proper documentation.

Testing Documentation
Testing documentation comprises the artifacts that are created during or before the testing of a software application. Documentation reflects the importance of processes for the customer, the individual, and the organization.

A test document is a necessary reference document prepared by every test engineer before starting the test execution process. Test engineers write the test documents while the developers are busy writing the code.

Once the test document is ready, the entire test execution process depends on it. The primary objective of writing a test document is to reduce or eliminate doubts related to the testing activities.

Types of test document

In software testing, we have various types of test document, which are as


follows:
 Test scenarios:

It is a document that defines the multiple ways or combinations of


testing the application. It is prepared to understand the flow of an
application, and it does not consist of any inputs and navigation
steps.

 Test case:
It is a step by step procedure to test an application. It consists of
the complete navigation steps and inputs and all the scenarios that
need to be tested for the application.

 Test plan:
The test plan consists of multiple components such as Objectives,
Scope, Approach, Test Environments, Test methodology,
Template, Role & Responsibility, Effort estimation, Entry and
Exit criteria, Schedule, Tools, Defect tracking, Test
Deliverable, Assumption, Risk, and Mitigation Plan or
Contingency Plan.
 Requirement traceability matrix(RTM)
The requirement traceability matrix (RTM) is a document which ensures that every requirement is covered by test cases. This document is created before the test execution process to verify that no test case was missed for any particular requirement.

 Test strategy
Test strategy is a high-level document used to define the test types (levels) to be executed for the product, what kinds of techniques have to be used, and which modules are going to be tested. It includes multiple components such as documentation formats, objectives, test processes, scope, customer communication strategy, etc.

 Test data
Test data is the data prepared before the test is executed. It is mainly used when testers are implementing the test cases. Mostly, testers keep the test data in an Excel sheet and enter it manually while performing the test case.

 Bug report
Bug report is a document where we maintain a summary of all the
bugs which occurred during the testing process.

 Test execution report


It is the document prepared by test leads after the entire test execution process is completed. The test summary report describes the stability of the product and contains information such as the modules, the number of test cases written, executed, passed, and failed, and their percentages. Each module has a separate spreadsheet for its results.
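The requirement traceability matrix described above can be sketched as a simple mapping from requirements to the test cases that cover them; the requirement and test case IDs here are illustrative assumptions:

```python
# A minimal sketch of a requirement traceability matrix (RTM):
# each requirement maps to the test cases that cover it.
# All IDs are illustrative.

rtm = {
    "REQ-001 (login)":          ["TC_01", "TC_02"],
    "REQ-002 (password reset)": ["TC_03"],
    "REQ-003 (logout)":         [],   # not yet covered
}

# The RTM's purpose: verify before test execution that no
# requirement is missing a test case.
uncovered = [req for req, cases in rtm.items() if not cases]
if uncovered:
    print("Requirements without test cases:", uncovered)
```

Running the check before execution surfaces REQ-003 as uncovered, which is exactly the gap an RTM is created to catch.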

Software Audit
Software audits are similar to routine checkups to see if there are any problems
in the software and whether they are safe for their users.
“A software audit is the examination of software performed either
internally or by a third party to assess its compliance with policies and
licenses, software quality, compliance with industry standards, legal
requirements, and others.”

Software Audits can include many different types of activities. Most often, these
are specified types of audits tailored to the client’s current needs.

Tailoring Software Quality Assurance Program by Reviews

A software quality assurance plan's main goal is to guarantee that the product or service reaching the market is trouble- and bug-free. Additionally, it must fulfill the specifications listed in the SRS.

SQA plan serves three purposes. It includes the following:

 Determining the QA duties assigned to the concerned team.


 A list of the areas that require review, audit, and examination.
 Determines the work products for SQA.
For a software product or service, an SQA plan will be used in
conjunction with the typical development, prototyping, design, production,
and release cycle. An SQA plan will include several components, such as
purpose, references, configuration and management, tools, code
controls, testing methodology, problem reporting and remedial measures,
and more, for easy documentation and referencing.

Importance of Software Quality Assurance Plan


 Quality Standards and Guidelines: The SQA Plan lays out the
requirements and guidelines to make sure the programme satisfies
predetermined standards for quality.
 Risk management: It is the process of recognizing,
evaluating and controlling risks in order to reduce the possibility of
errors and other problems with quality.
 Standardization and Consistency: The strategy guarantees
consistent methods, processes, and procedures, fostering a
unified and well-structured approach to quality assurance.
 Customer Satisfaction: The SQA Plan helps to ensure that the
finished product satisfies customer needs, which in turn increases
overall customer satisfaction.
 Resource optimization: It is the process of defining roles,
responsibilities, and procedures in order to maximize resource
utilization and minimize needless rework.
 Early Issue Detection: SQA Plans help identify problems early
on, which lowers the expense and work involved in fixing them.
Objectives And Goals of Software Quality Assurance Plan:
The objectives and goals of a Quality Assurance Plan (QAP) are to
ensure that the products or services meet specified quality standards
and requirements. The plan serves as a roadmap for implementing
quality assurance processes throughout a project or organizational
activity. The specific objectives and goals can vary depending on the
nature of the project or industry, but common elements include:
Compliance with Standards and Regulations:
 Objective: Ensure that the project or product complies with
relevant industry standards, regulatory requirements, and any
other applicable guidelines.
 Goal: Achieve and maintain adherence to established quality
standards to meet legal and regulatory obligations.
Customer Satisfaction:
 Objective: Enhance customer satisfaction by delivering products
or services that meet or exceed customer expectations.
 Goal: Identify and prioritize customer requirements, and
incorporate them into the quality assurance processes to create a
positive customer experience.
Defect Prevention:
 Objective: Implement measures to prevent defects, errors, or
issues in the early stages of the project lifecycle.
 Goal: Identify potential sources of defects, analyze root causes,
and take proactive steps to eliminate or minimize the occurrence of
defects.
Consistency and Reliability:
 Objective: Establish a consistent and reliable process for the
development or delivery of products and services.
 Goal: Ensure that the quality of deliverables is consistent over
time and across different phases of the project, promoting
reliability and predictability.
Process Improvement:
 Objective: Continuously improve processes to enhance efficiency,
effectiveness, and overall quality.
 Goal: Implement feedback mechanisms, conduct regular process
assessments, and identify opportunities for improvement to
optimize the quality assurance process.
Risk Management:
 Objective: Identify and manage risks that could impact the quality
of the project or product.
 Goal: Develop strategies to assess, mitigate, and monitor risks
that may affect the achievement of quality objectives.
Clear Roles and Responsibilities:
 Objective: Clearly define roles and responsibilities related to
quality assurance activities.
 Goal: Ensure that team members understand their roles in
maintaining and improving quality, fostering accountability and
collaboration.
Documentation and Traceability:
 Objective: Establish a robust documentation process to track and
trace quality-related activities and decisions.
 Goal: Create comprehensive records that enable transparency,
accountability, and the ability to trace the development or
production process.
Training and Competence:
 Objective: Ensure that team members are adequately trained and
possess the necessary competencies to perform quality assurance
tasks.
 Goal: Provide ongoing training to enhance the skills and
knowledge of individuals involved in quality assurance.
Continuous Monitoring and Reporting:
 Objective: Monitor quality metrics and report on the status of
quality assurance activities.
 Goal: Implement regular monitoring and reporting mechanisms to
track progress, identify issues, and make data-driven decisions to
maintain or improve quality.

Why is Audit Required in Software Testing?


Auditing in software testing is essential because it helps organisations understand whether the process is being followed and monitored properly. It allows testers to find defects and errors in the system and ensures that product performance is of standard quality. Other benefits of auditing in software testing are mentioned below:

 It ensures that consistency is sustained and the procedures are followed faithfully.
 It helps resolve development-related problems.
 An audit assists testers in locating the exact origin and cause of a problem.
 It helps detect or prevent fraud.
 It is used to confirm compliance with standards (e.g. ISO, CMM).
 With an audit, one can improve testing methods.
 It helps in avoiding errors and bugs in the product.
Inspection
In general the term Inspection means the process of evaluating or
examining things. The output of inspection is compared with the set
standard or specified requirement to check whether an item that is being
developed is as per the requirement or not. It is the non-destructive type of
testing and it doesn’t harm the product under evaluation. Inspection is a
formal review type that is led by trained and expert moderators.

Goals of Inspection in Software Testing:

Inspection is a type of appraisal activity that is frequently used in software projects. The objective of inspection is to enable the reviewers to reach agreement on a work product and approve it for use in the development of the software application. Commonly inspected work products include software requirements definitions and test designs. Other goals of inspection are:

1. It helps the author to improve the quality of the document under


inspection.
2. Removes defects efficiently, as soon as possible.
3. Helps in improving product quality.
4. Enables common understanding by exchanging information.
5. One can learn from defects found and prevent the occurrence of
similar defects.

Walkthrough
A walkthrough is a review meeting process, but it is different from an inspection in that it does not involve any formal process, i.e. it is a non-formal process.
The code or document is read by the author, and others present in the meeting can note down the important points, write notes on the defects, and give suggestions about them. The walkthrough is an informal way of testing; no formal authority is involved.
Advantages and Objectives of Walkthrough:
Following are some of the objectives of the walkthrough.
 To detect defects in developed software products.
 To fully understand and learn the development of software products.
 To properly explain and discuss the information present in the
document.
 To verify the validity of the proposed system.
 To give suggestions and report them appropriately with new
solutions and ideas.
 To provide an early “proof of concept”.
UNIT II

Functional Testing
Functional testing is a type of software testing that verifies the functionality
of a software system or application by checking that ensuring that the
system behaves according to the specified functional requirements and
meets the intended business needs.
It verifies that each function of the software application
works in conformance with the requirements and specification. Each
functionality of the software application is tested by providing appropriate
test input, determining the expected output, and comparing the actual output
with the expected output. This testing focuses on checking the user
interface, APIs, database, security, client/server communication, and the
functionality of the Application Under Test. Functional testing can be
manual or automated.

Types of Functional Testing

 Unit Testing: Developers write scripts that test whether individual
components/units of an application match the requirements, for
example tests that call the methods in each unit and validate that
they return values matching the requirements.
In unit testing, code coverage is mandatory. Ensure that test cases
exist to cover line coverage, code path coverage, and method
coverage.
 Smoke Testing: Ensures that the stability of the software build is
intact and that it is not facing any anomalies.
 Sanity Testing: Performed after smoke testing to verify that every
major functionality of an application is working perfectly, both by
itself and in combination with other elements.
 Regression Testing: It ensures that changes to the codebase
(new code, debugging strategies, etc.) do not disrupt the already
existing functions or trigger some instability.
 Integration Testing: If a system requires multiple functional
modules to work effectively, integration testing is done to ensure
that individual modules work as expected when operating in
combination with each other. It validates that the end-to-end
outcome of the system meets these necessary standards.
 Beta/ Usability Testing: In this stage, actual customers test the
product in a production environment. This stage is necessary to
gauge how comfortable a customer is with the interface. Their
feedback is taken for implementing further improvements to the
code.
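As a sketch of the unit-testing idea in the list above, the following hypothetical example tests a small function against its requirement using Python's `unittest` module. The `apply_discount` function and its rule are invented purely for illustration:

```python
import unittest

# Hypothetical unit under test (invented for illustration): a discount rule.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Requirement: a 10% discount on 200.0 yields 180.0
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Such tests are typically run with `python -m unittest`, and each test method validates one aspect of the unit's required behavior.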
Boundary Value Analysis
It is a part of black box testing and is used to identify defects and errors in software
by testing input values on the boundaries of the allowable ranges. It also helps to
find issues that may arise due to incorrect assumptions about the system's
behavior.

Every partition has its maximum and minimum values, and these maximum and
minimum values are the boundary values of the partition.
In simple terms, boundary value analysis is like testing the edge cases of our software,
where it is most likely to break, so it is important to perform BVA before deploying
the code.

 A boundary value for a valid partition is a valid boundary value.
 A boundary value for an invalid partition is an invalid boundary value.
 For each variable we check:
o Minimum value.
o Just above the minimum.
o Nominal value.
o Just below the maximum.
o Maximum value.

Example: Consider a system that accepts ages from 8 to 19.

Boundary Value Analysis (age accepts 8 to 19):

Invalid (min - 1): 7
Valid (min, min + 1, nominal, max - 1, max): 8, 9, 13, 18, 19
Invalid (max + 1): 20
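The age example above can be expressed as a small boundary value test. The `is_valid_age` validator is a hypothetical stand-in for the system under test:

```python
# Hypothetical validator for the example above: ages 8 to 19 inclusive.
def is_valid_age(age):
    return 8 <= age <= 19

# Boundary value cases: min-1, min, min+1, nominal, max-1, max, max+1.
bva_cases = {
    7: False,   # just below minimum -> invalid
    8: True,    # minimum
    9: True,    # just above minimum
    13: True,   # nominal value
    18: True,   # just below maximum
    19: True,   # maximum
    20: False,  # just above maximum -> invalid
}

for age, expected in bva_cases.items():
    assert is_valid_age(age) == expected, f"BVA case failed for age {age}"
```

If the validator had an off-by-one error (say, `age < 19` instead of `age <= 19`), the case for 19 would fail immediately.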
Equivalence Class Testing
Equivalence class testing assists the team in getting
accurate and expected results within a limited period of time
while covering large input scenarios, which is why it plays such a significant
role in the Software Testing Life Cycle (STLC).
Equivalence class testing can be termed a logical step in the model
of functional testing. It improves the quality of test cases, which
further enhances the quality of testing, by removing the vast amount of
redundancy and the gaps that appear in boundary value testing.
 It is a black box testing technique which restricts the testers to
examine the software product externally.
 Also known as equivalence class partitioning, it is used to
form groups (classes) of test inputs of similar behavior or nature.
 If one member of a class works well, the whole class is
considered to function well; if one member fails, the whole
class is considered to fail.
 Test cases are based on classes, not on every input, which
reduces the time and effort required to build a large number
of test cases.
 It may be used at any level of testing, i.e. unit, integration, system
and acceptance.
 It is a good choice when the input data is available in
terms of intervals and sets of discrete values.
 It may not work well with boolean or logical variables.
 A combination of equivalence class testing and boundary
value testing produces effective results.
 The fundamental concept of equivalence class testing comes
from the equivalence class, which in turn comes from
equivalence relations.
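A minimal sketch of equivalence class partitioning, reusing the valid age range 8-19 from the earlier boundary value example. The `is_valid_age` function and the chosen class representatives are illustrative assumptions:

```python
# Partition the input domain into equivalence classes; one representative
# per class stands in for every member of that class.
partitions = {
    "below_range": ([3],  False),   # any age < 8 behaves the same
    "in_range":    ([12], True),    # any age in 8..19 behaves the same
    "above_range": ([25], False),   # any age > 19 behaves the same
}

def is_valid_age(age):              # hypothetical system under test
    return 8 <= age <= 19

for name, (representatives, expected) in partitions.items():
    for age in representatives:
        assert is_valid_age(age) == expected, f"class {name} failed for {age}"
```

Three test cases cover the whole input domain here, instead of one test per possible age value.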
Decision table technique
This is a systematic approach where various input combinations and their
respective system behavior are captured in a tabular form. That's why it is
also known as a cause-effect table. This technique is used to pick the test
cases in a systematic manner; it saves the testing time and gives good
coverage to the testing area of the software application.

The decision table technique is appropriate for functions that have a logical relationship
between two or more inputs. This technique deals with the correct
combination of inputs and determines the result of various combinations of
input. To design test cases with the decision table technique, we need to
consider conditions as inputs and actions as outputs.

Number of possible combinations = 2 ^ Number of conditions
Number of possible combinations = 2^2 = 4

A decision table is created for the login function, in which we log in
using an email and a password. Both the email and the password are the
conditions, and the expected result is the action.
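The login decision table described above can be sketched in code. The `login` function and the action strings are hypothetical, chosen only to make the four rules executable:

```python
# Decision table for the login example: two conditions (email correct,
# password correct) give 2^2 = 4 rules; each rule maps to one action.
decision_table = [
    # (email_correct, password_correct, expected_action)
    (True,  True,  "home page"),
    (True,  False, "error message"),
    (False, True,  "error message"),
    (False, False, "error message"),
]

def login(email_correct, password_correct):   # hypothetical implementation
    return "home page" if email_correct and password_correct else "error message"

# Each row of the decision table becomes one test case.
for email_ok, pwd_ok, expected in decision_table:
    assert login(email_ok, pwd_ok) == expected
```

Each column (rule) of the table is thus executed as one systematic test case, giving full coverage of the input combinations.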
Cause Effect Graph
A cause-effect graph is a methodology which helps to generate a
high-yield group of test cases. This methodology came up to
address the limitations of equivalence partitioning and boundary
value analysis, where testing all the combinations of input
conditions is not feasible.

So whenever we need to verify critical scenarios consisting
of combinations of input criteria, the cause-effect graph is
used.

The graph obtained is converted into a decision table, which in
turn can be used to design the test cases.

Create Test Cases from a Cause Effect Graph

The steps to create test cases from a cause-effect graph are listed below −

Step 1 − Detect the causes and effects from the requirements and then
assign distinct numbers to them. A cause is a unique input condition because
of which the system undergoes some kind of change. An effect is an output
condition or state change in the system that is caused by an input
condition.

Step 2 − Create a boolean graph which connects all the causes and effects.
This is known as the cause effect graph which depicts for what all causes
different effects have been generated.

Step 3 − Point out the constraints on the cause effect graph, describing all
the combinations of causes and/or effects which are practically not possible.

All possible constraints are listed below –

Exclusive Constraints
These constraints are between two causes C1, and C2, such that either C1
or C2 can have the value as 1, both simultaneously can not hold the value 1.
Inclusive Constraints
These constraints are between the causes C1, C2, and C3, such that at least
one of them is always equal to 1; hence, all of them simultaneously cannot
hold the value 0.

One and Only One Constraint
These constraints are between the causes C1 and C2, such that one and
only one of C1 and C2 should be 1.
Requires Constraint
These constraints are between the causes C1, and C2, such that if C1 is
equal to 1, then C2 should also be 1. It is not possible for C1 to have the
value 1 with the C2 having the value as 0.

Mask Constraint
These constraints are between the effects E1, and E2, such that if E1 is
equal to 1, then E2 should be 0.

Step 4 − Convert the cause-effect graph into a limited entry decision table by
linking the state conditions in the cause-effect graph. In the decision table,
each column is converted into a test case.

Notations Used in the Cause Effect Graph


The notations used in cause effect graph are listed below −

Identity Function
It says that if the condition C1 and the effect E1 are related by an
Identity Function, then if C1 holds true (equal to 1), E1 is also
equal to 1; else E1 is equal to 0.
NOT Function
It says that if the condition C1 and the effect E1 are related by a
NOT Function, then if C1 holds true (equal to 1), E1 is equal to
0; else E1 is equal to 1.

OR Function
It is denoted by the symbol V. It can be used to relate the ‘n’ number of
conditions to a single effect. It says that if the conditions C1, or C2, or C3
hold true or equal to 1, then the event E1 is equal to 1, else E1 is equal to 0.
AND Function
It is denoted by the symbol /\. It can be used to relate the ‘n’ number of
conditions to a single effect. It says that if both the conditions C1, and C2
hold true or equal to 1, then the event E1 is equal to 1, else E1 is equal to 0.
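The notations above can be sketched as boolean functions over the causes. The causes, effects, and relations here are hypothetical, chosen only to show how enumerating cause combinations yields the decision table:

```python
from itertools import product

# Hypothetical causes and effects (invented for illustration):
#   C1 = email is correct, C2 = password is correct
#   E1 = login succeeds  (AND function: E1 = C1 /\ C2)
#   E2 = error is shown  (NOT function applied to E1)
def effects(c1, c2):
    e1 = c1 and c2
    e2 = not e1
    return e1, e2

# Enumerating every cause combination produces the limited entry decision
# table; each row (combination) becomes one test case.
rows = [(c1, c2, *effects(c1, c2)) for c1, c2 in product([True, False], repeat=2)]
for c1, c2, e1, e2 in rows:
    assert e1 == (c1 and c2)   # the AND function holds for every row
    assert e2 != e1            # E1 and E2 never fire together (mask-like)
```

Constraints such as Exclusive or Requires would simply filter out rows from this enumeration before they become test cases.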

Structural Testing
Structural testing is related to the internal design and implementation of
the software, i.e. it involves the development team members in the testing
effort. It tests different aspects of the software according to its type.
Structural testing is just the opposite of behavioral testing.

Types of Structural Testing

There are 4 types of structural testing:
Control Flow Testing:
It is a type of structural testing that uses the program's control flow as a model. The entire
code, design, and structure of the software have to be known for this type of testing. It
is used by developers to test their own code and implementation.

Data Flow Testing:

It uses the control flow graph to explore the unreasonable things that can happen to
data. The detection of data flow anomalies is based on the associations between
values and variables. Typical anomalies include variables that are used without
being initialized and variables that are initialized but never used.

Slice Based Testing:

It is useful for software debugging, software maintenance, program understanding,
and quantification of functional cohesion. It divides the program into different slices
and tests the slice that can majorly affect the entire software.

Mutation Testing:
It is a type of software testing that is performed to design new tests and
also to evaluate the quality of already existing tests. Mutation testing involves
modifying a program in small ways. It helps the tester develop effective
tests and locate weaknesses in the test data used for the program.
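A minimal mutation testing sketch: a "mutant" version of a function changes one operator, and a good test suite should "kill" the mutant, i.e. fail on it. The function, the mutant, and the test data are invented for illustration:

```python
def is_adult(age):            # original program
    return age >= 18

def is_adult_mutant(age):     # mutant: boundary operator changed (>= to >)
    return age > 18

# Test data: (input, expected result). Note the boundary case 18.
tests = [(17, False), (18, True), (19, True)]

def suite_kills(candidate):
    """Return True if at least one test case fails against the candidate."""
    return any(candidate(age) != expected for age, expected in tests)

assert not suite_kills(is_adult)      # the suite passes on the original
assert suite_kills(is_adult_mutant)   # the suite detects (kills) the mutant
```

If the test data had lacked the boundary case 18, the mutant would survive, revealing a weakness in the test suite.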
UNIT III

Regression Testing
Regression testing is a software quality check performed after any changes
are made. It involves running tests to make sure that everything still
works as it should, even after updates or tweaks to the code.
It ensures that the software remains reliable and functions properly,
maintaining its integrity throughout its development lifecycle.
Regression testing is a type of QA software testing that ensures
changes or updates to an existing software product do not affect
previously functioning features.

This type of testing may be necessary following various changes,
including:

 Bug fixes
 Software enhancements
 Configuration adjustments, and
 Even the substitution of electronic components (hardware).
In other words, regression testing checks whether new changes cause
any issues or "regressions" in the previously stable code, making
sure the old features are not broken by the new ones.

Benefits of Regression Testing


1. Prevents Domino Effect on Key Functions:
Even minor modifications in the code can cause
significant issues in the product’s key functionalities.
Regression testing helps detect these issues early,
preventing extensive efforts to reverse the damage.
2. Aligns with Agile Methodology:
Regression testing supports Agile best practices by
continuously iterating, integrating, and testing new
code. This leads to frequent releases and faster
feedback loops, avoiding a build-up of broken code
as the production date nears.
3. Supports CI/CD Pipelines:
CI/CD relies on automated tests, including regression
tests, to ensure new code integrations are
continuously tested. This not only helps find defects
but also identifies optimization opportunities, such as
UX improvements.
Challenges of Regression Testing
Time-Consuming: Every time you add or tweak a feature,
you need to re-test everything. This can turn a simple update
into a marathon of tests, especially if your project is large
and complex. Without automation, it feels like you are
constantly playing catch-up (alternatively, you can prioritize
testing a certain part of the complete test suite).
Test Suite Maintenance: As your application grows, so does
your regression test suite. Keeping those test cases up to
date is like organizing a closet that never stops getting
messier. You need to add new tests, remove irrelevant ones,
and update old ones; it is a balancing act that requires
constant attention.
Risk of Overlooking Bugs: When the pressure is on to release
quickly, it is easy to overlook smaller parts of the system.
Maybe a feature that worked flawlessly in previous releases
suddenly breaks, and no one notices. The more complex
your app becomes, the higher the chance of missing hidden
bugs lurking in the shadows.
The need for regression testing comes when software maintenance
includes enhancements, error corrections, optimization, and deletion of
existing features. These modifications may affect system functionality.
Regression Testing becomes necessary in this case.

Regression testing can be performed using the following techniques:



1. Re-test All:

Re-test all is one approach to regression testing. In this approach, all the
test suites are re-executed. We can define a re-test as follows: when a test
fails and we determine that the cause of the failure is a software fault,
the fault is reported, and we can expect a new version of the software in
which the defect is fixed. In this case, we will need to execute the test
again to confirm that the fault is fixed. This is known as re-testing. Some
refer to this as confirmation testing.

Re-testing everything is very expensive, as it requires enormous time and resources.

2. Regression Test Selection:

In this technique, a selected subset of the test suite is executed rather than
the entire test suite.

The selected test cases are divided into two groups:

 Reusable test cases.
 Obsolete test cases.
Reusable test cases can be used in succeeding regression cycles.
Obsolete test cases cannot be used in succeeding regression cycles.

3. Prioritization of Test Cases:

Prioritize the test cases depending on business impact, criticality, and how
frequently the functionality is used. Selecting test cases this way reduces the
size of the regression test suite.
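Prioritization can be sketched as a simple scoring scheme over the test cases. The test names, impact values, failure rates, and the scoring formula here are all illustrative assumptions:

```python
# Each regression test case gets a business-impact score (1-5) and an
# observed failure rate; higher-scoring cases run first.
test_cases = [
    {"name": "export_report", "impact": 2, "fail_rate": 0.1},
    {"name": "login",         "impact": 5, "fail_rate": 0.3},
    {"name": "checkout",      "impact": 5, "fail_rate": 0.6},
]

def priority(tc):
    # Weight impact by how often the case has historically failed.
    return tc["impact"] * (1 + tc["fail_rate"])

ordered = sorted(test_cases, key=priority, reverse=True)
# High-impact, failure-prone cases come first in the regression run.
assert [tc["name"] for tc in ordered] == ["checkout", "login", "export_report"]
```

Running only the top-scoring cases when time is short is one way to shrink the regression suite without ignoring the riskiest functionality.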

Types of Regression Testing

The different types of regression testing are as follows:

1. Unit Regression Testing [URT]
2. Regional Regression Testing [RRT]
3. Full or Complete Regression Testing [FRT]
1) Unit Regression Testing [URT]
In this, we test only the changed unit and not the impact area, even though
the change may affect other components of the same module.

Example:

In the first build of the application, the developer develops a
Search button that accepts 1-15 characters. The test engineer tests the
Search button with the help of test case design techniques.

Now, the client makes a modification to the requirement and requests that
the Search button accept 1-35 characters. The test engineer will test only
the Search button to verify that it takes 1-35 characters and will not check any
further features of the first build.
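The Search button scenario above can be sketched as a unit regression test: only the changed rule is re-tested after the requirement changes from 1-15 to 1-35 characters. The `is_valid_search` function is a hypothetical stand-in for the real validation:

```python
# Hypothetical validation rule for the changed unit; after the requirement
# change, the accepted length is 1-35 characters (was 1-15 in build one).
def is_valid_search(term):
    return 1 <= len(term) <= 35

# Unit regression: re-test only the changed rule, at its new boundaries.
assert not is_valid_search("")          # 0 characters -> invalid
assert is_valid_search("a")             # minimum length
assert is_valid_search("a" * 35)        # new maximum is accepted
assert not is_valid_search("a" * 36)    # just above new maximum -> invalid
```

No other feature of the build is exercised here, which is exactly what distinguishes unit regression testing from the regional and full variants below.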

2) Regional Regression Testing [RRT]

In this, we test the modification along with the impact area or
regions; this is called Regional Regression testing. We test
the impact area because if there are dependent modules, the change may
affect the other modules as well.
For example:
Suppose we have four different modules, Module A, Module B, Module C, and
Module D, which are provided by the developers for testing during the first
build. The test engineer identifies bugs in Module D. The bug report is sent
to the developers, and the development team fixes those defects and sends the
second build.
In the second build, the previous defects are fixed. Now the test engineer
understands that the bug fixing in Module D has impacted some features in
Module A and Module C. Hence, the test engineer first tests Module D, where
the bug has been fixed, and then checks the impact areas in Module A and
Module C. Therefore, this testing is known as Regional Regression testing.

3) Full Regression Testing [FRT]

Suppose that during the second and third releases of the product, the client
asks for 3-4 new features to be added, and some defects from the previous
release also need to be fixed. The testing team then does an impact analysis
and identifies that this modification requires testing of the entire product.

Therefore, testing the modified features along with all the remaining
(old) features is called Full Regression testing.
