
Unit IV : Software Analysis and Testing

Software Static and Dynamic Analysis, Code Inspections, Software Testing Fundamentals, Software Test Process, Testing Levels, Test Criteria, Test Case Design, Test Oracles, Test Techniques, Black-Box Testing, White-Box Unit Testing and Unit Testing Frameworks, Integration Testing, System Testing and other Specialized Testing, Test Plan, Test Metrics, Testing Tools. Introduction to Object-oriented Analysis and Design and comparison with structured Software Engineering.

What is Software Testing?


Software testing involves executing a program to identify any error or bug in
the software product’s code.
1. This process takes into account all aspects of the software, including its
reliability, scalability, portability, reusability, and usability.
2. The main goal of software testing is to ensure that the system and its
components meet the specified requirements and work accurately in every
case.
Manual Testing
Manual testing is the process of verifying an application's functionality against client requirements without using any automation tools. It does not require in-depth knowledge of any testing tool; instead, the tester needs a thorough grasp of the product in order to prepare the test documents quickly.
Manual testing is further divided into three types of testing :
• White box testing
• Black box testing
• Grey box testing

Automation Testing

In automation testing, manual test cases are converted into test scripts with the help of automation tools or a programming language. Automated testing increases the pace of test execution because no human effort is needed once the scripts exist; however, the test scripts must first be created and then run and maintained.
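As a minimal, hypothetical sketch of this idea, the manual test step "enter 2 and 3, expect 5" can be turned into an automated script that a runner such as pytest executes; the add() function here is only a stand-in for whatever feature a real manual test case would exercise.

# Hypothetical unit under test; in practice this would be the feature
# the manual test case exercises.
def add(a: int, b: int) -> int:
    return a + b

# The manual test case "enter 2 and 3, expect 5" expressed as a script.
def test_add_returns_expected_sum():
    assert add(2, 3) == 5

# Running "pytest" from the command line discovers and executes this test.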
Software Testing Principles
Software testing is putting software or an application to use in order to find errors or faults. We must follow certain guidelines while testing software or applications to ensure that the final product is free of flaws; this saves the test engineers' time and effort, as they can test the program more efficiently. This section covers the seven fundamental principles of software testing.
Let us see the Seven different testing principles, one by one:
1. Testing shows the presence of defects
2. Exhaustive Testing is not possible
3. Early Testing
4. Defect Clustering
5. Pesticide Paradox
6. Testing is context-dependent
7. Absence of errors fallacy

Types of Software Testing


Testing is the process of executing a program to find errors. For software to perform well, it should be as error-free as possible, and successful testing removes errors from the software. This section first discusses the principles of testing and then the different types of testing.

Principles of Testing
• All tests should meet the customer's requirements.
• To make testing effective, it should be performed by a third party.
• Exhaustive testing is not possible; instead, we need an optimal amount of testing based on the risk assessment of the application.
• All tests to be conducted should be planned before they are implemented.
• Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20% of program components.
• Start testing with small parts and extend it to larger parts.
Software Static and Dynamic analysis

Static analysis and dynamic analysis act as a two-pronged approach to improving the development
process in terms of reliability, bug detection, efficiency, and security. But how do they differ, and
why is each important?

Finding and fixing bugs early in development pays off in many ways. It can
reduce development time, cut costs, and prevent data breaches or other
security vulnerabilities. In particular with DevOps, incorporating testing into the
SDLC early and continuously can be extremely helpful.

This is where both dynamic and static analysis testing come in. They each
serve different purposes within the SDLC while also delivering unique and
almost immediate ROIs for any development team.

Static vs. Dynamic Analysis: Understanding the Differences
Static code analysis is a broad term used to describe several different types of
analyses. However, all of these feature a common trait: they do not require
code execution to operate.
In contrast, dynamic analysis does require code execution. Though there are
other differences, this characteristic is what drastically separates the two types
of testing approaches.

This also means that each approach offers different benefits at different
stages of the development process. In order to understand these differences,
let’s review the following.

• What each strategy requires.
• Testing types under the umbrella terms.
• Tools that assist the process.

What Is Static Analysis?


Static code analysis testing includes various types with the main two being
pattern-based and flow-based.

Pattern-based static analysis looks for code patterns that violate defined
coding rules. In addition to ensuring that code meets uniform expectations for
regulatory compliance or internal initiatives, it helps teams prevent defects
such as resource leaks, performance and security issues, logical errors, and
API misuse.

Flow-based static analysis involves finding and analyzing the various paths
that can be taken through the code. This can happen by control (the order in
which lines can be executed) and by data (the sequences in which a variable
or similar entity can be created, changed, used, and destroyed). These
processes can expose problems that lead to critical defects such as:

• Memory corruptions (buffer overwrites)
• Memory access violations
• Null pointer dereferences
• Race conditions
• Deadlocks

It can also detect security issues by pointing out paths that bypass security-
critical code such as code for authentication or encryption.
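As an illustration only (not taken from these notes), here is the kind of path a flow-based analyzer can flag in Python code: one execution path returns None, and a later statement dereferences that value, which is the Python analogue of a null pointer dereference. A type-aware checker such as mypy reports this without running the code.

from typing import Optional

def find_user(users: dict, name: str) -> Optional[str]:
    # One path returns a string, another returns None.
    return users.get(name)

def greet(users: dict, name: str) -> str:
    user = find_user(users, name)
    # Defect: on the path where find_user() returned None, calling
    # .upper() fails at runtime. Flow-based static analysis follows
    # both paths and flags this dereference before execution.
    return "Hello, " + user.upper()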

Additionally, metrics analysis involves measuring and visualizing various aspects of the code. It can help detect existing defects, but more often it warns of potential difficulty in preventing and detecting future defects as the code is maintained. This is done by finding complexity and unwieldiness such as:

• Overly large components
• Excessive nesting of loops
• Too-lengthy series of decisions
• Convoluted intercomponent dependencies

What Is Dynamic Analysis?

Sometimes referred to as runtime error detection, dynamic analysis is where distinctions among testing types start to blur. For embedded systems, dynamic analysis examines the internal workings and structure of an application rather than external behavior. Therefore, code execution is performed by way of white box testing.

Dynamic analysis testing detects and reports internal failures the instant they
occur. This makes it easier for the tester to precisely correlate these failures
with test actions for incident reporting.

Expanding into the external behavior of the application with emphasis on security, dynamic application security testing (DAST) is analytical testing with the intent to examine the test item rather than exercise it. Yet the code under test must be executed.

DAST also extends the capability of empirical testing at all levels—from unit to
acceptance. It does this by making it possible to detect internal failures that
point to otherwise unobservable external failures that occur or will occur after
testing has stopped.

Pros & Cons of Static Analysis

As with all avenues toward DevSecOps perfection, there are pros and cons
with static analysis testing.

PROS

• Evaluates source code without executing it.
• Analyzes the code in its entirety for vulnerabilities and bugs.
• Follows customized, defined rules.
• Enhances developer accountability.
• Capable of automation.
• Highlights bugs early and reduces the time it takes to fix them.
• Reduces project costs.

CONS

• Can return false positives and false negatives that might distract developers.
• Can take a long time to operate manually.
• Can’t locate bugs or vulnerabilities that come about in runtime environments.
• Deciding which industry coding standards to apply can be confusing.
• May be challenging to determine if deviating from a rule violation is
appropriate.

While the list of cons might look intimidating, the holes of static analysis can
be patched with two things.

1. Automating static analysis.
2. Using dynamic techniques.
Testing is the most important stage in the Software Development Lifecycle
(SDLC). It helps to deliver high-quality products to the end-user and also
provides an opportunity for the developer to improve the product. Testing
is of many types and is chosen based on the product that is being
developed. Static Testing and Dynamic Testing are the two testing
techniques that will be discussed in this article.
Static Testing
Static Testing also known as Verification testing or Non-execution testing
is a type of Software Testing method that is performed to check the
defects in software without actually executing the code of the software
application.
1. Static testing is performed in the early stage of development to avoid errors, as sources of failure are easier to find and fix at that stage.
2. Static testing is performed in the white box testing phase of software development, where the programmer checks every line of the code before handing it over to the test engineer.
3. Errors that cannot be found using dynamic testing can be easily found by static testing.
4. It involves assessing the program code and documentation.
5. It involves manual and automatic assessment of the software
documents.
Documents that are assessed in Static Testing are:
1. Test Cases
2. Test Scripts.
3. Requirement Specification.
4. Test Plans.
5. Design Document.
6. Source Code.
Static Testing Techniques
Below are some of the static testing techniques:
1. Informal Reviews: In an informal review, the documents are presented to every team member, who reviews them and gives informal comments. No specific process is followed in this technique to find the errors in the document, but it helps detect defects in the early stages.
2. Walkthroughs: A skilled person or the author of the product walks the team through the product, and a scribe makes a note of the review comments.
3. Technical Reviews: Technical specifications of the software product are reviewed by a team of peers to check whether the specifications are correct for the project. They try to find discrepancies
in the specifications and standards. Technical specifications
documents like Test Plan, Test Strategy, and requirements
specification documents are considered in technical reviews.
4. Code Reviews: Code reviews also known as Static code reviews are a
systematic review of the source code of the project without executing
the code. It checks the syntax of the code, coding standards, code
optimization, etc.
5. Inspection: Inspection is a formal review process that follows a strict
procedure to find defects. Reviewers have a checklist to review the
work products. They record the defects and inform the participants to
rectify the errors.
Benefits of Static Testing
Below are some of the benefits of static testing:
1. Early detection of defects: Static testing helps in the early detection of defects. By reviewing the documents and artifacts before execution, issues can be detected and resolved at an early stage, saving time and effort later in the development process.
2. Cost-effective: Static testing is more cost-effective than dynamic
testing techniques. Defects found during static testing are much
cheaper to find and fix for the organization than in dynamic testing. It
reduces the development, testing, and overall organization cost.
3. Easy to find defects: Static testing easily finds defects that dynamic
testing does not detect easily.
4. Increase development productivity: Static testing increases
development productivity due to quality and understandable
documentation, and improved design.
5. Identifies coding errors: Static testing helps to identify coding errors
and syntax issues resulting in cleaner and more maintainable code.
Limitations of Static Testing
Below are some of the limitations of static testing:
1. May not detect all issues: Static testing may not uncover all issues that could arise during runtime. Some defects appear only during dynamic testing, when the software runs.
2. Depends on the reviewer’s skills: The effectiveness of static testing
depends on the reviewer’s skills, experience, and knowledge.
3. Time-consuming: Static testing can be time-consuming when working
on large and complex projects.
Dynamic Testing
Dynamic Testing is a type of Software Testing that is performed to analyze the dynamic behavior of the code. It involves testing the software with input values and analyzing the resulting output values.
1. The purpose of dynamic testing is to confirm that the software product
works in conformance with the business requirements.
2. It involves executing the software and validating the output with the
expected outcome.
3. It can be with black box testing or white box testing.
4. It is slightly complex as it requires the tester to have a deep knowledge
of the system.
5. It provides more realistic results than static testing.
Dynamic Testing Techniques
Dynamic testing is broadly classified into two types:
1. White box Testing: White box testing also known as clear box testing
looks at the internal workings of the code. The developers will perform
the white box testing where they will test every line of the program’s
code. In this type of testing the test cases are derived from the source
code and the inputs and outputs are known in advance.

2. Black box Testing: Black box testing looks only at the functionality of the Application Under Test (AUT). In this testing, the testers are unaware of the system's underlying code; they check whether the system generates the expected output according to the requirements. Black box testing is further classified into Functional Testing and Non-functional Testing. (A short sketch contrasting the two approaches follows.)
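A minimal sketch in Python, assuming a hypothetical shipping_fee() function whose requirement is "a flat fee of 50 up to 5 kg, plus 10 per extra kg". The first two tests are black box: they are derived only from that requirement. The last test is white box: it is derived from reading the code so that the error-handling branch is also executed.

import pytest

def shipping_fee(weight_kg: float) -> float:
    # Hypothetical unit under test with three branches.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 5:
        return 50.0
    return 50.0 + (weight_kg - 5) * 10.0

# Black box: derived from the requirement, without looking at the code.
def test_flat_rate_up_to_5_kg():
    assert shipping_fee(3) == 50.0

def test_fee_for_extra_weight():
    assert shipping_fee(7) == 70.0

# White box: derived from the code so the error path is also exercised.
def test_non_positive_weight_is_rejected():
    with pytest.raises(ValueError):
        shipping_fee(0)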
Benefits of Dynamic Testing
Below are some of the benefits of dynamic testing:
1. Reveals runtime errors: Dynamic testing helps to reveal runtime
errors, performance bottlenecks, memory leaks, and other issues that
become visible only during the execution.
2. Verifies integration of modules: Dynamic testing helps to verify the
integration of modules, databases, and APIs, ensuring that the system
is working seamlessly.
3. Accurate reliability assessment: Dynamic testing helps to provide
accurate quality and reliability assessment of the software thus
verifying that the software meets the specified requirements and
functions as intended. This helps to make sure that the software
functions correctly in different usage scenarios.
Limitations of Dynamic Testing
Below are some of the limitations of dynamic testing:
1. Time-consuming: Dynamic testing can be time-consuming in the case
of complex systems and large test suites.
2. Requires effort: In complex systems, significant effort is required to debug failures and pinpoint their exact cause.
3. Challenging: Testing exceptional or rare conditions can be challenging to conduct.
4. May not cover all scenarios: Dynamic testing may not cover all
possible scenarios due to a large number of potential inputs and
execution paths.
Static Testing vs Dynamic Testing
Below are the differences between static testing and dynamic testing:
Parameter-by-parameter comparison:

1. Definition: Static testing is performed to check for defects in the software without actually executing the code, whereas dynamic testing is performed to analyze the dynamic behavior of the code.
2. Objective: The objective of static testing is to prevent defects; the objective of dynamic testing is to find and fix defects.
3. Stage of execution: Static testing is performed at an early stage of software development; dynamic testing is performed at a later stage.
4. Code execution: In static testing the code is not executed; in dynamic testing the code is executed.
5. Before/after code deployment: Static testing is performed before code deployment; dynamic testing is performed after code deployment.
6. Cost: Static testing is less costly; dynamic testing is highly costly.
7. Documents required: Static testing involves a checklist for the testing process; dynamic testing involves test cases for the testing process.
8. Time required: Static testing generally takes a shorter time; dynamic testing usually takes longer, as it involves running several test cases.
9. Bugs: Static testing can discover a variety of bugs; dynamic testing exposes only the bugs that are reachable through execution and hence discovers a more limited range of bugs.
10. Statement coverage: Static testing may achieve 100% statement coverage in comparatively less time; dynamic testing typically achieves less than 50% statement coverage.
11. Techniques: Static testing includes informal reviews, walkthroughs, technical reviews, code reviews, and inspections; dynamic testing involves functional and non-functional testing.
12. Process type: Static testing is a verification process; dynamic testing is a validation process.

Levels of Software Testing


Software testing is an activity performed to identify errors so that they can be removed and a product of greater quality obtained. Software testing is required to assure and maintain the quality of software and to represent the ultimate review of specification, design, and coding. There are different levels of testing:
1. Unit Testing: Each component or unit of the software is tested individually by the developers to detect errors and ensure that it is fit for use. A unit is the smallest testable part of the software (a minimal unit-test sketch follows this list).
2. Integration Testing: Two or more unit-tested modules are integrated, their interacting components are tested, and the integrated modules are verified to check whether they work as expected; interface errors are also detected here.
3. System Testing: The complete, integrated software is tested, i.e., all the system elements forming the system are tested as a whole to confirm that the system meets its requirements.
4. Acceptance Testing: This is a kind of testing conducted to ensure that
the requirements of the users are fulfilled before its delivery and that
the software works correctly in the user’s working environment.
These tests can be conducted at various stages of software development; each level of testing corresponds to a phase of the software development life cycle.
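A minimal sketch of unit testing with Python's built-in unittest framework; apply_discount() is a hypothetical unit, not part of any system described in these notes.

import unittest

# Hypothetical unit under test: the smallest testable part of the software.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()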

While performing software testing, the following testing principles must be applied by every software engineer:
1. All tests should be traceable to the customer's requirements.
2. Planning of how the tests will be conducted should be done long before testing begins.
3. The Pareto principle can be applied to software testing- 80% of all
errors identified during testing will likely be traceable to 20% of all
program modules.
4. Testing should begin “in the small” and progress toward testing “in the
large”.
5. Exhaustive testing which simply means to test all the possible
combinations of data is not possible.
6. To make testing most effective, it should be conducted by an independent third party.
Test Case Design

A test case is a defined format for software testing used to check whether a particular application or piece of software is working or not. A test case consists of a set of conditions that need to be checked to test the application or software; in simpler terms, when the conditions are checked, the resultant output is compared with the expected output. A test case consists of various parameters such as ID, condition, steps, input, expected result, actual result, status, and remarks.

Parameters of a Test Case:


• Module Name: Subject or title that defines the functionality of the test.
• Test Case Id: A unique identifier assigned to every single condition in
a test case.
• Tester Name: The name of the person who would be carrying out the
test.
• Test scenario: The test scenario provides a brief description to the
tester, as in providing a small overview to know about what needs to
be performed and the small features, and components of the test.
• Test Case Description: The condition required to be checked for a given piece of software, e.g., check whether only-numbers validation is working for an age input box.
• Test Steps: Steps to be performed for the checking of the condition.
• Prerequisite: The conditions required to be fulfilled before the start of
the test process.
• Test Priority: As the name suggests, indicates which test cases are more important and have to be performed first, and which could be performed later.
• Test Data: The inputs to be taken while checking for the conditions.
• Test Expected Result: The output which should be expected at the
end of the test.
• Test parameters: Parameters assigned to a particular test case.
• Actual Result: The output that is displayed at the end.
• Environment Information: The environment in which the test is being
performed, such as the operating system, security information, the
software name, software version, etc.
• Status: The status of tests such as pass, fail, NA, etc.
• Comments: Remarks on the test regarding the test for the betterment
of the software.
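To make the parameters above concrete, here is a single illustrative test case for the age input box example, expressed as a Python dictionary; all field values are hypothetical.

# One test case for the "age input box" example, with each parameter explicit.
test_case = {
    "module_name": "User Registration",
    "test_case_id": "TC_REG_001",
    "tester_name": "QA Engineer",
    "test_scenario": "Validate the Age input box",
    "description": "Check that only numbers are accepted in the Age field",
    "prerequisite": "Registration page is open",
    "test_priority": "High",
    "test_steps": [
        "Click the Age input box",
        "Type the value from test data",
        "Click Submit",
    ],
    "test_data": "abc",
    "expected_result": "Validation error: only numbers are allowed",
    "actual_result": "",            # filled in after execution
    "environment": "Windows 11, Chrome",
    "status": "Not executed",       # Pass / Fail / NA after execution
    "comments": "",
}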

Why Write Test Cases?


Test cases are one of the most important aspects of software engineering, as they define how the testing will be carried out. Test cases are written for a very simple reason: to check whether the software works or not.
There are many advantages of writing test cases:
• To check whether the software meets customer expectations: Test
cases help to check if a particular module/software is meeting the
specified requirement or not.
• To check software consistency with conditions: Test cases
determine if a particular module/software works with a given set of
conditions.
• Narrow down software updates: Test cases help to narrow down the
software needs and required updates.
• Better test coverage: Test cases help to make sure that all possible
scenarios are covered and documented.
• For consistency in test execution: Test cases help to maintain
consistency in test execution. A well-documented test case helps the
tester to just have a look at the test case and start testing the
application.
• Helpful during maintenance: Test cases are detailed which makes
them helpful during the maintenance phase.

Test Oracles

A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of a program's output for the test cases. Conceptually, we can view testing as a process in which the test cases are given both to the oracle and to the program under test; the outputs of the two are then compared to determine whether the program behaves correctly for those test cases.

Test oracles are required for testing. Ideally, we want an automated oracle that always gives the correct answer. However, oracles are often human beings, who mostly work out by hand what the output of the program should be. As it is often very difficult to determine whether the observed behavior corresponds to the expected behavior, human oracles may make mistakes. Consequently, when there is a discrepancy between the program's output and the oracle's result, we must verify the result produced by the oracle before declaring that there is a defect in the program.

Human oracles typically use the program's specifications to decide what the correct behavior of the program should be. To help the oracle determine the correct behavior, it is important that the behavior of the system or component is explicitly specified and that the specification itself is error-free; in other words, the specification must actually describe the true and correct behavior. In some systems, oracles are automatically generated from the specifications of programs or modules. With such oracles, we are assured that the output of the oracle conforms to the specifications. However, even this approach does not solve all our problems, as there is a possibility of errors in the specifications: an oracle generated from the specifications will produce correct results only if the specifications are correct, and it will not be reliable if the specifications contain errors. In addition, systems that generate oracles from specifications require formal specifications, which are often not produced during design.
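A small sketch of an automated oracle in Python, assuming a hypothetical insertion_sort() under test: Python's built-in sorted() acts as the trusted oracle whose output the program's output is compared against.

import random

def insertion_sort(items):
    # Hypothetical implementation under test.
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def test_against_oracle():
    for _ in range(100):
        data = [random.randint(-1000, 1000) for _ in range(50)]
        # The trusted built-in sort plays the role of the test oracle.
        assert insertion_sort(data) == sorted(data)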

Software Testing Techniques


Software testing techniques are methods used to design and execute
tests to evaluate software applications. The following are common testing
techniques:
1. Manual testing – Involves manual inspection and testing of the
software by a human tester.
2. Automated testing – Involves using software tools to automate the
testing process.
3. Functional testing – Tests the functional requirements of the software
to ensure they are met.
4. Non-functional testing – Tests non-functional requirements such as
performance, security, and usability.
5. Unit testing – Tests individual units or components of the software to
ensure they are functioning as intended.
6. Integration testing – Tests the integration of different components of
the software to ensure they work together as a system.
7. System testing – Tests the complete software system to ensure it
meets the specified requirements.
8. Acceptance testing – Tests the software to ensure it meets the
customer’s or end-user’s expectations.
9. Regression testing – Tests the software after changes or
modifications have been made to ensure the changes have not
introduced new defects.
10. Performance testing – Tests the software to determine its
performance characteristics such as speed, scalability, and stability.
11. Security testing – Tests the software to identify vulnerabilities and
ensure it meets security requirements.
12. Exploratory testing – A type of testing where the tester actively
explores the software to find defects, without following a specific test
plan.
13. Boundary value testing – Tests the software at the boundaries of
input values to identify any defects.
14. Usability testing – Tests the software to evaluate its user-
friendliness and ease of use.
15. User acceptance testing (UAT) – Tests the software to determine
if it meets the end-user’s needs and expectations.

Types Of Software Testing Techniques


There are two main categories of software testing techniques:
1. Static Testing Techniques are testing techniques that are used to find
defects in an application under test without executing the code. Static
Testing is done to avoid errors at an early stage of the development
cycle thus reducing the cost of fixing them.
2. Dynamic Testing Techniques are testing techniques that are used to
test the dynamic behaviour of the application under test, that is by the
execution of the code base. The main purpose of dynamic testing is to
test the application with dynamic inputs- some of which may be allowed
as per requirement (Positive testing) and some are not allowed
(Negative Testing).
Each testing technique has further types; each one of them is explained in detail with examples below.
Static Testing Techniques
As explained earlier, Static Testing techniques are testing techniques that
do not require the execution of a code base. Static Testing Techniques
are divided into two major categories:
1. Reviews: They can range from purely informal peer reviews between
two developers/testers on the artifacts (code/test cases/test data) to
formal Inspections which are led by moderators who can be
internal/external to the organization.
1. Peer Reviews: Informal reviews are generally conducted without
any formal setup. It is between peers. For Example- Two
developers/Testers review each other’s artifacts like code/test
cases.
2. Walkthroughs: Walkthrough is a category where the author of work
(code or test case or document under review) walks through what
he/she has done and the logic behind it to the stakeholders to
achieve a common understanding or for the intent of feedback.
3. Technical review: It is a review meeting that focuses solely on the
technical aspects of the document under review to achieve a
consensus. It has less or no focus on the identification of defects
based on reference documentation. Technical experts like
architects/chief designers are required to do the review. It can vary
from Informal to fully formal.
4. Inspection: Inspection is the most formal category of reviews.
Before the inspection, the document under review is thoroughly
prepared before going for an inspection. Defects that are identified
in the Inspection meeting are logged in the defect management tool
and followed up until closure. The discussion on defects is avoided
and a separate discussion phase is used for discussions, which
makes Inspections a very effective form of review.
2. Static Analysis: Static analysis is an examination of the requirements, code, or design to identify defects that may or may not cause failures. For example, reviewing the code for compliance with coding standards: not following a standard is a defect that may or may not cause a failure. Many static analysis tools are mainly used by developers before or during component or integration testing. Even a compiler is a static analysis tool, as it points out incorrect usage of syntax without executing the code. There are several aspects to the code structure, namely data flow, control flow, and data structure (a short code sketch follows this list).
1. Data Flow: It means how the data trail is followed in a given
program – How data gets accessed and modified as per the
instructions in the program. By Data flow analysis, you can identify
defects like a variable definition that never got used.
2. Control flow: It is the structure of how program instructions get
executed i.e. conditions, iterations, or loops. Control flow analysis
helps to identify defects such as Dead code i.e. a code that never
gets used under any condition.
3. Data Structure: It refers to the organization of data irrespective of
code. The complexity of data structures adds to the complexity of
code. Thus, it provides information on how to test the control flow
and data flow in a given code.
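As an illustration only, the snippet below contains the two kinds of findings just described: a data-flow finding (a variable that is defined but never used) and a control-flow finding (dead code that can never execute). A linter such as pylint reports both without running the program.

def total_price(quantity: int, unit_price: float) -> float:
    tax_rate = 0.18              # data-flow finding: defined but never used
    if quantity <= 0:
        return 0.0
    return quantity * unit_price
    print("price computed")      # control-flow finding: unreachable (dead) code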
Dynamic Testing Techniques
Dynamic techniques are subdivided into three categories:
1. Structure-based Testing:
These are also called White box techniques. Structure-based testing
techniques are focused on how the code structure works and test
accordingly. To understand Structure-based techniques, we first need to
understand the concept of code coverage.
Code coverage is normally measured during component and integration testing. It establishes how much of the total code written is covered by the structural testing techniques. One drawback of code coverage is that it says nothing about code that has not been written at all (a missed requirement). There are tools in the market that can help measure code coverage.
There are multiple ways to measure code coverage:
1. Statement coverage: Number of Statements of code exercised/Total
number of statements. For Example, if a code segment has 10 lines and
the test designed by you covers only 5 of them then we can say that
statement coverage given by the test is 50%.
2. Decision coverage: Number of decision outcomes exercised/Total
number of Decisions. For Example, If a code segment has 4 decisions (If
conditions) and your test executes just 1, then decision coverage is 25%
3. Conditional/Multiple condition coverage: The aim is to verify that each outcome of every logical condition in the program has been exercised.
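A small sketch, using a made-up classify() function, of how the statement and decision coverage figures defined above are counted; a tool such as coverage.py can report these numbers automatically.

def classify(n: int) -> str:
    if n < 0:              # decision 1
        return "negative"
    if n == 0:             # decision 2
        return "zero"
    return "positive"

def test_positive_only():
    # Executes 3 of the 5 statements -> 60% statement coverage.
    # Exercises only the False outcome of each decision
    # (2 of the 4 outcomes) -> 50% decision coverage.
    assert classify(7) == "positive"

def test_remaining_paths():
    # Together with the test above, every statement and all four
    # decision outcomes are exercised: 100% statement and decision coverage.
    assert classify(-3) == "negative"
    assert classify(0) == "zero"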
2. Experience-Based Techniques:
These are techniques for executing testing activities with the help of
experience gained over the years. Domain skill and background are major
contributors to this type of testing. These techniques are used majorly
for UAT/business user testing. These work on top of structured techniques
like Specification-based and Structure-based, and they complement them.
Here are the types of experience-based techniques:
1. Error guessing: It is used by a tester who has either very good
experience in testing or with the application under test and hence they
may know where a system might have a weakness. It cannot be an
effective technique when used stand-alone but is helpful when used along
with structured techniques.
2. Exploratory testing: It is hands-on testing where the aim is to have
maximum execution coverage with minimal planning. The test design and
execution are carried out in parallel without documenting the test design
steps. The key aspect of this type of testing is the tester’s learning about
the strengths and weaknesses of an application under test. Similar to
error guessing, it is used along with other formal techniques to be useful.
3. Specification-based Techniques:
This includes both functional and non-functional techniques (i.e. quality
characteristics). It means creating and executing tests based on functional
or non-functional specifications from the business. Its focus is on
identifying defects corresponding to given specifications. Here are the
types of specification-based techniques:
1. Equivalence partitioning: It is generally used together with boundary value analysis and can be applied at any level of testing. The idea is to partition the input range of data into valid and invalid sections such that the values within one partition are considered "equivalent". Once the partitions are identified, it is enough to test with any one value in a given partition, assuming that all values in the partition will behave the same. For example, if an input field takes values between 1 and 999, then all values between 1 and 999 will yield similar results, and we need NOT test with each value to call the testing complete (a short sketch combining this technique with boundary value analysis follows this section).
2. Boundary Value Analysis (BVA): This analysis tests the boundaries
of the range- both valid and invalid. In the example above, 0,1,999, and
1000 are boundaries that can be tested. The reasoning behind this kind of
testing is that more often than not, boundaries are not handled gracefully
in the code.
3. Decision Tables: These are a good way to test the combination of
inputs. It is also called a Cause-Effect table. In layman’s language, one
can structure the conditions applicable for the application segment under
test as a table and identify the outcomes against each one of them to
reach an effective test.
1. It should be taken into consideration that there are not too many
combinations so the table becomes too big to be effective.
2. Take the example of a credit card that is issued only if both the credit score and salary criteria are met. This can be illustrated in the decision table below:

Decision Table: credit card issuance
Rule 1: credit score met = Yes, salary limit met = Yes -> card issued = Yes
Rule 2: credit score met = Yes, salary limit met = No -> card issued = No
Rule 3: credit score met = No, salary limit met = Yes -> card issued = No
Rule 4: credit score met = No, salary limit met = No -> card issued = No

4. Use case-based Testing: This technique helps us to identify test cases that exercise the system as a whole, like an actual user (actor) would, transaction by transaction. Use cases are a sequence of steps that describe the interaction between the actor and the system. They are always defined in the language of the actor, not the system. This testing is most effective in identifying integration defects. A use case also defines any preconditions and postconditions of the process flow. An ATM example can be tested via a use case:

Use case-based Testing

5. State Transition Testing: It is used where the application under test, or a part of it, can be treated as a finite state machine (FSM). Continuing the simplified ATM example above, we can say that the ATM flow has finite states and hence can be tested with the state transition technique. There are 4 basic things to consider:
1. States a system can achieve
2. Events that cause the change of state
3. The transition from one state to another
4. Outcomes of change of state
A state event pair table can be created to derive test conditions – both
positive and negative.

State Transition
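A minimal pytest sketch of equivalence partitioning and boundary value analysis for the 1-999 input field mentioned above; accepts() is a hypothetical validator standing in for the real input check.

import pytest

def accepts(value: int) -> bool:
    # Hypothetical validator for an input field that allows values 1-999.
    return 1 <= value <= 999

# Equivalence partitioning: one representative value per partition.
@pytest.mark.parametrize("value,expected", [
    (-5, False),    # invalid partition: below the range
    (500, True),    # valid partition: inside the range
    (2000, False),  # invalid partition: above the range
])
def test_partitions(value, expected):
    assert accepts(value) is expected

# Boundary value analysis: values on and just outside the edges (0, 1, 999, 1000).
@pytest.mark.parametrize("value,expected", [
    (0, False), (1, True), (999, True), (1000, False),
])
def test_boundaries(value, expected):
    assert accepts(value) is expected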
Advantages of software testing techniques:
1. Improves software quality and reliability – By using different testing
techniques, software developers can identify and fix defects early in
the development process, reducing the risk of failure or unexpected
behaviour in the final product.
2. Enhances user experience – Techniques like usability testing can help
to identify usability issues and improve the overall user experience.
3. Increases confidence – By testing the software, developers, and
stakeholders can have confidence that the software meets the
requirements and works as intended.
4. Facilitates maintenance – By identifying and fixing defects early,
testing makes it easier to maintain and update the software.
5. Reduces costs – Finding and fixing defects early in the development
process is less expensive than fixing them later in the life cycle.
Disadvantages of software testing techniques:
1. Time-consuming – Testing can take a significant amount of time,
particularly if thorough testing is performed.
2. Resource-intensive – Testing requires specialized skills and resources,
which can be expensive.
3. Limited coverage – Testing can only reveal defects that are present in
the test cases, and defects can be missed.
4. Unpredictable results – The outcome of testing is not always
predictable, and defects can be hard to replicate and fix.
5. Delivery delays – Testing can delay the delivery of the software if
testing takes longer than expected or if significant defects are
identified.
6. Automated testing limitations – Automated testing tools may have
limitations, such as difficulty in testing certain aspects of the software,
and may require significant maintenance and updates.
