
SOFTWARE TESTING METHODOLOGIES

UNIT – I

Introduction:

 Purpose of testing
 Dichotomies
 A model for testing
 Consequences of bugs
 Taxonomy of bugs

Flow graphs and Path testing: Basic concepts of path testing,

 Predicates
 Path predicates and achievable paths
 Path sensitizing
 Path instrumentation
 Application of path testing

Introduction:
What is software testing?
Software testing is a method to assess the functionality of the software program. The process checks
whether the actual software matches the expected requirements and ensures the software is bug-free.
It mainly aims at measuring the specification, functionality, and performance of a software program or
application.

Why is Software Testing Important?


The road of software development is bumpy, and products can always be vulnerable to bugs and
defects.

 Detect Defects Early: The primary goal of testing is to identify bugs and defects before they impact users.
 Maintain Stability: Preserve stability through regression testing and safeguard critical areas like usability, compatibility, and security.
 Enhance Quality: Improve reliability, optimize performance, ensure functionality, and align the product with user needs.
 Build Trust and Satisfaction: Consistent testing creates a stable and dependable product that meets user expectations, fostering trust and loyalty.
 Identify Vulnerabilities to Mitigate Risks: In high-stakes industries like finance, healthcare, and law, testing prevents costly errors that could harm users or expose companies to legal risk.
Types of software testing
There are two major software testing types that Quality Assurance professionals frequently use,
including:
 Functional testing: a type of software testing to verify whether the application delivers the
expected output.
 Non-functional testing: a type of software testing to verify whether the non-functional aspects
of an application (e.g., stability, security, and usability) are working as expected.

Approaches to software Testing:


 Manual
 Automation Tool

Software Testing Life Cycle: consists of six key activities (typically Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution, and Test Closure) to ensure that all software quality goals are met.

Basic Terminologies of Software Testing:


 Bug/Defect: A flaw or error in the software that causes it to behave unexpectedly or
incorrectly.
 Error: A mistake made by a developer that leads to a bug in the program.
 Failure: When the software does not perform a required function or produces incorrect results
during execution.
 Test Case: A set of inputs, execution conditions, and expected outcomes created to test a
specific function.
 Test Plan: A detailed document that outlines the testing scope, approach, resources, and
schedule.
 Test Suite: A collection of test cases that are designed to test a particular module or
application.
 Test Script: A set of instructions written in a programming or scripting language to automate test case execution (see the sketch after this list).
 Test Environment: The setup of software, hardware, and network required to conduct testing.
 Test Data: The input values used during testing to validate software behavior.
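
A minimal sketch of how several of these terms map to code, using Python's built-in unittest module (the function under test, its name, and the test data are hypothetical illustrations, not part of the notes): the test case pairs test data with an expected outcome, and the script automates its execution.

python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return price - price * percent / 100

class DiscountTestCase(unittest.TestCase):
    """A test case: inputs, execution conditions, and an expected outcome."""

    def test_ten_percent_discount(self):
        actual = apply_discount(200.0, 10)      # test data (input values)
        self.assertAlmostEqual(actual, 180.0)   # expected outcome from the spec

if __name__ == "__main__":
    unittest.main()   # running this test script automates test case execution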
1. Purpose of Testing
 The purpose of software testing is to ensure that a software application meets its specified
requirements, functions as expected, and delivers a high-quality user experience.
 Testing helps identify and fix defects early, reducing risks and costs while ensuring the
software's reliability, security, and performance.

“MYTH: Good programmers write code without bugs. (It's wrong!)”

“History says that even well-written programs still have 1-3 bugs per hundred statements.”

1.1 What we do in testing?


 Manufacturing Quality Control:
 Quality Checks at Every Stage – Components undergo testing from raw materials to the final
product
 Defect Handling – Flawed items are either discarded or reworked.
 Productivity Measurement – Includes material cost, rework cost, discarded components,
and quality assurance expenses.
 Trade-off Between QA and Manufacturing Costs –
 Less QA → High reject rate → Increased net cost.
 More QA → higher inspection costs → Increased net cost.
 Quality Assurance Costs Vary by Industry: Consumer products: QA costs ~2%.
 High-risk industries (space, nuclear, aviation): QA costs up to 80%.
 Software Development Quality and Cost:
 Manufacturing Cost is Trivial – Unlike physical goods, the cost of copying software is
negligible.
 Bug Costs Dominate – The dominant costs are detecting and fixing bugs, and designing and running the tests that find them.
 Quality = Productivity – In software, better quality directly leads to higher productivity.

1.2 Phases in a tester's mental life:


Phases in a tester's mental life can be categorized into the following 5 phases:

 Phase 0: (Until 1956: Debugging Oriented): There is no difference between testing and
debugging. Phase 0 thinking was the norm in the early days of software development, until
testing emerged as a discipline.
 Phase 1: (1957-1978: Demonstration Oriented):
 The purpose of testing here is to show that the software works.
 Highlighted during the late 1970s.
 This failed because the probability of showing that the software works decreases as testing
increases; i.e., the more you test, the more likely you are to find a bug.
 Phase 2: (1979-1982: Destruction Oriented)
 The purpose of testing is to show that the software doesn't work.
 This also failed because the software would never get released: you will always find one bug
or another.
 Also, correcting a bug may itself introduce another bug.
 Phase 3: (1983-1987: Evaluation Oriented)
 The purpose of testing is not to prove anything but to reduce the perceived risk of not
working to an acceptable value (Statistical Quality Control).
 Notion is that testing does improve the product to the extent that testing catches bugs and
to the extent that those bugs are fixed.
 The product is released when confidence in the product is high enough.
 Phase 4: (1988-2000: Prevention Oriented)
 Testability is the factor considered here.
 One reason is to reduce the labor of testing. Another reason is to distinguish testable from
hard-to-test code.
 Testable code has fewer bugs than the code that's hard to test.
 Identifying the testing techniques to test the code is the main key here.

1.3 Testing isn't everything: There are approaches other than testing to create better software.
Methods other than testing include:

 Inspection Methods: Walkthroughs, desk checking, inspections, and code reading are as
effective as testing but catch different bugs.
 Design Style: When designing software, following principles like testability, openness, and
clarity can help to prevent bugs.
 Static Analysis Methods: Includes formal analysis of source code during compilation.
 Languages: Programmers find new bugs while using new languages.
 Development Methodologies and Development Environment: The way software is
developed and the environment it’s used in can help prevent many bugs.

1.4 Pesticide Paradox and the Complexity Barrier:

 Pesticide Paradox: Every method used to prevent or find bugs still leaves behind some subtle
bugs that it can't catch.
 Complexity Barrier: Software complexity grows to the limits of our ability to manage that
complexity.
2. Dichotomies

The separation that exists between two groups or things that are completely opposite to and
different from each other

 2.1 Testing Vs Debugging:

 The purpose of testing is to show that a program has bugs. The purpose of debugging is to
find the error or misconception that led to the program's failure and to design and implement
the program changes that correct the error.
 Debugging usually follows testing, but they differ as to goals, methods and, most important,
psychology.

 Testing starts with known conditions, uses a predefined procedure, and has a predictable outcome. Debugging starts with possibly unknown initial conditions, and the end cannot be predicted.
 Testing can be planned, designed, and scheduled. Debugging procedures and duration are not so constrained.
 Testing is a demonstration of an error or of apparent correctness. Debugging is a deductive process.
 Testing proves the programmer's success or failure. Debugging is the programmer's vindication.
 Testing should be predictable, dull, constrained, rigid, and inhuman. Debugging demands intuitive leaps, conjectures, experimentation, and freedom.
 Much of testing can be done without design knowledge. Debugging is impossible without detailed design knowledge.
 Testing can be done by an outsider to the development team. Debugging must be done by an insider (the development team).
 A theory establishes what testing can and cannot do. For debugging there are only rudimentary results (how much can be done, and the time and effort required, depend on human ability).
 Much of test execution and design can be automated. Automated debugging is still a dream.


 2.2 Function Vs Structure

 In functional testing, the program or system is treated as a black box.


 It is subjected to inputs, and its outputs are verified for conformance to specified behavior.
 Functional testing takes the user's point of view: it is concerned with functionality and features,
not with the program's implementation.
 Structural testing does look at the implementation details. Things such as programming
style, control method, source language, database design, and coding details dominate
structural testing.
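
A brief sketch of the two viewpoints on one hypothetical routine (the grade() function and the pass mark are made up for illustration): the functional tests are derived from the specification alone, while the structural tests are chosen by reading the implementation so that both branches of the predicate execute.

python
def grade(score):
    """Hypothetical routine: the spec says 50 or more is a pass."""
    if score >= 50:
        return "pass"
    return "fail"

# Functional (black-box) view: tests come from the specification only.
assert grade(75) == "pass"
assert grade(20) == "fail"

# Structural (white-box) view: tests come from the implementation,
# here the boundary of the predicate (score >= 50), so each branch runs.
assert grade(50) == "pass"   # true branch, at the boundary
assert grade(49) == "fail"   # false branch, just below the boundary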

 2.3 Designer Vs Tester :

 The test designer is the person who designs the tests, whereas the tester is the one who
actually runs the tests against the code.
 During functional testing, the designer and tester are probably different persons. During unit
testing, the tester and the programmer merge into one person.
 Tests designed and executed by the software designers are by nature biased towards
structural considerations and therefore suffer the limitations of structural testing.

 2.4 Modularity Vs Efficiency:

 Modularity: The smaller the component, the easier it is to understand. Efficiency: More components imply more interfaces, and the added interfaces increase complexity and reduce efficiency (more bugs likely).
 Modularity: Small components/modules can be retested independently with less rework (to check whether a bug is fixed). Efficiency: Higher efficiency at the module level when a bug occurs with small components.
 Modularity: Microscopic test cases require separate setups for data, systems, and software, which can itself lead to bugs. Efficiency: More test cases mean a higher chance of bugs in the tests themselves, leading to more rework and lower efficiency, especially with microscopic test cases.
 Modularity: It is easier to design large modules with smaller interfaces at a higher level. Efficiency: Such designs are less complex and more efficient.

 2.5 Small Vs Large:

 Small programs: Simple, easier to understand, with fewer dependencies. Large programs: Highly complex, with multiple modules, dependencies, and interactions.
 Small programs: Can be built by a single developer or a small team. Large programs: Require structured development processes, multiple teams, and collaboration.
 Small programs: Easier to test, with fewer test cases and simpler debugging. Large programs: Require extensive testing strategies, automation, and continuous integration.
 Small programs: Can be quickly modified or rewritten. Large programs: Require careful planning; changes can have widespread impacts.

 2.6 Builder Vs the Buyer:

 Most software is written and used by the same organization. Unfortunately, this situation is
dishonest because it clouds accountability.
 If there is no separation between builder and buyer, there can be no accountability.

The different roles / users in a system include:

 Builder: Who designs the system and is accountable to the buyer.


 Buyer: Who pays for the system in the hope of profits from providing services.
 User: The ultimate beneficiary or victim of the system. The user's interests are also guarded by the operator.
 Tester: Who is dedicated to the builder's destruction.
 Operator: Who has to live with the builders' mistakes, the buyers' murky (unclear)
specifications, the testers' oversights, and the users' complaints.

3. Model for Testing

A model for project in a real world consists of the following components:


 Application: An online real-time system (with remote terminals) providing timely responses
to user requests (for services).
 Staff: Manageable size of programming staff with specialists in systems design.
 Schedule: The project takes about 24 months from start to acceptance, followed by a
6-month maintenance period.
 Specifications: Good ones are documented, while undocumented ones are well understood
within the team.
 Personnel: The technical staff comprises a combination of experienced professionals and
junior programmers (1–3 years' experience) with varying degrees of knowledge of the application.
 Objectives (of the project): The system is expected to operate profitably for more than 10 years
after installation. Similar systems, with up to 75% of the code in common, may be implemented
in the future.
 Source: Code for a new project is a combination of new code (up to 1/3), code taken from a
previous reliable system (up to 1/3), and code re-hosted from another language and OS (up to 1/3).
 Acceptance test: The application is accepted after a formal acceptance test. At first it is the
customer's responsibility, and then the software design team's.
 Standards: Programming, test, and interface standards are documented and followed. A
centralized standards database is developed and administered.
 History: Developers leave before their components are tested.
 Work is excellent but poorly documented.
 Customers may request unexpected changes, big or small.
 Milestones may be delayed, but the final delivery date is met.
 Integration issues, hardware problems, and rework are common.

 3.1 ENVIRONMENT:

 A Program's environment is the hardware and software required to make it run. For online
systems, the environment may include communication lines, other systems, terminals and
operators.
 The environment also includes all programs that interact with and are used to create the
program under test - such as OS, linkage editor, loader, compiler, utility routines.
 Because the hardware and firmware are stable, it is not smart to blame the environment for
bugs.
[Figure: A model for testing. The world (the environment, the program, and nature & human psychology) produces unexpected outcomes; the model world (environment model, program model, and bug model) drives the tests, which predict expected outcomes.]

 3.2 PROGRAM:

 Most programs are too complicated to understand in detail.


 The concept of the program is to be simplified in order to test it.
 If a simple model of the program doesn't explain the unexpected behaviour, we may have to
modify that model to include more facts and details. And if that fails, we may have to
modify the program.

 3.3 BUGS:

 Bugs are more insidious (deceiving but harmful) than ever we expect them to be.
 An unexpected test result may lead us to change our notion of what a bug is and our model
of bugs.
 Programmers and testers who hold optimistic notions about bugs are usually unable to test
effectively and unable to justify the dirty tests most programs need.

3.3.1 OPTIMISTIC NOTIONS ABOUT BUGS:

 Benign Bug Hypothesis: The belief that bugs are nice, tame and logical. (Benign: Not
Dangerous)
 Bug Locality Hypothesis: The belief that a bug discovered within a component affects only
that component's behavior.
 Control Bug Dominance: The belief that errors in the control structures (if, switch, etc.) of
programs dominate the bugs.
 Code / Data Separation: The belief that bugs respect the separation of code and data.
 Lingua Salvator Est: The belief that the language syntax and semantics (e.g. Structured
Coding, Strong typing, etc) eliminates most bugs.
 Corrections Abide: The mistaken belief that a corrected bug remains corrected.
 Silver Bullets: The mistaken belief that X (Language, Design method, representation,
environment) grants immunity from bugs.
 Sadism Suffices: The common belief (especially by independent tester) that a sadistic
streak, low cunning, and intuition are sufficient to eliminate most bugs. Tough bugs need
methodology and techniques.
 Angelic Testers: The belief that testers are better at test design than programmers are at
code design
 3.4 TESTS:

 Formal procedures
 Input Preparation
 Outcome prediction and Observation
 Documentation of test
 Execution and observation of outcome are subject to errors
 An unexpected test result may lead us to revise the test and test model

 3.5 Testing and Levels: 3 kinds of tests

 Unit/Component testing
 Integration testing
 System testing

Unit Testing:

 It is the first level of testing.
 It is done during development (the coding phase) of an application by the developers.
 Unit tests isolate a section of code and verify its correctness.
 A unit may be an individual function, method, procedure, module, or object, as long as it is
independent.
 When unit tests reveal faults, these are unit bugs.

Component Testing:
 It is an integrated aggregate of one or more units
 It verifies the component against functional specifications and the implemented
structure against the design
 Component testing is performed before Integration testing and after unit testing.
 QA team will do the component testing.
 Problems revealed are component bugs.

System Testing:

 A system is a big component.
 System testing concerns issues and behaviors that can be tested only at the level of the entire
integrated system or a major part of it.
 It includes testing for performance, security, accountability, configuration sensitivity,
start-up, and recovery.

3.6 Role of models

 Testing is a process in which we create mental models of the environment, the program,
human nature, and the tests.
 These models are used for the testing process until the system's behavior is correct or until
the model proves insufficient (for testing).
 Unexpected results may force a revision of the model.
 Art of testing consists of creating, selecting, exploring and revising models.
 The model should be able to express the program.

3.7 Playing Pool and Consulting Oracles:

 Testing is like playing a pool game: either you hit the ball toward any pocket (kiddie pool) or
you specify the pocket in advance (real pool). So it is with testing.
 There is kiddie testing and real testing. In kiddie testing, whatever outcome is observed is
accepted as the expected outcome.
 In real testing, the outcome is predicted and documented before the test is run.
 A tester who cannot make that kind of prediction does not understand the program's
functional objectives.
Oracles: An oracle is any program, process, or body of data that specifies the expected outcome
of a set of tests as applied to a tested object.

Example of oracle: Input/Outcome Oracle - an oracle that specifies the expected outcome for
a specified input.
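
A minimal sketch of an input/outcome oracle: a table of hypothetical input and expected-outcome pairs against which the tested object is checked (the function and the values are illustrative, not from the notes).

python
def square(x):
    """Hypothetical tested object."""
    return x * x

# Input/outcome oracle: for each specified input, the expected outcome.
oracle = {0: 0, 2: 4, -3: 9, 10: 100}

for test_input, expected in oracle.items():
    actual = square(test_input)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"input={test_input} expected={expected} actual={actual} -> {verdict}")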

Sources of Oracles: The hardest part of test design is predicting the expected outcome, but we
often have oracles that reduce the work. They are:

 Kiddie Testing
 Regression Test Suites
 Purchased Test Suites and Oracles
 Existing Program

Is Complete Testing Possible?

If the objective of the testing were to prove that a program is free of bugs, then testing
not only would be practically impossible, but also would be theoretically impossible.

Three different approaches can be used to demonstrate that a program is correct. They are:

 Functional Testing:
1. Every program operates on a finite number of inputs. A complete functional test would
consist of subjecting the program to all possible input streams.
2. For each input the routine either accepts the stream and produces a correct outcome, accepts
the stream and produces an incorrect outcome, or rejects the stream and tells us that it did so.
 Structural Testing:
 The design should have enough tests to ensure that every path through the routine is
exercised at least once. Right off that’s impossible because some loops might never terminate.
 The number of paths through a small routine can be awesome because each loop multiplies
the path count by the number of times through the loop.

Formal Proofs of Correctness: Formal proofs of correctness rely on a combination of functional


and structural concepts. Requirements are stated in a formal language (e.g. Mathematics) and each
program statement is examined and used in a step of an inductive proof that the routine will produce
the correct outcome for all possible input sequences.
4. Consequences of Bugs

4.1 Importance of Bugs: The importance of a bug depends on its frequency, correction cost,
installation cost, and consequences.

1. Frequency: How often does that kind of bug occur? Pay more attention to the more frequent
bug types.
2. Correction Cost: What does it cost to correct the bug after it is found? The cost is the sum of 2
factors: (1) the cost of discovery (2) the cost of correction. These costs go up dramatically later
in the development cycle when the bug is discovered. Correction cost also depends on system
size.
3. Installation Cost: Installation cost depends on the number of installations: small for a single
user program but more for distributed systems. Fixing one bug and distributing the fix could
exceed the entire system's development cost.
4. Consequences: What are the consequences of the bug? Bug consequences can range from
mild to catastrophic.

A reasonable metric for bug importance is:


Importance ($) = Frequency × (Correction cost + Installation cost + Consequential cost)

4.2 Consequences of Bugs:

The consequences of a bug can be measured in human terms rather than machine terms. Some
consequences of a bug, on a scale of one to ten, are:

1. Mild: The symptoms of the bug offend us aesthetically (gently); a misspelled output or a
misaligned printout.
2. Moderate: Outputs are misleading or redundant. The bug impacts the system's performance.
3. Annoying: The system's behavior because of the bug is dehumanizing. E.g. Names are
truncated or arbitrarily modified.
4. Disturbing: It refuses to handle legitimate (authorized / legal) transactions. The ATM won't give
you money. My credit card is declared invalid.
5. Serious: It loses track of its transactions. Not just the transaction itself but the fact that the
transaction occurred. Accountability is lost.
6. Very Serious: The bug causes the system to do the wrong transactions. Instead of losing your
paycheck, the system credits it to another account or converts deposits to withdrawals.
7. Extreme: The problems aren't limited to a few users or a few transaction types. They are
frequent and arbitrary instead of sporadic (infrequent) or confined to unusual cases.
8. Intolerable: Long term unrecoverable corruption of the database occurs and the corruption is
not easily discovered. Serious consideration is given to shutting the system down.
9. Catastrophic: The decision to shut down is taken out of our hands because the system fails.
10. Infectious: What can be worse than a failed system? One that corrupts other systems even
though it does not fail itself; that erodes the social or physical environment; that melts nuclear
reactors or starts wars.

4.3 Flexible Severity Rather than Absolutes

The factors involved in bug severity are:

 Correction Cost: Not so important, because catastrophic bugs may be easier to correct and
small bugs may take major time to debug.
 Context and Application Dependency: Severity depends on the context and the application in
which the software is used.
 Creator Culture Dependency: What's important depends on the creators of the software and
their cultural aspirations. Test tool vendors are more sensitive about bugs in their software
than games software vendors.
 User Culture Dependency: Severity also depends on user culture. Naive users of PC software
go crazy over bugs, whereas experts may just ignore them.
 The Software Development Phase: Severity depends on the development phase. A bug gets
more severe as it gets closer to field use, and the longer it has been around, the more severe it becomes.

4.4 The nightmare list and when to stop testing?

 List all nightmares in terms of the symptoms & reactions of the user to their consequences.
 Convert the consequences of each nightmare into a cost. There could be rework cost (and if the scope
extends to the public, there could be the cost of lawsuits, lost business, or nuclear reactor meltdowns).
 Order these from the costliest to the cheapest and discard those you can live with.
 Based on experience, measured data, intuition, and published statistics, postulate the kinds of
bugs causing each symptom. This is part of the "bug design process." A bug type can cause
multiple symptoms.
 Order the causative bugs by decreasing probability (judged by intuition, experience, statistics
etc.).
 Calculate the importance of a bug type as (a small computational sketch follows this list):

Importance(bug type j) = Σ over all nightmares i of C_i × P(bug type j causes nightmare i)

where C_i is the cost of nightmare i.

 Rank the bug types in order of decreasing importance.


 Design tests & QA inspection process with most effective against the most important bugs.
 If a test is passed, or when a correction is made for a failed test, some nightmares disappear. As
testing progresses, revise the probabilities and the nightmare list as well as the test strategy.
 Stop testing when the importance (probability and cost) of the remaining nightmares proves to be inconsequential.
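
A small computational sketch of the ranking step above; the nightmare costs and probabilities are purely hypothetical numbers chosen only to illustrate the weighted sum and the ordering.

python
# Hypothetical nightmare costs in dollars (rework, lawsuits, lost business, ...)
nightmare_cost = {"data corruption": 500_000, "wrong payment": 120_000, "ugly printout": 500}

# Hypothetical P(bug type causes nightmare), judged from experience and statistics
prob = {
    "initialization bug": {"data corruption": 0.20, "wrong payment": 0.05, "ugly printout": 0.01},
    "logic bug":          {"data corruption": 0.05, "wrong payment": 0.15, "ugly printout": 0.10},
}

# Importance(bug type j) = sum over all nightmares i of C_i * P(bug type j causes nightmare i)
importance = {
    bug: sum(nightmare_cost[n] * p for n, p in causes.items())
    for bug, causes in prob.items()
}

# Rank bug types in order of decreasing importance
for bug, value in sorted(importance.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{bug}: ${value:,.0f}")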

5. Taxonomy of Bugs

• There is no universally correct way to categorize bugs.


• A given bug can be put into one or another category depending on its history and the
programmer's state of mind.

The major categories are:

1. Requirements, Features and Functionality Bugs


2. Structural Bugs
3. Data Bugs
4. Coding Bugs
5. Interface, Integration and System Bugs
6. Test and Test Design Bugs.

1. Requirements, Features and Functionality Bugs

a) Requirements and Specifications Bugs

 Requirements and specifications can be incomplete, ambiguous, or self-contradictory.


 They can be misunderstood or hard to understand.
 Specifications may change during design, with features being added, modified, or deleted.
 Requirements are a major source of expensive bugs.
 Bug impact ranges from a few percent to over 50%, depending on the application.
 Bugs are often the first to appear and the last to be fixed.
b) Feature Bugs:

 Specification problems usually create corresponding feature problems.


 A feature can be wrong, missing, or superfluous (serving no useful purpose). A missing feature
or case is easier to detect and correct. A wrong feature could have deep design implications.
 Removing the features might complicate the software, consume more resources, and foster
more bugs.

c) Feature Interaction Bugs:

 Clear and testable feature specifications alone aren't sufficient.


 Related features are usually well-tested, but unpredictable interactions can cause bugs.
 Features from different groups may interact unexpectedly (e.g., call holding and call
forwarding).
 Every application has unique features with numerous unspecified interaction potentials.
 These unpredictable interactions lead to feature interaction bugs.

Additional information

Examples

 Conflicting Features : Auto save & Undo feature


 Feature Dependency Issues: 2 factor authentication and Password Reset
 Performance Degradation Due to Feature Interaction: system overload or slow down
 Security Vulnerabilities from Feature Interactions: Security risks ex: incognito tab and screen
shot
 Unexpected Behavioral Changes: alarm clock and silent mode.

“Real-World Case Study: Feature Interaction Bug in Boeing 737 MAX (2018-2019)”

 Bug: The MCAS (Maneuvering Characteristics Augmentation System) interacted


poorly with the angle-of-attack sensor, forcing the aircraft into repeated
downward pitches.
 Impact: Led to two fatal crashes and a global grounding of the aircraft.

Specification and Feature Bug Remedies:


• Most feature bugs are rooted in human-to-human communication problems.
• One solution is to use high-level, formal specification languages or systems.
• Such languages and systems provide short-term support but, in the long run, do not solve the
problem.
• Short-term support: Specification languages facilitate formalization of requirements and
analysis of inconsistency and ambiguity.
• Long-term support: Assume that we have a great specification language that can be used to
create unambiguous, complete specifications with unambiguous, complete tests and consistent
test criteria.
• The specification problem has then been shifted to a higher level but not eliminated.

2. Structural Bugs

• Structural bugs are defects related to the internal design, architecture, and implementation of
a software system.
• These issues affect the maintainability, scalability, and overall integrity of the system.

Below are the key categories of structural bugs:

a. Control and sequence bugs


b. Logic Bugs
c. Processing Bugs
d. Initialization Bugs
e. Data flow bugs and Anomalies

a) Control and sequence bugs

Control and sequence bugs include paths left out, unreachable code, improper nesting of loops,
loop-back or loop termination criteria incorrect, missing process steps, duplicated processing,
unnecessary processing, rampaging, GOTO's, ill-conceived (not properly planned) switches,
sphagetti code, and worst of all, pachinko code.

Causes of Control Flow Bugs:

 This area can be analyzed using theories, which makes it more likely to have systematic errors.
 Many control flow bugs are easily tested and caught in unit testing.
 Old codebases (especially in Assembly Language Programming (ALP) and COBOL) are heavily
dominated by control flow bugs.

Detection and Prevention:

 Structural testing is effective in identifying control flow issues.


 Path testing (tracing all possible execution paths) helps catch these bugs.
 Bottom-line functional testing, based on specifications, ensures correctness.

b) Logic Bugs

 Misunderstanding of the semantics of the control structures & logic operators


 Improper layout of cases, including impossible cases and ignored necessary cases.
 Using a look-alike operator, improper simplification, confusing exclusive OR with inclusive OR.
 Deeply nested conditional statements and using many logical operations in one statement.

Prevention and Control: Logic testing, careful checks, functional testing

Additional Information, Examples:

 Conditional (boundary) error: using i > 18 when the specification requires i >= 18 (see the sketch after this list).
 Loop error: e.g., an unintended infinite loop.
 Calculation error: e.g., an incorrect calculation of a radius.
 Incorrect variable: Assigning the wrong value to a variable causes unexpected behavior.
 Incorrect function: A function produces incorrect results due to flawed logic
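
A short sketch of the first two examples in the list above (the eligibility rule, the age limit, and the function names are hypothetical): a boundary error in a predicate, and a loop whose termination condition can never become false.

python
# Conditional (boundary) error: the hypothetical spec says 18-year-olds qualify,
# but the predicate uses > instead of >=, so age == 18 is wrongly rejected.
def is_eligible_buggy(age):
    return age > 18        # bug: should be age >= 18

def is_eligible_fixed(age):
    return age >= 18

print(is_eligible_buggy(18))   # False - wrong according to the spec
print(is_eligible_fixed(18))   # True

# Loop error: the loop variable is never updated, so the loop never terminates.
def count_down_buggy(n):
    while n > 0:
        pass               # bug: missing n -= 1 -> infinite loop (do not call)

def count_down_fixed(n):
    while n > 0:
        n -= 1
    return n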

C) Processing Bugs:

 Processing bugs include arithmetic bugs, algebraic, mathematical function evaluation,


algorithm selection and general processing.
 Examples of processing bugs include: incorrect conversion from one data representation to
another, ignoring overflow, improper use of greater-than-or-equal, etc.
 Although these bugs are frequent (about 12%), they tend to be caught in good unit testing.

d) Initialization Bugs:

 Initialization bugs are common. An initialization can be improper or superfluous.
 Superfluous initializations are generally less harmful but can affect performance.
 Typical initialization bugs include: Forgetting to initialize the variables before first use,
assuming that they are initialized elsewhere, initializing to the wrong format, representation or
type etc
 Explicit declaration of all variables, as in Pascal, can reduce some initialization problems.

e) Data-Flow Bugs and Anomalies:

 Most initialization bugs are a special case of data-flow anomalies.


 A data flow anomaly occurs where there is a path along which we expect to do something
unreasonable with data, such as using an uninitialized variable, attempting to use a variable
before it exists, modifying and then not storing or using the result, or initializing twice without
an intermediate use.
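
A brief sketch, in Python, of two of the anomalies just listed (the function and variable names are arbitrary): a variable used before it is defined, and a value that is computed but never stored or used.

python
def total_with_tax_buggy(prices):
    # Use before definition: 'total' is read before it is ever assigned,
    # so the first loop iteration raises UnboundLocalError.
    for p in prices:
        total = total + p      # bug: 'total' was never initialized to 0

    # Computed but not stored or used: the result of the expression is discarded,
    # so the tax is never actually applied.
    total * 1.18               # bug: should be total = total * 1.18
    return total

def total_with_tax_fixed(prices):
    total = 0
    for p in prices:
        total = total + p
    return total * 1.18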

3 ) Data bugs : Depend on the types of data or the representation of data. There are 4 sub
categories.

a. Generic Data Bugs


b. Dynamic Data Vs Static Data
c. Information, Parameter, and Control Bugs
d. Contents, Structure & Attributes related Bugs

a) Generic Data Bugs: These bugs occur due to incorrect handling of data in general, regardless of
its type or state.

Examples: Data corruption, Data truncation, Incorrect data type handling, Invalid or missing
default values

b) Dynamic Data Vs Static Data:

Dynamic Data Bugs: Issues in real-time, changing data during execution.

Examples:
 Race conditions in data processing
 Stale cache usage
 Inconsistent real-time data updates

Static Data Bugs: Issues in fixed, predefined, or hardcoded data.

Examples:
 Hardcoded values that do not adapt to changes
 Incorrect database seed data
 Configuration files with incorrect settings

c) Information, parameter, control


Information Bugs: Incorrect or missing data that impacts user decisions or system behavior.

Examples:
 Incorrect financial calculations
 Wrong stock price updates in a trading app

Parameter Bugs: Errors related to incorrect input parameters in functions or APIs.

Examples:
 Function receiving null values instead of required parameters
 API expecting an integer but receiving a string

Control Bugs: Bugs that affect how data is processed based on logical or conditional errors.

Examples:
 Incorrect condition checking for user permissions
 Data flow disrupted due to missing validation

4) CODING BUGS:
 Coding errors of all kinds can create any of the other kind of bugs.
 Syntax errors are generally not important in the scheme of things if the source language
translator has adequate syntax checking.
 If a program has many syntax errors, then we should expect many logic and coding bugs.
 The documentation bugs are also considered as coding bugs which may mislead the
maintenance programmers.

5) INTERFACE, INTEGRATION, AND SYSTEM BUGS

a) Interface:

External interfaces enable communication between the system and the outside world.
 These interfaces include:
 Devices (e.g., keyboards, touch screens).
 Actuators & Sensors (e.g., temperature sensors, motion detectors).
 Input Terminals (e.g., ATMs, POS systems).
 Printers (e.g., networked or local).
 Communication Lines (e.g., APIs, network protocols)

The primary design goal for an external interface is robustness. Every human or machine interface
should follow a defined protocol.

Common external interface bugs


 Incorrect Protocol Implementation – Misinterpretation or faulty execution.
 Invalid Timing or Sequence Assumptions – System expects inputs at wrong intervals.
 Misunderstood Input / Output Formats – Data structure mismatches.
 Insufficient Error Handling for Bad Input – Failure to reject invalid data.
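
A small defensive-handling sketch for the last point above; the message format and field names are hypothetical. The idea is that the interface rejects input that violates the agreed protocol instead of silently processing it.

python
def handle_reading(message):
    """Hypothetical external interface: expects {'sensor_id': str, 'value': number}."""
    # Insufficient error handling would index the fields directly and crash
    # (or misbehave) on malformed input; here bad input is explicitly rejected.
    if not isinstance(message, dict):
        raise ValueError("protocol violation: message must be a dict")
    if "sensor_id" not in message or "value" not in message:
        raise ValueError("protocol violation: missing required field")
    if not isinstance(message["value"], (int, float)):
        raise ValueError("protocol violation: 'value' must be numeric")
    return float(message["value"])

print(handle_reading({"sensor_id": "t1", "value": 21.5}))   # accepted: 21.5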
Internal interfaces are in principle not different from external interfaces but they are more
controlled.

 The best example of an internal interface is a pair of communicating routines.
 The external environment is fixed and the system must adapt to it, but the internal
environment, which consists of interfaces with other components, can be negotiated.
 Internal interfaces have the same problems as external interfaces.

b) Hardware Architecture:

 Bugs related to hardware architecture originate mostly from misunderstanding how the
hardware works.
 Examples of hardware architecture bugs: address generation error, i/o device operation /
instruction error, waiting too long for a response, incorrect interrupt handling etc.
 The remedy for hardware architecture and interface problems is twofold:
(1) Good Programming and Testing
(2) Centralization of hardware interface software in programs written by hardware
interface specialists.

C) Operating System Bugs:

 Program bugs related to the operating system are a combination of hardware architecture
and interface bugs, mostly caused by a misunderstanding of what the operating system
does.
 Use operating system interface specialists, and use explicit interface modules or macros for
all operating system calls.
 This approach may not eliminate the bugs but at least will localize them and make testing
easier.

d) Software Architecture:

 Software architecture bugs are the kind often called "interactive."


 Routines can pass unit and integration testing without revealing such bugs.
 Many of them depend on load, and their symptoms emerge only when the system is
stressed.
 Examples of such bugs: assuming that there will be no interrupts, failure to block or
unblock interrupts, assuming that memory and registers were (or were not) initialized,
etc.
 Careful integration of modules and subjecting the final system to a stress test are effective
methods for these bugs.

e) Control and Sequence Bugs (Systems Level):


 These bugs include: ignored timing, assuming that events occur in a specified sequence,
working on data before all the data have arrived from disk, waiting for an impossible
combination of prerequisites, and missing, wrong, redundant, or superfluous process steps.
 The remedy for these bugs is highly structured sequence control.
 Specialized internal sequence-control mechanisms are helpful.

f) Resource Management Problems:

 Memory is subdivided into dynamically allocated resources such as buffer blocks, queue
blocks, task control blocks, and overlay buffers.
 External mass storage units, such as disks, are subdivided into memory resource pools.
 Some resource management and usage bugs: required resource not obtained, wrong
resource used, resource already in use, resource deadlock, etc.

Resource Management Remedies: A design remedy that prevents bugs is always preferable to a test
method that discovers them.

 The design remedy in resource management is to keep the resource structure simple: the
fewest different kinds of resources, the fewest pools, and no private resource management.

g) Integration Bugs:

 Integration bugs are bugs having to do with the integration of, and with the interfaces
between, working and tested components.
 These bugs result from inconsistencies or incompatibilities between components.
 Communication methods such as data structures, call sequences, registers, semaphores,
communication links, and protocols can all give rise to integration bugs.
 Integration bugs do not constitute a big bug category (about 9%), but they are an expensive
category because they are usually caught late in the game and because they force changes in
several components and/or data structures.

h) System Bugs:

 System bugs cover all kinds of bugs that cannot be ascribed to a single component or to their
simple interactions, but result from the totality of interactions between many components
such as programs, data, hardware, and the operating system.
 There can be no meaningful system testing until there has been thorough component and
integration testing.
 System bugs are infrequent (1.7%) but very important because they are often found only
after the system has been fielded.

TEST AND TEST DESIGN BUGS:


 Testing: testers have no immunity to bugs. Tests require complicated scenarios and
databases. They require code or the equivalent to execute and consequently they can have
bugs.
 Test criteria: even if the specification is correct, correctly interpreted and implemented, and
a proper test has been designed, the criterion by which the software's behavior is judged may
be incorrect or impossible. So proper test criteria have to be designed. The more complicated
the criteria, the likelier they are to have bugs.

Remedies:

 Test Debugging: The first remedy for test bugs is testing and debugging the tests. Test
debugging, when compared to program debugging, is easier because tests, when properly
designed are simpler than programs and don’t have to make concessions to efficiency.
 Test Quality Assurance: Programmers have the right to ask how quality in independent testing is
monitored.
 Test Execution Automation: The history of software bug removal and prevention is
indistinguishable from the history of programming automation aids. Assemblers, loaders,
compilers are developed to reduce the incidence of programming and operation errors. Test
execution bugs are virtually eliminated by various test execution automation tools.
 Test Design Automation: Just as much of software development has been automated, much test
design can be and has been automated. For a given productivity rate, automation reduces the
bug count - be it for software or be it for tests

Flow graph and basis path testing


PATH TESTING:

 Path Testing is the name given to a family of test techniques based on judiciously selecting a
set of test paths through the program.
 If the set of paths is properly chosen, then we have achieved some measure of test
thoroughness. For example, pick enough paths to ensure that every source statement has
been executed at least once.
 Path testing techniques are the oldest of all structural test techniques.
 Path testing is most applicable to new software for unit testing. It is a structural technique.
 It requires complete knowledge of the program's structure.
 It is most often used by programmers to unit test their own code.
 The effectiveness of path testing rapidly deteriorates as the size of the software aggregate
under test increases.

THE BUG ASSUMPTION:

 The bug assumption for the path testing strategies is that something has gone wrong with
the software that makes it take a different path than intended.
 As an example "GOTO X" where "GOTO Y" had been intended.
 Structured programming languages prevent many of the bugs targeted by path testing; as a
consequence, the effectiveness of path testing for these languages is reduced, while for old
code in COBOL, ALP, FORTRAN, and BASIC, path testing is indispensable.
CONTROL FLOW GRAPHS:

 The control flow graph is a graphical representation of a program's control structure. It uses
the elements named process blocks, decisions, and junctions.

Flow Graph Elements: A flow graph contains four different types of elements.

(1) Process Block

(2) Decisions

(3) Junctions

(4) Case Statements

1) Process Block:

 A process block is a sequence of program statements uninterrupted by either decisions or


junctions.
 It is a sequence of statements such that if any one statement of the block is executed,
then all statements of the block are executed.
 Formally, a process block is a piece of straight-line code of one statement or hundreds of
statements.
 A process block has one entry and one exit. It can consist of a single statement or instruction, a
sequence of statements or instructions, a single-entry/single-exit subroutine, a macro or function
call, or a sequence of these.

2) Decisions:

 A decision is a program point at which the control flow can diverge.


 Machine language conditional branch and conditional skip instructions are examples of
decisions.
 Most decisions are two-way, but some are three-way branches in control flow.

3) Case Statements:

 A case statement is a multi-way branch or decision.


 Examples of case statement are a jump table in assembly language, and the PASCAL case
statement.
 From the point of view of test design, there are no differences between Decisions and Case
Statements

4) Junctions:

 A junction is a point in the program where the control flow can merge.
 Examples of junctions are: the target of a jump or skip instruction in ALP, a label that is a
target of GOTO.
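
A short annotated sketch that points out the four flow-graph elements in one hypothetical routine (Python 3.10+ for the match statement); the comments mark which lines form process blocks, decisions, case statements, and junctions.

python
def classify(n):
    count = 0                  # process block: straight-line code, one entry, one exit
    total = n * 2              # (still part of the same process block)

    if total > 10:             # decision: a point where control flow can diverge
        count += 1             # process block on the true branch
    else:
        count -= 1             # process block on the false branch
    # junction: the two branches merge here

    match count:               # case statement: a multi-way branch
        case 1:
            label = "high"
        case -1:
            label = "low"
        case _:
            label = "other"
    # junction: all cases merge here
    return label, count

print(classify(7))   # total = 14 > 10, so ('high', 1)
print(classify(3))   # total = 6, so ('low', -1)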
CONTROL FLOW GRAPHS Vs FLOWCHARTS:

 A program's flow chart resembles a control flow graph.


 In flow graphs, we don't show the details of what is in a process block.
 In flow charts every part of the process block is drawn.
 The flowchart focuses on process steps, whereas the flow graph focuses on the control flow of
the program.
 The act of drawing a control flow graph is a useful tool that can help us clarify the control
flow and data flow issues.

Predicate, Path Predicate, and Achievable Paths in Software Testing


1. Predicate

A predicate is a logical condition in a program that controls decision-making, such as if, while, or
for statements. It determines which part of the code executes.

Example:

python
if x > 5:  # Predicate: (x > 5)
    print("x is greater than 5")

 The predicate (x > 5) controls whether the print statement executes.

2. Path Predicate

A path predicate is a set of logical conditions that must be true to follow a specific execution path in a
program. It is formed by combining predicates along a given path.

Example:

python
if x > 5:          # Predicate 1
    if y < 10:     # Predicate 2
        print("Path A")
    else:
        print("Path B")
else:
    print("Path C")

Here, the path predicates for different paths are:

 Path A: (x > 5) AND (y < 10)


 Path B: (x > 5) AND (y >= 10)
 Path C: (x <= 5)

Each path predicate represents a unique execution flow.

3. Achievable Paths

Achievable paths are execution paths that can actually occur with some input values. Some paths may
be unreachable due to logical constraints or program structure.

Example (Unreachable Path):

python
if x > 10:         # Predicate 1
    if x < 5:      # Predicate 2 (contradiction: this can never be true)
        print("Unreachable Path")

 No value of x can be both greater than 10 and less than 5 at the same time.
 The path (x > 10) AND (x < 5) is not achievable.
Key Points:

 Predicate: A single condition that controls execution.


 Path Predicate: A combination of predicates defining a unique execution path.
 Achievable Path: A path that can be followed with some input values.

These concepts are crucial for path coverage testing to ensure all possible execution flows are
tested.

Path Sensitizing in Software Testing

Path sensitizing is the process of finding input values that make a specific execution path achievable
in a program. It ensures that a particular path can be executed by satisfying the conditions (predicates)
along the way.

Example: Path Sensitizing Process

Consider the following code:

python
def test_function(x, y):
    if x > 5:          # Predicate 1
        if y < 10:     # Predicate 2
            print("Path A")
        else:
            print("Path B")
    else:
        print("Path C")

Step 1: Identify Paths and Their Predicates

The program has three possible execution paths:

Path Path Predicate


A (x > 5) AND (y < 10)
B (x > 5) AND (y >= 10)
C (x ≤ 5)

Step 2: Check for Achievability

 Path A: Needs x > 5 and y < 10. Example: (x=6, y=5) ✅ Achievable
 Path B: Needs x > 5 and y >= 10. Example: (x=7, y=15) ✅ Achievable
 Path C: Needs x ≤ 5. Example: (x=3, y=8) ✅ Achievable

All three paths are achievable, meaning no contradiction exists.

Step 3: Handling Unachievable Paths

Sometimes, certain paths may not be achievable due to contradictory conditions.

Example of an Unachievable Path:

python
if x > 10:
    if x < 5:      # Contradiction: no x can be both > 10 and < 5
        print("Unreachable")

Here, the path (x > 10) AND (x < 5) is unachievable, so no input values can satisfy it.

Why is Path Sensitizing Important?

1. Ensures complete path coverage in testing.


2. Identifies unreachable paths, helping to remove dead code.
3. Improves test case design by ensuring test cases trigger all execution flows.

By applying path sensitizing, testers ensure that all feasible paths are tested and identify impossible
paths in the program.

Path Instrumentation in Software Testing

Path Instrumentation is a technique used to track which execution paths are taken during program
execution. It involves inserting instrumentation code into the program to collect information about
executed paths.

How Path Instrumentation Works

1. Insert Counters or Logs


o Add code to log or count executions of different paths.
2. Run the Program with Different Inputs
o Execute the program with test cases and observe which paths are taken.
3. Analyze Path Coverage
o Check if all intended paths have been executed at least once.
Example: Path Instrumentation in Python
python
def test_function(x, y):
    if x > 5:
        print("Path A taken")
    else:
        print("Path B taken")

    if y < 10:
        print("Path C taken")
    else:
        print("Path D taken")

# Test cases
test_function(6, 5)    # Executes Path A and Path C
test_function(4, 15)   # Executes Path B and Path D

Here, instrumentation is done using print statements to track which paths were taken.

Advanced Path Instrumentation: Using Logs

Instead of print statements, we can use logging for better tracking.

python
import logging

logging.basicConfig(level=logging.INFO, filename="path_log.txt", filemode="w")

def test_function(x, y):
    if x > 5:
        logging.info("Path A taken")
    else:
        logging.info("Path B taken")

    if y < 10:
        logging.info("Path C taken")
    else:
        logging.info("Path D taken")

# Running test cases
test_function(6, 5)
test_function(4, 15)

 This stores path execution logs in a file (path_log.txt) for later analysis.

Why Use Path Instrumentation?

✔ Ensures all paths are tested at least once.


✔ Helps in detecting untested or unreachable paths.
✔ Useful in debugging and performance analysis.
This technique is widely used in structural testing and code coverage analysis to verify complete
path execution. 🚀

Key Applications of Path Testing

1. Ensuring Complete Code Coverage

✅ Confirms that every decision point (if, while, for) and statement is executed at least once.
✅ Helps in branch coverage, statement coverage, and path coverage.

💡 Example: Testing all possible conditions in an if-else structure.

2. Detecting Unreachable Code (Dead Code Analysis)

✅ Helps find redundant or unreachable code segments.


✅ Eliminates logical contradictions that prevent certain paths from executing.

💡 Example:

python
if x > 10:
    if x < 5:      # This condition can never be true
        print("Unreachable code")

🚀 Path testing detects that x < 5 is unreachable if x > 10.

3. Identifying Infinite Loops and Incorrect Loop Termination

✅ Ensures that loops exit correctly and do not result in infinite execution.
✅ Helps verify loop conditions, edge cases, and boundary values.

💡 Example: Checking if a loop terminates when i = 10

python
i = 1
while i < 10:
    print(i)
    i += 1     # If this line is missing, it causes an infinite loop

🚀 Path testing detects missing or incorrect loop termination conditions.

4. Validating Decision-Based Systems

✅ Ensures correctness in systems with complex decision trees.


✅ Useful in AI algorithms, banking systems, and conditional workflows.

💡 Example: Loan approval system


 Path testing verifies all possible paths (approved, rejected, pending) are tested.

5. Enhancing Software Security

✅ Helps uncover security vulnerabilities by testing all logical paths, including rare ones.
✅ Identifies unexpected execution flows that could lead to exploits.

💡 Example: Testing authentication paths to check all possible login failures and bypasses.

6. Optimizing Performance & Debugging

✅ Helps detect unnecessary computations and optimize execution paths.


✅ Useful for analyzing execution bottlenecks in performance testing.

💡 Example: Detecting slow paths in an e-commerce checkout process.

Industries Using Path Testing

🏦 Banking & Finance → Validating transaction workflows.


🚀 Aerospace & Defense → Ensuring safety-critical systems work in all conditions.
📱 Software Development → Checking mobile apps for logical errors.
🧠 AI & Machine Learning → Verifying decision-making in AI models.
