Debugging
Introduction: Debugging is an integral part of the software development process, often learned
through experience and applied through trial and error. Despite the challenges it poses, successful
debugging requires humility, an open mind, and the ability to recognize and rectify mistakes.
Shneiderman emphasizes the frustrating nature of debugging, likening it to problem-solving and
brain teasers, but notes the relief that comes with ultimately correcting the bug.
Characteristics of Bugs:
3. Non-Error Causes:
Symptoms may be caused by non-errors, such as round-off inaccuracies, rather than by an
actual defect.
5. Timing Problems:
Symptoms may arise from timing issues rather than processing problems.
7. Intermittent Symptoms:
Common in embedded systems where hardware and software are tightly coupled.
8. Distributed Causes:
If two modules behave correctly individually but fail when integrated, check the
interface for consistency.
Conclusion: Debugging, despite its challenges, is an essential skill for developers. Recognizing the
characteristics of bugs and employing appropriate techniques after a thorough analysis of symptoms
can lead to effective bug resolution. The iterative nature of debugging requires persistence, an open
mind, and the acknowledgment of the possibility of errors in code.
Debugging Approaches: Trial & Error, Backtracking, Binary Search, Watch Points, Induction &
Deduction
1. Trial & Error:
The debugger examines the error symptoms, makes snap judgments about potential error
locations, and applies debugging tools there.
2. Backtracking:
The debugger traces backward through the program flow to a point where the symptoms
disappear, bracketing the error location.
Subsequent careful study of the bounded code segment reveals the cause.
3. Binary Search:
Injects a set of known-correct values near the middle of the program and examines the output.
If output is correct, the error is in the first half; if wrong, in the second half.
Repeated to bracket the erroneous portion of the code for final analysis.
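The bracketing idea can be sketched as a bisection over a pipeline of processing stages; the stage names, data, and correctness check here are illustrative assumptions, not part of the original text:

```python
def bisect_fault(stages, test_input, output_ok):
    """Bracket the first faulty stage in a processing pipeline.

    Assumes that once a stage corrupts the data, every later
    intermediate check also fails (illustrative sketch only).
    """
    lo, hi = 0, len(stages)            # fault lies in stages[lo:hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        data = test_input
        for stage in stages[:mid]:     # run only the first half
            data = stage(data)
        if output_ok(data):
            lo = mid                   # first half is clean; look later
        else:
            hi = mid                   # fault already occurred; look earlier
    return lo                          # index of the suspect stage

# Hypothetical pipeline: the middle stage wrongly drops an element.
stages = [lambda xs: sorted(xs),
          lambda xs: xs[1:],           # the planted bug
          lambda xs: [x * 2 for x in xs]]
print(bisect_fault(stages, [3, 1, 2], lambda xs: len(xs) == 3))  # → 1
```

Each iteration halves the suspect region, so the erroneous portion is bracketed in logarithmically many runs.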
4. Watch Points:
Debugging software can insert watch points automatically, without manual modification of
the source, avoiding the practical problems of hand-inserted trace statements.
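As a sketch of the idea, an interpreter's tracing hook can act as a software watch point, reporting every change to a variable without editing the watched function (the function and variable names below are hypothetical):

```python
import sys

def watch(var_name):
    """Return a trace function that reports each change to a local
    variable named var_name -- a software watch point that requires
    no modification of the watched code."""
    last = {}
    def tracer(frame, event, arg):
        if event == "line" and var_name in frame.f_locals:
            value = frame.f_locals[var_name]
            key = id(frame)
            if key not in last or last[key] != value:
                print(f"{frame.f_code.co_name}:{frame.f_lineno} "
                      f"{var_name} = {value!r}")
                last[key] = value
        return tracer
    return tracer

def buggy_total(items):
    total = 0
    for x in items:
        total += x * x      # suspect line under observation
    return total

sys.settrace(watch("total"))
buggy_total([1, 2, 3])      # prints each change to `total` as it runs
sys.settrace(None)
```

The same effect is usually obtained from a debugger's watch command; this sketch only shows the mechanism.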
Inductive Approach:
Reasons from the particulars (the clues in the error data) to the whole (the cause of the error).
Steps:
1. Locate the pertinent data.
2. Organize the data and study the relationships among the clues.
3. Devise a hypothesis.
4. Prove the hypothesis.
Conclusion: Effective debugging involves a combination of systematic approaches. Trial & Error and
Backtracking provide immediate insights, while Binary Search, Watch Points, and Induction &
Deduction offer more structured and methodical ways to identify and fix errors. The choice of
approach depends on the nature of the problem and the available information.
Consider clues from similar test cases that do not exhibit the symptom.
Patterns may emerge, such as "the error occurs only when there is no outstanding balance."
3. Devise a Hypothesis:
Study relationships among clues to devise one or more hypotheses about the error cause.
If no hypothesis can be devised, additional data may be needed from new test cases.
If multiple hypotheses seem possible, select the most probable one first.
4. Prove the Hypothesis:
Compare the hypothesis to the original clues, ensuring it completely explains their existence.
Failure to prove the hypothesis may result in fixing only a symptom or a portion of the
problem.
Deductive Approach: enumerate all possible causes, then use the data to eliminate them one
by one until a single, validated cause remains.
If all causes are eliminated, additional data (e.g., new test cases) are needed.
If more than one cause remains, select the most probable one (the prime hypothesis) first.
Use available clues to refine the theory, moving from general (e.g., "error in handling the last
transaction") to specific (e.g., "last transaction in the buffer is overlaid with the end-of-file
indicator").
Conclusion: Effective debugging involves a structured and systematic approach. The induction
approach focuses on gathering and analyzing data systematically, while the deduction approach
involves eliminating possible causes one by one until a single, validated cause remains. Both
approaches help in identifying and fixing errors accurately.
Notes on Debugging Tools:
1. Types of Tools:
Tools include debugging compilers, dynamic debugging aids, automatic test case generators,
memory dumps, and cross-reference maps.
2. Role of Tools:
Tools are not a substitute for careful evaluation based on a complete software design
document and clear source code.
Debugging compilers provide detailed error messages in the attribute table, aiding the
debugging process.
To enhance testing quality, tools should be concise, powerful, and natural for testers to use.
Conclusion: Effective debugging relies on a combination of approaches and tools. While tools such as
debugging compilers and testing tools contribute significantly, human judgment and understanding
remain crucial for successful software development and debugging. The integration of tools with
careful evaluation enhances the efficiency and accuracy of the debugging process.
2. Static Analyzers:
Aim to prove allegations, which are claims about analyzed programs demonstrated through
systematic examination.
Examples include FACES, DAVE, RXVP, PLA, checkout compiler, LINT on PWB/Unix.
Typically find deficiencies at a rate of 0.1 to 0.2% of Non-Comment Source Statements (NCSS).
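In that spirit, a minimal LINT-style allegation checker can be sketched with Python's `ast` module: it examines source systematically, without executing it, and reports an allegation for each bare `except:` clause (the rule chosen is just one illustrative example):

```python
import ast

def find_bare_excepts(source):
    """Toy static analyzer: parse the source (never run it) and report
    'allegations' -- here, bare except clauses that swallow every error."""
    tree = ast.parse(source)
    reports = []
    for node in ast.walk(tree):
        # ExceptHandler.type is None exactly when the clause is `except:`
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            reports.append(f"line {node.lineno}: bare except clause")
    return reports

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # → ['line 4: bare except clause']
```

Real analyzers such as LINT check many more allegations (type mismatches, unreachable code, uninitialized variables), but the examine-without-executing principle is the same.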
3. Code Inspectors:
Enforce standards uniformly for many programs.
Examples include the AUDIT system, found in some COBOL tools like the AORIS librarian
system.
4. Standard Enforcers:
5. Other Tools:
Tools used to catch bugs indirectly through program listings that highlight mistakes.
Example: Program generator used to produce proforma parts of each source module,
ensuring uniform program appearance.
Conclusion: Static testing tools play a crucial role in enhancing program quality by analyzing code
without execution. Static analyzers, code inspectors, standard enforcers, and related tools
contribute to identifying and preventing issues early in the development process, promoting code
readability and adherence to standards.
Dynamic testing tools support the dynamic testing process by facilitating the execution of
tests and analyzing their results.
2. Test Execution:
A test consists of a single invocation of the test object and all subsequent execution until the
test object returns control to the point of invocation.
Subsidiary modules called by the test object can be real or simulated by testing stubs.
Input Setting:
Selects test data that the test object reads during the test.
Stub Processing:
Handles outputs and selects inputs when a stub is called.
Results Display:
Provides the tester with the values produced by the test object for validation.
Test Planning:
Assists the tester in planning efficient and effective tests for defect discovery.
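The input-setting and stub-processing roles above can be sketched as follows; the test object, subsidiary module, and stub are all hypothetical names invented for illustration:

```python
def fetch_rate(currency):
    """Real subsidiary module -- deliberately unusable during tests."""
    raise RuntimeError("network unavailable in tests")

def convert(amount, currency, rate_source=fetch_rate):
    """Test object: calls a subsidiary module for an exchange rate."""
    return round(amount * rate_source(currency), 2)

# Testing stub: handles the test object's output (records the call)
# and selects the input fed back to it (a canned rate).
calls = []
def rate_stub(currency):
    calls.append(currency)   # record what the test object asked for
    return 0.5               # canned input returned to the test object

print(convert(10, "EUR", rate_source=rate_stub))  # 5.0
print(calls)                                      # ['EUR']
```

Here the stub is passed in explicitly; a test harness would normally splice it in around the test object without changing the calling code.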
4. Coverage Analyzers:
Often relatively simple, involving declaration of a minimum required level of coverage
(C1 coverage).
C1 coverage is measured by planting subroutine calls (software probes) along each program
segment.
Results are collected using a run-time system and reported to the user.
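The probe mechanism can be sketched by hand; a real coverage analyzer would plant the probe calls automatically and report through its run-time system (the segment names here are invented):

```python
# Run-time table: segment id -> number of times executed.
hits = {}

def probe(segment_id):
    """Software probe planted along a program segment."""
    hits[segment_id] = hits.get(segment_id, 0) + 1

def classify(n):
    probe("entry")
    if n < 0:
        probe("negative-branch")
        return "negative"
    probe("non-negative-branch")
    return "zero" if n == 0 else "positive"

for value in (3, 0, 7):     # the test suite under measurement
    classify(value)

# Report: "negative-branch" shows 0 hits, so the tests are incomplete.
for seg in ("entry", "negative-branch", "non-negative-branch"):
    print(f"{seg}: {hits.get(seg, 0)} hit(s)")
```

The zero-hit segment is exactly the kind of unexercised code that the coverage report surfaces for the tester.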
5. Output Comparators:
Used in dynamic testing to check equivalence between predicted and actual outputs.
Objective is to identify differences between old and new output files from a program.
Conclusion: Dynamic testing tools play a crucial role in executing and analyzing tests during the
dynamic testing process. Coverage analyzers and output comparators are essential components,
ensuring that tests cover the required program segments and validating the equivalence of
predicted and actual outputs. These tools contribute to effective and thorough testing, identifying
discrepancies and facilitating quality assurance.
Operating systems for better minicomputers often have built-in output comparators or file
comparators.
These tools help identify differences between old and new output files from a program.
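A minimal output comparator along these lines can be built from Python's standard `difflib`; the two output runs shown are invented examples:

```python
import difflib

# Old and new output of a program run, one line per record.
old_run = ["balance: 100", "status: ok", "items: 3"]
new_run = ["balance: 100", "status: error", "items: 3"]

# unified_diff reports only the lines that differ between the runs.
diff = list(difflib.unified_diff(old_run, new_run,
                                 fromfile="old_output",
                                 tofile="new_output",
                                 lineterm=""))
for line in diff:
    print(line)
```

Lines prefixed `-`/`+` mark the divergence between predicted and actual output; identical runs produce an empty diff, which is the pass condition in regression testing.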
Test File Generators:
Create a file of information for program use, based on user commands or data descriptions.
Test Data Generators:
The test data generation problem is challenging, and there is no general solution.
Practical need for methods to generate test data that meet specific objectives, such as
exercising previously unexercised program segments.
Difficulties include the complexity of long paths, non-linear formula sets, and many illegal
paths.
Variational test data generation techniques are often effective, derived from existing paths
near the intended segment.
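A toy version of variational generation: start from an existing input whose path runs near the target segment and perturb it until the unexercised branch's guard becomes true. The seed value, mutation, and target predicate are illustrative assumptions:

```python
import random

def variational_search(seed, executes_target, mutate, tries=1000):
    """Perturb an existing input until it drives execution into the
    target segment; returns a satisfying input or None."""
    random.seed(0)                    # reproducible for this sketch
    candidate = seed
    for _ in range(tries):
        if executes_target(candidate):
            return candidate
        candidate = mutate(seed)      # vary the known-good input
    return None

# Target branch guard: taken only for negative multiples of 7.
found = variational_search(
    seed=14,                          # existing input near the target
    executes_target=lambda x: x < 0 and x % 7 == 0,
    mutate=lambda x: x + random.randint(-50, 50),
)
print(found)
```

Random perturbation is the crudest variation strategy; symbolic or constraint-based methods solve the guard directly, but as the notes say, no general solution exists.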
Test Harness Systems:
A test harness system is link-edited and relocated around the test object, controlling its
inputs and outputs.
Test-Archiving Systems:
Goal is to keep track of a series of tests and to serve as a basis for documenting test
execution and defect findings.
Establishes procedures for handling test information files and documenting when tests were
run and their outcomes.
Conclusion: Dynamic testing tools continue to play a critical role in facilitating the testing process.
Output comparators, test file generators, test data generators, test harness systems, and test-
archiving systems contribute to efficient and effective testing, providing mechanisms for
comparison, data generation, input/output control, and documentation of test processes. These
tools collectively support comprehensive dynamic testing practices.