
Title: Debugging Techniques and Challenges

Introduction: Debugging is an integral part of the software development process, often learned
through experience and applied through trial and error. Despite the challenges it poses, successful
debugging requires humility, an open mind, and the ability to recognize and rectify mistakes.
Shneiderman emphasizes the frustrating nature of debugging, likening it to problem-solving and
brain teasers, but notes the relief that comes with ultimately correcting the bug.

Characteristics of Bugs (Pressman's Insights):

1. Geographically Remote Cause:

 Symptom and cause may be located in different parts of the program.

 Highly coupled program structures can complicate identification.

2. Temporary Disappearance of Symptoms:

 Symptoms may vanish temporarily when another error is corrected.

3. Non-Error Causes:

 Symptoms may result from non-errors, such as round-off inaccuracies.

4. Tracing Human Errors:

 Human errors that cause symptoms may not be easily traced.

5. Timing Problems:

 Symptoms may arise from timing issues rather than processing problems.

6. Difficulty in Reproducing Input Conditions:

 Real-time applications with indeterminate input ordering make it challenging to reproduce conditions accurately.

7. Intermittent Symptoms:

 Common in embedded systems where hardware and software are tightly coupled.

8. Distributed Causes:

 Symptoms may be due to causes distributed across tasks running on different processors.

Debugging Techniques (Table 8.8):

1. Thorough Analysis of Symptoms:

 Techniques should not be employed without prior analysis and hypothesis formulation.

2. Interface Consistency Check:

 If two modules behave correctly individually but fail when integrated, check the
interface for consistency.
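
A minimal sketch of such an interface check, assuming two hypothetical modules with a unit mismatch at their boundary (all names and values are invented for illustration):

    def compute_total_cents(prices_cents):
        """Module A: returns a total in integer cents."""
        return sum(prices_cents)

    def format_invoice(total_dollars):
        """Module B: expects an amount in dollars."""
        return f"Total due: ${total_dollars:.2f}"

    def make_invoice(prices_cents):
        total = compute_total_cents(prices_cents)
        # Interface check at the boundary: Module B expects dollars while
        # Module A returns cents. Each module passes its own tests; only an
        # explicit check (and conversion) here keeps the integration correct.
        assert isinstance(total, int), "expected integer cents from Module A"
        return format_invoice(total / 100)   # convert cents to dollars

    print(make_invoice([199, 250]))          # Total due: $4.49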

Conclusion: Debugging, despite its challenges, is an essential skill for developers. Recognizing the
characteristics of bugs and employing appropriate techniques after a thorough analysis of symptoms
can lead to effective bug resolution. The iterative nature of debugging requires persistence, an open
mind, and the acknowledgment of the possibility of errors in code.

Debugging Approaches: Trial & Error, Backtracking, Binary Search, Watch Points, Induction &
Deduction

1. Trial & Error:

 Debugger examines error symptoms, makes snap judgments on potential error locations,
and uses debugging tools.

 Slow and wasteful approach due to lack of a systematic plan.

2. Backtracking:

 Examines error symptoms to identify where they are first noticed.

 Backtracks in the program flow to a point where symptoms disappear, bracketing the error
location.

 Subsequent careful study of the bounded code segment reveals the cause.

 Forward tracking variation uses print statements to examine intermediate results.
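
A small forward-tracking sketch with hypothetical functions and a deliberately planted bug, showing how two print statements bracket the error location:

    def normalize(values):
        total = sum(values)
        return [v / total for v in values]

    def scale(values, factor):
        # deliberate bug for illustration: silently drops the last element
        return [v * factor for v in values[:-1]]

    data = [2.0, 3.0, 5.0]
    step1 = normalize(data)
    print("after normalize:", step1)   # correct: [0.2, 0.3, 0.5]
    step2 = scale(step1, 100)
    print("after scale:", step2)       # wrong: [20.0, 30.0] -- the bug is
                                       # bracketed between the two prints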

3. Binary Search Strategy:


 Assumes known correct values at key points in the program.

 Injects a set of inputs near the middle of the program and examines the output.

 If output is correct, the error is in the first half; if wrong, in the second half.

 Repeated to bracket the erroneous portion of the code for final analysis.
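
A minimal sketch of the binary-search strategy, assuming the program can be treated as a pipeline of stages with known-correct values at each stage boundary (the stage functions, values, and the planted bug are all illustrative):

    def stage_a(x): return x + 1
    def stage_b(x): return x * 2
    def stage_c(x): return x - 3          # deliberate bug: should be x - 2
    def stage_d(x): return x * x

    stages = [stage_a, stage_b, stage_c, stage_d]
    correct_inputs = [5, 6, 12, 10]       # known-correct input to each stage
    correct_output = 100                  # known-correct final output

    def run_from(i, value):
        """Inject a value at stage i and execute the rest of the program."""
        for f in stages[i:]:
            value = f(value)
        return value

    def locate_error():
        lo, hi = 0, len(stages)
        while hi - lo > 1:
            mid = (lo + hi) // 2          # inject inputs near the middle
            if run_from(mid, correct_inputs[mid]) == correct_output:
                hi = mid                  # output correct: error in first half
            else:
                lo = mid                  # output wrong: error in second half
        return lo                         # stage bracketing the erroneous code

    print("bug bracketed at stage index:", locate_error())   # -> 2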

4. Watch Points:

 Inserts watch points (output statements) at appropriate places in the program.

 Software tools can insert watch points without manual modification of the source, avoiding the practical problems of editing the program by hand.

 Ensures easy removal of debugging statements after error resolution.
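
One way software can plant a watch point without hand-editing the program is a trace hook; a minimal Python sketch using sys.settrace (the traced function is hypothetical):

    import sys

    def watch(var_name):
        last = {}
        def tracer(frame, event, arg):
            # report each time the watched variable takes a new value
            if event == "line" and var_name in frame.f_locals:
                value = frame.f_locals[var_name]
                if last.get("v") != value:
                    print(f"watch: {var_name} = {value!r} (line {frame.f_lineno})")
                    last["v"] = value
            return tracer
        return tracer

    def accumulate(items):
        total = 0
        for x in items:
            total += x
        return total

    sys.settrace(watch("total"))   # plant the watch point, no source edits
    accumulate([1, 2, 3])
    sys.settrace(None)             # remove it once the error is resolved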

5. Induction & Deduction (Fig. 8.32):

 Inductive Approach:

1. Moves from particular clues in the data to a single working hypothesis about the error's cause.

 Deductive Approach:

1. Starts from a set of possible causes and uses the data to eliminate them until a provable hypothesis remains.

 Steps:

1. Locate the pertinent data.

2. Organize the data.

3. Study the relationships.

4. Devise a hypothesis.

5. If the hypothesis cannot be proved, iterate the earlier steps, collecting more data if needed.

6. If the hypothesis can be proved, fix the error.

Conclusion: Effective debugging involves a combination of systematic approaches. Trial & Error and
Backtracking provide immediate insights, while Binary Search, Watch Points, and Induction &
Deduction offer more structured and methodical ways to identify and fix errors. The choice of
approach depends on the nature of the problem and the available information.

Notes on the Induction Approach:

1. Locate the Pertinent Data:

 A common mistake is failing to consider all available data or symptoms.

 Enumerate known correct and incorrect program behaviors.

 Consider clues from similar but different test cases without symptoms.

2. Organize the Data:


 Induction progresses from specific to general.

 Structure data to observe patterns, with a focus on finding contradictions.

 Patterns may include conditions like "error occurs only with no outstanding balance."

3. Devise a Hypothesis:

 Study relationships among clues to devise one or more hypotheses about the error cause.

 Use visible patterns in the clues' structure.

 If unable to devise a theory, additional data may be needed from new test cases.

 If multiple theories seem possible, select the most probable one first.

4. Prove the Hypothesis:

 Prove the reasonableness of the hypothesis before proceeding.

 Avoid jumping to conclusions and attempting to fix the problem prematurely.

 Compare the hypothesis to original clues, ensuring it completely explains their existence.

 Failure to prove the hypothesis may result in fixing only a symptom or a portion of the
problem.

Notes on the Deduction Approach:

1. Enumerate Possible Causes or Hypotheses:

 Develop a list of all conceivable causes, even if incomplete.

 Theories are structures for analyzing available data.

2. Use Data to Eliminate Possible Causes:

 Analyze data, especially for contradictions, to eliminate causes.

 If all causes are eliminated, additional data (e.g., new test cases) are needed.

 If more than one cause remains, select the most probable one (prime hypothesis) first.

3. Refine the Remaining Hypothesis:

 The remaining cause might be correct but not specific enough.

 Use available clues to refine the theory, moving from general (e.g., "error in handling the last
transaction") to specific (e.g., "last transaction in the buffer is overlaid with the end-of-file
indicator").

Conclusion: Effective debugging involves a structured and systematic approach. The induction
approach focuses on gathering and analyzing data systematically, while the deduction approach
involves eliminating possible causes one by one until a single, validated cause remains. Both
approaches help in identifying and fixing errors accurately.

Notes on Debugging Tools:

1. Debugging Approaches and Tools:

 Debugging approaches benefit from supplementation with various tools.

 Tools include debugging compilers, dynamic debugging aids, automatic test case generators,
memory dumps, and cross-reference maps.

2. Role of Tools:

 Tools are not a substitute for careful evaluation based on a complete software design
document and clear source code.

 Human judgment and understanding remain integral to effective debugging.

3. Compiler as a Debugging Tool:

 The compiler serves as an effective tool for checking and diagnostics.

 It checks for syntax errors and certain classes of run-time errors.

 It provides detailed error messages, along with aids such as the attribute table, supporting the debugging process.

 Modern compilers come with error-detection features and emphasize meaningful error messages.
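
As a small illustration in Python terms, the language's built-in compile() surfaces syntax errors with precise locations before the program ever runs (the source snippet is invented):

    source = "def total(xs):\n    return sum(xs\n"   # missing ')'

    try:
        compile(source, "example.py", "exec")
    except SyntaxError as err:
        # the compiler pinpoints the file, line, and nature of the error
        print(f"{err.filename}:{err.lineno}: {err.msg}")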

8.7 Testing Tools:

1. Categories of Testing Tools:

 Two broad categories: static and dynamic testing tools.


 Exceptions include symbolic evaluation systems and mutation analysis systems, which run
interpretively.

2. Types of Testing Tools:

 Static Analyzers:

 Automatically examine programs systematically.

 Code inspectors ensure adherence to minimum quality standards.

 Standards enforcers impose rules on developers.

 Coverage analyzers measure how thoroughly the tests exercise the program's structure.

 Output comparators determine the appropriateness of program output.

 Test file/data generators set up test inputs.

 Test harnesses simplify test operations.

 Test archiving systems provide documentation about programs.

3. Improving Testing Quality:

 To enhance testing quality, tools should be concise, powerful, and natural for testers.

 The testing process should be as pleasant as possible for testers.

Conclusion: Effective debugging relies on a combination of approaches and tools. While tools such as
debugging compilers and testing tools contribute significantly, human judgment and understanding
remain crucial for successful software development and debugging. The integration of tools with
careful evaluation enhances the efficiency and accuracy of the debugging process.

Notes on Static Testing Tools:

1. Definition of Static Testing Tools:

 Static testing tools analyze programs without executing them.

2. Static Analyzers:

 Operate from a precomputed database of descriptive information derived from the program's source text.

 Aim to prove allegations, which are claims about analyzed programs demonstrated through
systematic examination.

 Examples include FACES, DAVE, RXVP, PLA, the checkout compiler, and LINT on PWB/Unix.

 Language-dependent and often system-dependent, requiring high initial tool investment costs.

 Typically report deficiencies at a rate of 0.1 to 0.2% of Non-Comment Source Statements (NCSS).
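
A toy static analyzer in the spirit of these tools, sketched in Python: it examines source text without executing it and reports one "allegation", here a bare except: clause (the analyzed snippet is invented):

    import ast

    SOURCE = """
def read_config(path):
    try:
        return open(path).read()
    except:              # swallows every error, including typos
        return None
"""

    # parse the source into a syntax tree without running it, then walk
    # the tree and report every bare exception handler found
    tree = ast.parse(SOURCE)
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            print(f"line {node.lineno}: bare 'except:' hides error causes")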

3. Code Inspectors:

 Enforce standards uniformly for many programs.

 Rules may apply to single statements or to groups of statements.

 Code inspector assistance programs link to an interactive environment to ensure thorough inspection.

 Examples include the AUDIT system, found in some COBOL tools like the AORIS librarian
system.

4. Standard Enforcers:

 Similar to code inspectors but generally enforce simpler rules.

 Examine single statements rather than whole programs.

 Focus on cosmetic standards that enhance program readability and indirectly indicate program quality.
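
A minimal standards-enforcer sketch, checking one cosmetic rule per source line; the rules and the 79-character limit are illustrative assumptions, not any published standard:

    MAX_LEN = 79

    def enforce(lines):
        for n, line in enumerate(lines, start=1):
            # rule 1: a single statement must fit the line-length standard
            if len(line.rstrip("\n")) > MAX_LEN:
                print(f"line {n}: exceeds {MAX_LEN} characters")
            # rule 2: no trailing whitespace before the newline
            if line.rstrip("\n") != line.rstrip():
                print(f"line {n}: trailing whitespace")

    enforce(["total = 0\n", "x = 1   \n", "y" * 100 + "\n"])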

5. Other Tools:

 Tools used to catch bugs indirectly through program listings that highlight mistakes.

 Example: Program generator used to produce proforma parts of each source module,
ensuring uniform program appearance.

 Example: Structured programming preprocessors that enhance program listings with automatic indentation, indexing features, and more.

Conclusion: Static testing tools play a crucial role in enhancing program quality by analyzing code
without execution. Static analyzers, code inspectors, standard enforcers, and related tools
contribute to identifying and preventing issues early in the development process, promoting code
readability and adherence to standards.

Notes on Dynamic Testing Tools:

1. Definition of Dynamic Testing Tools:

 Dynamic testing tools support the dynamic testing process by facilitating the execution of
tests and analyzing their results.

2. Test Execution:

 A test consists of a single invocation of the test object and all subsequent execution until the
test object returns control to the point of invocation.

 Subsidiary modules called by the test object can be real or simulated by testing stubs.

3. Functions of Test Support Tools:

 Input Setting:

 Selects test data that the test object reads during the test.

 Stub Processing:

 Handles outputs and selects inputs when a stub is called (a stub-processing sketch follows this list).

 Results Display:

 Provides the tester with the values produced by the test object for validation.

 Test Coverage Measurement:

 Determines the effectiveness of tests in terms of the program's structure.

 Test Planning:

 Assists the tester in planning efficient and effective tests for defect discovery.
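
A sketch of the stub-processing function above, assuming Python's unittest.mock: the real subsidiary module is replaced by a stub that handles the call and supplies a canned result to the test object (all names are illustrative):

    from unittest.mock import patch

    def fetch_rate(currency):                 # real subsidiary module,
        raise RuntimeError("network unavailable in the test environment")

    def convert(amount, currency):            # the test object
        return amount * fetch_rate(currency)

    # replace the subsidiary module with a stub for the duration of the test
    with patch(f"{__name__}.fetch_rate", return_value=1.25) as stub:
        assert convert(100, "EUR") == 125.0   # results display / validation
        stub.assert_called_once_with("EUR")   # the stub records its inputs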

4. Coverage Analyzers (Execution Verifiers):

 Common and important tools for testing.

 Often relatively simple, involving a declared minimum level of coverage (C1 coverage).

 C1 coverage is expressed as the percentage of elemental segments exercised during testing.

 C1 is measured by planting subroutine calls (software probes) along each program segment.

 Results are collected using a run-time system and reported to the user.
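
A minimal sketch of this probe-based C1 measurement, with invented segment identifiers:

    executed = set()

    def probe(segment_id):
        executed.add(segment_id)          # planted subroutine call

    def classify(n):
        probe("entry")
        if n < 0:
            probe("negative-branch")
            return "negative"
        probe("non-negative-branch")
        return "non-negative"

    ALL_SEGMENTS = {"entry", "negative-branch", "non-negative-branch"}

    classify(7)                           # one test, one path exercised
    c1 = 100 * len(executed) / len(ALL_SEGMENTS)
    print(f"C1 coverage: {c1:.0f}%")      # 67%: negative branch unexercised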

5. Output Comparators:

 Used in dynamic testing to check equivalence between predicted and actual outputs.

 Common in both single-module and multiple-module (system-level) testing, including regression testing.

 Objective is to identify differences between old and new output files from a program.

Conclusion: Dynamic testing tools play a crucial role in executing and analyzing tests during the
dynamic testing process. Coverage analyzers and output comparators are essential components,
ensuring that tests cover the required program segments and validating the equivalence of
predicted and actual outputs. These tools contribute to effective and thorough testing, identifying
discrepancies and facilitating quality assurance.

Notes on Dynamic Testing Tools (Continued):

6. Output Comparators (Continued):

 Operating systems for the more capable minicomputers often include built-in output comparators or file comparators.

 These tools help identify differences between old and new output files from a program.
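
A sketch of an output comparator using Python's difflib, assuming the old and new program outputs are available as lines of text (the file names and contents are invented):

    import difflib

    old_output = ["balance: 100\n", "status: ok\n"]
    new_output = ["balance: 100\n", "status: error\n"]

    # report only the lines that differ between the two runs, in the
    # familiar unified-diff format used for regression comparison
    for line in difflib.unified_diff(old_output, new_output,
                                     fromfile="run_old.txt",
                                     tofile="run_new.txt"):
        print(line, end="")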

7. Test File Generators:

 Create a file of information for program use based on user commands or data descriptions.

 Commonly used in COBOL environments to simulate transaction inputs in a database management situation.

 Can be adapted to other programming environments.


8. Test Data Generators:

 The test data generation problem is challenging, and there is no general solution.

 Practical need for methods to generate test data that meet specific objectives, such as
exercising previously unexercised program segments.

 Difficulties include the complexity of long paths, non-linear formula sets, and many illegal
paths.

 Variational test data generation techniques are often effective, derived from existing paths
near the intended segment.
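
A small sketch of the variational idea: start from an existing input whose path comes close to the unexercised segment, then perturb it until the target branch is taken (the function under test and the seed case are invented):

    import random

    def apply_discount(total, is_member):
        if is_member and total > 1000:    # target: rarely exercised branch
            return total * 0.9
        return total

    def hits_target(total, is_member):
        return is_member and total > 1000

    seed_case = (950, True)               # existing input that comes close
    total, is_member = seed_case
    rng = random.Random(42)
    while not hits_target(total, is_member):
        total += rng.randint(1, 50)       # small variation of the seed input
    print("generated test case:", (total, is_member))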

9. Test Harness Systems:

 A test harness system is link-edited and relocated around the test object.

 Permits easy modification and control of test inputs and outputs.

 Provides online measurement of C1 coverage values.

 Can be batch-oriented or fully interactive, with modern thinking favoring interactive systems.

 Acts as a focal point for installing various analysis support tools.
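
A minimal harness sketch illustrating input setting, invocation of the test object, and results display (the test object and cases are invented):

    def median(xs):                           # the test object
        s = sorted(xs)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

    cases = [
        ([3, 1, 2], 2),                       # (input, expected output)
        ([4, 1, 3, 2], 2.5),
        ([7], 7),
    ]

    passed = 0
    for inputs, expected in cases:
        actual = median(inputs)               # single invocation of test object
        ok = actual == expected
        passed += ok
        print(f"{inputs!r:20} -> {actual!r:8} expected {expected!r:8} "
              f"{'PASS' if ok else 'FAIL'}")
    print(f"{passed}/{len(cases)} tests passed")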

10. Test-Archiving Systems:

 Goal is to keep track of series of tests and serve as a basis for documenting test execution
and defect findings.

 Establishes procedures for handling test information files and documenting when tests were
run and their outcomes.

 Mainly developed on a system-specific/application-specific basis.

Conclusion: Dynamic testing tools continue to play a critical role in facilitating the testing process.
Output comparators, test file generators, test data generators, test harness systems, and test-
archiving systems contribute to efficient and effective testing, providing mechanisms for
comparison, data generation, input/output control, and documentation of test processes. These
tools collectively support comprehensive dynamic testing practices.
