ST Notes3
Static white box testing is a software testing technique where the internal structure, design,
and implementation of the software are analysed without executing the code. It is a static
analysis method, meaning it focuses on reviewing the source code, architecture, and
documentation rather than running the program.
Key characteristics:
1. No Execution of Code – The code is examined for errors, vulnerabilities, and logical
flaws without actually running it.
2. Access to Internal Code – Since it is a white-box approach, testers have complete
knowledge of the internal workings of the application.
3. Early Detection of Defects – It helps identify issues in the design, structure, and
logic of the code early in the development phase.
4. Automated or Manual – Can be performed manually through code reviews or
automatically using static analysis tools.
Common techniques:
1. Code Reviews – Peers or experts manually review the source code to identify defects.
2. Walkthroughs – Developers explain the code logic and flow to a group for feedback.
3. Static Code Analysis – Tools like SonarQube, Checkstyle, or Coverity scan the code
for issues like security vulnerabilities, unused variables, and syntax errors.
4. Data Flow Analysis – Examines how data moves through the application to detect
issues like uninitialized variables.
5. Control Flow Analysis – Analyses the logical flow of the program to find potential
bugs or unreachable code (see the sketch after this list).
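To make these techniques concrete, here is a small, hypothetical Python snippet containing the kinds of defects a static analysis tool or a careful reviewer would flag without ever running the code – a possibly-unassigned variable (a data flow issue) and unreachable code (a control flow issue):

def get_discount(price, is_member):
    # Data flow issue: 'discount' is assigned only on the if branch, so the
    # return statement can raise UnboundLocalError when is_member is False.
    if is_member:
        discount = price * 0.1
    return price - discount

def apply_discount(price):
    return price * 0.9
    print("Discount applied")  # Control flow issue: unreachable code after return

A static analyzer reports both problems from the source text alone; no test inputs or program execution are required.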
Phases of a formal review:
1. Planning: The review process generally begins with a 'request for review' from the
author to the moderator or inspection leader. Individual participants, according to their
role and understanding of the document, identify defects, questions, and comments. The
moderator also performs entry checks and defines the exit criteria.
2. Kick-Off: The main goal of this optional meeting is to get everybody on the same page
regarding the document under review. The entry results and exit criteria are also discussed,
and the team gains a better understanding of the relationship between the document under
review and other documents. During kick-off, the document under review, source
documents, and all other related documentation can be distributed.
3. Preparation: Participants work individually on the document under review with the
help of related documents, procedures, rules, and provided checklists. Reviewers identify
and check for defects, issues, and errors, and record their comments on a logging form,
where they are later combined. Spelling mistakes are recorded on the document under
review but not raised during the meeting.
4. Review Meeting: This phase generally involves three parts: logging, discussion, and
decision. The tasks related to the document under review are performed here.
5. Rework: The author improves the document under review based on the defects detected
and the improvements suggested in the review meeting. The document needs to be
reworked if the total number of defects found exceeds the expected level. Changes made
to the document must be easy to identify during follow-up, so the author needs to indicate
where changes were made.
6. Follow-Up: After rework, the moderator must ensure that satisfactory actions have been
taken on all logged defects, improvement suggestions, and change requests; in other words,
the moderator checks whether the author has taken care of all defects. In order to control,
handle, and optimize the review process, the moderator collects a number of measurements
at every step. Examples of measurements include the total number of defects found, the
number of defects found per page, and the overall review effort.
7. Individual Assessment: The stage prior to the official group meeting, during which each
reviewer conducts an independent examination of the artefacts.
8. Group Review Meeting: The collaborative stage in which the review panel discusses
the results, resolves conflicts, and makes decisions about the examined artefacts.
9. Finalization and Record-Keeping: Completing the formal review procedure, recording
the results, and preparing for any necessary follow-up measures.
10. Metrics and Ongoing Improvement: Tracking and analysing review metrics to
evaluate the success of the formal review process and to find opportunities for ongoing
improvement (a small sketch follows this list).
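As a minimal sketch of how such review metrics can be computed (the function and field names are illustrative, not from any standard), the measurements mentioned above combine like this:

def review_metrics(defects_found, pages_reviewed, effort_hours):
    # Defect density indicates how thoroughly a document was reviewed.
    defects_per_page = defects_found / pages_reviewed
    # Effort per defect helps compare the cost-effectiveness of reviews over time.
    effort_per_defect = effort_hours / defects_found if defects_found else 0
    return {"defects_per_page": defects_per_page,
            "effort_per_defect": effort_per_defect}

print(review_metrics(defects_found=24, pages_reviewed=12, effort_hours=6))
# {'defects_per_page': 2.0, 'effort_per_defect': 0.25}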
A code review checklist covers areas such as the following (a style example is shown after the list):
1. Code Functionality
Is the code following the project's style guide (e.g., Google Style Guide, PEP 8 for
Python)?
Is indentation, spacing, and formatting consistent?
Are function and class definitions structured properly?
5. Security Considerations
9. Dependency Management
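As an illustration of the style questions above, here is a hypothetical snippet before and after applying PEP 8 conventions:

# Before: inconsistent spacing, non-standard naming, two statements on one line
def CalcTotal( items ):
    total=0
    for i in items: total+=i
    return total

# After: PEP 8-compliant naming, spacing, and one statement per line
def calc_total(items):
    total = 0
    for item in items:
        total += item
    return total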
Dynamic White Box Testing is a testing approach where the internal structure, logic, and flow of a
software application are tested while it is running. It is used to analyze how the program behaves
under different conditions, ensuring correctness, performance, and security.
For example, a tester might run a simple divide function with both normal and error-causing inputs:

def divide(a, b):
    return a / b

try:
    print(divide(10, 2))  # Works fine, prints 5.0
    print(divide(10, 0))  # Causes an error
except ZeroDivisionError:
    print("Error: Cannot divide by zero!")
💡 What happens? The first call runs normally and prints 5.0. The second call raises a
ZeroDivisionError, which the except block catches and reports with the error message
instead of crashing the program.
Both Dynamic White Box Testing and Debugging involve running the code and analyzing its
behavior, but they serve different purposes.
Aspect | Dynamic White Box Testing | Debugging
Purpose | Finds errors before deployment | Fixes errors after they occur
Scope | Tests the entire application or module | Focuses on fixing specific issues
Dynamic white box testing is often automated with unit tests that exercise the code's internal logic, for example:

import unittest

def divide(a, b):
    return a / b

class TestMathOperations(unittest.TestCase):
    def test_division(self):
        self.assertEqual(divide(10, 2), 5)  # Expected output: 5
        self.assertRaises(ZeroDivisionError, divide, 10, 0)  # Dividing by zero should raise

unittest.main()
🔹 What happens?
The test automatically checks whether the function behaves correctly for the chosen inputs.
If a check fails, the test reports it and warns the developer before release.