Summary SWT301

The document provides an overview of software testing fundamentals, emphasizing its importance in ensuring quality, mitigating risks, and enhancing customer satisfaction. It outlines key testing principles, processes, and the psychological aspects of testing, along with ethical guidelines for testers. Additionally, it covers various testing techniques, including static and dynamic testing, and the roles involved in the review process.

Chapter 1: Fundamentals of testing

• What is testing?
Testing, in the context of software development, is the process of evaluating a software
application or system to identify defects, errors, or discrepancies between expected and
actual behavior. It involves executing the software under controlled conditions and
comparing the observed outcomes with predefined expectations to ensure that the
software functions correctly and meets specified requirements. Testing encompasses
various activities, including designing test cases, executing tests, analyzing results, and
reporting defects. The goal of testing is to improve the quality, reliability, and
performance of software products by identifying and addressing issues early in the
development process.
• Why is testing necessary?
1. Quality Assurance: Testing helps ensure the quality and reliability of software
products. By identifying and fixing defects early in the development process,
testing contributes to delivering software that meets user expectations and
performs as intended.
2. Risk Mitigation: Software defects can lead to financial losses, damage to
reputation, or even safety hazards. Testing helps mitigate these risks by
identifying and addressing issues before they impact users or businesses.
3. Customer Satisfaction: Thorough testing leads to higher customer satisfaction.
By delivering software that is free of critical defects and meets user needs, testing
helps maintain positive relationships with customers and users.
4. Compliance: Many industries have regulatory standards and compliance
requirements that software must meet. Testing ensures that software complies
with these standards, reducing legal and financial risks for businesses.
5. Continuous Improvement: Testing provides valuable feedback for improving the
software development process. By analyzing test results and identifying areas for
improvement, organizations can refine their processes and deliver better-quality
software over time.
• Testing principles?
1. Testing Shows Presence of Defects: Testing is primarily aimed at uncovering
defects in software. While successful tests may indicate the absence of defects in
specific scenarios, they can't guarantee the absence of defects across all possible
scenarios.
2. Exhaustive Testing is Impossible: It's practically impossible to test every
possible input combination and scenario due to resource constraints like time,
budget, and human effort. Instead, testing should be prioritized based on risk,
focusing on critical functionalities and high-risk areas.
3. Early Testing: Testing activities should begin as early as possible in the software
development lifecycle. Identifying and fixing defects early reduces the cost of
rectification and ensures that defects are addressed before they become more
challenging and costly to fix.
4. Defect Clustering: A small number of modules or functionalities typically
contain the majority of defects. Focusing testing efforts on these areas can yield
significant improvements in software quality.
5. Pesticide Paradox: Repeating the same tests over and over again may not reveal
new defects. Test cases should evolve over time to identify new defects and
address changes in the software under test.
6. Testing is Context-Dependent: Testing strategies, techniques, and priorities
should be adapted to the specific context of the project, including its objectives,
constraints, and risks.
7. Absence-of-Errors Fallacy: The absence of reported defects doesn't necessarily
mean the software is defect-free or ready for release. It's essential to consider
factors such as user expectations, requirements coverage, and system performance
to assess software readiness accurately.
• Fundamental test process?
1. Test Planning and Control:
o Define the objectives and scope of testing.
o Develop a test strategy and plan that outlines testing activities, resources,
schedules, and deliverables.
o Identify test priorities, risks, and dependencies.
o Establish criteria for test completion and exit criteria.
2. Test Analysis and Design:
o Review requirements, specifications, and other relevant documentation to
understand the software's functionality and behavior.
o Identify test conditions and scenarios based on requirements and risks.
o Develop test cases, test procedures, and test data that cover the identified
test conditions.
o Define test environments and infrastructure requirements.
3. Test Implementation and Execution:
o Set up test environments and configure test tools.
o Execute test cases according to the test plan and schedule.
o Record test results, including observed outcomes and any defects found.
o Compare actual results with expected results to determine test outcomes.
4. Test Evaluation and Reporting:
o Analyze test results to assess the software's quality and identify defects,
trends, and areas for improvement.
o Prioritize defects based on severity, impact, and risk.
o Report defects and issues using a defined defect management process,
including clear descriptions, steps to reproduce, and supporting evidence.
o Generate test reports summarizing testing activities, results, and metrics.
5. Test Closure:
o Review test objectives, criteria, and deliverables to determine if they have
been met.
o Conduct a post-mortem or lessons learned session to identify strengths,
weaknesses, and areas for improvement in the testing process.
o Prepare test closure reports and documentation, including final test results,
defect metrics, and recommendations for future testing efforts.
o Obtain approval from stakeholders to close the testing phase and proceed to
the next stage of the software development lifecycle.
• Psychology of testing?
1. Cognitive Processes: This involves understanding how testers process
information, perceive software behavior, and make decisions during testing.
Cognitive processes include perception, attention, memory, problem-solving,
decision-making, and learning. Testers rely on these cognitive abilities to analyze
requirements, design test cases, execute tests, and report defects effectively.
2. Heuristics and Mental Models: Testers often employ heuristics, which are
mental shortcuts or rules of thumb, to guide their testing efforts. These heuristics
are based on past experiences, domain knowledge, and mental models of the
software under test. For example, testers may use heuristic techniques like
equivalence partitioning or boundary value analysis to identify test cases
efficiently.
3. Biases and Fallacies: Testers are susceptible to various cognitive biases and
logical fallacies that can influence their testing decisions and interpretations of
results. Examples include confirmation bias (favoring information that confirms
preexisting beliefs), availability bias (relying on readily available information),
and the gambler's fallacy (expecting past events to influence future outcomes).
Understanding these biases can help testers mitigate their impact on testing
activities.
4. Creativity and Exploration: Testing often requires creativity and exploratory
skills to uncover defects that may not be apparent through scripted test cases.
Testers need to think outside the box, explore different paths through the software,
and consider alternative scenarios that may lead to unexpected behavior.
Encouraging a culture of exploration and experimentation can foster creativity in
testing.
5. Emotions and Stress Management: Testing can be a mentally demanding and
stressful activity, particularly when facing tight deadlines, complex software
systems, or high-pressure environments. Testers need to manage their emotions
effectively, stay focused, and maintain a positive mindset to perform testing tasks
efficiently. Techniques such as mindfulness, relaxation exercises, and stress
management strategies can help testers cope with stress and stay productive.
6. Communication and Collaboration: Effective communication skills are
essential for testers to convey testing results, discuss issues, and collaborate with
developers, stakeholders, and other team members. Testers must be able to
articulate their findings clearly, advocate for quality, and build strong working
relationships with colleagues. Good communication promotes transparency,
facilitates knowledge sharing, and enhances teamwork in software testing
projects.
• Code of Ethics?
A code of ethics in software testing outlines the principles and guidelines that testers
should adhere to in their professional practice. These ethical standards help ensure that
testing is conducted with integrity, honesty, and respect for stakeholders. While there
isn't a universally accepted code of ethics specifically tailored to software testing, many
testing professionals follow general ethical principles that apply to the broader field of
software development and engineering. Here are some common principles found in
codes of ethics relevant to testing:
1. Integrity: Testers should conduct testing activities honestly and ethically,
avoiding any actions that could compromise the integrity of the testing process or
the quality of the software being tested.
2. Professional Competence: Testers should strive to maintain high standards of
professional competence by continually updating their skills, staying informed
about developments in testing methodologies and technologies, and seeking
opportunities for professional development and learning.
3. Confidentiality: Testers should respect the confidentiality of sensitive
information obtained during testing, including proprietary software, trade secrets,
and confidential data. They should handle this information responsibly and only
disclose it to authorized parties as necessary.
4. Independence and Objectivity: Testers should maintain independence and
objectivity in their testing activities, avoiding conflicts of interest and bias that
could influence their judgment or testing results. They should base their
assessments on objective evidence and avoid favoritism or undue influence.
5. Respect for Stakeholders: Testers should treat all stakeholders with respect and
professionalism, including clients, users, colleagues, and other members of the
project team. They should listen to stakeholders' concerns, communicate
effectively, and strive to meet their needs and expectations.
6. Transparency and Accountability: Testers should be transparent about their
testing processes, methodologies, and findings, providing clear and accurate
information to stakeholders. They should take responsibility for their actions and
decisions, acknowledging and addressing any mistakes or shortcomings.
7. Quality and Safety: Testers should prioritize the quality, reliability, and safety of
software products, identifying and reporting defects and issues that could affect
usability, functionality, or security. They should advocate for rigorous testing
practices and standards to ensure that software meets high-quality standards.
Chapter 2: Lifecycle
1. V-Model Shows Test Levels, Early Test Design:
o Test Levels: The V-model visualizes the relationship between each
development phase and its corresponding testing phase, forming a "V"
shape. For instance, requirements analysis is paired with acceptance testing,
system design with system testing, and so on. This clear mapping ensures
that every phase has a corresponding validation process.
o Early Test Design: By designing tests early in the development cycle,
issues can be anticipated and addressed sooner. Early test design integrates
testing activities with requirements gathering and design phases, ensuring
that tests are ready as soon as components are developed. This reduces the
risk of discovering critical issues late in the development process.
2. High-Level Test Planning:
o Scope and Objectives: High-level test planning involves defining the
overall scope of testing, including what will be tested and the goals of
testing activities. Objectives might include ensuring compliance with
requirements, verifying performance standards, or validating security
measures.
o Resources and Schedule: Effective planning allocates the necessary
resources, such as personnel, tools, and environments, and establishes a
timeline for testing activities. This ensures that testing is adequately
supported and aligns with project milestones.
o Risk Management: High-level planning also involves identifying potential
risks and devising mitigation strategies. This proactive approach helps
manage uncertainties and ensures that the testing process remains robust
under various scenarios.
3. Component Testing Using the Standard:
o Isolation and Verification: Component testing, also known as unit testing,
isolates individual software components to verify their correctness.
Standards, such as coding guidelines and testing frameworks, ensure
consistency and reliability in how tests are conducted.
o Automation: Automated testing tools are often employed to execute unit
tests, providing quick feedback and facilitating regression testing. This
helps maintain code quality throughout development iterations.
4. Integration Testing in the Small (Strategies):
o Incremental Integration: Components are integrated and tested
incrementally, either one at a time or in small groups, following a
top-down or bottom-up strategy. This helps identify integration issues
early.
o Stubs and Drivers: These are used to simulate parts of the system that are
not yet developed. Stubs mimic the behavior of lower-level modules, while
drivers simulate higher-level modules, enabling testing of incomplete
systems.
5. System Testing (Non-Functional and Functional):
o Functional Testing: Ensures that the system's functions conform to
specified requirements. This includes testing features, operations, and user
interactions.
o Non-Functional Testing: Focuses on aspects such as performance,
scalability, security, usability, and reliability. This ensures that the system
performs well under various conditions and meets user expectations for
quality.
o End-to-End Scenarios: System testing often involves executing end-to-
end scenarios that simulate real-world usage, ensuring that the system
operates correctly in a fully integrated environment.
6. Integration Testing in the Large:
o System Interactions: Involves testing interactions between different
systems or subsystems, ensuring they work together seamlessly. This is
critical for systems that rely on external services or complex architectures.
o Interfaces and Protocols: Verifies that data is correctly exchanged
between systems using the appropriate protocols and interfaces. This
includes testing APIs, communication protocols, and data formats.
7. Acceptance Testing: User Responsibility:
o User Involvement: Acceptance testing is typically conducted by end-users
or clients to ensure that the software meets their needs and requirements.
This hands-on testing validates that the system is ready for production.
o Real-World Scenarios: Users test the software under real-world
conditions, identifying any discrepancies between the delivered product and
their expectations. This phase is critical for user satisfaction and approval.
8. Maintenance Testing to Preserve Quality:
o Regression Testing: Ensures that recent changes (bug fixes, updates,
enhancements) do not negatively impact existing functionality. Automated
regression tests are often used to quickly verify the system after each
change.
o Ongoing Quality Assurance: Maintenance testing is continuous, helping
to detect and correct defects that might arise over time. This ongoing
process is essential for maintaining the long-term reliability and
performance of the software.
o Impact Analysis: Evaluates the impact of changes on the system, ensuring
that all affected areas are tested. This prevents new issues from being
introduced and ensures comprehensive coverage.
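The automated regression tests mentioned in item 8 can be sketched with Python's built-in unittest framework. The apply_discount function and its expected values are hypothetical; the point is an isolated, repeatable check that reruns after every change.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test (hypothetical): reduce price by percent."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run with: python -m unittest <module name>
```

Because the suite is automated, it can be re-executed after every bug fix or enhancement, which is exactly the quick verification that maintenance testing relies on.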
Chapter 3: Static Techniques
Static Analysis
Static analysis is the examination of code without executing the
program. It analyzes the code's structure and syntax to detect errors, code
smells, and potential vulnerabilities.
Static Testing
Static testing is a type of software testing where the code is not executed. It includes
reviewing the documents and source code to find errors. This process can include
walkthroughs, inspections, and reviews.
Dynamic Testing
Dynamic testing involves executing the code and validating the software’s dynamic
behavior. It checks the functionality of the code during runtime.
Work Products Examined by Static Testing
Static testing examines various work products including:
 Requirements documents
 Design documents
 Source code
 Test plans and test cases
 User manuals
Benefits of Static Testing
Static testing offers several benefits:
 Early detection of defects, which can save time and cost
 Detection of inconsistencies, ambiguities, and incomplete requirements
 Improved quality of code and documentation
 Prevention of defects in the later stages of development
Costs of Reviews
The costs associated with reviews in static testing include:
 Time and resources spent by reviewers and moderators
 Training and preparation costs for conducting effective reviews
 Costs related to tools used for static analysis
Types of Defects
Static testing can identify various types of defects such as:
 Syntax errors
 Missing requirements
 Design flaws
 Logic errors
 Security vulnerabilities
 Coding standards violations
Objectives of Static Testing
The primary objectives of static testing are:
 Improve software quality
 Identify defects early in the development cycle
 Ensure the software meets the requirements
 Verify adherence to coding standards
 Provide feedback on the software development process
Review Process
The review process typically involves several steps:
1. Planning: Define the objectives, scope, and roles.
2. Preparation: Reviewers prepare by studying the work product.
3. Review Meeting: Discuss findings and potential issues.
4. Rework: Address and fix the identified defects.
5. Follow-up: Verify the corrections and improvements.
Roles & Responsibilities
Key roles in the review process include:
 Author: The creator of the work product under review.
 Moderator: Facilitates the review process.
 Reviewer: Examines the work product to identify defects.
 Scribe: Records the findings and discussions during the review meeting.
 Manager: Ensures the review process is carried out effectively.
Review Types
There are several types of reviews:
 Informal Review: An informal, unstructured review process.
 Walkthrough: A step-by-step presentation by the author to gather feedback.
 Technical Review: A formal process focusing on technical content.
 Inspection: A formal and rigorous review process with defined roles and metrics.
Review Techniques
Common review techniques include:
 Ad hoc: Unstructured and informal.
 Checklist-based: Using predefined lists to identify common defects.
 Scenario-based: Using specific scenarios to uncover issues.
 Role-based: Review from the perspective of different stakeholders.
 Perspective-based: Using different viewpoints to identify defects.
Success Factors for Reviews
Factors that contribute to successful reviews include:
 Clear objectives and scope
 Proper planning and preparation
 Effective communication and collaboration
 Use of appropriate review techniques
 Training and support for reviewers
 Management support and commitment
Static Analysis
Static analysis tools automatically analyze the code for potential defects without
executing it. These tools can identify syntax errors, potential bugs, code smells, and
security vulnerabilities.
Data Flow Analysis
Data flow analysis tracks the flow of data through the program to identify anomalies,
such as variables that are used before being initialized or variables that are never used
after being initialized.
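One class of anomaly named above, variables assigned but never read afterwards, can be sketched with a toy checker built on Python's ast module. This sketch ignores scopes, branches, and execution order, which real data flow analyzers track.

```python
import ast

def unused_assignments(source):
    """Toy data-flow check: report names that are assigned somewhere
    but never read anywhere in the given source (one anomaly class
    that data flow analysis detects without running the code)."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)   # name is written here
            else:
                used.add(node.id)       # name is read here
    return sorted(assigned - used)

SAMPLE = """
def f(x):
    temp = x * 2      # assigned but never used afterwards
    result = x + 1
    return result
"""
# unused_assignments(SAMPLE) reports ['temp']
```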
Control Flow Analysis
Control flow analysis examines the order in which the instructions of a program are
executed. It helps to identify unreachable code, infinite loops, and other control flow
anomalies.
Cyclomatic Complexity
Cyclomatic complexity is a metric used to measure the complexity of a program by
quantifying the number of linearly independent paths through the source code. It helps in
understanding the potential complexity and maintainability of the code.
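For a single-entry, single-exit function, cyclomatic complexity can be approximated as one plus the number of decision points, which agrees with the graph formula E - N + 2. A rough sketch of that counting, again using Python's ast module:

```python
import ast

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity as 1 + decision points
    (if/elif, loops, conditional expressions, boolean operators)."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds two extra decision outcomes
            complexity += len(node.values) - 1
    return complexity

SAMPLE = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    else:
        return "C"
"""
# Two decision points (if, elif) -> complexity 3,
# matching the three linearly independent paths.
```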
Static Metrics
Static metrics are measurements derived from static analysis of the software, such
as:
 Lines of code (LOC)
 Number of classes and methods
 Complexity metrics (e.g., cyclomatic complexity)
 Number of code comments
Limitations and Advantages
Advantages:
 Early detection of defects
 Improved software quality
 Cost-effective in the long run
 Provides a better understanding of the code
Limitations:
 Does not detect runtime errors
 Requires skilled personnel to interpret results
 May produce false positives and false negatives
 Cannot assess the performance of the software
Chapter 4: Test Techniques
1. Categories of Test Techniques
Static (non-execution)
Static techniques involve reviewing and analyzing the software artifacts without
executing the code. They include:
 Reviews: Formal or informal evaluation of documents and code.
 Static Analysis: Automated tools analyze code for potential errors and adherence
to coding standards.
 Walkthroughs: Step-by-step presentation of a document by the author to gather
feedback.
 Inspections: Formal examination of work products to identify defects.
Behavioural (Black Box)
Black-box testing techniques focus on testing the software without any knowledge of the
internal workings. They examine the functionality based on the requirements and
specifications.
 Equivalence Partitioning: Divides input data into equivalent partitions to reduce
the number of test cases.
 Boundary Value Analysis: Focuses on testing the boundaries between partitions.
 Decision Table Testing: Uses decision tables to represent combinations of inputs
and corresponding outputs.
 State Transition Testing: Examines the behavior of the system for different states
and transitions between them.
Structural (White Box)
White-box testing techniques involve testing the internal structure and workings of the
software. They require knowledge of the code.
 Statement Coverage: Ensures every statement in the code is executed at least
once.
 Decision Coverage: Ensures every decision point (e.g., if-else) is tested for both
true and false outcomes.
 Branch Coverage: Similar to decision coverage but focuses on all possible
branches of the code.
 Paths Through Code: Examines all possible paths through the code to ensure
thorough testing.
2. Black-box Test Techniques
Equivalence Partitioning
 Divides input data into partitions where each partition represents a set of valid or
invalid inputs.
 Reduces the number of test cases while maintaining coverage.
 Example: Testing a range of input values by selecting one representative from
each partition.
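Equivalence partitioning can be made concrete with a small sketch; the is_eligible_age rule (ages 18 to 65 accepted) is a hypothetical system under test.

```python
def is_eligible_age(age):
    """System under test (hypothetical rule): ages 18-65 are accepted."""
    return 18 <= age <= 65

# The input domain splits into three equivalence partitions;
# one representative value per partition is enough.
partitions = {
    "below valid range (invalid)": 10,   # representative of age < 18
    "within valid range (valid)":  30,   # representative of 18..65
    "above valid range (invalid)": 80,   # representative of age > 65
}
expected = {
    "below valid range (invalid)": False,
    "within valid range (valid)":  True,
    "above valid range (invalid)": False,
}

results = {name: is_eligible_age(value) for name, value in partitions.items()}
```

Three test cases cover the whole input domain at the partition level, instead of one test per possible age.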
Boundary Value Analysis
 Focuses on values at the edges of partitions.
 Often finds defects at the boundaries of input ranges.
 Example: Testing minimum and maximum values just inside and just outside of
acceptable ranges.
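A sketch of two-value boundary value analysis, assuming a hypothetical input rule that order quantities 1 to 100 are accepted: each boundary is tested together with the value just outside it.

```python
def accept_quantity(qty):
    """System under test (hypothetical rule): quantities 1-100 accepted."""
    return 1 <= qty <= 100

# Two-value boundary value analysis: each boundary plus the
# adjacent value just outside the valid range.
boundary_cases = [
    (0,   False),  # just below the lower boundary
    (1,   True),   # lower boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
]
results = [accept_quantity(q) == expected for q, expected in boundary_cases]
```

Off-by-one mistakes (writing < instead of <=, for example) are caught exactly by these edge values, which is why boundary testing finds defects that mid-range values miss.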
Decision Table Testing
 Uses tables to represent complex combinations of conditions and actions.
 Helps ensure all possible combinations of inputs are tested.
 Example: Representing business rules that have multiple conditions affecting the
outcome.
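A decision table for a hypothetical shipping rule (members or orders of 50 or more ship free) shows how every condition combination becomes a test case:

```python
def free_shipping(is_member, order_total):
    """Hypothetical business rule: members or orders of 50+ ship free."""
    return is_member or order_total >= 50

# Decision table: one rule (column) per combination of conditions.
#   rule, member, order_total, expected action (free shipping?)
decision_table = [
    ("R1", True,  60, True),
    ("R2", True,  20, True),
    ("R3", False, 60, True),
    ("R4", False, 20, False),
]
outcomes = {rule: free_shipping(m, t) for rule, m, t, _ in decision_table}
```

With two binary conditions there are four rules; the table guarantees none of the combinations is forgotten.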
State Transition Testing
 Tests the software's response to different events in various states.
 Useful for systems with defined states and transitions.
 Example: Testing a login system with states like "Logged Out", "Logging In", and
"Logged In".
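The login example above can be sketched as a transition table mapping (state, event) pairs to next states; event names here are hypothetical.

```python
# State transition model for the login example: valid transitions
# are listed explicitly; anything else is an invalid event.
TRANSITIONS = {
    ("Logged Out", "submit_credentials"): "Logging In",
    ("Logging In", "auth_success"):       "Logged In",
    ("Logging In", "auth_failure"):       "Logged Out",
    ("Logged In",  "logout"):             "Logged Out",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid event {event!r} in state {state!r}")

# One valid path through the model, as a state transition test would drive it:
path = ["Logged Out"]
for event in ["submit_credentials", "auth_success", "logout"]:
    path.append(next_state(path[-1], event))
```

Tests then cover both valid paths (every transition taken at least once) and invalid events in each state, such as "logout" while "Logged Out".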
3. White-box Test Techniques
Statement Coverage
 Measures the percentage of executable statements that have been tested.
 Ensures all code statements are executed at least once during testing.
Decision Coverage
 Measures the percentage of decision points (e.g., if statements) that have been
evaluated to both true and false.
 Ensures all decision outcomes are tested.
Structural Coverage
 Measures the extent to which the internal structure of the code is tested.
 Includes various metrics like statement, branch, and path coverage.
Branch Coverage
 Ensures every possible branch (path) from each decision point is executed.
 Helps identify untested paths in the code.
Paths Through Code
 Involves identifying and testing all possible paths through the code.
 Ensures maximum code coverage and detection of complex bugs.
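The coverage measures above can be illustrated with a small hypothetical function containing two decision points. Branch coverage requires each decision to take both outcomes at least once; here two inputs suffice.

```python
def classify(n):
    """Function under test: two decisions -> four branch outcomes."""
    if n < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    parity = "even" if n % 2 == 0 else "odd"
    return f"{sign} {parity}"

# Branch coverage with two inputs:
#   n = -3 -> (n < 0) True,  parity decision takes the "odd" branch
#   n =  4 -> (n < 0) False, parity decision takes the "even" branch
# Both outcomes of both decisions are now exercised (100% branch
# coverage), even though not all four sign/parity combinations appear.
covered = {classify(-3), classify(4)}
```

Path coverage is stricter: it would demand all four sign/parity combinations, showing why full path coverage grows much faster than branch coverage.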
4. Experience-based Test Techniques
Experience-based Techniques
 Rely on the tester's experience and intuition to identify potential defects.
 Useful when there is limited documentation or time.
Error Guessing
 Testers use their experience to guess the most likely areas where defects might be
found.
 Often involves thinking about common mistakes developers make.
Exploratory Testing
 Simultaneous learning, test design, and test execution.
 Testers explore the software and create tests on the fly based on their findings.
5. Choosing Test Techniques
Formality
 The level of formality required in the testing process can influence the choice of
test techniques.
 More formal techniques may be needed for regulatory compliance or critical
systems.
Type of Component / System
 Different components or systems might require different test techniques.
 For example, a database might require specific query testing techniques.
Component / System Complexity
 Highly complex systems may need more thorough and detailed testing techniques.
 Structural techniques are often used for complex algorithms and critical
components.
Regulatory Standards
 Compliance with industry standards may dictate specific test techniques.
 Ensures the software meets regulatory requirements.
Customer or Contractual Requirements
 Customer or contractual obligations may specify particular test techniques.
 Ensures all agreed-upon testing is performed.
Risk Levels and Risk Types
 High-risk areas of the software might require more rigorous testing techniques.
 Risk-based testing helps prioritize testing efforts.
Test Objectives
 The overall objectives of the testing effort will influence the choice of techniques.
 For example, performance testing may require specific load testing techniques.
Available Documentation
 The quality and availability of documentation can affect the choice of techniques.
 Lack of documentation may necessitate more exploratory or experience-based
testing.
Tester Knowledge & Skills
 The skills and experience of the testing team can influence technique selection.
 Some techniques may require specialized knowledge or training.
Available Tools
 The availability of tools can impact the choice of techniques.
 Automated tools can facilitate certain types of testing, such as static analysis or
performance testing.
Time & Budget
 Time and budget constraints can affect the depth and breadth of testing.
 Efficient techniques like equivalence partitioning can maximize coverage within
constraints.
SDLC Model
 The software development life cycle model in use can influence testing
techniques.
 Agile development may favor exploratory testing, while Waterfall may use more
formal techniques.
Expected Use of the Software
 Understanding how the software will be used can guide the choice of test
techniques.
 User acceptance testing may focus on real-world usage scenarios.
Previous Experience with Using the Test Techniques
 Previous success or failure with specific techniques can influence their selection.
 Leveraging proven techniques can improve testing effectiveness.
Expected Types of Defects
 Anticipating the types of defects that may be present can guide technique
selection.
 Certain techniques may be better suited for detecting specific types of defects,
such as boundary value analysis for input validation errors.
Chapter 05: Test Management
Test Organisation
Independent Testing:
 Testing conducted by an individual or team that is not involved in the product's
development. This separation helps maintain objectivity.
Degree of Independence in Testing:
 Levels range from low (developers testing their own code) to high (external
organizations conducting tests). Higher independence usually means less bias but
can introduce challenges such as longer feedback cycles and communication
barriers.
Tester(s) in Development Team
Pros:
 Immediate feedback.
 Better understanding of the code.
 Enhanced communication with developers.
Cons:
 Potential for bias.
 Conflicts of interest.
 Less objective testing.
Tester(s) outside Development Team
Pros:
 Higher objectivity.
 Unbiased defect identification.
 Independent verification and validation.
Cons:
 Possible communication issues.
 Less familiarity with the codebase.
 Longer feedback loops.
Internal Specialised Testers / Test Consultants
Pros:
 Expertise and specialized skills.
 Focused testing strategies.
 In-depth knowledge of testing methodologies.
Cons:
 Higher costs.
 Possible integration issues with internal teams.
 Dependence on specific individuals.
Outside Organisation (3rd Party)
Pros:
 High level of independence.
 Professional expertise.
 Rigorous and standardized testing processes.
Cons:
 High costs.
 Onboarding time.
 Potential misalignment with internal processes.
Pros & Cons of Independence
Pros:
 Unbiased testing.
 Higher quality defect detection.
 Objective assessment of the product.
Cons:
 Potential communication barriers.
 Increased cost.
 Longer feedback cycles.
Test Manager Tasks
 Define test policies and strategies.
 Plan, monitor, and control the test process.
 Manage test resources and environments.
 Handle risk management related to testing.
 Communicate with stakeholders and manage expectations.
Tester Tasks
 Design and execute test cases.
 Report and track defects.
 Prepare test documentation and reports.
 Maintain test scripts and datasets.
 Collaborate with developers for issue resolution.
Test Planning & Estimation
Test Planning Activities
 Define the scope of testing.
 Identify resources, schedules, and milestones.
 Set up test environments.
 Determine test objectives and deliverables.
Test Strategy & Test Approach
Test Strategy Analytical:
 Based on analysis of requirements and design documents.
Test Strategy Model-based:
 Uses models to represent the desired behavior of the system.
Test Strategy Methodical:
 Follows systematic approaches like checklists and predefined techniques.
Test Strategy Process- / Standard-Compliant:
 Adheres to specific standards and processes.
Test Strategy Directed (Consultative):
 Involves consulting with stakeholders to direct testing efforts.
Test Strategy Regression-averse:
 Aims to avoid regression through extensive reuse of existing testware and test automation.
Test Strategy Reactive (Dynamic):
 Adapts based on observed behavior and changes during testing.
Test Approach Entry Criteria and Exit Criteria
Entry Criteria:
 Preconditions that must be met before testing begins (e.g., test environment setup,
code freeze).
Exit Criteria:
 Conditions that must be met to conclude testing (e.g., no critical defects, all test
cases executed).
Test Execution Schedule
 Timeline for executing test cases.
 Coordination with development cycles and release schedules.
Factors Influencing Test Effort
Product Characteristics:
 Complexity, size, and technology stack.
Development Process Characteristics:
 Methodology, tools used, and release frequency.
People Characteristics:
 Skills, experience, and team dynamics.
Test Results:
 Previous testing outcomes and defect rates.
Test Estimation Techniques
Metrics-based:
 Uses historical data and metrics to estimate effort.
Expert-based:
 Relies on the judgment and experience of experts.
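The metrics-based technique can be illustrated with a simple calculation: derive a productivity figure from a past project and apply it to the new one. All numbers below are made up for illustration:

```python
# Metrics-based estimation sketch: effort per test case from historical
# data, applied to a new project's expected test-case count.

historical_cases = 400          # test cases designed and executed previously
historical_effort_hours = 800   # effort that project actually consumed

hours_per_case = historical_effort_hours / historical_cases  # 2.0 h/case

new_project_cases = 250
estimated_effort = new_project_cases * hours_per_case
print(estimated_effort)  # 500.0 (hours)
```

An expert-based estimate would instead aggregate judgments from experienced testers (e.g., Wideband Delphi); in practice the two techniques are often combined as a cross-check.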
Test Monitoring & Control
Test Control Activities
 Track progress against the plan.
 Manage changes to the test plan.
 Communicate status and issues to stakeholders.
Metrics used in Testing
 Defect density.
 Test coverage.
 Test execution progress.
 Pass/fail rates.
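These metrics are all simple ratios over test and defect data. A minimal sketch with illustrative numbers:

```python
# Sketch: computing common test metrics. All input values are invented.

defects_found = 30
size_kloc = 12                 # product size in thousand lines of code
covered_requirements = 45
total_requirements = 50
passed, failed = 180, 20       # executed test-case results

defect_density = defects_found / size_kloc             # 2.5 defects/KLOC
test_coverage = covered_requirements / total_requirements  # 0.9
pass_rate = passed / (passed + failed)                 # 0.9

print(f"{defect_density:.1f} defects/KLOC, "
      f"{test_coverage:.0%} coverage, {pass_rate:.0%} pass rate")
```

Note the denominators: defect density is normalized by product size, while coverage and pass rate are normalized by the planned requirements and executed test cases respectively.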
Test Reports
 Provide insights into test progress, product quality, and outstanding risks.
Configuration Management
 Manage changes to test artifacts.
 Version control of test cases, scripts, and data.
 Ensure consistency across test environments.
Risk & Testing
Product (Quality) Risks
 Risks that could impact the product's quality (e.g., critical functionalities failing).
Project Risks
 Risks that could affect the project timeline, budget, or scope (e.g., resource
constraints).
Risk-based Testing & Product Quality
 Prioritize testing efforts based on the risk level to ensure the most critical areas are
tested first.
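A common way to operationalize this is to score each product area as likelihood × impact and test in descending risk order. The areas and scores below are invented for illustration:

```python
# Risk-based prioritization sketch: risk = likelihood x impact,
# then test the highest-risk areas first. Data is illustrative.

areas = [
    {"name": "payment", "likelihood": 4, "impact": 5},
    {"name": "login",   "likelihood": 3, "impact": 4},
    {"name": "reports", "likelihood": 2, "impact": 2},
]

for area in areas:
    area["risk"] = area["likelihood"] * area["impact"]

prioritized = sorted(areas, key=lambda a: a["risk"], reverse=True)
print([a["name"] for a in prioritized])  # ['payment', 'login', 'reports']
```

The ordering then drives the test execution schedule, so that if testing is cut short, the highest product risks have already been covered.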
Defect Management
Defect Report Objectives
 Document and track defects from discovery to resolution.
Defect Report Components
Severity versus Priority:
 Severity: the technical impact of the defect on the system (e.g., crash vs. cosmetic flaw).
 Priority: the business urgency with which the defect should be fixed.
Steps to reproduce:
 Detailed steps required to reproduce the defect.
Expected & Actual Result:
 Expected Result: Intended outcome as per requirements.
 Actual Result: Outcome observed during testing.
Screenshot:
 Visual proof of the defect, aiding in easier diagnosis and resolution.
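The components above map naturally onto a record structure. A minimal sketch (the class and field names are illustrative, not from any specific defect tracker):

```python
# Sketch of a defect report as a data structure, mirroring the
# components listed above. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    defect_id: str
    summary: str
    severity: str                       # impact on the system, e.g. Critical/Major/Minor
    priority: str                       # urgency of fixing, e.g. High/Medium/Low
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    screenshot: str = ""                # path or URL to visual proof

bug = DefectReport(
    defect_id="BUG-101",
    summary="Checkout total ignores discount code",
    severity="Major",
    priority="High",
    steps_to_reproduce=["Add item to cart", "Apply code SAVE10", "Open checkout"],
    expected_result="Total reduced by 10%",
    actual_result="Total unchanged",
)
print(bug.severity, bug.priority)  # Major High
```

Keeping expected and actual results as separate fields makes the discrepancy explicit, which is exactly what a developer needs to reproduce and diagnose the defect.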
Chapter 06: Tool support for testing
1. Test Tool Considerations
Test Tool Classification
Tool Classification:
 Management of Testing & Testware: Tools that assist in managing test activities
and test artifacts, such as test management tools, configuration management tools,
and defect tracking tools.
 Static Testing: Tools that support static testing processes, including code review,
static analysis, and model-based testing.
 Test Design & Specification: Tools that facilitate the design and specification of
test cases, such as test case management tools, requirement management tools,
and modeling tools.
 Performance & Dynamic Analysis: Tools that help in performance testing and
dynamic analysis, including load testing tools, stress testing tools, and profiling
tools.
 Specialised Needs: Tools designed for specific testing requirements, such as
security testing tools, usability testing tools, and accessibility testing tools.
Benefits & Risks of Test Automation
Benefits:
 Increased efficiency and speed of test execution.
 Higher test coverage and consistency.
 Reusability of test scripts.
 Enhanced accuracy and precision in testing.
Risks:
 High initial cost and resource investment.
 Maintenance overhead for automated test scripts.
 Possible false sense of security.
 Need for skilled personnel to develop and maintain automation scripts.
Execution & Management Tools Considerations
Test Execution:
 Capture/Replay: Tools that record user interactions with the application and
replay them to execute tests. Pros: Easy to use, quick setup. Cons: Maintenance
can be high, brittle scripts.
 Data-driven: Tools that execute the same test script with multiple sets of data
inputs. Pros: Efficient, reusable scripts. Cons: Requires proper data management.
 Keyword-driven: Tools that use keywords to represent actions to be performed,
separating test logic from the actual script. Pros: Easy to maintain, reusable
keywords. Cons: Initial setup can be complex.
 Model-based: Tools that generate test cases based on models of system behavior.
Pros: Comprehensive test coverage, automated test case generation. Cons:
Requires accurate models, can be complex to implement.
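The data-driven approach above can be sketched in a few lines: one test procedure is executed against many data rows, as if they were read from a CSV or spreadsheet. The function under test (`is_valid_password`) is a stand-in example:

```python
# Data-driven testing sketch: same test logic, multiple data sets.
# The validation rule and data rows are invented for illustration.

def is_valid_password(pw: str) -> bool:
    """Stand-in system under test: >= 8 chars and at least one digit."""
    return len(pw) >= 8 and any(c.isdigit() for c in pw)

test_data = [                 # (input, expected) rows
    ("abc",        False),    # too short
    ("longenough", False),    # long enough but no digit
    ("s3cretpass", True),
]

for pw, expected in test_data:
    actual = is_valid_password(pw)
    assert actual == expected, f"{pw!r}: expected {expected}, got {actual}"
print("all data rows passed")
```

Adding a new test condition then means adding a data row, not writing a new script, which is the main efficiency gain of the technique. A keyword-driven tool generalizes this further by also putting the *actions* (not just the data) into the table.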
Test Management:
 Tools that help plan, organize, and control the testing process, including test
planning, test scheduling, and resource management tools.
2. Effective Use of Tools
Principles for Tool Selection
 Needs Assessment: Understand and document the specific needs and
requirements of the organization.
 Cost-Benefit Analysis: Evaluate the potential return on investment (ROI) and
total cost of ownership (TCO) of the tools.
 Compatibility and Integration: Ensure the tool can integrate with existing
processes, tools, and environments.
 User Skill Level: Consider the skill level of the team members who will be using
the tool.
 Vendor Support and Community: Evaluate the level of support and resources
available from the vendor and the user community.
Pilot Project
 Objective: Validate the tool in a controlled environment before full-scale
implementation.
 Scope: Define a limited scope that represents typical use cases and workflows.
 Evaluation Criteria: Establish clear criteria for success, including performance,
ease of use, and impact on productivity.
 Feedback and Adjustment: Gather feedback from users, make necessary
adjustments, and refine tool usage guidelines.
Success Factors for Tools
 Management Support: Secure commitment and support from management to
provide necessary resources and backing.
 Clear Objectives: Define clear and achievable objectives for tool implementation
and usage.
 User Training: Provide comprehensive training and resources to users to ensure
they can effectively use the tool.
 Process Integration: Ensure the tool is integrated into existing processes and
workflows seamlessly.
 Continuous Improvement: Regularly review and improve tool usage and
processes based on feedback and performance metrics.