Manual Testing
Testing can trigger failures that are caused by defects in the software
(dynamic testing) or can directly find defects in the test object (static
testing).
When dynamic testing triggers a failure, debugging is concerned with
finding causes of this failure (defects), analyzing these causes,
and eliminating them.
The typical debugging process in this case involves:
• Reproduction of a failure
• Diagnosis (finding the root cause)
• Fixing the cause
Why is Testing Necessary?
Test Process
The test process consists of the following main groups of activities:
• Test planning
• Test monitoring and control
• Test analysis – what to test
• Test design – how to test
• Test implementation
• Test execution
• Test completion
The way the testing is carried out will depend on a number of contextual factors, including:
• Stakeholders (needs, expectations, requirements, willingness to cooperate, etc.)
• Team members (skills, knowledge, level of experience, availability, training needs, etc.)
• Business domain (criticality of the test object, identified risks, market needs, specific legal regulations, etc.)
• Technical factors (type of software, product architecture, technology used, etc.)
• Project constraints (scope, time, budget, resources, etc.)
• Organizational factors (organizational structure, existing policies, practices used, etc.)
• Software development lifecycle (engineering practices, development methods, etc.)
• Tools (availability, usability, compliance, etc.)
Testware is created as the output work products of the test activities.
• Test planning work products include: test plan, test schedule, risk register, and entry and exit criteria. A risk register is a list of risks together with risk likelihood, risk impact, and information about risk mitigation.
• Test monitoring and control work products include: test progress reports, documentation of control directives, and risk information.
• Test analysis work products include: (prioritized) test conditions and defect reports regarding defects in the test basis (if not fixed directly).
• Test design work products include: (prioritized) test cases, test charters, coverage items, test data requirements, and test environment requirements.
• Test implementation work products include: test procedures, automated test scripts, test suites, test data, test execution schedule, and test environment elements. Examples of test environment elements include stubs, drivers, simulators, and service virtualizations (see the sketch after this list).
• Test execution work products include: test logs and defect reports.
• Test completion work products include: test completion report, action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests (e.g., as product backlog items).
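To make the test environment elements above concrete, here is a minimal sketch in Python of a hand-written stub: a component under test is exercised in isolation by substituting its dependency with a stub that returns canned answers. All names (PaymentGatewayStub, OrderService) are hypothetical; drivers, simulators, and service virtualizations play a similar substituting role.

class PaymentGatewayStub:
    """Test environment element: stands in for the real payment gateway
    and always approves the charge with a fixed transaction id."""
    def charge(self, amount):
        return {"status": "approved", "transaction_id": "stub-0001"}


class OrderService:
    """Component under test; depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"


def test_place_order_with_stubbed_gateway():
    # The stub lets the component be tested without the external service.
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(42.0) is True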
Traceability between the Test Basis and Testware
Accurate traceability supports coverage evaluation.
The coverage criteria can function as key performance indicators to drive the activities that show to what extent the test objectives have been achieved.
Good traceability also makes test progress and completion reports more easily understandable by including the status of test basis elements.
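As a rough illustration of how traceability between test basis elements and testware can feed such a coverage figure, the Python sketch below maps requirement ids to the test cases derived from them and reports which requirements are not yet covered; all ids are made up for the example.

# Traceability matrix: test basis elements (requirements) -> derived test cases.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no test case derived yet, i.e., not covered
}

covered = [req for req, tests in traceability.items() if tests]
coverage = 100 * len(covered) / len(traceability)

print(f"Test basis coverage: {len(covered)}/{len(traceability)} ({coverage:.0f}%)")
for req, tests in traceability.items():
    status = "covered by " + ", ".join(tests) if tests else "NOT COVERED"
    print(f"{req}: {status}")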
Generic Skills Required for Testing
The results should be recorded and are normally part of the test completion report.
Test Levels
Component testing (also known as unit testing)
focuses on testing components in isolation (a minimal sketch follows this list)
Component integration testing (also known as unit integration testing)
focuses on testing the interfaces and interactions between components
System testing
focuses on the overall behavior and capabilities of an entire system or
product, often including functional testing of end-to-end tasks and the
non-functional testing of quality characteristics
System integration testing
focuses on testing the interfaces of the system under test and other
systems, and external services.
Acceptance testing
focuses on validation and on demonstrating readiness for deployment,
which means that the system fulfills the user’s business needs.
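Component testing, as noted at the start of this list, can be illustrated with a small unit test. A minimal sketch, assuming pytest (or any runner that collects functions named test_*) and a hypothetical is_leap_year function as the test object:

def is_leap_year(year: int) -> bool:
    """Test object: Gregorian leap-year rule, tested here in isolation."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


def test_year_divisible_by_four_is_leap():
    assert is_leap_year(2024) is True


def test_century_not_divisible_by_400_is_not_leap():
    assert is_leap_year(1900) is False


def test_year_divisible_by_400_is_leap():
    assert is_leap_year(2000) is True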
Test Types
Functional testing evaluates the functions that a component or
system should perform
Review Process
• Planning
• Review initiation
• Individual review
• Communication and analysis
• Fixing and reporting
Roles and Responsibilities in Reviews
• Manager – decides what is to be reviewed and provides resources, such as staff and time for the review
• Author – creates and fixes the work product under review
• Moderator (also known as the facilitator) – ensures the effective running of review meetings, including mediation, time management, and a safe review environment in which everyone can speak freely
• Scribe (also known as recorder) – collates anomalies from reviewers and records review information, such as decisions and new anomalies found during the review meeting
• Reviewer – performs reviews. A reviewer may be someone working on the project, a subject matter expert, or any other stakeholder
• Review leader – takes overall responsibility for the review, such as deciding who will be involved, and organizing when and where the review will take place
Review Types
Informal review
Walkthrough
Technical Review
Inspection
Black-Box Test Techniques
Equivalence Partitioning
Boundary Value Analysis
Decision Table Testing (see the sketch after this list)
State Transition Testing
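Decision table testing, listed above, can be sketched as follows: each rule of the table combines the relevant conditions with the expected action, and each rule becomes one test case. A minimal sketch assuming pytest; the discount rule and all names are hypothetical.

import pytest


def discount(is_member: bool, total: float) -> int:
    """Hypothetical test object implementing the business rule."""
    if is_member and total > 100:
        return 15
    if is_member:
        return 10
    if total > 100:
        return 5
    return 0


# Decision table: (is_member, sample total for "total > 100", expected discount)
DECISION_TABLE = [
    (True,  150.0, 15),   # rule 1: member, large order
    (True,   50.0, 10),   # rule 2: member, small order
    (False, 150.0,  5),   # rule 3: non-member, large order
    (False,  50.0,  0),   # rule 4: non-member, small order
]


@pytest.mark.parametrize("is_member,total,expected", DECISION_TABLE)
def test_one_test_case_per_rule(is_member, total, expected):
    assert discount(is_member, total) == expected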
Equivalence Partitioning
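A minimal sketch of equivalence partitioning, assuming pytest and a hypothetical rule that ages 18 to 65 are accepted: the input domain is divided into partitions whose members are expected to be treated the same way, and one representative value is tested per partition.

import pytest


def is_valid_age(age: int) -> bool:
    """Hypothetical test object."""
    return 18 <= age <= 65


# One representative value per equivalence partition.
PARTITIONS = [
    (10, False),   # "too young" partition (age < 18)
    (40, True),    # "valid age" partition (18 <= age <= 65)
    (80, False),   # "too old" partition (age > 65)
]


@pytest.mark.parametrize("age,expected", PARTITIONS)
def test_one_representative_per_partition(age, expected):
    assert is_valid_age(age) == expected

Boundary value analysis would add tests at the partition edges (17, 18, 65, 66), where defects tend to cluster.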
White-Box Test Techniques
Statement testing
Branch testing
Statement Testing
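A minimal sketch of statement testing, assuming pytest and a hypothetical absolute function: a single test with a negative input executes every statement (100% statement coverage), yet the outcome where the condition is false is never exercised, which is where branch testing would demand an additional test case.

def absolute(x: int) -> int:
    if x < 0:        # decision
        x = -x       # only executed when the condition is true
    return x


def test_negative_input_exercises_all_statements():
    # Executes the if, the assignment, and the return: 100% statement coverage.
    assert absolute(-3) == 3

# Branch testing would additionally require something like absolute(3) == 3
# so that the branch where the condition is false is also exercised.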
Experience-based Test Techniques
Error Guessing
Exploratory Testing
Checklist-Based Testing
Acceptance Criteria
Entry criteria and exit criteria should be defined for each test level,
and will differ based on the test objectives.
To be continued...