Draft For TestPlan
1. What is testing?
Every one of us uses software in one way or another:
Phones, medical machinery, cars, airplanes, business apps.
Unfortunately, a lot of software causes failures after release and does not meet the
stakeholders' needs.
https://www.udemy.com/course/istqb-certified-tester-foundation-level-ctfl/learn/lecture/15371836#overview
Dynamic testing:
• It aims to find and fix as many defects as possible before the software release
• It involves the execution of the component or system being tested with some test data
• It requires the software code to be implemented and running.
A better strategy is to prevent defects from entering the software using static testing:
• Does NOT involve the execution of the component or system being tested
• Involves techniques such as reviews of documents (e.g. requirements, design documents,
source code and user stories)
• Helps prevent defects from being introduced into the code
Both dynamic and static testing can be used as means of achieving similar objectives.
Either technique will provide information that can be used to improve both the system being
tested and the development and testing processes.
Now the question arises: who performs static and dynamic testing?
This task can be performed by anyone in the software life cycle: architects, requirements
engineers, developers and testers.
How does testing contribute to the success of the software?
• Having testers involved in requirements reviews or user story refinement can detect
defects in these work products
• The identification and removal of requirements defects reduce the risk of incorrect or
untestable functionality being developed
• Having testers work closely with system designers while the system is being designed
can increase each party’s understanding of the design and how to test it.
o This increased understanding can reduce the risk of significant design defects
and enable tests to be identified at an early stage.
• Having testers work closely with developers while the code is under development can
increase each party's understanding of the code and how to test it.
o This increased understanding can reduce the risk of defects within the code and
the tests.
• Having testers verify and validate the software prior to release can detect failures that
might otherwise have been missed and support the process of removing the defects
that caused the failures (i.e., debugging)
o Increases the likelihood that the software meets stakeholder needs and satisfies
requirements.
How do we design tests that find defects?
Effective: by using proven, documented test design techniques
Efficient: find the defects with the least effort, time, cost and resources.
Quality assurance refers to the process; its aim is to provide confidence that a certain
level of quality is achieved.
Let's have an example:
If I ask you to make a pizza and you don't know how to make pizza, then the probability that you
will make something eatable is very low, even if you have the best ingredients. But if I give you
a pizza recipe that has been proven before to produce a good pizza, the probability of making a
good pizza is much higher. The software in this case is the pizza and the recipe is the process.
Whoever wrote the recipe surely made multiple attempts in order to come up with that recipe
(process). Even so, there is room for improvement, so-called process improvement. The better
the process, the better the software. To have a good pizza, everything needs to be good (the
dough, the sauce, the ingredients; it needs to look good and fresh). The same applies to work
products in software:
When processes are carried out properly, the work products or the outputs created by those
processes are generally of higher quality, which contributes to defect prevention.
The use of root cause analysis to detect and remove the causes of defects is important for
effective quality assurance.
Quality Control
• Involves test activities that support the achievement of an appropriate level of quality
• These test activities are part of the overall software development or maintenance
process
• Quality assurance supports proper testing
A person makes an error (mistake): this could be a system architect, a requirements engineer, a
developer, a tester or even a customer. Some of those mistakes/errors create a fault/bug in the
software that can cause a failure in operation.
If an error is not found at the right time, it will slip into the software (it can come from the
architect's side, from the requirements side, from not understanding the customer's specification
well, and so on). Describe each problem in detail (e.g. miscommunication between the
requirements engineer and the architect)…
Defects:
• Functional: a financial system with a miscalculation in one of its features (unacceptable)
• Non-functional: not directly visible; a system that works but is slow, or a system that
cannot accept more than 1000 users at a specific time.
One role of testing is to ensure that key functional and non-functional requirements are
examined before the system is released for operational use, and that any defects are reported to
the development team to be fixed.
We use testing to help us to measure the quality of software in terms of the number of defects
found, the tests run, and the system covered by the tests.
Do you think testing increases the quality of the software?
• No. Testing itself can only give confidence in the quality of the software; the quality of
the software system increases when the defects found are fixed.
Failure:
Failure may be caused by environmental conditions. For example, radiation, electromagnetic
fields and pollution can also cause defects in firmware or influence the execution of the
software by changing hardware conditions.
Defects, Root Causes, Effects and Debugging
So we analyze the defects to identify the root causes, so that we can reduce the occurrence of
similar defects in the future.
By focusing on the most significant root causes, root cause analysis can lead to process
improvements that prevent a significant number of future defects from being introduced.
An example of a root cause: a lack of knowledge while reviewing the requirements, or
miscommunication in the discussions between the people involved.
Testing and Debugging:
Executing tests can show failures that are caused by defects in the software
Debugging is the development activity that finds, analyzes and fixes such defects
Subsequent re-testing, or confirmation testing, checks whether the fixes resolved the defects
(not to be confused with regression testing, where the existing tests are re-executed in order to
check whether the new fixes have caused other bugs).
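As a rough illustration of the difference (the function, values and test names below are invented for the example, not taken from any real project): confirmation testing re-runs the test that originally exposed the defect, while regression testing re-runs the surrounding tests to check that the fix broke nothing else.

```python
# Hypothetical example: discount() had a defect (it returned the full price).
# After the fix, we re-run the failing test (confirmation testing) and the
# rest of the suite (regression testing) to check nothing else broke.

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)

def test_discount_applied():          # confirmation test: failed before the fix
    assert discount(200.0, 10) == 180.0

def test_zero_discount_unchanged():   # regression tests: passed before the fix,
    assert discount(50.0, 0) == 50.0  # re-run to make sure the fix broke nothing

def test_full_discount_is_free():
    assert discount(99.99, 100) == 0.0

if __name__ == "__main__":
    # Run without a test runner installed: call each test function directly.
    for test in (test_discount_applied, test_zero_discount_unchanged, test_full_discount_is_free):
        test()
    print("all tests passed")
```

In practice these tests would normally be run with a test runner such as pytest; the plain main block just keeps the sketch self-contained.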
Test coverage measures the effectiveness of our testing: it measures how much our test
cases have covered. Coverage is used (a small coverage-calculation sketch follows the list):
• To find the areas in the specified requirements which are not covered by our tests
• To know where we need to create more test cases to increase our test coverage
• To measure how a change request will affect the software
• To provide a quantitative measure of test coverage, which is an indirect method of
checking quality
• To identify meaningless test cases that do not increase coverage
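A minimal sketch of a requirements-coverage calculation, assuming a simple mapping from test cases to the requirements they cover (all IDs are invented; a real project would pull this data from a test management tool):

```python
# Minimal requirements-coverage sketch with invented IDs.
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

# Which requirements each test case covers.
tests = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-1", "REQ-2"],
    "TC-03": [],            # a test that covers nothing adds no coverage
}

covered = {req for reqs in tests.values() for req in reqs}
uncovered = [r for r in requirements if r not in covered]
coverage = len(covered) / len(requirements) * 100

print(f"requirements coverage: {coverage:.0f}%")   # 50%
print("not yet covered:", uncovered)               # ['REQ-3', 'REQ-4']
print("tests adding no coverage:",
      [t for t, reqs in tests.items() if not reqs])  # ['TC-03']
```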
2. Exhaustive testing is impossible
Imagine to test an Integer range from: -32,767 to 32,767 => we have ~ 65535 possible values
Testing everything (all combinations of inputs and preconditions) is not feasible except for
trivial cases
Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts
As conclusion we need to focus on the most important parts that can be affected.
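To make the numbers concrete, here is a small back-of-the-envelope calculation (the choice of input is just an illustration) showing how quickly the number of input combinations grows when several such inputs are combined:

```python
# Back-of-the-envelope arithmetic for why exhaustive testing is impossible.
# A single integer input in the range above already has ~65,535 possible
# values; combining a few such inputs makes the number of test cases explode.
values_per_input = 65_535

for number_of_inputs in (1, 2, 3):
    combinations = values_per_input ** number_of_inputs
    print(f"{number_of_inputs} input(s): {combinations:,} combinations")

# Two such inputs already yield ~4.3 billion combinations, and three yield
# roughly 2.8e14, far beyond what any test team could execute.
```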
3. Early testing saves time and money
When a bug in the requirement is discovered during the requirement phase, the cost of fixing
this bug is very cheap. The more we wait on fixing this bug, the more costly it is to be fixed.
7
If a customer discovered a bug after delivering, the software would cost as 1000 times higher
than if we would have discovered and fixed the bug in the requirement phase.
The time and effort required to fix a requirement bug during the system-testing phase could be
500 times higher than if we would have discovered and fixed the bug in the requirement phase.
To find defects early, testing activities shall be started as early as possible in the software or
system development life cycle, as soon as the documents are in draft mode, and shall be
focused on defined objectives.
If we wait until the last minute to involve the testers, time pressure can increase
dramatically.
The earlier the testing activity is started, the longer the elapsed time available
Testers do not have to wait until the software is available to test
4. Defects cluster together
If our software has 10 modules or components, do not imagine that a first cycle of testing which
exposes 100 bugs will find exactly 10 bugs in each module; rather, a small number of modules
will exhibit the majority of the problems.
A specific module can contain most of the bugs, for various reasons behind defect clustering:
• System complexity
• Volatile code
• The effects of change
• Development staff experience
• Development staff inexperience
A small number of modules usually contains most of the defects discovered during pre-release
testing or is responsible for most of the operational failures.
Predicted defect clusters, and the actual observed defect clusters in test or operation, are
essential input into a risk analysis used to focus the test effort
Pareto principle, 80/20 (80% of the effects are due to 20% of the causes, e.g. 80% of car accidents
are caused by 20% of the drivers). In software, ~80% of the problems are found in about 20% of the
modules. If you want to uncover a higher number of defects, it is useful to employ this principle
and target the areas of the application where a high proportion of defects can be found. However,
it must be remembered that testing should not concentrate exclusively on those parts.
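A small sketch of how defect clustering could be checked in practice, assuming we already have defect counts per module (the module names and counts are invented for the example):

```python
# Illustrative defect-clustering check: given defect counts per module,
# find the smallest set of modules that accounts for ~80% of the defects.
defects_per_module = {
    "payments": 42, "login": 18, "reports": 9, "search": 6,
    "profile": 4, "settings": 3, "help": 2, "about": 1,
}

total = sum(defects_per_module.values())          # 85 defects in total
ranked = sorted(defects_per_module.items(), key=lambda kv: kv[1], reverse=True)

cumulative, hot_modules = 0, []
for module, count in ranked:
    cumulative += count
    hot_modules.append(module)
    if cumulative / total >= 0.80:
        break

print(f"{len(hot_modules)}/{len(defects_per_module)} modules "
      f"hold {cumulative / total:.0%} of the defects: {hot_modules}")
# 3/8 modules hold 81% of the defects: ['payments', 'login', 'reports']
```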
5. Beware of the pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. The tests are no longer effective at finding defects, just as pesticides
are no longer effective at killing insects after a while.
In order to detect new defects, the test cases need to be regularly reviewed and revised.
6. Testing is context dependent
Different testing is necessary in different circumstances, e.g.:
• A game which uses graphics heavily needs to be tested differently from a webpage or
automotive software.
• A simple static website will be tested differently than a dynamic e-commerce site,
where products can be purchased and users pay with debit/credit cards.
• Software used in an airplane will be tested differently than a flight simulator.
• Safety-critical industrial control software is tested differently from a mobile e-commerce
app (e.g. a banking login page compared to an online game login, even though the login
interfaces might look the same).
• Testing in an Agile project is done differently than testing in a sequential lifecycle
project.
7. Absence of errors is a fallacy
Some organizations expect that testers can run all possible tests and find all possible defects. But
principle 2 tells us that this is impossible. Further, it is a fallacy (a mistaken belief) to
expect that just finding and fixing a large number of defects will ensure the success of a system.
For example, thoroughly testing all the specified requirements and fixing all the defects found
could still produce a system that is difficult to use and that does not fulfill the users' needs and
expectations.
Software with no known errors is not necessarily ready to be shipped/delivered. You should always
ask the question:
Does the application under test fulfill the users’ needs or not?
The fact that we cannot find any defects, or the fact that we have fixed all the defects that we
have found, is not enough reason to assume the software will be successful. Think about it: before
dynamic testing has begun, no defects have been reported against the code delivered so far.
Does that mean that software which has not been tested yet, and hence has no outstanding
identified defects, can be delivered? I don't think so.
Let's have an example: you might go to a doctor with symptoms, and he gives you medication
according to his analysis of the problem. You try the medication for a while and see no
improvement. It turns out that the doctor's initial analysis was wrong, and he might
recommend another medication. The first medication was not a bad medication, but it was the
wrong medication for the actual situation.
You can apply the same analogy to software. We usually build software to help us solve a
problem. We define the characteristics of the software thinking that they will solve the problem.
Testing helps gain confidence in the software; that is true, but it is not a principle. In addition,
an exam question can give you a situation and ask which principle the situation describes or which
principle the situation is lacking. For example, a customer gets the software but is upset
because the software doesn't meet his needs. Which principle did we not follow? The absence-of-
errors fallacy. Another one: a team wants to discover as many defects as it can; which
principle does it need to follow? Defect clustering! And so on…
One more question here: how much testing is enough?
As we discussed earlier, risk is an important aspect when we talk about testing.
Every time, we should evaluate the risk of the current situation of the software and decide
whether the risk is high or low. If the risk is low and acceptable, then we can stop testing and
deliver the software; otherwise we should continue testing.
Testing should provide sufficient information to stakeholders to allow them to make informed
decisions.
6. Test Process
A common misperception of testing is that it only consists of running tests, meaning executing
the software and checking the results. In fact, software testing is a process which includes
many different activities, and test execution is only one of them.
Before test execution there are some preliminary things to do. To do proper testing, one should
go through various test activities; we call those activities the test process.
• Requirement review
• Create the test specification
• Review the test specification
• Create the test cases
• Review the test case
• Execute the test case
• Analyze the test results
The test process in any given situation depends on many factors and on the context, so
you might put more effort into one step than into the others.
The most important thing is to decide, how we should perform the test process in order to
achieve its established objectives.
7. Test Planning
This is where we define the objectives of testing, what to test, who will do the testing, how they
will do the testing, and the specific test activities needed in order to meet the objectives; for how
long, and when we can consider the testing complete, which is called the exit criteria. This is
when we will stop testing and raise a report to the stakeholders in order to decide whether the
testing was enough or not. All this needs to be reviewed together with the software project
leader and the software process engineer.
8. Test monitoring and control
This represents the ongoing activity of comparing actual progress against the test plan using any
test monitoring metrics defined in the test plan.
If there are any deviations, then we should perform test control, which means taking any necessary
action or actions to stay on track to meet the targets. Therefore, we need to undertake both
planning and control throughout the testing activities.
The exit criteria are evaluated continuously during test monitoring and control, to check whether
we are on track/on time to meet them.
Evaluating exit criteria is the activity where test execution results are assessed against the
defined objectives.
Example of exit criteria:
Test progress (imagine a stopwatch) against the plan and the status of the exit criteria
are communicated to stakeholders in test progress reports, including deviations from the
plan and information to support any decision to stop testing.
The test manager will then evaluate the test reports submitted by the various testers and
decide whether we should stop testing or whether testing in a specific area should continue.
For example, if the exit criterion was that the software must complete a batch transaction
within 8 seconds, and the measured time is 10 seconds, the criterion is not met and there are two
possible actions. The most likely is to employ extra testing activities to find out why the desired
performance is not met. The other is to change the exit criterion, which requires
approval from the stakeholders. In an Agile environment the exit criteria map to the definition of
done.
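The example above can be expressed as a simple check of measured values against thresholds. The criteria and numbers below are purely illustrative, not a prescribed set of exit criteria:

```python
# Minimal sketch of evaluating exit criteria during test monitoring and
# control. The criteria and measured values are invented for the example.
exit_criteria = {
    "batch_transaction_seconds_max": 8.0,   # performance threshold
    "test_pass_rate_min": 0.95,             # share of executed tests passing
    "open_critical_defects_max": 0,
}

measured = {
    "batch_transaction_seconds": 10.0,      # actual measured time
    "test_pass_rate": 0.97,
    "open_critical_defects": 0,
}

failures = []
if measured["batch_transaction_seconds"] > exit_criteria["batch_transaction_seconds_max"]:
    failures.append("batch transaction too slow")
if measured["test_pass_rate"] < exit_criteria["test_pass_rate_min"]:
    failures.append("pass rate below threshold")
if measured["open_critical_defects"] > exit_criteria["open_critical_defects_max"]:
    failures.append("critical defects still open")

if failures:
    print("exit criteria NOT met:", failures)   # continue testing or renegotiate the criteria
else:
    print("exit criteria met: testing can stop")
```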
9. Test Analysis
Test analysis is concerned with the fine detail of knowing what to test, breaking it down into fine
testable elements that we call test conditions.
It is the activity during which general testing objectives are transformed into real test
conditions.
During test analysis, any information or documentation we have is analyzed to identify testable
features and define associated test conditions.
The test basis is any sort of documentation that we can use as a reference or base to know what to
test.
Test analysis includes the following activities:
• Analyzing and understanding any documentation that we will use for testing, to make
sure it is testable. Examples of the test basis include:
o Requirements specifications (customer requirements, functional requirements,
system requirements, user stories that specify the functional and non-functional
component or system behavior)
o Design and implementation information (such as software or system
architecture diagrams or documentation, design specifications, modeling
diagrams such as UML, interface specifications, system state transitions or state machines)
o Risk analysis reports (which list all the items of the software that are risky and
require more attention from us)
• The test analysis activities are also a good opportunity to evaluate the test basis
and test items to identify defects of various types, such as:
o Ambiguities (something that confuses the reader and might be interpreted
differently by different people)
o Omissions (something is not mentioned; it could be some signals or some values)
o Inconsistencies (something that was mentioned in one way somewhere, but is also
mentioned differently somewhere else)
o Inaccuracies (something is not accurate)
o Contradictions (could be a contradiction between two state transitions or
statements, which is a kind of inconsistency between them; if two sentences are
contradictory, then one must be true and one must be false, but if they are
inconsistent, then both could be false)
o Superfluous statements (unnecessary statements that add nothing to the
meaning)
• Identifying the features and sets of features to be tested
• Defining and prioritizing test conditions for each feature based on analysis of the test
basis, considering functional, non-functional and structural characteristics, other
business and technical factors, and levels of risk
• Finally, we should focus on capturing bi-directional traceability between each
element of the test basis and the associated test conditions (see the sketch below)
Applying black-box, white-box, and experience-based test techniques in the process of test
analysis can be useful to reduce the likelihood of omitting important test conditions and to
define more precise and accurate test conditions.
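A minimal sketch of what bi-directional traceability between test-basis elements and test conditions can look like as data; the requirement and test-condition IDs are invented for the example:

```python
# Minimal sketch of bi-directional traceability between test-basis elements
# (e.g. requirements) and test conditions. IDs are invented for illustration.
forward = {                               # test basis  ->  test conditions
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],                          # gap: nothing covers REQ-3 yet
}

# Derive the backward direction (test condition -> test basis) automatically,
# so the two views can never drift apart.
backward = {}
for req, conditions in forward.items():
    for tc in conditions:
        backward.setdefault(tc, []).append(req)

print("uncovered basis elements:", [r for r, tcs in forward.items() if not tcs])
print("TC-1 traces back to:", backward["TC-1"])   # ['REQ-1']
```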
Test Design
During test design, the test conditions are elaborated into high-level test cases, sets of high-
level test cases and other testware.
Test analysis answers the question "what to test?" while test design answers the
question "how to test?".
Test design also includes a set of activities:
• Designing and prioritizing test cases and sets of test cases based on test techniques
• Identifying the necessary test data to support the test conditions and test cases (what kind
of data should we use to create our test design (specification), and how do we combine
test conditions so that a small number of test cases can cover as many of the test
conditions as possible)
• Designing the test environment and identifying any required infrastructure and
tools
• Capturing bi-directional traceability between the test basis, test conditions, test cases,
and test procedures
As with test analysis, test design may also result in the identification of similar types of defects
in the test basis; this identification of defects during test design is an important potential
benefit.
Test Implementation
Test implementation activities include:
• Developing and prioritizing test procedures, and potentially creating automated test
scripts
• Creating test suites from the test procedures and automated test scripts, if any
• Arranging the test suites within a test execution schedule in a way that results in
efficient test execution
• Building the test environment (simulators or other infrastructure)
• Preparing and implementing test data and ensuring it is properly loaded in the test
environment
• Verifying and updating bi-directional traceability between the test basis, test conditions,
test cases, test procedures and test suites
Test Execution
During test execution, test suites are run in accordance with the test execution schedule.
As tests are run, their outcomes (the actual results) need to be logged and compared to the
expected results.
Whenever there is a discrepancy between the expected and actual results, a test incident (bug
report) should be raised to trigger an investigation.
Test execution activities:
• Keeping a log of the testing activities, including the outcome (passed/failed) and the
versions of the software, data and tools, recording the IDs and versions of the test
item(s) or test object, test tool(s), and testware used in running the tests
• Running the test cases in the determined order, manually or using test automation tools
• Comparing actual results with expected results (see the sketch after this list)
• Analyzing anomalies to establish their likely causes (an anomaly is a difference between
the actual result and the expected result)
• Reporting defects based on the failures observed, with as much information as possible,
and communicating them to the developers so they can be fixed
• After fixing a bug, we need to retest, or repeat test activities, to confirm that the bug
was actually fixed; this is called confirmation testing
• Also, if there is time in the plan, it is very important to check not only the test that was
affected but to run all the tests, in order to make sure that the new bug fix has not
unintentionally introduced other bugs in parts that previously worked fine; this is called
regression testing
• Verifying and updating bi-directional traceability between the test basis, test conditions,
test cases, test procedures, and test results
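A minimal sketch of the execute/compare/log cycle described above; the system under test, the test data and the version string are invented for the example:

```python
# Minimal test-execution sketch: run test cases, compare actual results with
# expected results, and keep a log entry per test.
from datetime import datetime, timezone

def system_under_test(x: int) -> int:
    return x * 2            # stand-in for the real component being tested

test_cases = [
    {"id": "TC-1", "input": 2, "expected": 4},
    {"id": "TC-2", "input": 5, "expected": 10},
    {"id": "TC-3", "input": 7, "expected": 15},   # deliberately wrong expectation
]

log = []
for tc in test_cases:
    actual = system_under_test(tc["input"])
    verdict = "passed" if actual == tc["expected"] else "failed"
    log.append({
        "test_id": tc["id"],
        "verdict": verdict,
        "actual": actual,
        "expected": tc["expected"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sw_version": "1.0.0",            # version of the test object
    })
    if verdict == "failed":
        print(f"{tc['id']}: expected {tc['expected']}, got {actual} -> raise a defect report")

print("executed:", len(log), "passed:", sum(e["verdict"] == "passed" for e in log))
```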
Test Completion
• Analyzing lessons learned from the completed test activities to determine changes
needed for future iterations, releases and projects
• Using the information gathered to improve test process maturity
Test work products vary considerably between organizations in how they are organized and
managed, and in the names used for those work products.
Testing Standards
ISO/IEC/IEEE 29119-1 (2013) Software and systems engineering – Software testing – Part 1:
covers testing concepts and definitions
ISO/IEC/IEEE 29119-2 (2013) Software and systems engineering – Software testing – Part 2:
covers test processes
ISO/IEC/IEEE 29119-3 (2013) Software and systems engineering – Software testing – Part 3:
covers test documentation
All test reports should provide audience-relevant details about the test progress as of the date
of the report, including summarizing the test execution results once those become available.
Test monitoring and control work products should also address project management concerns,
such as:
• Task completion
• Resource allocation and usage
• Effort
Test Analysis work products
These include documents that contain defined and prioritized test conditions, each of which is
ideally bidirectionally traceable to the specific element(s) of the test basis it covers.
Test Design work products
Test design results in test cases and sets of test cases to exercise the test conditions defined in
test analysis
• It is often good practice to design high-level test cases without concrete values for input
data and expected results (see the sketch below)
• Such high-level test cases are reusable across multiple test cycles with different
concrete data, while still adequately documenting the scope of the test case
Ideally, each test case is bidirectionally traceable to the test condition(s) it covers.
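A small sketch of the difference between a high-level (logical) test case and the concrete test cases derived from it; the feature, field names and values are invented for the example:

```python
# Sketch of a high-level (logical) test case and the concrete test cases
# derived from it. The feature and values are invented.

# High-level test case: no concrete input values or expected results yet.
high_level_case = {
    "id": "HL-1",
    "condition": "a logged-in user transfers an amount within the daily limit",
    "expected": "the transfer is accepted and the balance is reduced by that amount",
}

# Concrete test cases: the same logical case instantiated with real data,
# reusable across test cycles with different values.
concrete_cases = [
    {"derived_from": "HL-1", "amount": 10.00,  "daily_limit": 1000.00, "expect_accepted": True},
    {"derived_from": "HL-1", "amount": 999.99, "daily_limit": 1000.00, "expect_accepted": True},
]

for case in concrete_cases:
    accepted = case["amount"] <= case["daily_limit"]   # stand-in for the real system
    assert accepted == case["expect_accepted"], case
print("all concrete cases behave as the high-level case describes")
```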
Test Implementation work products
It includes documentation about which test item(s), test object(s), test tools, and testware were
involved in the testing.
Once test execution is complete, the status of each element of the test basis can be determined
and reported via bi-directional traceability with the associated test procedure(s).
17. Traceability between the test basis and test work products
It is beneficial if the test basis has measurable coverage criteria defined.
The coverage criteria can act effectively as key performance indicators (KPIs) to drive the
activities that demonstrate the achievement of software test objectives.
Some test management tools provide test work product models that match part or all of the
test work products.
Some organizations build their own management systems to organize the work products and
provide the information traceability they require.
FACTORS that influence the test process for an organization
TEST MANAGEMENT
1. Risk and Testing
Risk involves the possibility of an event in the future which has negative consequences.
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to
carry out product risk analysis and to ensure that the likelihood of product risk is minimized, by
following a cascade of steps.
Product risk involves the possibility that a work product (e.g., a specification, component,
system or test) may fail to satisfy the legitimate needs of its users and/or stakeholders.
When product risks are associated with specific quality characteristics of a product (e.g.,
functional suitability, reliability, performance efficiency, usability, security, compatibility,
maintainability and portability), they are also called quality risks:
Product Risk = Quality Risk
Examples of product risks:
• Software might not perform its intended functions according to the specification
• Software might not perform its intended functions according to user, customer and/or
stakeholders' needs
• System architecture may not adequately support some non-functional requirement(s)
• A particular computation may be performed incorrectly in some circumstances
• A loop control structure may be coded incorrectly
• Response-times may be inadequate for a high-performance transaction processing
system
• User experience (UX) feedback might not meet product expectations
2. Project Risks
Involves situations that, should they occur, may have a negative effect on a project’s ability to
achieve its objectives
Project issues:
• Tester issues (they may not communicate their needs or the test results sufficiently, or
in time)
• Developer and/or tester failures (failing to follow up on information found in testing and
reviews, or not improving the code or the test implementation)
• Improper attitude towards, or expectations of, testing (e.g. not appreciating the value of
finding defects during testing)
Technical issues:
The level of the risk is determined by the likelihood of the event and the impact (the harm)
from that event.
Level of risk = Probability of the risk x impact if it did happen
For example, a calculation defect (an error in a financial report) typically carries a higher impact
than a cosmetic defect in the user interface. It is very important to prioritize the risks as early as
possible in order to avoid delays.
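A minimal sketch of prioritizing risks with the formula above (level of risk = likelihood x impact); the risk items and the 1-5 scales are invented for the example:

```python
# Minimal risk-prioritization sketch: level of risk = likelihood x impact.
risks = [
    {"item": "calculation error in the financial report", "likelihood": 3, "impact": 5},
    {"item": "misaligned label on the user interface",    "likelihood": 4, "impact": 1},
    {"item": "slow response under 1000 concurrent users", "likelihood": 2, "impact": 4},
]

for r in risks:
    r["level"] = r["likelihood"] * r["impact"]

# Test the highest-level risks first.
for r in sorted(risks, key=lambda r: r["level"], reverse=True):
    print(f"risk level {r['level']:2d}: {r['item']}")
# risk level 15: calculation error in the financial report
# risk level  8: slow response under 1000 concurrent users
# risk level  4: misaligned label on the user interface
```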
Risk-based Testing and Product Quality
Based on risk, we know when to start testing and where to test more, and we can make testing
decisions (what to retest, what to skip, when to stop testing, how to reprioritize,
and so on).
Testing is used as a risk mitigation activity, to provide feedback about identified risks, as well as
providing feedback on residual (uncovered) risks.
A risk-based approach to testing provides proactive opportunities to reduce the level of product
risk.
Risk-based testing involves product risk analysis, which includes the identification of product
risks and the assessment of each risk’s likelihood and impact. The resulting product risk
information is used to guide test planning, the specification, preparation and execution of test
cases and test monitoring and control. Analyzing product risks early contributes to the success
of a project.
Risk response - the results of product risk analysis are used to:
2. Independent Testing
3. Tasks of Test Manager and Tester
1. Task of the test manager
The test manager has overall responsibility for testing (the test process and the successful
leadership of the test activities).
Typical activities:
• Test strategy (for the project, and the test policy for the organization if not already in place)
• Plan test activities (considering the context and understanding the test objectives and
risks, including selecting the test approach, estimating the time, effort and cost of
testing, acquiring resources, defining test levels and test cycles, planning defect
management and creating a high-level test schedule)
• Write and update the test plan
• Coordinate (the test strategy and test plan with project manager, product owner and
others)
• Share the test perspective (to other project activities, such as integration planning)
• Initiate activities (analysis, design, implementation, review, execution of tests, monitor
tests, progress and results, check the status of exit criteria or definition of done if we are
talking about agile)
• Prepare and deliver (test reports or test summary reports based on the information
gather during testing)
• Adapt to changes (planning based on test results and progress and take action if it’s
needed for test control)
• Support setting up the system (defect management system, and reporting issues)
• Metrics (provided based on measuring the test progress and evaluating the quality of
the testing and of the product)
• Tools (support in selecting and installing the tools that support the test process)
• Design the test environment (ensure that it is put in place before test execution and
managed during test execution)
• Advocate for the testers (promote and educate the testers, the test team)
• Develop the testers' skills (knowledge transfer, training plans, performance evaluation,
coaching and so on)
2. Tasks of the tester
• Contribute to the test plan (review it and contribute based on lessons learned)
• Analyze for testability (review the requirements, specifications and models for
testability)
• Create test conditions (identify and document test conditions, and capture traceability
between test cases, test conditions, test basis and requirements)
• Implement the test environment (design, set up and verify the test environment, setting
up the hardware and software needed for testing, offering coordinating support)
• Implement tests (design and implement test cases and test procedure)
• Prepare test data (acquire the test data)
• Create the test schedule (it is up to each tester to create their own detailed test
schedule around the high-level test schedule created by the test manager, e.g. how long a
subtask takes and what to prepare before a test run)
• Execute and log the tests (evaluate the results and comment on the deviations from the
expected results)
• Use tools (use appropriate tools to facilitate the test process)
• Automate tests (supported by a developer or a test automation expert)
• Evaluate non-functional characteristics (performance efficiency, reliability, usability,
security, compatibility and portability)
• Review (test created by others)
4. Test Strategy and Test Approach
A test strategy provides a generalized description of the test process, usually at the product or
organizational level.
1. Test Strategies
1. Analytical
• Testing is based on an analysis of some factor (usually during the requirements and
design stages of the project), to define where to test first, where to test more and
when to stop testing
• Risk-based strategy (involves performing risk analysis using project documents and
stakeholder input)
• Requirements-based strategy (where an analysis of the requirements specification forms
the basis for planning, estimating and designing tests)
2. Model-based
• Tests are designed based on some formal or informal model that our system must
follow
• The model is based on some required aspect of the product, such as a
function, a business process, an internal structure, or a non-functional
characteristic
3. Methodical
• Relies on making systematic use of some predefined set of tests or test
conditions (state transitions, diagrams and so on)
• Failure-based (including error guessing and fault attacks)
• Checklist based
• Quality-characteristic based
4. Process- or standard-compliant
• Adopt an industry standard or a known process to test the system
• IEEE 29119 standard
• Agile methodologies or Kanban
5. Reactive
• Testing is reactive to the component or system being tested
• Exploratory testing
6. Directed or consultative
• Driven primarily by the advice, guidance or instructions of stakeholders, business
domain or technology experts
• Ask the users or developers what to test or to obtain some directive
7. Regression-averse
• Reuse of existing testware (test cases and test data)
• Test automation (standard test suites)
The test approach is the implementation of the test strategy for a particular project or release
The test approach is the starting point for selecting the test techniques, test levels and test
types and for defining the entry criteria and exit criteria (or definition of ready and definition of
done, respectively).
The tailoring of the strategy is based on decisions made in relation to the complexity and goals
of the project, the type of product being developed, and product risk analysis.
Factors for defining the strategy
• Risk (testing is about risk management, so consider the risks and their level; for a well-
established application that is evolving slowly, a regression-averse strategy is important)
• Available resources and skills (which skills the testers possess and which they lack;
constraints could be skills and time)
• Objectives (testing must satisfy the needs of the stakeholders to be successful; if the
objective is to find as many defects as possible with a minimal amount of time and
effort invested, then a reactive strategy makes sense)
• Regulations (we must satisfy not only the stakeholders but also some regulations; in this
case a methodical test strategy may satisfy those regulations)
• The nature of the product and the business (a different testing approach depending on
the product and its focus)
• Safety (safety considerations promote more formal strategies)
• Technology
5. Test Planning
It is a project plan for the testing work to be done in development or maintenance projects.
Planning is influenced by many factors, for example:
A test plan ensures that there is initially a list of tasks and milestones in a baseline plan to track
progress against, as well as defining the shape and size of the test effort (a plan must contain at
least: what to do, when, how, by whom, how to manage changes to the plan, how to track
the progress of the plan, and how much it will cost).
Test Planning (K2)
Test planning is based on the testing level (based on the V-model), with a master test plan to
coordinate between the levels.
Test planning is a continuous activity and is performed throughout the product's lifecycle (it is
quite normal to change the plan if some information was missing or turns out to be wrong).
Feedback from test activities should be used to recognize changing risks so that planning can be
adjusted (based on time spent on reviews, test specification/implementation, execution,
analyzing bugs, reporting).
Test planning activities:
6. Entry and Exit Criteria
These are also called, in Agile methods, definition of ready and definition of done. In order to
have effective control over the quality of the software and of the testing, it is important to have
criteria which define when a given test activity should start and when the activity is complete.
1. Entry Criteria (Definition of Ready)
Entry criteria define the preconditions for undertaking a given test activity (this could be the
beginning of a level of testing, or the point when test design or test execution is ready to start;
if the entry criteria are not met, it is likely that the activity will prove more difficult, more
time-consuming, more costly and more risky).
Remember that, according to the test process, when we reach the exit criteria the tester should
provide a report of their findings to the test manager or to the stakeholders, who decide whether
the testing should continue or stop.
ENTRY CRITERIA -> PASS/FAIL CRITERIA -> EXIT CRITERIA
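A minimal sketch of entry and exit criteria treated as simple checklists for one test level; the specific criteria and their states are invented for the example:

```python
# Minimal sketch of entry and exit criteria as simple checklists for a test
# level (e.g. system testing). The specific criteria are invented examples.
entry_criteria = {
    "test environment available": True,
    "testable requirements baselined": True,
    "test data loaded": False,
}

exit_criteria = {
    "all planned tests executed": False,
    "no open critical defects": True,
    "required coverage reached": False,
}

def unmet(criteria: dict) -> list:
    return [name for name, satisfied in criteria.items() if not satisfied]

print("can start test execution:", not unmet(entry_criteria),
      "| missing:", unmet(entry_criteria))
print("can stop testing / report:", not unmet(exit_criteria),
      "| missing:", unmet(exit_criteria))
```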