Draft For TestPlan
Uploaded by alexdaramus

FUNDAMENTALS OF TESTING

1. What is testing?
Every one of us uses software in one way or another:
phones, medical machinery, cars, airplanes, business applications.
Unfortunately, much software causes failures after release and does not meet the
stakeholders’ needs.
https://www.udemy.com/course/istqb-certified-tester-foundation-level-ctfl/learn/lecture/15371836#overview

2. Why is testing necessary?


Rigorous testing of components and systems, and of their associated documentation, can help
reduce the risk of failures occurring during operation.
Risk Management:
Testers work to reduce the risk of failures occurring during operation.
When defects are detected and subsequently fixed, this contributes to the quality of the
components or systems.
Typical Objectives of Testing:

Testing’s Contribution to success:


Testing techniques are used to address a major software problem: finding and fixing as many
bugs as possible before the release.
Dynamic Testing: (the bugs are already there, and we are trying to find them)

• It aims to find and fix as many defects as possible before the software release
• It involves the execution of the component or system being tested with some test data
• It requires the software code to be implemented and running.
A better strategy is to prevent defects from entering the software in the first place, using static testing:

• Does NOT involve the execution of the component or system being tested
• Involves techniques such as reviews of documents (e.g. requirements, design documents,
source code and user stories)
• Helps prevent defects from being introduced into the code
Both dynamic and static testing can be used as a means for achieving similar objectives

Either technique will provide information that can be used to improve both the system being
tested and the development and testing processes.
Now the question arises: who performs static and dynamic testing?
These tasks can be performed by anyone in the software life cycle: architects, requirements
engineers, developers and testers.
How does testing contribute to the success of the software?

• Having testers involved in requirements reviews or user story refinement could detect
defects in these work products
• The identification and removal of requirements defects reduce the risk of incorrect or
untestable functionality being developed
• Having testers work closely with system designers while the system is being designed
can increase each party’s understanding of the design and how to test it.
o This increased understanding can reduce the risk of significant design defects
and enable tests to be identified at an early stage.
• Having testers work closely with the developers while the code is under development can
increase each party’s understanding of the code and how to test it.
o This increased understanding can reduce the risk of defects within the code and
the tests.
• Having testers verify and validate the software prior to release can detect failures that
might otherwise have been missed, and support the process of removing the defects
that caused the failures (i.e., debugging)
o Increases the likelihood that the software meets stakeholder needs and satisfies
requirements.
How do we design tests that find defects?
Effective: by using proven, documented test design techniques
Efficient: find the defects with the least effort, time, cost and resources.

3. Quality Assurance and Testing


Quality assurance and testing are not the same, but they are related.

Quality assurance refers to the process, the aim being to provide confidence that the
appropriate level of quality will be achieved.
Let’s take an example:
If I ask you to make a pizza and you don’t know how to make pizza, the probability that you
will make something edible is very low, even if you have the best ingredients. But if I give you
a pizza recipe that was previously proven to produce a good pizza, the probability of making a
good pizza is much higher. In this analogy, the software is the pizza and the recipe is the
process. Whoever wrote the recipe surely made multiple attempts before arriving at it; the
same goes for a process. Even so, there is room for improvement, so-called process
improvement. The better the process, the better the software. For a good pizza, everything
needs to be good: the dough/base, the sauce, the ingredients; it needs to look good and fresh.
In software, the same holds for the work products:

When processes are carried out properly, the work products, or outputs, created by those
processes are generally of higher quality, which contributes to defect prevention.

The use of root cause analysis to detect and remove the causes of defects is important for
effective quality assurance.
Quality Control

• Involves test activities that support the achievement of an appropriate level of quality
• These test activities are part of the overall software development or maintenance
process
• Quality assurance supports proper testing

4. Errors, Defects and Failures

A person makes an error (mistake): this could be a system architect, a requirements engineer, a
developer, a tester or even a customer. Some of those mistakes/errors create a fault (bug) in the
software that can cause a failure in operation.

If an error is not caught at the right time, the resulting defect slips into the software (it can come
from the architect’s side, the requirements side, a poorly understood customer specification, and so on).

Describe each problem in detail (e.g. miscommunication between the requirements engineer and the
architect)…

Defects:

• Functional: a financial system with a miscalculation in one of its features (unacceptable)
• Non-functional: not immediately visible; a system that works but is slow, or a system that
cannot accept more than 1,000 users at the same time.
One role of testing is to ensure that key functional and non-functional requirements are
examined before the system is released for operational use, and that any defects found are
reported to the development team to be fixed.
We use testing to help us measure the quality of the software in terms of the number of defects
found, the tests run, and the parts of the system covered by the tests.
Do you think testing increases the quality of the software?

• No. Testing by itself does not increase quality; it can give confidence in the quality of the
software. The quality of the software system increases when the defects found are fixed.

Failure:
Failures may also be caused by environmental conditions: for example, radiation, electromagnetic
fields and pollution can cause defects in firmware or influence the execution of the
software by changing hardware conditions.
Defects, Root Causes, Effects and Debugging

We analyze defects to identify their root causes, so that we can reduce the occurrence of
similar defects in the future.

By focusing on the most significant root causes, root cause analysis can lead to process
improvements that prevent a significant number of future defects from being introduced.

Example: a lack of knowledge while reviewing the requirements, or in the discussions between the parties involved.
Testing and Debugging:
Executing tests can reveal failures that are caused by defects in the software.
Debugging is the development activity that finds, analyzes and fixes such defects.
Subsequent re-testing, or confirmation testing, checks whether the fixes resolved the defects
(not to be confused with regression testing, where the existing tests are re-executed in order to
check whether the fixes introduced other bugs).
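A minimal sketch of the difference between the two (the discount function and its bug are hypothetical, purely for illustration):

```python
def apply_discount(price, pct):
    """Fixed version: rounding was added after a reported defect."""
    return round(price * (1 - pct / 100), 2)

# Confirmation testing: re-run the exact case that exposed the defect
assert apply_discount(100.0, 10) == 90.0

# Regression testing: re-run the surrounding tests to make sure the fix
# did not break anything that worked before
assert apply_discount(0.0, 10) == 0.0
assert apply_discount(100.0, 0) == 100.0
print("confirmation and regression checks passed")
```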

4. Concept of Test Coverage in Software Testing


Test Coverage
Is an essential part of software testing, defined as a metric that measures the amount of
testing performed by a set of tests.

Test coverage measures the effectiveness of our testing: it measures how much of the system our
test cases have covered.

• We covered 3 features out of 4 => 75% coverage


• We covered 600 lines of code (LOC) out of 1000 => 60% coverage
Effectiveness of testing is not measured by the number of test cases but rather by how much
those test cases can cover.
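The percentage examples above can be sketched as a tiny helper (the feature names are invented for illustration):

```python
def coverage_pct(covered, total):
    """Share of items exercised by the test set, as a percentage."""
    return 100.0 * len(covered) / len(total)

features = {"login", "search", "checkout", "profile"}
tested = {"login", "search", "checkout"}

print(coverage_pct(tested, features))  # 75.0
```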
What parts can we measure for coverage?

• Requirement coverage: each requirement needs to be covered by a test


• Structural coverage: each design element of the system/structure needs to be tested
• Implementation coverage: lines of code
Traceability
Between requirements and tests, traceability is mandatory in order to check that each requirement is covered.
Why do we do test coverage?

• To find the areas in the specified requirements which are not covered by our tests
• To know where we need to create more test cases to increase our test coverage
• To measure how a change request will affect the software
• To identify a quantitative measure of test coverage, which is an indirect method for
quality check
• To identify meaningless test cases that do not increase coverage

5. The seven testing principles


1. Testing shows the presence of defects, not their absence
When you test software, you may or may not find defects

• If you find defects, then that’s a proof of the presence of bugs,


• But on the other hand, if your test didn’t find defects, that’s not proof that the
software is defect free
• There is a high probability that you did not find defects because your tests do not cover
the full breadth and depth of the software.
• Maybe you didn’t select the right test data to exercise the software
• Maybe the defect is waiting for an exceptional circumstance to fail the software
There is no such thing as bug-free software; there is no way we can prove that software is defect free.
We simply need to design as many tests as possible to find as many defects as possible.
Testing reduces the probability of undiscovered defects remaining in the software but, even if
no defects are found, testing is not a proof of correctness.

6
2. Exhaustive testing is impossible
Imagine testing a single integer input: a signed 16-bit integer ranges from -32,768 to 32,767, giving 65,536 possible values.

Testing everything (all combinations of inputs and preconditions) is not feasible except for
trivial cases
Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
In conclusion, we need to focus on the most important parts that could be affected.
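The arithmetic behind this principle is easy to check: even one 16-bit integer input has 65,536 values, and independent inputs multiply:

```python
# Possible values of one signed 16-bit integer (-32,768 .. 32,767)
single_input = 2 ** 16          # 65,536
# A function of just two such inputs already has billions of combinations
two_inputs = single_input ** 2  # 4,294,967,296

print(single_input, two_inputs)
```

This is why testing everything is infeasible outside trivial cases, and why risk analysis is used to decide which inputs matter.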
3. Early testing saves time and money

When a bug in the requirement is discovered during the requirement phase, the cost of fixing
this bug is very cheap. The more we wait on fixing this bug, the more costly it is to be fixed.

If a customer discovers a bug after delivery, fixing it can cost up to 1,000 times more than if we
had discovered and fixed the bug in the requirements phase.

The time and effort required to fix a requirements bug during the system-testing phase can be
500 times higher than if we had discovered and fixed the bug in the requirements phase.

To find defects early, testing activities shall be started as early as possible in the software or
system development life cycle, as soon as the documents are in draft mode, and shall be
focused on defined objectives.

If we wait until the last minute to introduce the testers, time pressure can increase
dramatically.
The earlier the testing activity is started, the longer the elapsed time available
Testers do not have to wait until the software is available to test

See the diagram in Code Complete by Steve McConnell.


4. Defects Cluster Together

If our software has 10 modules or components, do not imagine that a first cycle of testing
exposing 100 bugs will find exactly 10 bugs in each module; rather, a small number of modules
will exhibit the majority of the problems.
A specific module can contain most of the bugs, for various reasons behind defect clustering:

• System complexity
• Volatile code
• The effects of change
• Development staff experience
• Development staff inexperience
A small number of modules usually contains most of the defects discovered during pre-release
testing or is responsible for most of the operational failures.
Predicted defect clusters, and the actual observed defect clusters in test or operation, are
essential input into a risk analysis used to focus the test effort
Pareto principle, 80/20 (80% of the effects are due to 20% of the causes; e.g. 80% of car accidents
are due to 20% of the drivers). In software, ~80% of the problems are found in about 20% of the
modules. If you want to uncover a higher number of defects, it is useful to employ this principle
and target the areas of the application where a high proportion of defects can be found. However, it
must be remembered that testing should not concentrate exclusively on those parts.
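The 80/20 observation can be illustrated with a made-up defect log (the module names and counts below are invented):

```python
from collections import Counter

# Hypothetical defect log: the module each of 100 defects was found in
defects = (["payments"] * 45 + ["auth"] * 30 + ["search"] * 10
           + ["profile"] * 8 + ["reports"] * 7)

counts = Counter(defects)
top_two = counts.most_common(2)          # the "busiest" 2 of 10 modules (20%)
share = sum(n for _, n in top_two) / len(defects)
print(top_two, f"{share:.0%}")           # 2 modules hold 75% of the defects
```

Observed clusters like this feed the risk analysis that focuses the test effort on the defect-dense modules.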
5. Beware of the pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no
longer find any new defects. The tests are no longer effective at finding defects, just as pesticides
are no longer effective at killing insects after a while.
In order to detect new defects, the test cases need to be regularly reviewed and revised.
6. Testing is context dependent
Different testing is necessary in different circumstances, e.g.:
A game that uses graphics heavily needs to be tested differently from a webpage or
from automotive software.

• A simple static website will be tested differently from a dynamic e-commerce site,
where products can be purchased and users pay with debit/credit cards.
• Software used in an airplane will be tested differently from a flight simulator.
• Safety-critical industrial control software is tested differently from a mobile e-commerce
app (e.g. a banking login page compared to an online game login, even though the login
interfaces might look the same).

• Testing in an Agile project is done differently than testing in a sequential lifecycle
project.
7. Absence of errors is a fallacy

Some organizations expect that testers can run all possible tests and find all possible defects. But
principle 2 tells us that this is impossible. Furthermore, it is a fallacy (a mistaken belief) to
expect that just finding and fixing a large number of defects will ensure the success of a system.

For example, thoroughly testing all the specified requirements, and fixing all the defects found, could
still produce a system that is difficult to use and that does not fulfill the users’ needs and
expectations.

Software with no known errors is not necessarily ready to be shipped/delivered. You should always
ask the question:
Does the application under test fulfill the users’ needs or not?
The fact that we cannot find any defects, or that we have fixed all the defects we found, is not
reason enough to conclude that the software will be successful. Consider that before dynamic
testing began, there were no defects reported against the code delivered so far. Does that mean
the software, which has not been tested yet and hence has no outstanding identified defects, can
be delivered? I don’t think so.
Let’s take an example: you might go to a doctor with symptoms, and he gives you medication
according to his analysis of the problem. You try the medication for a while and see no
improvement. It turns out that the doctor’s initial analysis was wrong, and he might
recommend another medication. The first medication was not a bad medication, but it was the
wrong medication for the situation at hand.
You can apply the same analogy on the software side. We usually build software to help us solve
a problem. We choose the characteristics of the software believing that it will solve that
problem.

Testing helps gain confidence in the software; that is true, but it is not one of the principles. In
the exam, you may be given a situation and asked which principle the situation describes, or
which principle it is lacking. For example: a customer gets the software but is upset
because it does not meet his needs; which principle did we not follow? The absence-of-errors
fallacy. Another: a team wants to discover as many defects as they can; which
principle should they follow? Defect clustering! And so on…
One more question here… How much testing is enough?
As we discussed earlier, risk is an important aspect when we talk about testing.

Every time, we should evaluate the risk of the current state of the software and decide
whether the risk is high or low. If the risk is low and acceptable, then we can stop testing and
deliver the software; otherwise we should continue testing.
Testing should provide sufficient information to stakeholders to make informed decisions.

6. Test Process
A common misperception of testing is that it only consists of running tests, meaning executing
the software and checking the results. But instead, software testing is a process which includes
many different activities, and test execution is only one of these activities.
Before test execution there are some preliminary things to do. To do proper testing, one should
go through various test activities; we call those activities the test process.

• Requirement review
• Create the test specification
• Review the test specification
• Create the test cases
• Review the test case
• Execute the test case
• Analyze the test results
The test process in any given situation depends on many contextual factors, so
you might put more effort into one step than into the others.
The most important thing is to decide, how we should perform the test process in order to
achieve its established objectives.

• Which test activities are involved in this test process?


• How are these activities implemented?
• When do these activities occur?
These need to be discussed/described in a document called the test strategy.
Test process in context

• Test activities and tasks


• Test work products (what we will create during those activities)
• Traceability between the test basis and test work products (meaning how the various test
work products reference each other, and reference the documents that we used:
requirements & design)
ISO standard (ISO/IEC/IEEE 29119-2) has further information about test processes.

7. Test Planning
Is where we define the objectives of testing: what to test, who will do the testing, how they will
do the testing, and which specific test activities are needed to meet the objectives; for how long;
and when we can consider the testing complete, which is called the exit criteria. That is
when we stop testing and raise a report to the stakeholders in order to determine whether the
testing was sufficient. All of this needs to be reviewed together with the software project leader
and the software process engineer.

8. Test monitoring and control
Represents the ongoing activity of comparing actual progress against the test plan, using any test
monitoring metrics defined in the test plan.

If there are any deviations, then we should do test control, which is taking any necessary
action/actions to stay on track to meet the targets. Therefore, we need to undertake both
planning and control throughout the testing activities.

We evaluate the exit criteria continuously during test monitoring and control, to check whether we are
on track/on time to meet them.

Evaluating exit criteria is the activity where test execution results are assessed against the
defined objectives.
Example of exit criteria:

• Checking test results and logs against specified coverage criteria


• Assessing the level of component or system quality based on test results and logs
• Determining if more tests are needed (e.g. if tests originally intended to achieve a certain
level of product risk coverage failed to do so, requiring additional tests to be written and
executed).
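A made-up sketch of evaluating exit criteria (the metric names and thresholds are illustrative, not from any standard):

```python
# Hypothetical exit criteria from a test plan: (required, actual) per metric
criteria = {
    "requirement_coverage": (0.95, 0.97),
    "pass_rate": (0.90, 0.88),
    "open_critical_defects": (0, 0),
}

def unmet(criteria):
    """Return the names of exit criteria not yet satisfied."""
    failed = []
    for name, (required, actual) in criteria.items():
        # For defect counts, lower is better; for coverage/rates, higher is better
        ok = actual <= required if "defects" in name else actual >= required
        if not ok:
            failed.append(name)
    return failed

print(unmet(criteria))  # ['pass_rate'] -> continue testing or renegotiate
```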

Test progress (e.g. imagine a stopwatch) against the plan, and the status of the exit criteria,
are communicated to stakeholders in test progress reports, including deviations from the
plan and information to support any decision to stop testing.

The test manager then evaluates the test reports submitted by the various testers and
decides whether we should stop testing, or whether testing in a specific area should continue.

For example: suppose the exit criterion was that the software should complete a batch
transaction within 8 seconds, and the measured time is 10 seconds, meaning the criterion was
not met; then there are two possible actions. The most likely is to employ extra testing
activities to check why the desired performance is not met. The alternative is to change the
exit criteria, which requires approval from the stakeholders. In an Agile environment, the exit
criteria map to the definition of done.

9. Test Analysis
Is concerned with the fine detail of knowing what to test, breaking it into fine-grained testable
elements that we call test conditions.

It is the activity during which general testing objectives are transformed into real test
conditions.

During test analysis, any information or documentation we have is analyzed to identify testable
features and define associated test conditions.

The test basis is any sort of documentation that we can use as a reference, or base, to know what to
test.
Test analysis includes the following activities:

• Analyzing and understanding any documentation that we will use for testing, to make
sure it is testable. Examples of the test basis include:
o Requirement specifications (customer requirements, functional requirements,
system requirements, user stories that specify the functional and non-functional
component or system behavior)
o Design and implementation information (such as software or system
architecture diagrams or documentation, design specifications, modeling diagrams
(e.g. UML), interface specifications, system state transitions or state machines)
o Risk analysis reports (which list all the items of the software that are risky
and require more attention from us)
• During the test analysis activities there is a good opportunity to evaluate the test basis
and test items to identify defects of various types, such as:
o Ambiguities (something that confuses the reader and might be interpreted
differently by different people)
o Omissions (something is not mentioned; it could be some signals or some values)
o Inconsistencies (something is mentioned one way in one place, but
mentioned differently somewhere else)
o Inaccuracies (something is not accurate)
o Contradictions (could be a contradiction between two state transitions or
statements, which is a kind of inconsistency between them; if two sentences are
contradictory, then one must be true and one must be false, but if they are
inconsistent, then both could be false)
o Superfluous statements (unnecessary statements that add nothing to the
meaning)
• Identifying the features and sets of features to be tested
• Defining and prioritizing test conditions for each feature based on analysis of the test
basis, considering functional, non-functional and structural characteristics, other
business and technical factors, and levels of risk.
• Finally, we should focus on capturing bi-directional traceability between each
element of the test basis and the associated test conditions.

The main goal of test analysis is to reduce the likelihood of omitting important
test conditions and to define more precise and accurate test conditions. The application of
black-box, white-box, and experience-based test techniques can be useful in the process of test
analysis.

Test Design
During test design, the test conditions are elaborated into high-level test cases, sets of high-level
test cases and other testware.

Test analysis answers the question “what to test?”, while test design answers the
question “how to test?”.
Test design also includes a set of activities:

• Designing and prioritizing test cases and sets of test cases based on test techniques
• Identifying the necessary test data to support the test conditions and test cases (what kind
of data we should use to create our test design (specification), and how to combine
test conditions so that a small number of test cases can cover as many of the test
conditions as possible)
• Designing the test environment and identifying any required infrastructure and
tools
• Capturing bi-directional traceability between the test basis, test conditions, test cases,
and test procedures

As with test analysis, test design may also result in the identification of similar types of defects
in the test basis.

As with test analysis, the identification of defects during test design is an important potential
benefit.

10. Test Implementation and test execution


Test implementation
During test design we answered the question “How to test?”; now, in test
implementation, we need to focus on the following question: “Do we now have
everything in place to run the tests?”
Test implementation Activities:

• Developing and prioritizing test procedures, and potentially creating automated test
scripts
• Creating test suites from the test procedures and automated test scripts, if any
• Arranging the test suites within a test execution schedule in a way that results in
efficient test execution
• Building the test environment (simulation or other infrastructure)
• Preparing and implementing test data, and ensuring it is properly loaded in the test
environment
• Verifying and updating bi-directional traceability between the test basis, test conditions,
test cases, test procedures and test suites

Test execution
During test execution, test suites are run in accordance with the test execution schedule

As tests are run, their outcomes, the actual results, need to be logged and compared to the
expected results.

Whenever there is a discrepancy between the expected and actual results, a test incident (bug
report) should be raised to trigger an investigation.
Test execution activities:

• Keeping a log of testing activities, including the outcome (passed/failed) and the
versions of software, data and tools, while recording the IDs and versions of the test
item(s) or test object, test tool(s), and testware used in running the tests
• Running test cases in the determined order manually or using test automation tools
• Comparing actual results with expected results
• Analyzing anomalies to establish their likely causes (an anomaly is a difference between the
actual result and the expected result)

• Reporting defects based on the failures observed, with as much information as possible,
and communicating them to the developers so they can try to fix them
• After a bug is fixed, retesting (repeating the test activities) to confirm that the bug
was actually fixed, which is called confirmation testing
• It is also very important, if time was planned for it, to check not only the tests that were
affected but to run all the tests, in order to make sure that the new bug fix has not
unintentionally introduced other bugs into parts that previously worked
fine, which is called regression testing
• Verifying and updating bi-directional traceability between the test basis, test conditions,
test cases, test procedures, and test results

11. Test Completion


Test completion activities occur at project milestones such as when:

• Software system is released


• Test project is completed (or canceled)
• Milestone has been achieved
• Agile project iteration is finished (followed with a retrospective meeting)
• Test level is completed
• Maintenance release has been completed
Test completion activities collect data to consolidate experience, testware and any other
relevant information.
Test completion activities concentrate on making sure that everything is finalized, synchronized
and documented: reports are written, defects are closed, and defects deferred to another
phase are clearly marked as such:

• Checking which planned deliverables have been delivered


• Ensuring that the documentation is in order (the requirements document is in sync with the
design document, which is in sync with the delivered software; all have the same baseline)
• Checking whether all defect reports are closed, entering change requests or product
backlog items for any defects that remain unresolved at the end of test execution
• Creating a test summary report to be communicated to stakeholders
• Finalizing and archiving the test environment, the test data, the test infrastructure and
other testware for later reuse (make a baseline/freeze after each release: archive all the
results and data that were used, to make sure you will be able to repeat the same
tests if needed in the future)
• Making sure that we delete any confidential data
• Handing over the testware to the maintenance teams, other project teams, and/or
other stakeholders who could benefit from its use

• Analyzing lessons learned from the completed test activities to determine changes
needed for future iterations, releases and projects
• Using the information gathered to improve test process maturity

14. Test Work Products


There is significant variation in the types of work products created during the test process, and in
the ways those work products are:

• Organized
• Managed
• Named

Testing Standards
ISO/IEC/IEEE 29119-1 (2013) Software and systems engineering – Software testing, Part 1:
Covers testing concepts and definitions
ISO/IEC/IEEE 29119-2 (2013) Software and systems engineering – Software testing, Part 2:
Covers test processes
ISO/IEC/IEEE 29119-3 (2013) Software and systems engineering – Software testing, Part 3:
Covers test documentation

Test Monitoring & Control Work Products


Typically include various types of test reports, including:
1. Test progress reports
2. Test summary reports (milestones)

All test reports should provide audience-relevant details about the test progress as of the date
of the report, including summarizing the test execution results once those become available.
Test monitoring and control work products should also address project management concerns,
such as:

• Task completion
• Resource allocation and usage
• Effort

Test Analysis work products
Include documents that contain defined and prioritized test conditions, each of which is ideally
bidirectionally traceable to the specific element(s) of the test basis it covers.
Test Design work products
Test design results in test cases and sets of test cases to exercise the test conditions defined in
test analysis

• It is often good practice to design high-level test cases without concrete values for input
data and expected results
• Such high-level test cases are reusable across multiple test cycles with different
concrete data, while still adequately documenting the scope of the test case
Ideally, each test case is bidirectionally traceable to the test condition(s) it covers.
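Bi-directional traceability can be kept in something as simple as a mapping; a minimal sketch with invented requirement and test-case IDs:

```python
# Illustrative bidirectional traceability: requirement <-> test case IDs
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],                      # no test yet: a coverage gap
}

# Invert the mapping so a failing test can be traced back to requirements
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

uncovered = [r for r, tcs in req_to_tests.items() if not tcs]
print(uncovered, test_to_reqs["TC-3"])  # ['REQ-3'] ['REQ-2']
```

Real projects usually keep this matrix in a test management tool, but the two directions of lookup are the same idea.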
Test Implementation work products
It includes:

• Test procedures and the sequencing of those test procedures


• Test suites
• Test execution schedule
Creating work products using, or used by, tools such as service virtualization and automated test
scripts.
Creating and verifying test data and the test environment.
Assigning concrete values to the inputs and expected results of test cases.
Turning high-level (logical) test cases into executable low-level (concrete) test
cases.
Once test implementation is complete, the achievement of coverage criteria established in the
test plan can be demonstrated via bi-directional traceability between test procedures and
specific elements of the test basis, through the test cases and test conditions
Test conditions defined in test analysis may be further refined in test implementation.
Test Execution work products
Documentation of the status of individual test cases or test procedures (e.g. ready to run,
passed, failed, blocked, deliberately skipped, and so on)
Defect reports (detailed as process)

Documentation about which test item(s), test object(s), test tools, and testware were involved
in the testing.
Once test execution is complete, the status of each element of the test basis can be determined
and reported via bi-directional traceability with the associated test procedure(s):

• Which requirements have passed all planned tests


• Which requirements have failed tests and/or have defects associated with them
• Which requirements have planned tests still waiting to be run
This enables verification that the coverage criteria have been met and enables the reporting of
test results in terms that are understandable to stakeholders.
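As an illustrative sketch (not part of the syllabus), these per-requirement statuses can be derived mechanically from a traceability mapping and the test results; all identifiers and data below are hypothetical:

```python
# Hypothetical traceability: requirement id -> test case ids covering it.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": ["TC-4", "TC-5"],
}
# Execution results so far; TC-5 has not been run yet.
results = {"TC-1": "passed", "TC-2": "passed",
           "TC-3": "failed", "TC-4": "passed"}

def requirement_status(req):
    """Classify a requirement into one of the three statuses above."""
    statuses = [results.get(tc, "not run") for tc in traceability[req]]
    if all(s == "passed" for s in statuses):
        return "passed all planned tests"
    if "failed" in statuses:
        return "has failed tests"
    return "has planned tests still waiting to be run"

report = {req: requirement_status(req) for req in traceability}
assert report["REQ-1"] == "passed all planned tests"
assert report["REQ-2"] == "has failed tests"
assert report["REQ-3"] == "has planned tests still waiting to be run"
```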

Test completion work products include:

• Test summary reports
• Action items for improvement of subsequent projects or iterations
• Change requests
• Product backlog items

17. Traceability between the test basis and test work products
It is beneficial if the test basis has measurable coverage criteria defined.

The coverage criteria can act effectively as key performance indicators (KPIs) to drive the
activities that demonstrate achievement of software test objectives.

Some test management tools provide test work product models that match part or all of the
test work products.
Some organizations build their own management systems to organize the work products and
provide the traceability information they require.
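A minimal sketch of such a coverage KPI, assuming a simple mapping from test basis elements (e.g., requirements) to the test cases covering them; the function and names are illustrative, not a tool's API:

```python
def coverage_kpi(basis_elements, traceability):
    """Percent of test basis elements covered by at least one test case."""
    covered = sum(1 for e in basis_elements if traceability.get(e))
    return 100.0 * covered / len(basis_elements)

# REQ-3 has an empty mapping and REQ-4 has none at all: both uncovered.
elements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
mapping = {"REQ-1": ["TC-1"], "REQ-2": ["TC-2", "TC-3"], "REQ-3": []}
assert coverage_kpi(elements, mapping) == 50.0
```

Tracking this number over time is one way the coverage criteria can drive test activities toward the planned objectives.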
FACTORS that influence the test process for an organization

• Software development lifecycle model and project methodologies being used


• Test levels and test types being considered
• Product and project risks
• Business domain
• Operational constraints, including but not limited to:
o Budgets and resources
o Timescales
o Complexity
o Contractual and regulatory requirements
• Organizational policies and practices
• Required internal and external standards

TEST MANAGEMENT
1. Risk and Testing
Risk involves the possibility of an event in the future which has negative consequences.
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to
carry out product risk analysis. To minimize the likelihood of product risks becoming actual
events, the following steps are followed:

• Analyze (and re-evaluate on a regular basis) what can go wrong (risks)


• Determine which risks are important to deal with
• Implement actions to mitigate those risks
• Make contingency plans to deal with the risks should they become actual events (in
addition, testing may identify new risks)
Product and project risks
1. Product risks (concern the software itself)
2. Project risks (concern the activities/steps needed to create the product, i.e., how we
develop the software)
1. Product (Quality) Risks

A product risk involves the possibility that a work product (e.g., a specification, component,
system or test) may fail to satisfy the legitimate needs of its users and/or stakeholders.
Product risks associated with specific quality characteristics of a product (e.g., functional
suitability, reliability, performance efficiency, usability, security, compatibility,
maintainability and portability) are also called quality risks:
Product Risk = Quality Risk

• Software might not perform its intended functions according to the specification
• Software might not perform its intended functions according to user, customer and/or
stakeholder needs
• System architecture may not adequately support some non-functional requirement(s)
• A particular computation may be performed incorrectly in some circumstances
• A loop control structure may be coded incorrectly
• Response-times may be inadequate for a high-performance transaction processing
system
• User experience (UX) feedback might not meet product expectations
2. Project Risks
A project risk involves situations that, should they occur, may have a negative effect on a
project’s ability to achieve its objectives.

Project issues:

• Delays (may occur in delivery, task completion, or satisfaction of exit criteria/definition of done)


• Inaccurate estimates, reallocation of funds, or general cost-cutting
• Late changes (substantial rework that will add additional time)
Organizational factors:

• Skills, training and staff shortages (resources may not be sufficient)


• Personnel issues (can create conflicts and problems between members)
• User, business staff or subject matter experts may not be available (due to conflicting
business priorities or time frames)
Political issues:

• Tester issues (testers may not communicate their needs or the test results sufficiently or
in time)
• Developer and/or tester failures (failure to follow up on information found in testing and
reviews, or to improve the code or test implementation)
• Improper attitude toward, or expectations of, testing (e.g., not appreciating the value of
finding defects during testing)
Technical issues:

• Poor requirements (requirements may not be defined well enough, or may be unclear or ambiguous)

• The requirements may not be met (given the existing constraints)
• Test environment readiness (the environment may not be ready on time)
• Late data conversion, migration planning and their tool support (might be late)
• Development process weaknesses (may impair the consistency or quality of project work
products such as design, code, configuration, data and test cases)
• Poor defect management (can allow defects to accumulate unnoticed in the
software)
Supplier issues:

• Third party failure


• Contractual issues

The level of risk is determined by the likelihood of the event and the impact (the harm)
from that event.
Level of risk = Probability of the risk occurring x Impact if it did happen

For example, a calculation error in a report typically carries a higher impact than a cosmetic
defect in the user interface. It is very important to prioritize the risks as early as possible
in order to avoid delays.
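The formula above can be sketched as a simple prioritization, assuming likelihood and impact are scored on a 1-5 scale (an assumption; scales vary by organization), with the example risks hypothetical:

```python
# Score each risk as likelihood x impact and sort by the resulting level.
risks = [
    {"id": "R1", "desc": "error in a report calculation",
     "likelihood": 2, "impact": 5},
    {"id": "R2", "desc": "cosmetic user interface defect",
     "likelihood": 4, "impact": 1},
]
for r in risks:
    r["level"] = r["likelihood"] * r["impact"]

# Highest-level risks are addressed first in the test plan.
prioritized = sorted(risks, key=lambda r: r["level"], reverse=True)
assert prioritized[0]["id"] == "R1"  # 2 x 5 = 10 outranks 4 x 1 = 4
```

Even though the UI defect is more likely, its low impact keeps it below the calculation error in the ordering.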
Risk-based Testing and Product Quality
Based on risk, we know when to start testing, where to test more, and how to make testing
decisions (what to retest, what to skip, when to stop testing, how to reprioritize,
and so on).

Testing is used as a risk mitigation activity, to provide feedback about identified risks, as well as
providing feedback on residual (uncovered) risks.
A risk-based approach to testing provides proactive opportunities to reduce the level of product
risk.
Risk-based testing involves product risk analysis, which includes the identification of product
risks and the assessment of each risk’s likelihood and impact. The resulting product risk
information is used to guide test planning, the specification, preparation and execution of test
cases and test monitoring and control. Analyzing product risks early contributes to the success
of a project.

Risk response - the results of product risk analysis are used to:

• Determine the test techniques to be employed


• Determine the levels and types of testing to be performed
• Determine the extent of testing to be carried out
• Prioritize testing to find the critical defects as early as possible
• Determine whether any activities in addition to testing could be employed to reduce risk
In order to lower the risk level, there are four typical risk responses:

1. Avoid – do whatever is needed to make sure the risk is reduced to zero.

e.g., rumors that one of the team members might move to another company. To avoid
such a risk, we should not assign them to the project in the first place; look for another
resource so that the impact will be zero.
2. Mitigate – lower the risk level, either by lowering the risk’s likelihood or by lowering
its impact. For example, give the team member a salary increase or more responsibility,
or just give them minor tasks so that the impact is low if they do leave.
3. Transfer – move the risk from our side to another side. For example, inform the manager
that once the member leaves the company, the manager will be responsible for replacing
them with someone of similar qualifications.
4. Accept – accept the risk passively and wait until it happens. In the meantime, be
aware that additional time will be needed to make a transfer plan to a new resource.

2. Independent Testing
3. Tasks of Test Manager and Tester
1. Tasks of the test manager

Overall responsibility for testing (the test process and successful leadership of the test
activities).
Typical activities:

• Test strategy (for the project, and test policy for the organization if not already in place)
• Plan test activities (considering the context and understanding the test objectives and
risks; this includes selecting the test approach, estimating the time, effort and cost of
testing, acquiring resources, defining test levels and test cycles, planning defect
management, and creating a high-level test schedule)
• Write and update the test plan
• Coordinate (the test strategy and test plan with project manager, product owner and
others)
• Share the test perspective (to other project activities, such as integration planning)

• Initiate activities (analysis, design, implementation, review and execution of tests;
monitor test progress and results; check the status of exit criteria, or the definition of
done in agile)
• Prepare and deliver (test reports or test summary reports based on the information
gathered during testing)
• Adapt to changes (adjust planning based on test results and progress, and take action
if needed for test control)
• Support setting up the systems (defect management system and issue reporting)
• Metrics (provide metrics based on measuring test progress and evaluating the quality of
the testing and the product)
• Tools (support selecting and installing tools to support the test process)
• Design the test environment (ensure it is put in place before test execution and
managed during test execution)
• Advocate for testers (promote and educate the testers and the test team)
• Develop testers’ skills (knowledge transfer, training plans,
performance evaluations, coaching and so on)

2. Tasks of the tester

• Contribute to the test plan (review it and contribute based on lessons learned)
• Analyze for testability (review the requirements, specifications and models for
testability)
• Create test conditions (identify and document test conditions, and establish traceability
between test cases, test conditions, the test basis and requirements)
• Implement the test environment (design, set up and verify the test environment,
including the hardware and software needed for testing, often with coordination support)
• Implement tests (design and implement test cases and test procedures)
• Prepare test data (acquire the test data)
• Create the test schedule (each tester creates their own detailed test
schedule around the high-level test schedule created by the test manager, e.g., how long
each subtask takes and what must be prepared before the test run)
• Execute and log the tests (evaluate the results and document deviations from the
expected results)
• Use tools (use appropriate tools to facilitate the test process)
• Automate tests (supported by a developer or a test automation expert)
• Evaluate non-functional characteristics (performance efficiency, reliability, usability,
security, compatibility and portability)
• Review (tests created by others)

4. Test Strategy and Test Approach
A test strategy provides a generalized description of the test process, usually at the product or
organizational level.
1. Test Strategies
1. Analytical
• Testing is based on an analysis of some factor (usually during the requirements and
design stages of the project, to define where to test first, where to test more and
when to stop testing)
• Risk-based strategy (involves performing risk analysis using project documents and
stakeholder input)
• Requirements-based strategy (where an analysis of the requirements specification forms
the basis for planning, estimating and designing tests)
2. Model-based
• Design or benchmark some formal or informal model that the system must
follow
• The model is based on some required aspect of the product, such as a
function, a business process, an internal structure, or a non-functional
characteristic
3. Methodical
• Relies on making systematic use of some predefined set of tests or test
conditions (state transitions, diagrams and so on)
• Failure-based (including error guessing and fault attacks)
• Checklist based
• Quality-characteristic based
4. Process or standard-compliant
• Adopt an industry standard or a known process to test the system
• IEEE 29119 standard
• Agile methodologies or Kanban
5. Reactive
• Testing is reactive to the component or system being tested
• Exploratory testing
6. Directed or consultative
• Driven primarily by the advice, guidance or instructions of stakeholders, business
domain or technology experts
• Ask the users or developers what to test or to obtain some directive
7. Regression-averse
• Reuse of existing testware (test cases and test data)
• Test automation (standard test suites)

The test approach is the implementation of the test strategy for a particular project or release

The test approach is the starting point for selecting the test techniques, test levels and test
types and for defining the entry criteria and exit criteria (or definition of ready and definition of
done, respectively).

The tailoring of the strategy is based on decisions made in relation to the complexity and goals
of the project, the type of product being developed, and product risk analysis.
Factors for defining the strategy

• Risk (testing is about risk management, so consider the risks and their level; for a well-
established application that is evolving slowly, regression-averse testing is an important strategy)
• Available resources and skills (what skills the testers possess and lack; constraints may
involve both skills and time)
• Objectives (testing must satisfy the needs of stakeholders to be successful; if the
objective is to find as many defects as possible with a minimal amount of time and
effort invested, then a reactive strategy makes sense)
• Regulations (we must satisfy not only the stakeholders but also regulators; in this case a
methodical test strategy may satisfy those regulations)
• The nature of the product and the business (different products and business focuses call
for different testing approaches)
• Safety (safety considerations promote more formal strategies)
• Technology

5. Test Planning
A test plan is a project plan for the testing work to be done in development or maintenance
projects. Planning is influenced by many factors, for example:

• Test policy or strategy of the organization


• The development life cycle
• The scope of testing
• Objectives
• Risks
• Constraints
• Criticality
• Availability of resources
Having a test plan guides our thinking, serves as a vehicle for communication with the
stakeholders, and helps us manage change.

A test plan ensures that there is initially a list of tasks and milestones in a baseline plan to track
progress against, as well as defining the shape and size of the test effort (a plan must contain at
least: what to do, when, how, by whom, how to manage changes to the plan, how to track
the progress of the plan, and how much it will cost).
Test Planning (K2)
Test planning is based on the test levels (e.g., from the V-model), with a master test plan to coordinate between them:

• Component test plan


• Integration test plan
• System test plan
• Acceptance test plan
It is important to use a test plan template (based on the ISO/IEC/IEEE 29119-3 standard) and
keep it updated with versioning.
As the project and test planning progress, more information becomes available and more detail
can be included in the test plan.

Test planning is a continuous activity and is performed throughout the product’s lifecycle (it is
normal to change the plan if information was missing or turns out to be wrong).

Feedback from test activities should be used to recognize changing risks so that planning can be
adjusted (based on time spent on reviews, test specification/implementation, execution,
analyzing bugs, and reporting).
Test planning activities:

• Determining the scope, objectives and risks of testing


• Defining the overall approach of testing (and ensuring that entry and exit criteria are
defined)
• Integrating and coordinating the test activities into the software lifecycle activities
(using the correct sequence and dependencies: requirements, review, test specification,
test implementation, test execution, analysis of actual versus expected results,
reporting, bug handling, traceability)
• Making decisions about what to test, the people and other resources required
(planning, based on work packages, what is in focus to be tested, who will test, who will
collect the results, and so on)
• Scheduling the test activities (analyze, design, implementation)
• Selecting metrics for test monitoring and control
• Budgeting for the test activities
• Determining the level of detail and structure for test documentation

6. Entry and Exit Criteria
Also named definition of ready and definition of done in Agile methods. In order to have an
effective control over the quality of the software and of the testing, it is important to have
criteria which define when a given test activity should start and when the activity is complete.
1. Entry Criteria (Definition of Ready)
Entry criteria define the preconditions for undertaking a given test activity (this could include
the beginning of a test level, or when test design or test execution is ready to start). If entry
criteria are not met, it is likely that the activity will prove more difficult, more time-consuming,
more costly and more risky.

• Availability of testable requirements, user stories, and/or models


• Availability of test items that have met the exit criteria for any prior test levels
• Availability and readiness of the test environment
• Availability of necessary test tools
• Availability of test data and other necessary resources
2. Exit Criteria (Definition of Done)
Exit criteria are used to determine when a given test activity has been completed or when it
should stop (they define what conditions must be achieved in order to declare a test level or a
set of tests complete).

• Planned tests have been executed


• A defined level of coverage (e.g., of requirements, user stories, acceptance criteria, risks,
code) has been achieved
• The number of unresolved defects is within an agreed limit
• The number of estimated remaining defects is sufficiently low
• The evaluated levels of quality characteristics are sufficient (based on coverage levels
and pass/fail reports)
Understand different criteria

Remember that, according to the test process, when we reach the exit criteria the tester should
provide a report of their findings to the test manager or to the stakeholders, who decide
whether testing should continue or stop.
ENTRY CRITERIA -> PASS/FAIL CRITERIA -> EXIT CRITERIA
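As a hedged sketch, checking exit criteria against agreed limits might look like the following; the metric names and thresholds are illustrative assumptions, not a standard:

```python
def exit_criteria_met(metrics, limits):
    """True when planned tests are executed, coverage reaches the agreed
    level, and unresolved defects stay within the agreed limit."""
    return (metrics["tests_executed"] >= limits["tests_planned"]
            and metrics["coverage_pct"] >= limits["min_coverage_pct"]
            and metrics["unresolved_defects"] <= limits["max_unresolved_defects"])

# Hypothetical agreed limits from the test plan.
limits = {"tests_planned": 120, "min_coverage_pct": 90.0,
          "max_unresolved_defects": 5}
# Hypothetical metrics gathered at the end of a test cycle.
metrics = {"tests_executed": 120, "coverage_pct": 93.5,
           "unresolved_defects": 3}
assert exit_criteria_met(metrics, limits)
```

When the check passes, the tester's report goes to the test manager or stakeholders, who make the final continue/stop decision.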
