Lecture 5: Verification
Presented by: Dr. Yasmine Afify
[email protected]
Reference: https://www.istqb.org/certifications/certified-tester-foundation-level
• Likewise, you can calculate other parameters such as test cases not executed, test cases passed, test cases failed, test cases blocked, etc. (a worked sketch follows).
• Requirement Creep = (number of requirements added / number of initial requirements) × 100
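A minimal sketch (hypothetical numbers, not taken from the slides) of computing such monitoring metrics:

```python
# Minimal sketch (hypothetical numbers) of the monitoring metrics above.

total_cases = 100
executed, passed = 80, 70

executed_pct = executed / total_cases * 100   # 80.0% of test cases executed
passed_pct = passed / executed * 100          # 87.5% of executed cases passed

initial_reqs, added_reqs = 50, 10
requirement_creep = added_reqs / initial_reqs * 100   # (10 / 50) * 100 = 20.0%

print(f"Executed: {executed_pct:.1f}%  Passed: {passed_pct:.1f}%  "
      f"Requirement creep: {requirement_creep:.1f}%")
```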
Entry Criteria (Definition of Ready)
• The minimum set of conditions (prerequisite items) that should be met before software testing starts.
• Used to determine when a given test activity should start.
Example: Entry Criteria for Integration Testing
Exit Criteria (Definition of Done)
• The minimum set of conditions or activities that should be completed in order to stop the software testing.
• Used to determine whether a given test activity is completed or not.
• It is defined in the Test Plan.
Tester Role
• Design and implement test cases and test procedures
• Prepare and acquire test data
• Create a detailed test execution schedule
• Execute tests, evaluate results, document deviations from expected results
• Use appropriate tools to facilitate the test process
• Automate tests as needed (supported by a developer or test automation expert)
• Evaluate non-functional characteristics
• Review tests developed by others
Testing Role
Different people may take on the role of testing at different test levels:
• At the component testing level and the component integration testing level, the role of a tester is often done by developers.
• At the system test level and the system integration test level, the role of a tester is often done by an independent test team.
• At the operational acceptance test level, the role of a tester is often done by operations/administration staff.
• At the user acceptance test level, the role of a tester is often done by subject matter experts and users.
Test Organization and Independence
• The effectiveness of finding defects by testing and reviews can be
improved by using independent testers.
Test Work Products Examples
• Test planning work products typically include one or more test plans.
• Test monitoring and control work products typically include various types of test reports, including test progress reports (produced on a regular basis) and test summary reports (produced at various completion milestones).
• Test analysis work products include defined and prioritized test conditions, often traceable to the specific element(s) of the test basis they cover.
• Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. Test design also results in the design and/or identification of the necessary test data, the design of the test environment, and the identification of infrastructure and tools.
Test Work Products Examples
Test implementation work products include:
• Test procedures and the sequencing of those test procedures
• Test suites
• A test execution schedule
The test data serve to assign concrete values to the inputs and expected results of test cases. Such concrete values, together with explicit directions about their use, turn high-level test cases into executable low-level test cases.
Test Reporting Audience
• In addition to tailoring test reports based on the project context, test reports should be tailored based on the report's audience.
• The type and amount of information that should be included for a technical audience or a test team may be different from what would be included in an executive summary report.
• In the former case, detailed information on defect types and trends may be important. In the latter case, a high-level report (e.g., a status summary of defects by priority, budget, schedule, and test conditions passed/failed/not tested) may be more appropriate.
Test Control
• Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported.
• Examples:
• Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
• Changing the test schedule due to availability or unavailability of a test environment or other resources
• Re-evaluating whether a test item meets an entry or exit criterion due to rework
Test Strategy Types
Analytical: the most common strategy; it might be based on analysis of requirements (in user acceptance testing), specifications (in system or component testing), or risk-based testing (tests are designed and prioritized based on the level of risk).
Model-Based: related to real usage. The model might be a diagram that shows the user route taken through a website, a diagram that shows how different technical components integrate, or both. Tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models and state models.
Test Strategy Types
Standard-compliant: involves designing and implementing tests based on external rules and standards, such as those specified by industry-specific standards. This is most likely used when you work in a regulated sector, such as self-driving cars or aviation, where there are specific standards that testing must meet.
Test Strategy Types
Reactive: tests respond to the component/system being tested and to events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may immediately be executed, in response to knowledge gained from prior test results. You decide what to test when you receive the software. This can happen in exploratory testing, where things you observe on your testing journey drive further tests.
Test Plan
• A master test plan is developed during the planning phase. It outlines test activities for development and maintenance projects.
• As the project and test planning evolve, more information becomes available and more detail can be included in the test plan. Test planning is a continuous activity and is performed throughout the product's lifecycle.
• Planning may be documented in a master test plan and in separate test plans for test levels, such as system testing and acceptance testing, or for separate test types, such as usability testing and performance testing.
• The main goal of the test plan is to include all the details:
• What to test
• When to test
• How to test
• Who will be the tester
Test Strategy vs. Test Plan

Test Strategy contents:
• Scope and overview
• Test Approach
• Testing tools
• Industry standards to follow
• Test deliverables
• Testing metrics
• Requirement Traceability Matrix
• Risk and mitigation
• Reporting tool

Test Plan contents:
• Identifier
• References
• Risks and Contingencies
• Test Items
• Test Deliverables
• Test Levels and Types
• Pass/Fail Criteria
• Features To Be Tested (In Scope)
• Features Not To Be Tested (Out of Scope)
• Testing Tasks
• Environmental Needs
• Responsibilities
• Staffing and Training Needs
• Schedule
Test Scenario
• A Test Scenario gives the idea of what we have to test.
• A Test Scenario is like a high-level test case.
http://SoftwareTestingHelp.com
Test Case
• Once the test scenarios are prepared, reviewed, and approved, test cases are prepared based on these scenarios.
• Test cases are the set of positive and negative executable steps of a test scenario, with a set of pre-conditions, test data, expected results, post-conditions, and actual results.
• Their main intent is to determine whether the software passes or fails in terms of its functionality and other aspects.
• Test cases are written to keep track of the testing coverage of the software.
• Assume that we need to test the functionality of the login page of the Gmail application (see the sketch after this list):
Test Case 1: Enter valid User Name and valid Password
Test Case 2: Enter valid User Name and invalid Password
Test Case 3: Enter invalid User Name and valid Password
Test Case 4: Enter invalid User Name and invalid Password
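A minimal sketch of these four cases as automated pytest-style checks; the login() function here is a hypothetical stand-in, not Gmail's real API:

```python
# Minimal sketch of the four login test cases as automated checks.
# login() is a hypothetical stand-in for the system under test.

def login(username: str, password: str) -> bool:
    """Toy authentication: accepts one hard-coded valid credential pair."""
    return username == "valid_user" and password == "valid_pass"

def test_valid_user_valid_password():      # Test Case 1 (positive)
    assert login("valid_user", "valid_pass") is True

def test_valid_user_invalid_password():    # Test Case 2 (negative)
    assert login("valid_user", "wrong_pass") is False

def test_invalid_user_valid_password():    # Test Case 3 (negative)
    assert login("wrong_user", "valid_pass") is False

def test_invalid_user_invalid_password():  # Test Case 4 (negative)
    assert login("wrong_user", "wrong_pass") is False
```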
Logical vs. Physical Test Cases
• The logical test case (high-level) describes what is tested, not how it is done.
• The aim of a physical test case (low-level) is to fill in the details of the corresponding logical test case so that the how is clear. It is the concrete elaboration of a logical test case, with choices made for the values of all required inputs and the settings of the environmental factors (a sketch follows).
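A minimal sketch (hypothetical application and values, not from the slides) of elaborating a logical test case into a physical one:

```python
# Logical (high-level): describes WHAT is tested, no concrete values.
logical_case = "Login with an invalid password is rejected"

# Physical (low-level): fills in HOW, with concrete values chosen for all
# required inputs and environment settings (all values are hypothetical).
physical_case = {
    "precondition": "user account 'valid_user' exists",
    "environment": "Chrome browser, test server",
    "steps": [
        "open the login page",
        "enter username 'valid_user'",
        "enter password 'wrong_pass'",
        "click the Submit button",
    ],
    "expected_result": "an 'Invalid credentials' error message is shown",
}
print(physical_case["expected_result"])
```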
Negative vs. Positive Test Cases
• Positive test cases ensure that users can perform
appropriate actions when using valid data.
• Negative test cases are performed to try to “break” the
software by performing invalid (or unacceptable)
actions, or by using invalid data.
Test Case Template

Example Test Cases for Sign-up Page
Defect/Bug Report
• After uncovering a defect (bug), testers generate a formal defect report.
• The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
• Defect severity is the impact/effect of the bug on the application. It can be Showstopper/Critical/High/Medium/Low.
• Defect priority is the impact/effect of the bug on the customer's business. Its main focus is how soon/urgently the defect should be fixed: it gives the order in which defects should be resolved by developers. It can be High/Medium/Low (see the sketch after this list).
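A minimal sketch (hypothetical defect) of the key fields of a defect report, including the severity/priority distinction just described:

```python
# Minimal sketch (hypothetical defect) of a defect report's key fields.
defect_report = {
    "id": "BUG-101",
    "title": "Submit button unresponsive on login page",
    "steps_to_reproduce": [
        "open the login page",
        "enter valid credentials",
        "click the Submit button",
    ],
    "expected_result": "user is logged in",
    "actual_result": "nothing happens; no error message is shown",
    "severity": "Critical",  # impact of the bug on the application
    "priority": "High",      # how urgently developers should fix it
}
```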
Exercise: Specify scenario severity and priority
a. Submit button is not working on a login page and customers are unable to log in to the application. → High Priority & High Severity
b. Crash in some functionality which is going to be delivered after a couple of releases. → Low Priority & High Severity
c. Spelling mistake of a company name on the homepage. → High Priority & Low Severity
d. FAQ page takes a long time to load. → Low Priority & Low Severity
e. Company logo or tagline issues. → High Priority & Low Severity
f. On a bank website, an error message pops up when a customer clicks on the transfer money button.
g. Font family or font size or colour or spelling issues.
Bug Report
Writing an Effective Bug Report
Practical Example: Test Case Writing (worked test case and bug report examples)
Test Procedures
• The Test Procedure Specification specifies the sequence/order of actions for test cases.
• Defining the test procedures requires carefully identifying constraints and dependencies that might influence the test execution sequence.
• Test procedures list any initial preconditions (e.g., loading of test data from a data repository) and any activities following execution (e.g., resetting the system status).
• Contents of a Test Procedure:
• Identifier
• Purpose
• Special requirements
• Procedure steps
Test Execution Schedule
• Once the various test cases and test procedures are assembled into test suites, the test suites can be arranged in a test execution schedule that defines the order in which they are to be run.
• The test execution schedule should take into account factors such as prioritization, dependencies, and the most efficient sequence for executing the tests.
• Ideally, test cases would be ordered based on their priority levels, usually by executing the test cases with the highest priority first. However, this practice may not work if the test cases, or the features being tested, have dependencies. If a test case with a higher priority depends on a test case with a lower priority, the lower-priority test case must be executed first (see the sketch after this list).
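A minimal sketch (hypothetical test cases, not from the slides) of building such a schedule: it runs high-priority cases first while still respecting dependencies, so a low-priority prerequisite of a high-priority case is pulled forward.

```python
# Priority 1 = highest. depends_on lists cases that must run first.
import heapq

test_cases = {
    "TC1": {"priority": 1, "depends_on": ["TC3"]},  # high priority, blocked by TC3
    "TC2": {"priority": 2, "depends_on": []},
    "TC3": {"priority": 3, "depends_on": []},       # low-priority prerequisite
}

def schedule(cases):
    # dependents[x] = test cases that cannot run until x has run
    dependents = {name: [] for name in cases}
    for name, spec in cases.items():
        for dep in spec["depends_on"]:
            dependents[dep].append(name)

    # A prerequisite inherits the urgency of whatever it unblocks, so a
    # low-priority case needed by a high-priority one is scheduled early.
    def urgency(name):
        return min([cases[name]["priority"]] +
                   [urgency(d) for d in dependents[name]])

    unmet = {name: set(spec["depends_on"]) for name, spec in cases.items()}
    ready = [(urgency(n), n) for n, deps in unmet.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        _, name = heapq.heappop(ready)
        order.append(name)
        for d in dependents[name]:   # release cases that were waiting on this one
            unmet[d].discard(name)
            if not unmet[d]:
                heapq.heappush(ready, (urgency(d), d))
    return order

print(schedule(test_cases))  # ['TC3', 'TC1', 'TC2'] - TC3 runs first to unblock TC1
```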
Test Conditions, Cases, Procedures and Schedule
Test Condition (what) → Test Cases (how) → Test Procedure Specification (how) → Test Execution Schedule (when)
Test procedures are documented as manual test scripts, test procedure specifications, or automated test scripts, and are scheduled according to priority.
Requirements Traceability Matrix (RTM)
• To implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between elements of the test basis and the various test work products associated with each element.
• The Requirements Traceability Matrix (RTM) connects requirements to test cases throughout the validation process.
• It shows the relationship between requirements and test cases, answering questions such as (see the sketch after this list):
• Which test cases may be affected by a change of requirement?
• How much coverage of requirements has been achieved?
• Are any requirements overly complex, needing too many test cases?
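A minimal sketch (hypothetical requirement and test case IDs) of an RTM as a mapping from requirements to the test cases that cover them, answering the questions above:

```python
# Minimal sketch of an RTM: requirements mapped to covering test cases.
rtm = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],                  # requirement with no covering test case yet
}

# Which test cases may be affected by a change to REQ-1? (forward traceability)
print(rtm["REQ-1"])               # ['TC-1', 'TC-2']

# How much coverage of requirements has been achieved?
covered = sum(1 for tcs in rtm.values() if tcs)
print(f"{covered / len(rtm):.0%} of requirements covered")    # 67% (2 of 3)

# Which requirement does TC-3 trace back to? (backward traceability)
print([req for req, tcs in rtm.items() if "TC-3" in tcs])     # ['REQ-2']
```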
RTM
• It is a matrix used to trace requirements. It provides forward and backward traceability.
Importance of Traceability
Good traceability supports:
• Analyzing the impact of changes
• Making testing auditable
• Meeting IT governance criteria
• Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests, requirements that failed their tests, requirements that have pending tests)
Test Closure Report
• A report created once the testing phase is successfully completed by meeting the exit criteria defined for the project.
• A document that gives a summary of all the tests conducted.
• It also gives a detailed analysis of the bugs removed and errors found.
• It also presents the list of known issues.
• It is created by the test lead, reviewed by various stakeholders (e.g., test architect, test manager, business analyst, project manager), and finally approved by the client.
Answers to the review questions: B, D, D, A, A, D