
Software Testing, Validation and Verification
Lecture 5

Presented by:
Dr. Yasmine Afify
[email protected]
Reference: https://www.istqb.org/certifications/certified-tester-foundation-level

The ISTQB® Certified Tester Foundation Level (CTFL) certification provides essential testing knowledge that can be put to practical use and, very importantly, explains the terminology and concepts that are used worldwide in the testing domain. CTFL is relevant across software delivery approaches and practices including Waterfall, Agile, DevOps, and Continuous Delivery. CTFL certification is recognized as a prerequisite to all other ISTQB® certifications where Foundation Level is required.
Software Test Metrics
• A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
• Metrics help in estimating the progress, quality, and health of the software testing effort, and in improving the efficiency and effectiveness of the software testing process.
• Metrics can be defined as "STANDARDS OF MEASUREMENT".
Example of Test Metrics
• Schedule slippage = (actual end date – estimated end date) / (planned end date – planned start date) * 100
• Number of tests run per time period = number of tests run / total time
• Fixed defects percentage = (defects fixed / defects reported) * 100
(These formulas are illustrated in the sketch below.)
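A minimal Python sketch of these three metrics, not part of the original slides; the dates and counts below are made up purely for illustration:

from datetime import date

def schedule_slippage(actual_end, estimated_end, planned_start, planned_end):
    # (actual end date - estimated end date) / (planned end date - planned start date) * 100
    return (actual_end - estimated_end).days / (planned_end - planned_start).days * 100

def tests_per_period(tests_run, total_time):
    # number of tests run / total time
    return tests_run / total_time

def fixed_defects_percentage(defects_fixed, defects_reported):
    # (defects fixed / defects reported) * 100
    return defects_fixed / defects_reported * 100

# Hypothetical project: planned 1 Feb - 15 Mar, actually finished 20 Mar.
print(schedule_slippage(date(2024, 3, 20), date(2024, 3, 15),
                        date(2024, 2, 1), date(2024, 3, 15)))           # ~11.6 % slippage
print(tests_per_period(tests_run=120, total_time=40))                   # 3 tests per hour
print(fixed_defects_percentage(defects_fixed=45, defects_reported=60))  # 75.0 %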
Examples of Test Metrics
• To obtain the execution status of test cases as a percentage, we use the formula:
Percentage of test cases executed = (number of test cases executed / total number of test cases written) * 100
• Likewise, you can calculate other parameters such as test cases not executed, test cases passed, test cases failed, test cases blocked, etc.
• Requirement creep = (total number of requirements added / number of initial requirements) * 100
(Both formulas are illustrated in the sketch below.)
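Continuing the same illustrative Python sketch for these two formulas; all values are hypothetical:

def percentage_test_cases_executed(executed, total_written):
    # (number of test cases executed / total number of test cases written) * 100
    return executed / total_written * 100

def requirement_creep(requirements_added, initial_requirements):
    # (total number of requirements added / number of initial requirements) * 100
    return requirements_added / initial_requirements * 100

print(percentage_test_cases_executed(executed=180, total_written=200))    # 90.0 %
print(requirement_creep(requirements_added=12, initial_requirements=80))  # 15.0 %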
Entry Criteria (Definition of Ready)
• A minimum set of conditions (prerequisite items) that should be met before starting the software testing.
• Used to determine when a given test activity should start.
• Examples of entry criteria:
• Verify that the testing environment is available and ready for use.
• Verify that testing tools in the environment are ready for use.
• Verify that testable code is available.
• Verify that test data is available and validated for correctness.
Example:
Entry Criteria for Integration Testing
• All modules completed successfully.
• Components/modules unit tested.
• All high-priority bugs fixed and closed.
• Integration test plan, test cases, and scenarios signed off and documented.
• Required test environment set up for integration testing.
Exit Criteria (Definition of Done)
• A minimum set of conditions or activities that should be completed in order to stop the software testing.
• Used to determine whether a given test activity is completed or not.
• It is defined in the test plan.
• Examples of exit criteria (see the sketch after this list):
• Verify that all planned tests have been run.
• Verify that the required level of requirement coverage has been met.
• Verify that no critical or high-severity defects are left unresolved.
• Verify that all high-risk areas are completely tested.
• Verify that software development activities are completed within the projected cost.
• Verify that software development activities are completed within the projected timelines.
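As a rough illustration only, not an official ISTQB artifact, exit criteria like these can be expressed as a simple automated check; all names and thresholds below are assumptions:

def exit_criteria_met(planned_tests, executed_tests,
                      requirement_coverage, open_critical_or_high_defects,
                      coverage_target=100.0):
    # Done when every planned test has been run, the coverage target is reached,
    # and no critical or high-severity defects remain unresolved.
    return (executed_tests >= planned_tests
            and requirement_coverage >= coverage_target
            and open_critical_or_high_defects == 0)

print(exit_criteria_met(planned_tests=200, executed_tests=200,
                        requirement_coverage=100.0,
                        open_critical_or_high_defects=0))   # True -> testing can stop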
Example:
Exit Criteria for Integration Testing
• Successful testing of the integrated application.
• Executed test cases are documented.
• All high-priority bugs fixed and closed.
Tester Role
• Review and contribute to test plans.
• Analyze, review, and assess requirements, user stories, acceptance criteria, and specifications for testability (the test basis).
• Identify test conditions and capture traceability between test cases, test conditions, and the test basis.
• Design, set up, and verify test environment(s), coordinating with system administration and network staff.
• Design and implement test cases and test procedures.
• Prepare and acquire test data.
• Create the detailed test execution schedule.
• Execute tests, evaluate the results, and document deviations from expected results.
• Use appropriate tools to facilitate the test process.
• Automate tests as needed (supported by a developer or test automation expert).
• Evaluate non-functional characteristics.
• Review tests developed by others.
Testing Role
Different people may take over the role of testing at different test levels:
• At the component testing level and the component integration testing level, the role of a tester is often done by developers.
• At the system test level and the system integration test level, the role of a tester is often done by an independent test team.
• At the operational acceptance test level, the role of a tester is often done by operations/administration staff.
• At the user acceptance test level, the role of a tester is often done by subject matter experts and users.
Test Organization and Independence
• The effectiveness of finding defects by testing and reviews can be improved by using independent testers.
• Benefits of independence include:
• Independent testers are unbiased and are likely to find different defects.
• An independent tester can verify assumptions made by people during specification and implementation of the system.
• Drawbacks of independence include:
• Isolation from the development team (if treated as totally independent).
• Developers may lose a sense of responsibility for quality.
• Independent testers may be seen as a bottleneck or blamed for delays in release.
Test Organization and Independence
• Options for independence include the following:
• Developers test their own code (no independent testers).
• Independent testers within the development teams.
• Independent test team or group within the organization, reporting to project management or executive management.
• Independent testers from the business organization or user community.
• Independent test specialists for specific test types, such as usability testers, security testers, or certification testers (who certify the product against standards and regulations).
• Independent testers outsourced or external to the organization.
Test Progress Monitoring and Control
• Test Progress Monitoring
• Test Reporting
• Test Control
Test Progress Monitoring
• The purpose of test monitoring is to provide feedback and
visibility about test activities.
• Information to be monitored may be collected manually or
automatically and may be used to measure exit criteria, such as
coverage.
• Metrics may also be used to assess progress against the planned
schedule and budget.
Test Progress Monitoring
• Common test metrics include:
• Percentage of work done in test case preparation (or percentage of planned test
cases prepared)
• Percentage of work done in test environment preparation
• Test case execution (number of test cases run/not run, test cases passed/failed)
• Defect information (defect density, defects found & fixed, failure rate, re-test results)
• Test coverage of requirements, risks or code.
• Subjective confidence of testers in the product
• Dates of test milestones.
• Testing costs, including the cost compared to the benefit of finding the next defect or of running the next test.
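A small sketch, assuming the raw counts are already available, of how a few of these monitoring metrics could be computed; all names and figures are illustrative:

def execution_status(run, not_run, passed, failed):
    # Test case execution metrics: share of cases run, and pass/fail rates among those run.
    total = run + not_run
    return {
        "run %": run / total * 100,
        "pass %": passed / run * 100 if run else 0.0,
        "fail %": failed / run * 100 if run else 0.0,
    }

def defect_density(defects_found, size_kloc):
    # Defects per thousand lines of code (KLOC).
    return defects_found / size_kloc

print(execution_status(run=185, not_run=15, passed=170, failed=15))
print(defect_density(defects_found=46, size_kloc=23.0))   # 2.0 defects/KLOC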
Test Reporting
• Test reporting is concerned with summarizing information about the testing effort during and at the end of a test activity or level.
• A test report prepared during a test activity may be referred to as a test progress/status report, while a test report prepared at the end of a test activity may be referred to as a test summary report.
• It includes:
• What happened during a period of testing, such as dates when exit criteria were met.
• Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software.
Test Work Products
• Test work products are created as part of the test process.
• Just as there is significant variation in the way that organizations
implement the test process, there is also significant variation in
the types of work products created during that process, in the
ways those work products are organized and managed, and in the
names used for those work products.
• They help in estimating the testing effort required, test coverage,
requirement tracking/tracing, etc.

Test Work Products Examples
• Test planning work products typically include one or more test plans.
• Test monitoring and control work products typically include various types of test reports, including test progress reports (produced on a regular basis) and test summary reports (produced at various completion milestones).
• Test analysis work products include defined and prioritized test conditions, often traceable to the specific element(s) of the test basis they cover.
• Test design results in test cases and sets of test cases to exercise the test conditions defined in test analysis. Test design also results in the design and/or identification of the necessary test data, the design of the test environment, and the identification of infrastructure and tools.
Test Work Products Examples
Test implementation work products include:
• Test procedures and the sequencing of those test procedures
• Test suites
• A test execution schedule
The test data serve to assign concrete values to the inputs and expected results of test cases. Such concrete values, together with explicit directions about the use of the concrete values, turn high-level test cases into executable low-level test cases.

Test execution work products include:
• Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass, fail, blocked, deliberately skipped, etc.)
• Defect reports
• Documentation about test item(s), test object(s), test tools
Test Work Products Examples
• Test completion work products include the test closure report, action items for improvement of subsequent projects or iterations, and finalized testware.
Test Reporting Audience
• In addition to tailoring test reports based on the project context,
test reports should be tailored based on the report’s audience.
• The type and amount of information that should be included for
a technical audience or a test team may be different from what
would be included in an executive summary report.
• In the first case, detailed information on defect types and trends
may be important. In the latter case, a high-level report (e.g., a
status summary of defects by priority, budget, schedule, and test
conditions passed/failed/not tested) may be more appropriate.

Test Control
• Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported.
• Examples:
• Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
• Changing the test schedule due to availability or unavailability of a test environment or other resources
• Re-evaluating whether a test item meets an entry or exit criterion due to rework
Test Strategy Types
Analytical: the most common strategy. It might be based on analysis of requirements (in user acceptance testing), specifications (in system or component testing), or risk-based testing (tests are designed and prioritized based on the level of risk).
Model-based: related to real usage. This might be a diagram that shows the user route taken through a website, or a diagram that shows how different technical components integrate, or both. Tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models and state models.
Test Strategy Types
Regression-averse: motivated by a desire to avoid regression of existing capabilities. It includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites.
Example: testing very complex systems where named experts need to test the changes, and the testing function just aims to verify there is no regression.
Test Strategy Types
Standard-compliant: involves designing and implementing tests based on external rules and standards, such as those specified by industry-specific standards. This is most likely where you are in a specific regulated sector, like self-driving cars or aviation, where there are specific standards that you have to meet for testing.
Consultative: driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself. Basically, it means asking someone else what you should test and letting them decide.
Test Strategy Types
Reactive: respondent to component/system being tested, and the events
occurring during test execution, rather than being pre-planned (as the
preceding strategies are). Tests are designed and implemented and may
immediately be executed in response to knowledge gained from prior test
results. You decide what to test when you receive the software. This can
happen in exploratory testing, where things you observe on your testing journey
drive further tests

31
Test Plan
• A master test plan is developed during the planning phase. It outlines test activities for development and maintenance projects.
• As the project and test planning evolve, more information becomes available, and more detail can be included in the test plan. Test planning is a continuous activity and is performed throughout the product's lifecycle.
• Planning may be documented in a master test plan and in separate test plans for test levels, such as system testing and acceptance testing, or for separate test types, such as usability testing and performance testing.
• The main goal of a test plan is to capture all the details:
• What to test
• When to test
• How to test
• Who will be the tester
Test Strategy vs. Test Plan

Test Strategy contents:
• Scope and overview
• Test Approach
• Testing tools
• Industry standards to follow
• Test deliverables
• Testing metrics
• Requirement Traceability Matrix
• Risk and mitigation
• Reporting tool

Test Plan contents:
• Identifier
• References
• Risks and Contingencies
• Test Items
• Test Deliverables
• Test Levels and Types
• Pass/Fail Criteria
• Features To Be Tested (In Scope)
• Features Not To Be Tested (Out of Scope)
• Testing Tasks
• Environmental Needs
• Responsibilities
• Staffing and Training Needs
• Schedule
Test Scenario
• Test Scenario gives the idea of what we have to test.
• Test Scenario is like a high-level test case.
(Source: http://SoftwareTestingHelp.com)
Test Case
• Once the test scenarios are prepared, reviewed, and approved, test cases will be prepared based on these scenarios.
• Test cases are the set of positive and negative executable steps of a test scenario, with a set of pre-conditions, test data, expected results, post-conditions, and actual results.
• Their main intent is to determine whether the software passes or fails in terms of its functionality and other aspects.
• Test cases are written to keep track of the testing coverage of the software.
• Assume that we need to test the functionality of the login page of the Gmail application (a sketch of these four cases follows the list):
Test Case 1: Enter valid User Name and valid Password
Test Case 2: Enter valid User Name and invalid Password
Test Case 3: Enter invalid User Name and valid Password
Test Case 4: Enter invalid User Name and invalid Password
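One hedged way to express these four login test cases is as a parameterized pytest; the login() function below is a purely hypothetical stub standing in for the real page under test, and the credential values are invented:

import pytest

def login(username, password):
    # Hypothetical stand-in for the real login functionality under test.
    return username == "valid_user" and password == "valid_pass"

@pytest.mark.parametrize("username, password, expected", [
    ("valid_user",   "valid_pass", True),    # Test Case 1: valid user name, valid password
    ("valid_user",   "bad_pass",   False),   # Test Case 2: valid user name, invalid password
    ("unknown_user", "valid_pass", False),   # Test Case 3: invalid user name, valid password
    ("unknown_user", "bad_pass",   False),   # Test Case 4: invalid user name, invalid password
])
def test_login(username, password, expected):
    assert login(username, password) == expected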
Logical vs. Physical Test Cases
• The logical test case (high-level) describes what is tested and not how it is done.
• The aim of a physical test case (low-level) is to fill in the details of the corresponding logical test case so that the how is clear. It is the concrete elaboration of a logical test case, with choices having been made for the values of all required inputs and the settings of environmental factors (see the sketch below).
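For illustration only, with all identifiers and values invented, the same distinction can be shown as data: the logical case fixes only the "what", while the physical case fixes every concrete value:

# Logical (high-level) test case: describes WHAT is tested, no concrete values.
logical_test_case = {
    "id": "TC-LOGIN-02",
    "condition": "Login with a valid user name and an invalid password is rejected",
}

# Physical (low-level) test case: the HOW, with all inputs and settings chosen.
physical_test_case = {
    "id": "TC-LOGIN-02a",
    "username": "jane.doe@example.com",
    "password": "wrong-password-123",
    "browser": "Chrome 121 on Windows 11",
    "expected_result": "An 'invalid credentials' error message is shown",
}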
Negative vs. Positive Test Cases
• Positive test cases ensure that users can perform
appropriate actions when using valid data.
• Negative test cases are performed to try to “break” the
software by performing invalid (or unacceptable)
actions, or by using invalid data.

Test Case Template
Example Test Cases for Sign-up Page

Defect/Bug Report
• After uncovering a defect (bug), testers generate a formal defect report.
• The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
• Defect severity can be defined as the impact/effect of the bug on the application. It can be Showstopper/Critical/High/Medium/Low.
• Defect priority can be defined as the impact/effect of the bug on the customer's business. The main focus is on how soon/urgently the defect should be fixed; it gives the order in which defects should be resolved by developers. It can be High/Medium/Low.
(A sketch of a defect report record carrying both attributes follows.)
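A minimal sketch of a defect report record with severity and priority fields; the field names, enum values, and example defect are assumptions for illustration, not a standard format:

from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    SHOWSTOPPER = 1
    CRITICAL = 2
    HIGH = 3
    MEDIUM = 4
    LOW = 5

class Priority(Enum):
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class DefectReport:
    defect_id: str
    summary: str
    steps_to_reproduce: list
    severity: Severity    # impact of the bug on the application
    priority: Priority    # how soon it should be fixed (impact on the business)

report = DefectReport(
    defect_id="BUG-101",
    summary="Submit button not working on login page",
    steps_to_reproduce=["Open login page", "Enter valid credentials", "Click Submit"],
    severity=Severity.HIGH,
    priority=Priority.HIGH,
)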
Exercise: Specify scenario severity and priority
a. Submit button is not working on a login page and customers are unable to login to the application. → High Priority & High Severity
b. Crash in some functionality which is going to deliver after a couple of releases. → Low Priority & High Severity
c. Spelling mistake of a company name on homepage. → High Priority & Low Severity
d. FAQ page takes a long time to load. → Low Priority & Low Severity
e. Company logo or tagline issues. → High Priority & Low Severity
f. On a bank website, an error message pops up when a customer clicks on the transfer money button.
g. Font family or font size or colour or spelling
BUG Report
BUG Report (cont.)
Bug Report
• A bug report serves as a bridge between testers and developers, facilitating the identification and resolution of issues.
• Its purpose is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
• With careful attention to detail and a clear, respectful, and professional communication style, testers can be true partners with developers in delivering robust and reliable software.
Writing an Effective Bug Report
Practical Example: Test Case Writing
TEST CASE
BUG REPORT
Test Procedures
• The Test Procedure Specification specifies the sequence/order of actions for test cases.
• Defining the test procedures requires carefully identifying constraints and dependencies that might influence the test execution sequence.
• Test procedures list any initial preconditions (e.g., loading of test data from a data repository) and any activities following execution (e.g., resetting the system status).
• Contents of a test procedure (one possible structured form is sketched after this list):
• Identifier
• Purpose
• Special requirements
• Procedure steps
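One possible way, an assumption rather than a prescribed format, to capture these four parts of a test procedure as a structured record; the example values are invented:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestProcedure:
    identifier: str
    purpose: str
    special_requirements: List[str] = field(default_factory=list)
    steps: List[str] = field(default_factory=list)

tp = TestProcedure(
    identifier="TP-07",
    purpose="Verify money transfer between two accounts",
    special_requirements=["Test data loaded from the accounts repository"],
    steps=["Log in as the test user",
           "Transfer 100 EUR from account A to account B",
           "Verify both account balances",
           "Reset the system status"],
)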
Test Execution Schedule

• Once the various test cases and test procedures are assembled
into test suites, the test suites can be arranged in a test execution
schedule that defines the order in which they are to be run.
• Test execution schedule should take into account factors such as
prioritization, dependencies, and the most efficient sequence for
executing the tests.
• Ideally, test cases would be ordered to run based on their priority
levels, usually by executing the test cases with the highest priority
first. However, this practice may not work if the test cases have
dependencies, or the features being tested have dependencies. If
a test case with a higher priority is dependent on a test case with
a lower priority, the lower priority test case must be executed
first.
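A sketch of that ordering rule, assuming priorities are numbers (1 = highest) and dependencies are known in advance; the test IDs and data at the bottom are invented:

import heapq

def execution_order(priorities, dependencies):
    # priorities: {test_id: priority}; dependencies: {test_id: [prerequisite test_ids]}
    # Runs the highest-priority ready test first, but never before its prerequisites.
    remaining = {t: set(dependencies.get(t, [])) for t in priorities}
    ready = [(p, t) for t, p in priorities.items() if not remaining[t]]
    heapq.heapify(ready)
    order = []
    while ready:
        _, current = heapq.heappop(ready)
        order.append(current)
        for t, deps in remaining.items():
            if current in deps:
                deps.remove(current)
                if not deps:
                    heapq.heappush(ready, (priorities[t], t))
    return order

# TC2 has the highest priority but depends on TC1, so TC1 must run before it.
print(execution_order({"TC1": 3, "TC2": 1, "TC3": 2}, {"TC2": ["TC1"]}))
# -> ['TC3', 'TC1', 'TC2']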
Test Conditions, Cases, Procedures and Schedule
Diagram: Test Conditions (what) → Test Cases (how) → Test Procedure Specifications (how: manual test script or automated test script) → Test Execution Schedule (when), taking test case priority into account.
Requirements Traceability Matrix (RTM)
• To implement effective test monitoring and control, it is important to establish and maintain traceability throughout the test process between elements of the test basis and the various test work products associated with those elements.
• The Requirements Traceability Matrix (RTM) connects requirements to test cases throughout the validation process.
• It shows the relationship between requirements and test cases to answer questions such as:
• Which test cases may be affected by a change of requirement?
• How much coverage of requirements has been achieved?
• Are any requirements overly complex and need too many test cases?
RTM
• It is a matrix used to trace requirements. It provides forward and backward traceability (a toy sketch follows).
• Example: a business requirement (BR1) traces forward to system requirements (SR1, SR2), which in turn trace forward to functional requirements (FR1, FR2).
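A toy sketch of an RTM as a simple mapping that can answer the questions above; the requirement IDs, test case IDs, and coverage values are all invented:

# Each requirement ID maps to the test cases that cover it.
rtm = {
    "BR1": ["TC1", "TC2"],
    "SR1": ["TC3"],
    "SR2": [],            # requirement not yet covered by any test case
}

def tests_affected_by(requirement_id):
    # Which test cases may be affected by a change to this requirement?
    return rtm.get(requirement_id, [])

def requirement_coverage():
    # Percentage of requirements covered by at least one test case.
    covered = sum(1 for tests in rtm.values() if tests)
    return covered / len(rtm) * 100

print(tests_affected_by("BR1"))    # ['TC1', 'TC2']
print(requirement_coverage())      # ~66.7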
Importance of Traceability
Good traceability supports:
• Analyzing the impact of changes
• Making testing auditable
• Meeting IT governance criteria
• Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis (e.g., requirements that passed their tests / requirements that failed their tests / requirements that have pending tests)
Test Closure Report
• It is a report that is created once the testing phase is successfully completed by meeting the exit criteria defined for the project.
• It is a document that gives a summary of all the tests conducted.
• It also gives a detailed analysis of the bugs removed and errors found.
• It also presents the list of known issues.
• It is created by the test lead, reviewed by various stakeholders such as the test architect, test manager, business analyst, and project manager, and finally approved by the client.
Answers to the quiz questions: B, D, D, A, A, D
