The document discusses various types of non-functional testing including performance, security, usability, and regression testing. It also covers topics like debugging, maintenance testing, and change-related testing.

Software Testing, Validation and

Verification
Lecture 4

Presented by:
Dr. Yasmine Afify
[email protected]

Reference
https://www.istqb.org/certifications/certified-tester-foundation-level

The ISTQB® Certified Tester Foundation Level (CTFL) certification provides essential testing knowledge that can be put to practical use and, very importantly, explains the terminology and concepts that are used worldwide in the testing domain. CTFL is relevant across software delivery approaches and practices including Waterfall, Agile, DevOps, and Continuous Delivery. CTFL certification is recognized as a prerequisite to all other ISTQB® certifications where Foundation Level is required.
Non-functional Testing Types
Non-functional testing is the testing of "how well" the system behaves.
It involves testing software for requirements which are non-functional in nature but important, such as performance, security, scalability, etc.
Performance Testing
• Performance Testing: It is a process of measuring various efficiency
characteristics of a system such as response time, throughput, load, stress,
and transactions per minute.

• Load testing involves running a series of tests where you increase the load until
the system performance becomes unacceptable.

• To test whether performance requirements are being achieved, you may have
to construct an operational profile.
• An operational profile is a set of tests that reflect the actual mix of work that
will be handled by the system. Therefore, if 90% of the transactions in a system
are of type A, 5% of type B, and the remainder of types C, D, and E, then you
have to design the operational profile so that the vast majority of tests are of
type A (to get an accurate test of the operational performance of the system).
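The 90/5/5 mix above can be turned into a test-input generator. A minimal sketch, assuming the transaction types from the example; the exact split of the 5% remainder across C, D, and E is an illustrative assumption:

```python
import random

# Transaction mix from the example: 90% type A, 5% type B, and the
# remaining 5% spread over C, D and E (the 2/2/1 split is assumed).
PROFILE = {"A": 0.90, "B": 0.05, "C": 0.02, "D": 0.02, "E": 0.01}

def sample_operational_profile(n, profile=PROFILE, seed=None):
    """Draw n transaction types with frequencies matching the profile."""
    rng = random.Random(seed)
    types = list(profile)
    weights = [profile[t] for t in types]
    return rng.choices(types, weights=weights, k=n)
```

Feeding the sampled sequence to the system under test yields a workload whose mix matches the operational profile, so measured response times reflect realistic use.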
Performance Testing
• Experience has shown that an effective way to discover defects is to design tests
around the limits of the system.
• In performance testing, this means stressing/overloading the system by making
demands that are outside the design limits of the software. This is known as stress
testing.
• Say you are testing a transaction processing system that is designed to process up to
300 transactions per second. You start by testing this system with fewer than
300 transactions per second. You then gradually increase the load on the system
beyond 300 transactions per second until it is well beyond the maximum design load
of the system and the system fails.
• Stress testing helps you do two things:
1. Test the failure behaviour of the system
2. Reveal defects that only show up when the system is fully loaded.
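The ramp-up procedure described above can be sketched as a simple loop. The 300 transactions-per-second design limit comes from the example; the start load and step size are illustrative, and the system under test is represented by a caller-supplied check:

```python
def stress_test(meets_service_goals, start_tps=250, step_tps=25, max_tps=600):
    """Raise the offered load step by step until the system fails.

    meets_service_goals(tps) is assumed to drive the system at `tps`
    transactions per second and return True while performance is still
    acceptable. Returns the highest load the system handled.
    """
    highest_ok = 0
    tps = start_tps
    while tps <= max_tps:
        if not meets_service_goals(tps):
            break  # failure behaviour observed; stop ramping up
        highest_ok = tps
        tps += step_tps
    return highest_ok
```

Against a fake system that degrades above its 300 tps design limit (`lambda tps: tps <= 300`), the ramp stops at 300, which is exactly the point where failure behaviour and full-load defects become observable.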

Performance Testing

• Volume testing: observation of the system behaviour depending on the amount of data. Testing in which large amounts of data are manipulated or the system is subjected to large volumes of data.

• Soak testing: testing a system with a typical production load, over an extended period of time, to validate system behaviour under production use.
Non-functional Testing Types

• Usability: Is the software product easy to use, learn and understand from the user's perspective?

• Maintainability: The effort needed to make specified modifications. Is the software product easy to maintain? Assessing the understandability of the system documentation and whether it is up to date; checking if the system has a modular structure.

• Efficiency: The relationship between the level of performance of the software and the amount of resources used, under stated conditions. Does the software product use the hardware, system software and other resources efficiently?
Non-functional Testing Types

• Portability: The ability of software to be transferred from one environment to another. Is the software product portable?

• Interoperability: The ability to interact with specified systems. Does the software product work with other software applications, as required by the users?

• Localization: checking default languages, currency, and date and time formats if the product is designed for a particular region/locality.

• Recovery: performed to check how fast and how well the application can recover after any type of crash or hardware failure.
Non-functional Testing Types

• Baseline: refers to the validation of the documents and specifications on which test cases are designed.
Non-functional Testing Types
UI Testing
Checklist for User Interface Testing:

1) Check whether all basic elements are present on the page.
2) Check spelling of the objects.
3) Check alignment of the objects.
4) Check content displayed in web pages.
5) Check whether mandatory fields are highlighted.
6) Check consistency in background colour, font type and size, etc.
Non-functional Testing Types
Security Testing
Checklist for Security Testing:

1) Check whether sensitive data such as passwords, credit card numbers and CVV numbers are encrypted.
2) Check direct URL access for both secured and non-secured pages.
3) Check the view-source option for secured pages.
4) Check for authorization.
5) Check for authentication.
6) Check cookies.
Change-related Testing
• When changes are made to a system, either to correct a defect or
because of new/changing functionality, testing should be done to
confirm that the changes have corrected the defect or implemented the
functionality correctly and have not caused any unforeseen adverse
consequences.
• Confirmation (re-test) testing: It is a type of retesting that is carried out by
software testers as a part of defect fix verification. After a defect is detected and
fixed, the software should be re-tested to confirm that the original defect has been
successfully removed.

• Regression testing: It is possible that a change made in one part of the code may accidentally affect the behaviour of other parts of the code, whether within the same component or in other components. Such unintended side-effects are called regressions.

• Confirmation and regression testing are performed at ALL test levels.


Re-testing vs Sanity Testing

Regression Testing
• All tests are re-run every time a change is made to the
program.
• Regression testing is testing the system to check that changes
have not ‘broken’ previously working code.
• In a manual testing process, regression testing is expensive
but, with automated testing, it is simple and straightforward.
• Example:
An online shopping application added a function that enables clients to select multiple promotions at once. Regression testing should be conducted to make sure that the checkout and payment functions are not affected.
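A regression, in the sense used above, is a test that passed before the change and fails after it. A minimal sketch of how an automated harness might flag them; the result dictionaries and test names are hypothetical:

```python
def find_regressions(before, after):
    """Return tests that passed before a change but fail after it.

    `before` and `after` map test names to pass/fail booleans,
    e.g. as produced by two runs of an automated suite.
    """
    return sorted(
        name for name, passed in before.items()
        if passed and not after.get(name, False)
    )
```

For the shopping example, comparing the suite results from before and after the promotions change would surface "checkout" or "payment" here only if the change actually broke them.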
Regression Testing Types
Regression Testing Types
• Corrective regression testing is a suitable option when there is no change in your application's source code. You want to check whether the current system is working correctly, and therefore you re-run the existing functionalities and their related test cases (no need for new test cases).

• Progressive regression testing applies when the specifications are modified and new test cases must be designed. Generally, this testing type is preferred when introducing a new component into the system, because it helps you verify that changes do not adversely affect the old components.
Regression Testing Types
• Selective strategy uses a subset of the existing test cases to reduce the
retesting cost. In this strategy, a test unit must be rerun if and only if any
of the program entities (functions, variables etc.) it covers have been
changed.

• Complete strategy means testing the entire system at once. It is similar to acceptance testing, checking whether the user experience gets compromised due to adding one or multiple modules. Complete testing is done just before the final release of the product.

• Retest-all strategy reuses all test cases. This strategy may waste time and resources due to the execution of unnecessary tests. When the change to a system is minor, this strategy would be wasteful.
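The selective strategy's rerun rule — a test unit must be rerun if and only if a program entity it covers has changed — translates directly into a coverage lookup. A minimal sketch; the coverage map, test names and entity names are illustrative:

```python
def select_tests(coverage, changed_entities):
    """Selective regression: rerun a test iff it covers a changed entity.

    `coverage` maps each test to the set of program entities (functions,
    variables, ...) it exercises.
    """
    changed = set(changed_entities)
    return sorted(test for test, covered in coverage.items() if covered & changed)
```

Compared with the retest-all strategy, only the returned subset is executed, which is where the cost saving comes from.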
Debugging!!
• Debugging is a development activity, not a testing activity.
• Debugging identifies and fixes the source of defects (source of
failure).
• In some cases, testers are responsible for the initial test and the final
confirmation test, while developers do the debugging and associated
component testing.
• However, in Agile development and in some other lifecycles, testers
may be involved in debugging and component testing.
Maintenance Testing
• Once deployed to the production environment, the system needs to be maintained.
• Changes of various sorts are made to the delivered system: to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality.
• Maintenance testing focuses on testing the changes to the working system, as well as testing unchanged parts that might be affected by the changes.
• Maintenance involves planned releases and unplanned releases (hot fixes).
• Impact analysis is useful for regression testing during maintenance testing.

Maintenance Testing
The scope of maintenance testing depends on:
• The degree of risk of the change, for example, the degree to which
the changed area of software communicates with other
components or systems
• The size of the existing system
• The size of the change

Maintenance Testing
We can classify the triggers for maintenance as follows:
• Modification, such as planned enhancements (e.g., release-based),
corrective and emergency changes, changes of the operational environment
(such as planned operating system or database upgrades), upgrades of COTS
software, and patches for defects and vulnerabilities
• Migration, such as from one platform to another, which can require
operational tests of the new environment as well as of the changed
software, or tests of data conversion when data from another application will
be migrated into the system being maintained
• Retirement, such as when an application reaches the end of its life

Test Types and Test Levels
• It is possible to perform any of the test types at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests are given across all test levels, for a banking application.

• Starting with functional tests:


• For component testing, tests are designed based on how a component
should calculate compound interest.
• For component integration testing, tests are designed based on how account information captured at the user interface is passed to the business logic.
• For system testing, tests are designed based on how account holders can
apply for a line of credit on their checking accounts.
• For system integration testing, tests are designed based on how the system
uses an external microservice to check an account holder’s credit score.
• For acceptance testing, tests are designed based on how the banker
handles approving or declining a credit application.

Test Types and Test Levels
• Examples of non-functional tests:
• For component testing, performance tests are designed to evaluate
the number of CPU cycles required to perform a complex total interest
calculation.
• For system testing, portability tests are designed to check whether the
presentation layer works on all supported browsers and mobile devices.
• For system integration testing, reliability tests are designed to
evaluate system robustness if the credit score microservice fails to
respond.
• For acceptance testing, usability tests are designed to evaluate the
accessibility of the banker’s credit processing interface for people with
disabilities.

Test Types and Test Levels
• Examples of change-related tests:
• For component testing, automated regression tests are built for each
component and included within the continuous integration framework.
• For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.
• For system testing, all tests for a given workflow are re-executed if any
screen on that workflow changes.
• For system integration testing, tests of the application interacting with
the credit scoring microservice are re-executed daily as part of continuous
deployment of that microservice.
• For acceptance testing, all previously-failed tests are re-executed after a
defect found in acceptance testing is fixed.

STLC (Software Testing Life Cycle)

STLC (Software Testing Life Cycle)
1) Requirement Analysis:
• Activities to be done:
• Analyzing the System Requirement Specification (SRS) to make sure it is testable.
• Preparation of RTM (Requirement Traceability Matrix)
• Identifying testing types & techniques to be performed.
• Prioritizing the features which need focused testing
• Analyzing the Automation Feasibility
• Identifying testing environment details where actual testing will be done.

• Deliverables :
• Requirement Traceability Matrix (RTM)
• Automation Feasibility Report

STLC (Software Testing Life Cycle)
2) Test Planning:
• In this stage, a test lead determines effort and cost estimates for the project and prepares and finalizes the Test Plan.
• Answers the question "what to test?"

• Activities to be done:
• Preparation of test plan/strategy document
• Test tool selection
• Test effort estimation
• Resource planning
• Determining roles and responsibilities
• Training requirement

• Deliverables :
• Test plan/strategy document
• Effort estimation document

STLC (Software Testing Life Cycle)
3) Test Case Development:
• This phase involves creation, verification and rework of test cases & test scripts.
• Test data is identified, created, reviewed and then reworked as well.
• Answers the question “do we now have everything in place to run the tests?”

• Activities to be done:
• Create test cases and automation scripts
• Review and baseline test cases and scripts
• Create test data

• Deliverables:
• Test cases/scripts
• Test data

STLC (Software Testing Life Cycle)
4) Environment Setup:
• Decides the software and hardware conditions under which a work product is tested.
• Can be done in parallel with Test Case Development Stage.

• Activities to be done:
• Understand the required architecture and environment set-up.
• Prepare hardware and software requirements for the test environment.
• Set up the test environment and test data list.
• Perform a smoke test on the build.

• Deliverables:
• Environment ready with test data set up.
• Smoke Test Results.

STLC (Software Testing Life Cycle)
5) Test Execution:
• Tester carries out the testing based on the test plans and the test cases prepared.
• Bugs will be reported back to the development team for correction, then retesting
will be performed.

• Activities to be done:
• Execute tests as per plan.
• Document test results, and log defects for failed cases.
• Map defects to test cases in RTM.
• Retest the Defect fixes.
• Track the defects to closure.

• Deliverables:
• Completed RTM with execution status.
• Test cases updated with results.
• Defect reports.
STLC (Software Testing Life Cycle)
6) Test Cycle Closure:
• The testing team meets to discuss and analyze testing artifacts.
• Lessons are taken from the current test cycle to remove process bottlenecks for future test cycles and to share best practices for similar projects in the future.
• Activities to be done:
• Evaluate cycle completion criteria based on: Time, Test coverage, Cost, Critical Business Objectives, Quality
• Prepare test metrics based on the above parameters
• Document the learning out of the project
• Prepare Test closure report
• Qualitative and quantitative reporting of quality of the work product to the customer
• Test result analysis to find out the defect distribution by type and severity

• Deliverables:
• Test Closure report
• Test metrics
Software Testing Estimation
Test Estimation is a management activity which approximates how long a task would
take to complete.
Two of the most commonly used techniques are:
• Metrics-based technique: estimating the test effort based on metrics of former
similar projects or based on typical values
• Expert-based technique: estimating the test effort based on the experience of
the owners of the testing tasks or by experts
When you are estimating a testing project, consider:
• Team skills
• Complexity of the application
• Historical data
• Resource availability
• System environment and downtime
Software Testing Estimation (Cont.)
The following testing estimation techniques are widely used and have proven accurate in practice:
• PERT software testing estimation technique
• Work Breakdown Structure (WBS)
• Wideband Delphi technique
• Percentage distribution
• Experience-based testing estimation
PERT Estimation Technique- Three-point Estimation
PERT (Program Evaluation Review Technique)
• Three estimates are made for each activity. The formula used by this technique is:

Test Estimate = (O + (4 × M) + P)/6

• O= Optimistic estimate (best case scenario in which nothing goes wrong, and
all conditions are optimal).
• M= Most likely estimate (most likely duration and there may be some
problems but most of the things will go right).
• P= Pessimistic estimate (worst case scenario where everything goes wrong).
PERT Estimation Technique- Example

• For example, let's say you estimate a piece of work to most likely take ten hours. The best case is six hours. The worst case is twenty-six hours.
• The PERT estimate is (6 + 4(10) + 26)/6.
• The answer is 72/6, or 12 hours.
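The formula and the worked example above translate directly to code:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Three-point (PERT) test estimate: (O + 4*M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6
```

Calling `pert_estimate(6, 10, 26)` reproduces the 12-hour result above.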

Work Breakdown Structure Technique
(WBS)
• Step 1 − Create WBS by breaking down the test project into small pieces.
• Step 2 − Divide modules into sub-modules.
• Step 3 - Divide sub-modules further into functionalities.
• Step 4 − Divide functionalities into sub-functionalities.
• Step 5 − Review all the testing requirements to make sure they are
added in WBS.
• Step 6 − Figure out the number of tasks your team needs to complete.
• Step 7 − Estimate the effort for each task.
• Step 8 − Estimate the duration of each task.
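Steps 1-8 above produce a tree of modules, sub-modules and functionalities whose leaf-task estimates roll up to the project total. A minimal sketch, with a hypothetical WBS:

```python
def wbs_effort(node):
    """Roll up effort estimates over a nested WBS (steps 7-8 above).

    A node is either a number (the estimated effort of a leaf task,
    e.g. in hours) or a dict of named sub-modules/functionalities.
    """
    if isinstance(node, dict):
        return sum(wbs_effort(child) for child in node.values())
    return node

# Hypothetical WBS for a small test project:
wbs = {
    "module A": {
        "login": {"design tests": 4, "run tests": 6},  # sub-functionalities
        "search": 8,
    },
    "module B": 12,
}
```

Here `wbs_effort(wbs)` sums the four leaf tasks to 30 hours; the same traversal works however deep the breakdown goes.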
Wideband Delphi Technique

• In the Wideband Delphi method, the WBS is distributed to a team comprising 3-7 members for re-estimating the tasks.
• The final estimate is the result of the summarized estimates based on team agreement.
• This method relies more on experience than on any statistical formula.
• It emphasizes group iteration to reach an agreement, where the team considers different aspects of the problem while estimating the test effort.

Percentage Distribution
• In this technique, all the phases of the SDLC are assigned effort in %.
• This can be based on past data from similar projects.

Phase                        % of Effort
Project Management           7%
Requirements                 9%
Design                       16%
Coding                       26%
Testing (all Test Phases)    27%
Documentation                9%
Installation and Training    6%
Percentage Distribution (Cont.)
• Next, the % of effort for testing (all test phases) is further distributed across all testing phases:

System Testing                   % of Effort
Functional System Testing        65
Non-functional System Testing    35
Total                            100

Experience-based Testing Estimation Technique

• This technique is based on analogies and expert judgment.
• It assumes that you have already tested similar applications in previous projects and collected metrics from those projects, as well as from previous tests.
• Take inputs from subject matter experts who know the application (as well as testing) very well, and use the metrics you have collected to arrive at the testing effort.

[Practice questions appear as images on the original slides; the answers given are B, C, C, and D.]