Module5 - Software Testing
Characteristics of Testable Software
• Operable
– The better it works (i.e., better quality), the easier it is to test
• Observable
– Incorrect output is easily identified; internal errors are automatically detected
• Controllable
– The states and variables of the software can be controlled directly by the tester
• Decomposable
– The software is built from independent modules that can be tested independently
• Simple
– The program should exhibit functional, structural, and code simplicity
• Stable
– Changes to the software during testing are infrequent and do not invalidate existing tests
• Understandable
– The architectural design is well understood; documentation is available and organized
Test Characteristics
[Figure: code is exercised by unit tests, and units are combined into tested subsystems; all of these tests are performed by the developer. Cf. levels of testing.]
Testing Activities continued
[Figure: system tests by the developer validate the global understanding of the requirements; acceptance tests by the client validate the client’s requirements and yield a usable system; the system in use is then exercised by tests (?) by the user, against the user’s understanding, in the user environment.]
Unit Testing
• This type of testing is performed by developers before
the setup is handed over to the testing team to
formally execute the test cases.
• Unit testing is performed by the respective developers on the individual units of source code in their assigned areas.
• The developers use test data that is different from the
test data of the quality assurance team.
• The goal of unit testing is to isolate each part of the
program and show that individual parts are correct in
terms of requirements and functionality.
Unit Testing
• Informal:
– Incremental coding Write a little, test a little
• Static Analysis:
– Hand execution: Reading the source code
– Walk-Through (informal presentation to others)
– Code Inspection (formal presentation to others)
– Automated Tools checking for
• syntactic and semantic errors
• departure from coding standards
• Dynamic Analysis:
– Black-box testing (Test the input/output behavior)
– White-box testing (Test the internal logic of the subsystem or
object)
– Data-structure based testing (Data types determine test
cases)
Which is more effective, static or dynamic analysis?
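A minimal dynamic-analysis sketch of the black-box style listed above: the function under test and its test class are invented for illustration, not taken from the slides.

```python
import unittest

# Hypothetical unit under test: a simple absolute-value routine.
def absolute(x):
    return x if x >= 0 else -x

# Black-box unit tests: exercise input/output behavior only.
class TestAbsolute(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(absolute(5), 5)

    def test_negative(self):
        self.assertEqual(absolute(-5), 5)

    def test_zero(self):
        self.assertEqual(absolute(0), 0)

# Run the suite programmatically (unittest.main() would call sys.exit()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAbsolute)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

White-box tests of the same unit would instead be derived from its internal branches (here, the `x >= 0` decision).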
Limitations of Unit Testing
• Testing cannot catch every bug in an
application.
• It is impossible to evaluate every execution
path in every software application.
• The same holds for unit testing.
Integration Testing
• Integration testing is defined as the testing of
combined parts of an application to determine
if they function correctly.
• Integration testing can be done in two ways:
Bottom-up integration testing and Top-down
integration testing.
Top Down
Bottom up
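In top-down integration, a high-level module is tested before the lower-level modules it depends on are ready, with stubs standing in for them. A small sketch, with invented module names:

```python
# Top-down integration sketch: the high-level checkout module is tested
# while the not-yet-integrated tax module is replaced by a stub.
def tax_service_stub(amount):
    # Canned answer standing in for the real tax module.
    return round(amount * 0.10, 2)

def checkout_total(cart, tax_service):
    # High-level module under test; depends on a lower-level tax service.
    subtotal = sum(cart)
    return subtotal + tax_service(subtotal)

# Integration test: verify the combined behavior through the stub.
total = checkout_total([10.0, 20.0], tax_service_stub)
```

Bottom-up integration is the mirror image: the low-level module is tested first, driven by a test driver instead of the real caller.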
System Testing
• System testing tests the system as a whole.
Once all the components are integrated, the
application as a whole is tested rigorously to
see that it meets the specified Quality
Standards.
• This type of testing is performed by a
specialized testing team.
System Testing
Regression Testing
• Whenever a change in a software application is
made, it is quite possible that other areas within
the application have been affected by this change.
• Regression testing is performed to verify that a
fixed bug hasn't resulted in another functionality
or business rule violation.
• The intent of regression testing is to ensure that a
change, such as a bug fix, does not result in
another fault being uncovered in the application.
Acceptance Testing
• This is arguably the most important type of testing, as it
is conducted by the Quality Assurance Team who will
gauge whether the application meets the intended
specifications and satisfies the client’s requirement.
• The QA team will have a set of pre-written scenarios and
test cases that will be used to test the application.
• By performing acceptance tests on an application, the
testing team will deduce how the application will
perform in production.
• There are also legal and contractual requirements for
acceptance of the system.
Regression Testing
• Test the effects of the newly introduced
changes on all the previously integrated
code.
• The common strategy is to accumulate a
comprehensive regression bucket but
also to define a subset.
• The full bucket is run only occasionally,
but the subset is run against every spin.
• Disadvantage: deciding how large a subset
to use, and which tests to select for it,
is difficult.
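The full-bucket/subset strategy can be sketched by tagging tests; the test functions and tags below are invented stand-ins for a real regression bucket.

```python
# Illustrative regression bucket: each test function carries tags; the
# 'smoke' subset runs on every spin, the full bucket only occasionally.
def test_login():
    assert 1 + 1 == 2            # stand-in for a real check

def test_checkout():
    assert "a".upper() == "A"

def test_reports():
    assert sorted([2, 1]) == [1, 2]

bucket = [
    (test_login,    {"smoke"}),
    (test_checkout, {"smoke"}),
    (test_reports,  set()),      # full-bucket only
]

def run(tests):
    """Run test functions; return (passed, failed) counts."""
    passed = failed = 0
    for t in tests:
        try:
            t()
            passed += 1
        except AssertionError:
            failed += 1
    return passed, failed

subset = [t for t, tags in bucket if "smoke" in tags]   # every spin
full = [t for t, _tags in bucket]                       # occasionally
```

The open question from the slide (how large a subset, which tests) becomes a question of which tags to assign.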
Alpha Testing
• Alpha testing is performed by testers who are usually internal
employees of the organization
• This test is the first stage of testing and will be performed amongst the
teams (developer and QA teams).
• Unit testing, integration testing and system testing, when combined,
are known as alpha testing.
During this phase, the following aspects will be tested in the application:
• Spelling Mistakes
• Broken Links
• Confusing (cloudy) directions
• The Application will be tested on machines with the lowest
specification to test loading times and any latency problems.
Beta Testing
• This test is performed after alpha testing has been
successfully performed.
• In beta testing, a sample of the intended audience tests
the application.
• Beta testing is performed by clients who are not part of
the organization
• Beta testing is also known as pre-release testing.
• Beta test versions of software are ideally distributed to a
wide audience on the Web, partly to give the program a
"real-world" test and partly to provide a preview of the
next release.
• The beta test is carried out by real users in a real
environment.
• In this phase, the audience will be testing the following:
• Users will install, run the application and send their feedback to
the project team.
• Typographical errors, confusing application flow, and even crashes.
• Using this feedback, the project team can fix the problems before
releasing the software to the actual users.
• The more issues you fix that solve real user problems, the higher
the quality of your application will be.
• Having a higher-quality application when you release it to the
general public will increase customer satisfaction.
White Box Testing
Program under test:
    Read A
    Read B
    if A > B
        Print “A is greater than B”
    else
        Print “B is greater than A”
    endif

Scenario 1: If A = 7, B = 3
    No. of statements executed = 5, total statements = 7
    Statement coverage = 5 / 7 * 100 = 71.43 %

Scenario 2: If A = 4, B = 8
    No. of statements executed = 6, total statements = 7
    Statement coverage = 6 / 7 * 100 = 85.71 %
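The two scenarios can be reproduced by instrumenting the seven statements so each records itself when executed; this is an illustrative sketch, not a real coverage tool (in practice coverage.py or similar would do the counting).

```python
# The slide's seven statements, instrumented so every executed statement
# records itself; statement coverage = executed / total.
TOTAL = 7  # Read A, Read B, if, print-1, else, print-2, endif

def compare(a, b):
    hit = []
    hit.append("Read A")
    hit.append("Read B")
    hit.append("if A > B")
    if a > b:
        hit.append('Print "A is greater than B"')
    else:
        hit.append("else")
        hit.append('Print "B is greater than A"')
    hit.append("endif")
    return hit

def coverage_pct(hit):
    return round(len(hit) / TOTAL * 100, 2)

# Scenario 1 (A=7, B=3): 5 of 7 statements executed -> 71.43 %
# Scenario 2 (A=4, B=8): 6 of 7 statements executed -> 85.71 %
```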
White Box Test: Cyclomatic Complexity
Note: for the control flow graph in the example (E = 16 edges, N = 14 nodes):
Cyclomatic Complexity = E – N + 2
                      = 16 – 14 + 2
                      = 4
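The formula generalizes to V(G) = E − N + 2P for a graph with P connected components; with a single flow graph, P = 1 gives the E − N + 2 used above.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control-flow graph with P components."""
    return edges - nodes + 2 * components

# The slide's graph: E = 16, N = 14, one component -> V(G) = 4, i.e. at
# most four linearly independent paths need test cases.
vg = cyclomatic_complexity(16, 14)
```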
• Advantages of Cyclomatic Complexity:
– It can be used as a quality metric; it gives the relative
complexity of various designs.
– It can be computed faster than Halstead’s
metrics.
– It measures the minimum effort and the
best areas of concentration for testing.
– It is able to guide the testing process.
– It is easy to apply.
• Disadvantages of Cyclomatic Complexity:
– It measures the program’s control
complexity, not its data complexity.
– Nested conditional structures are harder to
understand than non-nested structures, yet
may receive the same complexity value.
– For simple comparisons and decision
structures, it may give a misleading figure.
Black Box Testing
• Possible answers:
– check for leap years (every 4th yr, no 100s, yes 400s)
– try years such as: even 100s, 101s, 4s, 5s
– try months such as: June, July, Feb, invalid values
Decision tables
Equivalence testing
• equivalence partitioning:
– A black-box test technique to reduce # of required test cases.
– What is it?
Test Case ID   Day   Month   Year   Expected Output
1              1     6       2000   2-6-2000
2              2     6       2000   3-6-2000
3              15    6       2000   16-6-2000
4              30    6       2000   1-7-2000
5              31    6       2000   Invalid Date
6              15    1       2000   16-1-2000
7              15    2       2000   16-2-2000
8              15    11      2000   16-11-2000
9              15    12      2000   16-12-2000
10             15    6       1800   16-6-1800
11             15    6       1801   16-6-1801
12             15    6       2047   16-6-2047
13             15    6       2048   16-6-2048
Test Cases:

Test Case ID   Day   Month   Year   Expected Output
E1             15    4       2004   16-4-2004
E2             15    4       2003   16-4-2003
E3             15    1       2004   16-1-2004
E4             15    1       2003   16-1-2003
E5             15    2       2004   16-2-2004
E6             15    2       2003   16-2-2003
E7             29    4       2004   30-4-2004
E8             29    4       2003   30-4-2003
E9             29    1       2004   30-1-2004
E10            29    1       2003   30-1-2003
E11            29    2       2004   1-3-2004
E12            29    2       2003   Invalid Date
E13            30    4       2004   1-5-2004
E14            30    4       2003   1-5-2003
E15            30    1       2004   31-1-2004
E16            30    1       2003   31-1-2003
E17            30    2       2004   Invalid Date
E18            30    2       2003   Invalid Date
E19            31    4       2004   Invalid Date
E20            31    4       2003   Invalid Date
E21            31    1       2004   1-2-2004
E22            31    1       2003   1-2-2003
E23            31    2       2004   Invalid Date
E24            31    2       2003   Invalid Date
• Equivalence Class Testing:
• Input classes:
– Day: D1: 1 ≤ day ≤ 28; D2: day = 29; D3: day = 30; D4: day = 31
– Month: M1: month has 30 days; M2: month has 31 days; M3: month is February
– Year: Y1: year is a leap year; Y2: year is a normal year
• Output classes:
– Increment day; reset day and increment month; increment year; invalid date
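The classes above describe a NextDate function. A minimal sketch of one, under the assumption that it should land in exactly one of the four output classes (the function names are ours; the slides only define the partitions):

```python
# Illustrative NextDate implementation for the equivalence classes.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_month(month, year):
    if month == 2:
        return 29 if is_leap(year) else 28
    return 30 if month in (4, 6, 9, 11) else 31

def next_date(day, month, year):
    if not (1 <= month <= 12) or not (1 <= day <= days_in_month(month, year)):
        return "Invalid Date"
    if day < days_in_month(month, year):
        return (day + 1, month, year)   # output class: increment day
    if month < 12:
        return (1, month + 1, year)     # reset day, increment month
    return (1, 1, year + 1)             # increment year
```

One representative test per class (e.g. D1/M1, D4/M3, 31 December) is then enough to cover the partition.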
Who does Testing?
• It depends on the process and the associated
stakeholders of the project(s). In the IT industry, large
companies have a team with responsibilities to
evaluate the developed software in context of the
given requirements
• Smoke Testing:
• Sanity Testing:
• Integration Testing:
• Regression Testing:
• Localization Testing:
• User Acceptance Testing
Smoke Testing
• An online HRMS portal on which the user logs in with their user account
and password. The login page has two text fields for username and
password. It also has two buttons – Login and Cancel.
• When successful, the login page directs the user to the HRMS home
page. The cancel button cancels the login.
Specifications:
• The user id field requires a minimum of 6 characters and a maximum of 10
characters: numbers (0-9), letters (a-z, A-Z), and special characters (only
underscore, period, hyphen allowed). It cannot be left blank. The user id
must begin with a number or letter, not a special
character.
• The password field requires a minimum of 6 characters, a maximum of 8
characters, numbers (0-9), letters (a-z, A-Z), all special characters. It
cannot be blank.
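The field specifications above can be captured as validators; the regular expressions below are our reading of the stated rules, and the function names are illustrative.

```python
import re

# User id: 6-10 chars of letters, digits, underscore, period, hyphen;
# must begin with a letter or digit.
USER_ID_RE = re.compile(r"[A-Za-z0-9][A-Za-z0-9._-]{5,9}")
# Password: 6-8 non-blank characters; any special character allowed.
PASSWORD_RE = re.compile(r"\S{6,8}")

def valid_user_id(value):
    return USER_ID_RE.fullmatch(value) is not None

def valid_password(value):
    return PASSWORD_RE.fullmatch(value) is not None
```

A smoke test of the login page would then exercise one valid and one invalid value per field, plus the Login and Cancel buttons.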
Non-Functional Testing
• Performance Testing:
• Usability Testing:
• Security Testing:
• Performance Testing:
– Load Testing
– Stress Testing
– Volume Testing
– Endurance Testing
• Load Testing: An application that is expected to handle a particular workload is
tested for its response time in a real environment depicting that workload.
It is tested for its ability to function correctly in a stipulated time and to
handle the load.
Examples of load testing
• Downloading a series of large files from the internet
• Running multiple applications on a computer or server simultaneously
• Assigning many jobs to a printer in a queue
• Subjecting a server to a large amount of traffic
• Writing and reading data to and from a hard disk continuously
• Load Testing – Real time Examples
• Target.com lost $780,000 in sales in just 3 hours when the site was down during a
promotion in 2015
• When Amazon.com servers crashed in 2013 for 30 minutes, Amazon lost $66,240
per minute
• Example: load testing
• Check that the mail server can handle thousands of
concurrent users.
• Testing an e-commerce application by simulating
thousands of users making purchases simultaneously.
• Tools Preferred:
• Neotys Neoload
• JMeter
• Parasoft Load Test
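A toy load-test harness can illustrate what tools like JMeter automate: fire many concurrent requests and report latency. `handle_request` below is an invented stand-in for a real HTTP call.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real request; a tool such as JMeter would drive real
# HTTP traffic instead.
def handle_request(_i):
    start = time.perf_counter()
    time.sleep(0.01)                  # simulate server work
    return time.perf_counter() - start

def run_load(users=50):
    """Fire `users` concurrent requests and report simple latency stats."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = sorted(pool.map(handle_request, range(users)))
    return {
        "requests": len(latencies),
        "p95_s": latencies[int(0.95 * (len(latencies) - 1))],
    }

report = run_load(50)
```

A pass/fail criterion (e.g. "95th-percentile latency under X seconds at N concurrent users") turns this measurement into a load test.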
• Stress Testing: In stress testing, the application is stressed
with an extra workload to check whether it works efficiently and
is able to handle the stress as per the requirement.
• Example: Consider a website that is tested to check its
behavior when user access is at its peak. There
could be a situation where the workload goes beyond
the specification. In this case, the website may fail, slow
down or even crash.
• Stress testing checks these situations, using
automation tools to create a real-time workload
and find the defects.
• Usability Testing:
• In this type of testing, the user interface is tested for
its ease of use, to see how user-friendly it is.
• Security Testing:
• Security Testing is to check how secure the software is
regarding data over the network from malicious
attacks. The key areas to be tested in this testing
include authorization, authentication of users and
their access to data based on roles such as admin,
moderator, composer, and user level.
• Example security testing
• Suppose a financial institute is reforming its security
guidelines and introducing new privacy protocols to
safeguard customer data. Performing
penetration testing on a bank application to identify
security vulnerabilities in such cases is a way to do so.
• Tools Preferred:
• ImmuniWeb
• Vega
• Google Nogotofail
• Volume Testing: Under volume testing, the application’s ability to handle
large volumes of data is tested in a real-time environment. The
application is tested for its correctness and reliability under adverse
conditions.
• Endurance Testing: In Endurance testing the durability of the software is
tested with a repeated and consistent flow of load in a scalable pattern. It
checks the endurance power of the software when loaded with a
consistent workload.
• While Stress testing takes the tested system to its limits, Endurance
testing takes the application to its limit over time.
• For Example, the most complex issues – memory leaks, database server
utilization, and unresponsive system – happen when software runs for an
extended period of time. If you skip the endurance tests, your chances of
detecting such defects prior to deployment are quite low.
Verification & Validation
[Figure: static techniques support verification; dynamic techniques, e.g. exercising a prototype, support validation.]
Software Testing Tools
Testing is one of the important aspects of the SDLC for the following reasons:
Testing in the SDLC helps to prove whether all the software requirements
are implemented correctly.
Testing helps in identifying defects and ensuring that they are addressed before
software deployment. If a defect is discovered and fixed after deployment, the
correction cost will be much higher than the cost of fixing it at an earlier stage of
development.
• Testing in SDLC means that testing always improves the quality of product
and project.
• Testing plays an important role in SDLC and apart from that testing also
improves the quality of the product and project by discovering bugs early
in the software.
• And remember: testing improves not only the quality of the product
but also the quality of the company.
STLC
Requirement Analysis
• During this phase, test team studies the requirements
from a testing point of view to identify the testable
requirements.
• The QA team may interact with various stakeholders
(Client, Business Analyst, Technical Leads, System
Architects etc) to understand the requirements in detail.
• Requirements could be either functional (defining what
the software must do) or non-functional (defining
system performance, security, availability).
• Automation feasibility analysis for the given testing project is also
carried out in this stage.
Activities
• Identify types of tests to be performed.
• Gather details about testing priorities and focus.
• Prepare Requirement Traceability Matrix (RTM).
• Identify test environment details where testing is
supposed to be carried out.
• Automation feasibility analysis (if required).
Deliverables
• RTM
• Automation feasibility report. (if applicable)
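The RTM deliverable can be sketched as a simple mapping from requirements to the test cases that cover them; all IDs below are invented for illustration.

```python
# A Requirement Traceability Matrix sketched as a mapping from
# requirement IDs to their covering test cases.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],                 # gap: no test case yet
}

def uncovered(matrix):
    """Requirements with no covering test case: the gaps an RTM exposes."""
    return sorted(req for req, cases in matrix.items() if not cases)
```

Reviewing `uncovered(rtm)` before test execution is exactly the traceability check the RTM exists to support.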
Test Planning
• This phase is also called Test Strategy phase.
• Typically, in this stage, a senior QA manager will determine effort and
cost estimates for the project and will prepare and finalize the Test
Plan.
Activities
• Preparation of test plan/strategy document for various types of testing
• Test tool selection
• Test effort estimation
• Resource planning and determining roles and responsibilities.
• Training requirement
Deliverables
• Test plan /strategy document.
• Effort estimation document.
Test Case Development
• This phase involves creation, verification and rework of test
cases & test scripts.
• Test data is identified/created, reviewed, and then
reworked as well.
Activities
• Create test cases, automation scripts (if applicable)
• Review and baseline test cases and scripts
• Create test data (If Test Environment is available)
Deliverables
• Test cases/scripts
• Test data
Test Environment Setup
• Test environment decides the software and
hardware conditions under which a work product is
tested. Test environment set-up is one of the critical
aspects of testing process and can be done in
parallel with Test Case Development Stage.
• Test team may not be involved in this activity if the
customer/development team provides the test
environment in which case the test team is required
to do a readiness check (smoke testing) of the given
environment.
Activities
• Understand the required architecture, environment
set-up and prepare hardware and software
requirement list for the Test Environment.
• Setup test Environment and test data
• Perform smoke test on the build
Deliverables
• Environment ready with test data set up
• Smoke Test Results.
Test Execution
During this phase test team will carry out the testing based on the test
plans and the test cases prepared. Bugs will be reported back to the
development team for correction and retesting will be performed.
Activities
• Execute tests as per plan
• Document test results, and log defects for failed cases
• Map defects to test cases in RTM
• Retest the defect fixes
• Track the defects to closure
Deliverables
• Completed RTM with execution status
• Test cases updated with results
• Defect reports
Test Cycle Closure
• The testing team will meet, discuss, and analyze
testing artifacts to identify strategies that have
to be implemented in future, taking lessons
from the current test cycle.
• The idea is to remove the process bottlenecks
for future test cycles and share best practices
for any similar projects in future.
Activities
• Evaluate cycle completion criteria based on time, test
coverage, cost, software, critical business objectives, and quality.
• Prepare test metrics based on the above parameters.
• Document the learning out of the project
• Prepare Test closure report
• Qualitative and quantitative reporting of quality of the work product to
the customer.
• Test result analysis to find out the defect distribution by type and
severity.
Deliverables
• Test Closure report
• Test metrics
Testing Strategies and Tactics
Test Strategy
• A Test Strategy document is a high-level
document, normally developed by the project
manager and test manager. This document
defines the “Software Testing Approach” used to
achieve testing objectives.
• The Test Strategy is normally derived from the
Business Requirement Specification
document.
• The Test Strategy document is a static document
meaning that it is not updated too often.
• It sets the standards for testing processes and
activities and other documents such as the Test
Plan draws its contents from those standards
set in the Test Strategy Document.
• For larger projects, there is one Test Strategy
document and a number of Test Plans, one
for each phase or level of testing.
• Components of the Test Strategy document
– Scope and Objectives
– Business issues
– Roles and responsibilities
– Communication and status reporting
– Test deliverables
– Industry standards to follow
– Test automation and tools
– Testing measurements and metrics
– Risks and mitigation
– Defect reporting and tracking
– Change and configuration management
– Training plan
Test Plan
• A Software Test Plan is a document describing
the testing scope and activities. It is the basis
for formally testing any software/product in a
project.
• test plan: A document describing the scope,
approach, resources and schedule of intended
test activities.
• It is a record of the test planning process.
• master test plan: A test plan that typically
addresses multiple test levels.
• phase test plan: A test plan that typically
addresses one test phase.
TEST PLAN TYPES
• Master Test Plan: A single high-level test plan for a
project/product that unifies all other test plans.
• Testing Level Specific Test Plans: Plans for each level of
testing.
– Unit Test Plan
– Integration Test Plan
– System Test Plan
– Acceptance Test Plan
• Testing Type Specific Test Plans: Plans for major types of
testing like Performance Test Plan and Security Test Plan.
TEST PLAN TEMPLATE
• The format and content of a software test plan
vary depending on the processes, standards,
and test management tools being
implemented.
• Nevertheless, the following format, which is
based on IEEE standard for software test
documentation, provides a summary of what a
test plan can/should contain.
• Test Plan Identifier:
– Provide a unique identifier for the document. (Adhere to
the Configuration Management System if you have one.)
• Introduction:
– Provide an overview of the test plan.
– Specify the goals/objectives.
– Specify any constraints.
• References:
– List the related documents, with links to them if
available, including the following:
– Project Plan
– Configuration Management Plan
• Test Items:
– List the test items (software/products) and their versions.
• Features to be Tested:
– List the features of the software/product to be tested.
– Provide references to the Requirements and/or Design
specifications of the features to be tested
• Features Not to Be Tested:
– List the features of the software/product which will not be
tested.
– Specify the reasons these features won’t be tested.
• Approach:
– Mention the overall approach to testing.
– Specify the testing levels [if it’s a Master Test Plan], the testing
types, and the testing methods [Manual/Automated; White
Box/Black Box/Gray Box]
• Item Pass/Fail Criteria:
– Specify the criteria that will be used to determine whether
each test item (software/product) has passed or failed testing.
• Suspension Criteria and Resumption Requirements:
– Specify criteria to be used to suspend the testing activity.
– Specify testing activities which must be redone when testing is
resumed.
• Test Deliverables:
– List test deliverables, and links to them if available, including
the following:
• Test Plan (this document itself)
• Test Cases
• Test Scripts
• Defect/Enhancement Logs
• Test Reports
• Test Environment:
– Specify the properties of test environment: hardware,
software, network etc.
– List any testing or related tools.
• Estimate:
– Provide a summary of test estimates (cost or effort) and/or
provide a link to the detailed estimation.
• Schedule:
– Provide a summary of the schedule, specifying key test
milestones, and/or provide a link to the detailed schedule.
• Staffing and Training Needs:
– Specify staffing needs by role and required skills.
– Identify training that is necessary to provide those skills, if
not already acquired.
• Responsibilities:
– List the responsibilities of each team/role/individual.
• Risks:
– List the risks that have been identified.
– Specify the mitigation plan and the contingency plan for each
risk.
• Assumptions and Dependencies:
– List the assumptions that have been made during the
preparation of this plan.
– List the dependencies.
• Approvals:
– Specify the names and roles of all persons who must approve
the plan.
– Provide space for signatures and dates. (If the document is to be
printed.)
Test Plan Outline
http://www.stellman-greene.com 247
Test Plan V/s Test Strategy
1. Test Plan: A test plan for a software project can be defined as a document that defines the scope, objective, approach, and emphasis of a software testing effort.
   Test Strategy: A test strategy is a set of guidelines that explains test design and determines how testing needs to be done.
2. Test Plan: Components of a test plan include test plan id, features to be tested, test techniques, testing tasks, features pass or fail criteria, test deliverables, responsibilities, schedule, etc.
   Test Strategy: Components of a test strategy include objectives and scope, documentation formats, test processes, team reporting structure, client communication strategy, etc.
3. Test Plan: A test plan is carried out by a testing manager or lead; it describes how to test, when to test, who will test, and what to test.
   Test Strategy: A test strategy is carried out by the project manager; it says what type of technique to follow and which module to test.
4. Test Plan: A test plan narrates the specification.
   Test Strategy: A test strategy narrates the general approaches.
5. Test Plan: A test plan can change.
   Test Strategy: A test strategy cannot be changed.
6. Test Plan: Test planning is done to determine possible issues and dependencies in order to identify the risks.
   Test Strategy: It is a long-term plan of action; information that is not project specific can be abstracted into the test approach.
7. Test Plan: A test plan exists individually.
   Test Strategy: In a smaller project, the test strategy is often found as a section of the test plan.
8. Test Plan: It is defined at the project level.
   Test Strategy: It is set at the organization level and can be used by multiple projects.
Test Case
• A Test Case is a set of actions executed to verify a
particular feature or functionality of your software
application. A Test Case contains test steps, test
data, precondition, postcondition developed for
specific test scenario to verify any requirement.
The test case includes specific variables or
conditions, using which a testing engineer can
compare expected and actual results to determine
whether a software product is functioning as per
the requirements of the customer.
Test case
• A test case is a set of conditions or variables
under which a tester will determine whether a
system under test satisfies requirements or
works correctly.
• The process of developing test cases can also
help find problems in the requirements or
design of an application.
Test Scenario Vs Test Case
Test Case
Test Case ID: TU01
Test Case Description: Check customer login with valid data
Test Steps: 1. Go to site http://demo.guru99.com  2. Enter UserId  3. Enter Password  4. Click Submit
Test Data: Userid = guru99, Password = pass99
Expected Results: User should log in to the application
Actual Results: As expected
Pass/Fail: Pass

Test Case ID: TU02
Test Case Description: Check customer login with invalid data
Test Steps: 1. Go to site http://demo.guru99.com  2. Enter UserId  3. Enter Password  4. Click Submit
Test Data: Userid = guru99, Password = glass99
Expected Results: User should not log in to the application
Actual Results: As expected
Pass/Fail: Pass
TEST CASE TEMPLATE
Test Suite ID: The ID of the test suite to which this test case belongs.
Test Case ID: The ID of the test case.
Test Case Summary: The summary/objective of the test case.
Related Requirement: The ID of the requirement this test case relates/traces to.
Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.
Test Procedure: Step-by-step procedure to execute the test.
Test Data: The test data, or links to the test data, to be used while conducting the test.
Expected Result: The expected result of the test.
Actual Result: The actual result of the test; to be filled in after executing the test.
Status: Pass or Fail. Other statuses can be ‘Not Executed’ if testing is not performed and ‘Blocked’ if testing is blocked.
Remarks: Any comments on the test case or test execution.
Created By: The name of the author of the test case.
Date of Creation: The date of creation of the test case.
Executed By: The name of the person who executed the test.
Date of Execution: The date of execution of the test.
Test Environment: The environment (Hardware/Software/Network) in which the test was executed.
TEST CASE EXAMPLE / TEST CASE SAMPLE
Test Suite ID: TS001
Test Case ID: TC001
Test Case Summary: To verify that clicking the Generate Coin button generates coins.
Related Requirement: RS001
Prerequisites: 1. User is authorized. 2. Coin balance is available.
Test Procedure: 1. Select the coin denomination in the Denomination field. 2. Enter the number of coins in the Quantity field. 3. Click Generate Coin.
Test Data: 1. Denominations: 0.05, 0.10, 0.25, 0.50, 1, 2, 5. 2. Quantities: 0, 1, 5, 10, 20.
Expected Result: 1. A coin of the specified denomination should be produced if the specified quantity is valid (1, 5). 2. A message ‘Please enter a valid quantity between 1 and 10’ should be displayed if the specified quantity is invalid.
• WRITING GOOD TEST CASES
– As far as possible, write test cases in such a way that you test only
one thing at a time. Do not overlap or complicate test cases.
– Ensure that all positive scenarios and negative scenarios are
covered.
• Language:
– Write in simple and easy to understand language.
– Use active voice: Do this, do that.
– Use exact and consistent names (of forms, fields, etc).
• Characteristics of a good test case:
– Accurate: Exacts the purpose.
– Economical: No unnecessary steps or words.
– Traceable: Capable of being traced to requirements.
– Repeatable: Can be used to perform the test over and over.
– Reusable: Can be reused if necessary.
Test Execution
The software testers begin executing the test plan after the programmers deliver the alpha build: a build that they believe is feature complete.
The alpha should be of high quality; the programmers should feel that it is ready for release and as good as they can get it.
There are typically several iterations of test execution.
The first iteration focuses on new functionality that has been added since
the last round of testing.
A regression test is designed to make sure that a change to one area of the software has not caused any other part that previously passed its tests to stop working.
Regression testing usually involves re-executing all test cases that have previously been executed.
There are typically at least two regression tests for any software project.
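The core idea of a regression run, re-executing every previously passing test case after a change, can be sketched as follows. The individual test functions here are illustrative placeholders, not real project tests.

```python
# Illustrative previously-passing test cases.
def test_login():
    assert "admin".isalnum()

def test_coin_quantity():
    assert 1 <= 5 <= 10

# A regression run simply re-executes every test that passed before,
# recording which ones still pass after the latest change.
regression_suite = [test_login, test_coin_quantity]

results = {}
for test in regression_suite:
    try:
        test()
        results[test.__name__] = "PASS"
    except AssertionError:
        results[test.__name__] = "FAIL"

# The change is regression-free only if every prior test still passes.
assert all(status == "PASS" for status in results.values())
```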
Test Execution
When is testing complete?
– No defects found
– Or defects meet acceptance criteria outlined in test plan
Test Scenario
• A Test Scenario is ‘what is to be tested’.
• A Test Case is ‘how it is to be tested’.
• Test scenarios are the high-level classification of test requirements, grouped according to the functionality of a module.
• For example, two scenarios might be identified for one requirement, with 4 and 6 test cases respectively.
• Take a sample application, say a login page with username and password fields and Login and Cancel buttons. If asked to write test cases for it, we could end up writing more than 50 test cases by combining the different options and details.
• But if we write test scenarios instead, it is a matter of about 10 lines, as below:
• High Level Scenario: Login Functionality
Low Level Scenarios:
– 1. To check Application is Launching
2. To check text contents on login page
3. To check Username field
4. To check Password field
5. To check Login Button and cancel button functionality
Test Cases vs Test Scenarios

Test Cases
– Beneficial to: A foolproof test case document is a lifeline for a new tester.
– Disadvantage: Time- and money-consuming, as it requires more resources to detail everything about what to test and how to test.

Test Scenarios
– Beneficial to: Good test coverage can be achieved by dividing the application into test scenarios, which reduces the repetitiveness and complexity of testing the product.
– Disadvantage: If created by one specific person, a reviewer or another user might not grasp the exact idea behind it; more discussion and team effort are needed.
Test Script
• A Test Script is a set of instructions (written
using a scripting/programming language) that
is performed on a system under test to verify
that the system performs as expected. Test
scripts are used in automated testing.
• Sometimes a set of instructions written in a human language and used in manual testing is also called a test script, but a better term for that is a test case.
Test Script
• This part is a little tricky to explain, and it is where a lot of debate happens. To most of us, test scripts are the automation scripts written in a programming language such as VBScript, Java, or Python, which can be interpreted and executed automatically by a testing tool.
• That is largely, but not entirely, correct. By another definition, a test script is simply a test case fabricated with test data.
• A single test case can be combined with multiple sets of test data to form multiple test scripts of the same test case.
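The idea of fabricating one test case with several sets of test data can be sketched in Python. The `generate_coin` function and the data values below are illustrative assumptions modeled on the earlier coin example, not the real system.

```python
def generate_coin(denomination, quantity):
    # Minimal stand-in for the system under test (illustrative only).
    if not 1 <= quantity <= 10:
        return "Please enter a valid quantity between 1 and 10"
    return [denomination] * quantity

# One test case ("generate coins for a given quantity") combined with
# several data sets yields several executable test scripts.
test_data = [
    (0.25, 1,  [0.25]),        # valid: lower bound
    (1,    5,  [1] * 5),       # valid: mid-range
    (2,    0,  "Please enter a valid quantity between 1 and 10"),   # invalid: too low
    (5,    20, "Please enter a valid quantity between 1 and 10"),   # invalid: too high
]

# Each (denomination, quantity, expected) triple is, in effect, one test script.
for denomination, quantity, expected in test_data:
    assert generate_coin(denomination, quantity) == expected
```

Test frameworks express the same pattern declaratively, for example with pytest's `@pytest.mark.parametrize`; the plain loop above keeps the sketch dependency-free.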
How to create a Test Script Template:
• Example: “Sign-up” test cases included.
• Manual test scripts are manual test cases fabricated with multiple sets of test data, documented in enough detail that even a layman can perform the testing as per the documentation.
• Automation test scripts are programmed test cases, combined with test data, that can be executed by a tool.
Some scripting languages used in automated
testing are:
– JavaScript
– Perl
– Python
– Ruby
– Tcl
– Unix Shell Script
– VBScript
Test Procedure
• A detailed instruction document for the set-up, execution, and evaluation of results for a given test case.
• It contains a set of associated instructions and may specify a sequence of actions for executing the test.
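The set-up/execution/evaluation structure of a test procedure maps naturally onto a unit-testing framework. Below is a minimal sketch using Python's standard `unittest` module; the coin-balance scenario and the names used are assumed examples, not part of any specified procedure.

```python
import unittest

class CoinProcedure(unittest.TestCase):
    """One test procedure: set-up, execution, and evaluation steps."""

    def setUp(self):
        # Set-up: establish the preconditions stated in the test case.
        self.balance = 10

    def test_generate_coins(self):
        # Execution: perform the documented sequence of actions.
        requested = 5
        produced = requested if requested <= self.balance else 0
        # Evaluation: compare the actual result against the expected result.
        self.assertEqual(produced, 5)

# Run the procedure programmatically and collect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CoinProcedure)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The framework guarantees that `setUp` runs before every test method, so each execution of the procedure starts from the same documented preconditions.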
Performance Testing Metrics: Parameters Monitored