
Module-5

• Strategic Approach to Software Testing
• Testing Fundamentals – Test Plan, Test Design, Test Execution, Reviews, Inspection & Auditing
• Functional Testing – control coverage, data coverage, conditional coverage
• Non-Functional Testing – Performance (load, volume, endurance, scalability, stress) testing and Security Testing
Testing
• “Testing is the process of executing a program
with the intention of finding errors.” – Myers
• “Testing can show the presence of bugs but
never their absence.” – Dijkstra
• Testing is the process of exercising a program
with the specific intent of finding errors prior
to delivery to the end user.
Who Tests the Software?

Developer: understands the system, but will test "gently", and is driven by "delivery".
Independent tester: must learn about the system, but will attempt to break it, and is driven by quality.
Characteristics of Testable Software
• Operable
– The better it works (i.e., better quality), the easier it is to test
• Observable
– Incorrect output is easily identified; internal errors are automatically detected
• Controllable
– The states and variables of the software can be controlled directly by the tester
• Decomposable
– The software is built from independent modules that can be tested independently
• Simple
– The program should exhibit functional, structural, and code simplicity
• Stable
– Changes to the software during testing are infrequent and do not invalidate existing tests
• Understandable
– The architectural design is well understood; documentation is available and organized
Test Characteristics

• A good test has a high probability of finding an error


– The tester must understand the software and how it might fail
• A good test is not redundant
– Testing time is limited; one test should not serve the same purpose as another test
• A good test should be “best of breed”
– Tests that have the highest likelihood of uncovering a whole class of errors should be
used
• A good test should be neither too simple nor too complex
– Each test should be executed separately; combining a series of tests could cause side
effects and mask certain errors
General Characteristics of Strategic
Testing
• To perform effective testing, a software team should conduct effective
formal technical reviews
• Testing begins at the component level and works outward toward the
integration of the entire computer-based system
• Different testing techniques are appropriate at different points in time
• Testing is conducted by the developer of the software and (for large
projects) by an independent test group
• Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy
Levels of Testing
Testing Activities (figure): each level of testing takes the output of the previous level as its input.
• Unit test – each coded subsystem is tested by the developer against its design document, producing a tested subsystem.
• Integration test – tested subsystems are combined and tested against the system design document, producing integrated subsystems.
• Functional test – the integrated subsystems are tested against the requirements analysis document (and user manual), producing a functioning system.
All of the above tests are performed by the developer.
Testing Activities continued (figure):
• Performance test – the functioning system is tested against the client's global understanding of the requirements, producing a validated system.
• Acceptance test – the validated system is tested against the client's requirements, producing an accepted system. These tests are performed by the client.
• Installation test – the accepted system is installed and tested in the user's environment, producing a usable system, the system in use. These tests are performed by the user.
Unit Testing
• This type of testing is performed by developers before
the setup is handed over to the testing team to
formally execute the test cases.
• Unit testing is performed by the respective developers
on the individual units of source code in their assigned areas.
• The developers use test data that is different from the
test data of the quality assurance team.
• The goal of unit testing is to isolate each part of the
program and show that individual parts are correct in
terms of requirements and functionality.
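A minimal sketch (not from the slides) of what such a developer-written unit test can look like, using Python's pytest; the function and file names are hypothetical:

    # calculator.py -- the unit under test (hypothetical)
    def add(a, b):
        return a + b

    # test_calculator.py -- run with: python -m pytest
    from calculator import add

    def test_add_positive_numbers():
        assert add(2, 3) == 5

    def test_add_handles_negatives():
        assert add(-2, -3) == -5

Each test exercises one unit in isolation and states its expected result, which is the point made above.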
Unit Testing
• Informal:
– Incremental coding Write a little, test a little
• Static Analysis:
– Hand execution: Reading the source code
– Walk-Through (informal presentation to others)
– Code Inspection (formal presentation to others)
– Automated Tools checking for
• syntactic and semantic errors
• departure from coding standards
• Dynamic Analysis:
– Black-box testing (Test the input/output behavior)
– White-box testing (Test the internal logic of the subsystem or
object)
– Data-structure based testing (Data types determine test
cases)
Which is more effective, static or dynamic analysis?
Limitations of Unit Testing
• Testing cannot catch each and every bug in an
application.
• It is impossible to evaluate every execution
path in every software application.
• The same is the case with unit testing.
Integration Testing
• Integration testing is defined as the testing of
combined parts of an application to determine
if they function correctly.
• Integration testing can be done in two ways:
Bottom-up integration testing and Top-down
integration testing.
Top Down
Bottom up
System Testing
• System testing tests the system as a whole.
Once all the components are integrated, the
application as a whole is tested rigorously to
see that it meets the specified Quality
Standards.
• This type of testing is performed by a
specialized testing team.
System Testing
Regression Testing
• Whenever a change in a software application is
made, it is quite possible that other areas within
the application have been affected by this change.
• Regression testing is performed to verify that a
fixed bug hasn't resulted in another functionality
or business rule violation.
• The intent of regression testing is to ensure that a
change, such as a bug fix should not result in
another fault being uncovered in the application.
Acceptance Testing
• This is arguably the most important type of testing, as it
is conducted by the Quality Assurance Team who will
gauge whether the application meets the intended
specifications and satisfies the client’s requirement.
• The QA team will have a set of pre-written scenarios and
test cases that will be used to test the application.
• By performing acceptance tests on an application, the
testing team will deduce how the application will
perform in production.
• There are also legal and contractual requirements for
acceptance of the system.
Regression Testing
• Test the effects of the newly introduced
changes on all the previously integrated
code.
• The common strategy is to accumulate a
comprehensive regression bucket but
also to define a subset.
• The full bucket is run only occasionally,
but the subset is run against every spin.
• Disadvantage: deciding how much of a subset to use
and which tests to select (see the sketch below).
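One common way to keep a full regression bucket alongside a smaller per-spin subset is to tag tests and filter on the tag at run time. A sketch using pytest markers (the marker name and the trivial units under test are assumptions, not from the slides):

    # test_regression.py
    # full bucket:      python -m pytest
    # per-spin subset:  python -m pytest -m spin   (register the "spin" marker in pytest.ini)
    import pytest

    def add(a, b):                      # trivial stand-ins for real features
        return a + b

    def average(values):
        return sum(values) / len(values)

    @pytest.mark.spin                   # runs in the quick subset on every spin
    def test_add_regression():
        assert add(2, 2) == 4

    def test_average_regression():      # only runs in the occasional full bucket
        assert average([1, 2, 3]) == 2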
Alpha Testing
• Alpha testing is performed by testers who are usually internal
employees of the organization
• This test is the first stage of testing and will be performed amongst the
teams (developer and QA teams).
• Unit testing, integration testing and system testing, when combined
together, are known as alpha testing.
During this phase, the following aspects will be tested in the application:
• Spelling Mistakes
• Broken Links
• Cloudy Directions
• The Application will be tested on machines with the lowest
specification to test loading times and any latency problems.
Beta Testing
• This test is performed after alpha testing has been
successfully performed.
• In beta testing, a sample of the intended audience tests
the application.
• Beta testing is performed by clients who are not part of
the organization
• Beta testing is also known as pre-release testing.
• Beta test versions of software are ideally distributed to a
wide audience on the Web, partly to give the program a
"real-world" test and partly to provide a preview of the
next release.
• The beta test is carried out by real users in a real
environment.
• In this phase, the audience will be testing the following:
• Users will install, run the application and send their feedback to
the project team.
• Typographical errors, confusing application flow, and even crashes.
• Using this feedback, the project team can fix the problems before
releasing the software to the actual users.
• The more issues you fix that solve real user problems, the higher
the quality of your application will be.
• Having a higher-quality application when you release it to the
general public will increase customer satisfaction.
White Box Testing

• White Box Testing is a testing technique in which software’s internal structure,


design, and coding are tested to verify input-output flow and improve design,
usability, and security. In white box testing, code is visible to testers, so it is also
called Clear box testing, Open box testing, Transparent box testing, Code-based
testing, and Glass box testing.
Following are important WhiteBox Testing Techniques:
• Statement Coverage
• Decision Coverage
• Branch Coverage
• Condition Coverage
• Multiple Condition Coverage
• Finite State Machine Coverage
• Path Coverage
• Control flow testing
• Data flow testing
• Real life scenario
• Suppose there is a registration page in an e-commerce
website to be tested. If we are doing black box testing, it will
only be checked whether the registration page is working fine
or not. But in the case of white box testing, all the functions or
classes(which are called) will be checked when that
registration page is executed.
• If you are using a calculator, then in the case of black box
testing, you will be concerned if the output you are getting is
correct or not. But in the case of white box testing, testers will
check the internal working of the calculator and how this
output was calculated.

White-box Testing
Focus: Thoroughness (Coverage). Every statement in the component is
executed at least once.
• Four types of white-box testing
– Statement Testing
– Loop Testing
– Path Testing
– Branch Testing

White-box Testing (Continued)
Statement Testing (Algebraic Testing): Test single
statements
• Loop Testing:
– Cause execution of the loop to be skipped completely.
(Exception: Repeat loops)
– Loop to be executed exactly once
– Loop to be executed more than once
• Path testing:
– Make sure all paths in the program are executed
• Branch Testing (Conditional Testing): Make sure that
each possible outcome from a condition is tested at
least once

if ( i = TRUE) printf("YES\n");else printf("NO\n");


Test cases: 1) i = TRUE; 2) i = FALSE
#1) Statement coverage:
• In a programming language, a statement is nothing but a line of code or an instruction for the computer to
understand and act upon. A statement becomes an executable statement when it gets compiled and
converted into object code, and it performs its action when the program is running.
• Hence "Statement Coverage", as the name suggests, is the method of validating whether each and
every line of the code is executed at least once.
#2) Branch Coverage:
• “Branch” in a programming language is like the “IF statements”. An IF statement has two branches: True and
False.
• So in Branch coverage (also called Decision coverage), we validate whether each branch is executed at least
once.
• In the case of an "IF statement", there will be two test conditions:
• One to validate the true branch, and
• The other to validate the false branch.
• Hence, in theory, Branch Coverage is a testing method which, when executed, ensures that each and every
branch from each decision point is executed.
#3) Path Coverage
• Path coverage tests all the paths of the program. This is a comprehensive technique which ensures that all
the paths of the program are traversed at least once. Path Coverage is even more powerful than Branch
Coverage. This technique is useful for testing complex programs.
White Box Testing Example

• Consider the below simple pseudocode:


INPUT A & B
C=A+B
IF C>100 PRINT “ITS DONE”
• For Statement Coverage – we would only need one test case to check all the lines of the
code.
• That means:
• If I consider TestCase_01 to be (A=40 and B=70), then all the lines of code will be executed.
• Now the question arises:
• Is that sufficient?
• What if I consider my test case as A=33 and B=45?
• Because statement coverage only requires every line to run, a single test case covers all the lines of this
pseudocode, but it never exercises the FALSE outcome of the condition. As a tester, we have to consider
the negative cases as well.
• Hence for maximum coverage, we need to consider "Branch Coverage", which will also evaluate
the "FALSE" condition.
• So now the pseudocode becomes:
  INPUT A & B
  C = A + B
  IF C > 100
    PRINT "ITS DONE"
  ELSE
    PRINT "ITS PENDING"
• Since statement coverage is not sufficient to test the entire pseudocode, we
require branch coverage to ensure maximum coverage.
• So for branch coverage, we would require two test cases to complete the testing of
this pseudocode.
• TestCase_01: A=40, B=70 (C=110, exercises the TRUE branch)
• TestCase_02: A=33, B=45 (C=78, exercises the FALSE branch)
• With this, we can see that each and every line of the code is executed at least once.
• Here are the Conclusions that are derived so far:
• Branch Coverage ensures more coverage than Statement coverage.
• Branch coverage is more powerful than Statement coverage.
• 100% Branch coverage itself means 100% statement coverage.
• But 100 % statement coverage does not guarantee 100% branch coverage.
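A runnable Python sketch of the pseudocode above (the function name is an assumption); the two test cases exercise the TRUE and FALSE branches and therefore give 100% branch coverage:

    def check_total(a, b):
        c = a + b
        if c > 100:
            return "ITS DONE"       # TRUE branch
        else:
            return "ITS PENDING"    # FALSE branch

    assert check_total(40, 70) == "ITS DONE"      # TestCase_01: C = 110 > 100
    assert check_total(33, 45) == "ITS PENDING"   # TestCase_02: C = 78 <= 100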
• Path Coverage:
• Path coverage is used to test the complex
code snippets, which basically involve loop
statements or combination of loops and
decision statements.
• Consider this pseudocode:
  INPUT A & B
  C = A + B
  IF C > 100
    PRINT "ITS DONE"
  END IF
  IF A > 50
    PRINT "ITS PENDING"
  END IF
• Now to ensure maximum coverage, we would require 4 test cases.
• How? Simply – there are 2 decision statements, so for
each decision statement, we would need two branches
to test. One for true and the other for the false
condition. So for 2 decision statements, we would
require 2 test cases to test the true side and 2 test cases
to test the false side, which makes a total of 4 test
cases.
To simplify this, let's consider the flowchart of the pseudocode below.
• In order to have the full coverage, we would
need following test cases:
• TestCase_01: A=50, B=60
• TestCase_02: A=55, B=40
• TestCase_03: A=40, B=65
• TestCase_04: A=30, B=30
Red Line – TestCase_01 = (A=50, B=60)
Blue Line = TestCase_02 = (A=55, B=40)
Orange Line = TestCase_03 = (A=40, B=65)
Green Line = TestCase_04 = (A=30, B=30)
Statement Coverage Testing

Pseudocode:
  Read A
  Read B
  if A > B
    Print "A is greater than B"
  else
    Print "B is greater than A"
  endif

Scenario 1: If A = 7, B = 3
  No. of statements executed = 5, total statements = 7
  Statement coverage = 5 / 7 * 100 ≈ 71.4 %

Scenario 2: If A = 4, B = 8
  No. of statements executed = 6, total statements = 7
  Statement coverage = 6 / 7 * 100 ≈ 85.7 %
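In practice, statement coverage is usually measured with a tool rather than by hand. A sketch using Python and the coverage.py tool (the file and function names are assumptions):

    # compare.py
    def compare(a, b):
        if a > b:
            print("A is greater than B")
        else:
            print("B is greater than A")

    if __name__ == "__main__":
        compare(7, 3)    # Scenario 1: only the 'if' branch runs

Running "coverage run compare.py" followed by "coverage report -m" lists the executed statements and flags the 'else' print as missed, matching the hand calculation above.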
White Box Test: Cyclomatic Complexity

• Cyclomatic complexity is a source code complexity
measurement that correlates with the number of
coding errors. It is calculated by developing a Control Flow
Graph of the code and measures the number of linearly
independent paths through a program module.
• The lower the program's cyclomatic complexity, the lower the risk
of modifying it and the easier it is to understand. It can be represented
using the formula below:
• Cyclomatic complexity = E - N + 2*P, where E = number of
edges in the flow graph, N = number of nodes in the flow
graph, and P = number of connected components (P = 1 for a single program or module).
Flow graph notation for a program
Calculating Cyclomatic Complexity

• cyclomatic complexity is calculated using the control


flow representation of the program code.
• In control flow representation of the program code,
• Nodes represent parts of the code having no
branches.
• Edges represent possible control flow transfers during
program execution

• There are 3 commonly used methods for calculating


the cyclomatic complexity
Method-01:
• Cyclomatic Complexity = Total number of closed regions in the control flow
graph + 1
Method-02:
• Cyclomatic Complexity = E – N + 2
Method-03:
• Cyclomatic Complexity = P + 1
• Here,
P = Total number of predicate nodes contained in the control flow graph

Note-

• Predicate nodes are the conditional nodes.


• They give rise to two branches in the control flow graph.
• Method-01:
  Cyclomatic Complexity = Total number of closed regions in the control flow graph + 1
                        = 2 + 1
                        = 3
• Method-02:
  Cyclomatic Complexity = E – N + 2
                        = 8 – 7 + 2
                        = 3
• Method-03:
  Cyclomatic Complexity = P + 1
                        = 2 + 1
                        = 3
V(G) = 9 – 7 + 2 = 4
V(G) = 3 + 1 = 4 (the condition nodes are nodes 1, 2 and 3)
Basis Set – a set of linearly independent execution paths of the program:
1, 7
1, 2, 6, 1, 7
1, 2, 3, 4, 5, 2, 6, 1, 7
1, 2, 3, 5, 2, 6, 1, 7
FlowGraph:
• Example:
  IF A = 10 THEN
    IF B > C THEN
      A = B
    ELSE
      A = C
    ENDIF
  ENDIF
  Print A
  Print B
  Print C
• Cyclomatic complexity = 8 – 7 + 2 = 3
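The formula can also be evaluated directly from an edge list. A small Python sketch using the node and edge counts of the example above (the numbering of the nodes is one possible choice, not given in the slides):

    def cyclomatic_complexity(num_edges, num_nodes, components=1):
        # V(G) = E - N + 2P
        return num_edges - num_nodes + 2 * components

    # Flow graph of the nested-IF example: 7 nodes, 8 edges, one connected component
    edges = [(1, 2), (1, 6), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 7)]
    print(cyclomatic_complexity(len(edges), 7))   # prints 3, matching 8 - 7 + 2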
• Example (the outer IF replaced by an assignment):
  A = 10
  IF B > C THEN
    A = B
  ELSE
    A = C
  ENDIF
  Print A
  Print B
  Print C
• Cyclomatic complexity = 7 – 7 + 2 = 2
Method-02:
Cyclomatic Complexity = E – N + 2
                      = 16 – 14 + 2
                      = 4
• Advantages of Cyclomatic Complexity:
• It can be used as a quality metric; it gives the relative
complexity of various designs.
• It can be computed faster than Halstead's
metrics.
• It is used to estimate the minimum effort and the
best areas of concentration for testing.
• It is able to guide the testing process.
• It is easy to apply.
• Disadvantages of Cyclomatic Complexity:
• It is a measure of the program's control
complexity and not the data complexity.
• It treats nested and non-nested conditional structures
alike, although nested structures are harder to
understand.
• In the case of simple comparisons and decision
structures, it may give a misleading figure.
Black Box Testing

• Black Box Testing is a software testing method in


which the functionalities of software applications
are tested without having knowledge of internal
code structure, implementation details and
internal paths. Black Box Testing mainly focuses
on input and output of software applications and
it is entirely based on software requirements and
specifications. It is also known as Behavioral
Testing.
Types of black-box
• requirements based
• positive/negative - checks both good/bad results
• boundary value analysis
• decision tables
• equivalence partitioning - group related inputs/outputs
• state-based - based on object state
diagrams
• compatibility testing
• user documentation testing
• domain testing
• Equivalence Class Testing: It is used to minimize the number
of possible test cases to an optimum level while maintaining
reasonable test coverage.
• Boundary Value Testing: Boundary value testing is focused on
the values at boundaries. This technique determines whether
a certain range of values is acceptable to the system or not.
It is very useful in reducing the number of test cases. It is most
suitable for systems where an input falls within certain
ranges.
• Decision Table Testing: A decision table puts causes and their
effects in a matrix. There is a unique combination in each
column.
Boundary testing
• boundary value analysis: Testing conditions on bounds
between classes of inputs.

• Why is it useful to test near boundaries?

– likely source of programmer errors (< vs. <=, etc.)


– language has many ways to implement boundary checking
– requirement specs may be fuzzy about behavior on boundaries
– often uncovers internal hidden limits in code
• example: array list must resize its internal array when it fills capacity
Boundary example
• Imagine we are testing a Date class with a
daysInMonth(month, year) method.
– What are some conditions and boundary tests for this
method?

• Possible answers:
– check for leap years (every 4th yr, no 100s, yes 400s)
– try years such as: even 100s, 101s, 4s, 5s
– try months such as: June, July, Feb, invalid values
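A sketch of boundary tests for the daysInMonth example, written with pytest parametrisation; the Python implementation shown is only a stand-in for the class under test:

    import calendar
    import pytest

    def days_in_month(month, year):
        # stand-in implementation; calendar.monthrange returns (first weekday, number of days)
        return calendar.monthrange(year, month)[1]

    @pytest.mark.parametrize("month, year, expected", [
        (2, 2000, 29),   # divisible by 400 -> leap year
        (2, 1900, 28),   # divisible by 100 but not 400 -> not a leap year
        (2, 2004, 29),   # divisible by 4 -> leap year
        (2, 2003, 28),   # ordinary year
        (4, 2003, 30),   # 30-day month
        (7, 2003, 31),   # 31-day month
    ])
    def test_days_in_month_boundaries(month, year, expected):
        assert days_in_month(month, year) == expected

    def test_invalid_month_is_rejected():
        with pytest.raises(calendar.IllegalMonthError):
            days_in_month(13, 2003)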
Decision tables
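The slide's table is not reproduced here; as a small illustration (assumed, not from the slides), a decision table for a login form with two causes and their effects:

    Causes               Rule 1     Rule 2   Rule 3   Rule 4
    Username valid?        T          T        F        F
    Password valid?        T          F        T        F
    Effect               Login ok   Error    Error    Error

Each column is a unique combination of the causes, and each column becomes at least one test case.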
Equivalence testing
• equivalence partitioning:
– A black-box test technique to reduce # of required test cases.
– What is it?

– steps in equivalence testing:


• identify classes of inputs with same behavior
• test on at least one member of each equivalence class
• assume behavior will be same for all members of class

– criteria for selecting equivalence classes:


• coverage : every input is in one class
• disjointedness : no input in more than one class
• representation : if error with 1 member of class, will occur with all
• Perform boundary value analysis and
equivalence-class testing for a next-date function with these input
conditions:
• D: 1 ≤ Day ≤ 31
• M: 1 ≤ Month ≤ 12
• Y: 1800 ≤ Year ≤ 2048
• Boundary Value Analysis:
• No. of test cases (n = no. of variables) = 4n + 1
= 4×3 + 1 = 13
Test Cases:

Test Case ID   Day   Month   Year   Expected Output
1              1     6       2000   2-6-2000
2              2     6       2000   3-6-2000
3              15    6       2000   16-6-2000
4              30    6       2000   1-7-2000
5              31    6       2000   Invalid Date
6              15    1       2000   16-1-2000
7              15    2       2000   16-2-2000
8              15    11      2000   16-11-2000
9              15    12      2000   16-12-2000
10             15    6       1800   16-6-1800
11             15    6       1801   16-6-1801
12             15    6       2047   16-6-2047
13             15    6       2048   16-6-2048
Test Cases:

Test Case ID   Day   Month   Year   Expected Output
E1             15    4       2004   16-4-2004
E2             15    4       2003   16-4-2003
E3             15    1       2004   16-1-2004
E4             15    1       2003   16-1-2003
E5             15    2       2004   16-2-2004
E6             15    2       2003   16-2-2003
E7             29    4       2004   30-4-2004
E8             29    4       2003   30-4-2003
E9             29    1       2004   30-1-2004
E10            29    1       2003   30-1-2003
E11            29    2       2004   1-3-2004
E12            29    2       2003   Invalid Date
E13            30    4       2004   1-5-2004
E14            30    4       2003   1-5-2003
E15            30    1       2004   31-1-2004
E16            30    1       2003   31-1-2003
E17            30    2       2004   Invalid Date
E18            30    2       2003   Invalid Date
E19            31    4       2004   Invalid Date
E20            31    4       2003   Invalid Date
E21            31    1       2004   1-2-2004
E22            31    1       2003   1-2-2003
E23            31    2       2004   Invalid Date
E24            31    2       2003   Invalid Date
• Equivalence Class Testing:
• Input classes:
  Day:   D1: day between 1 and 28   D2: 29   D3: 30   D4: 31
  Month: M1: month has 30 days   M2: month has 31 days   M3: month is February
  Year:  Y1: year is a leap year   Y2: year is a normal year
• Output classes:
  Increment Day; Reset Day and Increment Month; Increment Year; Invalid Date
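A sketch tying these classes to code: a hypothetical next_date function and one representative test per output class (the implementation is an assumption for illustration, not part of the slides):

    def is_leap(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def next_date(day, month, year):
        days_in = {1: 31, 2: 29 if is_leap(year) else 28, 3: 31, 4: 30, 5: 31, 6: 30,
                   7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}
        if not (1 <= month <= 12) or not (1 <= day <= days_in[month]):
            return "Invalid Date"                    # output class: Invalid Date
        if day < days_in[month]:
            return (day + 1, month, year)            # output class: Increment Day
        if month < 12:
            return (1, month + 1, year)              # output class: Reset Day, Increment Month
        return (1, 1, year + 1)                      # output class: Increment Year

    assert next_date(15, 6, 2000) == (16, 6, 2000)   # D1 x month with 30 days
    assert next_date(30, 4, 2004) == (1, 5, 2004)    # D3 x month with 30 days
    assert next_date(31, 12, 2003) == (1, 1, 2004)   # year rollover
    assert next_date(29, 2, 2003) == "Invalid Date"  # D2 x February x normal year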
Who does Testing?
• It depends on the process and the associated
stakeholders of the project(s). In the IT industry, large
companies have a team with responsibilities to
evaluate the developed software in context of the
given requirements

• Different companies have different designations for


people who test the software on the basis of their
experience and knowledge such as Software Tester,
Software Quality Assurance Engineer, QA Analyst, etc.
When to Start Testing?
• An early start to testing reduces the cost and time to
rework and produce error-free software that is
delivered to the client
• However in Software Development Life Cycle (SDLC),
testing can be started from the Requirements Gathering
phase and continued till the deployment of the
software.
• In the Waterfall model, formal testing is conducted in
the testing phase; but in the incremental model, testing
is performed at the end of every increment/iteration
and the whole application is tested at the end.
Testing is done in different forms at every
phase of SDLC
• During the requirement gathering phase, the
analysis and verification of requirements are
also considered as testing.
• Reviewing the design in the design phase with
the intent to improve the design is also
considered as testing.
• Testing performed by a developer on
completion of the code is also categorized as
testing.
When to Stop Testing?
• It is difficult to determine when to stop testing, as testing is a
never-ending process and no one can claim that a software is
100% tested
The following aspects are to be considered for stopping the
testing process:
• Testing Deadlines
• Completion of test case execution
• Completion of functional and code coverage to a certain point
• Bug rate falls below a certain level and no high-priority bugs
are identified
• Management decision
Verification & Validation
Software Testing - QA, QC & Testing
Software Testing - Types of Testing
• Manual Testing
• Automation Testing

• Manual testing includes testing a software


manually, i.e., without using any automated
tool or any script.
• Automation testing, which is also known as
Test Automation, is when the tester writes
scripts and uses another software to test the
product.
• This process involves automation of a manual
process. Automation Testing is used to re-run
the test scenarios that were performed
manually, quickly, and repeatedly.
• Apart from regression testing, automation
testing is also used to test the application from
load, performance, and stress point of view.
• It increases the test coverage, improves
accuracy, and saves time and money in
comparison to manual testing.
Functional Testing

• Functional testing is testing the ‘Functionality’


of a software or an application under test.
• It tests the behavior of the software under
test. Based on the requirement of the client, a
document called a software specification or
Requirement Specification is used as a guide
to test the application.
Types of Functional Testing

• Smoke Testing:
• Sanity Testing:
• Integration Testing:
• Regression Testing:
• Localization Testing:
• User Acceptance Testing
Smoke Testing

• This type of testing is performed before the


actual system testing to check if the critical
functionalities are working fine in order to
carry out further extensive testing.
• This, in turn, saves time installing the new
build again and avoids further testing if the
critical functionalities fail to work. This is a
generalized way of testing the application.
• Once a build is deployed to QA, the basic cycle followed
is that if the smoke test passes, the build is accepted by
the QA team for further testing but if it fails, then the
build is rejected until the reported issues are fixed.
• This testing is normally used in Integration Testing,
System Testing, and Acceptance Level Testing. Never
treat this as a substitute for actual end-to-end
complete testing. It comprises both positive and
negative tests depending on the build implementation.
• Smoke Testing Cycle
• Who Should Perform the Smoke Test?
• Smoke Testing is ideally performed by the QA lead who
decides based on the result as to whether to pass the build
to the team for further testing or reject it. Or in the absence
of the lead, the QA’s themselves can also perform this
testing.
• At times, when the project is a large scale one, then a group
of QA can also perform this testing to check for any
showstoppers. But this is not so in the case of SCRUM
because SCRUM is a flat structure with no Leads or Managers
and each tester has their own responsibilities towards their
stories.
Suppose you’re smoke testing a new web application for user
authentication. You’d check if users can register and log in with
valid credentials and ensure the login process follows the
correct steps.
You may also need to verify that error messages appear for
incorrect credentials. Passing these checks confirms the
application’s core functionalities are stable enough for further
testing.
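A minimal smoke-test sketch for that scenario, using Python's requests library; the base URL, endpoints and form fields are hypothetical:

    import requests

    BASE = "https://example.com"    # application under test (placeholder)
    CREDS = {"user": "smoke_user", "password": "Valid@123"}

    def smoke_test():
        # 1. Registration accepts a valid user
        r = requests.post(BASE + "/register", data=CREDS)
        assert r.status_code == 200, "registration failed"

        # 2. Login succeeds with valid credentials
        r = requests.post(BASE + "/login", data=CREDS)
        assert r.status_code == 200, "login failed"

        # 3. Login is rejected with wrong credentials (an error is expected)
        r = requests.post(BASE + "/login", data={"user": "smoke_user", "password": "wrong"})
        assert r.status_code in (401, 403), "invalid login was not rejected"

    if __name__ == "__main__":
        smoke_test()
        print("Smoke test passed: build is stable enough for further testing")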
Sanity Testing
• It is a type of testing where only a specific functionality or a bug
which is fixed is tested to check whether the functionality is
working fine and see if there are no other issues due to the
changes in the related components. It is a specific way of testing
the application.
• Sanity Testing is done when as a QA we do not have sufficient time
to run all the test cases, be it Functional Testing, UI, OS or Browser
Testing.
• At times the testing is even done randomly with no test cases. But
remember, the sanity test should only be done when you are
running short of time, so never use this for your regular releases.
Theoretically, this testing is a subset of Regression Testing.
Sanity Testing
• Sanity testing is done at random to verify that each
functionality is working as expected.
• This is not a planned testing and is done only when
there’s a time crunch.
• It may not always be possible to create test
cases; usually a rough set of test cases is created.
• This mainly includes verification of business rules,
functionality.
• This mostly spans over 2-3 days max.
Difference Between Smoke and Sanity Testing
Localization Testing

• It is a testing process to check the software’s


functioning when it is transformed into an
application using a different language as required
by the client.
• Example: Say a website is working fine in an English
language setup and is now localized to a Spanish
language setup. Changes in the language may affect
the overall user interface and functionality too.
Testing done to check these changes is known
as Localization testing.
User Acceptance Testing

• In User Acceptance testing, the application is


tested based on the user’s comfort and
acceptance by considering their ease of use.
• The actual end users or the clients are given a
trial version to be used in their office setup to
check if the software is working as per their
requirements in a real environment. This testing
is carried out before the final launch and is also
termed as Beta Testing or end-user testing.
Steps performed in Functional Testing

• Create input values


• Execute test cases
• Compare actual and expected output
Functional Testing Example

• An online HRMS portal on which the user logs in with their user account
and password. The login page has two text fields for username and
password. It also has two buttons – Login and Cancel.
• When successful, the login page directs the user to the HRMS home
page. The cancel button cancels the login.
Specifications:
• The user id field requires a minimum of 6 characters, a maximum of 10
characters, numbers (0-9), letters (a-z, A-Z), special characters (only
underscore, period, hyphen allowed). It cannot be left blank. User id
must begin with a number or letter; it cannot begin with a special
character.
• The password field requires a minimum of 6 characters, a maximum of 8
characters, numbers (0-9), letters (a-z, A-Z), all special characters. It
cannot be blank.
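A sketch of how those rules could be expressed as validators plus positive and negative functional checks (the regular expressions are one possible reading of the specification, not part of it):

    import re

    # 6-10 chars, starts with a letter or digit, then letters/digits/underscore/period/hyphen
    USER_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]{5,9}$")
    # 6-8 characters, letters/digits/special characters, not blank
    PASSWORD_RE = re.compile(r"^\S{6,8}$")

    def is_valid_user_id(user_id):
        return bool(USER_ID_RE.fullmatch(user_id))

    def is_valid_password(password):
        return bool(PASSWORD_RE.fullmatch(password))

    assert is_valid_user_id("arya_22")       # valid: 7 characters, starts with a letter
    assert not is_valid_user_id("_arya22")   # invalid: begins with a special character
    assert not is_valid_user_id("arya")      # invalid: fewer than 6 characters
    assert is_valid_password("Pass@12")      # valid: 7 characters
    assert not is_valid_password("")         # invalid: blank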
Non-Functional Testing

• Some aspects of an application, such as its
performance, are complex to assess; this type of testing checks the
quality of the software to be tested. Quality mainly depends on the
time, accuracy, stability, correctness and durability of a product
under various adverse circumstances.
• In software terms, when an application works as per the user's
expectation, smoothly and efficiently under any condition, then it
is said to be a reliable application. Based on these aspects of
quality, it is very critical to test under these parameters. This type
of testing is called Non-Functional Testing.
• It is not feasible to test this type manually, hence some special
automated tools are used to test it.
• Example tools: LoadRunner, JMeter etc.
Non-Functional Testing Types
• Performance Testing
• Compatibility Testing
• Load Testing
• Usability Testing
• Failover Testing
• Stress Testing
• Scalability Testing
• Maintainability Testing
• Volume Testing
• Recovery Testing
• Security Testing
• Portability Testing
• Compliance Testing
• Efficiency Testing
• Endurance Testing
• Reliability Testing
• Visual Testing
• Internationalization Testing
• Localization Testing
Types of Non-Functional Testing

• Performance Testing:
• Usability Testing:
• Security Testing:
• Performance Testing:
– Load Testing
– Stress Testing
– Volume Testing
– Endurance Testing
• Load Testing: An application which is expected to handle a particular workload is
tested for its response time in a real environment depicting a particular workload.
It is tested for its ability to function correctly in a stipulated time and is able to
handle the load.
Examples of load testing
• Downloading a series of large files from the internet
• Running multiple applications on a computer or server simultaneously
• Assigning many jobs to a printer in a queue
• Subjecting a server to a large amount of traffic
• Writing and reading data to and from a hard disk continuously
• Load Testing – Real time Examples
• Target.com lost $780,000 in sales in just 3 hours when the site was down during a
promotion in 2015
• When Amazon.com servers crashed in 2013 for 30 minutes, Amazon lost $66,240
per minute
• Example: load testing
• Check that the mail server can handle thousands of
concurrent users.
• Testing an e-commerce application by simulating
thousands of users making purchases simultaneously.
• Tools Preferred:
• Neotys Neoload
• JMeter
• Parasoft Load Test
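A minimal load-test sketch for the e-commerce example, written for Locust, an open-source Python load-testing tool (not one of the tools listed above); the host and endpoints are hypothetical:

    # locustfile.py -- run with: locust -f locustfile.py --host https://shop.example.com
    from locust import HttpUser, task, between

    class Purchaser(HttpUser):
        wait_time = between(1, 3)           # each simulated user pauses 1-3 s between tasks

        @task(3)
        def browse_catalogue(self):
            self.client.get("/products")    # weighted 3:1 in favour of browsing

        @task(1)
        def place_order(self):
            self.client.post("/orders", json={"product_id": 42, "quantity": 1})

Locust then ramps up the configured number of concurrent users and reports response times and failure rates under that load.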
• Stress Testing: In Stress testing, the application is stressed
with an extra workload to check if it works efficiently and
is able to handle the stress as per the requirement.
• Example: Consider a website which is tested to check its
behavior when the user accesses is at its peak. There
could be a situation where the workload crosses beyond
the specification. In this case, the website may fail, slow
down or even crash.
• Stress testing is to check these situations using
automation tools to create a real-time situation of
workload and find the defects.
• Usability Testing:
• In this type of testing, the User Interface is tested for
its ease of use and see how user-friendly it is.
• Security Testing:
• Security Testing is to check how secure the software is
regarding data over the network from malicious
attacks. The key areas to be tested in this testing
include authorization, authentication of users and
their access to data based on roles such as admin,
moderator, composer, and user level.
• Example security testing
• Suppose a financial institute is reforming its security
guidelines and introducing new privacy protocols to
safeguard customer data. Performing
penetration testing on a bank application to identify
security vulnerabilities in such cases is a way to do so.
• Tools Preferred:
• ImmuniWeb
• Vega
• Google Nogotofail
• Volume Testing: Under Volume testing, the application's ability to handle
a large volume of data is tested in a real-time environment. The
application is tested for its correctness and reliability under adverse
conditions.
• Endurance Testing: In Endurance testing the durability of the software is
tested with a repeated and consistent flow of load in a scalable pattern. It
checks the endurance power of the software when loaded with a
consistent workload.
• While Stress testing takes the tested system to its limits, Endurance
testing takes the application to its limit over time.
• For Example, the most complex issues – memory leaks, database server
utilization, and unresponsive system – happen when software runs for an
extended period of time. If you skip the endurance tests, your chances of
detecting such defects prior to deployment are quite low.
What to Automate?
• It is not possible to automate everything in a
software product. Areas in which a user can make
transactions, such as the login form or
registration forms, and any area where a large number
of users can access the software simultaneously,
should be automated.
• Furthermore, all GUI items, connections with
databases, field validations, etc. can be efficiently
tested by automating the manual process.
When to Automate?
Test Automation should be used by considering the
following aspects of a software:
• Large and critical projects
• Projects that require testing the same areas frequently
• Requirements not changing frequently
• Accessing the application for load and performance
with many virtual users
• Stable software with respect to manual testing
• Availability of time
How to Automate?
• Identifying areas within a software for automation
• Selection of appropriate tool for test automation
• Writing test scripts
• Development of test suites
• Execution of scripts
• Create result reports
• Identify any potential bug or performance issues
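As an illustration of "writing test scripts", a minimal Selenium sketch that automates a login check in Python; the URL and element IDs are hypothetical and depend on the application under test:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("test_user")
        driver.find_element(By.ID, "password").send_keys("secret123")
        driver.find_element(By.ID, "login-button").click()
        assert "Dashboard" in driver.title, "login did not reach the dashboard"
    finally:
        driver.quit()

Such a script can be re-run quickly and repeatedly, which is what makes automation suitable for the regression, load and performance scenarios mentioned earlier.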
Testing Techniques
• Black-Box Testing
• The technique of testing without having any knowledge
of the interior workings of the application is called
black-box testing.
• The tester is oblivious to the system architecture and
does not have access to the source code.
• Typically, while performing a black-box test, a tester will
interact with the system's user interface by providing
inputs and examining outputs without knowing how
and where the inputs are worked upon.
White Box Testing
• White-box testing is the detailed investigation of
internal logic and structure of the code. White-box
testing is also called glass testing or open-box
testing.
• In order to perform white-box testing on an
application, a tester needs to know the internal
workings of the code.
• The tester needs to have a look inside the source
code and find out which unit/chunk of the code is
behaving inappropriately.
Grey-Box Testing
• Grey-box testing is a technique to test the
application with limited knowledge of
the internal workings of the application.
Software Testing - Levels
Functional Testing
• This is a type of black-box testing that is based on the
specifications of the software that is to be tested.
• The application is tested by providing input and then
the results are examined that need to conform to the
functionality it was intended for.
• Functional testing of a software is conducted on a
complete, integrated system to evaluate the system's
compliance with its specified requirements.
Non-Functional Testing
• This section is based upon testing an
application from its non-functional attributes.
Non-functional testing involves testing a
software from the requirements which are
nonfunctional in nature but important such as
performance, security, user interface, etc.
Verification vs. validation
• Verification: "Are we building the product right"
– The software should conform to its specification
• Validation: "Are we building the right product"
– The software should do what the user really requires
• V & V must be applied at each stage in the
software process
• Two principal objectives
– Discovery of defects in a system
– Assessment of whether the system is usable in an
operational situation
Verification
• Verification includes activities such as reviews of
business requirements, system requirements and designs,
and code walkthroughs carried out while the product is
being developed.
• It is also known as static testing. It checks that "we are
building the product right" and that the application
under development fulfils all the requirements given by
the client.
Validation testing
• Validation testing is where the tester performs
functional and non-functional testing. Functional
testing includes Unit Testing (UT), Integration Testing (IT)
and System Testing (ST), while non-functional testing
includes User Acceptance Testing (UAT).
Static and dynamic verification
• STATIC – Software inspections
– Concerned with analysis of the static system
representation to discover problems
– May be supplemented by tool-based document and
code analysis
• DYNAMIC – Software testing
– Concerned with exercising and observing product
behaviour
– The system is executed with test data and its
operational behaviour is observed
Static and dynamic V&V
[Figure] Static verification applies to the requirements specification,
high-level design, formal specification, detailed design and program;
dynamic validation applies to the prototype and the program.
Verification & Validation
Software Testing Tools
• HP Quick Test Professional
• Selenium
• IBM Rational Functional Tester
• IBM Rational Performance Tester
• SilkTest
• TestComplete
• Testing Anywhere
• WinRunner
• LoadRunner
• Visual Studio Test Professional
• Junit
• SoapUI
Major Criteria for Selecting a Testing Tool
Major Criteria for Selecting a Right Test Automation Tool
How to Select Automation Tool for Your Project?
• Do you have the necessary skilled resources to
allocate for automation tasks?
• What is your budget?
• Does the tool satisfy your testing needs?
• Does the tool provide a free trial version so that you
can evaluate it before making a decision?
• Is the current tool version stable?
• How is the tool learning curve? Is the learning
time acceptable for your goals?
• Which testing types does it support?
• Does the tool provide an easy interface to create
and maintain test scripts?
• Does it provide a simple interface yet powerful
features to accomplish complex tasks?
• How easy is it to provide input test data for
complex or load tests?
• Does it provide powerful reporting with a
graphical interface?
• Does it integrate well with your other testing
tools like project planning and
test management tools?
• Tool vendor refund policy
• Existing customer reviews for the tool
• Is the vendor providing initial training?
Program testing
• Can reveal the presence of errors, not their
absence
• A successful test is a test which discovers one
or more errors
• The only validation technique for non-functional
requirements
• Should be used in conjunction with static
verification to provide full V&V coverage
Types of testing
• Defect testing
– Tests designed to discover system defects.
– A successful defect test is one which reveals the
presence
of defects in a system.
• Statistical testing
– Tests designed to reflect the frequency of user
inputs
– Used for reliability estimation
Testing and debugging
• Defect testing and debugging are distinct
processes
• Verification and validation is concerned with
establishing the existence of defects in a program
• Debugging is concerned with locating and
repairing these errors
– Debugging involves formulating hypotheses about
program behaviour then testing these hypotheses to
find the system error
V & V planning
• Careful planning is required to get the most out
of testing and inspection processes
• Planning should start early in the development
process
• The plan should identify the balance between
static verification and testing
• Test planning is about defining standards for
the testing process rather than describing
product tests
The structure of a software test plan
• The testing process
• Requirements traceability
• Tested items
• Testing schedule
• Test recording procedures
• Hardware and software requirements
• Constraints
Software Review
• Software review is an important part of the
"Software Development Life Cycle (SDLC)"
that assists software engineers in validating
the quality, functionality, and other vital
features and components of the software.
• Software review is used to verify various
documents like requirements, system designs,
codes, "test plans", & "test cases".
Software inspections
• Involve people examining the source representation with the
aim of discovering anomalies and defects
• Do not require execution of a system
– May be used before implementation
• May be applied to any representation of the system
– Requirements, design, test data, etc.
• Very effective technique for discovering errors
• Many different defects may be discovered in a single inspection
– In testing, one defect may mask another so several executions are required
• Reuse of domain and programming knowledge
– Reviewers are likely to have seen the types of error that commonly arise
Inspections and testing
• Inspections and testing are complementary and
not opposing verification techniques
• Both should be used during the V & V process
• Inspections can check conformance with a
specification but not conformance with the
customer’s real requirements
• Inspections cannot check non-functional
characteristics such as performance, usability,
etc.
Program inspections
• Formalised approach to document reviews
• Intended explicitly for defect DETECTION (not
correction)
• Defects may be
– logical errors
– anomalies in the code that might indicate an
erroneous condition (e.g. an uninitialized variable)
– non-compliance with standards
Inspection procedure
• System overview presented to inspection team
• Code and associated documents are
distributed to inspection team in advance
• Inspection takes place and discovered errors
are noted
• Modifications are made to repair discovered
errors
• Re-inspection may or may not be required
Inspection teams
• Made up of at least 4 members
• Author of the code being inspected
• Inspector who finds errors, omissions and
inconsistencies
• Reader who reads the code to the team
• Moderator who chairs the meeting and notes
discovered errors
• Other roles are Scribe and Chief moderator
Inspection checklists
• Checklist of common errors should be used to
drive the inspection
• Error checklist is programming language
dependent
• The 'weaker' the type checking, the larger the
checklist
• Examples
– Initialisation
– Constant naming
– Loop termination
– Array bounds
Inspection rate
• 500 statements/hour during overview
• 125 source statement/hour during individual
preparation
• 90-125 statements/hour can be inspected
• Inspection is therefore an expensive process
• Inspecting 500 lines costs about 40
person-hours
Automated static analysis
• Static analysers are software tools for source
text processing
• They parse the program text and try to
discover potentially erroneous conditions and
bring these to the attention of the V & V team
• Very effective as an aid to inspections. A
supplement to but not a replacement for
inspections
Static analysis checks
• Data faults: variables used before initialisation; variables declared but never used;
variables assigned twice but never used between assignments; possible array bound
violations; undeclared variables
• Control faults: unreachable code; unconditional branches into loops
• Input/output faults: variables output twice with no intervening assignment
• Interface faults: parameter type mismatches; parameter number mismatches;
non-usage of the results of functions; uncalled functions and procedures
• Storage management faults: unassigned pointers; pointer arithmetic
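As a small illustration of the fault classes listed above (the fragment below is hypothetical and deliberately buggy; a static analyser such as pylint, or a compiler's warnings, would typically report these without executing the code):

import os                              # interface/usage fault: module imported but never used

def report(values):
    total = 0
    unused = 42                        # data fault: variable assigned but never used
    for v in values:
        total += v
    return total
    print("done")                      # control fault: unreachable code after return

def average(values):
    if len(values) > 0:
        mean = sum(values) / len(values)
    return mean                        # data fault: 'mean' may be used before initialisation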
Stages of static analysis
• Control flow analysis
– Checks for loops with multiple exit or entry points,
finds unreachable code, etc.
• Data use analysis
– Detects uninitialized variables, variables written twice
without an intervening assignment, variables which
are declared but never used, etc.
• Interface analysis
– Checks the consistency of routine and procedure
declarations and their use
Stages of static analysis (2)
• Information flow analysis
– Identifies the dependencies of output variables
– Does not detect anomalies, but highlights information
for code inspection or review
• Path analysis
– Identifies paths through the program and sets out the
statements executed in that path
– Also potentially useful in the review process
• Both these stages generate vast amounts of
information
– Handle with caution!
Procedure of Audit
• Analysing the testing process as described in the quality
documentation: this helps the auditor understand the testing
process as it is defined.
• Reviewing the available documents at every stage: the documents
reviewed include the test strategy, test plan, test cases, test logs,
tracked defects, test environment details and other applicable
documents.
• Interviewing the project team at different stages: the project
manager, leads, testers and developers are interviewed at different
stages to gather information about the methods and techniques
actually used during the testing process. This provides valuable
insight into what was actually done compared with what was
expected.
Key points
• Verification and validation are not the same thing.
– Verification shows conformance with specification
– Validation shows that the program meets the customer’s needs
• Test plans should be drawn up to guide the testing process
• Static verification techniques involve examination and analysis
of the program for error detection
• Program inspections are very effective in discovering errors
• Program code in inspections is checked by a small team to
locate software faults
• Static analysis tools can discover program anomalies which may
be an indication of faults in the code
• The Cleanroom development process depends on incremental
development, static verification and statistical testing
SDLC VS STLC
Testing in SDLC
Testing is one of the most important activities in the SDLC for the following reasons:
• Testing in the SDLC helps to prove whether all the software requirements have been
implemented correctly.
• Testing helps in identifying defects and ensuring that they are addressed before the
software is deployed. If a defect is discovered and fixed only after deployment, the cost
of correction is much higher than the cost of fixing it at an earlier stage of development.
• Testing in the SDLC demonstrates that the software works according to its specification
and that the behavioural and performance requirements appear to have been met.
• When a system is built from several components, the different levels of testing help to
verify the proper integration and interaction of each component with the rest of the
system.
• Testing improves the quality of both the product and the project by discovering bugs
early in the software.
• Testing improves not only the quality of the product but also the quality of the
organisation that builds it.
STLC
Requirement Analysis
• During this phase, test team studies the requirements
from a testing point of view to identify the testable
requirements.
• The QA team may interact with various stakeholders
(Client, Business Analyst, Technical Leads, System
Architects etc) to understand the requirements in detail.
• Requirements could be either Functional (defining what
the software must do) or Non-Functional (defining system
performance, security, availability, etc.).
• Automation feasibility for the given testing project is also
done in this stage.
Activities
• Identify types of tests to be performed.
• Gather details about testing priorities and focus.
• Prepare Requirement Traceability Matrix (RTM).
• Identify test environment details where testing is
supposed to be carried out.
• Automation feasibility analysis (if required).
Deliverables
• RTM
• Automation feasibility report. (if applicable)
Test Planning
• This phase is also called Test Strategy phase.
• Typically , in this stage, a Senior QA manager will determine effort and
cost estimates for the project and would prepare and finalize the Test
Plan.
Activities
• Preparation of test plan/strategy document for various types of testing
• Test tool selection
• Test effort estimation
• Resource planning and determining roles and responsibilities.
• Training requirement
Deliverables
• Test plan /strategy document.
• Effort estimation document.
Test Case Development
• This phase involves creation, verification and rework of test
cases & test scripts.
• Test data is identified/created, reviewed and then
reworked as well.
Activities
• Create test cases, automation scripts (if applicable)
• Review and baseline test cases and scripts
• Create test data (If Test Environment is available)
Deliverables
• Test cases/scripts
• Test data
Test Environment Setup
• Test environment decides the software and
hardware conditions under which a work product is
tested. Test environment set-up is one of the critical
aspects of testing process and can be done in
parallel with Test Case Development Stage.
• Test team may not be involved in this activity if the
customer/development team provides the test
environment in which case the test team is required
to do a readiness check (smoke testing) of the given
environment.
Activities
• Understand the required architecture, environment
set-up and prepare hardware and software
requirement list for the Test Environment.
• Setup test Environment and test data
• Perform smoke test on the build
Deliverables
• Environment ready with test data set up
• Smoke Test Results.
Test Execution
During this phase test team will carry out the testing based on the test
plans and the test cases prepared. Bugs will be reported back to the
development team for correction and retesting will be performed.
Activities
• Execute tests as per plan
• Document test results, and log defects for failed cases
• Map defects to test cases in RTM
• Retest the defect fixes
• Track the defects to closure
Deliverables
• Completed RTM with execution status
• Test cases updated with results
• Defect reports
Test Cycle Closure
• Testing team will meet , discuss and analyze
testing artifacts to identify strategies that have
to be implemented in future, taking lessons
from the current test cycle.
• The idea is to remove the process bottlenecks
for future test cycles and share best practices
for any similar projects in future.
Activities
• Evaluate cycle completion criteria based on time, test coverage, cost,
software quality and critical business objectives
• Prepare test metrics based on the above parameters.
• Document the learning out of the project
• Prepare Test closure report
• Qualitative and quantitative reporting of quality of the work product to
the customer.
• Test result analysis to find out the defect distribution by type and
severity.
Deliverables
• Test Closure report
• Test metrics
Testing Strategies and Tactics
Test Strategy
• A Test Strategy document is a high level
document and normally developed by project
Manager and Test manager. This document
defines “Software Testing Approach” to
achieve testing objectives.
• The Test Strategy is normally derived from the
Business Requirement Specification
document.
• The Test Strategy document is a static document
meaning that it is not updated too often.
• It sets the standards for testing processes and
activities and other documents such as the Test
Plan draws its contents from those standards
set in the Test Strategy Document.
• For larger projects, there is one Test Strategy
document and a separate Test Plan for each phase
or level of testing.
• Components of the Test Strategy document
– Scope and Objectives
– Business issues
– Roles and responsibilities
– Communication and status reporting
– Test deliverables
– Industry standards to follow
– Test automation and tools
– Testing measurements and metrics
– Risks and mitigation
– Defect reporting and tracking
– Change and configuration management
– Training plan
Test Plan
• A Software Test Plan is a document describing
the testing scope and activities. It is the basis
for formally testing any software/product in a
project.
• test plan: A document describing the scope,
approach, resources and schedule of intended
test activities.
• It is a record of the test planning process.
• master test plan: A test plan that typically
addresses multiple test levels.
• phase test plan: A test plan that typically
addresses one test phase.
TEST PLAN TYPES
• Master Test Plan: A single high-level test plan for a
project/product that unifies all other test plans.
• Testing Level Specific Test Plans: plans for each level of
testing.
– Unit Test Plan
– Integration Test Plan
– System Test Plan
– Acceptance Test Plan
• Testing Type Specific Test Plans: Plans for major types of
testing like Performance Test Plan and Security Test Plan.
• TEST PLAN TEMPLATE
• The format and content of a software test plan
vary depending on the processes, standards,
and test management tools being
implemented.
• Nevertheless, the following format, which is
based on IEEE standard for software test
documentation, provides a summary of what a
test plan can/should contain.
• Test Plan Identifier:
– Provide a unique identifier for the document. (Adhere to
the Configuration Management System if you have one.)
• Introduction:
– Provide an overview of the test plan.
– Specify the goals/objectives.
– Specify any constraints.
• References:
– List the related documents, with links to them if
available, including the following:
– Project Plan
– Configuration Management Plan
• Test Items:
– List the test items (software/products) and their versions.
• Features to be Tested:
– List the features of the software/product to be tested.
– Provide references to the Requirements and/or Design
specifications of the features to be tested
• Features Not to Be Tested:
– List the features of the software/product which will not be
tested.
– Specify the reasons these features won’t be tested.
• Approach:
– Mention the overall approach to testing.
– Specify the testing levels [if it’s a Master Test Plan], the testing
types, and the testing methods [Manual/Automated; White
Box/Black Box/Gray Box]
• Item Pass/Fail Criteria:
– Specify the criteria that will be used to determine whether
each test item (software/product) has passed or failed testing.
• Suspension Criteria and Resumption Requirements:
– Specify criteria to be used to suspend the testing activity.
– Specify testing activities which must be redone when testing is
resumed.
• Test Deliverables:
– List test deliverables, and links to them if available, including
the following:
• Test Plan (this document itself)
• Test Cases
• Test Scripts
• Defect/Enhancement Logs
• Test Reports
• Test Environment:
– Specify the properties of test environment: hardware,
software, network etc.
– List any testing or related tools.
• Estimate:
– Provide a summary of test estimates (cost or effort) and/or
provide a link to the detailed estimation.
• Schedule:
– Provide a summary of the schedule, specifying key test
milestones, and/or provide a link to the detailed schedule.
• Staffing and Training Needs:
– Specify staffing needs by role and required skills.
– Identify training that is necessary to provide those skills, if
not already acquired.
• Responsibilities:
– List the responsibilities of each team/role/individual.
• Risks:
– List the risks that have been identified.
– Specify the mitigation plan and the contingency plan for each
risk.
• Assumptions and Dependencies:
– List the assumptions that have been made during the
preparation of this plan.
– List the dependencies.
• Approvals:
– Specify the names and roles of all persons who must approve
the plan.
– Provide space for signatures and dates. (If the document is to be
printed.)
Test Plan Outline
Test Plan V/s Test Strategy
• Test Plan: a document that defines the scope, objective, approach and emphasis of a
software testing effort. Test Strategy: a set of guidelines that explains test design and
determines how testing needs to be done.
• Test Plan components: test plan id, features to be tested, test techniques, testing tasks,
features pass or fail criteria, test deliverables, responsibilities, schedule, etc. Test Strategy
components: objectives and scope, documentation formats, test processes, team reporting
structure, client communication strategy, etc.
• Test Plan: carried out by a testing manager or lead; describes how to test, when to test,
who will test and what to test. Test Strategy: carried out by the project manager; says
what type of technique to follow and which modules to test.
• Test Plan: narrates the specifics. Test Strategy: narrates the general approaches.
• Test Plan: can change. Test Strategy: cannot be changed.
• Test Plan: planning is done to determine possible issues and dependencies in order to
identify risks. Test Strategy: a long-term plan of action; information that is not project
specific can be abstracted into the test approach.
• Test Plan: exists as an individual document. Test Strategy: in smaller projects it is often
found as a section of the test plan.
• Test Plan: defined at project level. Test Strategy: set at organisation level and can be
used by multiple projects.
Test Case
• A Test Case is a set of actions executed to verify a
particular feature or functionality of your software
application. A Test Case contains test steps, test
data, precondition, postcondition developed for
specific test scenario to verify any requirement.
The test case includes specific variables or
conditions, using which a testing engineer can
compare expected and actual results to determine
whether a software product is functioning as per
the requirements of the customer.
Test case
• A test case is a set of conditions or variables
under which a tester will determine whether a
system under test satisfies requirements or
works correctly.
• The process of developing test cases can also
help find problems in the requirements or
design of an application.
Test Scenario Vs Test Case
• For the Test Scenario "Check Login Functionality", there
are many possible test cases, for example:
• Test Case 1: Check results on entering valid
User Id & Password
• Test Case 2: Check results on entering Invalid
User ID & Password
• Test Case 3: Check response when a User ID is
Empty & Login Button is pressed, and many
more
The format of Standard Test Cases
Below is an example of standard login test cases.

Test Case TU01
• Description: Check customer login with valid data
• Test Steps: 1. Go to site http://demo.guru99.com 2. Enter UserId 3. Enter Password 4. Click Submit
• Test Data: UserId = guru99, Password = pass99
• Expected Result: User should log in to the application
• Actual Result: As expected
• Pass/Fail: Pass

Test Case TU02
• Description: Check customer login with invalid data
• Test Steps: 1. Go to site http://demo.guru99.com 2. Enter UserId 3. Enter Password 4. Click Submit
• Test Data: UserId = guru99, Password = glass99
• Expected Result: User should not log in to the application
• Actual Result: As expected
• Pass/Fail: Pass
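The two rows above translate directly into a data-driven automated test. A minimal sketch with pytest (illustrative; the login helper below is a hypothetical stand-in for driving the real login page, and the credentials are the sample data from the table):

import pytest

def login(user_id, password):
    # Hypothetical stand-in for submitting the login form of the application under test.
    return user_id == "guru99" and password == "pass99"

# Each tuple mirrors one row of the table: test case ID, test data, expected result.
@pytest.mark.parametrize(
    "test_case_id, user_id, password, should_login",
    [
        ("TU01", "guru99", "pass99", True),    # valid data: user should log in
        ("TU02", "guru99", "glass99", False),  # invalid data: login should be rejected
    ],
)
def test_customer_login(test_case_id, user_id, password, should_login):
    assert login(user_id, password) is should_login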
TEST CASE TEMPLATE
• Test Suite ID: the ID of the test suite to which this test case belongs.
• Test Case ID: the ID of the test case.
• Test Case Summary: the summary / objective of the test case.
• Related Requirement: the ID of the requirement this test case relates/traces to.
• Prerequisites: any prerequisites or preconditions that must be fulfilled prior to executing the test.
• Test Procedure: step-by-step procedure to execute the test.
• Test Data: the test data, or links to the test data, that are to be used while conducting the test.
• Expected Result: the expected result of the test.
• Actual Result: the actual result of the test; to be filled in after executing the test.
• Status: Pass or Fail. Other statuses can be 'Not Executed' if testing is not performed and 'Blocked' if testing is blocked.
• Remarks: any comments on the test case or test execution.
• Created By: the name of the author of the test case.
• Date of Creation: the date of creation of the test case.
• Executed By: the name of the person who executed the test.
• Date of Execution: the date of execution of the test.
• Test Environment: the environment (hardware/software/network) in which the test was executed.
TEST CASE EXAMPLE / TEST CASE SAMPLE
• Test Suite ID: TS001
• Test Case ID: TC001
• Test Case Summary: to verify that clicking the Generate Coin button generates coins.
• Related Requirement: RS001
• Prerequisites: 1. User is authorized. 2. Coin balance is available.
• Test Procedure: 1. Select the coin denomination in the Denomination field. 2. Enter the number of coins in the Quantity field. 3. Click Generate Coin.
• Test Data: 1. Denominations: 0.05, 0.10, 0.25, 0.50, 1, 2, 5. 2. Quantities: 0, 1, 5, 10, 20
• Expected Result: 1. A coin of the specified denomination should be produced if the specified quantity is valid (1, 5). 2. The message 'Please enter a valid quantity between 1 and 10' should be displayed if the specified quantity is invalid.
• Actual Result: 1. If the specified quantity is valid, the result is as expected. 2. If the specified quantity is invalid, nothing happens; the expected message is not displayed.
• Status: Fail
• Remarks: this is a sample test case.
• Created By: John Doe
• Date of Creation: 01/14/2020
• Executed By: Jane Roe
• Date of Execution: 02/16/2020
• Test Environment: OS: Windows Y; Browser: Chrome N
Sample Test Case for Net Banking Login Application
Sample test cases:

For Admin:
• Verify admin login with valid and invalid data
• Verify admin login without data
• Verify all admin home links
• Verify admin change password with valid and invalid data
• Verify admin change password without data
• Verify admin change password with existing data
• Verify admin logout

For New Branch:
• Create a new branch with valid and invalid data
• Create a new branch without data
• Create a new branch with existing branch data
• Verify reset and cancel options
• Update branch with valid and invalid data
• Update branch without data
• Update branch with existing branch data
• Verify cancel option
• Verify branch deletion with and without dependencies
• Verify branch search option

For New Role:
• Create a new role with valid and invalid data
• Create a new role without data
• Verify new role with existing data
• Verify role description and role types
• Verify cancel and reset options
• Verify role deletion with and without dependency
• Verify links in the role details page

For Customers & Visitors:
• Verify all visitor or customer links
• Verify customers login with valid and invalid data
• Verify customers login without data
• Verify bankers login without data
• Verify bankers login with valid or invalid data

For New Users:
• Create a new user with valid and invalid data
• Create a new user without data
• Create a new user with existing branch data
• Verify cancel and reset options
• Update user with valid and invalid data
• Update user with existing data
• Verify cancel option
• Verify deletion of the user
Test Cases – Bad Example
• WRITING GOOD TEST CASES
– As far as possible, write test cases in such a way that you test only
one thing at a time. Do not overlap or complicate test cases.
– Ensure that all positive scenarios and negative scenarios are
covered.
• Language:
– Write in simple and easy to understand language.
– Use active voice: Do this, do that.
– Use exact and consistent names (of forms, fields, etc).
• Characteristics of a good test case:
– Accurate: Exacts the purpose.
– Economical: No unnecessary steps or words.
– Traceable: Capable of being traced to requirements.
– Repeatable: Can be used to perform the test over and over.
– Reusable: Can be reused if necessary.
Test Execution
The software testers begin executing the test plan after the
programmers deliver the alpha build, or a build that they feel is
feature complete.
 The alpha should be of high quality—the programmers should feel that it
is ready for release, and as good as they can get it.
There are typically several iterations of test execution.
 The first iteration focuses on new functionality that has been added since
the last round of testing.
 A regression test is a test designed to make sure that a change to one
area of the software has not caused any other part of the software which
had previously passed its tests to stop working.
 Regression testing usually involves executing all test cases which have
previously been executed.
 There are typically at least two regression tests for any software project.
Test Execution
When is testing complete?
– No defects found
– Or defects meet acceptance criteria outlined in test plan
Test Scenario
• Test Scenario is ‘What to be tested’
• Test Case is ‘How to be tested’.
• Test scenarios are the high level classification
of test requirement grouped depending on
the functionality of a module.
In a typical example, two scenarios might be identified for a requirement, with 4 and 6
test cases respectively.
• Take a sample application, say login page with
username, password, login, and cancel buttons. If
asked to write test cases for the same, we will end up
writing more than 50 test cases by combining different
options and details.
• But if test scenarios to be written, it will be a matter
of 10 lines as below:
• High Level Scenario: Login Functionality
Low Level Scenarios:
– 1. To check Application is Launching
2. To check text contents on login page
3. To check Username field
4. To check Password field
5. To check Login Button and cancel button functionality
Test Case Vs Test Scenarios
• What it is: a test case provides detailed information on how to test, the steps to be taken
and the expected result; a test scenario provides one-line information about what to test.
• What it is about: test cases are more about documenting details; test scenarios are more
about thinking through and discussing details.
• Importance: test cases are important when testing is off-shored and development is onsite,
since detailed test cases keep the development and QA teams in sync; test scenarios are
important when time is short and most team members can agree on and understand the
details from a one-line scenario.
• Benefits: one-time documentation of all test cases is beneficial for tracking thousands of
rounds of regression testing in future, and is helpful while reporting bugs, since the tester
only needs to reference the test case ID rather than every minute detail; test scenarios are
a time saver and an idea-generation activity preferred by the new generation of the software
testing community, modification and addition are simple and not tied to one person, and in
a huge project where groups of people know only specific modules they give everyone a
chance to look into other modules, brainstorm and discuss.
• Beneficial to: a full-proof test case document is a lifeline for a new tester; good test
coverage can be achieved by dividing the application into test scenarios, which reduces
repeatability and complexity of the product.
• Disadvantage: test cases are time and money consuming, as detailing everything about
what to test and how to test requires more resources; test scenarios created by one specific
person may not convey the exact idea to a reviewer or another user, and need more
discussion and team effort.
Test Script
• A Test Script is a set of instructions (written
using a scripting/programming language) that
is performed on a system under test to verify
that the system performs as expected. Test
scripts are used in automated testing.
• Sometimes, a set of instructions (written in a
human language), used in manual testing, is
also called a Test Script but a better term for
that would be a Test Case.
Test Script
• This is the slightly tricky part to explain, and it is where a
lot of debate happens. To most of us, test scripts are the
automation scripts written in a programming language
such as VBScript, Java or Python, which can be interpreted
and executed automatically by a testing tool.
• That is largely, but not entirely, correct. A test script is
essentially a test case fabricated with test data.
• A single test case can be combined with multiple sets of
test data to form multiple test scripts of the same test case.
How to create a Test Script Template:
• Example “Sign-up” test
cases included
• Manual test scripts are manual test cases fabricated with
multiple sets of test data, detailed enough to let even a
layman perform the testing as per the documentation.
• Automation test scripts are programmed test cases,
combined with test data, which can be executed by a tool.
Some scripting languages used in automated
testing are:
– JavaScript
– Perl
– Python
– Ruby
– Tcl
– Unix Shell Script
– VBScript
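A minimal sketch of an automated test script written with one of these (Python bindings for Selenium WebDriver, which the tools list earlier also mentions); the URL, element IDs and credentials are illustrative placeholders, not values from the slides:

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_valid_login():
    driver = webdriver.Chrome()                      # assumes a local Chrome/driver setup
    try:
        driver.get("https://example.com/login")      # placeholder URL
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        assert "Dashboard" in driver.title           # expected result from the test case
    finally:
        driver.quit()

if __name__ == "__main__":
    test_valid_login()

The same test case, fabricated with different sets of test data, would yield several such scripts.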
Test Procedure
• Detailed instructions document for the set-up,
execution, and evaluation of results for a given
test case. This document contains a set of
associated instructions.
• This documentation may have steps specifying
a sequence of actions for the execution of a
test
Performance Testing Metrics: Parameters Monitored
• Processor usage – the amount of time the processor spends executing non-idle threads.
• Memory use – the amount of physical memory available to processes on a computer.
• Disk time – amount of time disk is busy executing a read or write request. Bandwidth –
shows the bits per second used by a network interface.
• Private bytes – number of bytes a process has allocated that can’t be shared amongst other
processes. These are used to measure memory leaks and usage.
• Committed memory – amount of virtual memory used.
• Memory pages/second – number of pages written to or read from the disk in order to resolve
hard page faults. Hard page faults are when code not from the current working set is called
up from elsewhere and retrieved from a disk.
• Page faults/second – the overall rate in which fault pages are processed by the processor.
This again occurs when a process requires code from outside its working set.
• CPU interrupts per second – is the avg. number of hardware interrupts a processor is
receiving and processing each second.
• Disk queue length – is the avg. no. of read and write requests queued for the selected disk
during a sample interval.
• Network output queue length – length of the output packet queue in packets. Anything more than two
means a delay and bottlenecking needs to be stopped.
• Network bytes total per second – rate which bytes are sent and received on the interface including
framing characters.
• Response time – time from when a user enters a request until the first character of the response is
received.
• Throughput – rate a computer or network receives requests per second.
• Amount of connection pooling – the number of user requests that are met by pooled connections. The more
requests met by connections in the pool, the better the performance will be.
• Maximum active sessions – the maximum number of sessions that can be active at once.
• Hit ratios – This has to do with the number of SQL statements that are handled by cached data instead of
expensive I/O operations. This is a good place to start for solving bottlenecking issues.
• Hits per second – the number of hits on a web server during each second of a load test.
• Rollback segment – the amount of data that can roll back at any point in time.
• Database locks – locking of tables and databases needs to be monitored and carefully tuned.
• Top waits – monitored to determine which wait times can be cut down when dealing with how fast
data is retrieved from memory.
• Thread counts – an application's health can be measured by the number of threads that are running and
currently active.
• Garbage collection – has to do with returning unused memory back to the system; garbage collection
needs to be monitored for efficiency.
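These parameters are normally collected while a load-generation tool drives traffic at the system. A minimal load-test sketch using the Locust tool (illustrative; the endpoints, host and user counts are placeholders, not from the slides):

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)          # each simulated user pauses 1-3 s between requests

    @task(3)
    def browse_catalog(self):
        self.client.get("/products")   # placeholder endpoint, weighted 3x

    @task(1)
    def view_cart(self):
        self.client.get("/cart")       # placeholder endpoint

# Example run (flags may vary by Locust version):
#   locust -f loadtest.py --host https://staging.example.com --users 100 --spawn-rate 10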
• Response time = time of response - time of request
• Screen transaction rate = number of screen transactions / time (in minutes)
• Throughput = number of successful API requests / total time
• Performance measurement (average response time) = total response time / number of requests
Example
• You are testing the scalability of a web application designed for
e-commerce. For this web app: (a) the time of request is 11:00:00.000
and the time of response is 11:00:02.500; (b) the number of screen
transactions in one minute is 120; (c) the number of successful API
requests is 500 in a total time of 5 minutes; (d) the total response
time for login is 60 seconds for 20 login requests.
• Assume you are testing the scalability of this web application as the
number of concurrent users increases. Find: (i) response time,
(ii) screen transaction rate, (iii) throughput and (iv) performance
measurement.
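Working the formulas through these figures (a quick check of the arithmetic):
• (i) Response time = 11:00:02.500 - 11:00:00.000 = 2.5 seconds
• (ii) Screen transaction rate = 120 transactions / 1 minute = 120 transactions per minute
• (iii) Throughput = 500 successful API requests / (5 x 60) seconds = approximately 1.67 requests per second
• (iv) Performance measurement = 60 seconds / 20 requests = 3 seconds per login request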
Thank You