Software Testing Notes
Manual Testing Tutorial
Types of SDLC
1. Waterfall Model
2. Incremental Model
3. Prototype Model
4. ‘V’ Model
1) Waterfall Model - The waterfall model is a sequential design process, often used in
software development processes, in which progress is seen as flowing steadily downwards
(like a waterfall) through the phases of Conception, Initiation, Analysis, Design,
Construction, Testing and Maintenance.
Disadvantages –
Advantages – If the customer's requirements are constant, with fixed inputs and fixed outputs, the waterfall model is the best choice.
2) Incremental model – It is like the waterfall model, but the work is delivered in small sets (increments). The main advantage of the incremental model is the shorter time period for each delivery.
The disadvantage of this model is that integrating the sets causes a lot of problems.
3) Prototype model – This model is very useful when the project is small. The customer interacts throughout the whole process.
4) 'V' Model – Each development phase on the left arm of the V is verified by a corresponding testing phase on the right arm:
CRS ↔ UAT (User Acceptance Testing)
SRS ↔ ST (System Testing)
HLD ↔ IT (Integration Testing)
LLD ↔ UT (Unit Testing)
Coding sits at the bottom of the V.
Advantages
If we find an issue at the document level, it is easy to correct it at minimal cost.
5) Agile Model – In this model, the customer manages the whole project with the help of the service provider.
DESIGN
Convert the requirement documents into diagrams and modules: how many modules are going to be there? How will the modules interact? What kind of data is going to be used?
DEVELOPMENT
Types of error:
Syntax – an error the system throws while compiling the code is called a syntax error.
Logic – when executing the application, the system does not throw any error, but it may give a wrong or unexpected result.
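A small Python sketch (not from the original notes, added purely for illustration) shows the difference: the logic error below compiles and runs but gives a wrong result, while a syntax error would stop the code from compiling at all.

# Hypothetical example of a logic error: the code runs without any error,
# but the result is wrong because of a faulty formula.
def average(numbers):
    return sum(numbers) / 2  # logic error: should divide by len(numbers)

print(average([10, 20, 30]))  # prints 30.0, but the expected average is 20.0

# A syntax error, by contrast, is caught at compile time, e.g. a missing colon:
# def average(numbers)  ->  SyntaxError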
TESTING
IMPLEMENTATION/MAINTENANCE
Maintenance
After 90 days the software needs maintenance. IT companies get around 80% of their profit from maintenance.
Upgrading -compatibility
TESTING
What is testing?
What to test?
We have to find out whether the product or process meets the customer's expectations.
Level of Testing
UT – Unit Testing
IT – Integration Testing
ST – System Testing
Unit Testing – Unit level testing is done by developers. Developers test their own modules and verify the coding standards at this level. (A type of white box testing)
Effective Coding
Coding complexity
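A minimal sketch of unit-level testing, assuming a hypothetical add function as the module under test; the notes do not name a framework, so Python's built-in unittest is used here only as an example.

import unittest

def add(a, b):
    # Hypothetical module under test: the developer tests their own code.
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()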
Integration Testing – In this phase small modules are combined and then tested. Testers check how the modules interact with each other; it is module dependent. To do integration testing, a tester needs some scripting knowledge. (Grey box testing)
Kinds of approaches (a stub sketch follows the Big Bang description below)
Top down – The programs are merged and tested from top to bottom.
Bottom up – The programs are merged and tested from bottom to top.
Sandwich testing is an approach that combines top down testing with bottom up testing.
Big Bang
In this approach, all or most of the developed modules are coupled together to form a complete
software system or major part of the system and then used for integration testing. The Big Bang
method is very effective for saving time in the integration testing process. However, if the test cases
and their results are not recorded properly, the entire integration process will be more complicated
and may prevent the testing team from achieving the goal of integration testing.
In Big Bang integration testing, individual modules of the program are not integrated until everything is ready. This approach is seen mostly with inexperienced programmers who rely on a 'run it and see' approach. In this approach, the program is integrated without any formal integration testing, and then run to ensure that all the components are working properly.
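As a hedged sketch of the top down approach listed above, the lower-level module is replaced by a stub until its real implementation is ready; the module names here are hypothetical.

# Hypothetical top-down integration sketch: the high-level order module is tested
# while the lower-level payment module is still represented by a stub.
def payment_stub(amount):
    # Stub standing in for the real payment module, which is not yet integrated.
    return {"status": "approved", "amount": amount}

def place_order(amount, pay=payment_stub):
    # High-level module under test; the payment dependency is injected.
    result = pay(amount)
    return "Order confirmed" if result["status"] == "approved" else "Order failed"

# Integration check with the stub in place of the real lower-level module.
assert place_order(100) == "Order confirmed"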
System Testing – In system testing we test the whole application; it is also called end-to-end testing. In this phase we test the functionality. (Black box testing)
Alpha level testing – At this level, developers do the testing for the customer, in front of the customer.
Beta level testing – At this level, the client does the testing at the customer's place.
Types of testing
White box testing – Testing the internal behaviour of the program is called white box testing. It covers the structure, code and design, and is done by developers. It is also called glass box or clear box testing. In this phase developers verify the coding standards and so on.
Black box testing – Testing the functionality of the application is called black box testing. It’s
done by testers.
BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
Limitations of Boundary Value Analysis – Boundary value testing is efficient only for variables that have fixed ranges, i.e. boundaries.
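As a hedged illustration of the 4n + 1 rule above, the sketch below generates the classic five boundary values for a single variable with a fixed range; the age range used is an assumption for the example.

def boundary_values(minimum, maximum):
    # Classic BVA picks min, min+1, a nominal value, max-1 and max.
    nominal = (minimum + maximum) // 2
    return [minimum, minimum + 1, nominal, maximum - 1, maximum]

# Hypothetical example: an "age" field that accepts values from 18 to 60.
print(boundary_values(18, 60))  # [18, 19, 39, 59, 60]

For n variables, taking these values for each variable while holding the others at their nominal values, plus the single all-nominal case, gives the 4n + 1 test cases mentioned above.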
Exhaustive testing means testing all the possible test cases (which in many cases is not possible due to time and budget factors).
Non-exhaustive testing is executing the chosen test cases based on priority. This is normally
followed in many testing projects.
Error Guessing:
This is purely based on the previous experience and judgement of the tester. Error guessing is the art of guessing where errors can be hidden. There are no specific tools for this technique; testers write test cases that cover all the application paths.
Grey box testing – Grey box testing is a combination of black & white box testing. Testing team lead
will do this testing.
Static analysis involves going through the code in order to find out any possible defect in the code.
Dynamic analysis involves executing the code and analyzing the output.
1. Requirements Analysis: Testing should begin in the requirements phase of the software
development life cycle.
This ensures software quality as early as possible: an error found at this stage can be corrected at minimal cost.
Requirements
Look and feel (GUI) – The graphical interface of an object is called look & feel.
Usability – In usability testing, the tester checks whether the application is user friendly or not.
Performance – In performance testing, the tester checks how the application performs: how the application receives requests and responds under load and stress.
Security – Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, any code damage etc., which deals with the code of the application. This type of testing needs sophisticated techniques.
Scalability
Characteristics of requirements
Clear -
Concise – the content should be short and understandable.
Consistent – follows a standard.
Complete -
Kinds of projects
Migration – Moving an existing project or application to another technology is called a migration project.
2. Test Plan:
A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and 'how'
of product validation. It should be thorough enough to be useful but not so overly detailed
that no one outside the test group will read it. Test plan is prepared by test manager or test
team lead.
Document details – It will contain project name, version and last updated.
Version – Details of the document version.
Contributors – Who is involved in preparing this test plan? E.g. test manager (onsite & offshore), author, reviewer.
Contact details of contributors and coordinators.
Table of content
Introduction – It explains the project.
Scope of testing
In scope – what they are going to test.
Out of scope – what they are not going to test.
Assumptions and dependencies
Approach
Schedule – timeline: when and what they are going to deliver.
Milestones – what they are going to achieve.
Deliverable
Execution plan – how we are going to execute the test cases: module by module or phase by phase.
Tool plan
Test data
Test environment –
Risk mitigation – analysing the risks and the actions planned to reduce them.
Contingency – the action to be taken when they are under risk.
Enter & exit criteria- SRS document & AUD
AUD-test case document
Test case document – application- executed test case & defects.
Suspension criteria – when a critical issue is faced, what action is going to be taken?
3. Test Development: Test Procedures, Test Scenarios, Test Cases, and Test Scripts to use in
testing software.
A test scenario is a combination of tests: a series of tests, one followed by another.
A test case describes an input, action, or event and an expected response, to determine if a feature
of a software application is working correctly. A test case may contain particulars such as test case
identifier, test case name, objective, test conditions/setup, input data requirements, steps, and
expected results. The level of detail may vary significantly depending on the organization and
project context.
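A hedged sketch of how the particulars listed above could be recorded as a structure; the field names follow the list in the notes, but the exact template is an assumption.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    conditions_setup: str
    input_data: str
    steps: List[str] = field(default_factory=list)
    expected_result: str = ""

# Hypothetical example test case for a login feature.
tc = TestCase(
    identifier="TC_LOGIN_001",
    name="Valid login",
    objective="Verify that a registered user can log in",
    conditions_setup="User account exists",
    input_data="username=demo, password=demo123",
    steps=["Open the login page", "Enter the credentials", "Click Login"],
    expected_result="User is taken to the home page",
)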
Date of creation-
Last modified
Serial No.
Test data – The data that needs to be provided to execute the test case.
Test environment – The environment in which they want to execute the test case.
Expected results – How the application should behave after executing the test case.
RTM – Requirement Traceability Matrix – mapping: requirement → test scenario → test case → executed results → actual results.
Bi-directional matrix – mapping: requirement → test scenario → test case → executed results → actual results → defect → retest → actual results.
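A minimal sketch of the traceability idea, with hypothetical requirement and test case IDs; it records the chain requirement → test case → execution result so coverage can be traced in both directions.

# Hypothetical requirement traceability matrix: requirement -> test cases -> results.
rtm = {
    "REQ-001": {"test_cases": ["TC-001", "TC-002"],
                "results": {"TC-001": "Pass", "TC-002": "Fail"}},
    "REQ-002": {"test_cases": ["TC-003"],
                "results": {"TC-003": "Pass"}},
}

# Forward trace: which test cases cover REQ-001?
print(rtm["REQ-001"]["test_cases"])

# Backward trace: which requirement does TC-003 belong to?
print([req for req, row in rtm.items() if "TC-003" in row["test_cases"]])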
4.Test Setup & Execution: Testers execute the software based on the plans and tests and report
any errors found to the development team.
To execute the test cases they need a particular setup, such as hardware or software: the environment where they are going to execute the test cases.
5. Test Analysis & Reporting: Once testing is completed, testers generate metrics and make final
reports on their test effort and whether or not the software tested is ready for release.
During execution, if any test case fails, the issue should be reported to the customer.
This is the process between the defect being logged or raised and it being fixed and ready for retesting. The statuses are:
New, open, in progress, fixed, assigned, reopened, rejected, deferred (not fixed immediately), closed, can't reproduce, duplicate.
New:
When a bug is found/revealed for the first time, the software tester communicates it to his/her
team leader (Test Leader) in order to confirm if that is a valid bug. After getting confirmation from
the Test Lead, the software tester logs the bug and the status of ‘New’ is assigned to the bug.
Assigned:
After the bug is reported as ‘New’, it comes to the Development Team. The development team
verifies if the bug is valid. If the bug is valid, development leader assigns it to a developer to fix it
and a status of ‘Assigned’ is assigned to it.
Open:
Once the developer starts working on the bug, he/she changes the status of the bug to ‘Open’ to
indicate that he/she is working on it to find a solution.
Fixed:
Once the developer makes necessary changes in the code and verifies the code, he/she marks the
bug as ‘Fixed’ and passes it over to the Development Lead in order to pass it to the Testing team.
Pending Retest:
After the bug is fixed, it is passed back to the testing team to get retested and the status of ‘Pending
Retest’ is assigned to it.
Retest:
The testing team leader changes the status of the bug, which is previously marked with ‘Pending
Retest’ to ‘Retest’ and assigns it to a tester for retesting.
Closed:
After the bug is assigned a status of ‘Retest’, it is again tested. If the problem is solved, the tester
closes it and marks it with ‘Closed’ status.
Reopen:
If, after retesting the software for the fixed bug, the system behaves in the same way or the same bug arises once again, then the tester reopens the bug and sends it back to the developer, marking its status as 'Reopen'.
Pending Reject:
If the developers think that a particular behaviour of the system, which the tester reported as a bug, is intended to be that way and the bug is invalid, then the bug is rejected and marked as 'Pending Reject'.
Rejected:
If the Testing Leader finds that the system is working according to the specifications or the bug is
invalid as per the explanation from the development, he/she rejects the bug and marks its status as
‘Rejected’.
Postponed:
Sometimes, testing of a particular bug has to be postponed for an indefinite period. This situation
may occur because of many reasons, such as unavailability of Test data, unavailability of particular
functionality etc. That time, the bug is marked with ‘Postponed’ status.
Deferred:
In some cases a particular bug has little importance and can be avoided for the time being; it is then marked with 'Deferred' status.
Duplicate:
If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to 'Duplicate'.
DEFECT TEMPLATE
Serial No.
Defect Description -
Environment
Comments/Remarks
Defect Metrics
Software testing defect metrics are used to quantify a software product in relation to its development resources and/or development process. They are usually responsible for quantifying factors like schedule, work effort, product size, project status and quality performance. Such metrics are used to estimate how much more future work is needed to improve the software quality before delivering it to the end user.
This analysis of defect metrics is used to identify the major causes of defects and the phases in which most of the defects are introduced.
Total defect prevention – whoever writes, reviews and verifies the process documents (CRS & SRS) can prevent defects.
Subjective -
Defect Density:
The defect density is measured by adding the number of defects reported by Software Quality Assurance to the number of defects reported by the peer review, and dividing the total by the actual size (which can be in KLOC, SLOC or function points, used to measure the size of the software product).
Test effectiveness:
There are several approaches to analysing test effectiveness. One of them is t / (t + UAT), where "t" is the total number of defects reported during testing and "UAT" is the total number of defects reported during user acceptance testing.
Effort Variance can be calculated as {(Actual Efforts-Estimated Efforts) / Estimated Efforts} *100.
Schedule Variance:
Just like the formula above, it is calculated as:
{(Actual Duration - Estimated Duration)/Estimated Duration} *100.
Schedule Slippage:
The number of days behind schedule is called schedule slippage. When a task has been delayed from its original baseline schedule, the amount of time by which it has slipped is the schedule slippage. It is calculated as:
(Actual End date - Estimated End date) / (Planned End Date – Planned Start Date) * 100
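A short sketch applying the three formulas above; the effort figures and dates are assumptions used purely for illustration.

from datetime import date

def effort_variance(actual, estimated):
    return (actual - estimated) / estimated * 100

def schedule_variance(actual_days, estimated_days):
    return (actual_days - estimated_days) / estimated_days * 100

def schedule_slippage(actual_end, estimated_end, planned_start, planned_end):
    return (actual_end - estimated_end).days / (planned_end - planned_start).days * 100

# Hypothetical project figures.
print(effort_variance(actual=120, estimated=100))             # 20.0 -> 20% over estimate
print(schedule_variance(actual_days=55, estimated_days=50))   # 10.0
print(schedule_slippage(date(2023, 3, 10), date(2023, 3, 1),
                        date(2023, 1, 1), date(2023, 3, 1)))  # about 15.25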
(Actual review effort spent in that particular phase / Total actual efforts spent in that phase) * 100
Requirements Stability Index is defined as the stability of the project's requirements: how much are the requirements changing during the project?
{1 - (Total number of changes /number of initial requirements)}
Requirements Creep:
Defect weightage is calculated depending upon the severity and the priority:
(5 * Count of fatal defects) + (3 * Count of major defects) + (1 * Count of minor defects)
where the values 5, 3, 1 correspond to the severities of the defects.
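A hedged worked example with assumed defect counts, applying the 5/3/1 weights above.

def defect_weightage(fatal, major, minor):
    # Weights 5, 3 and 1 correspond to fatal, major and minor severities.
    return 5 * fatal + 3 * major + 1 * minor

# Hypothetical counts: 2 fatal, 4 major and 10 minor defects.
print(defect_weightage(2, 4, 10))  # 5*2 + 3*4 + 1*10 = 32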
Review Efficiency:
How effectively are we reviewing defects based on the changed requirements? It should be low.
Average of the total no. of defects rejected and the total no. of test cases.
Residual defects – an important factor in deciding whether the product is ready to be released or not.
Manual test case execution productivity - No. of test cases executed per hour
Test case passing rate – no. of test cases passed / no. of test cases executed
Defect trend analysis – An analysis that tries to predict the future movement of defect counts based on past data. Trend analysis is based on the idea that what has happened in the past gives an indication of what will happen in the future.
And at the end we'd like to tell you that the purpose of metrics isn't just to measure the software performance but also to understand the progress of the application toward the organizational goal.
Below we will discuss some parameters of determining metrics of various software packages:
• Duration
• Complexity
• Technology Constraints
• Previous Experience in Same Technology
• Business Domain
• Clarity of the scope of the project
Defect Density
Defect Density can be defined as the ratio of Defects to Size or the ratio of Defects to effort. The
Unit of Measure for Size could vary as defined by the project. The total defects would
include both Review and Test defects.
Overall Defect Density = Total number of Defects/ Total Size of the Project
Stage wise Defect Density = Total number of Defects attributed to the particular stage/
Actual effort of the Particular Stage
Note: If UOM is Lines of Code, Defect Density will be Defects per KLOC
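A minimal sketch of the two defect density formulas above; the defect counts, size and effort figures below are assumptions for illustration, with size expressed in KLOC.

def overall_defect_density(total_defects, size_kloc):
    # Defects per KLOC when the unit of measure for size is lines of code.
    return total_defects / size_kloc

def stage_defect_density(stage_defects, stage_effort_hours):
    # Defects attributed to a stage divided by the actual effort of that stage.
    return stage_defects / stage_effort_hours

# Hypothetical figures: 45 total defects (review + test) in a 15 KLOC project.
print(overall_defect_density(45, 15))   # 3.0 defects per KLOC
print(stage_defect_density(12, 80))     # 0.15 defects per effort hour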
Review Efficiency%
This metric shows the efficiency of the review process. A higher ratio of Review
Defects to Total Defects indicates a very efficient Review Process. However, a lower
ratio need not necessarily mean that the review is inadequate. This may be due to
frequent changes in scope, requirements etc.
Typically, the review defects include all reviews, starting from the Requirements Stage.
All review comments that arise from the review of Code/unit / integration / system
test plans / procedures / cases shall be counted under Review defects. The total
number of Defects includes all the defects found in reviews and testing.
In the case of Application Development projects, this metric is applicable to all stages.
In the case of Application Maintenance projects, it is applicable to Analysis stage of
super major enhancement request type only.
Stage level
Stage wise Review Efficiency % = (Number of defects detected during current stage review) / (Number of defects detected during current stage review + Number of defects detected in subsequent stages attributed to current stage) * 100
Defect Removal Efficiency (DRE)
This metric shows the efficiency of removing defects by the internal review and testing process before shipment of the product to the customer. The metric involves pre-ship defects and post-ship defects.
Defect Removal Efficiency (DRE) = (Total number of Pre-shipment Defects)/ (Total number
of Pre-shipment Defects + Total number of post-shipment Defects) *100
Here pre-shipment defects include all review and test defects across all stages found at offshore, and post-shipment defects include acceptance testing and post-implementation defects.
This is a very good metric for all operational models, as it is indicative of the quality of the product delivered. For projects following the 'Testing and QA' operational model, where the test plan/test cases are available as part of the scope of the project, it indicates the quality of the testing carried out.
During the course of the project, Defect Removal efficiency (DRE) can be obtained at end of
every delivery to the customer.
DRE for the entire project can be calculated at the end of post implementation. If post
implementation support is not in scope of the project then DRE is computed at
the end of Acceptance Testing.
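A hedged worked example of the DRE formula above, with assumed pre-shipment and post-shipment defect counts.

def defect_removal_efficiency(pre_ship_defects, post_ship_defects):
    # Percentage of all defects caught before shipment.
    return pre_ship_defects / (pre_ship_defects + post_ship_defects) * 100

# Hypothetical counts: 95 defects found in reviews/testing, 5 found after delivery.
print(defect_removal_efficiency(95, 5))  # 95.0 -> 95% of defects removed pre-shipment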
Defect Leakage %
This metric gives a very good indication of the review / testing process within a stage.
Any defect leaked to the next stage indicates that the test / review mechanism for that
stage / work product is not adequate. A high leakage rate indicates that the review / testing process carried out needs to be looked into.
Stage level
For example
Total Defects attributed to Design but captured in subsequent phases (i.e. Coding & Testing) = 6
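The notes give the leaked-defect count but do not spell out the formula; a commonly used form (stated here as an assumption) divides the defects that escaped a stage by all defects attributed to that stage. The count of defects caught within Design itself below is also an assumed figure.

def defect_leakage(leaked_defects, found_in_stage_defects):
    # Assumed formula: defects that escaped the stage as a share of all defects
    # attributed to that stage, expressed as a percentage.
    return leaked_defects / (found_in_stage_defects + leaked_defects) * 100

# From the example above, 6 Design defects were caught in later phases.
# The 24 defects caught during the Design stage itself is an assumed figure.
print(defect_leakage(6, 24))  # 20.0 -> 20% of Design defects leaked downstream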
Stage-wise defects:
This metric gives an indication as to on an average how long it takes for the defects to get detected,
in terms of number of stages.
For example, consider the following defect distribution in an Application Development project:
Average Defect Age = Σ [(stage detected − stage of origin) × number of defects] / total number of defects
= {(1-1)*15 + (2-1)*5 + (3-1)*5 + (6-1)*1 + (2-2)*15 + (3-2)*5 + (4-2)*2 + (6-2)*1 + (3-3)*20 + (4-3)*4 + (5-3)*3 + (6-3)*2 + (4-4)*14 + (5-4)*2 + (5-5)*5 + (6-5)*1 + (6-6)*5} / 105 = 0.50
If the average defect age value is less than 1, it indicates that the project, on an average, is able to find the defects within the same stage itself. If it is greater than or equal to 1, then the project finds more defects only in the subsequent stages and not in the same stage.
Here the stage of origin of defect could also be any testing stage like IT, ST and AT,
where defects can occur due to environment setup related problems or inadequate
test cases, etc.
Cause-wise defects
This metric gives an indication of the causes that occur more frequently in a stage or
across stages, for which corrective action may be required to prevent such defects
from occurring in future.
(Note: This list is not exhaustive in nature. The full list of causes for a defect shall be
found in eTracker / eMetrics).
Coding: bad fix / code, code conversion error, insufficient adherence to standards, incomplete
conversion guidelines / cookbook
The top causes of defects (those due to which the most defects are caused) shall be identified, and the projects shall record the reasons and the corrective actions planned to prevent these defects from occurring in future.
Defect Priority
Blocker: This bug prevents developers from testing or developing the software. A defect or issue that stops further functionality and has no workaround is called a blocker.
Critical: Major functionality is not working, there is a security breach, or data is lost. The software crashes, hangs, or causes you to lose data, and no workaround is available.
Major: A major feature is broken. Functionality is not working as expected.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy workaround. GUI issues come under minor.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Enhancement: Request for new feature or enhancement.
Defect severity
S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the user to close the window. The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.
S2 - Medium/Workaround. For example, a problem exists in something required by the specs, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process.
S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are
unlikely to occur in normal use, or minor errors in layout/formatting. Problems do not impact use
of the product in any substantive way. These are incidents that are cosmetic in nature and of no or
very low impact to business processes.
High severity, low priority - Critical impact on user: nuclear missiles are launched by accident.
Factor influencing priority: analysis reveals that this defect can only be encountered on the second
Tuesday of the first month of the twentieth year of each millennium, and only then if it's raining and
five other fail safes have failed.
Business decision: the likelihood of the user encountering this defect is so low that we don't feel it's
necessary to fix it. We can mitigate the situation directly with the user.
High severity, low priority - Critical impact on user: when this error is encountered, the
application must be killed and restarted, which can take the application off-line for several minutes.
Factors influencing priority: (1) analysis reveals that it will take our dev team six months of full-time refactoring work to fix this defect. We'd have to put all other work on hold for that time. (2) Since this
is a mission-critical enterprise application, we tell customers to deploy it in a redundant
environment that can handle a server going down, planned or unplanned.
Business decision: it's a better business investment to make customers aware of the issue, how
often they're likely to encounter it, and how to work through an incidence of it than to devote the
time to fixing it.
Low severity, high priority - Minimal user impact: typo. Factors influencing priority. (1) The typo
appears prominently on our login screen; it's not a terribly big deal for existing customers, but it's
the first thing our sales engineers demo to prospective customers, and (2) the effort to fix the typo
is minimal.
Decision: fix it for next release and release it as an unofficial hot fix for our field personnel.
Experience
Someone who knows the subject well knows the shortcuts.