Software Testing Notes

The document provides an overview of manual testing and the software development life cycle. It discusses different SDLC models like waterfall, incremental, prototype, and V-model. It then covers planning, design, development, testing, implementation, and maintenance phases of manual testing. The document also describes different types of testing like unit testing, integration testing, system testing, and user acceptance testing. Finally, it discusses testing techniques like boundary value analysis and equivalence class partitioning.

Manual Testing Tutorial


Software Development Life Cycle


SDLC helps to reduce issues at minimal cost and to plan the project in a cost-effective manner.

Types of SDLC

1. Waterfall Model
2. Incremental Model
3. Prototype Model
4. ‘V’ Model
5. Agile Model

1) Waterfall Model - The waterfall model is a sequential design process, often used in
software development processes, in which progress is seen as flowing steadily downwards
(like a waterfall) through the phases of Conception, Initiation, Analysis, Design,
Construction, Testing and Maintenance.

Disadvantages –

 It takes a long time to finish the project.

 It is difficult to track back to earlier phases, and it costs more.

Advantages

If a customer's requirements are constant, like constant input and constant output, the waterfall
model is a good one.

A change in requirements means adding, modifying or deleting requirements.

2) Incremental model – It is like the waterfall model, but the work is done in small sets (increments). The
main advantage of the incremental model is the shorter time period for each delivery.

The disadvantage of this model is that integrating the sets causes a lot of problems.

3) Prototype model – This model is very useful when the project is small. The customer
interacts throughout the whole process.

4) V Model- Verification and validation.

Verification: building the product right.


Validation: building the right product.

Verification is document oriented. Validation is process oriented.

In the V model, each development phase on the left arm maps to a test level on the right arm:

CRS <-> UAT

SRS <-> ST

HLD <-> IT

LLD <-> UT

Coding (at the base of the V)

In the V model, development and testing occur in parallel.

Advantages

 We save a lot of time.

 If we find an issue at the document level, it is easy to correct it at minimal cost.

5) Agile Model – In this model, the customer manages the whole project with the help of the
service provider.

 Customer satisfaction by rapid delivery of useful software


 Welcome changing requirements, even late in development.
 Working software is delivered frequently (weeks rather than months)


 Close, daily cooperation between businesspeople and developers


 Face-to-face conversation is the best form of communication (co-location)
 Projects are built around motivated individuals, who should be trusted
 Continuous attention to technical excellence and good design
 Simplicity
 Self-organizing teams
 Regular adaptation to changing circumstances

PLANNING OR REQUIREMENT ANALYSIS

The BA (Business Analyst) has domain knowledge about the project.

Value adds - doing more than what was committed.

DESIGN

High Level Design

Convert the document into diagrams and modules: how many modules are going to be there?

How do the modules interact? What kind of data is going to be used?

Low Level Design

All the details required to implement the high level design.

DEVELOPMENT

Types of error.

Syntax - an error thrown by the system while compiling the code is called a syntax error.

Logic - when executing the application, the system will not throw any error, but it may give a wrong or
unexpected result.

TESTING

Failure-we cannot fix it.


Defect-we can fix it.

IMPLEMENTATION/MAINTENANCE

Implementation means moving the code to the target environment, or product implementation.

Maintenance

After 90 days the software needs maintenance. IT companies get 80% of their profit from
maintenance.

Enhancement – Increasing the functionality

Upgrading -compatibility

TESTING
What is testing?

Testing is the process of finding defects in a product or process.

Why do we have to test?

To find or determine the defects.

What to test?

We have to find out whether the product or process meets the customer's expectations.

Who will test?

Tester will perform the testing.

The input of the test phase is the code, and the output is defects.

Level of Testing

UT – Unit Testing

IT – Integration Testing

ST – System Testing

UAT – User Acceptance Testing


Levels of testing help to reduce defects and lower the cost.

Unit Testing – Unit level testing is done by developers. Developers test their own modules.

The developer verifies the coding standards at this level. (A type of white box testing)

Effective Coding

Coding complexity

Integration Testing – in this phase small modules are combined and then tested. The testing checks how
the modules interact with each other; it is module dependent.

To do integration testing, the tester needs some scripting knowledge. (Grey box testing)

Kind of approaches

Top down - The modules are merged and tested from top to bottom.

Bottom up – The modules are merged and tested from bottom to top.

Sandwich Testing is an approach to combine top down testing with bottom up testing.

Big Bang

In this approach, all or most of the developed modules are coupled together to form a complete
software system or major part of the system and then used for integration testing. The Big Bang
method is very effective for saving time in the integration testing process. However, if the test cases
and their results are not recorded properly, the entire integration process will be more complicated
and may prevent the testing team from achieving the goal of integration testing.

In big bang integration testing, individual modules of the program are not integrated until
everything is ready. This approach is seen mostly with inexperienced programmers who rely on a 'run
it and see' approach. In this approach, the program is integrated without any formal integration
testing, and then run to ensure that all the components are working properly.

System Testing – In system testing, we test the whole application; it is also called end-to-end testing.
In this phase we test the functionality. (Black box testing)

User Acceptance Testing

Alpha level testing – In this level, developers do the testing for the customers, in front of the customer, at the development site.


Beta level testing – In this level, the client does the testing at the customer's place.

Types of testing

White box testing – Testing the internal behavior of the program is called white box testing. It covers the
structure, code and design. Developers do white box testing. It is also called glass box or
clear box testing. In this phase developers test the coding standards and so on.

Black box testing – Testing the functionality of the application is called black box testing. It’s
done by testers.

Testing methods in black box testing

BVA - Boundary Value Analysis:


Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is
important. Boundary Value Analysis (BVA) is a functional testing technique where the
extreme boundary values are chosen. Boundary values include maximum, minimum, just
inside/outside boundaries, typical values, and error values.

Extends equivalence partitioning


Test both sides of each boundary
Look at output boundaries for test cases too
Test min, min-1, max, max+1, typical values

BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables

Advantages in Boundary Value Analysis:


1. Robustness Testing – Boundary Value Analysis plus values that go beyond the limits
2. Min – 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling
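
As an illustration, here is a minimal Python sketch (not part of the original notes; the 18–60 age range is a hypothetical example) that generates the seven robustness values listed in point 2 above for a single numeric input:

# Minimal sketch: generate robustness boundary values for one numeric input.
# The nominal value is assumed here to be the midpoint of the range.
def boundary_values(minimum, maximum):
    nominal = (minimum + maximum) // 2
    return [minimum - 1, minimum, minimum + 1,
            nominal,
            maximum - 1, maximum, maximum + 1]

# Hypothetical example: an age field that accepts 18 to 60.
print(boundary_values(18, 60))   # [17, 18, 19, 39, 59, 60, 61]

Dropping min - 1 and max + 1 from this list leaves the five values of plain BVA for a single variable, consistent with the 4n + 1 test case count given above.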

Limitations of Boundary Value Analysis - Boundary value testing is efficient only for variables that
have fixed boundary values.


Exhaustive testing is testing all possible test cases (which in many cases is not possible due to
time and budget factors).
Non-exhaustive testing is executing chosen test cases based on priority. This is what is normally
followed in most testing projects.

Equivalence Class Partitioning:


Equivalence partitioning is a black box testing method that divides the input domain of a program
into classes of data from which test cases can be derived.

How is this partitioning performed while testing:


1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is
defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
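
A minimal Python sketch (illustrative only; the 1–100 range and field name are hypothetical, not from the notes) showing how rule 1 above yields one valid and two invalid classes for a range input, with one representative value picked from each class:

# Minimal sketch: derive equivalence classes for an input that must lie in a range.
def equivalence_classes(minimum, maximum):
    return {
        "valid (min..max)":    (minimum + maximum) // 2,  # representative valid value
        "invalid (below min)": minimum - 1,               # representative invalid value
        "invalid (above max)": maximum + 1,               # representative invalid value
    }

# Hypothetical example: a "quantity" field that accepts 1 to 100.
print(equivalence_classes(1, 100))
# {'valid (min..max)': 50, 'invalid (below min)': 0, 'invalid (above max)': 101}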

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of
guessing where errors may be hidden. There are no specific tools for this technique; the tester writes test
cases that cover the likely error-prone paths of the application.

Grey box testing – Grey box testing is a combination of black & white box testing. Testing team lead
will do this testing.

Static analysis involves going through the code in order to find out any possible defect in the code.
Dynamic analysis involves executing the code and analyzing the output.

Stages involved in the Software Testing Life Cycle.

1. Requirements Analysis: Testing should begin in the requirements phase of the software
development life cycle.

This ensures software quality as early as possible. Errors found in this stage can be corrected at
minimal cost.


Requirements

Functional – functional requirements of an application


Data – to generate a report we need data.
The back end is the data stored in tables (the database).
The front end is the application.

Look and feel (GUI) – the graphical user interface of the application is called look and feel.

Usability – In usability testing, the tester checks whether the application is user friendly or not.

Performance – In performance testing, the tester checks how the application performs:
how the application responds to requests under load and stress.

Maintainability - how we maintain the application at low cost.

Security - Security testing is carried out in order to find out how well the system can
protect itself from unauthorized access, hacking, cracking, any code damage etc. that
deals with the code of the application. This type of testing needs sophisticated testing
techniques.

Scalability

Characteristics of good requirements

Clear - unambiguous.
Concise - content should be short and understandable.
Consistent - follows a standard.
Complete - nothing is missing.

Kinds of projects
Migration - moving an existing project or application to another technology is called a
migration project.

Enhancement – adding or increasing functionality in an existing project is called an
enhancement project.

Scratch – new projects built from scratch.


2. Test Plan:
A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and 'how'
of product validation. It should be thorough enough to be useful but not so overly detailed
that no one outside the test group will read it. Test plan is prepared by test manager or test
team lead.

Document details – It will contain project name, version and last updated.
Version - It’s detailed about document version.
Contributors – everyone involved in preparing this test plan, e.g. test manager (onsite and offshore),
author, reviewer.
Contacts detail of contributor and coordinators.
Table of content
Introduction – it explains the project.
Scope to test
In scope – what they are going to test.
Out of scope – what they are not going to test.
Assumptions and dependencies
Approach
Schedule – timeline: when and what they are going to deliver.
Milestone – what they are going to achieve.
Deliverable
Execution plan - how the test cases are executed: module by module or phase by phase.
Tool plan
Test data
Test environment –
Risk mitigation - analysis of the risks and the actions planned to reduce them.
Contingency – the action to be taken when a risk materializes.
Entry & exit criteria - SRS document & AUD (entry); executed test cases & defects (exit).
AUD - Application Understanding Document, used to prepare the test case document.
Test case document – for the application: executed test cases & defects.
Suspension criteria – when a critical issue is faced, what action is going to be taken.


Sanity level execution – checking whether the product or application is worthy (stable enough) to test.

Escalation chart – project queries and points of contact.
Defect management – how the defects are going to be managed and what tools are going
to be used; it should have all the information about defects.
Signing off.

3. Test Development: Test Procedures, Test Scenarios, Test Cases, and Test Scripts to use in
testing software.

What is meant by a test scenario?

A test scenario is a combination of tests - a series of tests, one following another.

Negative test scenarios help make the application robust.

What is a test case?

A test case describes an input, action, or event and an expected response, to determine if a feature
of a software application is working correctly.

What are the qualities of good test case?

A test case describes an input, action, or event and an expected response, to determine if a feature
of a software application is working correctly. A test case may contain particulars such as test case
identifier, test case name, objective, test conditions/setup, input data requirements, steps, and
expected results. The level of detail may vary significantly depending on the organization and
project context.

It should have a high probability of finding a failure.

It should discover undiscovered errors.

It should be powerful.

It should be easy to evaluate.

It should be informative.

And it should be useful for troubleshooting.

It should be credible.


Fields contained in a test case

Author - The person who is writing the test case.

Date of creation-

Last modified

Serial No.

Test case Id - Unique Id for the particular test case.

Test Description – It defines which functionality going to be tested.

Prerequisite - the setup that must be available before executing the test.

Test data – The data needs to provide to execute the test case.

Test environment – In which environment they want to execute the test case

Test steps – step-by-step execution of the application

Expected results – how the application should behave after executing the test case

Actual results – Actual behavior of application is actual results

Priority – which test case should be tested first?

Status – pass/fail, cancel, in progress, not started

Comments- updating (in progress, fail)

RTM – Requirement Traceability Matrix – mapping: requirements -> test scenarios -> test steps ->
test cases -> executed results -> actual results.

Bi-directional matrix - mapping: requirements -> test scenarios -> test steps -> test cases ->
executed results -> actual results -> defect -> retest -> actual results.

Uni directional matrix

4.Test Setup & Execution: Testers execute the software based on the plans and tests and report
any errors found to the development team.

Executing the test cases requires a particular setup of hardware and software - the environment in which
the test cases are going to be executed.

Before starting test execution, the tester does sanity level testing.


5. Test Analysis & Reporting: Once testing is completed, testers generate metrics and make final
reports on their test effort and whether or not the software tested is ready for release.

During execution, if any test case fails, the issue should be reported to the customer.

6. Retesting the Defects

After retest, tester will do regression testing.

Unit - module by module

Regional – a group of modules

Entire – the whole application

Bugs Life Cycle

The process between a defect being logged or raised and it being fixed and ready for retesting.

Phases in Bug life cycle

New, open, in progress, fixed, assigned, reopened, rejected, deferred (not fixed immediately), closed, cannot
reproduce, duplicate.

New:
When a bug is found/revealed for the first time, the software tester communicates it to his/her
team leader (Test Leader) in order to confirm if that is a valid bug. After getting confirmation from
the Test Lead, the software tester logs the bug and the status of ‘New’ is assigned to the bug.

Assigned:
After the bug is reported as ‘New’, it comes to the Development Team. The development team
verifies if the bug is valid. If the bug is valid, development leader assigns it to a developer to fix it
and a status of ‘Assigned’ is assigned to it.

Open:
Once the developer starts working on the bug, he/she changes the status of the bug to ‘Open’ to
indicate that he/she is working on it to find a solution.

Fixed:
Once the developer makes necessary changes in the code and verifies the code, he/she marks the
bug as ‘Fixed’ and passes it over to the Development Lead in order to pass it to the Testing team.


Pending Retest:
After the bug is fixed, it is passed back to the testing team to get retested and the status of ‘Pending
Retest’ is assigned to it.

Retest:
The testing team leader changes the status of the bug, which is previously marked with ‘Pending
Retest’ to ‘Retest’ and assigns it to a tester for retesting.

Closed:
After the bug is assigned a status of ‘Retest’, it is again tested. If the problem is solved, the tester
closes it and marks it with ‘Closed’ status.

Reopen:
If, after retesting the software for the bug, the system behaves in the same way or the same
bug arises once again, then the tester reopens the bug and again sends it back to the developer,
marking its status as ‘Reopen’.

Pending Reject:
If the developers think that a particular behavior of the system, which the tester reported as a bug,
is in fact the intended behavior and the bug is invalid, the bug is rejected and marked as ‘Pending
Reject’.

Rejected:
If the Testing Leader finds that the system is working according to the specifications or the bug is
invalid as per the explanation from the development, he/she rejects the bug and marks its status as
‘Rejected’.

Postponed:
Sometimes, testing of a particular bug has to be postponed for an indefinite period. This situation
may occur because of many reasons, such as unavailability of Test data, unavailability of particular
functionality etc. That time, the bug is marked with ‘Postponed’ status.

Deferred:
In some cases a particular bug stands no importance and is needed to be/can be avoided, that time
it is marked with ‘Deferred’ status.

Duplicate:
If the bug is reported twice, or two bugs describe the same issue, then one bug's
status is changed to “Duplicate”.


DEFECT TEMPLATE

Serial No.

Defect Id – unique no. for particular defect

Defect Description -

Steps to reproduce - the steps needed to reproduce the defect.

Environment

Build version – the version in which the defect was found.

Author – who raised the issue.

Assignee - developer or application owner

Date of raise – when the defect was raised.

Defect priority – low, medium, high

Defect severity – critical, major, minor (from the customer's point of view)

Comments/Remarks

Defect Metrics
Software testing defect metrics are used to quantify a software product with respect to its development
resources and/or development process. They quantify factors like schedule, work effort, product size,
project status and quality performance. Such metrics are used to estimate how much more work is
needed to improve the software quality before delivering it to the end user.
Defect metrics are also used to analyze the major causes of defects and the phase in
which most of the defects are introduced.

Defect leakage - the defects identified during UAT (defects that escaped earlier test phases).

Defect density – the number of defects raised in a specific module.

Defect rejection – the ratio of the number of fixed and rejected defects (as a percentage).


A metric is a quantitative measure of degree.

Total no. of tests

Total no. of tests executed to date

Total no. of tests successfully executed to date

Defect metrics – calculating and analyzing data about defects.

Total no. of defects prevented – those who write, review and verify the CRS & SRS can
prevent defects.

Total no. of defects detected – the tester detects the defects.

Objective – measures we can calculate.

Subjective -

Defect Density:
The defect density is measured by adding the number of defects reported by Software
Quality Assurance to the number of defects reported by peers, and dividing by the
actual size (which can be in either KLOC, SLOC or function points to measure the size of the
software product).

Defect Age: (defect turnaround time)

Time or date of defect fixed – Time or date of defect raised
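
For example (hypothetical dates): a defect raised on 01-Mar and fixed on 05-Mar has a defect age of 4 days.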

Test effectiveness:
There are several approaches to analyzing test effectiveness; one of them is t/(t + UAT), where
"t" is the total number of defects reported during testing and "UAT" is the
total number of defects reported during user acceptance testing.
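
For example (illustrative numbers): if 90 defects were reported during testing and 10 during user acceptance testing, test effectiveness = 90 / (90 + 10) = 0.9, i.e. 90%.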

Defect Removal Efficiency:

The percentage of the total number of defects occurring in a phase or activity that are removed by the end
of that activity.

Effort Variance: (time spent on the project)

Effort Variance can be calculated as {(Actual Efforts-Estimated Efforts) / Estimated Efforts} *100.
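
For example (illustrative numbers): if the estimated effort was 100 person-hours and the actual effort was 120 person-hours, Effort Variance = {(120 - 100) / 100} * 100 = 20%.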


Schedule Variance:
Just like the formula above, it is calculated as:
{(Actual Duration - Estimated Duration) / Estimated Duration} * 100.
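
For example (illustrative numbers): if the estimated duration was 50 days and the actual duration was 55 days, Schedule Variance = {(55 - 50) / 50} * 100 = 10%.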

Schedule Slippage:
The number of days behind schedule is called schedule slippage. When a task has been delayed from its
original baseline schedule, the amount of time by which it has slipped is the schedule slippage.
Its calculation is as simple as:
(Actual End Date - Estimated End Date) / (Planned End Date – Planned Start Date) * 100
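
For example (illustrative numbers): if the planned duration (Planned End Date – Planned Start Date) was 50 days and the actual end date was 10 days later than the estimated end date, Schedule Slippage = (10 / 50) * 100 = 20%.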

Rework Effort Ratio:

The proportion of effort spent on review or rework in the project or phase:

(Actual review effort spent in that particular phase / Total actual effort spent in that phase) * 100

Requirements Stability Index:

The Requirements Stability Index is defined as the stability of the project requirements - how much the
requirements are changing during the project:
{1 - (Total number of changes / Number of initial requirements)}
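
For example (illustrative numbers): with 100 initial requirements and 20 changes, Requirements Stability Index = 1 - (20 / 100) = 0.8.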

Requirements Creep:

The percentage of requirements added during the project or phase:

(Total Number of requirements added / Number of initial requirements) * 100
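
For example (illustrative numbers): if 15 requirements were added to 100 initial requirements, Requirements Creep = (15 / 100) * 100 = 15%.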

Defect Weightage:

Defect weightage is calculated depending upon the severity and the priority:
(5 * Count of fatal defects) + (3 * Count of Major defects) + (1 * Count of minor defects),
where the values 5, 3 and 1 correspond to the severities of the defects.
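
For example (illustrative numbers): with 2 fatal, 4 major and 10 minor defects, Defect Weightage = (5 * 2) + (3 * 4) + (1 * 10) = 32.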

Review Efficiency:

How effectively we are finding defects during reviews; a higher value indicates a more efficient
review process.

Overall Review Efficiency % = (Number of Review defects) / (Total number of Review +
Testing Defects [including customer reported test defects]) * 100

Defect rejection ratio:

The ratio of the total number of defects rejected to the total number of test cases.


Residual defects – an important factor in deciding whether the product is ready to be released or
not.

Manual test case execution productivity - No. of test cases executed per hour

Test case passing rate – number of test cases passed / number of test cases executed

Defect trend analysis - trend analysis tries to predict the future defect trend based on past data.
It is based on the idea that what has happened in the past gives an idea of what will happen in
the future.

Defect trend report – it shows defect counts by status.

Productivity - the cost of testing compared with the overall cost of the project.

Finally, the purpose of metrics is not just to measure the software's
performance but also to understand the progress of the application toward the organizational goal.
Below are some parameters for determining metrics of various software packages:

• Duration
• Complexity
• Technology Constraints
• Previous Experience in Same Technology
• Business Domain
• Clarity of the scope of the project

Defect Density

Defect Density can be defined as the ratio of Defects to Size or the ratio of Defects to effort. The
Unit of Measure for Size could vary as defined by the project. The total defects would
include both Review and Test defects.

Defect Density based on size is calculated at the Overall Project level.

Defect Density based on effort is calculated at the Stage level.

Overall Defect Density = Total number of Defects/ Total Size of the Project

Stage wise Defect Density = Total number of Defects attributed to the particular stage/
Actual effort of the Particular Stage

Note: If UOM is Lines of Code, Defect Density will be Defects per KLOC
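
For example (illustrative numbers): if a project of 25 KLOC has 50 total defects (review plus test), Overall Defect Density = 50 / 25 = 2 defects per KLOC.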


Examples of corrective Action (for Defect Density) include:

 Usage of thoroughly tested & implemented re-usable components can
reduce the defects.
 Devising a good Defect Prevention mechanism based on past project
data with similar technology/domain etc.
 Wherever there are changes to scope (requirements), the size needs to
be re-estimated and the goals for defect detection should be revised
based on the revised size.
 If the project is more UI intensive, a prototype could be developed and
customer sign-off obtained so that defects pertaining to the UI are
eliminated.
 Training in Technology/Domain could be given to the project team if
the team comprises resources relatively new to the Technology/Domain.

Review Efficiency%

This metric shows the efficiency of the review process. A higher ratio of Review
Defects to Total Defects indicates a very efficient Review Process. However, a lower
ratio need not necessarily mean that the review is inadequate. This may be due to
frequent changes in scope, requirements etc.

Typically, the review defects include all reviews, starting from the Requirements Stage.
All review comments that arise from the review of Code/unit / integration / system
test plans / procedures / cases shall be counted under Review defects. The total
number of Defects includes all the defects found in reviews and testing.

In the case of Application Development projects, this metric is applicable to all stages.
In the case of Application Maintenance projects, it is applicable to Analysis stage of
super major enhancement request type only.

Review Efficiency % is calculated at

 Overall Project level

 Stage level

Stage wise Review Efficiency % = (Number of defects detected during current stage
review) / (Number of defects detected during current stage review + Number of
defects detected in subsequent stages attributed to current stage) * 100

Overall Review Efficiency % = (Number of Review defects) / (Total number of
Review + Testing Defects [including customer reported test defects]) * 100
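
For example (illustrative numbers): if 16 defects are detected during the current stage review and 4 defects attributed to that stage are detected in subsequent stages, Stage wise Review Efficiency % = 16 / (16 + 4) * 100 = 80%.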


Defect Removal Efficiency %

This metric shows the efficiency of removing defects by the Internal Review and Testing
process before shipment of the product to the customer. The metric involves pre-ship
defects and post-ship defects.

Defect Removal Efficiency (DRE) = (Total number of Pre-shipment Defects)/ (Total number
of Pre-shipment Defects + Total number of post-shipment Defects) *100

Here, pre-shipment defects include all review and test defects across all stages
found at offshore, and post-shipment defects include acceptance testing and post-
implementation defects.
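
For example (illustrative numbers): if 90 defects were found before shipment and 10 after shipment, DRE = 90 / (90 + 10) * 100 = 90%.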

This is a very good metric for all operational models, as it is indicative of the quality of the
product delivered. For projects following the ‘Testing and QA’ operational model, where
the Test Plan/Test Case is available as a scope of the project, it indicates the quality of the
testing carried out.

During the course of the project, Defect Removal efficiency (DRE) can be obtained at end of
every delivery to the customer.

DRE for the entire project can be calculated at the end of post implementation. If post
implementation support is not in scope of the project then DRE is computed at
the end of Acceptance Testing.

Examples of corrective Action (for Defect Removal Efficiency) include:

 Adequate Review and Testing should be done at offshore to unearth defects


 Test Strategy/Planning should be evolved upfront, as soon as the
requirements are frozen, to decide on the testing methodology that would
be adopted at offshore to carry out comprehensive testing
 Good traceability should be established and maintained throughout the
project life cycle, starting from Requirements through Testing, to check that all the
stated requirements are implemented
 Defects and Changes to requirements should be identified and tracked
separately, so that only the actual defects get into the calculation of defect
removal efficiency and changes to requirements are tracked through the
change management process


Defect Leakage %

This metric gives a very good indication of the review / testing process within a stage.
Any defect leaked to the next stage indicates that the test / review mechanism for that
stage / work product is not adequate. A high leakage rate indicates that the process of
review / testing carried out needs to be examined.

Defect Leakage is calculated at:

 Overall Project level

 Stage level

Information on the number of defects captured in a particular stage and the number of
defects captured in subsequent stages, but attributed to the previous stage, is used in the
calculation of the Defect Leakage Ratio:

Stage-wise Defect Leakage % = (Number of defects attributed to a stage but only
captured in subsequent stages) / (Total number of defects captured in that stage
+ Total number of defects attributed to a stage but only captured in subsequent
stages) * 100

Overall Defect Leakage % = Sum((Number of defects attributed to a stage but
only captured in subsequent stages) / (Total number of defects captured in that
stage + Total number of defects attributed to a stage but only captured in
subsequent stages)) * 100

For example

(Table of defects by stage detected, number of defects, and stage of origin - not fully recoverable; the Design-stage figures below are taken from it.)

Total Defects captured in Design = 24

Total Defects attributed to Design but captured in subsequent phases (i.e. Coding & Testing) = 6

Defect Leakage for Design Phase = (6/(24+6)) * 100 = (0.2)*100 = 20%


20 % of defects have leaked through the reviews in the Design Stage.

Examples of corrective Action include

Stage-wise defects:

 Review of the items more rigorously in the same stage itself.

General Corrective action:

 To uncover more defects in reviews, more training is needed in review
procedures.

 To uncover more defects in testing, re-review (if required) the module /
work packet in question.

Higher defect Leakage Ratio:

 Define and implement more rigorous reviews in earlier stages of the
project.

Average Defect Age

This metric gives an indication as to on an average how long it takes for the defects to get detected,
in terms of number of stages.

This metric is calculated at

 Overall Project level

The formula for Average Defect Age is:

Sum((Stage Detected – Stage of Origin) * Number of defects) / Total number of defects

For example consider the following table of defect distribution in an Application Development
project,

The computation of defect age would be

{(1-1)*15 + (2-1)*5 + (3-1)*5 + (6-1)*1 + (2-2)*15 + (3-2)*5 + (4-2)*2 +
(6-2)*1 + (3-3)*20 + (4-3)*4 + (5-3)*3 + (6-3)*2 + (4-4)*14 + (5-4)*2 +
(5-5)*5 + (6-5)*1 + (6-6)*5} / 105 = 0.50

An average defect age value of less than 1 indicates that the project, on average,
is able to find the defects within the same stage itself. If it is greater than or equal to
1, then the project finds more defects only in the subsequent stages and not in the
same stage.

Here the stage of origin of defect could also be any testing stage like IT, ST and AT,
where defects can occur due to environment setup related problems or inadequate
test cases, etc.

Cause-wise defects

This metric gives an indication of the causes that occur more frequently in a stage or
across stages, for which corrective action may be required to prevent such defects
from occurring in future.

Causes shall be classified as follows:

(Note: This list is not exhaustive in nature. The full list of causes for a defect shall be
found in eTracker / eMetrics).

At the highest level:

Requirements defects, Analysis defects, Design defects, Coding defects, User
documentation defects, Bad fixes

This shall be broken down as below:

Requirements: requirements not identified, ambiguous requirements, changed assumptions,
modified requirements

Coding: bad fix / code, code conversion error, insufficient adherence to standards, incomplete
conversion guidelines / cookbook

General: insufficient skills, inadequate previous review, configuration management problem,
ineffective handover / takeover, insufficient client involvement, incomplete / incorrect
input from customer, wrong compiler options / environment, insufficient test
coverage, wrong test data, tool errors, interface errors.

The top causes of defects (those due to which most defects are caused) shall be
identified, and the project shall record the reasons and the corrective action planned to
prevent these defects from occurring in future.


What are the qualities of good tester?

A test-to-break attitude.

Strong testing skills are important.

Knowledge about the product or application is needed.

Experience & good communication skills are also important.

Attention and dedication.

Minimal programming knowledge is also helpful.

Thinking from the customer's point of view.

Defect Priority

Blocker: This bug prevents developers from testing or developing the software. The defect or issue
blocks further functionality and there is no workaround; this is called a blocker.
Critical: Major functionality is not working, or there is a security breach or data loss. The
software crashes, hangs, or causes you to lose data. There is no workaround.
Major: A major feature is broken. Functionality is not working as expected.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy workaround. GUI issues come under minor.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Enhancement: Request for a new feature or enhancement.

Defect severity

Severity Levels can be defined as follow:

S1 - Urgent/Showstopper. For example a system crash or an error message forcing the window to close.
The tester's ability to operate the system is either totally (system down), or almost totally, affected. A
major area of the user's system is affected by the incident and it is significant to business processes.
S2 - Medium/Workaround. Exists when a problem is found against the specs but the tester can go on
with testing. The incident affects an area of functionality, but there is a workaround which negates the
impact to the business process.


This is a problem that:


a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customer sites, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are
unlikely to occur in normal use, or minor errors in layout/formatting. Problems do not impact use
of the product in any substantive way. These are incidents that are cosmetic in nature and of no or
very low impact to business processes.

High severity, low priority - Critical impact on user: nuclear missiles are launched by accident.
Factor influencing priority: analysis reveals that this defect can only be encountered on the second
Tuesday of the first month of the twentieth year of each millennium, and only then if it's raining and
five other fail safes have failed.

Business decision: the likelihood of the user encountering this defect is so low that we don't feel it's
necessary to fix it. We can mitigate the situation directly with the user.

High severity, low priority - Critical impact on user: when this error is encountered, the
application must be killed and restarted, which can take the application off-line for several minutes.
Factors influencing priority: (1) analysis reveals that it will take our dev team six months of full-time
refactoring work to fix this defect. We'd have to put all other work on hold for that time. (2) Since this
is a mission-critical enterprise application, we tell customers to deploy it in a redundant
environment that can handle a server going down, planned or unplanned.

Business decision: it's a better business investment to make customers aware of the issue, how
often they're likely to encounter it, and how to work through an incidence of it than to devote the
time to fixing it.

Low severity, high priority - Minimal user impact: typo. Factors influencing priority. (1) The typo
appears prominently on our login screen; it's not a terribly big deal for existing customers, but it's
the first thing our sales engineers demo to prospective customers, and (2) the effort to fix the typo
is minimal.

Decision: fix it for next release and release it as an unofficial hot fix for our field personnel.


ETA – Expected Turnaround Time

AUD – Application Understanding Document

DPU – Defect per Unit

DPO – Defect per Opportunity

LCL – Lower Control Limit

UCL – Upper Control Limit

LSL- Lower Specification Limit

USL –Upper Specification Limit

CPDH – Cost Per Developer Hour

CRS - Customer Requirements Specification

SRS - System Requirements Specification

RFP - Request For Proposal

RTM - Released To Manufacturing

MSA - Master Service Agreement

API – Application Programming Interface

Product – the company manufactures a product.

Process - the company provides a service.

How to increase productivity?

Productivity increases with a person who has:

Experience.

Good knowledge of the subject - someone who knows the subject well knows the shortcuts.

Templates - predefined things.


When should we stop testing?

Defect rates fall below a certain level.

The testing budget is exhausted.

High priority defects are fixed.

The project has reached its deadline.

The project has progressed from Alpha to Beta.

The customer wants to stop.

