System Defect Management and Metrics
Process: Testing
This whitepaper documents the process to be followed to manage individual defects raised during a test phase, and describes various defect metrics such as defect density, defect cause, defect priority vs. state, and defect detection against closure.
A defect is a discrepancy between the expected and actual results of the given system; for example, an incorrect implementation of a specification, or a specific requirement missing from the software.
Software defects are expensive. Moreover, the cost of finding and correcting defects represents one of the most expensive software development activities. It is not possible to eliminate all defects, but we can minimize their number and impact on our projects. To do this, the test team needs to implement a defect management process that focuses on preventing defects, catching defects as early in the process as possible, and minimizing their impact. A small investment in this process can yield significant returns.
The primary goal is to prevent defects. Where this is not possible or practical, the goals are to find the defect as quickly as possible and to minimize its impact.
Defect information should be used to improve the process. This, in fact, is the primary reason for gathering defect information.
Most defects are caused by faulty or inconsistent processes. Thus, to prevent defects, the process must be altered.
A good defect management process does not just fulfil formal steps and procedures; it ensures that defects are handled in an appropriate and organized manner from the time they are discovered until their resolution. A well-planned process will always have priorities regarding the value of a defect.
For all test stages, defects will be logged by testers in the defect tracking system against a project name unique to the application and release. This name will be stated within the Test Plan. An illustration of the defect template (otherwise known as a Defect Log) is shown in paragraph 2.6.
The Development Lead will then be responsible for assigning these to the relevant Developer for investigation. Once resolved, the Development Lead will assign the defect back to the Test Team.
Status      Description
Submitted   Defect created by the Tester and assigned
Opened      Triage has determined it to be a valid defect
Assigned    Development Lead has assigned it to a Developer
Resolved    Developer has fixed the defect
Closed      No further action required; closed by the Tester
Reopened    Defect retested but the fix was not accepted
Duplicate   Duplicate record (with a reference back to the other defect)
Postponed   Defect put on hold
(Figure: defect life-cycle diagram showing the flow between the Assigned, Resolved, Reopened and Closed states.)
The life cycle described above is just an example of how a defect can be managed. It can be customised to organisational needs: new states can be introduced, or some of those described above removed from the defect life cycle.
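As a sketch of how such a customised life cycle might be enforced, the following Python fragment encodes the states from the table above as a transition map. The allowed transitions shown are assumptions for illustration, not the definitive process.

    # Minimal sketch of a configurable defect life cycle (illustrative only).
    # States follow the table above; the allowed transitions are assumptions
    # that an organisation would tailor to its own process.
    ALLOWED_TRANSITIONS = {
        "Submitted": {"Opened", "Duplicate", "Postponed"},
        "Opened":    {"Assigned", "Duplicate", "Postponed"},
        "Assigned":  {"Resolved", "Postponed"},
        "Resolved":  {"Closed", "Reopened"},
        "Reopened":  {"Assigned"},
        "Postponed": {"Opened"},
        "Closed":    set(),        # terminal state
        "Duplicate": set(),        # terminal state
    }

    def move(current_state: str, new_state: str) -> str:
        """Return the new state if the transition is allowed, else raise."""
        if new_state not in ALLOWED_TRANSITIONS[current_state]:
            raise ValueError(f"Illegal transition: {current_state} -> {new_state}")
        return new_state

    state = "Submitted"
    state = move(state, "Opened")     # triage accepts the defect
    state = move(state, "Assigned")   # Development Lead assigns a Developer
    state = move(state, "Resolved")   # Developer fixes the defect
    state = move(state, "Closed")     # Tester retests and closes

Encoding the life cycle as data rather than scattered conditionals makes it easy to add or remove states, which is exactly the customisation described above.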
If an unexpected occurrence is observed during test execution that is not scripted, a new
script should be produced and the defect logged against that script.
Testers will be responsible for ensuring that any relevant screenshots are attached to the
defect record.
A defect cannot be closed without agreement between developers, testers and in some cases
business analysts (i.e. where clarification of business requirements interpretation is required).
Defects raised by the Client during UAT will be assigned after initial classification, usually to the Project Lead Tester.
All defects that are fixed during UAT must be successfully retested in the System Test environment prior to release back into UAT. All identified defects must be captured in the defect tracking system. Those that are deemed to be defects must be accurately described and categorised, fully analysed for impact, and have their status updated when it changes.
The defect management process will be controlled by a project specific Defect Management
Group (DMG), as referenced in the following section.
The Defect Management Group (DMG) is used to categorise, classify and prioritise defects. When a defect is raised during test execution, the DMG investigates to determine the cause. If the defect is believed to be caused by an error in code or incorrect configuration of the infrastructure, the defect is deemed valid.
Priority level reflects the impact on the testing process and therefore determines how urgently a defect identified during testing must be fixed.
Priority    Description
1           Prevents any meaningful testing being carried out
2           Stops a significant area of testing from progressing to completion
3           Stops a specific test being completed
4           Does not stop a test being run to completion
5           Query resulting from an unexpected occurrence that does not impact test results
Severity level is informed by the impact of the defect on the system if it is not fixed.
Severity    Description
1           Critical: the system cannot go live
2           High: the defect is significant, with part of the system not working as specified
3           Moderate: the system does not meet stated requirements, impacting the use of a significant part of the system
4           Low: does not meet stated requirements, but an acceptable workaround is available
5           Minor: does not meet stated requirements, but has no adverse impact on the use of the system
Every defect must be closed down in the defect tracking system with an associated Resolution Code from the following list of valid values:
Change Request/Next Release
Code Error
Data Error
Environment Build Problem
Error already in production
Raised in Error
Tolerate
In the case of Live Verification defects raised by the client (business user), these will be raised in the IT Service Management (ITSM) tool and the resolution codes will be in accordance with other Live defects. ITSM is configured with the following codes for 'Cause' in the Resolution Details:
Capacity
Change
During test execution, regular (usually daily) meetings will be held between the Project Manager, Lead Tester, the Client's Project Manager and Business Representative (the DMG). Other parties, e.g. System Architects or Business Analysts, may also be consulted on specific issues.
These meetings will:
Review all new application and environment defects raised, in order to agree that they are correctly categorised and prioritised.
Discuss requirements for any unscheduled code deployments or data refreshes to the test environment from live.
In the event of any disagreement between the parties regarding the categorisation of defects,
the matter shall be resolved in accordance with the provisions of the Dispute Resolution
Procedure.
The following screenshot from the Defect Tracking System (Rational ClearQuest) illustrates the Test Defect Log used. Every defect raised during testing will have an associated defect record logged and will follow the defect management process detailed earlier in this document.
Defect Metrics derive information from defect data (raised in the defect tracking system) with a view to helping decision making. We can use the following defect metrics for effective project management.
Defect density (the number of defects per unit of code) can be measured:
for a duration (say, the first month, the quarter, or the year);
for each phase of the software life cycle (say testing; it may also be a specific testing stage like System Integration or User Acceptance Testing);
for the whole of the software life cycle (Requirement and Design Reviews, Testing, etc.).
Say we have found 3 defects during the System Integration Testing (SIT) phase in 1000 lines of code, but organisation data says 5 defects should be found per 1000 lines of code; this indicates we require further testing in the SIT phase. On the other hand, if we find 8 defects in 1000 lines of code, it will be interpreted as the system being quite unstable and requiring immediate attention.
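A minimal sketch of this defect density check, assuming density is measured in defects per 1000 lines of code (KLOC) and using the baseline of 5 defects per KLOC from the example above; the 1.5x "unstable" threshold is an assumption for illustration.

    # Illustrative defect-density check against an organisational baseline.
    def defect_density(defects_found: int, lines_of_code: int) -> float:
        """Defects per 1000 lines of code (KLOC)."""
        return defects_found / (lines_of_code / 1000)

    BASELINE = 5.0                      # expected defects per KLOC (org data)

    density = defect_density(3, 1000)   # SIT found 3 defects in 1000 lines
    if density < BASELINE:
        print(f"{density:.1f}/KLOC is below baseline: more SIT testing needed")
    elif density > BASELINE * 1.5:      # assumed threshold for "unstable"
        print(f"{density:.1f}/KLOC is well above baseline: needs immediate attention")
    else:
        print(f"{density:.1f}/KLOC is within the expected range")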
This metric provides information about the defect root cause, i.e. the reason the defect exists in the software. Finding the root cause of defects helps in identifying more defects, and sometimes even in preventing them.
The above chart shows that most defects were caused by Environment and User Role Permission issues. Accordingly, the Project Management team will take preventive action to reduce these defects.
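As an illustration, root cause counts of this kind can be tallied with a simple frequency count. The cause labels below are invented sample data, not values mandated by the process.

    # Sketch: tallying defects by root cause to find the biggest contributors.
    from collections import Counter

    defect_causes = [
        "Environment", "Code Error", "Environment", "User Role Permission",
        "Data Error", "User Role Permission", "Environment",
    ]

    for cause, count in Counter(defect_causes).most_common():
        print(f"{cause}: {count}")
    # The top causes (here Environment and User Role Permission) are the
    # candidates for preventive action by the Project Management team.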
This metric provides information about the detection of defects against their closure (cumulative). This comparison gives us vital information about software quality at any given point, a prediction of the remaining defects, and the time needed to fix them.
The chart above shows that the test team identified most of the defects at an early stage, and detection gradually decreased at later stages. The development team managed to close the gap between submitted and closed defects towards the end.
If the gap between submitted and closed defects does not close, it indicates that the defect-fixing activity is not effective and that more defects are being introduced into the system.
If the number of submitted defects keeps growing towards the end of the project, it means that the test team identified most of the defects late in the life cycle.
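A small sketch of this cumulative comparison, using invented weekly counts purely to illustrate the gap calculation:

    # Sketch: cumulative submitted vs closed defects per reporting period,
    # and the remaining gap. The weekly counts below are invented examples.
    from itertools import accumulate

    submitted_per_week = [12, 9, 6, 4, 2]   # detection tails off over time
    closed_per_week    = [5, 8, 7, 6, 7]    # fixing catches up towards the end

    cum_submitted = list(accumulate(submitted_per_week))
    cum_closed = list(accumulate(closed_per_week))

    for week, (s, c) in enumerate(zip(cum_submitted, cum_closed), start=1):
        print(f"Week {week}: submitted={s}, closed={c}, gap={s - c}")
    # A gap that never closes suggests defect fixing is ineffective, or
    # that fixes are introducing new defects into the system.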
This metric provides the number of defects for each severity. If there are a lot of critical-severity defects, it calls the software quality into question, and project management can take drastic steps to improve it.
This metric provides the defect count, with state, for each priority. If there are a lot of P1 and P2 defects in the Submitted/Assigned/Opened states, Project Management will ask the development team to work first on fixing these defects. Through this metric, the project management team can also find the closed-defect count for any given priority.
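The two distribution metrics above (count by severity, and count by priority and state) amount to simple grouped counts. The following sketch uses invented sample records whose fields mirror the tables earlier in this document.

    # Sketch: defect counts by severity, and by priority vs state.
    from collections import Counter

    defects = [
        {"severity": 1, "priority": 1, "state": "Assigned"},
        {"severity": 2, "priority": 2, "state": "Submitted"},
        {"severity": 2, "priority": 1, "state": "Closed"},
        {"severity": 4, "priority": 3, "state": "Closed"},
    ]

    by_severity = Counter(d["severity"] for d in defects)
    by_priority_state = Counter((d["priority"], d["state"]) for d in defects)
    print(dict(by_severity))

    # Open P1/P2 defects are the ones development should fix first.
    open_states = {"Submitted", "Assigned", "Opened"}
    urgent = sum(n for (prio, state), n in by_priority_state.items()
                 if prio <= 2 and state in open_states)
    print(f"Open P1/P2 defects: {urgent}")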
This metric indicates the level of business understanding among the testers. If the percentage of defect rejection is large, we can conclude that the testers do not have sufficient business knowledge.
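A minimal sketch of the rejection percentage, assuming that defects resolved as 'Raised in Error' (from the resolution code list above) count as rejected; the sample resolutions are invented.

    # Sketch: defect rejection percentage from resolution codes.
    def rejection_rate(resolutions: list[str]) -> float:
        rejected = resolutions.count("Raised in Error")
        return 100 * rejected / len(resolutions)

    resolutions = ["Code Error", "Raised in Error", "Data Error",
                   "Code Error", "Raised in Error"]
    print(f"Rejection rate: {rejection_rate(resolutions):.0f}%")  # 40%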
This metric indicates the number of improper defect resolutions. These defects require additional, unplanned effort to re-fix and retest. It is an indicator of how mature the organisation's development processes are.
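Similarly, a sketch of the improper-resolution rate, assuming a defect whose status history contains 'Reopened' was not fixed properly the first time; the histories below are invented examples.

    # Sketch: improper-resolution (reopen) rate from defect status histories.
    histories = [
        ["Submitted", "Opened", "Assigned", "Resolved", "Closed"],
        ["Submitted", "Opened", "Assigned", "Resolved", "Reopened",
         "Assigned", "Resolved", "Closed"],
    ]

    reopened = sum(1 for h in histories if "Reopened" in h)
    print(f"Reopen rate: {100 * reopened / len(histories):.0f}%")  # 50%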
Last but not least, defect metrics are an indicator of the quality of the software/product under test and of the test team's effectiveness. These metrics are equally important to track and to base important test management decisions on; they help a test manager decide whether the test team is ready to sign off on the software under test.
Also, as a best practice, I would recommend using these metrics on an ongoing basis (say, once every 15 days or so) to keep track of how the software/product quality is shaping up. This will help the project or test management proactively recommend corrective actions to the development team, as opposed to reactively accepting poor quality later in the game.
The following lessons, learnt by experience, can be used to prevent defects or detect them early:
System Integration Testing should be scheduled so that the core functionality is tested first.
Design documents should contain all the necessary validations, to avoid validation errors.
Basic-level review and testing should be done at the developer's level before handing over the code to testers.