
CSTE

Domain 9
Defect Tracking
and Correction
Presented by Mike Webb, for the SISQA CSTE Study Group
Study Guidelines
A major test objective is to identify defects.
Once identified, defects need to be
• recorded
• monitored
• reported
• corrected
The CSTE candidate should be skilled in the
entire defect process.
2
Part 1 – Overview of
Defect Management Process
• Primary goal is to prevent defects
• Based on risk assessment and impact
• Integrated into development process
• Uses automated defect capture and analysis,
where possible
• Used to improve the development process

3
Part 2 - The Defect
Management Process
• Defect prevention
• Deliverable baselines
• Defect discovery
• Defect naming
• Defect resolution
• Process improvement
• Management reporting
4
Defect Management Process

Defect Prevention → Deliverable Baseline → Defect Discovery → Defect Resolution → Process Improvement

Management Reporting (spans all steps)
Defect Prevention
• Best approach to defects is to eliminate
them altogether!
• Since technology doesn’t exist yet to
eliminate all defects, strategies are
needed to find defects quickly and
minimize their impact.
• High priority is to identify and implement
the best defect prevention techniques

6
Defect Prevention Steps

Identify Critical Risks → Estimate Expected Impact → Minimize Expected Impact

7
Identify Critical Risks
• Identify the risks that pose the largest
threat to the system.
• Risks vary based on type of project,
technology, users' skills, etc.
• Don’t try to identify every conceivable risk,
just those critical to success of the project.

8
Examples of Critical Risks
• Missing a key requirement
• Critical application software or vendor supplied
software doesn’t function properly
• Software doesn’t support major business functions,
requiring reengineering of the software
• Unacceptably slow performance
• Hardware malfunction
• Hardware and/or software integration problems
• Users unable or unwilling to embrace new system
9
Estimate Expected Impact
• For each critical risk, assess the impact,
in dollars, if the risk becomes a problem,
and the probability that the risk will
become a problem.
• The product of these two numbers is the
expected impact.
• Precision is not important. What is
important is the order of magnitude of risk.
10
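The expected-impact calculation on this slide can be sketched in Python; the risk names, dollar impacts, and probabilities below are illustrative assumptions, not figures from the deck.

```python
# Expected impact = estimated dollar impact if the risk becomes a
# problem x probability that it will. Order of magnitude matters,
# not precision. All figures below are illustrative assumptions.
risks = [
    ("Missing a key requirement",     250_000, 0.20),
    ("Unacceptably slow performance", 100_000, 0.35),
    ("Hardware malfunction",          500_000, 0.05),
]

for name, impact_dollars, probability in risks:
    expected = impact_dollars * probability
    print(f"{name}: expected impact ${expected:,.0f}")
```

Ranking risks by this product shows where prevention effort is likely to pay off most.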
Annual Loss Expectation
Formula (ALE)
• ALE is one of the more effective methods
of estimating the expected impact of a risk.
• ALE = dollar loss per event times number
of events per year.
• Estimated loss can be calculated by using
average loss for a sample of loss events.

11
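The ALE formula can be worked through with sample numbers (all illustrative assumptions): average the loss over a sample of loss events, then multiply by the event frequency.

```python
# ALE = average dollar loss per event x number of events per year.
# Sampled losses and frequency are illustrative assumptions.
loss_events = [1_200, 950, 1_450, 800]  # dollar loss per sampled event
avg_loss_per_event = sum(loss_events) / len(loss_events)  # 1,100.0
events_per_year = 24

ale = avg_loss_per_event * events_per_year
print(f"ALE: ${ale:,.0f}")  # ALE: $26,400
```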
Factors Influencing
Expected Impact
• Probability that the risk will become a
problem
• How long it takes to recognize the problem
• How long it takes to fix the problem
———————————
• Knowing the impact provides insight into
how to reduce the risk

12
Minimize Expected Impact
• Eliminate the risk, or at least reduce risk
– Reduce scope of system
– Avoid untested technology
• Reduce possibility of risk becoming a problem
– Inspections
– Testing
• Reduce impact if there is a problem
– Contingency plans
– Disaster recovery plans

13
Two Ways to Minimize Risk
1) Reduce expected loss per event
2) Reduce frequency of an event
——————————
• If both are reduced to zero, the risk is
eliminated
• If loss per event is reduced, impact is
reduced when problem does occur.
• If frequency is reduced, probability of risk
becoming a problem is reduced.
14
Defect Prevention Techniques
• Quality Assurance
– Ensure that processes used are adequate to produce the desired
result
• Train and Educate the Work Force
– Better training leads to higher quality
• Train and Educate Customers
– Use creative strategies, such as more elaborate Help, cue cards,
multimedia tutorials, etc.
• Standardize the Process
– Methodology and standards produce a repeatable process that
prevents defects from recurring
• Defensive Design and Defensive Code
– Design such that two or more independent parts of system must fail
before a failure can occur.
– Use Assertions (code which brings unexpected conditions to the
attention of the programmer or user)
15
Deliverable Baselining
• A deliverable is baselined when it reaches a
predefined milestone in development.
• A milestone involves transferring the product
from one stage of development to the next.
• Cost of making changes and fixing defects
increases at each milestone.
• Deliverable should be baselined when changes
or defects can have an impact on deliverables
on which other people are working.

16
Baseline Activities
• Identify key deliverables
– Determine the point in the development
process where the deliverable will be
baselined
• Define standards for each deliverable
– Define the requirements and criteria that must
be met before deliverable can be baselined

17
Defects and Baselining
• A defect exists only when one or more
baselined components does not satisfy its
requirements.
• Errors caught before baseline are NOT
considered defects.

18
Defect Discovery
• A defect is discovered only when the
defect has been formally brought to
attention of developers, AND developers
acknowledge that the defect is valid.
• Finding a problem is NOT discovering a
defect!

19
Defect Discovery Steps

Find Defect → Report Defect → Acknowledge Defect

20
Step 1: Find Defect
Two usual methods for finding a defect
1. Preplanned activities intended to uncover
defects, e.g. inspection or testing.
2. By accident, such as user reports

21
Defect Finding Techniques
• Static techniques
– Deliverable is examined (manually or by a tool) for
defects
– Examples: reviews, walkthroughs, inspections
• Dynamic techniques
– A deliverable is used to discover defects
– Example: testing
• Operational techniques
– An operational system contains a defect found by
users, customers, or control personnel.
– Found as a result of a failure

22
Comments About
Defect Finding Techniques
• An effective defect management program
requires each of the three categories
• In each category, the more formal the integration
of the techniques, the more effective they are
• Static techniques generally find defects earlier,
so they are more efficient
• Inspection is effective at removing defects.
• Inspection is also a good way to train new staff
in best practices and the functioning of the
system being inspected.

23
Step 2: Report Defect
• Developers must quickly become aware of
the defect
• If found during the defect finding phase,
complete and submit a defect report
• If found by other means, plan for other
reporting methods, such as computer
forums, email, call center, etc

24
Step 3: Acknowledge Defect
• Developer must decide if defect is valid,
usually by trying to reproduce the problem
• Submitter can help the developer by
including details in the defect report about
how to reproduce the defect

25
Strategies to Pinpoint
Cause of a Defect
• Instrument the code to trap environment
state when error occurs
• Write code to check the validity of the
system
• Analyze reported defects. If not
reproducible, look for patterns in similar
defect reports.

26
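The first strategy, instrumenting the code to trap environment state when an error occurs, might look like the following Python sketch; the decorator and function names are assumptions for illustration.

```python
import datetime
import functools
import traceback

def trap_state(func):
    """On failure, capture environment state (arguments, time,
    traceback) before re-raising, so the defect report can include
    enough detail to reproduce the problem."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            snapshot = {
                "function": func.__name__,
                "args": args,
                "kwargs": kwargs,
                "time": datetime.datetime.now().isoformat(),
                "traceback": traceback.format_exc(),
            }
            print("Trapped defect state:", snapshot["function"], snapshot["args"])
            raise
    return wrapper

@trap_state
def divide(a, b):
    return a / b
```

In practice the snapshot would be written to a log rather than printed, so patterns across non-reproducible defect reports can be analyzed later.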
Defect Naming
Level 1 – Name of the defect
1. Gather representative sample of defects, from help
desk, QA, project teams, etc.
2. Identify major developmental phases and activities.
Start with broad list, then reduce to no more than 20
items.
3. Sort identified defects by the phase/activity in which
they were found.
4. Within each phase/activity, categorize defects into
groups with similar characteristics. Follow Pareto's
rule: about 80% will fall into a few groups, and the
remaining 20% will be widely dispersed among other
groups. Call these “all other”.
27
Defect Naming (continued)
Level 2 – Developmental Phase or Activity
The phase should coincide with the
organization's development methodology – e.g.
business requirements, technical design,
development, acceptance, installation – or an
activity such as data preparation.
Level 3 – Defect Category
Suggested defect categories for each phase:
1. Missing
2. Inaccurate
3. Incomplete
4. Inconsistent
28
Defect Naming Example
If an incorrect requirement was found, it
could be named as:
• Level 1 – Incorrect requirement
• Level 2 – Requirements phase
• Level 3 – Inaccurate
Note that Levels 2 and 3 are qualifiers to the
Level 1 name.

29
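The three-level naming scheme can be represented as a simple record; this Python dataclass is a sketch, using the slide's own example values.

```python
from dataclasses import dataclass

@dataclass
class DefectName:
    name: str      # Level 1 - name of the defect
    phase: str     # Level 2 - developmental phase or activity
    category: str  # Level 3 - Missing / Inaccurate / Incomplete / Inconsistent

d = DefectName("Incorrect requirement", "Requirements phase", "Inaccurate")
print(f"{d.name} ({d.phase}: {d.category})")
```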
Defect Resolution Steps

Prioritize Fix → Schedule Fix → Fix Defect → Report Resolution

30
Prioritize Fix
• Consider these questions:
– Is this a new defect, or previously submitted?
– What priority should be given to fixing defect?
– Are there actions to minimize impact of defect
prior to the fix? Is there a workaround?

31
Three-level
Prioritization Method
1. Critical: Defect would have serious
impact on organization’s business
operation
2. Major: Defect would cause output of
software to be incorrect or stop execution
3. Minor: Defect doesn’t directly affect user,
such as documentation error, or cosmetic
GUI error

32
Schedule Fix
• Schedule the fix based on priority of the
defect
• Not all defects are created equal in terms
of how quickly they need to be fixed.

33
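The three-level prioritization and the priority-driven scheduling on these two slides can be sketched together in Python; the defect IDs below are hypothetical.

```python
from enum import IntEnum

class Priority(IntEnum):
    CRITICAL = 1  # serious impact on the organization's business operation
    MAJOR = 2     # incorrect output or stopped execution
    MINOR = 3     # documentation error, cosmetic GUI error

# Hypothetical open defects: (defect ID, priority)
open_defects = [(101, Priority.MINOR), (102, Priority.CRITICAL), (103, Priority.MAJOR)]

# Schedule fixes by priority: critical defects first.
schedule = sorted(open_defects, key=lambda d: d[1])
print([defect_id for defect_id, _ in schedule])  # [102, 103, 101]
```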
Fix Defect
• Correct the problem
• Verify the fix
• Review test data, checklists, documentation,
etc. (and enhance them if needed), in order
to find similar defects earlier in the future.

34
Report Resolution
• After defect is fixed and verified, the
developers, users, management, and
others need to be notified that the defect
has been fixed.
• Include details such as the nature of the
fix, when and how the fix will be released.

35
Process Improvement
• Take the opportunity to learn how to improve the
process and prevent potentially major failures.
• Even if the defect was not critical, the fact that
there was a defect is a big deal.
• It is only the developer’s good luck that prevents a
defect from causing a major failure.
• Review the process which originated the defect.
• Review the validation process, which should have
caught the defect earlier in the process.
• If defect got this far before being caught, what
similar defects may not have been discovered yet?

36
Two Viewpoints of a Defect
The producer’s view:
A defect is a deviation from specifications,
whether missing, wrong, or extra.
The customer’s view:
A defect is anything that causes customer
dissatisfaction, whether in the requirements
or not. This view is known as “fit for use”.

37
Purposes of Defect Reporting
1. To correct the defect
2. To report status of the product
3. To gather statistics used to develop
defect expectations in future products
4. To improve the development process

38
Examples of Required Data
for Defect Reports
• Defect ID number
• Descriptive defect name and type
• Source of defect, e.g. test case or other source
• Defect severity
• Defect priority
• Defect status, e.g. open, fixed, closed
• Date and Time tracking
• Detailed description, including how to reproduce
• Component or program where defect was found
• Screen prints, log files, etc
• Stage of origination
• Person assigned to research and/or correct the defect

39
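The required-data list can be captured as a record type; this dataclass is a sketch, and the field names are assumptions mapped from the bullets above.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class DefectReport:
    defect_id: int
    name: str                       # descriptive defect name and type
    source: str                     # e.g. test case ID
    severity: str
    priority: str
    status: str = "open"            # open, fixed, closed, ...
    opened_at: datetime = field(default_factory=datetime.now)
    description: str = ""           # detailed, including how to reproduce
    component: str = ""             # component or program where found
    attachments: list = field(default_factory=list)  # screen prints, log files
    origin_stage: str = ""          # stage of origination
    assignee: Optional[str] = None  # person assigned to research/correct

r = DefectReport(101, "Login fails with empty password", "TC-17",
                 "Major", "Critical")
print(r.defect_id, r.status)
```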
Sample Defect Tracking Process
1. Execute test, compare actual results to expected results.
If discrepancy found, log it with status of ‘open’.
2. Test Manager or Tester reviews defect report with
developer to determine if it is really a defect.
3. Assign defect to developer for correction. Developer
fixes problem, and changes status to ‘fixed’ or ‘retest’.
4. Defect goes back to test team for retest. Regression
testing performed as needed.
5. If retest results match expectations, update status to
‘closed’. If still not fixed, change status back to ‘open’
and send back to developer.
6. Repeat 3-5 until problem is resolved.

40
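The status transitions implied by steps 1-6 can be sketched as a tiny state machine; the status names follow the process above, while the function itself is an illustrative assumption.

```python
# Allowed status transitions from the tracking process above.
TRANSITIONS = {
    "open": {"fixed"},            # step 3: developer fixes, marks 'fixed'
    "fixed": {"closed", "open"},  # steps 4-5: retest passes or fails
    "closed": set(),              # terminal
}

def advance(status, new_status):
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

s = advance("open", "fixed")  # developer fix
s = advance(s, "closed")      # retest matched expectations
print(s)  # closed
```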
The End

CSTE
Knowledge Domain 9
Defect Tracking
and Correction
41
