
ISYS338 – Testing and System Implementation
Session 10: Bug Management

Learning Outcomes
• LO-3: Apply the testing design plan and tools to the testing process
References
• Black, Rex. (2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing, 3rd ed. Wiley, Indianapolis. ISBN: 9780470404157.
• Burnstein, Ilene. (2003). Practical Software Testing. Springer, New York. ISBN: 0-387-95131-8.
• Homès, Bernard. (2012). Fundamentals of Software Testing. ISTE – Wiley, London – Hoboken. ISBN: 978-1-84821-324-1.
Sub Topics
• Benefit of Defect Analysis
• Bugs & Root Causes
• Steps for Better Bug Reports
• Bug Life Cycle
• Managing Bug Tracking
• Levels of Importance to Bugs
Benefit of Defect Analysis

Argumentation about Defect Analysis
• Defect analysis/prevention processes help to reduce the costs of developing and maintaining software by reducing the number of defects that require our attention in both review and execution-based testing activities.
• Defect analysis/prevention processes help to improve software quality. If we identify the cause of a class of defects and change our process so that it does not recur, our software should be less defective with respect to that class of defects and more able to meet the customer’s requirements.
• If our software contains fewer defects, the total number of problems we must look for is reduced; the sheer volume of problems we need to address may be significantly smaller.
Argumentation about Defect Analysis (cont.)
• Defect analysis/prevention processes provide a framework for overall process improvement activities. When we know the cause of a defect, we identify a specific area of our process that needs work. Improvements made in this area usually produce readily visible benefits.
• Defect analysis/prevention activities not only help to fine-tune an organization’s current process and practices, but also support the identification and implementation of new methods and tools, so that the current process continues to evolve and comes closer to being optimized.
• Defect analysis/prevention activities encourage interaction between a diverse number of staff members, the close interrelationships between specialized group activities, and the quality of internal
Benefits of Defect Analysis and Prevention Processes
Source: Burnstein (2003, pg. 443)
Bugs and Root Causes

Bugs and Their Root Causes
Source: Black (2009, pg. 170)
Bugs and Their Root Causes (cont.)
• An anomaly occurs when a tester observes an unexpected behavior. If the test environment and the tester’s actions were correct, this anomaly indicates either a system failure or a test failure.
• The failure arises from a bug in either the system or the test. The bug comes from an error committed by a software or hardware engineer (while creating the system under test) or a test engineer (while creating the test system).
• That error is the root cause.
• Usually, the aim of performing a root cause analysis isn’t to determine the exact error and how it happened. Other than flogging some hapless engineer, you can’t do much with such information.
• Instead, root cause analysis categorizes bugs into a taxonomy.
Levels of Importance to Bugs

Mechanism to Assign Levels of Importance to Bugs
• Severity
• Priority
• Risk Priority Number (RPN)

Severity

• Severity means the impact, immediate or delayed, of a bug on the system under test, regardless of the likelihood of occurrence under end-user conditions or the effect such a bug would have on users.
• You can use the same scale used for failure mode and effect analysis (FMEA):
1. Loss of data, hardware damage, or a safety issue
2. Loss of functionality with no workaround
3. Loss of functionality with a workaround
4. Partial loss of functionality
5. Cosmetic or trivial
Priority
• You use priority to capture the elements of importance not considered in severity, such as the likelihood of occurrence in actual customer use and the subsequent impact on the target customer.
• When determining priority, you can also consider whether this kind of bug is prohibited by regulation or agreement, what kinds of customers are affected, and the cost to the company if the affected customers take their business elsewhere because of the bug.
• Again, you can use a scale like the priority scale used in the FMEA:
1. Complete loss of system value
2. Unacceptable loss of system value
3. Possibly acceptable reduction in system value
4. Acceptable reduction in system value
5. Negligible reduction in system value
Risk Priority Number (RPN) for the Bug
• You can multiply severity by priority to calculate a risk priority number (RPN) for the bug.
• Using this approach, the RPN can range from 1 (an extremely dangerous bug) to 25 (a completely trivial bug).
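As a quick illustration (not part of the original slides), the calculation above can be sketched in a few lines of Python. The dictionaries simply restate the FMEA-style severity and priority scales from the previous slides; the function name and the example values are illustrative assumptions.

```python
# Minimal sketch of the severity x priority calculation described above.
# Ratings follow the FMEA-style scales on the previous slides:
# 1 is the most serious rating, 5 the most trivial.

SEVERITY = {
    1: "Loss of data, hardware damage, or a safety issue",
    2: "Loss of functionality with no workaround",
    3: "Loss of functionality with a workaround",
    4: "Partial loss of functionality",
    5: "Cosmetic or trivial",
}

PRIORITY = {
    1: "Complete loss of system value",
    2: "Unacceptable loss of system value",
    3: "Possibly acceptable reduction in system value",
    4: "Acceptable reduction in system value",
    5: "Negligible reduction in system value",
}

def risk_priority_number(severity: int, priority: int) -> int:
    """Return RPN = severity * priority; 1 is most dangerous, 25 most trivial."""
    if severity not in SEVERITY or priority not in PRIORITY:
        raise ValueError("severity and priority must be integers from 1 to 5")
    return severity * priority

# Example: a bug with no workaround (severity 2) that unacceptably
# reduces system value (priority 2) gets an RPN of 4.
print(risk_priority_number(2, 2))  # -> 4
```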
Bug Report

Example of Good Bug Report
Source: Black (2009, pg. 149)
Example of Good Bug Report (cont.)
• The previous bug report contains three basic sections: summary, steps to reproduce, and isolation.
• The summary is
– a one- or two-sentence description of the bug, emphasizing its impact on the customer or the system user. The summary tells managers, developers, and other readers why they should care about the problem.
• The sentence “I had trouble with screen resolutions” is a lousy summary; the sentence “Setting screen resolution to 800 by 1024 renders the screen unreadable” is much better. A succinct, hard-hitting summary hooks the reader and puts a label on the report. Consider it your one chance to make a first impression.
Example of Good Bug Report (cont.)
• The steps to reproduce
– provide a precise description of how to repeat the failure. For most bugs, you can write down a sequence of steps that re-create the problem. Be concise yet complete, unambiguous, and accurate.
– This information is critical for developers, who use your report as a guide to duplicate the problem as a first step to debugging it.
• Isolation
– refers to the results and information the tester gathered to confirm that the bug is a real problem and to identify those factors that affect the bug’s manifestation. What variations or permutations did the tester try in order to influence the behavior?
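To make the three-part structure concrete, here is a minimal sketch (not taken from Black’s book) of how a bug report with a summary, steps to reproduce, and isolation notes might be represented; the class name, field names, and example text are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BugReport:
    """Illustrative bug report carrying the three sections described above."""
    summary: str                    # one or two sentences, customer-impact focused
    steps_to_reproduce: List[str]   # precise, ordered steps that re-create the failure
    isolation: str                  # what the tester varied to confirm and narrow the bug
    severity: int = 3               # FMEA-style 1 (worst) .. 5 (trivial)
    priority: int = 3               # FMEA-style 1 (worst) .. 5 (negligible)

# Hypothetical example, echoing the screen-resolution summary quoted above.
report = BugReport(
    summary="Setting screen resolution to 800 by 1024 renders the screen unreadable.",
    steps_to_reproduce=[
        "Open the display settings dialog.",
        "Select the 800 by 1024 resolution and apply it.",
        "Observe that all text and icons become unreadable.",
    ],
    isolation="Reproduced on two different monitors; other resolutions display correctly.",
)
print(report.summary)
```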
Example of Incomplete Bug Report
Source: Black (2009, pg. 152)

Example of Confusing Bug Report
Source: Black (2009, pg. 152)

Design for a Basic Bug-Tracking Database
Source: Black (2009, pg. 155)

A Bug Detail Report
Source: Black (2009, pg. 156)

A Bug Detail Report with Dynamic Information
Source: Black (2009, pg. 166)
Steps for Better Bug Reports

Environment in Dealing with Bug Reports
• Some number of bug reports will always be
irreproducible or contested. Some bugs exhibit
symptoms only intermittently, under obscure or
extreme conditions.
• In some cases, such as system crashes and database
corruption, the symptoms of the bug often destroy the
information needed to track down the bug.
• Inconsistencies between test environments and the
programmers’ systems sometimes lead programmers
to respond, ‘‘works fine on my system’’.
• On some projects without clear requirements, there
can be reasonable differences of opinion over what is
correct behavior under certain test conditions.
– Sometimes testers misinterpret test results and report
bugs when the real problem is bad test procedures, bad
test data, or incorrect test cases.
Ten Steps for Better Bug Reports
1. Structure: Test thoughtfully and carefully, whether you’re using reactive techniques, following scripted manual tests, or running automated tests.
2. Reproduce: My usual rule of thumb is to try to reproduce the failure three times. If the problem is intermittent, report the rate of occurrence; for example, one in three tries, two in three tries, and so forth (see the sketch after this list).
3. Isolate: See if you can identify variables (for example, configuration changes, workflow, data sets) that might change the symptoms of the bug.
Ten Steps for Better Bug Reports (cont.)
4. Generalize: Look for places where the bug’s symptoms might occur in other parts of the system, using different data, and so forth, especially where more severe symptoms might exist.
5. Compare: Review the results of running similar tests, especially if you’re repeating a test run previously.
6. Summarize: Write a short sentence that relates the symptom observed to the customers’ or users’ experiences of quality, keeping in mind that in many bug review or triage meetings, the summary is the only part of the bug report that is read.
7. Condense: Trim any unnecessary information, especially extraneous test steps.
Ten Steps for Better Bug Reports (cont.)
8. Be clear: Use clear words, especially avoiding words that have multiple distinct or contradictory meanings; for example, “The ship had a bow on its bow,” and “Proper oversight prevents oversights,” respectively.
9. Neutralize: Express yourself impartially, making statements of fact about the bug and its symptoms and avoiding hyperbole, humor, or sarcasm. Remember, you never know who’ll end up reading your bug report.
10. Review: Have at least one peer, ideally an experienced test engineer or the test manager, read the bug report before you submit it.
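As an illustration of step 2 (not from the original slides), the sketch below shows one way to record a reproduction rate by re-running a failure-triggering check a fixed number of times; the flaky_check function is a hypothetical stand-in for whatever manual or automated procedure exposes the bug.

```python
import random
from typing import Callable

def reproduction_rate(trigger_bug: Callable[[], bool], attempts: int = 3) -> str:
    """Run the failure-triggering procedure several times and report 'x in y tries'."""
    failures = sum(1 for _ in range(attempts) if trigger_bug())
    return f"{failures} in {attempts} tries"

# Hypothetical stand-in for an intermittent failure that appears about half the time.
def flaky_check() -> bool:
    return random.random() < 0.5

print("Observed failure rate:", reproduction_rate(flaky_check, attempts=3))
```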
Managing Bug Tracking

Politics and Misuse of Bug Data
• We should briefly examine political issues that are
specifically related to bug data. From the most
adversarial point of view, for example, you can see
every bug report as an attack on a developer.
• You probably don’t — and certainly shouldn’t —
intend to offend, but it helps to remember that bug
data is potentially embarrassing and subject to
misuse.
• Candor and honesty are critical in gathering clean bug
data, but developers might distort the facts if they
think you might use the data to slam them with the bug
reports.
• Think of the detailed bug information your database
captures as a loaded gun: an effective tool in the right
hands and used with caution, but a dangerous
implement of mayhem if it’s treated carelessly.
Don’t Fail to Build Trust
• Some situations are irretrievable. Developers who are convinced that a written bug report is one step removed from a written warning in their personnel files probably will never trust you.
• Most developers, though, approach testing with an
open mind. They understand that testing can provide
a useful service to them in helping them fix bugs and
deliver a better product.
• How do you keep the trust and support of these
developers?
– Don’t take bugs personally, and don’t become emotional about
them.
– Submit only quality bug reports: a succinct summary, clear steps to
reproduce, evidence of significant isolation work, accuracy in
classification information, and a conservative estimate in terms of
priority and severity. Also try to avoid cheap shot bug reports that
can seem like carping.
– Be willing to discuss bug reports with an open mind.
Don’t Be a Backseat Driver
• The test manager needs to ensure that testers
identify, reproduce, and isolate bugs.
• It’s also part of the job to track the bugs to
conclusion and to deliver crisp bug status
summaries to senior and executive management.
• If you, as an outsider, make it your job to nag
developers about when a specific bug will be fixed or
to pester the development manager about how slow
the bug fix process is, you are setting yourself up for a
highly antagonistic situation.
• Reporting, tracking, re-testing, and summarizing bugs
are your worries. Whether any particular bug gets
fixed, how it gets fixed, and when it gets fixed are
someone else’s concerns.
Don’t Make Individuals Look Bad
• It is a bad idea to create and distribute reports that
make individuals look bad. There’s probably no faster
way to guarantee that you will have trouble getting
estimated fix dates out of people than to produce a
report that points out every failure to meet such
estimated dates.
• Creating reports that show how many bug fixes
resulted in reopened rather than closed bugs, grouped
and totaled by developer, is another express lane to
bad relationships.
• Again, managing the developers is the development
manager’s job, not yours.
• No matter how useful a particular report seems, make
sure that it doesn’t bash individuals.
Sticky Wickets
• Challenging bugs crop up on nearly every project. The most vexing are those that involve questions about correct behavior, prairie dog bugs that pop up only when they feel like it, and bugs that cause a tug-of-war over priority.
Bug or Feature?
• Many projects have only informal specifications, and the requirements can be scattered around in emails, product road maps, and sales materials. In such cases, disagreements can arise between development and test over whether a particular bug is in fact correct system behavior.
• How should you settle these differences? Begin by
discussing the situation with the developers, their
manager, and your testers. Most of these
disagreements arise from miscommunication. Before
making a major issue out of it, confirm that all the
parties are clear on what the alleged bug is and why
your team is concerned.
Irreproducible Bug
• The challenge with irreproducible bugs comes in two flavors.
– First, some bugs simply refuse to reproduce their symptoms
consistently. This is especially the case in system testing, in which
complex combinations of conditions are required to re-create
problems. Sometimes these types of failures occur in clusters. If you
see a bug three times in one day and then don’t see it for a week,
has it disappeared, or is it just hiding? Tempting as it is to dismiss
this problem, be sure to write up these bugs. Random, intermittent
failures— especially ones that result in system crashes or any other
data loss— can have a significant effect on customers.
– The second category of irreproducible bugs involves
problems that seem to disappear with new revisions of the
system, although no specific fix was made for them. I refer to
these as ‘‘bugs fixed by accident.’’ You will find that more bugs are
fixed by accident than you expect, but that fewer are fixed by
accident than some project Pollyannas suggest. If the bug is an
elusive one, you might want to keep the bug report active until
you’re convinced it’s actually gone.
Deferring Trivia or Creating Test Escapes?
• While bug severity is easy to quantify, priority is not.
Developing consensus on priority is often difficult.
What do you do when bugs are assigned a low priority?
Bugs that will not be fixed should be deferred.
• If you don’t keep the active bug list short, people will
start to ignore it. However, there’s a real risk that some
deferred bugs will come back to haunt you.
• What if a deferred bug pops up in the field as a critical
issue? Is that a test escape? Not if my team found it and
then deferred it on the advice or insistence of the
project manager.
• After you institute a bug-tracking system, including the
database and metrics discussed here, you will find
yourself the keeper of key indicators of project status.
• Fairness and accuracy should be your watchwords
in this role.
Thank You
