Final-Chapter 4 Defect Management-updated-RECENT

Uploaded by

Sayee Lembhe

Chapter 4 Defect Management 14 marks

4.1. Defect Classification, Defect Management Process.


4.2. Defect Life Cycle, Defect Template
4.3. Estimate Expected Impact of a Defect, Techniques for Finding Defects,
Reporting a Defect.

Introduction
i. Software defects are expensive.
ii. The cost of finding and correcting defects represents one of the most expensive
software development activities.
iii. While defects may be inevitable, we can minimize their number and impact on
our projects.
iv. To do this, development teams need to implement a defect management process
that focuses on preventing defects, catching defects as early in the process as
possible, and minimizing their impact.
v. A little investment in this process can yield significant returns.

 A Defect in Software Testing is a variation or deviation of the software
application from the end user's requirements or the original business requirements.
 A software defect is an error in coding which causes incorrect or
unexpected results from a software program, so that it does not meet the actual
requirements. Testers might come across such defects while executing the
test cases.
 The two terms, "defect" and "bug", have a very thin line of difference; in the
industry both are faults that need to be fixed, and so they are used
interchangeably by some testing teams.
 When testers execute the test cases, they might come across such test results
which are contradictory to expected results.
 This variation in test results is referred to as a Software Defect.
 These defects or variations are referred to by different names in different
organizations, such as issues, problems, bugs or incidents.

Defect Management :
Defect Management is the process of recognizing and recording
defects, classifying them, investigating them, taking action to resolve them, and
disposing of them when resolved.

Need of Defect Management :


1. Defect analysis at early stages of software development reduces the time, cost
and resources required for rework.
2. Early defect detection prevents defect migration from requirement phase to
design and from design phase into implementation phase.
3. It enhances quality by adding value to the most important attributes of
software, like reliability, maintainability, efficiency and portability.
1| Chapter 4 Defect Management by Vaishali Rane
Goals of Defect Management Process (DMP)
Given below are the various goals of this process:
 Prevent the Defect
 Early Detection
 Minimize the impact
 Resolution of the Defect
 Process improvement

4.1 Defect Classification


i. A Software Defect / Bug is a condition in a software product which does not meet
a software requirement (as stated in the requirement specifications) or end-user
expectations (which may not be specified but are reasonable).
ii. In other words, a defect is an error in coding or logic that causes a program to
malfunction or to produce incorrect/unexpected results.
iii. A program that contains a large number of bugs is said to be buggy.
iv. Reports detailing bugs in software are known as bug reports.
v. Applications for tracking bugs are known as bug tracking tools.
vi. The process of finding the cause of bugs is known as debugging.
vii. The process of intentionally injecting bugs in a software program, to estimate
test coverage by monitoring the detection of those bugs, is known as bebugging.

There are various ways in which we can classify defects.


Severity Wise:
i. Major: A defect, which will cause an observable product failure or departure
from requirements.
ii. Minor: A defect that will not cause a failure in execution of the product.
iii. Fatal: A defect that will cause the system to crash or close abruptly, or affect
other applications.
Type of Errors Wise
i. Comments: Inadequate/ incorrect/ misleading or missing comments in the source
code
ii. Computational Error: Improper computation of the formulae / improper
business validations in code.
iii. Data error: Incorrect data population / update in database
iv. Database Error: Error in the database schema/Design
v. Missing Design: Design features/approach missed/not documented in the design
document and hence does not correspond to requirements
vi. Inadequate or sub-optimal Design: Design features/approach needs additional
inputs for it to be complete, or the design features described do not provide the
best (optimal) approach towards the required solution
vii. Incorrect Design: Wrong or inaccurate Design
viii. Ambiguous Design: Design feature/approach is not clear to the reviewer. Also
includes ambiguous use of words or unclear design features.
ix. Boundary Conditions Neglected: Boundary conditions not addressed/incorrect
x. Interface Error: Interfacing error internal or external to the application, incorrect
handling of passed parameters, incorrect alignment, incorrect/misplaced
fields/objects, unfriendly window/screen positions
xi. Logic Error: Missing or Inadequate or irrelevant or ambiguous functionality in
source code
xii. Message Error: Inadequate/ incorrect/ misleading or missing error messages in
source code
xiii. Navigation Error: Navigation not coded correctly in source code
xiv. Performance Error: An error related to performance/optimality of the code
xv. Missing Requirements: Implicit/Explicit requirements are missed/not
documented during requirement phase
xvi. Inadequate Requirements: Requirement needs additional inputs for it to be
complete
xvii. Incorrect Requirements: Wrong or inaccurate requirements
xviii. Ambiguous Requirements: Requirement is not clear to the reviewer. Also
includes ambiguous use of words – e.g. Like, such as, may be, could be, might etc.
xix. Sequencing / Timing Error: Error due to incorrect/missing consideration to
timeouts and improper/missing sequencing in source code.
xx. Standards: Standards not followed like improper exception handling, use of E
& D Formats and project related design/requirements/coding standards
xxi. System Error: Hardware and Operating System related error, Memory leak
xxii. Test Plan / Cases Error: Inadequate/ incorrect/ ambiguous or duplicate or
missing - Test Plan/ Test Cases & Test Scripts, Incorrect/Incomplete test setup
xxiii. Typographical Error: Spelling / Grammar mistake in documents/source
code
xxiv. Variable Declaration Error: Improper declaration / usage of variables, Type
mismatch error in source code

Status Wise:
i. Open
ii. Closed
iii. Deferred
iv. Cancelled
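The classifications above can be sketched as simple enumerations. This is an illustrative Python sketch; the class and member names are assumptions for this example and are not part of any standard defect-tracking API.

```python
from enum import Enum

class Severity(Enum):
    """Severity-wise classification from the section above."""
    FATAL = "Fatal"    # crashes the system or affects other applications
    MAJOR = "Major"    # observable failure or departure from requirements
    MINOR = "Minor"    # does not cause a failure in execution

class Status(Enum):
    """Status-wise classification from the section above."""
    OPEN = "Open"
    CLOSED = "Closed"
    DEFERRED = "Deferred"
    CANCELLED = "Cancelled"

# A defect record can then carry both classifications:
defect = {"id": "D-101", "severity": Severity.FATAL, "status": Status.OPEN}
```

Keeping severity and status as closed enumerations (rather than free-form strings) prevents typos such as "Cancled" from silently creating a new category in reports.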

4.1.2. Defect Management Process



Figure 1: Defect Management Process

i. Defect Prevention-- Implementation of techniques, methodology and standard


processes to reduce the risk of defects.
ii. Deliverable Baseline-- Establishment of milestones where deliverables will be
considered complete and ready for further development work. When a deliverable
is baselined, any further changes are controlled. Errors in a deliverable are not
considered defects until after the deliverable is baselined.
iii. Defect Discovery-- Identification and reporting of defects for development
team acknowledgment. A defect is only termed discovered when it has been
documented and acknowledged as a valid defect by the development team
member(s) responsible for the component(s) in error.
iv. Defect Resolution-- Work by the development team to prioritize, schedule and
fix a defect, and document the resolution. This also includes notification back to the
tester to ensure that the resolution is verified.
v. Process Improvement -- Identification and analysis of the process in which a
defect originated, to identify ways to improve the process and prevent future
occurrences of similar defects. The validation process that should have identified
the defect earlier is also analyzed to determine ways to strengthen that process.
vi. Management Reporting -- Analysis and reporting of defect information to
assist management with risk management, process improvement and project
management.

The defect management process is explained below in detail:


1) Defect Prevention:
Defect Prevention is the best method: eliminating defects at an early stage of
testing instead of finding them at a later stage and then fixing them. This
method is also cost-effective, as the cost of fixing defects found in the
early stages of testing is very low.
However, it is not possible to remove all defects, but you can at least minimize
their impact and the cost of fixing them.

The major steps involved in Defect Prevention are as follows:


 Identify Critical Risk: Identify the critical risks in the system which would
have a greater impact if they occurred during testing or at a later stage.
 Estimate Expected Impact: For each critical risk, calculate the financial
impact if the risk is actually encountered.
 Minimize Expected Impact: Once you identify all critical risks, take the
topmost risks which may be harmful to the system if encountered and try to
minimize or eliminate them. For risks which cannot be eliminated, reduce
the probability of occurrence and the financial impact.
2) Deliverable Baseline:
When a deliverable (system, product or document) reaches its pre-defined
milestone, you can say the deliverable is baselined. In this process, the product or
deliverable moves from one stage to another, and as it does, the existing defects
in the system also get carried forward to the next milestone or stage.
For Example, consider a scenario of coding, unit testing and then system testing. If
a developer performs coding and unit testing then system testing is carried out by
the testing team. Here coding and Unit Testing is one milestone and System Testing
is another milestone.
So during unit testing, if the developer finds some issues, they are not called
defects, as these issues are identified before the milestone deadline.
Once coding and unit testing have been completed, the developer hands over the
code for system testing, and you can then say that the code is "baselined" and
ready for the next milestone, in this case "system testing".
Now, if issues are identified during system testing, they are called defects, as they
are identified after the completion of the earlier milestone, i.e. coding and unit testing.



Basically, the deliverables are baselined when the changes in the deliverables are
finalized and all possible defects are identified and fixed. Then the same deliverable
passes on to the next group who will work on it.
3) Defect Discovery:
It is almost impossible to remove all the defects from the system and make a system
as a defect-free one. But you can identify the defects early before they become
costlier to the project. We can say that the defect discovered means it is formally
brought to the attention of the development team and after analysis of that the defect
development team also accepted it as a defect.
Steps involved in Defect Discovery are as follows:
 Find a Defect: Identify defects before they become a major problem to the
system.
 Report Defect: As soon as the testing team finds a defect, their responsibility
is to make the development team aware that there is an issue identified which
needs to be analyzed and fixed.
 Acknowledge Defect: Once the testing team assigns the defect to the
development team, it is the development team's responsibility to acknowledge
the defect and proceed to fix it if it is a valid defect.
4) Defect Resolution:
In the above process, the testing team has identified the defect and reported it to
the development team. Now the development team needs to proceed with the
resolution of the defect.
The steps involved in the defect resolution are as follows:
 Prioritize the risk: The development team analyzes the defect and prioritizes
its fixing. If a defect has more impact on the system, then fixing it is given
high priority.
 Fix the defect: Based on the priority, the development team fixes the defect,
higher priority defects are resolved first and lower priority defects are fixed at
the end.
 Report the Resolution: It is the development team's responsibility to ensure
that the testing team is aware of when the defects are going for a fix and how
each defect has been fixed, i.e. by changing one of the configuration files or
making some code changes. This helps the testing team understand the cause
of the defect.
5) Process Improvement:
Though in the defect resolution process the defects are prioritized and fixed, from
a process perspective it does not mean that lower-priority defects are unimportant
or do not impact the system much. From a process improvement point of view,
all identified defects are treated the same as critical defects.
Even these minor defects give an opportunity to learn how to improve the process
and prevent defects which may cause system failure in the future. Identification
of a defect having a lower impact on the system may not be a big deal, but the
occurrence of such a defect in the system at all is a big deal.
For process improvement, everyone in the project needs to look back and check
where the defect originated. Based on that, you can make changes in the
validation process, baselining document, or review process, which may catch
defects earlier in the process, when they are less expensive.

4.2 Defect life cycle and Defect Template


1. Defect life cycle
i. Defect Life Cycle (Bug Life cycle) is the journey of a defect from its
identification to its closure.
ii. The Life Cycle varies from organization to organization and is governed by the
software testing process the organization or project follows and/or the Defect
tracking tool being used.
Nevertheless, the life cycle in general resembles the following:

Figure 2: Bug Life Cycle



Table 1: Defect Status

Defect Status Explanation


i. NEW: Tester finds a defect and posts it with the status NEW. This defect is yet to
be studied/approved. The fate of a NEW defect is one of ASSIGNED, DROPPED
or DEFERRED.
ii. ASSIGNED / OPEN: Test / Development / Project lead studies the NEW defect
and if it is found to be valid it is assigned to a member of the Development Team.
The assigned Developer’s responsibility is now to fix the defect and have it
COMPLETED. Sometimes, ASSIGNED and OPEN can be different statuses. In
that case, a defect can be open yet unassigned.
iii. DEFERRED: If a valid NEW or ASSIGNED defect is decided to be fixed in
upcoming releases instead of the current release it is DEFERRED. This defect is
ASSIGNED when the time comes.
iv. DROPPED / REJECTED: Test / Development/ Project lead studies the NEW
defect and if it is found to be invalid, it is DROPPED / REJECTED. Note that the
specific reason for this action needs to be given.
v. COMPLETED / FIXED / RESOLVED / TEST: Developer ‘fixes’ the defect
that is ASSIGNED to him or her. Now, the ‘fixed’ defect needs to be verified by the
Test Team and the Development Team ‘assigns’ the defect back to the Test Team.
A COMPLETED defect is either CLOSED, if fine, or REASSIGNED, if still not
fine.
vi. If a Developer cannot fix a defect, some organizations may offer the following
statuses:
Won’t Fix / Can’t Fix: The Developer will not or cannot fix the defect due to
some reason.
Can’t Reproduce: The Developer is unable to reproduce the defect.
Need More Information: The Developer needs more information on the defect
from the Tester.
vii. REASSIGNED / REOPENED: If the Tester finds that the ‘fixed’ defect is in
fact not fixed or only partially fixed, it is reassigned to the Developer who ‘fixed’ it.
A REASSIGNED defect needs to be COMPLETED again.
viii. CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is indeed
fixed and is no more of any concern, it is CLOSED / VERIFIED.

Or

The below defect life cycle covers all possible statuses:



 Whenever the testing team finds a defect in the application, they raise the
defect with the status as “NEW”.
 When a new defect is reviewed by a QA lead and if the defect is valid, then
the status of the defect would be “Open” and it is ready to be assigned to the
development team.
 When a QA lead assigns the defect to the corresponding developer, the status
of the defect would be marked as “Assigned”. A developer should start
analyzing and fixing the defect at this stage.
 When the developer feels that the defect is not genuine or valid, then the
developer rejects the defect. The status of the defect is marked as “Rejected”
and assigned back to the testing team.
 If a defect is logged twice, or two reported defects have the same results
and steps to reproduce, then one defect's status is changed to
"Duplicate".



 If there are some issues or hurdles in the current release for fixing a particular
defect, then the defect would be taken in the upcoming releases instead of the
current release and then it is marked as “Deferred” or “Postponed”.
 When a developer is not able to reproduce the defect by the steps mentioned
in “Steps to Reproduce” by the testing team then the developer can mark the
defect as “Not Reproducible”. In this stage, the testing team should provide
detailed reproducing steps to a developer.
 If the developer is not clear about the steps to reproduce provided by a QA to
reproduce the defect, then he/she can mark it as “Need more information”.
In this case, the testing team needs to provide the required details to the
development team.
 If a defect is already known and currently present in the production
environment then the defect is marked as “Known defect”.
 When a developer makes the necessary changes, then the defect is marked as
“Fixed”.
 The developer now passes the defect to the testing team to verify, so the
developer changes the status as “Ready for Retest”.
 If the defect has no further issues and it is properly verified, then the tester
marks the defect as “Closed”.
 While retesting the defect, if the tester finds that the defect is still
reproducible or only partially fixed, then the defect is marked as
"Reopened". Now the developer has to look into this defect again.
A well-planned and controlled Defect Life Cycle gives the total number of defects
found in a release or across all releases. This standardized process gives a clear
picture of how the code was written, how well the testing has been carried out, how
the defect or software has been released, etc. It reduces the number of defects in
production by finding the defects in the testing phase itself.
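The status flow described above can be sketched as a small state machine. The transition map below is an assumption drawn from the bullets above (omitting some statuses for brevity); real defect trackers such as Bugzilla or JIRA each define their own workflow.

```python
# Illustrative sketch of the defect life cycle as allowed transitions.
TRANSITIONS = {
    "New": {"Open", "Rejected", "Deferred", "Duplicate"},
    "Open": {"Assigned"},
    "Assigned": {"Fixed", "Rejected", "Deferred", "Not Reproducible",
                 "Need more information"},
    "Fixed": {"Ready for Retest"},
    "Ready for Retest": {"Closed", "Reopened"},
    "Reopened": {"Assigned"},
}

def move(current, target):
    """Return the new status, or raise if the transition is not allowed."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

status = move("New", "Open")       # QA lead validates the defect
status = move(status, "Assigned")  # QA lead assigns it to a developer
```

Encoding the workflow as a table like this makes illegal moves (for example New straight to Closed) fail loudly instead of slipping through unnoticed.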

4.2.2. Defect Template


(Question: Create the bug template for a login form. – 4Marks)
i. Reporting a bug/defect properly is as important as finding a defect.
ii. If the defect found is not logged/reported correctly and clearly in a bug tracking
tool (like Bugzilla, ClearQuest etc.), it won't be addressed properly by the
developers, so it is very important to fill in as much information as possible in the
defect template, making it easy to understand the actual issue with the
software.



1. Sample defect template fields
Abstract :
Platform :
Testcase Name :
Release :
Build Level :
Client Machine IP/Hostname :
Client OS :
Server Machine IP/Hostname :
Server OS :
Defect Type :
Priority :
Severity :
Developer Contacted :
Test Contact Person :
Attachments :
Any Workaround :
Steps to Reproduce
1.
2.
3.
Expected Result:
Actual Result:
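The template fields above can be represented as a structured record. A minimal sketch, assuming Python dataclasses; the field names mirror the template (a subset is shown) and the login-form values are a hypothetical example, not tied to any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Illustrative subset of the sample defect template fields."""
    abstract: str
    platform: str
    release: str
    defect_type: str
    priority: str
    severity: str
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""

# Example: a bug report for a login form.
bug = DefectReport(
    abstract="Login fails with valid credentials",
    platform="Windows 11 / Chrome",
    release="2.4",
    defect_type="Logic Error",
    priority="High",
    severity="Major",
    steps_to_reproduce=["Open login page",
                        "Enter a valid username and password",
                        "Click Login"],
    expected_result="User is taken to the dashboard",
    actual_result="Error 500 page is shown",
)
```

A structured record like this keeps the expected and actual results separate, which is exactly the contrast a developer needs to reproduce the variation.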

2. Defect report template


i. A defect reporting tool is typically used, and the elements of a report can vary
from tool to tool.
ii. A defect report can consist of the following elements.



Table 2: Defect Report Template

While reporting the bug to the developer, your bug report should contain the
following information:

 Defect_ID - Unique identification number for the defect.



 Defect Description - Detailed description of the Defect including
information about the module in which Defect was found.
 Version - Version of the application in which defect was found.
 Steps - Detailed steps along with screenshots with which the developer
can reproduce the defects.
 Date Raised - Date when the defect is raised
 Reference - References to documents like requirements, design,
architecture, or even screenshots of the error, to help understand the
defect
 Detected By - Name/ID of the tester who raised the defect
 Status - Status of the defect, as per the Defect Life Cycle described earlier
 Fixed by - Name/ID of the developer who fixed it
 Date Closed - Date when the defect is closed
 Severity - Describes the impact of the defect on the application
 Priority - Relates to the urgency of fixing the defect. Both Severity and
Priority can be High/Medium/Low, based on the impact of the defect and
the urgency with which it should be fixed, respectively

Sample Defect Report


3. Defect tracking tools


Following are some of the commonly used defect tracking tools:
i. Bugzilla - Open-source bug tracking tool.
ii. TestLink - Open-source test management tool.
iii. ClearQuest - Defect tracking tool from IBM Rational.
iv. HP Quality Center - Test management tool by HP.

4.3 Estimate Expected Impact of a Defect, Techniques for Finding Defects,


Reporting a Defect.

4.3.1. Estimate Expected Impact of a Defect


i. There is a strong relationship between the number of test cases and the number of
function points.
ii. There is a strong relationship between the number of defects and the number of
test cases and number of function points.
iii. The number of acceptance test cases can be estimated by multiplying the number
of function points by 1.2.
iv. Acceptance test cases should be independent of technology and implementation
techniques.
v. If a software project has 100 function points, the estimated number of
acceptance test cases is 120.
vi. Estimating the number of potential defects is more involved.

a) Estimating Defects
i. Intuitively, the maximum number of potential defects is equal to the number of
acceptance test cases, which is 1.2 x Function Points.
b) Preventing, Discovering and Removing Defects
i. To reduce the number of defects delivered with a software project, an
organization can engage in a variety of activities.
ii. While defect prevention is much more effective and efficient in reducing the
number of defects, most organizations conduct defect discovery and removal.
iii. Discovering and removing defects is an expensive and inefficient process.
iv. It is much more efficient for an organization to conduct activities that prevent
defects.

c) Defect Removal Efficiency


i. If an organization has no defect prevention methods in place, then it is
totally reliant on defect removal efficiency.



Figure 3: Defect Removal Efficiency
1. Requirements Reviews up to 15% removal of potential defects.
2. Design Reviews up to 30% removal of potential defects.
3. Code Reviews up to 20% removal of potential defects.
4. Formal Testing up to 25% removal of potential defects.
d) Defect Discovery and Removal

Table 3: Defect Discovery and Removal

i. An organization with a project of 2,500 function points that was about medium at
defect discovery and removal would have 1,650 defects remaining after all defect
removal and discovery activities.
ii. The calculation is 2,500 x 1.2 = 3,000 potential defects.
iii. The organization would be able to remove about 45% of the defects or 1,350
defects.
iv. The total potential defects (3,000) less the removed defects (1,350) equals the
remaining defects of 1,650.
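The arithmetic in the worked example above can be sketched as follows. The 1.2 multiplier and the 45% removal efficiency come from the text; the function name is illustrative.

```python
def estimate_defects(function_points, removal_efficiency):
    """Sketch of the worked example: potential defects are taken as
    1.2 x function points (same as the acceptance test case estimate),
    and the remainder survives after removal at the given efficiency."""
    potential = round(function_points * 1.2)
    removed = round(potential * removal_efficiency)
    return potential, removed, potential - removed

# 2,500 function points at ~45% removal efficiency:
potential, removed, remaining = estimate_defects(2500, 0.45)
# potential = 3000, removed = 1350, remaining = 1650
```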

e) Defect Prevention



If an organization concentrates on defect prevention (instead of defect detection),
then the number of defects inserted or created is much smaller, and the amount of
time and effort required to discover and remove these defects is also much smaller.
i. Roles and Responsibilities Clearly Defined up to 15% reduction in number of
defects created
ii. Formalized Procedures up to 25% reduction in number of defects created
iii. Repeatable Processes up to 35% reduction in number of defects created
iv. Controls and Measures in place up to 30% reduction in number of defects
created.

4.3.2. Techniques to find defects


(Question: Explain any two techniques to find the defect with strength and
weakness. – 8 Marks)
a) Quick Attacks:
i. Strengths
 The quick-attacks technique allows you to perform a cursory analysis of a
system in a very compressed timeframe.
 Even without a specification, you know a little bit about the software, so the
time spent is also time invested in developing expertise.
 The skill is relatively easy to learn, and once you've attained some mastery
your quick-attack session will probably produce a few bugs.
 Finally, quick attacks are quick.
 They can help you to make a rapid assessment. You may not know the
requirements, but if your attacks yielded a lot of bugs, the programmers
probably aren't thinking about exceptional conditions, and it's also likely that
they made mistakes in the main functionality.
 If your attacks don't yield any defects, you may have some confidence in the
general, happy-path functionality.
ii. Weaknesses
 Quick attacks are often criticized for finding "bugs that don't matter"—
especially for internal applications.
 While easy mastery of this skill is a strength, it creates the risk that quick
attacks are treated as "all there is" to testing; thus, anyone who takes a
two-day course can do the work.

b) Equivalence and Boundary Conditions


i. Strengths
 Boundaries and equivalence classes give us a technique to reduce an infinite
test set into something manageable.
 They also provide a mechanism for us to show that the requirements are
"covered".
ii. Weaknesses
 The "classes" in the table in Figure 1 are correct only in the mind of the
person who chose them.
 We have no idea whether other, "hidden" classes exist. For example, if a
number that represents time is compared to another time as a set of
characters (a "string"), it will work just fine for most numbers.
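The steps above can be sketched in code. This is a minimal sketch of boundary-value selection, assuming a hypothetical input field that accepts ages 18 to 60; the range is an invented example, not from the text.

```python
# Hypothetical validation rule under test: ages 18..60 are accepted.
def valid_age(age):
    return 18 <= age <= 60

# Equivalence classes: below range, in range, above range.
# Boundary values: just outside, on, and just inside each boundary.
boundary_cases = {17: False, 18: True, 19: True,
                  59: True, 60: True, 61: False}

results = {age: valid_age(age) for age in boundary_cases}
```

Six test points stand in for an effectively infinite input set, which is exactly the reduction this technique promises; the weakness is that "hidden" classes (non-integer input, huge values, strings) are not represented unless someone thinks to add them.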
c) Common Failure Modes
i. Strengths
 The heart of this method is to figure out what failures are common for the
platform, the project, or the team; then try that test again on this build.
 If your team is new, or you haven't previously tracked bugs, you can still
write down defects that "feel" recurring as they occur—and start checking for
them.
ii. Weaknesses
 In addition to losing its potency over time, this technique also entirely fails to
find "black swans"—defects that exist outside the team's recent experience.
 The more your team stretches itself (using a new database, new programming
language, new team members, etc.), the riskier the project will be—and, at
the same time, the less valuable this technique will be.

d) State-Transition Diagrams

Figure 4: State Transition Map


i. Strengths
 Mapping out the application provides a list of immediate, powerful test ideas.
 The model can be improved by collaborating with the whole team to find
"hidden" states and transitions that might be known only by the original
programmer or specification author.
 Once you have the map, you can have other people draw their own diagrams,
and then compare theirs to yours.
 The differences in those maps can indicate gaps in the requirements, defects
in the software, or at least different expectations among team members.
ii. Weaknesses
 The map you draw doesn't actually reflect how the software will operate; in
other words, "the map is not the territory."
 Drawing a diagram won't find these differences, and it might even give the
team the illusion of certainty.



 Like just about every other technique on this list, a state-transition diagram
can be helpful, but it's not sufficient by itself to test an entire application.
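The idea above of having different people draw their own diagrams and comparing them can be sketched as follows; the login states and transition maps are a made-up example.

```python
# Each map records (state, event) -> next state, as drawn by one person.
tester_map = {
    ("Logged out", "login"): "Logged in",
    ("Logged in", "logout"): "Logged out",
}
developer_map = {
    ("Logged out", "login"): "Logged in",
    ("Logged in", "logout"): "Logged out",
    ("Logged in", "timeout"): "Logged out",  # known only to the developer
}

# Transitions one person drew that the other missed indicate gaps in the
# requirements, defects, or at least differing expectations.
missing = set(developer_map) - set(tester_map)
```

Here the session-timeout transition is the "hidden" state change the tester never modelled, and therefore never tested.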

e) Use Cases and Soap Opera Tests


Use cases and scenarios focus on software in its role to enable a human being to do
something.
i. Strengths
 Use cases and scenarios tend to resonate with business customers, and if done
as part of the requirement process, they sort of magically generate test cases
from the requirements.
 They make sense and can provide a straightforward set of confirmatory tests.
 Soap opera tests offer more power, and they can combine many test types
into one execution.

ii. Weaknesses
 Soap opera tests have the opposite problem; they're so complex that if
something goes wrong, it may take a fair bit of troubleshooting to find
exactly where the error came from!
f) Code-Based Coverage Models
Imagine that you have a black-box recorder that writes down every single line of
code as it executes.
i. Strengths
 Programmers love code coverage. It allows them to attach a number— an
actual, hard, real number, such as 75%—to the performance of their unit
tests, and they can challenge themselves to improve the score.
 Meanwhile, looking at the code that isn't covered also can yield
opportunities for improvement and bugs!
ii. Weaknesses
 Customer-level coverage tools are expensive; programmer-level tools tend to
assume the team is doing automated unit testing and has a continuous-
integration server and a fair bit of discipline.
 After installing the tool, most people tend to focus on statement coverage—
the least powerful of the measures.
 Even decision coverage doesn't deal with situations where the decision
contains defects, or when there are other, hidden equivalence classes; say, in
the third-party library that isn't measured in the same way as your compiled
source code is.
 Having code-coverage numbers can be helpful, but using them as a form of
process control can actually encourage wrong behaviours. In my experience,
it's often best to leave these measures to the programmers, to measure
optionally for personal improvement (and to find dead spots), not as a proxy
for actual quality.
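The point above, that statement coverage is the least powerful measure, can be illustrated with a small sketch; the discount function is a hypothetical example.

```python
# Hypothetical function: members get a 10% discount (prices in cents,
# integer arithmetic to keep the example exact).
def discount(price_cents, is_member):
    rate_pct = 0
    if is_member:
        rate_pct = 10
    return price_cents * (100 - rate_pct) // 100

# A single test executes every statement, giving 100% statement coverage:
assert discount(10000, True) == 9000
# ...yet the is_member=False path was never taken. Decision coverage
# demands a second test before the untested branch counts as covered:
assert discount(10000, False) == 10000
```

A suite that stopped at the first assert would report full statement coverage while never checking that non-members pay full price, which is why the text recommends treating coverage numbers as a personal improvement aid rather than a proxy for quality.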
g) Regression and High-Volume Test Techniques



People spend a lot of money on regression testing, taking the old test ideas
described above and rerunning them over and over.
This is generally done with either expensive users or very expensive programmers
spending a lot of time writing and later maintaining those automated tests.
i. Strengths
 For the right kind of problem, say an IT shop processing files through a
database, this kind of technique can be extremely powerful.
 Likewise, if the software deliverable is a report written in SQL, you can hand
the problem to other people in plain English, have them write their own SQL
statements, and compare the results.
 Unlike state-transition diagrams, this method shines at finding hidden
states in devices. For a pacemaker or a missile-launch device, finding those
issues can be pretty important.
ii. Weaknesses
 Building a capture/playback rig for a GUI can be extremely expensive, and it
can be difficult to tell whether a difference means the application has
broken or has merely changed in a minor, intentional way.
 For the most part, these techniques seem to have found a niche in
IT/database work and at large companies like Microsoft and AT&T, which can
have programming testers do this work in addition to traditional testing, or
use it to find large errors such as crashes without having to understand the
details of the business logic.
 While some software projects seem ready-made for this approach,
others...aren't.
 You could waste a fair bit of money and time trying to figure out where your
project falls.
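The core regression idea described above can be sketched in a few lines: capture a known-good "golden" baseline output from the last release, then compare each new build's output against it, leaving a human to judge whether any difference is a real defect or an intentional minor change. The function, field names, and data below are illustrative only.

```python
def generate_report(rows):
    # Hypothetical system under test: total amounts per department.
    totals = {}
    for dept, amount in rows:
        totals[dept] = totals.get(dept, 0) + amount
    return sorted(totals.items())

# Baseline captured from the last known-good release (release N).
golden_baseline = [("HR", 300), ("IT", 450)]

# Regression check for release N+1: rerun and compare.
current = generate_report([("IT", 450), ("HR", 100), ("HR", 200)])
if current != golden_baseline:
    print("REGRESSION: output differs from baseline:", current)
else:
    print("PASS: output matches baseline")
```

The comparison itself is cheap; the expensive parts, as noted above, are building the harness around a real application and maintaining the baselines as the software intentionally changes.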
3. Reporting defects effectively
(Question: Explain how defects can be effectively reported. – 4 Marks)
It is essential that you report defects effectively so that time and effort is not
unnecessarily wasted in trying to understand and reproduce the defect. Here are
some guidelines:
i. Be specific:
Specify the exact action: Do not say something like ‘Select Button B’.
Do you mean ‘Click Button B’, ‘Press ALT+B’, or ‘Focus on Button B and press
ENTER’?
In case of multiple paths, mention the exact path you followed: Do not say
something like “If you do ‘A and X’ or ‘B and Y’ or ‘C and Z’, you get D.”
Understanding all the paths at once will be difficult. Instead, say “Do ‘A and X’ and
you get D.” You can, of course, mention elsewhere in the report that “D can also be
got if you do ‘B and Y’ or ‘C and Z’.”
Do not use vague pronouns: Do not say something like “In Application A, open X,
Y, and Z, and then close it.” What does the ‘it’ stand for: ‘Z’, ‘Y’, ‘X’, or
‘Application A’?
ii. Be detailed:
 Provide more information (not less). In other words, do not be lazy.
 Developers may or may not use all the information you provide but they sure
do not want to beg you for any information you have missed.
iii. Be objective:
 Do not make subjective statements like “This is a lousy application” or “You
fixed it real bad.”
 Stick to the facts and avoid the emotions.
iv. Reproduce the defect:
 Do not be impatient and file a defect report as soon as you uncover a defect.
Replicate it at least once more to be sure.
v. Review the report:
 Do not hit ‘Submit’ as soon as you write the report.
 Review it at least once.
 Remove any typing errors.
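The review step above can even be partially automated before a report is submitted. The sketch below is a hypothetical pre-submission check; the required field names and the list of vague or subjective words are illustrative, not taken from any real defect-tracking tool.

```python
REQUIRED_FIELDS = ["summary", "steps_to_reproduce", "expected", "actual"]
VAGUE_WORDS = ["select", "it", "lousy"]   # flag non-specific or subjective terms

def review_report(report):
    """Return a list of problems; an empty list means ready to submit."""
    problems = [f"missing field: {f}"
                for f in REQUIRED_FIELDS if not report.get(f)]
    # Scan the reproduction steps for words the guidelines warn against.
    steps = report.get("steps_to_reproduce", "").lower().split()
    for word in VAGUE_WORDS:
        if word in steps:
            problems.append(f"vague/subjective word in steps: '{word}'")
    return problems

report = {
    "summary": "Clicking 'Save' loses unsaved address fields",
    "steps_to_reproduce": "open customer form, type address, click save",
    "expected": "address is stored",
    "actual": "address fields are blank after reload",
}
print(review_report(report))   # [] means the report passes the checks
```

A check like this cannot judge whether the defect reproduces, of course; it only catches the mechanical lapses (missing fields, vague wording) before a developer wastes time on them.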
Questions of MSBTE papers:
SUMMER 2016
4 MARKS
1. Draw labelled diagram of defect management process. List any two
characteristics of defect management process.
2. Explain the defect template with its attributes.
3. List the different techniques to detect defects. Describe any two of them.
4. What are the different points to be noted in reporting defects?

6 MARKS
1. Describe the requirement defects and coding defects in details.

WINTER 2016
4 MARKS
1. Draw labelled diagram of defect management process. List any two
characteristics of defect management process.
2. Explain the defect template with its attributes.
3. List the different techniques to detect defects. Describe any two of them.
4. What are the different points to be noted in reporting defects?

6 MARKS
1. Describe the requirement defects and coding defects in details.

SUMMER 2017
4 MARKS
1. Describe defect life cycle with neat diagram.
2. Describe techniques for finding defects.
3. List all defect classification. Also describe any one defect in brief.
4. List & explain techniques of finding bugs.

6 MARKS
1. Enlist any six attributes of defect. Describe them with suitable example.
WINTER 2017
4 MARKS
1. Explain Defect Management Process.
2. Explain defect life cycle to identify status of defect with proper labelled
diagram.
3. Which parameters are considered while writing a good defect report?
Also write contents of defect template.
4. Define the terms Error, Defect, Fault and Bug in relation with Software
testing.
5. Give the defect classification and its meaning.
6 MARKS
1. What are the points considered while estimating impact of a defect?
Also explain techniques to find defect.

SUMMER 2018
4 MARKS
1. Explain Requirement defects and Design defects.
2. Explain Defect template.
3. What are different techniques for finding defects? Explain in detail.
4. Give any two root causes of defects. Also give any two effects of defects.
5. Explain Defect Life Cycle with diagram.
6 MARKS
1. Describe Defect Management Process with neat & labelled diagram.

WINTER 2018
4 MARKS
2. Explain defect management process with proper diagram.
3. Explain defect report template with its attributes.
4. What are different techniques for finding defects? Explain any one
technique with an example.
5. What is a test case? Which parameters are to be considered while
documenting test cases?
6. Explain defect prevention cycle with neat diagram.
6 MARKS
1. Explain defect classification with an example.

SUMMER 2019
4 MARKS
1. Explain the process, how the bug is reported.
2. Describe defect template with its attributes.
3. Explain defect life cycle with diagram.
4. Explain the impact of equivalence partitioning in coding & testing.

8 MARKS

1. Explain defect management process with suitable diagram.

WINTER 2019

2 MARKS
1. Define Defect
2. State any four advantages of using tools.
3. Define Bug, Error, Fault and Failure.
4-MARKS
1. Enlist different techniques for finding defects and describe any
one technique with an example.
6-MARKS

1. Draw a diagram for defect life cycle and write example for defect
template.
