CHAPTER 8

TESTING
TESTING GOALS
• Ideally you would sit down, write code that perfectly satisfies the
requirements, and be done. Unfortunately, that rarely happens. More
often than not, the first attempt at the software satisfies some but not
all of the requirements, and it may also mishandle situations that
weren’t specified in the requirements at all.
• That’s where testing comes in. Testing lets you study a piece of code
to see whether it meets the requirements and whether it works
correctly under all circumstances. (Usually, the second goal means a
method works properly with any set of inputs.)
REASONS BUGS NEVER DIE
• A bug is a flaw in a program that causes it to produce an incorrect
result or to behave unexpectedly. Bugs are generally evil (although
occasionally they make games more fun), but it’s not always worth
your effort to try to remove every bug. Removing some bugs is just
more trouble than it’s worth. The following sections describe some
reasons why software developers don’t remove every bug from their
applications.
CONT…..
• Diminishing Returns - Finding the first few bugs in a newly written piece of
software is relatively easy; each remaining bug takes progressively more effort to find.
• Deadlines - In a just and fair world, software would be released when it was
ready, but in practice deadlines often arrive first.
• Consequences - Sometimes a bug fix might have undesirable consequences.
• Usefulness - Sometimes users come to rely on a particular bug to do
something sneaky that you didn’t intend them to do.
• Obsolescence - Over time, some features may become less useful.
• It’s Not a Bug - Sometimes users think a feature is a bug when actually they
just don’t understand what the program is supposed to do.
CONT….
It’s Too Soon
• If you just released a version of a program, it may be too soon to give the users a new patch to fix a
minor bug. Users won’t like you if you release new bug fixes every 3 days. As a rule of thumb:
➤ If a bug is a security flaw, release a patch immediately, even if you just released a patch yesterday.
(If you did release a patch yesterday, you’d better be sure the new patch fixes things correctly!)
➤ If a bug makes users swear at your program more than once a day, release a patch as soon as
possible (as often as monthly). Include a profuse apology.
➤ If a bug is annoying enough to make users smirk at your program occasionally, fix it in a minor
release (as often as twice a year). Include a huge fanfare about how great you are for looking after
the users’ needs.
➤ If a bug fix is just a nice‐to‐have new feature or a performance improvement, fix it in the next major
release (at most once per year). Explain how responsive you are and that the users’ needs are your
number one concern.
• It Never Ends - If you try to fix every bug, you’ll never release
anything.
• It’s Better Than Nothing- your application may not be perfect, but
hopefully it’s better than nothing.
• Fixing Bugs Is Dangerous- When you fix a bug, there’s a chance that
you’ll fix it incorrectly, so your work doesn’t actually help.
CONT…
• Which Bugs to Fix - There may be some good reasons not to fix every bug, but in
general bugs are bad, so you should remove as many of them as possible.
For each bug, you should evaluate the following factors:
➤ Severity —How painful is the bug for the users? How much work, time, money,
or other resources are lost?
➤ Work-arounds —Are there work-arounds?
➤ Frequency —How often does the bug occur?
➤ Difficulty —How hard would it be to fix the bug? (Of course, this is just a guess.)
➤ Riskiness —How risky would it be to fi x the bug? If the bug is in particularly
complex code, fixing it may introduce new bugs.
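As a rough sketch (not from the chapter), these factors can be combined into a single triage score so bugs can be ranked for fixing; the weighting scheme below is entirely hypothetical:

```python
def bug_priority(severity, frequency, has_workaround, difficulty, riskiness):
    """Hypothetical triage score: higher means fix sooner.

    severity, frequency, difficulty, and riskiness are on a 1-5 scale;
    has_workaround is a boolean.
    """
    score = severity * frequency
    if not has_workaround:
        score *= 2  # a bug with no work-around is twice as painful
    # Hard or risky fixes drag the priority down.
    return score / (difficulty + riskiness)

# A severe, frequent crash with no work-around outranks a cosmetic bug:
crash_score = bug_priority(5, 4, False, 2, 2)  # 5*4*2 / (2+2) = 10.0
typo_score = bug_priority(1, 2, True, 1, 1)    # 1*2 / (1+1) = 1.0
```

The exact weights don’t matter much; what matters is evaluating every bug against the same factors so the comparison is consistent.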
LEVELS OF TESTING
• Bugs are easiest to fix if you catch them as soon as possible. After a
bug has been in the code for a while, you forget how the code is
supposed to work. That means you’ll need to spend extra time
studying the code so that you don’t break anything. The longer the
bug has been around, the greater the chances are that other pieces of
code rely on the buggy behavior, so the longer you wait the more
things you may have to fix.
• Unit Testing - A unit test verifies the correctness of a specific piece
of code.
• Integration Testing - Verifies that the new method works and plays
well with others.
• Automated Testing - Tools let you define tests and the results they
should produce.
• Component Interface Testing - Studies the interactions between
components.
• System Testing - An end‐to‐end run-through of the whole system.
• For example, suppose the program includes only a login screen and a single form that uses a grid to display dirt
information. Then you would need to try each of the following operations:
➤ Start the program and click Cancel on the login screen.
➤ Start the program, enter invalid login, click OK, verify that you get an error message, and finally click Cancel to
close the login screen.
➤ Start the program, enter invalid login, click OK, verify that you get an error message, enter valid login
information, and click OK. Verify that you can log in.
➤ Log in, view saved information, and close the program. Log in again and verify that the information is
unchanged.
➤ Log in, add new dirt information, and close the program. Log in again and verify that the information was
saved.
➤ Log in, edit some dirt information, and close the program. Log in again and verify that the changes were
saved.
➤ Log in, delete a dirt information entry, and close the program. Log in again and verify that the changes were
saved.
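At the unit-testing level described above, a test for a single method might look like the following Python sketch. The `is_valid_login` method and its rules are hypothetical stand-ins, not code from the chapter:

```python
def is_valid_login(username, password):
    # Hypothetical method under test: accept any nonempty username
    # with a password of at least 8 characters.
    return bool(username) and len(password) >= 8

# Each unit test checks one specific behavior of the method in isolation.
def test_rejects_empty_username():
    assert not is_valid_login("", "longenough")

def test_rejects_short_password():
    assert not is_valid_login("lisa", "short")

def test_accepts_valid_credentials():
    assert is_valid_login("lisa", "correcthorse")

# A trivial automated-test runner: run every test, stopping on failure.
for test in (test_rejects_empty_username,
             test_rejects_short_password,
             test_accepts_valid_credentials):
    test()
```

In practice you would use a framework such as Python’s built-in `unittest` module rather than a hand-rolled loop, so tests can be discovered and rerun automatically.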
• Acceptance Testing- to determine whether the finished application meets the customers’
requirements.
• Other Testing Categories - other categories of testing that differ in their scope, focus, or point of
view are as follows:
Accessibility test —Tests the application for accessibility by those with visual, hearing, or other
impairments.
Alpha test —First round testing by selected customers or independent testers.
Beta test—Second round testing after alpha test. Generally, you shouldn’t give users beta versions
until the application is quite solid or you might damage your reputation for building good software.
Compatibility test —Focuses on compatibility with different environments such as computers
running older operating system versions.
Destructive test —Makes the application fail so that you can study its behavior when the worst
happens.
CONT…..
Functional test —Deals with features the application provides. These are generally listed in the
requirements.
Installation test —Makes sure you can successfully install the system on a fresh computer.
Internationalization test —Tests the application on computers localized for different parts of the
world. This should be carried out by people who are natives of the locales.
Nonfunctional test —Studies application characteristics that aren’t related to specific functions the
users will perform. For example, these tests might check performance under a heavy user load, with
limited memory, or with missing network connections. These often identify minimal requirements.
Performance test —Studies the application’s performance under various conditions such as normal
usage, heavy user load, limited resources (such as disk space), and time of day. Records metrics such
as the number of records processed per hour under different conditions.
Security test —Studies the application’s security. This includes security of the login process,
communications, and data.
Usability test —Determines whether the user interface is intuitive and easy to use.
TESTING TECHNIQUES
• Exhaustive Testing - Tests every possible input; this is only practical
when the set of possible inputs is small.
• Black-Box Testing - You pretend the method is a black box that you
can’t peek inside, so you test it using only its inputs and outputs.
• White-Box Testing - You get to know how the method does its work
and design tests to exercise it.
• Gray-Box Testing - A combination of white-box and black-box
testing.
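The black-box/white-box distinction can be illustrated with a small Python sketch; the `absolute_value` function is a made-up stand-in for a method under test:

```python
def absolute_value(x):
    # Stand-in method under test (a hypothetical example).
    return -x if x < 0 else x

# Black-box tests: chosen from the specification alone
# ("returns the magnitude of x"), without reading the code.
assert absolute_value(5) == 5
assert absolute_value(-5) == 5
assert absolute_value(0) == 0

# White-box tests: knowing the code branches on x < 0, make sure
# each branch is exercised at least once.
assert absolute_value(-1) == 1  # takes the x < 0 branch
assert absolute_value(1) == 1   # takes the else branch
```

Gray-box testing would mix the two: pick inputs mostly from the specification, but use knowledge of the implementation to add cases near the branch boundary (here, values around 0).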
TESTING HABITS
• Test and Debug When Alert - you should test and debug when you’re alert
• Test Your Own Code - Before you check your code in and claim it’s ready for prime time, test it yourself.
• Have Someone Else Test Your Code -It’s important to test your own code, but you’re too close to your code to
be objective.
• Fix Your Own Bugs - When you fix a bug, it’s important to understand the code as completely as possible.
• Think Before You Change - It’s common to see beginning programmers randomly changing code, hoping
one of the changes will make a bug go away.
• Don’t Believe in Magic - If you don’t know why a change made a bug disappear, you can’t be sure the bug
is really fixed.
• See What Changed - If code that used to work breaks, compare it with an older version to see what changed.
(If you’re debugging new code, you can’t check an older version to see what changed.)
• Fix Bugs, Not Symptoms- Sometimes developers focus so closely on the code that they don’t see the bigger
picture.
• Test Your Tests - After you write your tests, add a few bugs to the code you’re testing and make sure the tests
catch them.
HOW TO FIX A BUG
• Obviously, when you fix a bug you need to modify the code, but there
are a few other actions you should also take.
• First, ask yourself how you could prevent a similar bug in the future.
What techniques could you use in your code? What tests could you
run to detect the bug sooner?
• Second, ask yourself if a similar bug could be lurking somewhere else.
ESTIMATING NUMBER OF BUGS
• One of the unfortunate facts about bugs is that you can never tell when they’re all gone.
As Edsger W. Dijkstra put it, “Testing shows the presence, not the absence of bugs.” You
can run tests as long as you like, but you can never be sure you’ve found every bug.
• Tracking Bugs Found - One method for estimating bugs is to track the number of bugs
found over time. Typically, when testing gets started in a serious way, this number
increases; as the remaining bugs get harder to find, the discovery rate tapers off, which
gives a rough sense of how many bugs are left.
• Seeding - Another approach for estimating bugs is to “seed” the code with bugs. Simply
scatter some bugs throughout the application.
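A sketch of the seeding arithmetic, assuming the testers find real and seeded bugs at the same rate (the function name is made up):

```python
def seeded_estimate(bugs_seeded, seeded_found, real_found):
    # If the testers found seeded_found of the bugs_seeded planted bugs,
    # assume they found the same fraction of the real bugs.
    fraction_found = seeded_found / bugs_seeded
    return real_found / fraction_found

# Seed 10 bugs; the testers find 4 of them plus 20 real bugs.
# Estimated total real bugs: 20 / (4/10) = 50.0
estimate = seeded_estimate(10, 4, 20)
```

Remember to remove the seeded bugs before release; the estimate also assumes seeded bugs are about as hard to find as real ones, which is difficult to guarantee.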
• The Lincoln Index - Suppose you have two testers, Lisa and Ramon. After they bash
away at the application for a while, Lisa finds 15 bugs and Ramon finds 13, and 5 of
those bugs are found by both. In total, how many bugs does the application contain?
The Lincoln index estimates 15 × 13 ÷ 5 = 39.
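The same calculation as a small Python helper (the name `lincoln_index` is mine, but the formula is the one above):

```python
def lincoln_index(found_by_a, found_by_b, found_by_both):
    # Estimate the total bug count from two testers' independent results:
    # total ≈ (bugs A found) * (bugs B found) / (bugs both found).
    if found_by_both == 0:
        # No overlap: the index is undefined, and many bugs likely remain.
        return float("inf")
    return found_by_a * found_by_b / found_by_both

estimate = lincoln_index(15, 13, 5)  # 15 * 13 / 5 = 39.0
```

Intuitively, a small overlap means each tester is sampling only a small part of the bug population, so the true total is much larger than what either found.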
