Black Box Testing Examples


Black Box Testing



Ad Hoc Exploratory Testing (Error Guessing)
• Based on intuition, guess what kinds of inputs might cause
the program to fail

• Create some test cases based on your guesses

• Intuition will often lead you toward boundary cases, but not always

• Some special cases aren't boundary values, but are mishandled by many programs
– Try exiting the program while it's still starting up
– Try loading a corrupted file
– Try strange but legal URLs: hTtP://Www.bYu.EDU/
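The URL example above can be turned into concrete guessed test cases. A minimal sketch, assuming a hypothetical `normalize_url()` function (the name and behavior are illustrative, not from the original slides):

```python
# Error-guessing sketch: a few "gut feeling" test cases for a
# hypothetical normalize_url() function (names are illustrative).
from urllib.parse import urlsplit

def normalize_url(url: str) -> str:
    # Scheme and host are case-insensitive per RFC 3986; the path is not.
    parts = urlsplit(url)
    return parts._replace(scheme=parts.scheme.lower(),
                          netloc=parts.netloc.lower()).geturl()

# Guessed inputs that often trip up URL handling:
assert normalize_url("hTtP://Www.bYu.EDU/") == "http://www.byu.edu/"
assert normalize_url("http://byu.edu") == "http://byu.edu"            # no trailing slash
assert normalize_url("HTTP://byu.edu/Path") == "http://byu.edu/Path"  # path case preserved
```

Each assertion encodes one guess about an input the implementation might mishandle; a failure pinpoints the guess that paid off.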
Comparison Testing
• Also called Back-to-Back testing

• If you have multiple implementations of the same functionality, you can run
test inputs through both implementations, and compare the results for equality

• Why would you have access to multiple implementations?
– Safety-critical systems sometimes use multiple, independent
implementations of critical modules to ensure the accuracy of results
– You might use a competitor's product, or an earlier version of your own,
as the second implementation
– You might write a software simulation of a new chip that serves as the
specification for the hardware designers. After building the chip, you
could compare the results computed by the chip hardware with the
results computed by the software simulator

• Inputs may be randomly generated or designed manually
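A minimal back-to-back sketch: here a deliberately simple integer square root serves as the "second implementation," with Python's `math.isqrt` playing the role of the reference (both choices are illustrative):

```python
# Back-to-back testing sketch: run random inputs through two independent
# implementations of the same function and compare the results for equality.
import math
import random

def isqrt_loop(n: int) -> int:
    """Integer square root by linear search (slow but obviously correct)."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

random.seed(42)  # make the comparison run reproducible
for _ in range(1000):
    n = random.randrange(0, 10_000)
    assert isqrt_loop(n) == math.isqrt(n), f"implementations disagree on {n}"
```

Any disagreement flags a bug in at least one implementation, without needing hand-computed expected results.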


Testing for Race Conditions and Other Timing Dependencies
• Many systems perform multiple concurrent activities
– Operating systems manage concurrent programs, interrupts, etc.
– Servers service many clients simultaneously
– Applications let users perform multiple concurrent actions

• Test a variety of different concurrency scenarios, focusing on activities that are likely to share resources (and therefore conflict)

• "Race conditions" are bugs that occur only when concurrent activities
interleave in particular ways, thus making them difficult to reproduce

• Test on hardware of various speeds to ensure that your system works well on
both slower and faster machines
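A small concurrency-test sketch (the counter and thread counts are arbitrary examples): many threads hammer one shared resource, and the test asserts the invariant that must hold when access is properly synchronized.

```python
# Concurrency-test sketch: several threads increment a shared counter,
# i.e., concurrent activities sharing one resource.
import threading

def run_counter(n_threads: int = 8, n_increments: int = 10_000) -> int:
    total = 0
    lock = threading.Lock()

    def worker():
        nonlocal total
        for _ in range(n_increments):
            with lock:          # remove the lock to hunt for the race
                total += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

assert run_counter() == 8 * 10_000
```

Because races interleave nondeterministically, such tests are usually run many times, and on machines of varying speed, before concluding the code is safe.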
Performance Testing
• Measure the system's performance
– Running times of various tasks
– Memory usage, including memory leaks
– Network usage (Does it consume too much bandwidth? Does it
open too many connections?)
– Disk usage (Is the disk footprint reasonable? Does it clean up
temporary files properly?)
– Process/thread priorities (Does it play well with other applications,
or does it hog the whole machine?)
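Running time and memory usage can be measured directly in a test. A sketch using Python's `time.perf_counter` and `tracemalloc` (the task and the thresholds are arbitrary examples, not requirements from the slides):

```python
# Performance-measurement sketch: time a task and record its peak
# Python-level memory allocation, then check both against budgets.
import time
import tracemalloc

def build_table(n: int) -> list:
    return [i * i for i in range(n)]

tracemalloc.start()
start = time.perf_counter()
table = build_table(100_000)
elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

assert len(table) == 100_000
assert elapsed < 5.0                  # generous time budget; tune for the real system
assert peak < 50 * 1024 * 1024        # example memory budget: 50 MiB
```

Repeated runs with rising `n` also expose leaks: peak memory should scale with the input, not with the number of runs.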
Limit Testing
• Test the system at the limits of normal use

• Test every limit on the program's behavior defined in the requirements
– Maximum number of concurrent users or connections
– Maximum number of open files
– Maximum request size
– Maximum file size
– Etc.

• What happens when you go slightly beyond the specified limits?
– Does the system's performance degrade dramatically, or gracefully?
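The at-the-limit and just-beyond-the-limit checks can be sketched for a hypothetical request handler with a documented maximum request size (`MAX_REQUEST` and `handle_request` are assumed names, not from the slides):

```python
# Limit-testing sketch: exercise a documented limit exactly at the
# boundary and slightly beyond it.
MAX_REQUEST = 4096   # assumed documented limit, in bytes

def handle_request(payload: bytes) -> str:
    if len(payload) > MAX_REQUEST:
        raise ValueError("request too large")   # graceful rejection
    return "ok"

# At the limit: must succeed.
assert handle_request(b"x" * MAX_REQUEST) == "ok"

# Slightly beyond: must fail gracefully, not crash or corrupt state.
try:
    handle_request(b"x" * (MAX_REQUEST + 1))
except ValueError:
    pass
else:
    raise AssertionError("oversized request was accepted")
```

The boundary pair (limit, limit + 1) is the key pattern: one case must work, the other must be rejected cleanly.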
Stress Testing
• Test the system under extreme conditions (i.e., beyond the limits of normal use)

• Create test cases that demand resources in abnormal quantity, frequency, or volume
– Low memory conditions
– Disk faults (read/write failures, full disk, file corruption, etc.)
– Network faults
– Unusually high number of requests
– Unusually large requests or files
– Unusually high data rates (what happens if the network suddenly becomes
ten times faster?)

• Even if the system doesn't need to work in such extreme conditions, stress testing
is an excellent way to find bugs
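A toy stress-test sketch for the "unusually high number of requests" case (the bounded queue is an illustrative stand-in for a real system's request buffer):

```python
# Stress-testing sketch: flood a bounded in-memory queue with 100x its
# capacity and check that it degrades gracefully without crashing.
from collections import deque

queue = deque(maxlen=1_000)   # bounded: overload drops the oldest entries

for i in range(100_000):      # abnormal volume: 100x the queue capacity
    queue.append(f"request-{i}")

# The system shed load (old requests were dropped) but survived,
# and its invariants still hold.
assert len(queue) == 1_000
assert queue[-1] == "request-99999"
assert queue[0] == "request-99000"
```

The point of the assertions is not that nothing was lost, but that the loss was controlled and the data structure's invariants held under overload.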
Random Testing
• Randomly generate test inputs
– Could be based on some statistical model

• How do you tell if the test case succeeded?
– Where do the expected results come from?
– Some type of “oracle” is needed

• Expected results could be calculated manually
– Possible, but lots of work

• Automated oracles can often be created to measure characteristics of the system
– Performance (memory usage, bandwidth, running times, etc.)
– Did the system crash?
– Maximum and average user response time under simulated user load
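Random generation plus an automated oracle can be sketched as follows (`my_sort` is an illustrative stand-in for the implementation under test; the oracle checks properties any correct sort must satisfy, rather than a hand-computed expected output):

```python
# Random-testing sketch: generate random inputs and check each result
# against an automated property-based oracle.
import random
from collections import Counter

def my_sort(xs: list) -> list:
    return sorted(xs)   # placeholder for the real implementation

random.seed(7)          # reproducible random test run
for _ in range(500):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
    out = my_sort(xs)
    assert all(a <= b for a, b in zip(out, out[1:]))   # oracle: output is ordered
    assert Counter(out) == Counter(xs)                 # oracle: same multiset of elements
```

Property oracles like these scale to thousands of random inputs where manually computed expected results would not.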
Security Testing
• Any system that manages sensitive information or performs
sensitive functions may become a target for intrusion (e.g., by hackers)

• How feasible is it to break into the system?
– Learn the techniques used by hackers
– Try whatever attacks you can think of
– Hire a security expert to break into the system

• If somebody broke in, what damage could they do?

• If an authorized user became disgruntled, what damage could they do?
Usability Testing
• Is the user interface intuitive, easy to use, organized, logical?
• Does it frustrate users?
• Are common tasks simple to do?
• Does it conform to platform-specific conventions?

• Get real users to sit down and use the software to perform some tasks
• Watch them performing the tasks, noting things that seem to give
them trouble
• Get their feedback on the user interface and any
suggested improvements

• Report bugs for any problems encountered


Recovery Testing
• Try turning the power off or otherwise crashing the program at
arbitrary points during its execution
– Does the program come back up correctly when you restart it?
– Was the program’s persistent data corrupted (files, databases, etc.)?
– Was the extent of user data loss within acceptable limits?

• Can the program recover if its configuration files have been corrupted
or deleted?

• What about hardware failures? Does the system need to keep working
when its hardware fails? If so, verify that it does so.
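A recovery-test sketch for the "crash at an arbitrary point" case: simulate a crash in the middle of saving, then verify the previous version of the file survived. The write-to-temp-then-rename pattern shown is a common technique; the function names are illustrative.

```python
# Recovery-testing sketch: inject a simulated crash mid-save and verify
# that the persistent data was not corrupted.
import os
import tempfile

def save_atomically(path: str, data: bytes, crash_before_rename: bool = False):
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())                    # force data to disk
    if crash_before_rename:
        raise RuntimeError("simulated crash")   # stand-in for power loss
    os.replace(tmp, path)                       # atomic rename on POSIX and Windows

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.bin")
    save_atomically(path, b"version 1")
    try:
        save_atomically(path, b"version 2", crash_before_rename=True)
    except RuntimeError:
        pass
    # After the "crash", the old data must still be intact and readable.
    with open(path, "rb") as f:
        assert f.read() == b"version 1"
```

Injecting the failure at different points (before the write, before the fsync, before the rename) covers more of the crash window than pulling the plug at random.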
Configuration Testing
• Test on all required hardware configurations
– CPU, memory, disk, graphics card, network card, etc.

• Test on all required operating systems and versions thereof
– Virtualization technologies such as VMware and Virtual PC are
very helpful for this

• Test as many hardware/OS combinations as you can

• Test installation programs and procedures on all relevant configurations
Compatibility Testing
• Test to make sure the program is compatible with other programs it
is supposed to work with

• Ex: Can Word 12.0 load files created with Word 11.0?
• Ex: "Save As… Word, Word Perfect, PDF, HTML, Plain Text"
• Ex: "This program is compatible with Internet Explorer and Firefox"

• Test all compatibility requirements
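The Word 12.0/11.0 example can be sketched as a version round-trip test. The toy JSON-based format and the declared compatibility range below are illustrative stand-ins, not any real product's format:

```python
# Compatibility-testing sketch: verify that "version 2" of a toy file
# format can still load files written by "version 1".
import json

def save_v1(doc: str) -> str:
    return json.dumps({"format": 1, "text": doc})

def load_v2(blob: str) -> str:
    data = json.loads(blob)
    if data["format"] not in (1, 2):          # declared compatibility range
        raise ValueError("unsupported format version")
    return data["text"]

# A v1 file must load in v2 with its content unchanged.
assert load_v2(save_v1("hello")) == "hello"
```

In practice such tests run against a corpus of real files saved by every prior version named in the compatibility requirements.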


Documentation Testing
• Test all instructions given in the documentation to ensure
their completeness and accuracy

• For example, “How To ...” instructions are sometimes not updated to reflect changes in the user interface
