Software Testing
• Black box testing - not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application's code. Tests are
based on coverage of code statements, branches, paths, conditions.
• unit testing - the most 'micro' scale of testing; to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a well-
designed architecture with tight code; may require developing test driver modules or test
harnesses. (A minimal example appears after this list.)
• incremental integration testing - continuous testing of an application as new functionality is added;
requires that various aspects of an application's functionality be independent enough to work
separately before all parts of the program are completed, or that test drivers be developed as
needed; done by programmers or by testers.
• integration testing - testing of combined parts of an application to determine if they function
together correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server and
distributed systems.
• functional testing - black-box type testing geared to functional requirements of an application;
this type of testing should be done by testers. This doesn't mean that the programmers shouldn't
check that their code works before releasing it (which of course applies to any stage of testing).
• system testing - black-box type testing that is based on overall requirements specifications;
covers all combined parts of a system.
• end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of
a complete application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware, applications,
or systems if appropriate.
• sanity testing or smoke testing - typically an initial testing effort to determine if a new software
version is performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting
databases, the software may not be in a 'sane' enough condition to warrant further testing in its
current state.
• regression testing - re-testing after fixes or modifications of the software or its environment. It can
be difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing approaches can be especially useful for this type of
testing.
• acceptance testing - final testing based on specifications of the end-user or customer, or based
on use by end-users/customers over some limited period of time.
• load testing - testing an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system's response time degrades or fails.
• stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while under unusually heavy loads, heavy
repetition of certain actions or inputs, input of large numerical values, large complex queries to a
database system, etc.
• performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally
'performance' testing (and any other 'type' of testing) is defined in requirements documentation or
QA or Test Plans.
• usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the
targeted end-user or customer. User interviews, surveys, video recording of user sessions, and
other techniques can be used. Programmers and testers are usually not appropriate as usability
testers.
• install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
• recovery testing - testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
• failover testing - typically used interchangeably with 'recovery testing'.
• security testing - testing how well the system protects against unauthorized internal or external
access, willful damage, etc; may require sophisticated testing techniques.
• compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
• exploratory testing - often taken to mean a creative, informal software test that is not based on
formal test plans or test cases; testers may be learning the software as they test it.
• ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.
• context-driven testing - testing driven by an understanding of the environment, culture, and
intended use of software. For example, the testing approach for life-critical medical equipment
software would be completely different than that for a low-cost computer game.
• user acceptance testing - determining if software is satisfactory to an end-user or customer.
• comparison testing - comparing software weaknesses and strengths to competing products.
• alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.
• beta testing - testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.
• mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
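As a concrete illustration of the unit testing entry above, here is a minimal sketch using Python's pytest. The parse_price function and its behavior are hypothetical stand-ins for a real code module; in practice the code under test would be imported from the application rather than defined inside the test file.

# test_pricing.py - minimal unit-test sketch (pytest).
# parse_price is a hypothetical function standing in for a real module under test.

import pytest


def parse_price(text: str) -> float:
    """Convert a price string such as '$12.50' to a float (example code under test)."""
    cleaned = text.strip().lstrip("$")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)


def test_parse_price_plain_number():
    assert parse_price("12.50") == 12.50


def test_parse_price_strips_dollar_sign():
    assert parse_price("$7") == 7.0


def test_parse_price_rejects_empty_input():
    with pytest.raises(ValueError):
        parse_price("   ")

Running pytest against this file executes each test function independently and reports any assertion failures, which is what makes unit tests a practical foundation for the regression and incremental integration testing described above.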
What's a 'test plan'?
A test plan documents the objectives, scope, approach, and focus of a software testing effort. The
following are some of the items that might be included in a test plan, depending on the project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes (a small example
follows this list)
• Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems
• Test environment validity analysis - differences between the test and production systems and
their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture
software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
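As referenced in the equivalence classes / boundary value analysis item above, a test plan outline of that kind can translate directly into concrete test cases. The sketch below assumes a hypothetical age field whose valid range is 0 to 120 inclusive; both the validate_age function and that rule are illustrative assumptions, not part of the list above.

# Equivalence-class / boundary-value sketch (pytest).
# Assumed rule: an age field accepts integers 0..120 inclusive.

import pytest


def validate_age(age: int) -> bool:
    """Hypothetical validator: True for ages in the assumed valid range 0..120."""
    return 0 <= age <= 120


# Equivalence classes: valid (0..120), invalid-low (< 0), invalid-high (> 120).
# Boundary values: just below, on, and just above each boundary.
@pytest.mark.parametrize(
    "age, expected",
    [
        (-1, False),   # invalid-low, just below the lower boundary
        (0, True),     # lower boundary
        (1, True),     # just above the lower boundary
        (60, True),    # representative value from the valid class
        (119, True),   # just below the upper boundary
        (120, True),   # upper boundary
        (121, False),  # invalid-high, just above the upper boundary
    ],
)
def test_validate_age_boundaries(age, expected):
    assert validate_age(age) is expected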
What's a 'test case'?
A test case describes an input, action, or event and an expected response, to determine if a feature of a
software application is working correctly. A test case may contain particulars such as test case identifier,
test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
The level of detail may vary significantly depending on the organization and project context.
Note that the process of developing test cases can help find problems in the requirements or design of an
application, since it requires completely thinking through the operation of the application. For this reason,
it's useful to prepare test cases early in the development cycle if possible.
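To make those particulars concrete, the sketch below captures one test case as structured data; the field names follow the list above, while the login scenario itself is a hypothetical example.

# A test case captured as structured data; the field names follow the list above.

from dataclasses import dataclass


@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list
    expected_result: str


login_rejects_bad_password = TestCase(
    identifier="TC-042",
    name="Login rejects incorrect password",
    objective="Verify that a wrong password does not grant access",
    setup="User account 'demo' exists and is active",
    input_data={"username": "demo", "password": "wrong-password"},
    steps=[
        "Open the login page",
        "Enter the username and password",
        "Click 'Sign in'",
    ],
    expected_result="An 'invalid credentials' message is shown and no session is created",
)

Whether such cases live in code, a spreadsheet, or a test management tool, keeping the same fields for every case is what allows the level of detail to be tuned to the organization and project context.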
1) Automated testing is testing that uses a variety of tools to automate the testing process, reducing
the need for a person to run the tests manually.
2) Testing the system with the intent of confirming readiness of the product and customer acceptance
is known as Acceptance testing.
Testing in which the tester tries to break the software by randomly exercising its functionality.
Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of
the software.
Compatibility testing checks that the software is compatible with the other elements of the system.
Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of
single-threaded code and locking semaphores.
The process of testing that an implementation conforms to the specification on which it is based.
Usually applied to testing conformance to a formal standard.
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and
creative evaluation of testing opportunities in light of the potential information revealed and the value
of that information to the organization right now.
Testing in which the action of a test case is parameterized by externally defined data values,
maintained as a file or spreadsheet. A common technique in Automated Testing.
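As a sketch of that data-driven approach: the test logic is written once and the input/expected values come from an external file. The file name conversions.csv, its columns, and the to_celsius function are assumptions made for illustration.

# Data-driven test sketch (pytest): test logic written once, values read from a CSV file.
# conversions.csv with columns fahrenheit,expected_celsius is an assumed external data file.

import csv

import pytest


def to_celsius(fahrenheit: float) -> float:
    """Hypothetical code under test."""
    return (fahrenheit - 32) * 5 / 9


def load_rows(path="conversions.csv"):
    with open(path, newline="") as handle:
        return [(float(row["fahrenheit"]), float(row["expected_celsius"]))
                for row in csv.DictReader(handle)]


@pytest.mark.parametrize("fahrenheit, expected", load_rows())
def test_to_celsius(fahrenheit, expected):
    assert to_celsius(fahrenheit) == pytest.approx(expected)

Adding a new scenario then means adding a row to the spreadsheet or CSV file, not writing another test function.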
Testing of programs or procedures used to convert data from existing systems for use in replacement
systems.
Examines an application's requirements for pre-existing software, initial states and configuration in
order to maintain proper functionality.
Checks for memory leaks or other problems that may occur with prolonged execution.
What is End-to-End testing ?
Testing a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
Testing which covers all combinations of input values and preconditions for an element of the software
under test.
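Exhaustive coverage is rarely practical, but for a deliberately small input domain it can be done; the sketch below runs every combination of a few assumed input values. The shipping_cost function and its rules are hypothetical.

# Exhaustive-testing sketch: run every combination of a small set of input values.

from itertools import product


def shipping_cost(weight_kg: float, express: bool, member: bool) -> float:
    """Hypothetical code under test."""
    cost = 5.0 + 2.0 * weight_kg
    if express:
        cost += 10.0
    if member:
        cost *= 0.9
    return cost


def test_shipping_cost_all_combinations():
    weights = [0.0, 0.5, 1.0, 20.0]
    for weight, express, member in product(weights, [False, True], [False, True]):
        cost = shipping_cost(weight, express, member)
        # Properties that must hold for every combination:
        assert cost >= 0
        assert shipping_cost(weight, True, member) >= shipping_cost(weight, False, member)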
Confirms that the application under test recovers from expected or unexpected events without loss of
data or functionality. Events can include shortage of disk space, unexpected loss of communication, or
power out conditions.
This term refers to adapting software for a specific locality.
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting with the original test data/cases to determine
if the 'bugs' are detected. Proper implementation requires large computational resources.
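A hand-rolled sketch of the same idea (real mutation tools automate the mutant generation): the existing test data is rerun against a deliberately broken 'mutant' of the code, and the mutant should be detected. The max_of functions and the test values are illustrative assumptions.

# Mutation-testing sketch: rerun the same test data against a deliberately broken
# ('mutant') version of the code and check that the tests catch the change.

def max_of(a, b):
    """Original implementation."""
    return a if a >= b else b


def mutant_max_of(a, b):
    """Mutant: the comparison operator has been flipped (a deliberately introduced 'bug')."""
    return a if a <= b else b


TEST_DATA = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]


def run_tests(implementation):
    """Return True if the implementation passes every test case."""
    return all(implementation(*args) == expected for args, expected in TEST_DATA)


assert run_tests(max_of)             # the original passes
assert not run_tests(mutant_max_of)  # the mutant is 'killed': the test data detects the bug

If the second assertion failed, the mutant would have 'survived', signalling that the test data is too weak to detect that class of bug.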
Testing a system or application on the fly, i.e. just a few tests here and there to ensure the system or
application does not crash.
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive
Testing.
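A small sketch of that positive/negative pairing, using pytest; the parse_port function and its 1-65535 rule are hypothetical.

# Positive vs. negative test sketch (pytest).

import pytest


def parse_port(value: str) -> int:
    """Hypothetical code under test: accept TCP port numbers 1..65535."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


def test_parse_port_accepts_valid_value():      # positive: "test to pass"
    assert parse_port("8080") == 8080


def test_parse_port_rejects_out_of_range():     # negative: "test to fail"
    with pytest.raises(ValueError):
        parse_port("70000")


def test_parse_port_rejects_non_numeric():      # negative: malformed input
    with pytest.raises(ValueError):
        parse_port("http")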
Testing conducted to evaluate the compliance of a system or component with specified performance
requirements. Often this is performed using an automated test tool to simulate a large number of users.
Also known as "Load Testing".
Confirms that the program recovers from expected or unexpected events without loss of data or
functionality. Events can include shortage of disk space, unexpected loss of communication, or power
out conditions.
Regression testing - checks that changes in the code have not affected existing working functionality.
Brief test of major functional elements of a piece of software to determine if it is basically operational.
Performance testing focused on ensuring the application under test gracefully handles increases in
work load.
Testing which confirms that the program can restrict access to authorized personnel and that the
authorized personnel can access the functions available to their security level.
Stress testing is a form of testing that is used to determine the stability of a given system or entity. It
involves testing beyond normal operational capacity, often to a breaking point, in order to observe the
results.
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware
testing practice of turning on a new piece of hardware for the first time and considering it a success if
it does not catch on fire.
What is Soak Testing ?
Running a system at high load for a prolonged period of time. For example, running several times
more transactions in an entire day (or night) than would be expected in a busy day, to identify any
performance problems that appear after a large number of transactions have been executed.
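A minimal sketch of a soak-style driver: it repeatedly exercises one operation for a fixed period and records latencies so that slow degradation or resource leaks become visible. The do_transaction function is a placeholder for whatever the real system under test does, and the duration and statistics reported are arbitrary choices; a real soak run would last hours and usually drive many workers in parallel.

# Soak-test driver sketch: run one operation repeatedly for a fixed period and
# record latencies so that degradation over time becomes visible.
# do_transaction is a placeholder for a call into the real system under test.

import statistics
import time


def do_transaction() -> None:
    """Placeholder for the real operation (e.g. an HTTP request or a database write)."""
    time.sleep(0.001)


def soak(duration_seconds: float = 10.0) -> None:
    latencies = []
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        start = time.monotonic()
        do_transaction()
        latencies.append(time.monotonic() - start)

    latencies.sort()
    print(f"transactions:  {len(latencies)}")
    print(f"mean latency:  {statistics.mean(latencies) * 1000:.2f} ms")
    print(f"p95 latency:   {latencies[int(0.95 * len(latencies))] * 1000:.2f} ms")


if __name__ == "__main__":
    soak()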
In volume testing, the system is subjected to a large volume of data.