TESTFAQ - Imp
What is Validation?
'Validation' checks whether we are building the right system - that is, whether the software actually meets
the customer's needs. ('Verification', by contrast, checks whether we are building the system right.)
Validation typically involves actual testing and takes place after verifications are completed.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is
usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator,
a reader, a recorder to take notes, and the author of whatever is being reviewed.
The subject of the inspection is typically a document such as a requirements spec or a test plan, and the
purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this
type of meeting by reading through the document; most problems will be found during this preparation. The
result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult,
painstaking work, but is one of the most cost-effective methods of ensuring quality. Employees who are
most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for
management to get serious about quality assurance?' Their skill may have low visibility but they are
extremely valuable to any software development organization, since bug prevention is far more cost
effective than bug detection.
What kinds of testing should be considered?
• Black box testing - not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application's code. Tests are
based on coverage of code statements, branches, paths, conditions.
• Unit testing - the most 'micro' scale of testing; to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a well-
designed architecture with tight code, may require developing test driver modules or test
harnesses.
• Functional testing – Black box type testing geared to functional requirements of an application;
this type of testing should be done by testers. This doesn't mean that the programmers shouldn't
check that their code works before releasing it (which of course applies to any stage of testing.)
• System testing - Black box type testing that is based on overall requirements specifications;
covers all combined parts of a system.
• End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of
a complete application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware, applications,
or systems if appropriate.
• Sanity testing - typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new software is
crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the
software may not be in a 'sane' enough condition to warrant further testing in its current state.
• Regression testing - re-testing after fixes or modifications of the software or its environment. It
can be difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of testing.
• Acceptance testing - final testing based on specifications of the end-user or customer, or based on
use by end-users/customers over some limited period of time.
• Load testing - testing an application under heavy loads, such as testing of a web site under a range
of loads to determine at what point the system's response time degrades or fails.
• Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while under unusually heavy loads, heavy
repetition of certain actions or inputs, input of large numerical values, large complex queries to a
database system, etc.
• Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally
'performance' testing (and any other 'type' of testing) is defined in requirements documentation or
QA or Test Plans.
• Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the
targeted end-user or customer. User interviews, surveys, video recording of user sessions, and
other techniques can be used. Programmers and testers are usually not appropriate as usability
testers.
• Recovery testing - testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
• Security testing - testing how well the system protects against unauthorized internal or external
access, willful damage, etc; may require sophisticated testing techniques.
• Exploratory testing - often taken to mean a creative, informal software test that is not based on
formal test plans or test cases; testers may be learning the software as they test it.
• Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.
• Alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing. Typically done by end-users or others, not by
programmers or testers.
• Beta testing - testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.
• Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
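The load-testing idea above - ramping up concurrency until response time degrades - can be sketched in a
few lines. This is a minimal illustration, not a real load-testing tool: handle_request is a hypothetical
stand-in for the system under test, and a real test would call the actual web site or service.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    """Hypothetical stand-in for the system under load (an assumption:
    a real load test would hit the actual web site or service)."""
    time.sleep(0.01)          # simulated fixed service time
    return len(payload)

def measure_under_load(concurrency, total_requests=40):
    """Send total_requests calls at the given concurrency; return
    (throughput_per_second, worst_latency_seconds)."""
    latencies = []
    def timed_call(_):
        t0 = time.perf_counter()
        handle_request("x" * 100)
        latencies.append(time.perf_counter() - t0)
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed_call, range(total_requests)))
    elapsed = time.perf_counter() - t0
    return total_requests / elapsed, max(latencies)

# Ramp the load and watch where response time degrades.
for workers in (1, 5, 20):
    rate, worst = measure_under_load(workers)
    print(f"concurrency={workers:2d}  {rate:6.1f} req/s  worst latency {worst*1000:5.1f} ms")
```

With the simulated fixed service time, throughput simply rises with concurrency; against a real system
the interesting point is where it stops rising and latency starts to climb.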
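The mutation-testing bullet above can be made concrete: seed deliberate 'bugs' into a function and check
whether the existing test data detects ('kills') each mutant. The leap-year function and mutant names
are illustrative, not from the FAQ.

```python
def is_leap_year(y):                     # original implementation
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

def mutant_drop_century_rule(y):         # seeded 'bug': century rule removed
    return y % 4 == 0

def mutant_wrong_operator(y):            # seeded 'bug': 'and' changed to 'or'
    return y % 4 == 0 or (y % 100 != 0 or y % 400 == 0)

# Inputs already used by the (hypothetical) existing test suite.
TEST_DATA = [1996, 1900, 2000, 2019]

def killed(mutant):
    """A mutant is 'killed' if any test input exposes a difference
    from the original implementation."""
    return any(mutant(y) != is_leap_year(y) for y in TEST_DATA)

MUTANTS = (mutant_drop_century_rule, mutant_wrong_operator)
mutation_score = sum(killed(m) for m in MUTANTS) / len(MUTANTS)
# A score below 1.0 means the test data misses some seeded bugs.
```

If TEST_DATA lacked a century year such as 1900, the first mutant would survive and the score would drop
below 1.0 - exactly the kind of gap in the test data that mutation testing is meant to expose.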
What are some common problems in the software development process?
• poor requirements
• unrealistic schedule
• inadequate testing
• miscommunication
• CMM = 'Capability Maturity Model', developed by the SEI (Software Engineering Institute at
Carnegie Mellon University); a model of five levels of organizational 'maturity' that determine
effectiveness in delivering quality software:
Level 1 - characterized by chaos, periodic panics, and heroic efforts; few if any processes in
place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout
an organization; a Software Engineering Process Group oversees software processes, and
training programs are used.
Level 4 - metrics are used to track productivity, processes, and products; project performance
is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement; the impact of new processes and
technologies can be predicted and effectively implemented when required.
(Perspective on CMM ratings: During 1992-1996 533 organizations were assessed. Of those,
62% were rated at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5. The median size of
organizations was 100 software engineering/maintenance personnel; 31% of organizations were
U.S. federal contractors. For those rated at Level 1, the most problematical key process area was
in Software Quality Assurance.)
• ISO = 'International Organization for Standardization' - The ISO 9001, 9002, and 9003 standards
concern quality systems that are assessed by outside auditors, and they apply to many kinds of
production and manufacturing organizations, not just software. The most comprehensive is 9001,
and this is the one most often used by software development organizations. It covers
documentation, design, development, production, testing, installation, servicing, and other
processes. ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software
development organizations. The U.S. version of the ISO 9000 series standards is exactly the same
as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be
purchased directly from the ASQ (American Society for Quality) or the ANSI organizations. To
be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically
good for about 3 years, after which a complete reassessment is required. Note that ISO 9000
certification does not necessarily indicate quality products - it indicates only that documented
processes are followed. (Publication of revised ISO standards is expected in late 2000; see
http://www.iso.ch/ for latest info.)
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards
such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE
Standard for Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software
Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.;
publishes some software-related standards in conjunction with the IEEE and ASQ (American
Society for Quality).
Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements
are spelled out in a document with statements such as 'The product shall...'
'Design' specifications should not be confused with 'requirements'; design specifications should be traceable
back to the requirements.
In some organizations requirements may end up in high level project plans, functional specification
documents, in design documents, or in other documents at various levels of detail. No matter what they are
called, some type of documentation with detailed requirements will be needed by testers in order to
properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if
a software application is performing correctly.
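As a small illustration of the point above - that testers need documented requirements to determine
clear-cut pass/fail - here is a hypothetical 'shall' statement traced to a black box test. The requirement
ID, text, and function are invented for the sketch.

```python
# Hypothetical written requirement, the kind testers plan from.
REQUIREMENT = ("REQ-017: The product shall reject passwords "
               "shorter than 8 characters.")

def password_accepted(password):
    """Hypothetical behavior under test for REQ-017."""
    return len(password) >= 8

def test_req_017():
    """Black box test derived directly from the 'shall' statement;
    the requirement ID provides the traceability that makes the
    pass/fail decision clear-cut."""
    assert not password_accepted("short")    # 5 chars: must be rejected
    assert password_accepted("longenough")   # 10 chars: must be accepted
    return "REQ-017 verified"
```

Without the written requirement, a tester could not say whether rejecting a 7-character password is
correct behavior or a bug - which is exactly the ambiguity the paragraph above warns about.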
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects
with unstable requirements. It was created by Kent Beck who described the approach in his book 'Extreme
Programming Explained' (see the Softwareqatest.com Books page). Testing ('extreme testing') is a core
aspect of Extreme Programming. Programmers are
expected to write unit and functional test code first - before the application is developed. Test code is under
source control along with the rest of the code. Customers are expected to be an integral part of the project
team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably
automated, and are modified and rerun for each of the frequent development iterations. QA and test
personnel are also required to be an integral part of the project team. Detailed requirements documentation
is not used, and frequent re-scheduling, re-estimating, and re-prioritizing are expected. For more info see the
XP-related listings in the Softwareqatest.com 'Other Resources' section.
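A minimal sketch of the test-first practice described above, using Python's unittest module (an assumption
of convenience - XP itself is language-neutral). The tests are written first, fail until the function
exists, and then stay in source control alongside the code; the function name and cases are illustrative.

```python
import unittest

def word_count(text):
    """Implementation written only after the tests below existed."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    # Authored before word_count was implemented; in XP these run
    # (and must pass) on every one of the frequent iterations.
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_spaces_collapse(self):
        self.assertEqual(word_count("extreme  programming"), 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWordCount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Writing the failing test first forces the programmer to pin down the expected behavior (here, that runs
of whitespace count as one separator) before any implementation choices are made.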