Software Computer programs, procedures and associated documentation and data pertaining to the operation of a computer system.
risk A factor that could result in future negative consequences; usually expressed as impact and likelihood.
error A human action that produces an incorrect result.
defect A flaw in a component or system that can cause the component or system to fail to perform its required function.
failure Deviation of the component or system from its expected delivery, service or result.
quality The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
exhaustive testing A test approach in which the test suite comprises all combinations of input values and preconditions.
testing The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
software development All the activities carried out during the construction of a software product (SDLC).
code Computer instructions and data definitions expressed in a programming language or in a form output by an assembler, compiler or other translator.
test basis All documents from which the requirements of a component or system can be inferred; the documentation on which the test cases are based.
frozen test basis A test basis document that can only be amended by a formal amendment process.
requirement A condition or capability needed by a user to solve a problem or achieve an objective, that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document.
review An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Types: management, informal, technical, inspection and walkthrough.
test case A set of input values, execution preconditions, expected results and execution postconditions developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
test objective A reason or purpose for designing and executing a test.
Evaluating exit criteria and reporting Test execution is assessed against the defined objectives. 1) Checking test logs against the exit criteria specified in test planning 2) Assessing if more testing is needed or if the exit criteria should be re-evaluated 3) Writing a test summary
Test closure activities These are all the activities that involve collecting data regarding the testing effort. 1) Checking deliverables 2) Closing incident (bug) reports 3) Documenting the acceptance of the system 4) Finalizing and archiving testware 5) Post-mortems
7 Principles are Principle 1 - Testing shows presence of defects. Principle 2 - Exhaustive testing is impossible. Principle 3 - Early testing. Principle 4 - Defect clustering. Principle 5 - Pesticide paradox. Principle 6 - Testing is context dependent. Principle 7 - Absence-of-errors fallacy. Principle 1 - Testing shows presence of defects Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness. Principle 2 - Exhaustive testing is impossible Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts. Principle 3 - Early testing Testing activities should start as early as possible in the software or system development life cycle, and should be focused on defined objectives. Principle 4 - Defect clustering A small number of modules contain most of the defects discovered during pre-release testing, or are responsible for the most operational failures. Principle 5 - Pesticide paradox If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", the test cases need to be
regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects. Principle 6 - Testing is context dependent Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site. Principle 7 - Absence-of-errors fallacy Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
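To make Principle 2 concrete, here is a rough back-of-the-envelope sketch (the execution rate is an assumed figure, not from the syllabus) showing why testing all input combinations is infeasible even for a trivial interface.

```python
# Rough illustration of Principle 2 (exhaustive testing is impossible):
# a function taking just two 32-bit integer inputs already has 2**64
# input combinations -- far more than any test effort could execute.
combinations = (2 ** 32) ** 2           # all pairs of 32-bit values
tests_per_second = 1_000_000            # assumed, optimistic execution rate
seconds_per_year = 60 * 60 * 24 * 365

years_needed = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations:.3e} combinations ~ {years_needed:,.0f} years of testing")
```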
V-model (sequential development model) Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels. The four levels used in this syllabus are: o component (unit) testing; o integration testing; o system testing; o acceptance testing. Iterative-incremental development models (K2) Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile development models. The resulting system produced by an iteration may be tested at several levels as part of its development. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment. Testing within a life cycle model (K2) In any life cycle model, there are several characteristics of good testing: o For every development activity there is a corresponding testing activity. o Each test level has test objectives specific to that level. o The analysis and design of tests for a given test level should begin during the corresponding development activity. o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
LO-2.1.1 Understand the relationship between development, test activities and work products in the development life cycle, and give examples based on project and product characteristics and context (K2). LO-2.1.2 Recognize the fact that software development models must be adapted to the context of project and product characteristics. (K1) LO-2.1.3 Recall reasons for different levels of testing, and characteristics of good testing in any life cycle model. (K1) Test levels Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g. functional or structural) and related work products, people who test, types of defects and failures to be identified. (K2) Test types LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example. (K2) LO-2.3.2 Recognize that functional and structural tests occur at any test level. (K1) LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements. (K2) LO-2.3.4 Identify and describe test types based on the analysis of a software system's structure or architecture. (K2) LO-2.3.5 Describe the purpose of confirmation testing and regression testing. (K2) Maintenance testing LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing. (K2) LO-2.4.2 Identify reasons for maintenance testing (modification, migration and retirement). (K1) LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance. (K2)
verification confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
validation Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
V-model A framework to describe the SDLC activities from requirements specification to maintenance. It illustrates how testing activities can be integrated into each phase of the SDLC.
test level A group of test activities that are organized and managed together, linked to the responsibilities in a project.
off-the-shelf software A software product that is developed for the general market (COTS).
incremental development model A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order.
component testing The testing of individual software components.
stub A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
driver A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
robustness The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
robustness testing Testing to determine the robustness of the software product.
test-driven development A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
integration The process of combining components or systems into larger assemblies.
integration testing Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
system testing The process of testing an integrated system to verify that it meets specified requirements.
requirement A condition or capability needed by a user to solve a problem or achieve an objective, that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document.
functional requirement A requirement that specifies a function that a component or system must perform.
non-functional requirement A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
test environment An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.
Black-Box Testing Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Code Coverage An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage, or condition coverage.
Functional Testing Testing based on an analysis of the specification of the functionality of a component or system.
Interoperability Testing The process of testing to determine the interoperability of a software product.
Iterative-Incremental Development Model A development life cycle where a project is broken into a (usually large) number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development which grows from iteration to iteration to become the final product.
Load Testing A test type concerned with measuring the behavior of a component or system with increasing load, e.g. the number of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
Maintainability Testing The process of testing to determine how difficult it is to maintain a software program.
Off the Shelf Software A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Performance Testing The process of testing to determine the performance of a software product.
Portability Testing The process of testing to determine how easily the software product can be moved to different hardware configurations.
Reliability Testing The process of testing to determine the reliability of the software product.
Security Testing Testing to determine the software product's ability to resist malicious code, like viruses.
Stress Testing Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE 610]
System testing Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. It falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic.
Usability Testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions. [ISO 9126]
Validation Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]
Verification Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]
Commercial off-the-shelf A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Iterative-incremental development model The process of establishing requirements, designing, building and testing a system in a series of short development cycles.
Validation Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Verification Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
V-model
A framework to describe the software development lifecycle activities from requirements specification to maintenance. It illustrates how test activities can be integrated into each phase of the software development lifecycle.
Which of the following characteristics of good testing apply to any software development life cycle model? a) Acceptance testing is always the final test level to be applied. b) All test levels are planned and completed for each developed feature. c) Testers are involved as soon as the first piece of code can be executed. d) For every development activity there is a corresponding testing activity. d) For every development activity there is a corresponding testing activity.
Which of the following comparisons of component testing and system testing are TRUE? a) Component testing verifies the functioning of software modules, program objects, and classes that are separately testable, whereas system testing verifies interfaces between components and interactions with different parts of the system. b) Test cases for component testing are usually derived from component specifications, design specifications, or data models, whereas test cases for system testing are usually derived from requirement specifications, functional specifications or use cases. c) Component testing focuses on functional characteristics, whereas system testing focuses on functional and non-functional characteristics. d) Component testing is the responsibility of the technical testers, whereas system testing typically is the responsibility of the users of the system. b) Test cases for component testing are usually derived from component specifications, design specifications, or data models, whereas test cases for system testing are usually derived from requirement specifications, functional specifications or use cases.
Alpha testing Simulated or actual operational testing by potential users/customers or an independent test team at the developer's site, but outside the development organization. It is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing (field testing) Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the
user/customer needs and fits within the business processes. It is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Driver A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
Field testing Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. It is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Integration testing Testing performed to expose defects in the interfaces and in the interactions between components or systems.
Non-functional requirement
A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
Stub A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
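A minimal Python sketch (the order/tax example is hypothetical, not from the glossary) of how a stub and a driver are used in component testing: the stub replaces a called component, while the driver replaces the caller and invokes the component under test.

```python
# Component under test: calculates an order total using a tax service
# that it calls for every order.
def order_total(net_amount, tax_service):
    return net_amount + tax_service.tax_for(net_amount)

# Stub: a skeletal replacement for the real tax service (a called component),
# returning canned values so the component above can be tested in isolation.
class TaxServiceStub:
    def tax_for(self, net_amount):
        return round(net_amount * 0.20, 2)   # fixed, predictable behaviour

# Driver: stands in for the (not yet available) caller and takes care of
# invoking the component under test with test data and checking the result.
def run_driver():
    result = order_total(100.00, TaxServiceStub())
    assert result == 120.00, f"unexpected total: {result}"
    print("component test passed:", result)

if __name__ == "__main__":
    run_driver()
```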
System testing The process of testing an integrated system to verify that it meets specified requirements
Test environment An environment containing hardware, instrumentation, simulators, software tools, and other needed support elements.
Test level A group of activities that are organized and managed together. It is linked to the responsibilities in a project. Examples are component test, integration test, system test and acceptance test.
Test-driven development A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
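A minimal sketch of test-driven development using Python's unittest (the leap-year example is hypothetical): the automated test is written first and would fail until the production function below it is implemented.

```python
import unittest

# Step 1 (red): the test is written and automated first; it fails as long as
# leap_year() is missing or wrong.
class LeapYearTest(unittest.TestCase):
    def test_leap_year_rules(self):
        self.assertTrue(leap_year(2000))    # divisible by 400
        self.assertFalse(leap_year(1900))   # century year, not divisible by 400
        self.assertTrue(leap_year(2024))
        self.assertFalse(leap_year(2023))

# Step 2 (green): just enough production code is written to make the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```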
Acceptance testing Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Black-box testing Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Code coverage An analysis method that determines which parts of the software have been executed by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage, or condition coverage.
Functional testing Testing based on an analysis of the specification of the functionality of a component or system.
Interoperability testing The process of testing to determine the interoperability of a software product.
Load testing A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
Maintainability testing The process of testing to determine the maintainability of a software product.
Performance testing The process of testing to determine the performance of a software product.
Portability testing The process of testing to determine the portability of a software product.
Reliability testing The process of testing to determine the reliability of a software product.
Stress testing A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
Structural testing Testing based on an analysis of the internal structure of the component or system.
Usability testing Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.
White-box testing Testing based on an analysis of the internal structure of the component or system.
Impact analysis The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
Maintenance testing Testing the changes to an operational system or the impact of a changed environment to an operational system.
Which statement below BEST describes non-functional testing? a) The process of testing an integrated system to verify that it meets specified requirements. b) The process of testing to determine the compliance of a system to coding standards. c) Testing without reference to the internal structure of a system. d) Testing system attributes, such as usability, reliability or maintainability. d) Testing system attributes, such as usability, reliability or maintainability.
Which of the following statements are TRUE? A. Regression testing and acceptance testing are the same. B. Regression tests show if all defects have been resolved. C. Regression tests are typically well-suited for test automation. D. Regression tests are performed to find out if code changes have introduced or uncovered defects. E. Regression tests should be performed in integration testing.
a) A, C and D and E are true; B is false. b) A, C and E are true; B and D are false. c) C and D are true; A, B and E are false. d) B and E are true; A, C and D are false. c) C and D are true; A, B and E are false.
For which of the following would maintenance testing be used? a) Correction of defects during the development phase. b) Planned enhancements to an existing operational system. c) Complaints about system quality during user acceptance testing. d) Integrating functions during the development of a new system. b) Planned enhancements to an existing operational system. Two main static testing techniques 1) Reviews 2) Static analysis Reviews Systematic examination of a document by one or more people with the main aim of finding and removing errors Types of defects found in reviews 1) Deviations from standards 2) Requirements defects 3) Design defects 4) Insufficient maintainability 5) Incorrect interface specifications Basic review process 1) Document under review is studied 2) Reviewers identify issues and inform the author 3) Author decides on action to take and updates as needed Formal review process 1) Planning (select personnel, allocate roles, define entry and exit criteria, select parts of the document to review) 2) Kick-off 3) Review entry criteria 4) Individual preparation 5) Noting incidents 6) Review meeting
7) Examine 8) Rework 9) Fixing defects 10) Follow-up 11) Checking exit criteria Review roles 1) Manager - decides on what to review 2) Moderator - leads the review 3) Author - person who wrote the document 4) Reviewers - perform the review of documents 5) Scribe - documents the review Four types of reviews 1) Informal 2) Walkthrough 3) Technical review 4) Inspection Informal review Main purpose is to find defects and uses no real formal process Walkthrough Enable learning of the content of the document and find defects Technical review Enable decision making, finding defects, solving technical problems, and checking conformance of document Inspection Main purpose is to find defects and process improvement Success factors for reviews 1) Each review should have a predefined and agreed objective 2) Any defects found should be welcomed 3) Conducted in an atmosphere of trust 4) Techniques are suitable to work-product type 5) Emphasis on learning 6) Management support Static analysis
Used to find defects in software source code and software models Benefits of static analysis 1) Early detection of issues prior to test execution 2) Early warning about suspicious code or design 3) Identification of defects not easily found by dynamic testing 4) Improved maintainability 5) Prevention of defects
acceptance Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610].
acceptance testing Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610].
agile manifesto A statement on the values that underpin agile software development. The values are: - individuals and interactions over processes and tools - working software over comprehensive documentation - customer collaboration over contract negotiation - responding to change over following a plan
agile software development A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
alpha testing
Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
attack Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur (negative testing).
beta testing Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
black-box technique Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
black box test design technique Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
black box testing Testing, either functional or non-functional, without reference to the internal structure of the component or system.
boundary value An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
boundary value analysis A black box test design technique in which test cases are designed based on boundary values.
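A small sketch combining equivalence partitioning and boundary value analysis for a hypothetical rule ("ages 18 to 65 inclusive are accepted"): one representative value per partition plus the values on and just outside each edge.

```python
# Hypothetical rule: ages 18..65 inclusive are accepted, anything else rejected.
def accepted(age):
    return 18 <= age <= 65

# Equivalence partitions: below the range, inside the range, above the range.
# Boundary values: the edge of each partition and the value just outside it.
test_cases = [
    (10, False),   # representative of the "too young" partition
    (17, False),   # boundary: just below the lower edge
    (18, True),    # boundary: lower edge
    (40, True),    # representative of the valid partition
    (65, True),    # boundary: upper edge
    (66, False),   # boundary: just above the upper edge
    (80, False),   # representative of the "too old" partition
]

for age, expected in test_cases:
    assert accepted(age) == expected, f"age {age}: expected {expected}"
print("all equivalence/boundary test cases passed")
```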
branch coverage The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
bug Defect
capture/playback tool A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.
capture/replay tool A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.
code coverage An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
commercial off-the-shelf software A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
compiler A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610].
complexity The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
component testing The testing of individual software components. [After IEEE 610].
configuration control An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]
configuration management A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]
configuration management tool A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.
control flow A sequence of events (paths) in the execution through a component or system.
coverage The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
coverage tool A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by a test suite.
cyclomatic complexity The number of independent paths through a program. Cyclomatic complexity is defined as: L - N + 2P, where - L = the number of edges/links in a graph - N = the number of nodes in a graph - P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine)
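A small worked example of the formula above for a hypothetical control flow graph of a function containing a single if/else decision: 5 edges and 5 nodes in one connected graph give a cyclomatic complexity of 2, i.e. two independent paths.

```python
# Hypothetical control flow graph for a function with a single if/else:
# start -> decision -> (then | else) -> end
edges = [
    ("start", "decision"),
    ("decision", "then"),
    ("decision", "else"),
    ("then", "end"),
    ("else", "end"),
]
nodes = {"start", "decision", "then", "else", "end"}
parts = 1  # the graph is one connected component

L, N, P = len(edges), len(nodes), parts
cyclomatic_complexity = L - N + 2 * P   # 5 - 5 + 2 = 2 independent paths
print(cyclomatic_complexity)
```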
data driven testing A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.
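A minimal data-driven testing sketch (the discount function and CSV data are hypothetical): the test inputs and expected results sit in a table, and a single control script executes every row.

```python
import csv
import io

# Test data kept as a table (here an in-memory CSV; typically a spreadsheet
# or CSV file): one row per test, columns for inputs and the expected result.
TEST_DATA = """net_amount,discount_percent,expected_total
100,0,100.0
100,10,90.0
200,25,150.0
"""

# Hypothetical function under test.
def discounted_total(net_amount, discount_percent):
    return net_amount * (1 - discount_percent / 100)

# Single control script: reads the table and runs the same steps for every row.
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = discounted_total(float(row["net_amount"]), float(row["discount_percent"]))
    expected = float(row["expected_total"])
    assert abs(actual - expected) < 1e-9, f"failed for row {row}"
print("all data-driven test rows passed")
```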
data flow An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage or destruction.
debugger A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
debugging The process of finding, analyzing and removing the causes of failures in software.
debugging tool A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
decision coverage The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
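A short sketch of the difference between the coverage measures defined here, using a hypothetical one-decision function: the first test alone achieves 100% statement coverage but only 50% decision coverage, because the False outcome is never exercised.

```python
def apply_discount(total, is_member):
    if is_member:                 # the only decision in the function
        total = total * 0.9       # the only statement inside the branch
    return total

# One test case: executes every statement (100% statement coverage) but only
# the True outcome of the decision, so decision coverage is 50%.
assert apply_discount(100, True) == 90.0

# A second test case exercises the False outcome as well, bringing decision
# (and branch) coverage to 100%.
assert apply_discount(100, False) == 100
```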
decision table
A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
decision table testing A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
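A minimal decision table sketch for a hypothetical rule ("members get a discount only on orders of 100 or more"), with one test case designed per column of the table.

```python
# Decision table (conditions on top, resulting action below), one column per rule:
#
#   member?          T      T      F      F
#   order >= 100     T      F      T      F
#   ---------------------------------------
#   discount given   yes    no     no     no
def discount_given(member, order_value):
    return member and order_value >= 100

# One test case per column of the decision table.
columns = [
    (True,  150, True),
    (True,   50, False),
    (False, 150, False),
    (False,  50, False),
]
for member, order_value, expected in columns:
    assert discount_given(member, order_value) == expected
print("all decision table columns covered")
```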
defect A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
defect density The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
defect management tool A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities.
defect report A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]
driver A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
dynamic analysis tool A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.
dynamic testing Testing that involves the execution of the software of a component or system.
efficiency The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126].
efficiency testing The process of testing to determine the efficiency of a software product.
entry criteria The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.
equivalence partition A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
equivalence partitioning A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
error A human action that produces an incorrect result. [After IEEE 610].
error guessing A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
exhaustive testing A test approach in which the test suite comprises all combinations of input values and preconditions.
exit criteria The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
experience-based test design technique Procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.
exploratory testing An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
failure Deviation of the component or system from its expected delivery, service or result.
failure rate The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
fault Defect
fault attack Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur. See also negative testing.
formal review A review characterized by documented procedures and requirements, e.g. inspection.
functional requirement A requirement that specifies a function that a component or system must perform. [IEEE 610]
functional testing Testing based on an analysis of the specification of the functionality of a component or system.
functionality The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]
functionality testing The process of testing to determine the functionality of a software product.
horizontal traceability The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).
impact analysis The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
incident Any event occurring that requires investigation. [After IEEE 1008]
incident logging Recording the details of any incident that occurred, e.g. during testing
incident management The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]
incident management tool A tool that facilitates the recording and status tracking of incidents. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.
incident report
A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [After IEEE 829]
incremental development model A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a 'mini V-model' with its own design, coding and testing phases.
independence of testing Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO178b]
inspection A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to high level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028].
integration testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
interoperability testing The process of testing to determine the capability of the software product to interact with one or more specified components or systems.
iterative development model A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
keyword driven testing A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
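A minimal keyword-driven sketch (the basket application and keywords are hypothetical): the test table holds keywords plus their data, small supporting functions interpret each keyword, and a single control script dispatches the rows.

```python
# Hypothetical system under test: a simple in-memory shopping basket.
basket = []

# Supporting scripts: one small function per keyword.
def add_item(name, price):
    basket.append((name, float(price)))

def check_total(expected):
    total = sum(price for _, price in basket)
    assert total == float(expected), f"expected {expected}, got {total}"

KEYWORDS = {"add_item": add_item, "check_total": check_total}

# Test table contents: keyword followed by its test data
# (normally kept in an external data file).
test_table = [
    ("add_item", "book", "12.50"),
    ("add_item", "pen", "2.50"),
    ("check_total", "15.0"),
]

# Control script: interprets each row by dispatching to the keyword's function.
for keyword, *data in test_table:
    KEYWORDS[keyword](*data)
print("keyword-driven test passed")
```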
load testing A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
maintainability The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]
maintainability testing The process of testing to determine the maintainability of a software product.
maintenance Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]
maintenance testing Testing the changes to an operational system or the impact of a changed environment to an operational system.
metric A measurement scale and the method used for measurement. [ISO 14598]
mistake Error
modeling tool A tool that supports the validation of models of the software or system.
moderator The leader and main person responsible for an inspection or other review process.
monitor A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system. [After IEEE 610]
non-functional requirement A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
off-the-shelf software A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
peer review A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
performance The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610]
performance testing The process of testing to determine the performance of a software product.
performance testing tool A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
portability The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
portability testing The process of testing to determine the portability of a software product.
probe effect The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example performance may be slightly worse when performance testing tools are being used.
project risk A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc.
quality The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]
regression testing Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
regulation testing The process of testing to determine the compliance of the component or system.
reliability The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]
reliability testing The process of testing to determine the reliability of a software product.
requirement A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]
requirements management tool A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.
re-testing Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
review An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]
review tool A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.
reviewer The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
risk A factor that could result in future negative consequences; usually expressed as impact and likelihood.
risk-based testing An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.
robustness
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610]
root cause A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]
scribe The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe should ensure that the logging form is readable and understandable.
scripting language A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).
security Attributes of a software product that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126]
security testing tool A tool that provides support for testing security characteristics and vulnerabilities.
severity The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]
state diagram A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]
state table A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.
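A small sketch of a state table for a hypothetical three-state document workflow, recording both valid and invalid transitions so that tests can be derived and checked against it.

```python
# State table for a hypothetical document workflow: keys are (state, event),
# values are the resulting state, or None for an invalid transition.
STATE_TABLE = {
    ("draft",     "submit"):  "in_review",
    ("draft",     "approve"): None,          # invalid: cannot approve a draft
    ("in_review", "approve"): "published",
    ("in_review", "submit"):  None,          # invalid: already in review
    ("published", "submit"):  None,
    ("published", "approve"): None,
}

def next_state(state, event):
    return STATE_TABLE.get((state, event))

# Tests cover valid transitions and at least one invalid transition.
assert next_state("draft", "submit") == "in_review"
assert next_state("in_review", "approve") == "published"
assert next_state("draft", "approve") is None
print("state table transitions verified")
```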
statement coverage The percentage of executable statements that have been exercised by a test suite.
static analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts. Static analysis is usually carried out by means of a supporting tool.
static code analysis Analysis of source code carried out without execution of that software.
static testing Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.
stress testing A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610]
stub A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
1. Test granularity refers to: A. The impact of a bug on the system under test. B. The fineness or coarseness of a test's focus. C. Any way of determining the expected result for a test case. D. A quality improvement idea common in software development.
B. The fineness or coarseness of a test's focus.
2. The prime benefit of testing is that it results in improved defects a. True b. False
b. False
3. A bug report is a: A. A & B B. A deliverable that details the strategic approach to a testing effort C. A collection of independent, reusable test cases.
D. A technical document that describes the various symptoms or failure modes associated with a single bug.
D. A technical document that describes the various symptoms or failure modes associated with a single bug.
4. A software error can be described as: A. A mismatch between the program and its specification. B. The process in which developers determine the root cause of a bug and identify possible fixes. C. A description of the relationship between two or more variables or set members in which the value of one does not influence the values of others. D. Any ill-advised, substandard, or temporary fix applied to an urgent problem in the (often misguided) belief that doing so will keep a project moving forward.
A. A mismatch between the program and its specification.
5. Select a reason that does not agree with the fact that complete testing is impossible: A. The user interface issues (and thus the design issues) are too complex to completely test. B. There are too many possible paths through the program to test. C. The domain of possible inputs is too large to test . D. Limited financial resources.
D. Limited financial resources.
6. Testing looks for situations in which a product fails to meet the developers' expectations in specific areas. a. True b. False
b. False
7. Select a reason that does not support the idea of using separate test plans for test subprojects that are distinct in one or more ways: A. Different audiences B. Different methodologies C. Different time periods D. Different resources E. Different objectives
D. Different resources
8. The testing effort begins with A. A & B B. Test execution C. Test planning D. Test case design E. B & C
A. A & B
9. Testing during the design stage involves: A. Reading drafts of the planning documents B. Acceptance or qualification testing C. Examining the design documents d. None of the above
C. Examining the design documents
C. Principles D. Actions
A. Accountability
11. When testing operating systems or applications, the first step of testing a new build should consist of : A. A & B B. Testing the upgrade/installation procedures C. Notifying test lead D. Updating requirements
B. Testing the upgrade/installation procedures
12. The general rule of test execution is that you must always create a test procedure that will force the program to use the data you've entered and to prove that it is using your data correctly. a. True b. False
a. True
13. Which is not a goal of writing effective Problem/Bug reports? A. Write a report that is complete, easy to understand, and non-antagonistic B. Analyze the error so you can describe it in a minimum number of steps C. Illustrate how to fix the problem D. Explain how to reproduce the problem
C. Illustrate how to fix the problem
14. Which of the following displays an exit criterion for the test team? A. The Development teams have unit-tested all features and bug fixes scheduled for release. B. Twice-weekly bug review meetings (under the Change Control Board) occur until System Test Phase Exit to manage the open bug backlog and bug closure times. C. All software released to the test team is accompanied by release notes D. The test team has executed the entire planned tests against the application under test.
D. The test team has executed the entire planned tests against the application under test.
15. The daily closure period refers to: A. The amount of bugs opened over a 24 hour period B. The average number of days between the opening of a bug and its resolution for all bugs closed on the same day. C. The average for all closed bugs, including the current day and all previous days d. None of the Above
B. The average number of days between the opening of a bug and its resolution for all bugs closed on the same day.
16. Integrity testing involves: A. The final phase of testing prior to deployment B. Alpha testing C. The testing of pseudo code D. Performance testing
A. The final phase of testing prior to deployment
17. Testing literature reflects and promotes a strongly held belief that product reliability will not be better if testing is done by a fully independent test agency. a. True b. False
b. False
18. Select the item(s) that are general testing principles: a. Testing shows a presence of defects b. Exhaustive testing is impossible c. Automation tools can be a great strategy d. Absence-of-errors fallacy
a. Testing shows a presence of defects, b. Exhaustive testing is impossible, d. Absence-of-errors fallacy
19. Which is not a major task of test implementation and execution: A. Verifying that the test environment has been set up correctly B. Checking test logs against the exit criteria specified in test planning. C. Developing and prioritizing test cases, creating test data, writing test procedures and optionally, preparing test harnesses and writing automated test scripts. D. Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware.
B. Checking test logs against the exit criteria specified in test planning.
20. Select the item(s) that compose test objectives: a. Finding defects b. Gaining confidence about the level of quality and providing information c. Preventing defects
a. Finding defects b. Gaining confidence about the level of quality and providing information c. Preventing defects
21. A bug or defect is: A. the result of a failure, which may lead to an error B. the result of an error or mistake; C. a mistake made by a person D. a run-time problem experienced by a user
B. the result of an error or mistake;
22. The effect of testing is to: A. show there are no problems remaining? B. enable those responsible for software failures to be identified; C. increase software quality; D.give an indication of the software quality;
D. give an indication of the software quality;
23. What is retesting? A. Running a previously failed test against new software/data/documents to see if the problem is solved. B. Checking that the predetermined exit criteria for the test phase have been met. C. Running the same test again in the same circumstances to reproduce the problem. D. A cursory run through a test pack to see if any new errors have been introduced.
A. Running a previously failed test against new software/data/documents to see if the problem is solved.
24. Debugging is: A. Checking that no unintended consequences have occurred as a result of a fix. B. Identifying the cause of a defect, repairing the code and checking the fix is correct. C. Testing/checking whether the software performs correctly. D. Checking that a previously reported defect has been corrected.
B. Identifying the cause of a defect, repairing the code and checking the fix is correct. (C is a brief definition of testing, D is retesting, and A is regression testing.)
25. Which of the following are aids to good communication, and which hinder it? i. Try to understand how the other person feels. ii. Communicate personal feelings, concentrating upon individuals. iii. Confirm the other person has understood what you have said and vice versa. iv. Emphasise the common goal of better quality. v. Each discussion is a battle to be won. A. (ii), (iii) and (iv) aid, (i) and (v) hinder. B. (i), (iii) and (iv) aid, (ii) and (v) hinder. C. (i), (ii) and (iii) aid, (iv) and (v) hinder. D. (iii), (iv) and (v) aid, (i) and (ii) hinder.
B. (i), (iii) and (iv) aid, (ii) and (v) hinder.
26. Which option is part of the implementation and execution area of the fundamental test process? A. Analysing lessons learnt for future releases. B. Writing a test summary. C. Developing the tests. D. Comparing actual and expected results.
D. Comparing actual and expected results. (A is part of test closure activities, B is part of evaluating exit criteria and reporting, and C is part of analysis and design.)
27. The five parts of the fundamental test process have a broad chronological order. Which of the options gives three different parts in the correct order? A. Evaluating exit criteria and reporting, test closure activities, analysis and design. B. Evaluating exit criteria and reporting, implementation and execution, analysis and design. C. Implementation and execution, planning and control, analysis and design. D. Analysis and design, evaluating exit criteria and reporting, test closure activities.
D. Analysis and design, evaluating exit criteria and reporting, test closure activities.
28. Which pair of definitions is correct? A. Regression testing is checking that there are no additional problems in previously tested software, retesting is demonstrating that the reported defect has been fixed. B. Regression testing involves running all tests that have been run before; retesting runs new tests. C. Regression testing is checking that the reported defect has been fixed; retesting is testing that there are no additional problems in previously tested software. D. Regression testing is checking there are no additional
problems in previously tested software; retesting enables developers to isolate the problem.
A. Regression testing is checking that there are no additional problems in previously tested software, retesting is demonstrating that the reported defect has been fixed.
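To make the distinction concrete, a minimal sketch (assuming pytest conventions; the apply_discount function and the defect number are hypothetical): the first test re-runs the scenario from the original bug report to confirm the fix (retesting), while the others guard previously working behaviour against side effects of the fix (regression testing).

    # Hypothetical system under test: percentage discount calculation.
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    def test_retest_defect_1234_zero_percent_discount():
        # Retest: reproduces the exact scenario reported in the fixed defect.
        assert apply_discount(100.00, 0) == 100.00

    def test_regression_standard_discount():
        # Regression: previously passing behaviour that must still work after the fix.
        assert apply_discount(100.00, 25) == 75.00

    def test_regression_rounding():
        # Regression: rounding behaviour unaffected by the fix.
        assert apply_discount(19.99, 10) == 17.99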
29. Which statement is most true? A. A technique that has found no defects is not useful. B. A technique that finds defects will always find defects. C. Different testing is needed depending upon the application. D. All software is tested in the same way.
C. Different testing is needed depending upon the application.
30. When is testing complete? A. When every data combination has been exercised successfully. B. When there are no remaining high priority defects outstanding. C. When time and budget are exhausted. D. When there is enough information for sponsors to make an informed decision about release.
D. When there is enough information for sponsors to make an informed decision about release. (Sometimes time and money do signify the end of testing, but it is really complete when everything that was set out in advance has been achieved)
31. Which list of levels of tester independence is in the correct order, starting with the most independent first? A. Tests designed by someone from a different department within the company; tests designed by someone from a different company; tests designed by the author.
B. Tests designed by someone from a different company; tests designed by someone from a different department within the company; tests designed by another member of the development team. C. Tests designed by the author; tests designed by another member of the development team; tests designed by someone from a different company. D. Tests designed by someone from a different department within the company; tests designed by the author; tests designed by someone from a different company.
B. Tests designed by someone from a different company; tests designed by someone from a different department within the company; tests designed by another member of the development team.
32. The following statements relate to activities that are part of the fundamental test process. i. Evaluating the testability of requirements. ii. Repeating testing activities after changes. iii. Designing the test environment set-up. iv. Developing and prioritising test cases. v. Verifying the environment is set up correctly. Which statement below is TRUE? A. (i) and (iv) are part of analysis and design, (ii), (iii) and (v) are part of test implementation and execution. B. (i) and (v) are part of analysis and design, (ii), (iii) and (iv) are part of test implementation and execution. C. (i) and (ii) are part of analysis and design, (iii), (iv) and (v) are part of test implementation and execution. D. (i) and (iii) are part of analysis and design, (ii), (iv) and (v) are part of test implementation and execution.
D. (i) and (iii) are part of analysis and design, (ii), (iv) and (v) are part of test implementation and execution.
33. Which statement correctly describes the public and profession aspects of the code of ethics? A. Public: Certified software testers shall consider the wider public interest in their actions. Profession: Certified software testers shall advance the integrity and reputation of their industry consistent with the public interest. B. Public: Certified software testers shall consider the wider public interest in their actions. Profession: Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of their profession. C. Public: Certified software testers shall act in the best interests of their client and employer (being consistent with the wider public interest). Profession: Certified software testers shall advance the integrity and reputation of their industry consistent with the public interest. D. Public: Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest. Profession: Certified software testers shall consider
Chapter: Fundamentals of Testing
1. Which of the following statements BEST describes one of the seven key principles of software testing? a) Automated tests are better than manual tests for avoiding exhaustive testing. b) Exhaustive testing is, with sufficient effort and tool support, feasible for all software. c) It is normally impossible to test all input/output combinations for a software system. d) The purpose of testing is to demonstrate the absence of defects.
Ans: C
2. Which of the following statements is the MOST valid goal for a test team? a) Determine whether enough component testing was executed. b) Cause as many failures as possible so that faults can be identified and corrected. c) Prove that all faults are identified. d) Prove that any remaining faults will not cause any failures.
Ans: B
3. Which of these tasks would you expect to perform during Test Analysis and Design? a) Setting or defining test objectives. b) Reviewing the test basis. c) Creating test suites from test procedures. d) Analyzing lessons learned for process improvement.
Ans: B
4. Below is a list of problems that can be observed during testing or operation. Which is MOST likely a failure? a) The product crashed when the user selected an option in a dialog box. b) One source code file included in the build was the wrong version. c) The computation algorithm used the wrong input variables. d) The developer misinterpreted the requirement for the algorithm.
Ans: A
5. Which of the following, if observed in reviews and tests, would lead to problems (or conflict) within teams? a) Testers and reviewers are not curious enough to find defects. b) Testers and reviewers are not qualified enough to find failures and faults. c) Testers and reviewers communicate defects as criticism against persons and not against the software product. d) Testers and reviewers expect that defects in the software product have already been found and fixed by the developers.
Ans: C
6. Which of the following statements are TRUE? A) Software testing may be required to meet legal or contractual requirements. B) Software testing is mainly needed to improve the quality of the developer's work. C) Rigorous testing and fixing of defects found can help reduce the risk of problems occurring in an operational environment. D) Rigorous testing is sometimes used to prove that all failures have been found. a) B and C are true; A and D are false. b) A and D are true; B and C are false. c) A and C are true; B and D are false. d) C and D are true; A and B are false.
Ans: C
7. Which of the following statements BEST describes the difference between testing and debugging? a) Testing pinpoints (identifies the source of) the defects. Debugging analyzes the faults and proposes prevention activities. b) Dynamic testing shows failures caused by defects. Debugging finds, analyzes, and removes the causes of failures in the software. c) Testing removes faults. Debugging identifies the cause of failures. d) Dynamic testing prevents causes of failures. Debugging removes the failures.
Ans: B