
UNIT 10: SOFTWARE TEST METRIC
Planning Management

REPORT BY:
KIMBERLY B. DOSADO
MCS 2-A, ISAT-U

REPORT OUTLINE:

• Test concepts, definitions and techniques
• Estimating number of test cases
TEST CONCEPTS, DEFINITIONS AND TECHNIQUES
Definitions

• Run: The smallest division of work that can be initiated by external intervention on a software component. A run is associated with an input state (a set of input variables), and runs with identical input states are of the same run type.
• Direct input variable: a variable that controls the operation directly.
• Examples: arguments, selection menu, entered data field.
• Indirect input variable: a variable that only influences the operation or whose effects are propagated to the operation.
• Examples: traffic load, environmental variable.
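To make these definitions concrete, here is a minimal Python sketch (the submit_order function and the MAINTENANCE_MODE variable are hypothetical): the function arguments are direct input variables, while an environment setting acts as an indirect input variable.

```python
import os

def submit_order(item_id: int, quantity: int) -> str:
    """One run of this operation is determined by its input state."""
    # Direct input variables: item_id and quantity control the operation directly.
    if quantity <= 0:
        return "rejected"
    # Indirect input variable: an environment setting influences the operation
    # without being passed in directly (as traffic load or locale would).
    if os.environ.get("MAINTENANCE_MODE") == "1":
        return "deferred"
    return "accepted"

# Two runs with identical input state belong to the same run type.
print(submit_order(42, 3))  # one run
print(submit_order(42, 3))  # a second run of the same run type
```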
What is a Test Case?
• Definition: A test case is a partial specification of a run through the naming of its direct input variables and their values.
• Better definition: A test case is an instance (or scenario) of a use-case, composed of a set of test inputs, execution conditions and expected results.
• Prior to specifying test cases, one should document the use-cases.
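As an illustration, here is a minimal sketch of such a test case using Python's unittest (the apply_discount function and its expected results are hypothetical): it names the test inputs, an execution condition, and the expected result.

```python
import unittest

def apply_discount(price: float, code: str) -> float:
    """Hypothetical system under test."""
    return price * 0.9 if code == "SAVE10" else price

class CheckoutDiscountCase(unittest.TestCase):
    """One test case = one instance (scenario) of the 'apply discount' use-case."""

    def setUp(self):
        # Execution condition: a known starting state for the scenario.
        self.price = 100.0

    def test_valid_code_reduces_price(self):
        # Test inputs (direct input variables) and the expected result.
        self.assertEqual(apply_discount(self.price, "SAVE10"), 90.0)

if __name__ == "__main__":
    unittest.main()
```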
Types of Software Test

• Certification Test: Accept or reject (a binary decision) an acquired component for a given target failure intensity.
• Feature Test: A single execution of an operation, with interaction between operations minimized.
• Load Test: Testing with field use data, accounting for interactions among operations.
• Regression Test: Feature tests after every build involving significant change, e.g., checking whether a bug fix worked.
Basic Testing Methods
• Two widely recognized testing methods:
  • White Box Testing: reveals problems with the internal structure of a program
  • Black Box Testing: assesses how well a program meets its requirements
1. White Box Testing
• Checks the internal structure of a program
• Requires detailed knowledge of that structure
• A common goal is path coverage: how many of the possible execution paths are actually tested?
• Effectiveness is often measured by the fraction of code exercised by test cases
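To make path coverage concrete, here is a minimal sketch (the classify function is hypothetical): two independent branches yield four execution paths, so full path coverage needs four test cases, even though two would suffice to exercise every line.

```python
def classify(x: int, y: int) -> str:
    # Two independent branches give 2 x 2 = 4 execution paths.
    label = "x+" if x > 0 else "x-"
    label += "y+" if y > 0 else "y-"
    return label

# Path coverage: one test case per execution path.
assert classify(1, 1) == "x+y+"
assert classify(1, -1) == "x+y-"
assert classify(-1, 1) == "x-y+"
assert classify(-1, -1) == "x-y-"
```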
2. Black Box Testing
• Checks how well a program meets its requirements
• Assumes that the requirements are already validated
• Looks for missing or incorrect functionality
• Exercises the system with input for which the expected output is known
• Various methods: performance, stress, reliability, security testing
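By contrast, a black-box test exercises the program purely through its specification. A minimal sketch, assuming the requirement "sorting returns the elements in ascending order": the test knows the expected output for each input but nothing about the implementation.

```python
def test_sort_meets_requirement():
    # Inputs for which the expected output is known from the requirements,
    # with no knowledge of the algorithm used internally.
    assert sorted([3, 1, 2]) == [1, 2, 3]
    assert sorted([]) == []      # boundary: empty input
    assert sorted([5]) == [5]    # boundary: single element

test_sort_meets_requirement()
```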
Testing Levels
Testing occurs throughout the software lifecycle:
• Unit
• Integration & System
• Evaluation & Acceptance
• Installation
• Regression (Reliability Growth)
1. Unit Testing
• White box testing in a controlled test environment, exercising one module (unit) in isolation from others
• A unit is a function or small library
• Small enough to test thoroughly
2. Integration Testing
• Units are combined and the module is exercised
• Focus is on the interfaces between units
• White Box with some Black Box
• Three main approaches:
  • Top-down
  • Bottom-up
  • Big Bang
Integration Testing: Top-Down
• The control program is tested first
• Modules are integrated one at a time
• Major emphasis is on interface testing
• Interface errors are discovered early
• Forms a basic early prototype
• Test stubs are needed (see the sketch below)
• Errors in low-level modules are found late
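Because the lower-level modules do not exist yet, top-down integration substitutes test stubs for them. A minimal sketch of the idea (PaymentServiceStub and checkout are hypothetical names): the control program is exercised against a stub that returns canned answers in place of the real payment module.

```python
class PaymentServiceStub:
    """Stands in for the real, not-yet-integrated payment module."""
    def charge(self, amount: float) -> bool:
        return True  # canned answer so the control program can be exercised

def checkout(payment_service, amount: float) -> str:
    # Control program under test: only its interface to the stub is exercised.
    return "ok" if payment_service.charge(amount) else "failed"

assert checkout(PaymentServiceStub(), 19.99) == "ok"
```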
Integration Testing: Bottom-Up
• Modules integrated in clusters as desired
• Shows feasibility of modules early on
• Emphasis on functionality and performance
• Usually, test stubs are not needed
• Errors in critical modules are found early
• Many modules must be integrated before a working program is available
• Interface errors are discovered late
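Bottom-up integration works the other way around: the low-level module is real, and a throwaway test driver plays the role of the not-yet-integrated callers. A minimal sketch (the tax module is hypothetical):

```python
def tax(amount: float, rate: float = 0.08) -> float:
    """Real low-level module, integrated and tested first."""
    return round(amount * rate, 2)

def driver():
    # Throwaway test driver: plays the role of the missing higher-level module.
    for amount, expected in [(100.0, 8.0), (0.0, 0.0), (19.99, 1.6)]:
        assert tax(amount) == expected, f"tax({amount}) != {expected}"

driver()
```

The driver is discarded once the real higher-level modules are integrated.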
Integration Testing: Big Bang
• Units completed and tested independently
• Integrated all at once
• Quick and cheap (no stubs, drivers)
• Errors:
• Discovered later
• More are found
• More expensive to repair
• Most commonly used approach
3. External Function Test
• Black Box test
• Verifies the system correctly implements specified functions
• Sometimes known as an Alpha test
• In-house testers mimic the end use of the system


4. System Test
• More robust version of the external function test
• The difference is the test platform: the environment reflects end use
• Includes hardware, database size, system complexity, external factors
• Can more accurately test nonfunctional system requirements (performance, security, etc.)
5. Acceptance Testing
• Also known as Beta testing
• Completed system tested by end users
• More realistic test usage than the system test phase
• Validates the system against user expectations
• Determines if the system is ready for deployment
6. Installation Testing
• The testing of full, partial, or upgrade install/uninstall processes
• Not well documented in the literature
7. Regression Testing
• Tests modified software
• Verifies changes are correct and do not adversely affect other system components
• Selects the existing test cases deemed necessary to validate the modification
• With bug fixes, four things can happen: fix a bug; add a new bug; damage program structure; damage program integrity
• Three of them are unwanted
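A minimal sketch of the idea (the average function and its bug fix are hypothetical): after a fix, the selected existing test cases are re-run to confirm the fix works and previous behaviour is preserved.

```python
def average(values):
    # Bug fix: previously crashed on an empty list (ZeroDivisionError).
    return sum(values) / len(values) if values else 0.0

# Selected existing test cases deemed necessary to validate the modification.
regression_suite = [
    (([2, 4, 6],), 4.0),   # old behaviour must be preserved
    (([],), 0.0),          # new test case covering the fixed bug
]
for args, expected in regression_suite:
    assert average(*args) == expected
```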
ESTIMATING NUMBER OF TEST CASES
Estimating Number of Test Cases

• How many test cases do we need?
• Affected by two factors: time and cost
• Method:
  • Compute the number of test cases under each constraint:
    • Time: (available time × available staff) / (average time to prepare a test case)
    • Cost: (available budget) / (average preparation cost per test case)
  • Select the minimum of the two
Example
• The development budget for a project is $4 million, 10% of which can be spent on preparing test cases. Each test case costs $250 or 4 hours to prepare. The duration of the project is set at 25 weeks (of 40 hours each) and a staff of 5 is assigned to prepare the test cases. How many test cases should be prepared?
• From the cost point of view:
  N1 = (4,000,000 × 0.1) / 250 = 1,600
• From the time point of view:
  N2 = (25 × 40 × 5) / 4 = 1,250
• N = min(N1, N2); therefore, N = 1,250
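The calculation is straightforward to script; here is a minimal sketch that reproduces the example's numbers:

```python
def estimate_test_cases(budget, budget_fraction, cost_per_case,
                        weeks, hours_per_week, staff, hours_per_case):
    n_cost = (budget * budget_fraction) / cost_per_case          # cost constraint
    n_time = (weeks * hours_per_week * staff) / hours_per_case   # time constraint
    return int(min(n_cost, n_time))

# Example: $4M budget, 10% for test cases, $250 or 4 hours per case,
# 25 weeks x 40 hours/week x 5 staff.
print(estimate_test_cases(4_000_000, 0.1, 250, 25, 40, 5, 4))  # -> 1250
```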


References
• Far, B. (2014). Software Metrics (SENG 421) Course Outline. https://people.ucalgary.ca/~far/Lectures/SENG421/index.html
