SE Unit - 5
• The strategy provides a road map that describes the steps to be taken, when, and how much effort, time,
and resources will be required
• The strategy incorporates test planning, test case design, test execution, and test result collection and
evaluation
• The strategy provides guidance for the practitioner and a set of milestones for the manager
• Because of time pressures, progress must be measurable and problems must surface as early as possible
• To perform effective testing, a software team should conduct effective formal technical reviews
• Testing begins at the component level and works outward toward the integration of the entire computer-based system
• Testing is conducted by the developer of the software and (for large projects) by an independent test group
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy
• Software testing is part of a broader group of activities called verification and validation that are involved in
software quality assurance
– Verification: the set of activities that ensure that software correctly implements a specific function or algorithm
– Validation: the set of activities that ensure that the software that has been built is traceable to customer requirements
• Common misconceptions
– The software should be given to a secret team of testers who will test it unmercifully
– The testers get involved with the project only when the testing steps are about to begin
• An independent test group (ITG)
– Removes the inherent problems associated with letting the builder test the software that has been built
– Works closely with the software developer during analysis and design to ensure that thorough testing occurs
• Unit testing
• Integration testing
• Validation testing
• System testing
• Unit testing
– Exercises specific paths in a component's control structure to ensure complete coverage and
maximum error detection
– Components are then assembled and integrated
• Integration testing
– Focuses on inputs and outputs, and how well the components fit together and work together
• Validation testing
– Provides final assurance that the software meets all functional, behavioral, and performance
requirements
• System testing
– Verifies that all system elements (software, hardware, people, databases) mesh properly and that
overall system function and performance is achieved
• Testing must be broadened to include the detection of errors in analysis and design models
• Unit testing loses some of its meaning and integration testing changes significantly
• Uses the same philosophy as conventional software testing, but a different approach
• Test "in the small" and then work out to testing "in the large"
– Testing in the small involves class attributes and operations; the main focus is on communication
and collaboration within the class
– Testing in the large involves a series of regression tests to uncover errors due to communication and
collaboration among classes
• Every time a user executes the software, the program is being tested
• Sadly, testing usually stops when a project is running out of time, money, or both
• One approach is to divide the test results into various severity levels
– Then consider testing to be complete when certain levels of errors no longer occur or have been
repaired or eliminated
• Understand the user of the software (through use cases) and develop a profile for each user category
• Develop a testing plan that emphasizes rapid cycle testing to get quick feedback to control quality levels and
adjust the test strategy
• Build robust software that is designed to test itself and can diagnose certain kinds of errors
• Use effective formal technical reviews as a filter prior to testing to reduce the amount of testing required
• Conduct formal technical reviews to assess the test strategy and test cases themselves
• Develop a continuous improvement approach for the testing process through the gathering of metrics
• When testing resources are limited, concentrate on critical modules and those with high cyclomatic complexity
• Module interface
– Ensure that information flows properly into and out of the module
• Local data structures
– Ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution
• Boundary conditions
– Ensure that the module operates properly at boundary values established to limit or restrict
processing
• Independent paths
– Paths are exercised to ensure that all statements in a module have been executed at least once
• Common errors uncovered during unit testing
– Expectation of equality when precision error makes equality unlikely (e.g., using == with float types)
– Error descriptions that do not provide enough information to assist in locating the cause of the error
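A short illustration of the float-equality pitfall listed above, using Python's standard math.isclose for the tolerant comparison:

```python
import math

# Naive check: 0.1 + 0.2 accumulates binary floating-point error,
# so exact equality fails even though the values are "equal" on paper.
total = 0.1 + 0.2
print(total == 0.3)              # False: total is 0.30000000000000004

# Safer check: compare within a tolerance instead of expecting equality.
print(math.isclose(total, 0.3))  # True (default relative tolerance 1e-09)
```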
• Driver
– A simple main program that accepts test case data, passes such data to the component being tested,
and prints the returned results
• Stubs
– Serve to replace modules that are subordinate to (called by) the component to be tested
– It uses the module’s exact interface, may do minimal data manipulation, provides verification of
entry, and returns control to the module undergoing testing
– Both must be written but don’t constitute part of the installed software product
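A minimal sketch of a driver and a stub around a hypothetical component; the pricing function and the canned discount rates are invented for illustration:

```python
# Hypothetical component under test: computes a discounted price.
# In the real system it would call a subordinate rate-lookup module.
def apply_discount(price, customer_id, lookup_rate):
    rate = lookup_rate(customer_id)   # subordinate call, replaced by a stub
    return round(price * (1 - rate), 2)

# Stub: stands in for the unfinished rate-lookup module. It honors the
# exact interface but returns canned data instead of querying a database.
def lookup_rate_stub(customer_id):
    return 0.10 if customer_id == "C1" else 0.0

# Driver: a simple "main" that feeds test case data to the component
# and prints the returned results for inspection.
if __name__ == "__main__":
    for price, cid, expected in [(100.0, "C1", 90.0), (50.0, "C2", 50.0)]:
        result = apply_discount(price, cid, lookup_rate_stub)
        print(f"price={price} customer={cid} -> {result} (expected {expected})")
```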
Integration Testing
• Defined as a systematic technique for constructing the software architecture
– At the same time integration is occurring, conduct tests to uncover errors associated with interfaces
• Objective is to take unit tested modules and build a program structure based on the prescribed design
• Two approaches
– Non-incremental ("big bang") integration: all components are combined in advance and the entire program is tested as a whole
• Chaos results
• Once a set of errors is corrected, more errors occur, and testing appears to enter an endless loop
– Incremental integration: the program is constructed and tested in small increments
• Three kinds
– Top-down integration
– Bottom-up integration
– Sandwich integration
Top-down Integration
• Modules are integrated by moving downward through the control hierarchy, beginning with the main
module
• Advantages
– This approach verifies major control or decision points early in the test process
• Disadvantages
– Stubs need to be created to substitute for modules that have not been built or tested yet; this code
is later discarded
– Because stubs are used to replace lower level modules, no significant data flow can occur until much
later in the integration/testing process
Bottom-up Integration
• Integration and testing starts with the most atomic modules in the control hierarchy
• Advantages
– This approach verifies low-level data processing early in the testing process
• Disadvantages
– Driver modules need to be built to test the lower-level modules; this code is later discarded or
expanded into a full-featured version
– Drivers inherently do not contain the complete algorithms that will eventually use the services of the
lower-level modules; consequently, testing may be incomplete or more testing may be needed later
when the upper level modules are available
Sandwich Integration
• Occurs both at the highest level modules and also at the lowest level modules
• Proceeds using functional groups of modules, with each group completed before the next
– High and low-level modules are grouped based on the control and data processing they provide for a
specific program feature
– Integration within the group progresses in alternating steps between the high and low level modules
of the group
– When integration for a certain functional group is complete, integration and testing moves onto the
next group
• Reaps the advantages of both types of integration while minimizing the need for drivers and stubs
• Requires a disciplined approach so that integration doesn’t tend towards the “big bang” scenario
Regression Testing
• Each new addition or change to baselined software may cause problems with functions that previously
worked flawlessly
• Regression testing re-executes a subset of tests that have already been conducted
– Helps to ensure that changes do not introduce unintended behavior or additional errors
• The regression test suite contains three classes of test cases
– A representative sample of tests that exercise all software functions
– Additional tests that focus on software functions that are likely to be affected by the change
– Tests that focus on the actual software components that have been changed
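A minimal sketch of how such a subset might be tagged and re-run with pytest; the authenticate function, the user table, and the "regression" marker name are illustrative assumptions, not part of the original notes:

```python
import pytest

# Toy system under test (stands in for the real component that changed).
_USERS = {"alice": "s3cret"}

def authenticate(user, password):
    return _USERS.get(user) == password

# Tests tagged "regression" form the re-executed subset; the marker name is
# our own choice and would be registered under [pytest] markers in pytest.ini.
@pytest.mark.regression
def test_valid_login_still_works():
    assert authenticate("alice", "s3cret")

@pytest.mark.regression
def test_bad_password_still_rejected():
    assert not authenticate("alice", "wrong")

def test_unrelated_new_feature():
    assert authenticate("bob", "x") is False  # full-suite test, not in the quick pass

# After each change, run only the regression subset:
#   pytest -m regression
```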
Smoke Testing
– The term comes from hardware testing: power is applied and a technician checks for sparks, smoke, or other dramatic signs of fundamental failure
– A series of breadth tests is designed to expose errors that will keep the build from properly
performing its function
• The goal is to uncover “show stopper” errors that have the highest likelihood of throwing
the software project behind schedule
– The build is integrated with other builds and the entire product is smoke tested daily
• Daily testing gives managers and practitioners a realistic assessment of the progress of the
integration testing
– Daily testing uncovers incompatibilities and show-stoppers early in the testing process, thereby
reducing schedule impact
– Smoke testing is likely to uncover both functional errors and architectural and component-level
design errors
– Smoke testing will probably uncover errors in the newest components that were integrated
– As integration testing progresses, more software has been integrated and more has been
demonstrated to work
• Class testing (the object-oriented analog of unit testing)
– Focuses on operations encapsulated by the class and the state behavior of the class
• Drivers can be used
– To test operations at the lowest level and for testing whole groups of classes
– To replace the user interface so that tests of system functionality can be conducted prior to implementation of the actual interface
• Stubs can be used
– In situations in which collaboration between classes is required but one or more of the collaborating classes has not yet been fully implemented
– Thread-based testing
• Integrates the set of classes required to respond to one input or event for the system
– Use-based testing
• First tests the independent classes that use very few, if any, server classes
• Then the next layer of classes, called dependent classes, are integrated
• This sequence of testing layers of dependent classes continues until the entire system is constructed
Validation Testing
• Validation testing follows integration testing and provides final assurance that
– All functional, behavioral, and performance requirements are satisfied
– Documentation is correct
– Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability)
• A configuration review or audit ensures that all elements of the software configuration have been properly
developed, cataloged, and have the necessary detail for entering the support phase of the software life cycle
• Alpha testing
– Conducted at the developer's site by end users in a controlled environment, with the developer present to observe and record errors and usage problems
• Beta testing
– Conducted at end-user sites; it serves as a live application of the software in an environment that cannot be controlled by the developer
– The end-user records all problems that are encountered and reports these to the developers at
regular intervals
• After beta testing is complete, software engineers make software modifications and prepare for release of
the software product to the entire customer base
System Testing
Different Types
• Recovery testing
– Forces the software to fail in a variety of ways and verifies that recovery is properly performed
– Tests reinitialization, checkpointing mechanisms, data recovery, and restart for correctness
• Security testing
– Verifies that protection mechanisms built into a system will, in fact, protect it from improper access
• Stress testing
– Executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
• Performance testing
– Tests the run-time performance of software within the context of an integrated system
– Often coupled with stress testing and usually requires both hardware and software instrumentation
– Can uncover situations that lead to degradation and possible system failure
The Art of Debugging
Debugging Process
• Results are assessed and the difference between expected and actual performance is encountered
• The debugging process attempts to match symptom with cause, thereby leading to error correction
• The symptom may be caused by human error that is not easily traced
• The symptom may be a result of timing problems, rather than processing problems
• It may be difficult to accurately reproduce input conditions, such as asynchronous real-time information
• The symptom may be intermittent such as in embedded systems involving both hardware and software
• The symptom may be due to causes that are distributed across a number of tasks running on different processors
Debugging Strategies
• Debugging methods and tools are not a substitute for careful evaluation based on a complete design model
and clear source code
• Three main strategies
– Brute force
– Backtracking
– Cause elimination
• Brute force
– Involves the use of memory dumps, run-time traces, and output statements
• Backtracking
– The method starts at the location where a symptom has been uncovered
– The source code is then traced backward (manually) until the location of the cause is found
– In large programs, the number of potential backward paths may become unmanageably large
• Cause elimination
– Involves the use of induction or deduction and introduces the concept of binary partitioning
– Induction (specific to general): Prove that a specific starting value is true; then prove the general
case is true
– Deduction (general to specific): Show that a specific conclusion follows from a set of general
premises
• Data related to the error occurrence are organized to isolate potential causes
• A cause hypothesis is devised, and the aforementioned data are used to prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows promise, data are refined in an attempt to
isolate the bug
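As one illustration of binary partitioning, the sketch below repeatedly halves a failing batch of input data to isolate the item that provokes the symptom; the record list and the failure predicate are assumptions made for the example:

```python
# Binary partitioning applied to fault isolation: keep splitting the data
# that provokes the failure in half and retain the half that still fails,
# narrowing a large failing input down to a single culprit.
def first_failing_item(items, still_fails):
    lo, hi = 0, len(items)          # invariant: items[lo:hi] still fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if still_fails(items[lo:mid]):
            hi = mid                # failure lives in the left half
        else:
            lo = mid                # otherwise it lives in the right half
    return items[lo]

# Hypothetical defect: the system under test chokes on negative values.
records = [3, 8, 1, 9, -4, 7, 2]
culprit = first_failing_item(records, lambda batch: any(r < 0 for r in batch))
print(culprit)   # -4
```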
• What next bug might be introduced by the fix that I’m about to make?
– The source code (and even the design) should be studied to assess the coupling of logic and data
structures related to the fix
• What could we have done to prevent this bug in the first place?
– By correcting the process as well as the product, the bug will be removed from the current program
and may be eliminated from all future programs
Software Testing Techniques
Software Testability
• Operable
– The software operates cleanly; the better it works, the more efficiently it can be tested
• Observable
– The results of each test case are readily observed; what you see is what you test
• Controllable
– The states and variables of the software can be controlled directly by the tester
• Decomposable
– The software is built from independent modules that can be tested independently
• Simple
– The program exhibits functional, structural, and code simplicity; the less there is to test, the more quickly it can be tested
• Stable
– Changes to the software during testing are infrequent and do not invalidate existing tests
• Understandable
– The design and its dependencies are well understood and documentation is accessible; the more information available, the smarter the testing
Test Characteristics
– The tester must understand the software and how it might fail
– Testing time is limited; one test should not serve the same purpose as another test
– Tests that have the highest likelihood of uncovering a whole class of errors should be used
– Each test should be executed separately; combining a series of tests could cause side effects and
mask certain errors
• Black-box testing
– Knowing the specified function that a product has been designed to perform, test to see if that
function is fully operational and error free
• White-box testing
– Knowing the internal workings of a product, test that all internal operations are performed according
to specifications and all internal components have been exercised
White-box Testing
• Uses the control structure part of component-level design to derive the test cases
– Guarantee that all independent paths within a module have been exercised at least once
– Execute all loops at their boundaries and within their operational bounds
Basis Path Testing
• Enables the test case designer to derive a logical complexity measure of a procedural design
• Uses this measure as a guide for defining a basis set of execution paths
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at
least one time during testing
Flow Graph Notation
• A circle in a graph represents a node, which stands for a sequence of one or more procedural statements
– Each compound condition in a conditional expression containing one or more Boolean operators
(e.g., and, or) is represented by a separate predicate node
– A predicate node has two edges leading out from it (True and False)
• When counting regions, include the area outside the graph as a region, too
Independent Program Paths
• Defined as a path through the program from the start node until the end node that introduces at least one
new set of processing statements or a new condition (i.e., new nodes)
• Must move along at least one edge that has not been traversed before by a previous path
– Path 1: 0-1-11
– Path 2: 0-1-2-3-4-5-10-1-11
– Path 3: 0-1-2-3-6-8-9-10-1-11
– Path 4: 0-1-2-3-6-7-9-10-1-11
• The number of paths in the basis set is determined by the cyclomatic complexity
Cyclomatic Complexity
• Provides an upper bound for the number of tests that must be conducted to ensure all statements have
been executed at least once
– V(G) = E – N + 2, where E is the number of edges and N is the number of nodes in graph G
– V(G) = P + 1, where P is the number of predicate nodes in graph G
– V(G) = the number of regions of the flow graph (4 in the example above, matching the four independent paths)
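The four paths listed above are enough to reconstruct the example flow graph's nodes and edges, so V(G) = E – N + 2 can be checked mechanically; a small sketch:

```python
# Derives E and N from the four basis paths listed above and checks that
# V(G) = E - N + 2 agrees with the number of independent paths (4).
paths = [
    [0, 1, 11],
    [0, 1, 2, 3, 4, 5, 10, 1, 11],
    [0, 1, 2, 3, 6, 8, 9, 10, 1, 11],
    [0, 1, 2, 3, 6, 7, 9, 10, 1, 11],
]

nodes, edges = set(), set()
for p in paths:
    nodes.update(p)
    edges.update(zip(p, p[1:]))   # consecutive node pairs are the edges

E, N = len(edges), len(nodes)
print(f"E={E}, N={N}, V(G) = E - N + 2 = {E - N + 2}")   # E=14, N=12, V(G)=4
```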
Deriving Test Cases
1) Using the design or code as a foundation, draw a corresponding flow graph
2) Determine the cyclomatic complexity of the flow graph
3) Determine a basis set of linearly independent paths
4) Prepare test cases that will force execution of each path in the basis set
Loop Testing – General
• A white-box testing technique that focuses exclusively on the validity of loop constructs
– Simple loops
– Nested loops
– Concatenated loops
– Unstructured loops
• Simple loops (where n is the maximum number of allowable passes)
1) Skip the loop entirely
2) Make only one pass, then two passes, through the loop
3) Make m passes through the loop (where m < n); then make n-1, n, and n+1 passes
• Nested loops
1) Start at the innermost loop; set all other loops to minimum values
2) Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values; add other tests for out-of-range or excluded values
3) Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to "typical" values
• Concatenated loops
– For independent loops, use the same approach as for simple loops; if the loops are dependent, treat them as nested loops
• Unstructured loops
– Whenever possible, redesign such loops to use structured constructs; depending on the resultant design, apply testing for simple loops, nested loops, or concatenated loops
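A brief sketch of simple-loop test values applied to a hypothetical loop; the function, the list size n = 10, and the typical value m = 5 are assumptions for illustration:

```python
# Hypothetical loop under test: sums the first `count` items of a list
# that holds at most n = 10 items.
def sum_first(items, count):
    total = 0
    for i in range(count):
        total += items[i]
    return total

n = 10
data = list(range(1, n + 1))   # [1..10]

# Classic simple-loop test values: skip the loop entirely, one pass,
# two passes, a typical m passes, and n-1 / n / n+1 passes.
for passes in [0, 1, 2, 5, n - 1, n, n + 1]:
    try:
        print(passes, "->", sum_first(data, passes))
    except IndexError as exc:          # n+1 should expose the boundary error
        print(passes, "-> boundary violation:", exc)
```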
Black-box Testing
• Complements white-box testing by uncovering different classes of errors
• Focuses on the functional requirements and the information domain of the software
• Used during the later stages of testing after white box testing has been performed
• The tester identifies a set of input conditions that will fully exercise all functional requirements for a program
– Reduce, by a count greater than one, the number of additional test cases that must be designed to
achieve reasonable testing
– Tell us something about the presence or absence of classes of errors, rather than an error associated
only with the specific task at hand
• Black-box testing attempts to find errors in the following categories
– Incorrect or missing functions
– Interface errors
– Errors in data structures or external data base access
– Behavior or performance errors
– Initialization and termination errors
• It also answers questions such as: What data rates and data volume can the system tolerate?
Equivalence Partitioning
• A black-box testing method that divides the input domain of a program into classes of data from which test
cases are derived
• An ideal test case single-handedly uncovers a complete class of errors, thereby reducing the total number of
test cases that must be developed
• Test case design is based on an evaluation of equivalence classes for an input condition
• An equivalence class represents a set of valid or invalid states for input conditions
• From each equivalence class, test cases are selected so that the largest number of attributes of an equivalence class are exercised at once
• If an input condition specifies a range, one valid and two invalid equivalence classes are defined
• If an input condition requires a specific value, one valid and two invalid equivalence classes are defined
• If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined
– Example input set: {-2.5, 7.3, 8.4}; equivalence classes: {-2.5, 7.3, 8.4} and {any other x}
• If an input condition is a Boolean value, one valid and one invalid class are defined
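A minimal sketch of these guidelines, assuming a hypothetical input condition that requires a quantity in the range 1..100; one representative value per equivalence class is enough:

```python
# Hypothetical input condition: a quantity field that must lie in 1..100.
def accept_quantity(q):
    return 1 <= q <= 100

# One representative per equivalence class; any other member of the same
# class is assumed to behave identically.
cases = {
    "invalid: below range": (-5, False),   # class q < 1
    "valid: within range":  (42, True),    # class 1 <= q <= 100
    "invalid: above range": (250, False),  # class q > 100
}
for name, (value, expected) in cases.items():
    assert accept_quantity(value) == expected, name
print("all equivalence-class representatives passed")
```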
Boundary Value Analysis
• Boundary value analysis is a test case design method that complements equivalence partitioning
– It derives test cases from both the input domain and the output domain
• 1. If an input condition specifies a range bounded by values a and b, test cases should be designed with
values a and b as well as values just above and just below a and b
• 2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers; values just above and just below the minimum and maximum are also tested
• 3. Apply guidelines 1 and 2 to output conditions; produce output that reflects the minimum and the maximum values expected; also test the values just below and just above
• 4. If internal program data structures have prescribed boundaries (e.g., an array), design a test case to exercise the data structure at its minimum and maximum boundaries
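A companion sketch for boundary value analysis on the same assumed 1..100 range, probing a, b, and the values just above and just below each:

```python
# Same hypothetical 1..100 range, now probed at the boundaries rather
# than at class interiors.
def accept_quantity(q):
    return 1 <= q <= 100

a, b = 1, 100
bva_cases = [
    (a - 1, False), (a, True), (a + 1, True),   # around the lower bound
    (b - 1, True), (b, True), (b + 1, False),   # around the upper bound
]
for value, expected in bva_cases:
    assert accept_quantity(value) == expected, value
print("all boundary-value cases passed")
```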
Object-Oriented Testing
Methods
• It is necessary to test an object-oriented system at a variety of different levels
• The goal is to uncover errors that may occur as classes collaborate with one another and subsystems
communicate across architectural layers
– Testing begins "in the small" on methods within a class and on collaboration between classes
– As class integration occurs, use-based testing and fault-based testing are applied
– Finally, use cases are used to uncover errors during the software validation phase
• Object-oriented testing focuses on designing appropriate sequences of methods to exercise the states of a
class
• Because attributes and methods are encapsulated in a class, testing methods from outside of a class is
generally unproductive
• Testing requires reporting on the state of an object, yet encapsulation can make this information somewhat
difficult to obtain
• Built-in methods should be provided to report the values of class attributes in order to get a snapshot of the
state of an object
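One way such a built-in reporting method might look; the Account class and its attributes are invented for illustration:

```python
# Minimal sketch: a class that exposes a snapshot of its own state so a
# test can observe attributes without breaking encapsulation elsewhere.
class Account:
    def __init__(self, owner):
        self._owner = owner
        self._balance = 0.0
        self._frozen = False

    def deposit(self, amount):
        if not self._frozen:
            self._balance += amount

    def freeze(self):
        self._frozen = True

    def state_snapshot(self):
        """Built-in reporting method intended for use by tests."""
        return {"owner": self._owner, "balance": self._balance,
                "frozen": self._frozen}

acct = Account("alice")
acct.deposit(25.0)
acct.freeze()
acct.deposit(10.0)   # ignored while frozen
assert acct.state_snapshot() == {"owner": "alice", "balance": 25.0,
                                 "frozen": True}
```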
• Inheritance does not eliminate the need to test subclasses
– If a subclass is used in an entirely different context than the super class, the super class test cases will have little applicability and a new set of tests must be designed
Applicability of Conventional Testing Methods
– Basis path testing and loop testing can help ensure that every statement in a method has been tested
– Use cases can provide useful input in the design of black-box tests
Fault-based Testing
• The objective in fault-based testing is to design tests that have a high likelihood of uncovering plausible
faults
– The tester looks for plausible faults (i.e., aspects of the implementation of the system that may
result in defects)
– To determine whether these faults exist, test cases are designed to exercise the design or code
• If the analysis and design models can provide insight into what is likely to go wrong, then fault-based testing
can find a significant number of errors
• Integration testing looks for plausible faults in method calls or message connections (i.e., client/server
exchange)
– Unexpected result
– Incorrect invocation
• The behavior of a method must be examined to determine the occurrence of plausible faults as methods are
invoked
• Testing should exercise the attributes of an object to determine whether proper values occur for distinct
types of object behavior
• The focus of integration testing is to determine whether errors exist in the calling code, not the called code
– Interactions among subsystems: behavior of one subsystem creates circumstances that cause
another subsystem to fail
• Scenario-based testing
– It concentrates on what the user does, not what the product does
– This means capturing the tasks (via use cases) that the user has to perform, then applying them as tests
– Scenario-based testing tends to exercise multiple subsystems in a single test
• Random testing
– Using the methods for a class, a variety of method sequences are generated randomly and then executed
– The goal is to detect order dependencies among the methods (sequences that cause incorrect behavior) and make appropriate adjustments to the design of the methods
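A minimal sketch of random method-sequence testing; the Stack class and its operation set are assumptions, and the deliberate order dependency (pop before any push) stands in for the kind of defect this technique surfaces:

```python
import random

# Random sequences of method calls on a hypothetical Stack class,
# looking for order dependencies (e.g., pop before any push).
class Stack:
    def __init__(self):
        self._items = []
    def push(self, x=1):
        self._items.append(x)
    def pop(self):
        return self._items.pop()    # order-dependent: fails on an empty stack
    def peek(self):
        return self._items[-1] if self._items else None

random.seed(7)
for trial in range(100):
    s = Stack()
    sequence = random.choices(["push", "pop", "peek"], k=6)
    try:
        for op in sequence:
            getattr(s, op)()
    except IndexError:
        print("order dependency exposed by sequence:", sequence)
        break
```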
• State-based partitioning categorizes class methods based on their ability to change the state of the class
– Tests are designed in a way that exercise methods that change state and those that do not change
state
• Attribute-based partitioning categorizes class methods based on the attributes that they use
– Methods are partitioned into those that read an attribute, modify an attribute, or do not reference
the attribute at all
• Category-based partitioning categorizes class methods based on the generic function that each performs
– Example categories are initialization methods, computational methods, and termination methods
Interclass Test Case Design
• The following sequence of steps can be used to generate multiple class random test cases
1) For each client class, use the list of class methods to generate a series of random test sequences; use
these methods to send messages to server classes
2) For each message that is generated, determine the collaborator class and the corresponding method
in the server object
3) For each method in the server object (invoked by messages from the client object), determine the
messages that it transmits
4) For each of these messages, determine the next level of methods that are invoked and incorporate
these into the test sequence
– Method sequences should cause the object to transition through all allowable states
• More test cases should be derived to ensure that all behaviors for the class have been exercised based on
the behavior life history of the object
• The state diagram can be traversed in a "breadth-first" approach by exercising only a single transition at a
time
– When a new transition is to be tested, only previously tested transitions are used
Product Metrics
• Help software engineers to better understand the attributes of models and assess the quality of the
software
• Help software engineers to gain insight into the design and construction of the software
• Focus on specific attributes of software engineering work products resulting from analysis, design, coding,
and testing
• Provide a systematic way to assess quality based on a set of clearly defined rules
• Provide an “on-the-spot” rather than “after-the-fact” insight into the software development
Software Quality
• Definition:
– Explicit software requirements are the foundation from which quality is measured. Lack of
conformance to requirements is lack of quality
– Specific standards define a set of development criteria that guide the manner in which software is
engineered. If the criteria are not followed, lack of quality will most surely result
– There is a set of implicit requirements that often goes unmentioned (e.g., ease of use). If software
conforms to its explicit requirements but fails to meet implicit requirements, software quality is
suspect
• Some quality factors can be measured directly (e.g., defects uncovered during testing); others can be measured only indirectly (e.g., usability or maintainability)
• Functionality
– The degree to which the software satisfies stated needs
• Reliability
– The amount of time that the software is available for use
• Usability
– The degree to which the software is easy to use
• Efficiency
– The degree to which the software makes optimal use of system resources
• Maintainability
– The ease with which repair and enhancement may be made to the software
• Portability
– The ease with which the software can be transferred from one environment to another
• Measure
– Provides a quantitative indication of the extent, amount, dimension, capacity, or size of some
attribute of a product or process
• Measurement
– The act of determining a measure
• Metric
– (IEEE) A quantitative measure of the degree to which a system, component, or process possesses a
given attribute
• Indicator
– A metric or combination of metrics that provides insight into the software process, a software
project, or the product itself
• Formulation
– The derivation (i.e., identification) of software measures and metrics appropriate for the
representation of the software that is being considered
• Collection
– The mechanism used to accumulate data required to derive the formulated metrics
• Analysis
– The computation of metrics and the application of mathematical tools
• Interpretation
– The evaluation of metrics in an effort to gain insight into the quality of the representation
• Feedback
– Recommendations derived from the interpretation of product metrics and passed on to the software
development team
• A metric should have desirable mathematical properties
– It should not be computed on a ratio scale if it is composed of components measured on an ordinal scale
• If a metric represents a software characteristic that increases when positive traits occur or decreases when
undesirable traits are encountered, the value of the metric should increase or decrease in the same manner
• Each metric should be validated empirically in a wide variety of contexts before being published or used to
make decisions
• Valid statistical techniques should be applied to establish relationships between internal product attributes
and external quality characteristics
• GQM technique identifies meaningful metrics for any part of the software process
– Establish an explicit measurement goal that is specific to the process activity or product
characteristic that is to be assessed
– Define a set of questions that must be answered in order to achieve the goal
– Identify well-formulated metrics that help to answer these questions
• Example goal definition: Analyze the SafeHome software architecture for the purpose of evaluating architecture components. Do this with respect to the ability to make SafeHome more extensible from the viewpoint of the software engineers, who are performing the work in the context of product enhancement over the next three years.
• Example questions for this goal definition
– Is the complexity of each component within bounds that will facilitate modification and extension?
• Attributes of effective software metrics
– Simple and computable: it should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time
– Empirically and intuitively persuasive: the metric should satisfy the engineer's intuitive notions about the product attribute under consideration
– Consistent in its use of units and dimensions: the mathematical computation of the metric should use measures that do not lead to bizarre combinations of units
– Programming-language independent: metrics should be based on the analysis model, the design model, or the structure of the program itself
• Function-based metrics
– Provide an indirect measure of the functionality that is packaged within the software
• System size
– Measures the overall size of the system defined in terms of information available as part of the
analysis model
• Specification quality
Example: Function Points
• First proposed by Albrecht in 1979; hundreds of books and papers have been written on function points since then
• Can be used effectively as a means for measuring the functionality delivered by a system
– Estimate the cost or effort required to design, code, and test the software
– Forecast the number of components and/or the number of projected source code lines in the
implemented system
1) Count the five information domain values of the application
• External inputs
– Each external input originates from a user or is transmitted from another application
– They are not inquiries (those are counted under another category)
• External outputs
– Each external output is derived within the application and provides information to the user
– Individual data items within a report or screen are not counted separately
• External inquiries
– An external inquiry is defined as an online input that results in the generation of some immediate software response
• Internal logical files
– Each internal logical file is a logical grouping of data that resides within the application’s boundary and is maintained via external inputs
• External interface files
– Each external interface file is a logical grouping of data that resides external to the application but provides data that may be of use to the application
2) Weight each count by the complexity (simple, average, or complex) of the corresponding domain value and sum the weighted counts to obtain the count total
3) Evaluate and sum up the 14 value adjustment factors (sample questions below)
• “Fi” refers to the 14 value adjustment factors, with each ranging in value from 0 (not important) to 5 (absolutely essential)
2) Are specialized data communications required to transfer information to or from the application?
4) Is performance critical?
7) Does the on-line data entry require the input transaction to be built over multiple screens or operations?
14) Is the application designed to facilitate change and for ease of use by the user?
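4) Compute the number of function points: FP = count total × [0.65 + 0.01 × Σ(Fi)]

A worked sketch with assumed domain counts; the weights shown are the standard average-complexity weights for the five information domain values, and the Σ(Fi) total is invented for the example:

```python
# Worked function-point example under assumed counts.
counts = {
    "external inputs":          (10, 4),   # (count, average weight)
    "external outputs":         ( 7, 5),
    "external inquiries":       ( 5, 4),
    "internal logical files":   ( 3, 10),
    "external interface files": ( 2, 7),
}
count_total = sum(n * w for n, w in counts.values())   # = 139

sum_fi = 42   # assumed total of the 14 adjustment factors (each 0..5)
fp = count_total * (0.65 + 0.01 * sum_fi)
print(f"count total = {count_total}, FP = {fp:.1f}")   # FP = 148.7
```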
• Assume that past project data for a software development group indicates that
– An average of three errors per function point are found during analysis and design reviews
– An average of four errors per function point are found during unit and integration testing
• This data can help project managers revise their earlier estimates
• This data can also help software engineers estimate the overall implementation size of their code and assess
the completeness of their review and testing activities
• Component-level metrics
– Measure the complexity of software components and other characteristics that have a bearing on
quality
• Architectural design metrics
– Place emphasis on the architectural structure and effectiveness of modules or components within the architecture
– They are “black box” in that they do not require any knowledge of the inner workings of a particular software component
• Fan out: the number of modules immediately subordinate to the module i, that is, the number of modules
directly invoked by module i
• Structural complexity
– S(i) = fout(i)², where fout(i) is the “fan out” of module i
• Data complexity
– D(i) = v(i)/[fout(i) + 1], where v(i) is the number of input and output variables that are passed to and
from module i
• System complexity
– C(i) = S(i) + D(i), the sum of the structural and data complexity of module i
• As each of these complexity values increases, the overall architectural complexity of the system also
increases
• This leads to greater likelihood that the integration and testing effort will also increase
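A small sketch of the three measures above for a single module, with an assumed fan-out of 3 and 8 interface variables:

```python
# Minimal sketch of the structural, data, and system complexity measures
# defined above, using assumed fan-out and interface-variable counts.
def structural_complexity(fan_out):
    return fan_out ** 2                      # S(i) = fout(i)^2

def data_complexity(num_vars, fan_out):
    return num_vars / (fan_out + 1)          # D(i) = v(i) / [fout(i) + 1]

def system_complexity(fan_out, num_vars):
    return structural_complexity(fan_out) + data_complexity(num_vars, fan_out)

fout, v = 3, 8                               # module calls 3 modules, passes 8 vars
print("S =", structural_complexity(fout))    # 9
print("D =", data_complexity(v, fout))       # 2.0
print("C =", system_complexity(fout, v))     # 11.0
```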
• Shape complexity
– r = a/n, where a is the number of arcs (module-to-module connections) and n is the number of nodes (modules); r measures the connectivity density of the architecture
• Size
• Coupling
– The number of collaborations between classes or the number of methods called between objects
• Cohesion
– The cohesion of a class is the degree to which its set of properties is part of the problem or design
domain
• Primitiveness
– The degree to which a method in a class is atomic (i.e., the method cannot be constructed out of a
sequence of other methods provided by the class)
• Depth of the inheritance tree (DIT)
– The maximum length from the derived class (the node) to the base class (the root)
– Indicates the potential difficulties when attempting to predict the behavior of a class because of the number of inherited methods
• Number of children (NOC)
– As the number of children grows, reuse increases
– However, the abstraction represented by the parent class can be diluted by inappropriate children
• Coupling between object classes (CBO)
– Measures the number of collaborations a class has with any other classes
• Response for a class (RFC)
– This is the set of methods that can potentially be executed in a class in response to a public method call from outside the class
– As the response value increases, the effort required for testing also increases, as does the overall design complexity of the class
• Lack of cohesion in methods (LCOM)
– This measures the number of methods that access one or more of the same instance variables (i.e., attributes) of a class
– As the measure increases, methods become more coupled to one another via attributes, thereby increasing the complexity of the class design
• Complexity metrics
– Measure the logical complexity of source code (can also be applied to component-level design)
• Length metrics
– Measure the size of the source code (e.g., Halstead’s “software science” measures based on operator and operand counts)
“These metrics can be used to assess source code complexity, maintainability, and testability, among other
characteristics”
Metrics for Testing
• Statement and branch coverage metrics
– Lead to the design of test cases that provide program coverage
• Defect-related metrics
– Focus on defects (i.e., bugs) found, rather than on the tests themselves
– Provide a real-time indication of the effectiveness of tests that have been conducted
• In-process metrics
– Process-related measures that can be collected as testing is conducted
Metrics for Maintenance
• Software Maturity Index (SMI)
– Provides an indication of the stability of a software product based on changes that occur for each release
• SMI = [MT – (Fa + Fc + Fd)] / MT, where MT is the number of modules in the current release, Fa is the number of modules added, Fc is the number of modules changed, and Fd is the number of modules deleted since the preceding release
• As the SMI approaches 1.0, the software product begins to stabilize
• The average time to produce a release of a software product can be correlated with the SMI
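A worked SMI sketch with assumed release data; the module counts are invented for illustration:

```python
# Assumed release data:
#   MT = 940 modules in the current release
#   Fa = 40 added, Fc = 90 changed, Fd = 12 deleted since the last release
MT, Fa, Fc, Fd = 940, 40, 90, 12
smi = (MT - (Fa + Fc + Fd)) / MT
print(f"SMI = {smi:.3f}")   # 0.849 -- still some distance from a stable 1.0
```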