SWEBOK
or containing their effects if recovery is not possible. The most common fault tolerance strategies
include backing up and retrying, using auxiliary
code, using voting algorithms, and replacing an
erroneous value with a phony value that will have
a benign effect.
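A minimal Python sketch of two of these strategies, combining backup-and-retry with a benign substitute value (the read_sensor operation and its failure rate are invented for illustration):

```python
import random

def read_sensor():
    # Hypothetical flaky operation (name and failure rate invented for
    # illustration); stands in for any call that can fail transiently.
    if random.random() < 0.5:
        raise IOError("transient sensor fault")
    return 21.7

def read_with_fault_tolerance(attempts=3, benign_default=20.0):
    # Strategy 1: back up and retry the failed operation.
    for _ in range(attempts):
        try:
            return read_sensor()
        except IOError:
            continue
    # Strategy 2: replace the erroneous value with a benign substitute
    # so that downstream code keeps working.
    return benign_default
```

With `attempts=0` the retry loop is skipped entirely and the benign default is returned, which makes the fallback path easy to exercise in a test.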
4.6.Executable Models
[5*]
4.11.Middleware
Middleware is a broad classification for software that provides services above the operating system layer yet below the application program layer. Middleware can provide runtime containers for software components, offering message passing, persistence, and location transparency across a network. Middleware can be viewed as a connector between the components that use it. Modern message-oriented middleware usually provides an Enterprise Service Bus (ESB), which supports service-oriented interaction and communication among multiple software applications.
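The connector role can be illustrated with a toy in-process publish/subscribe bus (a deliberately simplified stand-in; real message-oriented middleware adds persistence, distribution, and location transparency):

```python
from collections import defaultdict

class MessageBus:
    """Toy in-process stand-in for message-oriented middleware:
    components communicate through the bus, never with each other directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every handler registered for the topic.
        for handler in self._subscribers[topic]:
            handler(message)

# Two components coupled only through the bus; with real middleware they
# could live in different processes or on different machines.
bus = MessageBus()
received = []
bus.subscribe("orders", received.append)
bus.publish("orders", {"id": 1})
```

The publisher never names the subscriber; the bus is the only shared dependency, which is the essence of middleware as a connector.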
4.16.Test-First Programming
Test-first programming (also known as Test-Driven Development, TDD) is a popular development style in which test cases are written prior
to writing any code. Test-first programming can
usually detect defects earlier and correct them
more easily than traditional programming styles.
Furthermore, writing test cases first forces programmers to think about requirements and design
before coding, thus exposing requirements and
design problems sooner.
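A small illustrative sketch of the style in Python's unittest framework: the test cases are written first, then just enough code to make them pass (is_leap_year is an invented example):

```python
import unittest

# Step 1 (test first): specify the expected behavior before writing any code.
class TestLeapYear(unittest.TestCase):
    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_every_fourth_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_every_four_hundredth_year(self):
        self.assertTrue(is_leap_year(2000))

# Step 2: write just enough code to make the tests pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Writing the test first forces the century/400-year rules to be stated explicitly before any implementation exists, which is exactly the requirements-exposing effect described above.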
A development environment, or integrated development environment (IDE), provides comprehensive facilities to programmers for software
construction by integrating a set of development
tools. The choices of development environments
can affect the efficiency and quality of software
construction.
In addition to basic code editing functions,
modern IDEs often offer other features like compilation and error detection from within the editor, integration with source code control, build/
test/debugging tools, compressed or outline
views of programs, automated code transforms,
and support for refactoring.
5.2.GUI Builders
[1*]
5.3.Unit Testing Tools
[1*] [2*]
Unit testing verifies the functioning of software
modules in isolation from other software elements
that are separately testable (for example, classes,
routines, components). Unit testing is often automated. Developers can use unit testing tools and frameworks to extend and create automated testing environments. With unit testing tools and frameworks, the developer can code criteria into the test to verify the unit's correctness under various data sets. Each individual test is implemented as an object, and a test runner runs all of the tests. During test execution, failed test cases are automatically flagged and reported.
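A minimal example of these mechanics with Python's unittest framework: each test case is an object, a loader collects them into a suite, and a runner executes them and reports failures (add is an invented example function):

```python
import unittest

def add(a, b):
    return a + b

# Each test method becomes a test-case object; the runner executes all of
# them and flags any failures automatically.
class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The `result` object records how many tests ran and which failed, which is the information a unit testing framework reports back to the developer.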
5.4.Profiling, Performance Analysis, and
Slicing Tools
[1*]
Matrix of Topics vs. Reference Material
McConnell 2004 [1*], Sommerville 2011 [2*]
(chapter codes below refer to [1*])

1.Software Construction Fundamentals
1.1.Minimizing Complexity: c2, c3, c7-c9, c24, c27, c28, c31, c32, c34
1.2.Anticipating Change: c3-c5, c24, c31, c32, c34
1.3.Constructing for Verification: c8, c20-c23, c31, c34
1.4.Reuse
1.5.Standards in Construction: c4
2.Managing Construction
2.1.Construction in Life Cycle Models: c2, c3, c27, c29
2.2.Construction Planning: c3, c4, c21, c27-c29
2.3.Construction Measurement: c25, c28
3.Practical Considerations
3.1.Construction Design: c3, c5, c24
3.2.Construction Languages: c4
3.3.Coding: c5-c19, c25-c26
3.4.Construction Testing: c22, c23
3.5.Construction for Reuse
3.6.Construction with Reuse
3.7.Construction Quality: c8, c20-c25
3.8.Integration: c29
4.Construction Technologies
4.1.API Design and Use
4.2.Object-Oriented Runtime Issues: c6, c7
4.3.Parameterization and Generics
4.4.Assertions, Design by Contract, and Defensive Programming: c8, c9
4.5.Error Handling, Exception Handling, and Fault Tolerance: c3, c8
4.6.Executable Models
4.7.State-Based and Table-Driven Construction Techniques: c18
4.8.Runtime Configuration and Internationalization: c3, c10
4.9.Grammar-Based Input Processing: c5
4.10.Concurrency Primitives
4.11.Middleware
4.12.Construction Methods for Distributed Software
4.13.Constructing Heterogeneous Systems
4.14.Performance Analysis and Tuning: c25, c26
4.15.Platform Standards
4.16.Test-First Programming: c22
5.Construction Tools
5.1.Development Environments
5.2.GUI Builders
5.3.Unit Testing Tools
5.4.Profiling, Performance Analysis, and Slicing Tools
CHAPTER 4
SOFTWARE TESTING
ACRONYMS
API	Application Programming Interface
TDD	Test-Driven Development
TTCN3	Testing and Test Control Notation Version 3
XP	Extreme Programming
INTRODUCTION
Software testing consists of the dynamic verification that a program provides expected behaviors
on a finite set of test cases, suitably selected from
the usually infinite execution domain.
In the above definition, italicized words correspond to key issues in describing the Software
Testing knowledge area (KA):
Dynamic: This term means that testing
always implies executing the program on
selected inputs. To be precise, the input
value alone is not always sufficient to specify a test, since a complex, nondeterministic
system might react to the same input with
different behaviors, depending on the system
state. In this KA, however, the term input
will be maintained, with the implied convention that its meaning also includes a specified input state in those cases for which it
is important. Static techniques are different
from and complementary to dynamic testing.
Static techniques are covered in the Software
Quality KA. It is worth noting that terminology is not uniform among different communities and some use the term testing also in
reference to static techniques.
Finite: Even in simple programs, so many test cases are theoretically possible that exhaustive testing could require months or years to execute.
and test plans and procedures should be systematically and continuously developed, and possibly refined, as software development proceeds.
These test planning and test designing activities
provide useful input for software designers and
help to highlight potential weaknesses, such as
design oversights/contradictions, or omissions/
ambiguities in the documentation.
For many organizations, the approach to software quality is one of prevention: it is obviously
much better to prevent problems than to correct
them. Testing can be seen, then, as a means for
providing information about the functionality
dynamic techniques (code execution). Both categories are useful. This KA focuses on dynamic
techniques.
Software testing is also related to software
construction (see Construction Testing in the
Software Construction KA). In particular, unit
and integration testing are intimately related to
software construction, if not part of it.
BREAKDOWN OF TOPICS FOR
SOFTWARE TESTING
The breakdown of topics for the Software Testing KA is shown in Figure 4.1. A more detailed
breakdown is provided in the Matrix of Topics
vs. Reference Material at the end of this KA.
The first topic describes Software Testing Fundamentals. It covers the basic definitions in the
field of software testing, the basic terminology
and key issues, and software testing's relationship with other activities.
The second topic, Test Levels, consists of two
(orthogonal) subtopics: the first subtopic lists the
levels in which the testing of large software is
traditionally subdivided, and the second subtopic
considers testing for specific conditions or properties and is referred to as Objectives of Testing.
Not all types of testing apply to every software
product, nor has every possible type been listed.
The test target and test objective together determine how the test set is identified, both with regard to its consistency (how much testing is enough for achieving the stated objective) and to its composition (which test cases should be selected for achieving the stated objective), although usually "for achieving the stated objective" remains implicit and only the first of the two italicized questions above is posed. Criteria for addressing the first question are referred to as test adequacy criteria, while those addressing the second question are the test selection criteria.
Several Test Techniques have been developed
in the past few decades, and new ones are still
being proposed. Generally accepted techniques
are covered in the third topic.
Test-Related Measures are dealt with in the
fourth topic, while the issues relative to Test Process are covered in the fifth. Finally, Software
Testing Tools are presented in topic six.
is sufficient for a specified purpose. Test adequacy criteria can be used to decide when sufficient testing will be, or has been, accomplished
[4] (see Termination in section 5.1, Practical
Considerations).
1.2.2.Testing Effectiveness / Objectives for
Testing
[1*, c11s4, c13s11]
Testing effectiveness is determined by analyzing
a set of program executions. Selection of tests to
be executed can be guided by different objectives:
it is only in light of the objective pursued that the
effectiveness of the test set can be evaluated.
1.2.3.Testing for Defect Discovery
[1*, c1s14]
in this regard is the Dijkstra aphorism that program testing can be used to show the presence of bugs, but never to show their absence [5]. The obvious reason is that complete testing is not feasible in realistic software. Because of this, testing must be driven by risk [6, part 1] and can be seen as a risk management strategy.
1.2.6.The Problem of Infeasible Paths
[1*, c4s7]
Infeasible paths are control flow paths that cannot
be exercised by any input data. They are a significant problem in path-based testing, particularly
in automated derivation of test inputs to exercise
control flow paths.
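A small illustration: in the Python function below, the path that takes both "then" branches is infeasible, because no input can satisfy x > 10 and x < 5 at once.

```python
# The control flow path that enters BOTH if-bodies is infeasible: the
# conditions x > 10 and x < 5 are mutually exclusive, so no input data
# can exercise that path. A naive automated path-based test generator
# would search for such an input in vain.
def classify(x):
    path = []
    if x > 10:
        path.append("big")
    if x < 5:
        path.append("small")
    return path
```

Of the four syntactic paths through the two decisions, only three are executable; a path-coverage criterion demanding 100% of syntactic paths can therefore never be satisfied here.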
1.2.7.Testability
[1*, c17s2]
Testing vs. Program Construction (see Construction Testing in the Software Construction KA [1*, c3s2]).
2.Test Levels
Software testing is usually performed at different levels throughout the development and maintenance processes. Levels can be distinguished
based on the object of testing, which is called
the target, or on the purpose, which is called the
objective (of the test level).
2.1.The Target of the Test
[1*, c1s7]
Testing is conducted in view of specific objectives, which are stated more or less explicitly
and with varying degrees of precision. Stating
the objectives of testing in precise, quantitative
terms supports measurement and control of the
test process.
Testing can be aimed at verifying different properties. Test cases can be designed to check that
the functional specifications are correctly implemented, which is variously referred to in the literature as conformance testing, correctness testing, or functional testing. However, several other
nonfunctional properties may be tested as well
including performance, reliability, and usability, among many others (see Models and Quality
Characteristics in the Software Quality KA).
Other important objectives for testing include
but are not limited to reliability measurement,
[1*, c12s2]
According to [7], regression testing is "the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements." In
practice, the approach is to show that software
still passes previously passed tests in a test suite
(in fact, it is also sometimes referred to as nonregression testing). For incremental development,
the purpose of regression testing is to show that
software behavior is unchanged by incremental changes to the software, except insofar as it
should. In some cases, a tradeoff must be made
between the assurance given by regression testing
every time a change is made and the resources
required to perform the regression tests, which
can be quite time consuming due to the large
number of tests that may be executed. Regression
testing involves selecting, minimizing, and/or
prioritizing a subset of the test cases in an existing test suite [8]. Regression testing can be conducted at each of the test levels described in section 2.1, The Target of the Test, and may apply to
functional and nonfunctional testing.
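As one hedged illustration of prioritization (a common greedy coverage heuristic, not the only approach), tests can be ordered by how much not-yet-covered code each one adds, so that a time-boxed regression run executes the most valuable tests first:

```python
# Illustrative sketch: greedily order regression tests by how many
# not-yet-covered statements each adds (the "additional coverage"
# heuristic). Test names and coverage data are invented.
def prioritize(tests):
    """tests: dict mapping test name -> set of covered statement ids."""
    remaining = dict(tests)
    order = []
    covered = set()
    while remaining:
        # Pick the test adding the most new coverage (ties broken by name).
        name = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(name)
        covered |= remaining.pop(name)
    return order

tests = {"t1": {1, 2}, "t2": {1, 2, 3, 4}, "t3": {5}}
order = prioritize(tests)
```

Here t2 runs first (four new statements), then t3 (one new statement), and t1 last (it adds nothing once t2 has run), which is exactly the subset-selection idea: if the run is cut short after two tests, all statements are still covered.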
2.2.6.Performance Testing
[1*, c8s6]
2.2.7.Security Testing
[1*, c8s8]
[7]
IEEE/ISO/IEC Standard 24765 defines back-to-back testing as testing "in which two or more variants of a program are executed with the same inputs, the outputs are compared, and errors are analyzed in case of discrepancies."
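A minimal back-to-back harness might look like this (the two variants are invented implementations of the same function):

```python
# Back-to-back sketch: run two variants of the same program on identical
# inputs and collect any discrepancies for later error analysis.
def variant_a(x):
    return abs(x)

def variant_b(x):
    return x if x >= 0 else -x

def back_to_back(inputs, *variants):
    discrepancies = []
    for x in inputs:
        outputs = [v(x) for v in variants]
        if len(set(outputs)) > 1:
            # Record the input and the diverging outputs for analysis.
            discrepancies.append((x, outputs))
    return discrepancies

mismatches = back_to_back(range(-3, 4), variant_a, variant_b)
```

An empty discrepancy list does not prove both variants correct, only that they agree on the tested inputs; any mismatch pinpoints an input worth analyzing.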
2.2.10.Recovery Testing
[1*, c14s2]
2.2.12.Configuration Testing
[1*, c8s5]
3.3.1.Control Flow-Based Criteria
[1*, c4]
loops, other less stringent criteria focus on limited coverage of paths, such as paths that limit loop iterations, statement coverage, branch coverage, and condition/decision testing. The adequacy of such
tests is measured in percentages; for example,
when all branches have been executed at least
once by the tests, 100% branch coverage has
been achieved.
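The percentage bookkeeping can be sketched as follows (an illustrative hand-instrumented example; real coverage tools instrument the code automatically):

```python
# Branch-coverage sketch: record which branch of each decision the tests
# exercised, then report coverage as a percentage of all branches.
branches_taken = set()

def abs_value(x):
    if x < 0:                  # decision 1: branches "1T" and "1F"
        branches_taken.add("1T")
        return -x
    branches_taken.add("1F")
    return x

all_branches = {"1T", "1F"}

abs_value(-5)                  # exercises branch 1T only
coverage_half = 100 * len(branches_taken & all_branches) / len(all_branches)

abs_value(3)                   # now branch 1F as well
coverage_full = 100 * len(branches_taken & all_branches) / len(all_branches)
```

After the first test only one of the two branches has run (50% branch coverage); the second test drives coverage to 100%.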
3.3.2.Data Flow-Based Criteria
[1*, c5]
[1*, c1s14]
With different degrees of formalization, fault-based testing techniques devise test cases specifically aimed at revealing categories of likely
or predefined faults. To better focus the test case
generation or selection, a fault model can be
introduced that classifies the different types of
faults.
3.4.1.Error Guessing
[1*, c9s8]
Workflow models specify a sequence of activities performed by humans and/or software applications, usually represented through graphical
notations. Each sequence of actions constitutes
one workflow (also called a scenario). Both typical and alternate workflows should be tested [6,
part 4]. A special focus on the roles in a workflow specification is targeted in business process
testing.
3.7.Techniques Based on the Nature of the
Application
The above techniques apply to all kinds of software. Additional techniques for test derivation
and execution are based on the nature of the software being tested; for example,
object-oriented software
component-based software
web-based software
concurrent programs
protocol-based software
real-time systems
safety-critical systems
service-oriented software
open-source software
embedded software
3.8.Selecting and Combining Techniques
3.8.1.Combining Functional and Structural
[1*, c9]
Model-based and code-based test techniques
are often contrasted as functional vs. structural
testing. These two approaches to test selection
are not to be seen as alternatives but rather as
complements; in fact, they use different sources
of information and have been shown to highlight different kinds of problems. They could be
used in combination, depending on budgetary
considerations.
3.8.2.Deterministic vs. Random
[1*, c9s6]
at every decision point. To avoid such misunderstandings, a clear distinction should be made
between test-related measures that provide an
evaluation of the program under test, based on
the observed test outputs, and the measures that
evaluate the thoroughness of the test set. (See
Software Engineering Measurement in the Software Engineering Management KA for information on measurement programs. See Software
Process and Product Measurement in the Software Engineering Process KA for information on
measures.)
Measurement is usually considered fundamental to quality analysis. Measurement may also be
used to optimize the planning and execution of
the tests. Test management can use several different process measures to monitor progress. (See
section 5.1, Practical Considerations, for a discussion of measures of the testing process useful
for management purposes.)
4.1.Evaluation of the Program Under Test
4.1.1.Program Measurements That Aid in
Planning and Designing Tests
[9*, c11]
Measures based on software size (for example,
source lines of code or functional size; see Measuring Requirements in the Software Requirements KA) or on program structure can be used
to guide testing. Structural measures also include
measurements that determine the frequency with
which modules call one another.
4.1.2.Fault Types, Classification, and
Statistics
[9*, c4]
4.1.3.Fault Density
4.2.2.Fault Seeding
In fault seeding, some faults are artificially introduced into a program before testing. When the
tests are executed, some of these seeded faults will
be revealed as well as, possibly, some faults that
were already there. In theory, depending on which
and how many of the artificial faults are discovered, testing effectiveness can be evaluated and the
remaining number of genuine faults can be estimated. In practice, statisticians question the distribution and representativeness of seeded faults
relative to genuine faults and the small sample size
on which any extrapolations are based. Some also
argue that this technique should be used with great
care since inserting faults into software involves
the obvious risk of leaving them there.
4.2.3.Mutation Score
[1*, c3s5]
In mutation testing (see Mutation Testing in section 3.4, Fault-Based Techniques), the ratio of
killed mutants to the total number of generated
mutants can be a measure of the effectiveness of
the executed test set.
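A toy computation of this ratio (the mutants are invented single-operator variants of a trivial function; note the equivalent mutant, which no test can kill):

```python
# Mutation score sketch: mutants are small variants of the program; a
# test set "kills" a mutant when some test distinguishes it from the
# original. The functions below are invented for illustration.
def original(a, b):
    return a + b

mutants = [
    lambda a, b: a - b,       # mutated operator: killable
    lambda a, b: a * b,       # mutated operator: killable
    lambda a, b: a + b + 0,   # equivalent mutant: can never be killed
]

tests = [(2, 3), (0, 5)]

def killed(mutant):
    return any(mutant(a, b) != original(a, b) for a, b in tests)

score = sum(killed(m) for m in mutants) / len(mutants)
```

The two operator mutants are killed, the equivalent mutant survives, and the score is 2/3; equivalent mutants are one reason a mutation score of 1.0 is rarely attainable in practice.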
4.2.4.Comparison and Relative Effectiveness
of Different Techniques
Several studies have been conducted to compare the relative effectiveness of different testing
techniques. It is important to be precise as to the
property against which the techniques are being
assessed; what, for instance, is the exact meaning
given to the term effectiveness? Possible interpretations include the number of tests needed to
find the first failure, the ratio of the number of
faults found through testing to all the faults found
during and after testing, and how much reliability was improved. Analytical and empirical comparisons between different techniques have been
conducted according to each of the notions of
effectiveness specified above.
5.Test Process
Testing concepts, strategies, techniques, and measures need to be integrated into a defined and
controlled process. The test process supports testing activities and provides guidance to testers and
testing teams, from test planning to test output
evaluation, in such a way as to provide assurance
that the test objectives will be met in a cost-effective way.
5.1.Practical Considerations
5.1.1.Attitudes / Egoless Programming
[1*, c16] [9*, c15]
An important element of successful testing is a
collaborative attitude towards testing and quality
assurance activities. Managers have a key role in
fostering a generally favorable reception towards
failure discovery and correction during software
development and maintenance; for instance, by
overcoming the mindset of individual code ownership among programmers and by promoting a
collaborative environment with team responsibility for anomalies in the code.
5.1.2.Test Guides
the test item. Test documentation should be produced and continually updated to the same level
of quality as other types of documentation in
software engineering. Test documentation should
also be under the control of software configuration management (see the Software Configuration
Management KA). Moreover, test documentation
includes work products that can provide material
for user manuals and user training.
5.1.5.Test-Driven Development
[1*, c1s16]
[9*, c10s4]
A decision must be made as to how much testing is enough and when a test stage can be terminated. Thoroughness measures, such as achieved
code coverage or functional coverage, as well as
estimates of fault density or of operational reliability, provide useful support but are not sufficient in themselves. The decision also involves
considerations about the costs and risks incurred
by possible remaining failures, as opposed to
the costs incurred by continuing to test (see Test
Selection Criteria / Test Adequacy Criteria in
section 1.2, Key Issues).
5.1.9.Test Reuse and Test Patterns
[9*, c2s5]
To carry out testing or maintenance in an organized and cost-effective way, the means used to
test each part of the software should be reused
systematically. A repository of test materials
should be under the control of software configuration management so that changes to software requirements or design can be reflected in
changes to the tests conducted.
The test solutions adopted for testing some
application types under certain circumstances,
with the motivations behind the decisions taken,
form a test pattern that can itself be documented
for later reuse in similar projects.
5.2.Test Activities
As shown in the following description, successful
management of test activities strongly depends
on the software configuration management process (see the Software Configuration Management KA).
5.2.1.Planning
[1*, c12s7]
[9*, c15]
[1*, c13s9]
[9*, c9]
6.Software Testing Tools
Testing requires many labor-intensive tasks, running numerous program executions, and handling
a great amount of information. Appropriate tools
can alleviate the burden of clerical, tedious operations and make them less error-prone. Sophisticated tools can support test design and test case
generation, making testing more effective.
6.1.1.Selecting Tools
[1*, c12s11]
Matrix of Topics vs. Reference Material
(Naik and Tripathy 2008 [1*], Sommerville 2011 [2*], Kan 2003 [9*], Nielsen 1993 [10*])

1.1.Testing-Related Terminology
1.1.1.Definitions of Testing and Related Terminology
1.1.2.Faults vs. Failures
1.2.Key Issues
1.2.1.Test Selection Criteria / Test Adequacy Criteria (Stopping Rules)
1.2.2.Testing Effectiveness / Objectives for Testing
1.2.3.Testing for Defect Identification
2.2.Objectives of Testing
2.2.1.Acceptance / Qualification
2.2.2.Installation Testing
2.2.3.Alpha and Beta Testing
2.2.4.Reliability Achievement and Evaluation
2.2.5.Regression Testing
2.2.6.Performance Testing
2.2.7.Security Testing
2.2.8.Stress Testing
2.2.9.Back-to-Back Testing
2.2.10.Recovery Testing
2.2.11.Interface Testing
2.2.12.Configuration Testing
2.2.13.Usability and Human Computer Interaction Testing
3.Test Techniques
3.1.Based on the Software Engineer's Intuition and Experience
3.1.1.Ad Hoc
3.1.2.Exploratory Testing
3.2.Input Domain-Based Techniques
3.2.1.Equivalence Partitioning
3.2.2.Pairwise Testing
3.2.3.Boundary-Value Analysis
3.2.4.Random Testing
3.3.Code-Based Techniques
3.3.1.Control Flow-Based Criteria
REFERENCES
[1*] S. Naik and P. Tripathy, Software Testing
and Quality Assurance: Theory and
Practice, Wiley-Spektrum, 2008.
[2*] I. Sommerville, Software Engineering, 9th
ed., Addison-Wesley, 2011.
[3] M.R. Lyu, ed., Handbook of Software
Reliability Engineering, McGraw-Hill and
IEEE Computer Society Press, 1996.
[4] H. Zhu, P.A.V. Hall, and J.H.R. May, "Software Unit Test Coverage and Adequacy," ACM Computing Surveys, vol. 29, no. 4, Dec. 1997, pp. 366-427.
[5] E.W. Dijkstra, Notes on Structured Programming, T.H.-Report 70-WSE-03, Technological University, Eindhoven, 1970; http://www.cs.utexas.edu/users/EWD/ewd02xx/EWD249.PDF.
[6] ISO/IEC/IEEE P29119-1/DIS Draft Standard for Software and Systems Engineering: Software Testing, Part 1: Concepts and Definitions, ISO/IEC/IEEE, 2012.
[7] ISO/IEC/IEEE 24765:2010 Systems and Software Engineering: Vocabulary, ISO/IEC/IEEE, 2010.
CHAPTER 5
SOFTWARE MAINTENANCE
ACRONYMS
MR	Modification Request
PR	Problem Report
SCM	Software Configuration Management
SLA	Service-Level Agreement
SQA	Software Quality Assurance
V&V	Verification and Validation
INTRODUCTION
Software development efforts result in the delivery of a software product that satisfies user requirements. Accordingly, the software product must change or evolve. Once in operation, defects are uncovered, operating environments change, and new user requirements surface. The maintenance phase of the life cycle begins following a warranty period or postimplementation support delivery, but maintenance activities occur much earlier.
Software maintenance is an integral part of a software life cycle. However, it has not received the same degree of attention that the other phases have. Historically, software development has had a much higher profile than software maintenance in most organizations. This is now changing, as organizations strive to squeeze the most out of their software development investment by keeping software operating as long as possible. The open source paradigm has brought further attention to the issue of maintaining software artifacts developed by others.
In this Guide, software maintenance is defined as the totality of activities required to provide cost-effective support to software. Activities are performed during the predelivery stage as well as during the postdelivery stage. Predelivery activities include planning for postdelivery operations, maintainability, and logistics determination for transition activities [1*, c6s9]. Postdelivery activities include software modification, training, and operating or interfacing to a help desk.
The Software Maintenance knowledge area (KA) is related to all other aspects of software engineering. Therefore, this KA description is linked to all other software engineering KAs of the Guide.
BREAKDOWN OF TOPICS FOR SOFTWARE MAINTENANCE
The breakdown of topics for the Software Maintenance KA is shown in Figure 5.1.
1.Software Maintenance Fundamentals
This first section introduces the concepts and
terminology that form an underlying basis to
understanding the role and scope of software
maintenance. The topics provide definitions and
emphasize why there is a need for maintenance.
Categories of software maintenance are critical to
understanding its underlying meaning.
1.1.Definitions and Terminology
[1*, c3] [2*, c1s2, c2s2]
The purpose of software maintenance is defined
in the international standard for software maintenance: ISO/IEC/IEEE 14764 [1*].1 In the context
of software engineering, software maintenance is
essentially one of the many technical processes.
1 For the purpose of conciseness and ease of reading, this standard is referred to simply as IEEE 14764
in the subsequent text of this KA.
1.2.Nature of Maintenance
[2*, c1s3]
Software maintenance sustains the software product throughout its life cycle (from development
to operations). Modification requests are logged
and tracked, the impact of proposed changes is
determined, code and other software artifacts are
[2*, c1s5]
[2*, c3s5]
Maintenance consumes a major share of the financial resources in a software life cycle. A common
           Correction   Enhancement
Proactive  Preventive   Perfective
Reactive   Corrective   Adaptive
2.1.3.Impact Analysis
[2*, c6]
Impact analysis describes how to conduct, cost-effectively, a complete analysis of the impact of a change in existing software. Maintainers must possess an intimate knowledge of the software's
structure and content. They use that knowledge
to perform impact analysis, which identifies all
systems and software products affected by a software change request and develops an estimate of
the resources needed to accomplish the change.
Additionally, the risk of making the change is
determined. The change request, sometimes called
a modification request (MR) and often called a
problem report (PR), must first be analyzed and
translated into software terms. Impact analysis is
performed after a change request enters the software configuration management process. IEEE
14764 states the impact analysis tasks:
analyze MRs/PRs;
replicate or verify the problem;
develop options for implementing the
modification;
document the MR/PR, the results, and the
execution options;
obtain approval for the selected modification
option.
The severity of a problem is often used to
decide how and when it will be fixed. The software engineer then identifies the affected components. Several potential solutions are provided,
followed by a recommendation as to the best
course of action.
Software designed with maintainability in mind
greatly facilitates impact analysis. More information can be found in the Software Configuration
Management KA.
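A toy sketch of the "identify affected components" step, assuming a reverse-dependency graph is available (component names are invented):

```python
# Toy impact analysis: given a reverse-dependency graph, find every
# component transitively affected by a change to one module. Component
# names are invented for illustration.
def affected_by(change, reverse_deps):
    """reverse_deps: component -> set of components that depend on it."""
    impacted, frontier = set(), [change]
    while frontier:
        current = frontier.pop()
        for dependent in reverse_deps.get(current, ()):
            if dependent not in impacted:
                impacted.add(dependent)
                frontier.append(dependent)
    return impacted

reverse_deps = {
    "parser": {"compiler"},
    "compiler": {"ide", "build-server"},
}
impact = affected_by("parser", reverse_deps)
```

Changing the parser transitively impacts the compiler and everything built on it; the size of this impacted set is one input to estimating the resources a modification request will need.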
2.1.4.Maintainability
[2*, c4]
2.2.Management Issues
2.2.1.Alignment with Organizational Objectives
Organizational objectives describe how to demonstrate the return on investment of software maintenance activities. Initial software development is
usually project-based, with a defined time scale and
budget. The main emphasis is to deliver a product
that meets user needs on time and within budget.
In contrast, software maintenance often has the
objective of extending the life of software for as
long as possible. In addition, it may be driven by
the need to meet user demand for software updates
and enhancements. In both cases, the return on
investment is much less clear, so that the view at
the senior management level is often that of a major
activity consuming significant resources with no
clear quantifiable benefit for the organization.
2.2.2.Staffing
Staffing refers to how to attract and keep software maintenance staff. Maintenance is not often
viewed as glamorous work. As a result, software
maintenance personnel are frequently viewed
as second-class citizens, and morale therefore
suffers.
2.2.3.Process
2.2.5.Outsourcing/Offshoring
[3*]
Outsourcing and offshoring software maintenance has become a major industry. Organizations are outsourcing entire portfolios of software, including software maintenance. More
often, the outsourcing option is selected for less
mission-critical software, as organizations are
unwilling to lose control of the software used in
their core business. One of the major challenges
for outsourcers is to determine the scope of the
maintenance services required, the terms of a service-level agreement, and the contractual details.
Outsourcers will need to invest in a maintenance
infrastructure, and the help desk at the remote site
should be staffed with native-language speakers.
Outsourcing requires a significant initial investment and the setup of a maintenance process that
will require automation.
2.3.Maintenance Cost Estimation
Software engineers must understand the different categories of software maintenance, discussed above, in order to address the question of estimating the cost of software maintenance. Cost estimation is an important aspect of planning for software maintenance.
2.3.1.Cost Estimation
[2*, c7s2.4]
Section 2.1.3 describes how impact analysis identifies all systems and software products affected
by a software change request and develops an
estimate of the resources needed to accomplish
that change.
Maintenance cost estimates are affected
by many technical and nontechnical factors.
IEEE 14764 states that the two most popular
approaches to estimating resources for software
maintenance are the use of parametric models
and the use of experience [1*, c7s4.1]. A combination of these two can also be used.
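As a hedged illustration of the parametric flavor (the formula and coefficients are invented for this sketch, not taken from IEEE 14764; in practice, experience data would calibrate such a model):

```python
# Illustrative parametric sketch: estimate annual maintenance effort from
# system size and the fraction of code changed per year. The default
# productivity figure is invented; a real model would be calibrated from
# the organization's own experience data.
def annual_maintenance_effort(kloc, annual_change_traffic,
                              productivity_kloc_pm=2.0):
    """Return effort in person-months to rework the changed code."""
    changed_kloc = kloc * annual_change_traffic
    return changed_kloc / productivity_kloc_pm

# 100 KLOC system, 15% of the code touched per year, 2 KLOC/person-month:
effort_pm = annual_maintenance_effort(100, 0.15)
```

This mirrors the combined approach IEEE 14764 describes: a parametric formula supplies the structure, while experience supplies the parameter values.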
2.3.2.Parametric Models
[2*, c12s5.6]
[2*, c12s5.5]
[2*, c12]
maintenance review/acceptance,
migration, and
software retirement.
3.2.5.Software Quality
[1*, c6s5, c6s7, c6s8] [2*, c12s5.3]
4.1.Program Comprehension
4.2.Reengineering
[2*, c7]
4.3.Reverse Engineering
[1*, c6s2] [2*, c7, c14s5]
4.4.Migration
[1*, c5s5]
During software's life, it may have to be modified to run in different environments. In order to migrate it to a new environment, the maintainer needs to determine the actions needed to accomplish the migration and then develop and document the steps required to effect the migration in a migration plan that covers migration requirements, migration tools, conversion of product and data, execution, verification, and support. Migrating software can also entail a number of additional activities such as
4.5.Retirement
[1*, c5s6]
Once software has reached the end of its useful life, it must be retired. An analysis should
be performed to assist in making the retirement
decision. This analysis should be included in the
retirement plan, which covers retirement requirements, impact, replacement, schedule, and effort.
Accessibility of archive copies of data may also
be included. Retiring software entails a number
of activities similar to migration.
5.Software Maintenance Tools
[1*, c6s4] [2*, c14]
This topic encompasses tools that are particularly
important in software maintenance where existing software is being modified. Examples regarding program comprehension include
[Matrix of topics vs. reference material for the Software Maintenance KA: topics 1 through 5 (Software Maintenance Fundamentals through Software Maintenance Tools) cross-referenced to chapters and sections of [1*], [2*], and Sneed 2008 [3*].]
FURTHER READINGS
REFERENCES
CHAPTER 6
SOFTWARE CONFIGURATION MANAGEMENT
ACRONYMS
CCB Configuration Control Board
CM Configuration Management
FCA Functional Configuration Audit
PCA Physical Configuration Audit
SCCB Software Configuration Control Board
SCI Software Configuration Item
SCM Software Configuration Management
SCMP Software Configuration Management Plan
SCR Software Change Request
SCSA Software Configuration Status Accounting
SDD Software Design Document
SEI/CMMI Software Engineering Institute's Capability Maturity Model Integration
SQA Software Quality Assurance
SRS Software Requirements Specification
INTRODUCTION
A system can be defined as the combination of interacting elements organized to achieve one or more stated purposes [1]. The configuration of a system is the functional and physical characteristics of hardware or software as set forth in technical documentation or achieved in a product [1]; it can also be thought of as a collection of specific versions of hardware, firmware, or software items combined according to specific build procedures to serve a particular purpose. Configuration management (CM), then, is the discipline of identifying the configuration of a system at distinct points in time for the purpose of systematically controlling changes to the configuration and maintaining the integrity and traceability of the configuration throughout the system life cycle. It is formally defined as:

A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [1]

Software configuration management (SCM) is a supporting software life cycle process that benefits project management, development and maintenance activities, and quality assurance activities, as well as the customers and users of the end product.

The concepts of configuration management apply to all items to be controlled, although there are some differences in implementation between hardware CM and software CM.

SCM is closely related to the software quality assurance (SQA) activity. As defined in the Software Quality knowledge area (KA), SQA processes provide assurance that the software products and processes in the project life cycle conform to their specified requirements by planning, enacting, and performing a set of activities to provide adequate confidence that quality is being built into the software. SCM activities help in accomplishing these SQA goals. In some project contexts, specific SQA requirements prescribe certain SCM activities.
The SCM activities are management and planning of the SCM process, software configuration
identification, software configuration control,
software configuration status accounting, software configuration auditing, and software release
management and delivery.
The Software Configuration Management KA
is related to all the other KAs, since the object
of configuration management is the artifact produced and used throughout the software engineering process.
BREAKDOWN OF TOPICS FOR
SOFTWARE CONFIGURATION
MANAGEMENT
The breakdown of topics for the Software Configuration Management KA is shown in Figure 6.1.
1.Management of the SCM Process
SCM controls the evolution and integrity of a
product by identifying its elements; managing and
controlling change; and verifying, recording, and
reporting on configuration information. From the software engineer's perspective, SCM facilitates development and change implementation activities. SCM activities take place in parallel with hardware and firmware CM activities and must be consistent with system-level
CM. Note that firmware contains hardware and
software; therefore, both hardware and software
CM concepts are applicable.
SCM might interface with an organization's quality assurance activity on issues such as records management and nonconforming items. Regarding the former, some items under SCM control might also be project records subject to provisions of the organization's quality assurance
program. Managing nonconforming items is usually the responsibility of the quality assurance
activity; however, SCM might assist with tracking and reporting on software configuration items
falling into this category.
Perhaps the closest relationship is with the
software development and maintenance organizations. It is within this context that many of
the software configuration control tasks are conducted. Frequently, the same tools support development, maintenance, and SCM purposes.
1.2.Constraints and Guidance for the SCM
Process
[2*, c6, ann. D, ann. E] [3*, c2, c5]
[5*, c19s2.2]
Constraints affecting, and guidance for, the SCM
process come from a number of sources. Policies and procedures set forth at corporate or other
organizational levels might influence or prescribe
the design and implementation of the SCM process for a given project. In addition, the contract
between the acquirer and the supplier might contain provisions affecting the SCM process. For
example, certain configuration audits might be
required, or it might be specified that certain items
be placed under CM. When software products to
be developed have the potential to affect public
safety, external regulatory bodies may impose
constraints. Finally, the particular software life
cycle process chosen for a software project and
the level of formalism selected to implement the
software affect the design and implementation of
the SCM process.
Guidance for designing and implementing an
SCM process can also be obtained from best
practice, as reflected in the standards on software engineering.
1.3.5.Interface Control
[3*, c1s1]
2.1.Identifying Items to Be Controlled
A first step in controlling change is identifying the software items to be controlled. This involves understanding the software configuration within the context of the system configuration, selecting software configuration items, developing a strategy for labeling software items and describing their relationships, and identifying both the baselines to be used and the procedure for a baseline's acquisition of the items.
2.1.1.Software Configuration
[1, c3]
Software configuration is the functional and physical characteristics of hardware or software as set
forth in technical documentation or achieved in
a product. It can be viewed as part of an overall
system configuration.
2.1.2.Software Configuration Item
[4*, c29s1.1]
2.1.4.Software Versions
[3*, c7s4]
Software items evolve as a software project proceeds. A version of a software item is an identified instance of an item. It can be thought of as a state of an evolving item. A variant is a version of a program resulting from the application of software diversity.
2.1.5.Baseline
[1, c3]
3.Software Configuration Control
Software configuration control is concerned with managing changes during the software life cycle. It covers the process for determining what changes to make, the authority for approving certain changes, support for the implementation of those changes, and the concept of formal deviations from project requirements as well as waivers of them. Information derived from these activities is useful in measuring change traffic and breakage as well as aspects of rework.
3.1.Requesting, Evaluating, and Approving
Software Changes
[2*, c9s2.4] [4*, c29s2]
The first step in managing changes to controlled
items is determining what changes to make. The
software change request process (see a typical
flow of a change request process in Figure 6.3)
provides formal procedures for submitting and
recording change requests, evaluating the potential cost and impact of a proposed change, and
accepting, modifying, deferring, or rejecting
the proposed change. A change request (CR) is
a request to expand or reduce the project scope;
modify policies, processes, plans, or procedures;
modify costs or budgets; or revise schedules
[1]. Requests for changes to software configuration items may be originated by anyone at any
point in the software life cycle and may include
a suggested solution and requested priority. One
source of a CR is the initiation of corrective
action in response to problem reports. Regardless
of the source, the type of change (for example,
defect or enhancement) is usually recorded on the
Software CR (SCR).
This provides an opportunity for tracking
defects and collecting change activity measurements by change type. Once an SCR is received,
a technical evaluation (also known as an impact
analysis) is performed to determine the extent of
the modifications that would be necessary should
the change request be accepted. A good understanding of the relationships among software
(and, possibly, hardware) items is important for
this task. Finally, an established authority (commensurate with the affected baseline, the SCI involved, and the nature of the change) will
evaluate the technical and managerial aspects
of the change request and either accept, modify,
reject, or defer the proposed change.
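The request flow described above (submit, record, evaluate via impact analysis, then an authority's decision) can be sketched as a small state model. The class, state names, and fields below are invented for illustration and are not prescribed by the cited standards.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a software change request (SCR) life cycle:
# submitted -> evaluated (impact analysis) -> accepted/modified/deferred/rejected.

VALID_DECISIONS = {"accepted", "modified", "deferred", "rejected"}

@dataclass
class SoftwareChangeRequest:
    identifier: str
    description: str
    change_type: str                 # e.g., "defect" or "enhancement"
    status: str = "submitted"
    affected_items: list = field(default_factory=list)

    def evaluate(self, affected_items):
        """Record the impact analysis: which SCIs the change would touch."""
        self.affected_items = list(affected_items)
        self.status = "evaluated"

    def decide(self, decision):
        """Apply the control authority's decision to an evaluated request."""
        if self.status != "evaluated":
            raise ValueError("decision requires a completed impact analysis")
        if decision not in VALID_DECISIONS:
            raise ValueError(f"unknown decision: {decision}")
        self.status = decision

scr = SoftwareChangeRequest("SCR-042", "Fix rounding error", "defect")
scr.evaluate(["billing-module", "report-generator"])
scr.decide("accepted")
```

Recording the change type on each request is what later enables the defect-tracking and change-activity measurements mentioned above.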
3.3.Deviations and Waivers
[4*, c29]
[1, c3]
The constraints imposed on a software engineering effort or the specifications produced during the
development activities might contain provisions
that cannot be satisfied at the designated point
in the life cycle. A deviation is a written authorization, granted prior to the manufacture of an
item, to depart from a particular performance or
design requirement for a specific number of units
or a specific period of time. A waiver is a written authorization to accept a configuration item or
other designated item that is found, during production or after having been submitted for inspection,
to depart from specified requirements but is nevertheless considered suitable for use as-is or after
rework by an approved method. In these cases, a
formal process is used for gaining approval for
deviations from, or waivers of, the provisions.
4.Software Configuration Status Accounting
[2*, c10]
Software configuration status accounting (SCSA)
is an element of configuration management consisting of the recording and reporting of information needed to manage a configuration effectively.
4.1.Software Configuration Status Information
[2*, c10s2.1]
The SCSA activity designs and operates a system for the capture and reporting of necessary
information as the life cycle proceeds. As in any
information system, the configuration status information to be managed for the evolving configurations must be identified, collected, and maintained.
Various information and measurements are needed
to support the SCM process and to meet the configuration status reporting needs of management,
software engineering, and other related activities.
The types of information available include the
approved configuration identification as well as
the identification and current implementation status of changes, deviations, and waivers.
Some form of automated tool support is necessary to accomplish the SCSA data collection and
reporting tasks; this could be a database capability, a stand-alone tool, or a capability of a larger,
integrated tool environment.
4.2.Software Configuration Status Reporting
[2*, c10s2.4] [3*, c1s5, c9s1, c17]
Reported information can be used by various
organizational and project elements, including
the development team, the maintenance team,
project management, and software quality activities. Reporting can take the form of ad hoc queries to answer specific questions or the periodic
production of predesigned reports. Some information produced by the status accounting activity
during the course of the life cycle might become
quality assurance records.
In addition to reporting the current status of the
configuration, the information obtained by the
SCSA can serve as a basis of various measurements. Examples include the number of change
requests per SCI and the average time needed to
implement a change request.
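The two example measures just mentioned (change requests per SCI and average time to implement a change request) can be derived directly from status-accounting records. A minimal sketch, assuming each record carries an item name and open/close dates (the data are invented):

```python
from collections import Counter
from datetime import date

# Hypothetical status-accounting records: (SCI name, date opened, date closed).
records = [
    ("parser",  date(2024, 1, 10), date(2024, 1, 20)),
    ("parser",  date(2024, 2, 1),  date(2024, 2, 11)),
    ("reports", date(2024, 1, 5),  date(2024, 1, 25)),
]

# Number of change requests per SCI.
requests_per_sci = Counter(sci for sci, _, _ in records)

# Average time (in days) needed to implement a change request.
durations = [(closed - opened).days for _, opened, closed in records]
average_days = sum(durations) / len(durations)
```

In practice this aggregation would be a query against the SCSA database or tool rather than an in-memory list.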
5.Software Configuration Auditing
[2*, c11]
[4*, c29s4]
Information used in the build process might be needed for future reference and may become quality assurance records.
7.Software Configuration Management Tools
When discussing software configuration management tools, it is helpful to classify them. SCM
tools can be divided into three classes in terms
of the scope at which they provide support: individual support, project-related support, and companywide-process support.
Individual support tools are appropriate and
typically sufficient for small organizations or
development groups without variants of their
software products or other complex SCM requirements. They include:
[4*, c29s3.2]
[Matrix of topics vs. reference material for the Software Configuration Management KA: topics 1 through 7 (Management of the SCM Process through Software Configuration Management Tools) cross-referenced to chapters and sections of IEEE 828-2012 [2*], Hass 2003 [3*], Sommerville 2011 [4*], and Moore 2006 [5*].]
FURTHER READINGS
REFERENCES
CHAPTER 7
SOFTWARE ENGINEERING MANAGEMENT
ACRONYMS
PMBOK Guide Guide to the Project Management Body of Knowledge
SDLC Software Development Life Cycle
SEM Software Engineering Management
SQA Software Quality Assurance
SWX Software Extension to the PMBOK Guide
WBS Work Breakdown Structure
INTRODUCTION
Software engineering management can be defined as the application of management activities (planning, coordinating, measuring, monitoring, controlling, and reporting) to ensure that software
products and software engineering services are
delivered efficiently, effectively, and to the benefit
of stakeholders. The related discipline of management is an important element of all the knowledge
areas (KAs), but it is of course more relevant to
this KA than to other KAs. Measurement is also an
important aspect of all KAs; the topic of measurement programs is presented in this KA.
In one sense, it should be possible to manage
a software engineering project in the same way
other complex endeavors are managed. However,
there are aspects specific to software projects
and software life cycle processes that complicate
effective management, including these:
management, a basic principle of any true engineering discipline (see Measurement in the Engineering Foundations KA), can help improve the perception and the reality. In essence, management without measurement (qualitative and
quantitative) suggests a lack of discipline, and
measurement without management suggests a
lack of purpose or context. Effective management
requires a combination of both measurement and
experience.
The following working definitions are adopted
here:
Management is a system of processes and
controls required to achieve the strategic
objectives set by the organization.
Measurement refers to the assignment of values and labels to software engineering work
products, processes, and resources plus the
models that are derived from them, whether
these models are developed using statistical
or other techniques [3*, c7, c8].
The software engineering project management
sections in this KA make extensive use of the
software engineering measurement section.
This KA is closely related to others in the
SWEBOK Guide, and reading the following KA
descriptions in conjunction with this one will be
particularly helpful:
The Engineering Foundations KA describes
some general concepts of measurement that
are directly applicable to the Software Engineering Measurement section of this KA.
In addition, the concepts and techniques
presented in the Statistical Analysis section
of the Engineering Foundations KA apply
directly to many topics in this KA.
The Software Requirements KA describes
some of the activities that should be performed during the Initiation and Scope definition phase of the project.
The Software Configuration Management
KA deals with identification, control, status
accounting, and auditing of software configurations along with software release management and delivery and software configuration management tools.
[3*, c3]
[4*, c4]
2.1.Process Planning
Software development life cycle (SDLC) models span a continuum from predictive to adaptive
(see Software Life Cycle Models in the Software
Engineering Process KA). Predictive SDLCs are
characterized by development of detailed software requirements, detailed project planning, and
minimal planning for iteration among development phases. Adaptive SDLCs are designed to
accommodate emergent software requirements
and iterative adjustment of plans. A highly predictive SDLC executes the first five processes
listed in Figure 7.1 in a linear sequence with revisions to earlier phases only as necessary. Adaptive SDLCs are characterized by iterative development cycles. SDLCs in the mid-range of the
SDLC continuum produce increments of functionality on either a preplanned schedule (on the
predictive side of the continuum) or as the products of frequently updated development cycles
(on the adaptive side of the continuum).
Well-known SDLCs include the waterfall,
incremental, and spiral models plus various forms
of agile software development [2] [3*, c2].
Relevant methods (see the Software Engineering Models and Methods KA) and tools should be
selected as part of planning. Automated tools that
will be used throughout the project should also
be planned for and acquired. Tools may include
tools for project scheduling, software requirements, software design, software construction,
software maintenance, software configuration
management, software engineering process, software quality, and others. While many of these
tools should be selected based primarily on the
technical considerations discussed in other KAs,
some of them are closely related to the management considerations discussed in this chapter.
2.4.Resource Allocation
Equipment, facilities, and people should be allocated to the identified tasks, including the allocation of responsibilities for completion of various elements of a project and the overall project.
A matrix that shows who is responsible for,
accountable for, consulted about, and informed
about each of the tasks can be used. Resource
allocation is based on, and constrained by, the
availability of resources and their optimal use, as
well as by issues relating to personnel (for example, productivity of individuals and teams, team
dynamics, and team structures).
2.5.Risk Management
Risk and uncertainty are related but distinct concepts. Uncertainty results from lack of information. Risk is characterized by the probability of an
event that will result in a negative impact plus a
characterization of the negative impact on a project. Risk is often the result of uncertainty. The
converse of risk is opportunity, which is characterized by the probability that an event having a
positive outcome might occur.
Risk management entails identification of risk
factors and analysis of the probability and potential impact of each risk factor, prioritization of
risk factors, and development of risk mitigation
strategies to reduce the probability and minimize
the negative impact if a risk factor becomes a
problem. Risk assessment methods (for example,
expert judgment, historical data, decision trees,
and process simulations) can sometimes be used
in order to identify and evaluate risk factors.
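A common way to prioritize identified risk factors is by exposure, the product of a risk's probability and its impact. The sketch below illustrates this; the risk factors and numeric values are invented for the example.

```python
# Illustrative risk prioritization: exposure = probability x impact.
# Risk names, probabilities, and impact scores below are invented.

risks = [
    {"name": "key staff turnover",     "probability": 0.3, "impact": 8},
    {"name": "unstable requirements",  "probability": 0.6, "impact": 5},
    {"name": "third-party API change", "probability": 0.1, "impact": 9},
]

for risk in risks:
    risk["exposure"] = risk["probability"] * risk["impact"]

# Highest-exposure risks receive mitigation attention first.
prioritized = sorted(risks, key=lambda r: r["exposure"], reverse=True)
```

Because probabilities and impacts change during a project, such a ranking should be recomputed at the periodic reviews mentioned below.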
Project abandonment conditions can also be
determined at this point in discussion with all
relevant stakeholders. Software-unique aspects of risk, such as software engineers' tendency to add unneeded features or the risks attached to software's intangible nature, can influence risk management of a software project. Particular attention should be paid to the management of risks
related to software quality requirements such as
safety or security (see the Software Quality KA).
Risk management should be done not only at the
beginning of a project, but also at periodic intervals throughout the project life cycle.
2.6.Quality Management
Software quality requirements should be identified, perhaps in both quantitative and qualitative
terms, for a software project and the associated
work products. Thresholds for acceptable quality measurements should be set for each software
quality requirement based on stakeholder needs.
2.7.Plan Management
[3*, c4]
For software projects, where change is an expectation, plans should be managed. Managing the
project plan should thus be planned. Plans and
processes selected for software development
should be systematically monitored, reviewed,
reported, and, when appropriate, revised. Plans
associated with supporting processes (for example, documentation, software configuration management, and problem resolution) also should be
managed. Reporting, monitoring, and controlling
a project should fit within the selected SDLC and
the realities of the project; plans should account
for the various artifacts that will be used to manage the project.
3.Software Project Enactment
During software project enactment (also known
as project execution) plans are implemented and
the processes embodied in the plans are enacted.
Throughout, there should be a focus on adherence to the selected SDLC processes, with an
overriding expectation that adherence will lead to
the successful satisfaction of stakeholder requirements and achievement of the project's objectives. Fundamental to enactment are the ongoing
management activities of monitoring, controlling, and reporting.
3.1.Implementation of Plans
[4*, c2]
Project activities should be undertaken in accordance with the project plan and supporting plans.
Resources (for example, personnel, technology, and funding) are utilized and work products are generated.
3.3.Monitor Process
[3*, c8]
Adherence to the various plans should be assessed continually and at predetermined intervals. Also, outputs and completion criteria for each task should be assessed.
Deliverables should be evaluated in terms of their
required characteristics (for example, via inspections or by demonstrating working functionality).
Effort expenditure, schedule adherence, and costs
to date should be analyzed, and resource usage
examined. The project risk profile (see section
2.5, Risk Management) should be revisited, and
adherence to software quality requirements evaluated (see Software Quality Requirements in the
Software Quality KA).
Measurement data should be analyzed (see Statistical Analysis in the Engineering Foundations
KA). Variance analysis based on the deviation of
actual from expected outcomes and values should
be performed. This may include cost overruns,
schedule slippage, or other similar measures.
Outlier identification and analysis of quality and
other measurement data should be performed (for
example, defect analysis; see Software Quality
Measurement in the Software Quality KA). Risk
exposures should be recalculated (see section 2.5,
Risk Management). These activities can enable
problem detection and exception identification
based on thresholds that have been exceeded.
Outcomes should be reported when thresholds
have been exceeded, or as necessary.
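The variance analysis and threshold-based exception reporting described above can be sketched as follows; the planned and actual figures, and the 10% threshold, are invented for the example.

```python
# Illustrative variance analysis with threshold-based exception reporting.
# Planned/actual values and the threshold are hypothetical.

THRESHOLD = 0.10  # flag deviations greater than 10% of plan

measurements = {
    "cost_to_date":    {"planned": 200_000, "actual": 230_000},
    "tasks_completed": {"planned": 40,      "actual": 38},
}

exceptions = {}
for name, m in measurements.items():
    variance = (m["actual"] - m["planned"]) / m["planned"]
    if abs(variance) > THRESHOLD:
        exceptions[name] = variance  # report: threshold exceeded
```

Here the 15% cost overrun would be reported, while the small shortfall in completed tasks stays below the threshold.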
3.5.Control Process
[3*, c11]
5.2.Closure Activities
6.1.Establish and Sustain Measurement Commitment
Requirements for measurement. Each measurement endeavor should be guided by organizational objectives and driven by a set of measurement requirements established by the organization and the project (for example, an organizational objective might be first-to-market with new products).
Scope of measurement. The organizational
unit to which each measurement requirement
is to be applied should be established. This
may consist of a functional area, a single
project, a single site, or an entire enterprise.
The temporal scope of the measurement
effort should also be considered because
time series of some measurements may be
required; for example, to calibrate estimation models (see section 2.3, Effort, Schedule, and Cost Estimation).
Team commitment to measurement. The
commitment should be formally established,
communicated, and supported by resources
(see next item).
Resources for measurement. An organization's commitment to measurement is an
essential factor for success, as evidenced by
the assignment of resources for implementing the measurement process. Assigning
resources includes allocation of responsibility for the various tasks of the measurement
process (such as analyst and librarian). Adequate funding, training, tools, and support to
conduct the process should also be allocated.
6.4.Evaluate Measurement
Evaluate information products and the measurement process against specified evaluation criteria and determine strengths and
weaknesses of the information products or
process, respectively. Evaluation may be
performed by an internal process or an external audit; it should include feedback from
measurement users. Lessons learned should
be recorded in an appropriate database.
Identify potential improvements. Such
improvements may be changes in the format
of indicators, changes in units measured, or
reclassification of measurement categories.
The costs and benefits of potential improvements should be determined and appropriate
improvement actions should be reported.
Communicate proposed improvements to the
measurement process owner and stakeholders for review and approval. Also, lack of
potential improvements should be communicated if the analysis fails to identify any
improvements.
7.Software Engineering Management Tools
[3*, c5, c6, c7]
Software engineering management tools are often
used to provide visibility and control of software
engineering management processes. Some tools
are automated while others are manually implemented. There has been a recent trend towards
the use of integrated suites of software engineering tools that are used throughout a project to
plan, collect and record, monitor and control, and report project and product information.
[Matrix of topics vs. reference material for the Software Engineering Management KA: topics 1 through 7 (Initiation and Scope Definition through Software Engineering Management Tools) cross-referenced to chapters of Fairley 2009 [3*] and Sommerville 2011 [4*].]
FURTHER READINGS
REFERENCES
CHAPTER 8
SOFTWARE ENGINEERING PROCESS
ACRONYMS
BPMN Business Process Model and Notation
CASE Computer-Assisted Software Engineering
CM Configuration Management
CMMI Capability Maturity Model Integration
GQM Goal-Question-Metric
IDEF0 Integration Definition for Function Modeling
LOE Level of Effort
ODC Orthogonal Defect Classification
SDLC Software Development Life Cycle
SPLC Software Product Life Cycle
UML Unified Modeling Language
INTRODUCTION
An engineering process consists of a set of interrelated activities that transform one or more inputs
into outputs while consuming resources to accomplish the transformation. Many of the processes of
traditional engineering disciplines (e.g., electrical,
mechanical, civil, chemical) are concerned with
transforming energy and physical entities from
one form into another, as in a hydroelectric dam
that transforms potential energy into electrical
energy or a petroleum refinery that uses chemical
processes to transform crude oil into gasoline.
In this knowledge area (KA), software engineering processes are concerned with work activities
accomplished by software engineers to develop,
maintain, and operate software, such as requirements, design, construction, testing, configuration management, and other software engineering
processes. For readability, software engineering processes is shortened here to software processes. Software processes are not performed in a strict sequence; for example, the software requirements process and its subprocesses may be entered and exited multiple times during software development or modification.
Complete definition of a software process may
also include the roles and competencies, IT support, software engineering techniques and tools,
and work environment needed to perform the
process, as well as the approaches and measures
(Key Performance Indicators) used to determine
the efficiency and effectiveness of performing the
process.
In addition, a software process may include
interleaved technical, collaborative, and administrative activities.
Notations for defining software processes
include textual lists of constituent activities and
tasks described in natural language; data-flow
diagrams; state charts; BPMN; IDEF0; Petri nets;
and UML activity diagrams. The transforming
tasks within a process may be defined as procedures; a procedure may be specified as an ordered
set of steps or, alternatively, as a checklist of the
work to be accomplished in performing a task.
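The distinction drawn above between a procedure as an ordered set of steps and a procedure as a checklist can be made concrete with a small sketch; the task and step names are invented for illustration.

```python
# A procedure as an ordered set of steps vs. a checklist of work items.
# Task and step names are illustrative only.

code_review_procedure = [        # order matters: steps are performed in sequence
    "select reviewers",
    "distribute work product",
    "hold review meeting",
    "record findings",
]

code_review_checklist = {        # order does not matter: all items must be done
    "naming conventions checked": False,
    "error handling checked": False,
    "tests updated": False,
}

def checklist_complete(checklist):
    """A checklist-style procedure is done when every item is checked off."""
    return all(checklist.values())
```

Either representation can back a process definition; the choice depends on whether sequencing between tasks carries meaning.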
It must be emphasized that there is no best software process or set of software processes. Software processes must be selected, adapted, and
applied as appropriate for each project and each
organizational context. No ideal process, or set of
processes, exists.
1.1.Software Process Management
[3*, s26.1] [4*, p453–454]
Two objectives of software process management are to realize the efficiency and effectiveness that result from a systematic approach to accomplishing software processes and producing work products (be it at the individual, project, or organizational level) and to introduce new or improved processes.
Processes are changed with the expectation that
a new or modified process will improve the efficiency and/or effectiveness of the process and the
quality of the resulting work products. Changing
to a new process, improving an existing process,
organizational change, and infrastructure change
(technology insertion or changes in tools) are
closely related, as all are usually initiated with the
goal of improving the cost, development schedule, or quality of the software products. Process changes have impacts not only on the software product; they often lead to organizational change.
product; they often lead to organizational change.
Changing a process or introducing a new process
can have ripple effects throughout an organization. For example, changes in IT infrastructure tools and technology often require process
changes.
Existing processes may be modified when
other new processes are deployed for the first
time (for example, introducing an inspection
activity within a software development project
will likely impact the software testing process; see Reviews and Audits in the Software Quality KA and in the Software Testing KA). These situations can also be termed process evolution.
If the modifications are extensive, then changes
in the organizational culture and business model
will likely be necessary to accommodate the process changes.
This topic addresses categories of software processes, software life cycle models, software process adaptation, and practical considerations.
[2*, p188–190]
Software development processes (for example, requirements, design, construction, and testing) can be adapted to facilitate operation, support, maintenance, migration, and retirement of the software.
Additional factors to be considered when
defining and tailoring a software life cycle model
include required conformance to standards, directives, and policies; customer demands; criticality
of the software product; and organizational maturity and competencies. Other factors include the
nature of the work (e.g., modification of existing software versus new development) and the
application domain (e.g., aerospace versus hotel
management).
3.Software Process Assessment and
Improvement
[2*, p188, p194] [3*, c26] [4*, p397, c15]
This topic addresses software process assessment models, software process assessment methods, software process improvement models, and
continuous and staged process ratings. Software
process assessments are used to evaluate the form
and content of a software process, which may
be specified by a standardized set of criteria. In some instances, the terms "process appraisal" and "capability evaluation" are used instead of "process assessment." Capability evaluations are
typically performed by an acquirer (or potential
acquirer) or by an external agent on behalf of
an acquirer (or potential acquirer). The results
are used as an indicator of whether the software
processes used by a supplier (or potential supplier) are acceptable to the acquirer. Performance
appraisals are typically performed within an organization to identify software processes in need of
improvement or to determine whether a process
(or processes) satisfies the criteria at a given level
of process capability or maturity.
Process assessments are performed at the levels of entire organizations, organizational units
within organizations, and individual projects.
Assessment may involve issues such as assessing whether software process entry and exit criteria are being met, reviewing risk factors and risk management, or identifying lessons learned.
Process assessment is carried out using both an
assessment model and an assessment method. The
model can provide a norm for a benchmarking
developers or the effort of an independent testing team, depending on who fixes the defects
found by the independent testers. Note that this
example refers to the effort of teams of developers or teams of testers and not to individuals.
Software productivity calculated at the level of
individuals can be misleading because of the
many factors that can affect the individual productivity of software engineers.
Standardized definitions and counting rules
for measurement of software processes and work
products are necessary to provide standardized
measurement results across projects within an
organization, to populate a repository of historical data that can be analyzed to identify software
processes that need to be improved, and to build
predictive models based on accumulated data. In
the example above, definitions of software defects
and staff-hours of testing effort plus counting
rules for defects and effort would be necessary to
obtain satisfactory measurement results.
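The counting rules above can be made concrete with a short sketch. The measures and figures below are illustrative assumptions, not definitions prescribed by this Guide:

```python
# Illustrative counting rules: what counts as a defect and how effort is
# measured would be fixed by the organization's measurement definitions.

def defect_density(defects, ksloc):
    """Defects per thousand source lines of code (KLOC)."""
    if ksloc <= 0:
        raise ValueError("size must be positive")
    return defects / ksloc

def defects_per_staff_hour(defects, staff_hours):
    """Defects found per staff-hour of testing effort."""
    if staff_hours <= 0:
        raise ValueError("effort must be positive")
    return defects / staff_hours

print(defect_density(42, 12.5))         # 3.36 defects/KLOC
print(defects_per_staff_hour(42, 300))  # 0.14 defects per staff-hour
```

With shared definitions and counting rules, the two ratios are directly comparable across projects in the same organization.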
The extent to which the software process is
institutionalized is important; failure to institutionalize a software process may explain why
good software processes do not always produce anticipated results. Software processes may
be institutionalized by adoption within the local
organizational unit or across larger units of an
enterprise.
4.2.Quality of Measurement Results
[4*, s3.4–s3.7]
The quality of process and product measurement
results is primarily determined by the reliability
and validity of the measured results. Measurements that do not satisfy these quality criteria
can result in incorrect interpretations and faulty
software process improvement initiatives. Other
desirable properties of software measurements
include ease of collection, analysis, and presentation plus a strong correlation between cause and
effect.
The Software Engineering Measurement topic
in the Software Engineering Management KA
describes a process for implementing a software
measurement program.
each category to the software process or software processes where a group of defects originated (see Defect Characterization in the Software Quality KA). Software interface defects,
for example, may have originated during an inadequate software design process; improving the
software design process will reduce the number
of software interface defects. ODC can provide
quantitative data for applying root cause analysis.
Statistical Process Control can be used to track
process stability, or the lack of process stability,
using control charts.
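As a minimal sketch of this idea, the following computes plain 3-sigma control limits from a hypothetical in-control baseline and flags later observations that fall outside them (real control charts typically use subgrouped data and tabulated constants):

```python
# Sketch of Statistical Process Control limits; data and the plain 3-sigma
# rule are illustrative assumptions.
from statistics import mean, stdev

def control_limits(baseline, sigmas=3):
    """Lower and upper control limits from an in-control baseline period."""
    center, spread = mean(baseline), stdev(baseline)
    return center - sigmas * spread, center + sigmas * spread

# Defects removed per staff-hour of inspection, for six stable weeks:
baseline = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]
lcl, ucl = control_limits(baseline)

# New observations are flagged when they fall outside the limits:
flagged = [x for x in [4.4, 9.5] if not (lcl <= x <= ucl)]
print(flagged)  # [9.5]
```

A flagged point signals possible special-cause variation, i.e., the process may no longer be stable and is worth investigating.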
4.4.2.Qualitative Process Measurement
Techniques
[1*, s6.4]
Qualitative process measurement techniques, including interviews, questionnaires, and expert judgment, can be used to augment quantitative
process measurement techniques. Group consensus techniques, including the Delphi technique,
can be used to obtain consensus among groups of
stakeholders.
5.Software Engineering Process Tools
[1*, s8.7]
Software process tools support many of the notations used to define, implement, and manage
individual software processes and software life
cycle models. They include editors for notations
such as data-flow diagrams, state charts, BPMN,
IDEF0 diagrams, Petri nets, and UML activity
diagrams. In some cases, software process tools
allow different types of analyses and simulations (for example, discrete event simulation). In
Matrix of Topics and Reference Material for the Software Engineering Process KA (Fairley 2009 [1*], Moore 2009 [2*], Sommerville 2011 [3*], Kan 2003 [4*])
CHAPTER 9
SOFTWARE ENGINEERING MODELS
AND METHODS
ACRONYMS
3GL Third Generation Language
BNF Backus-Naur Form
FDD Feature-Driven Development
IDE Integrated Development Environment
PBI Product Backlog Item
RAD Rapid Application Development
UML Unified Modeling Language
XP eXtreme Programming
INTRODUCTION
Software engineering models and methods
impose structure on software engineering with
the goal of making that activity systematic,
repeatable, and ultimately more success-oriented.
Using models provides an approach to problem
solving, a notation, and procedures for model
construction and analysis. Methods provide an
approach to the systematic specification, design,
construction, test, and verification of the end-item
software and associated work products.
Software engineering models and methods
vary widely in scope, from addressing a single
software life cycle phase to covering the complete software life cycle. The emphasis in this
knowledge area (KA) is on software engineering models and methods that encompass multiple
software life cycle phases, since methods specific
for single life cycle phases are covered by other
KAs.
Figure 9.1. Breakdown of Topics for the Software Engineering Models and Methods KA
modeling languages or a single modeling language. The Unified Modeling Language (UML)
recognizes a rich collection of modeling diagrams. Use of these diagrams, along with the
modeling language constructs, brings about three
broad model types commonly used: information
models, behavioral models, and structure models
(see section 1.1).
2.1.Information Modeling
2.3.Structure Modeling
[1*, c7s2.5, c7s3.1, c7s3.2] [3*, c5s3] [4*, c4]
Structure models illustrate the physical or logical
composition of software from its various component parts. Structure modeling establishes the
defined boundary between the software being
implemented or modeled and the environment
in which it is to operate. Some common structural constructs used in structure modeling are
composition, decomposition, generalization, and
specialization of entities; identification of relevant relations and cardinality between entities;
and the definition of process or functional interfaces. Structure diagrams provided by the UML
for structure modeling include class, component,
object, deployment, and packaging diagrams.
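Some of these structural constructs have direct counterparts in code. The sketch below, with invented class names, shows composition (a whole owning its parts) and generalization/specialization (an is-a relationship):

```python
# Illustrative structural constructs; class names are invented.

class Engine:              # a component part
    pass

class Vehicle:             # the whole; composition: a Vehicle owns an Engine
    def __init__(self):
        self.engine = Engine()

class Car(Vehicle):        # generalization/specialization: a Car is a Vehicle
    pass

car = Car()
print(isinstance(car, Vehicle))        # True
print(isinstance(car.engine, Engine))  # True
```

A UML class diagram would express the same relations graphically: a filled-diamond composition link from Vehicle to Engine and a generalization arrow from Car to Vehicle.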
3.Analysis of Models
The development of models affords the software
engineer an opportunity to study, reason about,
and understand the structure, function, operational usage, and assembly considerations associated with software. Analysis of constructed
models is needed to ensure that these models are
complete, consistent, and correct enough to serve
their intended purpose for the stakeholders.
The sections that follow briefly describe the
analysis techniques generally used with software models to ensure that the software engineer
and other relevant stakeholders gain appropriate
value from the development and use of models.
3.1.Analyzing for Completeness
[3*, c4s1.1p7, c4s6] [5*, p8–11]
In order to have software that fully meets the needs
of the stakeholders, completeness is critical, from the requirements elicitation process to code implementation. Completeness is the degree to which all of the specified requirements have been implemented and verified. Models may be checked for
all of the specified requirements have been implemented and verified. Models may be checked for
completeness by a modeling tool that uses techniques such as structural analysis and state-space
reachability analysis (which ensure that all paths in
the state models are reached by some set of correct
inputs); models may also be checked for completeness manually by using inspections or other review
techniques (see the Software Quality KA). Errors
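A reachability check of the kind mentioned above can be sketched briefly. The transition table is a hypothetical state model, not a prescribed notation:

```python
# Illustrative state model as a transition table; "Orphan" is deliberately
# unreachable from the start state to show the kind of defect found.
from collections import deque

def reachable(transitions, start):
    """All states reachable from `start`, by breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in transitions.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

model = {"Idle": ["Running"], "Running": ["Paused", "Done"],
         "Paused": ["Running"], "Orphan": ["Done"]}
all_states = set(model) | {s for targets in model.values() for s in targets}
print(all_states - reachable(model, "Idle"))  # {'Orphan'}
```

A state that no correct input sequence can reach, like "Orphan" here, indicates an incomplete or inconsistent model.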
3.3.Analyzing for Correctness
[5*, p8–11]
Correctness is the degree to which a model satisfies its software requirements and software
design specifications, is free of defects, and ultimately meets the stakeholders' needs. Analyzing
for correctness includes verifying syntactic correctness of the model (that is, correct use of the
modeling language grammar and constructs) and
verifying semantic correctness of the model (that
is, use of the modeling language constructs to
correctly represent the meaning of that which is
being modeled). To analyze a model for syntactic and semantic correctness, one analyzes it, either automatically (for example, using the modeling tool to check for model syntactic correctness) or manually (using inspections or other review techniques), searching for possible defects and then removing or repairing the confirmed defects
before the software is released for use.
3.4.Traceability
database designs or data repositories typically found in business software, where data
is actively managed as a business systems
resource or asset.
Object-Oriented Analysis and Design Methods: The object-oriented model is represented
as a collection of objects that encapsulate
data and relationships and interact with other
objects through methods. Objects may be
real-world items or virtual items. The software model is constructed using diagrams
to constitute selected views of the software.
Progressive refinement of the software models leads to a detailed design. The detailed
design is then either evolved through successive iteration or transformed (using some
mechanism) into the implementation view
of the model, where the code and packaging approach for eventual software product
release and deployment is expressed.
4.2.Formal Methods
[1*, c18] [3*, c27] [5*, p8–24]
Formal methods are software engineering methods used to specify, develop, and verify the software through application of a rigorous mathematically based notation and language. Through use
of a specification language, the software model
can be checked for consistency (in other words,
lack of ambiguity), completeness, and correctness
in a systematic and automated or semi-automated
fashion. This topic is related to the Formal Analysis section in the Software Requirements KA.
This section addresses specification languages,
program refinement and derivation, formal verification, and logical inference.
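A lightweight flavor of this idea can be sketched with executable pre- and postconditions. This is ordinary runtime checking rather than a true specification language, but it illustrates stating input/output behavior separately from the implementation:

```python
# Illustrative executable contract: `pre` specifies valid inputs, `post`
# specifies the required relationship between inputs and the result.

def contract(pre, post):
    def wrap(fn):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    return min(xs)

print(minimum([7, 3, 9]))  # 3; minimum([]) would violate the precondition
```

A genuine formal method would verify such properties statically over all inputs rather than checking them at run time.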
Specification Languages: Specification
languages provide the mathematical basis
for a formal method; specification languages are formal, higher level computer
languages (in other words, not a classic
3rd Generation Language (3GL) programming language) used during the software
specification, requirements analysis, and/
or design stages to describe specific input/
output behavior. Specification languages are
not directly executable languages; they are
4.3.Prototyping Methods
[1*, c12s2] [3*, c2s3.1] [6*, c7s3p5]
Software prototyping is an activity that generally
creates incomplete or minimally functional versions of a software application, usually for trying out specific new features, soliciting feedback
on software requirements or user interfaces, further exploring software requirements, software
design, or implementation options, and/or gaining
some other useful insight into the software. The
software engineer selects a prototyping method to
understand the least understood aspects or components of the software first; this approach is in
contrast with other software engineering methods
that usually begin development with the most
understood portions first. Typically, the prototyped product does not become the final software
product without extensive development rework
or refactoring.
This section discusses prototyping styles, targets, and evaluation techniques in brief.
Prototyping Style: This addresses the various
approaches to developing prototypes. Prototypes can be developed as throwaway code
or paper products, as an evolution of a working design, or as an executable specification.
Different prototyping life cycle processes are
typically used for each style. The style chosen is based on the type of results the project
needs, the quality of the results needed, and
the urgency of the results.
Prototyping Target: The target of the prototype activity is the specific product being
served by the prototyping effort. Examples
of prototyping targets include a requirements
specification, an architectural design element
or component, an algorithm, or a humanmachine user interface.
Prototyping Evaluation Techniques: A prototype may be used or evaluated in a number of ways by the software engineer or
other project stakeholders, driven primarily
by the underlying reasons that led to prototype development in the first place. Prototypes may be evaluated or tested against
the actual implemented software or against
Matrix of Topics and Reference Material for the Software Engineering Models and Methods KA (Budgen 2003 [1*], Sommerville 2011 [3*], Page-Jones 1999 [4*], Wing 1990 [5*], Brookshear 2008 [6*])
REFERENCES
[1*] D. Budgen, Software Design, 2nd ed.,
Addison-Wesley, 2003.
[2*] S.J. Mellor and M.J. Balcer, Executable
UML: A Foundation for Model-Driven
Architecture, 1st ed., Addison-Wesley,
2002.
[3*] I. Sommerville, Software Engineering, 9th
ed., Addison-Wesley, 2011.
[4*] M. Page-Jones, Fundamentals of Object-Oriented Design in UML, 1st ed., Addison-Wesley, 1999.
CHAPTER 10
SOFTWARE QUALITY
ACRONYMS
CMMI Capability Maturity Model Integration
CoSQ Cost of Software Quality
COTS Commercial Off-the-Shelf
FMEA Failure Mode and Effects Analysis
FTA Fault Tree Analysis
PDCA Plan-Do-Check-Act
PDSA Plan-Do-Study-Act
QFD Quality Function Deployment
SPI Software Process Improvement
SQA Software Quality Assurance
SQC Software Quality Control
SQM Software Quality Management
TQM Total Quality Management
V&V Verification and Validation
INTRODUCTION
What is software quality, and why is it so important that it is included in many knowledge areas
(KAs) of the SWEBOK Guide?
One reason is that the term software quality is overloaded. Software quality may refer to desirable characteristics of software products, to the extent to which a particular software product possesses those characteristics, and to processes, tools, and techniques used to achieve those characteristics. Over the years, authors and organizations have defined the term quality differently. To Phil Crosby, it was "conformance to requirements" [1]. Watts Humphrey refers to it as achieving "excellent levels of fitness for use" [2]. Meanwhile, IBM coined the phrase market-driven
use. Stakeholder value is expressed in requirements. For software products, stakeholders could
value price (what they pay for the product), lead
time (how fast they get the product), and software
quality.
This KA addresses definitions and provides an
overview of practices, tools, and techniques for
defining software quality and for appraising the
state of software quality during development,
maintenance, and deployment. Cited references
provide additional details.
BREAKDOWN OF TOPICS FOR
SOFTWARE QUALITY
The breakdown of topics for the Software Quality
KA is presented in Figure 10.1.
1.Software Quality Fundamentals
Reaching agreement on what constitutes quality
for all stakeholders and clearly communicating
that agreement to software engineers require that engineers accurately report information, conditions, and outcomes related to quality.
Ethics also play a significant role in software
quality, the culture, and the attitudes of software
engineers. The IEEE Computer Society and the
ACM have developed a code of ethics and professional practice (see Codes of Ethics and Professional Conduct in the Software Engineering
Professional Practice KA).
software product to the customer. External failure costs include activities to respond to software
problems discovered after delivery to the customer.
Software engineers should be able to use CoSQ
methods to ascertain levels of software quality
and should also be able to present quality alternatives and their costs so that tradeoffs between
cost, schedule, and delivery of stakeholder value
can be made.
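A CoSQ roll-up can be sketched as follows. The category names follow the conventional cost-of-quality breakdown (prevention, appraisal, internal failure, external failure); the figures are invented for illustration:

```python
# Illustrative Cost of Software Quality (CoSQ) roll-up; all figures invented.

costs = {
    "prevention":       30_000,  # training, process definition
    "appraisal":        45_000,  # reviews, testing
    "internal_failure": 25_000,  # rework before delivery
    "external_failure": 60_000,  # field fixes, support after delivery
}

total_cosq = sum(costs.values())
conformance = costs["prevention"] + costs["appraisal"]
nonconformance = total_cosq - conformance
print(total_cosq, conformance, nonconformance)  # 160000 75000 85000
```

Comparing conformance costs (prevention plus appraisal) with nonconformance costs (the failure categories) is one way to present the quality alternatives and tradeoffs mentioned above.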
[9*, c11s3]
Safety-critical systems are those in which a system failure could harm human life, other living
things, physical structures, or the environment.
The software in these systems is safety-critical.
There are increasing numbers of applications
of safety-critical software in a growing number
of industries. Examples of systems with safety-critical software include mass transit systems,
chemical manufacturing plants, and medical
devices. The failure of software in these systems
could have catastrophic effects. There are industry standards, such as DO-178C [11], and emerging processes, tools, and techniques for developing safety-critical software. The intent of these
standards, tools, and techniques is to reduce the
risk of injecting faults into the software and thus
improve software reliability.
Safety-critical software can be categorized as
direct or indirect. Direct safety-critical software is embedded in a safety-critical system, such as the flight control computer of an aircraft. Indirect safety-critical software includes applications used to develop safety-critical software. Indirect software is included in
software engineering environments and software
test environments.
Three complementary techniques for reducing the risk of failure are avoidance, detection
and removal, and damage limitation. These
techniques impact software functional requirements, software performance requirements, and
development processes. Increasing levels of risk
imply increasing levels of software quality assurance and control techniques such as inspections.
Higher risk levels may necessitate more thorough
inspections of requirements, design, and code
or the use of more formal analytical techniques.
Another technique for managing and controlling software risk is building assurance cases. An
assurance case is a reasoned, auditable artifact
created to support the contention that its claim
or claims are satisfied. It contains the following
and their relationships: one or more claims about
properties; arguments that logically link the evidence and any assumptions to the claims; and a
body of evidence and assumptions supporting
these arguments [12].
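The structure described in [12] can be sketched as a small data model: claims supported by arguments, which in turn cite evidence and assumptions. This is an illustrative model, not a standardized assurance-case notation such as GSN:

```python
# Illustrative assurance-case model: a claim is supported when at least one
# argument linking evidence and assumptions to it cites some evidence.
from dataclasses import dataclass, field

@dataclass
class Argument:
    reasoning: str
    evidence: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)

@dataclass
class Claim:
    statement: str
    arguments: list = field(default_factory=list)

    def is_supported(self):
        return any(arg.evidence for arg in self.arguments)

claim = Claim("Dose-limit check cannot be bypassed")
claim.arguments.append(Argument(
    reasoning="All command paths pass through the interlock module",
    evidence=["static call-graph analysis report", "interlock unit tests"],
    assumptions=["hardware interlock functions as specified"]))
print(claim.is_supported())  # True
```

Because every claim, argument, and item of evidence is an explicit record, such a structure is auditable: a reviewer can trace each claim down to the evidence and assumptions that support it.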
2.Software Quality Management Processes
Software quality management is the collection of
all processes that ensure that software products,
services, and life cycle process implementations
meet organizational software quality objectives
and achieve stakeholder satisfaction [13, 14].
SQM defines processes, process owners, requirements for the processes, measurements of the
processes and their outputs, and feedback channels throughout the whole software life cycle.
SQM comprises four subcategories: software
quality planning, software quality assurance
(SQA), software quality control (SQC), and software process improvement (SPI). Software quality planning includes determining which quality
standards are to be used, defining specific quality
goals, and estimating the effort and schedule of
software quality activities. In some cases, software quality planning also includes defining the
software quality processes to be used. SQA activities define and assess the adequacy of software
processes to provide evidence that establishes
confidence that the software processes are appropriate for and produce software products of suitable quality for their intended purposes [5]. SQC
activities examine specific project artifacts (documents and executables) to determine whether they
2.3.2.Technical Reviews
As stated in [16*],
The purpose of a technical review is to
evaluate a software product by a team of
qualified personnel to determine its suitability for its intended use and identify
discrepancies from specifications and
standards. It provides management with
evidence to confirm the technical status of
the project.
Although any work-product can be reviewed,
technical reviews are performed on the main
software engineering work-products of software
requirements and software design.
Purpose, roles, activities, and most importantly
the level of formality distinguish different types
of technical reviews. Inspections are the most formal, walkthroughs less so, and pair reviews or desk checks the least formal.
Examples of specific roles include a decision
maker (i.e., software lead), a review leader, a
recorder, and checkers (technical staff members
who examine the work-products). Reviews are
also distinguished by whether meetings (face to
face or electronic) are included in the process. In
some review methods checkers solitarily examine work-products and send their results back to
a coordinator. In other methods checkers work
cooperatively in meetings. A technical review
may require that mandatory inputs be in place in
order to proceed:
Statement of objectives
Specific software product
Specific project management plan
Issues list associated with this product
Technical review procedure.
The team follows the documented review procedure. The technical review is completed once
all the activities listed in the examination have
been completed.
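Checking the mandatory inputs can be sketched as a simple gate; the input names paraphrase the list above, and the check itself is illustrative:

```python
# Illustrative gate: a review proceeds only when every mandatory input
# is present. Input names paraphrase the list in the text.

MANDATORY_INPUTS = {
    "statement of objectives",
    "software product",
    "project management plan",
    "issues list",
    "technical review procedure",
}

def ready_to_proceed(provided):
    missing = MANDATORY_INPUTS - set(provided)
    return not missing, sorted(missing)

ok, missing = ready_to_proceed(["software product", "issues list"])
print(ok)       # False
print(missing)  # the three inputs still to be supplied
```

In practice the review leader would apply such a checklist before scheduling the review, deferring it until every mandatory input is in place.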
Technical reviews of source code may include a
wide variety of concerns such as analysis of algorithms, utilization of critical computer resources,
adherence to coding standards, structure and
also Formal Methods in the Software Engineering Models and Methods KA.)
3.3.2.Dynamic Techniques
Dynamic techniques involve executing the software code. Different kinds of dynamic techniques
are performed throughout the development and
maintenance of software. Generally, these are
testing techniques, but techniques such as simulation and model analysis may be considered
dynamic (see the Software Engineering Models
and Methods KA). Code reading is considered a
static technique, but experienced software engineers may execute the code as they read through
it. Code reading may utilize dynamic techniques.
This discrepancy in categorizing indicates that
people with different roles and experience in the
organization may consider and apply these techniques differently.
Different groups may perform testing during
software development, including groups independent of the development team. The Software
Testing KA is devoted entirely to this subject.
3.3.3.Testing
Two types of testing may fall under V&V because
of their responsibility for the quality of the materials used in the project:
Evaluation and tests of tools to be used on
the project
Conformance tests (or review of conformance tests) of components and COTS products to be used in the product.
Sometimes an independent (third-party or
IV&V) organization may be tasked to perform
testing or to monitor the test process V&V may
be called upon to evaluate the testing itself: adequacy of plans, processes, and procedures, and
adequacy and accuracy of results.
The third party is not the developer, nor is it
associated with the development of the product.
Instead, the third party is an independent facility, usually accredited by some body of authority.
Their purpose is to test a product for conformance
to a specific set of requirements (see the Software
Testing KA).
Tools that support tracking of software problems provide for entry of anomalies discovered during software testing and subsequent
analysis, disposition, and resolution. Some
tools include support for workflow and for
tracking the status of problem resolution.
Tools that analyze data captured from software engineering environments and software test environments and produce visual
displays of quantified data in the form of
graphs, charts, and tables. These tools sometimes include the functionality to perform
statistical analysis on data sets (for the purpose of discerning trends and making forecasts). Some of these tools provide defect injection and removal rates, defect densities, yields, and the distribution of defect injection and removal across the life cycle phases.
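The kind of analysis such tools automate can be sketched briefly; the phase names and defect records below are invented:

```python
# Illustrative defect-data analysis: distribution of defect injection and
# removal across life cycle phases, from per-defect records.
from collections import Counter

defects = [
    {"injected": "design", "removed": "design review"},
    {"injected": "design", "removed": "system test"},
    {"injected": "design", "removed": "system test"},
    {"injected": "coding", "removed": "unit test"},
    {"injected": "requirements", "removed": "system test"},
]

injected = Counter(d["injected"] for d in defects)
removed = Counter(d["removed"] for d in defects)
print(injected.most_common(1))  # [('design', 3)]
print(removed["system test"])   # 3
```

Seeing most defects injected in design but removed only at system test would suggest strengthening the earlier design reviews, the kind of trend these tools are meant to surface.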
Matrix of Topics and Reference Material for the Software Quality KA (Kan 2002 [3*], Galin 2004 [7*], Sommerville 2011 [9*], Voland 2003 [10*], Moore 2006 [17*], Wiegers 2003 [18*])
FURTHER READINGS
N. Leveson, Safeware: System Safety and
Computers [20].
REFERENCES
[1] P.B. Crosby, Quality Is Free, McGraw-Hill,
1979.
[2] W. Humphrey, Managing the Software
Process, Addison-Wesley, 1989.
[3*] S.H. Kan, Metrics and Models in Software Quality Engineering, 2nd ed., Addison-Wesley, 2002.
[4] ISO/IEC 25010:2011, Systems and Software Engineering - Systems and Software Quality Requirements and Evaluation (SQuaRE) - Systems and Software Quality Models, ISO/IEC, 2011.
CHAPTER 11
SOFTWARE ENGINEERING
PROFESSIONAL PRACTICE
ACRONYMS
ACM Association for Computing Machinery
BCS British Computer Society
CSDA Certified Software Development Associate
CSDP Certified Software Development Professional
IEC International Electrotechnical Commission
IEEE CS IEEE Computer Society
IFIP International Federation for Information Processing
IP Intellectual Property
ISO International Organization for Standardization
NDA Non-Disclosure Agreement
WIPO World Intellectual Property Organization
WTO World Trade Organization
INTRODUCTION
The Software Engineering Professional Practice knowledge area (KA) is concerned with the
knowledge, skills, and attitudes that software
engineers must possess to practice software engineering in a professional, responsible, and ethical manner. Because of the widespread applications of software products in social and personal
life, the quality of software products can have
profound impact on our personal well-being
and societal harmony. Software engineers must
handle unique engineering problems, producing
software with known characteristics and reliability. This requirement calls for software engineers
who possess a proper set of knowledge, skills,
training, and experience in professional practice.
The term professional practice refers to a
way of conducting services so as to achieve certain standards or criteria in both the process of
performing a service and the end product resulting from the service. These standards and criteria can include both technical and nontechnical
aspects. The concept of professional practice can
be viewed as being more applicable within those
professions that have a generally accepted body
of knowledge; codes of ethics and professional
conduct with penalties for violations; accepted
processes for accreditation, certification, and
licensing; and professional societies to provide
and administer all of these. Admission to these
professional societies is often predicated on a prescribed combination of education and experience.
A software engineer maintains a professional
practice by performing all work in accordance
with generally accepted practices, standards, and
guidelines notably set forth by the applicable professional society. For example, the Association for
Computing Machinery (ACM) and IEEE Computer Society (IEEE CS) have established a Software Engineering Code of Ethics and Professional
Practice. Both the British Computer Society (BCS)
and the International Federation for Information
Processing (IFIP) have established similar professional practice standards. ISO/IEC and IEEE have
further provided internationally accepted software
engineering standards (see Appendix B of this
Guide). IEEE CS has established two international
certification programs (CSDA, CSDP) and a corresponding Guide to the Software Engineering Body
of Knowledge (SWEBOK Guide). All of these are
Figure 11.1. Breakdown of Topics for the Software Engineering Professional Practice KA
elements that lay the foundation for the professional practice of software engineering.
BREAKDOWN OF TOPICS FOR
SOFTWARE ENGINEERING
PROFESSIONAL PRACTICE
The Software Engineering Professional Practice
KA's breakdown of topics is shown in Figure
11.1. The subareas presented in this KA are professionalism, group dynamics and psychology,
and communication skills.
1.Professionalism
A software engineer displays professionalism
notably through adherence to codes of ethics
and professional conduct and to standards and
Since standards and codes of ethics and professional conduct may be introduced, modified,
or replaced at any time, individual software engineers bear the responsibility for their own continuing study to stay current in their professional
practice.
1.3.Nature and Role of Professional Societies
[1*, c1s1–c1s2] [4*, c1s2] [5*, c35s1]
Professional societies comprise a mix of practitioners and academics. These societies
serve to define, advance, and regulate their corresponding professions. Professional societies
help to establish professional standards as well
as codes of ethics and professional conduct. For
this reason, they also engage in related activities,
which include
establishing and promulgating a body of generally accepted knowledge;
accrediting, certifying, and licensing;
dispensing disciplinary actions;
advancing the profession through conferences, training, and publications.
Participation in professional societies assists
the individual engineer in maintaining and sharpening their professional knowledge and relevancy
and in expanding and maintaining their professional network.
1.4.Nature and Role of Software Engineering
Standards
[1*, c5s3.2, c10s2.1] [5*, c32s6] [7*, c1s2]
Software engineering standards cover a remarkable variety of topics. They provide guidelines for
the practice of software engineering and processes
to be used during development, maintenance, and
support of software. By establishing a consensual
body of knowledge and experience, software engineering standards establish a basis upon which further guidelines may be developed. Appendix B of
this Guide provides guidance on IEEE and ISO/
IEC software engineering standards that support
the knowledge areas of this Guide.
The benefits of software engineering standards
are many and include improving software quality,
[1*, c7]
1.7.8.Trade Compliance
All software professionals must be aware of legal restrictions on the import, export, or reexport of goods, services, and technology in the jurisdictions in which they work. The considerations include export controls and classification; transfer of goods; acquisition of the governmental licenses necessary for use of hardware, software, services, and technology by sanctioned nations, enterprises, or individuals; and import restrictions and duties. Trade experts should be consulted for detailed compliance guidance.
1.7.9.Cybercrime
Cybercrime refers to any crime that involves
a computer, computer software, computer networks, or embedded software controlling a system. The computer or software may have been
used in the commission of a crime or it may have
been the target. This category of crime includes
fraud, unauthorized access, spam, obscene or
offensive content, threats, harassment, theft of
sensitive personal data or trade secrets, and use
of one computer to damage or infiltrate other
networked computers and automated system
controls.
Computer and software users commit fraud by
altering electronic data to facilitate illegal activity. Forms of unauthorized access include hacking, eavesdropping, and using computer systems
in a way that is concealed from their owners.
Many countries have separate laws to cover
cybercrimes, but it has sometimes been difficult
to prosecute cybercrimes due to a lack of precisely framed statutes. The software engineer has a professional obligation to consider the threat of cybercrime and to understand how the software system will protect software and user information from, or expose it to, accidental or malicious access, use, modification, destruction, or disclosure.
1.8.Documentation
[1*, c10s5.8] [3*, c1s5] [5*, c32]
Providing clear, thorough, and accurate documentation is the responsibility of each software
engineer. The adequacy of documentation is
One point to emphasize is that software engineers must be able to work in multidisciplinary
environments and in varied application domains.
Since software today is everywhere, from phones to cars, software affects people's lives far beyond the more traditional concept of software made for information management in a business environment.
2.2.Individual Cognition
Training and study add skills and knowledge to the software engineer's portfolio; reading, networking, and experimenting with new tools, techniques, and methods are all valid means of professional development.
2.3.Dealing with Problem Complexity
[3*, c3s2] [5*, c33]
Many, if not most, software engineering problems are too complex and difficult to be addressed as a whole or tackled by individual software engineers. When such circumstances arise, the usual approach is teamwork and problem decomposition (see Problem Solving Techniques in the Computing Foundations KA).
Teams work together to deal with complex and large problems by sharing burdens and drawing upon each other's knowledge and creativity. When software engineers work in teams, the different views and abilities of the individual engineers complement each other and help build a solution that is otherwise difficult to come by. Some teamwork practices specific to software engineering are pair programming (see Agile Methods in the Software Engineering Models and Methods KA) and code review (see Reviews and Audits in the Software Quality KA).
2.4.Interacting with Stakeholders
[9*, c2s3.1]
Therefore, it is vital to maintain open and productive communication with stakeholders for the duration of the software product's lifetime.
2.5.Dealing with Uncertainty and Ambiguity
[4*, c24s4, c26s2] [9*, c9s4]
As with engineers of other fields, software engineers must often deal with and resolve uncertainty and ambiguity while providing services and developing products. The software engineer must attack, and reduce or eliminate, any lack of clarity that is an obstacle to performing work.
Often, uncertainty is simply a reflection of lack
of knowledge. In this case, investigation through
recourse to formal sources such as textbooks and
professional journals, interviews with stakeholders, or consultation with teammates and peers can
overcome it.
When uncertainty or ambiguity cannot be overcome easily, software engineers or organizations
may choose to regard it as a project risk. In this
case, work estimates or pricing are adjusted to
mitigate the anticipated cost of addressing it (see
Risk Management in the Software Engineering
Management KA).
2.6.Dealing with Multicultural Environments
[9*, c10s7]
Multicultural environments can have an impact
on the dynamics of a group. This is especially
true when the group is geographically separated
or communication is infrequent, since such separation elevates the importance of each contact.
Intercultural communication is even more difficult if differences in time zones make oral communication less frequent.
Multicultural environments are quite prevalent
in software engineering, perhaps more than in
other fields of engineering, due to the strong trend of international outsourcing and the ease of shipping software components instantaneously across the globe. For example, it is rather common for a
software project to be divided into pieces across
national and cultural borders, and it is also quite
common for a software project team to consist of
people from diverse cultural backgrounds.
For a software project to be a success, team
members must achieve a level of tolerance,
acknowledging that some rules depend on societal norms and that not all societies derive the
same solutions and expectations.
This tolerance and accompanying understanding can be facilitated by the support of leadership
and management. More frequent communication,
including face-to-face meetings, can help to mitigate geographical and cultural divisions, promote
cohesiveness, and raise productivity. Also, being
able to communicate with teammates in their
native language could be very beneficial.
3.Communication Skills
It is vital that a software engineer communicate
well, both orally and in reading and writing. Successful attainment of software requirements and
deadlines depends on developing clear understanding between the software engineer and
customers, supervisors, coworkers, and suppliers. Optimal problem solving is made possible
through the ability to investigate, comprehend,
and summarize information. Customer product
acceptance and safe product usage depend on the
provision of relevant training and documentation.
It follows that the software engineer's own career success is affected by the ability to consistently provide effective oral and written communication on time.
3.1.Reading, Understanding, and Summarizing
[5*, c33s3]
Software engineers must be able to read and understand technical material, including reference books, manuals, research papers, and program source code.
Reading is not only a primary way of improving skills, but also a way of gathering information necessary for the completion of engineering
goals. A software engineer sifts through accumulated information, filtering out the pieces that
will be most helpful. Customers may request that
a software engineer summarize the results of
such information gathering for them, simplifying
or explaining it so that they may make the final
choice between competing solutions.
Reading and comprehending source code is
also a component of information gathering and
problem solving. When modifying, extending,
[3*, c1s5]
Teams often use collaboration tools to share information. In addition, the use of electronic information stores, accessible to all team members, for organizational policies, standards, common engineering procedures, and project-specific information can be most beneficial.
Some software engineering teams focus on
face-to-face interaction and promote such interaction by office space arrangement. Although
private offices improve individual productivity,
colocating team members in physical or virtual
forms and providing communal work areas is
important to collaborative efforts.
3.4.Presentation Skills
[3*, c1s5] [4*, c22] [9*, c10s7, c10s8]
Software engineers rely on their presentation
skills during software life cycle processes. For
example, during the software requirements
[The matrix of topics versus reference material for this KA is omitted here. It maps the topics of sections 1 (Professionalism), 2 (Group Dynamics and Psychology), and 3 (Communication Skills) and their subtopics to chapters and sections of Voland 2003 [3*], Sommerville 2011 [4*], McConnell 2004 [5*], IEEE-CS/ACM 1999 [6*], Moore 2006 [7*], Tockey 2004 [8*], and Fairley 2009 [9*].]
FURTHER READINGS
This was the first major book to address programming as an individual and team effort and became a classic in the field.
REFERENCES
CHAPTER 12
SOFTWARE ENGINEERING ECONOMICS
ACRONYMS
EVM   Earned Value Management
IRR   Internal Rate of Return
MARR  Minimum Acceptable Rate of Return
SDLC  Software Development Life Cycle
SPLC  Software Product Life Cycle
ROI   Return on Investment
ROCE  Return on Capital Employed
TCO   Total Cost of Ownership
INTRODUCTION
Software engineering economics is about making decisions related to software engineering in a business context. The success of a software product, service, or solution depends on good business management. Yet, in many companies and organizations, the relationship of the software business to software development and engineering remains vague. This knowledge area (KA) provides an overview of software engineering economics.
Economics is the study of value, costs,
resources, and their relationship in a given context
or situation. In the discipline of software engineering, activities have costs, but the resulting
software itself has economic attributes as well.
Software engineering economics provides a way
to study the attributes of software and software
processes in a systematic way that relates them
to economic measures. These economic measures
can be weighed and analyzed when making decisions that are within the scope of a software organization and those within the integrated scope of
an entire producing or acquiring business.
Software engineering economics is concerned
with aligning software technical decisions with
[1*, c2]
[1*, c15]
[1*, c15]
Controlling is an element of finance and accounting. It involves measuring and correcting financial and accounting performance and ensures that an organization's objectives and plans are accomplished. Cost control is a specialized branch of controlling used to detect variances of actual costs from planned costs.
1.4.Cash Flow
[1*, c3]
solutions. A commercial, off-the-shelf, object-request-broker product might cost a few thousand dollars, but the effort to develop a homegrown service that provides the same functionality could easily cost several hundred times that amount.
If the candidate solutions all adequately solve
the problem from a technical perspective, then
the selection of the most appropriate alternative
should be based on commercial factors such as
optimizing total cost of ownership (TCO) or
maximizing the short-term return on investment
(ROI). Life cycle costs such as defect correction,
field service, and support duration are also relevant considerations. These costs need to be factored in when selecting among acceptable technical approaches, as they are part of the lifetime
ROI (see section 4.3, Return on Investment).
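As an illustrative sketch of weighing lifetime ROI between acceptable alternatives, consider the following C fragment. All figures and the function name are invented for illustration and are not taken from this Guide.

```c
#include <assert.h>

/* Lifetime return on investment: (total benefit - total cost) / total cost.
   Total cost here folds acquisition cost together with life cycle costs
   such as defect correction, field service, and support. */
double lifetime_roi(double benefit, double acquisition_cost,
                    double life_cycle_cost) {
    double total_cost = acquisition_cost + life_cycle_cost;
    return (benefit - total_cost) / total_cost;
}
```

With assumed numbers, a COTS broker costing 5,000 with 15,000 in life cycle costs and a homegrown equivalent costing 60,000 with 30,000 in life cycle costs, delivering the same 100,000 benefit, the COTS option has the higher lifetime ROI.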
A systematic process for making decisions will
achieve transparency and allow later justification. Governance criteria in many organizations
demand selection from at least two alternatives.
A systematic process is shown in Figure 12.3.
It starts with a business challenge at hand and
describes the steps to identify alternative solutions, define selection criteria, evaluate the solutions, implement one selected solution, and monitor the performance of that solution.
Figure 12.3 shows the process as mostly stepwise and serial. The real process is more fluid.
Sometimes the steps can be done in a different
order and often several of the steps can be done
in parallel. The important thing is to be sure that
1.6.Valuation
1.7.Inflation
[1*, c13]
[1*, c14]
1.10.Time-Value of Money
[2*, c1]
[2*, c1]
[2*, c23]
2.1.Product
2.4.Portfolio
Ibid.
Ibid.
[1*, c3]
[1*, c4]
2.9.Planning Horizon
[1*, c11]
When an organization chooses to invest in a particular proposal, money gets tied up in that proposal, in so-called frozen assets. The economic impact of frozen assets tends to start high and decrease over time. On the other hand, the operating and maintenance costs of elements associated with the proposal tend to start low but increase over time. The total cost of the proposal, that is, of owning and operating a product, is the sum of those two costs. Early on, frozen asset costs dominate; later, the operating and maintenance costs dominate. There is a point in time where the sum of the costs is minimized; this is called the minimum cost lifetime.
To properly compare a proposal with a four-year life span to a proposal with a six-year life
span, the economic effects of either cutting the
six-year proposal by two years or investing the
profits from the four-year proposal for another
two years need to be addressed. The planning
horizon, sometimes known as the study period,
is the consistent time frame over which proposals are considered. Effects such as software lifetime will need to be factored into establishing a
planning horizon. Once the planning horizon is
established, several techniques are available for
putting proposals with different life spans into
that planning horizon.
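The minimum cost lifetime idea can be sketched in C. The cost curves below (frozen-asset cost falling with age, operating and maintenance cost rising) are invented for illustration; only the shape of the argument comes from the text above.

```c
#include <assert.h>

/* Illustrative total cost of owning and operating a product in a given
   year of its life: declining frozen-asset cost plus rising operating
   and maintenance cost. The coefficients are assumptions. */
double total_cost(int year) {
    double frozen_asset = 100.0 / year;    /* starts high, decreases */
    double operate_maintain = 10.0 * year; /* starts low, increases  */
    return frozen_asset + operate_maintain;
}

/* Year within the planning horizon at which total cost is minimized:
   the minimum cost lifetime. */
int minimum_cost_lifetime(int horizon_years) {
    int best_year = 1;
    for (int year = 2; year <= horizon_years; year++) {
        if (total_cost(year) < total_cost(best_year))
            best_year = year;
    }
    return best_year;
}
```

With these assumed curves, total cost falls from year 1 to year 3 and rises again afterward, so the minimum cost lifetime over a six-year planning horizon is three years.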
2.10.Price and Pricing
[1*, c13]
[1*, c15]
[3*, c8]
[3*, c6]
[3*, c6]
3.3.Addressing Uncertainty
[3*, c6]
[3*, c6]
[1*, c10]
[1*, c10]
[1*, c10]
4.5.Cost-Benefit Analysis
[1*, c18]
[1*, c18]
Cost-effectiveness analysis is similar to cost-benefit analysis. There are two versions of cost-effectiveness analysis: the fixed-cost version maximizes the benefit given some upper bound on cost; the fixed-effectiveness version minimizes the cost needed to achieve a fixed goal.
4.7.Break-Even Analysis
[1*, c19]
[1*, c3]
[1*, c26]
The topics discussed so far are used to make decisions based on a single decision criterion: money. The alternative with the best present worth, the best ROI, and so forth is the one selected. Aside from technical feasibility, money is almost always the most important decision criterion, but it's not always the only one. Quite often there are other criteria, other attributes, that need to be considered, and those attributes can't be cast in terms of money. Multiple attribute decision techniques allow other, nonfinancial criteria to be factored into the decision.
There are two families of multiple attribute
decision techniques that differ in how they use
the attributes in the decision. One family is the
compensatory, or single-dimensioned, techniques. This family collapses all of the attributes
onto a single figure of merit. The family is called
compensatory because, for any given alternative,
a lower score in one attribute can be compensated
by, or traded off against, a higher score in other
attributes. The compensatory techniques include
nondimensional scaling
additive weighting
analytic hierarchy process.
In contrast, the other family is the noncompensatory, or fully dimensioned, techniques.
This family does not allow tradeoffs among the
attributes. Each attribute is treated as a separate
entity in the decision process. The noncompensatory techniques include
dominance
satisficing
lexicography.
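Of the compensatory techniques, additive weighting is the simplest to sketch: each alternative is scored per attribute on a common scale, each attribute carries a weight, and the figure of merit is the weighted sum. The scales, weights, and function name below are assumptions for illustration.

```c
#include <assert.h>

/* Additive weighting: collapse per-attribute scores (assumed scale 0-10)
   into a single figure of merit using weights that sum to 1.0. A lower
   score in one attribute can be compensated by higher scores elsewhere,
   which is what makes the technique compensatory. */
double additive_weighted_score(const double scores[], const double weights[],
                               int n_attributes) {
    double merit = 0.0;
    for (int i = 0; i < n_attributes; i++)
        merit += weights[i] * scores[i];
    return merit;
}
```

For example, an alternative scoring 8, 6, and 9 on three attributes weighted 0.5, 0.3, and 0.2 collapses to a single figure of merit of 7.6, which can then be compared against the other alternatives' figures.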
4.10.Optimization Analysis
[1*, c20]
[1*, c21]
results from adding features that have low marginal value for the users (see Agile Methods in
the Software Engineering Models and Methods
KA and Software Life Cycle Models in the Software Engineering Process KA). In agile methods, detailed planning and lengthy development
phases are replaced by incremental planning and
frequent delivery of small increments of a deliverable product that is tested and evaluated by user
representatives.
5.2.Friction-Free Economy
Economic friction is everything that keeps markets from having perfect competition. It involves distance, cost of delivery, restrictive regulations, and/or imperfect information. In high-friction markets, customers don't have many suppliers from which to choose. Having been in a business for a while or owning a store in a good location determines the economic position. It's hard for new competitors to start a business and compete. The marketplace moves slowly and predictably.
Friction-free markets are just the reverse. New competitors emerge and customers are quick to respond. The marketplace is anything but predictable. Theoretically, software and IT are friction-free. New companies can easily create products and often do so at a much lower cost than established companies, since they need not consider any legacies. Marketing and sales can be done via the Internet and social networks, and basically free distribution mechanisms can enable a ramp-up to a global business. Software engineering economics aims to provide foundations to judge how a software business performs and how friction-free a market actually is. For instance, competition among software app developers is inhibited when apps must be sold through an app store and must comply with that store's rules.
5.3.Ecosystems
An ecosystem is an environment consisting of all
the mutually dependent stakeholders, business
units, and companies working in a particular area.
[The matrix of topics versus reference material for this KA is omitted here. It maps the topics of sections 1 through 5 and their subtopics to chapters of Tockey 2005 [1*], Sommerville 2011 [2*], and Fairley 2009 [3*].]
FURTHER READINGS
REFERENCES
CHAPTER 13
COMPUTING FOUNDATIONS
ACRONYMS
AOP   Aspect-Oriented Programming
ALU   Arithmetic and Logic Unit
API   Application Programming Interface
ATM   Asynchronous Transfer Mode
B/S   Browser-Server
CERT  Computer Emergency Response Team
COTS  Commercial Off-The-Shelf
CRUD  Create, Read, Update, Delete
C/S   Client-Server
CS    Computer Science
DBMS  Database Management System
FPU   Floating Point Unit
I/O   Input and Output
ISA   Instruction Set Architecture
ISO   International Organization for Standardization
ISP   Internet Service Provider
LAN   Local Area Network
MUX   Multiplexer
NIC   Network Interface Card
OOP   Object-Oriented Programming
OS    Operating System
OSI   Open Systems Interconnection
PC    Personal Computer
PDA   Personal Digital Assistant
PPP   Point-to-Point Protocol
RFID  Radio Frequency Identification
RAM   Random Access Memory
ROM   Read Only Memory
SCSI  Small Computer System Interface
SQL   Structured Query Language
TCP   Transmission Control Protocol
UDP   User Datagram Protocol
VPN   Virtual Private Network
WAN   Wide Area Network
INTRODUCTION
The scope of the Computing Foundations knowledge area (KA) encompasses the development
and operational environment in which software
evolves and executes. Because no software can
exist in a vacuum or run without a computer, the
core of such an environment is the computer and
its various components. Knowledge about the
computer and its underlying principles of hardware and software serves as a framework on
which software engineering is anchored. Thus, all
software engineers must have a good understanding of the Computing Foundations KA.
It is generally accepted that software engineering builds on top of computer science. For
example, Software Engineering 2004: Curriculum Guidelines for Undergraduate Degree
Programs in Software Engineering [1] clearly
states, "One particularly important aspect is that software engineering builds on computer science and mathematics" (italics added).
Steve Tockey wrote in his book Return on
Software:
Both computer science and software engineering deal with computers, computing,
and software. The science of computing, as
a body of knowledge, is at the core of both.
that the computer can eventually solve the problem. In general, a problem should be expressed
in such a way as to facilitate the development of
algorithms and data structures for solving it.
The result of the first task is a problem statement. The next step is to convert the problem statement into algorithms that solve the problem. Once
an algorithm is found, the final step converts the
algorithm into machine instructions that form the
final solution: software that solves the problem.
Abstractly speaking, problem solving using a computer can be considered a process of problem transformation; in other words, the step-by-step transformation of a problem statement into a problem solution. To the discipline of software engineering, the ultimate objective of problem solving is to transform a problem expressed in natural language into electrons running around a circuit. In general, this transformation can be broken into three phases:
a) Development of algorithms from the problem statement.
b) Application of algorithms to the problem.
c) Transformation of algorithms to program
code.
The conversion of a problem statement into algorithms and of algorithms into program code usually follows stepwise refinement (a.k.a. systematic decomposition), in which we start with a problem statement, rewrite it as a task, and recursively decompose the task into a few simpler subtasks until the task is so simple that solutions to it are straightforward. There are three basic ways of decomposing: sequential, conditional, and iterative.
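The three basic decomposition forms can be seen together in a small task. The task below, summing the even numbers from 1 to n, and all names in it are invented for illustration; they are not from this Guide.

```c
#include <assert.h>

/* Stepwise refinement of an assumed task, "sum the even numbers from
   1 to n", showing the three basic ways of decomposing. */

/* Conditional decomposition: decide whether one number contributes. */
static int is_even(int k) {
    return k % 2 == 0;
}

int sum_of_evens(int n) {
    int sum = 0;                 /* sequential step 1: initialize */
    for (int k = 1; k <= n; k++) /* iterative: repeat the subtask per k */
        if (is_even(k))          /* conditional: accumulate only evens */
            sum += k;
    return sum;                  /* sequential step 2: deliver result */
}
```

Each refinement step replaces one task with simpler subtasks: the loop body is a subtask of the summation, and the evenness test is a subtask of the loop body.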
2.Abstraction
[3*, s5.2-s5.4]
Abstraction is an indispensable technique associated with problem solving. It refers to both the process and the result of generalization by reducing the information of a concept, a problem, or an observable phenomenon so that one can focus on the big picture. One of the most important skills in any engineering undertaking is framing the levels of abstraction appropriately.
at different times; in other words, we work on different levels of abstraction as the situation calls for. Most of the time, these different levels of abstraction are organized in a hierarchy. There are many ways to structure a particular hierarchy, and the criteria used in determining the specific content of each layer in the hierarchy vary depending on the individuals performing the work.
Sometimes, a hierarchy of abstraction is sequential, which means that each layer has one and only one predecessor (lower) layer and one and only one successor (upper) layer, except the topmost layer (which has no successor) and the bottommost layer (which has no predecessor). Sometimes, however, the hierarchy is organized in a tree-like structure, which means each layer can have more than one predecessor layer but only one successor layer. Occasionally, a hierarchy can have a many-to-many structure, in which each layer can have multiple predecessors and successors. At no time shall there be a loop in a hierarchy.
A hierarchy often forms naturally in task decomposition. Often, a task can be decomposed in a hierarchical fashion, starting with the larger tasks and goals of the organization and breaking each of them down into smaller subtasks that can again be further subdivided. This continuous division of tasks into smaller ones produces a hierarchical structure of tasks and subtasks.
2.4.Alternate Abstractions
Sometimes it is useful to have multiple alternate abstractions for the same problem so that one can keep different perspectives in mind. For example, we can have a class diagram, a state chart, and a sequence diagram for the same software at the same level of abstraction. These alternate abstractions do not form a hierarchy but rather complement each other in helping one understand the problem and its solution. Though beneficial, it is at times difficult to keep alternate abstractions in sync.
3.Programming Fundamentals
[3*, c6-c19]
problems. In functional programming, all computations are treated as the evaluation of mathematical functions. In contrast to imperative programming, which emphasizes changes in state, functional programming emphasizes the application of functions, avoids state and mutable data, and provides referential transparency.
4.Programming Language Basics
[4*, c6]
specific requirements for the definition of variables and constants (in other words, declarations and types) and format requirements for the instructions themselves.
In general, a programming language supports
such constructs as variables, data types, constants, literals, assignment statements, control
statements, procedures, functions, and comments.
The syntax and semantics of each construct must
be clearly specified.
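The constructs just listed can be shown together in a minimal C fragment. The names and the days-per-week example are invented for illustration.

```c
#include <assert.h>

/* A minimal fragment exercising the constructs named above: a named
   constant, a variable with a declared type, assignment, a control
   statement, a function, and comments. */
#define DAYS_PER_WEEK 7          /* constant bound to a literal */

int days_in_weeks(int weeks) {   /* function with a typed parameter */
    int days = 0;                /* variable declaration and assignment */
    if (weeks > 0)               /* control statement */
        days = weeks * DAYS_PER_WEEK;
    return days;
}
```

The syntax of each construct (where the semicolons go, how a type precedes a name) and its semantics (what the `if` selects, what the function returns) are both fixed by the language definition.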
4.3.Low-Level Programming Languages
Programming languages can be classified into two
classes: low-level languages and high-level languages. Low-level languages can be understood
by a computer with no or minimal assistance and
typically include machine languages and assembly languages. A machine language uses ones
and zeros to represent instructions and variables,
and is directly understandable by a computer. An
assembly language contains the same instructions
as a machine language but the instructions and
variables have symbolic names that are easier for
humans to remember.
Assembly languages cannot be directly understood by a computer and must be translated into a
machine language by a utility program called an
assembler. There often exists a correspondence between the instructions of an assembly language and a machine language, and the translation from assembly code to machine code is straightforward. For example, add r1, r2, r3 is an assembly instruction for adding the contents of registers r2 and r3 and storing the sum into register r1. This instruction can be easily translated into machine code 0001 0001 0010 0011 (assuming the operation code for addition is 0001; see Figure 13.2).
Figure 13.2. Translation of the assembly instruction add r1, r2, r3 into the machine code fields add = 0001, r1 = 0001, r2 = 0010, r3 = 0011.
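The translation in the example amounts to packing four 4-bit fields into a 16-bit word, which can be sketched in C. The 0001 opcode and the field layout follow the toy encoding above; the function name and the use of a 16-bit word are assumptions, not a real instruction set.

```c
#include <assert.h>
#include <stdint.h>

/* Encode the toy add instruction from the example: four 4-bit fields
   (opcode, destination, source1, source2) packed into one 16-bit word.
   Opcode 0001 for add is the chapter's assumption, not a real ISA. */
uint16_t encode_add(uint16_t dst, uint16_t src1, uint16_t src2) {
    uint16_t opcode = 0x1;                       /* 0001 = add */
    return (uint16_t)((opcode << 12) | ((dst & 0xF) << 8)
                      | ((src1 & 0xF) << 4) | (src2 & 0xF));
}
```

Encoding add r1, r2, r3 this way yields the word 0x1123, whose binary form is exactly the 0001 0001 0010 0011 shown above; an assembler performs the same mechanical substitution of symbolic names for bit fields.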
[3*, c23]
Once a program is coded and compiled (compilation will be discussed in section 10), the next step is debugging, which is a methodical process of finding and reducing the number of bugs, or faults, in a program. The purpose of debugging is to find out why a program doesn't work or produces a wrong result or output. Except for very simple programs, debugging is always necessary.
5.1.Types of Errors
When a program does not work, it is often because the program contains bugs or errors, which can be syntax errors, logic errors, or data errors.
Logical errors and data errors are also known as
two categories of faults in software engineering
terminology (see topic 1.1, Testing-Related Terminology, in the Software Testing KA).
A syntax error is simply any error that prevents the translator (compiler/interpreter) from successfully parsing a statement. Every statement in a program must be parseable before its
meaning can be understood and interpreted (and,
therefore, executed). In high-level programming
languages, syntax errors are caught during the
compilation or translation from the high-level
language into machine code. For example, in the
C/C++ programming language, the statement
123=constant; contains a syntax error that will
be caught by the compiler during compilation.
Logic errors are semantic errors that result in incorrect computations or program behaviors. Your program is legal, but wrong! The results do not match the problem statement or user expectations. For example, in the C/C++ programming language, the inline function int f(int x) {return f(x-1);} for computing the factorial x! is legal but logically incorrect. This type of error cannot be caught by a compiler during compilation and is often discovered through tracing the execution of the program. (Modern static checkers do identify some of these errors; however, the point remains that such errors are not machine-checkable in general.)
Data errors are input errors that result either in
input data that is different from what the program
expects or in the processing of wrong data.
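For illustration, the logic error in the factorial example above can be corrected by supplying the missing base case and the missing multiplication; this sketch writes it as a plain C function rather than the original inline form.

```c
#include <assert.h>

/* The chapter's example, int f(int x) { return f(x-1); }, recurses
   forever because it has no base case and never multiplies. A correct
   factorial terminates at the base case x <= 1. */
int factorial(int x) {
    if (x <= 1)                      /* base case: 0! = 1! = 1 */
        return 1;
    return x * factorial(x - 1);     /* recursive case: x! = x * (x-1)! */
}
```

Both versions compile cleanly; only tracing the execution (or a static checker that detects the unconditional recursion) reveals that the original never terminates.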
5.2.Debugging Techniques
Debugging involves many activities and can be
static, dynamic, or postmortem. Static debugging usually takes the form of code review, while
dynamic debugging usually takes the form of
tracing and is closely associated with testing.
Postmortem debugging is the act of debugging
the core dump (memory dump) of a process. Core
dumps are often generated after a process has terminated due to an unhandled exception. All three
techniques are used at various stages of program
development.
The main activity of dynamic debugging is tracing: executing the program one piece at a time and examining the contents of registers and memory in order to check the results at each step. There are three ways to trace a program.
Single-stepping: execute one instruction at a time to make sure each instruction is executed correctly. This method is tedious but useful in verifying each step of a program.
Breakpoints: tell the program to stop executing when it reaches a specific instruction. This technique lets one quickly execute selected code sequences to get a high-level overview of the execution behavior.
Watch points: tell the program to stop when a register or memory location changes or when it equals a specific value. This technique is useful when one doesn't know where or when a value is changed and when this value change likely causes the error.
5.3.Debugging Tools
Debugging can be complex, difficult, and tedious. Like programming, debugging is also highly creative (sometimes more creative than programming). Thus, some help from tools is in order. For dynamic debugging, debuggers are widely used; they enable the programmer to monitor the execution of a program, stop the execution, restart the execution, set breakpoints, change values in memory, and even, in some cases, go back in time.
For static debugging, there are many static
code analysis tools, which look for a specific
set of known problems within the source code.
7.2.Attributes of Algorithms
The attributes of algorithms are many and often include modularity, correctness, maintainability, functionality, robustness, user-friendliness (i.e., ease of being understood by people), programmer time, simplicity, and extensibility. A commonly emphasized attribute is performance or efficiency, by which we mean both time and resource-usage efficiency, with the emphasis generally on time. To some degree, efficiency determines whether an algorithm is feasible or impractical. For example, an algorithm that takes one hundred years to terminate is virtually useless and is even considered incorrect.
7.3.Algorithmic Analysis
Analysis of algorithms is the theoretical study
of computer-program performance and resource
usage; to some extent it determines the goodness
of an algorithm. Such analysis usually abstracts
away the particular details of a specific computer
and focuses on the asymptotic, machine-independent analysis.
There are three basic types of analysis. In
worst-case analysis, one determines the maximum time or resources required by the algorithm
on any input of size n. In average-case analysis,
one determines the expected time or resources
required by the algorithm over all inputs of size
n; in performing average-case analysis, one often
needs to make assumptions on the statistical distribution of inputs. The third type of analysis is
the best-case analysis, in which one determines
the minimum time or resources required by the
algorithm on any input of size n. Among the
three types of analysis, average-case analysis is
the most relevant but also the most difficult to
perform.
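To make the three cases concrete, consider linear search; the following sketch (a hypothetical example, not drawn from the reference texts) annotates where each case arises:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:
            return i  # best case: target is first, one comparison
    return -1  # worst case: target absent, all n elements examined

data = [7, 3, 9, 1, 5]
# Best case: the target sits at index 0, so one comparison suffices.
assert linear_search(data, 7) == 0
# Worst case: the target is absent, so all n = 5 elements are examined.
assert linear_search(data, 4) == -1
# Average case (uniformly placed target): about n/2 comparisons expected.
```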
Besides the basic analysis methods, there are also amortized analysis, in which one determines the maximum time required by an algorithm over a sequence of operations, and competitive analysis, in which one determines
the relative performance merit of an algorithm
against the optimal algorithm (which may not
be known) in the same category (for the same
operations).
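Amortized analysis can be illustrated with the classic doubling dynamic array; the sketch below (a hypothetical example) counts the element copies incurred by a sequence of n appends:

```python
def append_n(n):
    """Count element copies performed by a doubling dynamic array."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:   # array full: allocate double, copy elements over
            copies += size
            capacity *= 2
        size += 1
    return copies

# Total copy work over n appends stays below 2n, so each append is
# amortized O(1), even though an individual append can cost O(n).
assert append_n(1000) < 2 * 1000
```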
[6*, c10]
Figure 13.3. Basic Components of a Computer System Based on the von Neumann Model
8.2.Systems Engineering
Systems engineering is the interdisciplinary
approach governing the total technical and managerial effort required to transform a set of customer needs, expectations, and constraints into a solution and to support that solution throughout its life [7]. The life cycle stages of systems
engineering vary depending on the system being
built but, in general, include system requirements
definition, system design, subsystem development, system integration, system testing, system installation, system evolution, and system
decommissioning.
Many practical guidelines have been produced
in the past to aid people in performing the activities of each phase. For example, system design
can be broken into smaller tasks of identification
of subsystems, assignment of system requirements to subsystems, specification of subsystem functionality, definition of subsystem interfaces,
and so forth.
8.3.Overview of a Computer System
Among all the systems, one that is obviously relevant to the software engineering community is
the computer system. A computer is a machine
that executes programs or software. It consists of a purposeful collection of mechanical, electrical, and electronic components.
9.Computer Organization
[8*, c1-c4]
10.1.Compiler/Interpreter Overview
Programmers usually write programs in high-level language code, which the CPU cannot execute; so this source code has to be converted into machine code to be understood by a computer. Because instruction set architectures (ISAs) differ, the translation must be done for each ISA or specific machine language under consideration.
The translation is usually performed by a piece
of software called a compiler or an interpreter.
This process of translation from a high-level language to a machine language is called compilation, or, sometimes, interpretation.
10.2.Interpretation and Compilation
There are two ways to translate a program written in a higher-level language into machine code:
interpretation and compilation. Interpretation
translates the source code one statement at a time
into machine language, executes it on the spot,
and then goes back for another statement. Both
the high-level-language source code and the interpreter are required every time the program is run.
Compilation translates the high-level-language
source code into an entire machine-language program (an executable image) by a program called a
compiler. After compilation, only the executable
image is needed to run the program. Most application software is sold in this form.
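The two modes can be mimicked in Python, whose built-in compile function produces a code object analogous to an executable image (an illustrative sketch; the three-line source program is hypothetical):

```python
source = "x = 2\ny = 3\nz = x * y"   # a tiny hypothetical source program

# Compilation: translate the whole program once into a code object
# (the analogue of an executable image); it can then be run repeatedly
# without re-translating the source.
image = compile(source, "<demo>", "exec")
namespace = {}
exec(image, namespace)

# Interpretation: translate and execute one statement at a time;
# the source text must be re-read on every run.
for statement in source.splitlines():
    exec(statement)  # translate and execute on the spot

assert namespace["z"] == 6
```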
While both compilation and interpretation convert high level language code into machine code,
[4*, c3]
- Multiprogrammed batching OS: adds multitasking capability to the earlier simple batching OSs. An example of such an OS is IBM's OS/360.
- Time-sharing OS: adds multitasking and interactive capabilities to the OS. Examples of such OSs include UNIX, Linux, and NT.
- Real-time OS: adds timing predictability to the OS by scheduling individual tasks according to each task's completion deadline. Examples of such OSs include VxWorks (WindRiver) and DART (EMC).
- Distributed OS: adds the capability of managing a network of computers to the OS.
- Embedded OS: has limited functionality and is used for embedded systems such as cars and PDAs. Examples of such OSs include Palm OS, Windows CE, and TOPPER.
Alternatively, an OS can be classified by its
applicable target machine/environment into the
following.
- Mainframe OS: runs on mainframe computers; examples include OS/360, OS/390, AS/400, MVS, and VM.
- Server OS: runs on workstations or servers; examples include UNIX, Windows, Linux, and VMS.
- Multicomputer OS: runs on multiple computers; an example is Novell NetWare.
- Personal computer OS: runs on personal computers; examples include DOS, Windows, Mac OS, and Linux.
- Mobile device OS: runs on personal devices such as cell phones and iPads; examples include iOS, Android, and Symbian.
12.Database Basics and Data Management
[4*, c9]
A database consists of an organized collection of
data for one or more uses. In a sense, a database is
a generalization and expansion of data structures.
But the difference is that a database is usually external to individual programs and permanent in existence, whereas data structures live only inside the programs that use them. Databases are used when the data volume is large or logical
12.5.Data Management
A database must manage the data stored in it.
This management includes both organization and
storage.
The organization of the actual data in a database
depends on the database model. In a relational
model, data are organized as tables with different
tables representing different entities or relations
among a set of entities. The storage of data deals
with the storage of these database tables on disks.
The common way of achieving this is to use files. Sequential, indexed, and hash files are all used for this purpose, with different file structures providing different access performance and convenience.
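As an illustrative sketch of hashed access (a hypothetical toy, not a real file structure), records can be grouped into buckets by hashing the key, so a lookup inspects one bucket instead of scanning sequentially:

```python
# A toy "hash file": records are placed in buckets by hashing the key,
# so a lookup touches only one bucket instead of scanning all records.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def store(key, record):
    buckets[hash(key) % NUM_BUCKETS].append((key, record))

def fetch(key):
    # Only the one bucket selected by the hash must be searched.
    for k, record in buckets[hash(key) % NUM_BUCKETS]:
        if k == key:
            return record
    return None

store("alice", {"dept": "sales"})
store("bob", {"dept": "engineering"})
assert fetch("alice") == {"dept": "sales"}
assert fetch("carol") is None
```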
12.6.Data Mining
One often has to know what to look for before
querying a database. This type of pinpointing
access does not make full use of the vast amount
of information stored in the database, and in fact
reduces the database to a collection of discrete
records. To take full advantage of a database, one
can perform statistical analysis and pattern discovery on the content of a database using a technique called data mining. Such operations can be
used to support a number of business activities
that include, but are not limited to, marketing,
fraud detection, and trend analysis.
Numerous ways for performing data mining
have been invented in the past decade and include
such common techniques as class description,
class discrimination, cluster analysis, association
analysis, and outlier analysis.
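A minimal sketch of one such technique, outlier analysis, flags values that lie far from the mean; the data and the two-standard-deviation threshold below are hypothetical:

```python
from statistics import mean, stdev

def outliers(values, k=2.0):
    """Flag values more than k standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > k * s]

# Seven transaction amounts, one of which is suspicious.
amounts = [102, 98, 101, 99, 100, 97, 500]
assert outliers(amounts) == [500]
```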
13.Network Communication Basics
[8*, c12]
link layer protocols include frame relay, asynchronous transfer mode (ATM), and Point-to-Point Protocol (PPP). Application layer protocols include Fibre Channel, Small Computer System
Interface (SCSI), and Bluetooth. For each layer
or even each individual protocol, there may be
standards established by national or international
organizations to guide the design and development of the corresponding protocols.
Figure 13.5. The Seven-Layer OSI Networking Model (Application, Presentation, Session, Transport, Network, Data Link, Physical)
13.4.The Internet
The Internet is a global system of interconnected
governmental, academic, corporate, public, and
private computer networks. In the public domain, access to the Internet is provided through organizations known as Internet service providers (ISPs). An ISP maintains one or more switching centers called points of presence, which actually connect users to the Internet.
13.5.Internet of Things
The Internet of Things refers to the networking of everyday objects, such as cars, cell phones, PDAs, TVs, refrigerators, and even buildings, using wired or wireless networking technologies. The purpose of the Internet of Things is to interconnect all things to facilitate autonomous and better living. Technologies used in the Internet of Things include RFID, wireless and wired networking, sensor technology, and, of course, much software. As the paradigm of the Internet of Things is still taking shape, much work is needed for it to gain widespread acceptance.
Different ways of coordination give rise to different computing models. The most common models in this regard are the shared memory (parallel) model and the message-passing (distributed)
model.
In a shared memory (parallel) model, all computers have access to a shared central memory
where local caches are used to speed up the
processing power. These caches use a protocol, typically the MESI protocol, to ensure that the localized data is fresh and up to date. The algorithm
designer chooses the program for execution by
each computer. Access to the central memory can
be synchronous or asynchronous, and must be
coordinated such that coherency is maintained.
Different access models have been invented for
such a purpose.
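A minimal sketch of the shared memory model (a hypothetical example): several threads update one shared counter, with a lock coordinating access so that no update is lost:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # coordinated access keeps the shared memory coherent
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without the lock, concurrent read-modify-write updates could be lost.
assert counter == 40_000
```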
In a message-passing (distributed) model, all
computers run some programs that collectively
achieve some purpose. The system must work
correctly regardless of the structure of the network. This model can be further classified into
client-server (C/S), browser-server (B/S), and
n-tier models. In the C/S model, the server provides services and the client requests services
from the server. In the B/S model, the server provides services and the client is the browser. In the
n-tier model, each tier (i.e., layer) provides services to the tier immediately above it and requests
services from the tier immediately below it. In
fact, the n-tier model can be seen as a chain of
client-server models. Often, the tiers between the
bottommost tier and the topmost tier are called
middleware, which is a distinct subject of study
in its own right.
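A minimal sketch of the message-passing model (a hypothetical example): a client thread sends requests to a server thread over queues, with no directly shared data:

```python
import queue
import threading

requests, replies = queue.Queue(), queue.Queue()

def server():
    while True:
        msg = requests.get()
        if msg is None:            # shutdown message
            break
        replies.put(msg * msg)     # the offered service: squaring numbers

t = threading.Thread(target=server)
t.start()
for n in (2, 3):
    requests.put(n)                # the client requests a service
requests.put(None)
t.join()
results = [replies.get(), replies.get()]
assert results == [4, 9]
```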
14.4.Main Issues in Distributed Computing
Coordination among all the components in a distributed computing environment is often complex
and time-consuming. As the number of cores/
CPUs/computers increases, the complexity of
distributed computing also increases. Among
the many issues faced, memory coherency and
consensus among all computers are the most difficult ones. Many computation paradigms have
been invented to solve these problems and are
the main discussion issues in distributed/parallel
computing.
15.3.Software Robustness
Software robustness refers to the ability of software to tolerate erroneous inputs. Software is said
to be robust if it continues to function even when
erroneous inputs are given. Thus, it is unacceptable for software to simply crash when encountering an input problem as this may cause unexpected consequences, such as the loss of valuable
data. Software that exhibits such behavior is considered to lack robustness.
Nielsen gives a simpler description of software robustness: "The software should have a low error rate, so that users make few errors during the use of the system and so that if they do make errors they can easily recover from them. Further, catastrophic errors must not occur" [9*].
There are many ways to evaluate the robustness of software and just as many ways to make
software more robust. For example, to improve
robustness, one should always check the validity
of the inputs and return values before progressing further; one should always throw an exception when something unexpected occurs, and
one should never quit a program without first
giving users/applications a chance to correct the
condition.
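The guidelines above can be sketched as follows (a hypothetical parse_age function; the 0-150 range is an assumed validity rule):

```python
def parse_age(text):
    """Validate an input before using it, raising rather than crashing."""
    try:
        age = int(text)
    except ValueError:
        raise ValueError(f"not a number: {text!r}")
    if not 0 <= age <= 150:       # assumed validity range
        raise ValueError(f"age out of range: {age}")
    return age

assert parse_age("42") == 42
for bad in ("abc", "-5"):
    try:
        parse_age(bad)
    except ValueError:
        pass    # the caller gets a chance to correct the condition
    else:
        raise AssertionError("invalid input was accepted")
```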
16.Basic Developer Human Factors
[3*, c3132]
Developer human factors refer to the considerations of human factors taken when developing
software. Software is developed by humans, read
by humans, and maintained by humans. If anything is wrong, humans are responsible for correcting those wrongs. Thus, it is essential to write
software in a way that is easily understandable
by humans or, at the very least, by other software
developers. A program that is easy to read and
understand exhibits readability.
The means to ensure that software meets this objective are numerous and range from proper
architecture at the macro level to the particular
coding style and variable usage at the micro level.
But the two prominent factors are structure (or
program layouts) and comments (documentation).
16.1.Structure
Well-structured programs are easier to understand
and modify. If a program is poorly structured, then
no amount of explanation or comments is sufficient
to make it understandable. The ways to organize a
program are numerous and range from the proper
use of white space, indentation, and parentheses to
nice arrangements of groupings, blank lines, and
braces. Whatever style one chooses, it should be
consistent across the entire program.
16.2.Comments
To most people, programming is coding. These
people do not realize that programming also
includes writing comments and that comments are
an integral part of programming. True, comments
are not used by the computer and certainly do not
constitute final instructions for the computer, but
they improve the readability of the programs by
explaining the meaning and logic of the statements
or sections of code. It should be remembered that
programs are not only meant for computers, they
are also read, written, and modified by humans.
The types of comments include repetition of the code, explanation of the code, markers in the code, summary of the code, description of the code's intent, and information that cannot possibly be expressed by the code itself. Some comments are good, some are not. The good ones are those that explain the intent of the code and justify why the code looks the way it does. The bad ones are those that repeat the code or state irrelevant information. The best comments are self-documenting code: if the code is written in such a clear and precise manner that its meaning is self-evident, then no comment is needed. But this is easier said than done. Most programs are not self-explanatory and are often hard to read and understand if no comments are given.
Here are some general guidelines for writing
good comments:
- Comments should be consistent across the entire program.
- Each function should be associated with comments that explain the purpose of the function and its role in the overall program.
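The contrast between good and bad comments can be sketched as follows (a hypothetical example):

```python
# A bad comment merely repeats the code:
#   i += 1   # increment i by one

def days_in_february(year):
    # Good comment: explains intent, not mechanics. Century years are
    # leap years only when divisible by 400 (Gregorian calendar rule).
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return 29 if leap else 28

assert days_in_february(2024) == 29
assert days_in_february(1900) == 28   # divisible by 100 but not by 400
```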
1. Validate input.
2. Heed compiler warnings.
3. Architect and design for security policies.
4. Keep it simple.
5. Default deny.
6. Adhere to the principle of least privilege.
7. Sanitize data sent to other software.
8. Practice defense in depth.
9. Use effective quality assurance techniques.
10. Adopt a software construction security
standard.
MATRIX OF TOPICS VS. REFERENCE MATERIAL

Reference material: Voland 2003 [2*], McConnell 2004 [3*], Brookshear 2008 [4*], Sommerville 2011 [6*], Nielsen 1993 [9*], and Bishop 2002 [11*].

1. Problem Solving Techniques
  1.1. Definition of Problem Solving
  1.2. Formulating the Real Problem
  1.3. Analyze the Problem
  1.4. Design a Solution Search Strategy
  1.5. Problem Solving Using Programs
2. Abstraction
  2.1. Levels of Abstraction
  2.2. Encapsulation
  2.3. Hierarchy
3. Programming Fundamentals
  3.1. The Programming Process
  3.2. Programming Paradigms
  3.3. Defensive Programming
4. Programming Language Basics
  4.1. Programming Language Overview
  4.2. Syntax and Semantics of Programming Language
  4.3. Low Level Programming Language
  4.4. High Level Programming Language
  4.5. Declarative vs. Imperative Programming Language
5. Debugging Tools and Techniques
  5.1. Types of Errors
  5.2. Debugging Techniques
  5.3. Debugging Tools
6. Data Structure and Representation
  6.1. Data Structure Overview
  6.2. Types of Data Structure
  6.3. Operations on Data Structures
7. Algorithms and Complexity
  7.1. Overview of Algorithms
  7.2. Attributes of Algorithms
  7.3. Algorithmic Analysis
  7.4. Algorithmic Design Strategies
  7.5. Algorithmic Analysis Strategies
8. Basic Concept of a System
  8.1. Emergent System Properties
  8.2. System Engineering
  8.3. Overview of a Computer System
9. Computer Organization
  9.1. Computer Organization Overview
  9.2. Digital Systems
  9.3. Digital Logic
  9.4. Computer Expression of Data
  9.5. The Central Processing Unit (CPU)
  9.6. Memory System Organization
  9.7. Input and Output (I/O)
10. Compiler Basics
  10.1. Compiler Overview
  10.2. Interpretation and Compilation
  10.3. The Compilation Process
11. Operating Systems Basics
  11.1. Operating Systems Overview
  11.2. Tasks of Operating System
  11.3. Operating System Abstractions
  11.4. Operating Systems Classification
12. Database Basics and Data Management
  12.1. Entity and Schema
  12.2. Database Management Systems (DBMS)
  12.3. Database Query Language
  12.4. Tasks of DBMS Packages
  12.5. Data Management
  12.6. Data Mining
13. Network Communication Basics
  13.1. Types of Network
  13.2. Basic Network Components
  13.3. Networking Protocols and Standards
  13.4. The Internet
  13.5. Internet of Things
  13.6. Virtual Private Network
14. Parallel and Distributed Computing
  14.1. Parallel and Distributed Computing Overview
  14.2. Differences between Parallel and Distributed Computing
  14.3. Parallel and Distributed Computing Models
  14.4. Main Issues in Distributed Computing
15. Basic User Human Factors
  15.1. Input and Output
  15.2. Error Messages
  15.3. Software Robustness
16. Basic Developer Human Factors
  16.1. Structure
  16.2. Comments
17. Secure Software Development and Maintenance
  17.1. Two Aspects of Secure Coding
  17.2. Coding Security into Software
  17.3. Requirement Security
  17.4. Design Security
  17.5. Implementation Security
REFERENCES
[1] Joint Task Force on Computing Curricula,
IEEE Computer Society and Association
for Computing Machinery, Software
Engineering 2004: Curriculum Guidelines
for Undergraduate Degree Programs in
Software Engineering, 2004; http://sites.computer.org/ccse/SE2004Volume.pdf.
[2*] G. Voland, Engineering by Design, 2nd ed.,
Prentice Hall, 2003.
[3*] S. McConnell, Code Complete, 2nd ed.,
Microsoft Press, 2004.
[4*] J.G. Brookshear, Computer Science: An
Overview, 10th ed., Addison-Wesley, 2008.
[5*] E. Horowitz et al., Computer Algorithms,
2nd ed., Silicon Press, 2007.
[6*] I. Sommerville, Software Engineering, 9th
ed., Addison-Wesley, 2011.
CHAPTER 14
MATHEMATICAL FOUNDATIONS
INTRODUCTION
Software professionals live with programs. In very simple language, one can program only for something that follows a well-understood, unambiguous logic. The Mathematical Foundations
knowledge area (KA) helps software engineers
comprehend this logic, which in turn is translated
into programming language code. The mathematics that is the primary focus in this KA is quite
different from typical arithmetic, where numbers
are dealt with and discussed. Logic and reasoning are the essence of mathematics that a software
engineer must address.
Mathematics, in a sense, is the study of formal
systems. The word "formal" is associated with preciseness, so there cannot be any ambiguous or erroneous interpretation of a fact. Mathematics is therefore the study of any and all certain
truths about any concept. This concept can be
about numbers as well as about symbols, images,
sounds, video: almost anything. In short, not only numbers and numeric equations are subject to preciseness. Rather, a software engineer needs to have a precise abstraction of a diverse application domain.
The SWEBOK Guide's Mathematical Foundations KA covers basic techniques to identify a set
of rules for reasoning in the context of the system
under study. Anything that one can deduce following these rules is an absolute certainty within
the context of that system. In this KA, techniques
that can represent and take forward the reasoning
and judgment of a software engineer in a precise
(and therefore mathematical) manner are defined
and discussed. The language and methods of logic
that are discussed here allow us to describe mathematical proofs to infer conclusively the absolute
truth of certain concepts beyond the numbers. In
[1*, c2]
1.1.Set Operations
Intersection. The intersection of two sets X and Y, denoted by X ∩ Y, is the set of common elements in both X and Y.
In other words, X ∩ Y = {p | (p ∈ X) ∧ (p ∈ Y)}.
For example, {1, 2, 3} ∩ {3, 4, 6} = {3}.
If X ∩ Y = ∅, then the two sets X and Y are said to be a disjoint pair of sets.
The shaded portion of the Venn diagram in Figure 14.5 represents the complement set of X.
Set Difference or Relative Complement. The set of elements that belong to set X but not to set Y builds the set difference of Y from X. This is represented by X − Y.
In other words, X − Y = {p | (p ∈ X) ∧ (p ∉ Y)}.
For example, {1, 2, 3} − {3, 4, 6} = {1, 2}.
It may be proved that X − Y = X ∩ Y′.
Set difference X − Y is illustrated by the shaded region in Figure 14.6 using a Venn diagram.
1. Associative Laws:
X ∪ (Y ∪ Z) = (X ∪ Y) ∪ Z
X ∩ (Y ∩ Z) = (X ∩ Y) ∩ Z
2. Commutative Laws:
X ∪ Y = Y ∪ X
X ∩ Y = Y ∩ X
3. Distributive Laws:
X ∪ (Y ∩ Z) = (X ∪ Y) ∩ (X ∪ Z)
X ∩ (Y ∪ Z) = (X ∩ Y) ∪ (X ∩ Z)
4. Identity Laws:
X ∪ ∅ = X
X ∩ U = X
5. Complement Laws:
X ∪ X′ = U
X ∩ X′ = ∅
6. Idempotent Laws:
X ∪ X = X
X ∩ X = X
7. Bound Laws:
X ∪ U = U
X ∩ ∅ = ∅
8. Absorption Laws:
X ∪ (X ∩ Y) = X
X ∩ (X ∪ Y) = X
9. De Morgan's Laws:
(X ∪ Y)′ = X′ ∩ Y′
(X ∩ Y)′ = X′ ∪ Y′
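These identities can be spot-checked with Python's built-in set type; the sets X, Y, Z and the universal set U below are hypothetical examples:

```python
U = set(range(10))                 # a small hypothetical universal set
X, Y, Z = {1, 2, 3}, {3, 4, 6}, {2, 6, 9}

def comp(s):
    """Complement with respect to the universal set U."""
    return U - s

assert X & Y == {3}                               # intersection
assert X - Y == {1, 2}                            # set difference
assert X - Y == X & comp(Y)                       # X − Y = X ∩ Y′
assert X | (Y & Z) == (X | Y) & (X | Z)           # distributive law
assert comp(X | Y) == comp(X) & comp(Y)           # De Morgan's law
```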
2.Basic Logic
[1*, c1]
2.1.Propositional Logic
A proposition is a statement that is either true
or false, but not both. Let's consider declarative sentences for which it is meaningful to assign either of the two status values: true or false. Some examples of propositions are given below.
1. The sun is a star.
2. Elephants are mammals.
3. 2 + 3 = 5.
However, a + 3 = b is not a proposition, as it is
neither true nor false. It depends on the values of
the variables a and b.
The Law of Excluded Middle: For every proposition p, either p is true or p is false.
The Law of Contradiction: For every proposition p, it is not the case that p is both true and false.
Propositional logic is the area of logic that
deals with propositions. A truth table displays
the relationships between the truth values of
propositions.
A Boolean variable is one whose value is either
true or false. Computer bit operations correspond
to logical operations of Boolean variables.
The basic logical operators, including negation (¬p), conjunction (p ∧ q), disjunction (p ∨ q), exclusive or (p ⊕ q), and implication (p → q), are to be studied. Compound propositions may be formed using various logical operators.
A compound proposition that is always true is a
tautology. A compound proposition that is always
false is a contradiction. A compound proposition
that is neither a tautology nor a contradiction is a
contingency.
Compound propositions that always have the
same truth value are called logically equivalent
(denoted by ≡). Some of the common equivalences are:
Identity laws:
p ∧ T ≡ p
p ∨ F ≡ p
Domination laws:
p ∨ T ≡ T
p ∧ F ≡ F
Idempotent laws:
p ∨ p ≡ p
p ∧ p ≡ p
Double negation law:
¬(¬p) ≡ p
Commutative laws:
p ∨ q ≡ q ∨ p
p ∧ q ≡ q ∧ p
Associative laws:
(p ∨ q) ∨ r ≡ p ∨ (q ∨ r)
(p ∧ q) ∧ r ≡ p ∧ (q ∧ r)
Distributive laws:
p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r)
p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r)
De Morgan's laws:
¬(p ∧ q) ≡ ¬p ∨ ¬q
¬(p ∨ q) ≡ ¬p ∧ ¬q
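A truth table can be checked mechanically; the sketch below verifies one of De Morgan's laws, ¬(p ∧ q) ≡ ¬p ∨ ¬q, over all four combinations of truth values:

```python
from itertools import product

# Check every row of the two-variable truth table.
equivalent = all(
    (not (p and q)) == ((not p) or (not q))
    for p, q in product([True, False], repeat=2)
)
assert equivalent
```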
2.2.Predicate Logic
A predicate is a verb phrase template that
describes a property of objects or a relationship
among objects represented by the variables. For
example, in the sentence, The flower is red, the
template is red is a predicate. It describes the
property of a flower. The same predicate may be
used in other sentences too.
Predicates are often given a name, e.g., Red
or simply R can be used to represent the predicate is red. Assuming R as the name for the predicate is red, sentences that assert an object is of the
color red can be represented as R(x), where x represents an arbitrary object. R(x) reads as x is red.
Quantifiers allow statements about entire collections of objects rather than having to enumerate the objects by name.
The universal quantifier ∀x asserts that a sentence is true for all values of variable x. For example, ∀x (Tiger(x) → Mammal(x)) means all tigers are mammals.
The existential quantifier ∃x asserts that a sentence is true for at least one value of variable x. For example, ∃x (Tiger(x) ∧ Man-eater(x)) means there exists at least one tiger that is a man-eater.
Thus, while universal quantification naturally uses implication, existential quantification naturally uses conjunction.
[1*, c1]
positive integer 1. In the second phase, it is established that if the proposition holds for an arbitrary
positive integer k, then it must also hold for the
next greater integer, k + 1. In other words, proof
by induction is based on the rule of inference that tells us that the truth of an infinite sequence of propositions P(n), n ∈ [1 … ∞), is established if P(1) is true and if, for every positive integer k, P(k) → P(k + 1).
It may be noted here that, for a proof by mathematical induction, it is not assumed that P(k) is
true for all positive integers k. Proving a theorem or proposition only requires us to establish
that if it is assumed P(k) is true for any arbitrary
positive integer k, then P(k + 1) is also true. The
correctness of mathematical induction as a valid proof technique is beyond the scope of the current text. Let us prove the following proposition
using induction.
Proposition: The sum of the first n positive odd integers, P(n), is n².
Basis Step: The proposition is true for n = 1, as P(1) = 1² = 1. The basis step is complete.
Inductive Step: The induction hypothesis (IH)
is that the proposition is true for n = k, k being an
arbitrary positive integer k.
1 + 3 + 5 + … + (2k − 1) = k²
Now, it is to be shown that P(k) → P(k + 1).
P(k + 1) = 1 + 3 + 5 + … + (2k − 1) + (2k + 1)
= P(k) + (2k + 1)
= k² + (2k + 1) [using IH]
= k² + 2k + 1
= (k + 1)²
Thus, it is shown that if the proposition is true
for n = k, then it is also true for n = k + 1.
The basis step together with the inductive step of
the proof show that P(1) is true and the conditional
statement P(k) P(k + 1) is true for all positive
integers k. Hence, the proposition is proved.
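The proposition can also be confirmed numerically for small n (an illustrative check, not a proof):

```python
# The sum of the first n positive odd integers equals n².
for n in range(1, 51):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2
```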
4.Basics of Counting
[1*, c6]
5.1.Graphs
A graph G = (V, E) consists of a set V of vertices (nodes) and a set E of edges. Edges are also referred to as arcs or links.
The degree of a vertex v, deg(v), is its number of incident edges, except that any self-loops are counted twice.
For example, Figure 14.14 illustrates the adjacency lists for the pseudograph in Figure 14.10
and the directed graph in Figure 14.11. As the
out-degree of vertex C in Figure 14.11 is zero,
there is no entry against C in the adjacency list.
Different representations for a graph, such as the adjacency matrix, incidence matrix, and adjacency lists, need to be studied.
5.2.Trees
A tree T(N, E) is a hierarchical data structure of n = |N| nodes with a specially designated root node R, while the remaining n − 1 nodes form subtrees under the root node R. The number of edges |E| in a tree is always equal to |N| − 1.
The subtree at node X is the subgraph of the
tree consisting of node X and its descendants and
all edges incident to those descendants. As an
alternate to this recursive definition, a tree may
be defined as a connected undirected graph with
no simple circuits.
[1*, c7]
For a fair die, P(x) = 1/6 for each face x = 1, …, 6.
These numbers indeed aim to derive the average value from repeated experiments. This is
based on the single most important phenomenon of probability, i.e., the average value from
repeated experiments is likely to be close to the
expected value of one experiment. Moreover,
the average value is more likely to be closer to
the expected value of any one experiment as the
number of experiments increases.
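The expected value of one roll of a fair die can be computed directly from the distribution above, as Σ x·P(x):

```python
from fractions import Fraction

# E[X] = Σ x·P(x) with P(x) = 1/6 for each face x = 1..6.
expected = sum(Fraction(x, 6) for x in range(1, 7))
assert expected == Fraction(7, 2)   # i.e., 3.5
```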
7.Finite State Machines
[1*, c13]
A computer system may be abstracted as a mapping from state to state driven by inputs. In other
words, a system may be considered as a transition function T: S × I → S × O, where S is the set of states and I and O are the sets of input and output symbols.
If the state set S is finite (not infinite), the system is called a finite state machine (FSM).
Alternately, a finite state machine (FSM) is a
mathematical abstraction composed of a finite
number of states and transitions between those
states. If the domain S × I is reasonably small,
then one can specify T explicitly using diagrams
similar to a flow graph to illustrate the way logic
flows for different inputs. However, this is practical only for machines that have a very small
information capacity.
An FSM has a finite internal memory, an input
feature that reads symbols in a sequence and one
at a time, and an output feature.
The operation of an FSM begins from a start
state, goes through transitions depending on input
to different states, and can end in any valid state.
However, only a few of all the states mark a successful flow of operation. These are called accept
states.
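The operation of an FSM can be sketched as a transition table plus a driver loop; the machine below is hypothetical (not the FSM of Figure 14.21) and accepts binary strings containing an even number of 1s:

```python
# States track the parity of 1s seen so far.
TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd",  ("odd", "1"): "even"}
START, ACCEPT = "even", {"even"}

def accepts(symbols):
    state = START
    for s in symbols:                   # read one symbol at a time
        state = TRANSITIONS[(state, s)]
    return state in ACCEPT              # success iff we end in an accept state

assert accepts("1001")      # two 1s: accepted
assert not accepts("10")    # one 1: rejected
```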
The information capacity of an FSM is C = log2 |S|. Thus, if we represent a machine having an information capacity of C bits as an FSM, then its state transition graph will have |S| = 2^C nodes.
A finite state machine is formally defined as M = (S, I, O, f, g, s0), where
S is the state set;
I is the set of input symbols;
O is the set of output symbols;
f is the state transition function;
g is the output function; and
s0 is the initial state.
The state transition and output values for different inputs on different states may be represented
using a state table. The state table for the FSM in
Figure 14.21 is shown in Figure 14.22. Each pair
against an input symbol represents the new state
and the output symbol.
For example, Figures 14.22(a) and 14.22(b) are
two alternate representations of the FSM in Figure 14.21.
[Figure 14.22, parts (a) and (b), omitted: two state tables for the FSM of Figure 14.21, giving, for each current state (S0, S1, S2) and each input symbol (0, 1), the next state and the output symbol.]
Figure 14.22. Tabular Representation of an FSM
8.Grammars
[1*, c13]
[1*, c4]
11.1.Group
A set S closed under a binary operation ∘ forms a group if the binary operation satisfies the following four criteria:
Associative: ∀a, b, c ∈ S, the equation (a ∘ b) ∘ c = a ∘ (b ∘ c) holds.
Identity: there exists an identity element I ∈ S such that for all a ∈ S, I ∘ a = a ∘ I = a.
Inverse: every element a ∈ S has an inverse a′ ∈ S with respect to the binary operation, i.e., a ∘ a′ = I. For example, the set of integers Z with respect to the addition operation is a group. The identity element of the set is 0 for the addition operation. ∀x ∈ Z, the inverse of x would be −x, which is also included in Z.
Closure property: ∀a, b ∈ S, the result of the operation a ∘ b ∈ S.
A group that is commutative, i.e., a ∘ b = b ∘ a, is known as a commutative or Abelian group.
The set of natural numbers N (with the operation of addition) is not a group, since there is no
inverse for any x > 0 in the set of natural numbers.
Thus, the third rule (of inverse) for our operation
is violated. However, the set of natural number
has some structure.
Sets with an associative operation (the first
condition above) are called semigroups; if they
also have an identity element (the second condition), then they are called monoids.
Our set of natural numbers under addition is
then an example of a monoid, a structure that
is not quite a group because it is missing the
requirement that every element have an inverse
under the operation.
A monoid is a set S that is closed under a single associative binary operation ∘ and has an identity element I ∈ S such that for all a ∈ S, I ∘ a = a ∘ I = a. A monoid must contain at least one element.
For example, the set of natural numbers N forms a commutative monoid under addition with identity element 0. The same set of natural numbers N also forms a monoid under multiplication with identity element 1. The set of positive integers P forms a commutative monoid under multiplication with identity element 1.
It may be noted that, unlike those in a group, elements of a monoid need not have inverses.
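The axioms above lend themselves to exhaustive checking on small finite sets. The following Python sketch (illustrative only; the function name and test sets are our own) classifies a finite set with a binary operation according to the criteria just listed:

```python
from itertools import product

def check_axioms(elements, op):
    """Classify (elements, op) as a group, monoid, semigroup, or
    neither by testing the four criteria exhaustively.
    Practical only for small finite sets."""
    elems = list(elements)
    # Closure: for all a, b in S, the result a∘b is again in S.
    if not all(op(a, b) in elems for a, b in product(elems, repeat=2)):
        return "not closed"
    # Associativity: (a∘b)∘c == a∘(b∘c) for all a, b, c.
    if not all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elems, repeat=3)):
        return "not associative"
    # Identity: some I with I∘a == a∘I == a for every a.
    identity = next((i for i in elems
                     if all(op(i, a) == a == op(a, i) for a in elems)), None)
    if identity is None:
        return "semigroup"
    # Inverse: every a has some a' with a∘a' == I.
    if all(any(op(a, b) == identity for b in elems) for a in elems):
        return "group"
    return "monoid"

# Z_4 under addition mod 4 satisfies all four criteria.
print(check_axioms(range(4), lambda a, b: (a + b) % 4))   # -> group
# {0, 1} under multiplication has identity 1 but no inverse for 0.
print(check_axioms([0, 1], lambda a, b: a * b))           # -> monoid
```

The infinite cases in the text (N, Z) cannot be checked this way, but the finite examples exercise exactly the same definitions.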
11.2. Rings
If we take an Abelian group and define a second
operation on it, a new structure is found that is
different from just a group. If this second operation is associative and is distributive over the
first, then we have a ring.
A ring is a triple of the form (S, +, ∗), where (S, +) is an Abelian group, (S, ∗) is a semigroup, and ∗ is distributive over +; i.e., ∀a, b, c ∈ S, the equation a ∗ (b + c) = (a ∗ b) + (a ∗ c) holds. Further, if ∗ is commutative, then the ring is said to be commutative. If there is an identity element for the ∗ operation, then the ring is said to have an identity.
For example, (Z, +, ∗), i.e., the set of integers Z with the usual addition and multiplication operations, is a ring. As (Z, ∗) is commutative, this ring is a commutative or Abelian ring. The ring has 1 as its identity element.
Let us note that the second operation may not have an identity element, nor do we need to find an inverse for every element with respect to this second operation. As for what distributive means, intuitively it is what we do in elementary mathematics when performing the following change: a ∗ (b + c) = (a ∗ b) + (a ∗ c).
A field is a ring for which the elements of the set, excluding 0, form an Abelian group with the second operation.
A simple example of a field is the field of rational numbers (R, +, ∗) with the usual addition and multiplication operations. It consists of numbers of the form a/b ∈ R, where a, b are integers and b ≠ 0. The additive inverse of such a fraction is simply −a/b, and the multiplicative inverse is b/a, provided that a ≠ 0.
ACKNOWLEDGMENTS
The author thankfully acknowledges the contribution of Prof. Arun Kumar Chatterjee, Ex-Head,
Department of Mathematics, Manipur University, India, and Prof. Devadatta Sinha, Ex-Head,
Department of Computer Science and Engineering, University of Calcutta, India, in preparing
this chapter on Mathematical Foundations.
CHAPTER 15
ENGINEERING FOUNDATIONS
ACRONYMS
CAD	Computer-Aided Design
CMMI	Capability Maturity Model Integration
pdf	Probability Density Function
pmf	Probability Mass Function
RCA	Root Cause Analysis
SDLC	Software Development Life Cycle
INTRODUCTION
IEEE defines engineering as "the application of a systematic, disciplined, quantifiable approach to structures, machines, products, systems or processes" [1]. This chapter outlines some of the
engineering foundational skills and techniques
that are useful for a software engineer. The focus
is on topics that support other KAs while minimizing duplication of subjects covered elsewhere
in this document.
As the theory and practice of software engineering matures, it is increasingly apparent that
software engineering is an engineering discipline that is based on knowledge and skills common to all engineering disciplines. This Engineering Foundations knowledge area (KA) is
concerned with the engineering foundations that
apply to software engineering and other engineering disciplines. Topics in this KA include
empirical methods and experimental techniques;
statistical analysis; measurement; engineering
design; modeling, prototyping, and simulation;
standards; and root cause analysis. Application
of this knowledge, as appropriate, will allow
software engineers to develop and maintain
software more efficiently and effectively. Completing their engineering work efficiently and
2. Statistical Analysis
In order to carry out their responsibilities, engineers must understand how different product
and process characteristics vary. Engineers often
come across situations where the relationship
between different variables needs to be studied.
An important point to note is that most of the
studies are carried out on the basis of samples
and so the observed results need to be understood
with respect to the full population. Engineers
must, therefore, develop an adequate understanding of statistical techniques for collecting reliable
data in terms of sampling and analysis to arrive at
results that can be generalized. These techniques
are discussed below.
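The idea of sampling from a population can be illustrated with a short Python sketch (the population of module defect counts is entirely synthetic, used only to show the mechanics):

```python
import random
import statistics

random.seed(7)

# Hypothetical population: defect counts for 10,000 modules
# (the sampling units).
population = [random.randint(0, 20) for _ in range(10_000)]

# A simple random sample of 100 units drawn from the population.
sample = random.sample(population, 100)

pop_mean = statistics.mean(population)
sample_mean = statistics.mean(sample)
print(f"population mean = {pop_mean:.2f}")
print(f"sample mean     = {sample_mean:.2f}")
# The sample mean estimates the population mean; a different sample
# would give a somewhat different estimate, which is why results
# from samples must be generalized with care.
```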
2.1. Unit of Analysis (Sampling Units), Population, and Sample
Unit of analysis. While carrying out any empirical study, observations need to be made on chosen units called the units of analysis or sampling
units. The unit of analysis must be identified and
must be appropriate for the analysis. For example, when a software product company wants to
find the perceived usability of a software product,
the user or the software function may be the unit
of analysis.
Population. The set of all respondents or items
(possible sampling units) to be studied forms the
population. As an example, consider the case of
studying the perceived usability of a software
product. In this case, the set of all possible users
forms the population.
While defining the population, care must be
exercised to understand the study and target
population. There are cases when the population studied and the population for which the
or pmf is known, the chances of the random variable taking a certain set of values may be computed theoretically.
Concept of estimation [2*, c6s2, c7s1, c7s3].
The true values of the parameters of a distribution
are usually unknown and need to be estimated
from the sample observations. The estimates are
functions of the sample values and are called statistics. For example, the sample mean is a statistic
and may be used to estimate the population mean.
Similarly, the rate of occurrence of defects estimated from the sample (rate of defects per line of code) is a statistic and serves as the estimate of the population rate of defects per line of code. The statistic used to estimate some population parameter is often referred to as the estimator of the parameter.
A very important point to note is that the results
of the estimators themselves are random. If we
take a different sample, we are likely to get a different estimate of the population parameter. In the
theory of estimation, we need to understand different properties of estimators, particularly how
much the estimates can vary across samples and
how to choose between different alternative ways
to obtain the estimates. For example, if we wish
to estimate the mean of a population, we might
use as our estimator a sample mean, a sample
median, a sample mode, or the midrange of the
sample. Each of these estimators has different
statistical properties that may impact the standard
error of the estimate.
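The point that different estimators of the same parameter vary differently across samples can be demonstrated by repeated sampling. A Python sketch (the normal population is synthetic; the mean/median/midrange comparison follows the example in the text):

```python
import random
import statistics

random.seed(42)
# Synthetic population with known mean 50 and standard deviation 10.
population = [random.gauss(50, 10) for _ in range(100_000)]

def midrange(xs):
    return (min(xs) + max(xs)) / 2

# Draw many samples of size 30 and record each estimator's value,
# approximating its sampling distribution.
estimates = {"mean": [], "median": [], "midrange": []}
for _ in range(500):
    s = random.sample(population, 30)
    estimates["mean"].append(statistics.mean(s))
    estimates["median"].append(statistics.median(s))
    estimates["midrange"].append(midrange(s))

for name, vals in estimates.items():
    print(f"{name:8s} stdev across samples = {statistics.stdev(vals):.2f}")
# For a normal population the sample mean varies least across
# samples, which is why it is the usual estimator of the mean.
```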
Types of estimates [2*, c7s3, c8s1]. There are
two types of estimates: namely, point estimates
and interval estimates. When we use the value
of a statistic to estimate a population parameter,
we get a point estimate. As the name indicates, a
point estimate gives a point value of the parameter being estimated.
Although point estimates are often used, they
leave room for many questions. For instance, we
are not told anything about the possible size of
error or statistical properties of the point estimate. Thus, we might need to supplement a point
estimate with the sample size as well as the variance of the estimate. Alternately, we might use
an interval estimate. An interval estimate is a
random interval with the lower and upper limits of the interval being functions of the sample
observations as well as the sample size. The limits are computed on the basis of some assumptions regarding the sampling distribution of the
point estimate on which the limits are based.
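A minimal sketch of an interval estimate, using the common normal-approximation 95% confidence interval for a mean (the defect-density data is hypothetical, and for samples this small a t-based interval would normally be preferred):

```python
import statistics
from math import sqrt

def interval_estimate(sample, z=1.96):
    """Approximate 95% interval for the population mean: the point
    estimate plus or minus z times the estimated standard error."""
    point = statistics.mean(sample)
    se = statistics.stdev(sample) / sqrt(len(sample))
    return point - z * se, point + z * se

# Hypothetical defect densities (defects/KLOC) from ten components.
defects_per_kloc = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 5.2, 4.0, 4.7, 4.3]
low, high = interval_estimate(defects_per_kloc)
print(f"point estimate = {statistics.mean(defects_per_kloc):.2f}")
print(f"95% interval   = ({low:.2f}, {high:.2f})")
```

Note how the limits depend on the sample observations and the sample size, as described above.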
Properties of estimators. Various statistical
properties of estimators are used to decide about
the appropriateness of an estimator in a given
situation. The most important properties are that
an estimator is unbiased, efficient, and consistent
with respect to the population.
Tests of hypotheses [2*, c9s1]. A hypothesis is a statement about the possible values of a parameter. For example, suppose it is claimed that a new method of software development reduces the occurrence of defects. In this case, the hypothesis is that the rate of occurrence of defects has reduced. In tests of hypotheses, we decide, on the basis of sample observations, whether a proposed hypothesis should be accepted or rejected.
For testing hypotheses, the null and alternative
hypotheses are formed. The null hypothesis is the
hypothesis of no change and is denoted as H0. The
alternative hypothesis is written as H1. It is important to note that the alternative hypothesis may be
one-sided or two-sided. For example, if we have
the null hypothesis that the population mean is not
less than some given value, the alternative hypothesis would be that it is less than that value and we
would have a one-sided test. However, if we have
the null hypothesis that the population mean is
equal to some given value, the alternative hypothesis would be that it is not equal and we would
have a two-sided test (because the true value could
be either less than or greater than the given value).
In order to test some hypothesis, we first compute some statistic. Along with the computation
of the statistic, a region is defined such that in
case the computed value of the statistic falls in
that region, the null hypothesis is rejected. This
region is called the critical region (also known as the rejection region). In tests of hypotheses,
we need to accept or reject the null hypothesis
on the basis of the evidence obtained. We note
that, in general, the alternative hypothesis is the
hypothesis of interest. If the computed value of
the statistic does not fall inside the critical region,
then we cannot reject the null hypothesis. This
indicates that there is not enough evidence to
believe that the alternative hypothesis is true.
Statistical Decision   H0 True                          H0 False
Accept H0              OK                               Type II error (probability = β)
Reject H0              Type I error (probability = α)   OK
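The defect-reduction example can be sketched as a one-sided test of H0 (the mean is not less than the historical value) against H1 (the mean has reduced). This uses a normal approximation, and both the sample and the historical mean of 5.0 defects/KLOC are hypothetical:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def one_sided_z_test(sample, mu0, alpha=0.05):
    """Test H0: population mean >= mu0 against H1: mean < mu0,
    using a normal approximation (small samples would normally
    use a t-test instead)."""
    z = (mean(sample) - mu0) / (stdev(sample) / sqrt(len(sample)))
    p = NormalDist().cdf(z)   # lower-tail p-value under H0
    return z, p, p < alpha    # True in the last slot -> reject H0

# Hypothetical defect rates observed after adopting the new method.
after = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 3.7, 4.0, 4.7, 4.3]
z, p, reject = one_sided_z_test(after, mu0=5.0)
print(f"z = {z:.2f}, p = {p:.4f}, reject H0: {reject}")
```

Rejecting H0 here corresponds to the computed statistic falling in the critical region for the chosen significance level α.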
measured in interval scale, as it is not necessary to define what zero intelligence would
mean.
If a variable is measured in interval scale, most
of the usual statistical analyses like mean, standard deviation, correlation, and regression may
be carried out on the measured values.
Ratio scale: These are quite commonly encountered in physical science. These scales of measure are characterized by the fact that operations exist for determining all four relations: equality, rank order, equality of intervals, and equality of ratios. Once such a scale is available, its numerical values can be transformed from one unit to another by just multiplying by a constant, e.g., conversion of inches to feet or centimeters. When measurements are being made in ratio scale, existence of a nonarbitrary zero is mandatory. All statistical measures are applicable to ratio scale; logarithm usage is valid only when these scales are used, as in the case of decibels. Some examples of ratio measures are
• the number of statements in a software program;
• temperature measured on the Kelvin (K) scale, which has a nonarbitrary zero (unlike the Fahrenheit (F) scale, which is only an interval scale).
An additional measurement scale, the absolute
scale, is a ratio scale with uniqueness of the measure; i.e., a measure for which no transformation
is possible (for example, the number of programmers working on a project).
3.2. Direct and Derived Measures
[6*, c7s5]
Measures may be either direct or derived (sometimes called indirect measures). An example of
a direct measure would be a count of how many
times an event occurred, such as the number of
defects found in a software product. A derived
measure is one that combines direct measures in
some way that is consistent with the measurement
method. An example of a derived measure would
be calculating the productivity of a team as the
number of lines of code developed per developer-month. In both cases, the measurement method
determines how to make the measurement.
3.3. Reliability and Validity

A basic question to be asked for any measurement method is whether the proposed measurement method is truly measuring the concept with
good quality. Reliability and validity are the two
most important criteria to address this question.
The reliability of a measurement method is
the extent to which the application of the measurement method yields consistent measurement
results. Essentially, reliability refers to the consistency of the values obtained when the same item
is measured a number of times. When the results
agree with each other, the measurement method
is said to be reliable. Reliability usually depends
on the operational definition. It can be quantified
by using the index of variation, which is computed as the ratio between the standard deviation
and the mean. The smaller the index, the more
reliable the measurement results.
Validity refers to whether the measurement
method really measures what we intend to measure. Validity of a measurement method may
be looked at from three different perspectives:
namely, construct validity, criteria validity, and
content validity.
3.4. Assessing Reliability
[4*, c3s5]
There are several methods for assessing reliability; these include the test-retest method, the
alternative form method, the split-halves method,
and the internal consistency method. The easiest of these is the test-retest method. In the test-retest method, we simply apply the measurement
method to the same subjects twice. The correlation coefficient between the first and second set
of measurement results gives the reliability of the
measurement method.
4. Engineering Design
refine the design or drive the selection of an alternative design solution. One of the most important activities in design is documentation of the
design solution as well as of the tradeoffs for the
choices made in the design of the solution. This
work should be carried out in a manner such that
the solution to the design problem can be communicated clearly to others.
The testing and verification take us back to the
success criteria. The engineer needs to devise
tests such that the ability of the design to meet the
success criteria is demonstrated. While designing the tests, the engineer must think through
different possible failure modes and then design
tests based on those failure modes. The engineer
may choose to carry out designed experiments to
assess the validity of the design.
5. Modeling, Simulation, and Prototyping
[5*, c6] [11*, c13s3] [12*, c2s3.1]
Modeling is part of the abstraction process used
to represent some aspects of a system. Simulation uses a model of the system and provides a
means of conducting designed experiments with
that model to better understand the system, its
behavior, and relationships between subsystems,
as well as to analyze aspects of the design. Modeling and simulation are techniques that can be
used to construct theories or hypotheses about the
behavior of the system; engineers then use those
theories to make predictions about the system.
Prototyping is another abstraction process where
a partial representation (that captures aspects of
interest) of the product or system is built. A prototype may be an initial version of the system but
lacks the full functionality of the final version.
5.1. Modeling
A model is always an abstraction of some real
or imagined artifact. Engineers use models in
many ways as part of their problem solving
activities. Some models are physical, such as a
made-to-scale miniature construction of a bridge
or building. Other models may be nonphysical
representations, such as a CAD drawing of a cog
or a mathematical model for a process. Models
help engineers reason and understand aspects of
a problem. They can also help engineers understand what they do know and what they don't know about the problem at hand.
There are three types of models: iconic, analogic, and symbolic. An iconic model is a visually equivalent but incomplete 2-dimensional
or 3-dimensional representation; for example,
maps, globes, or built-to-scale models of structures such as bridges or highways. An iconic
model actually resembles the artifact modeled.
In contrast, an analogic model is a functionally
equivalent but incomplete representation. That
is, the model behaves like the physical artifact
even though it may not physically resemble it.
Examples of analogic models include a miniature
airplane for wind tunnel testing or a computer
simulation of a manufacturing process.
Finally, a symbolic model is a higher level of
abstraction, where the model is represented using
symbols such as equations. The model captures
the relevant aspects of the process or system in
symbolic form. The symbols can then be used to
increase the engineer's understanding of the final system. An example is an equation such as F = ma. Such mathematical models can be used to
describe and predict properties or behavior of the
final system or product.
5.2. Simulation
All simulation models are a specification of reality. A central issue in simulation is to abstract
and specify an appropriate simplification of
reality. Developing this abstraction is of vital
importance, as misspecification of the abstraction would invalidate the results of the simulation
exercise. Simulation can be used for a variety of
testing purposes.
Simulation is classified based on the type of
system under study. Thus, simulation can be either
continuous or discrete. In the context of software
engineering, the emphasis will be primarily on
discrete simulation. Discrete simulations may
model event scheduling or process interaction.
The main components in such a model include
entities, activities and events, resources, the state
of the system, a simulation clock, and a random
number generator. Output is generated by the
simulation and must be analyzed.
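A minimal event-scheduling sketch showing the components listed above: entities (customers), events (arrival and departure), a resource (a single server), a simulation clock, and a random number generator. The queueing setup and the rates are illustrative assumptions, not from the text:

```python
import heapq
import random

def simulate_queue(arrival_rate, service_rate, n_customers, seed=1):
    """Discrete-event simulation of a single-server queue with
    exponential interarrival and service times."""
    rng = random.Random(seed)          # random number generator
    events = []                        # future-event list (a heap)
    t = 0.0
    for _ in range(n_customers):       # schedule all arrival events
        t += rng.expovariate(arrival_rate)
        heapq.heappush(events, (t, "arrival"))
    clock = busy_until = 0.0           # simulation clock, server state
    waits = []
    while events:
        clock, kind = heapq.heappop(events)   # advance the clock
        if kind == "arrival":
            start = max(clock, busy_until)    # wait if server is busy
            waits.append(start - clock)
            busy_until = start + rng.expovariate(service_rate)
            heapq.heappush(events, (busy_until, "departure"))
        # departure events need no further action in this sketch
    return sum(waits) / len(waits)     # output to be analyzed

print(f"mean wait = {simulate_queue(0.8, 1.0, 10_000):.2f}")
```

The returned mean wait is the simulation output; as the text notes, such output must then be analyzed statistically, since a different random seed gives a different run.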
6. Standards
regional and governmentally recognized organizations that generate standards for that region or
country. For example, in the United States, there
are over 300 organizations that develop standards. These include organizations such as the
American National Standards Institute (ANSI),
the American Society for Testing and Materials
(ASTM), the Society of Automotive Engineers
(SAE), and Underwriters Laboratories, Inc. (UL),
as well as the US government. For more detail
on standards used in software engineering, see
Appendix B on standards.
There is a set of commonly used principles
behind standards. Standards makers attempt to
have consensus around their decisions. There is
usually an openness within the community of
interest so that once a standard has been set, there
is a good chance that it will be widely accepted.
Most standards organizations have well-defined
processes for their efforts and adhere to those
processes carefully. Engineers must be aware of
the existing standards but must also update their
understanding of the standards as those standards
change over time.
In many engineering endeavors, knowing and
understanding the applicable standards is critical
and the law may even require use of particular
standards. In these cases, the standards often represent minimal requirements that must be met by
the endeavor and thus are an element in the constraints imposed on any design effort. The engineer must review all current standards related to
a given endeavor and determine which must be
met. Their designs must then incorporate any and
all constraints imposed by the applicable standard. Standards important to software engineers
are discussed in more detail in an appendix specifically on this subject.
7. Root Cause Analysis
[4*, c5, c3s7, c9s8] [5*, c9s3, c9s4, c9s5]
[13*, c13s3.4.5]
Root cause analysis (RCA) is a process designed
to investigate and identify why and how an
undesirable event has happened. Root causes
are underlying causes. The investigator should
attempt to identify specific underlying causes of
the event that has occurred. The primary objective
of RCA is to prevent recurrence of the undesirable event. Thus, the more specific the investigator can be about why an event occurred, the easier
it will be to prevent recurrence. A common way to identify specific underlying cause(s) is to ask a series of "why" questions.
7.1. Techniques for Conducting Root Cause Analysis
[4*, c5] [5*, c3]
There are many approaches used for both quality
control and root cause analysis. The first step in
any root cause analysis effort is to identify the real
problem. Techniques such as statement-restatement, why-why diagrams, the revision method,
present state and desired state diagrams, and the
fresh-eye approach are used to identify and refine
the real problem that needs to be addressed.
Once the real problem has been identified, then
work can begin to determine the cause of the
problem. Ishikawa is known for the seven tools
for quality control that he promoted. Some of
those tools are helpful in identifying the causes
for a given problem. Those tools are check sheets
or checklists, Pareto diagrams, histograms, run
charts, scatter diagrams, control charts, and
fishbone or cause-and-effect diagrams. More
recently, other approaches for quality improvement and root cause analysis have emerged. Some
examples of these newer methods are affinity diagrams, relations diagrams, tree diagrams, matrix
charts, matrix data analysis charts, process decision program charts, and arrow diagrams. A few
of these techniques are briefly described below.
A fishbone or cause-and-effect diagram is a
way to visualize the various factors that affect
some characteristic. The main line in the diagram
represents the problem and the connecting lines
represent the factors that led to or influenced the
problem. Those factors are broken down into subfactors and sub-subfactors until root causes can
be identified.
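A fishbone diagram's factor/subfactor structure is naturally a tree, and drilling down through it mirrors the repeated "why" questions described above. A sketch with hypothetical factors for a build-failure investigation:

```python
def print_fishbone(node, depth=0):
    """Print a cause-and-effect tree; each indent level answers
    one more 'why' question about the level above it."""
    for cause, subcauses in node.items():
        label = "Problem: " if depth == 0 else "why: "
        print("  " * depth + label + cause)
        print_fishbone(subcauses, depth + 1)

# Hypothetical factors and subfactors for one undesirable event.
fishbone = {
    "Nightly build failed": {
        "Flaky integration test": {
            "Test depends on wall-clock time": {},
        },
        "Missing dependency": {
            "Lock file not updated": {
                "No CI check on lock file": {},
            },
        },
    },
}
print_fishbone(fishbone)
```

The leaves of the tree ("No CI check on lock file") are the candidate root causes: the specific underlying causes whose removal prevents recurrence.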
[Matrix of Topics vs. Reference Material here: maps the KA topics and subtopics (1. Empirical Methods and Experimental Techniques; 2. Statistical Analysis; 3. Measurement; 4. Engineering Design; 5. Modeling, Prototyping, and Simulation; 6. Standards; 7. Root Cause Analysis) to chapter and section citations in Kan 2002 [4*], Voland 2003 [5*], Fairley 2009 [6*], Tockey 2004 [7*], McConnell 2004 [10*], Sommerville 2011 [12*], and Moore 2006 [13*].]
FURTHER READINGS
A. Abran, Software Metrics and Software
Metrology. [14]
REFERENCES
[1] ISO/IEC/IEEE 24765:2010 Systems and Software Engineering–Vocabulary, ISO/IEC/IEEE, 2010.
[2*] D.C. Montgomery and G.C. Runger,
Applied Statistics and Probability for
Engineers, 4th ed., Wiley, 2007.
[3*] L. Null and J. Lobur, The Essentials of
Computer Organization and Architecture,
2nd ed., Jones and Bartlett Publishers,
2006.
[4*] S.H. Kan, Metrics and Models in Software Quality Engineering, 2nd ed., Addison-Wesley, 2002.
[5*] G. Voland, Engineering by Design, 2nd ed.,
Prentice Hall, 2003.
[6*] R.E. Fairley, Managing and Leading
Software Projects, Wiley-IEEE Computer
Society Press, 2009.
[7*] S. Tockey, Return on Software: Maximizing
the Return on Your Software Investment,
Addison-Wesley, 2004.
[8] Canadian Engineering Accreditation Board, Engineers Canada, Accreditation Criteria and Procedures, Canadian Council of Professional Engineers, 2011; www.engineerscanada.ca/files/w_Accreditation_Criteria_Procedures_2011.pdf.
APPENDIX A
KNOWLEDGE AREA DESCRIPTION
SPECIFICATIONS
INTRODUCTION
This document presents the specifications provided to the Knowledge Area Editors (KA Editors) regarding the Knowledge Area Descriptions
(KA Descriptions) of the Version 3 (V3) edition
of the Guide to the Software Engineering Body
of Knowledge (SWEBOK Guide). This document
will also enable readers, reviewers, and users to
clearly understand what specifications were used
when developing this version of the SWEBOK
Guide.
This document begins by situating the SWEBOK Guide as a foundational document for the
IEEE Computer Society suite of software engineering products and more widely within the
software engineering community at large. The
role of the baseline and the Change Control
Board is then described. Criteria and requirements are defined for the breakdowns of topics,
for the rationale underlying these breakdowns
and the succinct description of topics, and for reference materials. Important input documents are
also identified, and their role within the project is
explained. Noncontent issues such as submission
format and style guidelines are also discussed.
THE SWEBOK GUIDE IS A
FOUNDATIONAL DOCUMENT FOR THE
IEEE COMPUTER SOCIETY SUITE OF
SOFTWARE ENGINEERING PRODUCTS
The SWEBOK Guide is an IEEE Computer Society flagship and structural document for the IEEE
Computer Society suite of software engineering products. The SWEBOK Guide is also more
widely recognized as a foundational document
within the software engineering community at
[Table here: categories of knowledge]
Generally Recognized: established traditional practices recommended by many organizations
Specialized: practices used only for certain types of software
Advanced and Research: innovative practices tested and used only by some organizations, and concepts still being developed and tested in research organizations
LENGTH OF KA DESCRIPTION
KA Descriptions are to be roughly 10 to 20 pages
using the formatting template for papers published in conference proceedings of the IEEE
Computer Society. This includes text, references,
appendices, tables, etc. This, of course, does not
include the reference materials themselves.
IMPORTANT RELATED DOCUMENTS
1. Graduate Software Engineering 2009
(GSwE2009): Curriculum Guidelines for
Graduate Degree Programs in Software
Engineering, 2009; www.gswe2009.org. [2]
This document provides guidelines and recommendations for defining the curricula of a
professional masters level program in software
engineering. The SWEBOK Guide is identified
as a primary reference in developing the body
of knowledge underlying these guidelines. This
document has been officially endorsed by the
IEEE Computer Society and sponsored by the
Association for Computing Machinery.
2. IEEE Std. 12207-2008 (a.k.a. ISO/IEC 12207:2008) Standard for Systems and Software Engineering–Software Life Cycle Processes, IEEE, 2008 [3].
This standard is considered the key standard
regarding the definition of life cycle processes and
has been adopted by the two main standardization
bodies in software engineering: ISO/IEC JTC1/
SC7 and the IEEE Computer Society Software
REFERENCES
[1] Project Management Institute, A Guide to the
Project Management Body of Knowledge
(PMBOK(R) Guide), 5th ed., Project
Management Institute, 2013.
[2] Integrated Software and Systems
Engineering Curriculum (iSSEc) Project,
Graduate Software Engineering 2009
(GSwE2009): Curriculum Guidelines
for Graduate Degree Programs in
Software Engineering, Stevens Institute of
Technology, 2009; www.gswe2009.org.
[3] IEEE Std. 12207-2008 (a.k.a. ISO/IEC 12207:2008) Standard for Systems and Software Engineering–Software Life Cycle Processes, IEEE, 2008.
[4*] J.W. Moore, The Road Map to Software
Engineering: A Standards-Based Guide,
Wiley-IEEE Computer Society Press, 2006.
APPENDIX B
IEEE AND ISO/IEC STANDARDS SUPPORTING
THE SOFTWARE ENGINEERING BODY OF
KNOWLEDGE (SWEBOK)
Some might say that the supply of software engineering standards far exceeds the demand. One
seldom listens to a briefing on the subject without
suffering some apparently obligatory joke that
there are too many of them. However, the existence of standards takes a very large (possibly
infinite) trade space of alternatives and reduces
that space to a smaller set of choices, a huge
advantage for users. Nevertheless, it can still be
difficult to choose from dozens of alternatives, so
supplementary guidance, like this appendix, can
be helpful. A summary list of the standards mentioned in this appendix appears at the end.
To reduce tedium in reading, a few simplifications and abridgements are made in this appendix:
• ISO/IEC JTC 1/SC 7 maintains nearly two hundred standards on the subject. IEEE maintains about fifty. The two organizations are in the tenth year of a systematic program to coordinate and integrate their collections. In general, this article will focus on the standards that are recognized by both organizations, taking this condition as evidence that wide agreement has been obtained. Other standards will be mentioned briefly.
• Standards tend to have long, taxonomical titles. If there were a single standard for building an automobile, the one for your Camry probably would be titled something like "Vehicle, internal combustion, four-wheel, passenger, sedan." Also, modern standards organizations provide their standards from databases. Like any database, these sometimes contain errors, particularly for the titles. So this article will often paraphrase the
7) is the one responsible for software and systems engineering. SC 7, and its working groups,
meets twice a year, attracting delegations representing the national standards bodies of participating nations. Each nation follows its own procedures for determining national positions and
each nation has the responsibility of determining
whether an ISO/IEC standard should be adopted
as a national standard.
SC 7 creates three types of documents:
• International Standards: documents containing requirements that must be satisfied in order to claim conformance.
• Technical Specifications (formerly called Technical Reports, type 1 and type 2): documents published in a preliminary manner while work continues.
• Technical Reports (formerly called Technical Reports, type 3): documents inherently unsuited to be standards, usually because they are descriptive rather than prescriptive.
The key thing to remember is that only the first category counts as a consensus standard. The reader can easily recognize the others by the TS or TR designation prepended to the number of the document.
IEEE SOFTWARE AND SYSTEMS
ENGINEERING STANDARDS
COMMITTEE (S2ESC)
IEEE is the world's largest organization of technical professionals, with about 400,000 members
in more than 160 countries. The publication of
standards is performed by the IEEE Standards
Association (IEEE-SA), but the committees that
draft and sponsor the standards are in the various
IEEE societies; S2ESC is a part of the IEEE Computer Society. IEEE is a global standards maker
because its standards are used in many different countries. Despite its international membership (about 50% non-US), though, the IEEE-SA
routinely submits its standards to the American
National Standards Institute (ANSI) for endorsement as American National Standards. Some
S2ESC standards are developed within S2ESC,
some are developed jointly with SC 7, and some
are adopted after being developed by SC 7.
SOFTWARE REQUIREMENTS
The primary standard for software and systems
requirements engineering is a new one that
replaced several existing IEEE standards. It provides a broad view of requirements engineering
across the entire life cycle.
ISO/IEC/IEEE 29148:2011 Systems and Software Engineering–Life Cycle Processes–Requirements Engineering
Sometimes requirements are described in natural language, but sometimes they are described
in formal or semiformal notations. The objective
of the Unified Modeling Language (UML) is to
provide system architects, software engineers,
and software developers with tools for analysis,
design, and implementation of software-based
systems as well as for modeling business and
similar processes. The two parts of ISO/IEC 19505 define UML revision 2, while ISO/IEC 19501 specifies an earlier version. Both are mentioned here because they are often used to model requirements.
ISO/IEC 19501:2005 Information Technology—Open Distributed Processing—Unified Modeling Language (UML) Version 1.4.2
ISO/IEC 19505:2012 [two parts] Information Technology—Object Management Group Unified Modeling Language (OMG UML)
SOFTWARE DESIGN
The software design KA includes both software architectural design (for determining the relationships among the items of the software) and detailed design (for describing the individual items). ISO/
IEC/IEEE 42010 concerns the description of
architecture for systems and software.
ISO/IEC/IEEE 42010:2011 Systems and Software Engineering—Architecture Description
ISO/IEC/IEEE 42010:2011 addresses the creation, analysis, and sustainment of architectures of systems through the use of architecture
descriptions. A conceptual model of architecture
description is established. The required contents
of an architecture description are specified. Architecture viewpoints, architecture frameworks and
architecture description languages are introduced
for codifying conventions and common practices
of architecture description. The required content
of architecture viewpoints, architecture frameworks and architecture description languages
is specified. Annexes provide the motivation and background for key concepts and terminology, as well as examples of applying ISO/IEC/IEEE 42010:2011.
Like ISO/IEC/IEEE 42010, the next standard treats software design as an abstraction,
independent of its representation in a document.
Accordingly, the standard places provisions on
the description of design, rather than on design
itself.
IEEE Std. 1016-2009 Standard for Information Technology—Systems Design—Software Design Descriptions
SOFTWARE TESTING
Oddly, there are few standards for testing. IEEE
Std. 829 is the most comprehensive.
IEEE Std. 829-2008 Standard for Software and System Test Documentation
Test processes determine whether the development products of a given activity conform to the
requirements of that activity and whether the system and/or software satisfies its intended use and
user needs. Testing process tasks are specified
for different integrity levels. These process tasks
determine the appropriate breadth and depth of
test documentation. The documentation elements
for each type of test documentation can then be
selected. The scope of testing encompasses software-based systems, computer software, hardware, and their interfaces. This standard applies
to software-based systems being developed, maintained, or reused (legacy, commercial off-the-shelf, nondevelopmental items). The term "software" also includes firmware, microcode, and documentation. Test processes can include
inspection, analysis, demonstration, verification,
and validation of software and software-based
system products.
IEEE Std. 1008 focuses on unit testing.
IEEE Std. 1008-1987 Standard for Software Unit
Testing
ISO/IEC 26513 provides the minimum requirements for the testing and reviewing of user documentation, including both printed and onscreen
documents used in the work environment by the
users of systems software. It applies to printed
user manuals, online help, tutorials, and user reference documentation.
SOFTWARE MAINTENANCE
This standard, the result of harmonizing distinct IEEE and ISO/IEC standards on the subject, describes a single comprehensive process for the management and execution of software maintenance. It expands on the provisions of the software maintenance process provided in ISO/IEC/IEEE 12207.
IEEE Std. 14764-2006 (a.k.a. ISO/IEC 14764:2006) Standard for Software Engineering—Software Life Cycle Processes—Maintenance
This standard establishes the minimum requirements for processes for configuration management (CM) in systems and software engineering. It applies to any form, class, or type of software or system. This revision
of the standard expands the previous version to
explain CM, including identifying and acquiring
configuration items, controlling changes, reporting the status of configuration items, as well as
software builds and release engineering. Its predecessor defined only the contents of a software
configuration management plan. This standard
addresses what CM activities are to be done, when
they are to happen in the life cycle, and what planning and resources are required. It also describes
the content areas for a CM plan. The standard supports ISO/IEC/IEEE 12207:2008 and ISO/IEC/
IEEE 15288:2008 and adheres to the terminology of those standards.
Software projects often require the development of user documentation. Management of the
project, therefore, includes management of the
documentation effort.
improved practices. For example, one might propose an improved practice for software requirements analysis. A naïve treatment might relate
the description to an early stage of the life cycle
model. A superior approach is to describe the
practice in the context of a process that can be
applied at any stage of the life cycle. The requirements analysis process, for example, is necessary for the development stage, for maintenance,
and often for retirement, so an improved practice
described in terms of the requirements analysis
process can be applied to any of those stages.
The two key standards are ISO/IEC/IEEE
12207, Software Life Cycle Processes, and ISO/
IEC/IEEE 15288, System Life Cycle Processes.
The two standards have distinct histories, but
they were both revised in 2008 to align their processes, permitting their interoperable use across a
wide spectrum of projects ranging from a standalone software component to a system with negligible software content. Both are being revised
again with the intent of containing an identical
list of processes, but with provisions specialized
for the respective disciplines.
IEEE Std. 12207-2008 (a.k.a. ISO/IEC 12207:2008) Standard for Systems and Software Engineering—Software Life Cycle Processes
ISO/IEC TR 24748-2 is a guide for the application of ISO/IEC 15288:2008. It addresses system, life cycle, process, organizational, project,
and adaptation concepts, principally through
reference to ISO/IEC TR 24748-1 and ISO/IEC
15288:2008. It then gives guidance on applying
ISO/IEC 15288:2008 from the aspects of strategy, planning, application in organizations, and
application on projects.
ISO/IEC/IEEE 15289:2011 provides requirements for identifying and planning the specific information items (documentation) to be developed and revised during system and software life cycles.
ISO/IEC TR 24748-3 is a guide for the application of ISO/IEC 12207:2008. It addresses system, life cycle, process, organizational, project,
and adaptation concepts, principally through
reference to ISO/IEC TR 24748-1 and ISO/IEC
12207:2008. It gives guidance on applying ISO/
IEC 12207:2008 from the aspects of strategy,
planning, application in organizations, and application on projects.
The 12207 and 15288 standards provide processes covering the life cycle, but they do not provide a standard life cycle model (waterfall, incremental delivery, prototype-driven, etc.). Selecting
an appropriate life cycle model for a project is a
major concern of ISO/IEC 24748-1.
IEEE Std. 24748.1-2011 Guide—Adoption of ISO/IEC TR 24748-1:2010 Systems and Software Engineering—Life Cycle Management—Part 1: Guide for Life Cycle Management
A VSE could obtain ISO/IEC 29110 certification. The set of technical reports is available
at no cost on the ISO website. Many ISO 29110
documents are available in English, Spanish, Portuguese, Japanese, and French.
ISO/IEC TR 29110-5-1-2:2011 Software Engineering—Lifecycle Profiles for Very Small Entities (VSEs)—Part 5-1-2: Management and Engineering Guide: Generic Profile Group: Basic Profile
All of the standards described so far in this section provide a basis for defining processes. Some
users are interested in assessing and improving
their processes after implementation. The 15504 series provides for process assessment; it is currently being revised and renumbered as the 330xx series.
ISO/IEC 15504 [ten parts] Information Technology—Process Assessment
IDEF0 function modeling is designed to represent the decisions, actions, and activities of an existing or prospective organization or system. IDEF0 graphics and accompanying texts are presented in an organized and systematic way to gain understanding of the modeled organization or system.
ISO/IEC 19501 describes the Unified Modeling Language (UML), a graphical language for
visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system. The UML offers a standard way to write a
system's blueprints, including conceptual things
such as business processes and system functions
as well as concrete things such as programming
language statements, database schemas, and reusable software components.
ISO/IEC 19505:2012 [two parts] Information TechnologyObject Management Group Unified Modeling Language (OMG UML)
ISO/IEC 19506:2012 defines a metamodel for representing existing software assets, their associations, and operational environments, referred to as
the knowledge discovery metamodel (KDM). This
is the first in the series of specifications related to
software assurance (SwA) and architecture-driven
modernization (ADM) activities. KDM facilitates the exchange of information about existing software assets among modernization tools.
ISO/IEC 19507:2012 defines the Object Constraint Language (OCL), version 2.3.1, which is aligned with UML 2.3 and MOF 2.0.
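To give a flavor of the notation, a minimal OCL fragment might attach an invariant and an operation contract to a UML class; the class, attribute, and operation names below are invented purely for illustration:

```
-- Invariant: every Account must keep a nonnegative balance.
context Account
inv NonNegativeBalance: self.balance >= 0

-- Contract for a withdraw operation: the precondition bounds the
-- amount, and the postcondition relates the new balance to the
-- old one (balance@pre denotes the value before the call).
context Account::withdraw(amount : Real)
pre:  amount > 0 and amount <= self.balance
post: self.balance = self.balance@pre - amount
```

Such constraints supplement UML diagrams with precise, declarative rules that the diagrams alone cannot express.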
Some organizations invest in software engineering environments (SEE) to assist in the
construction of software. An SEE, per se, is not
a replacement for sound processes. However, a
suitable SEE must support the processes that
have been chosen by the organization.
ISO/IEC 15940:2006 Information Technology—Software Engineering Environment Services
Within systems and software engineering, computer-aided software engineering (CASE) tools
represent a major part of the supporting technologies used to develop and maintain information technology systems. Their selection must be
carried out with careful consideration of both the
technical and management requirements.
ISO/IEC 14102:2008 defines both a set of processes and a structured set of CASE tool characteristics for use in the technical evaluation and
the ultimate selection of a CASE tool. It follows
the software product evaluation model defined in
ISO/IEC 14598-5:1998.
ISO/IEC 14102:2008 adopts the general model
of software product quality characteristics and
subcharacteristics defined in ISO/IEC 9126-1:2001 and extends these when the software
product is a CASE tool; it provides product characteristics unique to CASE tools.
The next document provides guidance on how
to adopt CASE tools, once selected.
IEEE Std. 14471-2010 Guide—Adoption of ISO/IEC TR 14471:2007 Information Technology—Software Engineering—Guidelines for the Adoption of CASE Tools
The purpose of this family of standards is to specify a common set of modeling concepts based
on those found in commercial CASE tools for
describing the operational behavior of a software
system. These standards establish a uniform,
integrated model of software concepts related to
software functionality. They also provide a textual syntax for expressing the common properties
(attributes and relationships) of those concepts as
they have been used to model software behavior.
SOFTWARE QUALITY
One viewpoint of software quality starts with
ISO 9001, Quality Management Requirements,
dealing with quality policy throughout an organization. The terminology of that standard may
be unfamiliar to software professionals, and
quality management auditors may be unfamiliar
with software jargon. The following standard
describes the relationship between ISO 9001 and
ISO/IEC 12207. Unfortunately, the current version refers to obsolete editions of both; a replacement is in progress:
IEEE Std. 90003-2008 Guide—Adoption of ISO/IEC 90003:2004 Software Engineering—Guidelines for the Application of ISO 9001:2000 to Computer Software
ISO/IEC 90003 provides guidance for organizations in the application of ISO 9001:2000 to the
acquisition, supply, development, operation, and
maintenance of computer software and related
support services. ISO/IEC 90003:2004 does not
add to or otherwise change the requirements of
ISO 9001:2000.
The guidelines provided in ISO/IEC
90003:2004 are not intended to be used as assessment criteria in quality management system
registration/certification.
The application of ISO/IEC 90003:2004 is appropriate to software that is
• part of a commercial contract with another organization,
• a product available for a market sector,
• used to support the processes of an organization,
• embedded in a hardware product, or
• related to software services.
Some organizations may be involved in all
the above activities; others may specialize in
one area. Whatever the situation, the organization's quality management system should cover
all aspects (software related and nonsoftware
related) of the business.
ISO/IEC 90003:2004 identifies the issues
which should be addressed and is independent
of the technology, life cycle models, development processes, sequence of activities, and
organizational structure used by an organization. Additional guidance and frequent references to the ISO/IEC JTC 1/SC 7 software
engineering standards are provided to assist in
the application of ISO 9001:2000: in particular, ISO/IEC 12207, ISO/IEC TR 9126, ISO/
IEC 14598, ISO/IEC 15939, and ISO/IEC TR
15504.
The ISO 9001 approach posits an organization-level quality management process paired
with project-level quality assurance planning
to achieve the organizational goals. IEEE 730
describes project-level quality planning.
SQuaRE provides
• terms and definitions,
• reference models,
• guides, and
• standards for requirements specification, planning and management, measurement, and evaluation purposes.
The next SQuaRE standard provides a taxonomy of software quality characteristics that may
be useful in selecting characteristics relevant to a
specific project:
ISO/IEC 25010:2011 Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—System and Software Quality Models
A standard dictionary of measures of the software aspects of dependability for assessing and
predicting the reliability, maintainability, and
availability of any software system; in particular,
it applies to mission-critical software systems.
IEEE Std. 1633-2008 Recommended Practice for
Software Reliability
The methods for assessing and predicting the reliability of software, based on a life cycle approach
to software reliability engineering, are prescribed in
this recommended practice. It provides information
necessary for the application of software reliability
(SR) measurement to a project, lays a foundation
for building consistent methods, and establishes
the basic principle for collecting the data needed to
assess and predict the reliability of software. The
recommended practice prescribes how any user can
participate in SR assessments and predictions.
IEEE has an overall standard for software
product quality that has a scope similar to the
ISO/IEC 250xx series described previously. Its
terminology differs from the ISO/IEC series, but
it is substantially more compact.
IEEE Std. 1061-1998 Standard for Software Quality
Metrics Methodology
In many cases, a database of software anomalies is used to support verification and validation
activities. The following standard suggests how
anomalies should be classified.
SOFTWARE ENGINEERING PROFESSIONAL PRACTICE

ENGINEERING FOUNDATIONS

[Table: the standards in this appendix cross-referenced to their most relevant KAs, including SW Requirements, SW Design, SW Construction, SW Testing, SW Maintenance, SW Configuration Management, SW Engineering Management, SW Engineering Process, SW Engineering Models and Methods, SW Quality, and SW Engineering Professional Practice.]
APPENDIX C
CONSOLIDATED REFERENCE LIST
The Consolidated Reference List identifies all
recommended reference materials (to the level of
section number) that accompany the breakdown
of topics within each knowledge area (KA). This
Consolidated Reference List is adopted by the
software engineering certification and associated
professional development products offered by the
IEEE Computer Society. KA Editors used the references allocated to their KA by the Consolidated
Reference List as their Recommended References.
Collectively this Consolidated Reference List is
• Complete: Covering the entire scope of the SWEBOK Guide.
• Sufficient: Providing enough information to describe generally accepted knowledge.
• Consistent: Providing neither contradictory knowledge nor conflicting practices.
• Credible: Recognized as providing expert treatment.
• Current: Treating the subject in a manner commensurate with currently generally accepted knowledge.
• Succinct: As short as possible (both in number of reference items and in total page count) without failing other objectives.
[1*] J.H. Allen et al., Software Security
Engineering: A Guide for Project
Managers, Addison-Wesley, 2008.
[2*] M. Bishop, Computer Security: Art and
Science, Addison-Wesley, 2002.
[3*] B. Boehm and R. Turner, Balancing Agility
and Discipline: A Guide for the Perplexed,
Addison-Wesley, 2003.
C-1