
Guide to the
CABA COMMON BODY OF KNOWLEDGE

Version 9.1 2008

Certified Associate Software Business Analyst
Common Body of Knowledge

Copyright 2008 by
QAI Global Institute
2101 Park Center Drive, Suite 200
Orlando, FL 32835-7614
Phone: 407-363-1111
Fax: 407-363-1112
Web site: www.qaiworldwide.org

Revision and Copyright

DESCRIPTION        BY                  DATE
Initial Release    Patterson/Marina    08/01/2008

Copyright
Copyright © QAI Global Institute 2008. All Rights Reserved.
No part of this publication, or translations of it, may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means (electronic, mechanical, photocopying, recording, or any other media embodiment now known or hereafter to become known), without the prior written permission of the QAI Global Institute.

Table of Contents
Skill Category 1
Business Analyst Principles and Concepts . . . . . 1-1
1.1. Introduction to Business Analyst Principles and Concepts . . . . . . . . .1-1
1.2. Relationship between Certified Software Business Analyst and Quality . . . .1-2
1.3. Quality Pioneers, Thinkers and Innovators . . . . . . . . . . . . . . . . . . . . .1-3
1.3.1. Walter Shewhart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-3
1.3.2. Joseph M. Juran . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-4
1.3.3. Frederick Herzberg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-5
1.3.4. W. Edwards Deming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-5
1.3.5. Kaoru Ishikawa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-7
1.3.6. Genichi Taguchi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-8
1.3.7. Philip Crosby . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-8
1.3.8. Tom DeMarco . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-9
1.4. Basic Quality Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-9
1.4.1. Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-9
1.4.2. Quality Assurance (QA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-10
1.4.3. Quality Control (QC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-10
1.4.4. Quality Improvement (QI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-11
1.4.5. Quality Management (QM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-11
1.5. Business Analysis and Management Tools for Quality . . . . . . . . . . .1-11
1.5.1. Affinity Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-12
1.5.2. Baselining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-12
1.5.3. Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-13
1.5.4. Cause-and-Effect Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-14
1.5.5. Control Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-15
1.5.6. Graphical Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-17
1.5.6.1. Pie Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
1.5.6.2. Bar (Column) Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17


1.5.7. Cost of Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-18


1.5.8. Earned Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-20
1.5.9. Expected Value/Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-23
1.5.10. Flow Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-25
1.5.11. Force Field Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-26
1.5.12. Kaizen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-29
1.5.13. Pareto Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-30
1.5.14. Relationship Diagram, Entity Relationship Diagram . . . . . . . . . . 1-31
1.5.15. Scatter Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-31
1.5.16. Six Sigma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-33
1.6. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-33


Skill Category 2
Management and Communication Skills . . . . . . . 2-1
2.1. Leadership and Management Concepts . . . . . . . . . . . . . . . . . . . . . . . .2-1
2.2. Quality Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-5
2.2.1. Manage by Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-5
2.2.2. Manage with Facts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-6
2.2.3. Manage Toward Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-8
2.2.4. Focus on the Customer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-9
2.2.5. Continuous Improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-9
2.2.6. Creating the Infrastructure for Improving Quality . . . . . . . . . . . . .2-10
2.2.6.1. Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10
2.2.6.2. Do . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
2.2.6.3. Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
2.2.6.4. Act . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13

2.3. Communication and Interpersonal Skills for Software Business Analysts . .2-13
2.3.1. Listening Skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-13
2.3.2. Interviewing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-15
2.3.3. Facilitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-19
2.3.4. Team Building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-23
2.3.5. Tuckman's Forming-Storming-Norming-Performing Model . . . . .2-24
2.3.5.1. Tuckman's Forming-Storming-Norming-Performing - Original Model . . 2-24
2.3.5.2. Hersey's and Blanchard's Situational Leadership Model . . . . . . . . . . . . 2-27

2.3.6. Brainstorming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-27


2.3.7. Focus Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-28
2.3.8. Negotiating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-29
2.3.9. Johari Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-32
2.3.10. Prioritization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-35
2.3.10.1. Conflict Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35
2.3.10.2. Prioritization Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-38

2.4. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-41


Skill Category 3
Define, Build, Implement and Improve Work Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-1
3.1. Understanding and Applying Standards for Software Development 3-1
3.1.1. Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
3.2. Standards Organizations and Models . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
3.2.1. International Organization for Standardization (ISO) . . . . . . . . . . . 3-2
3.2.2. Software Engineering Institute and the Capability Maturity Model SEI/CMM and CMMI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
3.2.3. Six Sigma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-8
3.2.3.1. DMAIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
3.2.3.2. DMADV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
3.2.3.3. Six Sigma Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9

3.2.4. Information Technology Infrastructure Library (ITIL) . . . . . . . . . 3-10


3.2.5. National Quality Awards and Models . . . . . . . . . . . . . . . . . . . . . . 3-11
3.2.6. The Role of Models and Standards . . . . . . . . . . . . . . . . . . . . . . . . 3-11
3.3. Process Management Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
3.3.1. Definition of a Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
3.3.2. Why Processes are Needed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-12
3.3.3. Process Workbench and Components . . . . . . . . . . . . . . . . . . . . . . 3-13
3.3.3.1. Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
3.3.3.2. Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-14

3.3.4. Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15


3.3.4.1. Do Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15
3.3.4.2. Check Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-15

3.3.5. Process Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16


3.3.5.1. Management Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
3.3.5.2. Work Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16
3.3.5.3. Check Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-16

3.3.6. The Process Maturity Continuum . . . . . . . . . . . . . . . . . . . . . . . . . 3-17


3.3.7. How Processes are Managed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-18
3.3.7.1. Align the Process to the Mission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-19
3.3.7.2. Identify the Process and Define the Policy . . . . . . . . . . . . . . . . . . . . . 3-19
3.3.7.3. Evaluate Process Development Stage . . . . . . . . . . . . . . . . . . . . . . . . . 3-20
3.3.7.4. Determine Current and Desired Process Capability . . . . . . . . . . . . . . 3-21
3.3.7.5. Establish Process Management Information Needs . . . . . . . . . . . . . . 3-25
3.3.7.6. Establish Process Measurement, Monitoring and Reporting . . . . . . . 3-25


3.3.8. Process Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-26


3.4. Process Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-26
3.4.1. Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-27
3.4.2. Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-28
3.4.3. Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-30
3.4.4. Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-30
3.4.5. Role of Process Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-31
3.5. Process Planning and Evaluation Process . . . . . . . . . . . . . . . . . . . . . .3-31
3.5.1. Process Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-31
3.5.2. Do Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-32
3.5.3. Check Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-32
3.5.4. Act Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-33
3.5.4.1. Process Improvement Teams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-33
3.5.4.2. Process Improvement Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-33

3.6. Measures and Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-34


3.6.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-34
3.6.1.1. Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-34
3.6.1.2. Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-36

3.6.2. Types and Uses of Measures and Metrics . . . . . . . . . . . . . . . . . . . .3-37


3.6.2.1. Strategic or Intent Measures and Metrics . . . . . . . . . . . . . . . . . . . . . . 3-37
3.6.2.2. Process Measures and Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-38
3.6.2.3. Efficiency Measures and Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-38
3.6.2.4. Effectiveness Measures and Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . 3-38
3.6.2.5. Size Measures and Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-39

3.6.3. Developing Measures and Metrics . . . . . . . . . . . . . . . . . . . . . . . . .3-39


3.6.3.1. Responsibility for Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-39
3.6.3.2. Responsibility for Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-39
3.6.3.3. Responsibility for Analysis and Reporting . . . . . . . . . . . . . . . . . . . . . 3-40
3.6.3.4. Common Measures for Information Technology . . . . . . . . . . . . . . . . 3-40

3.6.4. Obstacles to Establishing Effective Measures and Metrics . . . . . .3-42


3.6.4.1. Use in performance appraisals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-42
3.6.4.2. Unreliable, invalid measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-42
3.6.4.3. Measuring individuals rather than projects or groups of people . . . . 3-43
3.6.4.4. Non-timely recording of manual measurement data . . . . . . . . . . . . . . 3-43
3.6.4.5. Misuse of measurement data by management . . . . . . . . . . . . . . . . . . . 3-43
3.6.4.6. Unwillingness to accept bad numbers . . . . . . . . . . . . . . . . . . . . . . . . . 3-43

3.7. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-43


Skill Category 4
Business Fundamentals . . . . . . . . . . . . . . . . . . . . .4-1
4.1. Concept Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
4.2. Understanding Vision, Mission, Goals, Objectives, Strategies and Tactics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
4.2.1. Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-4
4.2.2. Mission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-6
4.2.3. Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-8
4.2.4. Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-9
4.2.5. Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-11
4.2.6. Tactics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-13
4.3. Developing an Organizational Vocabulary . . . . . . . . . . . . . . . . . . . . 4-13
4.3.1. Risk Analysis and Management . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
4.3.1.1. What is risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-20
4.3.1.2. Event identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
4.3.1.3. Risk Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
4.3.1.4. Risk Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-21
4.3.1.5. Control Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22

4.4. Knowledge Area Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-22


Skill Category 5
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
5.1. Business Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-1
5.1.1. How Requirements are defined . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-1
5.1.1.1. What is a requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.1.1.2. Separating requirements and design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2

5.1.2. Who Participates in Requirement Definition . . . . . . . . . . . . . . . . . .5-3


5.1.2.1. Business Project Sponsor or Champion . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
5.1.2.2. Business Stakeholders and Subject Matter Experts (SME) . . . . . . . . . . 5-4
5.1.2.3. Developers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
5.1.2.4. Testers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
5.1.2.5. Customers and Suppliers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
5.1.2.6. Business Analysts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6

5.1.3. Attributes of a good requirement . . . . . . . . . . . . . . . . . . . . . . . . . . .5-6


5.1.3.1. Correct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6
5.1.3.2. Complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
5.1.3.3. Consistent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
5.1.3.4. Unambiguous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-7
5.1.3.5. Important . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
5.1.3.6. Stable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
5.1.3.7. Verifiable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
5.1.3.8. Modifiable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-8
5.1.3.9. Traceable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-9

5.2. Processes used to define business requirements . . . . . . . . . . . . . . . . . .5-9


5.2.1. Joint Application Development (JAD) . . . . . . . . . . . . . . . . . . . . . .5-10
5.2.1.1. What is Joint Application Development . . . . . . . . . . . . . . . . . . . . . . . . . . 5-10
5.2.1.2. Who participates in Joint Application Development . . . . . . . . . . . . . . . . 5-11
5.2.1.3. How is Joint Application Development conducted . . . . . . . . . . . . . . . . . 5-11

5.2.2. Business Event Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-14


5.2.2.1. What is a Business Event Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
5.2.2.2. How is a Business Event Model Created . . . . . . . . . . . . . . . . . . . . . . . . . 5-16
5.2.2.3. Using the Business Event Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-17

5.2.3. Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-18


5.2.3.1. What is a Use Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-18
5.2.3.2. How Use Cases are Created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-20
5.2.3.3. How are Use Cases Applied . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-22
5.2.3.4. Considerations for Use Case Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-23

5.2.4. Process Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-23


5.2.4.1. Data Flow Diagrams (DFD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-24
5.2.4.2. Entity Relationship Diagrams (ERD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-26
5.2.4.3. State Transition Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-28


5.2.5. Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30


5.2.6. Test First . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-30
5.3. Quality Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
5.3.1. Quality Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
5.3.1.1. Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
5.3.1.2. Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
5.3.1.3. Availability / Response Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
5.3.1.4. Integrity / Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
5.3.1.5. Usability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
5.3.1.6. Maintainability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-32
5.3.1.7. Flexibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
5.3.1.8. Portability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
5.3.1.9. Reusability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
5.3.1.10. Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-33

5.3.2. Relationship of Quality Requirements . . . . . . . . . . . . . . . . . . . . . . 5-34


5.3.3. Measuring Business and Quality Requirements . . . . . . . . . . . . . . 5-35
5.3.3.1. Minimum Level of Acceptable Quality . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36

5.3.4. Enterprise Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37


5.3.4.1. Standardization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37
5.3.4.2. Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
5.3.4.3. Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38

5.4. Constraints and Trade-offs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38


5.4.1. Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
5.4.1.1. Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
5.4.1.2. Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
5.4.1.3. Policies, Standards and Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
5.4.1.4. Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-39
5.4.1.5. Internal Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
5.4.1.6. External Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40

5.5. Critical Success Factors and Critical Assumptions . . . . . . . . . . . . . 5-40


5.5.1. Critical Success Factors (CSFs) . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-41
5.5.2. Critical Assumptions (CA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-42
5.6. Prioritizing Requirements and Quality Function Deployment . . . . 5-43
5.6.1. Prioritizing Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-44
5.6.2. Quality Function Deployment (QFD) . . . . . . . . . . . . . . . . . . . . . . 5-45
5.7. Developing Testable Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . 5-47
5.7.1. Ambiguity Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48
5.7.1.1. What is Ambiguity Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48
5.7.1.2. Performing Ambiguity Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-48

5.7.2. Resolving Requirement Conflicts . . . . . . . . . . . . . . . . . . . . . . . . . 5-49



5.7.3. Reviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-50


5.7.4. Fagan Inspections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-51
5.7.4.1. Fagan Inspection Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-52
5.7.4.2. Fagan Inspection Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-53
5.7.4.3. Fagan Inspection Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-54

5.7.5. Gilb Agile Specification Quality Control . . . . . . . . . . . . . . . . . . . .5-55


5.7.5.1. Agile Specification Quality Control (SQC) Approach . . . . . . . . . . . . 5-55
5.7.5.2. Using Inspection Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-55
5.7.5.3. Agile SQC Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-56
5.7.5.4. Agile SQC Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-56

5.7.6. Consolidated Inspection Approach . . . . . . . . . . . . . . . . . . . . . . . . .5-56


5.8. Tracing Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-57
5.8.1. Uniquely Identify All Requirements . . . . . . . . . . . . . . . . . . . . . . . .5-57
5.8.2. From Requirements to Design . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-58
5.8.2.1. Tracing Requirements to Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-58
5.8.2.2. Tracing Design to Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-58

5.8.3. From Requirements to Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-58


5.8.3.1. Tracing Requirements and Design to Test . . . . . . . . . . . . . . . . . . . . . . . . 5-59
5.8.3.2. Tracing Test Cases to Design and Requirements . . . . . . . . . . . . . . . . . . . 5-59

5.9. Managing Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-59


5.9.1. Create a known base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-61
5.9.2. Manage Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-61
5.9.3. Authorize changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-62
5.9.3.1. Change Control Board (CCB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-62
5.9.3.2. Change Scope: Requirement, Enhancement or New Project . . . . . . . 5-63
5.9.3.3. Establish priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-64
5.9.3.4. Publish . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-64

5.9.4. Control results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-64


5.9.4.1. Size the effort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-64
5.9.4.2. Adjust the resources or the plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-65
5.9.4.3. Communicate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-65

5.10. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5-66


Skill Category 6
Software Development Processes, Project and Risk
Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-1
6.1. Software Development in the Process Context . . . . . . . . . . . . . . . . . . 6-1
6.1.1. Policies, Standards, Procedures and Guidelines . . . . . . . . . . . . . . . 6-2
6.1.1.1. Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
6.1.1.2. Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
6.1.1.3. Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
6.1.1.4. Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2

6.1.2. Entry and Exit Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3


6.1.2.1. Entry Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
6.1.2.2. Exit Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
6.1.2.3. Applying Entry and Exit Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3

6.1.3. Benchmarks and Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4


6.1.3.1. The Role of Measures and Metrics in the Development Environment . . . . 6-5
6.1.3.2. Establishing Meaningful Software Development Measures . . . . . . . . . . . . 6-5

6.2. Major Components of Software Processes (Objectives and Deliverables) . 6-6
6.2.1. Pre-Requirements Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
6.2.1.1. Preliminary Scope Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2.1.2. Preliminary Benefit Estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2.1.3. Preliminary Estimate and Related Cost Calculation . . . . . . . . . . . . . . . . . .
6.2.1.4. Feasibility Study Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6-6
6-7
6-7
6-7

6.2.2. Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8


6.2.2.1. Requirements Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
6.2.2.2. Preliminary Test Cases and Test Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
6.2.2.3. Project Charter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
6.2.2.4. Project Scope Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-8
6.2.2.5. Project Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9
6.2.2.6. Revised Estimate, Cost and Benefit Information . . . . . . . . . . . . . . . . . . . . 6-9
6.2.2.7. Organizational Approvals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9

6.2.3. Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9


6.2.3.1. External and Internal Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-10
6.2.3.2. Revised Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
6.2.3.3. New and Revised Test Cases and Test Plan . . . . . . . . . . . . . . . . . . . . . . . 6-11
6.2.3.4. Final Project Scope and Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11
6.2.3.5. Final Project Estimate, Cost and Benefit Information . . . . . . . . . . . . . . . 6-11

6.2.4. Code, Development, Construct or Build . . . . . . . . . . . . . . . . . . . . 6-12


6.2.4.1. Unit and System Tested Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-12
6.2.4.2. Revised Requirements and Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13

6.2.4.3. Defect Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13

6.2.5. Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-13


6.2.5.1. Executed Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2.5.2. Production Ready System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2.5.3. Organizational Approvals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2.5.4. Defect Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

6-13
6-14
6-14
6-14

6.2.6. Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-14


6.2.6.1. Product Documentation / User Manual . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
6.2.6.2. Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
6.2.6.3. Product Turnover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15
6.2.6.4. Help Desk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-16

6.3. Traditional Approaches to Software Development . . . . . . . . . . . . . .6-16


6.3.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-16
6.3.2. Typical Tasks in the Development Process Life Cycle . . . . . . . . .6-17
6.3.3. Process Model / Life-Cycle Variations . . . . . . . . . . . . . . . . . . . . .6-18
6.3.4. Ad-hoc Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-18
6.3.5. The Waterfall Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-19
6.3.5.1. Problems/Challenges associated with the Waterfall Model . . . . . . . . . . 6-20

6.3.6. Iterative Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-20


6.3.6.1. Problems/Challenges associated with the Iterative Model . . . . . . . . . . . . 6-21

6.3.7. Variations on Iterative Development . . . . . . . . . . . . . . . . . . . . . . .6-22


6.3.7.1. Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-22
6.3.7.2. Prototyping steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-23
6.3.7.3. Problems/Challenges associated with the Prototyping Model . . . . . . . . . 6-23
6.3.7.4. Variation of the Prototyping Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24

6.3.8. The Exploratory Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-24


6.3.8.1. The Exploratory Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24
6.3.8.2. Problems/Challenges associated with the Exploratory Model . . . . . . . . . 6-24

6.3.9. The Spiral Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-25


6.3.9.1. The Spiral Model steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-26
6.3.9.2. Problems/Challenges associated with the Spiral Model . . . . . . . . . . . . . . 6-26

6.3.10. The Reuse Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-26


6.3.10.1. The Reuse Model steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-27
6.3.10.2. Problems/Challenges Associated with the Reuse Model . . . . . . . . . . . . 6-27

6.3.11. Creating and Combining Models . . . . . . . . . . . . . . . . . . . . . . . . .6-28


6.3.12. Process Models Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-29
6.4. Agile Development Methodologies . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-29
6.4.1. Basic Agile Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-29
6.4.2. Agile Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6-31

6.4.3. Effective Application of Agile Approaches . . . . . . . . . . . . . . . . . . 6-32


6.4.4. Integrating Agile with Traditional Methodologies . . . . . . . . . . . . 6-33
6.5. Software Development Process Improvement . . . . . . . . . . . . . . . . . . 6-33
6.5.1. Post Implementation Reviews . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-33
6.5.2. Defect Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-36
6.5.3. Surveys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-36
6.6. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-37
6.7. References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-38


Skill Category 7
Acceptance Testing . . . . . . . . . . . . . . . . . . . . . . . . 7-1
7.1. Concepts of Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-2
7.1.1. Dynamic Testing Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-3
7.1.1.1. Unit Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
7.1.1.2. Integration Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
7.1.1.3. System Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-5
7.1.1.4. Acceptance Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-6

7.1.2. Dynamic Testing Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-6


7.1.2.1. White Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
7.1.2.2. Black Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-7
7.1.2.3. Equivalence Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
7.1.2.4. Boundary Value Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
7.1.2.5. Smoke Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-9
7.1.2.6. Regression Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
7.1.2.7. Stress Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-10
7.1.2.8. Conditional and Cycle Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-11
7.1.2.9. Parallel Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
7.1.2.10. Risk-based Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-13
7.1.2.11. Security Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-14
7.1.2.12. Backup and Recovery Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
7.1.2.13. Failure and Ad Hoc Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-15
7.1.2.14. Other Test Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16

7.2. Roles and Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-16


7.2.1. Project Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-16
7.2.2. Test Manager and the Test Team . . . . . . . . . . . . . . . . . . . . . . . . . .7-17
7.2.3. Designer/Developer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-18
7.2.4. Business Partner and Subject Matter Expert . . . . . . . . . . . . . . . . . .7-19
7.2.5. Operations and Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-19
7.2.6. Data Security and Internal Audit . . . . . . . . . . . . . . . . . . . . . . . . . .7-20
7.2.7. Control Verification and the Independent Tester . . . . . . . . . . . . . .7-21
7.2.8. Business Analyst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-22
7.2.9. Other Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-23
7.3. Use Cases for Acceptance Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-23
7.3.1. Use Case Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-25
7.3.2. Use Case Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-28
7.3.3. Use Case Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-29


7.3.4. Writing effective Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-31


7.4. Defect Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-31
7.4.1. Defect Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
7.4.1.1. Defect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-32
7.4.1.2. Errors, Faults and Defects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34
7.4.1.3. Recidivism and Persistent Failure Rates . . . . . . . . . . . . . . . . . . . . . . . . . . 7-34

7.4.2. Defect Life Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-35


7.4.2.1. Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-35
7.4.2.2. Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-38
7.4.2.3. Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39
7.4.2.4. Closure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-39

7.5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-40


Skill Category 8
Commercial Off-the-Shelf Software and Performance
Based Contracting . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
8.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-2
8.1.1. Commercial Off-the-Shelf Software (COTS) . . . . . . . . . . . . . . . . . .8-2
8.1.2. Custom Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-2
8.1.3. Modified Off-the-Shelf Software (MOTS) . . . . . . . . . . . . . . . . . . . .8-2
8.1.4. Performance Based Contracting (PBC) . . . . . . . . . . . . . . . . . . . . . .8-2
8.2. Establish Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-3
8.3. Commercial Off-The-Shelf Software (COTS) Considerations . . . . . .8-3
8.3.1. Determine compatibility with your computer environment . . . . . . .8-4
8.3.1.1. Hardware Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-4
8.3.1.2. Operating System Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
8.3.1.3. Software Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
8.3.1.4. Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
8.3.1.5. Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5

8.3.2. Ensure the software can be integrated into your business system work
flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-6
8.3.2.1. Current system based on certain assumptions . . . . . . . . . . . . . . . . . . . . . . 8-6
8.3.2.2. Existing forms, existing data, and existing procedures . . . . . . . . . . . . . . . 8-7
8.3.2.3. COTS based on certain assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-7

8.3.3. Assuring product usability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-8


8.3.3.1. Telephone interviews with current and previous customers . . . . . . . . . . . . 8-8
8.3.3.2. Demonstrate the software in operation . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10

8.4. Roles and Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-10


8.4.1. Project Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-11
8.4.2. Project Liaison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-11
8.4.3. Attorneys and Legal Counsel . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-12
8.4.4. Test Manager and the Test Team . . . . . . . . . . . . . . . . . . . . . . . . . .8-12
8.4.5. Designer/Developer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-13
8.4.6. Business Partner and Subject Matter Expert . . . . . . . . . . . . . . . . . .8-13
8.4.7. Business Analyst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-14
8.4.8. Other Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-14
8.5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-14

Vocabulary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1

Skill Category 1
Business Analyst Principles and Concepts
1.1 Introduction to Business Analyst Principles and Concepts
The Business Analyst position is uniquely placed in the organization to provide a strong link
between the Business Community and Information Technology (IT). Historically, the
Business Analyst was a part of the business operation who was assigned to work with
Information Technology (or Data Processing as it was once known). The Business
Community was willing to provide these services because of the desire to improve the quality
of the products and services being delivered by the IT organization and because it perceived
this as a way to control the amount of resources being consumed by developmental support
activities.
Originally referred to as "users" by the IT staff, the business organization had little control
over the quality, cost, or schedule of the development process. The IT staff was struggling
with an evolving technology, with few rules, and a constantly changing infrastructure. What
was not changing was the users' insatiable appetite for the productivity increases provided by
automation.
The Business Analysts gathered Business Requirements, assisted in Integration and
Acceptance Testing, supported the development of training and implementation material,
participated in the implementation, and provided immediate post-implementation support.
While doing this, they were actively involved in the development of project plans and often
provided project management skills when these skills were not available in other project
participants. Although there are many new tools and techniques to perform these tasks today,
they are still the key functions performed by the Business Analyst.



Many of these activities relied on outstanding communication skills, both verbal and
written, which were among the BA's most valuable characteristics. As IT struggled to provide
the improvements in quality and productivity demanded by users who had metamorphosed
into customers, the role of the Business Analyst became even more important. Over time,
many of these individuals were recruited into IT because of their ability
to translate business needs into terms that were understood by IT. They became Software
Business Analysts.
The shift in the balance of power, which occurred when users became customers, failed to
produce the kind of improvements in schedule, cost, and quality that the business community
had anticipated. It was clear that this, too, was a zero-sum game. The evolution of the
Business Partnership model, which had its roots in the very earliest days of the quality
revolution, has finally created a relationship that is win-win. The Business Analyst, fluent in
both the language of business and the language of IT, is once again the key player.

1.2 Relationship between Certified Software Business Analyst and Quality
This introductory chapter looks at the background and evolution of the Quality Movement,
which had its roots in 1920s manufacturing processes. As Information Technology works to
move from an unstructured art form to an engineering approach, the lessons learned in
manufacturing are increasingly relevant.
The dramatic transformation of the automobile industry over the last thirty years is an
excellent example of what happens when organizations fail to deliver what the ultimate
customer (internal or external to the organization) perceives to be quality. It is not enough to
build a vehicle that provides transportation from point A to point B. If the price of gas goes
up, so must the fuel efficiency of the vehicle. If the roads become congested and parking
difficult, vehicles need to become smaller and more maneuverable. Vehicles that require
frequent, unpredictable repairs, whether major or minor, are viewed as less desirable than
those with fewer, more predictable service needs. Products not responding to these needs will
be perceived as having lower quality, and therefore they are less desirable in the marketplace.
Customers purchase those items in smaller quantities and want to pay comparatively less for
them. Japanese manufacturers, using their understanding of quality production techniques,
have succeeded in displacing most North American and European vehicles as the quality
leaders in the eyes of world customers.
One of the important lessons learned was that it is essential to understand what the external
customer defined as quality. This chapter will look at the varying operating definitions of
Quality which are in common usage today, and how they relate to IT. Business Analysts have
the opportunity each day to impact the final quality of the products they are involved with; by
better understanding the options and the tools available for developing quality systems, they
make a major contribution to the organization.



As the body of knowledge about quality practices evolved, it became clear that improving
quality was an excellent approach for addressing the problems of meeting schedule and
budget objectives, not an alternative. This is why a proficient Business Analyst must have a
firm foundation in Quality concepts to be truly professional.

1.3 Quality Pioneers, Thinkers and Innovators


The theoretical framework for how to create high quality products and services evolved over
time. The concepts for automation began simply and grew more sophisticated; so too with the
concepts of quality. The individuals discussed below added to the understanding in specific
and important ways. Each contribution is individually useful to the BA committed to
providing the very best work for their organization; collectively they are extraordinarily
powerful.

1.3.1 Walter Shewhart

Much of the early work in quality was focused on the manufacturing of goods; Bell
Laboratories in the 1920s was the testing grounds for much of this work. Leading the effort
for many years, Walter Shewhart developed the conceptual framework for much of the Total
Quality Management (TQM) approach to improving quality. Key among his many
contributions are two which are fundamental to the business of providing quality software
systems:
1. The concept of Continuous Improvement
2. The related 1-2-3-4 Cycle.1
Continuous improvement stresses continuous learning and creativity as the pathway to
ongoing incremental improvements. Lack of creativity eventually results in dead end
products. Lack of learning inhibits creativity and results in boredom and lack of attention to
detail.
The 1-2-3-4 cycle is a methodical approach to applying the concept of Continuous
Improvement. Although not captioned in this fashion originally, the four steps are the
foundation for the PLAN-DO-CHECK-ACT approach popularized by Deming. In this
approach, organizations identify how a process or activity will be done (PLAN); they then
perform the process or activity according to the Plan (DO). When the process or activity is
complete, it is then time to evaluate the results of the plan and its execution to determine if the
desired results were achieved (CHECK). Based on the results of this evaluation, adjustments
will be made to the PLAN so that it will better achieve the desired results (ACT). The cycle is
then repeated.

1. Shewhart, Walter; Statistical Method from the Viewpoint of Quality Control (Graduate School,
Department of Agriculture, Washington, 1939)
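The four steps can be modeled as a simple loop. The sketch below is illustrative only; the "process" (a task estimate that consistently overruns by 20%) and the halving adjustment rule are invented for the example and are not part of the CBOK:

```python
# Illustrative sketch of the PLAN-DO-CHECK-ACT cycle as an iterative loop.
# The process behavior and adjustment rule are invented for the example.

def pdca(plan, do, check, act, iterations):
    """Repeat the four-step cycle, returning the refined plan."""
    for _ in range(iterations):
        outcome = do(plan)            # DO: perform the process per the plan
        gap = check(plan, outcome)    # CHECK: compare results against the plan
        plan = act(plan, gap)         # ACT: adjust the plan for the next cycle
    return plan

# Example: planned task hours consistently overrun by 20%; each cycle
# moves the plan halfway toward the observed actuals.
do = lambda planned_hours: planned_hours * 1.2
check = lambda planned, actual: actual - planned
act = lambda planned, gap: planned + gap / 2

print(pdca(10.0, do, check, act, iterations=3))  # plan moves toward actuals each cycle
```

Each pass through the loop refines the plan based on evidence from the previous execution, which is the essence of Continuous Improvement.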
Although the explanation above appears to describe a single iteration of a process, Shewhart
quickly realized that to have a sound foundation for action there would need to be many
iterations. (In the manufacturing environment, this is not difficult). As a result he also
developed a third fundamental concept: Statistical Process Control.
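Shewhart's control charts formalize this idea by flagging measurements that fall outside statistically derived limits. The sketch below uses the common mean plus-or-minus three standard deviations convention; the measurement data is invented for illustration:

```python
# Illustrative Shewhart-style control limits (mean +/- 3 sigma).
# The measurement data below is invented for the example.
import statistics

def control_limits(history):
    """Derive control limits from in-control historical measurements."""
    center = statistics.mean(history)
    sigma = statistics.pstdev(history)  # population standard deviation
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(history, observations):
    """Flag new observations that fall outside the control limits."""
    lcl, _, ucl = control_limits(history)
    return [x for x in observations if x < lcl or x > ucl]

history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]  # stable process
print(out_of_control(history, [10.1, 13.5, 9.9]))        # [13.5]
```

Points inside the limits are treated as ordinary process variation; points outside them signal a special cause worth investigating.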
The pioneering work done by Shewhart has subsequently been expanded, enhanced, and
refined by others, some of whom are presented below.

1.3.2 Joseph M. Juran

Also working within the Bell System, at Western Electric's Hawthorne plant near Chicago,
Joseph Juran had a background in Engineering and Statistics. The Hawthorne Plant was the
location of a number of innovations and insights over a wide range of disciplines (the
Hawthorne Effect on human motivation is one of the most widely known). Juran added
several key innovations and improvements to the work done by Shewhart:
1. The application of the Pareto Principle to the Quality Improvement Process
2. The addition of the Human Element to the existing statistical basis of quality
3. His step-by-step process for breakthrough improvement (which evolved into Six
Sigma).
Juran's book, the Quality Control Handbook2, is the backbone of most Quality Improvement
Libraries. In it, he provides significant and detailed information about how to implement
quality processes, with specific attention to the human aspects of quality. In his publications,
he provides one of the two most commonly cited definitions of Quality:
"Fitness for Use."
This concept includes the idea that a product is created to fulfill a need, and that it must do so
in a way that is effective and free from defects.
Juran's application of the Pareto Principle (also referred to as the 80-20 Rule) helps to focus
an organization's attention on the "critical few" (20%) issues or activities that will result in the
greatest benefit. The remainder are the "trivial many" (80%).
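As an illustration of how a BA might apply the principle, the sketch below ranks a hypothetical defect tally (the categories and counts are invented) to find the critical few that account for roughly 80% of all defects:

```python
# Illustrative Pareto analysis; the defect categories and counts are invented.

def critical_few(defect_counts, threshold=0.80):
    """Return the smallest set of categories covering `threshold` of all defects."""
    total = sum(defect_counts.values())
    # Rank categories by defect count, largest first
    ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0
    for category, count in ranked:
        selected.append(category)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return selected

defects = {
    "Missing requirement": 120,
    "Ambiguous wording": 45,
    "Wrong data format": 15,
    "Typo in label": 12,
    "Obsolete screen shot": 8,
}
print(critical_few(defects))  # ['Missing requirement', 'Ambiguous wording']
```

Here two of the five categories account for over 80% of the defects, so improvement effort is best concentrated on them first.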
His work identified the Quality Trilogy as:
1. Quality Planning
2. Quality Improvement
3. Quality Control.

2. Juran, Joseph M. Quality Control Handbook, 1951.



Taken together, these three move an organization forward by focusing on delivering those
products and features that the customer actually needs and wants.
From a manufacturing perspective, the Quality Control activities were well established;
inspectors examined some number of finished products to determine if they were
"acceptable." Juran, following the work of Shewhart, demonstrated that by collecting
information about the number of products that were not acceptable, and what the
characteristics of the problems were, it would be possible to improve the manufacturing
process.

1.3.3 Frederick Herzberg

Herzberg became interested in the question of how people were motivated. Work by quality
pioneers, especially Juran had revealed that motivation was a key ingredient in the effective
implementation of quality processes. Based on his research, Herzberg developed his Hygiene
Motivation Theory.
According to Herzberg, people have two sets of needs: basic (or "hygiene") needs and
motivators. Hygiene needs are conditions that, if not met, will cause people to become
Dissatisfied; these include items such as company policy and administration, salary,
interpersonal relationships, and supervision. Motivational needs are conditions that, if met,
will cause people to be Satisfied; these include items such as achievement, responsibility,
recognition, advancement, and the work itself.
These results complement and support the concepts later articulated by Dr. Deming.

1.3.4 W. Edwards Deming

Deming worked for the US Census Bureau following his academic career and began applying
and refining his understanding of Statistical Processes. Following World War II, he began his
private career, which led him to his now famous work in Japan, helping to transform the label
"Made in Japan" from a symbol of cheap goods and poor quality to a symbol of
high quality, innovative products competing head-to-head with any other producer in the
world. During this and other work, Deming developed the 14 Points, which were first
presented in his book Out of the Crisis3.
The 14 Points encapsulated key concepts that result in a more productive environment leading
to increased profits and quality. These points are:
1. Create constancy of purpose toward improvement of product and service, with the aim
to become competitive and to stay in business, and to provide jobs.

3. Deming, W. Edwards, Out of the Crisis, MIT/CAES, 1982



2. Adopt the new philosophy. We are in a new economic age. Western management must
awaken to the challenge, must learn their responsibilities, and take on leadership for
change.
3. Cease dependence on inspection to achieve quality. Eliminate the need for inspection
on a mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price tag. Instead, minimize total
cost. Move toward a single supplier for any one item, on a long-term relationship of
loyalty and trust.
5. Improve constantly and forever the system of production and service, to improve
quality and productivity, and thus constantly decrease costs.
6. Institute training on the job.
7. Institute leadership. The aim of supervision should be to help people and machines and
gadgets to do a better job. Supervision of management is in need of overhaul, as well as
supervision of production workers.
8. Drive out fear, so that everyone may work effectively for the company (see Ch. 3).
9. Break down barriers between departments. People in research, design, sales, and
production must work as a team, to foresee problems of production and in use that may
be encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the work force asking for zero defects
and new levels of productivity. Such exhortations only create adversarial relationships,
as the bulk of the causes of low quality and low productivity belong to the system and
thus lie beyond the power of the work force.
Eliminate work standards (quotas) on the factory floor. Substitute leadership.
Eliminate management by objective. Eliminate management by numbers,
numerical goals. Substitute leadership.
11. Remove barriers that rob the hourly worker of his right to pride of workmanship. The
responsibility of supervisors must be changed from sheer numbers to quality.
12. Remove barriers that rob people in management and in engineering of their right to
pride of workmanship. This means, inter alia, abolishment of the annual or merit rating
and of management by objective.
13. Institute a vigorous program of education and self-improvement.
14. Put everybody in the company to work to accomplish the transformation. The
transformation is everybody's job.
Deming, based on his experience and approach to quality, developed a different definition of
Quality: "Quality is the Continuous Improvement of all processes." This definition more
clearly articulates the continually evolving nature of the perception of quality, and it is
especially useful for organizations that are struggling to get started improving
their quality. He popularized his approach, based on Shewhart's earlier work, as the PLAN-DO-CHECK-ACT cycle; it is often referred to as the Deming Cycle.4

1.3.5 Kaoru Ishikawa

Ishikawa was a colleague of Deming's in post-World War II Japan. He wanted to help
managers understand the sources of problems with products; this resulted in the development
of the Root Cause Analysis Process. In addition, Ishikawa was a pioneer in the use of the
Seven Quality Tools, and he developed and championed the implementation of the Quality
Circle.
The Root Cause Analysis Process is also known as the Ishikawa Diagram, the Fishbone
Diagram, and the Cause-and-Effect Diagram. These tools make it possible to identify all of
the roots (basic causes) in a retrospective approach, or, all of the potential effects (possible
outcomes) in a prospective approach. Ishikawa identified five (5) key areas which occur
repeatedly in either type of analysis:
1. People
2. Processes
3. Machines
4. Materials
5. Environment
Ishikawa identified the seven (7) basic quality tools that are fundamental to the process of
improving and managing quality:
1. Control Chart
2. Run Chart
3. Histogram
4. Scatter Diagram
5. Flow Chart
6. Pareto Chart
7. Cause and Effect Diagram

4. Deming, W. Edwards, Out of the Crisis, pp. 88-89.


1.3.6 Genichi Taguchi

Working in Japan in the same general time frame as Deming and Ishikawa, Taguchi was also
looking at how to improve manufacturing processes. He understood that in any process there
are many elements that can affect the ultimate outcome of the process for better or for worse.
He began to examine what those elements were and how they could be controlled more
effectively. The resulting approach is known as Robust Design or The Taguchi Method. He is
also known for his Loss Function, which is a mathematical approach to defining the decline in
quality as perceived by the customer.
The Taguchi method labels those elements that can negatively affect the product as "noise" in
the process. Engineers and managers work together to reduce the noise, thereby improving
both the process and the product(s). Taguchi considered not only those factors that arose
during the creation of the product, but also those that developed as a result of its use. As a part
of this process, he pioneered the use of the Orthogonal Array to display the impact of noise in
a cost-effective manner.

1.3.7 Philip Crosby

Crosby is representative of the second wave of quality pioneers, who built upon the
foundations laid by Shewhart, Juran and Deming. Capitalizing on the growing realization that
Japan was coming to dominate many markets with the savvy combination of quality and
continuous improvement, Crosby worked to popularize the concepts of quality in the western
world. Crosby also came from a manufacturing background and could translate many of the
quality concepts into pragmatic approaches that were easily understood by non-mathematicians.
In his popular book Quality is Free,5 Crosby articulates the Four Absolutes of Quality
Management:
1. Quality is defined as conformance to requirements, not as "goodness" or "elegance"
2. The system for causing quality is prevention, not appraisal
3. The performance standard must be Zero Defects, not "that's good enough"
4. The measurement of quality is the Price of Non-Conformance (i.e., money), not indices
Additionally, Crosby popularized the concept of the "cost of quality." This approach
identifies the four components of the total product cost as:
1. Product Cost: the amount of resources (time and money) required to build the product
one time, correctly
2. Product Appraisal Costs: the resources used to determine if the product has been built
correctly, such as inspections, reviews and testing
5. Philip Crosby, Quality is Free, McGraw-Hill, 1979



3. Product Failure Costs: the resources consumed to handle and correct problems
(defects) in the product, such as rework, loss of reputation, delivery delays and help
desks
4. Prevention Costs: those costs incurred to prevent defects from occurring, such as
process development and maintenance costs and training costs
Total Product Cost: the sum of all four of the elements above.
Crosby demonstrated that, to reduce Total Product Cost, the best approach is to increase
prevention costs, which can then make it possible to reduce appraisal and failure costs.

1.3.8 Tom DeMarco

DeMarco is another of the second wave of Quality pioneers, and like many of the originals,
began his career with Bell Telephone Laboratories. He is a prolific author, who has effectively
translated many of the quality lessons learned in the pure manufacturing environment into the
world of Information Technology. As such, he represents a unique blend of the two worlds.
His groundbreaking book, Peopleware,6 co-authored with Timothy Lister, takes the human
element identified by Juran and Herzberg and extrapolates it into the world of Information
Technology projects. This followed his more technical book, Controlling Software Projects:
Managing Schedule and Estimation.7

1.4 Basic Quality Definitions


Throughout the remainder of the Skill Categories, reference will be made to the basic quality
definitions below. Understanding each and how they relate to each other is essential.

1.4.1 Quality

"Fitness for Use"8 or "The Continuous Improvement of All Processes."9 Quality is subjective and
dependent upon the role of the assessor. Best practice is that quality is measured from the
point of view of the ultimate consumer of the product or service. Failure to measure from this
perspective will result in a gap between what the customer expects and what is actually
delivered.

6. DeMarco, Tom and Lister, Timothy, Peopleware, Dorset House, 1987.
7. DeMarco, Tom, Controlling Software Projects: Managing Schedule and Estimation, Prentice Hall, 1982.
8. Juran, Joseph M., Quality Control Handbook, 1951.
9. Deming, W. Edwards, Out of the Crisis, MIT/CAES, 1982.
Historically, many IT organizations have defined Quality as "meeting requirements." They
then have assumed (often with the consent and encouragement of their business partners) that
they know what the customer wants. This is a broken requirements gathering process and
consistently yields low quality products which fail to meet the business partner's or the
external customer's requirements.
Best practice also indicates that any quality assessment is a "point in time" view; it is dynamic
rather than static. What is perceived as quality will change over time with additional
information, new choices or more experience.

1.4.2 Quality Assurance (QA)

Quality Assurance is defined as those activities which are designed to prevent defects from
occurring. By preventing defects before they are created through process improvement and
other activities, Quality Assurance both improves product quality and reduces product costs.
One of four (4) components of product costs, quality assurance activities such as standards
development and training are prevention costs according to Crosby10.
An effective Quality Assurance function will be using process data gathered from Quality
Control activities to identify potential process improvements. They will support the effort to
develop and implement processes and procedures designed to reduce the likelihood that
defects will be created. This may also include developing and supporting process training,
tool acquisition and training, as well as providing facilitation and administrative skills to
various parts of the organization.
Within Information Technology there are often organizations whose primary function is
Testing. This is a Quality Control function, not Quality Assurance (see below).

1.4.3 Quality Control (QC)

Quality Control is defined as those activities designed to identify defects which have already
been created. As such, Quality Control is a much less cost effective method of improving
product quality than Quality Assurance, since defects detected must still be analyzed and
corrected. However, as a prudent organization will not assume that all the defects possible
have actually been prevented, a robust Quality Control function is still needed. Inspections,
reviews and testing are primary Quality Control activities in the Software Development
Lifecycle (SDLC). Quality Control activities are appraisal costs.
An effective set of Quality Control activities provides information about where and how
defects are created. The historical approach to QC is to assume that, as the majority of their
activities are testing, there is no reason to involve QC personnel early in the product life cycle.
More recent experience shows that introducing Quality Control as early in the product
development as possible is extremely cost effective. Defects caught early are the least
expensive to correct.

10. Philip Crosby, Quality is Free, McGraw-Hill, 1979

1.4.4 Quality Improvement (QI)

Quality Improvement is defined as a systematic approach to improving product quality,
typically by increasing the effectiveness of Quality Assurance activities based on an
analysis of the results of Quality Control activities. Effective Quality
Improvement activities often are the result of Root Cause and Pareto style analysis.
Improving processes enhances the products created by those processes. The application of the
Kaizen style of management (which will be discussed in Section 1.5.12 on page 1-29) and
quality improvement will yield small continuous upgrades to processes (the Evolutionary
Approach). Using other approaches, such as Six Sigma (which will be discussed in Section
1.5.16 on page 1-33), will have a much more Revolutionary effect on the organization and its
processes.
Regardless of the approach taken, it is essential to have established a baseline before the
improvement activities are undertaken so that it is possible to demonstrate the nature and
extent of the improvements made.

1.4.5 Quality Management (QM)

Quality Management is defined as an approach which uses decisions based on facts, data, and
logic, rather than emotion, schedule, and budget. Quality Management relies upon the
consistent generation of reliable information (data/measures/metrics/statistics) about how
processes are performing (management dashboard) and what the organization needs to
achieve (vision, mission, strategies, goals and objectives). This in turn requires that processes
exist which are robust and realistic and will support the work effort required.

1.5 Business Analysis and Management Tools for Quality
In order to effectively perform the tasks of Business Analysis, the business analyst must
have a wide range of tools available to address different types of situations. Many of the
most important are designed to organize data and information in ways that make it easier to
assimilate and understand. The tools discussed in the following sections should be well
understood by the CSBA.

1.5.1 Affinity Diagram

The Affinity Diagram is used to organize large quantities of suggestions, ideas and comments
into groupings for later analysis. It was developed by Kawakita Jiro and handles messy data
very effectively. Affinity Diagrams are often the next step after an initial Brainstorming
Session (covered in Skill Category Two).
In the simplest version of the Affinity Diagram, each of the items generated as a result of a
survey or Brainstorming Session are recorded on a Note Card or Post-It11 Note. The
participants then conduct a free-form session in silence in which items are accumulated into
what seem to be logical groupings to the participants. The session continues until the
movement of the items has stopped. At that point, each of the resulting groupings is given
a representative name, such as "Communication Problems," "Response Time Issues," or
"Inadequate Requirements."
If there has not been a preceding Brainstorming session, one can be conducted at the outset of
the meeting. Kawakita Jiro recommended that each member contribute 5-10 items, each of
which must meet the following criteria:
It is a statement of fact, not a judgment or evaluation
The fact is in some way related to the problem or situation under consideration
The fact need not be either a cause or a solution and may be tangentially related
Following the naming process, each of the items within a grouping is examined to determine
its relationship to the group name and to other items in the group (such as, "A and B are
components of C, which is one part of the topic described by the Group Name"). Relationships
within the group are recorded, as is the relationship of the groupings to each other. The
completed Affinity Diagram is then a springboard for further analysis, prioritization and
potential activity.

1.5.2 Baselining

The foundation of any improvement process includes a determination of what is the current
level of organizational performance in the target area. The current level of performance is
referred to as the baseline. There are three general steps in developing a baseline:
1. Identifying the sources of baseline data
2. Collecting the data
3. Analyzing the data

11. Post-It is a trademarked name of the 3M Corporation for their sticky note product.
The analysis activity is often one of the most difficult steps in process improvement.
Organizations that are concerned or embarrassed about their current level of performance
often either do not collect the necessary numbers, or do so in a way that makes them difficult
to access and understand. To document a level of performance that warrants the expenditure
of resources is often a painful step for organizations. The Business Analyst, who understands
that it is impossible to measure progress if there is not a defined starting point, may need to be
very creative in securing agreement to a baseline.
This activity is described in more detail in the Skill Category Three, Define Implement and
Improve Processes.

1.5.3 Benchmarking

Benchmarking is often an early activity in the organizational assessment process that initiates
a major quality/process improvement effort by an organization. Organizations seek to learn
how well others perform in many aspects of their business. Benchmarking activities often are
the precursor to major systems initiatives and, therefore, are of great importance to the
Business Analyst.
Within Information Technology, common benchmark issues include items such as the Percent
of the IT Budget devoted to New Development, Maintenance/Enhancements and Problem
Resolution. In the Development Lifecycle, benchmark issues may focus on the Percentage of
Defects identified in Requirements, Design, Code, Test and Production respectively.
Organizations seek to learn who is best and how their organization compares. They also
seek to learn how those results are being achieved.
Benchmarking can be as simple as gathering whatever published reports and documents
(Annual Reports, advertising materials, newspaper articles, etc.) are available. The primary
issue with this approach is that it is often lacking in the critical details needed to determine
whether the results are credible and applicable.
A more rigorous approach is to identify which organizations appear to be performing well
through either the sources above or through personal or industry contacts. Direct contact is
then made to see if the organization is willing to share the real data and what is behind it. This
often requires protracted negotiations and may involve signing of confidentiality agreements.
Winners of the major national quality awards (Canadian National Quality Award, Malcolm
Baldrige Award, National Quality Award (India), etc.) are generally required to provide
information about their results and how they were achieved. These can be an excellent source
of data.
Once performance information has been gathered, it is compared to the internal baseline
already developed. (It may be necessary to adjust the calculation of the baseline to ensure
direct comparability with external data.) This comparison provides information which
becomes part of the decision making process when determining which improvement activities
should be undertaken.

1.5.4 Cause-and-Effect Diagram

The Cause-and-Effect Diagram, Ishikawa Diagram, Root Cause Analysis Diagram and
the Fishbone Diagram all refer to the same basic process. The essential elements are the
identification of a specific problem, desired outcome, or issue, which is then clearly
articulated by the group. This becomes the head of the diagram. To the extent that it is
possible to do so, this statement should contain measurable data that defines the parameters of
the statement, for example: "In the last 6 projects completed by the Sales Team, 20% or more
of the Requirements were identified after the initiation of Integration Testing."
The group is then challenged to identify reasons why the stated problem or situation might be
occurring. In his research, Ishikawa identified that 5 areas occur fairly consistently:
1. People
2. Processes (Methods)
3. Material
4. Machines
5. Environment
When dealing with Information Technology projects, it is common practice to include a sixth
area that is IT Specific: Data. It is also increasingly common to add Communications.

Figure 1-1 Cause-and-Effect Diagram



The illustration in Figure 1-1 represents the basic Fishbone format. As individual items are
suggested, they are entered onto the appropriate fin or leg of the diagram as shown in Figure
1-2.

Figure 1-2 Cause-and-Effect Diagram with Initial Level of Detail

Once all of the "big ticket" items have been added to the diagram, they need to be
decomposed (broken down, one level at a time) into finer levels of detail. The decomposition
is complete when it reaches something outside the control of the organization (e.g., the weather)
or when a Root Cause has been identified. Ishikawa states that it may be necessary to ask
"WHY" five (5) times to arrive at a Root Cause. For small, contained issues only one or two
levels may be necessary.
When all of the Root Causes have been identified, it is then possible to begin the process of
analysis and prioritization.
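The decomposition described above maps naturally onto a simple tree. The sketch below is a hypothetical illustration (the causes shown are invented for the example, not taken from the CBOK): it models a fishbone as nested dictionaries and walks it to list the Root Causes, i.e., the leaves of each branch.

```python
# Hypothetical sketch: a fishbone diagram as nested dictionaries.
# Keys are causes; values are dictionaries of finer-grained causes.
# A cause with an empty dictionary is a leaf, i.e., a candidate Root Cause.

fishbone = {
    "People": {
        "Requirements identified late": {
            "Analysts not trained in elicitation": {},
        },
    },
    "Processes": {
        "No requirements inspection step": {},
    },
    "Communications": {
        "Sales Team not present at reviews": {},
    },
}

def root_causes(node, path=()):
    """Walk the tree and yield the path to each leaf (a Root Cause)."""
    for cause, children in node.items():
        if children:
            yield from root_causes(children, path + (cause,))
        else:
            yield path + (cause,)

for chain in root_causes(fishbone):
    print(" -> ".join(chain))
```

Each additional level of nesting corresponds to asking "WHY" one more time; a branch that bottoms out after one or two levels reflects a small, contained issue.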

1.5.5 Control Charts

Control charts are one of the seven key tools identified by Ishikawa for improving Quality.
Presenting data in this format can provide both the Business Analyst and the Business Partner
with useful insight into a problem or situation. Control Charts are especially useful in
understanding how systems and processes are performing over time.
Control Charts have the following characteristics:
A centerline. This usually represents a numeric average of either actual historical or
desired performance for the system or process.



Upper and Lower Boundaries represent the limits that define acceptable, or
"controlled," performance. These generally have a statistical base and are frequently
described in terms of the number of standard deviations (Sigma is the mathematical
symbol for a Standard Deviation, hence Six Sigma12) they represent. A
standard deviation is the square root of the average of the squared differences between
each instance and the population mean.
For example, if the mean is 20 and the first instance is 12, then 20-12=8, and 8 squared =
64. Repeat this process for each item in the distribution, add the squares up, divide by
the number of items, then calculate the square root; or use the formulas built into any
spreadsheet.
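The calculation above, and the control limits built from it, can be sketched in a few lines of Python. This is an illustrative sketch only: the baseline and period figures are invented, and three-sigma limits are used as a common control-chart convention.

```python
import math

# Invented baseline performance from 20 historical, in-control periods.
baseline = [455, 460, 448, 452, 461, 457, 449, 453, 458, 450,
            462, 447, 456, 451, 459, 454, 448, 455, 452, 457]

mean = sum(baseline) / len(baseline)

# Population standard deviation: square root of the average squared
# difference between each instance and the mean.
sigma = math.sqrt(sum((x - mean) ** 2 for x in baseline) / len(baseline))

# Three-sigma control limits, a common control-chart convention.
upper_limit = mean + 3 * sigma
lower_limit = mean - 3 * sigma

# New periods are compared against the baseline limits; points outside
# the limits warrant investigation.
new_periods = [470, 455, 452, 449, 460, 501, 453]
out_of_control = [x for x in new_periods if not lower_limit <= x <= upper_limit]
```

With this invented data the first and sixth new periods fall outside the limits, mirroring the situation described for Periods 1 and 6 in Figure 1-3.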
Performance over time is plotted on the chart. Plot points that fall outside the
acceptable or desired performance range are generally subject to further scrutiny.
In Figure 1-3, Periods 1 and 6 are outside the established control limits and would normally
require investigation and action to determine the cause(s) and address them. Control Charts
are occasionally referred to as Run Charts, Line Charts or Shewhart Charts. The
effective use of Control Charts for managing processes is the backbone of Statistical Process
Control (SPC). Statistical Process Control is a necessary tool for moving from CMMI13
Level 3 to Level 4. This will be discussed in more detail in Skill Category 3.

Figure 1-3 Standard Control Chart (performance for 10 periods plotted as a line against the average and the upper and lower control limits, on a scale of 0 to 1000)

12. Six Sigma is the trademarked name of Motorola's quality control program.
13. CMMI is a trademarked product of the Software Engineering Institute (SEI).


1.5.6 Graphical Representations

1.5.6.1 Pie Charts

Pie Charts are an excellent method for displaying data where part-to-whole relationships need
to be understood. The conceptual presentation is very simple and a large amount of information
can be conveyed very quickly. As the name implies, Pie Charts are round and have wedges, as
seen in Figure 1-4.
Figure 1-4 A Typical Pie Chart (20xx Revenue by Country: Australia 300, Hong Kong 120, Thailand 212, Japan 315, China 397, India 406)

1.5.6.2 Bar (Column) Charts

Bar or Column charts are often used to present the same kinds of data as pie charts, but bar
charts can include multiple series as opposed to the single series shown in Figure 1-4. In
Figure 1-5, data for three periods is presented, but the detail of each entry has been eliminated.
For analysis purposes, it is possible to determine the general trends for each country, but not
the precise numbers. This will be effective in some situations; in others, it may be necessary to
supplement the display with the actual data.
Figure 1-5 A Typical Chart with 3 Series of Data (Revenue Comparisons for Hong Kong, Thailand, China, India, Japan and Australia over years 20x1 through 20x3)


When data is displayed vertically, as it is in Figure 1-5, the chart is properly known as a
column chart. When the data is displayed horizontally, it is a bar chart. Both Pie and Bar/
Column Charts can be manipulated in a number of ways (3-D, pulled-out wedges, etc.), which
makes them very popular and flexible tools for the Business Analyst. These graphical
representations of data are easily accomplished with most spreadsheet packages.

1.5.7 Cost of Quality

Although the popular name for this concept is "the cost of quality," it is, in reality, the cost of
poor quality. For the Business Analyst, the cost to design and develop a new system, or
modify an existing one, is of key importance. The language of business is money. Therefore,
translating the impact of quality decisions into financial terms makes very good sense. It helps
to bridge the gaps in understanding which can occur when there is no other common basis.
Product Cost: The amount of resources (time and money) required to build the
product one time, correctly. In manufacturing environments, this represents the bulk
of the total costs, often as much as 95%. By contrast, in Information Technology, it
is common for this to represent as little as 30% of the final cost of a product. The
remaining funds expended are the cost of (poor) quality. Some of these costs are to
prevent errors from being created, while others are to catch and correct defects
already built.
Product Appraisal and Inspection Costs: The resources used to determine if the
product has been built correctly and to identify the defects that have been created.
Included in these costs would be Design Reviews, Code Inspections, Glass Box and
Black Box Testing, Test Automation, Usability Testing, Beta Testing and Pre-Release
Out-of-the-Box Testing.14 Pushing appraisal costs as early into the life cycle as
possible will reduce the total cost to find and fix defects. This has led to the
increased popularity of Inspecting Requirements, which is commonly cited as the
most effective cost reduction tool available.15

14. Kaner, Cem; Quality Cost Analysis: Risks and Benefits; www.kaner.com; 1996
Product Failure Costs: The resources consumed to handle and correct problems
(defects) in the product. Failure costs have historically been divided into two (2)
categories: Internal Failure Costs and External Failure Costs. Many organizations
focus only on External Failure Costs ("it's OK as long as the customer doesn't see
it"). This mentality creates focus on the external customer. However, this approach
ignores the very significant internal costs, which may be very important in the long
term.

Internal Failure Costs are activities and expenses incurred as a result of defects
both during and after product development. Examples of Internal Failure Costs
include redefinition of requirements, redesign of elements, tables, modules and
screens, algorithm fixes, regression testing, wasted business partner, analyst,
programmer, designer, architect and tester time, changes to training materials,
changes to user manual, delays in product delivery, impact to other projects,
help desk staffing,16 and missed business opportunities.

External Failure Costs cover a wide range of potential exposures for organizations, even if the software they develop is only used by their own employees.
Typical costs include customer dissatisfaction, product returns, lost sales, damaged reputation, reshipping of replacements, cost to develop and implement
workarounds, cost to maintain multiple versions as a result of new, unstable
releases, contract disputes, loss of goodwill and legal fees and penalties.

Prevention Costs: those costs incurred to prevent defects or poor quality from
occurring. Prevention costs include the time to develop, document, and implement
good standards and procedures, including time to train in how to use them
effectively, time spent in collection and analysis of defect data, time spent to
acquire and install appropriate tools and train staff in their use, and time spent
learning effective communications and interpersonal skills. Because they prevent
defects from being created, prevention costs are the most effective in actually
improving quality; but because those defects do not actually occur, it is often
difficult to demonstrate the return on investment directly.
Total Product Cost: this is the sum of all four of the elements above. Juran, Crosby
and Perry have demonstrated that small percentage increases in Prevention Costs
yield much larger percentage cost reductions in Failure Costs, thereby producing a
net reduction in Total Product Costs as shown in Figure 1-6 Cost of Quality.

15. Wiegers, Karl; Cosmic Truths About Software Testing; QAI Spring Conference 2006.
16. Help Desk Staffing is an expense organizations will incur because of the anticipation (based on past
performance) that problems will occur and support will be required. If the product is used by others
outside the organization, Help Desk Staffing is also an External Failure Cost.


Figure 1-6 Cost of Quality (two stacked-bar cost scenarios on a 0-100 scale, each with Product = 30 and Appraisal = 32: one with Prevention = 3 and Failure = 35, the other with Prevention = 4 and Failure = 31.5)
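The arithmetic behind Figure 1-6 can be expressed directly. The sketch below uses the chart's data labels; the pairing of the prevention and failure values is inferred from the text's argument that a small increase in prevention yields a larger drop in failure.

```python
def total_product_cost(product, appraisal, failure, prevention):
    """Total Product Cost is the sum of the four cost-of-quality components."""
    return product + appraisal + failure + prevention

# Before: low prevention spending, high failure costs.
before = total_product_cost(product=30, appraisal=32, failure=35.0, prevention=3)

# After: a one-point increase in prevention yields a 3.5-point drop in failure.
after = total_product_cost(product=30, appraisal=32, failure=31.5, prevention=4)

savings = before - after  # net reduction in Total Product Cost
```

With these figures the total falls from 100 to 97.5, a net saving of 2.5, which is the pattern Juran, Crosby and Perry describe.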

1.5.8 Earned Value

This approach compares information about the amount of work planned to what has actually
been completed. This is used to determine if cost, schedule and work accomplished are
progressing as planned. The comparisons are often expressed in equations such as:
BCWP - ACWP = CV
(Budgeted Cost for Work Performed minus the Actual Cost of Work Performed equals the
Cost Variance)
Value is earned as work is completed. This approach is very effective for the CSBA as it
allows the compilation of information across projects on a consistent and reliable basis.
According to many sources, two-thirds or more of all IT projects are over budget, behind
schedule, or both. This leads to massive outlays of unplanned resources to address the
problem or to avoid significant negative financial results for late delivery or delivery of a
defective product.
In addition to Schedule and Budget, it is essential to have a clearly defined and agreed upon
Scope Statement. Without this, neither the schedule nor the budget are meaningful. The Scope
Statement describes, in significant detail, the work to be accomplished.
The advantage of this approach is that it allows the early detection of slippage by using an
industry standard approach to:
1. Measure a project's actual progress
2. Forecast both project completion date and final cost



3. Track schedule and budget throughout the project life cycle
Measuring only scope, schedule or budget in isolation allows one to be maximized at the
expense of the others (for example, delay the delivery date and cut scope to reduce budget;
impact: work is finished but at a higher cost, or the date is met regardless of the cost, even if it
means cutting scope and reducing quality). Early detection of problems allows more
flexibility in developing effective responses.
Earned Value Analysis works most effectively when there is a consistent and realistic
approach to developing the project Work Breakdown Structures (WBS). This detailed effort is
essential to creating a solid estimate of the work to be done and to creating reasonable
benchmarks along the critical path. Failure to break work down to a sufficient level of
granularity will make it difficult to identify schedule issues in a timely fashion.
One issue that can arise with Earned Value reporting is that it does not have an integrated
measure of the quality of the work being performed. Unless other specific quality measures
are carefully designed and implemented, it will be possible to deliver a product on time and in
budget, but with significant defects.
The formula, BCWP - ACWP = CV is one example of many potential metrics, which can be
developed using the Earned Value Analysis; others include a comparison of Budgeted Cost
for Work Performed to Actual Cost of Work Performed for the Project to date, sometimes
referred to as the Cost Performance Index (CPI):
BCWP / ACWP = CPI
CPI should be greater than 1, indicating that 100% or less of the budget was consumed for
the work that has been completed. A CPI less than 1 indicates funds in excess of the budget
have been spent for the actual work completed.
Budgeted Cost for Work Performed to Budgeted Cost for Work Scheduled is referred to as the
Schedule Performance Index as shown in the formula below:
BCWP / BCWS = SPI
SPI should also be greater than 1, indicating that 100% or less of the time allowed has been
used to complete the scheduled work. When multiplied (SPI times CPI), the result should also
be greater than 1. The further below 1 the result is, the more difficult it will be to
bring the project in as planned, because more time has been spent doing less work than
anticipated.
                              Column 1          Column 2
BCWP                          $120,000          $150,000

Example 1: ACWP = $160,000    CPI = 0.75        CPI = 0.94
Example 2: ACWP = $110,000    CPI = 1.09        CPI = 1.36
Example 1: BCWS = $150,000    SPI = 0.80        SPI = 1.00
Example 2: BCWS = $120,000    SPI = 1.00        SPI = 1.25
Example 3: CSI = SPI x CPI    0.80 x 0.75 = 0.60
Example 4: CSI = SPI x CPI    1.00 x 1.09 = 1.09

In the examples above, the Budgeted Cost for Work Performed (BCWP) in the first column is
$120,000 and $150,000 in the second column. In Example 1 the Actual Cost of Work
Performed (ACWP) in each case is $160,000, which is $40,000 over the $120,000 budget in
the first column. When the BCWP is divided by the ACWP, the resulting Cost Performance
Index (CPI) is 0.75. This number, less than one, indicates the project has spent more money
than budgeted for work actually performed. For example, we estimated that to define 50
requirements would cost $10,000. We have defined 50 requirements, but at a cost of $12,000.
Still using data from the first column, when the Budgeted Cost of Work Performed (BCWP)
is divided by the Budgeted Cost of Work Scheduled (BCWS) of $150,000, the resulting
Schedule Performance Index (SPI) is 0.80. This shows that the project is significantly (20%)
behind schedule. The calculation of the Cost Schedule Index (CSI) using this data is shown in
Example 3. The result, 0.60, indicates that this is a project in serious difficulty from both a
schedule and a cost perspective. The further below 1.0 the CSI is, the more problems will be
encountered in attempting to recover.
In Example 2, still using a BCWP of $120,000, the Actual Cost (ACWP) is $110,000 and the
Scheduled Cost (BCWS) is also $120,000. The resulting CSI, shown in the first column of
Example 4, is 1.09. This project is currently in good shape in terms of both schedule and
budget.
Working in conjunction with the Project Manager, the Business Analyst will be able to use
these and other metrics, which can be developed using the Earned Value Analysis, to detect
and address signs of trouble early in the project. Many organizations find that once a project is
more than 10% complete, any budget or schedule overrun that exists at that point is almost
impossible to recover. At the 20% completion point, the CPI will not change by more than
10%. Assuming that the numbers shown in Example 1 were calculated at the 20% completion
point, the project is 20% over budget. A 10% variance would mean that the finished project
will be between 18% and 22% over budget.
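The index arithmetic above is a pair of simple ratios and their product; the following is a minimal sketch (using the first-column Example 1 figures from the text), not a full earned value tool:

```python
def earned_value_indices(bcwp, acwp, bcws):
    """Return (CPI, SPI, CSI) from earned value inputs.

    bcwp: Budgeted Cost of Work Performed (earned value)
    acwp: Actual Cost of Work Performed
    bcws: Budgeted Cost of Work Scheduled (planned value)
    """
    cpi = bcwp / acwp   # cost efficiency: below 1 means over budget
    spi = bcwp / bcws   # schedule efficiency: below 1 means behind schedule
    csi = cpi * spi     # Cost Schedule Index: combined health indicator
    return cpi, spi, csi

# Example 1, first column: over budget (CPI 0.75) and behind schedule (SPI 0.80)
cpi, spi, csi = earned_value_indices(bcwp=120_000, acwp=160_000, bcws=150_000)
print(f"CPI={cpi:.2f} SPI={spi:.2f} CSI={csi:.2f}")  # CPI=0.75 SPI=0.80 CSI=0.60
```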


1.5.9 Expected Value/Cost

Expected Value/Cost is a method for examining the relative potential costs or benefits of multiple options, based on both the anticipated dollar outcome (either positive or negative) and the probability that any particular outcome will occur. This is often a difficult discussion to conduct, particularly when there is one especially large cost or, more frequently, benefit involved. Decision makers, lacking an objective structure for evaluating the alternatives and drawing conclusions, often have difficulty turning their backs on a very large potential benefit, regardless of how unlikely it is to occur.
In Figure 1-7 a decision must be made in the following situation:
Should a prototype of the new airport security X-ray product be built? Project requirements
were poorly defined. As a result, there is the risk that the final product will not pass the user
acceptance test. However, a prototype also would substantially reduce the cost of rework for
failures at user acceptance test.
Cost to build the prototype: $98,000
Probability of passing user acceptance test:
    With prototype: 90%
    Without prototype: 20%
Cost of rework after user acceptance test:
    With prototype: $20,000
    Without prototype: $250,000

In an unstructured discussion, the cost to build the prototype may easily be more tangible to decision makers than the potential cost of the failure to build the prototype. There are two (2) main branches for this Decision Tree: build the prototype at an initial cost of $98,000, or do not build one at an initial cost of zero dollars. Off the top branch of the Decision Tree, there are two options with associated probabilities and costs: pass Acceptance Testing and incur no additional costs (90% probability), or fail Acceptance Testing but perform better, incurring a smaller additional rework cost of $20,000. There is a 10% probability of this occurring if the prototype is built; therefore the Expected Value of this outcome is $2,000 (the cost of the outcome times the probability that it will occur).
The total Expected Value of this Branch is as follows:


Figure 1-7 Decision Tree with Expected Values

$98,000 + $0 + $2,000 = $100,000


Using the same approach, the total Expected Value of the lower branch is as follows:
$0 + $0 + $200,000 = $200,000
Based on this analysis, the organization can expect to spend twice as much if they fail to build
the prototype. This example shows a fairly simple application of the expected value decision
making approach. Often decisions in real life are significantly more complex, with more than
two initial options and nested financial amounts and probabilities. Figure 1-8 shows a more
complex Decision Tree.
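The branch totals above reduce to a few lines of arithmetic; this is a minimal sketch of the expected-cost calculation, not a general decision-tree tool:

```python
def branch_expected_cost(initial_cost, outcomes):
    """Expected cost of a branch: the up-front cost plus the
    probability-weighted cost of each possible outcome."""
    return initial_cost + sum(p * cost for p, cost in outcomes)

# Build the prototype: $98,000 up front; 90% pass ($0), 10% fail ($20,000 rework)
with_proto = branch_expected_cost(98_000, [(0.90, 0), (0.10, 20_000)])

# No prototype: nothing up front; 20% pass ($0), 80% fail ($250,000 rework)
without_proto = branch_expected_cost(0, [(0.20, 0), (0.80, 250_000)])

print(f"${with_proto:,.0f} vs ${without_proto:,.0f}")  # $100,000 vs $200,000
```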


Figure 1-8 Decision Tree and Expected Value for Multiple Outcomes

Note that in Figure 1-8 all 6 final branches have the same number of options. This is not
uncommon; often the final options and their costs do not change, but the associated
probabilities do change depending upon the preceding decisions. In this example, Option C
significantly decreases the probability that a catastrophic failure will occur, while
simultaneously increasing the probability of passing.

1.5.10 Flow Chart


A traditional tool of systems developers, flow charting is an excellent method for reducing
ambiguity in the relationships among objects, information, people and processes. Because of
the frequency of its use, flow charting symbols have been standardized and are included in
virtually all graphics and word processing products. These tools will handle simple charts. For
more complex charts and relationships, there are more specialized products available.


Figure 1-9 Simple Generic Flow Chart


Specialized flow charts are used for specific issues in software development: Data Flow Diagrams trace the movement of information through a process; Business Process Maps define all of the components of a specific activity, whether automated or not; State Transition Diagrams identify how and when transformations occur; Entity Relationship Diagrams identify who and what are involved in a process and how they interact.

1.5.11 Force Field Analysis


Developed by Dr. Kurt Lewin17, Force Field Analysis is a structured approach for identifying
and evaluating the forces that will promote, encourage and support change and those that will
resist change. He referred to these as driving forces and restraining forces. When these two
forces are balanced, the status quo is maintained; only when the driving forces are stronger
than the restraining forces, will change occur. Lewin created a visual representation of these
forces, which allows a more intuitive interpretation of the forces.
The desired change is identified at the top of the analysis. Driving forces are displayed on the
left of the page and the corollary resisting forces are shown to their right. This presentation
allows the analyst to identify more clearly how strong the obstacles to change are and what
assets exist to help. In this approach, arrows are used to demonstrate both the direction and the strength of the force; the shorter the arrow is, the weaker the force. Conversely, the stronger the force is, the longer the arrow.

17. Lewin, K. (1948) Resolving Social Conflicts; Selected Papers on Group Dynamics. Gertrude W. Lewin (ed.). New York: Harper & Row, 1948.

Desired Change: Implement Inspections for Requirements

Figure 1-10 Simple Force Field Analysis


For the Business Analyst, completing the Force Field Analysis will provide a detailed insight
into what might be involved in any potential project or process change. It requires the Analyst
to gather information about:
What is the current situation?
What is the problem (gap) that the change is intended to address?
Who are the key players?
What is their position, and can that position be changed?
What are the costs and benefits associated with the proposed change (not just
monetary but also in terms of power and politics)?
Notice in Figure 1-10 that for some Driving Forces, there is more than one Resisting Force.
This is common, and the total weight of the two may be greater than the driving force if action
is not taken. With this information in hand, the Business Analyst can develop a strategy to
address the issues of those who will resist change and maximize the benefits perceived by
those in favor. Alternatively, it may also help to determine that the current situation has too
many resisting forces and that this change should be deferred.
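One common way to work with the completed analysis is to score each force and compare the two sides. The force names and weights below are hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical force scores (1 = weak ... 5 = strong), illustrative only
driving = {
    "Recent costly requirements defects": 5,
    "Management support for quality initiative": 4,
    "Team frustration with late rework": 3,
}
restraining = {
    "Inspections take reviewer time": 4,
    "Schedule pressure on current release": 4,
    "No inspection training yet": 3,
}

balance = sum(driving.values()) - sum(restraining.values())
verdict = "driving forces prevail" if balance > 0 else "restraining forces prevail"
print(f"Balance = {balance:+d}: {verdict}")  # Balance = +1: driving forces prevail
```

A narrow balance like this one suggests either strengthening a driving force or addressing a restraining force before proceeding.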
1.5.12 Histogram


This chart is one of the key tools identified by Ishikawa as essential for improving process quality. It is a special-purpose Bar Chart for recording process information. It can provide the following information to the analyst:
Mean - The arithmetic average of all values created by the process
Minimum - The smallest of all values created by the process
Maximum - The largest of all values created by the process
Standard Deviation - A measure of how closely grouped the values created by the process are (density or tightness). Standard Deviations are measured from the Mean. For a normally distributed process, one Standard Deviation includes about 68% of all of the items in a class, 34% on either side of the Mean; two (2) Standard Deviations include about 95.5% of all the values, 47.8% on either side of the Mean.
Number of Classes - Classes represent natural subgroups within the values produced by the process. Typical processes will have more than 4 and fewer than 10 classes. This determines the number of bars in the histogram.
Class Width - The range of values within an individual class. This is usually a standard amount for all of the bars; occasionally it will be standard for the central distribution, but larger in the tails. For example, the columns representing the classes in the first two standard deviations may be measured in hundreds, while those in the third and fourth are measured in thousands.
Skewness - Most processes will produce an even distribution on either side of the arithmetic mean. This produces a symmetrical histogram (the traditional bell curve is a symmetrical distribution). When the mean does not fall close to the center of the classes, the distribution is said to be skewed. The further from the central class(es) the mean falls, the more skewed the distribution.
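The 68% and 95.5% coverage figures quoted above hold for a normally distributed process; they can be verified with the standard error function:

```python
import math

# Fraction of a normal distribution lying within k standard
# deviations of the mean: P(|Z| <= k) = erf(k / sqrt(2))
for k in (1, 2, 3):
    coverage = math.erf(k / math.sqrt(2))
    print(f"Within {k} standard deviation(s): {coverage:.2%}")
# Within 1 standard deviation(s): 68.27%
# Within 2 standard deviation(s): 95.45%
# Within 3 standard deviation(s): 99.73%
```

The two-standard-deviation figure, 95.45%, is the 95.5% quoted above.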
Figure 1-11 Bar Chart with Skewness (Trouble Calls During Acceptance Test and Implementation: weekly call counts from Wk Minus 3 through Wk Plus 3, y-axis 0-250; Avg = 116, STD = 68)



Figure 1-12 is a fairly typical histogram with no apparent skewness, while representing a
single iteration of a process. There are 7 Classes and the Class Width is 1 Week. While this
information may have some use, it becomes much more valuable when placed in the context
of multiple iterations of the process. If the Business Analyst knew from previous
implementations that the Mean for similar sized projects was 56, it would indicate that this
project had issues that need to be addressed. Alternatively if the Mean was 95, it would
indicate that this project had performed better than average. In either case, the Analyst would
want to understand what happened and make use of Lessons Learned.

Figure 1-12 Histogram with Even Distribution (Trouble Calls During Acceptance Test and Implementation: seven weekly classes from Wk Minus 3 through Wk Plus 3, with values 27, 44, 58, 72, 97, 104 and 122 calls; Mean = 74.9, STD Dev = 34.4)
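The Mean and Standard Deviation reported in Figure 1-12 can be reproduced from the seven weekly class values (assuming the sample standard deviation, the usual choice for a small set of observations):

```python
import statistics

# Weekly trouble-call counts from Figure 1-12 (order is not significant here)
calls = [27, 44, 58, 72, 97, 104, 122]

mean = statistics.mean(calls)      # arithmetic average of all values
std_dev = statistics.stdev(calls)  # sample standard deviation

print(f"Mean = {mean:.1f}, Std Dev = {std_dev:.1f}")  # Mean = 74.9, Std Dev = 34.4
```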

1.5.13 Kaizen
The Japanese strategy of small, continuous improvement, this approach emphasizes process-oriented thinking rather than results/product-oriented thinking. Kaizen is the purview of staff,
line and middle management, the highest level of the organization is rarely involved in small
improvements. Top Management is responsible for major innovation, Middle Management is
responsible for ensuring that small improvements are being made on a daily basis, and staff is
responsible for maintaining the improvements that have been made and for suggesting new
ones.
This approach focuses on creating the right environment and individual attitudes for the
continuous improvement of all processes and features the following concepts:
Something in the organization should be improved every day
All management actions should ultimately lead to increased customer satisfaction



Process-oriented thinking and actions
Quality, not profit, is first
Defects are a treasure. Finding defects is encouraged and rewarded
Problem solving is cross-functional and systemic, not ad hoc
For the Business Analyst, using Kaizen as a personal and project philosophy will result in an
emphasis on the kind of continuous learning and process improvements that create
outstanding performance.

1.5.14 Pareto Principle


Vilfredo Pareto, an economist, gave his name to a phenomenon that had been observed by many others in the past. The Pareto Principle states that the distribution of critical factors is
not random or uniform but follows a pattern. There are the critical few, approximately 20% of
the distribution, and the trivial many, generally about 80% of the distribution.
In Information Technology this is reflected in situations such as: 80% of the time in systems development is often focused on about 20% of the functionality, and 80% of the problems arise from 20% of the product. In the larger business context, most organizations see that 20% of their customers generate about 80% of the revenue.
For the Business Analyst, this Principle provides an excellent tool for allocating attention and
resources. In the frenzied world of Information Technology it is not uncommon to see the
urgent drive out the important. This results in wasting resources on trivial, current problems
when there are more important issues to be addressed. Using the Pareto Principle will provide
focus and help the project move toward a more successful conclusion.
Figure 1-13 is a simple Pareto Chart displaying both the quantitative data on the left-side y-axis and the percentage data on the right-side, secondary y-axis. The cumulative total of the data is displayed by the line extending from the first data point on the left to the 100% mark on the right.


Figure 1-13 Simple Pareto Chart with Data Count and Percentages (Defects Identified by Phase: defect counts on the left y-axis, 0-1400; cumulative percentage on the right, secondary y-axis; phases: Requirements, Design, Coding, Testing, Implementation, Production)
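The cumulative-percentage line on a Pareto chart is simple to compute from raw counts. The defect counts below are hypothetical, chosen only to show the mechanics:

```python
# Hypothetical defect counts by phase (illustrative only)
defects = {"Requirements": 620, "Design": 310, "Coding": 180,
           "Testing": 90, "Implementation": 40, "Production": 20}

total = sum(defects.values())
cumulative = 0
# Sort descending by count, then accumulate the running percentage
for phase, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{phase:<15} {count:>5} {cumulative / total:>7.1%}")
```

With these figures, the first two phases account for roughly 74% of all defects, illustrating the "critical few."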

1.5.15 Relationship Diagram, Entity Relationship Diagram


This tool is used to understand how various components interact under specific circumstances. It allows the analyst to look at one-to-one, one-to-many, and many-to-one relationships. Because many-to-many relationships are often the source of logic errors if not carefully defined, the Relationship Diagram requires that many-to-many relationships be broken down into one of the other formats using an associative entity. This removes the potential ambiguity in these relationships. Entity Relationship Diagrams are examined in more detail in Skill Category Five, Business Requirements.

1.5.16 Scatter Diagram


This is another of the basic tools identified by Ishikawa as essential to managing and improving process quality. It allows the analyst to examine two variables and determine the extent and nature of their relationship. Typical scatter diagrams use a limited number of data points to reduce clutter and improve comprehension. They may be used to examine the change in process results over time or similar situations.


Figure 1-14 Scatter Chart for Inspection and Testing Time By Project (x-axis: Hours Spent in Requirements Inspections, 0-60; y-axis: Hours Spent in Integration, Regression and Acceptance Testing, 0-600)
In Figure 1-14 the analysis looks at the relationship between hours spent, by project, on Inspecting Requirements versus Testing activities for a group of similarly sized projects. It is not uncommon to see several such charts, each looking at a different factor to determine which has the greatest impact on the factor held constant. For example, Figure 1-15 is an alternative chart to this one, looking at the experience level of the testers compared to the hours spent in Testing.

Figure 1-15 Scatter Chart Experience Level of Testers and Hours Spent (x-axis: Years of Testing Experience, 0-10; y-axis: Hours Spent in Integration, Regression and Acceptance Testing, 0-600)

In Figure 1-15, there is no clear correlation in this analysis of similarly sized projects. This type of analysis might be useful in determining whether the most effective method of reducing the testing effort is to hire more experienced testers or to implement Inspection of Requirements.
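The visual impression given by a scatter chart can be quantified with Pearson's correlation coefficient; the per-project figures below are hypothetical, used only to show the calculation:

```python
# Hypothetical per-project data (illustrative only)
inspection_hours = [5, 12, 20, 28, 35, 44, 52]
testing_hours = [560, 480, 390, 330, 250, 180, 120]

n = len(inspection_hours)
mean_x = sum(inspection_hours) / n
mean_y = sum(testing_hours) / n
cov = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(inspection_hours, testing_hours))
var_x = sum((x - mean_x) ** 2 for x in inspection_hours)
var_y = sum((y - mean_y) ** 2 for y in testing_hours)

r = cov / (var_x * var_y) ** 0.5
print(f"Pearson r = {r:.2f}")  # close to -1: more inspection, less testing time
```

A value near -1 or +1 indicates a strong linear relationship; a value near 0 (as in Figure 1-15) indicates no clear correlation.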


1.5.17 Six Sigma18


Introduced by Motorola in the 1980s, Six Sigma refers to a level of quality in which the defect rate falls outside 6 Standard Deviations from the statistical mean. Technically, this translates to about 2 defects per billion; but because of the tendency of the process mean to drift over time (conventionally modeled as a shift of 1.5 Standard Deviations), Motorola established a quality level of 3.4 errors per million. It is this number that is most commonly seen in reference to Six Sigma Quality.
Although originally introduced into the high-volume manufacturing world, the quality
concepts have been adapted to the development of software. Often organizations that are
seeking to achieve ISO (International Organization for Standardization) Certification for their software
products will use the Six Sigma practices to move them forward.
The keys to implementation of Six Sigma are a management commitment to a multi-year
(often as long as 5 years) process for implementing the necessary processes; the support of
trained leaders (Black Belts) for driving and supporting the transition and a commitment to
use process improvement to drive out variability.
A more detailed consideration of Six Sigma concepts is contained in Skill Category Seven,
Software Development Processes.
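Both headline numbers, about 2 per billion for a pure 6-sigma process and 3.4 per million under the conventional 1.5-Standard-Deviation drift of the process mean, can be reproduced from the normal distribution; a quick sketch:

```python
import math

def normal_tail(z):
    """Upper-tail probability P(Z > z) for a standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Pure +/-6 sigma, no drift: roughly 2 defects per billion
print(f"{2 * normal_tail(6.0) * 1e9:.1f} per billion")  # 2.0 per billion

# With a 1.5-sigma drift of the mean, the dominant tail sits at
# 6 - 1.5 = 4.5 sigma: the familiar 3.4 defects per million
print(f"{normal_tail(4.5) * 1e6:.1f} per million")      # 3.4 per million
```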

1.6 Summary
In this Skill Category we have examined the evolution of quality concepts and practices
around the world. By understanding the work of quality pioneers, the Business Analyst is
better positioned to provide the strong link needed between the business community and
Information Technology (IT).
Skill Category 1 also provides an introduction to the basic tools of implementing quality
processes and procedures. Many of these tools will appear in later Skill Categories; where
their application to specific needs will be addressed.
Armed with a good knowledge of the basic concepts and tools of quality, the CSBA can
approach project needs with greater knowledge and skills. In the next Skill Category,
Management and Communication, we will examine the role of management and
communication in the functions of the CSBA.

18. Six Sigma, Motorola Journal 1986.


Skill Category 2
Management and Communication Skills
The most important prerequisites for successful implementation of any major quality initiative
are leadership and commitment from executive management. Management must create a work
environment supportive of quality initiatives. It is management's responsibility to establish strategic objectives and build an infrastructure that is strategically aligned to those objectives.
This category will cover the management processes used to establish the foundation of a
quality-managed environment, as well as commitment, new behaviors, building the
infrastructure, techniques, approaches and communications.

2.1 Leadership and Management Concepts


Management has existed to organize human activities, from simple to complex, since the days
of the Pharaohs or even earlier. There were very few books written about management or
leadership until the Eighteenth Century, and those that were written generally focused on
military organizations. The operating theories of management that developed in the early days
of the Industrial Revolution were generally reflective of the attitudes of the day: in a class
society, the upper class had complete control and authority over the activities of those in the
lower class. Upper Class individuals were intelligent, educated, energetic and entitled, or
so the theory went. The lower class was characterized as having the opposite attributes.
Organizations of all types were hierarchical in nature. At the top were a very few individuals,
not infrequently a single individual, who exercised all of the power and control. The
Ownership and Management were often synonymous. Rensis Likert1 in his research captioned
this management style as exploitive-authoritarian and defined it as follows:
1. Likert, Rensis; The Human Organization, Its Management and Value, McGraw-Hill, 1967



where decisions are imposed on subordinates, where motivation is characterized by
threats, where high levels of management have great responsibilities but lower levels
have virtually none, where there is very little communication and no joint teamwork.
Likert also identified a second, somewhat less controlling style which he captioned
benevolent-authoritarian:
where leadership is by a condescending form of master-servant trust, where
motivation is mainly by rewards, where managerial personnel feel responsibility but
lower levels do not, where there is little communication and relatively little
teamwork2
Within context, these styles appeared to work fairly well, especially for those at the top.
As the Nineteenth Century gave way to the Twentieth, massive social changes were underway throughout the world. The working class was becoming better educated in much of the
world, and the growth of modern transportation systems made it easier for workers to relocate
in search of better working conditions. The formerly unquestioned assumptions about how to
manage effectively were challenged by a series of researchers and writers who became
interested in productivity and motivation.
Frederick Taylor, the father of Scientific Management, developed a management approach
that identified four fundamental and interrelated principles:
1. Management is a science. The solutions to the problems of business management could
be discovered by the application of the scientific method of experimentation and
observation. This will establish the one correct way to perform any task.
2. The selection of workers is a science. The first-class worker is the worker who is best suited for the job. It is management's job to determine which worker is best suited for which job.
3. Workers are to be trained and developed. Once the correct worker is selected, they
need to be trained in the one right way to perform their tasks. Managers need to
ensure that as the tasks evolve, workers are updated on the new procedures.
4. Scientific management is collaboration between management and the worker.3
Managers plan, develop processes, schedule and train; workers execute the way they
have been trained, according to the plan.
Unlike the US, in France there was enormous emphasis on the way work was administered.
This was grounded in the Bureaus which had been established for the administration of the
French government. Henri Fayol4, an engineer working for a mining company, developed a
list of fourteen principles successful organizations ought to follow. They include:
Division of work - Work should be specialized and like jobs should work together
2. Ibid.
3. This is the origin of the most common definition of management as the accomplishment of predetermined objectives through others.
4. Henri Fayol, Theory of Administration, www.mgmtguru.com, The History of Management, 2006.



Authority - Delegated persons ought to have the right to give instructions and
expect that they will be followed
Unity of command and direction - Workers should receive direction from only one
person and that direction should be consistent across the organization
Subordination of individual interests - By focusing on a single organizational
objective, internal conflict will be minimized
Stability of personnel - Turnover is disruptive and expensive; organizations do not
want to lose the knowledge of experienced workers
The USA focused on how to improve productivity. The best known of the research activities took place at the Hawthorne Works of the Western Electric Company, outside Chicago, between 1924 and 1932. The work was conducted by a team of researchers, but the analysis and reporting were done by Elton Mayo, whose name is most closely associated with these experiments. In what is termed The Hawthorne Effect, Mayo proposed that the true way to improve productivity was to pay attention to the workers and to recognize that the workplace exists as a social construct. His research led to the concept of talking to workers about what was important to them. It also became an important foundation for the work done by Abraham Maslow.
Maslow conducted his research during the early years of World War II. He proposed that there were 5 needs felt by all individuals, called the basic needs. He arranged these needs in a sequence which represented the order in which they are felt and fulfilled by individuals. Each need must be satisfied to a certain extent before the next need is felt, and only rewards based on that need will be recognized. This he referred to as The Hierarchy of Needs.5 The 5 needs he identified are:
1. Physiological - Hunger, thirst, sleep. The basic needs to survive
2. Safety - Freedom from physical danger and deprivation
3. Love - The need to be part of a group, to give and receive affection
4. Self-Esteem - Reputation, the need to be recognized within the group
5. Self-Actualization - The desire to improve, to create, to find job satisfaction
This theory, for the first time, explained why the rewards being offered by management to
workers were not producing the desired levels of effort and results. A few organizations
explored how to change the relationship between management and workers, but most had not
heard of Maslow and his work.
By contrast, at the end of World War II, the fundamentals of assembly line processing (Taylor and Gilbreth) and the administrative / hierarchical management concepts (Fayol) were
seen in every kind of organization in many countries. The post war demand for goods of all
kinds gave renewed emphasis to the quest to determine how to become more productive. In
the West, the underlying assumption, based on the theories of the past, was that it was
5. Maslow, Abraham; Motivation and Personality, published in 1954 (second edition 1970)



necessary for the workers to become more highly motivated, while in Japan, Deming was
laying the foundations for the Quality Revolution.
In 1960, Douglas McGregor published his theory about how people are managed6. In this
work he identified two vastly different management styles, which have their roots in the belief
systems of managers (and to some extent workers):
Theory X holds that people are basically lazy and will avoid work (or learning) if it is
at all possible; because of this, most people must be coerced, threatened, etc. to get
them to put forth the effort needed to achieve the results the organization desires.
Because most people desire security above all else (according to those who agree with
Theory X) they wish to avoid responsibility and prefer to be directed.
Theory Y holds that working is as normal for people as playing, and that people do not
need to be threatened to get them to work; they need to identify with the goals and
objectives of the work. The rewards they crave are at the high end of Maslow's scale,
not the low end. People have enormous capacity for creativity and learning, but little
of that capacity is utilized by most organizations.
This then opened the door for the two other styles of management which Likert7 described:
Consultative system - where leadership is by superiors who have substantial, but not
complete trust in subordinates, where motivation is by rewards and some involvement,
where a high proportion of personnel, especially those at the higher levels feel
responsibility for achieving organization goals, where there is some communication
(both vertical and horizontal) and a moderate amount of teamwork.
Participative-group system - which is the optimum solution (according to Likert)
where leadership is by superiors who have complete confidence in their subordinates,
where motivation is by economic rewards based on goals set in participation where
personnel at all levels feel real responsibility for the organizational goals, where there
is much communication, and a substantial amount of teamwork.
Despite the fact that these studies were published in the late 1960s, organizations continued
to operate under the older hierarchical / authoritarian models, and continued to focus on
ineffective or counterproductive management strategies. By the mid 1990s this problem was
becoming acute due to 3 major influences8 according to Feigenbaum:
1. The formation of an increasingly competitive global market
2. The continuing change in customer values
3. The trend toward affordable high quality products

6. McGregor, Douglas, The Human Side of Enterprise; 1960.


7. Op. cit Likert
8. Feigenbaum, Armand, How Total Quality Management Counters Three Forces of International
Competitiveness; National Productivity Review, Summer 1994.



It is no longer enough to provide a quality product or service, according to Feigenbaum,9 it must also be affordable. Cutting costs to improve productivity without measures to maintain
or improve quality will be counter-productive. The old model for management, which focused
on centralized command and control, limited the ability to provide creative input to a select
few, and focused on people as the source of problems and defects, never produced the
organizational synergy needed.

2.2 Quality Management


An effective Quality Management approach represents the successful integration of the
concepts learned through the work of Juran, Deming, Crosby and others on the quality side
with those learned from Mayo, Maslow, Drucker and McGregor. The following are the major
characteristics of Quality Management:
Manage by Process
Manage with Facts
Manage toward Results
Focus on the Customer
Continuous Improvement

2.2.1 Manage by Process

This is a deceptively simple statement. Most organizations have a large number of processes. They use those processes for accomplishing their work. They assume they are in fact managing by process. Looking a little more closely at these organizations, however, the processes have innumerable exceptions, workarounds and loopholes. In this situation the organization is in fact managing people and schedules, not processes.
To manage by process, the organization must have substantial, well-defined and fully implemented processes that describe how work is done. People are trained in those processes. Those processes are controlled; the results each process produces are known and documented.
flows, collateral processes for handling exceptions can be developed and documented. This
accommodates those items which must, for one reason or another be handled as exceptions.)
Data about what the process is capable of is produced and collected in a systematic fashion.
This data is at the detail level of the processes and provides great insight into how the process

9. Ibid.



is actually performing over time. Concepts such as Process Capability are possible at this
level.
When things do go wrong in organizations which do not manage by process (and they always do), the question is, "Whose fault is this?" The on-going blame game is typical of a lower maturity level organization and significantly impedes the willingness of individuals to take risks.
If there is a problem, organizations which manage by process examine that process to see
how that problem or defect could have been created. This process-oriented thinking allows the
full use of the Kaizen principles of turning each defect into an opportunity for improvement.
For the Business Analyst, it is important to maintain context about which group of people
constitute senior and executive management. If the processes which need to be managed are
IT processes, typically the management to consider is IT management. If they are business
processes, then it will be necessary to consider the business management.
Occasionally there may be process management issues which will involve other parts of the
organization, in which case the management to be considered needs to encompass all of the
players. This happens most frequently during the early stages of establishing Management By
Process, when the traditional ways of doing business are re-examined. In the case of cross
functional processes, it may be necessary to involve both sets of managers. Cross functional
processes are fairly common at a high level, but at the actual task level, tasks are concluded
within a single organizational area.
Often the old processes required significantly less input from the business community (and
produced lower quality). Although desirous of the improved quality, many business unit
managers are initially reluctant to commit additional resources to processes. This is best
addressed by providing a firm grounding in Cost of Quality Concepts and the SEI Capability
Maturity Model.

2.2.2 Manage with Facts

At every level, for every decision, people want to have good information as a basis for their
decisions. Lacking information, the choices are:
Do what we've always done
Don't do what we did last time; it didn't work
Go with your instinct
Do whatever the boss decides

None of these gives the decision maker a secure feeling that the best possible decision is being
made. The results are decisions to which there is at best a half-hearted commitment and a real
lack of confidence in the outcome.


Management and Communication Skills


When organizations manage by process, one of the things they get is data: the raw numbers
generated by the measurement activities associated with controlling each process. That data can
then be turned into information useful for making decisions. For example:
The average regression testing activity yields 1 major and 4 minor defects per 10,000
lines of code. On average it takes 15 working days to return a major defect to
regression testing and 4 days for minor defects. Our residual rate (the percentage of
errors recurring after regression) is 2%, or one in 50. We estimate the new
enhancement will have 20,000 lines of code. How many regression cycles should we
include in the project plan?
Organizations will make different decisions based on how willing they are to assume various
risks. If this decision is made often enough, the organization will build a track record of how
these facts correlate to the regression cycles required and establish standards or guidelines (if
it is a new team or new technology, round up; otherwise round down).
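The arithmetic behind the regression example above can be sketched as a short calculation. This is an illustrative model only, not a formula prescribed by the CBOK: it assumes defect counts scale linearly with lines of code and that the residual rate applies uniformly after each regression cycle.

```python
# Illustrative sketch of the regression-cycle estimate discussed above.
# Assumptions (not from the CBOK itself): defects scale linearly with
# lines of code, and the residual rate applies uniformly per cycle.

def estimate_cycles(loc, defects_per_10k_loc=5, residual_rate=0.02,
                    threshold=1.0):
    """Cycles needed until expected surviving defects drop below threshold."""
    expected_defects = defects_per_10k_loc * loc / 10_000
    cycles = 0
    while expected_defects >= threshold:
        cycles += 1
        expected_defects *= residual_rate  # 1 in 50 defects recurs
    return cycles

# 20,000-line enhancement, 1 major + 4 minor defects per 10,000 LOC
print(estimate_cycles(20_000))  # prints 1
```

With the rates given in the text, 20,000 lines yields an expected 10 defects, of which only 2% (0.2 of a defect) survive one cycle, so the model says a single regression cycle suffices.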
Lacking data, decisions are often made on intangible and emotional issues. The inclusion of
feelings and emotions in the decision making process is not all bad, but it should be placed in
the context of available data. ("I have a really bad feeling about this project. I think we may
have missed something in the Requirements. Even though the numbers say 1 cycle, let's go
with 1.5 to allow some slack.")
In the US, the Malcolm Baldrige National Quality Award (MBNQA) Criteria are explicit about
management by fact:
Modern businesses depend upon measurement and analysis of performance.
Measurements must derive from the company's strategy and provide critical data and
information about key processes, outputs and results. Data and information needed for
performance measurement and improvement are of many types, including: customer,
product and service performance, operations, market and competitive comparisons,
supplier, employee-related, and cost and financial. Analysis entails using data to
determine trends, projections and cause and effect that might not be evident without
analysis. Data and analysis support a variety of company purposes, such as planning,
reviewing company performance, improving operations and comparing company
performance with competitors or best practice benchmarks.10
For the Business Analyst, being able to prepare and present project and process
recommendations based upon well documented and organizationally accepted data is an
enormous advantage. Removing the fuzzy logic and personal preferences from a situation
allows an effective business case to be developed and presented with a greater probability of
success. It is also important to be able to measure the process effectiveness and efficiency. Any
process which maximizes one at the expense of the other will create long term problems for
the organization.

10. NIST, Baldrige Criteria, 1997


2.2.3 Manage Toward Results

Organizations which focus on the results they need to achieve are far more likely to actually
achieve them. To do this, everyone needs to be clear about what the target is and the processes
that will be used to achieve it. The Leader needs to have clearly articulated the vision. This
makes it possible for the Manager to have selected the right people for the task. Together the
Manager and the Leader create a supportive environment where the staff feels confident their
efforts will be rewarded. This in turn allows the staff to focus on the work to be accomplished,
rather than be distracted by dissatisfiers.11
Because of the organization's clear focus on the objective, it is possible to develop the strategic
measures needed to track progress toward that goal. Unlike the detailed data collected from
process control and execution, Strategic Measures are focused on the big picture. In the
Business Skill Category there is more information on developing and tracking Strategic
Measures.
One approach which has drawn considerable popular attention is the Balanced Scorecard,
developed by Drs. Robert Kaplan and David Norton.12 Organizations historically found it
easiest to measure financial information. Because it was easiest to do and because it was
desired by capital markets, financial measures became the de facto standard. Kaplan and
Norton recognized these measures left out a lot of important information about any activity
taking place in the organization. To balance this they recommended adding the following to
financial measures:
Learning and Growth Perspective
Business Process Perspective
Customer Perspective
This approach emphasizes the need for the feedback loop in reporting processes to ensure
progress toward goals. Measuring the Business Process and Customer Perspectives is
more obvious than the Learning and Growth area. Measuring the Learning and Growth area
means looking at more than training hours per employee; it includes things like the
development and use of effective mentoring schemes.
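One way to picture this is to represent each perspective's measures as simple records carrying a baseline, a target and a current value. The sketch below is illustrative only: the four perspective names follow Kaplan and Norton, but every metric and number in it is an invented sample, not anything specified by the CBOK.

```python
# Hedged sketch: a Balanced Scorecard as data, one sample measure per
# perspective. Perspective names follow Kaplan and Norton; the individual
# metrics and figures are invented for illustration.

from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        gap = self.target - self.baseline
        return 0.0 if gap == 0 else (self.current - self.baseline) / gap

scorecard = {
    "Financial": Measure("Cost per release ($k)", 120, 90, 105),
    "Customer": Measure("Satisfaction survey (1-5)", 3.2, 4.5, 3.8),
    "Business Process": Measure("Defects per KLOC", 6.0, 2.0, 4.0),
    "Learning and Growth": Measure("Staff with an active mentor (%)", 10, 80, 45),
}

for perspective, m in scorecard.items():
    print(f"{perspective}: {m.name} is {m.progress():.0%} of the way to target")
```

Tracking all four perspectives side by side in this way makes it harder for one measure to be maximized at the expense of the others.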
Managing toward results focuses on activities which are "win-win" for the organization rather
than setting up conflicts and the zero-sum game. For the Business Analyst, the focus on
agreed-upon goals and objectives, which have clearly established measures and metrics
associated with them, helps to reduce the "noise"13 surrounding project activity. This in turn
aids overall productivity.

11. Herzberg, Hygiene Theory of Motivation. Ibid.


12. Kaplan, Robert and Norton, David; The Balanced Scorecard: Translating Strategy into Action; 1996
13. Noise is a term used by many management theorists to describe a wide variety of activities which
contribute nothing to the accomplishment of the objective.



In implementing these concepts it is important to remember that these other areas are intended
to augment, not replace, the financial perspective, which has a strong orientation toward
operational and financial controls.

2.2.4 Focus on the Customer

Do nothing that does not benefit the ultimate customer. This concept is fundamental to the
teachings of Taguchi in his Robust Design and is the backbone of Quality Function
Deployment (QFD). It is the heart of Quality Management; organizations exist only as long as
they continue to meet the needs of their customers. If the focus of the organization is internal
("this is what the Auditors or Accounting or Human Resources want") as opposed to external,
resources will be spent ineffectively.
Organizations which focus on satisfying the "internal customer" set up situations which
create competition for resources and conflict in policy and practice. Satisfying internal
customers (business partners) is effective only to the extent that it leads to the satisfaction of
the external customer also. The Strategic Measures identified as a part of the Manage Toward
Results process will help maintain the focus on the external customer if they are properly
established. Organizations which have implemented the Partnership Model for working with
those in their own organization (Business Partners) find that it is easier to focus on the true
customer and there are far fewer distractions.
In some organizations it is difficult for the Business Analyst to gain access to the true
customer, while at the same time they are asked to represent their interests and needs in both
the Requirements and the Testing processes. To the extent that it is realistic and possible,
every attempt should be made to make the voice of the customer audible to everyone in the
organization. Some organizations accomplish this through customer panels or interviews
which are then shown to employees; others conduct surveys or provide feedback formats and
distribute the summarized results. For organizations which provide support to their customers,
a few days working on the Help Desk will be illuminating.
Where possible, customer data should be collected in a systematic and repeatable process and
the results quantified. This will allow the creation of trend data, which can be used to maintain
focus on improving those things which truly matter to the customer.
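Once customer feedback is collected with the same instrument on a repeatable schedule, even a very simple calculation turns it into trend data. A minimal sketch follows; the quarterly survey scores in it are invented for illustration.

```python
# Minimal sketch: turning repeated, quantified customer feedback into a trend.
# The quarterly scores are invented; any consistently collected metric works.

def average_change(scores):
    """Average change per period; positive means the metric is improving."""
    deltas = [later - earlier for earlier, later in zip(scores, scores[1:])]
    return sum(deltas) / len(deltas)

quarterly_satisfaction = [3.1, 3.3, 3.2, 3.6, 3.8]  # same survey each quarter
print(f"Average change per quarter: {average_change(quarterly_satisfaction):+.2f}")
```

A single quarter's dip (3.3 to 3.2 here) matters less than the direction of the overall trend, which is exactly the point of collecting the data systematically.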

2.2.5 Continuous Improvement

Continuous improvement is the definition for quality proposed by Deming more than 50 years
ago. This encapsulates what good organizations have learned about themselves and their
environment:
Quality is not static - no matter how good the product or service, no matter how
excellent the current performance, if the organization ceases to change, others will
surpass it quickly;



"Good enough" is not good enough - any process can be improved, sometimes in
small ways yielding small benefits and sometimes with radical transformations
which yield comparable rewards.
Integrating the four preceding concepts with the continuous improvement mentality creates
organizations of unusual strength and ability. This does not mean that organizations must
adopt and implement every new thing that comes into the marketplace. Nor does it mean
that the organization must deliberately reinvent itself every three or four years. It is not the exclusive
property of those who have reached great heights of customer service and product quality.
Instead it is a grass roots, every day attitude which allows organizations to move from
wherever they are, in the direction they wish to go.
Continuous improvement of all processes offers the Business Analyst the opportunity to learn
from each iteration of the process and feed back lessons learned. In the Process Identification,
Definition, Implementation and Improvement Knowledge Area, the activities involved in
making this a reality will be examined in more detail.

2.2.6 Creating the Infrastructure for Improving Quality

The fundamentals for improving quality were clearly spelled out by Shewhart and Deming:
Plan
Do
Check (Study or Analyze)
Act (Adjust Plan if necessary)
In creating an infrastructure to support continuous improvement, this is the best model to use.

2.2.6.1 Plan

Regardless of the level within the organization at which the Quality Improvement process is
undertaken, the top management of that group needs to participate in the planning. During the
planning stage, the team needs to articulate why this process is being undertaken and what the
expected results are. Typically this involves creating a baseline of current performance. This
baseline should be numeric and grounded in facts which are agreed to by the organization.
Failure to have a well established baseline will make it difficult or impossible to establish
results at a later date.
In addition to creating a baseline, many organizations perform significant benchmarking
research at this time. This provides a context for creating the necessary improvement goals. If
there is no goal, people will not be able to measure progress toward that goal, which is very
disheartening.



The plan which is created must be reviewed and evaluated to ensure that it is achievable.
Creating an Improvement Plan which the organization cannot possibly execute is a major
cause of failure. The three chief components of this type of failure are unrealistic schedules,
unachievable goals, and the failure of senior management to stay focused and involved in the
process once the plan is agreed upon.
Once the goal is determined, tentative action plans are created and reviewed. Each action plan
must include information on who (specifically) is responsible for seeing that the action plan is
accomplished and in what time frame. Action plans are broken down into smaller parts and
responsibilities assigned to specific work units and finally to individuals. Responsibility for
ensuring the work gets done remains with management.
If the Quality Improvement Process is organization wide, this planning stage may include the
establishment of a chief quality officer who reports directly to the Board or the CEO. While
this has the advantage of creating a position which is solely focused on quality, it does have
the potential disadvantage of making quality improvement "someone else's problem." To be
effective, quality improvement must be embedded in every position. To make that real, it must
be included in performance reviews for everyone. One of the early steps in the planning
process is to make this happen. The only possible reason to exclude someone would be if they
had no possible way of contributing to the quality goals of the organization. (In that case they
should not be on the payroll!)
Many organizations will use internal and external consultants to help develop the plan instead
of, or in addition to, a CQO. In either case, these tend to be individuals with a sound
understanding of process, of quality and of the dynamics of change. Business Analysts are
often tapped to become one of these internal consultants. In that position they have the
opportunity to leverage their knowledge and skills in a dramatic fashion.

2.2.6.2 Do

Once a plan is created and agreed to by all of the stakeholders, it is time to execute the plan.
This requires participation at all levels of the organization. Senior managers maintain focus,
model the new behaviors, reinforce the importance of change and ensure support down
through the chain of command. Middle managers allocate resources, remove roadblocks,
monitor progress and performance and model behaviors. Line managers provide feedback and
support at the individual task level, provide training on new processes and procedures, resolve
conflicts and model behaviors. Staff use the improved processes and procedures, identify and
escalate problems with the new processes and procedures and track progress at the detail
level. When all of these happen simultaneously, the organization makes real progress on
improving the quality of their processes.

2.2.6.3 Check

As a part of an effective continuous quality improvement initiative, the organization will
generate a significant amount of data about how they perform. Depending upon the size and
scope of the initiative, the process of analyzing the results may occur after 10, 15, or maybe



even 100 iterations. Within IT initiatives it is typical to see incremental analysis. A
Requirements Gathering process improvement initiative undertaken as a part of a Continuous
Improvement program may be examined at the end of Requirements Gathering, again at the
end of Design, at the end of Unit Test, prior to Implementation and again at some point
following implementation. At each point it will be possible to determine the impact of the
changed requirements process on the succeeding stages. Waiting until the conclusion of the
project will result in the loss of important information about the early stages in all but very
small projects.
Having done an incremental analysis of several projects in this fashion, it will be possible to
develop reliable trend data about the long term impact of the changes. It is tempting but risky
to come to major conclusions on the basis of a single project. In the case of IT projects which
often have such a long tail from beginning to end, this may be necessary. During this
analysis, the results will be compared to the established baseline to determine progress from
the starting point, and against any selected benchmarks to measure progress toward the goal
or objective. It is not enough to measure just one of these, as it can provide a misleading
appearance of progress. A reduction in incomplete test cases by 2 per hundred cases may
reflect either of the following:
The baseline was 60 errors per hundred; a reduction of 2 per hundred is a 3%
decrease. A decrease is good; three percent is progress. However, if the benchmark
is 8 errors per hundred, progress toward the goal is small, less than 10%.
The baseline was 12 errors per hundred; here the same reduction of 2 per hundred
is a 17% decrease, and given the same benchmark, progress toward the goal is much
greater: 50%!
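The arithmetic behind these two comparisons can be sketched directly. The inputs (baselines of 60 and 12, a reduction of 2, a benchmark of 8) come from the examples above; the two-line model itself is just an illustration of comparing against both baseline and benchmark.

```python
# Sketch of the baseline-vs-benchmark arithmetic in the examples above.

def evaluate(baseline, current, benchmark):
    """Return (relative drop from baseline, share of baseline-to-goal gap closed)."""
    decrease = (baseline - current) / baseline
    progress = (baseline - current) / (baseline - benchmark)
    return decrease, progress

for baseline in (60, 12):
    decrease, progress = evaluate(baseline, baseline - 2, 8)
    print(f"baseline {baseline}: {decrease:.0%} decrease, "
          f"{progress:.0%} of the gap to the benchmark closed")
```

The same absolute reduction of 2 closes about 4% of the gap when the baseline is 60, but fully 50% when the baseline is 12, which is why both numbers must be reported.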
There is one more critical piece of information needed to evaluate how well the organization
is doing in meeting its objectives: time. In the first example above, if the 2 per hundred
reduction was seen in the first 6 weeks, this would probably be considered excellent progress;
at 6 months, not much progress; and at a year, relatively ineffective. The organization would
want to perform further analysis on the results, looking at
items such as:
Was the new process properly documented?
Were people adequately trained?
Was the process actually being followed?
Were the new measures being performed correctly?
Are some groups performing better than others?
Conversely, in the second example, because the baseline was already close to the benchmark,
reducing it by 2 errors in 6 months would probably be considered a very good result. While it may be
worthwhile reviewing the same issues raised for the first examples, there are other questions
which might also be addressed:



Can further reductions be achieved using this process?
Is the process fully deployed?
If the changes yield the desired results (the second example), little if any modification may be
in order. If, on the other hand, it appears that further improvement is needed to achieve the
goal, additional input for process improvement must be solicited. This may include activities
derived from answers to the first set of questions, i.e., if some groups are performing better
than others, find out why. If not everyone was fully trained, maybe more training is needed. If
the process is not actually being followed by everyone, compare the results and find out why
the process is not being used.

2.2.6.4 Act

Once the Check step has been completed, it is time to determine what, if any, action is required
to further improve the process. Act steps may include performing additional training,
tweaking the process to address problems or provide more consistent results, providing
additional reinforcement on why and when to use the new process and so on.
It is not unusual for the results of the Check step to identify several potential opportunities for
additional change or improvement. In the Act portion of the cycle, those to be implemented
are selected, defined, documented and agreed to by all of the stakeholders. They form the
baseline for the next cycle iteration(s) of the Plan step.

2.3 Communication and Interpersonal Skills for Software Business Analysts
The Software Business Analyst is in the communication business. To be effective at their job,
the CSBA must possess a wide range of communication and interpersonal habits, attitudes,
knowledge and skills. These are directly involved in the tasks performed on a daily basis.
Acquiring these skills if they are missing, or improving them if they are weak, will yield
enormous payback both personally and professionally.

2.3.1 Listening Skills

When there is a discussion of communication skills the emphasis tends to be on the sending
skills. Speaking, writing and doing presentations are all important, but as an information
gatherer, listening skills are crucial. Improving basic listening skills can substantially increase
the amount of information the CSBA is able to gather.



Every message has three parts: the sender (or speaker), the message itself, and the receiver. A
good listener, in addition to improving their own actions, can also directly impact the other
two. Good listeners encourage speakers to communicate more information. Good listeners
also exercise skills which will improve the accuracy of the message. Good listening is not a
passive activity. Active listening is essential to good communication. It requires that you do
the following:
Eliminate distractions - Turn off or away from computer screens; close or move
reports, books or other written material which may lure attention away from the
speaker. If, as a listener, you are prone to fidgeting, empty your pockets and put
away pencils and paper not actively used for note taking. Even though it may not
seem like it, each of these items has the potential to cause your attention to drift
away from the speaker.
Avoid outside interruptions - Make arrangements to have phone calls go to
voice mail, or let someone else answer. Answering a phone call after setting up an
appointment for an interview sends the message that the phone call is the more
important of the two activities. If for some reason it will be necessary to take a call
during the interview, explain why, in advance, and keep the call short. The same is
true for people who "just happen" to show up while the interview is being
conducted.
Focus on the Speaker - Make eye contact frequently, but don't stare. Don't let your
eyes drift away for an extended period. Do not use the period while the speaker is
speaking to decide what you want to say next. This kind of interchange results in
many words being spoken, but no communication. Watch the body language of the
speaker as they answer or avoid answering questions. Be alert for signs of anger,
frustration, or stone-walling. Be aware of the extent to which the speaker is
comfortable or uncomfortable with the topic under discussion. Some people are
more sensitive about some topics than others.
Be patient - Some speakers need a few moments to organize their thoughts when
being asked a question. This is especially true when dealing with many IT
personnel who, as a group, tend to be highly introverted. Don't immediately follow
up with a restatement of that question or a new question. Silence is OK. Only if the
silence is prolonged and there are no cues that the speaker is attempting to form a
response should the listener follow up. Often the best information arrives at the end
of a long pause. Once the speaker has started speaking avoid interrupting them.
Interruptions are a signal that the interrupter does not respect either what the
speaker is saying or perhaps even the speaker.
Send non-verbal listening cues - Listeners who lean forward slightly send a signal
of interest. Leaning or turning away indicates lack of interest. Likewise nodding the
head in agreement or understanding reassures the speaker that the listener "gets it."
This reinforcement will encourage the speaker to continue the communication.
Check for understanding - One of the most effective tools for a listener is to
periodically restate a particularly important or complex point to demonstrate



comprehension. "So what I heard you say is..." or "Let me see if I have this
correct..." These restatements serve two purposes: they confirm for the speaker that
the listener is paying attention to their message and they offer the opportunity to
amplify or clarify key points.
Withhold judgment - Do not evaluate or analyze the speaker's comments until
they have completed their thoughts. Disagreement is particularly easy to read in the
listener and will have a discouraging effect on the speaker. This is especially true
when there is a significant difference in the organizational level of the individuals
involved. Premature expression of disagreement may cut off communication
entirely.
Listening is a group activity - In a group discussion, allow each person to have an
equal share of time. In a group of 4, everyone should be listening 75% of the time.
The length of pauses in conversations is culturally dependent. In Japan it may be as
long as 8 or 10 seconds between speakers. In North America it may be as short as 1
or 2 seconds. Be aware of the local style.
Telephone listening - When deprived of the opportunity to see the speaker, it is
more important than ever to practice good listening skills. It is very easy to
recognize that the listener is doing their e-mail or reading a memo when all of the
responses are "Mmmm."

2.3.2 Interviewing

Interviewing activities are the heart and soul of the Business Analyst's job. CSBAs use the
quality listening skills described in the previous section to gather the information needed to
develop the right requirements for the project. They use them to expand their knowledge of
the requirements into the associated test cases. They use them to develop the information
needed for effective implementation processes and support. The keys to effective interviews
are as follows:
Planning the Interview and Interview Environment - Effective interviewing is
like most other activities; it benefits enormously from planning and preparation.
In planning interviews it is essential that the right people are both interviewers and
interviewed. This can happen only with adequate planning and preparation. The
interviewers should include one person whos knowledgeable about the desired
process and is an effective listener. In addition there should be one person to act as
a recorder so that information can be captured effectively, without interruption to
the flow of the interview. The interviewees have information to share about what
the process is and does or what it is supposed to do and be.
In order for the interviewees to share their knowledge, they need to know in
advance what is expected of them. The best way to accomplish this is through the
use of an agenda. The agenda will provide a summary of the purpose of the



interview, an outline of the topics to be discussed, and a list of potential questions to
be covered.
As a part of the agenda, or as a separate document, each interviewee should
receive information about how best to prepare for the interview. This will include
recommending that they bring copies of forms and examples of issues or problems
they encounter in working with this process or system. If only one member of a
larger team is to be interviewed, suggest that the team meet and discuss the topic
and questions beforehand. In that way the interviewee will be able to provide a
broader picture of the process and its idiosyncrasies.
Likewise, the interviewers need to prepare for each interview. They should review
any information already collected and have copies of any forms or materials which
might be useful. Any relevant fully or partially complete lists should be available for
discussion.
Interviewees are most likely to be responsive if there has been "buy-in" from the
top of the organization. Obtaining and communicating that support is a vital step in
the interview process. At the beginning of the interview process communicate
clearly to the project sponsors and key stakeholders what the expected time
commitment will be on the part of their staff. Where possible schedule
appointments for staff level personnel through their manager or supervisor. This
will allow the work of the department to be managed effectively while one or more
people are being interviewed.
Allow about 90 minutes for each interview. Allow at least 30 minutes between
interviews. This provides time to clean up loose ends on the documentation, update
notes and profiles, and make changes where necessary. It will also allow time to prepare
for the next interview. Interviewing is a strenuous mental activity; try not to
schedule more than 3 in a day.
Select a mutually convenient time for the interview in a neutral location. Ideally
this will be the project "home," as this reduces the overhead in setting up each
session. This location should provide enough privacy that sensitive issues can be
discussed if need be. Notify the interviewee well in advance of the time and
location.
The interview process is essential for clear and correct data gathering whether for
Requirements, Test Case Development or Post Implementation Problem Solving.
It is important that interviewers enter the process with an open mind, prepared to
hear that much of what they have already recorded is incorrect or
incomplete. They must be prepared for experts to have wildly differing views on
how certain activities occur and to be able to resolve those conflicts.
In addition to planning and preparation, this is an area which will improve with
practice and feedback from team members and interviewees. Although an
excellent product may be achieved using less than excellent interviewing practices,
it is not a common occurrence. The best approach, when there are a number of
interviews to be performed, as is the case in Requirements Gathering, is for the
interview team to obtain a dedicated space for the duration of the interviewing



process. This becomes their "home" and is equipped with the resources required to
successfully complete a requirements project.
Chief among these resources are a lot of blank walls. These will become the
residence for the multiple lists or cases (for testing) in process. List or case drafting
will take place both during interviews and after. It is important to be able to see and
touch all of the items developed during the interview process. Rolling them up
with rubber bands and sticking them on a shelf is counterproductive.
These may be constructed on 2 x 3 poster paper. Self stick is nice, but masking
tape works just fine too. During the initial stages of an interview process, it may
not be a good use of time to create this data in an electronic format. Too often
people perceive what is computer produced as being gospel or finished and fail
to give it the serious scrutiny that is essential to a quality finished product. At the
top of the map is the name of the requirement, task or action being researched. If
there is more than one page to the map, it will also include a page number.
Actions, decisions and relevant information are captured on Post It Notes and
placed in their proper position on the poster paper map. As new information is
gained it can be inserted where appropriate. Post Its can be moved multiple times
until the requirements flow correctly and accurately represents the desired process.
Markers can be used to add directional arrows, decision diamonds and other
descriptive information as the maps near completion. During construction,
however, these should also be done with Sticky Notes, as the flow may well
change.
Conducting An Interview - During the interview create and maintain a friendly
atmosphere and put the interviewee at ease. Establishing rapport between the
interviewer and the interviewee improves the quality of the information exchange.
While many interviewees will talk very freely, others may be reluctant to discuss
problems because they fear that the results of the interview process may jeopardize
their current job process or result in some other unpleasant consequence. It is
important to be honest with people about the expected results of the project and to
reassure those individuals to the extent possible. Where the fear is that information
will cause personal problems for the interviewee, they need to be reassured that the
information will be managed in a way that protects the identity of the provider where
that is necessary.
If the Post-It technique is used, interviews can be conducted in any order after
the initial information is gathered. This is an enormous asset to the interview team
because people can be scheduled whenever they are available. This often means
that several key requirements areas are in process at the same time and they are at
varying levels of detail. This may require some practice on the part of the
interviewer, but works very well.
The purpose of the interview is to find out what the interviewee knows or thinks
they know about either the existing process or the proposed process. Do not begin
the process by asking the interviewee to review information already gathered; that
will limit what they choose to say. Proceed through the Q&A agenda provided in



advance. Review the materials brought by the interviewee. Then compare the
information provided with existing lists and notes.
One of the keys to an effective interview is to ask good questions. When preparing
for an interview, create a list of questions which the interviewee might be able to
answer. It is much easier to be told "I don't know" than to have to schedule a
follow-up interview for questions which were not asked. In creating this list,
consider what function the interviewee's department is responsible for performing.
Consider what the interviewee's role is in performing that function, and whether
they are the only one performing it or there are others. Ask questions about both
what works well and what does not. Be certain to determine who the work product
supplier is and who the receiver is. If the interviewee has wide experience in the
organization, it may be worthwhile to explore what they may know based on other
positions they have held or jobs they have performed.
Question discrepancies, confirm key points, and update lists with new information.
Secure confirmation on the spot that the lists correctly reflect what the interviewee
believes to be correct. These real-time updates are essential to obtaining a complete
and correct picture of the process as well as for identifying areas of conflict or
misunderstanding. Flag items which do not agree with other sources.
Always provide the opportunity for the interviewee to volunteer information for
which you did not specifically ask. These more open ended questions near the end
of the interview session can open up whole new areas. No requirements or testing
project should fail because "you never asked me about that."
Processes exist because the external customer needs something. How does the
customer experience the process? Where does it meet expectations and where does
it fail? The Process Identification activities often are started because of a perceived
failure to meet customer expectations. Effective interviewing and analysis can
place these perceived failures in the proper context.
To be successful, it is necessary to actually understand the customer's experience
and perspective. Too many organizations are satisfied with an internal expert
interpreting the customer's needs and wants. Wherever possible it is much more
effective to gather information directly from the external customer. This can be
done through a wide range of activities such as web-based or paper surveys, focus
groups, analysis of customer comments and letters, or phone interviews.
This direct feedback will provide the Business Analyst with hidden nuggets of
information about how the process actually works, not merely how it is intended to
work. A classic example of this is the automated telephone response systems
implemented by many organizations to provide customers with continuous access
to information about their accounts, thereby improving customer service. Designed
to provide the answer to many routine questions with just a few entries from the
telephone keypad, few ideas have done more to annoy and infuriate customers, who
become lost in the labyrinth of automated menus. Having satisfied themselves that
they are providing service to their customers, organizations reduce the number of
individuals available to provide assistance with non-standard questions.
Frustrated and unhappy customers abound.



During the interview one of the interviewers takes notes, while the other conducts
the interview. Due to the speed of the sessions, these notes often require some
clean-up. Do as much of this as possible between sessions, while the interview is
still fresh in both people's minds.
Interview Validation and Problem Resolution - Following the interview, provide
the interviewee with a recap of the session. This will include a narrative description
of the part(s) of the process for which they were interviewed. Ask them to review
and make corrections if there are errors. This is essential to ensure that at some later
date no one claims that their information was misrepresented in the final product.
Some interviewers deliberately insert a few obvious, but trivial errors to confirm
that the documents were reviewed.
Update lists and cases with any changes required as a result of revisions to the
recap. At the end of the interview process, completed materials should be sent to
everyone who participated for a final review. In the event of a conflict of
significance, re-interviews with key experts may be necessary. If that fails to
resolve the conflict, additional steps such as site visits or product reviews may be
necessary. The cost to resolve conflicts can rise rapidly with extra activity, so
ensure that the conflict in question warrants the time and effort to resolve it. Otherwise,
note the conflict and move on. Most issues will be resolved quickly through the
review interview process.

2.3.3 Facilitation

Facilitation skills are in demand in organizations today. The role of the facilitator is to help
guide a group through a discussion or series of discussions in a non-judgmental and
productive way. One of the major reasons for using a facilitator is to maintain control and
eliminate the conflicts that can occur when topics of importance are under discussion.
Since the Facilitator can control and direct the course of a decision making process, it is a
position of great power for the duration of the facilitated session(s). Generally speaking, a
Facilitator should be knowledgeable, but not necessarily expert in the subject matter under
discussion. It is often desirable to have a Facilitator who has no vested interest in the actual
decisions made; neutrality will help to avoid dictating or supporting a particular position.
For very sensitive or highly important issues, many organizations choose to retain the services
of an outsider with no ties to the organization. This is not always possible, in which case the
Facilitator must exercise caution and control to maintain their neutral stance.
Sponsor and Initial Objectives - A facilitated session begins with the objectives
for the session itself: develop an updated strategic plan; create a new marketing
campaign; develop the requirements for a new system. Regardless of the purpose of
the session, there is a Sponsor. The sponsor is generally the one who sets the initial



objectives and contacts the facilitator. In some situations however, others may have
input into the objectives for the session.
Facilitation Agreement - The facilitator may be a member of the staff or a
consultant from another part of the organization or an outside organization. The
facilitator meets with the session sponsor to review the objectives, the scheduled
time frame, and the participants for the session. It is not unusual for the objectives
at the beginning of the session to be too large, too vague, and/or too ambitious for
the number of people and activities needed.
The facilitator will first seek to clarify the underlying issues prompting the need
for the session. Once clear on the issues, the facilitator and the sponsor should
review the session objectives to determine if they really address the issue(s).
Objectives may be added, revised or deleted at this point. The facilitator also
reviews the proposed attendees to ensure the correct people are in the room; once
again some adjustments may be made. Finally they will review the amount of time
planned for the session or sessions and make adjustments based on the preceding
changes. How long any particular topic will require is a matter which depends
upon the organization, the issue, the number of participants, and the skill of the
facilitator.
A decision should be made at this time regarding a note taker. Facilitation is
mentally exhausting work requiring a high level of focus and concentration on the
part of the Facilitator. In this environment, it is difficult to both facilitate and take
notes; best practice is to have two trained facilitators who alternate in the two
roles. This allows each to have a break from the high pressure, while maintaining
the flow and energy of the group. Where it is not possible to use two facilitators,
the next best option is for the Sponsor to provide one additional resource to
perform the note taking. If the confidential nature of the topic makes this
impossible, rotate the responsibilities for note-taking among the participants.
At the conclusion of this process the facilitator should produce a document which
reflects the shared understanding of the Facilitator and the Sponsor regarding the
objectives for the session, who will be attending, and how long it is expected to
take. If the Facilitator is an external consultant, it is not unusual to require the
Sponsor to return a signed copy of the agreement. This protects both the Sponsor
and the Facilitator in the event the Sponsor changes their mind or is not satisfied
with the results.
Logistics for Sessions - During this same session, or shortly thereafter,
arrangements must be made for the session. These are best done by the Sponsor, or
at least in the name of the Sponsor. The Location for the session is very important.
If at all possible, facilitated sessions are best held off site; this does not necessarily
mean an expensive hotel. A conference room at another organization location may
work; many non-profits rent facilities for a nominal sum. Hotels are nice, but not
always needed. There are several reasons to go off site; 1) people are less likely to
be interrupted or pulled out of a session; 2) people are less likely to go back to their
desks and get involved in the crisis of the day; and 3) often the discussions which



arise during these sessions may be of a confidential or speculative nature, so it is
much easier to control the flow of information and documents.
Coordination of the calendars of all of the participants ensures they will all be
available at the desired time(s). The process can be time-consuming and
frustrating. This is one of the places where the Sponsor can put a little authority
behind the request if people are not being cooperative.
The Facilitator should prepare, for the Sponsor's signature, an agenda for the
session which includes the information on where, when, and how long the sessions will
last. To the extent possible (some topics will make this easier than others)
attendees should be told the nature of the discussion, materials to bring and what
preparations are needed (it is not uncommon to use assigned readings as a part of
the preparation.) The Facilitator may wish to do a pre-meeting status check 2 or 3
days in advance to ensure there are no issues.
Conducting a Facilitated Session - Either as a part of the initial discussions with
the Sponsors, or in follow up work, the Facilitator should determine what kinds of
approaches are needed to achieve the objectives. If looking for new ideas or
approaches, some form of brainstorming activities may be required. If the attendees
do not know each other at all, or if they do not know each other well enough for
discussions to flow freely at the beginning of the day, some ice-breaker activity may
be needed. If part of the issue requires problem solving, some root cause analysis
activities may be required. If they know each other, but have no history of working
well together, some form of team building activity may be required. It is the
responsibility of the Facilitator to link the desired outcomes and individuals with
the appropriate activities. Often the Sponsor will be unfamiliar with the techniques
and what they involve, especially from a time perspective. Each of the activities for
the session, and what it contributes, should be discussed and agreed upon.
On the day of the first session, the Sponsor should be present and should introduce
the Facilitator and the session. Ideally the Sponsor will be a full participant in at
least one session of the discussions which follow as they often have input which
others do not, but this is not always the case. When the Sponsor is a participant in a
session, it is essential to make clear that their role is as a participant. If this is not
done, the Sponsor's presence may be perceived as an attempt to control the
activities of the group. During the activities that follow, the Facilitator is
responsible for the following:

Ensuring a fair and productive discussion of the issues or challenges - Manage
the discussions so one or two individuals do not dominate or intimidate other
participants. Pay attention to who is not contributing, soliciting their input or
pulling them back into the group if they have drifted. Protect participants from
heckling, putdowns or harassment.

Managing the pace and flow of activities - Establish a time plan for the day and
keep participants focused on the reason(s) they are there. This includes limiting
side conversations, detours to tangents or wasted time on hardened conflicts.

Drawing out information - Follow up on unspoken questions, probing when
discussions seem to end prematurely, and asking individuals directly to confirm a
point under discussion ("Juan, do you agree that this part of the system must be
totally redesigned to achieve the desired performance levels?")

Building consensus - Work to develop points of agreement and find ways to
reduce conflict. This is one of the most important responsibilities of the
Facilitator, and a delicate balance must be maintained between gradually drawing
a true consensus from the group and driving the group to a false conclusion
through lack of patience or finesse. Agreements reached as a part of the
facilitation process must have the support of the participants. When the
Facilitator has an agenda or a desired outcome, it is possible to hijack the
process. If this happens, the participants will eventually repudiate the outcomes,
leaving the organization worse off than it was before.

Recognizing the efforts of participants - Even if the results of the session were
not as substantive as desired, it is important to thank all of the participants for
their time and effort. Recognize what was accomplished and outline the work
that lies ahead. Encourage the group to continue working toward the resolution
of any unresolved issues.

Summarizing Results - Finalize the draft of notes taken during the session and
circulate them for corrections and publication. To the extent possible it is
desirable to record comments in the exact words of the participants so they will
recognize and remember the discussions (unless it is necessary to reword
comments to protect the participants from reprisals). It is very important to get
the results out quickly to avoid losing steam or having the decisions become
stale due to intervening circumstances.

Managing multiple sessions - Have a well-defined schedule and all of the
logistics organized. Where the multiple sessions will be with different
participants but with the same agenda, and with the intention of accumulating
the results, it is important to hold the sessions as close together as possible. This
will prevent too much of the edge of the discussions being lost due to
conversations with previous participants. Where the same participants are
involved in several sessions with different activities to be completed, the
Facilitator must maintain focus so the level of participation remains high.


2.3.4 Team Building

Team Building is an essential skill for the Business Analyst who will often be in the position
of working with a group of individuals who begin the relationship with different languages,
priorities, skills and styles. They are often challenged to unite this group in a way that will
allow them to achieve aggressive technical, financial, quality or schedule goals. Lack of an
effective team jeopardizes the effort before it can really begin. Understanding how teams
work can be a major asset.
According to Rebecca Staton-Reinstein, Ph.D.,15 "Effective IT Leaders build a team to
manage the IT department because it is one of the best ways to ensure success. Many leaders
find it effective to have the entire group take part in special team-building events and training.
That is a good way to kick off the effort. Long term, the team will become high functioning if
it is engaged in developing and implementing the Strategic Plan, if it is held responsible for
results and given the authority to get the results. The more input people have into decision
making, the more they buy into the decisions and begin to jell as a team to accomplish the
objectives."
As an IT leader, the most important reasons to delegate are to spread the workload so that
you can focus on leading and to build an effective team of people who can perform a wide
variety of assignments. The team building process:
1. Involve your team in developing the Strategic Plan
2. Have each team member get input for the Plan from his or her staff
3. Delegate the elements of the Strategic Plan to appropriate team members
4. Have team members develop more detailed plans with their staffs to implement their
assigned areas
5. Approve all of the plans and have your team look for areas of synergy and to eliminate
duplication or overlap
6. Review progress against plan regularly in team meetings and make adjustments as
necessary
7. Review individual plans regularly, one-on-one, to provide coaching and feedback and
to encourage effective performance
8. Discuss the lessons learned that can be applied in new situations as assignments finish
According to Staton-Reinstein, "The team meeting agenda always includes a review of
performance against plan and planning for modifying the plan and moving forward. The team
becomes a team through focusing on the mission and achieving it." In turn, team members are
encouraged to delegate appropriately to develop the skills and talents of their staffs.

15. Staton-Reinstein, Rebecca, PhD., Developing Effective Information Technology Skills, 2002


2.3.5 Tuckman's Forming-Storming-Norming-Performing Model16
Dr. Bruce Tuckman first published his Forming-Storming-Norming-Performing model in
1965. He added a fifth stage, Adjourning, in the 1970s. The Forming Storming Norming
Performing theory is an elegant and helpful explanation of team development and behavior.
Similarities can be seen with other models, such as the Tannenbaum and Schmidt Continuum
and especially with Hersey and Blanchard's Situational Leadership model, developed about
the same time.
Tuckman's model explains that as the team develops maturity and ability, relationships
become established, and the leader changes leadership style: beginning with a directing style,
moving through coaching, then participating, and finishing with delegating, almost detached.
At this point the team may produce a successor leader and the previous leader can move on
to develop a new team. This progression of team behavior and leadership style can be seen
clearly in the Tannenbaum and Schmidt Continuum - the authority and freedom extended by
the leader to the team increases while the control of the leader reduces. In Tuckman's
Forming Storming Norming Performing model, Hersey's and Blanchard's Situational
Leadership model, and in Tannenbaum and Schmidt's Continuum, we see the same effect,
represented in three ways.

2.3.5.1 Tuckman's Forming-Storming-Norming-Performing - Original Model

The progression is:


1. Forming
2. Storming
3. Norming
4. Performing
Features of each phase:
Forming - Stage 1

16. Bruce Tuckman 1965 original 'Forming-storming-norming-performing' concept; Alan Chapman


2001-2006 review and code; Businessballs 1995-2006. The use of this material is free provided
copyright (see below) is acknowledged and reference is made to the www.businessballs.com website. This material may not be sold, or published in any form. Disclaimer: Reliance on information,
material, advice, or other linked or recommended resources, received from Alan Chapman, shall be
at your sole risk, and Alan Chapman assumes no responsibility for any errors, omissions, or damages
arising. Users of this website are encouraged to confirm information received with other sources, and
to seek local qualified advice if embarking on any actions that could carry personal or organizational
liabilities. Managing people and relationships are sensitive activities; the free material and advice
available via this website do not provide all necessary safeguards and checks. Please retain this
notice on all copies.



High dependence on (designated) leader for guidance and direction. Little
agreement on team aims, other than that received from leader. Individual roles and
responsibilities are unclear. Leader must be prepared to answer lots of questions
about the team's purpose, objectives and external relationships. Processes are often
ignored. Members test tolerance of system and leader. Leader directs (similar to
Situational Leadership 'Telling' mode).
Storming - Stage 2
Decisions do not come easily within the group. Team members vie for positions as
they attempt to establish themselves in relation to other team members and the
leader, who might receive challenges from team members. Clarity of purpose
increases, but plenty of uncertainties persist. Cliques and factions form and there
may be power struggles. The team needs to be focused on its goals to avoid
becoming distracted by relationships and emotional issues. Compromises may be
required to enable progress. Leader coaches (similar to Situational Leadership
'Selling' mode).
Norming - Stage 3
Agreement and consensus is largely formed among the team, which responds well to
facilitation by the leader. Roles and responsibilities are clear and accepted. Big
decisions are made by group agreement. Smaller decisions may be delegated to
individuals or small teams within group. Commitment and unity is strong. The
team may engage in fun and social activities. The team discusses and develops its
processes and working style. There is general respect for the leader and some of
leadership is more shared by the team. Leader facilitates and enables (similar to the
Situational Leadership 'Participating' mode.)
Performing - Stage 4
The team is more strategically aware; the team knows clearly why it is doing what
it is doing. The team has a shared vision and is able to stand on its own feet with no
interference or participation from the leader. There is a focus on over-achieving
goals, and the team makes most of the decisions against criteria agreed with the
leader. The team has a high degree of autonomy. Disagreements occur but now
they are resolved within the team positively and necessary changes to processes
and structure are made by the team. The team is able to work towards achieving
the goal, and also to attend to relationship, style and process issues along the way.
Team members look after each other. The team requires delegated tasks and
projects from the leader. The team does not need to be instructed or assisted. Team
members might ask for assistance from the leader with personal and interpersonal
development. Leader delegates and oversees (similar to the Situational
Leadership 'Delegating' mode.)
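The stage-to-leadership-style pairings described in the four stages above can be collected into a simple lookup table. The sketch below is purely illustrative (the function and variable names are not part of Tuckman's model); it pairs each stage with the leader behavior and the roughly equivalent Situational Leadership mode named in the text:

```python
# Tuckman stage -> leader behavior, with the roughly equivalent
# Situational Leadership mode, as described in the stages above.
TUCKMAN_STAGES = [
    ("Forming",    "directs",                 "Telling"),
    ("Storming",   "coaches",                 "Selling"),
    ("Norming",    "facilitates and enables", "Participating"),
    ("Performing", "delegates and oversees",  "Delegating"),
]

def leadership_style(stage: str) -> str:
    """Return the leader behavior the model pairs with a team stage."""
    for name, behavior, mode in TUCKMAN_STAGES:
        if name.lower() == stage.lower():
            return f"Leader {behavior} (Situational Leadership '{mode}' mode)"
    raise ValueError(f"Unknown Tuckman stage: {stage}")
```

Laying the model out this way makes the parallel with the Situational Leadership continuum explicit: each row trades a little of the leader's control for a little more team autonomy.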


Figure 2-1 The Tuckman Team Model


Adjourning - Stage 5
Bruce Tuckman refined his theory around 1975 and added a fifth stage to the
Forming Storming Norming Performing Model - he called it Adjourning, which is
also referred to as Deforming and Mourning. Adjourning is arguably more of an
adjunct to the original four stage model rather than an extension - it views the
group from a perspective beyond the purpose of the first four stages. The
Adjourning phase is certainly very relevant to the people in the group and their
well-being, but not to the main task of managing and developing a team, which is
clearly central to the original four stages.
Tuckman's fifth stage, Adjourning, is the break-up of the group, hopefully when
the task is completed successfully, its purpose fulfilled; everyone can move on to
new things, feeling good about what's been achieved. From an organizational
perspective, recognition of, and sensitivity to, people's vulnerabilities in
Tuckman's fifth stage is helpful, particularly if members of the group have been
closely bonded and feel a sense of insecurity or threat from this change. Feelings
of insecurity are natural for people with high 'steadiness' attributes (as regards the
'four temperaments' or DISC model) and with strong routine and empathy style (as
regards the Benziger thinking styles model, right and left basal brain dominance).



2.3.5.2 Hersey's and Blanchard's Situational Leadership17 Model

The classic Situational Leadership model of management and leadership style also
illustrates the ideal development of a team from immaturity (stage 1) through to maturity
(stage 4). Management and leadership style progressively develops from relatively detached
task-directing, through the following stages:
1. Managerial explanation
2. Managerial participation
3. Detached delegation
4. Largely self-managing, containing at least one potential management/leadership
successor
The aim of the leader or manager is therefore to develop the team through the four stages, and
then to move on to another role. Ironically this outcome is feared by many managers.
However, good organizations place an extremely high value on leaders and managers who can
achieve this. The model also illustrates four main leadership and management styles, which a
good leader is able to switch between, depending on the situation (i.e., the team's maturity).

2.3.6 Brainstorming

This technique has been around for many years and is widely used and abused by
organizations. It is a basic tool for the Business Analyst in the problem solving and process
improvement activities. It is often used at the beginning of a project to get people thinking in
new and different ways. Historically, a "brainstorm" was the term for thinking that was wild
and crazy. Structured brainstorming can create an environment for exploring the outer edges
of what was previously done or accepted.
The objective of brainstorming is to gather as many ideas as possible on a given subject as
quickly as possible. During the initial brainstorming session, there is no evaluation or critique
of the ideas presented. There are several alternative methods, but all have the same general
approach:
Prepare - Attendees are provided with information about the topic in advance of
the session, which they are to read and understand. This may include information
about a problem, an opportunity or both. Often it will include input from customers,
business partners, and benchmarking activities. Sometimes the preparation stage
may include presentations at the beginning of the session by key stakeholders to
establish the need, and management support for the process.

17. Situational Leadership is a trademark of the Center for Leadership Studies. Situational Leadership II is a trademark of The Ken Blanchard Companies. Use of material relating to Situational
Leadership and/or Situational Leadership II requires license and agreement from the respective
companies.



Contribute - Each attendee is asked to contribute as many ideas on the subject as
possible. Like root-cause analysis, they must be related at least tangentially to the
issue under discussion. In one method, people fill out 3x5 index cards or Post-It
notes with a single idea on each. During each round each participant submits a
single idea. These are read to the group by the Facilitator and then posted on a
common list. There is then a pause while people have the opportunity to create new
idea notes. During successive rounds, the number of new ideas will decrease until
no one has anything further to contribute. In a slightly less structured method
people take turns reading aloud one idea from a list they have prepared and add to
the list as they hear ideas from others. The Facilitator again is responsible for
posting the ideas until there are no more being contributed. The least structured
approach is more like a verbal free-for-all with everyone shouting out ideas to the
group as quickly as they can, leveraging off what others said. This clearly is the
most spontaneous, but also the most difficult to capture effectively. The method
chosen will depend in large part on the diversity of the group, the amount of input
expected and the significance of the topic.
Consolidate - Once all of the ideas are posted, the group then tries to determine
where there are duplicates and eliminate them. This is often accomplished by
starting at the top of the list and reviewing the items one at a time. During this
process, the ideas may be expanded, or grouped with other ideas already on the list.
At the end of the session, each idea or suggestion is understood by the group and is
unique.
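The Contribute and Consolidate steps amount to collecting ideas round by round and de-duplicating them. The sketch below is illustrative only; it uses simple case-insensitive text matching as a stand-in for the group's judgment about which ideas are duplicates:

```python
def run_brainstorm(rounds):
    """Collect ideas round by round, stopping when a round adds nothing new.

    `rounds` is an iterable of lists of raw ideas (one list per round).
    Returns the consolidated list: duplicates removed, first-seen order kept,
    ideas compared case-insensitively after trimming whitespace.
    """
    posted = []            # the common list on the wall, in posting order
    seen = set()           # normalized forms already on the list
    for round_ideas in rounds:
        new_this_round = 0
        for idea in round_ideas:
            key = " ".join(idea.lower().split())   # normalize for comparison
            if key not in seen:
                seen.add(key)
                posted.append(idea.strip())
                new_this_round += 1
        if new_this_round == 0:   # no one has anything further to contribute
            break
    return posted
```

In a real session the "is this a duplicate?" decision is made by the group, not by string matching; the point of the sketch is the shape of the process: rounds continue until nothing new is contributed, and the output is a unique, ordered list ready for analysis and prioritization.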
Once the ideas are consolidated, the brainstorming activity is concluded. At this point this
group or another one may go through the steps of analyzing and prioritizing the ideas. This
may be done in the same session if it is to be the same group, or if some research will be
required before even initial assessments can be performed, another session will be scheduled.
The advantage of a brainstorming activity is that it can generate a lot of input and interest in
the topic. Brainstorming sessions can be fun and rewarding. They offer an opportunity for
people to exercise their creativity and demonstrate their knowledge. There is a danger with
brainstorming that people will get enamored of a solution which will not be organizationally
acceptable. If this solution is later shot down, there will be a corresponding let-down on the
part of the participants, which may result in a cynical backlash.

2.3.7 Focus Groups

Focus groups are a way of gathering specific information from individuals who would not
ordinarily be accessible. Typically focus groups are drawn from the external customer base,
although occasionally there may be a reason to use employee groups; for example, when a
new system to be used by a widespread group of employees is being developed. Typical uses
of focus groups are to determine customer responses to current or proposed products.
Participants are often given some special consideration or reward in return for their
participation.

2-28

Version 9.1

Management and Communication Skills


Focus groups are generally held at a neutral site, rather than the organization's location. The
session itself is often conducted by outside personnel, hired because of their skills in this area.
Essential to the focus group process is a clear picture of what the organization wishes to learn,
because with a focus group, time will be limited and there is generally no follow-up
opportunity. A software vendor might show a prototype of a new product to small groups of
long-term customers of a companion product. They might be asked questions about the
perceived functionality, the look and feel, and perhaps the pricing or pricing strategy. In
return, the customers may be offered a discount on the product should they choose to purchase
it. If the product represents a significant technical breakthrough or major innovation,
participants should be asked to execute a confidentiality agreement.
Where an internal focus group is conducted, the BA's facilitation skills may lead to their
participation in this activity. Generally, in addition to the host (facilitator), there will be one or
more hidden observers who will also be responsible for taking notes. Obvious note taking in
this context is often a detriment to participation, but participants must be advised that there
are unseen observers taking notes. Failure to do so may result in people saying things that
later prove embarrassing or painful.

2.3.8 Negotiating
Life is a series of negotiated situations, especially in the business environment. The rules for
effective negotiation can be applied to both personal and business situations. There is a
misconception that negotiating is bad, or negative, or the result of a failure to communicate,
when in fact it is just the opposite. Good negotiations can help each of the parties arrive at a
solution which is optimal for the situation. Failure to negotiate often results in
frustration, confusion and failed projects.
Effective negotiation skills can be employed by the Software Business Analyst in a wide
range of activities, such as establishing and adjusting priorities; addressing resource allocation
and project responsibility issues; and developing and adjusting agreements regarding cost, time,
scope and quality. The rules for good negotiating are presented below in the context of a
group business issue. These same rules apply when conducting negotiations at the individual,
interpersonal level.
Communicate Clearly - Negotiations often get off on the wrong track almost
immediately, because one or more of the participants fails to state clearly what they
want from the specific situation. Even if it is already clear that the objective is not
attainable, it is a good place to start a discussion, for example, "What I really want
is to be able to implement this project on March 1, for the European Economic
Community (EEC)." Depending upon the nature of the situation, more clarification
or detail may be needed: "That would include installing all of the programs, getting
any Work Council approvals and training the staff."
It is essential to think through and be prepared to communicate clearly what
smaller or different outcome might be acceptable: "I could live with Germany and
Spain not being ready." There may be several optional solutions which would be
acceptable; be prepared to articulate each of them as necessary.
Clear communication requires effective listening. Give the speaker undivided
attention. Take heed of any non-verbal cues about parts of the message which are
either difficult for, or creating problems for, the speaker.
Respect the other person - Many of the problems which cause negotiations to
break down, rather than yield a successful result, arise because one or more of the
parties do not feel that they or their position is being respected by others. Perceived
lack of respect creates anger and frustration. This in turn leads to statements,
positions and actions which are not productive for the negotiation.
One of the most common causes of feeling disrespected is interruptions. By
allowing a person to make their position clear and communicate their
questions or concerns without interruption, listeners convey the message that they are
interested in what the speaker has to say. Interruptions, no matter how well
intentioned, send the message that what is being said is less important than what
the interrupter has to say. This is a clear signal of disrespect.
Expressions of anger should be a red-flag for participants that it is time to back up.
Take the time to understand why the individual or group is angry. Attempt to see
the situation from their perspective. Resolving the cause of the anger will make it
possible to resolve the more important issues more easily.
Recognize and clearly define the problem - At the outset each party has defined
what they hope the outcome or decision will be. The area of disagreement among
them is the problem to be solved. Define the area of disagreement clearly: "Sales
would like to have all 27 of their top priorities included in the first installation.
This would mean that only 3 of the top 4 Accounting priorities could be included,
and specifically the new Regional Balancing subsystem could not be done.
Accounting would like to have all 4 of their top priorities included; this would
mean that there would not be enough resources to include Sales items 7, 10 and 19
through 27."
Until this step has been taken, it is not productive to discuss solutions. Taking the
time to make the area of disagreement clear may in fact resolve part of the
perceived problem: "Sales could live without 23 through 27 until the next release,
but we really need the others." This kind of amplification and refinement will help
to lay the groundwork for developing workable solutions.
At this stage it is useful to clearly define and focus on areas of agreement. This will
help build the foundation for a consensus agreement: "From what I hear, we all
agree that we must do at least 12 of the Sales priorities in the first release. I think
everyone agrees that to make this work, we must have at least a piece of Regional
Balancing plus Accounting priorities 2 and 3. I think we all accept that, given
other priorities and resource commitments, IT cannot get everything else done by
the initial release date."
Seek solutions from a variety of sources - With the problem area clearly defined,
it is time to look at potential solutions. Although it is tempting, do not accept the



first idea that is offered, and not strenuously objected to, as the solution. Identify all
of the potential solutions before beginning to make decisions. This may include
adding outside resources, scaling back scope, agreeing to additional funding,
extending the time frame and so on. Develop as many alternatives as possible. This
will help people open their minds to other ways to meet their needs. "We could give
up most of Regional Balancing in this release if we were able to see the daily totals
by Region on the Flash screen."
Collaborate to reach a mutual solution - Once a variety of alternative solutions
have been proposed, work to find those which will best meet the organization's
needs as a whole. This may require that a number of variations be discussed before
identifying the one which best suits the organization as a whole. Test the proposed
solutions with each of the involved parties: "So, if we were to move priorities 13, 17
and 23 to 27 to the next release, Sales could live with that? And, if we were to add
the Regional daily totals to the Flash screen and do priorities 2, 3 and 4, Accounting
could live with that? And IT agrees that these can be accomplished by the proposed
date for the initial release?"
The resulting agreement should be published where appropriate to all parties to the
agreement. Many smaller items can simply be added to the documentation of the
project or activity.
This may be harder to accomplish when involved in an interpersonal negotiation as
some individuals may remain unresponsive to suggestions that they accommodate
the needs of others. It may be difficult to reach an amicable solution in this event.
This is why there are contracts, courts and arbitrators.
Be reliable - Once the agreement is reached via negotiation, it is essential that all of
the parties follow through on their respective piece of the plan or commitment.
Failing to follow through or deciding to unilaterally reject the agreement will
undermine all credibility in future negotiations.
Preserve the relationship - Throughout the process it is important to focus on the
relationship, especially for those within the organization. Winning the battle in one
particular situation may cause long term hard feelings and create a road block to
collaboration in the future. There will be many negotiations over time. Before
going to the wall on one particular situation, ensure the long term cost will not
outweigh the long term benefit.
This is especially true in interpersonal negotiations; beware of taking a position
from which it will be difficult or impossible to retreat gracefully. Try not to let
others do that either, as regardless of the financial or objective results of the
negotiations, the relationship will have sustained a mortal wound.
Even though vendors or suppliers are not part of the organization in a literal sense,
most organizations find a solid working relationship with their vendors is essential
to their success. Think carefully about pressing suppliers to the point that the
transaction is no longer financially attractive. The deal may go through this time,
but they may not be willing to do business in the future.


2.3.9 Johari Window
The Johari Window model was created by two American psychologists, Joseph Luft and
Harry Ingham, who were researching group dynamics in the 1950s. The name of the model is
a combination of their first names, Joe and Harry. It uses the same four-quadrant format as the
Tuckman team model discussed earlier. In the case of the Johari model, the areas of the
quadrants are not fixed, but can change over time with better information and determination on
the part of team members. The four areas are:
1. What is known by the person about him/herself and is also known by others - open
area, open self, free area, free self, or 'the arena'
2. What is unknown by the person about him/herself but which others know - blind area,
blind self, or 'blind spot'
3. What the person knows about him/herself that others do not know - hidden area,
hidden self, avoided area, avoided self or 'facade'
4. What is unknown by the person about him/herself and is also unknown by others -
unknown area or unknown self18
Figure 2-2 shows the basic or standard view of the Johari Window, with all four quadrants
being about equal.

18. The use of this material is free provided copyright (Alan Chapman 1995-2006 adaptation, review and
code based on Ingham and Luft's original johari window concept) is acknowledged and reference is
made to the www.businessballs.com website. This material may not be sold, or published in any
form. Disclaimer: Reliance on information, material, advice, or other linked or recommended
resources, received from Alan Chapman, shall be at your sole risk, and Alan Chapman assumes no
responsibility for any errors, omissions, or damages arising. Users of this website are encouraged to
confirm information received with other sources, and to seek local qualified advice if embarking on
any actions that could carry personal or organizational liabilities. Managing people and relationships
are sensitive activities; the free material and advice available via this website do not provide all necessary safeguards and checks. Please retain this notice on all copies.


Figure 2-2 The Basic Johari Window


In the open area, both the self (or team) and the other individual (or group) know the
information in this area. This is where productive discussions occur, because everyone has the
same information. The larger the open area is, the easier it is to prevent and resolve conflicts.
Effective individuals attempt to maximize the open area to the extent that it does not
jeopardize their strength or ability to perform.
In the hidden area there is information about the self (or team) that is not shared with the
other individual or group. Often information is hidden because people feel revealing it creates
a loss of status or bargaining position. Hidden information creates significant problems in
resolving problems, as the other individuals or groups may not feel there is a need for
protecting the information and may become quite hostile if and when hidden information is
revealed. Individuals and groups actively placing information in the hidden area need to
carefully assess the potential short and long term risks.
Most people are fairly comfortable with the concepts of both hidden and open information
areas. The other two areas are less commonly discussed. In the blind area there is information
about the self that is unknown to them but known to others. This can be especially difficult
to deal with as often the information is not favorable to the self. A fairly typical example of
this is when some managers in an organization are aware that a management change is about
to occur and that the self individual or group will lose some portion of their control or
authority over the area under discussion, while the self is unaware. A second frequent
situation is when others recognize that the self becomes angry, emotional or distracted in
certain situations and are able to use it to their advantage.



The fourth area is the unknown area, in which there is information about the self of which
both they and others are unaware. This may include changes totally outside their control, or
information which has been successfully repressed by an individual. Operating in the
unknown area can result in unpredictable responses and actions by the self.
Compare the open area in Figure 2-2 with the same area in Figure 2-3. The groups or
individuals interacting with the self represented in Figure 2-3 are much more likely to achieve
a successful conclusion to any problem or conflict. Notice how the open area is significantly
expanded while the hidden, blind and unknown areas are decreased. Effective problem solvers
work hard to reduce the size of the blind area so they are not surprised during negotiations, by
negative information. The hidden area may contain information that is not relevant to the
situation under discussion, even though the other might wish to know it. For example, the
self may know that a third group, external to the situation, is going to be disbanded shortly.
While this is grist for the gossip mills, there is no need to share the information if it is not
essential to the discussion. Some things appropriately remain in the hidden area.

Figure 2-3 A more productive Johari Window


2.3.10 Prioritization

2.3.10.1 Conflict Management
Conflict occurs when two or more parties (individuals or groups) with contrasting goals or objectives
come in contact with each other. Throughout much of history, thinkers and writers assumed
conflict was always bad and that it was best to eliminate all conflict. This led to a wide range
of social and political situations in which anyone voicing disagreement was at least suspect,
and possibly criminal. According to Schmidt,19 there are a number of potential negative outcomes
for an organization from unmanaged conflict; they include:
A climate of suspicion and distrust
Resistance to teamwork
People leaving the situation because of turmoil
Despite the best efforts of various states, religions and organizations, conflict remains a part
of life. Schmidt went on to identify several potential benefits from properly managed conflict:
New approaches and solutions
Long standing problems brought out in the open
Clarified thoughts and feelings
Stimulation of interest and creativity
Stretched personal capabilities
That leaves the CSBA with very few choices: they can ignore and suppress all conflict they
encounter, or they can choose to manage it.
In the 1960s, Robert Blake and Jane Mouton conducted a series of studies on conflict and
conflict management.20 Their findings provide a framework for effective conflict
management. They created a grid on which they rated an individual's concern for self and
their concern for others. Each of these factors is shown on a nine (9) point scale. Their
research revealed that depending upon an individual's location on the grid, they were most
likely to adopt one of five (5) conflict management styles or strategies. Each strategy has both
positive and negative features.

19. Schmidt, S.M.; Conflict: a powerful process of (good and bad) change; Management Review, 1974
20. Blake, R.R. and Mouton, J.S.; The Managerial Grid; Gulf Publishing; 1964.


Figure 2-4 Blake and Mouton Conflict Management Style Diagram


Blake and Mouton discovered most people did not use the same style all the time, nor did they
stay with one style throughout an entire conflict. Instead, they observed that individuals favor
certain styles and typical strategies, but can and will vary them depending upon the
circumstances. Understanding each of the styles and how they impact conflicts provides the
CSBA with another tool for use in conflict management.
Withdrawing or Avoiding Conflict - There are a few good and a lot of less
worthwhile reasons to avoid conflict. Many IT people are heavily introverted and
find confrontation of any sort difficult to manage, so they seek to avoid any
situation in which conflict may occur. The difficulty with conflict avoidance is that
the individual or group who did not withdraw (call them K) assumes that everyone
is in agreement with their point of view, because they heard no objections. If
K is correct in their objective or plan of action, everything may be all right. However,
the individual or group which failed to object in order to avoid the conflict (call
them L) still thinks they are right (and they may be right). The act of suppressing
ideas and objections creates a certain amount of frustration and tension which may
surface in other situations. It certainly creates a lack of commitment to the approach
adopted. The organization has not had an opportunity to examine L's concept or to
take advantage of their knowledge and skills. Worse, the problem has not actually
been solved and there will be difficulties down the road.



Withdrawing or avoiding may be a good strategy if the problem will be solved by
something else, with no action required by either K or L. Likewise, if the issue is
trivial and will not have a noticeable impact, avoidance may be a good approach.
Withdrawing may also be a good choice if K will be convinced by production of
information or data that L knows exists, but does not currently have available.
Avoiding/withdrawing prevents K from settling into a hardened position from
which it will be difficult or impossible to move them. Finally, there are conflicts
which are unwinnable for L. In that case, avoiding conflict makes excellent
sense.
Smoothing - In this approach L simply gives in to K, even though they may
strongly disagree. Unlike Avoiding, some conflict does occur between L and K, but
L minimizes it and gives every impression of agreement, and often is obvious in
their desire to adopt K's solution in an effort to please K or to meet K's needs. Like
Avoiding, this approach may be successful if K is correct. If K is not correct, the
problem is still not solved. Outwardly, however, the appearance is that harmony is
restored, but like Avoiding, there may be some residual frustration on L's part at
sacrificing their personal goals, objectives or ideas.
There are times when Smoothing works well and should be the strategy of choice:
when it is a small issue, and goodwill needs to be preserved for the discussion of
the larger issues. This is especially true if the resolution of the issue is more
important to K than it is to L.
Compromising - Unlike Avoiding or Smoothing, L will obtain at least part of their
objective in a compromise solution. Compromises typically represent a less than
optimal solution for all of the parties involved; typically every party to a
compromise feels they gave away too much. This creates the same kind of
background frustration seen in withdrawing/avoiding and smoothing; but now
everyone has a piece of it, not just L. In addition, if there is a correct solution,
compromise will rarely reach it. The problem still exists and the organization
agreed to a series of steps or actions which will probably not solve it. At some point
the organization needs to either accept the cost of the problem, or invest more to
find a real solution. In that process, both K and L may lose a lot of credibility within
the organization for having adopted and possibly implemented an incorrect
solution.
Compromising can be effective when both sides want to demonstrate good faith
and a willingness to work together, and where the issue is small. Compromise can
be a good building block for cooperation on larger issues, and it can save time and
energy for dealing with those issues.
Forcing - Forcing is sometimes also referred to as competing,21 and is set up as a
win-lose situation. It is a method of interacting with others which depends
heavily on individual or group power; this power may be actual (L is the boss), or it
may be political, technical or financial power. In any case, L is committed to an
approach and nothing K can say or do will alter the situation. This is not a mirror
image of avoiding, as in this case K entered into the situation prepared to contribute
and to support their position strongly. Once firmly overruled, K has little
commitment to the result and may even be willing to sabotage the effort. Another
problem with forcing or competing is that L may not be correct, in which case
failure to heed reasonable suggestions from K may have strong negative
consequences.

21. Conflict Strategies, Organizational Design and Development, Inc., HRDQ, 2004
Forcing is a popular style in hierarchical organizations despite a negative long term
track record. On occasion, it is the correct choice; the best example of this is when
two or more approaches are being supported, no one is giving ground, and a
decision must be made quickly. In that case, someone (typically at the next level
up in the organization) says, "We'll do it this way."
Problem Solving - This is the optimal approach in most situations. It focuses not on
individual needs, wants or agendas, but on what will work best for the organization.
Problem solving focuses on the problem, taking from both K and L, those portions
of the solution which will work well. While sometimes referred to as a strategy of
integration, often the ultimate resolution looks nothing like the initial offerings, but
is something wholly new and different. Problem solving opens up the doors to
creative thinking by all participants. Problem solving will work well in complex
situations which require new or different approaches.
The downside to problem solving is that it may take longer to reach an
agreement, even though that agreement will be more productive than faster
solutions. Problem solving also asks individuals to step away from political issues
and personal career objectives; this is often not easy to do.

2.3.10.2 Prioritization Techniques
One of the challenges facing the Business Analyst is the need to focus resources on a limited
number of activities which can actually be accomplished. Such situations may involve a large
number of worthwhile activities or objectives, the results of a requirements gathering process,
or the structuring of an acceptance test plan. Helping groups to select a limited subset can be a
conflict-prone activity, when what is needed is consensus. Below are two techniques which can
help accomplish that goal.
1. Group Ranking - The objective of group ranking is to reduce the total number of
items under consideration down to a manageable number and to have the resulting
items in a priority sequence. In addition, it will help to build support for those items
from the participants in the selection process. There are several variations of this
process, of which two (the Simple Note Card Approach and the Multi-Voting Approach)
are described below; both start out with the same basic steps:
An item gathering process takes place. This may be one or more Brainstorming
Sessions, Requirements Gathering Sessions or Acceptance Test Case development


sessions. The result of the activity is a list that is not in a priority sequence
acceptable to all participants (often it is not prioritized at all), and often there are too
many options to research or act upon in the time allowed.
A consolidation/clarification session is held. In this activity each of the items is
reviewed at a high level to eliminate duplicates, consolidate components of a single
event or activity and to ensure that people understand what each item on the list is
about.
Establish a target number of priorities as the output of the process ("we want to
identify our Top 10," or Top 5).
At this point you can proceed in one of the following ways:
Simple Note Card Approach - This approach works well if the list is fairly short
(fewer than 15 items) and the number of people is fairly large (it can include 15 or
20 easily, with 5 as a minimum). Give each person 5 note cards. Participants are then
instructed to write one of their top five choices on each note card. They are further
instructed that they may not cast multiple votes for the same item. No discussion is
allowed at this time. The Facilitator gathers up the note cards and records one tick
mark next to an item for each card. The maximum number of votes any item can
receive is one per attendee.
In a typical scenario there will be 3-4 items which receive votes from the majority
of attendees; another 3-4 will receive either no or very few votes. The first group is
clearly in and the second group is clearly out. In the middle there will be 7-9
items which received some, but not complete support.
On a new list, start with the item which received the highest number of votes. Add
the others in sequence. The group now has a prioritized list of items, which is
shorter and more manageable. Often additional research is required before final
decisions are made.
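The note-card tally is easy to sketch in code. The following Python fragment is illustrative only (the function name and sample data are invented for this example, not part of the CBOK); it enforces the one-vote-per-attendee rule and returns the items in priority order:

```python
from collections import Counter

def tally_note_cards(cards):
    """Tally Simple Note Card votes.

    'cards' holds one list per participant, naming that person's top picks.
    Wrapping each participant's picks in set() means a participant can
    contribute at most one vote per item, matching the rule that no item
    receives more than one vote per attendee.
    """
    counts = Counter()
    for picks in cards:
        for item in set(picks):
            counts[item] += 1
    # Highest vote totals first, giving the new prioritized list
    return counts.most_common()

# Hypothetical session: five participants, each naming five items
cards = [
    ["A", "B", "C", "D", "E"],
    ["A", "B", "C", "F", "G"],
    ["A", "B", "D", "F", "H"],
    ["A", "C", "E", "G", "H"],
    ["B", "C", "D", "E", "F"],
]
ranking = tally_note_cards(cards)
print(ranking)
```

With this sample data the tally shows the typical pattern described above: a few items drawing near-majority support, a middle group with partial support, and a tail with few votes.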
Multi-Voting - This is a more substantial version of the same process. It is much
more effective when the number of items to be prioritized is larger and the group of
participants is somewhat smaller (10-12 is the maximum). The Facilitator divides
the total number of items by 2 and then adds either ½ or 1 to arrive at an uneven
number. (For example, 49 items divided by 2 equals 24.5; add ½ for a total of 25.
Or, 36 items divided by 2 equals 18; add 1 to equal 19.) Each participant then has
that number of votes.
Participants are then instructed to take a magic marker and place a tick mark next
to each of the items they wish to vote for; once again, only one vote per item. As
before when everyone has finished, the results are summarized on a new list; this
list will be half as long as the first list. If the original list was quite long, it may be
desirable to repeat the multi-voting process after a review of the items on the new
list.
One of the advantages of this process is that people can see what is important to
others, as voting is not quite as anonymous as the first approach.
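The vote-allowance arithmetic can be expressed compactly. This small Python function is a sketch of the rule described above (the function name is ours, not part of the CBOK): integer division by 2 plus 1 reproduces both worked examples, because it effectively adds ½ when the item count is odd and 1 when it is even.

```python
def multi_voting_allowance(num_items):
    """Votes per participant in Multi-Voting: half the item count,
    plus 1/2 (odd counts) or 1 (even counts), always yielding an odd
    number. Integer floor division captures both cases in one step."""
    return num_items // 2 + 1

print(multi_voting_allowance(49))  # 25, matching 24.5 + 1/2
print(multi_voting_allowance(36))  # 19, matching 18 + 1
```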


2. Forced Choice - Once the number of items is reduced to 7 or fewer, another
approach will be useful in creating the final priority listing. This is called the Forced
Choice Decision Making Model. In Figure 2-5 below, a direct comparison is made
between each of the items, one at a time, going across the row. The letter of the more
important of the two items is entered in the box. In this example A is more important
than B, C and E, but less important than D. The process is repeated for each row,
taking care not to repeat comparisons (this is why the lower left blocks are grayed out).

Figure 2-5 Basic Forced Choice Model


After completing the one to one comparisons, the total number of times each letter
appears in the chart is recorded at the end of that row. Note that an entry for D appears
in the E column and must be added to the D total.
Sometimes options are very close in their ratings and the decision is difficult; other
times there is a wide gap. This may mean simply counting the number of responses is
inadequate to truly convey the relationship. In that case, adding a weight for the
difference to each comparison will help to clarify the situation.

Figure 2-6 Forced Choice with Weights



In Figure 2-6 a simple 5-point scale has been used to identify how large the gap is
between the items being compared: 1 = little difference, very close; 3 = noticeable
difference; 5 = a major difference. After making the decision regarding relative
importance, the weighting is added. Note that in Figure 2-5, item A received 3 counts
and item E received 2 counts. It would be easy to assume that they are close in value. The
total weighting assigned in Figure 2-6 makes it clear that A (11 pts) is much more
important than E (6 pts). It is also clear that A is almost as important as D (12 pts).
The discussions held to create these rankings will allow all participants to understand
the relative relationship of the items under discussion and build support for the final
priority of items.
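The weighted totals are simply the sum, for each item, of the weights of the pairwise comparisons it won. The Python sketch below is illustrative (the pairwise winners and weights are hypothetical, chosen so the totals echo the A = 11 and D = 12 figures discussed above):

```python
def forced_choice_totals(comparisons):
    """Sum Forced Choice results. Each entry is (winner, weight) for one
    pairwise comparison, using the 1/3/5 difference scale. An item's
    total is the sum of the weights of the comparisons it won."""
    totals = {}
    for winner, weight in comparisons:
        totals[winner] = totals.get(winner, 0) + weight
    return totals

# Hypothetical results for items A-E (10 comparisons in all)
comparisons = [
    ("A", 5), ("A", 3), ("D", 1), ("A", 3),   # A vs. B, C, D, E
    ("C", 1), ("D", 5), ("E", 3),             # B vs. C, D, E
    ("D", 5), ("E", 3),                       # C vs. D, E
    ("D", 1),                                 # D vs. E
]
totals = forced_choice_totals(comparisons)
print(totals)  # with these sample weights: A totals 11, D totals 12, E totals 6
```

An item that never wins a comparison (here, B) simply does not appear in the totals, which is equivalent to a score of zero.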

2.4 Summary
In this Skill Category you have seen how Theories of Management evolved and the role the
attitude of management plays in the successful implementation of improvement initiatives.
The most important prerequisites for successful implementation of any major quality initiative
are leadership and commitment from executive management. We have examined the
characteristics of a work environment supportive of quality initiatives. It is management's
responsibility to establish strategic objectives and build an infrastructure that is strategically
aligned to those objectives. This can only be done when management has the necessary
information from processes to allow them to manage by fact rather than by intuition or
emotion. The creation and support of robust processes is a key ingredient of effective
management. As the CSBA works within the organization, they must be able to see where and
how management is performing well in addition to understanding those areas in which they
perform less well. This will create realistic expectations about what the organization is
capable of achieving in any specific situation.
In this Skill Category we have also looked at the management and communication skills the
Software Business Analyst must have to be able to successfully perform their job. These
include basic skills such as listening and more complex skills such as facilitation and conflict
management. Future Skill Categories will address increasingly specific areas of job skills for
the Software Business Analyst. Armed with an understanding of the environment and
communication skills, the CSBA will be able to apply those skills with maximum effect.



Skill Category 3
Define, Build, Implement and Improve Work Processes
The world is constantly changing. Customers are more knowledgeable and demanding;
therefore, quality and speed of delivery are now critical needs. Companies must constantly
improve their ability to produce quality products adding value to their customer base.
Defining and continuously improving work processes allows the pace of change to be
maintained without negatively impacting the quality of products and services. This category
addresses process management concepts, including the definition of a process, the workbench
concept, and the components of a process, which include a policy, standards, procedures (i.e., do
and check procedures), and guidelines. It lays the groundwork for this by examining the role
and contribution of various national and international quality standards. Additionally, it will
address the understanding of definitions and continuous improvement of a process through the
process management Plan-Do-Check-Act (PDCA) cycle and an understanding of the
importance of entrance and exit criteria at each stage of the PDCA.

3.1 Understanding and Applying Standards for Software Development

3.1.1 Background

The dynamic growth of the world economy following the Second World War was fueled in
part by the expansion of markets. Consumers in every part of the world became familiar with
goods and services from other countries. The globalization of the world economy was



underway. As organizations tried to be more efficient, one of the major issues that stood in the
way was the lack of standardization in processes.
A number of organizations arose to help fill this gap. One of the earliest non-governmental
organizations addressing computers specifically was the Institute of Electrical and
Electronics Engineers (IEEE).1 Created in 1963 from the merger of two much older groups of
engineers, the IEEE quickly became one of the leaders in establishing standards through a
collaborative process. Their primary focus was on the hardware elements of computers, as this
level of standardization was essential for organizations to work with each other.
During that same time a number of government agencies, especially those supporting military
establishments, saw the need for some standards and processes for developing software. There
were initiatives in the UK, USSR and USA. The results demonstrated by these initiatives
encouraged industry groups to attempt similar approaches. At first each of the groups worked
in isolation from each other. Over time an increasing level of integration occurred. Some of
the major organizations and their approaches are described below. There are numerous
organizations that have contributed in some fashion to the efforts described by these major
organizations. Some of those organizations, commercial, professional and governmental have
been consolidated, renamed or eliminated. Others still exist and are still making major
contributions.

3.2 Standards Organizations and Models


3.2.1 International Organization for Standardization (ISO)2

The ISO Certification is the most widely recognized of its kind in the world. Holders of ISO
certifications are able to operate internationally at a considerable advantage. Although the
early efforts were criticized by many, the evolution of the ISO certification products has
addressed most of the issues.
In 1987 ISO released the first set of standards, the 9000 series. These were standards for a
quality management system. These standards were written for all kinds of organizations and
have a strong manufacturing orientation; as a result, it was difficult to apply them directly to
the software development process. These standards were developed by people with a strong
quality orientation, but little technical insight; the result is a focus on what is to be done,
rather than how to do it. As such, it is not a quality control approach, but a quality
management approach.
In 1991, to address the Information Technology applicability issue, ISO released ISO 9000-3.
This guideline was oriented toward ensuring that the product delivered conformed to
1. IEEE Data
2. ISO Data



requirements. The emphasis of ISO 9000-3 is the quality control activities needed to catch
defective products. There is an emphasis on creating and maintaining records and documents
to support the processes. This aspect of the approach caused many organizations to avoid the
early implementations. In the years since the initial introduction, ISO has continued to refine
and enhance the certification platform they offer.
Organizations doing business internationally find it advantageous to be ISO Certified.
Certification is the result of an external assessment, conducted by trained assessors.
Organizations which are certified must submit to periodic reassessments to maintain that
status. The documentation and records created by the various processes are essential for
successful certification. Some organizations wish to follow the ISO protocols, but have not
been certified. These organizations often refer to themselves as ISO compliant.
Described below are a few of the key ISO standards publications of significance to those in
IT:
In ISO 9000-3 the guidelines divide the activities into three groupings of tasks:
General company and management requirements
Project and maintenance phase requirements
Supporting activities requirements
As a part of the 9000 series, ISO also published 9004-4, which addressed the concept of
quality improvement as a part of the process for producing quality goods and services.
In ISO 9000:1994, published in 1994, the 9000-3 guidelines were revised to incorporate a
much more proactive approach to quality. Quality Assurance activities were specifically
added to each phase of the development cycle.
In ISO 9001:2000, introduced in 2000, a further shift was made away from the pure quality
control of 9000-3 to the concept that management must be actively involved in the
development of quality systems. This was a major shift away from a line-focused,
documentation-heavy approach, and it enriched the quality assurance approach introduced in 9000:1994. This
publication introduced five Quality Elements to replace the list of 23 contained in 9000-3.
These are:
1. Quality Management System
2. Management Responsibilities
3. Resource Management
4. Product Realization
5. Measurement, Analysis and Improvement
As ISO received feedback from various organizations using their guidelines, they addressed
gaps and issues brought to their attention. ISO 9126 was introduced in 2000 as a standard for
evaluating software. It created a list of characteristics and sub-characteristics for quality
software:



Functionality - which includes suitability, accuracy, interoperability, compliance and security
Reliability - which includes maturity, recoverability and fault tolerance
Usability - which includes learnability, understandability and operability
Efficiency - which includes time behavior and resource behavior
Maintainability - which includes stability, analyzability, changeability and testability
Portability - which includes installability, replaceability and adaptability
ISO 12207 was published in 1995 and established a lifecycle for the development process
from conception to retirement. It focused on a negotiated acquisition with a supplier (either
internal or external) rather than a Commercial Off the Shelf product (COTS). ISO 12207 was
designed to be flexible and tailorable and provides guidelines at a fairly high level. Adopting
organizations need to develop the detail in concert to reflect the specific aims of the
acquisition.
ISO 14001 addressed the establishment of environmental policies, including planning,
environmental and regulatory assessment. The intent of 14001 was to provide guidelines for
improving the cost effectiveness and quality of the environment.
ISO 15504 incorporated much of the process assessment capability developed by SPICE
(Software Process Improvement and Capability dEtermination). This framework was
designed to support process assessment, process improvement and capability determination.
In addition to the five Maturity Levels described in SEI/CMM and CMMI, the SPICE model
adds a Level 0: Incomplete Processes. ISO 15504 focused on five process areas:
1. Customer-supplier processes
2. Engineering processes
3. Support processes
4. Management processes
5. Organization processes
ISO 20000 is the standard introduced to address the issue of Information Technology
Service Management. Based significantly on work done by the British Standards Institute,
ISO 20000 addresses how to control and manage the delivery of Information Technology
products and services. There are two areas: 20000-1 is the Specification for Service
Management and 20000-2 is the Code of Practice for Service Management.


3.2.2 Software Engineering Institute and the Capability Maturity Model - SEI/CMM and CMMI
The Software Engineering Institute established at Carnegie-Mellon University in the US
focuses on the processes required to develop quality software. The Capability Maturity Model
developed at SEI is based on the premise that maturity indicates capability and further, that to
obtain continuous improvement it is much better to take small evolutionary steps rather than
rely upon revolutionary innovations.3
In 1989 SEI introduced their five level model that included a framework for improvement.
Each of the levels addresses the following four areas:
1. Key Process Areas - a group of related activities that are performed together to achieve
a set of goals
2. Goals - a description of the environment or state that is achieved through the
implementation of Key Process Areas
3. Common Features - found at each maturity level to achieve success; examples are
commitment to perform, and measurement and analysis
4. Key Practices - elements of the infrastructure that contribute to effectively
implementing Key Process Areas.
This model described a specific environment associated with each level and the processes
needed to move from one level to the next. These are referred to as Key Process Areas
(KPAs). Each KPA must have a policy, a standard, a process description and a detailed
procedure.

Figure 3-1 SEI Capability Maturity Model (Integrated)


3. ESSI Scope Software Process Improvement Approaches, www.cse.dcu.ie/essiscope/sm5/approach;
2006



According to SEI, the initial Capability Maturity Model was designed to provide the
following:
A place to start
The benefit of a community's prior experiences
A common language and shared vision
A framework for prioritizing actions
A way to define what improvement means to the organization
As the model evolved, new versions were published, each containing slightly different
information. In 2000 CMMI, the Capability Maturity Model Integrated, was published to
reconcile the previous versions. The original CMM is no longer available from SEI.
Organizations can characterize their environment through reference to their Maturity Level.
SEI provides a certification process which allows qualified organizations to advertise their
maturity level. The SEI process, unlike ISO, is based upon a self-assessment conducted in
good faith.
Each of the levels, and the KPAs required to move to the next level, are described below:
1. Initial Level - This level is also referred to as ad hoc, chaotic or heroic. At the initial
level of maturity organizations have few good processes with the result that the
environment is not stable from one project to the next. The focus is on meeting time
and budget criteria at the expense of quality and processes, but projects often fail to
meet the schedule and budget criteria. Failures are blamed on people. Processes are
viewed as an impediment to success. Some projects are successful despite the
environment. Success depends upon heroic efforts of talented individuals and is
difficult to repeat from project to project. Level 1 organizations tend to over promise
and under deliver.
The migration from Level 1 to Level 2 is the most difficult for most organizations. It
requires a major internal shift in priorities and commitments. Often the impetus to
undertake the effort is the result of a spectacular failure or an impending calamity.
There are seven Key Process Areas which must be implemented to move from Level 1
to Level 2:
1. Requirements Management
2. Project Monitoring and Control
3. Project Planning
4. Supplier Agreement Management
5. Configuration Management
6. Measurement and Analysis
7. Process and Product Quality Assurance
2. Managed Level - At this level, the processes developed at Level 1 can be repeated by
others in the organization. Not all processes are necessarily used effectively by all



projects. The organization still focuses significantly on time and budget, but processes
are not abandoned in times of stress. Project planning is improved, so that time and
budget overruns, while still occurring, are more controlled. With the growth of project
predictability, there is improved customer satisfaction.
The migration from Level 2 to Level 3 requires the implementation of many more
process areas, but each is smaller in scope than the original seven. The fourteen KPAs
are:
1. Product Integration
2. Requirements Development
3. Technical Solution
4. Validation
5. Verification
6. Organizational Process Definition
7. Organizational Process Focus
8. Organizational Training
9. Integrated Project Management
10. Integrated Supplier Management
11. Integrated Teaming
12. Risk Management
13. Decision Analysis and Resolution
14. Organizational Environment for Integration
3. Defined Level - At Level 3 processes are fully integrated into the everyday operations
of the organization. Exceptions and deviations from published standards and
procedures are rare; tailoring the process set to meet the needs of a specific project is
the norm. Focus has shifted from schedule and budget to management by process. This
does not mean that time and budget are ignored; it does mean that how those goals are
achieved has changed. The focus is on qualitative measures of success and there is a
major improvement in customer satisfaction.
The move from Level 3 to Level 4 requires the implementation of two KPAs:
1. Organizational Process Performance
2. Quantitative Project Management
4. Quantitative Level - The introduction of quantitative measurement is the key for the
achievement of Level 4. Statistical process control provides ongoing information about
how well processes are performing, the source of deviations and process capability.
Process performance is improved in key sub-processes to achieve desired results.
Of the organizations which begin the effort to improve their process capability, few
achieve Level 4 or beyond without significant incentive. The time and effort invested
is great, and unless the reward is greater, the improvement effort will stall out. To
move from Level 4 to Level 5 requires two final KPAs:
1. Organizational Innovation and Deployment
2. Causal Analysis and Resolution



5. Optimizing Level - With processes being effectively measured and data used for
selective optimization, the Level 5 organization spends far fewer resources on failure
than its competitors. It is able to use those resources to innovate and evolve.
Information Technology (IT) becomes a strategic enabler for the organization as a
whole.
The first organization to achieve Level 5 was in Bangalore, India, in June of 1999.
Since that time there have been a number of other organizations which have been
certified at that level. A large number of those organizations are located in India,
Russia and China as they seek to capture the very lucrative off shore programming
market.

3.2.3 Six Sigma

This approach was introduced by Motorola in 1986 as a method of systematically improving
their manufacturing processes with the specific intention of eliminating defects. The Six
Sigma methodology integrated tools and techniques developed by Shewhart, Deming, Juran,
Ishikawa, Taguchi and others. Motorola added to these techniques a group of trained
individuals whose only function was to support the organization in implementing
improvements. These individuals are known as Green Belts, Black Belts and Master Black
Belts depending upon their level of training and expertise.4
Six Sigma is a systematic, data-driven approach to problem analysis and solution, with a
strong focus on the impact to the (external) customer. It relies heavily on the use of statistical
analysis tools and techniques. The approach can be implemented by organizations with a low
level of competence in this area; however, as the process matures, so will the level of
statistical complexity.
The name Six Sigma is derived from the statistical model for the standard distribution of data.
As explained in Skill Category 1, Quality Basics, data is distributed in predictable patterns.
Sigma (σ) is the symbol for a Standard Deviation. This is a measure of how closely grouped
the values created by the process are (density or tightness). Standard Deviations are measured
from the Mean. One Standard Deviation always includes 68% of all of the items in a class;
34% on either side of the Mean. Two Standard Deviations will always include 95.5% of all
the values; 47.8% on either side of the Mean. In a purely mathematical context, six sigma will
include 99.9999998% of all of the values, so that about 2 in a billion will fall outside the range.
Based upon a theory called "sigma shift," Motorola actually defines Six Sigma for their
approach as 3.4 defects per million opportunities. The difference between the two measures
is the subject of considerable discussion and some criticism regarding the naming.5
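These coverage figures follow directly from the standard normal distribution and can be checked with a short sketch (the function names below are illustrative, not from the CBOK). The one-sided tail at 4.5 sigma is what underlies Motorola's shifted 3.4 defects-per-million figure:

```python
from math import erf, sqrt

def within_k_sigma(k):
    """Fraction of a normal distribution within +/- k standard deviations."""
    return erf(k / sqrt(2))

def upper_tail(z):
    """Fraction of a normal distribution above z standard deviations."""
    return 0.5 * (1 - erf(z / sqrt(2)))

print(f"+/-1 sigma: {within_k_sigma(1):.1%}")   # ≈ 68.3%
print(f"+/-2 sigma: {within_k_sigma(2):.1%}")   # ≈ 95.4%
print(f"+/-6 sigma: {within_k_sigma(6):.9%}")   # ≈ 99.9999998%
# Motorola's 3.4 DPMO assumes the process mean drifts by 1.5 sigma,
# leaving a one-sided tail at 6 - 1.5 = 4.5 sigma:
print(f"DPMO with 1.5-sigma shift: {upper_tail(4.5) * 1e6:.1f}")  # ≈ 3.4
```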
At the heart of the approach are two key methodologies, DMAIC and DMADV. The first is
used to improve existing systems and the second to create new processes or products.

4. Motorola University, What is Six Sigma (http://motorola.com/content) 2006


5. Wheeler, Donald J., PhD, The Six Sigma Practitioner's Guide to Data Analysis; SPC Press



3.2.3.1 DMAIC

DMAIC is an acronym based upon the following five steps:


1. Define process improvement goals that are consistent with enterprise strategies and
customer demands.
2. Measure the current process performance and collect data for future use (establish a
baseline).
3. Analyze all factors to verify relationships among them and identify causality where it
exists.
4. Improve or optimize the process based upon the analysis.
5. Control the process to correct variances before they can cause defects. Pilot the
process to establish capability; implement and measure. Compare the new process
measures with the established baseline and improve controls where needed.
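The Measure and Control steps above can be roughly illustrated with a short sketch. The sample data and names below are invented for illustration: the baseline is established first, then 3-sigma control limits flag variances before they become defects:

```python
from statistics import mean, stdev

# Hypothetical baseline measurements gathered in the Measure step
# (e.g. process cycle times); the numbers are invented.
baseline = [50.2, 49.8, 50.1, 50.0, 49.9, 50.3, 49.7, 50.0]
m, s = mean(baseline), stdev(baseline)
ucl, lcl = m + 3 * s, m - 3 * s   # upper/lower control limits

def out_of_control(value: float) -> bool:
    """Control step: flag a measurement outside the 3-sigma limits."""
    return not (lcl <= value <= ucl)

print(f"baseline mean={m:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
print(out_of_control(51.5))   # well above the UCL, so flagged
```

New measurements from the improved process would be compared against these same limits, tightening the controls as the process stabilizes.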

3.2.3.2 DMADV

DMADV is the other acronym based upon these five steps:


1. Define process improvement goals that are consistent with enterprise strategies and
customer demands.
2. Measure and identify factors that are critical to quality (CTQs); measure product
capabilities and production process capabilities; analyze risks.
3. Analyze the problems to identify alternative solutions, create and evaluate high-level
designs to select the best design.
4. Design details, optimize the design and verify.
5. Verify the design, set up and run pilot, implement process.

3.2.3.3 Six Sigma Fundamentals

Eight Six Sigma fundamentals must be understood and employed for a successful
implementation:
1. Information Integrity is essential to all of the Six Sigma activities. Systems and
Processes which produce incorrect or inaccurate data, or fail to produce data in a timely
fashion will cause major problems.
2. Performance Management must be based on contribution to the organization's
growth and profitability. It must include both financial and non-financial data, such as
that in the Balanced Scorecard, to be meaningful.



3. Sequential Production which is streamlined and focused on timely delivery of goods
and services is key, especially in manufacturing environments. The implications for the
unstructured development approach used in many IT organizations are clear.
4. Point-of-Use Logistics place information and materials in the locations where they
will be used. On-line access to requirements, standards, policies and procedures should
be provided, as opposed to hard copies maintained in remote locations.
5. Cycle Time Management focuses on processes and process improvement rather than
fire-fighting. This approach, adopted by many of the Agile Methodologies, allows
organizations to estimate accurately and deliver products on time.
6. Production Linearity keeps work moving at a constant pace, rather than planning for
or allowing surges to catch up on work. Project plans that fail to allow the necessary
time to do the work required, create surges which are costly and error prone.
7. Resource Planning entails not only downsizing when needed, but upsizing also. A
failure to have the appropriate resources to complete critical tasks in a timely fashion
creates further scheduling bottlenecks.
8. Customer Satisfaction requires interaction with real customers to understand both
their needs and their perceptions. In the case of customers, perception is reality.

3.2.4 Information Technology Infrastructure Library (ITIL)

Originally developed in the 1980s in the UK by the Central Computer and
Telecommunications Agency (CCTA), ITIL is intended to help IT organizations deliver high
quality information technology services. The material was published in a 30 volume set that
described service delivery best practices. As product development technology has
improved, there has been a strong interest in the parallel improvement in service delivery.
There are strong similarities between the information and approaches offered in ITIL and
those published by IBM in the early 1980s, especially in the Service Support volume. The
material initially published was intended for large main-frame centric processing centers.
Subsequent revisions have addressed smaller scale IT operations.
ITIL comprises nine books:
1. Service Delivery
2. Service Support
3. ICT (Information and Communications Technology) Infrastructure Management
4. Security Management
5. The Business Perspective
6. Application Management
7. Software Asset Management



8. Planning to Implement Service Management
9. ITIL Small-Scale Implementation

3.2.5 National Quality Awards and Models

The role of national quality models and awards is significant in the adoption of processes
within a specific country. Because of increasing international trade, the adoption of a
model or award within one country will impact others within its local trading region.
The earliest, and one of the most prestigious awards, is the Deming Prize established in 1950
and administered by the Union of Japanese Scientists and Engineers (JUSE). As the global
economy developed, more countries adopted quality models and awards in an effort to assure
their continuing success in the world economy. The European Quality Award and the
Malcolm Baldrige National Quality Award have been widely adopted by other organizations
as a starting point for their own quality efforts. These, and the other awards listed below as
well as others like them, provide a methodology for assessing performance and are heavily
process oriented.
Canadian Award for Excellence established 1984
European Quality Award established 1999
Malcolm Baldrige National Quality Award established 1987
Premio Nacional da Qualidade (Brasil) established 2000
Rajiv Gandhi National Quality Award established 1991
Russian Quality Award of the Government established 2000

3.2.6 The Role of Models and Standards

The role of the various international standards and models described in this Skill Category is
to provide an organization with a road map or template to use in understanding and improving
processes. Every organization must start with who they are and what they wish to accomplish
in order to determine which, if any, of the available models will help them achieve their goals
and objectives.
Many organizations find that no one model meets all of their needs precisely. Instead they
choose the standard or model that provides the best fit. Sometimes it will be a combination of
several. No matter what model is chosen, it will only be useful if there is a solid management
commitment for the time and effort needed to make it real.


3.3 Process Management Concepts


3.3.1 Definition of a Process

A process is a set of activities that represent the way work is to be performed. According to
The Concise Oxford Dictionary it is "a course of action" or "a series of stages in the
manufacture of a product." Webster's Dictionary defines it as "a system of operations in
producing something. A series of actions, changes, or functions that achieve an end result." A
more technical definition, found in IEEE Standard 610, is "a sequence of steps performed for a
given purpose." One final useful definition comes from Hammer and Champy: "a collection
of activities that takes one or more kinds of input(s) and creates an output that is of value to
the customer."6

3.3.2 Why Processes are Needed

Processes provide a stable framework for activities that can be replicated as needed. Creating
processes is natural. In our personal lives, we often call our processes habits or routines.
These are good things that help us to manage the daily complexity of our lives. Most people
have a morning routine for workdays; when they get up, a sequence for getting dressed and
getting something to eat before leaving for work. This routine, worked out over the years,
ensures that we do not arrive at work late, hungry, and still in our night clothes.
For the CSBA, processes provide guidance on what is to be done and when. They can provide
insight into potential problems and roadblocks; they allow the analyst to learn from the
experience of others. All work activities begin at what is referred to in the Maturity Models as
the ad hoc approach level. Each repetition of the cycle is different.
In the going-to-work example above, consider what happens when starting to work for a new
employer: the first day (cycle), anxious not to be late, we get up so we can leave extra early,
arrive only to find that no one is there to let us into the building and get started with work; we
are too early (wasted time resource). The second day (cycle) we leave 25 minutes later. We
encounter much more traffic than we did on the first day, and as a result arrive late to work
(schedule slippage). The third day, based on prior experience, we leave 10 minutes earlier
than the second day, and arrive on time despite the traffic.
At this point, a prudent person knows what time he or she needs to leave for work, barring
external forces, in order to arrive on time. It becomes a part of the morning process. Some
people, however, will continue to leave at different times resulting in daily cycle time issues;
they continue to operate in ad hoc mode. Their productivity and that of those around them will
suffer.

6. Hammer, Michael and Champy, James; Reengineering the Corporation: A Manifesto for Business Revolution; HarperBusiness, 1993



Software development processes are, of course, much more complex than the simple example
above. The number of variables, even on a small project, will be large. The traffic encountered
on projects is other projects, each with their own schedules and resource requirements.
Without processes to help plan the route, getting our project completed correctly, on time, and
in budget will be a matter of good luck rather than good judgment.

3.3.3 Process Workbench and Components

A process may incorporate one or more workbenches.

Figure 3-2 Workbench Concept


The components found in every workbench are discussed in the following sections.

3.3.3.1 Policy

The policy defines the objective to be achieved. Policies are high-level documents that guide
and direct a wide range of activities, and answer the basic question, "Why are we doing this;
what is the purpose of this process?" A policy should link to the strategic goals and objectives
of the organization and support customer needs and expectations. It will include information
about intentions and desires. Policies also contain a quantifiable goal.
Examples of policy statements:
We will be perceived by our customers as the IT Services provider of choice, as
measured by the Widget Industry Annual Ranking of IT Service Providers.
This is a very high level policy statement and while it does provide information
about intention and desires, it will require further definition to be actionable. Most



policy statements should not be focused at this level, although some are needed at
the top level of the organization. A more specific policy statement is shown below.
A Joint Application Design (JAD) is conducted to identify customer requirements
early and efficiently, and to ensure that all involved parties interpret the
requirements consistently.
A policy statement at this level is very actionable. While some definitions are
required to nail down words such as "early" and "consistently," there is no doubt
of the intent or the actions required.

3.3.3.2 Standard

Standards describe how work will be measured. A standard answers the question "what":
What must happen to meet the intent/objectives of the policy? Standards may be developed
for both processes and the deliverables produced. These standards may be independent of
each other only to the extent that the standards set for the process must be capable of
producing a product which meets the standard set for it. A standard must be:

Measurable - It must definitively establish whether or not the standard has been met.

Attainable - Given current resources and time frames, it must be reasonable to comply with the intent of the policy.

Critical - It must be considered important/necessary in order to meet the intent of the policy.

There are two types of standards that can be developed to support a policy: Literal and
Intent. A literal standard is directly measurable. An intent standard indicates what is desired.
An example of each is shown:
Literal Standard - JAD sessions will be completed prior to project submission to
the IT Steering Committee for final project funding approval. This standard
clarifies what is meant by "early." It is directly measurable on a binary scale (Yes, it
is complete / No, it isn't).
Intent Standard - All Requirements identified and agreed to in each JAD session
must be documented, reviewed, and approved by the JAD participants immediately
after the completion of that session. The standard set for work completion here
supports the attainability of the first standard cited.


3.3.4 Procedures

Procedures establish how work will be performed and how it will be verified. There may be a
significant number of procedures to be performed in any single process. A process may
require a lengthy period of time for completion. Substandard or defective work performed
early in the process can result in schedule-breaking delays for rework if not caught promptly.
For this reason there are two types of procedures which will be discussed in the following
sections.

3.3.4.1 Do Procedure

How work will be performed. Each Do procedure addresses the question: How must the work be performed? The Do procedures define how tasks, tools, techniques, and people are applied to perform a process. These tasks, when properly performed, will transform the input to the desired output; that is, one that will meet the established standard(s).
Examples of Do procedures developed to support standards are:
A JAD Scribe will be appointed for each JAD Session. This responsibility will rotate among all of the participants in the session. Notice that this procedure is about resources: who is to do what. Resource allocations are a critical component of effective procedures; a task that is everybody's job is nobody's job.
The JAD Scribe will use the Option Finder tool to document and display
requirements as they are identified. This procedure and the one that follows are
about techniques to be used to achieve the standard set for having participants agree
to the results of the session.
The JAD Scribe will use the Option Finder tool to display the complete list of
requirements gathered in the session once all requirements have been identified and
documented. Taken together, these procedures will make it possible to comply
with the standard and achieve the intention of the policy. It is essential to have a
methodology for ensuring that this has happened.

3.3.4.2 Check Procedure

How the completed work will be checked to ensure it meets standards. A quality control
method, when integrated with the Do procedures, will enable the product producer to
ascertain whether the product meets product standards and/or whether the intent of the policy
has been achieved. These control methods may include reviews, walk-throughs, inspections,
checklists, and software testing.
Examples of Check procedures developed to support standards are shown below:


Guide to the CABA CBOK

The JAD Moderator will facilitate the resolution of areas of disagreement prior to
the end of the session. By incorporating the check step into the flow of the
procedure, discrepancies can be identified and addressed immediately.

At the conclusion of the session, the JAD Scribe will read each requirement as it is displayed. The JAD Moderator will request a positive affirmation from each participant that they agree with, and accept, the requirement as stated. This, then, is the check procedure, ensuring the requirements are agreed to by all the participants.

3.3.5 Process Categories

Procedures are the building blocks of processes. Just as there is more than one kind of
procedure, there is also more than one kind of process. Each kind of process plays a role in
ensuring that each product satisfies the policies and standards identified by the organization.
In many projects, there are multiple processes underway concurrently. While this is an
excellent way to maximize the time resource, it can create other problems. Maintaining the
proper relationships among the processes is a part of the Management Process.

3.3.5.1 Management Processes

The management process includes all of the activities necessary to ensure that the other
processes are able to accomplish their objectives. The most familiar of the management
processes is project management. It includes the planning, resource acquisition, allocation,
and control necessary for project completion.

3.3.5.2 Work Processes

The work processes are the activities that create products. These include the Requirements
Elicitation Process, the High-Level Design Process and the Code Development Process.
Depending upon the size of the project, some work processes are quite large and time
consuming. Because of this, there is often a temptation to overlap processes as mentioned above. Except in cases where the process so specifies (such as Agile), beginning work on coding before all of the requirements are complete will result in a rise in defects and in the rework they require.

3.3.5.3 Check Processes

The check processes are activities that are designed to identify defects or non-conformances
in products created. The Testing Process is the best known of the checking processes, but
there are others. Change Control is a checking process designed to verify that changes have
been properly made and authorized. There is a circular relationship between work processes



and check processes, as many of the check processes have work processes that result in products that must be checked! An example of this is the creation of test cases (a product set) designed as a part of the check process (testing). Reviews and inspections of test cases then are a check process on those products.

3.3.6 The Process Maturity Continuum⁷

In Figure 3-3, it is possible to see the interrelationship among processes, services, business
partners and business practices as the organization matures. The bottom row represents the
Level 1 organization, with its focus on time and budget, an ad hoc approach to processes, and
personnel policies that focus on heroes.

Figure 3-3 The Process Maturity Continuum


At this level there is significant conflict with other parts of the organization as a result of
frustration over uneven delivery of products, both in terms of time and quality. Resources are
severely constrained and as a result of the emphasis on end-of-cycle quality control, products
often are delivered with little or no testing. Often in this environment, measurement is avoided
for fear that it will be used as a weapon against the IT organization. Through the
implementation of the Key Process Areas (KPAs) shown in 3.2.2 on page 3-5, there is a major
change in the organization.
As the organization develops a more process-oriented management style (Level 2), each of the areas reflects the impact of a more stable environment: better definition of the work to be done and of the processes needed to achieve the organization's goals. A key result of the implementation of the Requirements KPA is an improved relationship with other parts of the organization. Processes are now well supported by the organization and, as a result, product variability is reduced. Importantly, there are now measurements in place to support the
7. Quality Assurance Institute (QAI), Process Maturity Continuum, 2003



improvements being achieved. Some of these measures are now subjective, as the
organization becomes more comfortable with the kinds of information it can generate.
Continued progress in implementing effective processes, and moving from a Level 2
organization to a Level 3 organization has a major impact on employees. The heroes of Level
1 have grown into a staff with a wide range of competencies. Managers understand more
clearly the competencies needed to achieve project and product goals. This leads to personnel
hiring and project assignments based on competencies. It also triggers a surge in training
activities. The creation of well-defined, stable processes lays the foundation for an effective
use of the defect data gather from various check processes and procedures. Product delivery is
much more predictable in terms of time, cost and quality.
As the process orientation continues to mature to Level 4, individuals who were hired because they possessed certain competencies are now compensated for demonstrating those competencies, rather than merely having them. Defect data is now sufficiently reliable that
processes can be said to be within statistical control with regard to common causes.
Measurement activities embedded in processes provide information about process capability
that can be used by managers to make fact-based decisions.
At Level 5, sound, reliable processes, used by skilled personnel with a wide range of
competencies provide the IT organization with the ability to be pro-active in the application of
technology to business problems. Level 1 organizations often spend as much as 70% of their
time in non-productive activities. Level 5 organizations have reduced this number
dramatically, and are able to use that time to provide greater value to the organization.

3.3.7 How Processes are Managed

Skill Category 1 provided an introduction to some of the tools used for managing processes: the Control Chart, the Scatter Diagram and the Histogram. Section 3.2.3, Six Sigma, provided a hint about the uses of those tools. Fundamental to the management of processes is the concept: "What you can't measure, you can't manage."
Regardless of the intention, the general management approach is the same:
Step 1: Align the Process to the Mission
Step 2: Define a Process Policy
Step 3: Define Input Standard(s)
Step 4: Define a Process Standard(s)
Step 5: Define Output Standard(s)
Step 6: Define Do Procedures
Step 7: Define the Check Procedures
Step 8: Integrate the Do & Check Procedures



3.3.7.1 Align the Process to the Mission

To manage anything requires that there be an objective. To manage processes, organizations must first agree upon the purpose of managing those processes. For some organizations, the purpose will be simply to understand what the process is doing; for some, to improve those processes; for others, to maintain the processes; and occasionally the purpose will be to render obsolete or eliminate a process.
Regardless of the reason for managing a process, the first step is to ensure that the intent of the
process is aligned with the Mission of the organization. Failure to perform this basic step may
lead to processes in conflict with the overall direction of the organization.

3.3.7.2 Identify the Process and Define the Policy

The next step in managing an existing process is to gather the existing information about the
process. At the outset, many of the pieces of information will be incomplete, incorrect or
missing entirely. Over time it will be possible to find, correct and complete the information
gathered initially. The following is a list of the kinds of information to collect:
Name and Description - While this may seem basic, all too often a single process
will be called by two, three or even four different names or titles, depending upon
who is referencing it. Alternatively, the same name may be applied to more than
one process. Attaching a description of what the process is will help to identify any
conflicts in nomenclature. By establishing a single, agreed upon, name and
description, the process is already improved. Historically, IT has tended to apply
numeric designations to processes. While this does help keep process identification
unique, it does little to help people understand and remember what the process is
about. Stick with real words for names and add a numeric designation if the
organizations standards require it.
Policy, Purpose and Ownership - The purpose statement for the process should identify how it supports specific policies and standards within IT. The Process Policy answers the question, "Why are we doing this?" Processes with no apparent connection to policies and standards should be examined closely to determine if they actually contribute anything. Not all processes are worth doing.
Each process must have an owner. That owner should be identified by both
position and by name. Over time either or both may have changed. It may be
necessary to find a new owner for the process. Ownership cannot just be randomly
assigned because it carries with it significant responsibility. Individuals must
accept ownership of a process. Processes that no one is willing to own should also
be looked at carefully. The problem may be with the process; it may be unwieldy,
unpopular or extremely contentious. These are issues that must be addressed
eventually. A process that no one is willing to own is a candidate for elimination.
Inputs and Outputs; Suppliers and Customers - Although there are a few
originating processes in the IT development lifecycle, most processes will receive



inputs from predecessor processes. Each of these inputs, along with the name and
description of the supplying process, must be identified. For each input, there must
be a standard that describes what that product must do or be. This is sometimes
referred to as the entry criteria. Products which do not meet this standard are of no
value to the process and should be rejected.
For some processes there is only a single input from a single source, but often
processes will receive the same kind of input from different sources. For example,
input to business requirements will be provided by members of the business
community; technical requirements are often provided by members of the IT
Operations staff. The individuals and organizations that provide input are process
suppliers. In development processes, the exact names and departments of the
suppliers will often change from project to project and may need to be identified
generically.
Every process produces something. It may produce a single output or multiple outputs. Each output must be named and described. It is important to identify who receives the product; in some cases it will merely be another process, but usually there is an individual or job function associated with that receiving process.
The products created by this process must also meet a standard if they are to be of
value to the customer. These standards are sometimes referred to as the exit
criteria. Accepting products that fail to meet the entry criteria will significantly
increase the difficulty in achieving the exit criteria.
Sub-Processes and Procedures - Most organizations begin identifying the major
processes at this level of detail. As the practice of learning about processes expands,
more detail will be available. Even in the initial stages, it is necessary to identify the
sub-processes and associated procedures that create the process flow. If, for
example, the process being described is "Create a Project Plan," one sub-process would be "Create a Test Plan." All of the major sub-processes should be included in
such a way that the process flow is clear. A process with more than five to seven
major sub-processes is generally too complex to be well managed. There may be
multiple layers of sub-processes. Do not attempt to include them all in the major
process.
Likewise, there will be many supporting procedures. It is not necessary to include
the full detail of all of the Do and Check Procedures. Include a reference to where
that detail is to be found.

3.3.7.3 Evaluate Process Development Stage

Processes evolve through time, especially in organizations that are moving from an ad hoc
environment to one managed by processes. Processes are created to meet a need. The initial
iteration may be less than satisfactory; that leads to changes in the process. At some point the
process achieves an organizationally acceptable level of performance and the level of change
drops. The process is at least marginally stabilized. This does not mean that the process is a
good one, only that the organization is willing to live with the results it produces.


Over time the process will evolve; new sub-processes are added to address new development
needs; old sub-processes are dropped as they become obsolete. If the organization engages in
some process improvement activities, the process may have been improved. If the
improvement process was extensive enough, the process may have become an optimized
mature process.
Major processes are generally not tied to specific technologies, platforms or environments;
that connection is generally at the sub or sub-sub-process level. For that reason, major
processes rarely become obsolete. To manage anything, it is important to understand where it
is now. Identify and record where along the process development lifecycle the process is now.

3.3.7.4 Determine Current and Desired Process Capability

Process capability is the measure of what the process is able to produce. It generally includes
a gross production amount, a defect rate and a time scale. For example:
The Requirements Inspection Process can inspect 20.0 requirements per hour. One
defect in 50 will escape detection in the Inspection process.
The User Manual Creation Process can create 12 pages of documentation per Analyst, per Day. One major and four minor errors are created for every 40 pages of the manual.
To have this kind of information, organizations must have some kind of on-going measurement process. For the information to be reliable in any specific instance, the production process must be under control. This means that the amount of variability in the process has been reduced to the point that results fall within a range of three standard deviations around the mean; defects occur outside that range. Three standard deviations (3σ) include approximately 99.7% of the entire population. Restated, out of 1000 opportunities, only about three will be excluded. In order to know that, data must be collected from multiple iterations of the cycle and recorded consistently.
To return to the example of the Requirements Inspection Process, each time there is an
inspection, the time spent, the number of requirements inspected and the number of defects
found will be recorded. In order to know the process failure rate, defect data must continue to be collected throughout the life of the project.

INSPECTION DATA

Project    # Requirements    Hours    Defects    Escaped
Name       Inspected                  Found
A          214               12       32         1
B          402               20       84         2
C          47                3        11         0
D          135               6        38         1
E          369               18       91         2
F          95                5        27         0
G          117               6        32         1
H          187               9        52         1
I          256               12       74         1
Totals     1822              91       441        9

Inspection Rate: 20.02 requirements/hour    Escape Rate: 2.04%

Figure 3-4 Requirements Inspection Data


The information from the example might have come from a chart like Figure 3-4. While the
data contained in the chart could (and should) be used for other purposes, the specific intent of
this chart is to examine the effectiveness of the Requirements Inspection Process. By
manufacturing industry standards, this is a very small sample; for Information Technology
processes, it is a reasonable initial sample.
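The summary rates at the bottom of Figure 3-4 are simple ratios: the inspection rate is total requirements inspected divided by total hours, and the escape rate is escaped defects divided by defects found. As a minimal sketch in plain Python, with the Figure 3-4 values hard-coded:

```python
# Figure 3-4 inspection data: (requirements inspected, hours, defects found, escaped)
inspections = {
    "A": (214, 12, 32, 1), "B": (402, 20, 84, 2), "C": (47, 3, 11, 0),
    "D": (135, 6, 38, 1),  "E": (369, 18, 91, 2), "F": (95, 5, 27, 0),
    "G": (117, 6, 32, 1),  "H": (187, 9, 52, 1),  "I": (256, 12, 74, 1),
}

total_reqs = sum(r for r, h, d, e in inspections.values())
total_hours = sum(h for r, h, d, e in inspections.values())
total_found = sum(d for r, h, d, e in inspections.values())
total_escaped = sum(e for r, h, d, e in inspections.values())

inspection_rate = total_reqs / total_hours        # requirements per hour
escape_rate = total_escaped / total_found * 100   # percent of found defects that escaped

print(f"Inspection rate: {inspection_rate:.2f} req/hr")  # 20.02
print(f"Escape rate: {escape_rate:.2f}%")                # 2.04%
```

Note that the escape rate here is computed against defects found (9/441), matching the 2.04% in the chart; "one defect in fifty will escape" is the same figure stated informally.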
With 40 or more occurrences of a process recorded, it is easier to use tools such as the Control Chart to examine what is happening. Establishing reliable standard deviations for small groups of items requires the use of different techniques, such as T scoring, which are explained in most statistical analysis courses and textbooks. One of the issues with small groups is that the mean and the standard deviation are easily changed by the addition of new data. Figure 3-5 is a generic Control Chart for a process that is under control.

Figure 3-5 Standard Control Chart


Each individual occurrence of a process or activity is plotted along the line. There is a time flow from left to right; this means that the most recent activity will be shown at the right side of the chart. When a problem occurs, it is then possible to put the event in a time context. This is important for determining causation.

INSPECTION DATA

Project    # Requirements    Hours    Defects    Escaped
Name       Inspected                  Found
A          214               12       32         1
B          402               20       84         2
C          47                3        11         0
D          135               6        38         1
E          369               18       91         2
F          95                5        27         0
G          117               6        32         1
H          187               9        52         1
I          209               7        28         7
J          256               12       74         1
Totals     2031              98       469        16

Inspection Rate: 20.72 requirements/hour    Escape Rate: 3.41%

Figure 3-6 Revised Inspection Data Chart


Figure 3-6 shows a revision to the original Data Chart. A new entry (I) has been added. The inspection results for this occurrence are significantly different from most of the others shown. Rather than one in fifty defects escaping (2%), in this instance one in four defects escaped (25%). This is a large enough change that, in the small sample used, it changes all of the result rates. In a larger sample, it might be very difficult to find the table entry that is the source of the problems. The control chart does this much more effectively. In Figure 3-7 it is very easy to spot the problem event.

Figure 3-7 Control Chart with out of Control Event


Once the out-of-control event has been identified, a quick look at the supporting data chart
will point the analyst in the right direction. In this case, the amount of time spent in the
Inspection Process was well below the organization standard. The result was that fewer
defects were found and more escaped.
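The arithmetic behind spotting an out-of-control point can be sketched in a few lines. The snippet below is illustrative only (the CBOK does not prescribe an implementation): it derives 3-sigma control limits from the baseline per-project escape rates of Figure 3-4 and flags any new observation that falls outside them, as the 25% escape rate of the new Project I would be.

```python
import statistics

# Baseline per-project escape rates (percent, escaped/found) from Figure 3-4.
baseline = [3.13, 2.38, 0.0, 2.63, 2.20, 0.0, 3.13, 1.92, 1.35]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)      # sample standard deviation
ucl = mean + 3 * sigma                  # upper control limit
lcl = max(mean - 3 * sigma, 0.0)        # a rate cannot be negative

def out_of_control(rate: float) -> bool:
    """True when an observed escape rate falls outside the 3-sigma limits."""
    return rate > ucl or rate < lcl

print(f"UCL = {ucl:.2f}%")
print(out_of_control(25.0))   # the new Project I observation -> True
print(out_of_control(3.1))    # a typical in-control observation -> False
```

Note the design choice: the limits are computed from the historical baseline, not from a sample that already contains the suspect point; including an extreme outlier in the limit calculation inflates the standard deviation and can mask the very event being sought.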



Events causing process variation are classified in one of two ways: common cause and special
cause.
Common Cause - Common cause is an event that is a part of the process, such as
the amount of time spent in the inspection process in the example above. Common
cause items are within the control of those using the process.
Much of the work done in CMM Level 3 and Level 4 is removing common causes
of variability from processes. Common causes are often easy to identify early in
the improvement process; they are things that people already know about how the work is done. For example, if the Design Estimation Sub-Process consistently fails
to yield accurate estimates, the staff may already know that the estimates are being
developed before the requirements are significantly completed. Correcting this
part of the process is within the control of the organization.
As each common cause is identified and eliminated, the process variability
decreases; the results are increasingly within the predicted and acceptable standard
range of performance. Common causes are often responsive to training efforts and
more effective communication regarding the true priorities. At some point the
relative cost-benefit of identifying and removing additional common causes
becomes too small to pursue further.
Special Cause - Special cause is an event outside the process. It may or may not be
within the control of the organization. A hurricane that causes damage to the facility
which results in many days when people are unable to come to work would be an
example of a special cause. This might cause the Design Estimation Sub-Process to
yield inaccurate estimates, but the organization cannot control the event.
Even before organizations gain control of common causes, they often attempt to
identify potential special causes and predict the potential impact. This is a part of
the project risk management process.
Not all special causes are as dramatic as a hurricane; they may include regulatory
changes, staffing changes, and environmental factors such as power outages or icy
roads.
To return to the example of the Inspection of Requirements, it would appear from the data presented in Figure 3-4 and Figure 3-6 that the process is working well at an inspection rate of 20 requirements per hour. It would also appear that it works significantly less well at an inspection rate of 30 requirements per hour (Project I). From the perspective of the designers, coders and testers, inspecting at a rate of 20 requirements per hour should produce the desired level of quality. Going faster leaves more defects to be found and fixed later in the life cycle.
As the project manager responsible for allocating resources, it is highly desirable to be able to
inspect more requirements per hour. Given that there are between four and six individuals in
each inspection, inspections are very labor intensive. Being able to increase productivity, even
slightly, releases significant resources for other uses. Perhaps a rate of 23 or 24 requirements per hour will yield the same benefit using fewer resources. Work will need to be done to establish what the actual desired capability threshold should be.



The point of data such as this is that it provides good information about what the current
processes are capable of doing. It would appear that the Requirement Inspection Process is
working well when properly used. It would also appear that the current Requirements
Gathering and Documentation process is not as good as it could be; one in four requirements
contains a defect. Based on this information, the organization may decide that the
Requirements Gathering and Documentation process needs to be analyzed to find out how it
can be improved to achieve the desired level of accuracy.
Based upon their current position, this organization might develop a target performance for
the process of one in ten requirements containing a defect. This would be a very significant
improvement in the current process. Large gains are possible early in the control process; later
it will be smaller incremental gains.

3.3.7.5 Establish Process Management Information Needs

At the outset of the effort to manage processes it was necessary to grab whatever information
was available. Often the measures available in the beginning are those that show the process
in the best possible light. They may not be the correct measures for determining how well the
process is actually functioning.
While learning about the process capability, it may have become clear that additional
information is necessary. If new sub-processes have been added or existing sub-processes
modified, it will be necessary to develop new measures for them.
The primary question to be answered is: "What do we need to know about this process to determine how well it is working?" When answering this question, it is essential to remember that there may be more than one answer to the question. There are often multiple stakeholders for a single process.

3.3.7.6 Establish Process Measurement, Monitoring and Reporting

Staff, along with first and second level management, needs to have detailed information that
will allow them to successfully complete the tasks at hand - using the processes provided.
They need to know what kinds of errors are occurring and under what circumstances; this is
tactical information. Senior staff and managers need to know what the organizational status
and capabilities are; this is strategic information.
Tactical Measures - Tactical measures are collected at the detail level by the
individuals performing the process. These might include the number of test cases
developed for a specific project and the amount of time required to do so. It might
be the actual number of test cases in the suite and the number that have actually
been executed. This is progress information. If the standards state that 100% of the
suite must be executed, the number actually executed provides information about
where we are in the process. As testing progresses it will also include information
about how successful the test execution has been (97% pass rate; 8 failed test cases
to be researched.)



This information about the process is very important to the project manager and the other members of the business and IT development teams working on the project. This information is often the result of sub-process and procedure information.
Strategic Measures - These measures are provided to senior managers to allow them to make appropriate decisions. They are at a much higher level of aggregation and cover a much wider perspective. Senior managers may need to know that 95% (19 out of the last 20) of Acceptance Testing projects using the process have been completed within the time originally estimated, and with residual error rates of less than 1 per 250 requirements. This level of information allows IT senior management to negotiate effectively for following the process and allocating the correct amount of time.
Once the appropriate measures have been identified, it is necessary to determine
the responsibility for collecting data from the process performers, analyzing it and
reporting it.
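As an illustration of how tactical measures roll up from raw execution data, the sketch below derives the progress and quality figures a project team would report; the function name and the counts are invented for the example, not drawn from the CBOK.

```python
def tactical_measures(total_cases: int, executed: int, passed: int) -> dict:
    """Derive tactical progress and quality measures from raw test-execution counts."""
    failed = executed - passed
    return {
        "pct_executed": round(100 * executed / total_cases, 1),  # progress vs. the 100% standard
        "pct_passed": round(100 * passed / executed, 1),         # quality of what has been run
        "failed_to_research": failed,                            # defects awaiting analysis
    }

# Hypothetical project snapshot: 300 test cases in the suite, 270 run so far, 262 passed.
snapshot = tactical_measures(total_cases=300, executed=270, passed=262)
print(snapshot)
```

A strategic measure would then be an aggregation of many such snapshots across projects (for example, the share of projects finishing within estimate), reported at a level of detail suitable for senior management.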

3.3.8 Process Template

Process templates are designed to contain all of the information above, plus any organization-specific requirements. By creating electronic templates and storing them in an easily retrievable location, everyone involved in creating, documenting, assessing, revising and using processes will be able to do so. The use of templates also serves as a way to save time and energy when developing process information. It will ensure that the vital pieces of information are collected and stored.
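A process template can be represented electronically in many ways. The sketch below models the fields discussed in 3.3.7.2 as a simple Python dictionary; the field set and all example values are illustrative, not QAI's official template.

```python
# Illustrative process template capturing the information called for in 3.3.7.2.
process_template = {
    "name": "Requirements Inspection",            # one agreed-upon name
    "description": "Peer inspection of documented requirements",
    "policy": "Defects will be removed as early in the lifecycle as possible",
    "owner": {"position": "QA Manager", "person": "TBD"},  # ownership must be accepted
    "inputs": [
        {"product": "Documented requirements",
         "supplier": "Requirements Elicitation Process",
         "entry_criteria": "Reviewed and baselined by the author"},
    ],
    "outputs": [
        {"product": "Inspected requirements with defect log",
         "customer": "High-Level Design Process",
         "exit_criteria": "All major defects resolved"},
    ],
    "sub_processes": ["Plan inspection", "Inspect", "Rework", "Follow up"],
    "procedure_refs": ["DO-INSP-01", "CHK-INSP-01"],  # pointers to Do/Check detail
}

# Verify the vital pieces of information are present before the template is stored.
required = {"name", "description", "policy", "owner", "inputs", "outputs", "sub_processes"}
missing = required - process_template.keys()
assert not missing, f"Template incomplete: {missing}"
```

Storing templates like this in a shared repository makes the completeness check automatic: a template missing a vital field is rejected before it is filed.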

3.4 Process Mapping


In order to understand how work is actually accomplished, it is important to know how
various components of a process work together and how they work with other processes. The
approach for gathering and organizing this information is called process mapping. Creation of
a process map can require a significant amount of time and energy. It is generally performed
at the beginning of a major process improvement cycle.
Process maps look very similar to flow charts. They use many of the same symbols; however, there is a key difference: in process mapping, time always flows forward. There are no loops in a process map. Time begins in the upper left of the process map and flows to the lower right. Where there is a rework or error activity, it will be shown along with the time consequences. This approach allows the map to visually demonstrate significant issues with process capability.
Earlier in this section, the need to document and understand process capability was discussed.
What is missing from the picture is the information about how the results are achieved. The
identification and linking of sub-processes together to form a process is an important step, but



only the beginning. In order to understand the process, it will be necessary to have much more
detail. There are four levels to most major processes:
1. Unit
2. Activity
3. Task
4. Procedure

3.4.1 Unit

Units are the major activities of a process. Most processes have 3 to 5 units. These units taken
together sequentially should provide a general, but comprehensive, end-to-end description of
the process; each unit relies upon the result of a previous unit for its input. This end-to-end
description should be adequate for a comparative novice to be able to understand what this
process is about.
Each unit should be at about the same level of detail. If one unit is "Send battleship to Gulf," "Buy toothpaste" is not at the same level of detail. If the process appears to have 10 or 15 units, look at the level of detail to see if some are much more detailed than others. It may also turn out that what are listed as two or three units are actually one unit that has not been clearly named.
A change in setting or location signals the end of one segment and the beginning of another;
setting is important in Process Mapping. One effective method of determining what
constitutes a unit is to use the geographical location (setting) of the activities. This allows a
clear break and can help in identifying trouble spots.
Units should be named as either subject-verb or verb-subject ("Bill Processing" or "Process Bills"); this will improve readability and reduce redundancy and confusion. Units may well
cross functional lines, beginning in one department, passing through another before
terminating in a third. It is important to know this information; it will be used to create the unit
level map.
To further improve understandability and traceability, units should be numbered sequentially through the process, as well as named. Units contain no "how to" or detail-level information. The units also flow from left to right until there are several in one area; at that point the flow moves sequentially down the page from top to bottom.


Figure 3-8 Unit Level Process Map


Units are connected in the map by directional arrows. Each unit now has a unique number. With this map it is possible to see the big picture very clearly. Each of these units contains many components. This is analogous to the movie plot summary: boy meets girl; boy wins girl; boy loses girl; boy wins girl back. Although the entire plot is present, we have no idea how and when these things happen. When seeking to understand a process we need more detail; therefore we need to break units down to a lower level of detail. A single unit does not cross locations, although multiple units do occur in a single location.

3.4.2 Tasks

Tasks are the building blocks of units. Units may have many or few tasks, but taken together those tasks form a complete, drill-down description of the work to be done to successfully complete a unit. Tasks may rely upon the completion of previous units or tasks for their input; however, tasks need not be interdependent. A single trigger may initiate several tasks which run concurrently but do not require interaction. For example, a school bell ringing may be the trigger for teachers and students to go to their classrooms; however, classes do not begin until the next bell rings and the teacher arrives. Students may arrive (with penalty) after the beginning of class.
Tasks should be named following the convention used for units (either verb-subject or subject-verb); whichever format is chosen should be used consistently. In addition, each task will be numbered in a way that allows it to be properly associated with both the unit it is part of and the other tasks involved in the completion of the unit. Because not all of the tasks may have been correctly identified at this point, using some form of word processor or spreadsheet is recommended. Anticipate that there will be changes to the task list which may require modifying the numbering system.
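The unit-and-task numbering convention described above can be sketched as a small data structure. This is only an illustration; the unit and task names used here are invented examples, not taken from the CBOK.

```python
# Illustrative sketch of the unit/task numbering convention.
# Unit and task names are hypothetical examples.

class Task:
    def __init__(self, number, name):
        self.number = number  # e.g., "4.5" ties the task to unit 4
        self.name = name      # verb-subject form, e.g., "Correct Defects"

class Unit:
    def __init__(self, number, name):
        self.number = number  # units are numbered sequentially: 1, 2, 3, ...
        self.name = name
        self.tasks = []

    def add_task(self, name):
        # Tasks are numbered within their parent unit: 4.1, 4.2, ...
        task = Task(f"{self.number}.{len(self.tasks) + 1}", name)
        self.tasks.append(task)
        return task

unit = Unit(4, "Test Software")        # hypothetical unit 4
unit.add_task("Execute Test Cases")    # numbered 4.1
unit.add_task("Correct Defects")       # numbered 4.2
print([t.number for t in unit.tasks])  # ['4.1', '4.2']
```

Keeping the numbers derived from the parent unit, rather than hand-assigned, makes renumbering after task-list changes a mechanical operation.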
The focus of tasks is often at the group or department level of the organization. Where the unit level defines the "who" very broadly, at the task level it will be more specific; often the "who" will be a department name or a job title. Tasks are still at the "what" level, not the "how" level.

Figure 3-9 Task Level Map with Delay


In Figure 3-9, the "who" is defined by the various functional titles shown at the top of the map. This allows for the existence of multiple individuals performing the same set of tasks. If this is the case, it is essential that task descriptions be verified with multiple incumbents; if only one source is used, it is entirely possible that some steps will be missed or described differently.

In an organization with little standardization of its processes, it is entirely possible that each incumbent does things differently. This in itself is an important finding for the mapping activity. Before the process is automated, it will be essential to identify which version of the process is most successful and to ensure that everyone is prepared to accept that process. However, during the initial stages of mapping, the task level map should reflect the most common and consistent set of activities.
Also in Figure 3-9, notice that item 4.5, "Correct Defects," is flagged with a large "D." This indicates a place in the process where there is a Delay. A Delay is any time period in which no work on this task is occurring. Delays extend the total amount of time required to complete the process without adding value. In the example, defects are accumulated and returned to Development on a weekly basis, and corrections are returned bi-weekly. While this batch error-handling methodology may simplify the tracking of individual defects, it can create considerable wasted time in the process. One of the goals of Business Process Management (BPM) is to identify delays and remove them from the process.

Where interviews indicate significant, unexplained delays, the CSBA should suspect that another process, transparent to the interviewee, is taking place. It is important to identify that process and add it to the map.

3.4.3 Actions

Actions are the building blocks of tasks. It may require several actions to complete a single task. Each action relies upon the input from a previous task or action. Actions are usually focused on steps taken by a single individual, and action descriptions are very specific.

Up until now the mapping process has been fairly streamlined and uncluttered, but at the action level it is necessary to begin adding more detail. These maps begin to look much more like the traditional data flow diagrams with which IT professionals are familiar. An action level map may begin with a trigger which is the end of a previous action; the off-page connector symbol is often used to reflect this.
In the action map there will be decisions: activities which may create multiple activity paths. One typical set of activities seen on the action map is the error loop. Error loops should always be flagged with an "R" to reflect that rework is occurring at this point. Rework is any set of activities which are repeated in order to completely and correctly finish an action or task. Rework does not add value. Figure 3-9, already identified as having a built-in delay, also has a rework cycle, "Correct Errors."

This is one major difference between a data flow diagram and a process map: because the business process map is concerned with time, the loop moves forward in time rather than being shown reentering the process at the beginning. In addition to removing Delays, another goal of the finished process is to remove as much Rework as possible. Separating Rework from other types of Delays is important because rework typically requires some other action to be inserted earlier in the process to prevent it.

Because actions are specific, their names should contain more information than the verb-noun syntax that has been used for units and tasks. Action steps are often accompanied by forms or documents which are used in the completion of the action.
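The distinction drawn above between value-added work, Delays ("D"), and Rework ("R") can be shown with a rough cycle-time tally. The task names and hour figures below are invented for illustration only.

```python
# Rough tally of how Delay ("D") and Rework ("R") extend cycle time.
# Task names and hour figures are invented for illustration.

tasks = [
    {"name": "Inspect Requirements",  "hours": 12, "kind": "work"},
    {"name": "Wait for Weekly Batch", "hours": 40, "kind": "delay"},   # "D" flag
    {"name": "Correct Defects",       "hours": 6,  "kind": "rework"},  # "R" flag
]

totals = {"work": 0, "delay": 0, "rework": 0}
for task in tasks:
    totals[task["kind"]] += task["hours"]

cycle_time = sum(totals.values())
# Delay and rework extend the cycle without adding value.
non_value_added = totals["delay"] + totals["rework"]
print(f"{non_value_added / cycle_time:.0%} of cycle time adds no value")  # 79%
```

Tallying the two categories separately matters because, as noted above, removing rework usually means inserting a preventive action earlier in the process, while removing a delay usually means changing the handoff itself.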

3.4.4 Procedures

Procedures are the building blocks of actions. In 3.3.4 on page 3-15 of this Skill Category, the two types of procedures, Do Procedures and Check Procedures, are discussed. Detailed procedures do not form a part of the finished process map, but it is essential that they exist and are properly documented. Early in the mapping process, it is good practice to ask individuals to bring samples of the procedures they use. If they are unable to do so, or if the procedures lack detail and clarity, a separate project may be required to develop the needed procedures.

3.4.5 Role of Process Maps

Creating a process map is an excellent place to begin work when processes fail to deliver the
needed or expected results. Business Analysts are often key players in mapping projects.
Because of their unique knowledge and skills, they can see more of the big picture than most
individuals. They often are very knowledgeable about both the strengths and weaknesses of
existing processes.

3.5 Process Planning and Evaluation Process


While 3.3.7.2 on page 3-19 addresses evaluating the relative maturity of the process, this section presents a process for ensuring that any process is properly structured for successful execution. To achieve this, a process must contain all of the key process improvement steps first identified in Skill Category 1, Quality Basics. By embedding the PLAN-DO-CHECK-ACT methodology into the process, ongoing improvement activities always occur rather than requiring special direction or approval.

3.5.1 Process Planning

The first portion of any process should be the planning. There should be discrete and identifiable planning activities associated with each cycle of the process. For example, in the Requirements Gathering Process, the planning portion would include items such as:

• Identify what areas of the organization will have requirements for this project
• Identify who within those areas will have the knowledge and authority to provide requirements
• Determine what process will be used for gathering requirements (individual interviews, JAD, etc.)
• Identify how long it will take to complete those activities
• Identify who will be able to verify requirements
• Identify who will be able to approve the final list of requirements


These explicit planning steps, included in the process activities, reduce the likelihood that key
planning activities will be missed.

3.5.2 Do Process

The Do portion of the process should explicitly perform all of the functions addressed in the process definition. The majority of the detailed Do and Check Procedures discussed earlier will be performed here. In evaluating the process, it should be clear how the desired result will be achieved.

The check procedures are those shown in Figure 3-2, the Process Workbench. They are performed here to ensure that each component of the product is built correctly, and there are rework procedures to address components that are not correct. For example, the Acceptance Testing process may specify that each functional requirement be tested at the upper and lower boundary levels; each time a test case fails one of these boundary tests, it will be recycled for correction.

For most IT organizations, the documentation on processes is focused on the Do Processes. In evaluating this portion of the process, it is important to ensure that the proper Check Procedures also exist.

3.5.3 Check Processes

At the end of each iteration of the process, the Check Processes ensure that the relative success of that iteration, in toto, is assessed. By performing the Check Procedures, it is possible to determine whether the process in the aggregate provided the desired result.

In the Acceptance Testing example shown in Figure 3-9, it would be perfectly possible to have a perfect result on boundary testing and still deliver a significantly flawed product (i.e., the operation was a success, but the patient died). By assessing the total results of the process at a macro level, it is possible to identify where improvements are needed.

The best known and least used tool for this type of assessment is the Post-Implementation Review. In many organizations this step is nominally part of the process but is dropped due to time constraints. The time constraints are real, but they are often the result of the failure to learn from previous mistakes; it becomes a self-perpetuating cycle. New technologies make it possible to conduct effective post-implementation reviews for large projects in about four hours.

Phase-end reviews that require the product to meet certain criteria before moving to the next phase are an intermediate approach. To be effective, the standards must be real and organizationally enforced. This means that the kind of process measures discussed in 3.3.7.5 must be effectively implemented.



In evaluating the process, it is important to be able to explicitly identify those places where
measurements are being collected. If they do not exist, they must be developed.

3.5.4 Act Processes

For most organizations, the process ends with the Check step. To improve the performance of
the process, it is necessary to explicitly describe what happens when it yields an unacceptable
result. This entails identifying not only what will be done, but also who will do it.

3.5.4.1 Process Improvement Teams

The establishment of standing Process Improvement Teams helps to address the question of "Who do we tell?" The Process Improvement Team is composed of a group of qualified and interested individuals who are assigned the guardianship of a particular process. They report to the Process Owner identified in the Process Model.

The process team receives the information from each iteration of the process. A single substandard evaluation may not produce a flurry of activity; rather, it is assessed to determine how the result was produced. Among other things, the team will need to know:

• Was the team properly and fully trained in how to use the process?
• Was the process followed correctly?
• If there were deviations from the standard process, were they the cause of the problems?
• Have other teams had the same or similar results when following this process?

Once this information has been collected and analyzed, a decision can be made on whether or not changes need to be made to the process.

3.5.4.2 Process Improvement Process

The Process Improvement Team, based on its analysis, may recommend that the process be changed. This will follow the same PDCA approach as has been used before: the team will identify the proposed change, conduct a pilot (test), evaluate the results, and either change and re-pilot or implement.
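As a rough sketch, the pilot/evaluate/re-pilot loop just described might look like the following. The `pilot_change` and `evaluate` callables are hypothetical stand-ins, not part of any defined CBOK process.

```python
# Sketch of the team's PDCA loop: pilot a proposed change, evaluate the
# results, and either implement, or revise and re-pilot.
# The pilot_change and evaluate callables are hypothetical stand-ins.

def improve_process(proposed_change, pilot_change, evaluate, max_cycles=3):
    change = proposed_change                            # Plan: the proposed change
    for _ in range(max_cycles):
        results = pilot_change(change)                  # Do: conduct the pilot (test)
        accepted, revision = evaluate(results, change)  # Check: evaluate the results
        if accepted:
            return change                               # Act: implement the change
        change = revision                               # Act: revise and re-pilot
    return None  # abandoned after too many unsuccessful pilots

# Toy usage: a "change" is just a number; pilots succeed once it reaches 3.
adopted = improve_process(
    proposed_change=1,
    pilot_change=lambda change: change,
    evaluate=lambda results, change: (results >= 3, change + 1),
)
print(adopted)  # 3
```

The point of the sketch is simply that the loop is bounded and explicit: a change is never implemented without a pilot, and a change that keeps failing its pilots is eventually abandoned rather than forced through.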


3.6 Measures and Metrics


The need to create and maintain effective measures and metrics has been stressed throughout this Skill Category; the need to measure processes and to use those measurements has been a consistent theme. It is now time to take a closer look at these topics. Earlier in this category, one of the three truths of measurement appeared:

What you cannot measure, you cannot manage.

Only when there is accurate, ongoing information about what is happening in a process (or anything else) is it possible to establish control over the process. This is a well-known, often-quoted maxim. All too often the other two truths are not considered, with catastrophic results. The other two truths are essential to understanding and creating effective measurement systems:

What you measure is what you get.

The measured should establish the measure.

The second truth, while offered as the outcome of years of observation and study by a wide range of individuals, is also easily observable by individuals and organizations. People do the things they are measured on. If a programmer is paid by lines of code, he or she will produce many lines of code. If a Business Analyst is measured on the number of test cases created, she or he will create many test cases. In either case, if this is the only measure, the quality may be poor and the utility zero, but the expected volume is created. Because of this, it is very important to understand both the expected and the potential results of implementing specific measures.

The third truth is about the acceptance and the validity of the data being created. If those being measured do not accept or agree with the standard set for the measure, it will be manipulated. If the data about their process does not come from them, the producers will reject it as suspect or unfair.

3.6.1 Definitions

3.6.1.1 Measure

A measure is a single attribute of a product or process. A measure must be defined in terms of a standard, such as a liter, a kilometer, or an hour. A measure should meet these two characteristics:

• Reliability - A measure must be consistent. If two or more individuals performed the same measurement process, they should achieve the same result each time. The old carpenter's adage, "Measure three times, cut once," addresses the potential unreliability of measures performed by individuals.

This definition of a measure is called a standard unit of measure. This is the same concept a country uses when it establishes standard units of measure for distance, such as kilometers or miles, and for weight, such as kilograms or pounds. In providing this example, it is important to recognize that some countries use miles and others use kilometers; without a standard base, the results are not directly comparable, and if the actual base is unknown, comparison is impossible. Likewise, not all information services groups use the same definitions for measures such as hours or defects. Without standard units of measure, comparison within or between organizations is not practical unless detailed information is provided about the development and definition of the standards used.

• Validity - The degree to which a measure actually measures what was intended. Often it is difficult to obtain a direct measure of the desired product or process, so indirect measures are adopted. Lacking direct measures of productivity, some organizations measure the amount of time individuals spend at work. This indirect measure has little validity as a measure of productivity, but it is commonly used.
3.6.1.1.1 Measure Examples

A measure is a quantitative description of a single attribute of a product, service, or process. For example, hours would be a measure associated with a process; defects could be a measure associated with a product; and customer satisfaction could be a measure associated with a service. If these measures are standardized, then every time they are used the measure will have the same meaning. As such, it can be used effectively to manage by fact. Examples of measures include:

• Lines of code
• Time to write (work effort)
• Number of go-to verbs used
• Number of defects

The measurement of an hour as a unit of time to complete a work task is subject to interpretation unless it is well defined. For example, an hour could be a paid hour or a worked hour: an organization might pay someone for eight hours during a day, but the individual may actually work ten hours or seven. To create valid and reliable measures, it is essential to know whether "hours" means hours worked or hours paid for. Likewise, a defect can mean many things to many people. If a user forgets to provide a requirement, is that a defect? The number of defects will be impacted by the precise definition of a defect.

Without standard units of measure, collected quantitative data are not comparable. For example, if one project does not calculate person-hours in the same way another project does, those numbers are not comparable. Assume that Project One collects all hours worked, regardless of whether they are paid or unpaid hours, and Project Two collects only paid hours. The net result might be that Project One appears less productive than Project Two, when in fact Project One may be more effective than Project Two, the difference being that the two projects used different units of measure.
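The Project One versus Project Two pitfall can be shown numerically. All figures below are invented, assuming Project One logs every hour worked while Project Two logs only paid hours.

```python
# Numeric sketch of the Project One vs. Project Two pitfall described
# above. All figures are invented; assume both projects delivered the
# same functionality but defined "hours" differently.

project_one_fp = 100          # function points delivered
project_one_hours = 1250      # ALL hours worked, paid or unpaid

project_two_fp = 100
project_two_hours = 1000      # paid hours only

rate_one = project_one_fp / project_one_hours  # 0.08 FP per hour
rate_two = project_two_fp / project_two_hours  # 0.10 FP per hour

# Project One *appears* less productive, but the two rates are built on
# different units of measure, so the comparison is invalid.
print(f"Project One: {rate_one:.2f} FP/hour, Project Two: {rate_two:.2f} FP/hour")
```

The arithmetic is trivial; the point is that neither rate is wrong in isolation, yet comparing them is meaningless until both projects agree on what an "hour" is.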



3.6.1.2 Metrics

A metric is a combination of two or more measures. Given the previous examples of measures, a common metric is kilometers or miles per hour. A metric will provide a rate at which something is occurring; Six Sigma, for example, provides a rate at which defects occur per million opportunities. A metric must meet the same two criteria as a measure: it must be reliable and it must be valid.

While the reliability of most measures is fairly easy to establish, the calculation of some metrics can be quite complex, and demonstrating reliability becomes increasingly important. When using a metric, it is essential to demonstrate not only that the calculation of the metric is reliable, but also that each of the underlying measures is reliable. The criterion, that two or more individuals using the same measurement and calculation process would consistently achieve the same result, still applies.

When examining the validity of measures, the example of hours at work as a measure of productivity was used. Simple measures often fail to achieve complex purposes. A metric, number of hours to create a function point, combines two simple measures: hours worked and function points created. The combination of the two may provide better information about productivity if the underlying data are good.
3.6.1.2.1 Metrics Examples

The chart below was used earlier in this Skill Category to illustrate the use of process control charts. What was missing from that discussion was explicit consideration of the collection and calculation of the data.

The Inspection Rate is a metric; it is composed of two measures: 1) hours spent and 2) requirements inspected. The second metric is the Escape Rate, which compares the number of defects that escaped detection to the number found. For each of the four individual measures taken, the kind of definitional work described in 3.6.1.1 Measure and 3.6.1.2 Metrics must have been completed for the measures to be reliable and valid. Once validated, these two metrics can be used for a wide variety of project planning and management activities; they will support decision-making based on facts.

INSPECTION DATA

Project Name | # Requirements Inspected | Hours | Defects Found | Escaped
------------ | ------------------------ | ----- | ------------- | -------
A            | 214                      | 12    | 32            | 1
B            | 402                      | 20    | 84            | 2
C            | 47                       | 3     | 11            | 0
D            | 135                      | 6     | 38            | 1
E            | 369                      | 18    | 91            | 2
F            | 95                       | 5     | 27            | 0
G            | 117                      | 6     | 32            | 1
H            | 187                      | 9     | 52            | 1
I            | 256                      | 12    | 74            | 1
Total        | 1822                     | 91    | 441           | 9

Inspection Rate: 20.02 requirements inspected per hour (1822 / 91)
Escape Rate: 2.04% (9 defects escaped / 441 defects found)



Using the same data, it is possible to develop other metrics. One such metric would be defects escaped compared to the number of requirements inspected (0.5%, or 1 in 202). Is this a meaningful metric? What is it intended to tell the organization? If the intent is to provide information about the quality of the requirements being provided to designers following inspection, it would be very effective. If the intent is to measure the quality of the requirements gathering process, it is ineffective (as is the requirements gathering process prior to inspections)!

Most of the data for this chart comes from those performing the inspections: how many requirements were inspected, how long it took, and how many defects were found. Only the last item, Defects Escaped, would be collected from others later in the life cycle.
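As a check, the metrics discussed above can be recomputed directly from the chart data:

```python
# Recomputing the metrics from the inspection chart above; the
# per-project figures are the ones shown for projects A through I.

requirements_inspected = [214, 402, 47, 135, 369, 95, 117, 187, 256]
hours                  = [12, 20, 3, 6, 18, 5, 6, 9, 12]
defects_found          = [32, 84, 11, 38, 91, 27, 32, 52, 74]
defects_escaped        = [1, 2, 0, 1, 2, 0, 1, 1, 1]

# Inspection Rate: requirements inspected per hour spent inspecting.
inspection_rate = sum(requirements_inspected) / sum(hours)

# Escape Rate: defects that escaped detection versus defects found.
escape_rate = sum(defects_escaped) / sum(defects_found)

# The alternative metric discussed above: escapes per requirement inspected.
escapes_per_requirement = sum(defects_escaped) / sum(requirements_inspected)

print(f"Inspection Rate: {inspection_rate:.2f} requirements/hour")  # 20.02
print(f"Escape Rate: {escape_rate:.2%}")                            # 2.04%
print(f"Escapes: about 1 in {1 / escapes_per_requirement:.0f} requirements")  # 1 in 202
```

Note that each metric is just a ratio of two validated measures; the definitional work on the underlying measures is what makes the ratios trustworthy.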

3.6.2 Types and Uses of Measures and Metrics

3.6.2.1 Strategic or Intent Measures and Metrics

These measures and metrics are created, collected, and applied at the top levels of the organization. They address global issues for the organization and provide information for long-term decision-making. Some of the results of these measures may become a matter of public record through the filing of financial reports with various government agencies. Not all strategic measures are the result of government requirements; some result from the organization's desire to know where it stands in the marketplace. Most of the other kinds of measures and metrics are derived from the strategic or intent measures. Examples of strategic measures are listed below:

• Shareholder measures and metrics include financial indicators such as Return on Investment (ROI), Earnings Per Share, and Period-to-Period Results. Not all shareholder measures are directly financial; management goals and objectives are included in the strategic measures group, and these should be quantified and measured.

• Customer measures examine the relationship with the customer. One popular measure is customer satisfaction. Other customer measures often include market share for products or services and customer retention rates. These provide key information to the senior management team about what is working in the marketplace and what is not.

• Employee measures and metrics examine key factors, from a management perspective, in the staffing of the organization. Typically these will include items such as turnover rates, payroll and head count change, and, increasingly, the costs of employee benefits (retirement and healthcare).

• Community measures examine the organization's relationship with both the local and the extended community. They may include tracking of public service hours, charitable contributions, and compliance with federal, state, and local regulations.



3.6.2.2 Process Measures and Metrics

Process measures and metrics are essential to the long-term health of the organization. They provide information about how well the organization can perform the basic functions needed for success, and they can provide both strategic and tactical information.

• Process measures examine the process directly to determine how effective it is. Measures such as cycle time, resources consumed, process budget and cost, and schedule achievement are examples of direct process measures.

• Output measures focus on the capability of the process. How much work can be accomplished to achieve the customer's needs? Can that work be done at a consistent rate?

• Outcome measures look at how well the product or service performs once in the hands of the customer. In a sales-oriented organization, this measure is often focused on returned or rejected products. Where the customer is part of the same organization, it may be measured in terms of hours of help desk support required or problem tickets created post-implementation.

3.6.2.3 Efficiency Measures and Metrics

Efficiency measures and metrics are focused on doing things right. In this context, "right" means that the process is followed, there are a minimum of delays, resource costs are minimized, and waste is eliminated. Cycle time, already identified as a process measure, is also a good efficiency measure. Earned Value, discussed in Skill Category 1, is another set of efficiency measures for the organization. Efficiency measures reward the highest productivity-to-resource ratio; they are tactical measures. When only efficiency measures are used, organizations can become very good (that is, efficient) at delivering products and services that do not meet the customer's expectations.

3.6.2.4 Effectiveness Measures and Metrics

Effectiveness measures and metrics are focused on doing the right things. In contrast to efficiency, effectiveness is focused on the outcome. To be effective, the product must be right: it must meet the customer's needs and it must function properly. Effectiveness is a tactical measure that rewards the highest customer satisfaction. When only effectiveness measures are used, customers will be delighted, but the organization may fail to make a profit on the products or services provided.



3.6.2.5 Size Measures and Metrics

Size measures and metrics are the backbone of the measurement system. Size measures provide a scale against which other things can be measured. Earlier examples looked at counting requirements; the number of requirements can become a size measure for proposed systems. An alternative to this approach is to use Function Points to size a system: instead of counting requirements, External Inputs and Outputs, Internal Logical Files, and so on are counted. As with any measure, it is essential that size measures meet the reliability and validity criteria; manipulation of the data means any resulting decisions will be flawed. As described earlier, simple measures can be combined into a wide variety of useful metrics.
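As an illustration only, an unadjusted Function Point count might be sketched as follows, using the commonly published IFPUG average-complexity weights. A real count rates each item as low, average, or high complexity, and the component counts below are invented.

```python
# Hedged sketch of an unadjusted Function Point count using the commonly
# published IFPUG average-complexity weights. A real count rates each
# item low/average/high; the component counts below are invented.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts):
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

# Invented counts for a small hypothetical system.
counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}
print(unadjusted_function_points(counts))  # 162
```

Because every counter applies the same published weights to the same component definitions, the resulting size measure is comparable across projects in a way that raw requirement counts often are not.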

3.6.3 Developing Measures and Metrics

3.6.3.1 Responsibility for Measurement

Responsibility for measurement rests with management at all times. The line manager (first-level manager) is responsible for ensuring that the work of the individuals supervised is measured and that the measurement process is performed correctly and consistently. Management is responsible for clearly identifying the intent of any measures and metrics to be developed, for validating that the proposed measures and metrics accomplish that purpose, and for ensuring that properly collected information is handled appropriately with regard to security and confidentiality.

Management can, and should, delegate the performance of various measurement tasks to individuals working for them. The delegation process may be embedded in detailed Do procedures; this ensures that everyone using a specific procedure will collect the same data in the same way (reliability).

3.6.3.2 Responsibility for Development

Responsibility for the development of individual measures and metrics is often delegated to a group of qualified staff members. The Business Analyst may be involved in defining and developing appropriate measures and metrics for all of the various functions they perform within the organization. Because of their wider view of the product and its impact on the organization as a whole, Business Analysts may also be involved in helping other areas develop meaningful measures. Others who are often involved in developing cross-functional measures include project managers and quality assurance analysts.

The development of measures and metrics requires a solid understanding of the work being performed and the consequences of that work. This is why measures need to be developed by the measured and applied consistently by all who perform that work process. For example, the procedures need to clearly identify what a test case is and what it contains: is it a single scenario or all of the scenarios? Individuals unfamiliar with testing will not appreciate the difference that failure to standardize the counting of these items will make in the final result.

Once measures have been developed and agreed upon by those knowledgeable in the process, they should be piloted or tested prior to full-scale implementation. As with any other process, the pilot provides the opportunity to validate the process and procedures, identify and correct any deficiencies, and gather support prior to a full-scale release.

3.6.3.3 Responsibility for Analysis and Reporting

Those who create the individual data records often have little time to perform the necessary analysis. At the line level, functional managers or supervisors may perform the initial data collection and analysis themselves. This provides an opportunity to assess how well their portion of the organization is doing with respect to these measures; good managers want to have this information available to themselves before it goes to someone else.

For measures and metrics that are accumulated at a roll-up level, there is typically a management position responsible for ensuring that the roll-up and analysis are performed on a consistent and timely basis. This function is often assigned by management to individuals or departments with a broad understanding of many aspects of the organization. Some organizations have a Measures and Metrics function assigned to the Quality Assurance area; others use staff resources wherever they can find them. Business Analysts usually possess both the knowledge and the skills needed to perform the analysis and are therefore often tapped for the job.

Reporting may take the form of a simple distribution of the current period's data, with comments on significant variations, to all contributors and their chain of command. At the upper end of the reporting scale are electronic slide shows that contain past and current period performance data, trend analysis, improvement recommendations, and so forth, delivered to senior organization (both IT and Business) management. The form and content of the reporting activities do not have any direct correlation to the commitment to Management by Process; it is only what management does, or causes to be done, to address issues that reveals the commitment to management by process, or the lack of it.
An increasingly popular form of reporting is the IT Management Dashboard. This is composed of organizationally determined key measures that are reported regularly in a standard format. A dashboard typically contains not fewer than four nor more than nine measures or metrics. The risk of having too few items on the dashboard is that, with only a few items to maximize, the results can be manipulated by staff members determined to be successful. The risk of having too many items is that it is difficult to assimilate them all and form the necessary conclusions about the level of performance. There is no one dashboard that is correct for every organization.
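The four-to-nine guideline could be enforced with a trivial check like the following sketch; the measure names used are invented examples, not a recommended dashboard.

```python
# Trivial sketch of the "not fewer than four nor more than nine" rule
# for dashboard content. The measure names are invented examples.

def validate_dashboard(measures):
    if not 4 <= len(measures) <= 9:
        raise ValueError(f"Dashboard has {len(measures)} items; expected 4 to 9")
    return measures

dashboard = validate_dashboard([
    "Customer Satisfaction",
    "Defect Escape Rate",
    "On-Time Delivery",
    "Cycle Time",
])
print(len(dashboard))  # 4
```

The guard captures both failure modes described above: too few items invites gaming, too many defeats assimilation.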

3.6.3.4 Common Measures for Information Technology

There are no industry wide standards for measuring IT performance. As discussed earlier,
many organizations use the same ratios, but the definition of the components may differ
wildly. The Quality Assurance Institute Research Committee surveyed QAI member
companies to identify the 20 top producer/provider metrics and then surveyed to determine
how QAI member companies ranked those 20 metrics in importance to their measurement
program.
The definition of provider/producer is: "Represents organizational entities involved in
delivering IS products and services to the customer. The multiplicity of potential suppliers to
producers and producers to providers should be transparent to the customer." For this reason, in
this phase of the research study, the "provider" represents all entities responsible for
developing and delivering information services.
The top 20 producer/provider metrics, as they were ranked by QAI member companies are:
1. Customer Satisfaction - A subjective measure of the quality of product or service,
versus agreed-to level or expectation.
2. Accuracy (of results) - Percentage of problem results requiring action categorized by
severity of impact on the customer.
3. System Reliability - Defined confidence level that the product or service will be
correctly operational when needed.
4. Completeness (of implemented requirements) - Number of functions delivered on
time versus the number of functions expected and agreed upon by the customer.
5. Availability (of resources) - Percentage of time the system is available versus the
scheduled available time.
6. Maintainability - Actual maintenance cost in relationship to the size of the system
(e.g., hours per KLOC or number of function points).
7. Usability - Ease with which the customers can use the system, which can be expressed
as the number of problem logs attributed to user interface problems as a percentage of
total problem logs.
8. Timeliness of Output/Response Time - Percentage of scheduled reports/outputs and
system availability delivered on time, compared to the total number of reports/outputs
and system availability delivered.
9. Efficiency (of functionality) - Percentage of time required to process all functionality
of a system versus expected time (designated processing window).
10. Defect Density - Number of defects in relationship to size of product (e.g., defects per
KLOC or function points) at a specific point in the life cycle (e.g., development,
operational, test, etc.).
11. Testability - Ease by which defects can be removed from systems.
12. Functional Requirements - Number of functions mutually agreed by the producer
and customers to be included in a product.
13. Conformity to Standards - Building and delivering a product that meets producer
standards.

Version 9.1

3-41

Guide to the CABA CBOK


14. Auditability - Ability to reconstruct processing, including the ability to trace data from
the original source to the final destination.
15. Documentation - Readability, completeness, usability, and ease of finding any specific
topic.
16. Interoperability (between other systems) - Effective interface with other systems,
including interface to shared data, platforms, processes, etc.
17. Portability - Capability to port the software from one system/platform to another.
18. Modularity - Independence and cohesiveness of modules.
19. Security - Ability to protect the system from unintentional and/or unauthorized access.
20. Traceability - Ability to trace functional requirements through processing.
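Several of the ranked metrics are simple ratios. As a hedged sketch (the formulas below are inferred from the textual definitions above, not drawn from a QAI standard; units and conventions are assumptions), availability and defect density might be computed as:

```python
# Illustrative calculations for two of the ranked metrics.
# Units and conventions (hours, KLOC) are assumptions for this sketch.

def availability_pct(hours_available, hours_scheduled):
    """Availability: percentage of time the system is available
    versus the scheduled available time (metric 5)."""
    return 100.0 * hours_available / hours_scheduled

def defect_density(defect_count, size_kloc):
    """Defect Density: defects relative to product size,
    here per thousand lines of code (metric 10)."""
    return defect_count / size_kloc

print(round(availability_pct(718.0, 720.0), 2))  # 99.72
print(defect_density(42, 60.0))                  # 0.7 defects per KLOC
```

As the text notes, such ratios are only comparable across organizations when the component definitions (what counts as a defect, how size is measured) are fixed in advance.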

3.6.4 Obstacles to Establishing Effective Measures and Metrics
It's easier to fail than to succeed in using measurement to manage an IT function. Installing
measurement programs in an organization not used to measurement is a real challenge.

3.6.4.1 Use in performance appraisals

The first concern that most individuals have is that quantitative data will be used in their
performance appraisal. For example, if the organization wanted to know how many defects
testers created, testers might be reluctant to provide that information. They might feel if they
acknowledged making defects in testing it would negatively impact their performance
appraisal.

3.6.4.2 Unreliable, invalid measures

Another impediment to measurement is the lack of reliability of quantitative data; for
example, two or more people do not get the same number from the same situation. This is a
fairly common occurrence in organizations where the procedures do not include specific
information on how and when to capture basic data.
Completion of time reports is a classic example of this problem. Some groups and individuals
will record the time spent on each activity on a daily basis. Others will wait until the end of
the reporting period and then try to remember how the time was spent. Failing an accurate
recollection, they will allocate the time the way they think their manager expects to
see it. In any organization where both systems are in use, calculations based on time reports
are flawed from the outset.

If the measures are not reliable and are not valid, they are not believable. When measures are
not believable, they tend to be ineffective, and in some instances counterproductive. For
example, an incorrect estimation measurement will cause people to meet that measure, as
opposed to producing quality products.

3.6.4.3 Measuring individuals rather than projects or groups of people

When people perceive measures as individual performance evaluators, they tend to
manipulate those measures to make their personal performance look better. This is not
precisely the same as the first concern, in which even group measures are feared as potential
sources of negative performance information.

3.6.4.4 Non-timely recording of manual measurement data

Another aspect of the example cited in Section 3.6.4.2 is that when people are asked to keep
manual logs, they frequently do not keep them up to date, resulting in inaccurate data. In the
time-recording example the employee at least knows how many hours must be recorded for
each period, so the total will be correct even though the details are wrong. Here, because the
incremental data was never recorded, even the totals will be wrong. Defect logs that do not
list all of the defects, and help desk records that do not reflect all of the trouble-shooting calls,
are but two common examples of this phenomenon.

3.6.4.5 Misuse of measurement data by management

This is the dreaded outcome made real: measurement data used to punish employees or
projects rather than to improve processes quickly undermines the concept of measurement.
The results of this approach are numbers manipulated to look acceptable and processes out of
control, as well as poor employee morale and high turnover.

3.6.4.6 Unwillingness to accept bad numbers

In too many instances, the individual or group responsible for reporting a problem is treated
as though they created the problem. Where this attitude exists, everyone will go to great
lengths to avoid being the person or department responsible for telling management that there
is a problem. The classic phrase for this behavior is "shoot the messenger."

3.7 Summary
This Skill Category is focused on understanding processes as they relate to the role of the
Software Business Analyst. It looks at how various national and international models and
awards provide support for process-oriented thinking. This provides a context for examining
what a process is, how it is defined, how it functions in the organization, and how to assess
the maturity of an organization's processes.
By understanding how processes inter-relate using tools such as process mapping, it is
possible to streamline and improve their performance. Applying process management
techniques such as control charts and statistical analysis allows the organization to achieve
better results. These activities rest on a strong foundation of effective measurement and
metrics.

Skill Category 4

Business Fundamentals
4.1 Concept Overview
The title "Business Analyst" places clear emphasis on understanding "The Business." When
Business Analysts were consistently derived from the line of business operations, many felt
(not necessarily correctly) that the Business Analyst would have a solid understanding of
business. As the relationship between Information Technology and their business partners has
changed, so has the ability to consistently identify and recruit knowledgeable business people
into the analyst ranks. Today it is not uncommon to find organizations in which the Business
Analyst has no line of business experience. This lack of direct business knowledge can result
in products which are incompletely or incorrectly specified, test plans which miss critical
issues and implementation strategies which are doomed to chaos.
Furthermore, in many larger organizations, functions have been segregated to the extent that
the Software Business Analyst does not perform the Business Analysis; it is done by the
Financial Analyst. For smaller firms, where individuals "wear more hats," the Software
Business Analyst will be expected to be thoroughly conversant with the business issues and
processes.
Business knowledge comes in two flavors: Financial/Accounting knowledge and Industry
knowledge. This Knowledge Category will provide specific practices and approaches for
financial knowledge and guidance on understanding, finding and applying Industry
knowledge. This CBOK uses the phrases "business" and "organization" interchangeably to
include both For-Profit businesses and Not-For-Profit enterprises, as well as Federal,
Provincial/State and Local governmental entities.
Knowledge Category 4 examines the knowledge, skills and attitudes which are fundamental to
understanding the policies, procedures and practices used to accomplish operational
objectives. The category is divided into seven areas of understanding which are then further



subdivided. Each area builds upon the preceding one; failure to understand one area will make
it much more difficult to understand the next.
Businesses and organizations exist for a purpose; the Certified Software Business Analyst
(CSBA), Certified Associate Business Analyst (CABA), and Certified Manager Business
Analyst (CMBA) must understand what that purpose is and how it is intended to be achieved
in the organization involved. Most CSBAs will be working for a "going concern," which
means that some of the most basic questions have been answered and that those answers are a
part of the culture of the organization. Although answers have been created and published,
this does not always mean that they are relevant to the organization as it exists at a specific
point in time. For this reason it is often prudent to verify that these fundamental answers are
still valid, especially for a large or complex project or when the CSBA is new to the
organization. Section 4.2 will look at what the questions are, how they are answered and by
whom.
Countless studies and articles have documented the consequences of communication failures
between Information Technology and the Business Community. Much of what is often
labeled "scope creep" is in fact the result of a failure to effectively translate an operational
need into terms that Information Technology understands. This is the heart of the Business
Analyst functionality. The CSBA must be fully bilingual, speaking both Information
Technology and Business fluently. Section 4.3 will provide a basic business vocabulary
needed to communicate successfully with the Business community in their language.
Each organization must find ways to generate funds to cover the costs of staff, facilities,
supplies, and all of the other expenses which are part of the going concern. For some
organizations covering expenses is not enough, they are intended to be able to generate a
profit for their stakeholders. The CSBA must understand how their organization obtains the
funds it needs. Part of understanding funding is understanding funding cycles, fiscal
years and calendar years, all of which can be very different. Project timing is one of the
essentials for success in many organizations. Business Analysts who are not fully aware of the
potential impact of a project on funding sources or timing may make the wrong decisions.
In Section 4.2 the CSBA learns the fundamentals of the organization. Now armed with a
broader vocabulary and an understanding of the general funding issues, it is time to look at the
external environment in more detail. Each successful organization must possess an intimate
knowledge of their customers or business partners: what do they have, what do they need,
and how can the business fulfill that need? Likewise they must understand their suppliers (who are
they, what do they offer, what do they need) and who the competition is. Business Analysts
will need to factor this information into any project assessment in order to help make the
difficult decisions on time, scope and quality.
Information Technology projects often represent the greatest potential expense and the
greatest potential for growth an organization may experience in a year. Few organizations
have the resources to execute all of the potential revenue generating or cost reducing projects
that are identified. The CSBA must be able to investigate potential projects and translate the
ideas into a format which can be used as the basis for evaluation. While the ultimate
responsibility for the final product often rests with the Project Manager, the Business Analyst
will be very involved. In organizations without a Project Management Office (PMO), more
responsibility falls on the Business Analyst to perform these functions.

Businesses do not exist in a vacuum. For many organizations, both large and small, there are a
myriad of legal and regulatory issues which create boundaries. The effective Business Analyst
will be well aware of Industry specific issues and the legal issues. These issues, individually
or taken in concert, can radically alter what is possible, what is necessary and when things
must be done.
Cash flow projections alone are not enough to determine which projects should proceed and
which should be tabled. This information needs to be evaluated and placed in context for
effective decision making. The Business Analyst (often in concert with the Project Manager,
or working at their direction) will bring all of the pieces together in products which
communicate the financial, legal as well as technical issues to stakeholders.


4.2 Understanding Vision, Mission, Goals, Objectives, Strategies and Tactics
The concepts covered in this section provide a foundation for understanding the Business
portion of the Business Analyst function. They provide the necessary focus and linkage to
allow an organization to move as a single entity. The chart below provides a quick summary
of the terms Vision, Mission, Goals, Objectives, Strategies and Tactics.

Concept      Question Addressed                  Time Frame                   Organization Level Involved
Vision       Why Do We Exist?                    5 years or more              Executive
Mission      What Is Our Long Range Position?    3 - 5 years                  Executive and Senior Management
Goals        How will we reach success?          2 - 3 years                  Senior and Middle Management
Objectives   How will we measure success?        2 - 3 years                  Senior and Middle Management
Strategy     How will we achieve Objectives?     1 - 2 years                  Middle and Line Management
Tactics      How will we achieve Strategy?       A few months to a few years  Line Management

Table 1. Vision, Mission, Goals, Objectives, Strategies and Tactics

4.2.1 Vision

The primary challenge for any individual or project is to ensure alignment with the
organization. This starts by understanding why the organization exists. Each organization has
a Vision, whether it is articulated as such or not. The Vision is a short statement that captures
the essence of the reason the organization exists. Vision statements should be short and easy
to remember; ideally nine words or less. Examples of effective Vision Statements include
Ford's "Quality is Job One"; a Vision can even be the name of the organization, such as Toys
"R" Us. Longer, wordier Vision Statements lack the clarity of purpose and punch necessary to
provide good focus to the organization. When considering accepting employment with an
organization it is a good practice to become familiar with their Vision, and assess to what
extent the organization actually reflects that Vision.
The characteristics of a good Vision Statement are:
Short - Nine words or less is ideal
Memorable

Carries an important message
Known by everyone in the organization
Vision Statements are developed at the highest level of the organization, typically Executive
Management and the Board of Directors if there is one. In some countries, such as Canada, the
Boards of Directors are required by Regulators to approve the Vision Statement along with
any associated Code of Conduct.1 They are relatively stable for long periods of time.
Recreating a Vision in a well established organization is symptomatic of some form of
turmoil, either within the organization or in the industry of which it is a part. Frequent changes
in the Vision leave the organization confused about who they are and where they are headed.
This can be devastating for morale. Conversely, a well-understood and accepted Vision
Statement will serve as an effective focal point for project evaluation.
The best companies will seek input from all levels of the organization when developing a
Vision Statement. That being said, the CSBA is rarely directly involved in the development of
a Vision Statement, unless it is a very new organization or one undergoing a radical
transformation. More often the task is to examine a prospective project with the rest of the
project team and answer the question, "Does this project conform to the Vision of the
organization?" Because Vision Statements are often broad and vague, this may not be an easy
question to answer. Projects which are glaringly out of alignment with the official Vision
Statement need to be challenged at the earliest possible moment. This may be done by asking
questions such as:

"How does this support our Vision of ourselves as ...?"

"This appears to be well outside the area covered by our Vision Statement; has
something changed?"
It is essential to understand the linkage to the Vision, as at some point, the question must be
answered, "Why are we spending these resources on this activity?" Projects which do not tie
to the Vision are excellent candidates for later cancellation as support for the resource
commitment dwindles. Terminating these potential projects early is an important part of
managing the organization's resources effectively.
Failure to have an effective and well implemented Vision can result in some or all of the
following:
Loss of market to competitors
Loss of customers
Reduced real or perceived quality
Wasted resources
Churning

1. Michael Gunns, Gunns Group 2006



Low morale
High Turnover
Organizational conflict
Alternatively, acceptance by Executive and Senior Management of a project clearly outside
the scope of the current Vision may be a signal that the organization is evolving in a new
direction. This is important information as it will impact other current and pending projects.
The CSBA must pay close attention to these changes in order to be effective.
Poorly implemented changes in a business's Vision of itself may result in some or all of
the following:
Confusion on the part of management about the current direction
Confusion in the marketplace about the organization's direction, leading to
depressed stock prices or loss of market share
Loss of confidence by employees in management's ability to run the business
Project Vision Statements are a small-scale implementation of a Business Vision Statement.
Business Analysts will often be involved in the drafting of a Project Vision Statement for a
larger project. An effective Project Vision Statement will focus the project team's effort on
the work to be done.

4.2.2 Mission

While a good Vision Statement provides clarity of purpose and focus, at nine words or less, it
is short on information about what that Vision looks like in action. The Mission Statement
expands on the Vision Statement by answering the question, "What is it we do?" This
generally requires more detail about how to recognize the Vision in practice. A Mission
Statement should be more than 1 sentence, but less than 1 page. It should be of sufficient
clarity and precision that the organization can make business decisions based upon it. It will
often take the form of a short list of desired performance characteristics, such as: "We will be
our customers' provider of choice"; "We will consistently be viewed as the highest-quality
manufacturer of product X"; "We will be the dominant market-share holder for Industry Y in
Europe."
Like the Vision Statement, the Mission Statement is typically developed at a fairly high level
in the organization; Executive and Senior Management are generally involved. Occasionally
Middle Management will participate, but once again, unless it is a small and new
organization, the CSBA will rarely be involved in the development of the Mission Statement.
Unlike Vision Statements, which are time-neutral, Mission Statements are focused on the
future and provide a road map for change. Expressed a different way, Mission Statements
should be actionable.
A good Mission Statement:

Clearly expands the Vision Statement into positive statements of desired outcomes;
Relates the business to the external environment, customers, suppliers, competitors,
etc.
Looks forward to the edge of the foreseeable business horizon
Is actionable
Mission Statements have a shorter life span than Vision Statements; traditionally this is a 3 to
5 year period. Because the economic environment, competition and products change, what is a
reasonable Mission needs to be reassessed a little more frequently. Because the Mission
Statement is used as the basis for long-term resource acquisition and allocation, too-frequent
changes to it can have a serious negative bottom-line impact. These impacts are very similar to
those resulting from an ineffective Vision Statement.
Mission Statements which are too wordy and vague create the same sorts of problems that
similar Vision Statements do. Because it is supposed to lend depth and understanding to the
Vision Statement, the Mission Statement is the next filter the CSBA might use to examine a
prospective project.
Projects must be examined in light of the Mission Statement, using the same sorts of
questions which were used with the Vision Statement:
What part of the Mission does this project support?
How does it help the business to achieve its Mission?
Each project should display a clear linkage to the Mission and from the Mission to the Vision.
If the existing Mission Statement lacks the desired clarity, the CSBA may need to talk with
multiple members of the business community to get their interpretation of what the Mission
really is. Without this kind of understanding, it will be difficult to ensure that the proper
linkages exist.
The CSBA will occasionally encounter projects which fall outside the Mission, but are
approved anyway. There may be valid business reasons for this to happen. The most common
of these reasons is some form of Regulatory change. Implementation of these projects often
diverts resources from other more focused projects, but must be done anyway. If a proposed
project does not support the Mission and is not required by law, it is important to understand
why it is being put forward. It may signal a shift from the established Mission, or it may
represent a pet project which will later have a negative impact on the organization.
Many projects also have a Mission Statement; these are sometimes referred to as Statements
of Work or Business Objectives. This Mission Statement clarifies the intent of the project for
the project team. The Business Analyst, as the communication link between Business and
Information Technology, will be very involved in the drafting of the Mission Statement. Care
must be taken to avoid the use of standardized statement templates which sound good, but add
little value or understanding.


4.2.3 Goals

If Mission Statements are actionable, Goals are the series of actions designed in response.
Taken as a whole, Goals should be sufficient to create the environment described by the
Mission Statement. Goals are often found in their consolidated form in documents which look
forward for a specified timeframe (3 Year Plan or 5 Year Plan). If the Mission Statement is the
high level view of where the organization will be, the Goals are the major building blocks for
accomplishing that view. An individual strategy may require 1 to 2 years to complete, but the
entire suite of activities (Goals) may require 3 or more years for successful completion.
Good goals have the following characteristics:
Action oriented wording
Measurable results
Time oriented
Realistic
Goals are typically the product of collaboration between Senior and Middle Management and
reflect a gap between the state desired by the Mission Statement and the existing situation.
Some organizations use a process called Gap Analysis to understand what occupies the space
between where the organization is and where it wants to be. A Gap Analysis is a systematic
process for examining:
The marketplace the business occupies
The organizations product(s) for that marketplace
The competitive products and technologies
The external customer
Unfilled needs in the marketplace
Changes in regulation and technology
This information is used to better define what changes must be made to close the gap between
the organization and the target. In small to medium sized organizations, Business Analysts
may be called upon to provide input to a Gap Analysis. Skills the Business Analyst may need
to support a Gap Analysis and the subsequent development of Goals include:
Conducting Market research
Designing questionnaires and surveys
Analyzing the resulting data
Writing effective reports

Presenting results
Facilitating meetings
During these activities the Business Analyst will have the opportunity to validate their
understanding of what is important to the organization and why. Participation in the planning
process will provide good insight for working on Requirements. In developing Goals there is a
temptation to create a path that is merely an extension of where the organization is at the
current time. What an effective Gap Analysis will show is whether that strategy will close the
Gap or allow it to widen.
Traditionally, much of this work was done without significant Information Technology input,
however, when technology became a product differentiator their expertise made it important
to include them in the process. The time when simple technology or automation projects
provided significant competitive advantage has passed. In the current environment, a core of
sophisticated technologies is assumed to be in place. In this increasingly complex and fast
changing environment, the CSBA must be well educated in how technology is currently
deployed. The Business Analyst may be called upon to function as a translator between the
Business Community and the Information Technology technical specialists as they seek to
determine if, and how, new technology might provide a further competitive advantage.
Creating a useful and practical objective is not always easy. While the business may know that
it needs to sell more products, that is not enough of a statement to be called a Goal. An
essential element in designing effective objectives is that they must be measurable. If there is
no ability to measure, it will not be possible to determine whether or not the objective has
been achieved.
Measurement can take many forms; a goal statement that can be answered yes or no is
measurable. For example, "Create a new corporate branding logo." If the logo has been
created, the objective has been met; if it has not, the objective is unmet. Insertion of a number
into the Goal Statement will often make it easier to determine if it has been achieved:
"Increase annual sales of widgets by 10%."
To make it clearer, Goal statements often include time frames: "Increase annual sales of
widgets by 10% by year-end 20xx." This kind of goal statement helps the organization move
forward in a clear direction. When working with projects, the CSBA should be able to link a
project to a specific organization goal. With goals which have different time frames and
priorities, it is important to be very clear about which goal a project is in support of and why
that project is happening now.

4.2.4 Objectives

Creation of measurable goals is the first step toward actual accomplishment of those goals.
Objectives are the incremental signposts along the road which show that progress is being
made toward the goal. Goals are not necessarily tied to one specific project, while objectives
often reflect the successful completion of a project or a related group of projects. There are
usually multiple objectives associated with a single goal.



Objectives for the goal statement, "Increase annual sales of widgets by 10% by year-end
20xx," might include the following:
Begin marketing in Belgium, Germany and Austria Region by end of 1st quarter
20xx.
Begin marketing in Spain, Portugal and Italy Region by end of 2nd quarter 20xx.
Begin marketing in Luxemburg, Denmark, the Netherlands and Finland Region by
end of 3rd quarter 20xx.
Implement Common Market brand awareness campaign by end of 3rd quarter 20xx.
These objectives reflect a progressive decomposition of the Vision, Mission and Goals into
the major activities which will be required to accomplish them. The work needed to create the
objectives and understand what is involved in the successful completion of each one often
requires intensive interaction between the Business Community and Information Technology.
The Business Analyst is often deeply involved in this process as high-level estimates of the
work required to complete each objective are developed.
The characteristics of good objectives are as follows:2
Action oriented wording
Specific
Measurable results
Time oriented
Realistic
Assignable
The high level estimation process is fraught with problems. Failure to do the necessary
research to clearly understand the objective will place IT at a severe disadvantage going
forward. The CSBA should be able to translate the high level objectives into project chunks
for IT to estimate. These estimates should be based on prior experience with other similar
projects and include an estimate of the staff resources required to meet the objective. Project
Estimation is addressed in more detail in Knowledge Area 8.
This is the earliest point in the Business Planning Process that there is the opportunity for a
reality check. Both Information Technology and the Business Community have the
opportunity to assess the time and resources which will be required to achieve specific Goals.
An effective planning process includes the means and the opportunity to streamline or
scale back Goals based on the cost to the organization, if that is appropriate. The cost/benefit
for the organization can be estimated at a gross level at this point.
2. George Doran, There is a S.M.A.R.T. Way to Write Management Goals and Objectives, Management Review, November 1981, pp. 35-36.

At a high level there are two factors which differentiate Goals and Objectives: Objectives are
specific and they are assignable. While both are time-specific, time becomes more granular
when creating Objectives; that is, the units of time are smaller. This specificity can be seen in
the example Goals and Objectives used above. It is the difference in granularity between
"solve world hunger" and "establish a local food bank."
The value of the increasing level of specificity in Objectives is that it becomes possible to
assign responsibility. Until this point, responsibility has been global; experience shows that
when everyone is responsible, no one is responsible. Therefore it is necessary to decompose
down to the point where it is possible to assign tasks. This is first possible at the Objective
level.
At the Objective Level, this assignment will typically be at the Functional Unit level. A
Functional Unit is a major component of the organization with a specific and well defined
purpose. Information Technology is a functional unit, as are Marketing, Manufacturing,
Human Resources and Finance. The functional unit assigned the responsibility for a specific
objective is typically the major stakeholder for that objective.
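The progressive decomposition described in this section lends itself to a simple tree representation. The sketch below is illustrative only; the class name, fields and ownership check are assumptions, not part of the CBOK.

```python
from dataclasses import dataclass, field

@dataclass
class PlanItem:
    """One node in the Vision -> Mission -> Goals -> Objectives decomposition."""
    level: str                 # e.g. "Goal" or "Objective"
    text: str
    owner: str = ""            # assignable only from the Objective level down
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

# Illustrative hierarchy using the widget example from this section
goal = PlanItem("Goal", "Increase annual sales of widgets by 10% by year end 20xx")
goal.add(PlanItem("Objective",
                  "Begin marketing in Hong Kong Region by end of 1st quarter 20xx",
                  owner="Marketing"))  # assigned at the Functional Unit level

# An Objective without an owner is a planning gap: when everyone is
# responsible, no one is responsible.
unassigned = [c.text for c in goal.children if c.level == "Objective" and not c.owner]
print(unassigned)
```

A structure like this makes it mechanical to check that every Objective has been assigned to a Functional Unit before the decomposition continues.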

4.2.5 Strategies

Strategies are the next level of decomposition, breaking Objectives down into more specific
activities. As such, good strategies must meet all of the criteria for good objectives, but they
will be more specific. The Business Analyst will be heavily involved in the development of
strategies. Their knowledge of the existing business environment, coupled with their
knowledge of Information Technology, positions the Business Analyst to become an effective
translator. This is an essential role, as this is the point at which the High Level Requirements
for a project begin to emerge and the general scope will be defined.
Good Strategies meet the following criteria:
- Action oriented wording
- Specific
- Measurable results
- Time oriented
- Realistic
- Assignable
The objective below is one of several identified as necessary to meet the goal of "increase
annual sales of widgets by 10% by year end 20xx." Listed beneath it are several strategies
which will be necessary to accomplish the objective.
Objective: Begin marketing in Hong Kong Region by end of 1st quarter 20xx.
Strategies:

- Identify staff needs and complete recruiting by January 31, 20xx
- Select and acquire location by February 15, 20xx
- Develop marketing plan by February 20, 20xx
- Develop and reproduce marketing materials by March 15, 20xx
The granularity of the items has increased significantly as the decomposition progresses. The
original Goal had a time unit measured in years; the subsequent Objectives were measured in
quarters of years; the Strategies are now measured in months or even portions of months. The
advantage of the decreasing scale is that it is easier to verify that a given set of activities is on
track from a time perspective.
The decreasing scale also has a positive impact on the assignability of the Strategies. Where
Objectives were assignable at the Functional Unit Level, Strategies are assignable at the
Department Level. If a Functional Unit is Human Resources, Departments might include
Payroll, Benefits, Employee Relations, and Recruiting. In Information Technology,
Departments often include Operations, New Development, Maintenance Services, Voice and
Data Communications, and Quality.
For the Business Analyst, the challenge is to ensure that the decomposition does continue and
that each of the involved Departments is participating in the process. Working in concert with
the Project Manager, the Business Analyst will be conducting meetings to discuss each set of
strategies to ensure clarity and commitment. A well rounded understanding of the Business
environment is critical to the decomposition to ensure that required activities are not missed
during the process. Previous experience with similar processes will provide insight into the
process; using a similar project as a cross-reference will capitalize on lessons learned.
The results of the meeting should include documented agreements to the Objectives and
Strategies being discussed. These documents will become a part of the project documentation.
Time and resource estimates which were developed during the Objectives stage will be
refined. Ideas which have had broad acceptance at the Mission and Goal level will become
controversial at the Objective and Strategic level as the reality of the resource commitment
involved becomes apparent. The Business Analyst will need to demonstrate skills in the
following areas to be successful:
- Team Building
- Facilitation
- Motivation
- Effective written and verbal communications
- Organization
- Cost Benefit Analysis
- Estimating


4.2.6 Tactics

Tactics are the implementation of strategies in the form of individual projects or sub-projects.
For small organizations, one Project Manager and one Business Analyst may be responsible
for an entire suite of Strategies, including all of the associated tactical projects. In very small
organizations, the Project Manager and the Business Analyst may be the same person.
Conversely, in larger organizations, Project Managers and Business Analysts may work on
only one or two of the Tactical Solutions. Business Analysts may be supporting multiple
Tactics (projects) with different Project Managers, and those Project Managers may be
reporting to a Senior Project Manager. It is essential that the Business Analyst have good time
management skills in this environment.
Tactics are assignable at a unit or individual level depending upon the size of the task and the
size of the organization. Group Units are the lowest organizational level. Within Information
Technology the Maintenance Services area is often divided into groups which provide support
to specific applications or groups of applications. Business Analysts are often specialists in
specific application areas and will be assigned all of the Tactical solutions in their area of
expertise. This becomes their portfolio of project activities.
Time and staff estimates which have been developed earlier are reviewed and fine tuned. In
many organizations it is difficult to increase either at this point. In better organizations, most
time and staff estimates are not finalized until they have been reviewed and agreed to at the
tactical level. This process of roll down (from Vision to Mission to Goals to Objectives to
Strategies to Tactics) and then roll up (only to Goals) will yield the most realistic estimate of
time frames and resource requirements.
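The roll up of tactical estimates can be sketched in a few lines. The strategy names echo the Hong Kong example above; the tactic names and staff-day figures are invented for illustration.

```python
# Tactic-level staff-day estimates, grouped by the strategy they support.
# All names and numbers below are illustrative assumptions, not CBOK figures.
strategies = {
    "Identify staff needs and complete recruiting": [("recruit", 30), ("screen", 10)],
    "Select and acquire location": [("site search", 15), ("negotiate lease", 5)],
}

def roll_up(strategies):
    """Sum tactic estimates to the strategy level, then to the objective level."""
    per_strategy = {name: sum(days for _tactic, days in tactics)
                    for name, tactics in strategies.items()}
    objective_total = sum(per_strategy.values())
    return per_strategy, objective_total

per_strategy, objective_total = roll_up(strategies)
print(per_strategy)
print(objective_total)  # 60 staff-days rolled up to the objective level
```

Because the totals are derived from the tactical detail rather than guessed at the top, revising one tactic's estimate automatically revises the objective-level figure.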
When dates have been "set in stone," the Business Analyst and the Project Manager will need
to work carefully to ensure that the highest priority items are completed by that date.
accomplish this, the Business Analyst will need to have a good process for prioritizing
requirements. This is covered in Knowledge Area 5.
At this lowest level of activity, it should be easy to directly identify the Goals that this project
supports and make a clear link to the Mission and Vision. If this is not possible, it is essential
that the question be asked, "Why are we doing this?"

4.3 Developing an Organizational Vocabulary

The language of business is focused on economic value. The Business Analyst need not be an
accountant or a financial analyst, but they must be familiar with the basic vocabulary of these
two disciplines in order to communicate effectively with the Business Community.
The ability to understand the terminology and apply it correctly will significantly improve the
requirements gathering and test case development processes. Conversely, the lack of understanding can create, at best, time delays and, at worst, critical defects in a product.

Version 9.1

4-13

Guide to the CABA CBOK

Table 4-1 Common Business Terms and Descriptions

Amortization and Depreciation: The systematic write-off of an asset over its useful life or some other predefined period. Depreciation and amortization are expenses which reduce income (but not cash), therefore effectively reducing potential tax liabilities. The Accounting Department in most organizations has the knowledge and documentation to expand or select amortization and depreciation methods and lives. Depreciation is a non-cash transaction. Of particular interest to Information Technology is the treatment of software purchase costs, which are typically amortized as non-tangible assets. (Costs to develop software, unless it is for resale, must generally be expensed.)

Assets: Tangible and intangible property owned by the corporation. Tangible assets may include cash, investments, inventory, buildings, machinery and furniture. Intangible assets include accounts receivable, prepaid expenses, intellectual capital such as trademarks and patents, and goodwill.

Balance Sheet: A listing of the asset, liability and equity accounts from the Trial Balance. The format is prescribed by Generally Accepted Accounting Principles (GAAP). The sum of assets should equal liabilities plus equity plus net income (loss). The Balance Sheet presents the financial position of an organization at a given point in time (as of XX/xxxx). All accounts for an organization, except dividends, should appear on either the Balance Sheet or the Income Statement; nothing should appear on both.

Business Partner(s): Those on the same side of a business activity or financial transaction; a term increasingly being used to refer to those within the same organization. Business partners may provide or receive services as a part of a joint support of the organization's Vision, Mission, Goals, Objectives, Strategies and Tactics.

Capital: Cash used to produce income by investing it. That investment may take the form of assets, income-producing investments (including other businesses), or activities such as ongoing operations or new projects. "Invested capital" is a term often used by businesses to refer to capital used to fund projects.

Cash Flow: The net amount of cash received and paid out during a period. This period may coincide with standard accounting cycles (monthly, quarterly, annually) or may be for the life of the project. Cash flow does not consider when the revenue was actually earned or the expense incurred; it simply tracks all cash in and cash out.

Customers: Individuals, groups, businesses, organizations, or government entities, whether local, national or international, who pay an entity or organization for their goods and services.

Equity: Equity represents 1) the net monies received by an organization for the issuance of its stock and 2) retained earnings. Retained earnings are the sum of net income (loss) less dividends paid since the inception of the organization. Businesses which do not issue stock often refer to equity as net assets.

Feasibility Study: A preliminary research document done to determine if there is enough potential benefit to proceed with further investigation and analysis; it is typically conducted very early in the life of a project. The Feasibility Study often includes significant high-level assumptions about the potential cost of a project as well as the potential benefits. Information in a Feasibility Study is often generic rather than specific.

Forecast: Verifiable assumptions about the future based on history and trend data. Forecasts are generally more reliable and sustainable than projections. Items which are often forecast include interest rates, inflation, and sales or revenue numbers. Forecasts used in developing financial projections must be clearly identified as such. When using forecasts it is essential to identify the source of the trend or historical information being used.

Functional Unit: A major component of the organization with a specific and well-defined purpose.

Gap Analysis: A systematic process for examining the marketplace, the organization's product(s) for that marketplace, competitive products, and technologies to better define what changes must be made to close the gap. It looks at the current situation and the future or proposed situation in order to identify the gap.

Generally Accepted Accounting Principles (GAAP): Accounting methodologies and practices used in financial reporting as promulgated in the U.S. by the Financial Accounting Standards Board (FASB) and the Securities and Exchange Commission (SEC). These provide a standard for comparison among similar companies.

Goals: The major building blocks for accomplishing a specified Mission.

Hurdle Rate: An expected minimum return on internally invested capital. In the US, hurdle rates are often based on Treasury Bill or other very secure investment rates of return. Projects are often expected to produce returns in excess of the Hurdle Rate to be approved.

Income (Loss): Refers to the net of Revenue less certain expenses. There are a number of income numbers used in financial reports and analysis; the most common of these are:
- Income (Loss) from Operations: Revenue (as defined below) after all administrative and operating costs have been deducted. This would include both direct costs such as raw materials and overhead such as a Human Resources Department. In many organizations the budgetary process will summarize to this number.
- Net Income (Loss) before Taxes: The Income (Loss) from Operations less any non-operating or unique events (such as plant closings, or the sale or discontinuation of a line of business).
- Net Income (Loss): The Net Income (Loss) before Taxes minus the accounting tax effect on income. Taxes are rarely a consideration for the Business Analyst in project activities. Net Income (Loss) is commonly referred to as "the bottom line."
- Income (Gross): There is no firm accounting definition for this term; as such it is used in different contexts in different organizations. It may be used to refer to Revenue, or Revenue with additional deductions or adjustments. If using the term, it is important that there be a solid organizational definition for what it does and does not include.

Income Statement: The summary presentation of revenues less the expenses (including depreciation, taxes, and non-operating items). The resulting total of an income statement is Net Income. All accounts for an organization should appear on either the Balance Sheet or the Income Statement; nothing should appear on both.

Liabilities: Obligations of the organization to pay or perform at some future date. These may include accounts payable, accrued expenses, and debt. In some industries there are other items, such as Reserves, which represent estimates of potential future losses.

Mission: Expands on the Vision Statement by answering the question, "What is it we do?"

Net Present Value: The current cash equivalent of some future action(s), based upon an expected earnings or interest rate over a given time period. The interest rate used is generally derived from a well documented/published rate of return such as Treasury Bills, the Consumer Price Index (CPI) or an organization's standard rate of return.

Opportunity Cost: An economic rather than accounting term, with no industry-wide definition. Generally it is the recognition of the resources (cash and other) associated with the selection of a specific course of action, and which are therefore not available for other uses. It is also used to present the issue of mutually exclusive options: if Option A is chosen, Option B is no longer available.

Profit: Not an official accounting term, but used to refer to one or more of the definitions of Income. When profit is used, it implies that there is an excess of revenues over expenses.

Projection: Assumptions about the future based on speculation or intuition, or which deviate significantly from historical or trend information. Projections are often part of a series of what-if iterations to determine the possible options. Projections are generally less reliable than forecasts when used in developing cash flow information.

Return on Investment (ROI): The appreciation received based on the investment made. This amount is generally expressed as a percent of the base capital. A time factor must be defined for the number to be meaningful (a 10% return on investment over a 2-year period). Other formats which may be used in an organization to express this are Return on Assets (ROA), Return on Equity (ROE) or Internal Rate of Return (IRR). The name of the format defines the basis used.

Revenue: The gross sales, receipts or billings for an organization for a given period, also known as "the top line." Revenue numbers are typically only adjusted for returns, allowances and pricing adjustments. There is no tax consideration in revenue numbers.

Stakeholder(s): An individual, a small group, and/or another organization, groups of organizations or stock market investors who have an interest in an organization achieving its Vision and Mission and therefore provide resources to the organization on which they expect a return.

Statutory Reporting: Non-GAAP reporting; certain industries are required to provide information to various government entities in formats and with information not consistent with GAAP. The most common example of this is tax reporting, which often varies significantly from GAAP. It would not be atypical for organizations in some industries to do both GAAP and Statutory Reporting. A major example of this is the U.S. insurance industry, which reports to the individual states based on accounting principles defined by the National Association of Insurance Commissioners (NAIC).

Strategies: The next level of decomposition, refining Objectives; strategies are actionable, assignable and time oriented.

Sunk Cost: Resources which have been expended or committed, but which should not be considered in the decision-making process for a future project or program. These may include monies spent on previous (unsuccessful) projects and expenses which will not change no matter what option is selected.

Trial Balance: The listing of all accounts in the organization's books. Under double-entry bookkeeping, which is a fundamental of GAAP, the books are in balance when, in the Trial Balance, the debits equal the credits. The accounts from the Trial Balance will appear in either the Balance Sheet or the Income Statement, but not both.

Vision: A short statement that captures the essence of the reason the organization exists.
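Two of the terms above, Net Present Value and Return on Investment, can be illustrated with a short calculation. The cash flows, rates and function names below are invented for illustration and are not part of the CBOK.

```python
def net_present_value(rate, cash_flows):
    """Discount each period's cash flow back to today; cash_flows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def return_on_investment(gain, invested):
    """ROI expressed as a fraction of the base capital."""
    return (gain - invested) / invested

# Illustrative project: 1000 invested today, 600 back at the end of each of 2 years.
flows = [-1000, 600, 600]
npv = net_present_value(0.05, flows)     # discounted at an assumed 5% hurdle rate
print(round(npv, 2))                     # positive, so the project clears the hurdle
print(return_on_investment(1200, 1000))  # 0.2, i.e. a 20% return over the 2-year period
```

Note that ROI needs the time period stated alongside it to be meaningful, while NPV builds the time period into the discounting itself.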


4.3.1 Risk Analysis and Management

4.3.1.1 What is Risk?

Risk is the likelihood that an event will occur combined with the potential for negative
consequences resulting from that event. Individuals and organizations take risks every day.
Dealing with Risk is a key issue for Business Analysts as they work on projects. Risks exist in
a large context for the organization as a whole and in a smaller context for individual projects.
Risks to the organization as a whole should be managed at the Enterprise Level; many
organizations have documented Risk Management functions. Risks at the project level are the
responsibility of the key stakeholders, the project manager and the business analyst.
The Committee of Sponsoring Organizations for the Treadway Commission (COSO) has
created a framework for understanding and managing risk. This is often referred to as
Enterprise Risk Management (ERM). COSO created an eight step process for establishing and
managing ERM, which is shown in Table 4-2. Of the eight steps, the Software Business
Analyst is typically only involved in Steps 3, 4, 5 and 6. The eventual implementation of ERM
is a Sarbanes-Oxley (SOX) requirement for larger organizations.
Table 4-2 COSO 8 Step Process for Establishing and Managing ERM

1. Internal Environment: Management establishes a risk appetite and sets a risk foundation.
2. Objective Setting: Objectives must be established before the Enterprise Risk Management (ERM) process can begin.
3. Event Identification: The potential impact of events must be identified.
4. Risk Assessment: Identified risks are analyzed so they can be understood and managed.
5. Risk Response: Management selects an approach to align risks with the entity's risk appetite.
6. Control Activities: Policies and procedures are established so that identified risks will be managed.
7. Information and Communication: Relevant information is identified, captured and communicated in a timely manner to the proper groups and individuals.
8. Monitoring: The effectiveness of the ERM process is assessed and modified as necessary to improve results.

Risk Appetite is the term used to describe the extent to which an organization is willing to take
risks. Some organizations are very risk averse, taking only small, manageable risks. Others
have a much greater risk tolerance and are willing to assume substantial risks in return for the
opportunity of substantial profit. Organizations need to clearly define their risk appetite before
people can participate in effective objective setting.

4.3.1.2 Event Identification

This process consists of identifying events which can impact a project, both internally and
externally. Potential negative outcomes are risks; potential positive outcomes are
opportunities. If following a structured process which includes a SWOT analysis and Business
Environment Analysis, many of these items will have already been identified. The CSBA
should be maintaining a list of these items as a part of the documentation process. Risks can
arise from any of the six resource areas (Manpower, Processes, Machines, Materials,
Environment and Data).

4.3.1.3 Risk Assessment

This process examines risks to determine how likely they are to occur and how severely they
will impact the organization if they do occur. These risks can be organized in a simple table as
shown in Table 4-3, and assessed using a simple scoring methodology.3
Table 4-3 Risk Assessment Chart

Risk Description                                              Severity   Likelihood   Ranking
One or more test team members will resign before completion   High       Medium       6
The accounting department will not be available on time       Medium     Medium       4
The testing environment will fail to mirror production        High       Low          3

3. In this case a simple 1-2-3 scoring has been used, from low to high. The results shown in the Ranking column are the product of Severity times Likelihood.

4.3.1.4 Risk Response

The intent of the assessment process is to determine which risks have the potential to create
the largest negative impact on the business. In most projects there are far too many potential
risks for the organization to respond to all of them. Several key questions should be answered
when considering which risks to address and which might safely be ignored:
- Are we willing to assume the consequences of this risk?
- Can this risk be avoided entirely?
- How can the likelihood that this risk will occur be lessened?
- How can we minimize the impact of this risk if it does occur?
- Should it occur, is the cost of mitigation or avoidance likely to exceed the impact of the risk?
The Business Analyst will be actively involved in these discussions and must ensure that the
results are properly recorded for future reference.
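The 1-2-3 scoring described in footnote 3 can be sketched directly. The rows mirror Table 4-3; the dictionary encoding and sorting are assumptions made for this illustration.

```python
# Numeric scores for the 1-2-3 scheme described in footnote 3
SCORE = {"Low": 1, "Medium": 2, "High": 3}

# (description, severity, likelihood) rows, mirroring Table 4-3
risks = [
    ("One or more test team members will resign before completion", "High", "Medium"),
    ("The accounting department will not be available on time", "Medium", "Medium"),
    ("The testing environment will fail to mirror production", "High", "Low"),
]

def rank(risks):
    """Ranking = Severity x Likelihood, highest-impact risks first."""
    scored = [(desc, SCORE[sev] * SCORE[lik]) for desc, sev, lik in risks]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for desc, score in rank(risks):
    print(score, desc)  # rankings 6, 4 and 3, in descending order
```

Sorting by the ranking makes it easy to see which risks justify a response and which might safely be ignored.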

4.3.1.5 Control Activities

Many organizations create a Risk Category structure and process which describes how to
approach risks with various Ranking Scores. This is a portion of the Control Activities which
the Business Analyst must understand. In organizations with no official structure for
controlling risks, the Business Analyst must research the existing policies and procedures to
determine if they provide any guidance. Often the control steps are implied in various IT
policies. Failure to effectively control the risk management process negates its effectiveness.
Examples of Control Activities may include authorization needed for changes to requirements
and budget approvals for expenses to mitigate risks (such as the hiring of contract
programmers to cover staffing shortages).

4.4 Knowledge Area Summary


In this section we have seen a view of the business part of the organization which is often
undiscovered by much of the Information Technology staff. The section lays a firm
foundation for learning about the specific instantiations of the business practices within the
organization. For BAs without business experience this section provides a vocabulary for
effective discussion and learning. Learning to be a fully bilingual BA, speaking both
Information Technology and Business fluently, adds both personal and professional value. By
developing a better understanding of the business and creating a partnership with other
portions of the organization, the CSBA will add value to each project activity. In each of the
areas considered, the CSBA is exposed to the full range of potential job requirements, while
being focused on those which will have the most impact.
By understanding the nature of for-profit and not-for-profit organizations and the
alternative structural decisions that can be made, the BA will now be better able to place
specific project activities in their proper context. They will be able to associate project activity
with organizational Vision, Mission, Goals and Objectives in a way that is actionable.
A consideration of how and why organizations receive funds and revenues expands on the
earlier exploration of organization types. Project timing is one of the essentials for success in
many organizations. Business Analysts who are not fully aware of the potential impact of a
project on funding sources or timing may make the wrong decisions. The CSBA is now able
to combine that knowledge with the skills to understand where their organization is in the
marketplace, who their customers/clients are and how they interact with vendors and
suppliers.
Within the framework of organizational risks and opportunities, with constrained resources
and limited horizons, decisions must be made on which project to pursue and which to
abandon. The CSBA now has the tools to understand (or even develop) the information
needed to make those decisions. Creating information out of raw data is a significant
communication skill. By looking at potential risks and rewards at increasing levels of detail,
the BA is able to participate in the process. This detail includes not only competitive risks, but
also those posed by the regulatory environment.
This information needs to be evaluated and placed in context for effective decision making.
The Business Analyst, often in concert with the Project Manager, or working at their
direction, will now be able to bring all of the pieces together in products which communicate
the financial, legal as well as technical issues to stakeholders, using tools such as Risk
Analysis documents, SWOT (Strength, Weakness, Opportunity and Threat) Analysis, and
Cash Flow Analysis to complete an effective Feasibility Study.


Skill Category 5
Requirements
Throughout Skill Categories One, Two, Three and Four a conceptual foundation has been laid
for the work the Business Analyst must perform. In those categories, the Business Analyst
learned the basic skills of the profession. Skill Category Five is the first step in applying
these skills.
Just as the Requirements process is the heart of the software development life cycle, so too the
Requirements process is the heart of the Software Business Analyst's job. So much depends
upon the quality of the work done during Requirements. If this fails, it will be difficult to save
the project. Just finding requirements can be a challenge; ensuring that they are complete,
correct and meaningful is a whole new challenge. In this Skill Category the CSBA will learn
how to gather, clarify, prioritize and manage requirements.

5.1 Business Requirements


Business Requirements are exactly what the name implies: they are what the organization
needs to accomplish its purpose. Business Requirements are sometimes called Functional
Requirements. This other name is based on the idea that these requirements describe what the
system must be able to do.

5.1.1 How Requirements are Defined

Requirements definition should be a collaborative process, involving all of the stakeholders
for a specific project. When performed in this fashion, there is the highest probability that the
correct requirements will be identified. As the representation from the stakeholder population
decreases, so does the probability that the requirements will be complete and correct.



Requirements definition should be an iterative process, in which needs and wants are offered,
evaluated, clarified and eventually accepted or rejected by the stakeholders. For small
projects, with few requirements, the individual iterations or process cycles will be brief; for
larger projects, they may be very time-consuming.
The analogy of building a custom home is often used as a construct for building a software
system; they contain many of the same issues and challenges. It is a useful analogy in many
respects; certainly in the requirements phase. The original concept may be sketched out on a
napkin or some other piece of paper. Everyone is excited about the plan, ready to proceed at
full speed; but giving that napkin to the builder at that point is an invitation to disaster. Unless
details are identified and worked out, the dream house will become a waking nightmare. This
is the requirements phase.

5.1.1.1 What is a Requirement?

A Requirement is a specific and detailed statement about what a system must do or be.
Often the initial statement of what the system must do or be is very high-level: "The billing
system must correctly calculate the total amount due for the order." While this is very clear
and specific, there is a lot of detail to be added before it is complete. (In the home building
example it is the equivalent of saying "3 bedrooms and 2 baths.") These requirements will
then be refined in increasing levels of detail until they are fully understood and agreed to by
all the stakeholders.
The sum of all of the Business Requirements identified for a specific project must reflect all
the functionality desired by the stakeholders.

5.1.1.2 Separating Requirements and Design

Design is all about how the functionality described in the requirements will be provided.
Design includes things like file and table layouts and more commonly screens and reports.
Because many of the answers to provisioning Information Technology systems are so well
known, they are often incorrectly supplied as requirements.
This tendency to begin designing solutions before the problem is fully defined creates a wide
range of problems in the attempt to deliver a quality system. It is common to see the need for
a specific screen or report identified as a requirement when, in fact, these are solutions to
information needs and as such are design.
The Business Analyst, when confronted with design elements submitted as requirements, must
begin the painstaking task of determining the underlying need. It will be necessary to find out
what purpose the information fulfills for the stakeholder. ("We need to be able to review the
complete order while still on the phone with the customer.")
Once the real need is understood, there may be many potential solutions to the problem, of
which the one originally submitted as a requirement is only one.

5-2

Version 9.1

Requirements
Not only do design elements included as requirements rule out other, potentially more
effective, solutions; they may also mask a problem which is not properly explored. For
example:
Question: Why do we need to review the order while on the phone?
Answer: Because the pricing tables are not always current and we want to make certain the
correct price is being charged.
In this case, reviewing the order on the phone does not solve the problem at all; it merely
alerts the employee taking the order that the pricing table for this item is not current.
Correcting this issue will require a different solution. The time and money spent on creating
the requested screen will not solve the problem at all.
This fundamental step in separating requirements and design is often overlooked in the rush to
get the requirements and design done, so coding can begin. This emphasis on coding is a trap
for the unwary; in the hurry to get there, too many errors and ambiguities are left in the
requirements. In traditional Waterfall methodologies, this in turn leads to defects that must be
corrected, requiring additional coding time. When working in the Agile methodologies,
creation of the test cases, prior to the development of code will reduce this problem.
Methodologies will be discussed in more detail in Skill Category 6.

5.1.2 Who Participates in Requirements Definition

In a world of free resources, everyone would be invited to participate in the Requirements
Definition process. As it is, in every project the determination must be made on who to
include and who to exclude from the process. The participants discussed in Sections 5.1.2.1
through 5.1.2.6 represent the minimum knowledge sets that must be involved in the process.

5.1.2.1 Business Project Sponsor or Champion

The Business Project Sponsor or Champion is the individual with the organizational authority to
commit resources to the project and the interest in doing so. This individual is the functional head
of the project on the business side. Typically their signature will appear on the initial request to
begin the project (other, higher level signatures may also appear depending upon the size and scope
of the project.)
The Sponsor or Champion is the individual who will make decisions and approve resource
commitments on the business side. Generally one or more of the other stakeholders and subject
matter experts will report to this person. In most cases this individual fills both the roles: Sponsor
and Champion.
In a project that is cross-functional, involving several departments and their respective
management, the area which originated the request and/or is committing the most resources will
become the Project Sponsor. It is not unusual for the role of Sponsor and Champion to be split in
this instance, with the Sponsor being somewhat more aloof from the project.

The issue of Sponsor aloofness from the project is a very serious one; if the Sponsor is frequently
inaccessible or unresponsive, the entire project will slow down, unless the Champion has adequate
authority. In circumstances like this it is a good idea to determine if the problem is lack of time or
lack of interest. Lack of interest in the project should raise warning flags for the entire project team.
Occasionally there will be a Champion for the project in another area, because of an especially
keen interest in one or more aspects of the project. Project Champions, because of their interest
level are very important to the health of the project. Every effort must be made to keep them
engaged in the process and to educate them as to the reasons for the emphasis on Requirements
Definition.

5.1.2.2 Business Stakeholders and Subject Matter Experts (SME)

Business Stakeholders are other individuals or groups within the organization that have an
interest in the project. A typical example is Accounting, which is often a stakeholder; many
projects sponsored by Sales or Human Resources have tax implications involving the
accounting area. In any project such as this, the Accounting Department is a key stakeholder
in the requirements process. They will have significant contributions to make. Excluding them
from the early stages of the project will result in rework throughout the project.
Subject Matter Experts (SMEs) are a very important group of people. Many people know the
general outlines of the process flow and the business activities. Then there are those few
people who know one or more specific areas in depth. They know the details which cause
business processes to fail. Their knowledge is essential to a successful project, so identifying
the SMEs early is a critical part of the requirements process.
Working with these individuals may require care; while they know their piece of the puzzle
intimately, they often lack perspective on the big picture. When gathering their information
about what the software must do and be, it is essential to verify that the processes must actually
be performed as described for some reason other than "that's the way we do it here." It is not
uncommon to find the IT support staff in the position of an SME with regard to the system they
support. In some organizations the business units have completely delegated the
understanding of automated processes to the IT maintenance team. This is a potentially
dangerous position for all concerned.
When working with both Stakeholders and SMEs, it is important to remember they are often
overbooked. It may be difficult to get their time and attention for the project. Careful
scheduling with these individuals is essential.

5.1.2.3 Developers

Developers play a crucial role in the requirements definition process. They will be able to
articulate the functionality contained in any predecessor system; they may become the SME.
They will be able to identify interfacing systems and articulate the requirements for those.
They also bring knowledge of how similar business problems have been solved before. While
the requirements process is still about what the system needs to do or be, not how to do it, it is
useful to know what is easy and what is difficult.
Developers will also ask good questions about specifically how things should work. They are
excellent at identifying gaps in information that will cause problems later. Having the
developers participate in the requirements process has another benefit: they obtain a clear
understanding, from the beginning, of what the project is really supposed to accomplish.

5.1.2.4 Testers

Testers bring a unique mind-set to the requirements process; they immediately begin asking,
"How would I test this?" When the answer to that question is not clear, there is a problem with
the requirement. Testers are very analytical and detail oriented. They find things that do not
work. In this capacity they lend great strength to the final requirements product.
The other benefit to having the Testers involved in the Requirements Definition is they
immediately begin the development of the associated test cases. This helps to spread the
testing workload for the project over a longer period, reducing the resource crunch at the end
of the project; that time crunch often results in inadequate testing.

5.1.2.5 Customers and Suppliers

In this context, customers and suppliers are individuals and organizations external to the
developer of the software. As organizations are increasingly vertically integrated with others
in their industry, the need to include those in their chain is more important than ever. If some
portion of the system will impact those supplying goods and services to the organization, it is
important to include them in the Requirements Definition process. Failure to do so may mean
at a later date it will be more difficult or more costly to do business with those organizations.
External customers are especially important in the Requirements Definition process. Many
organizations have an internal representation of an external customer, often in the Sales or
Marketing Department. They are intended to provide the "voice of the customer" within the
organization, minimizing or eliminating all other direct contact with the customer. These
internal representations of the customer often have a wealth of statistical data about what
customers buy and why. They often hear directly from customers about what they do and do
not like. As such, these individuals can make a valuable contribution to the Requirements
Definition process and should always be included.
However, it is important to remember these individuals are not the actual customer. Their
vision of what the customer does and does not want may be influenced by a few very vocal
customers, not a representation of the majority. They may unconsciously be filtering
important information about product strength and weakness.
Requirements teams must be very careful when they make the decision on who will be the
voice of the customer for the project. In many projects there is more than one customer. Great
care must be taken to ensure that each customer population is adequately and appropriately
represented in the requirements process.

When the project is important, every reasonable effort should be made to obtain direct,
unfiltered information from each of the customer segments about what they would like the
product to do and be. These requirements may be very different from what the internal
representatives anticipate; or they may validate those requirements entirely. Without
verification it is possible to miss the mark on a product with catastrophic financial impact to
the organization.

5.1.2.6 Business Analysts

Business Analysts are the core of the Requirements Definition process. Because they are
fluent in both the language of business and the language of technology, they are able to
translate for both sides. They know who the SMEs are; they know who the IT support team is;
based on their experience with the organization, they usually also know who additional
stakeholders might be. With this knowledge they are uniquely positioned to ensure that all of
the right people participate in the process.
In addition to this they bring the experience gained from previous projects on how to approach
the requirements definition process. For individuals in the business community, this may be
their first project; while they know what they want, they do not know how to go about getting
it. For individuals in IT, this is one of many projects; they know how to deliver functionality,
but often do not understand the business implications of what they deliver. Focusing these two
viewpoints and the rest of the participants on obtaining a clear definition of what is needed
and why, is the job of the Business Analyst.
Section 5.2 will examine various methods for obtaining requirements from this highly diverse
group of contributors and the Business Analyst's role in the process.

5.1.3 Attributes of a Good Requirement

If the requirements are not right, the project has no chance of success. Despite many years of
experiencing the results of inadequate and ineffective requirements definition, organizations
continue to rush through the process; only to create expensive products which fail to deliver
the anticipated functionality. If the requirements are wrong, the product will be wrong; it is
that simple!
IEEE and others have identified a number of attributes which together define what a good
requirement is. This yardstick can be used to measure the adequacy and appropriateness of
proposed requirements. Each of the most common facets is identified below.

5.1.3.1 Correct

Correctness would seem to go without saying, and yet many organizations continue to
develop incorrect requirements. Correctness includes accuracy. If a formula is provided, the
formula must be right. If a set of activities are described, they should be the right activities.

Lack of correctness may be the result of inattention to detail, failure to step through a
calculation or process flow, or merely a typographical error. The net effect, regardless of the
cause is the same; the requirement is not correct.

5.1.3.2 Complete

Completeness is the complement of correctness. Incomplete requirements are often correct
"as far as they go," but they do not go far enough. The entire requirement should be stated in a
single place, not broken up into pieces and scattered throughout the document. Lack of
completeness will lead to missing functionality and inappropriate system responses. Often
lack of completeness is a result of making assumptions about the process, instead of verifying
the steps.
Karl Wiegers, a noted expert in the requirements area, makes this point: "The Requirements
may be vague, but the Product will be specific."1 Someone, at some point, will fill in the
missing details needed to build a product. If that someone guesses correctly what is actually
intended, wonderful; but often the guesses are not correct, leading to defects and rework.

5.1.3.3 Consistent

A requirement must be both internally and externally consistent. That is, it must not contradict
itself, and it must not conflict with another requirement. Identifying internal conflicts is fairly
straightforward using typical review and inspection techniques. Checking external
consistency can be more time consuming and potentially complex. Each requirement can be
checked against each other requirement to ensure there is no conflict. A matrix structure will
work well for this in smaller projects. When the number of requirements is in the hundreds,
the process can become quite cumbersome; however, not as cumbersome as finding the
defects in testing or production and having to resolve them at that stage of the software
development life cycle.
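For a small project, the pairwise external-consistency check described above can be organized as a simple matrix. The following sketch is illustrative only; the requirement texts and the recorded conflict are hypothetical, and in practice each cell represents a human review, not an automated judgment:

```python
from itertools import combinations

# Hypothetical requirement set; each pair is reviewed by people.
requirements = {
    "R1": "Orders over $100 ship free.",
    "R2": "All orders are charged a flat $5 shipping fee.",
    "R3": "Order confirmations are emailed within one hour.",
}

# Conflicts identified by the reviewers, recorded as unordered pairs.
reported_conflicts = {frozenset({"R1", "R2"})}

def consistency_matrix(reqs, conflicts):
    """Build a pair -> consistent? map covering every pair exactly once."""
    return {
        (a, b): frozenset({a, b}) not in conflicts
        for a, b in combinations(sorted(reqs), 2)
    }

matrix = consistency_matrix(requirements, reported_conflicts)
for pair, ok in matrix.items():
    print(pair, "consistent" if ok else "CONFLICT")
```

Note the quadratic growth: three requirements yield three pairs, but three hundred requirements yield almost 45,000, which is why the matrix approach becomes cumbersome on large projects.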

5.1.3.4 Unambiguous

The requirement should be stated clearly and concisely, in such a way that one, and only one,
interpretation is possible. Where more than one interpretation is possible, there is ambiguity.
Where there is ambiguity, errors will be inserted in the product. It is not a question of if but of
how many. If two or more technically competent individuals can read the same requirement
and come to different conclusions about what is being requested, the door is open for error. In
the world of multi-national development efforts, ambiguity is a major concern. Adoption of a
uniform and consistent vocabulary across the organization will greatly reduce the opportunity
for these kinds of errors to occur.

1. Wiegers, Karl E., "Cosmic Truths About Software Requirements," 2006.

5.1.3.5 Important

If the requirement is not important, why are resources being allocated for it? Very few
organizations today have enough resources to implement "nice to have" functionality. In
defining requirements, the question of "What does this do for us (or the customer)?" should
always be part of the discussion. For this reason many organizations are reluctant to ask about
"blue sky" requirements. However, failure to do so can leave the organization exposed to
missing opportunities to meet real business needs. The compromise, then, is to ask the
question, "If you could have anything, what would you want?" but follow up with the
statement, "But you need to remember that we may not be able to afford to do any of this."
To further aid in ensuring only important requirements are implemented, the final list of
requirements should be prioritized from one to n. Usage of groupings such as high, medium,
and low, or A, B, C, lends itself to conflict and manipulation of the list of requirements to be
implemented. This topic will be addressed in more detail in Skill Category 5.5.
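A strict one-to-n ranking can be checked mechanically: every requirement receives a distinct integer rank, with no ties and no gaps. A minimal illustrative sketch (the requirement IDs are hypothetical):

```python
def validate_ranking(priorities):
    """priorities: dict of requirement id -> rank.

    Valid only if the ranks are exactly 1..n with no ties or gaps --
    unlike High/Medium/Low buckets, which invite manipulation.
    """
    ranks = sorted(priorities.values())
    return ranks == list(range(1, len(priorities) + 1))

assert validate_ranking({"R1": 1, "R2": 2, "R3": 3})
assert not validate_ranking({"R1": 1, "R2": 1, "R3": 2})   # tie
assert not validate_ranking({"R1": 1, "R2": 3, "R3": 4})   # gap
```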

5.1.3.6 Stable

Building any computerized system involves hitting a moving target. The organization and the
environment are constantly changing. A good requirement takes that into account. Defining
a requirement to interface with a process or system that will be obsolete before the
requirement is implemented is not cost effective. Placing the requirement in a time context
will aid in this discussion: will it still be necessary to do this 60, 90, or 120 days
post-implementation? If the answer is no, perhaps another approach would be more cost effective.
Likewise, failing to anticipate the need to connect with another system that will be coming on
line shortly before or after your project implementation can cause major problems.

5.1.3.7 Verifiable

How can this requirement be tested? If it is not possible to answer this question, there is still
work to be done on the requirement. Some literature refers to this attribute as "testability."
This can be problematic, as many people approach testing as an end-of-project activity.
Waiting until the end of the project to determine the requirement is too poorly defined to
develop appropriate test cases is not cost effective. Verification includes the ability to perform
reviews and inspections of requirements; these can be early life cycle activities which will
reduce the total project cost. As discussed in Section 5.1.2.4, inclusion of testers in the
Requirements Definition process will ensure questions of verifiability are raised early. Lack
of verifiability is often a result of speculative thinking about how a process might work,
without getting down to the detail levels necessary for successful construction and use.
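One practical way to answer "How can this requirement be tested?" is to draft the test case alongside the requirement. The sketch below is illustrative only, reusing the billing example from Section 5.1.1.1; the function and figures are hypothetical:

```python
# Requirement (verifiable): "The order total is the sum of line totals,
# where each line total is quantity times unit price."
def order_total(lines):
    """lines: iterable of (quantity, unit_price) pairs."""
    return sum(qty * price for qty, price in lines)

# Test cases written directly from the requirement wording:
assert order_total([(2, 10.00), (1, 5.50)]) == 25.50
assert order_total([]) == 0   # boundary case the wording implies
```

If the requirement wording cannot be turned into assertions like these, that is the signal that more refinement work remains.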

5.1.3.8 Modifiable

It is unrealistic to think a system, once written, will not change. It will change, therefore,
ensuring the change can be managed successfully is essential. To be modifiable, an item must
be defined once, and only once, in the system. If "Date" is to be used, it should be defined with
a standard format, and all occurrences of "Date" should use that format. Maintaining
definitions in multiple places means more work when changes do occur; instead of making
one change, there are four or five. Because people forget, eventually one or more of the
occurrences will be missed or done incorrectly; the result is chaos. If for some reason another
format of the date is required, it must have a unique and distinctive name, such as
"Date_Mailed". The need for the additional format should be directly linkable to a different
requirement.
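The define-once rule is the same principle good code follows: one canonical definition, referenced everywhere it is needed, with a distinct name for each distinct need. An illustrative sketch (the format strings are arbitrary choices, not mandated by the CBOK):

```python
from datetime import date

# Defined once; every use of "Date" references this single definition.
DATE_FORMAT = "%Y-%m-%d"           # the canonical "Date" format
DATE_MAILED_FORMAT = "%d %b %Y"    # distinct name for a distinct requirement

def format_date(d: date) -> str:
    return d.strftime(DATE_FORMAT)

def format_date_mailed(d: date) -> str:
    return d.strftime(DATE_MAILED_FORMAT)

d = date(2008, 8, 1)
print(format_date(d))         # 2008-08-01
print(format_date_mailed(d))
```

Changing the canonical format is now a one-line change, rather than a hunt through four or five scattered copies.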

5.1.3.9 Traceable

Each requirement must be uniquely identifiable so that it can be traced throughout the project.
Many organizations choose to use the priority ranking for the identifier, thus making it serve
two purposes. At the end of the design phase, it should be possible to identify where each
requirement is implemented in the design. Likewise at the end of the coding stage, it should be
possible to trace each element of the code back through the design to the original
requirement(s). If there is more than one requirement tied to a specific element of code, all of
the requirements must be identified. In that fashion, any problem can be fully researched;
nothing is missing or lost, nothing is added. The use of spreadsheets or word processors for
creating and tracking requirements greatly facilitates the process if the organization does not
already own a tool for this purpose.
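The same trace can be kept in a simple table: each design or code element records the requirement identifiers it implements, and a quick check flags anything lost (untraced requirements) or added (elements with no requirement behind them). All identifiers below are hypothetical:

```python
# Illustrative traceability check; requirement and element names are made up.
requirements = {"R1", "R2", "R3"}

# Each code/design element lists the requirement ids it implements.
code_elements = {
    "calc_total": {"R1"},
    "apply_tax":  {"R1", "R2"},
    "audit_log":  set(),        # built with no requirement behind it
}

implemented = set().union(*code_elements.values())

missing = requirements - implemented          # nothing may be lost
orphans = [e for e, reqs in code_elements.items() if not reqs]  # nothing added

print("untraced requirements:", sorted(missing))
print("orphan elements:", orphans)
```

A spreadsheet holds exactly this structure: one row per element, one column per requirement, and the empty rows and columns are the defects.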

5.2 Processes used to define business requirements


Many people refer to the Requirements phase as "Requirements Gathering." This name is
unintentionally misleading, leading people to think the requirements are there for the taking,
like apples from a tree. To quote Karl Wiegers, "Requirements development is a discovery
and invention process, not a collection process."2 The difference between the two approaches
is enormous. The mistaken idea that requirements are readily available, needing only to be
collected or gathered, may be why so many organizations consistently under-resource this
most essential part of the process.
Depending upon the size and complexity of the proposed project, there are a number of
available approaches. Some might be used alone for a small project or in combination with
one or more others for a larger project. There is no "one size fits all" method; however, some
of these methods will be used far more often than others.

2. Op. cit., Wiegers, Karl E.

5.2.1 Joint Application Development (JAD)

Joint Application Development is a methodology developed in the late 1970s by Chuck
Morris and Tony Crawford, both of IBM. It was called Joint Application Design (not
Development), and was intended to bring IT and the customer together in a structured setting
with the intent of obtaining quality requirements.3 This structured approach was rapidly
approved and adapted for a wide variety of development related activities. Regardless of the
specific application, the structure and rigor are the same.
The power of JAD comes from the time and effort spent to include all of the key players in the
requirements process and to create a productive environment for them to work in. A central
concept in the JAD process is that the bonds established early in the process among the
participants will be strong enough to survive the inevitable frustrations and conflicts that are a
part of every major project.

5.2.1.1 What is Joint Application Development

IBM originally defined JAD as "a technique for development of business systems
requirements." The process includes a significant amount of planning and preparation
prior to one or more extended (3 to 5 day) sessions dedicated to defining the requirements.
The initial session(s) may be followed by other shorter sessions to finish the remaining work
and resolve the outstanding issues.
Although JADs gained widespread popularity in the 1980s and 1990s for projects of all sizes,
they can be very time-consuming. This was due in part to the requirement for an external
(unbiased) facilitator for the process. Consulting firms, called in to fill this role, often became
involved in the project on a long-term basis. The potential high cost, coupled with the time
commitment, caused many larger organizations to back away from the process in the late 1990s,
especially as the Agile methodologies emerged.
The resulting drop in requirements quality has created a renewed interest in the process for
mid-sized to large scale projects. For these projects, the time invested in the process will yield
significant paybacks. In a study of 60 projects by Capers Jones, a computer industry
productivity expert, those that did not use JAD missed 35% of the functionality, and
that functionality represented almost 50% of the final product code. Those using the JAD
process missed less than 10% of the functionality, and those items had a minimal impact on the
final product code.4

3. Yatco, Mei C.; "Joint Application Design/Development," Systems Analysis, Management Information Systems Program, School of Business Administration, University of Missouri-St. Louis, Fall 1999.
4. Jones, Capers; Programming Productivity; McGraw-Hill.

5.2.1.2 Who Participates in Joint Application Development

The list of participants in JAD includes all of the groups and individuals already identified as
a part of the Requirements Definition process (Sponsor/Champion, Developers, Business
Partners, Customers, Testers and Analysts), plus a few additional specialized roles. One
difference between a JAD and a typical facilitated session is the number of participants;
generally the number of participants in a facilitated session is limited to 10-12 total. In a JAD
session, especially the early ones, the number of participants may be as high as 20-22. The
specialized roles in a JAD are as follows:
Facilitator/Session Leader - Chairs the meeting and directs the flow of activities,
keeping the participants on track. The facilitator is responsible for identifying issues
that need to be addressed and determining whether that can be done within the
session or must be followed up on later. The facilitator traditionally contributes no
content to the session. Further information on Facilitation roles, responsibilities and
skills, can be found in Skill Category 2.
Project Manager/Leader - This individual is explicitly included in the JAD session.
It is essential that they be included in the team developed as a part of the process,
and that they are invested in the decisions that are made. It is equally important they
not fill the role of facilitator; this will cause too many role/goal conflicts during the
JAD sessions.
Scribe, Modeler, Documentation Specialist - One or more of these roles will be
needed for every session. The Scribe is responsible for capturing the flow of the
meeting and all results. A Modeler may be there to support the use of Data Flow
and State Transition Models where they are employed for clarity. A Documentation
Specialist may be there to speed the translation of decisions into permanent records.
None of these roles contribute content to the sessions.
Outside Experts - Occasionally it may be desirable to include industry, financial,
legal, or technology experts in the session to provide information to the
participants. They are there as an information resource, to be called upon as needed.
Their presence keeps the sessions from bogging down for lack of information.
Observers - Some organizations choose to include additional members of the
development team as non-participant listeners/observers. This is a very difficult
role for many individuals, and great care should be taken in making the decision to
include observers.

5.2.1.3 How is Joint Application Development Conducted

A JAD is conducted in five phases, and three of those five are focused on preparation. Like
any facilitated session, good preparation is the key to achieving desired results.
1. JAD Project Definition - During this part of the process, the facilitator and other
members of the team will need to ensure the necessary project origination, scope, and
authorizations exist. This includes background information about how the project
arose. These documents, if they do not exist, must be created before a meaningful
session can be conducted. A general understanding of the size and complexity of the
project is important in determining how many resources it is reasonable to allocate to
this process. The facilitator must also work with others to gather a clear understanding
of the organizational and political issues surrounding the project.
Additionally, the facilitator in concert with the project manager, the business analyst
and the project sponsor will need to determine if all of the stakeholders can be
accommodated in a single JAD series, or if multiple sessions must be scheduled.
Particularly when the external end customer is a part of the JAD, great care must be
taken in identifying who else will participate in those sessions.
2. Research on Requirements - The facilitator needs to be familiar with any high level
requirements that already exist, identify (in conjunction with the project leader and
business analyst) what the anticipated deliverables are, and what the critical success
factors are expected to be. While not all of this information will turn out to be correct, it
will create a reasonable context for planning the JAD sessions.
3. Prepare for the Session - This includes typical activities for the session, as detailed in
Skill Categories 2.6 and 2.7 on Facilitated Sessions. One key difference here is that in
classical JAD, it is recommended that a full day be planned for team-building activities.
This is because the projects will have a long life span and need to have the team style
communication in place for the duration. Time needs to be set aside for developing a
common working vocabulary for the project. Organizations which already have a well-understood
common language may be able to short-cut, but not eliminate, this part of the process.
Preparation for the JAD session includes a pre-meeting briefing on the project
objectives and limitations, as well as the expected deliverables. This briefing can be
done via conference call if the participants are geographically dispersed. It is
important however, they all hear the same thing at the same time. This will reduce later
conflict about who was told what.
If a number of information gathering sessions have been held, especially with external
customers, it is important to provide that information to the participants in advance, so
they will have the opportunity to review and analyze that material.
4. Conduct the Session(s) - The session itself brings participants into a structured, neutral
(non-hostile) environment for the purpose of identifying and resolving issues about
requirements. The workshop session will have a highly structured agenda, with clear
objectives, and includes a mechanism for resolving conflicts and issues.
After the initial activities designed to build a collective acceptance of the roles and
objectives of the team, there is usually a period for addressing language and
communication issues. During this time individuals learn a little about each other's
function in the project and how they will contribute to the final outcome. Only when
this has been completed is the group ready to begin the process of identification and
refinement of requirements.

Any one of several approaches can be used during the process of defining the
requirements, from structured or unstructured Brainstorming, to Business Event
Models, Use Cases, and various flow diagrams. The process will proceed from the
general idea stage to the increasingly specific level of detail needed.
Depending upon the size and complexity of the project and the number of
constituencies involved (the number of internal business partner areas or the customer
segments) the group may need to break into sub-groups to consider some topics in
detail.
One of the advantages of the JAD process is that it helps to identify ambiguities early
as individuals from different backgrounds interpret the suggestions presented.
Throughout the process it is essential to maintain the focus on what each requirement
will contribute toward the achievement of the objectives for the project and the
organization.
During one or several sessions, requirements will be identified, refined, documented
and eventually prioritized for the project. It is not unusual for the team to be tasked
with identifying the Critical Success Factors, Critical Assumptions, Project Risks and
Risk Responses.
At each stage of the process, the facilitator is responsible for ensuring the product(s)
being created represent the consensus of the participants and not a vocal minority
view. This may entail considerable time spent in negotiation and the construction of
acceptable compromises.
Because this time and effort are being expended at the front end of the project, they are
highly visible, and this is part of what has created a sometimes negative perception of
the JAD process. What is often overlooked is that these issues will arise and will need
to be resolved during the life of the project. By addressing these issues at the front end,
later and much larger project delays and miscues can be avoided. Not all issues will be
quickly and easily resolved. Despite the best efforts of the participants, there may be a
need to continue to work on one or more specific items after the conclusion of the JAD
sessions. In this case, it is the responsibility of the Facilitator to ensure that the items
are followed to their resolution and conclusion. This may entail additional meetings
with individuals not originally a part of the JAD process.
5. Prepare and Obtain Approval for Output Documents - Depending upon the
commission for the JAD, there will be a few or many output documents. Each of these
must be agreed to by the participants and then distributed to the appropriate areas of the
organization. In some cases there are also approvals required.
Great care and thought need to be given to the issue of post-process approvals. If
someone not in the process has the ability to veto some, or all, of the decisions made
by the group, the group is rendered ineffective. If this happens on a consistent basis, individuals
will be unwilling to commit the kind of time and mental energy required to produce a
quality product. From an organizational morale and effectiveness position, it is far
more productive to identify the acceptable constraints ahead of the JAD session. This
can be done as a part of the pre-session planning and communication.

Version 9.1

5-13

Guide to the CABA CBOK

5.2.2 Business Event Model

The focus of the Requirements Definition process is determining the parameters of a business
problem and what is needed to address that problem. The Business Analyst may have already
participated in the problem definition process and perhaps in an initial Business Process
Modeling activity. These both focus on "what is." The Business Event Model is an excellent
first step in determining "what is to be."
The life of any organization contains five kinds of events:5
1. Strategic Events - are decisions and activities triggered by the organization's strategic
planners. These help to shape the environment, but are internally generated. A strategic
plan event may result in project activity directed toward the customer.
2. System Events - are manual and automated stimuli that trigger activity sets within the
organization; they originate within the organization. They are invented by the
organization to meet perceived needs.
3. Regulatory Events - are external to the organization and can be an activity trigger, but
they do not come from the customer, and may not impact the customer in any direct
way. These may arise at any point during the project, but should be identified as
potential events during the risk management activities for the project. Risk
Management will be addressed in more detail in Skill Category 6.
4. Dependent Events - are typically the result of vendor relationships, often the result of
outsourcing activities. They trigger responses within the organization, but once again
may not directly impact the customer.
5. Business Events - are triggered by the true external customer. Organizations exist to
fulfill business event activity.

5.2.2.1

What is a Business Event Model

The Business Event Model is a representation, from the ultimate customer's perspective, of
how the process will operate. It is an effective tool for isolating requirements from design, as
it does not address at any point how a system satisfies the needs, only what the interaction will
be. In these circumstances it is appropriate to think of the customer as the user of the product.
Individuals within the organization may be the ultimate customer for a product or it may be
someone outside the organization. The concept of the customer and the Business Partner is
discussed in more detail in Skill Category 4. The Business Event Model is a depiction of what
the customer will see, hear, feel, experience and/or do.
In the example of a proposed purchase event from a ticket kiosk from the point of arrival at
the kiosk, the interaction might look as follows:

5. Dickinson, Brian; Creating Customer Focused Organizations; LCI Press, 1998.

Requirements

Figure 5-1 Sample Business Event Model



And so on through the verification of the payment amount and the production of the desired
tickets.
Nothing in the Event Flow addresses where the information is stored, how it is formatted,
edited, or presented. Although the introduction indicated this was to be a kiosk-based system,
there is nothing specified in the flow of the interaction that would prevent this from becoming
an Automated Voice Response System (telephone-based). It merely addresses the desired
customer experience, from the perspective of the customer. In this way it is more basic than
Use Cases, and is often a useful precursor to the development of Use Cases.

5.2.2.2 How is a Business Event Model Created

A Business Event Model represents collaboration between the Business Partner(s) and
Information Technology. The Business Partner understands the functionality they wish to
provide to the ultimate customer, and the more literal and detail-oriented IT participants ask
the "what if" questions, drawing out more details. This process leads to an understanding of
alternative paths, options, and the way errors and exceptions will be handled. Limits and
boundaries on the model will be addressed more fully when considering constraints later in
this Skill Category. During the development of the Business Event Model, the limits and
boundaries will be focused on functionality to be included or excluded. (In the Kiosk example
shown earlier, a limit on the system functionality was drawn when only the options for
Theatre, Football, Concerts and Hockey were included. Because it was not included, City
Tours is excluded.) Neither IT nor the Business Partner can successfully complete the Model
alone.
The process of creating the model is very straightforward. The participants, often with the
Business Analyst functioning as a facilitator, begin the discussion of what the proposed
product must do or be. The discussion starts at the beginning, with the first customer
interaction. (Any Business Process that does not begin with a customer initiated activity is
suspect.)
"What does the Customer do first?" is the question to ask. After this is described, and written
on a white board or poster paper under the Customer column, the discussion moves on to
"What happens then?" and that information is posted under the System Response column. As
the model is being constructed, it will become clear that steps have been missed or the process
began sooner than was originally described. Since no code has been designed or written,
making corrections to the flow at this point is easy and cost effective.
Sometimes questions will arise that cannot be answered by the individuals present about what
responses or activities are required. These can be flagged for later clarification and included in
the final version of the model. These may include legal or regulatory issues about which
participants are unclear. ("Are we required by law to give them a receipt, or is it just
something we will provide?")
The finished model should have a complete view of the interaction from the customer's (i.e.,
the user of the product) perspective. It will describe all of the products and services provided
to the customer; but at no point does it describe how those products and services are provided.
That is design.

For smaller systems, the Business Event Model can often be completed in a single session.
For larger systems, it may be necessary to have a session for each major "chunk" of
functionality. If it is necessary to "chunk it up," one or more sessions will be needed to
perform some consistency checks among the various pieces (to develop good requirements).
At this point it is also possible to look for common functionality among the components.
Identifying these in requirements will facilitate the design process.

5.2.2.3 Using the Business Event Model

The Business Event Model should be developed very early in the Requirements Process. It is
a real aid to analytical thinking. Above and beyond that the model provides significant
benefits in the Requirements Process.
Scope - The concept of project scope exists from the very beginning. Things that
are in scope are included; things that are out of scope are excluded from the project.
Scope management is a project management challenge from the outset. Anything
which will assist in the difficult and often politically dangerous task of defining
scope is important.
The Business Event Model begins immediately to define what is in scope and what
is not. Those things which appear on the Business Event Model may be in scope;
everything else is out of scope. In the Ticket Kiosk example, the ticket options
listed include Theatre, Symphony, Football and Hockey. Already excluded from
scope are Concerts, Sightseeing Excursions and Railway tickets. Functionality to
provide those tickets will not be provided in this project.
Drill-downs in each of the other flows will yield similar boundaries. The
completed Business Event Model will describe all of the desired functionality
from the customer's perspective.
Acceptance Test Case Development - One of the major challenges facing the
Business Analyst is the management of testing resources. Historically this effort has
been back-end loaded on the project; little or no involvement in the project until it is
time for Acceptance Testing. Then it is a scramble to develop and run all of the
necessary test cases in the limited time available.
Use of the Business Event Model allows the development of functional test cases
very early in the project life cycle. Early development makes some resource
leveling possible, moving some activities much earlier in the project. This in turn
leads to much better estimates of the testing effort that will be required.
Allowing testers to participate with the Business Analyst in the development of the
Business Event Model also makes it possible for them to ask the verification
questions: "How can I test this? How will I know if this is working correctly?" This
will lead directly to better requirements from the outset.



Use Case Development - As will be seen in Section 5.2.3, completed Business
Event Models provide a significant jump start on the development of Use Cases.

5.2.3 Use Cases

The concept of the Use Case was developed by Ivar Jacobson as a part of his work in Object-Oriented (OO) development methodology.6 Use Cases have migrated from the OO world into
the rest of the mainstream of software development processes. Like the business model, Use
Cases are intended to be free of technical jargon and devoid of design considerations. They
are increasingly assumed to be a fundamental part of the Business Analyst's repertoire.

5.2.3.1 What is a Use Case

A Use Case is a technique for capturing the functional requirements of systems through the
interaction between an Actor and the System. The Actor is an individual, group or entity
outside the system. One key differentiator between Use Cases and the Business Event Model,
is that the only interaction described with the system in the Event Model comes from a person
(the customer). In the Use Case, the Actor may include other software systems, hardware
components or other entities.7 The addition of these actors allows the Use Case to add depth
and understanding to what has been simply a customer perspective.
Actors can be divided into two groups: a primary actor is one having a goal which requires the
assistance of the system. A secondary actor is one from which the system requires assistance.
The Use Case shares a common focus with the Event Model on the goal of the interaction. It
looks at what the Actor is attempting to accomplish through the system. Use Cases provide a
way to represent the user requirements and must align with the system's business
requirements. Because of the broader definition of the Actor, it is possible to include other
parts of the processing stream in Use Case development.
Use Cases describe all the tasks that must be performed for the Actor to achieve the desired
objective and include all of the desired functionality. To return to the example of the system
designed to allow the purchase of tickets at a kiosk, one Use Case will follow a single flow
uninterrupted by errors or exceptions from beginning to end.
The example below continues with the Kiosk example developed in the Business Event
Model. All of the identified options are listed. The actions in italics represent the flow of a
single Use Case (for example "Shop by Date"; "Select a Valid Option (Date)"; "Ask Customer
Enter Date"):

6. Jacobson, Ivar; Object-Oriented Software Engineering: A Use Case Driven Approach; Addison-Wesley, 1992.
7. Wiegers, Karl, Software Requirements; Microsoft Press, 1999


Figure 5-2 Sample Use Case


And so on through the completion of the transaction.
Where there is more than one valid path through the system, each valid path is often termed a
scenario.



5.2.3.2 How Use Cases are Created

Use Cases are created as a part of the Requirements Definition process. For small projects
with little complexity, the Business Analyst may move directly to the creation of Use Cases,
bypassing the Business Event Model. Use Cases can be developed as a part of a JAD process,
or as a part of any sound development methodology.
Each Use Case is uniquely identified; Wiegers recommends usage of the Verb-Noun syntax
for clarity. The Use Case above would be "Purchase Tickets." An alternative flow (and Use
Case) that addresses use of the Cancel option at any point might be captioned "Cancel
Transaction."
While the listing of the various events as shown earlier can be helpful, Use Case models are
developed to provide a graphic representation of the possibilities. In addition to the main flow
of a process, Use Case models can reflect the existence of alternative flows by the use of the
following three conventions:
1. <<extend>> - extends the normal course by inserting another Use Case that defines an
alternative path. For example, a path might exist which allows the customer to simply
see what is available without making a purchase. This could be referred to as "Check
Availability."
2. <<include>> - is a Use Case that defines common functionality shared by other Use
Cases. "Process Credit Card Payment" might be included as a common function if it is
used elsewhere.
3. Exceptions - are conditions that result in the task not being successfully completed. In
the case above, "Option Not Available" could result in no ticket purchase. In some cases
these may be developed as a special type of alternative path.
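The three conventions above can be illustrated with plain functions: an included Use Case is a shared step invoked by several flows, and an extending Use Case is an optional alternative path. This is a hypothetical sketch; the function names and return values are illustrations, not part of the CBOK:

```python
# Hypothetical sketch: <<include>> modeled as a shared step called by other
# flows, <<extend>> as an optional alternative path. Names are illustrative.
def process_credit_card_payment(amount):
    # <<include>>: common functionality shared by any Use Case taking payment.
    return f"charged {amount:.2f}"

def purchase_tickets(amount):
    # Main flow: select tickets, then include the shared payment step.
    return ["tickets selected", process_credit_card_payment(amount)]

def check_availability():
    # <<extend>>: an alternative path that browses without purchasing.
    return ["availability shown", "no payment taken"]

print(purchase_tickets(25.0))
print(check_availability())
```

The analogy is deliberately loose; the point is that an included Use Case is factored out once and reused, while an extending Use Case branches away from the normal course.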


Figure 5-3 Simple USE CASE Model


The initial development of the Use Case may be very simple and lacking in detail. One of the
advantages of the Use Case is that it can evolve and develop over the life of the project.
Because they can grow and change, Use Cases for large projects may be classified as follows:8
Essential Use Case - is described in technology-free terminology and describes the
business process in the language of the Actor; it includes the goal or objective
information. This initial Use Case will describe a process that has value to the
Actor and describes what the process does.
System Use Case - is at a lower level of detail and describes what the system does;
it will specify the input data and the expected data results. The System Use Case
will describe how the Actor and the system interact, not just what the objective is.
As the popularity of Use Cases has grown, numerous individuals and organizations have
developed templates for them. The use of any particular template should be based on what
makes sense for the organization. Many of the development and testing tools available
commercially will include a template. Whatever format is adopted should be employed
consistently throughout the organization. The general requirements for items to be included
are:
Use Case Name
8. Op. cit., Jacobson, Ivar.



Summary Description
Trigger
Basic Course of Events
Alternative Events
Business Rules
Notes
Author and Date
Additional information directly useful to the testing effort is considered in Section 5.2.3.3.
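The template fields listed above can be captured in a simple record type. The shape below is one possible sketch, not a mandated format; the field names mirror the CBOK list, and the example values are hypothetical:

```python
from dataclasses import dataclass, field

# One possible shape for the Use Case template fields listed above.
# The example values are hypothetical.
@dataclass
class UseCase:
    name: str                  # Verb-Noun syntax, e.g. "Purchase Tickets"
    summary: str
    trigger: str
    basic_course: list = field(default_factory=list)
    alternative_events: list = field(default_factory=list)
    business_rules: list = field(default_factory=list)
    notes: str = ""
    author: str = ""
    date: str = ""

uc = UseCase(
    name="Purchase Tickets",
    summary="Customer buys event tickets at the kiosk.",
    trigger="Customer touches the welcome screen.",
    basic_course=["Select event type", "Select date", "Pay", "Collect tickets"],
)
print(uc.name, len(uc.basic_course))
```

Using one record shape consistently across the organization, whatever that shape is, is the point the text makes about templates.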

5.2.3.3 How are Use Cases Applied

Because of their flexibility and the vision they provide into the functionality needed by the
customer, Use Cases are an excellent requirements definition tool. They take the information
derived from the Business Event Model and add more detail and greater understanding of
what will be involved.
Using the Kiosk example above, it becomes clear this process will require access to many
kinds of information from multiple sources. Although no design decisions are ready to be
made about how to access that data, the requirement to do so is obvious. A quick survey of
entertainment purveyors (the source of the tickets) may reveal that while hockey, theatre and
symphony tickets are readily accessible, football tickets are not. This may lead to a change in
scope to exclude football tickets or in an upward revision of the time and cost estimates for
achieving that functionality.
Likewise, the Use Case provides an excellent entrée into the testing effort, to such an extent
that for many organizations, the benefits of Use Cases for requirements are ignored in the
effort to jump start testing! When Use Cases are the foundation of the testing effort, some
additional information will need to be added to the template:
Pre Conditions
Post Conditions
Iteration Number, Date and Author
Although Pre and Post Condition information is fairly clear, Iteration may require a little
explanation. As the Use Case evolves from a purely Business Event focus to include more
system information, it may be desirable to maintain several versions or levels of the Use Case.
For example the initial Use Case, developed during the first JAD session(s), might be Iteration
1; as it is expanded to include systems information, it becomes Iteration 2; and when fully
configured to include the remaining testing-related information, it is Iteration 3. Use of

common Iteration levels across projects will reduce confusion and aid applicability of the Use
Case.

5.2.3.4 Considerations for Use Case Usage

It is essential to remember that the development of Use Cases, in and of themselves, is not
enough to ensure the organization is effectively defining good requirements. It is perfectly
possible to write any number of Use Cases that, in fact, do not address the real requirements!
The most common cause of this problem is, in the rush to begin design and coding, the failure
to rigorously include the true customer in the requirements process.
While the development of appropriate Use Cases will aid in understanding the requirements, it
is possible to develop far too many, wasting valuable time and effort. Some organizations
assign priority rankings to Use Cases to help maintain the focus on what is truly important.
Then if resources run short, it is possible to make decisions based on the highest priority.
Because of the ability to include system based Actors in the Use Case, there is a temptation to
include increasing levels of Information Technology data, moving away from requirements
into design.
Screen and Report layouts and Data Dictionary Definitions are not part of requirements.
Although customers often will present the request for a new screen or report as a requirement,
it is a solution to a business problem; and that is design.

5.2.4 Process Models

Process models provide the Business Analyst with focused views into the requirements. Each
process model provides additional insight about what the system must do and be. Business
people often find these models difficult to understand and work with at the outset. They are on
the boundary between the natural language employed by the business and the structured
languages employed by Information Technology.
Because the process models are not intuitively obvious to many business people, many IT
organizations simply do not share them. This of course will lead to requirements defects when
assumptions are made about some aspect of the process. Taking the time and effort to make
the business partner comfortable with the models, the process for developing them, and the
need to do so is one of the talents of a skilled Business Analyst. The models presented here
vary in the level of technical detail and applicability to specific projects. The Business Analyst
will want to choose those models which are best suited to the project at hand. They are
presented in increasing levels of complexity and detail. This is often, but not always, the
sequence in which they are developed.



5.2.4.1 Data Flow Diagrams (DFD)

The data flow diagram represents the path of data (information) through a process or function.
It shows the external sources of data, the activities that transform data, and places where data
comes to rest. In due course, it will be necessary to test each of these aspects of the system;
creating complete and correct DFDs is part of ensuring the requirements are verifiable.
The symbolism generally used for DFDs is as follows:
Processes - are represented by circles showing processes that use data. They
modify the inputs of the process to create desired outputs. Some outputs are
temporary and transient, others will be permanent. The verb-noun syntax is once
again recommended for use. Developing an index of names will help to ensure that
one name is not used for more than one object or process.
External Entities or Process Terminators - are represented by rectangles. They
are outside of the system being modeled. Terminators represent where the
information comes from and where it is going to. In developing the requirements, it
is not necessary to understand why the data is needed, only that it is needed. These
are analogous to Actors in Use Cases.
Data Stores - are represented by parallel lines; these are places where data comes to
rest. When creating the initial DFD, no effort is made to identify how the data is
stored; this is a design issue and will be addressed later in the project lifecycle.
Sufficient at this point is to identify that certain types of data must be stored for
some period of time. That time period is often unknown in the early stages of
requirements definition.
Flows - arrows show the movement of data from point to point through the system.
Data Elements - Labels on arrows represent data elements; this may be a single
element or a packet. This will be defined later and in detail in the Data Dictionary.
Data Flow Diagrams are generally developed sequentially, or "top down," much like peeling
away the layers of an onion. The following represent the three general types of DFDs:
Context Diagrams - define and validate the scope of the project; what is
diagrammed is in scope, what is excluded is out of scope. Data represented at the
Context level always originates and terminates outside the scope of the project.
Good practice recommends that 10 or fewer data store, terminator and process
elements be contained in any one DFD.
N Level Diagrams - these decompose the context level diagram into finer levels of
detail. Each DFD addresses a subprocess; levels are counted down. The lowest
level of detail is the zero level. Large processes may decompose into four or more
levels. Smaller projects may only have one or two.
Action Level Diagrams - are the graphic and textual description of the logic used
by a process to convert inputs to outputs. It ties data models to process models. As a
part of this analysis, specific calculations and processing, not initially visible, will
be identified as requirements.
The Context Level DFD shown in Figure 5-4 uses the ticket kiosk example discussed to create
the high level flow of data. Notice that the need to have specific kinds of information available
at certain points in the process is made explicit here. There is no indication of how long the
Data Element "Ticket Request" will need to exist; only that it will be used by two of the
processes. The Data Store "Ticket Request Data" may be a permanent collection of
the information has been determined by the requirements process.

Figure 5-4 Data Flow Diagram Example at the Context Level


Ed Yourdon, a pioneer in structured analysis, suggests a less intensive approach may work
well with many projects.9 He recommends the following four step approach:
1. A list of all events is made
2. For each event, a process is constructed

9. Yourdon, Edward; Just Enough Structured Analysis; http://www.yourdon.info/jesa.php.



3. Each process is linked (with incoming data flows) directly with other processes or via
data stores so that it has enough information to respond to a given event.
4. The reaction of each process to a given event is modeled by an outgoing data flow.
This approach, called the Event Partitioning Approach, is somewhat less labor intensive. It
evolved from the same roots as the Business Event Model and the two work well together.
For the Business Analyst, the creation of the DFD serves two purposes: it refines and clarifies
the requirements about how various information flows occur, and it provides another
opportunity for the early development of test cases. As before, the benefit of early
development is the continuing examination and clarification of the requirements as well as
workload leveling.
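A DFD lends itself to simple mechanical checks. The sketch below represents flows as (source, target, data element) triples, using names loosely based on the kiosk example, and enforces one common DFD sanity rule: every process must have at least one incoming and one outgoing data flow. The representation and the node names are our own illustrative assumptions:

```python
# A DFD sketched as named flows between nodes. Node and flow names are
# illustrative. The check enforces a common DFD sanity rule: every process
# needs at least one incoming and one outgoing data flow.
processes = {"Capture Ticket Request", "Produce Tickets"}
flows = [
    ("Ticket Requester", "Capture Ticket Request", "Ticket Request"),
    ("Capture Ticket Request", "Ticket Request Data", "Ticket Request"),
    ("Ticket Request Data", "Produce Tickets", "Ticket Request"),
    ("Produce Tickets", "Ticket Requester", "Tickets"),
]

def incomplete_processes(processes, flows):
    """Return processes missing an input flow or an output flow."""
    sources = {src for src, _, _ in flows}
    targets = {dst for _, dst, _ in flows}
    return sorted(p for p in processes if p not in sources or p not in targets)

print(incomplete_processes(processes, flows))  # → []
```

Checks like this are one way the DFD helps make requirements verifiable: a process with no inputs or no outputs is a sign that a flow was missed during elicitation.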

5.2.4.2 Entity Relationship Diagrams (ERD)

Entity relationship diagrams offer the opportunity to create a more detailed view of what is
happening with the data. While DFDs are about how the data moves, ERDs are about how
various pieces and groups of data relate to each other. Some of the same nomenclature will be
seen, but in slightly different roles. ERDs are initially created during the Requirements phase,
but will continue into design. In requirements, they are about logical entities; in design those
entities will become physical.
As with DFDs there are a number of approaches to the diagramming schemes but most have
the following symbols in common:
Entities - are represented by Rectangles. An entity is a noun; a discrete object; a
person, place or thing. Entities may represent a single item or a group of items. A
data store may appear as an entity or as an actor. "Ticket Requester" (a person) and
"Ticket Request" (a data flow) may both be entities. It is in the role of data, data
elements, and data stores that entities are of prime importance in the ERD. Until
this point in the Requirements process, the details about data have been fairly
general. Now it is necessary to understand exactly what pieces of information are
needed and how they relate to each other.
Attributes of an Entity - are represented by Ovals. A Ticket has Date, Time,
Location and Price attributes. Each attribute of an entity is connected to the entity
by a line. Related attributes may be joined by a single line which is then connected
to the entity. For example, a ticket might also have an attribute of Color, which
could be unrelated to the first four attributes. Color would have a separate line
connector to the entity, Ticket.
Relationships - are represented by Diamonds. Relationships are verbs, phrased in
the active tense; moving, tracking, buying (or moves, tracks, buys). Relationships
connect entities to each other: a "Ticket Requester" (an entity) "Placing" (a relationship)
a "Ticket Request" (an entity).
Relationships are further described in terms of multiplicity; that is, how many of
each entity are a part of the relationship. There are four basic relationship types:

One to one:

The above is the most basic relationship; it is read as a One-to-one Relationship.


The symbol means that for every item in entity A, there is one and only one item in
entity B. If these entities will eventually become databases, there will be only one
row in A for every row in B and vice-versa. Notice that this one symbol reflects
what is occurring at each end of the relationship. In this one instance, the
relationship is the same for A to B and for B to A. One-to-one relationships are not
common in most applications, although they do occur.
One to zero or one:

The second relationship is both more common and more complex. The left end of
the line, nearer the letter A, reflects the relationship of B to A. In this case each
item in B is uniquely related to one, and only one, item in A. Reading the symbols
at the right end of the line, near the letter B, A is uniquely related to either zero or
one item in B. If the circle were at the other end of the line, the meanings would be
reversed.
One to many:

The crow's-foot, at the right end of the line, symbolizes "many." This relationship can be
read as one item in A is related to one or more (perhaps many) items in B, but not
zero items. Each item in B however is only related to one item in A. Clearly this
relationship can be reversed. What is not allowed are many-to-many relationships.
Where a many-to-many relationship appears, it must be broken down into finer
detail. It would be possible to have a null value symbol at the A end of the line,
indicating that there might be no match in A for some items in B.
One to zero or many:


The fourth symbol set allows for the possibility that an item in A may relate to none or
many items in B. Each item in B, however, must still have a relationship with an
item in A. This flow also can be reversed.
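The multiplicity rules above can be checked against sample data. The sketch below tests the "one to many" case from the text: each item in B relates to exactly one item in A, and each item in A relates to at least one item in B. The data and names are hypothetical:

```python
# Checking the "one to many" rule against sample data. Each B item (a ticket
# request) points at exactly one A item (a requester); each A item must have
# at least one B item. Data and names are hypothetical.
a_items = {"Requester-1", "Requester-2"}
b_to_a = {
    "Request-10": "Requester-1",
    "Request-11": "Requester-1",
    "Request-12": "Requester-2",
}

def is_one_to_many(a_items, b_to_a):
    """True if every B maps to a known A and every A has at least one B."""
    if not all(a in a_items for a in b_to_a.values()):
        return False
    return all(any(v == a for v in b_to_a.values()) for a in a_items)

print(is_one_to_many(a_items, b_to_a))  # → True
```

Representing each B item as a mapping to a single A item makes many-to-many impossible by construction, which echoes the text's rule that many-to-many relationships must be decomposed into finer detail.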
Figure 5-5, the Entity Relationship Diagram, describes how the various entities in
the Kiosk example relate to each other. Clearly, by the time the Business Analyst
and the rest of the Requirements team have been able to identify these
relationships, much of the ambiguity has been driven out of the process.

Figure 5-5 Entity Relationship Diagram

5.2.4.3 State Transition Diagrams

State Transition Diagrams provide a representation of all states in which an object may exist
during a process. They also include a representation of how the process is initiated, the limits
and parameters which control the execution of the process, and how the process can be terminated.
While many of these issues have been hinted at in other requirements definition processes,
nowhere are they made this explicit. Because they do require a consideration of limits on the
process, previously unidentified business rules are often identified during the creation of the
State Transition Diagram.

As with the DFD and the ERD, the State Transition Diagram has a standard set of symbols
which are used to describe the process.
States - Rectangles with rounded corners are used to represent a state in which the
object may exist. A single object may exist in multiple states within a single
process, but may only be in one state at any specific point in time. It must be
possible to determine how an object arrives at that state and what causes it to move
to another state.
Transitions - from one state to another are shown using arrows. Transitions connect
state pairs to reflect allowed moves from one state to another. A single state may
pair with multiple other states in a predetermined sequence.
Causes - labels on arrows are causes. These causes are the conditions that must
exist for an object to move from one state to another.
Guards on the process - are shown as rectangles. These are the rules that must be
satisfied for a move (state change) to be allowed.
Actions taken in response to state change - are represented as slashes. A single
state change may generate multiple actions.
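The elements above (states, transitions, causes, and guards) can be sketched as a small table of allowed moves. The kiosk states, causes, and the inactivity guard below are illustrative assumptions, not taken from Figure 5-6:

```python
# A State Transition sketch: allowed (state, cause) -> new state pairs,
# plus an inactivity guard. States, causes, and the guard are illustrative.
TRANSITIONS = {
    ("Idle", "customer touches screen"): "Selecting",
    ("Selecting", "option chosen"): "Paying",
    ("Paying", "payment verified"): "Printing",
    ("Printing", "tickets dispensed"): "Idle",
}

def next_state(state, cause, idle_seconds=0):
    """Apply a transition; a guard returns the kiosk to Idle on inactivity."""
    if idle_seconds >= 90:          # guard: no activity for 90 seconds
        return "Idle"
    try:
        return TRANSITIONS[(state, cause)]
    except KeyError:
        raise ValueError(f"illegal transition from {state!r} on {cause!r}")

print(next_state("Idle", "customer touches screen"))          # → Selecting
print(next_state("Selecting", "anything", idle_seconds=120))  # → Idle
```

Enumerating the pairs this way forces the question the diagram is meant to answer: for every state, what causes the object to arrive there and what causes it to leave?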

Figure 5-6 State Transition Diagram


Transitions are often troublesome in design and construction. Effective use of the State
Transition Diagram will help identify issues much earlier in the process. Returning to the



Kiosk example, notice in Figure 5-6 the guard box "No Activity for 90 Seconds." This is a
business rule that did not surface in earlier discussions. For the Business Analyst and the
Tester, these are key areas to focus on when planning the Acceptance Testing Effort.

5.2.5 Prototyping

A prototype is a non-operational representation of a process or system. Once it is built and has
served its intended purpose, it will be thrown away. For larger projects, the development of a
prototype may be very cost effective as a means of identifying requirements. With the rapid
prototyping tools available, customers and business partners can be shown a representation of
the system that will give them much better insight into what the finished product might be.
Many people in the business community understand prototyping; however, it is always wise
to preface the use or display of a prototype with the warning that this is not, and cannot be, a
production product. The analogy of an architect's scale model is an apt one: the building is
made of cardboard and glue, not bricks and mortar. No matter how big it is, people cannot live
or work in it.
Use of the term prototype instead of model will help in making the distinction; after all, one
can purchase and use a model home or the floor model of a car. It may take time and effort,
but it can be done. There is no such counterpart for a prototype. Prototyping and its role in the
development process will be discussed in more detail in Skill Category 6.

5.2.6 Test First

Many of the techniques above grew out of the Object Oriented Methodology. Test First is a
foundation of the so-called Agile methodologies, the best known of which is Extreme
Programming, or XP.10 These approaches focus on delivering high quality products to
customers in fast increments. The approach to requirements is to focus on how each
requirement can be tested, and to develop and execute those tests before any code is written.
Those tests then become a part of the overall test suite, first at the unit level and later at the
systems level.
This process can be especially beneficial when working on an existing system. By creating
and running the test cases before the new code is written, developers will have a much clearer
understanding of what will need to be changed for the new system to work correctly. This
process of changing the old code to work correctly in the new context, before installing the
new code, is called refactoring, and it is a key step in delivering quality products. Agile
methodologies will be examined in more detail in Skill Category 6.

10. Andres, Cynthia and Beck, Kent; Extreme Programming Explained: Embrace Change; Addison-Wesley.
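The Test First cycle can be sketched in a few lines. The discount rule below is a hypothetical example invented for illustration, not one drawn from the CBOK; the point is the ordering, with the tests written before the code they exercise exists.

```python
# Step 1 (Test First): the tests are written first, expressing the
# requirement ("orders of 100 units or more receive a 10% discount",
# a hypothetical rule used only for illustration).
def test_discount_rules():
    assert discounted_total(100, 2.00) == 180.00  # 100+ units: 10% off
    assert discounted_total(99, 2.00) == 198.00   # below 100: full price

# Step 2: running the tests now fails (the function does not exist),
# so just enough code is written to make them pass.
def discounted_total(quantity, unit_price):
    total = quantity * unit_price
    if quantity >= 100:
        total *= 0.90  # the discount the tests demand
    return round(total, 2)

test_discount_rules()  # passes once the code satisfies the tests
```

The same tests are then kept in the suite, so later refactoring of the old code can be verified against the behavior the requirement describes.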


5.3 Quality Requirements


The first two sections of this Skill Category have focused on the Business or Functional
Requirements. These are, after all, the reason the system is being developed. But it is entirely
possible to develop a system that provides complete and correct business functionality and is
still completely unsatisfactory to the intended business partner or external customer. Business
Requirements are not the entire story; there are other things to be considered. The Business
Analyst must understand how well the system must perform (quality factors) and the issues
that restrict the project activities (constraints). Without a full understanding of these two
areas, even the best set of business requirements will not produce a successful system.

5.3.1 Quality Factors

Quality Factors, or Quality Requirements, deal with how well a system or application will
perform. In some organizations these are referred to as Technical Requirements; either term is
acceptable, as it is the concept of Quality Requirements that is essential. The importance of
Quality Factors cannot be overstated. According to Tom Gilb, more systems fail because of
Quality Factors than for any other single reason.11 Often the business partner or customer
expectations regarding these factors are not articulated early in the process; in fact, they often
do not surface until the product is presented and fails to meet those expectations. Addressing
the Quality Factors also ensures that areas of IT often not thought of as stakeholders in the
requirements process are included; Operations, Data Security, and the Network Staff are all
typical of this phenomenon.

5.3.1.1 Efficiency

The amount of processing resources, including programs, data, and the associated environment,
required to perform a function. Being efficient was once the most highly desirable attribute for
programs, because resources were very expensive. Every organization has some resource
constraints, and making efficient use of resources is part of creating a quality product. Applications
and products that require excessive resources will create dissatisfaction in the long run. Alternatively,
as the cost of resources declines, continued emphasis on efficiency alone can lead organizations to
make poor decisions on product design and development, creating problems for the organization in
other dimensions. For the CSBA, finding the balance point between the two extremes is essential.

5.3.1.2 Reliability

The extent to which a program or process can be expected to perform its intended function
with the required precision. Reliability is closely correlated with the functional attribute of
correctness, but takes it a step further. Reliability looks at the consistency with which correct
results are produced and the extent of that correctness. This often includes the stability of the
system in production. When defining reliability within an organization, care must be taken to
clarify the boundary between reliability and availability. A product may consistently produce
the correct result, but be prone to outages from internal or external causes; that is, it is highly
unstable. Internal causes should be allocated to Reliability, while external causes are allocated
to Availability.

11. Gilb, Thomas; Software Engineering Management; Addison-Wesley, 1988.

5.3.1.3 Availability / Response Time

The extent to which a system or program is functionally accessible by those intended to use it.
Availability can be measured many ways, at many places in the process, resulting in wildly
varying figures; this issue can create friction and frustration if not anticipated and dealt with
effectively. Likewise, a multi-second response time from one screen to the next can create
major processing bottlenecks, or cause customers to go elsewhere. Anticipating volumes of
traffic and provisioning the necessary system resources can be both time consuming and
expensive. Making a significant mistake in this area can be life-threatening for the
organization. A system may be highly stable and reliable, and still have low availability
because of the difficulty of resolving problems and restoring the system when crashes do
occur.

5.3.1.4 Integrity / Security

The extent to which access to software and data by authorized individuals, groups, or
programs can be controlled. There are two aspects to integrity and security: one deals with
individuals who are supposed to have access to the system, process, or data; the other deals
with everyone else. These two facets can create significant conflicts with other Quality
Factors if not fully understood and explored.

5.3.1.5 Usability

The effort required to learn, operate, prepare input, and interpret output of a program, process
or system. Early programs and processes required individuals using them to remember long
strings of complex commands. As more people used systems, the need to make them easier to
use increased. Ease of use for the ultimate user of the system often adds significantly to the
complexity of the design and the resources required for implementing the design (efficiency).

5.3.1.6 Maintainability

The effort required to enhance and improve the product or system after the initial deployment
of the functionality. Clearly defined requirements, a well articulated design, a fully
documented architecture, and standardized coding processes all contribute to an application
that is easily maintained. All of these things also lengthen the initial time frame for
implementation.
5.3.1.7 Flexibility

The extent to which the system is capable of being used in multiple configurations, specific to
the needs of one or more groups of business partners or customers, without requiring
significant modifications. The business community and external customers want systems and
processes to be flexible, so they can be changed on the fly. They want configuration options
that can be exercised without programming intervention. This flexibility requires
sophisticated design capabilities, is generally resource intensive, and is often a challenge to
manage in the security and integrity arenas.

5.3.1.8 Portability

The effort required to transfer a program or process from one hardware or software platform
or environment to another. During the early days of computerization, hardware and software
environments were relatively stable, changing only slowly over a period of a year or more. In
today's environment a commercial product can anticipate a life span of two to three years. The
cost of building in the ability to rapidly change from one platform to another is often paid in
terms of operating efficiency and response time.

5.3.1.9 Reusability

The extent to which a program can be used in other applications; related to the packaging and
scope of the functions and programs. Like efficiency, reusability is a desirable attribute when
resources are very expensive. Reusable programs and modules can also provide the ability to
quickly deliver proven functionality to the customer. For this reason, when an organization is
going to be developing a series of applications that will require very similar functionality, the
development of reusable code can provide considerable cost savings. The price for this is the
time and effort required to develop and maintain the code in an application neutral format.
When considering the development of reusable code, the Business Analyst should work with
the IT staff to verify that the probability of reuse in the very near future is high. Otherwise,
the extra time and effort may not provide value to the organization; writing reusable code
requires significant time and effort and today is rarely worth it.

5.3.1.10 Interoperability

The effort required to couple and uncouple systems. This Quality Factor has increasingly been
seen as a design decision rather than as a requirements issue. As the environment for
developing and supporting modular systems has become more robust, the expectation that
data can be shared seamlessly among applications has grown also. The need to have access to
the information is a requirements issue; making it possible is a design issue.


5.3.2 Relationship of Quality Requirements

As can be seen from the descriptions in Section 5.3.1, not all of the Quality Factors interact
well. The effort to make programs more efficient will impede efforts to make them highly
usable and easily maintainable.
The matrix in Figure 5-7 demonstrates the general relationship among some of the Quality
Requirements. The attempt to maximize efficiency will have a negative impact on all of the
other attributes in this matrix. Some of the conflicts can be easily resolved; others are more
difficult.

[Figure 5-7 is a matrix pairing the Quality Factors (Correctness, Reliability, Efficiency,
Integrity, Usability, Maintainability, Testability, Flexibility, Portability, Reusability, and
Interoperability) and marking the impact each has on the others.]

Negative impact on the intersecting attribute = x
Positive impact on the intersecting attribute = o
Blanks indicate no relationship between the pairs of attributes

Figure 5-7 Quality Requirements Conflict Matrix


The issues with Integrity and Security are difficult. It is essential that Usability,
Maintainability, and Response Time all be maintained for those who should have access.
Differentiating those who should have access from those who should not takes resources.
Determining precisely what level of access is allowed for each authorized individual takes
more resources. The need to stay one step ahead of determined and creative hackers means
that efforts do not stop with implementation, so maintainability must not only begin high; it
must stay that way.
In addition, attention to the issues presented by the Quality Factors must occur well in
advance of the time when the impact can be seen. The resources are consumed early, and often
the benefits show up late. There is an enormous temptation to short change the early
expenditure to save time and money; the negative impact will be correspondingly greater if
this happens. Figure 5-8 shows the time line relationship for many of the Quality Factors.
Note that in this example Testability has been included, although it was not in the list in
Section 5.3.1. Testability is a necessary attribute of all aspects of requirements.


[Figure 5-8 charts each Quality Factor across the life cycle phases (Requirements, Design,
Code and Debug, System Test, Operation, Revision, and Transition), grouped into Evaluation,
Development, and Post-development, and rates the expected cost saved versus the cost to
provide each factor.]

O = where quality measures should be taken
X = where impact of poor quality is realized

Figure 5-8 Time Line Relationship among Quality Factors

5.3.3 Measuring Business and Quality Requirements

One of the commonly overlooked aspects of the Requirements Definition process is the need
to be able to measure the result. For most Business Requirements, the measure is binary. The
requirement is either present or absent; the flag is on or off; the field is highlighted or it is not;
the calculation is correct or it is not, and so on. While the scenario route to any specific
requirement may be long and complex, and certain states may only be achievable after
multiple interactions, the end result is either positive or negative.
Quality or technical requirements are generally measured on an analogue scale; this means
that there is a range of possible values. Some parts of that range may be acceptable and other
parts may not. It is essential to clearly identify how the requirement will be measured. Earlier,
the problems with measuring Availability and Response Time were alluded to; typical
Availability and Response Time requirements begin by looking like this:
Availability - The system will be available 99.5% of the time, measured monthly.
Response Time - The system will provide sub-second response time between
screens 94% of the time, and 1 to 3 seconds not more than 5% of the time. 100% of
between-screen response times will be less than 6 seconds.



This may seem like a reasonable and achievable requirement, but it lacks the necessary clarity
and definition. Where will this be measured? At the server? At the desktop? On a remote
laptop? The impact of that difference is clear; the measure must be clear also.
Availability - The system will be available 99.5% of the time, measured monthly, at
the server.
Response Time - The system will provide sub-second response time between
screens 94% of the time, 1 to 3 seconds not more than 5% of the time. 100% of
between screen response times will be less than 6 seconds, measured at the server.
While this will help the operations and network staff to understand their requirement, it does
not provide the person using the desktop or the laptop with a clear expectation of what they
will experience.
Availability - The system will be available 99.5% of the time, measured monthly,
measured at the server. The system will be unavailable from 2:00 a.m. until 2:30
a.m., the first Sunday of each month for routine maintenance. The system will be
available 98.5% for desktop and locally connected laptop users at the Main Office,
measured monthly. The system will be available 99.9% of the time between 6 a.m.
Monday morning and 9 p.m. Friday night for all other authorized users, measured
monthly.
This expanded description of what is required in terms of Availability clarifies exactly what
will happen. In addition to a planned monthly outage of 30 minutes, the 99.5% target,
measured monthly, allows roughly three additional hours of unplanned outage per month
(0.5% of a 30-day month is about 216 minutes).
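The arithmetic behind such targets is worth sketching. Assuming a 30-day month, a minimal calculation (the function names below are illustrative, not from the CBOK) converts an availability percentage into a downtime budget, and a sample of measured response times can be checked directly against the percentile clauses of the requirement:

```python
def downtime_budget_minutes(availability_pct, period_days=30):
    """Total downtime a given availability target allows per period."""
    period_minutes = period_days * 24 * 60
    return period_minutes * (1 - availability_pct / 100)


def meets_response_time_slo(samples_s):
    """Check between-screen response times (seconds) against the
    stated requirement: 94% sub-second, no more than 5% in the
    1-to-3 second band, and nothing at 6 seconds or more."""
    n = len(samples_s)
    sub_second = sum(1 for t in samples_s if t < 1) / n
    one_to_three = sum(1 for t in samples_s if 1 <= t <= 3) / n
    return sub_second >= 0.94 and one_to_three <= 0.05 and max(samples_s) < 6


# A 99.5% monthly target allows about 216 minutes of downtime.
print(round(downtime_budget_minutes(99.5), 1))  # 216.0
```

Note that the same percentage yields very different budgets depending on the measurement period, which is exactly why the requirement must state where and over what interval availability is measured.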

5.3.3.1 Minimum Level of Acceptable Quality

Quality Requirements measures such as the one in the preceding section clearly identify what
the external customer or the business partner wants and needs. But what happens if IT does
not think it can deliver that level of quality, or at least not initially? Does the project stop?
Does IT agree, knowing full well it cannot deliver? Is something else specified?
The answer to the question is provided by Tom Gilb.12 The development of the Minimum
Level of Acceptable Quality will address the issue. This is the lowest possible performance by
the system that will allow it to be useful to the intended customer or partner. Below this level
of performance, the system has no value.
A Quality Requirement must always include the Minimum Level of Acceptable Quality as a
part of the measurement description. This allows the Quality Requirement to specify both
what the customer actually wants and what they are willing to accept.
The dialogue to develop this information will be very productive and informative for both IT
and the customer or business partner. It will provide IT the opportunity to talk about the
relative cost of achieving alternative levels of performance. Those costs may be staff time,
hardware resources, or schedule impact to other projects.

12. Op. cit.; Tom Gilb.
It will provide the customer or partner the opportunity to talk about the relative value of
achieving alternative levels of performance. Those values may include increased revenue,
reduced expenses, customer satisfaction, or market share.
The result of this dialogue is often a staggered implementation of the Quality Requirement:
initial implementation at or near the minimum level of performance, progressing over a
specified period of time to the desired level. This approach makes clear what costs and
benefits are associated with each level, so that reasonable business decisions can be made and
everyone will know how and why.
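One way to capture the outcome of that dialogue is to record both agreed levels with the requirement itself. The structure below is a sketch under assumed figures (the 98.0% minimum is hypothetical), not a format prescribed by the CBOK:

```python
from dataclasses import dataclass


@dataclass
class QualityRequirement:
    name: str
    unit: str
    target: float          # what the customer actually wants
    minimum: float         # Minimum Level of Acceptable Quality
    higher_is_better: bool = True

    def assess(self, measured):
        """Classify a measured value against the two agreed levels."""
        if self.higher_is_better:
            at_target = measured >= self.target
            acceptable = measured >= self.minimum
        else:  # e.g. response time, where lower values are better
            at_target = measured <= self.target
            acceptable = measured <= self.minimum
        if at_target:
            return "meets target"
        if acceptable:
            return "acceptable, below target"
        return "unacceptable"


# Hypothetical figures: the customer wants 99.5% monthly availability
# but agrees that 98.0% is the lowest level at which the system is useful.
availability = QualityRequirement("Availability", "% per month",
                                  target=99.5, minimum=98.0)
```

Carrying both values makes the staggered implementation explicit: a delivery measuring between the minimum and the target is acceptable for an early release, with the target remaining the contractual end point.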

5.3.4 Enterprise Requirements

Each Business Unit operates under the larger umbrella of the organization or enterprise as a
whole. The Business Units develop strategies and tactics to support the organizational vision
and mission. Just as the individual business unit plans and projects must fit within the overall
plan, so too must the requirements fit within the enterprise umbrella. Often Enterprise
Requirements are defined and embedded in the IT Standards and Procedures Documentation;
in that way each project can simply refer to that section of the Standards and Procedures to
incorporate those Requirements. Enterprise requirements apply to all projects, regardless of
size or intent; they fall into three general categories:
1. Requirements to standardize
2. Requirements to handle accessibility issues
3. Requirements to ensure the existence of the needed levels of control

5.3.4.1 Standardization

These are often referred to as providing a common look and feel to systems. Standardization
addresses such things as how and where organization logos will be used, required
language(s), and acceptable images. While some standardization issues fall within design,
such as where controls will be placed on screens, there are others which are strictly
requirements.
It is easy to assume that everyone knows that, but failure to explicitly include references to
those standards may result in significant embarrassment for the project team. This is
particularly important when working in a multi-cultural, multi-national environment, where
some words, references, or images may be offensive to parts of the organization not
represented on the requirements team.
Standardization is not simply a negative; it is also about who the organization is and how it
wishes to be portrayed. Managing that image through its system representation is very
important.



5.3.4.2 Accessibility

As computer technology evolves, individuals once excluded from participation due to
physical limitations are increasingly part of the business and customer base. The requirement
to address those aspects that limit participation is increasingly common. A number of
governments13 have created accessibility standards for their own systems which are rapidly
being adopted by private enterprise.
The visually impaired may need to be provided with voice recognition capabilities that will
allow them to talk to a system, paired with reader capabilities that provide choices verbally
rather than visually.
The hearing impaired may need to be provided with visual error clues instead of the standard
beeps or tones. They may also need text transcription to provide access to dialogues.
The mobility impaired may need to be provided with alternative interfaces for issuing
commands instead of a mouse or keyboard.
Each of these sets of requirements must be articulated prior to design if the system is going to
be effective in meeting these needs.

5.3.4.3 Control

Skill Category 4 addressed the issue of the need for appropriate controls from the business
perspective. This need must be translated into effective system requirements. Compliance
with various national legislation on the control of financial transactions is a significant part of
the process. While in some countries (U.K. for example), compliance is optional, the choice
not to comply must be explained, which results in few organizations electing non-compliance.
Explicit statements about controls in the requirements document will ensure that those
controls are properly designed into the system.

5.4 Constraints and Trade-offs


In an ideal world, anything is possible and everything will be included. In the real world,
some things, though possible, are not practical, as there are not enough resources to do
everything. Choices will need to be made.
To this point in Skill Category 5, the emphasis has been on identifying all of the possible
requirements for the prospective project. The point has been to ensure they are all known.
Now that they are known, it is time to put some parameters around them. This section
describes how and where this occurs.

13. Details of the web access requirements of the US Federal government are provided in Section 508
of the Rehabilitation Act; see www.section508.gov/.

Many of the issues raised here are a direct bridge to the design process. Why are they included
as requirements? The answer is fairly straightforward: designers cannot be, and should not
be, expected to guess at the environment they are designing for. The job of requirements is to
get this right.

5.4.1 Constraints

A constraint is anything that places a limit on what is possible. For Information Technology
projects, some of the constraints are constant, others change with the project. To place the
requirements in the proper context, it is essential to identify the constraints and fully describe
them before entering Design.

5.4.1.1 Hardware

Hardware constraints run the gamut from simple to complex: the amount and capacity of
available servers, desktop and laptop capabilities, internal and external network capacity, and
data storage resources, to name a few. Will the proposed system run on the existing hardware
platform? Is the proposed hardware platform capable of performing the desired functionality?
Does hardware to meet the requirements exist, or must it be created? For each project, some
subset of these issues and others like them will need to be addressed during requirements.

5.4.1.2 Software

Software constraints include the current and proposed operating system environment, the
interoperability of multiple products and multiple operating systems; the need to interface
with existing applications and to anticipate the arrival of new applications. Software
constraints may also include required or prohibited languages and tools. Each of these will
impose limitations on the final design solution.

5.4.1.3 Policies, Standards and Procedures

Each of these contributes to the structure that describes what can and cannot be done in IT and
the wider organization. These may support or prohibit changes to the standard hardware and
software platform; they may require or forbid specific development methodologies; they may
support or exclude the inclusion of the business partner and the external customer in the
development process; and they may encourage or ignore quality considerations.

5.4.1.4 Resources

This is usually what is mentioned first when the question of constraints arises. The classic duo
of schedule and budget sits at the top of the resource constraint list for many organizations.


How many people can we have, for how long? How experienced are they? Will we have the
right number of people, but with the wrong skills? Do we get to choose? What is the budget
for the project, and who is in control of it? How are schedule and budget managed? Are they
inflexible and arbitrary, or realistic and negotiable? As emotionally fraught as these issues are,
it is essential to know the answers to these questions during requirements. They will have an
enormous impact on what is a realistic set of finished requirements.

5.4.1.5 Internal Environment

This addresses many issues that IT often tries to ignore, especially organizational politics.
It includes a consideration of who the project sponsor is and how important the project is to
them and to the organization as a whole. It also includes taking cognizance of wider issues: is
the organization making money or losing it? Is there a stable, collegial environment, or are
shakeups and reorganizations a fact of life? What do these issues have to do with
requirements? Everything! Unstable, volatile environments suggest that smaller, less risky
requirement sets should be delivered quickly, before the rules change. A more stable and
supportive environment makes a larger and more complex requirements set a realistic
possibility.

5.4.1.6 External Environment

This addresses issues that originate outside the organization but are still important to the
project. Typically included in this set of constraints is local and national legislation, either
currently in force or impending; it may also include industry standards. There may be a need
to interface with a dominant supplier or customer, in which case their requirements are also
project requirements. The economy as a whole may be significant to the project: is there a
small window of opportunity that only the nimble will be able to exploit? Are there local,
national, or international implications for the product? If so, each of those must be separately
addressed during requirements to ensure that unwarranted conclusions are not drawn.

5.5 Critical Success Factors and Critical Assumptions

As the Requirements Definition process is being conducted, it becomes apparent that not all
requirements are created equal; some are much more important than others. Some are so
important to the project that if they are not properly implemented, the project cannot be
considered a success. There are other things, important to the success of the project, that are
outside the control of the organization. All of these sets of information must be considered
when creating the finished requirement set.


5.5.1 Critical Success Factors (CSFs)

Critical Success Factors are those factors within the control of the organization and essential
to a successful project. These were addressed briefly in Skill Category 4 as one important
element of the finished Feasibility Study. Critical Success Factors must meet all of the criteria
for good goals and objectives (Specific, Measurable, Assignable, Realistic, and Time-Oriented).
Without an effective definition of what constitutes a successful project, the team may continue
to churn endlessly on trivial items. By clearly agreeing up front, during the Requirements
Definition process, on what it takes to succeed, everyone is both focused on the right things
and knows when they have achieved them. Failure to define success early can create strong
motivation for scope creep and make control of the project much more difficult.
The number of Critical Success Factors will vary with the size of the project, but should never
be large. A small project may have one or two. Larger projects may have three to five per
delivery. Creating larger lists of CSFs merely diverts the attention of the project team from
what is truly important: when everything is critical, nothing is critical. This is often a difficult
process for the team, as they must whittle down the list of important things to the truly
essential ones. Applying techniques such as multi-voting followed by the Forced Choice
process described in Skill Category 2 will help to reduce the Critical Success Factors to those
that are truly critical. Items that are not Critical Success Factors may still operate as important
requirements and/or constraints for the project. It is an even more emotional and difficult
process if left until implementation time.
Critical Success Factors should be clearly and directly linked to the Business Problem that
was the originating reason for the project, and so directly linked to the organization's
customers. Critical Success Factors that have no significance for the customer base must be
scrutinized carefully to determine what purpose they serve. Occasionally, compliance with
some local or national regulation will become a CSF; this is not an exception to this rule. In
the case of regulation, failure to comply may well jeopardize the organization's ability to
serve the customer effectively, and in extreme cases, to serve the customer at all.
Development of the project's CSFs will help to focus the team's attention on those things
which must be accomplished. Requirements that do not support these items will have a lower
priority, even though they may still be very important. The weaker the link to the CSFs, the
lower the priority will be. Some of the very lowest priority items may need to be dropped from
the product, either temporarily or permanently; they will become out of scope for the current
delivery.
An example of project CSFs would be:
Required Functionality: the minimum delivered functionality will include the Sales
and Inventory functions as described in Requirements 1 through 4, 8 and 9.
Required Locations: the product will be implemented in all of the EUC locations
except France on or before 1 January, 20xx. Requirements 5, 7 and 12.



Required Availability: the system will be available 99.5% of the standard normal
business week, as locally described, measured at the remote locations within 90
days of implementation. 90% availability as described above is the Minimum
Acceptable Level of Performance for Implementation. Requirements 6 and 15.
Notice the clear traceability of the CSFs to the project requirements. What is also clear is that
they are tied to the most important requirements. If these criteria are met, the project is a
success; otherwise, it is not. Creating reasonable CSFs will keep the team energized to
achieve them. For a more detailed review of establishing goals and objectives, review Skill
Category 4.
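That traceability can also be checked mechanically. In the sketch below, each CSF lists the requirement numbers from the example above; any requirement not referenced by a CSF falls to a lower priority and becomes a candidate for deferral out of the current delivery's scope:

```python
# CSFs mapped to the requirement numbers they trace to, mirroring
# the example above (identifiers are illustrative).
csf_trace = {
    "Required Functionality": {1, 2, 3, 4, 8, 9},
    "Required Locations": {5, 7, 12},
    "Required Availability": {6, 15},
}
all_requirements = set(range(1, 16))  # assume requirements 1 through 15

# Every requirement referenced by at least one CSF.
covered = set().union(*csf_trace.values())

# Requirements with no link to a CSF: still potentially important,
# but the weaker the link, the lower the priority.
lower_priority = sorted(all_requirements - covered)
print(lower_priority)  # [10, 11, 13, 14]
```

Running the check at each requirements review keeps the CSF-to-requirement links current and surfaces scope creep early, since any new requirement either traces to a CSF or lands on the lower-priority list.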

5.5.2 Critical Assumptions (CA)

Critical Assumptions are key factors for the project that are outside the control of the project
team, and they can make or break the project. As important as it is to define what constitutes
success, it is equally important to identify those things which can jeopardize that success.
Critical Assumptions should always be accompanied by recommended mitigating tactics or
risk responses.
The discussion about Critical Assumptions is an evolutionary one; the initial focus will be on
what could jeopardize the success of the project. Many of these potential risks are within
control of the organization. They can be managed using various risk techniques; some of them
are common, some are less so. Beginning this list as early as possible in the Requirements
Definition process allows information about the CA to be collected and assessed in a
structured fashion. Any project with a lengthy list of Critical Assumptions should be carefully
appraised. Is the organization willing to expend the resources required on a project with this
many issues outside their control?
Once the list of risks has been created, and those within control of the organization eliminated,
the remainder must be scrutinized to determine how great a risk they pose to the project. Only
those few, which could actually cause the project to fail, should become critical assumptions.
For example, an organization that develops Accounting Software might have the following:
Critical Assumption - No substantial changes to the IRC (Internal Revenue Code)
provisions for the Not-For-Profit Sector, that are required to be effective on or before
the product delivery date, and that have not already been proposed, will be enacted
after the final design is approved and the test cases written, but before the scheduled
delivery date.
Response: In the event that this does occur, a statistical analysis of the customer
base will be conducted. If the change will have a significant impact on less than
20% of the customer base, it will be deferred to the next release. If the change will
impact 20% or more of the customer base, it will be added to the current phase.
An estimate will be prepared for the new requirements. If the estimate will require
80 Standard Work Days (SWD) or less, additional resources will be acquired to

perform the work and the date will not change. If more than 80 SWD are required,
the delivery date will be changed.
In reviewing this example, there are several items of note. The window of opportunity for this
event to occur has been very narrowly defined. The organization has identified the risk of
changes, but has determined changes proposed prior to a specific point in the process can be
accommodated.
They have also identified an escalating series of responses, depending upon the severity of the
impact.
1. If the impact is not significant to customers, it will be deferred.
2. If the impact is significant, but only to a small segment of customers, it will be deferred.
3. If the impact is significant to a larger number of customers, it will be included in the
current product.
4. If the size of the change is not too large, resources will be added and the date held
constant.
5. Finally, if the size of the change is large, the date will be moved.
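The escalating responses above amount to a small decision table, so they can be mechanized. The sketch below is illustrative only: the function and parameter names are invented, and the 20% and 80-SWD thresholds are the ones from this example, not a standard.

```python
def ca_response(impact_pct, estimate_swd, impact_threshold=20, swd_threshold=80):
    """Illustrative decision logic for the Critical Assumption example.

    impact_pct   -- percent of the customer base significantly impacted (0-100)
    estimate_swd -- estimated effort for the new requirements, in Standard Work Days
    """
    if impact_pct < impact_threshold:
        # Impact is significant to fewer than 20% of customers: defer.
        return "defer to next release"
    if estimate_swd <= swd_threshold:
        # Change is in scope and small enough: add resources, hold the date.
        return "add resources; hold delivery date"
    # Change is in scope but large: the delivery date moves.
    return "include change; move delivery date"
```

Encoding the responses this way during the Requirements process keeps the decision rule explicit and repeatable, rather than renegotiated under pressure.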
As with the Critical Success Factors, identifying the responses during the Requirements
Process allows participants to address the issues logically rather than emotionally. Waiting
until the last minute to decide what to do is always the most expensive option. A second consideration is that, under pressure to "do something now," the team may not take the necessary time to research alternatives and arrive at the correct response. This may result in a response that is inappropriate, ineffective or inefficient. Finally, experience shows that
obtaining the needed approvals for the resources to address risks is often a time consuming
process. This results in delays to the schedule and cost overruns.
Managing risks, of which Critical Assumptions represent just one category, will be addressed
in more detail in Skill Category 6.

5.6 Prioritizing Requirements and Quality Function Deployment
Prioritizing Requirements is not simply an academic exercise. It provides real value to the
team and to the organization. Effective use of priorities allows the most important items to be
delivered first. It provides the framework for effective decision making about what to include
in a project, and when to include it.
The time spent in prioritizing requirements will be saved later in the project, when hard
decisions must be made. One of the early situations is the sizing of the project or the initial
delivery of the project. Based upon the priority and time estimates for development, it is
possible to optimize the functionality delivered using the resources available. Much like



packing the trunk of a car, the large important items go in first, then the nooks and crannies
can be filled with smaller, but still important items.
If the project is running late, the best and most obvious strategy, if the date cannot be moved,
is to cut scope. But what to cut and how to decide? With a complete listing of priorities, the
answer is simple, look at the lower ranking items first. This does not mean it is a pain-free
process. It does mean the foundation has been laid for making sound business decisions.
The alternative scenario is a group ranking such as "A-B-C" or "High, Medium, Low." Realistically, things that are ranked as C or Low rarely make it into the product. So the decision makers are faced with choosing among things that are all important. Often there is pressure to make a decision quickly because time is an issue; the result is often not the best choice.

5.6.1 Prioritizing Requirements

The Requirements process does not end with the identification of all of the Requirements. In
harvesting fruit, all of the apparently ripe fruit is picked, then those that are too small, have
rotten spots, or other defects are culled out. So too with requirements. Not all of those defined
will actually be used, or at least not immediately. The question is then how to decide which to
choose.
Skill Category 2, Section 2.3.10.2 discussed the need for Prioritization and presented two
methods for developing consensus around a list of priorities: multi-voting and forced choice.
The Business Analyst will need to be very comfortable with both of those methods.
Sometimes there is additional work to be done in preparation for the use of those two tools.
When working with multiple stakeholders, it is worthwhile to have each stakeholder go
through a mini-prioritization process. This should include all of the common function
requirements that will support all of the stakeholders plus the stakeholder-specific
requirements. During this session, it is important to maintain focus on the importance of
achieving the overall business objectives. This will help the individual groups to recognize the
importance of the common functionality.
At this point it is also worthwhile to talk about "meaningful chunks" of functionality, as
opposed to entire processes. The goal here is to help each stakeholder group begin to develop
realistic expectations about what they might be able to get in the new system and when that
might be feasible. Doing this internally will allow some grumbling about the need to share
resources, without having to fight with another organization for those resources at the same
time.
Defining the functionality chunks can be a challenge. One approach to this problem is to use
the Affinity Diagram process described in Skill Category 1, Section 1.5.1. Allow the
participants to group the requirements into related sets, give the sets a working title, then
determine the relative importance of each functional set. When working with multiple groups,
the Business Analyst will want to maintain a working list of titles so there is no duplication or
conflict.

Once each of the stakeholder groups has completed this step, they can be brought together to
begin the process of creating a consolidated list. Depending upon the size and complexity of
the proposed system it may be worthwhile to extract the common functionality requirements
and prioritize those first. As each group has already done some internal evaluation, this can
provide a useful benchmark.
When the Affinity Diagram process has been used, it may be necessary to do a reconciliation of
the items included in the sets. The objective is to have the sets consistently defined. Some
items may need to be dropped out of a set. Homes will need to be found for these
requirements so they do not become lost.
If there are multiple stakeholders, with different agendas, it will be necessary to ensure each
group is appropriately represented in the multi-voting process. Failure to do this may result in
one or two groups packing the room (that is, having more than their representative share of
participants) to ensure their agenda prevails. If this has not been addressed in advance, it will
be necessary for the Business Analyst to ask each stakeholder group who their voting
representative is. This will allow control of the process to be retained. Representation should
be based upon what each group has at stake; in some organizations it is possible to do this on a
budgetary basis.
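Tallying a stake-weighted multi-vote is straightforward to mechanize. The sketch below is hypothetical: the group names, point allocations, and stake weights are invented for illustration.

```python
def weighted_priorities(votes, stakes):
    """Combine per-group multi-voting results, weighting each group's
    points by its stake (for example, its share of the project budget)."""
    totals = {}
    for group, allocation in votes.items():
        for req, points in allocation.items():
            totals[req] = totals.get(req, 0) + points * stakes[group]
    # Highest weighted total first.
    return sorted(totals, key=totals.get, reverse=True)

# Two groups: Sales holds 60% of the stake, Inventory 40%.
ranking = weighted_priorities(
    {"Sales": {"R1": 5, "R2": 3}, "Inventory": {"R2": 6, "R3": 2}},
    {"Sales": 0.6, "Inventory": 0.4},
)
# R2 wins (1.8 + 2.4 = 4.2), ahead of R1 (3.0) and R3 (0.8).
```

The output is exactly the "list of requirements, numbered from 1 to N" described below, with each group's influence proportional to what it has at stake.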
The end result of the prioritization process should be a list of requirements, numbered from 1
to N. Each stakeholder should sign the finished list as confirmation they have been part of the
decision-making process. Each requirement is a discrete entity that can be addressed
independently. This final list will become the source control for all future discussions of
requirements. The paper copy, with signatures, should be maintained securely for the life of
the project. Electronic copies should be created for future use and stored with the appropriate
amount of access control.
Failure to go through a process like this, even though it can be difficult and time-consuming,
will lead to endless strife on the project. Each group will feel they are being treated unfairly
and their expectations of the system are not being addressed. There is no way a project like
this will be seen to be successful.

5.6.2 Quality Function Deployment (QFD)

This technique, which grew out of the manufacturing environment in Japan in the 1960s, was developed by Drs. Yoji Akao and Shigeru Mizuno. The focus of QFD is to hear the Voice of the Customer (VOC), and to ensure what is heard is successfully translated into products that have value for the customer. QFD is an integrated, four-stage approach which covers the entire life-cycle of product development: Product Planning, Assembly/Part Deployment, Process Planning, and Process/Quality Control.
• Product Planning - includes defining and prioritizing the customers' needs; analyzing competitive opportunities, planning a product that responds to the perceived needs and opportunities, and establishing critical characteristics and target values.



• Assembly/Part Deployment - includes identifying critical parts, components and assemblies of the product, decomposition of critical product characteristics, and translation of critical components into class characteristics and target values.
• Process Planning - includes determining critical processes and process flows, development of required techniques and technology, and establishing process control targets.
• Process/Quality Control - includes determining individual part and process characteristics, creating process control methods to support measurement and establishing standard inspection and testing processes.
Many of the information gathering and measurement development techniques included in
QFD have already been discussed. What makes this approach of value is the structure used to
generate products containing the requirements of the highest value to the customer. This is
accomplished through the use of a series of matrices for listing and ranking various attributes.
Section 5.6.1 on Prioritization discusses a number of subjective evaluation processes that can
be used to determine priorities. QFD attempts a more quantitative approach. Based upon input
gathered from customers, a few, high-level product objectives are defined. (Critical Success
Factors may be used in some circumstances.) Each product objective is given a ranking that
translates into a weight. Once a set of potential requirements has been developed, they are
mapped against the objectives in a matrix and assigned a value based upon how well they
support that objective.

Objectives (with weights):
• Purchase transaction takes 2 minutes or less (weight 9)
• Ability to purchase many kinds of tickets at one location (weight 5)
• Ability to purchase tickets up to 3 prior to the Event (weight 3)

Candidate requirements and their weighted scores:
• Sell tickets for local events only: score 41
• Sell tickets for state wide and local events only: score 34
• Sell tickets for region, state and local events: score 30

Figure 5-9 Quality Function Deployment Matrix


Figure 5-9 looks at 3 possible requirements for the Kiosk example used earlier. The score
shown on the right side indicates how well each of these three options supports the top three

characteristics customers said they would want for this product. The requirements are ranked against each objective using a simple high (3), medium (2) and low (1) scale. Each ranking is multiplied by the weight for the objective, and the score is the sum of all of the results. Using this type of mathematical model lends an air of certainty to the prioritization process. Developing and agreeing upon the scales and weights can be difficult and time consuming.
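That weight-times-ranking arithmetic can be sketched in a few lines. The rankings below are invented for illustration; only the weights (9, 5, 3) come from the Kiosk example.

```python
def qfd_score(weights, rankings):
    """Sum, over all objectives, the objective's weight multiplied by the
    requirement's high (3) / medium (2) / low (1) ranking against it."""
    return sum(weights[obj] * rankings[obj] for obj in weights)

weights = {"2-minute purchase": 9, "many ticket kinds": 5, "purchase ahead": 3}
# Hypothetical rankings for one candidate requirement.
score = qfd_score(
    weights,
    {"2-minute purchase": 3, "many ticket kinds": 2, "purchase ahead": 1},
)
# 9*3 + 5*2 + 3*1 = 40
```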
It is possible to consider several other characteristics in this same fashion. Once the list of requirements has been trimmed down to what may be the final working set, it is worthwhile to make a preliminary assessment of how long it will take to create and test each requirement, and how technically difficult that may be. This process allows the team to look
at the relative contribution of specific requirements to the needs of the customer, as opposed
to what it will take to deliver that requirement. Items with low value and high difficulty may
be reassessed.
A second use for this process is in determining the configuration of the various product
releases. The difficulty and duration analysis allows the team to balance the complexity of a
release, while maintaining high customer value.
One other attribute of the QFD approach is the traceability of the requirement, from the Voice
of the Customer, through design and construction to the finished product. This issue of
traceability will be explored in more depth in Section 5.8.
Because the ranking process is performed by all of the stakeholders, it provides an excellent
communication forum. It can be a very positive approach to a difficult problem. There are a
few risks with the approach, chief among them an unconscious tendency toward "group think" which may cause some characteristics to be over-valued and unchallenged.

5.7 Developing Testable Requirements


Throughout this Skill Category, the need to develop testable requirements has been
emphasized. Testability is the only criterion that appears on the most commonly seen lists of
critical attributes for Business Requirements and often on the list for Quality Requirements. It
is one thing to say it is important; it is another to know specific approaches for accomplishing
it. Discussed below are some of the most common and well respected methods for improving
the testability of requirements.
Each of these methods addresses one or more aspects of the quality of the requirement. These
methods can be used individually or in combination, depending upon the needs of the
organization. Each will require the investment of time and effort to obtain the desired result.
Business Analysts and Project Managers who choose to add these steps to the process should
expect to be challenged to demonstrate the cost effectiveness of the process in question. A
sound working knowledge of the economics of quality as presented in Skill Categories 1, 2,
and 3 is necessary.


5.7.1 Ambiguity Checking

"Unambiguous" is one of the top criteria for good requirements. But what does it mean to be unambiguous, and how can someone tell?
To be unambiguous, a statement must have one, and only one possible interpretation. That is,
any group of individuals reading the statement independently would all come to the same
conclusion about its meaning.

5.7.1.1 What is Ambiguity Checking?

There are several tests that can be applied to a requirement to determine if it is clear; some are
quick and easy, others require more effort. One approach is to change the focus or emphasis of
the statement to see if the meaning changes. The following example is a classic analysis.14

Statement                     In contrast to
MARY had a little lamb        it was hers, not someone else's
Mary HAD a little lamb        but she doesn't have it anymore
Mary had A little lamb        just one, not several
Mary had a LITTLE lamb        it was very, very small
Mary had a little LAMB        not a goat or a chicken
MARY had a little lamb        but John still has his

Each iteration in the example looks at the possible changes in meaning to a simple and
familiar statement. The rhyme goes on to say "its fleece was white as snow." This could be read as a requirement for the lamb. If this phrase is appended to the original statement, "Mary had a little lamb," some of the ambiguity might be removed. However, if the two requirements are fragmented or examined in isolation, the intent is unclear.
Another approach is to substitute a synonym for one or more of the key words to determine if
the meaning will change. For example, the requirement for a system to be able to "punch the ticket" can be interpreted as placing a small mark or hole on a physical document, or it might mean to hit the document. There are also several slang interpretations of this phrase which could further cloud the issue.

5.7.1.2 Performing Ambiguity Checking

One place to begin the process is to do a simple test for potential ambiguity. Select a small set
of related requirements. Choose a small group of qualified developers and ask them to do a
quick estimate on how long it will take to implement those requirements. The resulting
numbers themselves might be of little interest; what is important is the level of consistency.
14. www.construx.com; Ambiguity Checking Heuristics; March 2007

If the responses are all within a fairly narrow range, there is probably little ambiguity (the
requirements may not be correct, but everyone interprets them the same way). If there is a
noticeable divergence, ambiguity may be an issue and the requirements set should be subject
to further scrutiny.
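One way to quantify "noticeable divergence" is to compare the spread of the independent estimates to their mean. The sketch below uses the coefficient of variation, with an arbitrary, illustrative 25% cut-off; the function name and threshold are assumptions, not part of any standard.

```python
from statistics import mean, stdev

def possibly_ambiguous(estimates, cv_threshold=0.25):
    """Flag a requirements set for further scrutiny when independent effort
    estimates diverge: coefficient of variation (stdev/mean) over threshold."""
    return stdev(estimates) / mean(estimates) > cv_threshold

# Tight estimates suggest one shared interpretation.
print(possibly_ambiguous([10, 11, 9, 10]))   # False
# Scattered estimates suggest the requirement reads differently to each estimator.
print(possibly_ambiguous([5, 20, 9, 40]))    # True
```

As the text notes, a narrow range does not prove the requirements are correct, only that everyone interprets them the same way.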
The checking process can be done by as few as two or as many as five or six qualified
professionals, at least one of whom should be representative of the stakeholder group
submitting the requirement. Walking through the requirement in the group and performing
restatements will provide the opportunity for ambiguities to be identified and resolved.
Building this into the development process means that early estimation of
requirements sets, by qualified individuals, can help to identify potential areas of ambiguity.
As estimates will be needed eventually, doing them when they can contribute to the quality of
the requirements makes good sense.
Many organizations have limited estimating skills, with the result that only a few individuals
routinely perform the estimating tasks. In this environment, the need to have multiple
estimates for a single body of requirements may meet a lot of resistance. One solution to this
is to have one experienced estimator as a part of the group and include several other novices.
Initially there will be wide divergences, but over time the quality and consistency will
improve. The novices will learn the estimating skills while helping to identify potential
ambiguities. This will help the organization to ease the estimating bottleneck.

5.7.2 Resolving Requirement Conflicts

Conflicts among requirements require special attention. The need for consistency in
requirements has been discussed earlier. Consistency means there are no internal
contradictions among and between requirements statements. A conflict would mean it would
not be possible to satisfy the conflicting requirements concurrently.
There are two possible causes of conflict: 1) a misunderstanding or miscommunication of
what is actually needed or wanted, resulting in one or more requirements being incorrect; or 2)
an actual conflict in the underlying business rules as presented by the stakeholders, based
upon existing priorities and practices. Of these two, the easier to deal with is the first category.
Once identified, the incorrect or misstated requirement is repaired and everyone is satisfied
that there is not a conflict.
The second scenario is more difficult for the Business Analyst to resolve. The initial steps are
to clarify with each of the respective stakeholder groups that their requirement, as stated,
accurately reflects the existing Business Rules. It is also worthwhile to learn the underlying
reason or rational for the Business Rule(s). It may be possible to resolve the conflict at this
point by a determination that one group is operating from "old information" or the business
need has changed since the rules were implemented. In this case, all of the groups can be
reconciled to meeting the new business needs and the conflict corrected.
The final, and most difficult case, is that a genuine conflict exists. It may also occur in loosely
coupled organizations, where each unit has a wide range of authority in adopting operating



approaches. The potential solutions to this are to not share the system, to create different
versions, or to go with the stakeholder with the most at stake.
This situation may occur when the stakeholders represent different external customers who
have different business needs. In this case, a business decision must be made about which or
how many customer sets to satisfy and in what sequence. The decision is often made to satisfy
the largest group in the initial release and then to create other versions to address the smaller
groups.
Identifying conflicts is fairly simple for smaller systems. In the process of prioritizing
requirements, they will be compared against each other at a one-to-one level. The conflict
should be immediately apparent.
For larger systems, the possibility of overlooking conflicts is much greater. The same
approach, one-to-one comparisons, can be onerous in a system with several hundred
requirements. If the relative priority of the requirements is the same for each of the
stakeholder groups, they may be discussed in the same prioritization session, or in sessions
attended by the same people. This is an argument for having some consistency in the
prioritization processes.
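The one-to-one comparison itself is mechanical; the judgment lies in the conflict test applied to each pair. In the sketch below, the requirement structure and the predicate are invented placeholders for whatever domain-specific check the team defines.

```python
from itertools import combinations

def find_conflicts(requirements, conflicts):
    """Return every pair flagged by the predicate. With n requirements this
    examines n*(n-1)/2 pairs, which is why exhaustive comparison becomes
    onerous for systems with several hundred requirements."""
    return [(a, b) for a, b in combinations(requirements, 2) if conflicts(a, b)]

reqs = [
    {"id": "R1", "rule": ("retention_days", 30)},
    {"id": "R2", "rule": ("retention_days", 90)},   # conflicts with R1
    {"id": "R3", "rule": ("currency", "EUR")},
]
# Toy predicate: two requirements conflict if they constrain the same
# attribute to different values.
pairs = find_conflicts(
    reqs,
    lambda a, b: a["rule"][0] == b["rule"][0] and a["rule"][1] != b["rule"][1],
)
```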
In the case that potential conflicts may affect the health or safety of individuals, there may be
no alternative but to perform detailed, one-to-one comparisons. In that situation, adequate
time must be allowed during the Requirements process.

5.7.3 Reviews

The term "review" is widely used to cover virtually any kind of project meeting. This leads to
confusion about what a review is, how it is conducted and who is involved. The end result is a
project with a large number of meetings, with little or no impact on the quality and correctness
of the finished product.
This Skill Category contains descriptions of a number of specific meeting types, and the
activities conducted during those meetings. By using the correct names for those activities and
others to follow, the confusion about the purpose of a specific meeting will be reduced.
The term "review" generally implies a lack of depth and detail; it is a fairly high-level presentation of the subject matter. There are other names for more specific activities, e.g., a code inspection as opposed to a code review.
A review is properly regarded as an organizational and political event, held at the end of a
major project landmark. The purpose of the review is to certify the landmark has been
completely and correctly achieved. Reviews should include all stakeholders for the product,
process or project subject to review. Reviews are generally not decision-making sessions.
They are a recitation and ratification of decisions already made.
Reviews often feature presentations by technical experts to decision makers, describing what
has been done and why. Review meetings are differentiated from status meetings primarily in
the level of detail presented and the lack of technical decision making and action plan
development. In a status meeting the technical team will examine the issue, determine an
action plan and assign responsibility for ensuring the actions are taken. In a review meeting
the management team may or may not be advised that this was necessary.
Reviews should always be conducted in a "no surprises" environment. Potentially contentious
issues should have been presented to key stakeholders in advance and agreement secured to
the proposed course of action. If agreement has not been reached, the review should be
postponed. Public arguments are bad for projects.
Requirements should be reviewed at the end of the JAD process if one was conducted.
Requirements should be reviewed with each key stakeholder group prior to consolidation with
other stakeholder groups, but after internal prioritization. Requirements should be reviewed at
the end of the consolidated prioritization process, before design begins in earnest. At each of
these landmarks, the key decision makers, sponsors, champions and stakeholders will be asked
to affirm their support for the product presented. In this way there is a clear link from each
point of origin to the finished product, at an organizational level.

5.7.4 Fagan Inspections

In 1974 Michael Fagan,15 then of IBM, developed a methodology for improving the quality of
software; the resulting publication in 1976 made this approach available to the industry. His
inspection technique is credited with dramatically reducing the number of defects escaping
into production. The Fagan Inspection Technique, also referred to as Formal Inspections, is a
rigorous, structured approach to finding defects. Initially developed and applied to code, the
use of the formal inspection process has been expanded to include work products at all stages
of the development process. Examples of work products would include items such as a
Function Requirements Document, a Test Plan or Acceptance Test Plan, an External or
Internal Design Document as well as many others. The application of the Fagan Inspection
process to Requirements has proved to be extraordinarily cost effective:
• Karl E. Wiegers, noted Requirements expert, states inspecting requirements is the single most effective process in reducing defects and improving product quality, returning as much as 40 to 1 the time spent in Inspections!16
• Kirby Fortenberry,17 another well-known consultant and writer about inspections, cited the following results based on his work with clients:

- Inspections were three times more effective than testing in finding defects
- $1600 savings/defect found prior to test
- $105 cost to find/fix a defect in Requirements using Inspections

- $1700 cost to find/fix in Testing
- $25,000 average Inspection savings
- 10-25% reduction in development time
- 90% reduction in corrective maintenance

15. Fagan, Michael E. Design and Code Inspections; IBM Systems Journal, 1976.
16. Wiegers, Karl E. Cosmic Truths about Requirements; Quality Assurance Institute International Quality Conference, Orlando, Florida, April 2006.
17. Fortenberry, Kirby. Software Inspections; Proceedings, 16th International Software Testing Conference, Quality Assurance Institute, 1998.

5.7.4.1 Fagan Inspection Process

The Fagan Inspection process has a single objective: to find defects. As defect data is the engine that drives all process improvement, it is clear how this approach contributes to both short and long-term improvements. To accomplish this, Fagan devised a structured approach for reviewing work products at points of stability, that is, when they are supposed to be "done."
Because the Business Analyst has been instrumental in the development of many of the work
products, especially the Requirements documents and the Acceptance Test Plan document,
they will be very knowledgeable about the specific product. They may be involved as a
representative of the authoring group if it is a product they have worked on. Alternatively,
they may be present in the role of a skilled, but objective, pair of eyes if this is not a project
they have been working on. In either case, they will be a major asset to the process of finding
defects.
To determine when a product is ready for inspection, the organization must have and use an
effective set of standards and procedures. This includes both the project management
information and the development methodology details. These two provide information about
the activities that must be completed for a product to be ready for inspection; these are called
the entry criteria.
These references must also include the level of performance for the work being examined;
these are called the exit criteria. Work that fails to meet the performance standard for one or
more reasons is defective. Work products are judged against both the entry criteria and the
exit criteria.
When inspecting requirements, defects are categorized as either major or minor. The classic definition of a major defect is that it "will cause a failure in production." Since what constitutes a failure in production varies a great deal from organization to organization, this must be defined and agreed to by all parties. Anything that is not a major defect is a minor defect.
Major and minor defects are also categorized by type: wrong, missing or extra. Of the three, wrong is the easiest to identify; missing requirements take time and effort, as one must see the gaps where a requirement should be. Extra requirements tend to be the most error prone, as they do not result from a customer or business partner request, so there are no real requirements behind them.
While it is possible to further categorize defects, there is little reason to do so. What will be
important is to track where in the development process the defect is identified. If a

requirements defect escapes into design, code, test or production, some work will need to be
done to determine why the defect was not found earlier.
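Since defect data drives process improvement, it is worth capturing each finding's severity, type, and the phase in which it was found. The record structure and field names below are a hypothetical, minimal sketch, not a prescribed format.

```python
from collections import Counter

def tally(defect_log):
    """Count findings by (severity, type) and by phase found; requirements
    defects logged in later phases are the escapes worth investigating."""
    by_class = Counter((d["severity"], d["type"]) for d in defect_log)
    by_phase = Counter(d["phase_found"] for d in defect_log)
    return by_class, by_phase

log = [
    {"severity": "major", "type": "missing", "phase_found": "requirements"},
    {"severity": "minor", "type": "wrong",   "phase_found": "requirements"},
    {"severity": "major", "type": "wrong",   "phase_found": "test"},  # an escape
]
by_class, by_phase = tally(log)
```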

5.7.4.2 Fagan Inspection Participants

All inspections include the same set of roles, although the roles may be filled by different
individuals for different types of inspections.
• Moderator - The Moderator plans and conducts the inspection session(s), ensures the required rework is addressed, and ensures the results of the inspection are properly recorded and reported. The Moderator must be a skilled facilitator, well organized, and able to provide the time and effort needed to ensure a good result. The Moderator also participates as an Inspector.
• Author - The Author is a representative of the authoring group for the work product. The Author is the expert in the work product under review and is able to answer questions or issues that are identified during the inspection. The Author is responsible for carrying the identified defects back to the authoring group, which is responsible for ensuring they are fixed. The Author is also an Inspector. Authors are generally the best inspectors on the team as they have the best insight into the product and the project.
Reader - The Reader is responsible for reading and paraphrasing the material (to
ensure that only one interpretation is possible). The Reader maintains the pace of
the meeting: each item is read and restated, a reasonable pause is allowed for someone to
offer a defect, and if none is offered, the group moves on to the next item. The Reader is also an Inspector.
Recorder - The Recorder is responsible for logging all identified defects and their
classification (Major or Minor; Wrong, Missing, Extra). At the end of the session,
the Recorder reviews all defects to ensure that there is consensus on them. The
Recorder is also an Inspector. It is possible for one individual to perform the roles
of Recorder and Moderator in a session. These are the only two roles that can be
combined.
Inspector - In addition to the roles above, one or more qualified individuals may
participate in the inspection session. The recommended total group size is seven;
eight is possible, larger than that becomes difficult to manage. Each Inspector is
responsible for having inspected the material prior to the session and having
identified as many defects as possible. Unprepared Inspectors may not remain in the
session as they slow the process down and have a negative impact on morale.
The Moderator can be from any part of the organization, but should be technically competent
to inspect the material. Often Business Analysts and Software Quality Assurance staff possess
the right skill mix for this job. The Reader is one of the most demanding jobs, and initially
should be performed by the most skilled individuals. The various roles for inspecting
requirements should include representatives from the business unit, business analysts,
developers and testers. This mix will provide the best coverage of the most important element
of the project.
One fundamental rule of Inspections is that Managers may not participate. There are very
sound reasons for this, the most important being that the possibility of this information
becoming part of the personnel evaluation process is enough to kill the effectiveness of
inspections.

5.7.4.3 Fagan Inspection Issues

Fagan Inspections are a powerful tool for identifying defects early in the life cycle, however,
despite this, they have not been implemented as widely as might be expected. There are a
number of issues that may contribute to this.
Time - Inspections are labor intensive, particularly in the early stages of
implementation. The history of the industry is to short-cut any process that delays
the beginning of coding, regardless of how counter-productive that strategy may be.
Even for well-intentioned organizations, the commitment to inspecting products
line by line can be difficult to maintain in light of the pressure to deliver products
quickly. The fact that Inspections will actually improve delivery time and reduce
expenses is not well accepted despite numerous studies of their effectiveness.18
Level of Maturity - Effective use of the Fagan Inspection technique requires that
organizations be at least a CMM(I)19 Level Two, and probably Level Three to
obtain the maximum benefit on a consistent basis. The key to this is the need to
have well defined processes and procedures that are employed consistently by the
organization, and not jettisoned as soon as time pressures are applied.
Fixing Defects - The identification of defects during the session is firmly separated
from the correction of the defects, which is the responsibility of the authoring
group. Failure to require defects to be addressed in a consistent manner will
discourage participants from investing the time and effort needed to find the
defects. For some organizations, the discipline needed to fix the defects is very
difficult.
Environmental Barriers - One final issue for many organizations is the need to
establish a trusting environment in which finding defects does not have a negative
connotation. This topic was addressed in Skill Category 2, and the direct need for
that environment is seen here. For organizations intent on finding someone to blame
for every defect, Inspections are not a viable option.

18. In addition to the Fagan and Fortenberry information previously cited, there are studies by Motorola
in conjunction with their Six Sigma initiative, work by Harlan Mills, and Capers Jones.
19. CMM(I) is a trademarked process of the Carnegie Mellon University, Software Engineering Institute. Detailed descriptions of the CMMI Model, its development, use and the content of various levels is found in Skill Category 3.

5-54

Version 9.1

Requirements

5.7.5 Gilb Agile Specification Quality Control

Because of some of the issues above, Tom Gilb has developed an alternative approach to
the Inspection Process.20 This approach uses sampling of work products early in the life cycle
to predict defect rates and to keep potentially defect-laden products from being developed further.

5.7.5.1 Agile Specification Quality Control (SQC) Approach

Rather than attempting to perform line by line inspections of many large documents, this
approach takes small, representative samples and examines them carefully, but not
necessarily line by line. The results of these examinations are then used as the basis for
calculating the defects in the entire body of work, using statistical data.
For the purpose of the SQC, Gilb defined a major defect as anything that can potentially lead
to loss of time or product quality. Minor defects are anything that is not correct but will not
lead to loss of time or product quality. This definition sets the bar much lower than the Fagan
definition.
Instead of reviewing against the entire process and procedure set, Inspectors select a limited
number of very important criteria to use in the inspection process, typically between 3 and 7.
For requirements, Gilb recommends: clear enough to test, unambiguous to the intended
readers, and complete when compared to sources. Using a streamlined rule set speeds up
the process, but does leave the potential for missing other kinds of errors.
Experience with this method has determined that two inspectors, working together for 30-60
minutes, will find about one third (33%) of the total defects in a sample of about one page.
Individuals working alone will find a smaller percentage. Using either the average of all
reviewers or the best average, the result of the sample inspection is then used to calculate a
defect rate for the document as a whole:
10 major defects identified on one page of an 80-page document would mean a defect
density of 2,400 for the entire document (10 found × 3, since the sample finds about one third, × 80 pages).
Organizations establish an acceptable exit rate for defects. Initially it may be set fairly high
and then reduced as the processes improve.
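The extrapolation is simple arithmetic: the sampled page is assumed to reveal about one third of its defects, so the count found is tripled and scaled by the page count. A minimal sketch (the function name and default rate are our assumptions):

```python
def estimate_total_defects(found_in_sample: int, sample_pages: int,
                           total_pages: int, detection_rate: float = 1/3) -> int:
    """Extrapolate total major defects from a sampled inspection.

    detection_rate is the fraction of defects the sample is assumed to
    find (Gilb's experience figure of about one third for two inspectors).
    """
    defects_per_page = found_in_sample / sample_pages / detection_rate
    return round(defects_per_page * total_pages)

# The example from the text: 10 majors found on 1 sampled page of 80.
print(estimate_total_defects(10, 1, 80))  # 2400
```

An organization's exit threshold can then be compared directly against this estimate.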

5.7.5.2 Using Inspection Results

Unlike the Fagan approach that mandates the correction of all (or almost all) defects, the Gilb
approach is used to educate the staff about how to do things correctly. Following participation
in an SQC session, the defect injection rate falls by about 50%, thus improving product
quality.
For this reason Gilb recommends using SQC very early in the process, before products are
large and fully developed. He recommends the on-going sampling of products through
development. This strategy will lead to the incremental improvement of the products being
developed. It may indicate the need for additional training or modification of standards and
procedures before the product is too badly flawed to be saved.

20. Gilb, Tom; Simplified SQC, [email protected]; October 7, 2004.

5.7.5.3 Agile SQC Participants

A minimum of two inspectors and one inspection leader are required for each 60 minute
session. Depending upon the size of the material to be sampled there may be as many as five
or six inspectors. Inspectors must be professionally competent to understand the material
being examined. Business partners, business analysts, testers, developers and managers are all
viable participants.

5.7.5.4 Agile SQC Issues

While this method does address many of the issues raised with the traditional Fagan
Inspection, there are other issues created:
Defects are not found - The emphasis in this method is not on finding all of the
defects, it is about estimating the defect density and using this information to
motivate engineers to learn to avoid defect injection in the first place. Defects will
still need to be found, and they will still need to be addressed. Because of the use of a
few simplified rules, only certain kinds of defects will be identified. Others will
simply be missed. One strategy is to use root cause analysis to determine the largest
sources of errors, and use those rules first. As these defect types are reduced, other
rules can replace them in the examination process.
No Feedback Loop - Because the emphasis is not on finding defects, the kind of
process improvement loop that is typical in an organization committed to Fagan
inspections is missing. While it is anticipated that individuals will do a better job of
creating work products, this improvement does not necessarily become
institutionalized.
No Accountability - Because there is no structure around the process, there is no
way to track what happens to the defects that are discovered. This means that even
relatively defect laden material can be moved along the process, as no one has an
obligation to ensure problems are being corrected.

5.7.6 Consolidated Inspection Approach

Some organizations are reporting good results by using a combination of the two inspection
approaches. The Gilb Agile SQC method is used early in the development of requirements as
a diagnostic aid. Those products that exceed a specified threshold are required to undergo a
full Fagan Inspection; those below a specified threshold are approved; and those in the middle
are subject to on-going sampling to monitor their quality.
This approach appears to allow organizations to focus their efforts on those products most in
need, while conserving significant staff resources.

5.8 Tracing Requirements


The entire reason for creating a list of requirements is that it represents what the business
partner or customer needs or wants to find in the finished project. While this fact may be
embarrassingly obvious, too many organizations fail to take the steps necessary to ensure that
the requirements are there. These steps are fairly straightforward and can be accomplished
easily if a small amount of forethought is given to the process.

5.8.1 Uniquely Identify All Requirements

Notice the emphasis on the word all. Many organizations track requirements that are included
in the approved version of the product. This is an excellent starting place, but it is not enough.
Every potential requirement that survived any requirements definition session to end up on a
document should be tracked. This means there must be at least two listings for even small
projects; one for the requirements to be implemented and one for those that will not be
implemented.
In Section 5.6 the need to give each requirement a unique priority ranking was discussed. For
most projects, this ranking, combined with a project number is sufficient to uniquely identify
each requirement. It is possible to become much more complex in creating a numbering
scheme, but there is little payback for doing so. Keeping this part of the process as simple and
clear as possible will save resources.
Items that are not to be included in any planned release of the products are often the source of
later conflict and contention. For each rejected item, there should be listed, in addition to the
requirement itself, the date, the group or authority making the decision and the reason for
rejection. This will preempt later attempts to revive requirements that have already been
investigated and found not to fit the scope of the project. For some requirements, the criteria
that disqualified them may change or disappear, making future consideration of the requirement a
possibility. It will also allow a later evaluation of the decision-making process on what to
include and what to exclude.
By creating the listing of rejected requirements incrementally, and making it electronically
accessible to all stakeholders, it is possible to manage down the amount of time spent cycling
around pet requirements. Requirements on this list should be uniquely identified; a simple 1 to
N scheme will work. Organizations can use an R for Rejected or N for Not Approved to
preface the number. This eliminates confusion in numbering.
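A minimal sketch of such a numbering scheme, consistent with the text but with the exact formats and field names being our assumptions, might look like this:

```python
from datetime import date

def approved_id(project: int, priority_rank: int) -> str:
    # Approved items: project number plus the requirement's unique
    # priority ranking from the prioritization process.
    return f"{project}-{priority_rank}"

def rejected_entry(sequence: int, text: str, authority: str,
                   reason: str, decided: date) -> dict:
    # Rejected / Not Approved items carry an "R" prefix plus the decision
    # record the text calls for: date, deciding authority, and reason.
    return {"id": f"R-{sequence}", "requirement": text,
            "authority": authority, "reason": reason, "decided": decided}

print(approved_id(417, 3))  # 417-3
entry = rejected_entry(12, "Export to fax", "CCB", "Out of scope",
                       date(2008, 8, 1))
print(entry["id"])          # R-12
```

Keeping both lists in a shared, electronically accessible store is what allows "pet" requirements to be dismissed quickly with a documented reason.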



For larger projects, the listing of requirements to be implemented can be broken down by
those for each planned release of the project. This provides clarity about what the total product
will include, and in what sequence functionality will be delivered. Making these decisions is a
part of the prioritizing process that was discussed earlier. This list also should be available
electronically to stakeholders.

5.8.2 From Requirements to Design

Once the list of uniquely identified requirements has been agreed upon, it is time for the
designers to begin the work of translating those requirements into solutions. The listing
provides the opportunity for a two-way inspection of the final design.

5.8.2.1 Tracing Requirements to Design

This is, of course, essential. It should be possible to take each requirement and see where it is
addressed in the design. If the requirement was a good requirement, the designer will have had
the complete information necessary to come up with the best solution.
Tracing each requirement to the design component takes time, but also ensures the product to
be built will meet the business partner's or the customer's wants and needs. It will ensure that
nothing is missing from the design.

5.8.2.2 Tracing Design to Requirements

This backward flow is an essential step to ensure additional requirements have not been
inserted into the design. After each element has been traced back to the source requirement,
there should be nothing left over. Anything that remains represents a defect in the design.
The better the basic requirements set is, the less likely it is that the designer will feel the need
to add functionality. Providing the designer with access to the list of Rejected (Not Approved)
or deferred requirements means that, before inserting unspecified functionality, they can check
whether it is coming later or has already been rejected.
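Both directions of the requirements-to-design trace reduce to a set comparison: requirements with no design element are gaps, and design elements with no source requirement are unauthorized additions. A hypothetical sketch over a simple element-to-requirement mapping (names are illustrative):

```python
def trace_gaps(requirements, design_trace):
    """design_trace maps each design element to the requirement it implements."""
    covered = set(design_trace.values())
    # Forward trace: approved requirements absent from the design.
    missing = requirements - covered
    # Backward trace: design elements with no approved source requirement.
    extras = {elem for elem, req in design_trace.items()
              if req not in requirements}
    return missing, extras

reqs = {"417-1", "417-2", "417-3"}
design = {"ScreenA": "417-1", "CalcModule": "417-2", "AuditLog": "417-9"}
missing, extras = trace_gaps(reqs, design)
print(sorted(missing))  # ['417-3']
print(sorted(extras))   # ['AuditLog']
```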

5.8.3 From Requirements to Test

This aspect of traceability allows the Testing staff to develop the needed test cases for the
functionality to be delivered. As discussed earlier, the sooner the testing effort can begin, the
better the final product will be. Creating test cases early in the life cycle allows the effort to be
spread out over a much longer time.

5.8.3.1 Tracing Requirements and Design to Test

The question of how to test a specific requirement will arise early and often. By the time the
test plan is complete, all of the issues about how to test specific parts of the requirements
should be resolved. Clearly some of those issues will not be resolved until the design is
complete. Testers need to know the specific implementation that the design will create for a
requirement.
The requirement may be that the result of a calculation be made available to the customer. The
implementation of that requirement in design may be to show the total on the screen. The
resulting test cases will need to address both the initial intent to correctly perform the
calculation and the subsequent decision to place that result in a specific screen after a specific
set of inputs. This relationship creates a treelike structure of test cases that can be executed.
Tracing the relationships will expose areas for which the needed test cases do not exist,
allowing this defect to be corrected before the product is released to the customer.
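The treelike structure described above can be sketched as nested mappings; walking the tree exposes design elements that still lack test cases. The data shapes and names below are illustrative assumptions:

```python
# Hypothetical trace tree: requirement -> design elements -> test cases.
trace = {
    "417-2": {                    # requirement: make calculation result available
        "CalcModule": ["TC-01"],  # correct calculation is covered
        "ResultScreen": [],       # screen placement has no test cases yet
    },
}

def untested_design_elements(trace):
    # Walk the tree and report (requirement, element) pairs with no tests.
    return [(req, elem)
            for req, elems in trace.items()
            for elem, cases in elems.items() if not cases]

print(untested_design_elements(trace))  # [('417-2', 'ResultScreen')]
```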
Methods for determining how much to test and what to test based on project risks will be
addressed in much more detail in Skill Categories 6 and 7.

5.8.3.2 Tracing Test Cases to Design and Requirements

As with the backwards flow of design, the reverse tracing of test cases can highlight potential
problems. Redundant test cases waste resources. Tests for items not included also waste
resources. This being said, when experienced business analysts or testers identify a missing
requirement, they should develop the appropriate test cases and bring the issue to the
attention of the project team. This issue will be addressed in more depth in Skill Category 7,
Acceptance Testing.
By addressing traceability issues as early in the project as possible, resources can be spent
as effectively as possible.

5.9 Managing Requirements


Requirements will change.
Despite the best efforts of all concerned, for all but the very smallest of projects, the
requirements will change. Typical reasons for changes to requirements include errors in the
original requirement (missing, incorrect or extra); changes to the internal organization or its
functions (added or deleted); changes to the external environment (regulatory or competitive).
This means the goal of creating a fully complete, fully correct and unchanging set of
requirements at the beginning of a multi-year project is unrealistic.
The Agile approach suggests collecting just enough requirements for the current development
effort, and keeping that effort small enough to control the rate of change. While this does
work well for many projects, it is not always the best approach, so organizations must have a
viable alternative. A further analysis of the strengths and weaknesses of the Agile and other
development approaches, and the kinds of projects for which each is best suited is discussed in
Skill Category 6. As organizations progress from CMM(I) Level 1 to Level 2 the emphasis on
an effective Requirements Gathering process pays major dividends in both quality and
productivity. All too often however the attention paid to rigor and control during the initial
stages of Requirements Gathering erodes as the project moves forward to Design,
Development and Testing. The result is miscommunication, additional time in the testing
stages and more expensive products.
For this reason, it is only prudent to plan for the changes that will occur during a project, and
to develop a mechanism for managing that change effectively. As with traceability, managing
change entails a small set of fundamental steps. Developing a Requirements Configuration
Management Process will allow the organization to maintain control over the project in a way
that will best meet the needs of the organization. Without this kind of a process, creeping
scope can rapidly lead to out of control projects which are headed for significant quality,
budget and timeliness problems!
By looking at the Requirements Configuration Management activities as a process, it is
possible to leverage what we know about our processes and improve the product(s). The same
key concepts that historically have been demonstrated to work effectively for source code
management and production control can be applied to requirements. There are two key
components: a Requirements Configuration Manager and a Requirements Change Control
Board. These two work together to ensure that only properly authorized requirements are
added to the project scope. Data regarding sources should be captured in an automated system
which will allow trend analysis.
Unlike beginning to establish control over the production environment, much of the basic
work has already been done. Organizations operating above CMM(I) Level 1 will have already
established the traditional change control and change management processes for the
management of the production environment. What remains is to create and formalize a similar
process for requirements and to employ it consistently.
Effective team building skills on the part of the overall project management will help to create
a positive environment where the need to objectively discuss and decide upon proposed
changes can be handled smoothly. For projects of any significant size and duration, the project
manager will typically identify an individual to function as a requirements configuration
manager. This individual will have access to the requirements document and be able to make
authorized modifications to that document. The Requirements Configuration Manager may
fill that role for a single project, or for the organization as a whole. At a minimum, the
configuration manager must be perceived as a team player and one upon whom the rest of the
team can rely. The Business Analyst may find that there is no established requirement
configuration manager for a specific project. In that instance, they may need to fill the gap
while helping the organization understand why it is essential that the role be filled.
If Information Technology and the Business Community are working as partners to solve
common business problems, contention over scarce resources diminishes. Prioritization of
proposed changes, including the impact on budget and schedule, is handled in a much less
adversarial fashion.

5.9.1 Create a known base

The Project Owner or key stakeholders have been identified and documented in the Project
Charter. The base is the approved Requirements Document for the project. This document is
the result of all of the various Requirements Definition and Prioritization activities that have
taken place on the project to-date. It includes the Critical Success Factors, the Critical
Assumptions, as well as other significant information about the project. The complete list of
contents and the process for developing the Project Charter is contained in Skill Category 4.
Finding these documents for an active project is rarely a challenge, assuming they have
actually been created. These documents should be accessible to the project team members and
stakeholders electronically. Creating and maintaining paper based documents becomes very
time consuming and labor intensive. It also opens the door for the possibility the paper copy
will be lost or destroyed by accident.
A fundamental component of this known base is the document that contains all of the
approvals for the requirement set by the project stakeholders, champions, sponsors, and other
appropriate team members. It also contains a document change history. This document is
proof-positive of the approved list of requirements, the known base. Going forward,
anything not included on this signed document is a change to the project and must be treated
as such.

5.9.2 Manage Access

Once final agreement has been reached on the Requirements document, it is essential to
protect the integrity of the document. In this electronic age, that is readily managed through
the careful application of appropriate access permissions. The document may be read by
many, but only changed by a few.
To make this work, there must be a process for managing access. This process must be a part
of the organization's standards and procedures. It must include the necessary Do and Check
steps. Typically, the parties authorized to submit changes provide an updated version of the
material to the individuals actually making the change. These changes may be regarded as an
update to any application system and processed via the normal change control process.
In some organizations all changes are routed through the network or operations staff. In other
organizations this can be done by selected team members. The first approach is preferable as
the network staff will require the appropriate documentation be submitted on a consistent
basis. This eliminates the possibility of people forgetting approvals are required for
changes.


5.9.3 Authorize changes

The approval process for change requests should be a subset of the original approval process,
managed by the Requirements Change Control Board. Often organizations create change
categories which determine what approval is required based upon the scope and impact of the
changes. This is effective at keeping the approval process simple, but care must be taken that
large changes are not merely being broken into pieces that can be approved at a lower level.
Changes to the Requirements may or may not be a result of a defect in the process. The
documentation for the new or revised requirement must meet the standards for the original
requirements. Once the requirement has been documented it can be properly estimated.
Documentation accompanying the change request should include not only how long it will
take and how much it will cost, but also the impact on the schedule. Any change request
marked "no impact" should be subject to very close scrutiny and probably rejection.
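An intake check along these lines might look like the following sketch; the field names are assumptions, and a real CCB would use the organization's own estimate template:

```python
def valid_change_request(cr: dict) -> bool:
    """Reject change requests lacking concrete estimates (fields assumed)."""
    required = ("effort_hours", "cost", "schedule_impact_days")
    if any(k not in cr for k in required):
        return False  # incomplete documentation: not ready for the CCB
    # "No impact" claims (all zeros) deserve scrutiny and likely rejection.
    return any(cr[k] > 0 for k in required)

print(valid_change_request(
    {"effort_hours": 40, "cost": 5000, "schedule_impact_days": 3}))  # True
print(valid_change_request(
    {"effort_hours": 0, "cost": 0, "schedule_impact_days": 0}))      # False
```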

5.9.3.1 Change Control Board (CCB)

For configuration management of requirements to be effective, it must be officially sanctioned
at some level of the organization. While it is possible to create this as a one-time function for a
single project, that defeats the purpose of approaching configuration management as a
process. Spending the time to establish the roles and goals of a configuration management
board and documenting them in a charter may seem frivolous, but it is not. The purpose of the
Change Control Board is to ensure the integrity of the requirements change process, not to
address or resolve technical issues. A CCB is not an inspection agency. While the team on the
CCB will generally be highly skilled professionals, any move on their part to take over the
roles of the project team is inappropriate.
The act of ceremonializing a function lends it credibility and respect. The discussions about
how the process will work and developing an agreement about how to handle specific
situations brings potential issues to the surface which must be resolved. It is a mistake to jump
into the process without a clear understanding of its purpose, what is to be accomplished and
how much authority is available to do so.
Creating a Charter for the Requirements Change Control Board will force these things to
occur. The charter should specify how members of the CCB are selected; generally it will be
composed of members from both the business side and the technical side of the project team.
Often key stakeholders will be named as the representatives from their organization to the
CCB. Typically they will delegate the responsibility for handling the routine matters to staff
members and only participate directly when there are significant issues or conflicts.
Because not all changes are accepted or approved, it is essential that those responsible for the
Requirements Change Control Board are perceived as being fair and responsible. This starts
with excellent communication skills, especially listening skills. If not listened to carefully and
considerately, people requesting or suggesting the change will become disillusioned and
frustrated with the process and look for ways around it.
Some level of conflict is inevitable whenever there are scarce resources; the Requirements
CCB needs to be skilled at anticipating and defusing conflict when possible. They also need to
be prepared to handle the conflict when it becomes unavoidable, working toward a solution
which is in the best interests of the entire organization.
The Charter should specify which projects are subject to the CCB process if it does not apply
to all projects. In some organizations certain kinds of changes are exempted, especially when
just getting started; however, this is not the best method long term. Creating realistic project
thresholds will improve the credibility of the process and discourage the development of
creative strategies for avoiding it. The process definition should also specify what kinds of
changes must be reviewed by the Board, regardless of other criteria. This is common when
the project under development contains health or safety dimensions.
In larger organizations the Requirements Change Control Board may be supported
administratively by the Business Analyst or the Quality Assurance organization, either of
which may also function as the acting Chair for routine meetings.

5.9.3.2 Change Scope: Requirement, Enhancement or New Project

Scope creep is the bane of every business analyst's and project manager's existence. The need
to keep the product within schedule and budget parameters while maintaining the quality and
functionality is their number one responsibility. Each suggested, requested, or demanded
change to the requirements challenges the team to maintain that balance.
Typically, the first aspect of the change addressed is: how big is it? Is this a small change,
easily accommodated? Is it a little bigger, but still not requiring any significant redesign? Or is
it major, perhaps impacting work products with design, code or even testing already
completed?
It is essential to preserve a sense of scale regarding potential changes to the system
requirements. If the organization has a reasonably effective requirements process, large, late
development stage changes should be relatively uncommon; and they should come from a
limited number of sources. Generally they should have been anticipated, at least in the
abstract, in the Critical Assumptions.
If the requested changes are approaching 10-15% of the estimated work effort at the
completion of Design, some serious rethinking must be done. Is this really a part of this
project, or is it a new project? When a lot of projected resources are to be consumed by a
function or series of functions not strongly related to the business goal, it may be time to break
them out as a separate project. If the key customers see them as essential to the project, it may
be time to revisit the project definition to ensure everyone has the same understanding of the
problem to be solved.
As a general rule of thumb, any change that increases the original scope by 10% should be
looked at as a potentially new project, requiring a separate cost-benefit analysis and associated
approvals. The CCB should be asking fundamental questions about every proposed change to
requirements, especially non-trivial ones; they include:
Why does this need to be done?
Must it be done now?



How is it related to the overall business objective of this project?
How is it cost justified?
Does this replace something else previously included?
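The 10% rule of thumb above can be expressed as a simple threshold check; the function and its return strings are our illustration, not a CBOK-mandated procedure:

```python
def classify_change(change_effort: float, original_effort: float,
                    threshold: float = 0.10) -> str:
    """Rule of thumb from the text: growth beyond ~10% of the original
    scope should be evaluated as a potentially new project."""
    if change_effort / original_effort > threshold:
        return "evaluate as new project (separate cost-benefit analysis)"
    return "handle as change request"

print(classify_change(120, 1000))  # 12% growth: evaluate as new project
print(classify_change(50, 1000))   # 5% growth: handle as change request
```

In practice the threshold would be set in the CCB charter, and the fundamental questions listed above would still be asked of every non-trivial change.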

5.9.3.3 Establish priority

The Change Control Board, having ensured that the same level of analysis was exercised in
the development of the proposed requirement as with the original requirements, must have the
authority to accept or reject the change. Because the CCB is composed of the key
stakeholders or their assignees, this should not be an issue. It may be difficult to reach
consensus, but the authority is present.
The process for the decision making should be a microcosm of the initial requirements
development. In particular, the CCB will need to determine, based on input from all invested
parties, what the priority of each of the requested changes is. The new requirement will need a
unique identifier that will allow it to be tracked through the development and testing process.

5.9.3.4 Publish

Once the decisions have been made, the results must be made official through publication.
This includes updating the appropriate listings of accepted and rejected requirements and
having them moved into the viewable space by the authorized individuals or groups. Failure to
update the log of deferred or rejected items will often cause them to reappear with a new name
or sponsor. Communication is essential.

5.9.4 Control results

The intent of the process is to help the organization achieve the result it desires. The controls
exist to combat chaos and confusion. The process provides the development team, in particular,
with protection from random, unwarranted intrusions into the life of the project. Effective
control of the final result depends upon the diligence of the CCB and the project team in
specific areas of concern.

5.9.4.1 Size the effort

The CCB should never accept for consideration a change request that is not accompanied by a
standard estimate of the work to be performed. Allowing generic estimates ("small," "medium,"
"large") or vague ones ("3-5 staff-weeks, depending on available resources") creates the
opportunity for severely defective requirements to be approved.

5.9.4.2 Adjust the resources or the plan

A more troublesome issue in many organizations is how to integrate an approved
change into the existing project plan. The CCB has been provided with information about the
size and complexity of the proposed change. They also know the priority of the request. The
basic options, mentioned earlier in this Skill Category, are few:
Integrate the change into the plan with no change to the official schedule, resources,
scope or quality. While this does not make rational sense in the overwhelming
majority of situations, it is the option most commonly chosen. This is done despite
having estimates that demonstrate the change will, in fact, require time and effort to
develop and test. This decision is characteristic of CMMI Level 1 organizations
that will not accept the reality that this is actually a decision to cut quality, as well as
to jeopardize the ability to achieve schedule and budget.
Integrate the change into the plan and adjust schedule, budget or both. These are
acceptable solutions if the adjustments are realistic. These are generally unpopular,
especially with the ultimate customer or the business partner, who may have created
elaborate plans based upon the planned implementation cost or schedule. It is not
unusual to see the amount of the adjustment reduced further (always downward)
in the face of pressure. Allowing too few resources to do the job properly creates
exactly the same scenario as seen in the first option. It also exacerbates the
frustration of the business partner or customer, who believed the
problem was addressed when resources were added.
Integrate the change into the plan and cut scope to compensate for the added work.
This is where the effort to estimate and prioritize both the original requirement set
and subsequent changes to requirements will pay dividends. For example, if the
new requirement was prioritized between the old number 14 and 15, it is now 15.
Anything prioritized at 16 or above is fair game. If the release includes priorities
through 31, the numerical higher requirements (the lower priority requirements) can
be evaluated to determine which ones can be deferred to accommodate the new
requirement. This approach preserves schedule, budget and quality, as well as the
integrity of the development process.
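The scope-cut option described above is, at bottom, a simple rebalancing of a prioritized list. A minimal sketch follows; the requirement IDs, effort units, and function name are hypothetical.

```python
# Sketch of the scope-cut option: insert the approved change at its
# agreed priority position, then defer the lowest-priority items until
# total effort fits the original budget. All IDs and numbers are
# hypothetical.

def insert_and_rebalance(requirements, new_req, position, effort_budget):
    """requirements: list of (req_id, effort) ordered highest priority first.
    Returns (plan, deferred)."""
    plan = list(requirements)
    plan.insert(position, new_req)
    deferred = []
    # Defer from the bottom (lowest priority) until the plan fits.
    while plan and sum(effort for _, effort in plan) > effort_budget:
        deferred.insert(0, plan.pop())
    return plan, deferred

reqs = [(f"REQ-{i}", 10) for i in range(1, 6)]          # 50 units of effort
plan, deferred = insert_and_rebalance(reqs, ("REQ-CH1", 10), 2, 50)
print([r for r, _ in plan])      # ['REQ-1', 'REQ-2', 'REQ-CH1', 'REQ-3', 'REQ-4']
print([r for r, _ in deferred])  # ['REQ-5']
```

The deferred list becomes the input to the next release's prioritization discussion rather than disappearing from view.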

5.9.4.3 Communicate

Throughout the process it is essential to ensure all of the stakeholders are made aware of each
proposed change to the approved set of requirements. They must have the opportunity to
assess the risk, the reward, and the potential cost in dollars, staff, schedule, scope and quality.
The decision-making process must be transparent to maintain credibility. Compromises, when
they are made, must be undertaken in the context of what is best for the organization as a
whole, and then supported by the entire decision-making group.

5.10 Summary
This Skill Category addresses the single most important part of the development process,
Requirements. There are those who argue that construction is more important, but without
good requirements, whatever is built will be defective. The steps taken to move a vaguely
understood, poorly articulated want or need to a clearly actionable statement are detailed.
Methods for gathering the information, refining it and prioritizing it are presented.
The result is a process that creates and manages usable, highly accurate information for the
design and development processes. It also provides an ideal opportunity to jump-start the
testing process. The Business Analyst, by participating in an effective requirements process,
will be ideally positioned to support the project through the remaining stages of development
and testing, as well as the final implementation.


Skill Category 6
Software Development Processes, Project and Risk Management
The Business Analyst is in the business of helping organizations develop software to meet
business needs. Throughout the previous skill categories, the need for processes has been
emphasized. Skill Category 6 focuses on the processes that are used by Information
Technology to perform their work. Skill Category 6 begins by examining the Software
Development process. It includes a comparison of various methodologies. Software
development occurs in the context of an individual project and represents a single
instantiation of a methodology. Successful project planning and management are essential for
delivery of the products needed by the organization. For the Business Analyst, solid project
management skills make it possible to keep the development effort on track and in control.
Skill Category 6 concludes by examining the role of risk management in the context of
successful software development projects.

6.1 Software Development in the Process Context


Since a Business Analyst works in and around the software development process on a daily
basis, it is critical to understand both the terminology and the products of various parts of the
process. Skill Category 3 focused on the impact of processes in Information Technology
product development. A brief recap of key definitions is provided below.


6.1.1 Policies, Standards, Procedures and Guidelines

These elements provide the framework for planning and constructing software. If the
framework is incomplete, inadequate or ineffective, the products developed will rely upon the
heroic efforts of individuals to produce quality software.

6.1.1.1 Policies

Policies state the objectives to be achieved. They are high-level documents that guide and
direct a wide range of activities and answer the basic questions: "Why are we doing this?" and
"What is the purpose of this process?" A policy should link to the strategic goals and
objectives of the organization and support customer needs and expectations. It will include
information about intentions and desires. Policies also contain a quantifiable goal. The
development of Policies is addressed in more detail in Skill Category 3, Section 3.3.3.1.

6.1.1.2 Standards

Standards describe how work will be measured. A standard answers the question "What?":
What must happen to meet the intent and objectives of the policy? Standards may be developed
for both processes and the deliverables produced. These standards may be independent of
each other only to the extent that the standards set for the process must be capable of
producing a product which meets the standard set for it. A standard must be measurable,
attainable and critical. The development of Standards is addressed in more detail in Skill
Category 3, Section 3.3.3.2.

6.1.1.3 Procedures

Procedures establish how work will be performed and how it will be verified. There may be a
significant number of procedures to be performed in any single process, and a process may
require a lengthy period of time for completion. Substandard or defective work produced early
in the process can result in schedule-breaking delays for rework if it is not caught
promptly. For this reason there are two types of procedures: Do Procedures and Check
Procedures. The development of Procedures is addressed in more detail in Skill Category 3,
Section 3.3.4.

6.1.1.4 Guidelines

Guidelines are recommendations for ways of working. They are optional, not mandatory. This
means that they are not enforceable. Guidelines may be useful when the organization is
piloting new procedures, but generally should be avoided. Where there is more than one
acceptable procedure, it is better to include all of them with the caveat that these are the only
allowable procedures; one of them must be used.


6.1.2 Entry and Exit Criteria

Entry and Exit Criteria describe the input and output for a given process. Each plays a critical
role in the development of quality products. Many organizations assume that individuals
know what they are doing and fail to provide adequate information in this area.

6.1.2.1 Entry Criteria

Entry Criteria are the "must have" inputs for a specific process. Entry criteria are the border
guards between processes. They serve as a filter to keep incomplete products from moving
forward; failure to pass through this filter can waste considerable resources.
At the simplest level, the entry criterion for the Design Process is the Requirements Document.
Typically there are several other "must have" items: a Project Scope Statement and Plan, a
budget and authorization to use resources, a preliminary test plan, and so on. Each of these
must be available for a meaningful and productive Design Process to begin.
Entry criteria are often found as a checklist contained in the related Standards and Procedures
documentation.
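Because entry criteria function as a checklist, the gate can be expressed as a simple filter. The sketch below uses the Design Process inputs named above; the function and data structure are illustrative, not a prescribed tool.

```python
# Sketch of entry-criteria checking as a simple gate. The criteria
# listed are the Design Process inputs named in the text; the function
# itself is an illustration, not a mandated mechanism.

DESIGN_ENTRY_CRITERIA = [
    "Requirements Document",
    "Project Scope Statement and Plan",
    "Budget and authorization to use resources",
    "Preliminary test plan",
]

def missing_entry_criteria(available, criteria=DESIGN_ENTRY_CRITERIA):
    """Return the must-have inputs not yet delivered; an empty list
    means the process may begin."""
    return [item for item in criteria if item not in available]

delivered = {"Requirements Document", "Preliminary test plan"}
print(missing_entry_criteria(delivered))
# ['Project Scope Statement and Plan', 'Budget and authorization to use resources']
```

Anything returned by the check is sent back to the producing organization before the phase is allowed to start.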

6.1.2.2 Exit Criteria

Exit Criteria address the question of how well the work has been performed. They are the
details found in the Standards and Procedures. If the Entry Criteria answer the question
"What must we do?", the Exit Criteria respond to the question "How will we know if we have
done it right?"
Additionally, it is essential to know when a product is "done." The development of effective
and agreed-upon implementation criteria is the basis for that decision. This issue will be
addressed in Skill Category 7, Acceptance Testing.
The work done to define and document what constitutes an acceptable work product is applied
here. Available as a reference during the development of the product, it is used again at the
end to determine if the product measures up.
The Requirements Process Exit Criteria will contain information about the format of the
document, the amount of detail in the project and test plans, and which approvals are required.

6.1.2.3 Applying Entry and Exit Criteria

Entry and Exit Criteria can be used effectively in many ways. The most basic is the phase-end
review and approval process employed by many organizations as an alternative to the "throw
it over the wall" approach. The authoring organization is responsible for verifying that its
product (output) meets the exit criteria (first-line quality control of a product), that is, that
defects have been identified and corrected.


The receiving organization then verifies that it has received all of the necessary inputs (Entry
Criteria) from each of the contributing organizations. They may also choose to spot-check the
items received to ensure that the product meets the Exit Criteria for the previous stage. This
will provide the recipient with a minimum comfort level regarding the quality of the material
to be used. Substandard material is immediately returned to the producer for correction.
A second important use for Entry and Exit Criteria is to serve as the baseline for an Inspection
Process. This usage was discussed in detail in Skill Category 5.
Entry and Exit Criteria are also useful as a kind of mental checklist of things to be included in
developing the project estimate. Clearly understanding the nature and amount of the work to
be done will assist in the definition of the workload and the skill sets required to accomplish
the workload.

6.1.3 Benchmarks and Measures

Benchmarking is often an early activity in the organizational assessment process that initiates
a major quality/process improvement effort by an organization. Organizations seek to learn
how well others perform in many aspects of their business. Benchmarking activities often are
the precursor to major systems initiatives and, therefore, are of great importance to the
Business Analyst.
Within Information Technology (IT), common benchmark issues include items such as the
percent of the IT budget devoted to new development, maintenance, enhancements, and
problem resolution. In the Development Lifecycle, Benchmark issues may focus on the
Percentage of Defects identified in Requirements, Design, Code, Test, and Production
respectively. Organizations seek to learn who is best and how their organization compares.
They also seek to learn how those results are being achieved.
Skill Category 1 discussed Benchmarking in this context, as a macro-level activity. It can also
be applied at a much lower, micro level, examining how well other organizations perform
specific sets of activities or tasks. If the organization is considering a change to the testing
process, such as expanding test automation, it may canvass multiple organizations to determine
what percentage of their time and budget is allocated to manual versus automated testing, and
how happy each respondent is with that allocation. This information can help the
organization determine if additional automation might be beneficial.
Organizations considering the adoption of one of the Agile Methodologies may wish to
determine what the most common team size and cycle length is for organizations in their
industry. They may also want to establish how long it took for other organizations to become
comfortable with the process. These kinds of benchmarks provide guide posts along the
implementation path.



6.1.3.1 The Role of Measures and Metrics in the Development Environment

Skill Category 3 examined the process for developing measures. It emphasized that the initial
work product measures should be made by the individuals and groups actually performing the
work. Historically this has focused on measuring schedule and budget, planned to actual.
While this information is important, it is essential to go beyond these boundaries to identify
and measure key elements in the development process.
Once developed, these measures and the metrics developed from them, can be used to guide
process improvement, target training issues and recognize performance excellence. Without
substantive measures, this is all based on perception, which may or may not be well grounded
in fact. Measures and metrics, over time, will be added to the estimating process to improve
the accuracy of the estimates and to do a better job of resource assignment and planning.

6.1.3.2 Establishing Meaningful Software Development Measures

Throughout this and preceding Skill Categories, various potential measures have been
mentioned. The list is, in fact, virtually endless. It is possible to develop a staggering array of
measures and associated metrics. The Business Analyst is often able to help in the analysis of
the organization's true commitment to a measurement process. The investigation involves
discussions with those who would sponsor a measurement initiative and those for whom it
could have value. It is essential to determine the answers to the following questions:
Why does the organization wish to measure? If the answer is "because it seems to be
the thing to do," any set of measures will be acceptable. If the answer is "because we
need to get better," then a good understanding of the areas the organization perceives
to be in need of improvement will be required. Defect data and other information
the Business Analyst has at their disposal will be a major asset in this investigation.
Since organizations (and individuals) get what they measure, the areas for
improvement will need to be carefully targeted to achieve the desired result.
What will be done with the measures collected? If the answer to this question is
"placed in a file for unspecified future reference," once again, it is not worth expending
enormous effort to collect the measures. If the answer is "it will be used as the basis
for developing staff training plans and budgets, establishing improvement task
forces, and so on," then it is important to get it right.
Who will have access to the information? Wide accessibility to the information,
especially to the producers of the product and to those involved in the establishment
and improvement of the methodology is the key here. Simply reporting to
management is not enough.
What resources will be available to collect and analyze measures? If the work to
collect and analyze the data is merely one more set of tasks assigned to already
over-worked and over-stressed staff members, the end result may be of little or no
value. To be effective, the individuals assigned must have adequate time to collect,
validate, and analyze the data. There must be time to try different metrics to find
ones that are meaningful and repeatable. Not everyone on the IT staff has the skills
or the interest to do this.

6.2 Major Components of Software Processes (Objectives and Deliverables)
The Business Analyst may be involved in many stages of the development of a software
product, depending upon the organization for which they work. In some organizations, the
Business Analyst only participates in the Analysis and Requirements phase; in others they are
almost exclusively involved in Testing; for some the focus is on the training and
implementation aspects of the project. In smaller organizations, the CSBA may be doing all of
these things. If everyone worked for the same employer forever, knowing parts of the process
that are not a requirement of their current position would not be necessary. With the
increasing mobility of employees and jobs, it is essential to be fully informed on all aspects of
development.
The phases, products, and document titles below are common throughout the industry when
using the waterfall or spiral methodologies. Due to the different approach of the Agile
Methodologies, they will be considered later. The material that follows focuses on the
traditional development methodologies; however, many of the same products will be created,
simply in a different flow. While an individual organization may have a different name or title
for a specific work product, what is important is the intent and the content of the document.

6.2.1 Pre-Requirements Analysis

Ideas for new or revised systems come from many places within an organization. The number
of proposed projects vastly exceeds the number actually initiated. Every organization
has a method of winnowing out those projects that appear to offer limited benefits in return for
the resources expended. Regardless of what these steps are called, they have common
characteristics.

6.2.1.1 Preliminary Scope Statement

This is often begun as a memo or a conversation; it is the original idea for the project. Projects
may remain at the conversation stage for relatively long periods of time and involve many
parts of the organization. At some point it becomes necessary to make the project official. This
document contains the preliminary description of the business functionality desired by the
project sponsor. It will typically include high-level language and many ambiguities. It will
outline what will be included and what will be excluded. The Business Analyst may work
with the project sponsor to draft the Preliminary Scope Statement, asking probing questions to
help clarify the vision. The Preliminary Scope Statement for the Kiosk example used in Skill
Category 5 might read as follows:
"We want to be able to sell tickets to multiple events from multiple locations across the
region. We already provide a ticket-by-mail service, but customers often want a
faster turnaround and a better way to select their activities. We would purchase event
seats wholesale, allow the customer to select the event, time and seat they want, accept
payment, and produce a paper ticket, using a self-contained kiosk."

6.2.1.2 Preliminary Benefit Estimate

The preliminary benefit estimate is the rationale for pursuing the project. Organizations
undertake projects for only one of two reasons: because they must or because it appears to
offer some form of benefit to the organization. Benefits can take many forms, they often
include increased profit, decreased cost, improved customer satisfaction, reduced time to
market, improved market share, and so on.
The Preliminary Benefit Estimate will indicate both the potential sources of benefits and
estimates of what the amount of those benefits will be. The Project Sponsor generally has
identified some benefit areas before suggesting the project. Development of the Preliminary
Benefit Estimate is discussed in more detail in Skill Category 4.

6.2.1.3 Preliminary Estimate and Related Cost Calculation

There is great concern in the Information Technology area when a business partner or external
customer requests a "ballpark" estimate. There are a number of very good reasons for this
concern: if the first number is much too low and the project goes forward, the business
partner will not be happy to see the price going up and up; if the first number is much too
high, the project may be dropped when it could have been very successful, and once again the
business partner will not be happy.
Nonetheless, it is essential to obtain an early estimate of what it will cost to create the system.
Skill Category 4 discusses approaches for creating the early estimate and how it will be
refined over time.

6.2.1.4 Feasibility Study Report

The final product of the Pre-Requirements Analysis phase is the Feasibility Study. This
document brings together the three items above to form a cohesive look at a potential project.
Developing a Feasibility Study report is discussed in more detail in Skill Category 4. If the
Feasibility Study results in a favorable recommendation to proceed, and that recommendation
is approved, the project will go forward. Otherwise it may be officially terminated or sent
back for further study.
Because it is essential to understand the full scope of the work to be done before completing
the final Cost Benefit Analysis, this step is best done in conjunction with the Requirements
Development process. If the project Feasibility Study is approved, the next step will be the
Requirements phase.

6.2.2 Requirements

Skill Category 5 covers this topic extensively. There are, however, specific outputs that must
be created during the Requirements phase of a project. The CSBA will be heavily involved in
the creation of these work products and so must understand them fully.

6.2.2.1 Requirements Definition

The development of this work product has already been covered in detail. It is the central
deliverable for this phase.

6.2.2.2 Preliminary Test Cases and Test Plan

Based upon the size and scope of the project, it will be possible to develop some portion of the
functional test cases during Requirements. This will help the Testing team size the testing
effort and create an effective test plan. The Business Analyst will be very involved in the
development of the Acceptance Test Plan, which is a subset of the overall Test Plan.
Acceptance Test Planning will be addressed in more detail in Skill Category 7.

6.2.2.3 Project Charter

The Project Charter is the high-level document that addresses the project strategy, as well as
the project deliverables and objectives. It is both a technical and a business document. It
serves as the organizer and table of contents for the other project-related documents that will
be created. When agreement has been achieved at a detailed level for those documents, such as
the Scope Statement and the Project Plan, the Project Charter can be signed off. Project
Planning will be discussed in more detail in Skill Category 7.

6.2.2.4 Project Scope Statement

The Project Scope Statement documents the agreement between all of the principal
participants regarding what is to be accomplished. Project Scope is often the single most
contentious issue for a project team to reconcile. Throughout the Requirements Development
process, the list of desired functions and attributes has been growing. Using the various
prioritization processes discussed in Skill Category 5, that list is reduced to the set that will
comprise the completed functionality. A complete Project Scope Statement will include both
what is included and what is specifically excluded. This is an important, but often neglected,
piece of information. By addressing the exclusions early in the life of the project it is possible
to reduce future conflict on the topic.
The Scope Statement must be agreed to by all key stakeholders for the project. Failure to
secure agreement to this document sets the stage for runaway "scope creep," the gradual
expansion of project inclusions. Scope creep can be controlled most effectively
through the use of a rigorous Requirements Configuration Management process.
All future decisions regarding actions to accept or reject should be based upon the contents of
the Scope Statement.1 The Project Scope Statement may be a part of a Project Plan document,
or the Plan document may be a part of the Scope Statement. Often both are included as
attachments to a Project Charter. All must exist in some format.

6.2.2.5 Project Plan

The Project Plan describes what will be done, by whom and when. Initially this information
will be created at a very high, conceptual level. As the information about what is to be
accomplished is developed and solidified, the ability to break this information down to finer
levels of detail will emerge. Failure to create plans at an appropriate level of detail will result
in misallocation of time and resources and is a frequent cause of late project delivery.
Developing the Project Plan will be addressed in more detail in Skill Category 7.

6.2.2.6 Revised Estimate, Cost and Benefit Information

Based upon the information learned during this stage of the project, it will be necessary to
expand and update cost and benefit information originally presented as a part of the
Feasibility Study.

6.2.2.7 Organizational Approvals

Throughout the Requirements Definition and Project Plan activities, the Business Analyst and
others on the project team have been working to build consensus in support of the finished
document. The final step is to ensure that the formal organizational approvals are acquired
before moving into the Design stage.

6.2.3 Design

Design is the process of translating what the business product must do or be (Requirements)
into a technical solution for achieving that functionality. While the Business Analyst must be
conversant with the process and the terminology, the work will be performed by others. The
CSBA may be called upon to provide clarity or insight into the relative value of different
implementations, but will not be doing the Design. In some organizations the BA may be
charged with developing report and screen layouts based on input from the business
community. It is essential that this work, which is Design, is not undertaken until the
Requirements issues have been completed. Failure to follow this approach often results in
missed opportunities and early elimination of otherwise valuable options.

1. Riordan, Jeb; Project Magazine; 2001. http://projectmagazine.com/scope/writing-a-scope-statement.html

6.2.3.1 External and Internal Design

These elements are the products of the process of translating the requirements into actionable
items for programmers. For all but the smallest projects, this stage will include some
consideration of the project architecture. This is a conceptual view of the components of the
product and how they will relate to each other.
The External Design looks at the project as a complete entity. The External Design process
uses the requirements and what is known of the architecture to determine what the major
components of a product will be. Using more detailed versions of the Data Flow, Entity
Relationship and State Transition diagrams discussed in Skill Category 5, the designer will
describe how the functionality defined in the requirements will be implemented. Included in
this will be a consideration of interfaces to external systems and the environment.
The Internal Design describes what will happen within each of the major components at two
levels of detail. The first provides a map of the sub-components required to achieve the
desired functionality:
Files - file types needed, file formats, file names
Data - new databases to be created and their structure, existing databases to be
accessed, data formats
Modules and sub-routines to be accessed from existing applications
Common error formats
As the work of design progresses, individual items will be described in greater detail, so for a
given sub-component this includes information such as:
Component name and description
Inputs and Outputs
Arguments and formulas connecting inputs to outputs
Specific errors
Versioning information
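The sub-component details listed above can be captured in a simple structured record. The sketch below is one way to do so; the class name, field names, and example values are illustrative, not a mandated template.

```python
# Illustrative record for the sub-component details named in the text.
# The class, fields, and example values are assumptions, not a standard.

from dataclasses import dataclass, field

@dataclass
class ComponentSpec:
    name: str
    description: str
    inputs: list
    outputs: list
    errors: list = field(default_factory=list)
    version: str = "0.1"

# A hypothetical sub-component from the kiosk example.
spec = ComponentSpec(
    name="PriceCalculator",
    description="Computes ticket price from event, seat and discounts",
    inputs=["event_id", "seat_id", "discount_code"],
    outputs=["price"],
    errors=["ERR-PRICE-001: unknown event"],
)
print(spec.name, spec.version)
```

Keeping the record machine-readable makes it easier to trace design elements back to requirements later.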



6.2.3.2 Revised Requirements

Throughout the Design process there will be opportunities to clarify and improve
Requirements. As the organization becomes more proficient at developing requirements,
these opportunities should be reduced, but they will never be eliminated. The Business
Analyst will need to exercise the same skill and care in the development of these later stage
requirements that was used earlier.

6.2.3.3 New and Revised Test Cases and Test Plan

During the Requirements stage of the life cycle, the majority of the functional requirements
and many of the quality requirements are identified and initial test cases developed. As the
requirements are instantiated in the design, additional test cases will be required to address the
specific situation. The Designer, working with the Development Team and the BAs, will be
able to provide the traceability of each requirement into the final design.
Changes to test cases developed earlier may arise from the recognition that the initial
requirement was not complete or correct (often this is discovered in the process of writing test
cases). The Acceptance Test Process and Test Case Development will be addressed in more
detail in Skill Category 8.
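Traceability of each requirement into test cases can be checked mechanically once both sets carry identifiers. A minimal sketch follows; the IDs and the mapping format are hypothetical.

```python
# Sketch of a requirement-to-test-case traceability check.
# All IDs and the mapping shape are hypothetical.

def untested_requirements(requirement_ids, test_cases):
    """test_cases: mapping of test-case ID -> requirement ID it covers.
    Return requirements with no test case tracing to them."""
    covered = set(test_cases.values())
    return [r for r in requirement_ids if r not in covered]

requirements = ["REQ-1", "REQ-2", "REQ-3"]
tests = {"TC-01": "REQ-1", "TC-02": "REQ-1", "TC-03": "REQ-3"}
print(untested_requirements(requirements, tests))  # ['REQ-2']
```

Any requirement the check surfaces either needs a test case written or needs its inclusion in the release re-examined.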

6.2.3.4 Final Project Scope and Plan

For large projects one output of the design activity may well be the need to re-validate the
project scope and plan. The actual implementation of some desired functionality may be too
costly, too difficult, and/or too time consuming for the organization. This may result in a
change in scope or plan.
Alternatively, the design process may uncover missed, but needed functionality resulting in an
increase in the project scope. These will create changes to the project plan that may have
significant financial, operational, and/or performance implications for the organization.
Before proceeding with the actual implementation, it is essential to ensure that any changes
are accepted and approved by the organization.

6.2.3.5 Final Project Estimate, Cost and Benefit Information

The project time and cost estimates can be finalized at this point. The scope is set, the design
is known and planned for, the needed resources can be fully identified and the time for
completion of tasks can be realistically estimated based upon past performance. There may be
some changes to the Cost Benefit Analysis as a result of what has been learned through the
Design process.
Although many organizations are unwilling to do so, terminating a project at this point, if it can no longer meet the organization's ROI standards, is the most prudent action to take.

Version 9.1

6-11

Guide to the CABA CBOK


Although a significant amount of money, time, and emotional energy may have been expended, the project should not be allowed to continue if it cannot succeed.
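The ROI re-check at this gate is simple arithmetic. The sketch below is illustrative only; the function, the dollar figures, and the idea of excluding sunk cost from the go/no-go decision are assumptions for the example, not CBOK prescriptions:

```python
def roi(projected_benefit, cost_to_complete, sunk_cost=0.0):
    """Return on investment as a fraction of total cost.
    For the go/no-go decision, what matters is cost_to_complete;
    money already spent (sunk_cost) cannot be recovered by continuing."""
    total_cost = sunk_cost + cost_to_complete
    return (projected_benefit - total_cost) / total_cost

# Hypothetical figures: $500K benefit, $200K already spent, $400K to go.
print(round(roi(500_000, 400_000, sunk_cost=200_000), 2))  # -0.17 overall
print(round(roi(500_000, 400_000), 2))  # 0.25 on the remaining spend
```

The two results illustrate why the decision is hard: counting everything, the project is under water, yet the money still to be spent may show a positive return, which tempts organizations to continue.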

6.2.4 Code, Development, Construct or Build

The traditional name for the next stage, coding, arises from the format of early programming languages, which was very difficult for humans to read and understand. Newer approaches have created much easier interfaces that, in turn, result in new terms such as "construction," "building," "development," or "design implementation." Whichever name is used, the objective is the same: to turn the design into something that can be successfully executed by the hardware and software platform(s) of choice.
Although this process is being presented in a linear fashion, in many organizations, the
transition from Design to Code is much less coherent. In an effort to shorten the delivery time
span, work begins on building parts of the system before the design is complete. The risk in
this situation is that the missing pieces of the design will have an unintended or unexpected
impact on work that has already been completed, causing errors and rework.
Often the specifics of the construction process are determined by the product being used. Each product has a language, and the BA should be familiar with the coding tool being used and the associated terminology. During the initial stages of construction the BA will typically not be heavily involved in the activity. Only when the product reaches the executable stage will the interaction increase.

6.2.4.1 Unit and System Tested Code

As each unit of the program or module is completed, the developer (or coder or builder) tests
that component to see if it will work as programmed. The chunks of functionality being
tested will become larger as more pieces are developed. The developer identifies errors and
corrects them in an iterative process.
Unit Testing generally refers to the testing done by the developer at the small component level. Each component, program, or module is tested when it is deemed to be complete by the producer. While the developer may have access to test cases developed earlier in the life cycle by the Business Analyst and the Requirements team, in some organizations they may choose not to use them, developing their own cases instead. (Some of the issues surrounding this choice will be addressed in more detail in Skill Category 7, Acceptance Testing.) Developers tend to perform positive testing designed to answer the question "Does it work?" Some developers will perform limited boundary testing, but this is not consistently seen across organizations. While much unit testing is focused on removing syntax errors, it will sometimes highlight deeper problems. The failure to perform rigorous negative testing at this stage is one of the sources of conflict between developers and testing organizations.
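The distinction between positive, boundary, and negative testing can be made concrete with a small sketch. The `validate_age` function and its 0-120 rule are hypothetical, invented for this example rather than drawn from the CBOK:

```python
def validate_age(age):
    """Accept integer ages from 0 to 120 inclusive (a hypothetical rule)."""
    if not isinstance(age, int) or isinstance(age, bool):
        raise TypeError("age must be an integer")
    return 0 <= age <= 120

# Positive test: "Does it work?" for a typical, valid value.
assert validate_age(35) is True

# Boundary tests: the edges of the valid range and just beyond them.
assert validate_age(0) and validate_age(120)
assert not validate_age(-1) and not validate_age(121)

# Negative test: clearly invalid input should fail in a controlled way,
# not crash or silently succeed.
try:
    validate_age("thirty-five")
    raise AssertionError("expected a TypeError for non-integer input")
except TypeError:
    pass
```

The positive case alone is what developer unit testing often stops at; the boundary and negative cases are the ones the text notes are inconsistently performed, and their absence is where conflict with the testing organization tends to arise.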
System Testing refers to the testing done on complete processes and sub-processes. System
Testing will often combine the work products of several developers to ensure the functionality


Software Development Processes, Project and Risk Management


that performs correctly in the isolation of unit testing continues to perform correctly when combined. It also ensures that complete processing streams perform the necessary handoffs. System testing often requires a significant level of development resources.

6.2.4.2 Revised Requirements and Test Cases

Errors and problems uncovered during Unit and System Testing may reveal the need for new
or revised requirements. This will involve the Business Analyst in the process. New or revised
requirements will in turn drive the need for new or revised test cases. The importance of
keeping the test bed current cannot be overstressed.

6.2.4.3 Defect Data

Collection of defect data during this stage of the process is very important, but it is often overlooked. Defects uncovered during the development process can shed light on problems with earlier parts of the process. Since these defects often go unreported, the problems they would highlight are underestimated. This in turn leads to an unwillingness to invest resources in what appears to be a small problem.

6.2.5 Test

The objective of testing activities is to ensure that the developed product fulfills the defined
requirements, providing the desired results in an acceptable fashion. Although a significant
amount of testing has already been conducted, the Test phase is typically focused on testing
performed by someone other than the developer. The need for independent testing is a separate issue from the need for acceptance testing, although the two may happen concurrently or, in fact, be the same testing.
For the Business Analyst, Testing is a significant part of their role in the development of
quality software. Understanding the work products of testing is a fundamental part of the job.
Testing will be explored in more detail in Skill Category 7.

6.2.5.1 Executed Test Cases

Successfully executed test cases are the objective of the Testing process. The number of test
cases created and executed should be the minimum number necessary to ensure the desired
product quality and functionality. Once selected for execution, test cases may need to be rerun many times before they are successful. The Business Analyst is often executing tests as
well as working with others to determine if the test results are acceptable or not. Tracking
progress is both rewarding and frustrating. As will be discussed in Skill Category 7, an
effective Acceptance Testing process will establish criteria for determining when the



cumulative successful execution of test cases is sufficient to state that the product is
production ready.
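Tracking cumulative execution against a pre-agreed exit criterion can be as simple as computing a pass rate over the latest run of each case. Everything in the sketch below, including the test-case IDs and the 95% threshold, is an invented illustration of the idea, not a CBOK-mandated criterion:

```python
def pass_rate(results):
    """Fraction of executed test cases whose most recent run passed.
    results maps a test-case ID to its latest outcome."""
    passed = sum(1 for outcome in results.values() if outcome == "pass")
    return passed / len(results)

executions = {"TC-101": "pass", "TC-102": "pass", "TC-103": "fail",
              "TC-104": "pass", "TC-105": "pass"}

PRODUCTION_READY_THRESHOLD = 0.95  # an assumed exit criterion
print(pass_rate(executions) >= PRODUCTION_READY_THRESHOLD)  # False: 4/5 = 0.80
```

Real acceptance criteria are usually richer than a single percentage (for example, zero open high-severity defects), but even this minimal measure gives the team an objective answer to "are we done yet?"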

6.2.5.2 Production Ready System

This is the product the entire process was designed to create. What constitutes "production ready" should be consistently defined at the organizational level to avoid putting defective products in the hands of customers. While it is often worthwhile (and necessary) to allow business partners to begin using a product with known defects, the costs in terms of lost productivity and frustration should be carefully weighed against the proposed benefits. "The bitterness of poor quality remains long after the sweetness of meeting the schedule has passed."2

6.2.5.3 Organizational Approvals

Once the development team determines that the product is ready for production, it is important
to secure the organizational approvals for implementation. While for small projects this may
be trivial, for larger projects it may be quite contentious, especially if it has been necessary to
reduce scope or go into production with significant functionality missing or not performing as
required.

6.2.5.4 Defect Data

From unsuccessful test case executions will come information about defects. Where they were created, their cause, their significance, where they were found, and how much they cost to repair are only a few of the vital pieces of information to be gathered in this process. The development and maintenance of a centralized defect database is typically the responsibility of someone in either the Testing area or Quality Assurance. The Business Analyst will contribute a great deal to the quality of both current and future products through the rigorous collection and recording of defect data.
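A centralized defect database can start as nothing more than a collection of records carrying the fields named above. The layout below is one possible sketch; the field names, values, and costs are illustrative assumptions, not a prescribed schema:

```python
from collections import Counter

# One possible defect record layout (field names and figures are invented).
defects = [
    {"id": 1, "stage_created": "requirements", "stage_found": "system test",
     "cause": "ambiguous requirement", "severity": "high", "repair_cost": 1200},
    {"id": 2, "stage_created": "coding", "stage_found": "unit test",
     "cause": "logic error", "severity": "low", "repair_cost": 150},
    {"id": 3, "stage_created": "requirements", "stage_found": "acceptance test",
     "cause": "missing requirement", "severity": "high", "repair_cost": 4000},
]

# Summaries like these show where defects originate and what they cost,
# which is the argument for investing in earlier-stage quality work.
by_origin = Counter(d["stage_created"] for d in defects)
total_cost = sum(d["repair_cost"] for d in defects)
print(by_origin["requirements"], total_cost)  # 2 5350
```

Even in this toy data, the pattern the text describes is visible: defects created in the requirements stage but found late are the expensive ones.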

6.2.6 Implementation

The final stage of the project is the actual implementation of the product. The products of this
stage provide the setting for the successful use of the product throughout its life-cycle. The
objective of the products and activities of this stage is to put the product in the hands of the
individual(s) who will actually use it, as smoothly and cleanly as possible by providing a
minimum level of disruption to the other work of those individuals. Since the focus of these
products is on the customer or business partner who will use them, the Business Analyst will

2. Perry, William E.; Quality Assurance Institute; QAI Journal



be heavily involved in their development. Skill Category 10 will explore this topic in greater
detail.

6.2.6.1 Product Documentation / User Manual

Few products are so intuitively obvious that no instructions are necessary. Computer
programs are notorious for their complexity. An effective guide to the use of the product is
essential. It should contain the information needed to successfully navigate the system and
produce the desired result(s). It should also help users of the system identify and recover from
problems that can be reasonably anticipated.
Historically, this material was provided in paper-based form. As people become more comfortable with on-line searching, there is a trend toward placing this material on an intranet or the internet, depending upon the intended audience. Regardless, creating and maintaining a paper-based master copy of this material is good practice.
If the development of this material is left until the end of the product development and testing,
the result is often a less than complete and less than useful product. Incremental development
will help to avoid a pre-implementation time crisis.

6.2.6.2 Training

Not all application projects will require training. For large, new products with complex functionality, especially in an organizational setting, letting people figure it out for themselves is not cost effective. The size and scope of the training effort is dependent upon the product itself and can range from small informal one-on-one sessions to large-scale multi-site training road-shows. The Business Analyst, with their unique understanding of how the product will actually be used by the intended audience, is often involved in developing train-the-trainer materials. They may also provide some of the initial training.

6.2.6.3 Product Turnover

Moving a new product into the production environment is akin to installing a new engine in an automobile. It is one thing to have a new engine and an automobile sitting side by side; it is another to have the engine properly installed and wired into the other vehicle components so that when the key is turned, the engine starts and continues to operate until turned off by the operator. This work is typically performed by skilled specialists on the organization's operations staff. There are opportunities for unanticipated defects and delays. The Business Analyst, like the remainder of the development team, watches the turnover process with anticipation.



6.2.6.4 Help Desk

Since not all defects will be discovered in testing, because User Manuals are not perfect, and because people are sometimes sick on the day the training class is held, there must be a resource to respond to questions and problems. That is the role of the Help Desk. For the Help Desk to be effective, its staff must have a process for diagnosing problems and know the answers to routine questions.
The Business Analyst may be the one to provide some of the initial training for the Help Desk
staff, and in some cases may work on the Help Desk during the first few critical days of
implementation.

6.3 Traditional Approaches to Software Development


A Business Analyst will be working with one or more development processes during their
career. Increasingly, organizations are using blended development approaches in order to
achieve specific objectives. A clear understanding of the various System Development
Methodologies (SDMs), their strengths and weaknesses, will help the Business Analyst to
provide the best project support regardless of the process selected.
The description of various methodologies is reproduced from:
A Survey of System Development Process Models3
CTG.MFA - 003
Models for Action Project:
Developing Practical Approaches to Electronic Records Management
and Preservation

6.3.1 Introduction

This section provides an overview of the more common system development Process Models
used to guide the analysis, design, development, and maintenance of information systems.
There are many different methods and techniques used to direct the life cycle of software development projects, and most real-world models are customized adaptations of the generic
models. While each is designed for a specific purpose or reason, most have similar goals and
share many common tasks. This unit will explore the similarities and differences among these

3. This material is based upon work supported in part by the National Historical Publications and Records Commission under Grant No. 96023. Center for Technology in Government, University at Albany / SUNY, 1998. The Center grants permission to reprint this document provided that it is printed in its entirety. Numeric section headings have been inserted to aid in locating key materials and have no impact on the content of each section.



various models and will also discuss how different approaches are chosen and combined to
address practical situations.

6.3.2 Typical Tasks in the Development Process Life Cycle

Professional system developers and the customers they serve share a common goal of building
information systems that effectively support business process objectives. In order to ensure that cost-effective, quality systems are developed which address an organization's business needs, developers employ some kind of system development Process Model to direct the project's life cycle. Typical activities performed include the following:4
System conceptualization
System requirements and benefits analysis
Project adoption and project scoping
System design
Specification of software requirements
Architectural design
Detailed design
Unit development
Software integration & testing
System integration & testing
Installation at site
Site testing and acceptance
Training and documentation
Implementation
Maintenance

4. Kal Toth, Intellitech Consulting Inc. and Simon Fraser University; list is partially created from lecture notes: Software Engineering Best Practices, 1997.


6.3.3 Process Model / Life-Cycle Variations

While nearly all system development efforts engage in some combination of the above tasks,
they can be differentiated by the feedback and control methods employed during development
and the timing of activities. Most system development Process Models in use today have
evolved from three primary approaches: Ad-hoc Development, Waterfall Model, and the
Iterative process.

6.3.4 Ad-hoc Development

Early systems development often took place in a rather chaotic and haphazard manner, relying
entirely on the skills and experience of the individual staff members performing the work.
Today, many organizations still practice Ad-hoc Development either entirely or for a certain
subset of their development (e.g. small projects).
The Software Engineering Institute (SEI) at Carnegie Mellon University5 points out that with Ad-hoc Process Models, "process capability is unpredictable because the software process is constantly changed or modified as the work progresses. Schedules, budgets, functionality, and product quality are generally [inconsistent]. Performance depends on the capabilities of individuals and varies with their innate skills, knowledge, and motivations. There are few stable software processes in evidence, and performance can be predicted only by individual rather than organizational capability."6

Figure 6-1 Ad-hoc Development


Even in undisciplined organizations, however, some individual software projects produce
excellent results. When such projects succeed, it is generally through the heroic efforts of a
5. Information on the Software Engineering Institute can be found at http://www.sei.cmu.edu.
6. Mark C. Paulk, Charles V. Weber, Suzanne M. Garcia, Mary Beth Chrissis, and Marilyn W. Bush,
"Key Practices of the Capability Maturity Model, Version 1.1," Software Engineering Institute, February 1993, p 1.



dedicated team, rather than through repeating the proven methods of an organization with a
mature software process. In the absence of an organization-wide software process, repeating
results depends entirely on having the same individuals available for the next project. Success
that rests solely on the availability of specific individuals provides no basis for long-term
productivity and quality improvement throughout an organization.7

6.3.5 The Waterfall Model

The Waterfall Model is the earliest method of structured system development. Although it has come under attack in recent years for being too rigid and unrealistic when it comes to quickly meeting customers' needs, the Waterfall Model is still widely used. It is credited with providing the theoretical basis for other Process Models because it most closely resembles a generic model for software development.

Figure 6-2 Waterfall Model


The Waterfall Model consists of the following steps:
System Conceptualization - Refers to the consideration of all aspects of the targeted business function or process, with the goals of determining how each of those aspects relates to the others, and which aspects will be incorporated into the system.
Systems Analysis - Refers to the gathering of system requirements, with the goal of
determining how these requirements will be accommodated in the system.
Extensive communication between the customer and the developer is essential.

7. Ibid.



System Design - Once the requirements have been collected and analyzed, it is
necessary to identify in detail how the system will be constructed to perform
necessary tasks. More specifically, the System Design phase is focused on the data
requirements (what information will be processed in the system?), the software
construction (how will the application be constructed?), and the interface
construction (what will the system look like? What standards will be followed?).
Coding - Also known as programming, this step involves the creation of the system
software. Requirements and systems specifications from the System Design step are
translated into machine readable computer code.
Testing8 - As the software is created and added to the developing system, testing is
performed to ensure that it is working correctly and efficiently. Testing is generally
focused on two areas: internal efficiency and external effectiveness. The goal of
external effectiveness testing is to verify that the software is functioning according
to system design, and that it is performing all necessary functions or sub-functions.
The goal of internal testing is to make sure that the computer code is efficient,
standardized, and well documented. Testing can be a labor-intensive process, due to
its iterative nature.

6.3.5.1 Problems/Challenges associated with the Waterfall Model

Although the Waterfall Model has been used extensively over the years in the production of
many quality systems, it is not without its problems. Criticisms fall into the following
categories:
Real projects rarely follow the sequential flow that the model proposes.
At the beginning of most projects there is often a great deal of uncertainty about
requirements and goals, and it is therefore difficult for customers to identify these
criteria on a detailed level. The model does not accommodate this natural
uncertainty very well.
Developing a system using the Waterfall Model can be a long, painstaking process
that does not yield a working version of the system until late in the process.

6.3.6 Iterative Development

The problems with the Waterfall Model created a demand for a new method of developing
systems which could provide faster results, require less up-front information, and offer greater
flexibility. With Iterative Development, the project is divided into small parts. This allows the
development team to demonstrate results earlier on in the process and obtain valuable
8. Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V. Weber, "Capability Maturity Model
for Software, Version 1.1," Software Engineering Institute, February 1993, p 18.



feedback from system users. Often, each iteration is actually a mini-Waterfall process with the
feedback from one phase providing vital information for the design of the next phase. In a
variation of this model, the software products which are produced at the end of each step (or
series of steps) can go into production immediately as incremental releases.

Figure 6-3 Iterative Development

6.3.6.1 Problems/Challenges associated with the Iterative Model9

While the Iterative Model addresses many of the problems associated with the Waterfall Model, it
does present new challenges.
The user community needs to be actively involved throughout the project. While
this involvement is a positive for the project, it is demanding on the time of the staff
and can cause project delays.
Communication and coordination skills take center stage in project development.
Informal requests for improvement after each phase may lead to confusion; a controlled mechanism for handling substantive requests needs to be developed.
9. Kal Toth, Intellitech Consulting Inc. and Simon Fraser University, from lecture notes: Software
Engineering Best Practices, 1997.



The Iterative Model can lead to scope creep, since user feedback following each
phase may lead to increased customer demands. As users see the system develop,
they may realize the potential of other system capabilities which would enhance
their work.

6.3.7 Variations on Iterative Development

A number of Process Models have evolved from the Iterative approach. All of these methods
produce some demonstrable software product early on in the process in order to obtain
valuable feedback from system users or other members of the project team. Several of these
methods are described below.

6.3.7.1 Prototyping

The Prototyping Model was developed on the assumption that it is often difficult to know all
of your requirements at the beginning of a project. Typically, users know many of the
objectives that they wish to address with a system, but they do not know all the nuances of the
data, nor do they know the details of the system features and capabilities. The Prototyping
Model allows for these conditions, and offers a development approach that yields results
without first requiring all information.
When using the Prototyping Model, the developer builds a simplified version of the proposed
system and presents it to the customer for consideration as part of the development process.
The customer in turn provides feedback to the developer, who goes back to refine the system
requirements to incorporate the additional information. Often, the prototype code is thrown
away and entirely new programs are developed once requirements are identified.
There are a few different approaches that may be followed when using the Prototyping Model:
Creation of the major user interfaces without any substantive coding in the background, in order to give the users a feel for what the system will look like
Development of an abbreviated version of the system that performs a limited subset of functions
Development of a paper system (depicting proposed screens, reports, relationships, etc.)
Use of an existing system or system components to demonstrate some functions that will be included in the developed system10

10. Linda Spence, University of Sutherland, Software Engineering, available at http://osiris.sunderland.ac.uk/rif/linda_spence/HTML/contents.html



6.3.7.2 Prototyping steps

Prototyping is comprised of the following steps:
Requirements Definition/Collection - Similar to the Conceptualization phase of
the Waterfall Model, but not as comprehensive. The information collected is usually
limited to a subset of the complete system requirements.
Design - Once the initial layer of requirements information is collected, or new
information is gathered, it is rapidly integrated into a new or existing design so that
it may be folded into the prototype.
Prototype Creation/Modification - The information from the design is rapidly
rolled into a prototype. This may mean the creation/modification of paper
information, new coding, or modifications to existing coding.
Assessment - The prototype is presented to the customer for review. Comments and
suggestions are collected from the customer.
Prototype Refinement - Information collected from the customer is digested and
the prototype is refined. The developer revises the prototype to make it more
effective and efficient.
System Implementation - In most cases, the system is rewritten once requirements are understood. Sometimes, the Iterative process eventually produces a working system that can be the cornerstone of the fully functional system.

6.3.7.3 Problems/Challenges associated with the Prototyping Model

Criticisms of the Prototyping Model generally fall into the following categories:
Prototyping can lead to false expectations. Prototyping often creates a situation where the customer mistakenly believes that the system is finished when in fact it is not. More specifically, when using the Prototyping Model, the pre-implementation versions of a system are really nothing more than one-dimensional structures. The necessary behind-the-scenes work, such as database normalization, documentation, testing, and reviews for efficiency, has not been done. Thus the necessary underpinnings for the system are not in place.
Prototyping can lead to poorly designed systems. Because the primary goal of
Prototyping is rapid development, the design of the system can sometimes suffer
because the system is built in a series of layers without a global consideration of
the integration of all other components. While initial software development is often
built to be a throwaway, attempting to retroactively produce a solid system design
can sometimes be problematic.



6.3.7.4 Variation of the Prototyping Model

A popular variation of the Prototyping Model is called Rapid Application Development (RAD). RAD introduces strict time limits on each development phase and relies heavily on rapid application tools which allow for quick development.11

6.3.8 The Exploratory Model

In some situations it is very difficult, if not impossible, to identify any of the requirements for
a system at the beginning of the project. Theoretical areas such as Artificial Intelligence are
candidates for using the Exploratory Model, because much of the research in these areas is
based on guess-work, estimation, and hypothesis. In these cases, an assumption is made as to
how the system might work and then rapid iterations are used to quickly incorporate
suggested changes and build a usable system. A distinguishing characteristic of the
Exploratory Model is the absence of precise specifications. Validation is based on adequacy
of the end result and not on its adherence to pre-conceived requirements.

6.3.8.1 The Exploratory Model

The Exploratory Model is extremely simple in its construction; it is composed of the following steps:
Initial Specification Development - Using whatever information is immediately
available, a brief System Specification is created to provide a rudimentary starting
point.
System Construction/Modification - A system is created and/or modified
according to whatever information is available.
System Test - The system is tested to see what it does, what can be learned from it,
and how it may be improved.
System Implementation - After many iterations of the previous two steps produce
satisfactory results, the system is dubbed as finished and implemented.

6.3.8.2 Problems/Challenges associated with the Exploratory Model

There are numerous criticisms of the Exploratory Model:


It is limited to use with very high-level languages that allow for rapid development,
such as LISP.
11. Kal Toth, Intellitech Consulting Inc. and Simon Fraser University, from lecture notes: Software
Engineering Best Practices, 1997.



It is difficult to measure or predict its cost-effectiveness.
As with the Prototyping Model, the use of the Exploratory Model often yields
inefficient or crudely designed systems, since no forethought is given as to how to
produce a streamlined system.

6.3.9 The Spiral Model

The Spiral Model was designed to include the best features from the Waterfall and
Prototyping Models, and introduces a new component - risk-assessment. The term spiral is
used to describe the process that is followed as the development of the system takes place.
Similar to the Prototyping Model, an initial version of the system is developed, and then
repetitively modified based on input received from customer evaluations. Unlike the
Prototyping Model, however, the development of each version of the system is carefully
designed using the steps involved in the Waterfall Model.12 With each iteration around the spiral (beginning at the center and working outward), progressively more complete versions of the system are built.13

Figure 6-4 Spiral Model14


12. Frank Kand, A Contingency Based Approach to Requirements Elicitation and Systems Development, London School of Economics, J. Systems Software 1998; 40: pp. 3-6.
13. Linda Spence, University of Sutherland, Software Engineering, available at http://osiris.sunderland.ac.uk/rif/linda_spence/HTML/contents.html



Risk assessment is included as a step in the development process as a means of evaluating
each version of the system to determine whether or not development should continue. If the
customer decides that any identified risks are too great, the project may be halted. For
example, if a substantial increase in cost or project completion time is identified during one
phase of risk assessment, the customer or the developer may decide that it does not make
sense to continue with the project, since the increased cost or lengthened time frame may
make continuation of the project impractical or unfeasible.

6.3.9.1 The Spiral Model steps

The Spiral Model is made up of the following steps:


Project Objectives - Similar to the system conception phase of the Waterfall
Model. Objectives are determined, possible obstacles are identified and alternative
approaches are weighed.
Risk Assessment - Possible alternatives are examined by the developer, and
associated risks/problems are identified. Resolutions of the risks are evaluated and
weighed in the consideration of project continuation. Sometimes prototyping is
used to clarify needs.
Engineering & Production - Detailed requirements are determined and the
software piece is developed.
Planning and Management - The customer is given an opportunity to analyze the
results of the version created in the Engineering step and to offer feedback to the
developer.

6.3.9.2 Problems/Challenges associated with the Spiral Model

The risk assessment component of the Spiral Model provides both developers and customers
with a measuring tool that earlier Process Models do not have. The measurement of risk is a
feature that occurs every day in real-life situations, but (unfortunately) not as often in the
system development industry. The practical nature of this tool helps to make the Spiral Model
a more realistic Process Model than some of its predecessors.

6.3.10 The Reuse Model


The premise behind the Reuse Model is that systems should be built using existing
components, as opposed to custom-building new components. The Reuse Model is clearly

14. Kal Toth, Intellitech Consulting Inc. and Simon Fraser University, from lecture notes: Software
Engineering Best Practices, 1997

suited to Object-Oriented computing environments, which have become one of the premier
technologies in today's system development industry.
Within the Reuse Model, libraries of software modules are maintained that can be copied for
use in any system. These components are of two types: procedural modules and database
modules.
When building a new system, the developer will borrow a copy of a module from the
system library and then plug it into a function or procedure. If the needed module is not
available, the developer will build it, and store a copy in the system library for future use. If
the modules are well engineered, the developer can implement them with minimal changes.
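The borrow-or-build decision at the heart of the Reuse Model can be sketched as a simple library lookup with a fallback. This is only an illustration of the idea; the library contents and module names below are hypothetical:

```python
# Minimal sketch of the Reuse Model's library lookup: borrow a copy of an
# existing module if one is available, otherwise build it and store a copy
# in the library for future projects. Module names are hypothetical.

system_library = {
    "validate_account_number": lambda acct: acct.isdigit() and len(acct) == 10,
}

def obtain_module(name, build_fn):
    """Return a library module if present; otherwise build and register it."""
    if name in system_library:
        return system_library[name]          # borrow an existing copy
    module = build_fn()                      # custom-build the component
    system_library[name] = module            # store for future reuse
    return module

# The first project to need a currency formatter finds none in the library,
# so one is built and stored; later projects can simply borrow it.
fmt = obtain_module("format_currency", lambda: (lambda cents: f"${cents/100:.2f}"))
print(fmt(12345))
```

The registration step is what distinguishes reuse from copy-and-paste: every custom-built component enlarges the shared library for subsequent systems.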

6.3.10.1 The Reuse Model steps

The Reuse Model consists of the following steps:


Definition of Requirements - Initial system requirements are collected. These
requirements are usually a subset of complete system requirements.
Definition of Objects - The objects, which can support the necessary system
components, are identified.
Collection of Objects - The system libraries are scanned to determine whether or
not the needed objects are available. Copies of the needed objects are downloaded
from the system.
Creation of Customized Objects - Objects that have been identified as needed, but
that are not available in the library are created.
Prototype Assembly - A prototype version of the system is created and/or
modified using the necessary objects.
Prototype Evaluation - The prototype is evaluated to determine if it adequately
addresses customer needs and requirements.
Requirements Refinement - Requirements are further refined as a more detailed
version of the prototype is created.
Objects Refinement - Objects are refined to reflect the changes in the
requirements.

6.3.10.2 Problems/Challenges Associated with the Reuse Model

A general criticism of the Reuse Model is that it is limited to use in object-oriented
development environments. Although this environment is rapidly growing in popularity, it is
currently used in only a minority of system development applications.


6.3.11 Creating and Combining Models


In many cases, parts and procedures from various Process Models are integrated to support
system development. This occurs because most models were designed to provide a framework
for achieving success only under a certain set of circumstances. When the circumstances
change beyond the limits of the model, the results from using it are no longer predictable.
When this situation occurs it is sometimes necessary to alter the existing model to
accommodate the change in circumstances, or adopt or combine different models to
accommodate the new circumstances.
The selection of an appropriate Process Model hinges primarily on two factors: organizational
environment and the nature of the application. Frank Land, from the London School of
Economics, suggests that suitable approaches to system analysis, design, development, and
implementation be based on the relationship between the information system and its
organizational environment. Four categories of relationships are identified:
The Unchanging Environment - Information requirements are unchanging for the
lifetime of the system (e.g. those depending on scientific algorithms). Requirements
can be stated unambiguously and comprehensively. A high degree of accuracy is
essential. In this environment, formal methods (such as the Waterfall or Spiral
Models) would provide the completeness and precision required by the system.
The Turbulent Environment - The organization is undergoing constant change
and system requirements are always changing. A system developed on the basis of
the conventional Waterfall Model would be, in part, already obsolete by the time it
is implemented. Many business systems fall into this category. Successful methods
would include those which incorporate rapid development, some throwaway code
(such as in Prototyping), the maximum use of reusable code, and a highly modular
design.
The Uncertain Environment - The requirements of the system are unknown or
uncertain. It is not possible to define requirements accurately ahead of time because
the situation is new or the system being employed is highly innovative. Here, the
development methods must emphasize learning. Experimental Process Models,
which take advantage of prototyping and rapid development, are most appropriate.
The Adaptive Environment - The environment may change in reaction to the
system being developed, thus initiating a changed set of requirements. Teaching
systems and expert systems fall into this category. For these systems, adaptation is
key, and the methodology must allow for a straightforward introduction of new
rules.


6.3.12 Process Models Summary


The evolution of system development Process Models has reflected the changing needs of
computer customers. As customers demanded faster results, more involvement in the
development process, and the inclusion of measures to determine risks and effectiveness, the
methods for developing systems evolved. In addition, the software and hardware tools used in
the industry changed (and continue to change) substantially. Faster networks and hardware
supported the use of smarter and faster operating systems that paved the way for new
languages and databases, and applications that were far more powerful than any predecessors.
These and numerous other changes in the system development environment simultaneously
spawned the development of more practical new Process Models and the demise of older
models that were no longer useful.15

6.4 Agile Development Methodologies


As the preceding discussion makes clear, there are strengths and weaknesses associated with
all of the methodologies. In the mid-1990s a number of alternative development solutions
appeared to address some of the perceived shortcomings, especially the lack of flexibility.
These approaches, which include Scrum, Crystal Clear, Adaptive Software Development,
Feature Driven Development, Dynamic Systems Development Method (DSDM) and,
probably best known, Extreme Programming (XP), have collectively come to be referred to as
Agile Methodologies.

6.4.1 Basic Agile Concepts

As the various methodologies emerged, there were similarities and differences. In an effort to
bring some cohesion and critical mass to the Agile movement, there were a number of
conferences and workshops. In 2001, a number of the key figures16 in the Agile movement
met in an effort to define a lighter, faster way of creating software which was less structured
and more people focused. The result was a document which has become known as the Agile
Manifesto; it articulates the key principles of the Agile methodologies.

15. This is the end of the material based upon work supported in part by the National Historical Publications and Records Commission under Grant No. 96023. Center for Technology in Government, University at Albany / SUNY, 1998. The Center grants
permission to reprint this document provided that it is printed in its entirety.
16. Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin
Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Robert C. Martin, Steve Mellor, Ken Schwaber, Jeff Sutherland, Dave Thomas


Principles behind the Agile Manifesto


We follow these principles:
Our highest priority is to satisfy the customer through early and continuous delivery
of valuable software.
Welcome changing requirements, even late in development. Agile processes harness
change for the customer's competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months,
with a preference to the shorter time scale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support
they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development. The sponsors, developers, and
users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity--the art of maximizing the amount of work not done--is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes
and adjusts its behavior accordingly.17

17. © 2001, the above authors. This declaration may be freely copied in any form, but only in its
entirety through this notice.


6.4.2 Agile Practices

Despite their reputation for being undisciplined, effective Agile approaches are anything but
that. The structure and the flow are very different from traditional approaches, and the way
products are derived is very different, even if the name is the same.
Detailed Planning - Planning is done for each iteration, which may be called a
timebox, a sprint, or merely a cycle or iteration. Requirements are gathered from the
customer as stories of desired capabilities. In a bullpen-style discussion they are
analyzed in depth to understand what is needed and what will be required to provide
that functionality. This process combines both a verification step for Requirements
and a group high-level design.
Design is kept as simple as possible while still achieving the desired functionality.
Future potential use is not considered. While the concept of a product architecture
may exist, it does not drive the inclusion of functionality not requested by the
customer or essential to meet an immediate customer need.
Work to be done is carefully estimated using a standard unit of measure (often
called points). The amount of work, measured in units, that can be completed
within a given amount of time (cycle, iteration, etc.) is tracked over time, and is
known as velocity. Once established, velocity is relatively fixed and it determines
how much functionality can be delivered per cycle.
Test Driven Development - To ensure that Requirements are fully understood, the
test cases are developed and run before the code is written. This process helps to
identify things that will need to be changed for the new functionality to work
properly.
Refactor Relentlessly - Refactoring is the term for changing existing code to work
properly with the new requirements. This part of the process is one of the most
contentious for those accustomed to traditionally architected systems which strive
to pre-plan all possible interfaces and accesses. Failure to effectively and
aggressively refactor will result in a steady increase in the testing effort combined
with a significant decline in productivity.
Continuous Integration - As individual units of work are completed, they are
added to the existing base and integration tested. New test cases developed for the
specific functionality are installed and become a part of the future test base for all
other developers. Updates which fail must be removed and repaired, along with the
associated test cases so that the work of others will not be jeopardized. Typical
development cycles lasting from one to three weeks will often see twice daily
integration activity.
Paired Programming - To obtain the benefits of reviews and inspections, as well
as to facilitate the dispersion of the knowledge base, programmers work in pairs.
The pairs may change partners often to promote the collective ownership of code.



This is also supported by the strict adherence to a set of uniform coding standards
that makes code understandable and modifiable by any experienced team member.
Onsite Customer - One of the major issues addressed by the Agile approach is the
need for continuous involvement on the part of the intended product user, or a
designated representative. This individual is a part of the team and co-located,
making access a matter of walking across the room. The onsite requirement
prevents delays and confusion resulting from the inability to access the customer or
business partner at critical times in the development process. It also addresses the
testing and requirements issues previously cited.
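The planning practices above, particularly the use of points and velocity, lend themselves to simple arithmetic. This sketch shows how observed velocity bounds what can be committed to the next iteration; the point totals and story names are invented for illustration, not part of any prescribed Agile tooling:

```python
# Sketch of velocity-based iteration planning. Completed-points history
# and story estimates are hypothetical examples.

completed_points = [21, 18, 23, 20]        # points finished in past iterations
velocity = sum(completed_points) / len(completed_points)  # observed velocity

backlog = [("Story A", 8), ("Story B", 5), ("Story C", 5), ("Story D", 8)]

plan, committed = [], 0
for story, estimate in backlog:            # fill the iteration in priority order
    if committed + estimate <= velocity:
        plan.append(story)
        committed += estimate

print(f"Velocity: {velocity} points; committed {committed} points: {plan}")
```

Because velocity is measured rather than negotiated, the team's commitment for each cycle stays anchored to what it has actually demonstrated it can deliver.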

6.4.3 Effective Application of Agile Approaches

The Agile approach works well in projects where it is difficult to obtain solid requirements
due to an unstable environment, especially those in which the requirements will continue to
emerge as the product is used. For organizations that see themselves as nimble or
responsive or market-driven, and that view the necessary refactoring as an acceptable
price to pay for being quick to market, Agile works well.
Agile development teams are generally small; Kent Beck18 suggests 10 or fewer. Projects
requiring more people should be broken down into teams of the recommended size. Beck's
suggestion has led people to think that Agile only works well for small projects, and it often
excels in this area. Many organizations are experimenting with the use of Agile on larger
projects; in fact, Beck's original project was very large.
One of the key efficiencies achieved through the use of Agile methodologies is the
elimination of much of the documentation created by the traditional processes. The intent is for
programs to be self-documenting with extensive use of commentary.
This lack of documentation is one of the drawbacks for many organizations considering Agile,
especially those whose products impact the health and safety of others. Large, publicly traded
corporations involved in international commerce are finding the lack of external
documentation can cause problems when complying with various international laws that
require explicit documentation of controls on financial systems.
Agile development is less attractive in organizations that are highly structured with a
command and control orientation. There is less incentive and less reward for making the
organizational and cultural changes required for Agile when the environment exists for
developing a stable requirements base.

18. Beck, Kent; Extreme Programming Explained


6.4.4 Integrating Agile with Traditional Methodologies

As the challenges and benefits of employing Agile methodologies become more widely
understood and accepted, there is a move toward selective integration. Organizations are
targeting projects with a positive Agile benefit profile and applying that methodology, even
while maintaining a substantial portfolio of traditional waterfall or iterative projects.
This approach allows the organization to respond rapidly to a crisis or opportunity by quickly
deploying an entry level product and then ramping up the functionality in a series of iterations.
Once the initial result has been achieved, it is possible to either continue with the Agile
development, or consider the production product as a super-prototype that can either be
expanded or replaced.

6.5 Software Development Process Improvement


Skill Category 3 examined the role of processes, and how they are established, monitored,
measured and targeted for improvement. When working with the development process, there
are specific tools that support effective process improvement.

6.5.1 Post Implementation Reviews

A Post Implementation Review is a method for examining a specific project to determine
what worked, what did not work, and what should be changed for the future. In some
organizations, this is referred to as the Lessons Learned Review. Post Implementation
Reviews should be conducted within a few weeks of the production turnover, but leaving
enough time to gauge how well the product is functioning in the hands of the intended user.
Although these reviews appear in virtually every methodology, they are often ignored or
given lip service in the rush to move on to the next project. Failure to learn from the current
project means that the problems encountered will continue to recur.
An effective Post Implementation Review will include key participants from each of the areas
participating in the project:
Business partners
Business analysts
Project managers
Software quality assurance
Designers



Developers
Testers
Change management
Training
Help desk
Operations
In smaller projects, one person may fill several roles; for larger projects there may be more
than one person representing each interest area. For smaller recurrent projects, the time
required to address the lessons learned will be brief; 15 to 20 minutes may be sufficient. For
larger projects, the time commitment may be a full day.
The agenda for the review will vary somewhat depending upon the specifics of the project, but
should include the following six categories taken from the Root Cause Analysis process:
1. Methods
2. Materials
3. Machines
4. Personnel
5. Environment
6. Data
A productive discussion in the personnel area would include the following kinds of issues:
Was the staffing level adequate? Too high or too low?
Was the skill and experience level appropriate for the tasks to be done?
Were people available when needed?
Were there motivation or interpersonal issues that impacted the project either
positively or negatively?
Similar discussions focused on each of the other areas will identify critical advantages or
problems that were reflected in the overall quality of the product and productivity of the team
and would include items such as the following:
Methods
Was a project plan developed? Was it reasonable? How closely did the project time
line follow the plan?
Was the proper project management available? Was it followed?
Was the proper project methodology selected? Did it perform as expected?



Were process improvements identified and implemented to one or more processes
or sub-processes based upon the experiences of this project?
Machines
Were the hardware items needed for requirements available in a timely fashion?
What about design, coding, testing and acceptance testing? Were they present in the
appropriate amounts?
Did the hardware provided reflect the needed configurations?
Were any changes required to the production hardware installed and successfully
tested prior to implementation?
Environment
Were the appropriate software products and tools available and installed as needed?
Were staff members properly trained in how to use the environment and tools
provided?
Was the environment stable and reliable? Was response time adequate for the work
to be done?
Did the environments established for testing mirror those of the production
environment? Were any gaps significant to the final quality of the product?
Data
Was there adequate data to conduct testing? Was test data properly scrubbed and
protected?
Were test results properly stored and backed up?
Were data integrity concerns identified and responded to promptly? Is the integrity
of the final product appropriate?
Were defects captured and analyzed? Was root cause analysis conducted on
significant areas of high defect density? Are all defects recorded in the defect
database?
Materials
Were hardcopy examples, materials and supplies available when needed? In the
appropriate amounts?
Were mock-ups of output documents ready when needed for testing? Were revisions
required prior to implementation?
In addition to these standard areas of discussion, the project may have presented some unique
issues that will need to be added to the agenda.
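One lightweight way to organize the resulting discussion is to record each observation under its Root Cause Analysis category. The six categories come from the agenda above; the sample observations and this particular structure are hypothetical, not prescribed by the review process:

```python
# Sketch of a Post Implementation Review log keyed by the six Root Cause
# Analysis categories. The recorded observations are hypothetical.

CATEGORIES = ["Methods", "Materials", "Machines", "Personnel", "Environment", "Data"]
findings = {category: [] for category in CATEGORIES}

def record(category, observation):
    """File an observation under its category, rejecting unknown categories."""
    if category not in findings:
        raise ValueError(f"Unknown category: {category}")
    findings[category].append(observation)

record("Personnel", "Staffing level adequate, but testers joined late")
record("Data", "Test data was not scrubbed before use")

for category in CATEGORIES:                  # summary for the review report
    for observation in findings[category]:
        print(f"{category}: {observation}")
```

Keeping every observation tied to a category makes it easy to roll the findings of many reviews into the process improvement data discussed below.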



These discussions can cause people to become uncomfortable, particularly if there are
politically or emotionally charged issues to be addressed. In this case it is essential to have an
outside facilitator for the process to maintain a productive climate and keep discussions in
focus. For particularly sensitive projects it may be worthwhile to have the facilitator use
polling equipment that will allow individuals to respond honestly but anonymously to the
issues.
The results of the Post Implementation Review should be used as data for the process
improvement process. It is possible to learn important lessons from a single project in this
context.

6.5.2 Defect Studies

Defects are the fingerprints of problems in the process. Capturing all possible information
about the defects created by each project will allow the organization to focus on areas for
improvement. Unlike the Post Implementation Review that focuses on a single project, Defect
Studies examine patterns of problems created over multiple projects. Data collected by the BA
during the project can be an invaluable contribution to Defect Studies. Defects and Defect
Studies will be examined in more detail in Skill Category 9.
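As a simple illustration of looking for patterns across projects, defect counts can be grouped by the phase in which each defect was introduced; recurring concentrations point to the process areas most in need of improvement. The defect records below are invented for the example:

```python
# Sketch of a cross-project defect study: count defects by the phase in
# which they were introduced to reveal recurring process weaknesses.
# The defect records are hypothetical.

from collections import Counter

defects = [  # (project, phase_introduced)
    ("Billing",   "Requirements"),
    ("Billing",   "Coding"),
    ("Inventory", "Requirements"),
    ("Inventory", "Requirements"),
    ("Payroll",   "Design"),
]

by_phase = Counter(phase for _, phase in defects)
for phase, count in by_phase.most_common():
    print(f"{phase}: {count} defects across projects")
```

In this invented sample, Requirements defects dominate across all three projects, which would direct improvement effort toward the requirements process rather than any single project.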

6.5.3 Surveys

One often overlooked tool for improving the process is the use of surveys. Unlike either Post
Implementation Reviews or Defect Studies, a survey is usually targeted at a specific problem
or opportunity. Often it is the follow-up step after a problem or opportunity is identified, and
can be used either to validate that the information is correct or to learn more.
Developing and administering surveys requires care and attention to detail. The act of
being surveyed arouses expectations in the minds of those participating. Customer satisfaction
surveys are an excellent case in point. If customers tell an organization in a survey that it takes
too long to use certain parts of a system, it creates an expectation that the organization will
make that part of the system run faster. If surveying business partners on
satisfaction with how long it takes to deliver functionality or the quality of the finished
products, the organization must be prepared to hear negative responses and address them
promptly.
Because of this phenomenon, organizations opting to conduct surveys should be careful to
limit the scope to those areas in which they are prepared to take action. Taking action includes
having identified the staff resources necessary to investigate further and initiate the needed
activity. Surveys of employees of the organization can be very informative and useful, but
will likewise create expectations of change.
Constructing survey questions should only be done after the organization has clearly
articulated the following:



What is it we need to know?
Who has that information?
Will they provide it to us voluntarily?
How many responses will we need?
How long will it take to collect the information?
How will we deliver the questions and receive the answers?
How will the information be used?
When must we know the answers?
How will we respond?
Can we respond in a timely fashion?
Writing the actual survey questions includes a determination of what kinds of data will be
collected and what scale(s) will be used. The number of responses often has an impact on the
format of responses; the larger the required number of responses, the more important it will be
to use simple numeric data that can be collected on-line, machine read or entered quickly.
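For numeric scales of the kind just described, tabulating responses is straightforward. This sketch uses invented responses on a 1-to-5 satisfaction scale to compute a simple distribution and mean:

```python
# Sketch of tallying simple numeric survey data (1 = very dissatisfied,
# 5 = very satisfied). The responses are invented for illustration.

responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]

distribution = {score: responses.count(score) for score in range(1, 6)}
mean_score = sum(responses) / len(responses)

print(f"Distribution: {distribution}")
print(f"Mean satisfaction: {mean_score:.1f} from {len(responses)} responses")
```

Even this minimal summary supports the follow-up step described below: rerunning the same tally after a process improvement shows whether the scores actually moved.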
The Business Analyst may be involved in the surveying process, supporting the development
of the survey instrument, participating in the analysis of the results, and developing an action
plan to respond to the information gathered. If a process improvement was initiated as a result
of survey responses, a follow-up survey should be conducted to ensure that the problem has
been addressed satisfactorily.

6.6 Summary
Planning for and executing successful Information Technology projects requires a sound
understanding of the various development approaches, their strengths and weaknesses.
Although the Business Analyst will rarely be involved in the decision of which methodology to
use, understanding how the methodologies work and what they have to offer will help achieve
the maximum result for the project.
Likewise, the Business Analyst typically plays a supporting, rather than a leading role in the
actual management of the development project. However, understanding what needs to be
done, being able to effectively communicate on project management issues and step into the
breach if needed, are all part of the essential skills for a Business Analyst.
A key element of any successful project is the anticipation and assessment of potential risk
events and creating a plan for addressing those risks. The Business Analyst will be a key
player in identifying business related risks and helping to assess various strategies for
effectively managing them.

6.7 References
Bassett, Paul Partnerships take out the garbage, Software Magazine, Nov 1994
v14 n11 p96(2).
S. Bell and A. T. Wood-Harper, Rapid Information Systems Development - a Non-Specialist's Guide to Analysis and Design in an Imperfect World, McGraw-Hill,
Maidenhead, UK, 1992.
W Cotterman and J Senn, Challenges and Opportunities for Research into Systems
Development, John Wiley, Chichester, 1992.
A L Friedman (with D.S. Cornford), Computer Systems Development: History,
Organization and Implementation, John Wiley, Chichester, 1989.
Frank Land, A Contingency Based Approach to Requirements Elicitation and
Systems Development, London School of Economics, J. Systems Software 1998.
Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V. Weber, "Capability
Maturity Model for Software, Version 1.1", Software Engineering Institute,
February 1993
Mark C. Paulk, Charles V. Weber, Suzanne M. Garcia, Mary Beth Chrissis, and
Marilyn W. Bush, "Key Practices of the Capability Maturity Model, Version 1.1",
Software Engineering Institute, February 1993.
Linda Spence, University of Sunderland, Software Engineering, available at
http://osiris.sunderland.ac.uk/rif/linda_spence/HTML/contents.html
Kal Toth, Intellitech Consulting Inc. and Simon Fraser University; lecture notes:
Software Engineering Best Practices, 1997.
G Walsham Interpreting Information Systems in Organizations, John Wiley,
Chichester, 1993.
Center for Technology in Government
University at Albany / SUNY
1535 Western Avenue
Albany, NY 12203
Phone: 518-442-3892
Fax: 518-442-3886
[email protected]
www.ctg.albany.edu


Skill Category 7
Acceptance Testing
Previous Skill Categories have discussed the development of requirements, processes,
procedures and controls. Included in these discussions were the details of verification
activities appropriate to each stage of development. In addition, earlier Skill Categories
provided an understanding of Systems Development Methodologies (SDM), the associated
processes and their role in the life of the project. In this Skill Category these tools will be
combined with testing tools and techniques to complete the process of developing products
and services needed by the organization.
The objective of software development is to develop the software products that meet the true
needs of the organization, not just the system specifications. To accomplish this, the Business
Analyst will work with the developers and subject matter experts early in a project to clearly
define the criteria that would make the software acceptable in meeting the business needs.
Throughout the project, once the acceptance criteria have been established, the Business
Analyst should integrate those criteria into all aspects of the development process.
For the Business Analysts whose primary role is in testing, the Quality Assurance Institute's
Certified Software Test Engineer Common Body of Knowledge1 is an additional and
expanded resource for testing information. It addresses all of the phases of software testing in
significantly greater detail.
Testing is conducted throughout the life of a software product. Each phase of testing has its
own set of acceptance criteria. Individuals performing these tests often refer to their activities
as Acceptance Testing. This usage is found in much of the testing literature.
Final product testing is often identified as a separate phase, and has historically been referred
to as User Acceptance Testing. Skill Category 1 and subsequent Skill Categories have
addressed the semantic difficulties produced by the term user. Throughout this Body of
Knowledge, the term Business Partner has been used to designate those within the
organization; these Business Partners may be co-developers and/or the intended recipients of
the finished application. The term Customer is used to designate those outside the

1. QAI, CSTE, Common Body of Knowledge, 2006.



organization who are the intended recipients of the actual software product or the services
provided by that product.
The two terms, Business Partner and Customer, together replace the less useful and more
generic term user. In this Skill Category the term Acceptance Testing, unless otherwise qualified,
will refer to what has historically been termed User Acceptance Testing. A more productive
term, Business Acceptance Testing, will appear from time to time in this Skill Category.

7.1 Concepts of Testing


Testing is a process for examining a unit or body of work against defined criteria. In the case
of Information Technology, that body of work may be requirements, design, code, test cases
and so forth. The standards are established in many sources: the Systems Development
Methodology, Standards and Procedures, GUI Standards, product branding, industry
standards, and most especially the requirements for the project.
Unlike the tests and examinations administered by schools and universities, in the
development process, the test is not the end; once defects are uncovered by testing, the
opportunity to correct them is available. So, for the purposes of the Business Analyst, the
definition of testing is the examination of a body of work against a known standard with the
intent to discover and repair defects. Acceptance Testing is the final subset of testing.
Acceptance testing has a more focused intent: it is designed to validate that the product
delivered meets the customer's needs; that is, it is acceptable to the customer.
The world of testing is divided into two realms: those activities that can be performed without
the use of executable code, called static testing, and those activities that require the use of
some portion of the executable code, called dynamic testing. Reviews and inspections are
classic examples of static testing, while systems and acceptance testing are typically focused
exclusively on dynamic testing.
Throughout much of the literature of testing, the terms Validation and Verification are used.
The usage is not consistent from source to source at a superficial level. Although a deep and
penetrating analysis of the supporting documentation by each of the authoring groups can
resolve some of the points of conflict, it is not a particularly useful exercise.
For the purposes of this Common Body of Knowledge and Skill Category, verification is used
to refer to activities performed prior to the existence of executable code. Validation is used to
refer to those activities which are performed with the executable code.
The V Model shown in Figure 7-1 represents the relationship of various life cycle activities
and the verification and validation activities that are conducted. The base of the V is the point
at which executable code is available. This marks the transition from the ability to only
perform static testing to the ability to perform both static and dynamic testing. In many
organizations testing does not begin until there is executable code. This deprives them of the
opportunity to improve the quality of their products at a reduced cost through the early use of
static testing approaches.

7-2

Version 9.1

Acceptance Testing
Verification activities begin immediately with the initiation of a potential project: during the
feasibility study and the development of the Cost Benefit Analysis, information about the
potential costs and benefits of the project is first estimated and later refined through financial
verification activities. These are discussed in Skill Category 4, Business Knowledge.

Figure 7-1 The V Model of Verification and Validation


Likewise, the activities to verify the Requirements, such as Inspections and the early
development of Use Cases are discussed in Skill Category 5, Requirements. The importance
of these early life-cycle verification activities to the success of acceptance testing cannot be
overstated. Failure to perform these functions in a timely and professional manner will cause
significant project quality and schedule problems.

7.1.1 Dynamic Testing Activities

When the term testing is used in Information Technology, dynamic testing is what comes to
mind. In many organizations there are no testing activities prior to Unit test. The quality of
the products produced in this scenario is generally poor and the relative costs are high. The
Business Analyst must understand the role and potential contributions of each of the various
testing activities in order to maximize the contribution of each to a quality product.



7.1.1.1 Unit Testing

Unit testing is performed by the developer upon completion of some amount (or unit) of work.
Unit testing begins with the smallest bits of code as they are developed. The developer, by
habit, training and inclination, tests to ensure that the code works as it was written to work.
This positive or confirmation testing is one of the reasons many defects remain undetected
during unit testing.
The developer tests to verify that "when I input the order quantity, the system properly
multiplies it by the unit price and displays the total price." As the code is developed, the size
of the unit being tested by the developer grows, but it is still being tested in isolation from
any other part of the system. It is good practice for developers to test each other's code; this is
addressed through paired programming in Agile methodologies.
As the functionality of the unit grows, the testing will become more complex, simulating the
input from predecessor units and creating output for successor units. Good developers will
make use of a wider range of applicable techniques discussed in Section 7.1.2. Providing the
developer with the test cases to be used by the acceptance test team can significantly improve
the quality of the process, causing many more errors to be identified at this time.
The Business Analyst can provide a quality contribution by reviewing the proposed
Acceptance Test material with the developer during this stage. Many times the developer will
indicate where there are complex areas of the code which helps the Business Analyst better
understand the nature and scope of the acceptance testing required. It also leverages the time
and effort spent to develop these test cases, by using them, or pieces of them, repeatedly.
While code is being developed, good developers take advantage of low overhead, high impact
verification techniques such as peer reviews of code and test cases. During unit testing, the
developer will uncover and repair any number of defects, some caused by ambiguous or
incorrect requirements, some caused by poor design and some created by the translation of the
design into code. Very few organizations are able to capture defect information at this level,
as helpful as it would be. Self-reporting of defects would provide invaluable information for
process improvement.
In traditional development life cycles, the size of the individual units can be quite large,
requiring weeks or even months to complete. Until the functionality is complete, it remains in
isolation. Iterative methods will generally have smaller, but still significant size. Agile
methods use a much shorter life cycle and continuous integration, so an individual unit may
only exist for a matter of a few hours to a few days.

7.1.1.2 Integration Testing

Components developed in isolation may seem to be working perfectly, but when added to
other units of functionality, defects appear. This is the purpose of integration testing: to find
those defects that occur in the transitions between one unit and another. Two, three or more
units of functionality may be combined to determine if they still operate correctly internally,
and if their joint functionality is achieved. These are still fragments of the system.

Typical examples of defects encountered in integration testing include data not being passed
at all, or not passed correctly, between one module and another, and error messages that do not
transfer or are inconsistent. In addition, integration testing may reveal portions of the
processing that are not as efficient as the design predicted. These performance issues must be
addressed promptly to avoid negative consequences for both the schedule and customer
satisfaction.
During integration testing, the emphasis is still on validation of the functionality. Only limited
assessments can be made of system performance while it is in a fragmentary state. If the
Business Analyst and the developer work together on creating acceptance test cases early in
the project, their reuse will again speed up and improve the quality of the product. The
Business Analyst or the test team may also be called upon to develop additional test cases as
issues are identified during integration testing. If developers are not provided with the needed
resources and incentive, testing will again focus on verifying that the product performs as
expected.
In each case it is necessary to identify the source of the error and to correct it. If defect logging
was not initiated at the unit level, it is essential that it begin here. For small modifications,
developer testing may essentially end with integration, as the unit that was developed is
integrated into the working system and tested.

7.1.1.3 System Testing

System testing is the rigorous examination of all of the integrated components of a new
product to ensure that they meet the full agreed upon set of requirements for that specific
delivery. System testing is conducted in a replicated context of the production environment to
allow the product to interact with other components. System testing is generally conducted by
a combination of a third party test team and the development team.
If the appropriate level of unit and integration testing has been conducted, the functional
defects identified should mainly be those resulting from the interfaces with other
products and systems. The majority of simple, linear functional defects should have been
discovered and addressed in earlier stages. What will be identified during system testing, in
addition to interface defects, are quality defects and more complex defects caused by
interaction. Additionally, system testing will identify defects in code that was previously
blocked from execution due to missing components. System testing is often divided into two
stages, Alpha and Beta.
Alpha testing is conducted in the early stages and is often very rough, with many defects
encountered. Frequently during alpha testing, the system will not run to conclusion, but will
result in an abnormal termination. Alpha testing should be conducted by the development
team to ensure the quality of the product prior to turn over to the test team. As these early
defects are removed and the system becomes more stable, testing moves into the Beta testing
stage.
During Beta testing the system is still known to contain a number of defects, and so is not
production ready. The intent of Beta testing is to drive out defects occurring in the most
commonly used paths through the product and is generally conducted by the third party test
team, not developers. This test team may be comprised of Information Technology staff as
well as individuals from the business unit. The Business Analyst, whether a part of IT or the
business, is typically included in this test team.
As was the case in integration testing, the source of the error and the design of the solution
will impact the timeframe for correction and return to testing. Each defect should be tracked
carefully. Analysis should be performed on all defects that have been exposed to prior quality
control activities to determine how they remained undetected.
During systems testing for larger products, the Business Analyst may be increasingly involved
in the interpretation of the test results and in the development of additional test cases to
address solutions to previously identified defects.

7.1.1.4 Acceptance Testing

At a theoretical level, acceptance testing is the final set of activities immediately prior to
implementation of the product; the software should be production ready. The definition of
production may differ depending upon the product. For example, if a vendor is developing
custom code for inclusion in a client's existing product, "production ready" may mean that it
is ready for the client to test. It is conducted by the user to ensure that the product meets the
requirements. In the previous example, the client would perform the acceptance test of the
vendor code. Acceptance testing was intended to be a validation of the existence of the
functionality, or what is sometimes termed a "Victory Lap." In all too many organizations,
acceptance testing has become an extended version of the beta portion of the system test.
If early stages of the testing effort have not been as extensive and as rigorous as they should
be, the result is that the number of defects remaining is very high. This in turn leads to
additional problems in completing an effective acceptance testing process, resulting in poor
quality systems, late deliveries and budget overruns. The keys to preventing these issues will
be addressed later in this Skill Category.

7.1.2 Dynamic Testing Types

Throughout the testing process there are many approaches that can be used to determine what
to test, when to test, how much to test and so on. Understanding each of these approaches,
what they contribute to the testing process and to the quality of the final product is essential
for the construction of a good acceptance test plan.
Some of the approaches are simple and almost intuitively obvious; others are more complex,
requiring significant time and effort to apply effectively. Each approach offers specific
advantages. The Business Analyst, working with the project manager and a testing manager if
there is one, will determine which tests will be appropriate for the specific product under
consideration.
Some of these approaches will be used consistently, beginning with Unit Testing and
continuing all the way through Acceptance Testing; others may have only limited application.

Although this Skill Category focuses on the use of these testing types in the Acceptance
testing process, their use in other areas will occasionally be noted.

7.1.2.1 White Box

White box testing could be referred to as clear box testing; it is the exact opposite of black box
testing in that what is happening inside the box is the entire focus of activity. The focus of
white box testing is to determine if the product has been built right; for this reason it is often
referred to as structural testing. Unit and integration testing are examples of white box testing.
White box testing relies upon a sound knowledge of how the product was built and tests the
paths and decision points of the product in ways that black box testing does not. In white box
testing it is not enough to know that a given set of inputs does produce a specific output; it is
important to know if it will always perform in the same manner. For this reason white box
testing often accesses logic and paths that are rarely encountered in black box testing.
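The path-oriented focus described above can be sketched in code; the `apply_discount` function, its branch condition, and the chosen cases are illustrative assumptions, not part of any specified product.

```python
# Illustrative white-box view: the tester knows the internal branch structure
# and designs cases to exercise every path, including the rarely taken one.
# apply_discount is a hypothetical function whose internals are visible.

def apply_discount(total: float, customer_years: int) -> float:
    if customer_years >= 10:        # path A: loyalty discount
        return round(total * 0.90, 2)
    return total                    # path B: no discount

# One case per internal path, chosen from knowledge of the branch condition.
path_cases = [
    ((100.0, 10), 90.0),   # exercises path A at the decision boundary
    ((100.0, 9), 100.0),   # exercises path B just below it
]
```

Note that the case at `customer_years == 9` exists only because the tester can see the `>= 10` condition; a purely black box tester might never probe that exact point.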
White box testing is generally performed by developers and should be completed by the time
acceptance testing begins. Because there will be some rework and because over time testers
and business analysts come to understand the structure of the product they are testing, some
white box testing may be performed during acceptance testing. If a significant amount of
acceptance testing resources are being spent on white box testing, it is generally an indication
of poor development practices.

7.1.2.2 Black Box

The concept of black box testing originated in the world of hardware, with the idea that most
people didn't really care what was going on inside, they just wanted product B to successfully
connect product A to product C. If the product functioned successfully, the users didn't care
what was going on inside the black box. This emphasis on functionality is central to the
acceptance testing of a software product and black box is the most consistently used form of
acceptance testing.
Black box testing requires a sound understanding of what the product is intended to do. Test
cases will track input(s) to output(s), ensuring that the proper responses are received. Use cases
developed throughout the development of the product are focused on functionality and are a
form of black box testing. Black box testing is sometimes difficult for developers to use successfully,
because they are fully aware of the internal structure of the product and that knowledge may
change the way they design tests.
One advantage of black box testing is that testers do not need to be IT knowledgeable; they
must only understand the business side and be willing to execute the test process carefully
according to the plan. In addition, black box testing focuses on the most fundamental question
about the product: has the right product been built?
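A black box test case can be sketched as a simple input-to-expected-output table; the `total_price` function and the sample cases below are hypothetical stand-ins for the product under test.

```python
# Illustrative black-box test table: each case pairs inputs with the expected
# output, with no reference to the internal structure of the code under test.
# total_price is a hypothetical stand-in for the function being tested.

def total_price(quantity: int, unit_price: float) -> float:
    return round(quantity * unit_price, 2)

CASES = [
    # (quantity, unit_price) -> expected total
    ((2, 5.00), 10.00),
    ((3, 1.99), 5.97),
    ((0, 9.99), 0.00),   # an empty order should total zero
]

def run_black_box(func, cases):
    """Return the cases whose actual output differs from the expected output."""
    return [(args, expected, func(*args))
            for args, expected in cases if func(*args) != expected]
```

An empty result from `run_black_box` means every tracked input produced its expected output, which is exactly the evidence black box testing seeks.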



7.1.2.3 Equivalence Testing

In even a very small product, the number of black box and white box test cases that can be
developed for execution can become overwhelming. Over time a number of approaches have
been developed to help reduce the sheer volume of test cases, while not compromising the
integrity of the testing process. Equivalence testing is one of those approaches.
In a typical product, there are multiple situations in which some groups of responses are valid
and other groups are not. These groups are referred to as equivalency classes; each member of a
class should be treated the same by the module under test and should produce the same answer.
Test cases should be designed so that the inputs lie within these equivalency classes.2
Example 1. A specific field might call for a value between 1 and 9. The numbers 1
through 9 are all valid and are all members of the same group or class of responses:
the valid group. Numbers less than 1 or greater than 9 belong to a second group:
invalid responses.
In creating test cases it is not necessary to test all the valid or positive responses,
only a representative few. Likewise it is not necessary to test all of the invalid
responses, only a representative few. However, that representation must include
both types of invalid responses: error and negative.
There may be multiple groups of invalid and valid responses for any particular
entry, not merely two. Each class of responses must be represented in the test
cases.
Example 2. In Example 1, the valid classes could be expanded based upon
some criteria: Sales, use 10 through 19; Marketing use 30 through 35; Finance use
50 through 58; and Human Resources use 70 through 79.
Test cases would need to be executed to verify the behavior of each valid and
invalid group.
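The selection logic of Example 2 can be sketched as follows; the `classify_department` helper and the choice of a mid-class member as each representative are illustrative assumptions, not part of any specified product.

```python
# Hypothetical sketch of equivalence-class selection for Example 2.
# Each valid class is represented by one member; testing every value in a
# class is unnecessary because the module should treat all members alike.

VALID_CLASSES = {
    "Sales": range(10, 20),            # 10 through 19
    "Marketing": range(30, 36),        # 30 through 35
    "Finance": range(50, 59),          # 50 through 58
    "Human Resources": range(70, 80),  # 70 through 79
}

def classify_department(code: int) -> str:
    """Return the department for a code, or 'INVALID' for any other value."""
    for dept, valid_range in VALID_CLASSES.items():
        if code in valid_range:
            return dept
    return "INVALID"

def representative_cases():
    """One mid-class member per valid class, plus one member per invalid gap."""
    cases = []
    for dept, valid in VALID_CLASSES.items():
        members = list(valid)
        cases.append((members[len(members) // 2], dept))
    # Representatives of the invalid classes: below, between, and above.
    cases += [(9, "INVALID"), (25, "INVALID"), (80, "INVALID")]
    return cases
```

Seven representative cases stand in for more than forty possible input values, which is the resource saving equivalence testing offers.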

7.1.2.4 Boundary Value Analysis

Equivalency testing helps to reduce the number of test cases by assigning responses to classes.
Those classes can be quite large and have many members. Boundary value analysis helps to
target the remaining test cases to those areas most likely to exhibit problems.
Boundary value analysis is based on the concept that errors tend to congregate at boundaries
between valid and invalid input.3 The phenomenon of defect clustering is well established and
boundary value analysis uses that to leverage the effectiveness of the executed test cases by
focusing on higher risk areas. These limits need not just be between valid and invalid data;
they can also be at block boundaries such as 255, 256, 1024, and 1025.
2. Williams, Laura, op. cit.; referencing Boris Beizer, Black Box Testing, John Wiley & Sons, New
York, 1995.
3. Beizer, Boris, Software Testing Techniques, International Thomson Computer Press, Boston, 1990.

Focusing testing efforts on the edges or boundaries of each class is likely to reveal defects. By
using a focused approach, testing resources can be used most efficiently. Because of their
location on the outer limits of the acceptable range, some organizations also refer to this
approach as limit testing.
Example 3. In Example 2 there are multiple boundaries between valid and invalid
data. The lowest number of each range is the lower boundary value (10,30,50,70);
the next lower number in each case is an invalid entry (9,29,49,69). Likewise the
highest number in each case is the upper boundary value (19,35,58,79), with the
next higher number in each case being invalid (20,36,59,80).
Developing test cases that focus on these numbers will satisfy the requirements of
looking for defects where they are most likely to occur (on the margins) while
controlling the total number of black box test cases to be executed. If the lower and
upper boundary ranges are maintained for two of the four subgroups, it may not be
necessary to test each control limit; however, if defects are identified, further
testing will be required.
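The boundary pairs enumerated in Example 3 can be generated mechanically; the range list and the four-values-per-range pattern below are a sketch of the technique, not a prescribed test design.

```python
# Hypothetical sketch of boundary value analysis for Example 3.
# For each valid range, the test values are the lower and upper bounds
# plus the invalid values immediately outside them.

RANGES = [(10, 19), (30, 35), (50, 58), (70, 79)]  # Sales, Marketing, Finance, HR

def boundary_values(low: int, high: int):
    """Return (value, expected_valid) pairs at and just outside the limits."""
    return [
        (low - 1, False),   # just below the lower boundary: invalid
        (low, True),        # lower boundary: valid
        (high, True),       # upper boundary: valid
        (high + 1, False),  # just above the upper boundary: invalid
    ]

def boundary_test_cases():
    return [case for low, high in RANGES for case in boundary_values(low, high)]
```

Sixteen targeted cases cover every margin where defects are most likely to cluster, matching the numbers listed in the example (9, 10, 19, 20, 29, 30, and so on).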

7.1.2.5 Smoke Testing

At the early stages of each testing effort, and especially the acceptance testing effort, it is
essential to perform a quick assessment regarding the overall quality and readiness of the
product. To do this, a series of test cases that exercise the basic functionality of the product is
executed. It establishes that the major functionality is present, the product is stable, and that it
works under normal conditions. This initial series of test cases is often referred to as a
"smoke test."
The reason behind this assessment is to determine if the product is indeed ready to be tested in
a more rigorous fashion. If basic functionality is missing or not being executed properly (as
documented in the requirements), there is little point in proceeding to full acceptance testing;
the product should be returned to the developers or systems testers for further work.
To make this process function effectively, the methodology should contain stage and phase
exit and entry criteria. For each project, these criteria should have been fully specified during
the project planning process. Smoke tests yielding results that do not meet the entry criteria for
acceptance testing are rejected.
A stripped down version of the regression test suite developed and used during integration and
systems testing can be used for the initial smoke test. Using a risk-based analysis of the major
functionality, it should focus on the key project deliverables. This should be a pro forma
activity; no defects should be uncovered by these test cases at this stage of the project. It
should not be necessary to allow any defect correction time in the plan for the smoke test. If
defects are uncovered at this juncture, it points to major development issues in the
organization.
Use of smoke tests is not limited to acceptance testing; it is a valuable and time-saving
approach to each transition during the testing of a product or process. Smoke tests need not
focus exclusively on functionality (or black box testing). For some products, testing of
performance and quality factors will be an appropriate portion of the smoke test process.
Good practice would indicate that before certifying, promoting or announcing that a product is
ready to move to the next level, the entry level smoke test for the next stage should be
executed successfully. Failure to do so is an indication of lack of understanding of the intent
of the development and testing processes.
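The entry-gate role of a smoke test can be sketched as a short list of must-pass checks; the check names and the all-or-nothing gate below are illustrative assumptions.

```python
# Illustrative smoke-test gate: run a handful of basic checks and refuse
# entry to acceptance testing unless every one of them passes.

def smoke_test(checks):
    """checks: mapping of check name -> zero-argument callable returning bool.
    Returns (ready, failures); ready is True only if all checks pass."""
    failures = [name for name, check in checks.items() if not check()]
    return (len(failures) == 0, failures)

# Hypothetical basic-functionality checks for a product under test.
checks = {
    "application starts": lambda: True,
    "login succeeds": lambda: True,
    "order entry screen loads": lambda: False,  # a missing core function
}

ready, failures = smoke_test(checks)
# ready is False here, so per the entry criteria the product would be
# returned to development rather than proceeding to acceptance testing.
```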

7.1.2.6 Regression Testing

Each component added to a system or product has the capability of causing other components
to fail. If appropriate steps are not taken, these failures may remain undetected until late in the
test life cycle, or even into the production environment.
Regression testing is the name given to the activities designed to ensure that changes do not
have a negative impact: things that worked before the new component was added still work.
Like smoke testing, regression testing has a primary, but not exclusive focus on functionality.
Unlike a smoke test, the regression test focuses selectively on the areas that have been
changed to ensure that nothing has been broken.
In the systems and acceptance testing environments, effective regression testing is essential.
In large products it is common to deliver functionality for testing in staged increments. The
first step when reviewing a new version of the system is to run the regression test suite to
ensure that everything that worked correctly before, still works.
During the initial stages of system testing, the regression test suite (those cases used
consistently to ensure adherence to specifications already implemented) will be fairly small.
As new functionality is successfully added to the product and validated through testing, more
cases will be added.
Controlling the growth of the regression test suite, so that it does not consume a
disproportionate amount of resources, is an ongoing challenge for large projects, especially
those that are updated with sizable releases of new and revised code. One approach to
handling this workload is to create sub-suites that address only targeted areas of
functionality for use with defect corrections. Full regression testing, using all of the cases, is
then conducted at planned intervals (weekly, biweekly, etc.).
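The sub-suite approach described above can be sketched by tagging each case with the functional areas it covers; the case identifiers and area tags below are hypothetical.

```python
# Illustrative sketch of regression sub-suites: tag each test case with the
# functional areas it exercises, run only the matching sub-suite for a
# targeted defect fix, and run everything at the planned full-regression
# interval (weekly, biweekly, etc.).

REGRESSION_SUITE = [
    {"id": "TC-001", "areas": {"orders"}},
    {"id": "TC-002", "areas": {"billing"}},
    {"id": "TC-003", "areas": {"orders", "billing"}},
    {"id": "TC-004", "areas": {"reporting"}},
]

def select_cases(changed_areas=None):
    """Return the sub-suite touching changed_areas, or the full suite if None."""
    if changed_areas is None:
        return list(REGRESSION_SUITE)  # the planned full regression run
    return [tc for tc in REGRESSION_SUITE if tc["areas"] & set(changed_areas)]
```

A billing-only defect correction then triggers only the two billing-tagged cases, while the weekly run still executes the whole suite.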
Because regression tests focus on functionality and will be executed repeatedly, they are
excellent candidates for automation. Passing the regression test does not mean that the new
functionality is working properly, or that it is implemented correctly; it only means that it does
not break anything else. Regression testing is not a substitute for other testing activities.
Likewise, careful testing of the new component, in and of itself, does not mean that the change
has no unintended consequences.

7.1.2.7 Stress Testing

Until the functionality of the product has been validated to a significant extent, it is not
possible to effectively determine how well it will function in the production environment.

Each change carries with it the potential to positively or negatively impact performance. As
system testing progresses, it is necessary to confirm that the performance level requested in
requirements, created in the design, and seen in unit and integration testing will exist in the
hands of the ultimate product user. This must be validated to confirm that the product is in fact
production ready.
Products are designed to operate within specified tolerances. These tolerances should be
clearly defined as a part of the requirements definition process. In the absence of any
specification, the designer will assume that performance is at his or her discretion (a
very dangerous assumption). For small projects and products, stress testing may not be an
issue; for larger projects and products, it is consistently an issue.
Stress testing seeks to push system activity past the design limitations to determine how and
when it will fail. If the system is designed to allow 1000 concurrent transactions and still
provide 1 second or less screen to screen response time, a stress test may first attempt the
designated level (1000 concurrent transactions) and if that is successful, increment that
number until either transactions fail the screen to screen response time test or a designated
volume is achieved.
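The ramp-up logic just described can be sketched as follows; the response-time model, the step size, and the ceiling are all assumptions standing in for a real transaction simulator.

```python
# Illustrative ramp-up logic for a stress test: start at the designed load and
# increment until the response-time requirement fails or a ceiling is reached.
# simulate_response_time is a hypothetical stand-in for a transaction simulator.

def simulate_response_time(concurrent_transactions: int) -> float:
    """Assumed model: response degrades once load passes a hidden capacity."""
    return 0.5 + max(0, concurrent_transactions - 1200) * 0.01

def find_breaking_point(design_load=1000, limit=1.0, step=100, ceiling=5000):
    """Return the first load level at which screen-to-screen response exceeds
    the limit (in seconds), or None if the requirement holds to the ceiling."""
    load = design_load
    while load <= ceiling:
        if simulate_response_time(load) > limit:
            return load
        load += step
    return None
```

Here the designed level of 1000 concurrent transactions passes, and incrementing reveals where the requirement first fails, which is exactly the "how and when it will fail" question stress testing asks.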
Because of the need for very high volumes of activity, many stress tests are conducted using
transaction simulators, software that is specifically designed for this purpose. These products
are installed and maintained by the organization's operations and network staff. The tests
themselves may be conducted without any human activity, often overnight. Alternatively,
some products allow for people to participate concurrently with the tool; in these cases the
business analyst may perform specific transactions to observe the results first hand.
Very rarely, stress tests will be conducted using only human input. These tests require
significant time and effort to plan and execute. They typically are conducted during
non-business hours (nights, weekends, and holidays) and require that business people access the
application and perform specific tasks during the test time. This is a very expensive approach
to stress testing. Often the reasons for this type of stress testing are more political than
technical.
Large scale stress tests are typically conducted late in the acceptance testing process, after the
product has become quite stable and is reliably producing the intended functional results.
Scheduling stress tests involving the business community before this occurs can result in
irreparable damage to the perception of the quality of the product and can jeopardize the
success of the project.

7.1.2.8 Conditional and Cycle Testing

During unit testing the ability of the product to perform specific tasks and provide the correct
results is validated. For many projects, there are multiple activities that can occur during the
life of a product and individual tasks are combined in varying sequences to produce a specific
result.
In the simplest case, a product might require that a customer account must exist before an
order can be placed. This condition must be met before the order taking process can proceed.
During unit testing various components of the functions needed to establish a customer
account and take an order will be tested individually. During integration testing the full
functionality of each process will be tested; in both systems and acceptance testing, the
combination will be tested. Test case documentation must clearly specify both the pre-condition (a valid customer account) and the post-condition (a valid order).
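The pre-condition and post-condition from the order-taking example can be sketched directly in a test; the account store, the `place_order` function, and its fields are hypothetical.

```python
# Illustrative pre-/post-condition check for the order-taking example: a valid
# customer account must exist (pre-condition) before an order can be placed,
# and a valid order must exist afterward (post-condition).

accounts = {"ACME": {"active": True}}  # hypothetical customer account store
orders = []

def place_order(customer: str, quantity: int):
    # Pre-condition: a valid, active customer account exists.
    assert customer in accounts and accounts[customer]["active"], \
        "pre-condition failed: no valid customer account"
    order = {"customer": customer, "quantity": quantity}
    orders.append(order)
    # Post-condition: a valid order is now recorded for the customer.
    assert order in orders, "post-condition failed: order not recorded"
    return order
```

Attempting to place an order for an unknown customer fails the pre-condition, which is precisely the condition-sequencing behavior the test case documentation must capture.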
Many applications operate on activity cycles that must be replicated during the testing process.
Often the activity cycles are either date or time driven. To effectively test the application, it is
necessary to simulate the time or date flow to ensure the proper result; this is cycle testing.
Billing systems offer an excellent example of the issues addressed by the cycle testing
process:
Day 1 - Order is placed. A batch process collects all order information overnight and
forwards it to the billing system to hold pending notification of shipment.
Day 2 - Shipment confirmation passed to billing system in a batch process overnight.
Shipments and pending bills are matched; bills are formatted and printed.
Day 3 - Bills are mailed.
Days 4 through 30 - Payments are posted to customer file as received. Payment in full
closes the invoice, partial payments are recorded.
Day 31 - Follow up invoice is generated for outstanding full or partial payment. File is
flagged as late.
Days 32 through 60 - Payments are posted to customer file as received. Payment in
full closes the invoice, partial payments are recorded. If a previous partial payment
was received and an additional partial payment is received, it will constitute payment
in full if the difference is less than $5.00.
Day 61 - Files flagged as late are sorted into groups based on account standing and
amount: Large accounts (annual volume greater than $100K) with less than $1000 due
are sent a second notice; new accounts with payments of more than $500 due are sent
to a collection agency and a letter to that effect goes to customer; accounts with more
than $100 due are flagged to require payment before another order can be accepted.
All other accounts are listed on a report that is distributed to accounting for specific
follow-up and resolution.
To effectively test how well the product performs the various actions specified on Day 61, all
of the previous cyclical activity must have occurred. Cycle testing can be very time
consuming for extended application cycles, even when months are processed as days, which
can be done using some of the tools on the market. Failing to allow adequate analysis time for
review of test results between cycles can result in defects escaping into later stages. It is also
important to remember that not all defects will be uncovered and resolved in the first cycle
loop. The schedule should include enough time to run the cycle at least 3 times during
acceptance testing.
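The Day 61 sorting rules from the billing example can be sketched as a classification function; the function name and the assumption that the rules are evaluated in the order listed are illustrative.

```python
# Illustrative sketch of the Day 61 rules from the billing-cycle example.
# In cycle testing the simulated "day" is advanced by a tool rather than
# waiting in real time; this function applies the account-sorting rules
# to an invoice that has been flagged as late.

def day_61_action(annual_volume: float, amount_due: float, is_new: bool) -> str:
    """Classify a flagged-late invoice per the Day 61 rules, in listed order."""
    if annual_volume > 100_000 and amount_due < 1000:
        return "second notice"                       # large account, small balance
    if is_new and amount_due > 500:
        return "collection agency"                   # new account, large balance
    if amount_due > 100:
        return "payment required before next order"  # flagged on the account
    return "accounting follow-up report"             # all other accounts
```

Each rule outcome becomes an expected result in the cycle test, verifiable only after the full Day 1 through Day 60 activity has been simulated.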

7.1.2.9 Parallel Testing

When the new product or system is designed to replace an existing one that performs
essentially the same function, it is important to verify that given the same inputs, they will
produce the same results. A parallel test is designed to fulfill that need.
Parallel tests are typically conducted near the conclusion of the acceptance testing cycle and
are black box style (functional orientation) tests. Construction of an effective parallel test
requires that the production environment be replicated in enough detail to allow for interfaces
among systems, updating of mock data stores, execution of batch processes and production of
journals, control totals and printed materials such as forms and reports.
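The same-inputs, same-results comparison at the heart of a parallel test can be sketched as follows; `legacy_total` and `new_total` are hypothetical stand-ins for the existing and replacement systems.

```python
# Illustrative parallel-test comparison: feed identical inputs to the legacy
# and replacement implementations and report any result mismatches.
# legacy_total and new_total are hypothetical stand-ins for the two systems.

def legacy_total(quantity: int, unit_price: float) -> float:
    return round(quantity * unit_price, 2)

def new_total(quantity: int, unit_price: float) -> float:
    return round(quantity * unit_price, 2)

def run_parallel(inputs):
    """Return the inputs for which the two systems disagree."""
    return [(q, p) for q, p in inputs
            if legacy_total(q, p) != new_total(q, p)]

mismatches = run_parallel([(1, 9.99), (3, 2.50), (10, 0.10)])
# An empty mismatch list is the evidence the parallel test is looking for;
# any entry must be investigated before the move to production.
```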
One of the issues with a parallel test is the need to use production data to ensure the validity of
the processing. In most countries use of personal information is closely controlled to prevent
the unauthorized access to, and use of, that information. By its very nature, a parallel test will
strain that protective wall separating development from production. Construction of an
appropriate parallel test will include a plan for who will be able to review the results of the test
and how the data stores and printed materials will be managed to protect the confidentiality of
customers or employees whose records appear.
Financial applications in particular are subject to extended parallel tests to ensure that the
organization's records will not be compromised in any way by the installation and
implementation of the new product. It is common to run for several weeks to a full quarter in
parallel before the decision is made to move to production. This can place a significant burden
on the operations staff, who must be full participants in the parallel planning process. It
can also dictate the possible implementation dates for the project.

7.1.2.10 Risk-based Testing

Few projects have ever had the resources needed to fully test all of an application until every defect is uncovered and resolved. At some point, decisions must be made about what to test and how long to test. Risk-based testing is a fact-based technique for making those decisions. Risk-based testing can be applied at any stage in the testing process after unit testing has been successfully completed.
In Skill Category 6, the fundamentals of project risk management were presented. Using the same approach to focus testing resources will help to make the best use of the resources available. When creating a risk model to determine the testing emphasis, the following factors should be considered:4
Customer or user impact
Prior history
New and modified parts
Complexity
Frequency of Use
Exceptions
Internal Operations
Business Continuity
Pareto's Law (80/20 Rule)

4. Dustin, Elfriede; Effective Software Test Management; 2005.
By weighting each factor and assessing the test cases against those factors, risk-based testing will identify those cases that will have the most positive impact on the final product. A matrix of this sort will help the Business Analyst, and the test team, make the most of the available testing resources.
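One way to put the weighting idea into practice is sketched below. The factor weights and the 1-5 ratings are illustrative assumptions, not values from the CBOK; each organization would calibrate its own model.

```python
# Illustrative weights for some of the risk factors listed above (assumed values).
WEIGHTS = {
    "customer_impact": 5,
    "prior_history": 3,
    "new_or_modified": 4,
    "complexity": 3,
    "frequency_of_use": 4,
}

def risk_score(ratings):
    """Weighted sum of a test case's 1-5 factor ratings."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

def prioritize(test_cases):
    """Order test cases so the highest-risk ones are executed first."""
    return sorted(test_cases, key=lambda tc: risk_score(tc["ratings"]), reverse=True)
```

When testing time runs short, the cases at the bottom of the prioritized list are the candidates to defer, and the decision is defensible because it rests on the documented risk model rather than intuition.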

7.1.2.11 Security Testing

The growing ranks of hackers, who trash systems and applications for pleasure and notoriety, have led to a significant increase in the security envelope surrounding products. Most of the testing effort has focused on how well the application performs within that envelope, but that is not enough. It is essential to know that once exposed to the production environment, the product will stand up to repeated assaults by hackers, as well as the probing of the merely curious.
The basic security envelope is part of the initial requirements, and as such the components and
functions have been tested incrementally throughout the development of the product. This
includes verification that the rights and privileges of authorized users function appropriately
to allow or deny access in specific situations. This is not the same as a concerted attack on the
product, conducted by those skilled in evading or fooling security checkpoints. A full security
test will include both internal and external attempts to compromise the product.
The Business Analyst may not possess the skills needed to fully test the security envelope; it may be necessary to bring in someone with special skills from another area of the organization, or a consultant. Within the organization the skills may be found in the Network Support group, in Data Security or occasionally on the Internal Audit staff. If they do not have the skills, they can often recommend where to look for them.
Not every implementation will require a full security test; however, any application that requires even a small change to the security envelope should include some security testing. The more significant the change, the more robust the testing must be. As the representative of the
business community in the development effort, it is the responsibility of the Business Analyst
to ensure that security tests are included in the project plan once the risks are identified, and
conducted when appropriate.

7.1.2.12 Backup and Recovery Testing

Organizations continue to capture and retain increasing amounts of data about their products,
their customers, their business and their environment. The organization relies upon the
assumption that if at any point the production data files are compromised or damaged, they
can be restored from backups. Like any other assumption, if untested, this assumption may
result in serious problems. Throughout the development life cycle, code and test cases have been stored and backed up using the development environment. When that code is preparing to move from development and test to the production environment, it is important to verify that the backup and restoration processes produce the expected results.
For large new projects or those that add significantly to the data being stored, a test of the
backup process and data restoration should be performed shortly before implementation, near
the end of acceptance testing. These tests will be performed by the Operations and Network
staff, not the test team or the business analyst. Once successfully in production, testing of
backup and recovery procedures will become a routine part of the production environment.
As with the Security tests, the responsibility of the Business Analyst is to ensure that these
tests are included in the project plan and conducted in a timely fashion.
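A common way for the Operations staff to verify a restore is to compare checksums of the data before backup and after restoration. The sketch below illustrates the idea only; in practice this verification is handled by the backup tooling itself, and the function names here are hypothetical.

```python
import hashlib

def file_digest(path):
    """SHA-256 digest of a file, read in chunks to handle large data stores."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_path, restored_path):
    """True only if the restored copy is byte-identical to the original."""
    return file_digest(original_path) == file_digest(restored_path)
```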

7.1.2.13 Failure and Ad Hoc Testing

The emphasis of many of the testing approaches discussed is on verifying that the
functionality exists and works correctly. To increase test coverage, this testing focuses on
areas that are generally overlooked or particularly error prone (boundaries) and uses
techniques such as equivalency partitioning and risk-based testing to reduce the total number
of test cases to be written.
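To illustrate the equivalency partitioning technique mentioned above, suppose the kiosk's ticket-quantity field accepts 1 to 10 tickets (an assumed rule, used here only as an example). Instead of testing every possible value, one representative is drawn from each equivalence class:

```python
# Equivalence classes for a quantity field accepting 1-10 tickets (assumed rule).
PARTITIONS = {
    "below_minimum": range(-2, 1),    # invalid: zero or fewer tickets
    "valid": range(1, 11),            # valid: 1 through 10 tickets
    "above_maximum": range(11, 14),   # invalid: more than 10 tickets
}

def representatives(partitions):
    """Pick one value per equivalence class instead of testing every value."""
    return {name: next(iter(values)) for name, values in partitions.items()}
```

Boundary values (0, 1, 10, 11 in this example) are usually added to the representative set, since that is where defects cluster.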
Failure testing is often experience driven; it is designed specifically to target areas the
Business Analyst or tester suspects might not function properly. Failure testing is often
performed by the most experienced members of the acceptance test team because they have
developed an understanding of the kinds of things that might cause problems for the product
or system. Professional testers and experienced BAs may need to create specific test cases for
this purpose toward the end of acceptance testing if significant defects are encountered.
In the final stages of acceptance testing, many organizations ask members of the business community or the potential customers to "try it out, do what you normally would do."5 This ad hoc approach may reveal unexpected data entries or unanticipated screen and functionality sequences.
The hope is that no defects will be uncovered during these sessions and that those involved in the testing effort will form a favorable opinion of the system. The problem with ad hoc testing is that recreating a problem is often a challenge, as the documentation of what was done may be non-existent. Because this is also true of problems occurring post-implementation, this is a good opportunity to involve the help desk staff.

5. Some experienced testers will allocate as much as 20-25% of the allocated test time for this technique and are careful to include both very experienced testers and some with little or no product knowledge or test skills.

7.1.2.14 Other Test Types

This is not an exhaustive list of every possible type of test that any organization could
possibly use for acceptance testing. Some of the others that may be used include:
Usability Testing - conducted to evaluate the extent to which a user can learn to
operate, prepare inputs for, and interpret outputs of a system or component.6
Data Validation Testing - conducted to verify that the data components satisfy the specified requirements
Production Verification Test - conducted using selected processing streams to
verify that all necessary parts of an application were included in the move to
production. Sometimes referred to as a staging test.

7.2 Roles and Responsibilities


In the previous discussion of testing approaches and strategies it is evident that there are many
potential participants in the acceptance testing process. A clear understanding of the roles and
responsibilities of each is essential if the acceptance testing process is to run smoothly and
meet its objectives. The listings below are focused on those activities with a direct impact on the acceptance testing activities. Each of these lists could be expanded significantly if the full scope of each role's functions were considered.

7.2.1 Project Manager

Not all projects have an official project manager, but all teams have someone fulfilling that
role. The job of the project manager is to:
Create, maintain and control the overall project plan
Coordinate and manage the integration of sub-plans, such as the Test Plan, ensuring
that sub-plans include time and resources for the appropriate levels of verification
and validation activities
Maintain control of scope, which includes the growth of requirements and any
resulting growth of the test effort
6. IEEE; 1990.

Manage budget, staff resources and progress in accordance with the project plan
Ensure communications regarding status of, and changes to, the project plan
Manage the resolution of issues resulting from failure to meet schedule, budget,
functionality or quality criteria throughout the life of the project
Ensure that all project metrics, including defect data are collected and maintained
Ensure the successful completion of end of project activities, including the decision
to implement, implementation and post implementation reviews and the archive of
project artifacts

7.2.2 Test Manager and the Test Team

The person responsible for the systems and acceptance testing effort on the project is the Test
Manager; in some organizations this role is referred to as the Test Lead. The Test Manager is usually a member of the independent test team, which is typically a part of the Information Technology department. In some organizations there is a counterpart organization located in the business that is responsible for acceptance testing. The exact relationship between the two
testing areas varies widely from organization to organization. The responsibilities of the Test
Manager are to:
Create and maintain the overall test plan to reflect changes to the overall project
plan
Determine what testing tools, techniques and strategies will be needed to effectively
test the project and ensure the staff know how to perform that testing
Coordinate the integration of the sub-plan for acceptance testing into the overall test
plan
Assist the project manager in the integration of the test plan into the overall project
plan
Control the scope of the test effort
Manage test budget, test staff resources and progress in accordance with the test
plan
Ensure communications regarding status of, and changes to, the test plan
Participate in early life cycle verification activities such as requirements inspections
and/or coordinate the participation of testing personnel
Participate in test execution, analysis and defect tracking as needed



Manage the resolution of issues resulting from failure to meet schedule, budget,
functionality or quality criteria throughout the testing life cycle of the project
Participate in the completion of end of project activities, including the decision to
implement, implementation and post implementation reviews

7.2.3 Designer/Developer

The work of translating the requirements of what a system must do or be (Requirements) into a product that will meet those needs is called design. The work of design may include the participation of systems architects, database administrators, testers, training, documentation and others to determine how to meet the needs. The designed product is then transferred to those who will make it come to life, i.e., the developers.
Developers are the traditional heart of Information Technology. For many years there has been an implicit understanding that coding is the only "real" work in systems development. While this may not be precisely true, coding does turn concept into reality. The designers/developers within IT are responsible for:
Create and maintain the overall design and development plan to reflect changes to
the overall project plan
Coordinate the integration of the sub-plan for unit and alpha testing into the overall
test plan
Maintain control of the scope of the design and development effort
Manage design and development budget, staff resources and progress in accordance
with the project plan
Execute the development plan and ensure communications regarding status of, and
changes to, the development plan
Participate in scheduled verification activities such as requirements inspections and design reviews
Determine the scope of testing necessary and ensure that the appropriate tools and
resources are in place
Ensure the execution of scheduled black box and white box testing, defect analysis
and tracking
Manage and participate in the resolution of issues resulting from failure to meet schedule,
budget, functionality or quality criteria throughout the development life cycle of the
project
Participate in the completion of end of project activities, including the decision to
implement, implementation and post implementation reviews


7.2.4 Business Partner and Subject Matter Expert

The participation of the business partner and subject matter experts from the business
community is essential to the success of the project. Just as it is not possible to get the
requirements right early in the project without their cooperation, so too it is necessary to have
their input on both the creation and results of various functional tests.
With both the business partners and subject matter experts it is important to be conscious of
their competing priorities and be efficient and effective in the scheduling of their
participation. The responsibilities of these individuals in support of acceptance testing are:
Support the creation and maintenance of an effective overall project and test plan
with specific attention to areas of functional correctness and performance
characteristics of the product
Participate in the training for the use of testing tools, techniques and strategies to
test the product
Provide expert insight into functionality issues that arise from the development and
execution of test cases
Assist the project manager in the integration of the business and subject matter
experts into the project plan and specifically where needed in requirements
verification and acceptance testing
Support efforts to maintain control of the project scope by active participation in
early test life cycle activities such as requirements verification and test case
development
Provide support for an appropriate level of test budget and test staff resources in
both IT and the business community
Ensure communications regarding status of, and changes to, the test plan
Participate in test execution, analysis and defect tracking and management as
needed
Participate in the resolution of issues resulting in failure to meet schedule, budget,
functionality or quality criteria throughout the testing life of the project
Participate in the completion of end of project activities, including the decision to
implement, implementation and post implementation reviews

7.2.5 Operations and Network

The Operations and Network staffs are often overlooked in the test planning process, because
the focus is on the development environment. Failure to include these areas in the planning



process may result in delays in the establishment of needed environments and reduction or
elimination of performance, response time and stress testing due to lack of critical resources.
Not only do these groups support acceptance testing, but at the end of the testing process, they
are responsible for supporting the product in production. Operations has a vested interest in
ensuring that only applications that are of high quality and robust performance move into
production; as such they are an excellent ally during acceptance testing. The responsibilities
of these areas in support of acceptance testing are:
Support the creation and maintenance of the overall test plan with specific attention
to performance and operational characteristics of the product
Provide the installation and maintenance of one or more environments to support
the acceptance testing activities
Determine what testing tools, techniques and strategies will be needed to effectively test the project from the operations and network perspective, and ensure the staff know how to perform stress testing, database backup and recovery, and end-to-end application response time measurement
Coordinate the integration of the sub-plan for operations and network acceptance
testing into the overall test plan
Assist the project manager in the integration of the test plan into the overall project
plan
Maintain control of the scope of the test effort
Support the development of the operations and network test budget, test staff
resources and monitor progress in accordance with the test plan
Ensure communications regarding status of, and changes to, the test plan

Create and maintain an appropriate test environment with the necessary level of
accuracy relative to the production environment

Participate in test execution, analysis and defect tracking as needed


Participate in the resolution of issues resulting from failure to meet schedule, budget,
functionality or quality criteria throughout the testing life cycle of the project
Participate in the completion of end of project activities, including the decision to
implement, implementation and post implementation reviews

7.2.6 Data Security and Internal Audit

These two areas have overlapping concerns. Data Security is a part of the development
process from the beginning, helping to ensure that appropriate controls over data are designed

into the product. Acceptance testing validates not only that data is being captured and stored appropriately, but also that all privacy concerns, including access to sensitive data, are well managed.
The Audit area does not typically participate in the development activity. They conduct
routine audit reviews to ensure that the product is in compliance with published policies and
procedures. They are also directly tasked with the responsibility to ensure that the proper
levels of control exist for any application that will impact the financial statements of the
organization. These requirements are more fully detailed in 7.2.7, Control Verification and
the Independent Tester. The responsibilities of these two groups in relationship to acceptance
testing are:
Support the creation and maintenance of an effective overall test plan with specific
attention to the existence and adequacy of operational controls and the integrity of
the production environment
Determine what testing tools, techniques and strategies will be needed to effectively
test the project and ensure the staff know how to perform that testing
Maintain control of the scope of the test effort
Manage test data security and internal audit budget resources to ensure that
appropriate levels of testing are performed
Ensure communications regarding status of, and changes to, the test plan
Verify that management is actively involved in ensuring compliance with existing
policies and procedures for early life cycle verification activities such as
requirements inspections
Participate in test execution, analysis and defect tracking as needed
Participate in the resolution of issues resulting in failure to meet schedule, budget,
functionality or quality criteria
Participate in the completion of end of project activities, including the decision to
implement, implementation and post implementation reviews

7.2.7 Control Verification and the Independent Tester

With the recent advent of various national laws regarding verification of process controls, it is
necessary to clearly plan for and document how those requirements are met. The requirement
for verification of control over the development process (i.e., separation of duties) has resulted in the establishment of independent test units in many organizations that previously had none.
The controls in place in any application that updates the financial records of an organization
must be adequate to ensure that:



No unauthorized changes to the code can be made to a production system either
intentionally or inadvertently
No unauthorized changes to the data of a production system can be made either
intentionally or inadvertently
Every change to either the code or the data of a production system produces
sufficient audit trail to identify the source of the change and the authorization
The existence of sufficient controls ensuring the above three items can be
independently verified
For many organizations these functions are clearly defined and implemented in an existing
Configuration Management operation that is not a part of the development organization. If
that function does not exist, or if it fails to fully perform these functions, the penalties can be
severe.

7.2.8 Business Analyst

In earlier skill categories the point has been made that not all organizations have the same
structure, and that as a result the Business Analyst often finds they are needed to perform roles
and responsibilities of other functions. If there is no project manager, the BA may be needed
to fulfill those responsibilities. If there is no test manager or designated test lead, the BA may
be doing that work. There are however specific roles and responsibilities for the BA in the
acceptance testing process:
Participate in the creation of the overall test plan and assist with the necessary
maintenance to reflect changes to the overall project plan
Plan for and participate in early life cycle verification activities such as
requirements inspections and early development and inspection of use cases
Plan for and participate in functional, regression, stress, security and performance
test execution, analysis and defect tracking as needed, with specific attention to
those aspects of acceptance testing that are essential to the successful
implementation of the finished product
Coordinate the integration of the sub-plan for acceptance testing into the overall test
plan
Understand what testing tools, techniques and strategies will be needed to
effectively perform acceptance testing for the project and ensure the subject matter
experts know how to perform appropriate testing and test verification
Provide expert insight into functionality and performance issues that arise from the
development and execution of test cases

Assist in the maintenance of the scope of the acceptance testing effort
Participate in the development of the acceptance test budget, acceptance test staff
resources and progress in accordance with the test plan
Ensure communications regarding status of, and changes to, the acceptance test
plans
Support the resolution of issues resulting from failure to meet schedule, budget,
functionality or quality criteria throughout the testing life cycle of the project
Participate in the completion of end of project activities, including the decision to
implement, test reports, implementation and post implementation reviews

7.2.9 Other Participants

In addition to the participants identified above, there can be many other groups and
individuals contributing to the acceptance testing effort, such as:
Vendors
Key Stakeholders
Beta Customers
Power Users

7.3 Use Cases for Acceptance Testing


The development of Use Cases for design and testing grew out of the evolution of Object-Oriented Programming. Use Cases were introduced by Jacobson in 1992.7 Over time, members of the IT community began to see more ways to apply the Use Case concepts. It is now a standard tool for Requirements/Analysis throughout the industry.
There are a host of definitions for, and a wide range of approaches to, Use Cases. At the
simplest level, a Use Case is a technique for capturing the functional requirements of systems
through the interaction between an Actor and the System. The Actor is an individual, group or
entity outside the system. In the Use Case, the Actor may include other software systems,
hardware components or other entities.8

7. Jacobson, I; Object-Oriented Software Engineering: A Use-Case Driven Approach; Addision-Wesley; 1992.


8. Wiegers, Karl, Software Requirements; Microsoft Press, 1999



Actors can be divided into two categories: Primary Actors and Secondary Actors.
According to Alistair Cockburn, "A primary actor is one having a goal requiring the assistance of the system. A secondary actor is one from which the system needs assistance to satisfy its goal. One of the actors is designated as the system under discussion."9 Primary and
secondary actors will have different expected results for the same test.

Figure 7-2 Simple Use Case Model


The addition of these actors allows the Use Case to add depth and understanding to what is
simply a customer perspective. Use cases are the test case development activity most closely
associated with acceptance testing, because they are strongly based in the functionality
desired by the stakeholders and created at a systems level.
Cockburn further expands the understanding of the relationship between the actor and the
system by integrating it with the concept of responsibilities: "Each actor has a set of responsibilities. To carry out that responsibility, it sets some goals. To reach a goal, it performs some actions."10

9. Cockburn, Alistair, A.R.; Humans and Technology; (http://alistair.cockburn.us/); 2007


10. Ibid.


7.3.1 Use Case Definition

While there are many working definitions of Use Cases, presented by many different authors,
there are a central group of concepts that generally appear. One of those concepts is the
separation of Use Cases into two categories: Essential Use Cases and Conventional Use
Cases.
The Use Case definition can be restated somewhat differently than the Wiegers definition previously cited: "A Use Case is a description of a set or sequences of actions, including variants, that a system performs that yields an observable result of value to a particular actor."11 The two definitions are consistent, but the second makes one of the problems with Use Cases explicit: they can easily lead the BA and the development team to premature design decisions.
The Essential Use Case is designed to prevent that from occurring. "An essential use case is a structured narrative, expressed in the language of the application domain and of users, comprising a simplified, generalized, abstract, technology-free and implementation independent description of one task or interaction that is complete, meaningful, and well-defined from the point of view of users in some role or roles in relation to a system and that embodies the purpose or intentions underlying the interaction."12
The advantages of the Essential Use Case are:
They explicitly exclude any consideration of User Interfaces, which reduces the
tendency to premature design
They represent a very high-level abstraction of the interaction
They are short and simple to understand
They can be written and revised quickly
They can form the basis for initial validation of functionality13
Once the essential Use Cases have been defined, it is possible to interrogate them further and
decompose them into finer levels of detail. Figure 7-3 is an example of an Essential Use Case
for the Kiosk Project. This amount of information can easily be captured on a single piece of
paper or a 3x5 index card.
The list on the left of Figure 7-3 represents the user's view of the function and the list on the right represents the system's responsibilities in accomplishing the goal. The jagged line down the center signifies the interaction between the user (actor) and the system.

11. Jacobson, ob.cit.


12. Constantine and Lockwood; Software for Use; A Practical Guide to the Models and Methods of
Usage- Centered Design; Addison-Wesley; 1999.
13. Biddle, Noble and Tempero; From Essential Use Cases to Objects; for USE 2002 Proceedings;
Amersand Press; 2002.



The Business Analyst has participated in the definition of Use Cases throughout the
development of the product, beginning with the Essential Use Cases. This began early in
Requirements and continued through design and coding. While the development of the model
has been explored, the resulting test cases and scenarios have not.

Figure 7-3 An Essential Use Case


Figure 7-4, A Simple Use Case Model, provides an expanded overview of some of the
functionality to be provided in the Kiosk Ticket System. This functionality can be further
decomposed into even finer levels of detail. The narrative flow that accompanies the Model
provides some of that detail. During Unit and Systems testing, the developers will have
verified that the flow from decision to decision is working properly.


Figure 7-4 A Simple Use Case Model


As the detail begins to emerge, so do the business rules associated with each of the potential
choices. Acceptance Testing will apply Use Cases to validate the proper application of those
business rules. For the last menu choice in Figure 7-4 the following business rules might be
applied:
Event choices may only appear if there are tickets available for the date selected



Event choices may only appear if there are tickets available for the time selected
If no events match the date and time criteria, display "no available events for that date and time"
Each combination of functionality and business rules will provide the basis for one or more
Use Cases.

7.3.2 Use Case Development

Use case development typically results in a series of related tests called scenarios. Each scenario represents a unique functional (goal-oriented) path through the system; if there is only one path, with no opportunities for decisions, changes or errors, there will be no scenarios, only a sequence.14 Scenarios describe the alternative actions that can occur. All the scenarios for a single Use Case must meet two criteria:
All of the interactions described relate to the same goal or sub-goal
Interactions start at the triggering event and end when the goal is delivered or
abandoned, and the system completes its responsibilities with respect to that
interaction.15
Applying the business rules shown in Section 7.3.1 to Figure 7-4, the following scenarios are
intended to exist:
Basic course of events: A date and time are selected and the desired event choice is
displayed.
Alternative course of events: A date and time are selected, but the desired event is not displayed.
Alternative course of events: A date and time are selected and the "no available events" message is displayed.
Other results may occur; as none of them are intended results, they represent error conditions.
If there are alternative paths that lead to a successful conclusion of the interaction (the actor
achieves the desired goal) through effective error handling, these may be added to the list of
scenarios as recovery scenarios. Unsuccessful conclusions that result in the actor abandoning
the goal are referred to as failure scenarios.
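The business rules and scenarios above can be encoded directly as an executable check. In this sketch the event record fields are hypothetical; the rule is the one stated in 7.3.1: an event is offered only when tickets remain for the selected date and time, otherwise the no-events message is shown.

```python
NO_EVENTS = "no available events for that date and time"

def available_events(events, date, time):
    """Apply the kiosk business rules: an event is listed only when
    tickets remain for both the selected date AND the selected time."""
    matches = [e["name"] for e in events
               if e["date"] == date and e["time"] == time and e["tickets"] > 0]
    return matches if matches else NO_EVENTS
```

Each scenario then becomes a test case: the basic course expects a list containing the desired event, one alternative course expects a list without it, and the other expects the no-events message.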

14. Cockburn, Alistair, A.R.; Humans and Technology; (http://alistair.cockburn.us/); 2007


15. Cockburn; op.cit.


7.3.3 Use Case Format

Just as there are many workable approaches to Use Case development, so too there is a wide range of recommended formats. The following list represents those found most commonly, with comments regarding specific application or justification. This information should be captured in an organization-standard format.
Case ID - A unique identifier for each Use Case; it includes a cross-reference to the requirement(s) being tested so that each requirement can be traced through testing. Cockburn recommends that scenarios be represented as numbered points following the main or "happy path" alternative. For example:
1. Customer enters a valid date and time combination
1.a. Submitted data is invalid
1.a.1. Kiosk requests valid date/time data
1.b. Submitted data is incomplete
1.b.1. Kiosk requests completed data
Use Case Name - A unique short name for the Use Case that implicitly expresses
the user's intent or purpose16; based upon the Essential Use Case shown in
Figure 7-3, the case above might be captioned ChooseEventTime. Using this
nomenclature ties the individual Use Case directly to the Essential Use Cases
originally described and allows it to be sequenced on a narrative basis alone.
Summary Description - A several-sentence description summarizing the use case.
This might appear redundant when an effective Use Case naming standard is in
place, but with large systems, it is possible to become confused about specific
points of functionality.
Frequency / Iteration Number - These two pieces of information provide
additional context for the case. The first, frequency, deals with how often the actor
executes or triggers the function covered by the use case. This helps to determine
how important this functionality is to the overall system. Iteration number addresses
how many times this set of use cases has been executed. There should be a
correlation between the two numbers.
Status - This is the status of the case itself: In Development, Ready for Review, and
Passed or Failed Review are typical status designations.
Actors - The list of actors associated with the case; while the primary actor is often
clear from the summary description, the role of secondary actors is easy to miss.
This may cause problems in identifying all of the potential alternative paths.

16. Ambler, Scott; Web services programming tips and tricks: Documenting a use case;
([email protected]) ; October 2000.


Guide to the CABA CBOK


Trigger - This is the starting point for any action in a process or sub-process. The
first trigger is always the result of interaction with the primary actor. Subsequent
triggers initiate other processes and sub-processes needed by the system to achieve
the actor's goal and to fulfill its responsibilities.
Basic Course of Events - This is called the main path, the "happy path" or the
primary path. It is the main flow of logic an actor follows to achieve the desired
goal. It describes how the system works when everything functions properly. If the
Use Case Model contains an <<includes>> or <<extends>>, it can be described
here. Alternatively, additional categories for <<extends>> and <<includes>>
can be created. If there are relatively few, they should be broken out so they will not
be overlooked. If they are common, either practice will work.
Alternative Events - Less frequently used paths of logic, these may be the result of
alternative work processes or an error condition. Alternative events are often
signaled by the existence of an <<exception>> in the Use Case Model.
Pre-Conditions - A list of conditions, if any, that must be met before the Use Case
can be properly executed. In the Kiosk examples cited previously, before a payment
can be calculated, an event and the number and location of seats must be selected.
During Unit and System testing this situation is handled using Stubs. By acceptance
testing, there should be no Stubs left in the system.
Business Rules and Assumptions - Any business rules not clearly expressed in
either the main or alternate paths must be stated. These may include disqualifying
responses to pre-conditions. Assumptions about the domain that are not made
explicit in the main and alternate paths must be recorded. All assumptions should
have been verified prior to the product arriving for acceptance testing.
Post Conditions - A list of conditions, if any, that will be true after the Use Case
finishes successfully. In the Kiosk example the Post Conditions might include:
- The customer receives the correct number of tickets
- Each ticket displays the correct event name and price
- Each ticket shows the requested date, time and seat location
- The total price for the ticket(s) was properly calculated
- The customer account is properly debited for the transaction
- The ticket inventory is properly updated to reflect tickets issued
- The accounts receivable system receives the correct payment information

Notes - Any relevant information not previously recorded should be entered here. If
certain types of information appear consistently, create a category for them.

Author, Action and Date - This is a sequential list of all of the authors and the
date(s) of their work on the Use Case. Many Use Cases are developed and reworked
multiple times over the course of a large project. This information will help
research any problems with the case that might arise.
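The standard format described above lends itself to a structured record. The sketch below is one hypothetical way to capture it in code; the field names mirror the list above, but the class layout and sample values (IDs, frequencies, dates) are illustrative assumptions, not prescribed by the CBOK or any organization standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    """Hypothetical record layout mirroring the Use Case format fields above."""
    case_id: str        # unique ID, cross-referenced to the requirement(s) tested
    name: str           # short name expressing the user's intent, e.g. "ChooseEventTime"
    summary: str        # several-sentence description
    frequency: int      # how often the actor triggers this function
    iteration: int      # how many times this set of cases has been executed
    status: str         # "In Development", "Ready for Review", "Passed Review", ...
    actors: List[str] = field(default_factory=list)
    trigger: str = ""
    basic_course: List[str] = field(default_factory=list)
    alternate_courses: List[str] = field(default_factory=list)
    pre_conditions: List[str] = field(default_factory=list)
    business_rules: List[str] = field(default_factory=list)
    post_conditions: List[str] = field(default_factory=list)
    notes: str = ""
    history: List[str] = field(default_factory=list)   # author, action, date

# Illustrative instance based on the kiosk example in the text:
kiosk_case = UseCase(
    case_id="UC-7.4 (REQ-12)",          # hypothetical ID and requirement reference
    name="ChooseEventTime",
    summary="Customer selects a date and time; the kiosk displays events.",
    frequency=1000,
    iteration=1,
    status="Ready for Review",
    actors=["Customer"],
    trigger="Customer enters a date and time at the kiosk",
    basic_course=["1. Customer enters a valid date and time combination"],
    alternate_courses=["1.a. Submitted data is invalid",
                       "1.b. Submitted data is incomplete"],
)
```

A record like this also makes the Author, Action and Date history trivial to append to as the case is reworked over the life of a large project.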

7.3.4 Writing effective Use Cases

The following list of practices will help to ensure the quality and usefulness of the completed
Use Cases17:
- Write from the point of view of the actor and in the active voice
- Write scenario text, not functional requirements18
- Use cases only document behavioral requirements
- Don't forget the user interface
- Create a use case template
- Organize use case diagrams consistently
- Don't forget the system responses to the actions of actors
- Alternate courses of events are important
- Don't get hung up on <<include>> and <<extend>> associations
- Let use cases drive user documentation
- Let use cases drive presentations (as well as development, testing and training)

7.4 Defect Management


The purpose of most of the testing activities performed during a project is to detect defects.
While the primary purpose of Acceptance Testing is to verify that the application is ready for
production, it also identifies defects. Identifying defects has been the subject of much of the
material throughout this Skill Category. Defects are identified with the objective of
understanding and correcting them. This process is one of the most critical

17. Ambler, Scott; Web services programming tips and tricks: Documenting a use case;
([email protected]) ; October 2000.
18. For even experienced Business Analysts and Testers this can be a very difficult distinction at times.



and potentially contentious in the entire project. Understanding defects, what they are and how
they are managed, is a key knowledge area for the Certified Software Business Analyst.

7.4.1 Defect Definitions

The way in which a defect is defined will determine what is captured and addressed by an
organization. These definitions should be included in the Information Technology Standards
and used consistently for all projects. These definitions should be jointly developed and
agreed upon by Information Technology and the Business Partner. Failure to clearly define
what a defect is or is not will create the opportunity for confusion and conflict, and these will
in turn negatively impact the project.

7.4.1.1 Defect

Not every organization will define defects precisely the same way because of the impact of
specific kinds of defects on the mission of the organization. The inclusion of the Business
Partner in the definition process ensures that mission-critical issues are properly identified.
At its most basic level, a defect is anything wrong, missing, or extra19 in the application. This
definition was first introduced in Skill Category 5, 5.7.4 Fagan Inspections. The simplicity of
the basic definition allows it to be clearly understood. Examples of items that are wrong
include numbers that fail to add up correctly, formulas that are incorrect, and logic paths that
yield false positive or negative responses. Examples of items that are missing would include
requirements not identified during the requirements process; incomplete listings of states,
conditions or business rules; and failure to develop or include items specified by the
organization's standards. Extra items are those which cannot be traced back to a business
requirement. These are often added by designers and developers who think they are providing
value (this is occasionally referred to as "gold-plating" the application).
Most organizations choose to expand upon this definition and include examples of what is
considered wrong and what is not. In some mathematically oriented applications, such as
accounting systems, misspelling the label on a report (i.e., "Branche Offices" instead of
"Branch Offices") might not be considered a significant defect. In other kinds of applications,
such as those that produce checks, misspellings are a serious defect.
In addition to knowing the kind of defect, it is also useful to classify defects according to their
severity. Fagan suggests a simple two-category classification: Major and Minor.20 Major is
anything that will cause a failure in production; Minor is everything else. The simplicity of
this system works well in the context for which it was developed. It does rely upon the
organization having a clear definition of what constitutes a failure in production.

19. Fagan, Michael E. Design and Code Inspections; IBM Systems Journal, 1976.
20. Fagan, op cit.

Many organizations use an expanded scale that allows for a greater range of distinction among
the defects found. The five (5) point scale used by the United States Defense industry is
typical:
Severity 1 (major):
- Will cause the system to malfunction
- Prevent accomplishment of essential capability; jeopardize safety, security, data integrity or other critical function

Severity 2 (major):
- Adversely affect accomplishment of an essential capability and no work-around solution is known
- Adversely affect technical, cost or schedule risks to the project or life cycle support of the system and no work-around is known

Severity 3 (minor):
- Adversely affect the accomplishment of an essential capability but a work-around solution is known
- Adversely affect technical, cost or schedule risks to the project or to life cycle support of the system, but a work-around solution is known

Severity 4 (minor):
- Result in business partner/operator inconvenience or annoyance but does not affect a required operational or mission-essential capability
- Result in inconvenience or annoyance for development or maintenance personnel but does not prevent the accomplishment of the responsibilities of those personnel

Severity 5 (minor):
- Typos or grammatical errors that do not change the meaning of the document or appear to the end users; all other effects

There is a distinct tradeoff between the level of granularity of the classification system used
and the time and value of the information that can be generated as a result. The larger the
number of potential classes that exist, the more difficult it will be for individuals to decide
which classification is correct and the more likely it is that occasional errors will be made.
Alternatively, more classifications yield a better understanding of what the potential impact of
defects would be. It will also yield more information about the quality of the processes used to
develop the product. Generally speaking, keeping the classification system simple (2-5
categories) is the best place to begin the process. Expansion of the system can then be based
upon the needs of the organization.
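Whatever scale is chosen, applying it consistently is easier when the major/minor grouping is mechanical. The sketch below encodes the five-point scale shown above (severities 1 and 2 major, 3 through 5 minor); the function name is an illustrative assumption, not part of the Defense-industry standard.

```python
def classify_severity(severity: int) -> str:
    """Map the 5-point severity scale to its major/minor grouping.

    Severities 1-2 are 'major' (malfunction, essential capability lost
    with no work-around); 3-5 are 'minor' (work-around known,
    inconvenience, or cosmetic).
    """
    if severity not in (1, 2, 3, 4, 5):
        raise ValueError(f"severity must be 1-5, got {severity}")
    return "major" if severity <= 2 else "minor"
```

Encoding the grouping once, rather than leaving it to each reviewer, removes one source of the classification disagreements discussed later in this section.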



7.4.1.2 Errors, Faults and Defects

Previous Skill Categories have presented the concept that placing the Quality Control step
(Check Process) in line with the activity (Do Process) allows defects to be captured at the
earliest possible point in the process. The economics of this are described in more detail
below.
One outgrowth of this is the idea that there is a proper time and activity to catch defects.
Taking this concept one step further, some organizations are defining defects by their point of
capture.21 Problems that are captured in the phase where they were created, by the quality
control processes designed to capture them, are referred to as errors. Problems discovered at
the wrong time, after they have moved from the phase in which they were created, are
designated as defects.
The advantage of this nomenclature is that it recognizes and rewards the efforts to capture
defects promptly, when it is most cost-effective to do so. It also allows the organization better
insight into the effectiveness of the quality control activities being used, providing better
information for the process improvement activities.
The use of this nomenclature also helps to establish the trusting environment required to
support the self-reporting of problems (either errors or defects). Only when self-reporting is
fully functional does an organization have all of the information it needs about the quality of
the development processes. Until that time it has only information about problems discovered
and disclosed by others.

7.4.1.3 Recidivism and Persistent Failure Rates

These two concepts are closely linked and essential to managing defects.
Recidivism is the introduction of new errors in the attempt (successful or
unsuccessful) to correct a defect. Recidivism is often the result of inadequate or
incomplete analysis of the reported defect, leading to an inappropriate solution.
This solution may, in fact, cause the defect to be masked (the test cases provided no
longer trigger the error) but not resolved (the error will still be triggered by another
set of circumstances). A high rate of recidivism (the ratio of injected defects to all
defects) indicates a problem with the analysis process. As this is one of the key
areas for the Business Analyst, this metric is one that should receive their careful
attention.
Persistent Failure Rate is the ratio of defects that remain unresolved after new
releases of the product to the testers, compared to the total defects reported. A high
Persistent Failure Rate is often coupled with a growing defect backlog and should
be a signal that the project is experiencing significant quality problems. Because the
Persistent Failure Rate can be measured beginning very early in the life of the

21. Seider, Ross; Implementing Phase Containment Effectiveness Metrics at Motorola; Quality Assurance Institute Journal; October 2007.

product, it is an excellent early indicator of quality problems. These will lead to
schedule and budget problems that may not be obvious that early in the project.
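Both metrics can be computed directly from counts in the defect log. The sketch below follows the definitions given above; the function names and the example counts are hypothetical.

```python
def recidivism_rate(injected_defects: int, all_defects: int) -> float:
    """Ratio of defects injected by fix attempts to all defects.

    A high value signals a problem with the analysis process."""
    return injected_defects / all_defects if all_defects else 0.0

def persistent_failure_rate(unresolved_defects: int, total_reported: int) -> float:
    """Ratio of reported defects not resolved by new releases to the
    total defects reported; an early indicator of quality problems."""
    return unresolved_defects / total_reported if total_reported else 0.0

# Hypothetical counts: 12 of 80 reported defects were injected by fix
# attempts, and 20 remain unresolved after the latest release.
print(recidivism_rate(12, 80))          # 0.15
print(persistent_failure_rate(20, 80))  # 0.25
```

Because both ratios can be tracked from the first test release onward, they give the Business Analyst early warning well before schedule or budget slippage becomes visible.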

7.4.2 Defect Life Cycle

Every defect follows a known path. For the Business Analyst, following the progress of
defects along the path to a successful resolution is an essential part of ensuring the final
quality of the delivered product. Not all resolutions are successful, and in some cases it will
take extraordinary effort on the part of the project team, including the BA, to move a defect
off the critical path.

7.4.2.1 Creation

Defects can be created in many ways. In earlier Skill Categories the opportunities for the
insertion of defects were highlighted. Early studies into the sources of defects done on behalf
of large software developers quickly focused on the Requirements stage as the most error
prone. This is due in large part to the large proportion of person-to-person communication, a
notoriously unreliable process.
Figure 7-5 reflects the data developed by James Martin in a study to identify the source of defects
and when they were located. The first column, front left, represents Requirements
(Analysis). The study showed that 56% of errors were caused in Requirements, 27% were
caused in Design, and the remaining 17% were split between the Coding/Testing stage and
Production/Maintenance.


Figure 7-5 Where Errors are Caused


The reasons for the large proportion of errors in the Requirements phase have not changed
significantly since the study was published. In the rush to begin design and code activities,
analysis is left incomplete. Business Partners are not fully involved in the Requirements
process. Focus on schedule and budget pushes projects ahead despite the misgivings voiced by
participants and sponsors.

Figure 7-6 Where Defects Are Found


Figure 7-6 has been modified to include the second set of data collected by James Martin in
his study. This second column is in sharp contrast to the first; it reflects where the defects

were located (found). About 50% of all errors are located during the Coding/Testing stage and
23% in Production. Only about 25% were found in Analysis and Design combined. This is
almost a complete reversal of where they are caused.

Figure 7-7 Relative Cost of Defects by Phase


The third set of columns added to the chart (Figure 7-7) represents the relative cost to find and
correct a defect, by development phase. These costs grow exponentially because earlier steps
must be repeated when defects are found later in the development cycle. When the cost to find
and fix defects is factored into the cost equation, the results are staggering.

Cost savings for finding 1 error from requirements in requirements instead of test:

Fully burdened hourly rate $30: $1470.00
Fully burdened hourly rate $40: $1960.00
Fully burdened hourly rate $50: $2450.00

Figure 7-8 Extrapolated Cost of Defect Created in Requirements and Discovered in Test
Figure 7-8 translates the impact of late rather than early discovery into economic terms. This
facilitates making the business case for spending resources on the early discovery of defects.
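The savings figures in Figure 7-8 are consistent with a relative cost factor of about 50:1 between a defect fixed in test and one fixed in requirements ($1470 = 49 hours saved x $30, and so on). The sketch below reproduces the figure under that assumed factor; the 50:1 ratio is inferred from the table, not stated explicitly in the text.

```python
def early_discovery_savings(hourly_rate: float,
                            cost_factor_test: float = 50.0,
                            cost_factor_req: float = 1.0) -> float:
    """Savings from fixing a requirements defect in requirements rather
    than in test, assuming fix effort scales with the relative cost
    factors (the assumed 50:1 default reproduces Figure 7-8)."""
    return (cost_factor_test - cost_factor_req) * hourly_rate

for rate in (30, 40, 50):
    print(f"${rate}/hr -> savings ${early_discovery_savings(rate):.2f}")
# $30/hr -> savings $1470.00
# $40/hr -> savings $1960.00
# $50/hr -> savings $2450.00
```

Substituting an organization's own fully burdened rates and measured cost factors makes the same business case with local data.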



7.4.2.2 Analysis

Once a defect has been discovered, by whatever means, it must be understood. It is one thing
to know that the system response to a calculation, query or actor is incorrect. It is another to
know how the wrong response is being created.
This multi-step process begins by verifying that the response or action is indeed incorrect. In the
case of a test, this may involve re-creating the document or test that produced the response. If
the identified sequence fails to consistently produce the (same) incorrect response, the actual
defect may be elsewhere. Where the defect is in a static document, such as a Requirements
Document, the information perceived to be incorrect must be verified with the originating
source. In the event of conflicting sources, time must be taken to determine the correct
answer.
Where the source of the defect is not immediately obvious, someone from the project team,
who understands both the project and the media of the defect, must research the problem. In
the case of a program, this is typically a programmer; if it is a test case, a tester; if the
requirements are involved it might be the Business Analyst.
A second aspect of the analysis activity is to understand the extent of the problem. When the
defect was first identified, it is typically assigned a classification (as described in Section
7.4.1.1) This assignment was based upon the initial understanding of the defect. When it is
more clearly understood, that assessment must be re-evaluated, as any of the following
situations may exist:
The problem is much less extensive than the original classification
The problem is much more extensive that the original classification
The wrong problem was described and classified
There is not a problem
There is a problem, but it has already been reported, described and classified
The classification of a problem is generally closely related to the level of attention it receives.
In the classification scheme described for use by the United States Defense industry, the
classification also contains an action level standard, for example the action levels added
below each classification description:
Severity 1 (major):
- Will cause the system to malfunction
- Prevent accomplishment of essential capability; jeopardize safety, security, data integrity or other critical function
- Must be fixed immediately to move on

Severity 3 (minor):
- Adversely affect the accomplishment of an essential capability but a work-around solution is known
- Adversely affect technical, cost or schedule risks to the project or to life cycle support of the system, but a work-around solution is known
- Must be fixed before implementation

Severity 5 (minor):
- Typos or grammatical errors that do not change the meaning of the document or appear to the end users; all other effects
- Fix as time available

Good classification systems will contain action level standards such as these. For this reason,
the classification process can become contentious, as the way a particular defect or group of
defects is classified can have a major impact on project schedule and budget.

7.4.2.3 Resolution

Defect resolution can cover a wide range of possibilities, as shown in the action level
standards above. Clearly, some problems will be resolved immediately, others will be
included in the next release, some will be distributed as a part of a bug fix, still others will
become the basis for a new feature or enhancement.
The resolution activities and cost will depend in large part on when and how the problem was
identified. If the defect is corrected, that correction must be validated through all of the
development steps that preceded its discovery. A defective requirement, found in the
Requirements phase of development will be re-written, inspected and approved.
Resolution can also include the determination that a particular defect will not be fixed.
Although this is usually not the initial assessment, defects that continue to linger on
unresolved usually do so because the cost to fix them exceeds the benefit of doing so.
Regardless of what resolution path is chosen, it must represent the agreement of all of the
stakeholders.

7.4.2.4 Closure

Defects cannot and should not live forever. At some point, the defect report and supporting
material should be closed. Ideally this point occurs when the defect has been corrected and
the correction validated. Closing the defect at this point should mean that it is gone forever,
except as a statistic.
STATUS: Corrected and Closed - The defect was corrected and tested as a part of
Release 3.0.3 of this product.



When defects are not going to be corrected immediately, or even soon, a decision needs to be
made about how to manage this defect. If it is anticipated that the defect will eventually be
corrected as a part of the current project, even if that correction will be in a much later release,
it must remain open or pending. Annotating the defect record with the anticipated resolution
information will save time for everyone:
STATUS: Pending - The defect will be resolved as a part of Release 3.2 of this
product. At that time all of the calculations involved will be replaced with a revised
table-driven structure to address the added complexity. The field has been
instructed to continue using the manual calculation (see memo dated 11-03-xx.)
If the agreement has been reached that this defect is part of a new enhancement or an entirely
new project, the defect can be so annotated and closed.
STATUS: Transferred and Closed - The defect will be resolved as a part of the
new Sales SP3 system project (Requirement #2-17). At that time all of the
calculations involved will be replaced with a revised table-driven structure to
address the added complexity. The field has been instructed to continue using the
manual calculation (see memo dated 11-03-xx.)
If it has been agreed that the defect will not be corrected, the closing information should
clearly indicate why, as the problem will not disappear. The people using the system will
continue to experience the defect.
STATUS: Closed - No Action - The defect will not be resolved. As agreed in
Defect Meeting dated 05-10-XX, this problem only occurs when a customer whose
account is more than 150 days in arrears attempts to purchase a priority product on
credit. The estimated cost to change the logic and the resulting error message is
$5300. In this case there is no benefit to fixing the problem. When this error occurs
the field has been instructed to advise the customer that they must either pay cash or
pay down the arrears.
Providing the additional information about how the problem was closed will allow others to
respond to questions that may arise later. It also will allow for the statistical analysis of the
distribution of problem resolution activities. Projects that show a high ratio of defects Closed
with No Action to Total Defects typically will have a very high level of customer
dissatisfaction with the product.
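The ratio described above (defects Closed with No Action to total defects) is simple to track from closure statuses. A hypothetical sketch, assuming the status labels shown in the examples above:

```python
from collections import Counter

def closed_no_action_ratio(defect_statuses) -> float:
    """Ratio of defects closed with no corrective action to all defects.

    A persistently high ratio is a warning sign for customer
    dissatisfaction with the delivered product."""
    counts = Counter(defect_statuses)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return counts["Closed - No Action"] / total

# Hypothetical closure log using the status labels from the examples:
log = ["Corrected and Closed", "Closed - No Action", "Pending",
       "Transferred and Closed", "Closed - No Action"]
# closed_no_action_ratio(log) -> 0.4 (2 of 5 defects closed with no action)
```

Recording the closure reason consistently, as the annotated STATUS entries above illustrate, is what makes this kind of statistical analysis possible.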

7.5 Summary
Skill Category 7 examines one of the key roles of the Business Analyst in the development of
a software product, the successful execution of the acceptance testing needed to verify that the
product is ready for production. By creating an effective acceptance test plan that begins in
Requirements, and allocates resources to these early life cycle activities, the Business Analyst

leverages the total resource contribution and minimizes the amount of actual testing time
required.
Participation in the Requirements and Design activities also allows the early development of
test cases which can be used throughout the lifecycle of the product. Effective Use Case
development not only aids in the creation of quality requirements, it also streamlines the total
volume of test cases that need to be executed to verify the quality of the product.
By developing and documenting meaningful acceptance test criteria, it is possible to have a
data-driven decision on production readiness. This reduces the probability that a severely
defective product will be moved to production simply because the allocated time for testing
has been exhausted. It also builds a common level of organizational understanding about what
it means to deliver a quality product which is essential to long term customer satisfaction.
Finally, by creating and managing an effective defect tracking system, the organization gains
valuable insight into what problems are created, where they are created, how they are resolved
and the impact of unresolved defects on the organization.


Skill Category 8
Commercial Off-the-Shelf Software and Performance Based Contracting
The majority of the information provided in the preceding 7 Skill Categories has focused on
the development of systems by the organization. In this Skill Category the focus will be on the
Business Analyst's role in projects where the software is developed either in part or entirely
by outside organizations. For many organizations this was confined to the purchase of
Commercial Off-the-Shelf software (COTS). The growing role of outside developers, either
on-shore or off-shore, has led to an increased interest in methods to manage this
development.
The outsourcing trend is growing because many small to medium sized businesses can't
afford to hire huge numbers of information-technology workers, yet must deal with office
technology that's becoming increasingly complex. These companies are grappling with a
surge of new technologies such as Internet telephony, Web video, new mobile devices - from
Blackberries to cell phones - and increased security threats from viruses and worms.1
Performance Based Contracting (PBC) is the most effective approach to managing this
development. Regardless of the scope of the external acquisition, the Business Analyst will
play a key role in ensuring the quality of the final product.

1. Outsourcing Finds New Niche: More Small Firms Farm Out Tech Work to Tap Experts, Pare Costs;
Pui-Wing Tam; Wall Street Journal; April 17, 2007.


8.1 Definitions
Product acquisition can run the gamut from completely custom designed and based on unique
requirements, to products that are pre-packaged and installed intact. Many combinations of
those two, with or without some development on the part of the acquiring organization, are
also possible. Regardless of where on the spectrum a specific organization finds a solution,
there are significant issues to be addressed.

8.1.1 Commercial Off-the-Shelf Software (COTS)

Commercial Off-the-Shelf (COTS) software includes the wide range of software products
developed for sale by software development companies. These products are generally
intended to be installed and operated with no modifications by the purchaser. At the high end
of the cost spectrum are large scale resource management and accounting packages, smaller
and less costly products include word processing and spreadsheet products.

8.1.2 Custom Software

Custom Software includes all of the products developed by outside individuals, agencies, and
organizations, on a for profit basis, where the day to day activities are not directed by the
organization itself. This excludes software developed by consultants or contractors that are
hired to augment the internal staff and whose day to day work activity is directed by the
organization itself.

8.1.3 Modified Off-the-Shelf Software (MOTS)

Modified Off-the-Shelf (MOTS) software includes all of those products that originate as
COTS and are subsequently changed for the exclusive benefit of a specific organization.
These modifications may be performed by the originating vendor, the purchasing
organization, or an outside third party.

8.1.4 Performance Based Contracting (PBC)

Performance Based Contracting (PBC) is an approach for managing the interaction between a
software vendor and a requesting organization that includes the following minimum
components:2
1. Performance requirements defining the work in measurable, mission-related terms.



2. Performance standards (i.e., quality, quantity, timeliness) tied to the performance
requirements.
3. A Quality Assurance (QA) plan describing how the contractor's work will be measured
against the performance standards.
4. Positive and negative incentives tied to the QA plan measurements if the acquisition is
either critical to the organization's mission accomplishment or a relatively large
expenditure of funds.
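Component 4 ties incentives directly to the QA-plan measurements. As a minimal sketch of that idea (the function name, thresholds and amounts are illustrative assumptions, not contract language):

```python
def pbc_incentive(measured: float, standard: float,
                  bonus: float, penalty: float) -> float:
    """Return a positive incentive when the measured QA-plan value
    meets or exceeds the performance standard, and a negative one
    (a penalty) when it does not."""
    return bonus if measured >= standard else -penalty

# Example: on-time delivery measured at 99% against a 95% standard
# earns the bonus; a measurement of 90% would incur the penalty.
```

In practice the measurement, standard and incentive amounts would all come from the performance standards and QA plan negotiated into the contract.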

8.2 Establish Requirements


It makes no difference whether the product is custom developed or COTS; it is essential to conduct a
rigorous requirements process. If the product is to be custom developed, the requirements will
be the foundation for the development contract. If the product is to be purchased, the
requirements will form the criteria against which products are to be judged. The process for
identifying, documenting and prioritizing requirements is covered in Skill Category 5. Often
the Requirements Development Process is concurrent with the Vendor Selection Process.
Great care must be taken with the Requirements if the vendor has already been selected. The
possibility exists that the vendor will shape the product to what is easy, cost effective or
efficient for them to provide rather than what the organization actually needs. This potential is
especially great when an organization has made a long term outsource commitment to a
vendor.

8.3 Commercial Off-The-Shelf Software (COTS) Considerations
The preceding discussion has focused on the use of Performance Based Contracting for
custom development. Since they are developed for a specific customer with a specific
hardware and software environment, these areas should not present an issue. COTS is
developed for a wide range of customers who may share common hardware and software
platforms, but the product is not tailored to any organization's unique characteristics.
Therefore, when purchasing COTS it is essential to verify these details, and others.
For the Business Analyst, the use of COTS to address a business problem is potentially either
a tremendous benefit or a huge waste of resources. Taking steps to ensure that the ultimate
decision creates benefits rather than waste will take careful and persistent attention. All too
often the impetus to acquire COTS is the result of a glitzy advertising campaign, a skilled
sales pitch by the vendor, or a presentation by industry peers that focuses solely on the
potential benefits. The Business Analyst will need all of their knowledge and skills to ensure
that an effective Requirements Development process occurs before the decision is made on
which product to purchase. An effective working relationship among the Project Managers,
Test Managers and the Business Analyst will be especially valuable in preventing a
cart-before-the-horse decision. Failure to do so will result in the expenditure of
significant resources on a solution in search of a problem.
2. http://www.gao.gov/new.items/d021049.pdf

Version 9.1

8-3

Guide to the CABA CBOK

8.3.1 Determine compatibility with your computer environment
Once a business need is established and the product requirements are identified, prioritized
and agreed upon, the process of determining potential solutions can begin. There are literally
thousands of software products available for purchase in every price range. It is likely that
there will be more than one with the potential to solve the business problem.
Before beginning a detailed and time-consuming comparison of features and functionality, it
makes sense to eliminate those products that will not operate in the existing environment. This
can be done in a logical, step-by-step method that will systematically reduce the pool of
candidates.
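The stepwise elimination described above can be sketched as a simple filter: discard any candidate that fails a hard environment constraint before investing effort in a detailed feature comparison. The platform names and products below are hypothetical, illustrative data only.

```python
# Illustrative elimination of COTS candidates that will not run in the
# existing environment. All platform and product names are hypothetical.

environment = {"hardware": {"x86 server"}, "os": {"Windows", "Linux"}}

candidates = [
    {"name": "Product A", "hardware": "x86 server", "os": "Linux"},
    {"name": "Product B", "hardware": "proprietary mini", "os": "Linux"},
    {"name": "Product C", "hardware": "x86 server", "os": "Windows"},
]

def compatible(product, env):
    """Keep only products that run on installed (or already planned) platforms."""
    return (product["hardware"] in env["hardware"]
            and product["os"] in env["os"])

shortlist = [p["name"] for p in candidates if compatible(p, environment)]
print(shortlist)
```

In practice each constraint in the sections that follow (hardware, operating system, software, security) becomes one more test in the filter, applied in order of cheapest to verify.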

8.3.1.1 Hardware Compatibility

While hardware is no longer as expensive as it once was, it still represents a major investment
for most organizations. Determine what the current hardware platform(s) is/are and what
plans there are for the future. It is most cost effective to find a solution that will run on
hardware already installed, or hardware that is already planned for installation.
It is not cost effective to acquire a product that must run on hardware with no other potential
use, or hardware that is already becoming obsolete. To do so locks the organization into an
undesirable maintenance and cost structure.
Many organizations have multiple hardware platforms that include some mainframe
capability, a client-server platform as well as networked terminals and PCs. Some
organizations have standardized on a single hardware vendor for their desktop solutions,
others will acquire whatever is cost-effective and meets the need. It is essential to have this
information to perform an assessment of the hardware compatibility of specific vendor
solutions.
Included in the consideration of hardware are the network considerations. Some products are
designed to be run standalone on a single workstation. Others are designed to be installed on
a server and shared by a limited number of concurrent users. Still others are intended to
provide access to a virtually unlimited number of individuals. The product under
consideration must match the intended usage of the organization.
Products intended to be used by a very large number of concurrent users, initiating a high
volume of large transactions may create the need for additional network bandwidth. Other
vendors may solve the same problem in a way that does not require large bandwidth. The
acquisition of this resource, if needed, must be factored into the overall cost of the proposed
solution.

8.3.1.2 Operating System Compatibility

The industry has standardized on a relatively small number of operating systems for each of
the hardware platforms. The pairing of the hardware platform and the operating system
creates an environment in which application (business) software can execute and be used by
the business community.
When choosing potential software solutions to business problems, it is important to verify that
the product can run in one of the existing environments. While organizations can and do add
environments, it is an expensive addition to the solution of the business problem. If a product
can be found that will operate within the existing framework, it will be much easier to install
and support. Over the long term, this translates into a less costly solution for the organization.
As with hardware, it is desirable to select a product that will run in both the current and
planned future environments. Selecting a product that requires the maintenance of a
back-leveled operating system (one that is not the vendor's most current release; for
example, requiring the use of Windows 98 or 2000 after Vista is released) exposes the
organization to a possible lack of technical support for problems encountered.
This should be avoided.

8.3.1.3 Software Compatibility

Software compatibility is much less of an issue than it was in the past. Relatively few
organizations are still running DOS based applications that have the potential to interact
negatively with other applications. For those organizations that are in this situation, ensuring
that the new product can run in a safe environment is typically resolved by the addition of
segregated servers.

8.3.1.4 Security

Since security and access control are fundamental to the product design, it is rarely possible to
have significant changes made to the product in these areas. Therefore, it is prudent to do a
careful review of the security and access control fundamentals early in the evaluation process.
Products that do not provide or support these areas in a fashion consistent with the
organization's needs should be dropped, even if they are functionally sound.

8.3.1.5 Virtualization

Increasingly, organizations are also looking at using the resources of other organizations or
other existing applications through the virtualization approach. The definitions in this area
are still evolving; however, in essence, virtualization allows a hardware service provider to
share one resource among multiple users, while protecting the integrity and functionality of
each.
There are two basic approaches to virtualization3 as well as several other theoretically
possible options:
1. Hardware virtualization - These approaches are intended to support multiple types of
Operating Systems on a single server. Each Operating System appears to have an entire
server at its command. This allows organizations greater range and flexibility in their
choice(s) of Operating Systems, without necessarily requiring significant new server
resources.
2. Operating System virtualization - These approaches support multiple instances of a
single Operating System on one server. It does not allow for multiple Operating
Systems. This approach allows an organization that is committed to one or a few
Operating Systems to minimize the software overhead required to support applications.
If the vendor anticipates the use of virtualization as a part of their cost control approach, the
organization must understand what the trade-offs will be in terms of availability, performance
and stability. Variations on the two options may require specialized software and support.

8.3.2 Ensure the software can be integrated into your business system work flow
Most organizations are committed to their existing work processes. While there may be some
inefficiencies in those processes, the business has evolved over time to provide the products
and services their customers need and want in a specific manner. Any software that will
require that business practices be revised to conform to the product must be evaluated with great
care. The cost of changing the business process can be very high; in some circumstances the
cost may be worth it, but this is not a trivial decision.

8.3.2.1 Current system based on certain assumptions

Organizations create processes based on their historical experience of interfacing products and
services. The processes organizations create are based on that experience and include
assumptions about how other parts of the system will interoperate. A specific process may
assume that it will be passed valid data in a specified format from the target product, because
that is what it received from the existing product.
Customers and users may assume they will have access to certain kinds of information at
specific points in the process, because that is what they have in the current process. If the
current environment produces many defects at certain points in the process, there will typically
be a wealth of controls at those points, designed to prevent or catch those defects.
3. Top Ten Considerations for Choosing A Virtualization Technology; SWSOFT; www.swsoft.com;
July 2006.
If the target vendor product will not provide that data, or make that information available
when needed, some action will need to be taken to address the issue. That action will have a
cost associated with it. If the assumptions about the current system are not made explicit, it
will not be possible to identify and develop these costs. In some instances, the assumed
functionality or compatibility will not be possible with one or more of the vendor products. In
that case, if the features are essential, those products should be eliminated from consideration.

8.3.2.2 Existing forms, existing data, and existing procedures

Today many organizations have already automated all of their basic business functions. The
products and solutions they seek are replacements for existing systems that no longer meet the
business need. Those current systems will have created data, as well as forms and procedures
for capturing and using that data.
When considering a replacement system, it is important to carefully consider the data
conversion process and the integrity of the finished database. Some products will provide a
data migration process as a part of the basic product; often these will be new releases of older
products or a replacement product offered when the old vendor was acquired.
More typically the vendor will offer a migration utility to be executed by the organization's
staff. The conversion and validation of the output files can be a very critical and
time-consuming process; these costs must be considered when determining the overall cost of the
proposed vendor solution.
Rarely is it possible to continue using the same forms and processes with no modifications.
Whatever work will need to be done to update and reprint paper products and provide training
in how to use them must also be considered.

8.3.2.3 COTS based on certain assumptions

Just as the organization has certain assumptions about how a product will operate, so too the
developer of the product has made certain assumptions about how their product will be used
and in what environment it will operate. Some of this has been addressed in the hardware and
software considerations section.
If the product does not contain a report generation facility, or a very limited facility, the
vendor may assume that the organization already has a Report Writer package installed. If that
is not the case, the cost of adding that functionality to the environment must be considered
when calculating the cost of the solution.
Likewise, because many products grow out of a specific organization's practices, assumptions
may be made about how business will be done. For example, if the originating organization
had a very rigid segregation of duties, a sales person might only be able to enter and update
sales information; they might be prohibited from setting up new accounts. In an organization
with less segregation of duties, sales people might perform both functions. A system designed
to support the first organization would not function well in the second system.
Additionally, the vendor may assume that certain types of organizational structures or
departments exist. At a conceptual level, reorganizations are not particularly expensive. In
practice they can be very expensive as they often disrupt the normal flow of business for days,
weeks or months. If those structures are not present in the current organization, it is important
to consider what the cost of establishing those functions would be, and to determine if the
benefits of using the vendor's product will be sufficient to recoup those expenses.
The vendor may be willing to make modifications to accommodate specific requirements, but
these can be costly to acquire and maintain.

8.3.3 Assuring product usability

A specific vendor product may appear to meet all of the organization's requirements based on
presentations made by salesmen or at trade shows. A review of the platform issues and
organizational compatibility issues may reveal no problems. It is still not safe to assume that
the product will perform as needed without additional research. One excellent approach is to
talk to existing customers about their experience.

8.3.3.1 Telephone interviews with current and previous customers

Whether there is only one vendor or several on the list of potential COTS suppliers, an
excellent next step is to talk to existing customers about their experience with the product and
the vendor.
Ask the vendor to supply a list of current customers, preferably including one or more in the
organization's industry. Generally the vendor will contact the customer to confirm their
willingness to talk to a potential customer before they provide a name and telephone number.
The acquisition team, including the sponsor, subject matter experts, developers, testers,
business analysts and help desk staff, should develop a set of questions for these interviews.
Any telephone interview with an existing customer represents lost time for them. When
developing the list of questions, focus on the important issues. Thirty minutes should be
adequate for the interview.
General open ended questions for a telephone interview would include:
How long have you had the product installed?
What other products did you consider?
Why did you choose this product?
How long did it take for the product to be tested and implemented in production?
Was this more or less time than you estimated? Why?
How much training did you do? Who provided the training? How successful was it?
Did you have Performance Based Criteria for this acquisition? What did you use?
What were the results?
In addition to these open ended questions, developing a few questions using a Likert scale
might also be useful:
On a scale of 1 to 7, how happy were you with the overall acquisition and
installation experience?
On a scale of 1 to 7, how completely does this product meet your functional needs?
On a scale of 1 to 7, how would you rate this vendor in terms of quality, timeliness
and support?
At the end of the discussion it is effective to close with the following questions:
Is there something else important about this product and this vendor that we should
know?
Given what you know now, would you make the same decision?
To the extent that it is possible to do so, try to ask the same questions, the same way each
time. Record the responses from each customer on a spreadsheet. Use the combined results of
the phone calls for all of the customers of each of the vendors to help evaluate the potential
choices.
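As a sketch of that spreadsheet tally, the snippet below averages 1-to-7 Likert responses per vendor so the combined results can be compared side by side; the vendor names and scores are hypothetical, invented purely for illustration.

```python
# Illustrative aggregation of 1-7 Likert responses gathered from customer
# reference calls, grouped by vendor. All names and scores are hypothetical.
from statistics import mean

responses = {   # vendor -> one score per customer interviewed (same question)
    "Vendor X": [6, 7, 5],
    "Vendor Y": [4, 5, 3],
}

# Average each vendor's responses for side-by-side comparison.
averages = {vendor: round(mean(scores), 2)
            for vendor, scores in responses.items()}
print(averages)
```

Asking the same questions the same way each time is what makes this aggregation meaningful; inconsistent questions produce numbers that cannot be compared.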
It is worth asking a vendor to provide a list of former customers. They may not be willing to
do so, but many are. Former customers are an excellent source for identifying potential
problems with the vendor or the product. If the vendor does not have those names, it is
sometimes possible to get them from the current customers. It is important to remember that
the reasons one organization chooses to move to a different product may not impact your
organization. Questions to ask former customers are:
How long did you use the product?
What made you decide to change?
What product did you select?
Was the vendor helpful in making the transition?
What was the best thing about the product / vendor?
What was the worst thing about the product / vendor?
Responses from former customers must be evaluated with care, as there may be some residual
hostility between the customer and the vendor that has no impact on the product itself.

8.3.3.2 Demonstrate the software in operation

Before committing to the acquisition of a vendor package, it is essential to see the product in
use at another client location. Most vendors have a list of customers who are willing to let
prospective purchasers visit their location to see the product in action. Be very cautious if the
vendor is not willing to have this happen; it may indicate that there are significant issues with
the product.
A Subject Matter Expert (SME), a member of the IT installation and support team, a tester and
the Business Analyst should make the visit to the installation site. Each of these individuals
brings a unique perspective to the demonstration and is capable of identifying strengths and
weaknesses in the product that others might miss. Each team member should be focused on
issues directly related to their specialty, as well as the general items at the end of this section.
If it is essential to send a smaller team, a thorough pre-visit preparation session should be
conducted.
Prior to the visit, talk with the host customer to discuss the organization's special interests.
Develop a list of specific functions to be executed and provide it in advance to the host
customer. Make it clear that part of the purpose is to talk to people, at all levels of the
organization, who are involved in the actual use of the product. If the review team only talks
to the in-house product champion, they may learn little more than through discussions with the
vendor sales teams.
During the visit, the review team should be listening and watching as the product is used,
focusing on the following issues:
Ease of navigation through the system to accomplish tasks
Understandability
Clarity of communications
Need for external references, such as manuals during operation
Ease of use of manuals
Functionality of software
Knowledge needed to execute tasks
Effectiveness of help routines

8.4 Roles and Responsibilities


There are several key roles to fill when dealing with contracted software, whether custom or
off the shelf. Not all organizations title these roles in precisely the same way. Regardless of
what they are called, these functions must be performed for the project to be successful. Many
of these roles and responsibilities have been identified in earlier Skill Categories.

8.4.1 Project Manager

Not all projects have an official project manager, but all teams have someone fulfilling that
role. The project manager has overall schedule, budget, and quality responsibility for the
project. The job of the project manager is to:
Create, maintain and control the overall project plan
Coordinate and manage the integration of sub-plans, such as the vendor delivery
plan with the customer Test Plan, ensuring that sub-plans include time and
resources for the appropriate levels of verification and validation activities
Maintain control of scope, which includes the growth of requirements and any
resulting growth of the budget and/or schedule, including the test effort
Manage budget, staff resources and progress in accordance with the project plan
Ensure communications regarding status of, and changes to, the project plan
Manage the resolution of issues resulting from failure to meet schedule, budget,
functionality or quality criteria throughout the life of the project
Ensure that all project metrics, including defect data are collected and maintained
Ensure the successful completion of end of project activities, including the decision
to implement, implementation and post implementation reviews and the archive of
project artifacts

8.4.2 Project Liaison

This person is the primary project contact with the other party or parties to the contract. For
small projects, this role may be filled by the project manager; for large projects there may be
multiple individuals, each focusing on a specific vendor or product. The responsibilities of the
project liaison are to:

Ensure that proper communications, both incoming and outgoing, occur

Track vendor deliverables against the contracted schedule


Monitor the flow of defect reports and resolutions
Facilitate technical and logistical problem resolution sessions between the
customer and the vendor

8.4.3 Attorneys and Legal Counsel

No prudent organization executes a contract without the advice of their legal counsel. These
may be individuals who work for the organization and provide this service routinely, or they
may be members of a law firm hired by the organization on an as-needed basis. Their
responsibilities are to:
Review all documents presented to ensure conformance to both the organization's
best interests and the laws governing that organization
Ensure that the contract(s) created are fair and support the intent expressed by the
Business Unit and Information Technology
Provide suggestions for clarification or improvement of documents presented
Support contract negotiation efforts when required
Provide expert support in the event of an unsuccessful contract termination

8.4.4 Test Manager and the Test Team

The person responsible for the systems and acceptance testing effort on the project is the Test
Manager; in some organizations they are referred to as the Test Lead. The responsibilities of
the Test Manager are to:
Create and maintain the overall test plan to reflect changes to the overall project
plan
Determine what testing tools, techniques and strategies will be needed to effectively
test the project and ensure the staff know how to perform that testing
Assist the project manager in the integration of the test plan into the overall project
plan
Manage test budget, test staff resources and progress in accordance with the test
plan
Participate in test execution, analysis and defect tracking as needed
Other functions of the Test Manager are identified in Skill Category 7.

8.4.5 Designer/Developer

The work of translating the requirements of what a system must do or be (Requirements)
into a product that will meet those needs is called design. The work of design may include the
participation of systems architects, database administrators, testers, trainers, documentation
tech writers and others to come up with how to meet the needs. The designed product is then
transferred to those who will make it come to life, i.e., the developers.
In the world of contracted software, the developers will be part of the vendor
organization.

8.4.6 Business Partner and Subject Matter Expert

The participation of the business partner and subject matter experts from the business
community is essential to the success of the project. Just as it is not possible to get the
requirements right early in the project without their cooperation, so too it is necessary to have
their input on both the creation and results of various functional tests.
With both the business partners and subject matter experts it is important to be conscious of
their competing priorities and be efficient and effective in the scheduling of their
participation. The responsibilities of these individuals in support of contracted software are:
Participate in the development and prioritization of the requirements forming the
basis for the contract
Support the creation and maintenance of an effective overall project with specific
attention to areas of functional correctness and performance characteristics of the
product
Provide expert insight into functionality issues that arise from the development of
requirements and execution of test cases
Assist the project manager in the integration of the business and subject matter
experts into the project plan and specifically where needed in requirements
verification and acceptance testing
Support efforts to maintain control of the project scope by active participation in
early test life cycle activities such as requirements verification and test case
development
Provide support for an appropriate level of test budget and test staff resources in
both the IT and the business communities
Participate in the completion of end of project activities, including the decision to
implement, implementation and post implementation reviews

8.4.7 Business Analyst

In earlier Skill Categories the point has been made that not all organizations have the same
structure, and that as a result, the Business Analyst often finds they are needed to perform
roles and responsibilities of other functions. If there is no project manager, the BA may need
to fulfill those responsibilities. If there is no test manager or project liaison, the BA may be
doing that work. Regardless, the core responsibilities of the BA are:
Participate in the creation of the overall project and test plan and assist with the
necessary maintenance to reflect changes to the overall project plan
Plan for and participate in early life cycle verification activities such as
requirements inspections and early development and inspection of use cases
Plan for and participate in functional, regression, stress, security and performance
test execution, analysis and defect tracking as needed, with specific attention to
those aspects of acceptance testing that are essential to the successful
implementation of the finished product
Support the resolution of issues resulting in the failure to meet the schedule, budget,
functionality or quality criteria throughout the testing life cycle of the project
Participate in the completion of end of project activities, including the decision to
implement, test reports, implementation and post implementation reviews.

8.4.8 Other Participants

In addition to the participants identified above, there can be many other groups and
individuals contributing to the contract programming project, such as:
Other Vendors
Key Stakeholders
Beta Customers
Power Users

8.5 Summary
This Skill Category discusses the special issues and approaches needed to work with vendors
for the development and acquisition of software products. This may range from completely
custom, developed from the customer's requirements, to completely off-the-shelf. Regardless
of the source of the requirements, there is a need to measure the product against a defined
standard to ensure delivery of the correct product. Performance Based Contracting helps both
the customer and the vendor clearly understand what is expected and how delivery will be
measured.

Appendix A
Vocabulary
The organization of this document is primarily alphabetical. Acronyms are grouped at the
beginning of each alphabetical section, and are followed by words, terms and phrases. Acronyms are expanded at the beginning of each alphabetical section and defined with the full
term or phrase. Four modifications are the grouping of terms and phrases in the domains of
specifications, testing, qualification, and validation. Those related terms are located sequentially to assist the user in finding all defined terms in these domains, e.g., functional testing is
defined under testing, functional.
The terms are defined, as much as possible, using available standards. The source of such definitions appears immediately following the term or phrase in parentheses, e.g., (NIST). The
source documents are listed below.
The New IEEE Standard Dictionary of Electrical and Electronics Terms, IEEE Std. 100-1992.
IEEE Standards Collection, Software Engineering, 1994 Edition, published by the Institute
of Electrical and Electronics Engineers, Inc.
National Bureau of Standards [NBS] Special Publication 500-75 Validation, Verification, and
Testing of Computer Software, 1981.
Federal Information Processing Standards [FIPS] Publication 101, Guideline For Lifecycle
Validation, Verification, and Testing of Computer Software, 1983.
Federal Information Processing Standards [FIPS] Publication 105, Guideline for Software
Documentation Management, 1984.
American National Standard for Information Systems, Dictionary for Information Systems,
American National Standards Institute, 1991.
FDA Technical Report, Software Development Activities, July 1987.
FDA Guide to Inspection of Computerized Systems in Drug Processing, 1983.
FDA Guideline on General Principles of Process Validation, May 1987.
Reviewer Guidance for Computer Controlled Medical Devices Undergoing 510(k) Review,
Office of Device Evaluation, CDRH, FDA, August 1991.
HHS Publication FDA 90-4236, Preproduction Quality Assurance Planning.
MIL-STD-882C, Military Standard System Safety Program Requirements, 19JAN1993.
International Electrotechnical Commission, International Standard 1025, Fault Tree Analysis.
International Electrotechnical Commission, International Standard 812, Analysis Techniques for System Reliability - Procedure for Failure Mode and Effects Analysis [FMEA].
FDA recommendations, Application of the Medical Device GMP to Computerized Devices
and Manufacturing Processes, May 1992.
Pressman, R., Software Engineering, A Practitioner's Approach, Third Edition, McGraw-Hill, Inc., 1992.
Myers, G., The Art of Software Testing, Wiley Interscience, 1979.
Beizer, B., Software Testing Techniques, Second Edition, Van Nostrand Reinhold, 1990.
Additional general references used in developing some definitions are:
Bohl, M., Information Processing, Fourth Edition, Science Research Associates, Inc., 1984.
Freedman, A., The Computer Glossary, Sixth Edition, American Management Association,
1993.
McGraw-Hill Electronics Dictionary, Fifth Edition, 1994, McGraw-Hill Inc.
McGraw-Hill Dictionary of Scientific & Technical Terms, Fifth Edition, 1994, McGraw-Hill
Inc.
Webster's New Universal Unabridged Dictionary, Deluxe Second Edition, 1979.


-A-
ADC. analog-to-digital converter.


ALU. arithmetic logic unit.
ANSI. American National Standards Institute.
ASCII. American Standard Code for Information Interchange.
abstraction. The separation of the logical properties of data or function from its
implementation in a computer program. See: encapsulation, information hiding, software
engineering.
access. (ANSI) To obtain the use of a resource.
access time. (ISO) The time interval between the instant at which a call for data is initiated
and the instant at which the delivery of the data is completed.
accident. See: mishap.
accuracy. (IEEE) (1) A qualitative assessment of correctness or freedom from error. (2) A
quantitative measure of the magnitude of error. Contrast with precision. (CDRH) (3) The
measure of an instrument's capability to approach a true or absolute value. It is a function of
precision and bias. See: bias, precision, calibration.
accuracy study processor. A software tool used to perform calculations or determine
accuracy of computer manipulated program variables.
actuator. A peripheral [output] device which translates electrical signals into mechanical
actions; e.g., a stepper motor which acts on an electrical signal received from a computer
instructing it to turn its shaft a certain number of degrees or a certain number of rotations. See:
servomechanism.
adaptive maintenance. (IEEE) Software maintenance performed to make a computer
program usable in a changed environment. Contrast with corrective maintenance, perfective
maintenance.
address. (1) A number, character, or group of characters which identifies a given device or a
storage location which may contain a piece of data or a program step. (2) To refer to a device
or storage location by an identifying number, character, or group of characters.
addressing exception. (IEEE) An exception that occurs when a program calculates an
address outside the bounds of the storage available to it.
algorithm. (IEEE) (1) A finite set of well-defined rules for the solution of a problem in a
finite number of steps. (2) Any sequence of operations for performing a specific task.
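To make definition (1) concrete, here is a short sketch in Python (the language is chosen only for illustration; the glossary does not prescribe one) of Euclid's algorithm, a classic finite set of well-defined rules that reaches a solution in a finite number of steps:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite set of well-defined rules that
    solves a problem in a finite number of steps."""
    while b != 0:          # each step strictly reduces b, so the loop terminates
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```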
algorithm analysis. (IEEE) A software V&V task to ensure that the algorithms selected are
correct, appropriate, and stable, and meet all accuracy, timing, and sizing requirements.
alphanumeric. Pertaining to a character set that contains letters, digits, and usually other
characters such as punctuation marks.

American National Standards Institute. 11 West 42nd Street, New York, N.Y. 10036. An
organization that coordinates the development of U.S. voluntary national standards for nearly
all industries. It is the U.S. member body to ISO and IEC. Information technology standards
pertain to programming languages, electronic data interchange, telecommunications and
physical properties of diskettes, cartridges and magnetic tapes.
American Standard Code for Information Interchange. A seven bit code adopted as a
standard to represent specific data characters in computer systems, and to facilitate
interchange of data between various machines and systems. Provides 128 possible characters,
the first 32 of which are non-printing characters used for transmission and device control. Since common storage is
an 8-bit byte [256 possible characters] and ASCII uses only 128, the extra bit is used to hold a
parity bit or create special symbols. See: extended ASCII.
analog. Pertaining to data [signals] in the form of continuously variable [wave form] physical
quantities; e.g., pressure, resistance, rotation, temperature, voltage. Contrast with digital.
analog device. (IEEE) A device that operates with variables represented by continuously
measured quantities such as pressures, resistances, rotations, temperatures, and voltages.
analog-to-digital converter. Input related devices which translate an input device's [sensor]
analog signals to the corresponding digital signals needed by the computer. Contrast with
DAC [digital-to-analog converter]. See: analog, digital.
analysis. (1) To separate into elemental parts or basic principles so as to determine the nature
of the whole. (2) A course of reasoning showing that a certain result is a consequence of
assumed premises. (3) (ANSI) The methodical investigation of a problem, and the separation
of the problem into smaller related units for further detailed study.
anomaly. (IEEE) Anything observed in the documentation or operation of software that
deviates from expectations based on previously verified software products or reference
documents. See: bug, defect, error, exception, fault.
application program. See: application software.
application software. (IEEE) Software designed to fill specific needs of a user; for example,
software for navigation, payroll, or process control. Contrast with support software; system
software.
architectural design. (IEEE) (1) The process of defining a collection of hardware and
software components and their interfaces to establish the framework for the development of a
computer system. See: functional design. (2) The result of the process in (1). See: software
engineering.
architecture. (IEEE) The organizational structure of a system or component. See:
component, module, subprogram, routine.
archival database. (ISO) An historical copy of a database saved at a significant point in time
for use in recovery or restoration of the database.
archive. (IEEE) A lasting collection of computer system data or other records that are in long
term storage.

A-4

Version 9.1

Vocabulary
archive file. (ISO) A file that is part of a collection of files set aside for later research or
verification, for security purposes, for historical or legal purposes, or for backup.
arithmetic logic unit. The [high speed] circuits within the CPU which are responsible for
performing the arithmetic and logical operations of a computer.
arithmetic overflow. (ISO) That portion of the result of an arithmetic operation that
exceeds the word length of the space provided for the representation of the number. See:
overflow, overflow exception.
arithmetic underflow. (ISO) In an arithmetic operation, a result whose absolute value is too
small to be represented within the range of the numeration system in use. See: underflow,
underflow exception.
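The overflow case can be illustrated with a minimal Python sketch that simulates an 8-bit signed word (the word size and helper function are invented for this example; real hardware wraps this way automatically):

```python
WORD_BITS = 8
MAX_VALUE = 2 ** (WORD_BITS - 1) - 1   # 127, the largest 8-bit signed value

def add_signed_8bit(a: int, b: int) -> int:
    """Add two values, wrapping the result into an 8-bit signed word to
    mimic overflow of the space provided for the number."""
    result = (a + b) & 0xFF            # keep only the low 8 bits
    return result - 256 if result > MAX_VALUE else result

print(add_signed_8bit(100, 50))  # -106: the true sum, 150, overflowed the word
```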
array. (IEEE) An n-dimensional ordered set of data items identified by a single name and one
or more indices, so that each element of the set is individually addressable; e.g., a matrix,
table, or vector.
as built. (NIST) Pertaining to an actual configuration of software code resulting from a
software development project.
assemble. See: assembling.
assembler. (IEEE) A computer program that translates programs [source code files] written in
assembly language into their machine language equivalents [object code files]. Contrast with
compiler, interpreter. See: cross-assembler, cross-compiler.
assembling. (NIST) Translating a program expressed in an assembly language into object
code.
assembly code. See: assembly language.
assembly language. (IEEE) A low level programming language that corresponds closely to
the instruction set of a given computer, allows symbolic naming of operations and addresses,
and usually results in a one-to-one translation of program instructions [mnemonics] into
machine instructions. See: low-level language.
assertion. (NIST) A logical expression specifying a program state that must exist or a set of
conditions that program variables must satisfy at a particular point during program execution.
assertion checking. (NIST) Checking of user- embedded statements that assert relationships
between elements of a program. An assertion is a logical expression that specifies a condition
or relation among program variables. Tools that test the validity of assertions as the program
is executing or tools that perform formal verification of assertions have this feature. See:
instrumentation; testing, assertion.
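As an illustrative sketch, the following Python function (the function itself is hypothetical) shows user-embedded assertions stating conditions its variables must satisfy at particular points of execution:

```python
def mean(values):
    # Assertion: a condition the program state must satisfy at this point.
    assert len(values) > 0, "mean() requires a non-empty sequence"
    result = sum(values) / len(values)
    # Post-condition: the mean must lie within the range of the inputs.
    assert min(values) <= result <= max(values)
    return result

print(mean([2, 4, 6]))  # 4.0
```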
asynchronous. Occurring without a regular time relationship, i.e., timing independent.
asynchronous transmission. A timing independent method of electrical transfer of data in
which the sending and receiving units are synchronized on each character, or small block of
characters, usually by the use of start and stop signals. Contrast with synchronous
transmission.
audit. (1) (IEEE) An independent examination of a work product or set of work products to
assess compliance with specifications, standards, contractual agreements, or other criteria.
See: functional configuration audit, physical configuration audit. (2) (ANSI) To conduct an
independent review and examination of system records and activities in order to test the
adequacy and effectiveness of data security and data integrity procedures, to ensure
compliance with established policy and operational procedures, and to recommend any
necessary changes. See: computer system audit, software audit.
audit trail. (1) (ISO) Data in the form of a logical path linking a sequence of events, used to
trace the transactions that have affected the contents of a record. (2) A chronological record of
system activities that is sufficient to enable the reconstruction, reviews, and examination of
the sequence of environments and activities surrounding or leading to each event in the path
of a transaction from its inception to output of final results.
auxiliary storage. Storage device other than main memory [RAM]; e.g., disks and tapes.
-B-
BIOS. basic input/output system.
bps. bits per second.
band. Range of frequencies used for transmitting a signal. A band can be identified by the
difference between its lower and upper limits, i.e. bandwidth, as well as by its actual lower
and upper limits; e.g., a 10 MHz band in the 100 to 110 MHz range.
bandwidth. The transmission capacity of a computer channel, communications line or bus. It
is expressed in cycles per second [Hz], and also is often stated in bits or bytes per second. See:
band.
bar code. (ISO) A code representing characters by sets of parallel bars of varying thickness
and separation that are read optically by transverse scanning.
baseline. (NIST) A specification or product that has been formally reviewed and agreed upon,
that serves as the basis for further development, and that can be changed only through formal
change control procedures.
BASIC. An acronym for Beginner's All-purpose Symbolic Instruction Code, a high-level
programming language intended to facilitate learning to program in an interactive
environment.
basic input/output system. Firmware that activates peripheral devices in a PC. Includes
routines for the keyboard, screen, disk, parallel port and serial port, and for internal services
such as time and date. It accepts requests from the device drivers in the operating system as
well from application programs. It also contains autostart functions that test the system on
startup and prepare the computer for operation. It loads the operating system and passes
control to it.
batch. (IEEE) Pertaining to a system or mode of operation in which inputs are collected and
processed all at one time, rather than being processed as they arrive, and a job, once started,
proceeds to completion without additional input or user interaction. Contrast with
conversational, interactive, on-line, real time.

batch processing. Execution of programs serially with no interactive processing. Contrast
with real time processing.
baud. The signalling rate of a line. It's the switching speed, or number of transitions [voltage
or frequency change] made per second. At low speeds baud is equal to bits per second;
e.g., 300 baud is equal to 300 bps. However, one baud can be made to represent more than one
bit per second.
benchmark. A standard against which measurements or comparisons can be made.
bias. A measure of how closely the mean value in a series of replicate measurements
approaches the true value. See: accuracy, precision, calibration.
binary. The base two number system. Permissible digits are "0" and "1".
bit. A contraction of the term binary digit. The bit is the basic unit of digital data. It may be in
one of two states, logic 1 or logic 0. It may be thought of as a switch which is either on or off.
Bits are usually combined into computer words of various sizes, such as the byte.
bits per second. A measure of the speed of data transfer in a communications system.
black-box testing. See: testing, functional.
block. (ISO) (1) A string of records, words, or characters that for technical or logical purposes
are treated as a unity. (2) A collection of contiguous records that are recorded as a unit, and
the units are separated by interblock gaps. (3) A group of bits or digits that are transmitted as a
unit and that may be encoded for error-control purposes. (4) In programming languages, a
subdivision of a program that serves to group related statements, delimit routines, specify
storage allocation, delineate the applicability of labels, or segment parts of the program for
other purposes. In FORTRAN, a block may be a sequence of statements; in COBOL, it may
be a physical record.
block check. (ISO) The part of the error control procedure that is used for determining that a
block of data is structured according to given rules.
block diagram. (NIST) A diagram of a system, instrument or computer, in which the
principal parts are represented by suitably annotated geometrical figures to show both the
basic functions of the parts and the functional relationships between them.
block length. (1) (ISO) The number of records, words or characters in a block. (2) (ANSI) A
measure of the size of a block, usually specified in units such as records, words, computer
words, or characters.
block transfer. (ISO) The process, initiated by a single action, of transferring one or more
blocks of data.
blocking factor. (ISO) The number of records in a block. The number is computed by
dividing the size of the block by the size of each record contained therein. Syn: grouping
factor.
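The computation in this definition can be sketched in a few lines of Python (the sizes used are arbitrary examples):

```python
def blocking_factor(block_size: int, record_size: int) -> int:
    """Records per block: the size of the block divided by the size of
    each record, counting whole records only."""
    return block_size // record_size

# A 4,096-byte block holding 100-byte records has a blocking factor of 40.
print(blocking_factor(4096, 100))  # 40
```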
blueprint. An exact or detailed plan or outline. Contrast with graph.
bomb. A trojan horse which attacks a computer system upon the occurrence of a specific
logical event [logic bomb], the occurrence of a specific time-related logical event [time

bomb], or is hidden in electronic mail or data and is triggered when read in a certain way
[letter bomb]. See: trojan horse, virus, worm.
boolean. Pertaining to the principles of mathematical logic developed by George Boole, a
nineteenth century mathematician. Boolean algebra is the study of operations carried out on
variables that can have only one of two possible values; i.e., 1 (true) and 0 (false). As ADD,
SUBTRACT, MULTIPLY, and DIVIDE are the primary operations of arithmetic, AND, OR,
and NOT are the primary operations of Boolean Logic. In Pascal a boolean variable is a
variable that can have one of two possible values, true or false.
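The primary operations can be sketched in Python, whose boolean type works the same way (variable names here are illustrative):

```python
# The two Boolean values and the three primary operations of Boolean logic.
p, q = True, False

conjunction = p and q   # AND: true only if both operands are true
disjunction = p or q    # OR: true if either operand is true
negation = not p        # NOT: inverts the value

print(conjunction, disjunction, negation)  # False True False
```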
boot. (1) (IEEE) To initialize a computer system by clearing memory and reloading the
operating system. (2) To cause a computer system to reach a known beginning state. A boot
program, in firmware, typically performs this function which includes loading basic
instructions which tell the computer how to load programs into memory and how to begin
executing those programs. A distinction can be made between a warm boot and a cold boot. A
cold boot means starting the system from a powered-down state. A warm boot means
restarting the computer while it is powered-up. Important differences between the two
procedures are; 1) a power-up self-test, in which various portions of the hardware [such as
memory] are tested for proper operation, is performed during a cold boot while a warm boot
does not normally perform such self-tests, and 2) a warm boot does not clear all memory.
bootstrap. (IEEE) A short computer program that is permanently resident or easily loaded
into a computer and whose execution brings a larger program, such an operating system or its
loader, into memory.
boundary value. (1) (IEEE) A data value that corresponds to a minimum or maximum input,
internal, or output value specified for a system or component. (2) A value which lies at, or just
inside or just outside a specified range of valid input and output values.
boundary value analysis. (NBS) A selection technique in which test data are chosen to lie
along "boundaries" of the input domain [or output range] classes, data structures, procedure
parameters, etc. Choices often include maximum, minimum, and trivial values or parameters.
This technique is often called stress testing. See: testing, boundary value.
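A minimal sketch of the technique in Python, using a hypothetical validation rule (the rule and its limits are invented for this example):

```python
def accept_age(age: int) -> bool:
    """Hypothetical validation rule: valid ages run from 18 through 65."""
    return 18 <= age <= 65

# Boundary value analysis chooses test data at and just beyond the edges
# of the valid input range, rather than only "typical" mid-range values.
boundary_cases = {17: False, 18: True, 65: True, 66: False}
for age, expected in boundary_cases.items():
    assert accept_age(age) == expected
```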
box diagram. (IEEE) A control flow diagram consisting of a rectangle that is subdivided to
show sequential steps, if-then-else conditions, repetition, and case conditions. Syn: Chapin
chart, Nassi-Shneiderman chart, program structure diagram. See: block diagram, bubble chart,
flowchart, graph, input-process-output chart, structure chart.
branch. An instruction which causes program execution to jump to a new point in the
program sequence, rather than execute the next instruction. Syn: jump.
branch analysis. (Myers) A test case identification technique which produces enough test
cases such that each decision has a true and a false outcome at least once. Contrast with path
analysis.
branch coverage. (NBS) A test coverage criteria which requires that for each decision point
each possible branch be executed at least once. Syn: decision coverage. Contrast with
condition coverage, multiple condition coverage, path coverage, statement coverage. See:
testing, branch.
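As an illustrative sketch (the function is hypothetical), the three test cases below drive each decision in a Python function to both its true and its false outcome at least once, satisfying branch coverage:

```python
def classify(n: int) -> str:
    if n < 0:        # decision point 1
        return "negative"
    if n == 0:       # decision point 2
        return "zero"
    return "positive"

# n=-1 takes decision 1 true; n=0 takes decision 1 false and decision 2
# true; n=5 takes both decisions false: every branch is executed once.
cases = [(-1, "negative"), (0, "zero"), (5, "positive")]
for n, expected in cases:
    assert classify(n) == expected
```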

bubble chart. (IEEE) A data flow, data structure, or other diagram in which entities are
depicted with circles [bubbles] and relationships are represented by links drawn between the
circles. See: block diagram, box diagram, flowchart, graph, input-process-output chart,
structure chart.
buffer. A device or storage area [memory] used to store data temporarily to compensate for
differences in rates of data flow, time of occurrence of events, or amounts of data that can be
handled by the devices or processes involved in the transfer or use of the data.
bug. A fault in a program which causes the program to perform in an unintended or
unanticipated manner. See: anomaly, defect, error, exception, fault.
bus. A common pathway along which data and control signals travel between different
hardware devices within a computer system. (A) When bus architecture is used in a computer,
the CPU, memory and peripheral equipment are interconnected through the bus. The bus is
often divided into two channels, a control channel to select where data is located [address
bus], and the other to transfer the data [data bus or I/O bus]. Common buses are: ISA [Industry
Standard Architecture] the original IBM PC 16 bit AT bus; EISA [Extended Industry
Standard Architecture], a 32 bit extension of the ISA bus [which provides for bus mastering]; MCA
[MicroChannel Architecture] an IBM 32 bit bus; Multibus I & II [advanced, 16 & 32 bit
respectively, bus architecture by Intel used in industrial, military and aerospace applications];
NuBus, a 32 bit bus architecture originally developed at MIT [A version is used in the Apple
Macintosh computer]; STD bus, a bus architecture used in medical and industrial equipment
due to its small size and rugged design [Originally 8 bits, with extensions to 16 and 32 bits];
TURBO Channel, a DEC 32 bit data bus with peak transfer rates of 100 MB/second; VMEbus
[Versa Module Eurocard Bus], a 32 bit bus from Motorola, et al., used in industrial,
commercial and military applications worldwide [VME64 is an expanded version that
provides 64 bit data transfer and addressing]. (B) When bus architecture is used in a network,
all terminals and computers are connected to a common channel that is made of twisted wire
pairs, coaxial cable, or optical fibers. Ethernet is a common LAN architecture using a bus
topology.
byte. A sequence of adjacent bits, usually eight, operated on as a unit.
-C-
CAD. computer aided design.
CAM. computer aided manufacturing.
CASE. computer aided software engineering.
CCITT. Consultative Committee for International Telephony and Telegraphy.
CD-ROM. compact disc - read only memory.
CISC. complex instruction set computer.
CMOS. complementary metal-oxide semiconductor.
CO-AX. coaxial cable.
COTS. configurable, off-the-shelf software.
CP/M. Control Program for Microcomputers.
CPU. central processing unit.
CRC. cyclic redundancy [check] code.
CRT. cathode ray tube.
C. A general purpose high-level programming language. Created for use in the development
of computer operating systems software. It strives to combine the power of assembly language
with the ease of a high-level language.
C++. An object-oriented high-level programming language.
calibration. Ensuring continuous adequate performance of sensing, measurement, and
actuating equipment with regard to specified accuracy and precision requirements. See:
accuracy, bias, precision.
call graph. (IEEE) A diagram that identifies the modules in a system or computer program
and shows which modules call one another. Note: The result is not necessarily the same as that
shown in a structure chart. Syn: call tree, tier chart. Contrast with structure chart. See: control
flow diagram, data flow diagram, data structure diagram, state diagram.
cathode ray tube. An output device. Syn: display, monitor, screen.
cause effect graph. (Myers) A Boolean graph linking causes and effects. The graph is
actually a digital-logic circuit (a combinatorial logic network) using a simpler notation than
standard electronics notation.
cause effect graphing. (1) (NBS) Test data selection technique. The input and output
domains are partitioned into classes and analysis is performed to determine which input
classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect
set. (2) (Myers) A systematic method of generating test cases representing combinations of
conditions. See: testing, functional.
central processing unit. The unit of a computer that includes the circuits controlling the
interpretation of program instructions and their execution. The CPU controls the entire
computer. It receives and sends data through input-output channels, retrieves data and
programs from memory, and conducts mathematical and logical functions of a program.
certification. (ANSI) In computer systems, a technical evaluation, made as part of and in
support of the accreditation process, that establishes the extent to which a particular computer
system or network design and implementation meet a prespecified set of requirements.
change control. The processes, authorities for, and procedures to be used for all changes that
are made to the computerized system and/or the system's data. Change control is a vital subset
of the Quality Assurance [QA] program within an establishment and should be clearly
described in the establishment's SOPs. See: configuration control.
change tracker. A software tool which documents all changes made to a program.
check summation. A technique for error detection to ensure that data or program files have
been accurately copied or transferred. Basically, a redundant check in which groups of digits;

e.g., a file, are summed, usually without regard to overflow, and that sum checked against a
previously computed sum to verify operation accuracy. Contrast with cyclic redundancy
check [CRC], parity check. See: checksum.
checksum. (IEEE) A sum obtained by adding the digits in a numeral, or group of numerals [a
file], usually without regard to meaning, position, or significance. See: check summation.
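The technique can be sketched in a few lines of Python (the data and modulus are illustrative; real implementations vary in word size and overflow handling):

```python
def checksum(data: bytes, modulus: int = 256) -> int:
    """Sum the bytes of a file or record, discarding overflow past the
    modulus, to produce a redundant check value."""
    return sum(data) % modulus

original = b"payroll records"
copy = b"payroll recirds"   # one corrupted byte
print(checksum(original) == checksum(copy))  # False: the corruption is flagged
```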
chip. See: integrated circuit.
client-server. A term used in a broad sense to describe the relationship between the receiver
and the provider of a service. In the world of microcomputers, the term client-server describes
a networked system where front-end applications, as the client, make service requests upon
another networked system. Client-server relationships are defined primarily by software. In a
local area network [LAN], the workstation is the client and the file server is the server.
However, client-server systems are inherently more complex than file server systems. Two
disparate programs must work in tandem, and there are many more decisions to make about
separating data and processing between the client workstations and the database server. The
database server encapsulates database files and indexes, restricts access, enforces security,
and provides applications with a consistent interface to data via a data dictionary.
clock. (ISO) A device that generates periodic, accurately spaced signals used for such
purposes as timing, regulation of the operations of a processor, or generation of interrupts.
coaxial cable. High-capacity cable used in communications and video transmissions.
Provides a much higher bandwidth than twisted wire pair.
COBOL. Acronym for COmmon Business Oriented Language. A high-level programming
language intended for use in the solution of problems in business data processing.
code. See: program, source code.
code audit. (IEEE) An independent review of source code by a person, team, or tool to verify
compliance with software design documentation and programming standards. Correctness and
efficiency may also be evaluated. Contrast with code inspection, code review, code
walkthrough. See: static analysis.
code auditor. A software tool which examines source code for adherence to coding and
documentation conventions.
code inspection. (Myers/NBS) A manual [formal] testing [error detection] technique where
the programmer reads source code, statement by statement, to a group who ask questions
analyzing the program logic, analyzing the code with respect to a checklist of historically
common programming errors, and analyzing its compliance with coding standards. Contrast
with code audit, code review, code walkthrough. This technique can also be applied to other
software and configuration items. Syn: Fagan Inspection. See: static analysis.
code review. (IEEE) A meeting at which software code is presented to project personnel,
managers, users, customers, or other interested parties for comment or approval. Contrast with
code audit, code inspection, code walkthrough. See: static analysis.
code walkthrough. (Myers/NBS) A manual testing [error detection] technique where
program [source code] logic [structure] is traced manually [mentally] by a group with a small
set of test cases, while the state of program variables is manually monitored, to analyze the

programmer's logic and assumptions. Contrast with code audit, code inspection, code review.
See: static analysis.
coding. (IEEE) (1) In software engineering, the process of expressing a computer program in
a programming language. (2) The transforming of logic and data from design specifications
(design descriptions) into a programming language. See: implementation.
coding standards. Written procedures describing coding [programming] style conventions
specifying rules governing the use of individual constructs provided by the programming
language, and naming, formatting, and documentation requirements which prevent
programming errors, control complexity and promote understandability of the source code.
Syn: development standards, programming standards.
comment. (1) (ISO) In programming languages, a language construct that allows
[explanatory] text to be inserted into a program and that does not have any effect on the
execution of the program. (2) (IEEE) Information embedded within a computer program, job
control statements, or a set of data, that provides clarification to human readers but does not
affect machine interpretation.
compact disc - read only memory. A compact disk used for the permanent storage of text,
graphic or sound information. Digital data is represented very compactly by tiny holes that
can be read by lasers attached to high resolution sensors. Capable of storing up to 680 MB of
data, equivalent to 250,000 pages of text, or 20,000 medium resolution images. This storage
media is often used for archival purposes. Syn: optical disk, write-once read-many times disk.
comparator. (IEEE) A software tool that compares two computer programs, files, or sets of
data to identify commonalities or differences. Typical objects of comparison are similar
versions of source code, object code, data base files, or test results.
compatibility. (ANSI) The capability of a functional unit to meet the requirements of a
specified interface.
compilation. (NIST) Translating a program expressed in a problem-oriented language or a
procedure oriented language into object code. Contrast with assembling, interpret. See:
compiler.
compile. See: compilation.
compiler. (1) (IEEE) A computer program that translates programs expressed in a high-level
language into their machine language equivalents. (2) The compiler takes the finished source
code listing as input and outputs the machine code instructions that the computer must have to
execute the program. See: assembler, interpreter, cross-assembler, cross-compiler.
compiling. See: compilation.
complementary metal-oxide semiconductor. A type of integrated circuit widely used for
processors and memories. It is a combination of transistors on a single chip connected to
complementary digital circuits.
completeness. (NIST) The property that all necessary parts of the entity are included.
Completeness of a product is often used to express the fact that all requirements have been
met by the product. See: traceability analysis.

complex instruction set computer. Traditional computer architecture that operates with
large sets of possible instructions. Most computers are in this category, including the IBM
compatible microcomputers. As computing technology evolved, instruction sets expanded to
include newer instructions which are complex in nature and require several to many execution
cycles and, therefore, more time to complete. Computers which operate with system software
based on these instruction sets have been referred to as complex instruction set computers.
Contrast with reduced instruction set computer [RISC].
complexity. (IEEE) (1) The degree to which a system or component has a design or
implementation that is difficult to understand and verify. (2) Pertaining to any of a set of
structure based metrics that measure the attribute in (1).
component. See: unit.
computer. (IEEE) (1) A functional unit that can perform substantial computations, including
numerous arithmetic operations, or logic operations, without human intervention during a run.
(2) A functional programmable unit that consists of one or more associated processing units
and peripheral equipment, that is controlled by internally stored programs, and that can
perform substantial computations, including numerous arithmetic operations, or logic
operations, without human intervention.
computer aided design. The use of computers to design products. CAD systems are high
speed workstations or personal computers using CAD software and input devices such as
graphic tablets and scanners to model and simulate the use of proposed products. CAD output
is a printed design or electronic output to CAM systems. CAD software is available for
generic design or specialized uses such as architectural, electrical, and mechanical design.
CAD software may also be highly specialized for creating products such as printed circuits
and integrated circuits.
computer aided manufacturing. The automation of manufacturing systems and techniques,
including the use of computers to communicate work instructions to automate machinery for
the handling of the processing [numerical control, process control, robotics, material
requirements planning] needed to produce a workpiece.
computer aided software engineering. An automated system for the support of software
development including an integrated tool set, i.e., programs, which facilitate the
accomplishment of software engineering methods and tasks such as project planning and
estimation, system and software requirements analysis, design of data structure, program
architecture and algorithm procedure, coding, testing and maintenance.
computer instruction set. (ANSI) A complete set of the operators of the instructions of a
computer together with a description of the types of meanings that can be attributed to their
operands. Syn: machine instruction set.
computer language. (IEEE) A language designed to enable humans to communicate with
computers. See: programming language.
computer program. See: program.
computer science. (ISO) The branch of science and technology that is concerned with
methods and techniques relating to data processing performed by automatic means.

computer system. (ANSI) A functional unit, consisting of one or more computers and
associated peripheral input and output devices, and associated software, that uses common
storage for all or part of a program and also for all or part of the data necessary for the
execution of the program; executes user-written or user-designated programs; performs
user-designated data manipulation, including arithmetic operations and logic operations; and that
can execute programs that modify themselves during their execution. A computer system may
be a stand-alone unit or may consist of several interconnected units. See: computer,
computerized system.
computer system audit. (ISO) An examination of the procedures used in a computer system
to evaluate their effectiveness and correctness and to recommend improvements. See:
software audit.
computer system security. (IEEE) The protection of computer hardware and software from
accidental or malicious access, use, modification, destruction, or disclosure. Security also
pertains to personnel, data, communications, and the physical protection of computer
installations. See: bomb, trojan horse, virus, worm.
computer word. A sequence of bits or characters that is stored, addressed, transmitted, and
operated on as a unit within a given computer. Typically one to four bytes long, depending on
the make of computer.
computerized system. Includes hardware, software, peripheral devices, personnel, and
documentation; e.g., manuals and Standard Operating Procedures. See: computer, computer
system.
concept phase. (IEEE) The initial phase of a software development project, in which user
needs are described and evaluated through documentation; e.g., statement of needs, advance
planning report, project initiation memo, feasibility studies, system definition documentation,
regulations, procedures, or policies relevant to the project.
condition coverage. (Myers) A test coverage criteria requiring enough test cases such that
each condition in a decision takes on all possible outcomes at least once, and each point of
entry to a program or subroutine is invoked at least once. Contrast with branch coverage,
decision coverage, multiple condition coverage, path coverage, statement coverage.
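As an illustration (the function and test values below are hypothetical, not part of the CBOK), two test cases can satisfy condition coverage for a single decision containing two conditions:

```python
def discount_applies(age, is_member):
    # One decision containing two conditions: (age >= 65) and (is_member).
    return age >= 65 or is_member

# Condition coverage: each condition takes on both outcomes at least once.
#   (70, False) -> age >= 65 is True,  is_member is False
#   (30, True)  -> age >= 65 is False, is_member is True
# The decision as a whole is True in both cases, so these two tests achieve
# condition coverage without also achieving decision coverage.
cases = [(70, False), (30, True)]
outcomes = [discount_applies(age, member) for age, member in cases]
```

Adding a case such as (30, False) would also drive the whole decision to False, satisfying decision coverage as well.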
configurable, off-the-shelf software. Application software, sometimes general purpose,
written for a variety of industries or users in a manner that permits users to modify the
program to meet their individual needs.
configuration. (IEEE) (1) The arrangement of a computer system or component as defined by
the number, nature, and interconnections of its constituent parts. (2) In configuration
management, the functional and physical characteristics of hardware or software as set forth
in technical documentation or achieved in a product.
configuration audit. See: functional configuration audit, physical configuration audit.
configuration control. (IEEE) An element of configuration management, consisting of the
evaluation, coordination, approval or disapproval, and implementation of changes to
configuration items after formal establishment of their configuration identification. See:
change control.

configuration identification. (IEEE) An element of configuration management, consisting of
selecting the configuration items for a system and recording their functional and physical
characteristics in technical documentation.
configuration item. (IEEE) An aggregation of hardware, software, or both that is designated
for configuration management and treated as a single entity in the configuration management
process. See: software element.
configuration management. (IEEE) A discipline applying technical and administrative
direction and surveillance to identify and document the functional and physical characteristics
of a configuration item, control changes to those characteristics, record and report change
processing and implementation status, and verifying compliance with specified requirements.
See: configuration control, change control, software engineering.
consistency. (IEEE) The degree of uniformity, standardization, and freedom from
contradiction among the documents or parts of a system or component. See: traceability.
consistency checker. A software tool used to test requirements in design specifications for
both consistency and completeness.
constant. A value that does not change during processing. Contrast with variable.
constraint analysis. (IEEE) (1) Evaluation of the safety of restrictions imposed on the
selected design by the requirements and by real world restrictions. The impacts of the
environment on this analysis can include such items as the location and relation of clocks to
circuit cards, the timing of a bus latch when using the longest safety-related timing to fetch
data from the most remote circuit card, interrupts going unsatisfied due to a data flood at an
input, and human reaction time. (2) verification that the program operates within the
constraints imposed upon it by requirements, the design, and the target computer. Constraint
analysis is designed to identify these limitations to ensure that the program operates within
them, and to ensure that all interfaces have been considered for out-of-sequence and erroneous
inputs.
Consultative Committee for International Telephony and Telegraphy. See: International
Telecommunications Union - Telecommunications Standards Section.
control bus. (ANSI) A bus carrying the signals that regulate system operations. See: bus.
control flow. (ISO) In programming languages, an abstraction of all possible paths that an
execution sequence may take through a program.
control flow analysis. (IEEE) A software V&V task to ensure that the proposed control flow
is free of problems, such as design or code elements that are unreachable or incorrect.
control flow diagram. (IEEE) A diagram that depicts the set of all possible sequences in
which operations may be performed during the execution of a system or program. Types
include box diagram, flowchart, input-process-output chart, state diagram. Contrast with data
flow diagram. See: call graph, structure chart.
Control Program for Microcomputers. An operating system. A registered trademark of
Digital Research.

controller. Hardware that controls peripheral devices such as a disk or display screen. It
performs the physical data transfers between main memory and the peripheral device.
conversational. (IEEE) Pertaining to an interactive system or mode of operation in which the
interaction between the user and the system resembles a human dialog. Contrast with batch.
See: interactive, on-line, real time.
coroutine. (IEEE) A routine that begins execution at the point at which operation was last
suspended, and that is not required to return control to the program or subprogram that called
it. Contrast with subroutine.
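Python generators behave this way: execution resumes at the point of the last suspension rather than restarting from the top. A minimal sketch (hypothetical example, not from the CBOK):

```python
def counter():
    # Each next() resumes execution right after the yield at which the
    # routine was last suspended, preserving local state (n) in between.
    n = 0
    while True:
        n += 1
        yield n

c = counter()
first = next(c)   # runs until the first yield
second = next(c)  # resumes after the yield, not from the top
```

A subroutine, by contrast, would reinitialize `n` to 0 on every call.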
corrective maintenance. (IEEE) Maintenance performed to correct faults in hardware or
software. Contrast with adaptive maintenance, perfective maintenance.
correctness. (IEEE) The degree to which software is free from faults in its specification,
design and coding. The degree to which software, documentation and other items meet
specified requirements. The degree to which software, documentation and other items meet
user needs and expectations, whether specified or not.
coverage analysis. (NIST) Determining and assessing measures associated with the
invocation of program structural elements to determine the adequacy of a test run. Coverage
analysis is useful when attempting to execute each statement, branch, path, or iterative
structure in a program. Tools that capture this data and provide reports summarizing relevant
information have this feature. See: testing, branch; testing, path; testing, statement.
crash. (IEEE) The sudden and complete failure of a computer system or component.
critical control point. (QA) A function or an area in a manufacturing process or procedure,
the failure of which, or loss of control over, may have an adverse effect on the quality of the
finished product and may result in an unacceptable health risk.
critical design review. (IEEE) A review conducted to verify that the detailed design of one or
more configuration items satisfy specified requirements; to establish the compatibility among
the configuration items and other items of equipment, facilities, software, and personnel; to
assess risk areas for each configuration item; and, as applicable, to assess the results of
producibility analyses, review preliminary hardware product specifications, evaluate
preliminary test planning, and evaluate the adequacy of preliminary operation and support
documents. See: preliminary design review, system design review.
criticality. (IEEE) The degree of impact that a requirement, module, error, fault, failure, or
other item has on the development or operation of a system. Syn: severity.
criticality analysis. (IEEE) Analysis which identifies all software requirements that have
safety implications, and assigns a criticality level to each safety-critical requirement based
upon the estimated risk.
cross-assembler. (IEEE) An assembler that executes on one computer but generates object
code for a different computer.
cross-compiler. (IEEE) A compiler that executes on one computer but generates assembly
code or object code for a different computer.

cursor. (ANSI) A movable, visible mark used to indicate a position of interest on a display
surface.
cyclic redundancy [check] code. A technique for error detection in data communications
used to assure a program or data file has been accurately transferred. The CRC is the result of
a calculation on the set of transmitted bits by the transmitter which is appended to the data. At
the receiver the calculation is repeated and the results compared to the encoded value. The
calculations are chosen to optimize error detection. Contrast with check summation, parity
check.
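The scheme can be illustrated with the standard library's CRC-32: the transmitter appends the computed value, and the receiver recomputes it and compares (the payload bytes are invented for illustration):

```python
import zlib

payload = b"transfer me intact"
crc = zlib.crc32(payload)  # transmitter computes this and appends it to the data

# Receiver side: repeat the calculation and compare with the appended value.
intact = zlib.crc32(b"transfer me intact") == crc   # recomputation matches
garbled = zlib.crc32(b"transfer me 1ntact") == crc  # a one-character change is detected
```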
cyclomatic complexity. (1) (McCabe) The number of independent paths through a program.
(2) (NBS) The cyclomatic complexity of a program is equivalent to the number of decision
statements plus 1.
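The NBS formulation (decision statements plus 1) can be computed mechanically. The sketch below counts `if`/`while`/`for` nodes in a Python fragment; it is a simplified, hypothetical metric that ignores compound conditions:

```python
import ast

SOURCE = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""

def cyclomatic(source):
    # Count decision statements (an elif parses as a nested If) and add 1.
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.While, ast.For))
                    for node in ast.walk(tree))
    return decisions + 1

complexity = cyclomatic(SOURCE)  # 2 decisions + 1 = 3 independent paths
```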
-D-
DAC. digital-to-analog converter.
DFD. data flow diagram.
DMA. direct memory access.
DOS. disk operating system.
data. Representations of facts, concepts, or instructions in a manner suitable for
communication, interpretation, or processing by humans or by automated means.
data analysis. (IEEE) (1) Evaluation of the description and intended use of each data item in
the software design to ensure the structure and intended use will not result in a hazard. Data
structures are assessed for data dependencies that circumvent isolation, partitioning, data
aliasing, and fault containment issues affecting safety, and the control or mitigation of
hazards. (2) Evaluation of the data structure and usage in the code to ensure each is defined
and used properly by the program. Usually performed in conjunction with logic analysis.
data bus. (ANSI) A bus used to communicate data internally and externally to and from a
processing unit or a storage device. See: bus.
data corruption. (ISO) A violation of data integrity. Syn: data contamination.
data dictionary. (IEEE) (1) A collection of the names of all data items used in a software
system, together with relevant properties of those items; e.g., length of data item,
representation, etc. (2) A set of definitions of data flows, data elements, files, data bases, and
processes referred to in a leveled data flow diagram set.
data element. (1) (ISO) A named unit of data that, in some contexts, is considered indivisible
and in other contexts may consist of data items. (2) A named identifier of each of the entities
and their attributes that are represented in a database.
data exception. (IEEE) An exception that occurs when a program attempts to use or access
data incorrectly.

data flow analysis. (IEEE) A software V&V task to ensure that the input and output data and
their formats are properly defined, and that the data flows are correct.
data flow diagram. (IEEE) A diagram that depicts data sources, data sinks, data storage, and
processes performed on data as nodes, and logical flow of data as links between the nodes.
Syn: data flowchart, data flow graph.
data integrity. (IEEE) The degree to which a collection of data is complete, consistent, and
accurate. Syn: data quality.
data item. (ANSI) A named component of a data element. Usually the smallest component.
data set. A collection of related records. Syn: file.
data sink. (IEEE) The equipment which accepts data signals after transmission.
data structure. (IEEE) A physical or logical relationship among data elements, designed to
support specific data manipulation functions.
data structure centered design. A structured software design technique wherein the
architecture of a system is derived from analysis of the structure of the data sets with which
the system must deal.
data structure diagram. (IEEE) A diagram that depicts a set of data elements, their
attributes, and the logical relationships among them. Contrast with data flow diagram. See:
entity-relationship diagram.
data validation. (1) (ISO) A process used to determine if data are inaccurate, incomplete, or
unreasonable. The process may include format checks, completeness checks, check key tests,
reasonableness checks and limit checks. (2) The checking of data for correctness or
compliance with applicable standards, rules, and conventions.
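A sketch of sense (1), combining a format check, a limit check, and a reasonableness check on a single input field (the field name and thresholds are invented for illustration):

```python
def validate_wage_rate(raw):
    """Return a list of validation errors; an empty list means the value passed."""
    errors = []
    try:
        rate = float(raw)              # format check
    except (TypeError, ValueError):
        return ["not a number"]
    if rate <= 0:                      # limit check
        errors.append("must be positive")
    if rate > 10_000:                  # reasonableness check
        errors.append("implausibly large for an hourly rate")
    return errors
```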
database. (ANSI) A collection of interrelated data, often with controlled redundancy,
organized according to a schema to serve one or more applications. The data are stored so that
they can be used by different programs without concern for the data structure or organization.
A common approach is used to add new data and to modify and retrieve existing data. See:
archival database.
database analysis. (IEEE) A software V&V task to ensure that the database structure and
access methods are compatible with the logical design.
database security. The degree to which a database is protected from exposure to accidental or
malicious alteration or destruction.
dead code. Program code statements which can never execute during program operation.
Such code can result from poor coding style, or can be an artifact of previous versions or
debugging efforts. Dead code can be confusing, and is a potential source of erroneous
software changes. See: infeasible path.
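A hypothetical example of how dead code arises: every path through the function returns before the final statement can ever execute.

```python
def absolute(n):
    if n < 0:
        return -n
    else:
        return n
    n += 1  # dead code: both branches above have already returned
```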
debugging. (Myers) Determining the exact nature and location of a program error, and fixing
the error.
decision coverage. (Myers) A test coverage criteria requiring enough test cases such that each
decision has a true and false result at least once, and that each statement is executed at least
once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage,
path coverage, statement coverage.
decision table. (IEEE) A table used to show sets of conditions and the actions resulting from
them.
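In code, a decision table can be sketched as a lookup keyed on condition outcomes; the order-handling rules below are invented for illustration:

```python
# Conditions: (in_stock, payment_received) -> resulting action
DECISION_TABLE = {
    (True,  True):  "ship order",
    (True,  False): "await payment",
    (False, True):  "back-order item",
    (False, False): "reject order",
}

def next_action(in_stock, payment_received):
    # Every combination of conditions maps to exactly one action.
    return DECISION_TABLE[(in_stock, payment_received)]
```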
default. (ANSI) Pertaining to an attribute, value, or option that is assumed when none is
explicitly specified.
default value. A standard setting or state to be taken by the program if no alternate setting or
state is initiated by the system or the user. A value assigned automatically if one is not given
by the user.
defect. See: anomaly, bug, error, exception, fault.
defect analysis. See: failure analysis.
delimiter. (ANSI) A character used to indicate the beginning or the end of a character string.
Syn: separator.
demodulate. Retrieve the information content from a modulated carrier wave; the reverse of
modulate. Contrast with modulate.
demodulation. Converting signals from a wave form [analog] to pulse form [digital].
Contrast with modulation.
dependability. A facet of reliability that relates to the degree of certainty that a system or
component will operate correctly.
design. (IEEE) The process of defining the architecture, components, interfaces, and other
characteristics of a system or component. See: architectural design, preliminary design,
detailed design.
design description. (IEEE) A document that describes the design of a system or component.
Typical contents include system or component architecture, control logic, data structures, data
flow, input/output formats, interface descriptions and algorithms. Syn: design document.
Contrast with specification, requirements. See: software design description.
design level. (IEEE) The design decomposition of the software item; e.g., system, subsystem,
program or module.
design of experiments. A methodology for planning experiments so that data appropriate for
[statistical] analysis will be collected.
design phase. (IEEE) The period of time in the software life cycle during which the designs
for architecture, software components, interfaces, and data are created, documented, and
verified to satisfy requirements.
design requirement. (IEEE) A requirement that specifies or constrains the design of a system
or system component.
design review. (IEEE) A process or meeting during which a system, hardware, or software
design is presented to project personnel, managers, users, customers, or other interested
parties for comment or approval. Types include critical design review, preliminary design
review, system design review.

design specification. See: specification, design.
design standards. (IEEE) Standards that describe the characteristics of a design or a design
description of data or program components.
desk checking. The application of code audit, inspection, review and walkthrough techniques
to source code and other software documents usually by an individual [often by the person
who generated them] and usually done informally.
detailed design. (IEEE) (1) The process of refining and expanding the preliminary design of a
system or component to the extent that the design is sufficiently complete to be implemented.
See: software development process. (2) The result of the process in (1).
developer. A person, or group, that designs and/or builds and/or documents and/or configures
the hardware and/or software of computerized systems.
development methodology. (ANSI) A systematic approach to software creation that defines
development phases and specifies the activities, products, verification procedures, and
completion criteria for each phase. See: incremental development, rapid prototyping, spiral
model, waterfall model.
development standards. Syn: coding standards.
diagnostic. (IEEE) Pertaining to the detection and isolation of faults or failures. For example,
a diagnostic message, a diagnostic manual.
different software system analysis. (IEEE) Analysis of the allocation of software
requirements to separate computer systems to reduce integration and interface errors related to
safety. Performed when more than one software system is being integrated. See: testing,
compatibility.
digital. Pertaining to data [signals] in the form of discrete [separate/pulse form] integral
values. Contrast with analog.
digital-to-analog converter. Output related devices which translate a computer's digital
outputs to the corresponding analog signals needed by an output device such as an actuator.
Contrast with ADC [Analog-to-Digital Converter].
direct memory access. Specialized circuitry or a dedicated microprocessor that transfers data
from memory to memory without using the CPU.
directed graph. (IEEE) A graph in which direction is implied in the internode connections.
Syn: digraph.
disk. Circular rotating magnetic storage hardware. Disks can be hard [fixed] or flexible
[removable] and different sizes.
disk drive. Hardware used to read from or write to a disk or diskette.
disk operating system. An operating system program; e.g., DR-DOS from Digital Research,
MS-DOS from Microsoft Corp., OS/2 from IBM, PC-DOS from IBM, System-7 from Apple.
diskette. A floppy [flexible] disk.

documentation. (ANSI) The aids provided for the understanding of the structure and
intended uses of an information system or its components, such as flowcharts, textual
material, and user manuals.
documentation, level of. (NIST) A description of required documentation indicating its
scope, content, format, and quality. Selection of the level may be based on project cost,
intended usage, extent of effort, or other factors; e.g., level of concern.
documentation plan. (NIST) A management document describing the approach to a
documentation effort. The plan typically describes what documentation types are to be
prepared, what their contents are to be, when this is to be done and by whom, how it is to be
done, and what are the available resources and external factors affecting the results.
documentation, software. (NIST) Technical data or information, including computer listings
and printouts, in human readable form, that describe or specify the design or details, explain
the capabilities, or provide operating instructions for using the software to obtain desired
results from a software system. See: specification; specification, requirements; specification,
design; software design description; test plan; test report; user's guide.
drift. (ISO) The unwanted change of the value of an output signal of a device over a period of
time when the values of all input signals to the device are kept constant.
driver. A program that links a peripheral device or internal function to the operating system,
and provides for activation of all device functions. Syn: device driver. Contrast with test
driver.
duplex transmission. (ISO) Data transmission in both directions at the same time.
dynamic analysis. (NBS) Analysis that is performed by executing the program code. Contrast
with static analysis. See: testing.
-E-
EBCDIC. extended binary coded decimal interchange code.
EEPROM. electrically erasable programmable read only memory.
EMI. electromagnetic interference.
EPROM. erasable programmable read only memory.
ESD. electrostatic discharge.
ESDI. enhanced small device interface.
editing. (NIST) Modifying the content of the input by inserting, deleting, or moving
characters, numbers, or data.
electrically erasable programmable read only memory. Chips which may be programmed
and erased numerous times like an EPROM. However an EEPROM is erased electrically.
This means this IC does not necessarily have to be removed from the circuit in which it is
mounted in order to erase and reprogram the memory.

electromagnetic interference. Low frequency electromagnetic waves that emanate from
electromechanical devices. An electromagnetic disturbance caused by such radiating and
transmitting sources as heavy duty motors and power lines can induce unwanted voltages in
electronic circuits, damage components and cause malfunctions. See: radiofrequency
interference.
electronic media. Hardware intended to store binary data; e.g., integrated circuit, magnetic
tape, magnetic disk.
electrostatic discharge. The movement of static electricity, e.g. sparks, from a non-conductive surface to an approaching conductive object that can damage or destroy
semiconductors and other circuit components. Static electricity can build on paper, plastic or
other non-conductors and can be discharged by human skin, e.g. finger, contact. It can also be
generated by scuffing shoes on a carpet or by brushing a non-conductor. MOSFETs and
CMOS logic ICs are especially vulnerable because it causes internal local heating that melts
or fractures the dielectric silicon oxide that insulates gates from other internal structures.
embedded computer. A device which has its own computing power dedicated to specific
functions, usually consisting of a microprocessor and firmware. The computer becomes an
integral part of the device as opposed to devices which are controlled by an independent,
stand-alone computer. It implies software that integrates operating system and application
functions.
embedded software. (IEEE) Software that is part of a larger system and performs some of the
requirements of that system; e.g., software used in an aircraft or rapid transit system. Such
software does not provide an interface with the user. See: firmware.
emulation. (IEEE) A model that accepts the same inputs and produces the same outputs as a
given system. To imitate one system with another. Contrast with simulation.
emulator. (IEEE) A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system. Contrast with simulator.
encapsulation. (IEEE) A software development technique that consists of isolating a system
function or a set of data and the operations on those data within a module and providing
precise specifications for the module. See: abstraction, information hiding, software
engineering.
end user. (ANSI) (1) A person, device, program, or computer system that uses an information
system for the purpose of data processing in information exchange. (2) A person whose
occupation requires the use of an information system but does not require any knowledge of
computers or computer programming. See: user.
enhanced small device interface. A standard interface for hard disks introduced in 1983
which provides for faster data transfer compared to ST-506. Contrast with ST-506, IDE,
SCSI.
entity relationship diagram. (IEEE) A diagram that depicts a set of real-world entities and
the logical relationships among them. See: data structure diagram.
environment. (ANSI) (1) Everything that supports a system or the performance of a function.
(2) The conditions that affect the performance of a system or function.

equivalence class partitioning. (Myers) Partitioning the input domain of a program into a
finite number of classes [sets], to identify a minimal set of well selected test cases to represent
these classes. There are two types of input equivalence classes, valid and invalid. See: testing,
functional.
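As a sketch of the idea, suppose a hypothetical specification accepts ages 0 through 120: the input domain splits into one valid and two invalid equivalence classes, each represented by a single test value:

```python
def accepts_age(age):
    # Hypothetical system under test: valid input is 0..120 inclusive.
    return 0 <= age <= 120

# One representative value stands in for every member of its class.
representatives = {
    "valid (0..120)":      50,
    "invalid (below 0)":   -1,
    "invalid (above 120)": 200,
}
results = {name: accepts_age(value) for name, value in representatives.items()}
```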
erasable programmable read only memory. Chips which may be programmed by using a
PROM programming device. Before programming each bit is set to the same logical state,
either 1 or 0. Each bit location may be thought of as a small capacitor capable of storing an
electrical charge. The logical state is established by charging, via an electrical current, all bits
whose states are to be changed from the default state. EPROMs may be erased and
reprogrammed because the electrical charge at the bit locations can be bled off [i.e. reset to the
default state] by exposure to ultraviolet light through the small quartz window on top of the
IC. After programming, the IC's window must be covered to prevent exposure to UV light
until it is desired to reprogram the chip. An EPROM eraser is a device for exposing the IC's
circuits to UV light of a specific wavelength for a certain amount of time.
error. (ISO) A discrepancy between a computed, observed, or measured value or condition
and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect,
exception, fault.
error analysis. See: debugging, failure analysis.
error detection. Techniques used to identify errors in data transfers. See: check summation,
cyclic redundancy check [CRC], parity check, longitudinal redundancy.
error guessing. (NBS) Test data selection technique. The selection criterion is to pick values
that seem likely to cause errors. See: special test data; testing, special case.
error seeding. (IEEE) The process of intentionally adding known faults to those already in a
computer program for the purpose of monitoring the rate of detection and removal, and
estimating the number of faults remaining in the program. Contrast with mutation analysis.
event table. A table which lists events and the corresponding specified effect[s] of or
reaction[s] to each event.
evolutionary development. See: spiral model.
exception. (IEEE) An event that causes suspension of normal program execution. Types
include addressing exception, data exception, operation exception, overflow exception,
protection exception, underflow exception. See: anomaly, bug, defect, error, fault.
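In most languages an exception transfers control from the point of suspension to a handler; a minimal Python sketch (hypothetical example):

```python
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # Normal execution was suspended at a / b; control resumed here.
        return None

ok = safe_divide(10, 4)     # normal execution completes
caught = safe_divide(1, 0)  # the exception was raised and handled
```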
exception conditions/responses table. A special type of event table.
execution trace. (IEEE) A record of the sequence of instructions executed during the
execution of a computer program. Often takes the form of a list of code labels encountered as
the program executes. Syn: code trace, control flow trace. See: retrospective trace, subroutine
trace, symbolic trace, variable trace.

extended ASCII. The second half of the ASCII character set, 128 through 255. The symbols are
defined by IBM for the PC and by other vendors for proprietary use. It is non-standard ASCII.
See: ASCII.
extended binary coded decimal interchange code. An eight bit code used to represent
specific data characters in some computers; e.g., IBM mainframe computers.
extremal test data. (NBS) Test data that is at the extreme or boundary of the domain of an
input variable or which produces results at the boundary of an output domain. See: testing,
boundary value.
-F-
FDD. floppy disk drive.
FIPS. Federal Information Processing Standards.
FMEA. Failure Modes and Effects Analysis.
FMECA. Failure Modes and Effects Criticality Analysis.
FTA. Fault Tree Analysis.
FTP. file transfer protocol.
Fagan inspection. See: code inspection.
fail-safe. (IEEE) A system or component that automatically places itself in a safe operational
mode in the event of a failure.
failure. (IEEE) The inability of a system or component to perform its required functions
within specified performance requirements. See: bug, crash, exception, fault.
failure analysis. Determining the exact nature and location of a program error in order to fix
the error, to identify and fix other similar errors, and to initiate corrective action to prevent
future occurrences of this type of error. Contrast with debugging.
Failure Modes and Effects Analysis. (IEC) A method of reliability analysis intended to
identify failures, at the basic component level, which have significant consequences affecting
the system performance in the application considered.
Failure Modes and Effects Criticality Analysis. (IEC) A logical extension of FMEA which
analyzes the severity of the consequences of failure.
fault. An incorrect step, process, or data definition in a computer program which causes the
program to perform in an unintended or unanticipated manner. See: anomaly, bug, defect,
error, exception.
fault seeding. See: error seeding.
Fault Tree Analysis. (IEC) The identification and analysis of conditions and factors which
cause or contribute to the occurrence of a defined undesirable event, usually one which
significantly affects system performance, economy, safety or other required characteristics.

feasibility study. Analysis of the known or anticipated need for a product, system, or
component to assess the degree to which the requirements, designs, or plans can be
implemented.
Federal Information Processing Standards. Standards published by U.S. Department of
Commerce, National Institute of Standards and Technology, formerly National Bureau of
Standards. These standards are intended to be binding only upon federal agencies.
fiber optics. Communications systems that use optical fibers for transmission. See: optical
fiber.
field. (1) (ISO) On a data medium or in storage, a specified area used for a particular class of
data; e.g., a group of character positions used to enter or display wage rates on a screen. (2)
Defined logical data that is part of a record. (3) The elementary unit of a record that may
contain a data item, a data aggregate, a pointer, or a link. (4) A discrete location in a database
that contains a unique piece of information. A field is a component of a record. A record is a
component of a database.
file. (1) (ISO) A set of related records treated as a unit; e.g., in stock control, a file could
consist of a set of invoices. (2) The largest unit of storage structure that consists of a named
collection of all occurrences in a database of records of a particular record type. Syn: data set.
file maintenance. (ANSI) The activity of keeping a file up to date by adding, changing, or
deleting data.
file transfer protocol. (1) Communications protocol that can transmit binary and ASCII data
files without loss of data. See: Kermit, Xmodem, Ymodem, Zmodem. (2) TCP/IP protocol
that is used to log onto the network, list directories, and copy files. It can also translate
between ASCII and EBCDIC. See: TCP/IP.
firmware. (IEEE) The combination of a hardware device; e.g., an IC; and computer
instructions and data that reside as read only software on that device. Such software cannot be
modified by the computer during processing. See: embedded software.
flag. (IEEE) A variable that is set to a prescribed state, often "true" or "false", based on the
results of a process or the occurrence of a specified condition. Syn: indicator.
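For illustration, a minimal Python sketch of a flag set by a specified condition; the readings and the limit of 50 are hypothetical:

```python
# A flag (indicator) set to a prescribed state -- here "true" -- when a
# specified condition occurs during processing. Values are hypothetical.
readings = [10, 52, 48]
limit_exceeded = False          # the flag, initially false
for r in readings:
    if r > 50:                  # the specified condition
        limit_exceeded = True
print(limit_exceeded)           # the flag now records that the event occurred
```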
flat file. A data file that does not physically interconnect with or point to other files. Any
relationship between two flat files is logical; e.g., matching account numbers.
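A brief Python sketch of the logical relationship described above; the account records and invoice amounts are hypothetical:

```python
# Hypothetical records read from two flat files; nothing in either file
# physically points at the other.
accounts = [{"acct": 101, "name": "Ada"}, {"acct": 102, "name": "Grace"}]
invoices = [{"acct": 101, "amount": 250.0}, {"acct": 101, "amount": 75.0}]

# The relationship is purely logical: matching account numbers.
name_by_acct = {a["acct"]: a["name"] for a in accounts}
joined = [(name_by_acct[i["acct"]], i["amount"]) for i in invoices]
print(joined)   # [('Ada', 250.0), ('Ada', 75.0)]
```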
floppy disk. See: diskette.
floppy disk drive. See: disk, disk drive.
flowchart or flow diagram. (1) (ISO) A graphical representation in which symbols are used
to represent such things as operations, data, flow direction, and equipment, for the definition,
analysis, or solution of a problem. (2) (IEEE) A control flow diagram in which suitably
annotated geometrical figures are used to represent operations, data, or equipment, and arrows
are used to indicate the sequential flow from one to another. Syn: flow diagram. See: block
diagram, box diagram, bubble chart, graph, input-process-output chart, structure chart.
formal qualification review. (IEEE) The test, inspection, or analytical process by which a
group of configuration items comprising a system is verified to have met specific contractual
performance requirements. Contrast with code review, design review, requirements review,
test readiness review.
FORTRAN. An acronym for FORmula TRANslator, the first widely used high-level
programming language. Intended primarily for use in solving technical problems in
mathematics, engineering, and science.
full duplex. See: duplex transmission.
function. (1) (ISO) A mathematical entity whose value, namely, the value of the dependent
variable, depends in a specified manner on the values of one or more independent variables,
with not more than one value of the dependent variable corresponding to each permissible
combination of values from the respective ranges of the independent variables. (2) A specific
purpose of an entity, or its characteristic action. (3) In data communication, a machine action
such as carriage return or line feed.
functional analysis. (IEEE) Verifies that each safety-critical software requirement is covered
and that an appropriate criticality level is assigned to each software element.
functional configuration audit. (IEEE) An audit conducted to verify that the development of
a configuration item has been completed satisfactorily, that the item has achieved the
performance and functional characteristics specified in the functional or allocated
configuration identification, and that its operational and support documents are complete and
satisfactory. See: physical configuration audit.
functional decomposition. See: modular decomposition.
functional design. (IEEE) (1) The process of defining the working relationships among the
components of a system. See: architectural design. (2) The result of the process in (1).
functional requirement. (IEEE) A requirement that specifies a function that a system or
system component must be able to perform.
-G-
GB. gigabyte.
gigabyte. Approximately one billion bytes; precisely 2^30 or 1,073,741,824 bytes. See:
kilobyte, megabyte.
graph. (IEEE) A diagram or other representation consisting of a finite set of nodes and
internode connections called edges or arcs. Contrast with blueprint. See: block diagram, box
diagram, bubble chart, call graph, cause-effect graph, control flow diagram, data flow
diagram, directed graph, flowchart, input-process-output chart, structure chart, transaction
flowgraph.
graphic software specifications. Documents such as charts, diagrams, graphs which depict
program structure, states of data, control, transaction flow, HIPO, and cause-effect
relationships; and tables including truth, decision, event, state-transition, module interface,
exception conditions/responses necessary to establish design integrity.

-H-
HDD. hard disk drive.
HIPO. hierarchy of input-processing-output.
Hz. hertz.
half duplex. Transmissions [communications] which occur in only one direction at a time, but
that direction can change.
handshake. An interlocked sequence of signals between connected components in which
each component waits for the acknowledgement of its previous signal before proceeding with
its action, such as data transfer.
hard copy. Printed, etc., output on paper.
hard disk drive. Hardware used to read from or write to a hard disk. See: disk, disk drive.
hard drive. Syn: hard disk drive.
hardware. (ISO) Physical equipment, as opposed to programs, procedures, rules, and
associated documentation. Contrast with software.
hazard. (DOD) A condition that is prerequisite to a mishap.
hazard analysis. A technique used to identify conceivable failures affecting system
performance, human safety or other required characteristics. See: FMEA, FMECA, FTA,
software hazard analysis, software safety requirements analysis, software safety design
analysis, software safety code analysis, software safety test analysis, software safety change
analysis.
hazard probability. (DOD) The aggregate probability of occurrence of the individual events
that create a specific hazard.
hazard severity. (DOD) An assessment of the consequence of the worst credible mishap that
could be caused by a specific hazard.
hertz. A unit of frequency equal to one cycle per second.
hexadecimal. The base 16 number system. Digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E,
& F. This is a convenient form in which to examine binary data because it collects 4 binary
digits per hexadecimal digit; e.g., decimal 15 is 1111 in binary and F in hexadecimal.
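The entry's worked example can be verified directly in Python; this sketch adds nothing beyond the standard conversions:

```python
# Decimal 15 is 1111 in binary and F in hexadecimal, as the entry states.
value = 15
binary = format(value, "b")        # '1111'
hexadecimal = format(value, "X")   # 'F'

# Each hexadecimal digit collects exactly four binary digits.
assert int(binary, 2) == int(hexadecimal, 16) == value
print(binary, hexadecimal)         # 1111 F
```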
hierarchical decomposition. See: modular decomposition.
hierarchy of input-processing-output. See: input- processing-output.
hierarchy of input-processing-output chart. See: input-process-output chart.
high-level language. A programming language which requires little knowledge of the target
computer, can be translated into several different machine languages, allows symbolic naming
of operations and addresses, provides features designed to facilitate expression of data
structures and program logic, and usually results in several machine instructions for each
program statement. Examples are PL/1, COBOL, BASIC, FORTRAN, Ada, Pascal, and "C".
Contrast with assembly language.
-I-
I/O. input/output.
IC. integrated circuit.
IDE. integrated drive electronics.
IEC. International Electrotechnical Commission.
IEEE. Institute of Electrical and Electronic Engineers.
ISO. International Organization for Standardization.
ITU-TSS. International Telecommunications Union - Telecommunications Standards
Section.
implementation. The process of translating a design into hardware components, software
components, or both. See: coding.
implementation phase. (IEEE) The period of time in the software life cycle during which a
software product is created from design documentation and debugged.
implementation requirement. (IEEE) A requirement that specifies or constrains the coding
or construction of a system or system component.
incremental integration. A structured reformation of the program module by module or
function by function with an integration test being performed following each addition.
Methods include top-down, breadth-first, depth-first, bottom-up. Contrast with
nonincremental integration.
incremental development. (IEEE) A software development technique in which requirements
definition, design, implementation, and testing occur in an overlapping, iterative [rather than
sequential] manner, resulting in incremental completion of the overall software product.
Contrast with rapid prototyping, spiral model, waterfall model.
industry standard. (QA) Procedures and criteria recognized as acceptable practices by peer
professional, credentialing, or accrediting organizations.
infeasible path. (NBS) A sequence of program statements that can never be executed. Syn:
dead code.
information hiding. The practice of "hiding" the details of a function or structure, making
them inaccessible to other parts of the program. See: abstraction, encapsulation, software
engineering.
input/output. Each microprocessor and each computer needs a way to communicate with the
outside world in order to get the data needed for its programs and in order to communicate the
results of its data manipulations. This is accomplished through I/O ports and devices.

input-process-output chart. (IEEE) A diagram of a software system or module, consisting of
a rectangle on the left listing inputs, a rectangle in the center listing processing steps, a
rectangle on the right listing outputs, and arrows connecting inputs to processing steps and
processing steps to outputs. See: block diagram, box diagram, bubble chart, flowchart, graph,
structure chart.
input-processing-output. A structured software design technique; identification of the steps
involved in each process to be performed and identifying the inputs to and outputs from each
step. A refinement called hierarchical input-process-output identifies the steps, inputs, and
outputs at both general and detailed levels of detail.
inspection. A manual testing technique in which program documents [specifications
(requirements, design), source code or user's manuals] are examined in a very formal and
disciplined manner to discover errors, violations of standards and other problems. Checklists
are a typical vehicle used in accomplishing this technique. See: static analysis, code audit,
code inspection, code review, code walkthrough.
installation. (ANSI) The phase in the system life cycle that includes assembly and testing of
the hardware and software of a computerized system. Installation includes installing a new
computer system, new software or hardware, or otherwise modifying the current system.
installation and checkout phase. (IEEE) The period of time in the software life cycle during
which a software product is integrated into its operational environment and tested in this
environment to ensure that it performs as required.
installation qualification. See: qualification, installation.
Institute of Electrical and Electronic Engineers. 345 East 47th Street, New York, NY
10017. An organization involved in the generation and promulgation of standards. IEEE
standards represent the formalization of current norms of professional practice through the
process of obtaining the consensus of concerned, practicing professionals in the given field.
instruction. (1) (ANSI/IEEE) A program statement that causes a computer to perform a
particular operation or set of operations. (2) (ISO) In a programming language, a meaningful
expression that specifies one operation and identifies its operands, if any.
instruction set. (1) (IEEE) The complete set of instructions recognized by a given computer
or provided by a given programming language. (2) (ISO) The set of the instructions of a
computer, of a programming language, or of the programming languages in a programming
system. See: computer instruction set.
instrumentation. (NBS) The insertion of additional code into a program in order to collect
information about program behavior during program execution. Useful for dynamic analysis
techniques such as assertion checking, coverage analysis, tuning.
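A minimal Python sketch of inserted code collecting information about program behavior; the function and counts are hypothetical:

```python
# Probe data collected by the inserted instrumentation; values are hypothetical.
calls = {"count": 0}

def compute(x):
    calls["count"] += 1     # inserted probe; not part of the computation itself
    return x * x

results = [compute(i) for i in range(3)]
print(results, calls["count"])   # behavior recorded during execution
```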
integrated circuit. Small wafers of semiconductor material [silicon] etched or printed with
extremely small electronic switching circuits. Syn: chip.
integrated drive electronics. A standard interface for hard disks which provides for building
most of the controller circuitry into the disk drive to save space. IDE controllers are
functionally equivalent to ST-506 standard controllers. Contrast with EDSI, SCSI, ST-506.

interactive. (IEEE) Pertaining to a system or mode of operation in which each user entry
causes a response from or action by the system. Contrast with batch. See: conversational, on-line, real time.
interface. (1) (ISO) A shared boundary between two functional units, defined by functional
characteristics, common physical interconnection characteristics, signal characteristics, and
other characteristics, as appropriate. The concept involves the specification of the connection
of two devices having different functions. (2) A point of communication between two or more
processes, persons, or other physical entities. (3) A peripheral device which permits two or
more devices to communicate.
interface analysis. (IEEE) Evaluation of: (1) software requirements specifications with
hardware, user, operator, and software interface requirements documentation, (2) software
design description records with hardware, operator, and software interface requirements
specifications, (3) source code with hardware, operator, and software interface design
documentation, for correctness, consistency, completeness, accuracy, and readability. Entities
to evaluate include data items and control items.
interface requirement. (IEEE) A requirement that specifies an external item with which a
system or system component must interact, or sets forth constraints on formats, timing, or
other factors caused by such an interaction.
International Electrotechnical Commission. Geneva, Switzerland. An organization that sets
standards for electronic products and components which are adopted by the safety standards
agencies of many countries.
International Organization for Standardization. Geneva, Switzerland. An organization
that sets international standards. It deals with all fields except electrical and electronics which
is governed by IEC. Syn: International Standards Organization.
International Standards Organization. See: International Organization for Standardization.
International Telecommunications Union - Telecommunications Standards Section.
Geneva, Switzerland. Formerly, Consultative Committee for International Telephony and
Telegraphy. An international organization for communications standards.
interpret. (IEEE) To translate and execute each statement or construct of a computer program
before translating and executing the next. Contrast with assemble, compile.
interpreter. (IEEE) A computer program that translates and executes each statement or
construct of a computer program before translating and executing the next. The interpreter
must be resident in the computer each time a program [source code file] written in an
interpreted language is executed. Contrast with assembler, compiler.
interrupt. (1) The suspension of a process to handle an event external to the process. (2) A
technique to notify the CPU that a peripheral device needs service, i.e., the device has data for
the processor or the device is awaiting data from the processor. The device sends a signal,
called an interrupt, to the processor. The processor interrupts its current program, stores its
current operating conditions, and executes a program to service the device sending the
interrupt. After the device is serviced, the processor restores its previous operating conditions
and continues executing the interrupted program. A method for handling constantly changing
data. Contrast with polling.

interrupt analyzer. A software tool which analyzes potential conflicts in a system as a result
of the occurrences of interrupts.
invalid inputs. (1) (NBS) Test data that lie outside the domain of the function the program
represents. (2) These are not only inputs outside the valid range for data to be input, i.e. when
the specified input range is 50 to 100, but also unexpected inputs, especially when these
unexpected inputs may easily occur; e.g., the entry of alpha characters or special keyboard
characters when only numeric data is valid, or the input of abnormal command sequences to a
program.
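The range example in the entry (valid inputs 50 to 100) can be sketched as a hypothetical validator exercised with both classes of invalid input:

```python
# Hypothetical numeric field whose specified input range is 50 to 100,
# matching the entry's example.
def valid(raw):
    if not raw.isdigit():            # rejects alpha and special characters
        return False
    return 50 <= int(raw) <= 100

assert valid("50") and valid("100")  # valid boundary inputs
assert not valid("49")               # below the specified range
assert not valid("101")              # above the specified range
assert not valid("abc")              # alpha characters where numeric is valid
assert not valid("7%")               # special keyboard characters
```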
I/O port. Input/output connector.
-J-
JCL. job control language.
job. (IEEE) A user-defined unit of work that is to be accomplished by a computer. For
example, the compilation, loading, and execution of a computer program. See: job control
language.
job control language. (IEEE) A language used to identify a sequence of jobs, describe their
requirements to an operating system, and control their execution.
-K-
KB. kilobyte.
KLOC. one thousand lines of code.
Kermit. An asynchronous file transfer protocol developed at Columbia University, noted for
its accuracy over noisy lines. Several versions exist. Contrast with Xmodem, Ymodem,
Zmodem.
key. One or more characters, usually within a set of data, that contains information about the
set, including its identification.
key element. (QA) An individual step in a critical control point of the manufacturing
process.
kilobyte. Approximately one thousand bytes. This symbol is used to describe the size of
computer memory or disk storage space. Because computers use a binary number system, a
kilobyte is precisely 2^10 or 1024 bytes.
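The binary sizes defined in these entries (2^10, 2^20, and 2^30 bytes) follow from successive powers of two and can be checked directly:

```python
# Binary storage units as defined in these entries.
KB = 2 ** 10
MB = 2 ** 20
GB = 2 ** 30

assert KB == 1024
assert MB == 1_048_576
assert GB == 1_073_741_824
assert MB == KB * KB and GB == KB * MB   # each unit is 1024 of the previous
```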
-L-
LAN. local area network.

LSI. large scale integration.
ladder logic. A graphical, problem oriented, programming language which replicates
electronic switching blueprints.
language. See: programming language.
large scale integration. A classification of ICs [chips] based on their size as expressed by the
number of circuits or logic gates they contain. An LSI IC contains 3,000 to 100,000
transistors.
latency. (ISO) The time interval between the instant at which a CPU's instruction control unit
initiates a call for data and the instant at which the actual transfer of the data starts. Syn:
waiting time.
latent defect. See: bug, fault.
life cycle. See: software life cycle.
life cycle methodology. The use of any one of several structured methods to plan, design,
implement, test, and operate a system from its conception to the termination of its use. See:
waterfall model.
linkage editor. (IEEE) A computer program that creates a single load module from two or
more independently translated object modules or load modules by resolving cross references
among the modules and, possibly, by relocating elements. May be part of a loader. Syn: link
editor, linker.
loader. A program which copies other [object] programs from auxiliary [external] memory to
main [internal] memory prior to its execution.
local area network. A communications network that serves users within a confined
geographical area. It is made up of servers, workstations, a network operating system, and a
communications link. Contrast with MAN, WAN.
logic analysis. (IEEE) (1) Evaluates the safety-critical equations, algorithms, and control logic of
the software design. (2) Evaluates the sequence of operations represented by the coded
program and detects programming errors that might create hazards.
longitudinal redundancy check. (IEEE) A system of error control based on the formation of
a block check following preset rules.
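As one illustration, a common preset rule for forming the block check is a byte-wise exclusive-OR over the block; this Python sketch assumes that rule (the definition above leaves the rule unspecified):

```python
from functools import reduce

# One common "preset rule": exclusive-OR every byte of the block.
# The receiver recomputes the check and compares it with the one received.
def lrc(block):
    return reduce(lambda acc, b: acc ^ b, block, 0)

block = b"DATA"
check = lrc(block)
corrupted = bytes([block[0] ^ 0x01]) + block[1:]   # one flipped bit in transit
assert lrc(block) == check        # intact block passes the check
assert lrc(corrupted) != check    # the single-bit error is detected
```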
low-level language. See: assembly language. The advantage of assembly language is that it
provides bit-level control of the processor allowing tuning of the program for optimal speed
and performance. For time critical operations, assembly language may be necessary in order
to generate code which executes fast enough for the required operations. The disadvantage of
assembly language is the high-level of complexity and detail required in the programming.
This makes the source code harder to understand, thus increasing the chance of introducing
errors during program development and maintenance.
-M-

MAN. metropolitan area network.
Mb. megabit.
MB. megabyte.
MHz. megahertz.
MIPS. million instructions per second.
MOS. metal-oxide semiconductor.
MOSFET. metal-oxide semiconductor field effect transistor.
MSI. medium scale integration.
MTBF. mean time between failures.
MTTR. mean time to repair.
MTTF. mean time to failure.
machine code. (IEEE) Computer instructions and definitions expressed in a form [binary
code] that can be recognized by the CPU of a computer. All source code, regardless of the
language in which it was programmed, is eventually converted to machine code. Syn: object
code.
machine language. See: machine code.
macro. (IEEE) In software engineering, a predefined sequence of computer instructions that
is inserted into a program, usually during assembly or compilation, at each place that its
corresponding macroinstruction appears in the program.
macroinstruction. (IEEE) A source code instruction that is replaced by a predefined
sequence of source instructions, usually in the same language as the rest of the program and
usually during assembly or compilation.
main memory. A non-moving storage device utilizing one of a number of types of electronic
circuitry to store information.
main program. (IEEE) A software component that is called by the operating system of a
computer and that usually calls other software components. See: routine, subprogram.
mainframe. Term used to describe a large computer.
maintainability. (IEEE) The ease with which a software system or component can be
modified to correct faults, improve performance or other attributes, or adapt to a changed
environment. Syn: modifiability.
maintenance. (QA) Activities such as adjusting, cleaning, modifying, overhauling equipment
to assure performance in accordance with requirements. Maintenance to a software system
includes correcting software errors, adapting software to a new environment, or making
enhancements to software. See: adaptive maintenance, corrective maintenance, perfective
maintenance.

mean time between failures. A measure of the reliability of a computer system, equal to
average operating time of equipment between failures, as calculated on a statistical basis from
the known failure rates of various components of the system.
mean time to failure. A measure of reliability, giving the average time before the first failure.
mean time to repair. A measure of reliability of a piece of repairable equipment, giving the
average time between repairs.
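The three reliability measures above can be sketched with hypothetical figures; the availability calculation at the end is a commonly derived measure and an assumption here, not part of these definitions:

```python
# Hypothetical reliability log for a repairable system.
uptime_hours = 9_900.0    # total operating time
repair_hours = 100.0      # total time spent repairing
failures = 4

mtbf = uptime_hours / failures    # mean time between failures
mttr = repair_hours / failures    # mean time to repair
assert mtbf == 2475.0
assert mttr == 25.0

# Availability, derived from the two measures (an assumption, not from
# this glossary's definitions).
availability = mtbf / (mtbf + mttr)
print(round(availability, 2))     # 0.99
```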
measure. (IEEE) A quantitative assessment of the degree to which a software product or
process possesses a given attribute.
measurable. Capable of being measured.
measurement. The process of determining the value of some quantity in terms of a standard
unit.
medium scale integration. A classification of ICs [chips] based on their size as expressed by
the number of circuits or logic gates they contain. An MSI IC contains 100 to 3,000
transistors.
megabit. Approximately one million bits. Precisely 1024 K bits, 2^20 bits, or 1,048,576 bits.
megabyte. Approximately one million bytes. Precisely 1024 K bytes, 2^20 bytes, or 1,048,576
bytes. See: kilobyte.
megahertz. A unit of frequency equal to one million cycles per second.
memory. Any device or recording medium into which binary data can be stored and held, and
from which the entire original data can be retrieved. The two types of memory are main; e.g.,
ROM, RAM, and auxiliary; e.g., tape, disk. See: storage device.
menu. A computer display listing a number of options; e.g., functions, from which the
operator may select one. Sometimes used to denote a list of programs.
metal-oxide semiconductor. One of two major categories of chip design [the other is
bipolar]. It derives its name from its use of metal, oxide and semiconductor layers. There are
several varieties of MOS technologies including PMOS, NMOS, CMOS.
metal-oxide semiconductor field effect transistor. Common type of transistor fabricated as
a discrete component or into MOS integrated circuits.
metric based test data generation. (NBS) The process of generating test sets for structural
testing based upon use of complexity metrics or coverage metrics.
metric, software quality. (IEEE) A quantitative measure of the degree to which software
possesses a given attribute which affects its quality.
metropolitan area network. Communications network that covers a geographical area such
as a city or a suburb. Contrast with LAN, WAN.
microcode. Permanent memory that holds the elementary circuit operations a computer must
perform for each instruction in its instruction set.
microcomputer. A term used to describe a small computer. See: microprocessor.

microprocessor. A CPU existing on a single IC. Frequently synonymous with a
microcomputer.
million instructions per second. Execution speed of a computer. MIPS rate is one factor in
overall performance. Bus and channel speed and bandwidth, memory speed, memory
management techniques, and system software also determine total throughput.
minicomputer. A term used to describe a medium sized computer.
mishap. (DOD) An unplanned event or series of events resulting in death, injury,
occupational illness, or damage to or loss of data and equipment or property, or damage to the
environment. Syn: accident.
mnemonic. A symbol chosen to assist human memory and understanding; e.g., an
abbreviation such as "MPY" for multiply.
modeling. Construction of programs used to model the effects of a postulated environment for
investigating the dimensions of a problem for the effects of algorithmic processes on
responsive targets.
modem. (ISO) A functional unit that modulates and demodulates signals. One of the
functions of a modem is to enable digital data to be transmitted over analog transmission
facilities. The term is a contraction of modulator-demodulator.
modem access. Using a modem to communicate between computers. MODEM access is
often used between a remote location and a computer that has a master database and
applications software, the host computer.
modifiability. See: maintainability.
modular decomposition. A structured software design technique, breaking a system into
components to facilitate design and development. Syn: functional decomposition, hierarchical
decomposition. See: abstraction.
modular software. (IEEE) Software composed of discrete parts. See: structured design.
modularity. (IEEE) The degree to which a system or computer program is composed of
discrete components such that a change to one component has minimal impact on other
components.
modulate. Varying the characteristics of a wave in accordance with another wave or signal,
usually to make user equipment signals compatible with communication facilities. Contrast
with demodulate.
modulation. Converting signals from a binary-digit pattern [pulse form] to a continuous wave
form [analog]. Contrast with demodulation.
module. (1) In programming languages, a self-contained subdivision of a program that may
be separately compiled. (2) A discrete set of instructions, usually processed as a unit, by an
assembler, a compiler, a linkage editor, or similar routine or subroutine. (3) A packaged
functional hardware unit suitable for use with other components. See: unit.
module interface table. A table which provides a graphic illustration of the data elements
whose values are input to and output from a module.

multi-processing. (IEEE) A mode of operation in which two or more processes [programs]
are executed concurrently [simultaneously] by separate CPUs that have access to a common
main memory. Contrast with multi-programming. See: multi-tasking, time sharing.
multi-programming. (IEEE) A mode of operation in which two or more programs are
executed in an interleaved manner by a single CPU. Syn: parallel processing. Contrast with
multi-tasking. See: time sharing.
multi-tasking. (IEEE) A mode of operation in which two or more tasks are executed in an
interleaved manner. Syn: parallel processing. See: multi-processing, multi-programming, time
sharing.
multiple condition coverage. (Myers) A test coverage criteria which requires enough test
cases such that all possible combinations of condition outcomes in each decision, and all
points of entry, are invoked at least once. Contrast with branch coverage, condition coverage,
decision coverage, path coverage, statement coverage.
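A Python sketch of the criterion for a hypothetical two-condition decision; with n independent conditions, 2^n combinations of outcomes are required:

```python
# Hypothetical decision with two conditions: "a > 0 and b > 0".
def decision(a, b):
    return a > 0 and b > 0

# Multiple condition coverage requires test cases for every combination of
# condition outcomes: (T,T), (T,F), (F,T), (F,F).
cases = {
    (True, True): (1, 1),
    (True, False): (1, -1),
    (False, True): (-1, 1),
    (False, False): (-1, -1),
}
for outcomes, (a, b) in cases.items():
    assert (a > 0, b > 0) == outcomes   # each combination is actually exercised
    decision(a, b)                      # and the decision is invoked with it
print(len(cases))                       # 4 = 2**2 combinations for 2 conditions
```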
multiplexer. A device which takes information from any of several sources and places it on a
single line or sends it to a single destination.
multipurpose systems. (IEEE) Computer systems that perform more than one primary
function or task are considered to be multipurpose. In some situations the computer may be
linked or networked with other computers that are used for administrative functions; e.g.,
accounting, word processing.
mutation analysis. (NBS) A method to determine test set thoroughness by measuring the
extent to which a test set can discriminate the program from slight variants [mutants] of the
program. Contrast with error seeding.
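A minimal Python sketch with a hypothetical program and a single mutant (the '+' operator changed to '-'):

```python
# Hypothetical program under test, and a "mutant" -- a slight variant in
# which one operator has been changed.
def program(a, b):
    return a + b

def mutant(a, b):
    return a - b

# A test set is thorough enough to "kill" this mutant when at least one of
# its tests produces different results for the program and the mutant.
tests = [(0, 0), (2, 3)]
killed = any(program(a, b) != mutant(a, b) for a, b in tests)
assert killed    # (2, 3): program gives 5, mutant gives -1
```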
-N-
NBS. National Bureau of Standards.
NIST. National Institute for Standards and Technology.
NMI. non-maskable interrupt.
NMOS. n-channel MOS.
National Bureau of Standards. Now National Institute for Standards and Technology.
National Institute for Standards and Technology. Gaithersburg, MD 20899. A federal
agency under the Department of Commerce, originally established by an act of Congress on
March 3, 1901 as the National Bureau of Standards. The Institute's overall goal is to
strengthen and advance the Nation's science and technology and facilitate their effective
application for public benefit. The National Computer Systems Laboratory conducts research
and provides, among other things, the technical foundation for computer related policies of
the Federal Government.
n-channel MOS. A type of microelectronic circuit used for logic and memory chips.

network. (1) (ISO) An arrangement of nodes and interconnecting branches. (2) A system
[transmission channels and supporting hardware and software] that connects several remotely
located computers via telecommunications.
network database. A database organization method that allows for data relationships in a net-like form. A single data element can point to multiple data elements and can itself be pointed
to by other data elements. Contrast with relational database.
nibble. Half a byte, or four bits.
node. A junction or connection point in a network, e.g. a terminal or a computer.
noncritical code analysis. (IEEE) (1) Examines software elements that are not designated
safety-critical and ensures that these elements do not cause a hazard. (2) Examines portions of
the code that are not considered safety-critical code to ensure they do not cause hazards.
Generally, safety-critical code should be isolated from non-safety-critical code. This analysis
is to show this isolation is complete and that interfaces between safety-critical code and non-safety-critical code do not create hazards.
nonincremental integration. A reformation of a program by immediately relinking the entire
program following the testing of each independent module. Integration testing is then
conducted on the program as a whole. Syn: "big bang" integration. Contrast with incremental
integration.
non-maskable interrupt. A high priority interrupt that cannot be disabled by another
interrupt. It can be used to report malfunctions such as parity, bus, and math co-processor
errors.
null. (IEEE) A value whose definition is to be supplied within the context of a specific
operating system. This value is a representation of the set of no numbers or no value for the
operating system in use.
null data. (IEEE) Data for which space is allocated but for which no value currently exists.
null string. (IEEE) A string containing no entries. Note: It is said that a null string has length
zero.
-O-
OCR. optical character recognition.
OEM. original equipment manufacturer.
OOP. object oriented programming.
object. In object oriented programming, A self contained module [encapsulation] of data and
the programs [services] that manipulate [process] that data.
object code. (NIST) A code expressed in machine language ["1"s and "0"s] which is normally
an output of a given translation process that is ready to be executed by a computer. Syn:
machine code. Contrast with source code. See: object program.

Version 9.1

A-37

Guide to the CABA CBOK


object oriented design. (IEEE) A software development technique in which a system or
component is expressed in terms of objects and connections between those objects.
object oriented language. (IEEE) A programming language that allows the user to express a
program in terms of objects and messages between those objects. Examples include C++,
Smalltalk and LOGO.
object oriented programming. A technology for writing programs that are made up of self-sufficient modules that contain all of the information needed to manipulate a given data
structure. The modules are created in class hierarchies so that the code or methods of a class
can be passed to other modules. New object modules can be easily created by inheriting the
characteristics of existing classes. See: object, object oriented design.
object program. (IEEE) A computer program that is the output of an assembler or compiler.
octal. The base 8 number system. Digits are 0, 1, 2, 3, 4, 5, 6, & 7.
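For illustration, most programming languages accept octal literals directly; in Python an octal literal is prefixed with `0o`:

```python
# Octal digit positions are powers of 8: octal 17 = 1*8 + 7 = 15 decimal.
print(0o17)       # 15
print(oct(64))    # '0o100'  (64 decimal = 1*64 + 0*8 + 0*1)
```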
on-line. (IEEE) Pertaining to a system or mode of operation in which input data enter the
computer directly from the point of origin or output data are transmitted directly to the point
where they are used. For example, an airline reservation system. Contrast with batch. See:
conversational, interactive, real time.
operating system. (ISO) Software that controls the execution of programs, and that provides
services such as resource allocation, scheduling, input/output control, and data management.
Usually, operating systems are predominantly software, but partial or complete hardware
implementations are possible.
operation and maintenance phase. (IEEE) The period of time in the software life cycle
during which a software product is employed in its operational environment, monitored for
satisfactory performance, and modified as necessary to correct problems or to respond to
changing requirements.
operation exception. (IEEE) An exception that occurs when a program encounters an invalid
operation code.
operator. See: end user.
optical character recognition. An information processing technology that converts human
readable data into another medium for computer input. An OCR peripheral device accepts a
printed document as input, to identify the characters by their shape from the light that is
reflected and creates an output disk file. For best results, the printed page must contain only
characters of a type that are easily read by the OCR device and located on the page within
certain margins. When choosing an OCR product, the prime consideration should be the
program's level of accuracy as it applies to the type of document to be scanned. Accuracy
levels less than 97% are generally considered to be poor.
optical fiber. Thin glass wire designed for light transmission, capable of transmitting billions
of bits per second. Unlike electrical pulses, light pulses are not affected by random radiation
in the environment.
optimization. (NIST) Modifying a program to improve performance; e.g., to make it run
faster or to make it use fewer resources.

Vocabulary
Oracle. A relational database programming system incorporating the SQL programming
language. A registered trademark of the Oracle Corp.
original equipment manufacturer. A manufacturer of computer hardware.
overflow. (ISO) In a calculator, the state in which the calculator is unable to accept or process
the number of digits in the entry or in the result. See: arithmetic overflow.
overflow exception. (IEEE) An exception that occurs when the result of an arithmetic
operation exceeds the size of the storage location designated to receive it.
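The effect can be imitated in Python, which otherwise has unbounded integers, by masking a sum down to a hypothetical 8-bit signed storage location. This is a sketch of the arithmetic only, not of how any particular processor raises the exception.

```python
def add_int8(a, b):
    """Add two values destined for an 8-bit signed [two's complement] location."""
    total = (a + b) & 0xFF                   # keep only the 8 bits that fit
    return total - 256 if total >= 128 else total

# 127 is the largest value an 8-bit signed location can hold; the true
# sum 128 exceeds the storage location's size, so the stored result wraps.
print(add_int8(127, 1))    # -128
```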
-P-
PAL. programmable array logic.
PC. personal computer.
PCB. printed circuit board.
PDL. program design language.
PLA. programmable logic array.
PLD. programmable logic device.
PMOS. positive channel MOS.
PROM. programmable read only memory.
paging. (IEEE) A storage allocation technique in which programs or data are divided into
fixed length blocks called pages, main storage/memory is divided into blocks of the same
length called page frames, and pages are stored in page frames, not necessarily contiguously
or in logical order, and pages are transferred between main and auxiliary storage as needed.
parallel. (1) (IEEE) Pertaining to the simultaneity of two or more processes. (2) (IEEE)
Pertaining to the simultaneous processing of individual parts of a whole, such as the bits of a
character or the characters of a word, using separate facilities for the various parts. (3) Term
describing simultaneous transmission of the bits making up a character, usually eight bits [one
byte]. Contrast with serial.
parallel processing. See: multi-processing, multi- programming.
parameter. (IEEE) A constant, variable or expression that is used to pass values between
software modules. Syn: argument.
parity. An error detection method in data transmissions that consists of selectively adding a 1-bit to bit patterns [word, byte, character, message] to cause the bit patterns to have either an
odd number of 1-bits [odd parity] or an even number of 1-bits [even parity].
parity bit. (ISO) A binary digit appended to a group of binary digits to make the sum of all
the digits, including the appended binary digit, either odd or even, as predetermined.
parity check. (ISO) A redundancy check by which a recalculated parity bit is compared to the
predetermined parity bit. Contrast with check summation, cyclic redundancy check [CRC].



Pascal. A high-level programming language designed to encourage structured programming
practices.
password. (ISO) A character string that enables a user to have full or limited access to a
system or to a set of data.
patch. (IEEE) A change made directly to an object program without reassembling or
recompiling from the source program.
path. (IEEE) A sequence of instructions that may be performed in the execution of a
computer program.
path analysis. (IEEE) Analysis of a computer program [source code] to identify all possible
paths through the program, to detect incomplete paths, or to discover portions of the program
that are not on any path.
path coverage. See: testing, path.
perfective maintenance. (IEEE) Software maintenance performed to improve the
performance, maintainability, or other attributes of a computer program. Contrast with
adaptive maintenance, corrective maintenance.
performance requirement. (IEEE) A requirement that imposes conditions on a functional
requirement; e.g., a requirement that specifies the speed, accuracy, or memory usage with
which a given function must be performed.
peripheral device. Equipment that is directly connected to a computer. A peripheral device can
be used to input data; e.g., keypad, bar code reader, transducer, laboratory test equipment; or
to output data; e.g., printer, disk drive, video system, tape drive, valve controller, motor
controller. Syn: peripheral equipment.
peripheral equipment. See: peripheral device.
personal computer. Synonymous with microcomputer, a computer that is functionally
similar to large computers, but serves only one user.
physical configuration audit. (IEEE) An audit conducted to verify that a configuration item,
as built, conforms to the technical documentation that defines it. See: functional configuration
audit.
physical requirement. (IEEE) A requirement that specifies a physical characteristic that a
system or system component must possess; e.g., material, shape, size, weight.
pixel. (IEEE) (1) In image processing and pattern recognition, the smallest element of a
digital image that can be assigned a gray level. (2) In computer graphics, the smallest element
of a display surface that can be assigned independent characteristics. This term is derived
from the term "picture element".
platform. The hardware and software which must be present and functioning for an
application program to run [perform] as intended. A platform includes, but is not limited to
the operating system or executive software, communication software, microprocessor,
network, input/output hardware, any generic software libraries, database management, user
interface software, and the like.

polling. A technique a CPU can use to learn if a peripheral device is ready to receive data or to
send data. In this method each device is checked or polled in-turn to determine if that device
needs service. The device must wait until it is polled in order to send or receive data. This
method is useful if the device's data can wait for a period of time before being processed,
since each device must await its turn in the polling scheme before it will be serviced by the
processor. Contrast with interrupt.
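One pass of the polling scheme described above can be sketched as follows. The device objects are hypothetical stand-ins for illustration, not a real hardware interface:

```python
class Device:
    """Hypothetical peripheral with a status the CPU can poll."""
    def __init__(self, name, pending):
        self.name = name
        self.pending = list(pending)

    def ready(self):                    # the poll: "do you need service?"
        return bool(self.pending)

    def read(self):
        return self.pending.pop(0)

def poll_cycle(devices):
    """Check each device in turn; service only the ones that are ready."""
    serviced = []
    for dev in devices:                 # each device must await its turn
        if dev.ready():
            serviced.append((dev.name, dev.read()))
    return serviced

devices = [Device("keypad", ["7"]), Device("sensor", []), Device("port", ["x"])]
print(poll_cycle(devices))             # [('keypad', '7'), ('port', 'x')]
```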
positive channel MOS. A type of microelectronic circuit in which the base material is
positively charged.
precision. The relative degree of repeatability, i.e. how closely the values within a series of
replicate measurements agree. It is the result of resolution and stability. See: accuracy, bias,
calibration.
preliminary design. (IEEE) (1) The process of analyzing design alternatives and defining the
architecture, components, interfaces, and timing and sizing estimates for a system or
component. See: detailed design. (2) The result of the process in (1).
preliminary design review. (IEEE) A review conducted to evaluate the progress, technical
adequacy, and risk resolution of the selected design approach for one or more configuration
items; to determine each design's compatibility with the requirements for the configuration
item; to evaluate the degree of definition and assess the technical risk associated with the
selected manufacturing methods and processes; to establish the existence and compatibility of
the physical and functional interfaces among the configuration items and other items of
equipment, facilities, software and personnel; and, as applicable, to evaluate the preliminary
operational and support documents.
printed circuit board. A flat board that holds chips and other electronic components. The
board is "printed" with electrically conductive pathways between the components.
production database. The computer file that contains the establishment's current production
data.
program. (1) (ISO) A sequence of instructions suitable for processing. Processing may
include the use of an assembler, a compiler, an interpreter, or another translator to prepare the
program for execution. The instructions may include statements and necessary declarations.
(2) (ISO) To design, write, and test programs. (3) (ANSI) In programming languages, a set of
one or more interrelated modules capable of being executed. (4) Loosely, a routine. (5)
Loosely, to write a routine.
program design language. (IEEE) A specification language with special constructs and,
sometimes, verification protocols, used to develop, analyze, and document a program design.
program mutation. (IEEE) A computer program that has been purposely altered from the
intended version to evaluate the ability of program test cases to detect the alteration. See:
testing, mutation.
programmable array logic. A programmable logic chip. See: programmable logic device.
programmable logic array. A programmable logic chip. See: programmable logic device.
programmable logic device. A logic chip that is programmed at the user's site. Contrast with
PROM.



programmable read only memory. A chip which may be programmed by using a PROM
programming device. It can be programmed only once. It cannot be erased and
reprogrammed. Each of its bit locations is a fusible link. An unprogrammed PROM has all
links closed establishing a known state of each bit. Programming the chip consists of sending
an electrical current of a specified size through each link which is to be changed to the
alternate state. This causes the "fuse to blow", opening that link.
programming language. (IEEE) A language used to express computer programs. See:
computer language, high-level language, low-level language.
programming standards. See: coding standards.
programming style analysis. (IEEE) Analysis to ensure that all portions of the program
follow approved programming guidelines. See: code audit, code inspection, coding standards.
project plan. (NIST) A management document describing the approach taken for a project.
The plan typically describes work to be done, resources required, methods to be used, the
configuration management and quality assurance procedures to be followed, the schedules to
be met, the project organization, etc. Project in this context is a generic term. Some projects
may also need integration plans, security plans, test plans, quality assurance plans, etc. See:
documentation plan, software development plan, test plan, software engineering.
PROM programmer. Electronic equipment which is used to transfer a program [write
instructions and data] into PROM and EPROM chips.
proof of correctness. (NBS) The use of techniques of mathematical logic to infer that a
relation between program variables assumed true at program entry implies that another
relation between program variables holds at program exit.
protection exception. (IEEE) An exception that occurs when a program attempts to write into
a protected area in storage.
protocol. (ISO) A set of semantic and syntactic rules that determines the behavior of
functional units in achieving communication.
prototyping. Using software tools to accelerate the software development process by
facilitating the identification of required functionality during analysis and design phases. A
limitation of this technique is the identification of system or software problems and hazards.
See: rapid prototyping.
pseudocode. A combination of programming language and natural language used to express a
software design. If used, it is usually the last document produced prior to writing the source
code.
-Q-
QA. quality assurance.
QC. quality control.

qualification, installation. (FDA) Establishing confidence that process equipment and
ancillary systems are compliant with appropriate codes and approved design intentions, and
that manufacturer's recommendations are suitably considered.
qualification, operational. (FDA) Establishing confidence that process equipment and subsystems are capable of consistently operating within established limits and tolerances.
qualification, process performance. (FDA) Establishing confidence that the process is
effective and reproducible.
qualification, product performance. (FDA) Establishing confidence through appropriate
testing that the finished product produced by a specified process meets all release
requirements for functionality and safety.
quality assurance. (1) (ISO) The planned systematic activities necessary to ensure that a
component, module, or system conforms to established technical requirements. (2) All actions
that are taken to ensure that a development organization delivers products that meet
performance requirements and adhere to standards and procedures. (3) The policy,
procedures, and systematic actions established in an enterprise for the purpose of providing
and maintaining some degree of confidence in data integrity and accuracy throughout the life
cycle of the data, which includes input, update, manipulation, and output. (4) (QA) The
actions, planned and performed, to provide confidence that all systems and components that
influence the quality of the product are working as expected individually and collectively.
quality assurance, software. (IEEE) (1) A planned and systematic pattern of all actions
necessary to provide adequate confidence that an item or product conforms to established
technical requirements. (2) A set of activities designed to evaluate the process by which
products are developed or manufactured.
quality control. The operational techniques and procedures used to achieve quality
requirements.
-R-
RAM. random access memory.
RFI. radiofrequency interference.
RISC. reduced instruction set computer.
ROM. read only memory.
radiofrequency interference. High frequency electromagnetic waves that emanate from
electronic devices such as chips. An electromagnetic disturbance
caused by such radiating and transmitting sources as electrostatic discharge [ESD], lightning,
radar, radio and TV signals, and motors with brushes can induce unwanted voltages in
electronic circuits, damage components and cause malfunctions. See: electromagnetic
interference.
random access memory. Chips which can be called read/write memory, since the data stored
in them may be read or new data may be written into any memory address on these chips. The
term random access means that each memory location [usually 8 bits or 1 byte] may be
directly accessed [read from or written to] at random. This contrasts to devices like magnetic
tape where each section of the tape must be searched sequentially by the read/write head from
its current location until it finds the desired location. ROM memory is also random access
memory, but it is read only, not read/write. Another difference between RAM
and ROM is that RAM is volatile, i.e. it must have a constant supply of power or the stored
data will be lost.
range check. (ISO) A limit check in which both high and low values are stipulated.
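A range check reduces to a two-sided comparison; for example (the limit values are illustrative):

```python
def range_check(value, low, high):
    """Limit check in which both high and low values are stipulated."""
    return low <= value <= high

assert range_check(37.0, 35.0, 42.0)        # within both limits
assert not range_check(-1.0, 35.0, 42.0)    # below the low limit
assert not range_check(98.6, 35.0, 42.0)    # above the high limit
```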
rapid prototyping. A structured software requirements discovery technique which
emphasizes generating prototypes early in the development process to permit early feedback
and analysis in support of the development process. Contrast with incremental development,
spiral model, waterfall model. See: prototyping.
read only memory. A memory chip from which data can only be read by the CPU. The CPU
may not store data to this memory. The advantage of ROM over RAM is that ROM does not
require power to retain its program. This advantage applies to all types of ROM chips; ROM,
PROM, EPROM, and EEPROM.
real time. (IEEE) Pertaining to a system or mode of operation in which computation is
performed during the actual time that an external process occurs, in order that the computation
results can be used to control, monitor, or respond in a timely manner to the external process.
Contrast with batch. See: conversational, interactive, interrupt, on-line.
real time processing. A fast-response [immediate response] on-line system which obtains
data from an activity or a physical process, performs computations, and returns a response
rapidly enough to affect [control] the outcome of the activity or process; e.g., a process control
application. Contrast with batch processing.
record. (1) (ISO) A group of related data elements treated as a unit. [A data element (field) is
a component of a record, a record is a component of a file (database)].
record of change. Documentation of changes made to the system. A record of change can be
a written document or a database. Normally there are two associated with a computer system,
hardware and software. Changes made to the data are recorded in an audit trail.
recursion. (IEEE) (1) The process of defining or generating a process or data structure in
terms of itself. (2) A process in which a software module calls itself.
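Sense (2), a module that calls itself, is easiest to see with a factorial routine (an illustrative example):

```python
def factorial(n):
    if n <= 1:                        # base case: stops the self-calls
        return 1
    return n * factorial(n - 1)       # the module calls itself

print(factorial(5))                   # 120 = 5 * 4 * 3 * 2 * 1
```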
reduced instruction set computer. Computer architecture that reduces the complexity of the
chip by using simpler instructions. Reduced instruction set does not necessarily mean fewer
instructions, but rather a return to simple instructions requiring only one or a very few
instruction cycles to execute, and therefore are more effectively utilized with innovative
architectural and compiler changes. Systems using RISC technology are able to achieve
processing speeds of more than five million instructions per second.
region. A clearly described area within the computer's storage that is logically and/or
physically distinct from other regions. Regions are used to separate testing from production
[normal use]. Syn: partition.

register. A small, high speed memory circuit within a microprocessor that holds addresses
and values of internal operations; e.g., registers keep track of the address of the instruction
being executed and the data being processed. Each microprocessor has a specific number of
registers depending upon its design.
regression analysis and testing. (IEEE) A software V&V task to determine the extent of
V&V analysis and testing that must be repeated when changes are made to any previously
examined software products. See: testing, regression.
relational database. Database organization method that links files together as required.
Relationships between files are created by comparing data such as account numbers and
names. A relational system can take any two or more files and generate a new file from the
records that meet the matching criteria. Routine queries often involve more than one data file;
e.g., a customer file and an order file can be linked in order to ask a question that relates to
information in both files, such as the names of the customers that purchased a particular
product. Contrast with network database, flat file.
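The customer-file/order-file example above can be sketched with the SQLite engine in the Python standard library. The table and column names are illustrative assumptions, not part of the definition:

```python
import sqlite3

# Two "files" [tables] in an in-memory relational database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (account INTEGER, name TEXT)")
con.execute("CREATE TABLE orders (account INTEGER, product TEXT)")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(1, "Acme"), (2, "Globex")])
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "widget"), (1, "gadget"), (2, "gadget")])

# Link the two files by comparing account numbers: which customers
# purchased a particular product?
rows = con.execute(
    "SELECT DISTINCT c.name FROM customer c"
    " JOIN orders o ON c.account = o.account"
    " WHERE o.product = 'gadget' ORDER BY c.name").fetchall()
print(rows)   # [('Acme',), ('Globex',)]
```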
release. (IEEE) The formal notification and distribution of an approved version. See: version.
reliability. (IEEE) The ability of a system or component to perform its required functions
under stated conditions for a specified period of time. See: software reliability.
reliability assessment. (ANSI/IEEE) The process of determining the achieved level of
reliability for an existing system or system component.
requirement. (IEEE) (1) A condition or capability needed by a user to solve a problem or
achieve an objective. (2) A condition or capability that must be met or possessed by a system
or system component to satisfy a contract, standard, specification, or other formally imposed
documents. (3) A documented representation of a condition or capability as in (1) or (2). See:
design requirement, functional requirement, implementation requirement, interface
requirement, performance requirement, physical requirement.
requirements analysis. (IEEE) (1) The process of studying user needs to arrive at a definition
of a system, hardware, or software requirements. (2) The process of studying and refining
system, hardware, or software requirements. See: prototyping, software engineering.
requirements phase. (IEEE) The period of time in the software life cycle during which the
requirements, such as functional and performance capabilities for a software product, are
defined and documented.
requirements review. (IEEE) A process or meeting during which the requirements for a
system, hardware item, or software item are presented to project personnel, managers, users,
customers, or other interested parties for comment or approval. Types include system
requirements review, software requirements review. Contrast with code review, design
review, formal qualification review, test readiness review.
retention period. (ISO) The length of time specified for data on a data medium to be
preserved.
retrospective trace. (IEEE) A trace produced from historical data recorded during the
execution of a computer program. Note: this differs from an ordinary trace, which is produced

cumulatively during program execution. See: execution trace, subroutine trace, symbolic
trace, variable trace.
revalidation. Relative to software changes, revalidation means validating the change itself,
assessing the nature of the change to determine potential ripple effects, and performing the
necessary regression testing.
review. (IEEE) A process or meeting during which a work product or set of work products, is
presented to project personnel, managers, users, customers, or other interested parties for
comment or approval. Types include code review, design review, formal qualification review,
requirements review, test readiness review. Contrast with audit, inspection. See: static
analysis.
revision number. See: version number.
risk. (IEEE) A measure of the probability and severity of undesired effects. Often taken as the
simple product of probability and consequence.
risk assessment. (DOD) A comprehensive evaluation of the risk and its associated impact.
robustness. The degree to which a software system or component can function correctly in
the presence of invalid inputs or stressful environmental conditions. See: software reliability.
routine. (IEEE) A subprogram that is called by other programs and subprograms. Note: This
term is defined differently in various programming languages. See: module.
RS-232-C. An Electronic Industries Association (EIA) standard for connecting electronic
equipment. Data is transmitted and received in serial format.
-S-
SCSI. small computer systems interface.
SOPs. standard operating procedures.
SQL. structured query language.
SSI. small scale integration.
safety. (DOD) Freedom from those conditions that can cause death, injury, occupational
illness, or damage to or loss of equipment or property, or damage to the environment.
safety critical. (DOD) A term applied to a condition, event, operation, process or item of
whose proper recognition, control, performance or tolerance is essential to safe system
operation or use; e.g., safety critical function, safety critical path, safety critical component.
safety critical computer software components. (DOD) Those computer software
components and units whose errors can result in a potential hazard, or loss of predictability or
control of a system.
security. See: computer system security.

sensor. A peripheral input device which senses some variable in the system environment,
such as temperature, and converts it to an electrical signal which can be further converted to a
digital signal for processing by the computer.
serial. (1) Pertaining to the sequential processing of the individual parts of a whole, such as
the bits of a character or the characters of a word, using the same facilities for successive
parts. (2) Term describing the transmission of data one bit at a time. Contrast with parallel.
server. A high speed computer in a network that is shared by multiple users. It holds the
programs and data that are shared by all users.
service program. Syn: utility program.
servomechanism. (ANSI) (1) An automatic device that uses feedback to govern the physical
position of an element. (2) A feedback control system in which at least one of the system
signals represents a mechanical motion.
severity. See: criticality.
side effect. An unintended alteration of a program's behavior caused by a change in one part
of the program, without taking into account the effect the change has on another part of the
program. See: regression analysis and testing.
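A common form of side effect is a change that quietly shares state between parts of a program that previously did not interact. The sketch below is deliberately simplified and the names are invented:

```python
_cache = {}

def load_settings(name):
    """A 'performance' change added this cache -- without taking into
    account callers that modify the returned list in place."""
    if name not in _cache:
        _cache[name] = ["debug=off"]
    return _cache[name]              # shared object, not a private copy

first = load_settings("app")
first.append("debug=on")             # one part of the program edits "its" list
second = load_settings("app")
print(second)                        # ['debug=off', 'debug=on'] -- side effect
```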
simulation. (1) (NBS) Use of an executable model to represent the behavior of an object.
During testing the computational hardware, the external environment, and even code
segments may be simulated. (2) (IEEE) A model that behaves or operates like a given system
when provided a set of controlled inputs. Contrast with emulation.
simulation analysis. (IEEE) A software V&V task to simulate critical tasks of the software or
system environment to analyze logical or performance characteristics that would not be
practical to analyze manually.
simulator. (IEEE) A device, computer program, or system that behaves or operates like a
given system when provided a set of controlled inputs. Contrast with emulator. A simulator
provides inputs or responses that resemble anticipated process parameters. Its function is to
present data to the system at known speeds and in a proper format.
sizing. (IEEE) The process of estimating the amount of computer storage or the number of
source lines required for a software system or component. Contrast with timing.
sizing and timing analysis. (IEEE) A software V&V task to obtain program sizing and
execution timing information to determine if the program will satisfy processor size and
performance requirements allocated to software.
small computer systems interface. A standard method of interfacing a computer to disk
drives, tape drives and other peripheral devices that require high-speed data transfer. Up to
seven SCSI devices can be linked to a single SCSI port. Contrast with ST-506, EDSI, IDE.
small scale integration. A classification of ICs [chips] based on their size as expressed by the
number of circuits or logic gates they contain. An SSI IC contains up to 100 transistors.
software. (ANSI) Programs, procedures, rules, and any associated documentation pertaining
to the operation of a system. Contrast with hardware. See: application software, operating
system, system software, utility software.



software audit. See: software review.
software characteristic. An inherent, possibly accidental, trait, quality, or property of
software; e.g., functionality, performance, attributes, design constraints, number of states,
lines or branches.
software configuration item. See: configuration item.
software design description. (IEEE) A representation of software created to facilitate
analysis, planning, implementation, and decision making. The software design description is
used as a medium for communicating software design information, and may be thought of as a
blueprint or model of the system. See: structured design, design description, specification.
software developer. See: developer.
software development notebook. (NIST) A collection of material pertinent to the
development of a software module. Contents typically include the requirements, design,
technical reports, code listings, test plans, test results, problem reports, schedules, notes, etc.
for the module. Syn: software development file.
software development plan. (NIST) The project plan for the development of a software
product. Contrast with software development process, software life cycle.
software development process. (IEEE) The process by which user needs are translated into a
software product. The process involves translating user needs into software requirements,
transforming the software requirements into design, implementing the design in code, testing
the code, and sometimes installing and checking out the software for operational activities.
Note: these activities may overlap or be performed iteratively. See: incremental development,
rapid prototyping, spiral model, waterfall model.
software diversity. (IEEE) A software development technique in which two or more
functionally identical variants of a program are developed from the same specification by
different programmers or programming teams with the intent of providing error detection,
increased reliability, additional documentation or reduced probability that programming or
compiler errors will influence the end results.
software documentation. (NIST) Technical data or information, including computer listings
and printouts, in human readable form, that describe or specify the design or details, explain
the capabilities, or provide operating instructions for using the software to obtain desired
results from a software system. See: specification; specification, requirements; specification,
design; software design description; test plan, test report, user's guide.
software element. (IEEE) A deliverable or in- process document produced or acquired during
software development or maintenance. Specific examples include but are not limited to:
(1) Project planning documents; i.e., software development plans, and software verification
and validation plans.
(2) Software requirements and design specifications.
(3) Test documentation.
(4) Customer-deliverable documentation.
(5) Program source code.
A-48

Version 9.1

Vocabulary
(6) Representation of software solutions implemented in firmware.
(7) Reports; i.e., review, audit, project status.
(8) Data; i.e., defect detection, test.
Contrast with software item. See: configuration item.
software element analysis. See: software review.
software engineering. (IEEE) The application of a systematic, disciplined, quantifiable
approach to the development, operation, and maintenance of software; i.e., the application of
engineering to software. See: project plan, requirements analysis, architectural design,
structured design, system safety, testing, configuration management.
software engineering environment. (IEEE) The hardware, software, and firmware used to
perform a software engineering effort. Typical elements include computer equipment,
compilers, assemblers, operating systems, debuggers, simulators, emulators, test tools,
documentation tools, and database management systems.
software hazard analysis. (ODE, CDRH) The identification of safety-critical software, the
classification and estimation of potential hazards, and identification of program path analysis
to identify hazardous combinations of internal and environmental program conditions. See:
risk assessment, software safety change analysis, software safety code analysis, software
safety design analysis, software safety requirements analysis, software safety test analysis,
system safety.
software item. (IEEE) Source code, object code, job control code, control data, or a collection
of these items. Contrast with software element.
software life cycle. (NIST) Period of time beginning when a software product is conceived
and ending when the product is no longer available for use. The software life cycle is typically
broken into phases denoting activities such as requirements, design, programming, testing,
installation, and operation and maintenance. Contrast with software development process.
See: waterfall model.
software reliability. (IEEE) (1) The probability that software will not cause the failure of a
system for a specified time under specified conditions. The probability is a function of the
inputs to and use of the system in the software. The inputs to the system determine whether
existing faults, if any, are encountered. (2) The ability of a program to perform its required
functions accurately and reproducibly under stated conditions for a specified period of time.
software requirements specification. See: specification, requirements.
software review. (IEEE) An evaluation of software elements to ascertain discrepancies from
planned results and to recommend improvement. This evaluation follows a formal process.
Syn: software audit. See: code audit, code inspection, code review, code walkthrough, design
review, specification analysis, static analysis.
software safety change analysis. (IEEE) Analysis of the safety-critical design elements
affected directly or indirectly by the change to show the change does not create a new hazard,
does not impact on a previously resolved hazard, does not make a currently existing hazard
more severe, and does not adversely affect any safety-critical software design element. See:
software hazard analysis, system safety.
software safety code analysis. (IEEE) Verification that the safety-critical portions of the
design are correctly implemented in the code. See: logic analysis, data analysis, interface
analysis, constraint analysis, programming style analysis, noncritical code analysis, timing
and sizing analysis, software hazard analysis, system safety.
software safety design analysis. (IEEE) Verification that the safety-critical portion of the
software design correctly implements the safety-critical requirements and introduces no new
hazards. See: logic analysis, data analysis, interface analysis, constraint analysis, functional
analysis, software element analysis, timing and sizing analysis, reliability analysis, software
hazard analysis, system safety.
software safety requirements analysis. (IEEE) Analysis evaluating software and interface
requirements to identify errors and deficiencies that could contribute to a hazard. See:
criticality analysis, specification analysis, timing and sizing analysis, different software
systems analyses, software hazard analysis, system safety.
software safety test analysis. (IEEE) Analysis demonstrating that safety requirements have
been correctly implemented and that the software functions safely within its specified
environment. Tests may include; unit level tests, interface tests, software configuration item
testing, system level testing, stress testing, and regression testing. See: software hazard
analysis, system safety.
source code. (1) (IEEE) Computer instructions and data definitions expressed in a form
suitable for input to an assembler, compiler or other translator. (2) The human readable
version of the list of instructions [program] that cause a computer to perform a task. Contrast
with object code. See: source program, programming language.
source program. (IEEE) A computer program that must be compiled, assembled, or
otherwise translated in order to be executed by a computer. Contrast with object program.
See: source code.
spaghetti code. Program source code written without a coherent structure. Implies the
excessive use of GOTO instructions. Contrast with structured programming.
special test data. (NBS) Test data based on input values that are likely to require special
handling by the program. See: error guessing; testing, special case.
specification. (IEEE) A document that specifies, in a complete, precise, verifiable manner,
the requirements, design, behavior, or other characteristics of a system or component, and
often, the procedures for determining whether these provisions have been satisfied. Contrast
with requirement. See: specification, formal; specification, requirements; specification,
functional; specification, performance; specification, interface; specification, design; coding
standards; design standards.
specification analysis. (IEEE) Evaluation of each safety-critical software requirement with
respect to a list of qualities such as completeness, correctness, consistency, testability,
robustness, integrity, reliability, usability, flexibility, maintainability, portability,
interoperability, accuracy, auditability, performance, internal instrumentation, security and
training.
specification, design. (NIST) A specification that documents how a system is to be built. It
typically includes system or component structure, algorithms, control logic, data structures,
data set [file] use information, input/output formats, interface descriptions, etc. Contrast with
design standards, requirement. See: software design description.
specification, formal. (NIST) (1) A specification written and approved in accordance with
established standards. (2) A specification expressed in a requirements specification language.
Contrast with requirement.
specification, functional. (NIST) A specification that documents the functional requirements
for a system or system component. It describes what the system or component is to do rather
than how it is to be built. Often part of a requirements specification. Contrast with
requirement.
specification, interface. (NIST) A specification that documents the interface requirements for
a system or system component. Often part of a requirements specification. Contrast with
requirement.
specification, performance. (IEEE) A document that sets forth the performance
characteristics that a system or component must possess. These characteristics typically
include speed, accuracy, and memory usage. Often part of a requirements specification.
Contrast with requirement.
specification, product. (IEEE) A document that describes the as-built version of the
software.
specification, programming. (NIST) See: specification, design.
specification, requirements. (NIST) A specification that documents the requirements of a
system or system component. It typically includes functional requirements, performance
requirements, interface requirements, design requirements [attributes and constraints],
development [coding] standards, etc. Contrast with requirement.
specification, system. See: requirements specification.
specification, test case. See: test case.
specification tree. (IEEE) A diagram that depicts all of the specifications for a given system
and shows their relationship to one another.
spiral model. (IEEE) A model of the software development process in which the constituent
activities, typically requirements analysis, preliminary and detailed design, coding,
integration, and testing, are performed iteratively until the software is complete. Syn:
evolutionary model. Contrast with incremental development; rapid prototyping; waterfall
model.
ST-506. A standard electrical interface between the hard disk and controller in IBM PC
compatible computers. Contrast with EDSI, IDE, SCSI.
standard operating procedures. Written procedures [prescribing and describing the steps to
be taken in normal and defined conditions] which are necessary to assure control of
production and processes.
state. (IEEE) (1) A condition or mode of existence that a system, component, or simulation
may be in; e.g., the pre-flight state of an aircraft navigation program or the input state of a
given channel.
state diagram. (IEEE) A diagram that depicts the states that a system or component can
assume, and shows the events or circumstances that cause or result from a change from one
state to another. Syn: state graph. See: state-transition table.
statement coverage. See: testing, statement.
state-transition table. (Beizer) A representation of a state graph that specifies the states, the
inputs, the transitions, and the outputs. See: state diagram.
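The table form described above can be sketched directly in code; the turnstile states, events, and outputs below are invented for illustration.

```python
# Hypothetical turnstile as a state-transition table:
# each (state, input) pair maps to (next state, output).
TRANSITIONS = {
    ("locked",   "coin"): ("unlocked", "unlock"),
    ("locked",   "push"): ("locked",   "alarm"),
    ("unlocked", "coin"): ("unlocked", "refund"),
    ("unlocked", "push"): ("locked",   "lock"),
}

def step(state, event):
    """Look up the transition for (state, event) in the table."""
    return TRANSITIONS[(state, event)]

state = "locked"
state, output = step(state, "coin")   # -> ("unlocked", "unlock")
state, output = step(state, "push")   # -> ("locked", "lock")
```

Because every state, input, transition, and output is explicit in the table, the same data can drive both the implementation and a review of the state diagram.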
static analysis. (1) (NBS) Analysis of a program that is performed without executing the
program. (2) (IEEE) The process of evaluating a system or component based on its form,
structure, content, documentation. Contrast with dynamic analysis. See: code audit, code
inspection, code review, code walk-through, design review, symbolic execution.
static analyzer. (ANSI/IEEE) A software tool that aids in the evaluation of a computer
program without executing the program. Examples include checkers, compilers, cross-reference generators, standards enforcers, and flowcharters.
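As a minimal sketch of static analysis, the fragment below examines source text without executing it, using Python's standard `ast` module; the line-count rule and the sample source are made up for the example.

```python
import ast

# Source text to be analyzed statically (never executed).
SOURCE = """
def short():
    return 1

def long(x):
    a = x + 1
    b = a * 2
    c = b - 3
    d = c / 4
    e = d + 5
    return e
"""

def functions_over(source, max_lines):
    """Flag functions whose body spans more than max_lines lines,
    based purely on the parsed form and structure of the code."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.body[-1].lineno - node.lineno
            if length > max_lines:
                flagged.append(node.name)
    return flagged

print(functions_over(SOURCE, 4))  # ['long']
```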
stepwise refinement. A structured software design technique; data and processing steps are
defined broadly at first, and then further defined with increasing detail.
storage device. A unit into which data or programs can be placed, retained and retrieved. See:
memory.
string. (IEEE) (1) A sequence of characters. (2) A linear sequence of entities such as
characters or physical elements.
structure chart. (IEEE) A diagram that identifies modules, activities, or other entities in a
system or computer program and shows how larger or more general entities break down into
smaller, more specific entities. Note: The result is not necessarily the same as that shown in a
call graph. Syn: hierarchy chart, program structure chart. Contrast with call graph.
structured design. (IEEE) Any disciplined approach to software design that adheres to
specified rules based on principles such as modularity, top-down design, and stepwise
refinement of data, system structure, and processing steps. See: data structure centered design,
input-processing-output, modular decomposition, object oriented design, rapid prototyping,
stepwise refinement, structured programming, transaction analysis, transform analysis,
graphical software specification/design documents, modular software, software engineering.
structured programming. (IEEE) Any software development technique that includes
structured design and results in the development of structured programs. See: structured
design.
structured query language. A language used to interrogate and process data in a relational
database. Originally developed for IBM mainframes, there have been many implementations
created for mini and micro computer database applications. SQL commands can be used to
interactively work with a data base or can be embedded with a programming language to
interface with a database.
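A small illustration of SQL embedded in a host language, here via Python's built-in sqlite3 module; the defects table and its rows are invented for the example.

```python
import sqlite3

# Create an in-memory database and a sample table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE defects (id INTEGER, severity TEXT)")
con.executemany("INSERT INTO defects VALUES (?, ?)",
                [(1, "high"), (2, "low"), (3, "high")])

# Interrogate the data with a declarative SQL query
# rather than an explicit loop in the host language.
rows = con.execute(
    "SELECT severity, COUNT(*) FROM defects "
    "GROUP BY severity ORDER BY severity"
).fetchall()
print(rows)  # [('high', 2), ('low', 1)]
con.close()
```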
stub. (NBS) Special code segments that when invoked by a code segment under test will
simulate the behavior of designed and specified modules not yet constructed.
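A sketch of the idea, assuming a hypothetical billing routine whose tax-calculation module is specified but not yet built: a stub stands in for the missing module so the caller can be tested now.

```python
def tax_for_stub(amount):
    """Stub: simulates the designed-but-unbuilt tax module by
    returning a fixed, canned value."""
    return 7.0

def billing_total(amount, tax_for=tax_for_stub):
    # Code segment under test: combines an amount with whatever
    # tax module (real or stub) is wired in.
    return amount + tax_for(amount)

print(billing_total(100.0))  # 107.0 with the stub in place
```

When the real tax module is constructed, it replaces the stub without changing the code under test.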
subprogram. (IEEE) A separately compilable, executable component of a computer program.
Note: This term is defined differently in various programming languages. See: coroutine,
main program, routine, subroutine.
subroutine. (IEEE) A routine that returns control to the program or subprogram that called it.
Note: This term is defined differently in various programming languages. See: module.
subroutine trace. (IEEE) A record of all or selected subroutines or function calls performed
during the execution of a computer program and, optionally, the values of parameters passed
to and returned by each subroutine or function. Syn: call trace. See: execution trace,
retrospective trace, symbolic trace, variable trace.
support software. (IEEE) Software that aids in the development and maintenance of other
software; e.g., compilers, loaders, and other utilities.
symbolic execution. (IEEE) A static analysis technique in which program execution is
simulated using symbols, such as variable names, rather than actual values for input data, and
program outputs are expressed as logical or mathematical expressions involving these
symbols.
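A toy sketch of the idea, not a real symbolic-execution engine: a made-up `Sym` class records arithmetic as an expression string, so "executing" a program with symbolic inputs yields its output as a formula rather than a number.

```python
class Sym:
    """A symbolic value: operations build expression text
    instead of computing a numeric result."""
    def __init__(self, text):
        self.text = text
    def __add__(self, other):
        return Sym(f"({self.text} + {fmt(other)})")
    def __mul__(self, other):
        return Sym(f"({self.text} * {fmt(other)})")
    def __repr__(self):
        return self.text

def fmt(v):
    return v.text if isinstance(v, Sym) else str(v)

def program(x, y):
    # The program under analysis: runs on numbers or symbols alike.
    return x * 2 + y

print(program(Sym("x"), Sym("y")))  # ((x * 2) + y)
print(program(3, 4))                # 10
```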
symbolic trace. (IEEE) A record of the source statements and branch outcomes that are
encountered when a computer program is executed using symbolic, rather than actual values
for input data. See: execution trace, retrospective trace, subroutine trace, variable trace.
synchronous. Occurring at regular, timed intervals, i.e. timing dependent.
synchronous transmission. A method of electrical transfer in which a constant time interval
is maintained between successive bits or characters. Equipment within the system is kept in
step on the basis of this timing. Contrast with asynchronous transmission.
syntax. The structural or grammatical rules that define how symbols in a language are to be
combined to form words, phrases, expressions, and other allowable constructs.
system. (1) (ANSI) People, machines, and methods organized to accomplish a set of specific
functions. (2) (DOD) A composite, at any level of complexity, of personnel, procedures,
materials, tools, equipment, facilities, and software. The elements of this composite entity are
used together in the intended operational or support environment to perform a given task or
achieve a specific purpose, support, or mission requirement.
system administrator. The person charged with the overall administration and operation
of a computer system. The System Administrator is normally an employee or a
member of the establishment. Syn: system manager.
system analysis. (ISO) A systematic investigation of a real or planned system to determine
the functions of the system and how they relate to each other and to any other system. See:
requirements phase.
system design. (ISO) A process of defining the hardware and software architecture,
components, modules, interfaces, and data for a system to satisfy specified requirements. See:
design phase, architectural design, functional design.
system design review. (IEEE) A review conducted to evaluate the manner in which the
requirements for a system have been allocated to configuration items, the system engineering
process that produced the allocation, the engineering planning for the next phase of the effort,
manufacturing considerations, and the planning for production engineering. See: design
review.
system documentation. (ISO) The collection of documents that describe the requirements,
capabilities, limitations, design, operation, and maintenance of an information processing
system. See: specification, test documentation, user's guide.
system integration. (ISO) The progressive linking and testing of system components into a
complete system. See: incremental integration.
system life cycle. The course of developmental changes through which a system passes from
its conception to the termination of its use; e.g., the phases and activities associated with the
analysis, acquisition, design, development, test, integration, operation, maintenance, and
modification of a system. See: software life cycle.
system manager. See: system administrator.
system safety. (DOD) The application of engineering and management principles, criteria,
and techniques to optimize all aspects of safety within the constraints of operational
effectiveness, time, and cost throughout all phases of the system life cycle. See: risk
assessment, software safety change analysis, software safety code analysis, software safety
design analysis, software safety requirements analysis, software safety test analysis, software
engineering.
system software. (1) (ISO) Application-independent software that supports the running of
application software. (2) (IEEE) Software designed to facilitate the operation and
maintenance of a computer system and its associated programs; e.g., operating systems,
assemblers, utilities. Contrast with application software. See: support software.
-T-
TB. terabyte.
TCP/IP. transmission control protocol/Internet protocol.
tape. Linear magnetic storage hardware, rolled onto a reel or cassette.
telecommunication system. The devices and functions relating to transmission of data
between the central processing system and remotely located users.
terabyte. Approximately one trillion bytes; precisely 2^40 or 1,099,511,627,776 bytes. See:
kilobyte, megabyte, gigabyte.
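The powers of two behind these size definitions can be checked with a line of arithmetic.

```python
# The binary sizes from the surrounding definitions.
KB = 2 ** 10   # kilobyte
MB = 2 ** 20   # megabyte
GB = 2 ** 30   # gigabyte
TB = 2 ** 40   # terabyte

print(TB)              # 1099511627776
print(TB == 1024 * GB) # each unit is 1024 of the one below
```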
terminal. A device, usually equipped with a CRT display and keyboard, used to send and
receive information to and from a computer via a communication channel.
test. (IEEE) An activity in which a system or component is executed under specified
conditions, the results are observed or recorded and an evaluation is made of some aspect of
the system or component.
testability. (IEEE) (1) The degree to which a system or component facilitates the
establishment of test criteria and the performance of tests to determine whether those criteria
have been met. (2) The degree to which a requirement is stated in terms that permit
establishment of test criteria and performance of tests to determine whether those criteria have
been met. See: measurable.
test case. (IEEE) Documentation specifying inputs, predicted results, and a set of execution
conditions for a test item. Syn: test case specification. See: test procedure.
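A test case reduced to its essentials, inputs paired with predicted results, for a made-up absolute-value routine serving as the test item.

```python
def absolute(n):
    """Test item: the routine whose behavior is being checked."""
    return n if n >= 0 else -n

test_cases = [
    # (test case id, input, predicted result)
    ("TC-01",  5, 5),
    ("TC-02", -5, 5),
    ("TC-03",  0, 0),
]

failures = [tid for tid, given, predicted in test_cases
            if absolute(given) != predicted]
print(failures)  # [] when every case passes
```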
test case generator. (IEEE) A software tool that accepts as input source code, test criteria,
specifications, or data structure definitions; uses these inputs to generate test input data; and,
sometimes, determines expected results. Syn: test data generator, test generator.
test design. (IEEE) Documentation specifying the details of the test approach for a software
feature or combination of software features and identifying the associated tests. See: testing
functional; cause effect graphing; boundary value analysis; equivalence class partitioning;
error guessing; testing, structural; branch analysis; path analysis; statement coverage;
condition coverage; decision coverage; multiple-condition coverage.
test documentation. (IEEE) Documentation describing plans for, or results of, the testing of a
system or component. Types include test case specification, test incident report, test log, test
plan, test procedure, test report.
test driver. (IEEE) A software module used to invoke a module under test and, often, provide
test inputs, control and monitor execution, and report test results. Syn: test harness.
test harness. See: test driver.
test incident report. (IEEE) A document reporting on any event that occurs during testing
that requires further investigation. See: failure analysis.
test item. (IEEE) A software item which is the object of testing.
test log. (IEEE) A chronological record of all relevant details about the execution of a test.
test phase. (IEEE) The period of time in the software life cycle in which the components of a
software product are evaluated and integrated, and the software product is evaluated to
determine whether or not requirements have been satisfied.
test plan. (IEEE) Documentation specifying the scope, approach, resources, and schedule of
intended testing activities. It identifies test items, the features to be tested, the testing tasks,
responsibilities, required resources, and any risks requiring contingency planning. See: test
design, validation protocol.
test procedure. (NIST) A formal document developed from a test plan that presents detailed
instructions for the setup, operation, and evaluation of the results for each defined test. See:
test case.
test readiness review. (IEEE) (1) A review conducted to evaluate preliminary test results for
one or more configuration items; to verify that the test procedures for each configuration item
are complete, comply with test plans and descriptions, and satisfy test requirements; and to
verify that a project is prepared to proceed to formal testing of the configuration items. (2) A
review as in (1) for any hardware or software component. Contrast with code review, design
review, formal qualification review, requirements review.
test report. (IEEE) A document describing the conduct and results of the testing carried out
for a system or system component.
test result analyzer. A software tool used to test output data reduction, formatting, and
printing.
testing. (IEEE) (1) The process of operating a system or component under specified
conditions, observing or recording the results, and making an evaluation of some aspect of the
system or component. (2) The process of analyzing a software item to detect the differences
between existing and required conditions, i.e. bugs, and to evaluate the features of the
software items. See: dynamic analysis, static analysis, software engineering.
testing, 100%. See: testing, exhaustive.
testing, acceptance. (IEEE) Testing conducted to determine whether or not a system satisfies
its acceptance criteria and to enable the customer to determine whether or not to accept the
system. Contrast with testing, development; testing, operational. See: testing, qualification.
testing, alpha [α]. (Pressman) Acceptance testing performed by the customer in a controlled
environment at the developer's site. The software is used by the customer in a setting
approximating the target environment with the developer observing and recording errors and
usage problems.
testing, assertion. (NBS) A dynamic analysis technique which inserts assertions about the
relationship between program variables into the program code. The truth of the assertions is
determined as the program executes. See: assertion checking, instrumentation.
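A minimal sketch: an assertion about the relationship between program variables is inserted into the code and evaluated as the program executes (the routine itself is invented for the example).

```python
def running_total(values):
    total = 0
    for i, v in enumerate(values, start=1):
        total += v
        # Inserted assertion: the total so far must equal the sum of
        # the first i values; its truth is checked on every iteration.
        assert total == sum(values[:i]), "running total out of step"
    return total

print(running_total([3, 1, 4]))  # 8
```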
testing, beta [β]. (1) (Pressman) Acceptance testing performed by the customer in a live
application of the software, at one or more end user sites, in an environment not controlled by
the developer. (2) For medical device software such use may require an Investigational
Device Exemption [IDE] or Institutional Review Board [IRB] approval.
testing, boundary value. A testing technique using input values at, just below, and just
above, the defined limits of an input domain; and with input values causing outputs to be at,
just below, and just above, the defined limits of an output domain. See: boundary value
analysis; testing, stress.
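For a hypothetical input field defined to accept 1 through 100, the technique selects values at, just below, and just above each limit.

```python
# Defined limits of the (made-up) input domain.
LOW, HIGH = 1, 100

def in_range(n):
    """Routine under test: accept values within the defined limits."""
    return LOW <= n <= HIGH

# At, just below, and just above each boundary.
boundary_inputs = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
results = [in_range(n) for n in boundary_inputs]
print(results)  # [False, True, True, True, True, False]
```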
testing, branch. (NBS) Testing technique to satisfy coverage criteria which require that for
each decision point, each possible branch [outcome] be executed at least once. Contrast with
testing, path; testing, statement. See: branch coverage.
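A sketch of the coverage criterion for a single decision point; the branch-recording set is illustrative scaffolding, not part of any real tool.

```python
taken = set()  # records which branch outcomes have executed

def classify(n):
    if n < 0:
        taken.add("negative-branch")
        return "negative"
    else:
        taken.add("non-negative-branch")
        return "non-negative"

# The test set must drive each possible outcome at least once.
classify(-3)   # exercises the true branch
classify(7)    # exercises the false branch
print(taken == {"negative-branch", "non-negative-branch"})  # True
```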
testing, compatibility. The process of determining the ability of two or more systems to
exchange information. In a situation where the developed software replaces an already
working program, an investigation should be conducted to assess possible compatibility
problems between the new software and other programs or systems. See: different software
system analysis; testing, integration; testing, interface.
testing, component. See: testing, unit.
testing, design based functional. (NBS) The application of test data derived through
functional analysis extended to include design functions as well as requirement functions.
See: testing, functional.
testing, development. (IEEE) Testing conducted during the development of a system or
component, usually in the development environment by the developer. Contrast with testing,
acceptance; testing, operational.
testing, exhaustive. (NBS) Executing the program with all possible combinations of values
for program variables. Feasible only for small, simple programs.
testing, formal. (IEEE) Testing conducted in accordance with test plans and procedures that
have been reviewed and approved by a customer, user, or designated level of management.
Antonym: informal testing.
testing, functional. (IEEE) (1) Testing that ignores the internal mechanism or structure of a
system or component and focuses on the outputs generated in response to selected inputs and
execution conditions. (2) Testing conducted to evaluate the compliance of a system or
component with specified functional requirements and corresponding predicted results. Syn:
black-box testing, input/output driven testing. Contrast with testing, structural.
testing, integration. (IEEE) An orderly progression of testing in which software elements,
hardware elements, or both are combined and tested, to evaluate their interactions, until the
entire system has been integrated.
testing, interface. (IEEE) Testing conducted to evaluate whether systems or components pass
data and control correctly to one another. Contrast with testing, unit; testing, system. See:
testing, integration.
testing, interphase. See: testing, interface.
testing, invalid case. A testing technique using erroneous [invalid, abnormal, or unexpected]
input values or conditions. See: equivalence class partitioning.
testing, mutation. (IEEE) A testing methodology in which two or more program mutations
are executed using the same test cases to evaluate the ability of the test cases to detect
differences in the mutations.
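A toy illustration: the same test cases are run against the original program and one mutant (a single operator changed), and a useful test set distinguishes the two.

```python
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b   # mutation: '+' changed to '-'

# The shared test cases: (arguments, expected result).
cases = [((2, 3), 5), ((0, 0), 0), ((1, 4), 5)]

def kills(program):
    """True if at least one test case fails on this program,
    i.e. the test set detects the difference."""
    return any(program(*args) != expected for args, expected in cases)

print(kills(original), kills(mutant))  # False True
```

A test set that failed to kill the mutant (for example, one containing only the (0, 0) case) would be judged too weak to detect that change.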
testing, operational. (IEEE) Testing conducted to evaluate a system or component in its
operational environment. Contrast with testing, development; testing, acceptance; See:
testing, system.
testing, parallel. (ISO) Testing a new or an altered data processing system with the same
source data that is used in another system. The other system is considered as the standard of
comparison. Syn: parallel run.
testing, path. (NBS) Testing to satisfy coverage criteria that each logical path through the
program be tested. Often paths through the program are grouped into a finite set of classes.
One path from each class is then tested. Syn: path coverage. Contrast with testing, branch;
testing, statement; branch coverage; condition coverage; decision coverage; multiple
condition coverage; statement coverage.
testing, performance. (IEEE) Functional testing conducted to evaluate the compliance of a
system or component with specified performance requirements.
testing, qualification. (IEEE) Formal testing, usually conducted by the developer for the
consumer, to demonstrate that the software meets its specified requirements. See: testing,
acceptance; testing, system.
testing, regression. (NIST) Rerunning test cases which a program has previously executed
correctly in order to detect errors spawned by changes or corrections made during software
development and maintenance.
testing, special case. A testing technique using input values that seem likely to cause program
errors; e.g., "0", "1", NULL, empty string. See: error guessing.
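A sketch using the kinds of inputs the definition lists; the parsing routine itself is hypothetical.

```python
def parse_count(text):
    """Hypothetical routine under test: empty or missing text
    should be treated as a count of zero."""
    if text is None or text == "":
        return 0
    return int(text)

# Inputs that seem likely to need special handling.
special_inputs = [None, "", "0", "1"]
print([parse_count(t) for t in special_inputs])  # [0, 0, 0, 1]
```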
testing, statement. (NIST) Testing to satisfy the criterion that each statement in a program be
executed at least once during program testing. Syn: statement coverage. Contrast with testing,
branch; testing, path; branch coverage; condition coverage; decision coverage; multiple
condition coverage; path coverage.
testing, storage. This is a determination of whether or not certain processing conditions use
more storage [memory] than estimated.
testing, stress. (IEEE) Testing conducted to evaluate a system or component at or beyond the
limits of its specified requirements. Syn: testing, boundary value.
testing, structural. (1) (IEEE) Testing that takes into account the internal mechanism
[structure] of a system or component. Types include branch testing, path testing, statement
testing. (2) Testing to ensure each program statement is made to execute during testing and
that each program statement performs its intended function. Contrast with functional testing.
Syn: white-box testing, glass-box testing, logic driven testing.
testing, system. (IEEE) The process of testing an integrated hardware and software system to
verify that the system meets its specified requirements. Such testing may be conducted in both
the development environment and the target environment.
testing, unit. (1) (NIST) Testing of a module for typographic, syntactic, and logical errors, for
correct implementation of its design, and for satisfaction of its requirements. (2) (IEEE)
Testing conducted to verify the implementation of the design for one software element; e.g., a
unit or module; or a collection of software elements. Syn: component testing.
testing, usability. Tests designed to evaluate the machine/user interface. Are the
communication device(s) designed in a manner such that the information is displayed in an
understandable fashion, enabling the operator to correctly interact with the system?
testing, valid case. A testing technique using valid [normal or expected] input values or
conditions. See: equivalence class partitioning.
testing, volume. Testing designed to challenge a system's ability to manage the maximum
amount of data over a period of time. This type of testing also evaluates a system's ability to
handle overload situations in an orderly fashion.
testing, worst case. Testing which encompasses upper and lower limits, and circumstances
which pose the greatest chance of finding errors. Syn: most appropriate challenge conditions.
See: testing, boundary value; testing, invalid case; testing, special case; testing, stress; testing,
volume.
time sharing. (IEEE) A mode of operation that permits two or more users to execute
computer programs concurrently on the same computer system by interleaving the execution
of their programs. May be implemented by time slicing, priority-based interrupts, or other
scheduling methods.
timing. (IEEE) The process of estimating or measuring the amount of execution time required
for a software system or component. Contrast with sizing.
timing analyzer. (IEEE) A software tool that estimates or measures the execution time of a
computer program or portion of a computer program, either by summing the execution times
of the instructions along specified paths or by inserting probes at specified points in the
program and measuring the execution time between probes.
timing and sizing analysis. (IEEE) Analysis of the safety implications of safety-critical
requirements that relate to execution time, clock time, and memory allocation.
top-down design. Pertaining to design methodology that starts with the highest level of
abstraction and proceeds through progressively lower levels. See: structured design.
touch sensitive. (ANSI) Pertaining to a device that allows a user to interact with a computer
system by touching an area on the surface of the device with a finger, pencil, or other object,
e.g., a touch sensitive keypad or screen.
touch screen. A touch sensitive display screen that uses a clear panel over or on the screen
surface. The panel is a matrix of cells, an input device, that transmits pressure information to
the software.
trace. (IEEE) (1) A record of the execution of a computer program, showing the sequence of
instructions executed, the names and values of variables, or both. Types include execution
trace, retrospective trace, subroutine trace, symbolic trace, variable trace. (2) To produce a
record as in (1). (3) To establish a relationship between two or more products of the
development process; e.g., to establish the relationship between a given requirement and the
design element that implements that requirement.
traceability. (IEEE) (1) The degree to which a relationship can be established between two or
more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another; e.g., the degree to which the
requirements and design of a given software component match. See: consistency. (2) The
degree to which each element in a software development product establishes its reason for
existing; e.g., the degree to which each element in a bubble chart references the requirement
that it satisfies. See: traceability analysis, traceability matrix.
traceability analysis. (IEEE) The tracing of (1) Software Requirements Specifications
requirements to system requirements in concept documentation, (2) software design
descriptions to software requirements specifications and software requirements specifications
to software design descriptions, (3) source code to corresponding design specifications and
design specifications to source code. Analyze identified relationships for correctness,
consistency, completeness, and accuracy. See: traceability, traceability matrix.
traceability matrix. (IEEE) A matrix that records the relationship between two or more
products; e.g., a matrix that records the relationship between the requirements and the design
of a given software component. See: traceability, traceability analysis.
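A traceability matrix can be represented very simply; as a sketch (all requirement and design identifiers below are hypothetical), a mapping from requirements to design elements also supports a basic traceability analysis:

```python
# Illustrative sketch: a traceability matrix mapping requirement IDs to the
# design elements that implement them. All identifiers are hypothetical.

trace_matrix = {
    "REQ-001": ["DES-010"],             # one requirement, one design element
    "REQ-002": ["DES-011", "DES-012"],  # one requirement, several elements
    "REQ-003": [],                      # gap: requirement with no design yet
}

# Traceability analysis: flag requirements that no design element satisfies.
unsatisfied = [req for req, designs in trace_matrix.items() if not designs]
print(unsatisfied)  # ['REQ-003']
```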
transaction. (ANSI) (1) A command, message, or input record that explicitly or implicitly
calls for a processing action, such as updating a file. (2) An exchange between an end user
and an interactive system. (3) In a database management system, a unit of processing activity
that accomplishes a specific purpose such as a retrieval, an update, a modification, or a
deletion of one or more data elements of a storage structure.
transaction analysis. A structured software design technique, deriving the structure of a
system from analyzing the transactions that the system is required to process.
transaction flowgraph. (Beizer) A model of the structure of the system's [program's]
behavior, i.e., functionality.
transaction matrix. (IEEE) A matrix that identifies possible requests for database access and
relates each request to information categories or elements in the database.
transform analysis. A structured software design technique in which system structure is
derived from analyzing the flow of data through the system and the transformations that must
be performed on the data.
translation. (NIST) Converting from one language form to another. See: assembling,
compilation, interpret.
transmission control protocol/Internet protocol. A set of communications protocols
developed for the Defense Advanced Research Projects Agency to internetwork dissimilar
systems. It is used by many corporations, almost all American universities, and agencies of
the federal government. The File Transfer Protocol and Simple Mail Transfer Protocol
provide file transfer and electronic mail capability. The TELNET protocol provides a
terminal emulation capability that allows a user to interact with any other type of computer in
the network. The TCP protocol controls the transfer of the data, and the IP protocol provides
the routing mechanism.
trojan horse. A method of attacking a computer system, typically by providing a useful
program which contains code intended to compromise a computer system by secretly
providing for unauthorized access, the unauthorized collection of privileged system or user
data, the unauthorized reading or altering of files, the performance of unintended and
unexpected functions, or the malicious destruction of software and hardware. See: bomb,
virus, worm.
truth table. (1) (ISO) An operation table for a logic operation. (2) A table that describes a
logic function by listing all possible combinations of input values, and indicating, for each
combination, the output value.
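As a small sketch (the AND function is chosen only as an example), a truth table can be generated by enumerating every combination of input values:

```python
# Illustrative sketch: generating the truth table for a two-input logic
# function (here AND) by listing all possible input combinations.

from itertools import product

def truth_table(fn, inputs=2):
    """Pair every combination of 0/1 inputs with the function's output."""
    return [(combo, fn(*combo)) for combo in product([0, 1], repeat=inputs)]

table = truth_table(lambda a, b: a & b)
for combo, out in table:
    print(combo, "->", out)
```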
tuning. (NIST) Determining what parts of a program are being executed the most. A tool that
instruments a program to obtain execution frequencies of statements is a tool with this feature.
twisted pair. A pair of thin-diameter insulated wires commonly used in telephone wiring. The
wires are twisted around each other to minimize interference from other twisted pairs in the
cable. Twisted pairs have less bandwidth than coaxial cable or optical fiber. Abbreviated UTP
for Unshielded Twisted Pair. Syn: twisted wire pair.
-U-
unambiguous. (1) Not having two or more possible meanings. (2) Not susceptible to different
interpretations. (3) Not obscure, not vague. (4) Clear, definite, certain.
underflow. (ISO) The state in which a calculator shows a zero indicator for the most
significant part of a number while the least significant part of the number is dropped. For
example, if the calculator output capacity is four digits, the number .0000432 will be shown as
.0000. See: arithmetic underflow.
underflow exception. (IEEE) An exception that occurs when the result of an arithmetic
operation is too small a fraction to be represented by the storage location designated to receive
it.
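Underflow is easy to demonstrate with IEEE-754 double-precision arithmetic; in the sketch below Python silently underflows to zero, whereas some environments would instead raise an underflow exception:

```python
# Illustrative sketch: arithmetic underflow. The product below is far smaller
# than the smallest value an IEEE-754 double can represent, so the result
# underflows to zero. Python returns 0.0 silently; other languages or
# floating-point environments may signal an underflow exception instead.

tiny = 1e-200
result = tiny * tiny   # mathematically 1e-400, below the double range
print(result)          # 0.0
```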
unit. (IEEE) (1) A separately testable element specified in the design of a computer software
element. (2) A logically separable part of a computer program. Syn: component, module.
UNIX. A multitasking, multiple-user (time-sharing) operating system developed at Bell Labs
to create a favorable environment for programming research and development.
usability. (IEEE) The ease with which a user can learn to operate, prepare inputs for, and
interpret outputs of a system or component.
user. (ANSI) Any person, organization, or functional unit that uses the services of an
information processing system. See: end user.
user's guide. (ISO) Documentation that describes how to use a functional unit, and that may
include description of the rights and responsibilities of the user, the owner, and the supplier of
the unit. Syn: user manual, operator manual.
utility program. (ISO) A computer program in general support of the processes of a
computer; e.g., a diagnostic program, a trace program, a sort program. Syn: service program.
See: utility software.
utility software. (IEEE) Computer programs or routines designed to perform some general
support function required by other application software, by the operating system, or by the
system users. They perform general functions such as formatting electronic media, making
copies of files, or deleting files.
-V-
V&V. verification and validation.
VAX. virtual address extension.
VLSI. very large scale integration.
VMS. virtual memory system.
VV&T. validation, verification, and testing.
valid. (1) Sound. (2) Well grounded on principles of evidence. (3) Able to withstand criticism
or objection.
validate. To prove to be valid.
validation. (1) (FDA) Establishing documented evidence which provides a high degree of
assurance that a specific process will consistently produce a product meeting its
predetermined specifications and quality attributes. Contrast with data validation.
validation, process. (FDA) Establishing documented evidence which provides a high degree
of assurance that a specific process will consistently produce a product meeting its
predetermined specifications and quality characteristics.
validation, prospective. (FDA) Validation conducted prior to the distribution of either a new
product, or product made under a revised manufacturing process, where the revisions may
affect the product's characteristics.
validation protocol. (FDA) A written plan stating how validation will be conducted,
including test parameters, product characteristics, production equipment, and decision points
on what constitutes acceptable test results. See: test plan.
validation, retrospective. (FDA) (1) Validation of a process for a product already in
distribution based upon accumulated production, testing and control data. (2) Retrospective
validation can also be useful to augment initial premarket prospective validation for new
products or changed processes. Test data is useful only if the methods and results are
adequately specific. Whenever test data are used to demonstrate conformance to
specifications, it is important that the test methodology be qualified to assure that the test
results are objective and accurate.
validation, software. (NBS) Determination of the correctness of the final program or
software produced from a development project with respect to the user needs and
requirements. Validation is usually accomplished by verifying each stage of the software
development life cycle. See: verification, software.
validation, verification, and testing. (NIST) Used as an entity to define a procedure of
review, analysis, and testing throughout the software life cycle to discover errors, determine
functionality, and ensure the production of quality software.
valid input. (NBS) Test data that lie within the domain of the function represented by the
program.
variable. A name, label, quantity, or data item whose value may be changed many times
during processing. Contrast with constant.
variable trace. (IEEE) A record of the name and values of variables accessed or changed
during the execution of a computer program. Syn: data-flow trace, data trace, value trace. See:
execution trace, retrospective trace, subroutine trace, symbolic trace.
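A crude variable trace can be improvised with Python's `sys.settrace` hook; in this sketch (the traced function and the choice of the local variable `total` are invented for the example) each executed line records the variable's current value:

```python
# Illustrative sketch: recording the value of one local variable ("total",
# chosen for this example) on every line executed in a traced function,
# using Python's standard sys.settrace hook.

import sys

trace_log = []

def tracer(frame, event, arg):
    # On each 'line' event, log the variable's current value if it exists yet.
    if event == "line" and "total" in frame.f_locals:
        trace_log.append(frame.f_locals["total"])
    return tracer

def summer(values):
    total = 0
    for v in values:
        total = total + v
    return total

sys.settrace(tracer)
summer([1, 2, 3])
sys.settrace(None)   # always remove the hook when done

print(trace_log)     # the recorded history of "total" during execution
```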
vendor. A person or an organization that provides software and/or hardware and/or firmware
and/or documentation to the user for a fee or in exchange for services. Such a firm could be a
medical device manufacturer.
verifiable. Can be proved or confirmed by examination or investigation. See: measurable.
verification, software. (NBS) In general the demonstration of consistency, completeness, and
correctness of the software at each stage and between each stage of the development life
cycle. See: validation, software.
verify. (ANSI) (1) To determine whether a transcription of data or other operation has been
accomplished accurately. (2) To check the results of data entry; e.g., keypunching. (3)
(Webster) To prove to be true by demonstration.
version. An initial release or a complete re-release of a software item or software element.
See: release.
version number. A unique identifier used to identify software items and the related software
documentation which are subject to configuration control.
very large scale integration. A classification of ICs [chips] based on their size as expressed
by the number of circuits or logic gates they contain. A VLSI IC contains 100,000 to
1,000,000 transistors.
virtual address extension. Identifies Digital Equipment Corporation's VAX family of
computers, ranging from a desktop workstation to a large scale cluster of multiprocessors
supporting thousands of simultaneous users.
virtual memory system. Digital Equipment Corporation's multiprocessing, interactive
operating system for the VAX computers.
virus. A program which secretly alters other programs to include a copy of itself, and
executes when the host program is executed. The execution of a virus program compromises a
computer system by performing unwanted or unintended functions which may be destructive.
See: bomb, trojan horse, worm.
volume. (ANSI) A portion of data, together with its data carrier, that can be handled
conveniently as a unit; e.g., a reel of magnetic tape, a disk pack, a floppy disk.
-W-
WAN. wide area network.
walkthrough. See: code walkthrough.
watchdog timer. (IEEE) A form of interval timer that is used to detect a possible
malfunction.
waterfall model. (IEEE) A model of the software development process in which the
constituent activities, typically a concept phase, requirements phase, design phase,
implementation phase, test phase, installation and checkout phase, and operation and
maintenance, are performed in that order, possibly with overlap but with little or no iteration.
Contrast with incremental development; rapid prototyping; spiral model.
white-box testing. See: testing, structural.
wide area network. A communications network that covers wide geographic areas such as
states and countries. Contrast with LAN, MAN.
word. See: computer word.
workaround. A sequence of actions the user should take to avoid a problem or system
limitation until the computer program is changed. They may include manual procedures used
in conjunction with the computer system.
workstation. Any terminal or personal computer.
worm. An independent program which can travel from computer to computer across network
connections replicating itself in each computer. They do not change other programs, but
compromise a computer system through their impact on system performance. See: bomb,
trojan horse, virus.
-X-
Xmodem. An asynchronous file transfer protocol initially developed for CP/M personal
computers. First versions used a checksum to detect errors. Later versions use the more
effective CRC method. Programs typically include both methods and drop back to checksum
if CRC is not present at the other end. Xmodem transmits 128 byte blocks. Xmodem-1K
improves speed by transmitting 1024 byte blocks. Xmodem-1K-G transmits without
acknowledgment [for error free channels or when modems are self correcting], but
transmission is cancelled upon any error. Contrast with Kermit, Ymodem, Zmodem.
-Y-
Ymodem. An asynchronous file transfer protocol identical to Xmodem-1K plus batch file
transfer [also called Ymodem batch]. Ymodem-G transmits without acknowledgement [for
error-free channels or when modems are self correcting], but transmission is cancelled upon
any error. Contrast with Kermit, Xmodem, Zmodem.
-Z-
Zmodem. An asynchronous file transfer protocol that is more efficient than Xmodem. It
sends file name, date and size first, and responds well to changing line conditions due to its
variable length blocks. It uses CRC error correction and is effective in delay-induced satellite
transmission. Contrast with Kermit, Xmodem, Ymodem.