Cs 605

The document provides a comprehensive overview of software engineering, covering its definition, characteristics of well-engineered software, and the balancing act between conflicting goals. It discusses various software process models, lifecycle models, and the importance of project management, team structure, and estimation techniques. The document emphasizes the need for structured processes, quality assurance, and effective communication to ensure successful software development.


chap1

What is Software Engineering?

A set of tools, techniques, and processes used to produce software.

Includes:

Programming Languages & their Design

Software Design Techniques

Tools, Testing, Maintenance, Development

Even designing a programming language (like Ada) is part of software engineering as it supports building
better systems.

✅ Characteristics of Well-Engineered Software

Reliable

User-friendly

Efficient

High Quality

Cost-effective
Even with unlimited resources, building such software is still a challenge due to time and budget
constraints.

⚖️ The Balancing Act

Software engineering is about balancing conflicting goals, such as:

Cost vs. Efficiency

Cost vs. Reliability

Efficiency vs. User Interface

Engineers must decide which aspects to prioritize depending on the project (e.g., safety in missiles vs.
cost in business apps).

📉 Law of Diminishing Returns

Like adding sugar to water, improvements in software (quality, UI, efficiency) yield less benefit after a
point despite increasing cost.

After achieving a “reasonable” quality, further investment may not be worth it.

🧠 Software Background (Capers Jones’ Research)

Capers Jones studied 10,000 projects and categorized software work into ~25 tasks. Key ones include:

Project Management

Requirement Engineering
Design

Coding (~13–14% of total effort)

Testing

SQA

Configuration & Integration

📘 “No Silver Bullet” by Fred Brooks

Managers wish for a magical solution to software problems (costs, delays, flaws).

No magic exists—only discipline and consistent effort work.

Just like germ theory revolutionized medicine, structured practices can improve software.

Software Development Activities

Construction Activities:

Requirement Gathering

Design

Coding
Testing

Management Activities:

Project Planning

Software Quality Assurance

Configuration Management

Installation & Training

🧩 Conclusion

Software Engineering is a disciplined, systematic approach that requires balancing priorities, understanding economic feasibility, and following structured processes to develop successful software.

chap2

Software Process

A software process is a structured roadmap that guides the creation of high-quality software on time.

It includes activities that result in work products like code, documents, and data.

A good process provides stability and control over development.

Process Maturity and Capability Maturity Model (CMM)

Developed by SEI (Software Engineering Institute), the CMM helps assess an organization’s software
process maturity.
CMM Levels:

Level  Name        Description

1      Initial     Processes are ad hoc or chaotic. Success depends on individual effort.

2      Repeatable  Basic project management practices in place (cost, schedule, functionality).

3      Defined     Processes are documented, standardized, and used throughout the organization.

4      Managed     Processes and products are measured and controlled using metrics.

5      Optimizing  Focus is on continuous process improvement using feedback and innovation.

Key Process Areas (KPAs)

Each maturity level (except Level 1) has associated KPAs, which are essential software practices.

Each KPA includes:

Goals – Objectives to achieve

Commitments – Organizational promises/requirements

Abilities – Resources and structure needed

Activities – Tasks to implement the KPA

Monitoring Methods – How activities are tracked

Verification Methods – How implementation is validated


KPAs by Level

Level KPAs

1 None – ad hoc processes

2 - Software Configuration Management

- Software Quality Assurance

- Software Subcontract Management

- Software Project Tracking and Oversight

- Software Project Planning

- Requirements Management

3 - Peer Reviews

- Inter-group Coordination

- Software Product Engineering

- Integrated Software Management

- Training Program

- Organization Process Definition

- Organizational Process Focus

4 - Software Quality Management

- Quantitative Process Management

5

- Defect Prevention

- Technology Change Management

- Process Change Management

chap3

Software Lifecycle Models


Phases of a Software System

Vision – Why: Understand purpose.

Definition – What: Define requirements.

Development – How: Build the system.

Maintenance – Change: Adapt and fix.

Lifecycle Model Overview

A lifecycle model is a structured approach to organize software development activities:

Phases include: Requirements → Specification → Design → Implementation → Integration → Maintenance → Retirement.

Common Software Development Models

1. Build-and-Fix Model

No design or specification

Keep building and fixing until the client is satisfied.

✅ Simple for small projects.

❌ Not scalable, no documentation, high maintenance cost.

2. Waterfall Model (a.k.a. Linear Sequential Model)


Sequential phases:

Requirement Analysis & Definition

System & Software Design

Implementation & Unit Testing

Integration & System Testing

Operation & Maintenance

Key features:

Complete phase before moving to the next.

Emphasizes documentation.

❌ Late client feedback → costly fixes.

3. Rapid Prototyping Model

Build a quick mock-up to understand user needs.

Once approved, discard prototype and start actual development.


✅ Captures accurate requirements early.

❌ Prototype not reusable, may mislead stakeholders if misunderstood as final product.

Combined Model: Rapid Prototyping + Waterfall

Use Rapid Prototyping for gathering requirements.

Then follow the Waterfall Model for actual development.

✅ Reduces misunderstanding.

✅ Maintains structured development.

chap4

Incremental Models

Problem with Waterfall: Client feedback is delayed until full product delivery, making corrections
expensive and time-consuming.

Solution: Incremental Model divides the system into smaller pieces (increments) delivered regularly.

Benefits:

Quick feedback from the client

Cost-effective adjustments

Early delivery of working software


Smaller upfront investment

Fast ROI (Return on Investment)

Requirements: Open architecture for easy integration of builds

Two Approaches:

Full Planning First: Requirements/specs/design done for full product, then builds are implemented.

Parallel Construction:

First build is specified → designed → implemented

While first is implemented, second is specified, and so on

Risk: Builds may not integrate well → requires tight coordination

Rapid Application Development (RAD)

Type: High-speed version of incremental model

Goal: Deliver fully functional system in 2–3 months

Ideal for: Projects with well-understood requirements & limited scope


Used in: Information Systems

Synchronize-and-Stabilize Model (Microsoft)

Process:

Conduct interviews to gather requirements

Create specification document

Divide project into 3–4 builds

Small teams work in parallel on builds

Daily: Code is synchronized (integrated & debugged)

End of build: Freeze and stabilize (remove defects)

Benefit: Ensures working software at all times; early user insights

Spiral Model (Barry Boehm)

Focus: Risk management during development

Basic Idea: Waterfall model + risk analysis

Process:
Identify objectives

Explore alternatives

Analyze risks

Develop and test

Plan the next iteration

Diagram: Spiral path showing cumulative cost (radial) and progress (angular)

Strengths:

Good risk handling

No strict separation of development vs. maintenance

Helps in determining how much testing is required

Limitations:

Best suited for large, in-house software projects

Not ideal for small or commercial off-the-shelf projects

chap5
Object-Oriented Lifecycle Models

Focus on iteration, parallelism, and incremental development.

These models adapt better to changing requirements and evolving system design.

🚀 Extreme Programming (XP)

User requirements are captured through stories (short feature descriptions).

Estimates are made for cost and time of each story.

Stories → Build → Tasks → Write Test Cases → Code with Continuous Testing.

Pair Programming: 2 developers work together at one computer.

No Overtime for more than 2 consecutive weeks.

Client representative is always present.

Good for projects with changing requirements and limited scope.

💧 Fountain Model

Activities overlap (not strictly sequential).

Arrows show iteration within each phase.


Smaller maintenance cycle = reduced maintenance effort due to object-orientation.

🧱 Rational Unified Process (RUP)

Developed by Rational Software; tightly integrated with UML and Krutchen’s architecture.

Uses iterations, early testing, risk handling, and parallel activities.

Horizontal Axis: Dynamic aspect — phases, iterations, milestones.

Vertical Axis: Static aspect — disciplines (e.g., design, testing), artifacts (e.g.,
diagrams), roles.

Emphasizes incremental delivery and continuous validation.

📊 Comparison of Lifecycle Models

Model                    Strengths                     Weaknesses

Waterfall                Simple, well-structured       Inflexible, late testing

Incremental              Fast feedback, lower risk     Needs good architecture

RAD                      Rapid delivery                Scope and requirement limits

Synchronize & Stabilize  Daily integration, teamwork   Complex coordination

Spiral                   Risk-focused, iterative       Costly, best for large in-house use

XP                       Agile, responsive to change   Best for small teams/projects

RUP                      Structured, supports reuse    Tool and process heavy

Quality Assurance (QA) & Documentation

QA is ongoing:
Verification after each phase

Validation before final delivery

Documentation is continuous and must not be postponed.

Phase                   Documents                              QA Activities

Requirement Definition  Rapid prototype / Requirements doc     Reviews

Functional Spec.        Specification doc, SPMP, traceability  Reviews, check SPMP

Design                  Architectural and detailed design      Review, traceability

Coding                  Source code, test cases                Code review, testing

Integration             Integrated code, test cases            Integration & acceptance testing

Maintenance             Change records, regression test cases  Regression testing

chap6

Importance of Software Project Management

Essential for tracking cost, schedule, and functionality.

Projects succeed with good management and fail with bad management.

Involves planning, organizing, monitoring, and controlling people and processes.

🎯 Key Factors Influencing Project Success

Project Size – Bigger projects = more complexity.


Delivery Deadline – Realistic deadlines improve quality.

Budget/Costs – Must be estimated and tracked carefully.

Application Domain – Known domains reduce risk.

Technology – New tech can help or hurt productivity.

System Constraints – Must meet non-functional needs.

User Requirements – Clear, complete requirements are crucial.

Available Resources – Must have skilled team members.

⚠️ Project Management Concerns

Ensuring quality, assessing risk, and measuring productivity.

Estimating costs and schedule.

Maintaining communication with clients.

Staffing and securing other resources.

Monitoring progress effectively.

❌ Why Projects Fail


Changing or incomplete requirements.

Unrealistic deadlines.

Underestimating effort and risks.

Technical issues, poor communication.

Ineffective management.

🧩 The 4 P’s of Project Management (The Management Spectrum)

People – The most critical asset; proper organization and motivation needed.

Product – Clear understanding of functional & non-functional requirements.

Process – Choosing and following a structured development model.

Project – All coordinated tasks to build the product successfully.

👥 People & Leadership

Team Leader’s Role: Organize, motivate, and guide the team.

MOI Model:

Motivation – Inspire best performance.


Organization – Define or adjust processes.

Innovation – Promote creativity.

Qualities of an Effective Project Manager:

Problem Solver, Managerial Identity, Achievement Oriented, Team Builder.

According to DeMarco:

Heart – Compassion and care.

Nose – Detect problems early.

Gut – Make timely decisions.

Soul – Be the spirit of the team.

chap7,8

The Software Team

🔶 Factors Influencing Team Structure

To choose the best team structure, consider:

Problem complexity

Program size (lines of code or function points)


Team lifetime

Modularity of the problem

Required quality and reliability

Delivery deadline rigidity

Need for team communication

🔶 Constantine's Team Paradigms

Paradigm Description

Closed Hierarchical structure (traditional authority)

Random Loosely organized; relies on individual initiative

Open Mix of closed (control) and random (innovation)

Synchronous Modular tasks, minimal team communication

🔶 Mantei’s Team Structures

Type                           Description

Democratic Decentralized (DD)  No fixed leader, horizontal communication, group consensus

Controlled Decentralized (CD)  Defined leader, horizontal plus some vertical communication

Controlled Centralized (CC)    Team leader handles problem solving and communication (vertical)

Centralized (CC) = faster for simple tasks

Decentralized (DD) = better ideas, suitable for complex problems

Team morale is highest in DD

🔶 Coordination & Communication Issues

Too little = confusion

Too much = inefficiency

Large projects: Prefer CC or CD

🔶 Coordination Techniques

Type Example

Formal, Impersonal Docs, memos, schedules, reports

Formal, Interpersonal QA reviews, status meetings

Informal, Interpersonal Group meetings, co-location

Electronic Emails, bulletin boards

Interpersonal Networking Informal team discussions

Value vs. effort shown via regression line graph

Techniques above the line = high value

The Product & Process

🔹 Defining the Problem


Establish:

Context

Information objectives

Functional and performance requirements

Then decompose the problem (functional partitioning) for estimation and planning.

🔹 Choosing the Process Model

Project Type Recommended Model

Small, known domain Waterfall

Tight timeline, modular tasks RAD

Large functionality, quick results Incremental

Uncertain requirements Prototyping

Lecture 8: Project Management

🔶 Reel’s 5-Step Success Strategy

Start on the Right Foot – Understand problem, build a strong team

Maintain Momentum – Keep focus and progress

Track Progress – Monitor and take action early

Make Smart Decisions


Postmortem Analysis – Learn from mistakes for future improvement

🔶 W5HH Principle (Barry Boehm)

A 7-question framework:

Why is the system being built?

What will be done?

When will it be done?

Who is responsible?

Where are they located?

How will the job be done?

How much of each resource is needed?

✅ Applicable to any project size or type

🔶 Critical Success Practices (Airlie Council)

✅ Formal risk analysis

✅ Empirical cost & schedule estimation


✅ Metrics-based project management

✅ Earned value tracking

✅ Defect tracking vs. quality goals

✅ People-aware management

🔑 Mastering these practices = project success

chap9

Why Estimate Software Size?

To determine time, cost, and resources needed.

Helps in project planning and cost control.

A standard method is needed for fairness and consistency.

🔹 Ideal Estimation Method Criteria

Objective – not based on opinion.

Widely accepted – used across the industry.

Comparable – used as a common measure.

Meaningful to users – tied to deliverables.


Technology-independent – not tied to any specific programming language or tool.

🔹 Estimation Techniques

Lines of Code (LOC)

Number of objects

Number of GUIs

Number of document pages

Function Points (FP)

🔹 LOC vs FP

❌ Problems with LOC:

Definition is unclear (e.g., count comments or not?).

Depends on the programmer’s style.

Language-dependent (C++ vs Java).

Can’t be used until coding is done.

✅ Benefits of FP:

Based on functionality from the user’s view.


Can be measured early (during requirements).

Language/tool independent.

Enables consistent comparison between projects.

🔹 Paradox of Reversed Productivity

Example: Assembly vs Ada

Assembly code = more lines but harder and slower to write.

Ada = fewer lines, faster coding, lower cost overall.

Cost per line may seem higher, but total project cost and time are lower with Ada.

👉 So, judging productivity by LOC is misleading.
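A quick arithmetic sketch of the paradox. The line counts and per-line costs below are illustrative, not taken from the lecture: the high-level language looks twice as expensive per line, yet its total project cost is lower because far fewer lines are needed.

```python
# Illustrative figures only: the same product written in Assembly vs. Ada.
assembly = {"loc": 10_000, "cost_per_loc": 5.0}   # many lines, cheap per line
ada = {"loc": 3_000, "cost_per_loc": 10.0}        # few lines, pricey per line

def total_cost(project):
    """Total cost = lines of code x cost per line."""
    return project["loc"] * project["cost_per_loc"]

# Ada "costs twice as much per line" yet its total is 40% lower,
# so cost-per-LOC alone misjudges productivity.
assert total_cost(ada) < total_cost(assembly)
```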

🔹 Function Point Analysis (FPA)

Developed by Allan Albrecht at IBM in the 1970s.

Managed by IFPUG since 1984.

A de facto standard for size measurement.

🔹 Global Adoption of FP

Used by IEEE, UK, Canada, Hong Kong, Australia, IRS, IBM, etc.
Especially common in outsourcing, benchmarking, and public sector projects.

🔹 Uses of FP

Estimating effort and scope.

Planning and managing projects.

Handling change requests.

Allocating resources.

Benchmarking and setting goals.

Contract negotiation.

🔹 Common FP-Based Metrics

Size: Function Points

Defects: Per Function Point

Effort: Staff-Months

Productivity: FP per Staff-Month

Duration: Calendar Months


Efficiency: FP per Month

Cost: Per Function Point

chap 10

Overview of Function Point Counting Process

Function Point Analysis (FPA) is a standard method for measuring software size by
quantifying its functionality provided to the user.

🧭 1. Determine the Type of Count

Three types of function point counts:

Development Count: All functions built or customized in the project.

Enhancement Count: Functions added/changed/deleted without changing the application boundary.

Application Count:

a) Only user-used functions

b) All delivered functions

c) Boundary remains the same regardless of the scope.

2. Define the Application Boundary

Identifies the system’s external boundary.


Separates the application from the users.

Determines the scope of counting.

Affects the final function point count.

📊 3. Count Functional Components

A. Transactional Functions

EI (External Inputs)

EO (External Outputs)

EQ (External Inquiries)

B. Data Functions

ILF (Internal Logical Files): Maintained by the application.

EIF (External Interface Files): Used but not maintained by the application.

🧮 4. Calculate Function Points

Step 1: Unadjusted Function Point (UFP) Count

UFP = weighted count of the transactional functions (EI, EO, EQ) plus the data functions (ILF, EIF)

Step 2: Value Adjustment Factor (VAF)

Based on 14 general system characteristics (GSCs)


Step 3: Adjusted Function Point Count

FP = UFP × VAF
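The three steps above can be sketched in code. The component weights used here are the standard IFPUG values for average-complexity components, and VAF = 0.65 + 0.01 × (sum of the 14 GSC ratings, each rated 0 to 5); the component counts themselves are illustrative.

```python
# Standard IFPUG weights for average-complexity components.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Step 1: UFP = weighted sum of transactional and data functions."""
    return sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())

def value_adjustment_factor(gsc):
    """Step 2: VAF from the 14 general system characteristics (0..5 each)."""
    assert len(gsc) == 14 and all(0 <= g <= 5 for g in gsc)
    return 0.65 + 0.01 * sum(gsc)

# Step 3: adjusted count, FP = UFP x VAF (illustrative component counts).
counts = {"EI": 5, "EO": 4, "EQ": 3, "ILF": 2, "EIF": 1}
ufp = unadjusted_fp(counts)              # 20 + 20 + 12 + 20 + 7 = 79
vaf = value_adjustment_factor([3] * 14)  # 0.65 + 0.42 = 1.07
fp = ufp * vaf                           # 84.53
```

In practice each component is first rated low/average/high and weighted accordingly; the sketch fixes everything at average to keep the arithmetic visible.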

🔍 Data Function Counting Details

Internal Logical Files (ILFs)

Logical, user-identifiable data maintained within the application.

Must be updated by one or more elementary processes.

External Interface Files (EIFs)

Logical, user-identifiable data used by the application but maintained externally.

Must exist as ILFs in another application.

🔁 ILF vs. EIF

Feature ILF EIF

Maintained by App? ✅ Yes ❌ No

Located in App? ✅ Yes ❌ No (external app)

User-Identifiable? ✅ Yes ✅ Yes

🧱 Key Terms & Concepts

🔹 Control Information

Influences what, when, or how data is processed (e.g., payroll schedule).

🔹 User Identifiable
Clearly defined and agreed by both users and developers.

🔹 Maintained

Data can be created, updated, or deleted.

🔹 Elementary Process

Smallest meaningful activity from a user perspective.

Must leave the system in a consistent state.

📐 ILF/EIF Counting Rules

✅ ILF Rules

Group must be logical, user identifiable.

Maintained by processes within the application boundary.

✅ EIF Rules

Group must be logical, user identifiable.

Referenced, not maintained, by the application.

Maintained in another app's ILF.

🧩 Complexity Calculation – DETs and RETs

🔸 DET (Data Element Type)

Unique, user-recognizable, non-repeating field.


🔸 RET (Record Element Type)

A subgroup of data in an ILF or EIF.

🔹 DET Counting Rules

Count each unique field used or maintained.

Count calculated fields (e.g., tax).

Count foreign keys used to relate to another file.

Shared files: count only the DETs each application uses.

Example:

SSN, Name, Address = 3 DETs

12 monthly fields = 1 DET (for all) + 1 DET for month identifier = 2 DETs

chap11

RET (Record Element Type) Definition

RET = User-recognizable subgroup of data elements within an ILF (Internal Logical File) or EIF (External Interface File).

Two types of RETs:

Mandatory subgroup: User must use at least one during an elementary process.
Optional subgroup: User may use one or none during an elementary process.

Example: In an HR system:

Salaried Employee (mandatory)

Hourly Employee (mandatory)

Dependent (optional)

RET Counting Rules

Count one RET for each optional or mandatory subgroup of an ILF or EIF.

If no subgroups exist, count the ILF or EIF as one RET.

Hints for Counting ILFs, EIFs, and RETs (not strict rules)

Logical grouping from user perspective:

Count an ILF or EIF only once even if used in multiple processes.

Cannot count the same logical file as both ILF and EIF.

Physical files don’t always equal logical files.

Location of data:
ILF = data maintained inside the application boundary.

EIF = data maintained outside the application boundary.

Maintenance through elementary processes:

Elementary processes can maintain multiple ILFs.

Count ILFs separately if maintained by multiple applications.

Entities and Their Role in RETs and Logical Files

Entity: A distinct thing or object (person, place, concept, event) relevant to the
user and business.

Strong Entity: Can stand alone — corresponds to ILFs or EIFs (logical files).

Weak Entity: Depends on another entity — corresponds to RETs (subgroups within ILFs/EIFs).

Types of Weak Entities (all RETs):

Associative Entity: Represents many-to-many relationships (e.g., Student-Course).

Attributive Entity: Describes characteristics of another entity (e.g., Product-Part).

Entity Subtype: A specialized subset inheriting from a parent entity (e.g., Permanent Employee, Contract Employee).

Logical Files Grouping Approaches

Process Driven: Group entities that are created and deleted together as one logical
file.

Data Driven: Group based on entity independence (independent entities → ILF/EIF; dependent entities → RETs).

Transactional Function Types and Definitions

Function Type Definition/Primary Intent

External Input (EI) Processes data/control info entering the application. Maintains
ILFs or alters system behavior.

External Output (EO) Sends data/control info outside; presents info with
processing logic like calculations or derived data. Can maintain ILFs or alter
behavior.

External Inquiry (EQ) Sends data/control info outside; presents info by retrieval
only, without derived data or altering behavior.

Processing logic may include validation, calculations, filtering, updating ILFs, retrieving EIFs, creating derived data, altering system behavior, and presenting information.

Summary Table of Function Purposes

Function               Alter system behavior  Maintain ILFs  Present info to user

EI (External Input)    Primary intent (PI)    PI             Sometimes (F)

EO (External Output)   Sometimes (F)          Sometimes (F)  PI

EQ (External Inquiry)  Not allowed (N/A)      Not allowed    PI

Legend of Function Types and Processing Logic


Symbol Meaning

m Mandatory that the function must perform this logic

m* Mandatory to perform at least one of these logic types

c Can perform, but not mandatory

n Cannot perform this form of processing logic

Elementary Process Identification

Elementary processes are smallest meaningful user activities.

Each must be self-contained and keep the application in a consistent state.

Transactional Functions Classification: EI, EO, EQ

Primary intents:

EI (External Input): Maintain an Internal Logical File (ILF) or change system behavior.

EO (External Output): Present information to a user.

EQ (External Inquiry): Present information with no updates or derived data.

External Input (EI) Counting Rules

To count an EI:

Data/control must come from outside the app boundary.


Must maintain at least one ILF if not just control info changing behavior.

Must satisfy at least one:

Unique processing logic from other EIs.

Different set of data elements.

Different ILFs/EIFs referenced.

External Output (EO) and External Inquiry (EQ) Counting Rules

Shared Rules:

Sends data/control info outside application boundary.

Must satisfy at least one:

Unique processing logic compared to other EOs/EQs.

Different data element sets.

Different referenced files (ILFs/EIFs).

Additional EO Rules (one must apply):

Contains mathematical formulas/calculations.


Creates derived data.

Maintains at least one ILF.

Alters system behavior.

Additional EQ Rules (all must apply):

Retrieves data/control from ILF or EIF.

No formulas or calculations.

No derived data.

Does not maintain ILFs.

Does not alter system behavior.

Complexity and Contribution

Complexity depends on:

FTRs (File Types Referenced): Number of ILFs/EIFs accessed or maintained.

DETs (Data Element Types): Number of unique, user-recognizable fields crossing the boundary.
FTR Rules for EI

Count FTR per ILF maintained.

Count FTR per ILF or EIF read.

Only count one FTR if ILF is both maintained and read.

DET Rules for EI

Count each user-recognizable field entering or leaving app boundary.

Do not count system-derived or retrieved fields not crossing boundary.

Count DET for error or confirmation messages sent outside boundary.

Example Clarifications

Unit price retrieved internally to add customer order is not a DET.

Local hourly rate provided by the user is a DET; the US hourly rate calculated internally is not.

FTR (File Type Referenced) Counting Rules for EOs and EQs

Count one FTR for each ILF (Internal Logical File) or EIF (External Interface File)
read during the process.

For EOs (External Outputs):

Count one FTR for each ILF maintained (updated) during processing.
Count only one FTR per ILF even if it is both read and maintained.

EO/EQ (External Output/External Inquiry) Characteristics

EO: Outputs with processing or calculations (e.g., reports with computed fields).

EQ: Outputs with no processing beyond data retrieval (e.g., simple listings).

Other Tips for Counting

The elementary process is the smallest unit of user-visible work, triggered by user
or internal events.

Consider the application boundary carefully: Only count data crossing this
boundary as DETs.

Logical processes invoked by different methods still count as one DET if they
represent the same action.

Check whether the process changes data (EO) or just retrieves it (EQ).

Example Based on Your Text:

If a user tries to add an existing employee and the system generates error
messages highlighting the incorrect field:

Count one DET for the whole set of system response messages indicating error or
confirmation.

If the user can start adding an employee by clicking OK or pressing a PF key:


Count one DET for the action initiation, regardless of the method.

Summary Table

Aspect                                         Count as?          Notes

User-entered unique fields (input)             Yes, one DET each  User recognizable, non-repeating

Fields output (messages, reports)              Yes, one DET each  Include error and confirmation messages

Same data both input and output                Count once

Multiple methods to invoke the same process    One DET            One action initiation regardless of method

Internal derived fields not crossing boundary  No

Literals, labels, page numbers, timestamps     No

Each ILF/EIF referenced (read/maintained)      One FTR each       For EO/EQ processing

chap12, 13

Lecture 12: Software Process and Project Metrics

Why measure?

Measurement helps identify problems and assess the effectiveness of solutions.

Quoting Lord Kelvin: "When you can measure what you are speaking about and express it in numbers, you know something about it."

Like doctors measure vital signs before treatment, software processes and
products should be measured to improve quality continuously.

Key concepts:
Measure: A quantitative value of an attribute (e.g., size of software).

Measurement: The process of collecting data (e.g., Function Point Analysis to measure size).

Metric: A normalized relation of measures (e.g., defects per function point).

Indicators: Metrics that suggest potential problems, not the problems themselves.

Why measure in software?

Helps plan and estimate project effort and quality based on historical data.

Helps analyze bottlenecks, improve productivity and product quality.

Tom Gilb's advice:

Anything that can be quantified should be measured.

Measuring is better than not measuring, even if the method is imperfect.

Metrics for Software Quality:

Quality depends on the quality of intermediate work products (requirements, design, code, testing).
Metrics include: errors/defects per function point, errors per review hour, errors
per KLOC (thousand lines of code).

These help assess the effectiveness of quality assurance at team and individual
levels.

Lecture 13: Software Quality Factors

McCall’s Software Quality Factors (1978):

Grouped by phases of software lifecycle:

Operation phase:

Correctness: Meets specs and mission objectives.

Reliability: Performs intended function accurately.

Efficiency: Uses resources optimally.

Integrity: Controls unauthorized access.

Usability: Ease of learning and using.

Revision phase:

Maintainability: Effort to fix errors.

Flexibility: Effort to modify.


Testability: Effort to test correctness.

Adaptation phase:

Portability: Effort to move across environments.

Reusability: Can be reused in other apps.

Interoperability: Effort to connect with other systems.

Note: Despite technological advances, these factors remain highly relevant.

Measuring Quality (Gilb’s extension):

Correctness: Measured by defects per KLOC or per function point (defects = verified non-conformance).

Maintainability:

MTTC (Mean Time To Change): Time to analyze, design, implement, test, and deploy a change.

Spoilage cost: Cost of fixing defects after release, tracked over time.

Integrity: Ability to withstand attacks (accidental or intentional).

Measured by threat probability and security probability:


Integrity = Σ [(1 - threat) × (1 - security)]
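A minimal sketch of the integrity formula exactly as the notes state it, summed over attack types; the threat and security probabilities below are made up for illustration.

```python
# integrity = sum over threat types of (1 - threat) * (1 - security),
# where `threat` is the probability an attack of that type occurs and
# `security` is the probability the attack is repelled (per the notes).
def integrity(profile):
    """profile: list of (threat, security) probability pairs."""
    return sum((1 - threat) * (1 - security) for threat, security in profile)

# Illustrative probabilities for two attack types.
score = integrity([(0.25, 0.95), (0.10, 0.90)])
```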

Usability: Assessed by:

Skill needed to learn system

Time to become moderately efficient

Productivity improvement

Subjective user feedback

Defect Removal Efficiency (DRE)

Measures effectiveness of QA processes before shipping product.

Formula:

DRE = E / (E + D)

where

E = defects found before delivery,

D = defects found after delivery.
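The ratio translates directly to code; the defect counts in the example are illustrative.

```python
def defect_removal_efficiency(e_found_before, d_found_after):
    """DRE = E / (E + D): the fraction of all known defects that QA
    activities removed before the product shipped."""
    return e_found_before / (e_found_before + d_found_after)

# Illustrative: 96 defects caught before delivery, 4 reported afterwards.
assert defect_removal_efficiency(96, 4) == 0.96
```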

Data from Capers Jones (1997) shows:

Testing alone finds ~40% defects on average.

Combining design and code inspections with testing can raise DRE up to 97%.

Design and code inspections are critical yet often undervalued.

chap 14

Quality of Design Metrics

Structural Complexity:

S(i) = f_out(i)^2

where f_out(i) is the fan-out of module i (the number of modules it directly calls).

Data Complexity:

D(i) = v(i) / (f_out(i) + 1)

where v(i) is the number of input and output variables of module i.

System Complexity:

C = Σ (S(i) + D(i))

Baseline for Metrics


Data from past projects must be collected, cleaned, and stored in a database.

Baselines allow estimation and process improvement.

Data must be:

Accurate

Collected consistently using the same method

Relevant to similar applications

Improved over time with feedback

Metrics in Small Organizations

Full-scale metrics programs can be resource-intensive.

Even small organizations (~20 people) benefit from a simple, cost-effective, value-
oriented metrics program.

Focus should be on results, not just measurement.

Define clear objectives for what you want to measure.

Example: Measuring Change Request Processing Time

Measure:
t_queue: Time from request submission to evaluation completion

Size of change request (function points)

W_eval: Evaluation effort (person-months)

t_eval: Time from evaluation to assignment

W_change: Effort to make the change (person-months)

t_change: Time to make the change

E_change: Errors uncovered while making the change

D_change: Defects uncovered after the change is released

Sample Data Table (Before Normalization)

Project   Size (FP)   Effort (Pm)   Cost (Rs. '000)   Doc (pages)   Pre-shipment errors   Post-shipment defects   People
abc       120         24            168000            365           134                   29                      3
def       270         62            440000            1224          321                   86                      5
ghi       200         43            314000            1050          256                   64                      6

Normalized Data (Per Function Point)

Project   Size (FP)   Effort (Pm/FP)   Cost (Rs/FP)   Doc (pages/FP)   Pre-shipment errors/FP   Post-shipment defects/FP   People
abc       120         0.20             1400           3.04             1.12                     0.24                       3
def       270         0.23             1629           4.53             1.19                     0.32                       5
ghi       200         0.22             1570           5.25             1.28                     0.32                       6
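The normalization in the table above is a simple division of each raw measure by project size in FP; a sketch that reproduces it (last-digit rounding may differ from the lecture table):

```python
# Rows: (project, size_FP, effort_Pm, cost_Rs, doc_pages, pre_ship_errors, post_ship_defects)
projects = [
    ("abc", 120, 24, 168000, 365, 134, 29),
    ("def", 270, 62, 440000, 1224, 321, 86),
    ("ghi", 200, 43, 314000, 1050, 256, 64),
]

for name, fp, effort, cost, doc, errors, defects in projects:
    # Divide every raw measure by the project's size in function points.
    print(f"{name}: {effort / fp:.2f} Pm/FP, {cost / fp:.0f} Rs/FP, "
          f"{doc / fp:.2f} pages/FP, {errors / fp:.2f} errors/FP, "
          f"{defects / fp:.2f} defects/FP")
```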

Using Data for Analysis

Use statistical and graphical techniques to analyze process improvements and their impact. This approach is known as statistical process control.


chap16

Purpose of Control Charts

Used to determine whether variations in process metrics over time are statistically meaningful or merely due to random (natural) variation.

Helps distinguish between stable (in control) and unstable (out of control)
processes.

Originated in manufacturing (1920s, Walter Shewhart); now widely used in software engineering.

Types of Control Charts

Moving Range Control Chart


Individual Control Chart

Moving Range Control Chart Steps

Calculate moving ranges: absolute differences between successive data points.

Compute the mean moving range and plot it.

Multiply mean moving range by 3.268 to get the Upper Control Limit (UCL).

Plot moving ranges and check if they stay within UCL.

Inside UCL = process stable

Outside UCL = process unstable
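The four steps above can be sketched directly; the change-processing times below are hypothetical:

```python
# Hypothetical metric values, e.g. days to process successive change requests.
times = [5.1, 4.7, 6.0, 5.5, 4.9, 7.2, 5.3, 5.8]

# Step 1: moving ranges are absolute differences between successive points.
moving_ranges = [abs(b - a) for a, b in zip(times, times[1:])]

# Steps 2-3: mean moving range, then UCL = mean moving range x 3.268.
mean_mr = sum(moving_ranges) / len(moving_ranges)
ucl = 3.268 * mean_mr

# Step 4: the process is stable if every moving range stays inside the UCL.
stable = all(mr <= ucl for mr in moving_ranges)
print(f"mean moving range = {mean_mr:.3f}, UCL = {ucl:.3f}, stable = {stable}")
```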

Individual Control Chart Steps

Plot individual metric values.

Calculate the average metric value (Am).

Calculate Upper Natural Process Limit (UNPL) = Am + (mean moving range × 2.66).

Calculate Lower Natural Process Limit (LNPL) = Am − (mean moving range × 2.66).

(Don't plot the LNPL if it is negative and the metric cannot be < 0.)


Calculate standard deviation:

σ = (UNPL − Am) / 3
Plot lines at ±1σ and ±2σ from Am.

Use 4 zone rules to detect out-of-control process:

Any value outside UNPL

Two of three consecutive points >2σ away from Am


Four of five consecutive points >1σ away from Am

Eight consecutive points on one side of Am

If none of the above, process is in control.
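A sketch of the natural process limits and the first zone rule; the metric values are hypothetical, and rules 2-4 (which scan runs of consecutive points against the 1σ and 2σ lines) are omitted for brevity:

```python
# Hypothetical individual metric values (same units throughout).
times = [5.1, 4.7, 6.0, 5.5, 4.9, 7.2, 5.3, 5.8]

am = sum(times) / len(times)  # average metric value Am
mean_mr = sum(abs(b - a) for a, b in zip(times, times[1:])) / (len(times) - 1)

unpl = am + 2.66 * mean_mr    # Upper Natural Process Limit
lnpl = am - 2.66 * mean_mr    # Lower Natural Process Limit
sigma = (unpl - am) / 3       # one standard deviation

# Zone rule 1: any single value outside the natural process limits.
out_of_control = any(x > unpl or x < lnpl for x in times)
print(f"Am = {am:.2f}, UNPL = {unpl:.2f}, LNPL = {lnpl:.2f}, "
      f"sigma = {sigma:.2f}, out of control = {out_of_control}")
```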

Example Result Interpretation

After a process improvement starting from project 11:

Variability decreased.

Average time for change implementation improved by 29%.

Conclusion: The changes were effective and process is stable.

chap17

Lecture 16: Interpreting Measurements in Software Projects

Good metrics should be simple, cheap, and valuable for management.

Defects tracking:

Plot defects reported vs defects fixed over time.

The gap between reported and fixed defects shows product stability:

Increasing gap → unstable product


Decreasing gap → stable product, ready for shipment

Defects vs use cases run:

Use control limits from past data.

If defects < lower limit → insufficient testing coverage.

If defects > upper limit → poor design/coding quality.

Ripple defects indicate design coupling:

High ripple defect frequency → tight coupling → poor maintainability.

Not-a-defect issues:

Many defects marked “not-a-defect” indicate a requirements misunderstanding between teams.

Lecture 17: Software Project Planning

Project planning covers:

Software scope estimation

Resource requirements
Time requirements

Structural decomposition

Risk analysis and planning

Software Scope Estimation:

Understand what data, controls, functions, performance, constraints, interfaces, and reliability are needed.

Meet with client to gather requirements.

Use FAST (Facilitated Application Specification Technique), a collaborative approach involving both customers and developers.

Feasibility Analysis: Check if the project is feasible in terms of:

Technology: Can we build it technically?

Finance: Can we afford it?

Time: Can we finish on time?

Resources: Do we have required tools and people?

Software Project Estimation:

Difficult and influenced by many factors.

Use historic data, decomposition, and empirical models.

Empirical model formula example:

E = A + B × (ev)^C

where

E is effort (person-months),

ev is the estimation variable (size in LOC or FP), and

A, B, C are empirically derived constants.

Example: COCOMO basic model

E = 3.2 × (KLOC)^1.05
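The empirical model is a one-line computation. The default constants below are the COCOMO basic-model values quoted above; in practice A and B must be calibrated against an organization's own historical data:

```python
def empirical_effort(size_kloc, a=3.2, b=1.05):
    """E = A * (size)^B in person-months; size in KLOC for the COCOMO basic model."""
    return a * size_kloc ** b

# Hypothetical 33.2 KLOC project:
print(f"{empirical_effort(33.2):.1f} person-months")
```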

Software Equation (Dynamic Estimation Model):


E = [LOC × B^0.333 / P]^3 × (1 / t^4)
where:

E = effort (person-months)

t = duration (months)

B = skill factor (integration, QA, etc.)

P = productivity parameter (process maturity, tools, team skills)
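A sketch of the software equation. The calibration values for B and P below are hypothetical and must come from an organization's own baseline; note that the units must be consistent, so this sketch takes t in years and yields E in person-years:

```python
def software_equation_effort(loc, b, p, t_years):
    """E = [LOC * B^(1/3) / P]^3 * (1 / t^4), with t in years and E in person-years."""
    return (loc * b ** (1 / 3) / p) ** 3 / t_years ** 4

# Hypothetical calibration: B = 0.28, P = 12000, 33,200 LOC, 1.05-year schedule.
effort_py = software_equation_effort(33200, 0.28, 12000, 1.05)
print(f"{effort_py * 12:.0f} person-months")
```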

Buy vs Build Decision:

Options: Build, Reuse, Buy, Contract.

Steps:
Develop specs for function & performance.

Estimate internal cost & time.

Select candidate apps/components.

Compare options via a decision matrix or benchmark tests.

Evaluate quality, support, reputation.

Check opinions from other users.

Considerations:

Delivery date

Development cost (acquisition + customization)

Maintenance cost

Use decision tree with probabilities to calculate expected cost for each option.
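The decision-tree arithmetic reduces to probability-weighted path costs per option; all branch probabilities and costs below are hypothetical illustrations:

```python
def expected_cost(outcomes):
    """outcomes: (probability, path_cost) pairs for one branch of the decision tree."""
    return sum(p * cost for p, cost in outcomes)

# Hypothetical branches: Build (30% simple / 70% difficult) vs.
# Buy (40% minor changes / 60% major changes needed).
build = expected_cost([(0.30, 380000), (0.70, 450000)])
buy = expected_cost([(0.40, 210000), (0.60, 310000)])
print(f"build = {build:.0f}, buy = {buy:.0f}, "
      f"cheaper = {'Buy' if buy < build else 'Build'}")
```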

Example costs calculated showed Buy was the most cost-effective option.

Risk Analysis and Management Overview

Risk is about future uncertain events that might affect the project.
It involves:

Future: Identifying risks that could cause the project to fail or deviate.

Change: Understanding how changes in requirements, technology, personnel, etc., can trigger risks.

Choice: Evaluating options to handle each risk.

Risk Characteristics:

Uncertainty: The risk may or may not occur.

Loss: If the risk happens, it results in negative consequences.

Risk Management Approaches:

Reactive: Wait for problems to happen, then respond (like Indiana Jones).

Proactive: Identify and analyze risks before work starts, rank them, and prepare a
risk management plan.

Main goal: Avoid risks where possible.

Prepare contingency plans for risks that cannot be avoided.


Risk Analysis involves answering:

What can go wrong?

What is the likelihood of it going wrong?

What will the damage be?

What can we do about it?
