
CSE3005 - Software Engineering

UNIT – 5 : QUALITY & MAINTENANCE


13-AUGUST-2021
Topics not covered
1. The Unified Process- Introduction to Agile Process.
2. Empirical Estimation Models - The Make/Buy Decision
3. Art of debugging – Project management
4. Jacobson Methodology Unified Approach.
QUALITY & MAINTENANCE
- Software evolution (21)
- Verification and Validation (23)
- Critical Systems Validation (24)
- Metrics for Process, Project and Product
- Quality Management (27)
- Process Improvement (28)
- Risk Management, Configuration Management (29)
- Software Cost Estimation (26)
Software Evolution
Software change is inevitable
→New requirements emerge when the software is used;

→The business environment changes;

→Errors must be repaired;

→New computers and equipment are added to the system;

→The performance or reliability of the system may have to be improved.


Software Evolution
A key problem for organizations is implementing and managing
change to their existing software systems.
Software Evolution
Why do we have to do it?

• Organizations have huge investments in their software systems - they are critical
business assets.

• To maintain the value of these assets to the business, they must be changed and
updated.

• The majority of the software budget in large companies is devoted to evolving existing software rather than developing new software.
Software Evolution
Software Evolution

• Program Evolution Dynamics


• Software Maintenance
• Evolution Processes
• Legacy System Evolution
Software Evolution
Program Evolution Dynamics
→ PED is the study of the processes of system change.

→ Lehman and Belady proposed that there were a number of ‘laws’ which
applied to all systems as they evolved.
→These are sensible observations rather than laws. They are applicable to large systems developed by large organizations, and perhaps less applicable in other cases.
Software Evolution
Program Evolution Dynamics (Cont.)
→ Continuing Change
→ Increasing Complexity
→ Large Program Evolution
→ Organizational Stability
→ Conservation of Familiarity
→ Continuing Growth
→ Declining Quality
→ Feedback System
The majority of organizations, during their evolutionary process, incorporate these laws.
Software Evolution
First Law (Continuing Change)
→ System maintenance is an inevitable process

→ As the system's environment changes, new requirements emerge and the system must be modified.
→ The evolution process is a cycle: modifying the system in turn triggers further environmental changes.
Software Evolution
Second Law (Increasing Complexity)
→ As a system is changed, its structure degrades.
→ To avoid this, invest in preventative maintenance where you spend time improving the software structure without adding to its functionality.
→ Extra resources must be devoted to preserving and simplifying the
structure
Software Evolution
Third Law (Large Program)
→ Program evolution is a self-regulating process.

→System attributes such as size, time between releases and the number of reported errors are approximately invariant for each system release.
Software Evolution
Fourth Law (Organizational Stability)
→ Over a program's lifetime, its rate of development is approximately constant and independent of the resources devoted to system development.
→This law confirms that large software development teams are often
unproductive because communication overheads dominate the work of the
team.
Software Evolution
Fifth Law (Conservation of Familiarity)
→ Over the lifetime of a system, the incremental change in each release is
approximately constant.
→ Avoid large functionality releases
→ Each release should include relatively little new functionality


Software Evolution
Sixth Law (Continuing Growth)
→ The functionality offered by systems has to continually increase to
maintain user satisfaction.
Software Evolution
Seventh Law (Declining Quality)
→ The quality of systems will appear to be declining unless they are
adapted to changes in their operational environment.
Software Evolution
Eighth Law (Feedback System)
→ Evolution processes incorporate multi-agent, multi-loop feedback systems and you have to treat them as feedback systems to achieve significant product improvement.
Software Evolution
Software Maintenance
→ Changing a system after it has been delivered

→ Separate development groups involved before and after delivery


→ Simple Error – Coding errors

→ More extensive – Design errors

→ Significant changes – specification errors or accommodating new requirements.


Software Evolution
Three types
→Maintenance to repair software faults

→Maintenance to adapt the software to a different


operating environment
→Maintenance to add to or modify the system's
functionalities
Software Evolution
Maintenance Cost
→ Maintenance often costs four times more than development
→ To increase cost-effectiveness, invest effort in design and implementation: the system is very difficult to modify after delivery, so make it easy to understand during development.
Software Evolution
Maintenance Cost Factors
→ Team stability : same staff involvement

→ Contractual responsibility : separate contract

→ Staff skills : staff are often relatively inexperienced and unfamiliar with the
application domain.
→ As programs age, their structure is degraded and they become harder to understand
and change (software re-engineering)
Software Evolution
Maintenance Prediction
→ Concerned with predicting which parts of the system may cause problems and have high maintenance costs.
→ Whether a system change should be accepted depends, to some extent, on the maintainability of the system components affected by that change.
→ Implementing system changes tends to degrade the system structure and hence reduce its
maintainability.
→ Maintenance costs depend on the number of changes, and the costs of change
implementation depend on the maintainability of system components.
Software Evolution
Maintenance prediction
Software Evolution
Change Prediction
→ Predicting the number of changes requires an understanding of the relationships between a system and its environment.
→ Tightly coupled systems require changes whenever the environment is changed.

→ Factors influencing this relationship are

→ Number and complexity of system interfaces; (demands for change)

→ Number of inherently volatile system requirements; (reflects organizational policy)

→ The business processes where the system is used. (environment)


Software Evolution
Complexity metrics
→ Predictions of maintainability can be made by assessing the complexity of system components.

→ Studies have shown that most maintenance effort is spent on a relatively small number of system
components.
→ Complexity depends on

→ Complexity of control structures;

→ Complexity of data structures;

→ Object, method (procedure) and module size (a small worked example follows).
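To make the control-structure point concrete, here is a small worked example (an illustration added here, not from the original slides): cyclomatic complexity can be counted as the number of decision points plus one, so the hypothetical C function below has complexity 4.

#include <stdio.h>

/* Hypothetical example: cyclomatic complexity = decision points + 1.
   This function has three decisions (the loop condition and two ifs),
   so its cyclomatic complexity is 4. */
int classify(int values[], int n)
{
    int negatives = 0;
    for (int i = 0; i < n; i++) {   /* decision 1 */
        if (values[i] < 0)          /* decision 2 */
            negatives++;
    }
    if (negatives > n / 2)          /* decision 3 */
        return -1;
    return negatives;
}

int main(void)
{
    int data[] = { 3, -1, -4, 2, -7 };
    printf("%d\n", classify(data, 5));  /* prints -1: negatives dominate */
    return 0;
}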


Software Evolution
Process metrics
→ Process measurements may be used to assess maintainability

→Number of requests for corrective maintenance;

→Average time required for impact analysis;

→Average time taken to implement a change request;

→Number of outstanding change requests.

→If any or all of these is increasing, this may indicate a decline in maintainability.
Evolution processes
Evolution processes vary depending on
→ The type of software being maintained;

→ The development processes used;

→ The skills and experience of the people involved

→ Processes may be formal or informal

→Proposals for system change are the driver for system evolution.

→Change identification and evolution continue throughout the system lifetime.


Evolution processes
Change identification and evolution processes
Evolution processes
System evolution process
Evolution processes
Change Implementation
Emergency repair
Evolution processes
Emergency Change Request
→ Urgent changes may have to be implemented without going through all stages of the software
engineering process

→ If a serious system fault has to be repaired;

→ If changes to the system’s environment (e.g. an OS upgrade) have unexpected effects;

→ If there are business changes that require a very rapid response (e.g. the release of a competing
product).
Evolution processes
Re-Engineering
Evolution processes
Re-Engineering
→ Re-structuring or re-writing part or all of a legacy system without changing its
functionality.
→Applicable where some but not all sub-systems of a larger system require frequent
maintenance.
→Re-engineering involves adding effort to make them easier to maintain. The system
may be re- structured and re-documented.
Evolution processes
Advantages of reengineering
→ Reduced risk

→ There is a high risk in new software development. There may be development


problems, staffing problems and specification problems.
→ Reduced cost

→The cost of re-engineering is often significantly less than the costs of developing
new software.
Re-Engineering Process
Evolution processes
Activities in reengineering process

Source code translation
• Convert code to a new language.

Reverse engineering
• Analyze the program to understand it;

Program structure improvement
• The control structure of the program is analysed and modified

Program modularisation
• Reorganize the program structure; related components are grouped together and redundancy is removed

Data reengineering
• Clean-up and restructure system data.
Re-Engineering Approaches (Cost vs Re-engineering)
Evolution processes
Reengineering Cost factors

The quality of the software to be reengineered.

The tool support available for reengineering.

The extent of the data conversion which is required.

The availability of expert staff for reengineering.

This can be a problem with old systems based on technology that
is no longer widely used.
Legacy System Evolution

Organizations that rely on legacy systems must choose a strategy for
evolving these systems
● Scrap the system completely – not making an effective contribution
● Leave the system unchanged and continue with regular maintenance
– stable and few changes
● Re-engineer the system to improve its maintainability – Quality
degraded, regular changes
● Replace all or part of the system with a new system – change
hardware
System quality and business value
Legacy System Evolution

Four Categories
● Low quality, low business value
● These systems should be scrapped.
● Low-quality, high-business value
● Should be re-engineered or replaced if a suitable system is available
● High-quality, low-business value
● Maintain and later scrap completely
● High-quality, high business value
● Continue in operation using normal system maintenance.
Legacy System Evolution

Assessment should take different viewpoints into account
● System end-users;
● Business customers
● Line managers
● IT managers
● Senior managers
● Interview different stakeholders and collate results.
Legacy System Evolution

System Quality Assessment
● Business process assessment
● How well does the business process support the current goals of the
business?
● Environment assessment
● How effective is the system’s environment and how expensive is it to
maintain?
● Application assessment
● What is the quality of the application software system?
Verification and Validation
Verification and validation planning
Software inspections
Automated static analysis
Cleanroom software development
Verification and Validation
Verification:
"Are we building the product right?"
The software should conform to its specification.
Validation:
"Are we building the right product?"
The software should do what the user really requires.
Verification and Validation
V & V process
• Is a whole life-cycle process - V & V must be applied at each stage in the
software process.
• Has two principal objectives
→The discovery of defects in a system;

→The assessment of whether or not the system is useful and useable in an operational
situation.
Verification and Validation
V & V Goals
• Verification and validation should establish confidence that the software is
fit for purpose.
• This does NOT mean completely free of defects.
• Rather, it must be good enough for its intended use and the type of use will
determine the degree of confidence that is needed.
Verification and Validation
When and Where ?
→Depends on system’s purpose, user expectations and marketing environment

→Software function
→The level of confidence depends on how critical the software is to an organisation.

→User expectations
→Users may have low expectations of certain kinds of software.

→Marketing environment
→Getting a product to market early may be more important than finding defects in the program.
Verification and Validation
Static vs Dynamic Verification
• Software inspections
Concerned with analysis of the static system representation to discover problems (static
verification)
→May be supplemented by tool-based document and code analysis

• Software testing
Concerned with exercising and observing product behaviour (dynamic verification)
→The system is executed with test data and its operational behaviour is observed
Verification and Validation

[Figure: V & V coverage. Software inspections (static verification) apply to the requirements specification, high-level design, formal specification, detailed design and program; program testing (dynamic verification) applies to the prototype and the program.]
Verification and Validation
Program testing
→Can reveal the presence of errors NOT their absence.

→The only validation technique for non-functional requirements as the software has to
be executed to see how it behaves.
→Should be used in conjunction with static verification to provide full V&V coverage.
Verification and Validation
Types of testing
Defect testing
→Tests designed to discover system defects.

→A successful defect test is one which reveals the presence of defects in a system.

→Covered in Chapter 23

Validation testing
→Intended to show that the software meets its requirements.

→A successful test is one that shows that a requirement has been properly implemented.
Testing and debugging
Testing and debugging

→Defect testing and debugging are distinct processes.

→Verification and validation is concerned with establishing the existence of defects in a program.

→Debugging is concerned with locating and repairing these errors.

→Debugging involves formulating hypotheses about program behaviour and then testing these hypotheses to find the system error.
Verification and Validation
The debugging process

[Figure: The debugging process. Locate error → design error repair → repair error → retest program, drawing on the test results, the specification and the test cases.]
Verification and Validation
V & V planning
→Careful planning is required to get the most out of testing and inspection processes.

→Planning should start early in the development process.

→The plan should identify the balance between static verification and testing.

→Test planning is about defining standards for the testing process rather than describing
product tests.
Planning verification and validation
The V-model of development

[Figure: The V-model. Requirements specification, system specification, system design and detailed design drive the acceptance test plan, system integration test plan and sub-system integration test plan; module and unit code and test sit at the base; acceptance testing, system integration testing and sub-system integration testing follow, leading into service.]
Planning verification and validation
The structure of a software test plan
→The testing process
→A description of the major phases of the testing process. These might be as described earlier in this chapter.

→Requirements traceability
→Users are most interested in the system meeting its requirements and testing should be planned so that all
requirements are individually tested.

→ Tested items.
→The products of the software process that are to be tested should be specified.
Planning verification and validation
The structure of a software test plan
• Testing schedule
→An overall testing schedule and resource allocation for this schedule. This, obviously, is linked to the more general project
development schedule.
• Test recording procedures
→It is not enough simply to run tests. The results of the tests must be systematically recorded. It must be possible to audit the testing process to check that it has been carried out correctly.
• Hardware and software requirements
→This section should set out software tools required and estimated hardware utilisation.
• Constraints
→Constraints affecting the testing process such as staff shortages should be anticipated in this section.
Verification and Validation
Planning verification and validation

Software inspections

Automated static analysis


Verification and formal methods
Software Inspections
→ Software inspection is a static V & V process in which a system representation is reviewed to find errors, omissions and anomalies.
→They may be applied to any representation of the system (requirements,
design, configuration data, test data, etc.) to discover errors
→They have been shown to be an effective technique for discovering
program errors.
Software Inspections
Advantages of inspection over testing:
→ During testing, errors can mask (hide) other errors.

→ Single inspection session covers many errors in system

→ Incomplete versions of a system can be inspected without additional cost; unlike testing, parts still under development do not need special test harnesses.
→ An inspection can also consider broader quality attributes of a program
→ Compliance with standards, portability and maintainability, inappropriate algorithms and poor
programming
Program inspections
→ Objective of program inspection is defect detection
→ E.g. logical errors, anomalies in the code, etc.

→ Is a formal process that is carried out by a team of at least four people.
→ Author, Reader, Tester and Moderator

→ They analyse the code and point out possible defects


Program inspections
→ General process: Planning → Overview → Individual preparation → Inspection meeting → Rework → Follow-up
Program inspections
→ System overview presented to inspection team.

→Code and associated documents are distributed to the inspection team in advance.
→Inspection takes place and discovered errors are noted.

→Modifications are made to repair discovered errors.

→Re-inspection may or may not be required.


Program inspections
→ Author or owner: The programmer or designer responsible for producing the program or document. Responsible for
fixing defects discovered during the inspection process.

→ Inspector: Finds errors, omissions and inconsistencies in programs and documents. May also identify broader issues
that are outside the scope of the inspection team.

→ Reader: Presents the code or document at an inspection meeting.

→ Scribe: Records the results of the inspection meeting.

→ Chairman or moderator: Manages the process and facilitates the inspection. Reports process results to the Chief
moderator.

→ Chief moderator: Responsible for inspection process improvements, checklist updating, standards development etc.
Program inspections
Inspection Checklist
→ A checklist of common programmer errors is often used to focus the discussion.

→ Error checklists are programming language dependent and reflect the characteristic
errors that are likely to arise in the language.
→ Timing: Depends on the experience of the inspection team, the programming
language and the application domain.
Program inspections
Example : Inspection Checklist
→ Data faults: Are all program variables initialised before their values are used? Have all constants been named? Should the upper bound of arrays be equal to the size of the array, or size - 1? If character strings are used, is a delimiter explicitly assigned? Is there any possibility of buffer overflow?
→ Control faults: For each conditional statement, is the condition correct? Is each loop certain to terminate? Are compound statements correctly bracketed? In case statements, are all possible cases accounted for? If a break is required after each case in case statements, has it been included?
→ Input/output faults: Are all input variables used? Are all output variables assigned a value before they are output? Can unexpected inputs cause corruption?
(An annotated fragment with examples of these faults follows.)
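As a hedged illustration (added here, not part of the original slides), the hypothetical C fragment below is deliberately seeded with faults from the checklist above; each one is marked with a comment naming the checklist entry an inspector should match it against.

#include <stdio.h>
#include <string.h>

#define BUF_SIZE 8                       /* constants are named: OK */

int main(void)
{
    char buf[BUF_SIZE];
    int total;                           /* data fault: used below before
                                            being initialised */
    int grade = 2;

    strcpy(buf, "overflowing input");    /* data fault: possible buffer
                                            overflow, source > BUF_SIZE */

    for (int i = 0; i <= BUF_SIZE; i++)  /* data fault: upper bound should
                                            be BUF_SIZE - 1, not BUF_SIZE */
        total += buf[i];

    switch (grade) {                     /* control fault: case 1 has no
                                            break, so it falls through */
    case 1:
        printf("low\n");
    case 2:
        printf("high\n");
    }
    printf("%d\n", total);
    return 0;
}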
Program inspections
Inspection Rate
→ 500 statements/hour during overview.

→125 source statement/hour during individual preparation.

→90-125 statements/hour can be inspected.

→Inspection is therefore an expensive process.

→Inspecting 500 lines costs about 40 person-hours of effort - about 2,24,000 Rupees
Verification and Validation
Planning verification and validation

Software inspections

Automated static analysis

Verification and formal methods


Automated Static Analysis
→ Static analyzers are software tools for source text processing.

→ Automated checklist task.

→ They detect whether statements are well formed, make inferences about the control flow in the program and, in many cases, compute the set of all possible values for program data.
→ The intention of automated static analysis is to draw an inspector's attention to anomalies in the program: variables used without initialization, suspect loop ranges, and unused variables.
Automated Static Analysis
Stages of static Analysis
→ Control flow Analysis : loops, exit and entry

→ Data use Analysis : highlights variables

→ Interface Analysis : consistency of routine and procedure declarations and their use

→ Information flow Analysis : dependencies between input and output variables.

→ Path Analysis : paths through the program and sets out the statements executed in that path.
Static analysis checks
Fault classes and their associated static analysis checks:
Data faults: variables used before initialisation; variables declared but never used; variables assigned twice but never used between assignments; possible array bound violations; undeclared variables.
Control faults: unreachable code; unconditional branches into loops.
Input/output faults: variables output twice with no intervening assignment.
Interface faults: parameter type mismatches; parameter number mismatches; non-usage of the results of functions; uncalled functions and procedures.
Storage management faults: unassigned pointers; pointer arithmetic.
Automated Static Analysis
Example: Linter Static code Analyzer Tool
138% more lint_ex.c
#include <stdio.h>
printarray (Anarray)
int Anarray;
{ printf("%d", Anarray); }

main ()
{
int Anarray[5]; int i; char c;
printarray (Anarray, i, c);
printarray (Anarray);
}

139% cc lint_ex.c
140% lint lint_ex.c
lint_ex.c(10): warning: c may be used before set
lint_ex.c(10): warning: i may be used before set
printarray: variable # of args. lint_ex.c(4) :: lint_ex.c(10)
printarray, arg. 1 used inconsistently lint_ex.c(4) :: lint_ex.c(10)
printarray, arg. 1 used inconsistently lint_ex.c(4) :: lint_ex.c(11)
printf returns value which is always ignored

(The program's deliberate faults - uninitialised i and c passed as extra arguments, and a parameter used inconsistently - are exactly what lint reports, even though cc compiles the file without complaint.)
Automated Static Analysis
Use of Static Analysis
→Particularly valuable for a language such as C, which has weak typing, so many errors go undetected by the compiler.
→Less cost-effective for languages like Java that have strong type checking and can therefore detect many errors during compilation.
Verification and Validation
Planning verification and validation

Software inspections

Automated static analysis

Verification and formal methods


Verification and formal methods
→Formal methods can be used when a mathematical specification of the system is
produced.
→They are the ultimate static verification technique.

→They involve detailed mathematical analysis of the specification and may develop
formal arguments that a program conforms to its mathematical specification.
Verification and formal methods
Arguments for formal methods
→Producing a mathematical specification requires a detailed analysis of the requirements
and this is likely to uncover errors.
→They can detect implementation errors before testing when the program is analysed
alongside the specification.
Verification and formal methods
Arguments against formal methods
→Require specialised notations that cannot be understood by domain experts.

→Very expensive to develop a specification and even more expensive to show that a
program meets that specification.
→It may be possible to reach the same level of confidence in a program more cheaply
using other V & V techniques.
Verification and formal methods
Cleanroom software development
“The name is derived from the 'Cleanroom' process in semiconductor
fabrication. The philosophy is defect avoidance rather than defect removal.”
This software development process is based on:
→Incremental development;

→Formal specification;

→Static verification using correctness arguments;

→Statistical testing to determine program reliability;


Verification and formal methods
Cleanroom Process

[Figure: The Cleanroom process. Formally specify system → define software increments → construct structured program → formally verify code → integrate increment, with error rework feeding back; in parallel, develop an operational profile → design statistical tests → test the integrated system.]
Verification and formal methods
Cleanroom process characteristics
→Formal specification using a state transition model.

→Incremental development where the customer prioritises increments.

→Structured programming - limited control and abstraction constructs are used in the program.

→Static verification using rigorous inspections.

→Statistical testing of the system


Verification and formal methods
Formal specification and inspections
→The state based model is a system specification and the inspection process checks the
program against this model
→The programming approach is defined so that the correspondence between the model
and the system is clear.
→Mathematical arguments (not proofs) are used to increase confidence in the inspection
process.
Verification and formal methods
Cleanroom process teams
→Specification team. Responsible for developing and maintaining the system specification.

→Development team. Responsible for developing and verifying the software. The software is NOT
executed or even compiled during this process.

→Certification team. Responsible for developing a set of statistical tests to exercise the software after
development. Reliability growth models used to determine when reliability is acceptable.
Verification and formal methods
Cleanroom process evaluation
→The results of using the Cleanroom process have been very impressive with few discovered faults in
delivered systems.

→Independent assessment shows that the process is no more expensive than other approaches.

→There were fewer errors than in a 'traditional' development process.

→However, the process is not widely used. It is not clear how this approach can be transferred to an
environment with less skilled or less motivated software engineers.
Key points
• Verification and validation are not the same thing.
• Verification shows conformance with specification;
• validation shows that the program meets the customer’s needs.
• Test plans should be drawn up to guide the testing process.
• Static verification techniques involve examination and analysis of the
program for error detection.
Key points
• Program inspections are very effective in discovering errors.
• Program code in inspections is systematically checked by a small team to locate
software faults.
• Static analysis tools can discover program anomalies which may be an indication of
faults in the code.
• The Cleanroom development process depends on incremental development, static
verification and statistical testing.
Unit 5 (Quality and Maintenance)
QUALITY & MAINTENANCE
- Software evolution (21)
- Verification and Validation (23)
- Critical Systems Validation (24)
- Metrics for Process, Project and Product
- Quality Management (27)
- Process Improvement (28)
- Risk Management, Configuration Management (29)
- Software Cost Estimation (26)
Critical Systems Validation
Objective
→To explain how system reliability can be measured and how reliability
growth models can be used for reliability prediction
→To describe safety arguments and how these are used
→To discuss the problems of safety assurance
→To introduce safety cases and how these are used in safety validation
Topics covered
Reliability validation
Safety assurance
Security assessment
Safety and dependability cases
Validation of critical systems
Verification and validation of critical systems involves additional validation processes and analysis compared with non-critical systems:
→ Cost of failure: much greater than for non-critical systems, which justifies spending more on system verification and validation.
→ Validation of dependability attributes: you may have to make a formal case to customers or to a regulator that the system meets its dependability requirements.
→ This dependability case may require specific V & V activities to be carried out.
The reliability measurement process
Statistical Testing

[Figure: Identify operational profiles → prepare test data set → apply tests to system → compute observed reliability.]
Reliability validation activities
→Establish the operational profile for the system.

→Construct test data reflecting the operational profile.

→Test the system and observe the number of failures and the times of these
failures.

→Compute the reliability after a statistically significant number of failures have been observed. (A minimal computation sketch follows.)
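A minimal sketch of that final step, assuming the simplest possible estimator (the mean of the observed inter-failure times); the failure data below are invented for illustration.

#include <stdio.h>

int main(void)
{
    /* Invented inter-failure times (hours) observed during statistical
       testing against the operational profile. */
    double inter_failure_hours[] = { 12.0, 30.0, 25.0, 48.0, 35.0 };
    int n = sizeof inter_failure_hours / sizeof inter_failure_hours[0];

    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += inter_failure_hours[i];

    double mttf  = total / n;    /* mean time to failure            */
    double rocof = 1.0 / mttf;   /* rate of occurrence of failures  */

    printf("MTTF  = %.1f hours\n", mttf);            /* 30.0 hours  */
    printf("ROCOF = %.4f failures/hour\n", rocof);
    return 0;
}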
Statistical testing
→Testing software for reliability rather than fault detection.

→Measuring the number of errors allows the reliability of the software to be predicted. Note that, for statistical reasons, more errors than are allowed for in the reliability specification must be induced.
→An acceptable level of reliability should be specified and the software tested
and amended until that level of reliability is reached.
Reliability measurement problems
→Operational profile uncertainty
→The operational profile may not be an accurate reflection of the real use of the
system.

→High costs of test data generation


→Costs can be very high if the test data for the system cannot be generated
automatically.

→Statistical uncertainty
→You need a statistically significant number of failures to compute the reliability but
highly reliable systems will rarely fail.
Operational profiles
→An operational profile is a set of test data whose frequency matches the
actual frequency of these inputs from ‘normal’ usage of the system.
→A close match with actual usage is necessary otherwise the measured
reliability will not be reflected in the actual usage of the system.
→It can be generated from real data collected from an existing system or (more
often) depends on assumptions made about the pattern of usage of a system.
Operational profile generation
→Should be generated automatically whenever possible.

→Automatic profile generation is difficult for interactive systems.

→May be straightforward for 'normal' inputs but it is difficult to predict 'unlikely' inputs and to create test data for them.
Reliability prediction
→ A reliability growth model is a mathematical model of the system
reliability change as it is tested and faults are removed.
→ It is used as a means of reliability prediction by inferring from current
data
→Simplifies test planning and customer negotiations.
→You can predict when testing will be completed and demonstrate to
customers whether or not the reliability growth will ever be achieved.
→ Prediction depends on the use of statistical testing to measure the reliability of a system version. (A sketch of one such model follows.)
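As one concrete, simplified example of such a model (an assumption on my part, since the slides do not name a specific one), the sketch below uses the Musa basic execution-time model to predict how much more testing is needed to reach a target failure intensity; all parameter values are invented.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double lambda0 = 10.0;   /* initial failure intensity (failures/CPU-hour) */
    double nu0     = 100.0;  /* total failures expected over the system life  */
    double lambdaP = 3.9;    /* present measured intensity                    */
    double lambdaF = 0.5;    /* required (target) intensity                   */

    /* Model: lambda(tau) = lambda0 * exp(-(lambda0/nu0) * tau), so the
       extra test time between two intensity levels is
       delta_tau = (nu0/lambda0) * ln(lambdaP/lambdaF). */
    double delta_tau = (nu0 / lambda0) * log(lambdaP / lambdaF);

    printf("Additional test time: %.1f CPU-hours\n", delta_tau); /* ~20.5 */
    return 0;
}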
Growth model selection
→Many different reliability growth models have been proposed.

→There is no universally applicable growth model.

→Reliability should be measured and observed data should be fitted to several models.

→The best-fit model can then be used for reliability prediction.


Topics covered
Reliability validation
Safety assurance
Security assessment
Safety and dependability cases
Safety assurance
Safety assurance and reliability measurement are quite different:
→Within the limits of measurement error, you know whether or not a
required level of reliability has been achieved;
→However, quantitative measurement of safety is impossible. Safety
assurance is concerned with establishing a confidence level in the system.
Safety confidence
Confidence in the safety of a system can vary from very low to
very high.
Confidence is developed through:
→Past experience with the company developing the software;
→The use of dependable processes and process activities geared to safety;
→Extensive V & V including both static and dynamic validation techniques.
Safety reviews
• Review for correct intended function.
• Review for maintainable, understandable structure.
• Review to verify algorithm and data structure design against
specification.
• Review to check code consistency with algorithm and data
structure design.
• Review adequacy of system testing.
Review guidance
• Make software as simple as possible.
• Use simple techniques for software development avoiding error-
prone constructs such as pointers and recursion.
• Use information hiding to localise the effect of any data
corruption.
• Make appropriate use of fault-tolerant techniques but do not be
seduced into thinking that fault-tolerant software is necessarily
safe.
Safety arguments
Safety arguments are intended to show that the system cannot reach an unsafe state.
These are weaker than correctness arguments which must show that the system
code conforms to its specification.
They are generally based on proof by contradiction
→Assume that an unsafe state can be reached;

→Show that this is contradicted by the program code.


A graphical model of the safety argument may be developed.
Construction of a safety argument
• Establish the safe exit conditions for a component or a program.
• Starting from the END of the code, work backwards until you
have identified all paths that lead to the exit of the code.
• Assume that the exit condition is false.
• Show that, for each path leading to the exit, the assignments made on that path contradict the assumption of an unsafe exit from the component. (A hypothetical fragment follows.)
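A hypothetical C fragment (added for illustration, not from the slides) showing the idea: the unsafe state is a dose above MAX_DOSE, and working backwards from the exit, every path contradicts that assumption, so the argument by contradiction succeeds.

/* Unsafe exit condition to refute: dose > MAX_DOSE at return. */
#define MAX_DOSE 4

int compute_dose(int computed)
{
    int dose = 0;                 /* path 1: dose == 0 <= MAX_DOSE        */
    if (computed < 0) {
        return dose;              /* safe exit: dose is still 0           */
    } else if (computed > MAX_DOSE) {
        dose = MAX_DOSE;          /* path 2: dose == MAX_DOSE             */
    } else {
        dose = computed;          /* path 3: 0 <= dose <= MAX_DOSE        */
    }
    return dose;                  /* every path gives dose <= MAX_DOSE,
                                     contradicting the unsafe assumption  */
}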
Safety related process activities
Creation of a hazard logging and monitoring system.
Appointment of project safety engineers.
Extensive use of safety reviews.
Creation of a safety certification system.
Detailed configuration management
Hazard analysis
Hazard analysis involves identifying hazards and their root causes.
There should be clear traceability from identified hazards through their analysis to the actions taken during the process to ensure that these hazards have been covered.
A hazard log may be used to track hazards throughout the process.
Run-time safety checking
During program execution, safety checks can be incorporated as statements to
check that the program is executing within a safe operating ‘envelope’.
Statements can be included as comments (or using an assert statement in some
languages). Code can be generated automatically to check these assertions.
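A minimal sketch of such a run-time check in C, using the standard assert macro; the dose-limit scenario is an invented example of a safe operating envelope.

#include <assert.h>
#include <stdio.h>

#define MAX_DOSE 4

void administer(int dose)
{
    /* Safe operating envelope: abort rather than deliver an unsafe dose. */
    assert(dose >= 0 && dose <= MAX_DOSE);
    printf("delivering dose %d\n", dose);
}

int main(void)
{
    administer(3);   /* within the envelope: check passes                */
    administer(9);   /* outside the envelope: the assertion aborts here  */
    return 0;
}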
Security assessment
Security assessment has something in common with safety assessment.
It is intended to demonstrate that the system cannot enter some state (an unsafe or an
insecure state) rather than to demonstrate that the system can do something.
However, there are differences
→Safety problems are accidental; security problems are deliberate;
→Security problems are more generic - many systems suffer from the same problems; Safety problems are
mostly related to the application domain
Security validation
Experience-based validation
→The system is reviewed and analysed against the types of attack that are known to the validation team.

Tool-based validation
→Various security tools such as password checkers are used to analyse the system in operation.

Tiger teams
→A team is established whose goal is to breach the security of the system by simulating attacks on the
system.

Formal verification
→The system is verified against a formal security specification.
Security checklist
1. Do all files that are created in the application have appropriate access permissions? The
wrong access permissions may lead to these files being accessed by unauthorised users.
2. Does the system automatically terminate user sessions after a period of inactivity?
Sessions that are left active may allow unauthorised access through an unattended
computer.
3. If the system is written in a programming language without array bound checking, are there situations where buffer overflow may be exploited? Buffer overflow may allow attackers to send code strings to the system and then execute them. (See the sketch after this list.)
4. If passwords are set, does the system check that passwords are 'strong'? Strong passwords consist of mixed letters, numbers and punctuation and are not normal dictionary entries. They are more difficult to break than simple passwords.
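Illustrating checklist item 3 with a hedged C sketch: gets() cannot bound its input and was removed from the C11 standard for exactly this reason, while fgets() limits the read to the buffer size.

#include <stdio.h>

int main(void)
{
    char password[16];

    /* Unsafe (checklist item 3): gets(password) writes past the buffer on
       long input - the classic exploitable overflow. */

    /* Safer: bound the read to the buffer size. */
    if (fgets(password, sizeof password, stdin) != NULL)
        printf("read: %s", password);
    return 0;
}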
Unit -5
QUALITY & MAINTENANCE
- Software evolution (21)
- Verification and Validation (23)
- Critical Systems Validation (24)
- Metrics for Process, Project and Product
- Quality Management (27)
- Process Improvement (28)
- Risk Management, Configuration Management (29)
- Software Cost Estimation (26)
Metrics for Process, Project and Product
Metrics: Metrics are measures of quantitative assessment.
Metrics have been used in accounting, operations, and performance analysis throughout history.
Metrics for Process, Project and Product
Which Software Metrics to Choose, and Why?
Choose an available or known set of metrics, then add more as and when anything new is discovered.
(Example: a consulting doctor starts with broad measures such as temperature, blood pressure and pulse rate, then orders disease-specific diagnostics such as X-ray, MRI or glucose tests.)
- Broad: resource utilization aspects, performance, ...
- Specific: delay for broadcasting or live streaming; quality for a re-telecast.
Metrics for Process, Project and Product
→In the case of software development, different stakeholders have different sets of goals.
(Outcome)
→So a Project Manager may have a completely different set of goals compared to the VP
Engineering from a software project. (For instance, your goal may be to “reduce total
cost of testing efforts.”)
Metrics for Process, Project and Product
In order to satisfy this goal, you may need to answer the following questions:
→Which functional areas have the most defects?

→How long does it take to repair the defects?

→What percentage of regression tests are automated?

→What is the code coverage with automated tests?


Metrics for Process, Project and Product
Metrics for Process, Project and Product
Software metrics can be classified into three categories −
Product −Describes the characteristics of the product such as size, complexity,
design features, performance, and quality level.
Process −These characteristics can be used to improve the development and
maintenance activities of the software.
Project −These metrics describe the project characteristics and execution.
(Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity.)
Metrics for Process, Project and Product
Software quality metrics
→ It is a subset of software metrics that focus on the quality aspects of the product, process, and
project.
→ These are more closely associated with process and product metrics than with project metrics.
Three categories
→Product quality metrics

→In-process quality metrics

→Maintenance quality metrics


Metrics for Process, Project and Product
Product Quality Metrics
• Product quality metrics measure the excellence of a product and its features.
• They measure the “goodness” inherent in the product, apart from how the product was developed.
These metrics include the following −
Functionality − Does the product work correctly? (Failure rate)
Stability − Does the product work reliably? (Uptime)
Performance − Does the product work optimally? (Processor usage, memory usage, response time, throughput)
Complexity − Is the software code unnecessarily complicated? (Lines of code, cyclomatic complexity, depth of inheritance)
Satisfaction − Does the product satisfy the end user? (Customer satisfaction)
Metrics for Process, Project and Product
In-process Quality Metrics
In-process quality metrics deal with the tracking of defect arrivals during formal machine testing (in some organizations).
These metrics include
→Defect density during machine testing

→Defect arrival pattern during machine testing

→Phase-based defect removal pattern

→Defect removal effectiveness


Metrics for Process, Project and Product
Defect density during machine testing
• Defect rate during formal machine testing (testing after code is integrated into the
system library) is correlated with the defect rate in the field.
• This simple metric of defects per KLOC or function point is a good indicator of quality while the software is still being tested. (A worked example follows.)
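A worked example with invented numbers, showing how the metric is computed:

#include <stdio.h>

int main(void)
{
    int defects = 96;      /* defects found during machine testing */
    int loc     = 64000;   /* size of the tested system, in lines  */

    /* Defect density = defects / thousands of lines of code. */
    double density = defects / (loc / 1000.0);
    printf("%.1f defects per KLOC\n", density);   /* 1.5 defects/KLOC */
    return 0;
}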
Metrics for Process, Project and Product
Defect arrival pattern during machine testing
• The overall defect density during testing will provide only the summary of the
defects.
• The pattern of defect arrivals gives more information about different quality levels in
the field.
Metrics for Process, Project and Product
Represented by:
• The defect arrivals or defects reported during the testing phase by time interval (e.g., week); not all of these will be valid defects.
• The pattern of valid defect arrivals when problem determination is done on the reported problems. This is the true defect pattern.
• The pattern of the defect backlog over time.
Metrics for Process, Project and Product
Phase-based defect removal pattern
• This is an extension of the defect density metric during testing.
• In addition to testing, it tracks the defects at all phases of the development cycle,
including the design reviews, code inspections, and formal verifications before testing.
• Because a large percentage of programming defects is related to design problems, conducting formal reviews or functional verifications to enhance the defect removal capability of the process at the front end reduces errors in the software.
Metrics for Process, Project and Product
Defect removal effectiveness
• This metric can be calculated for the entire development process, for the front-end
before code integration and for each phase.
• It is called early defect removal when used for the front end, and phase effectiveness for specific phases. (A worked sketch follows.)
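A worked sketch with invented counts, assuming the usual definition: defects removed in a phase divided by the total defects present at that phase (those removed plus those that escaped and were found later).

#include <stdio.h>

int main(void)
{
    int removed_in_phase = 80;   /* e.g. found by code inspection       */
    int found_later      = 20;   /* escaped to testing or to the field  */

    double dre = 100.0 * removed_in_phase / (removed_in_phase + found_later);
    printf("Defect removal effectiveness: %.0f%%\n", dre);   /* 80% */
    return 0;
}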
Metrics for Process, Project and Product
Maintenance Quality Metrics
• When development of a software product is complete and it is released to the market, the maintenance phase of its life cycle begins.
• In this phase, defect arrivals by time interval and customer problem calls (whether or not they are defects) over a time interval are de facto metrics.
→Fix backlog and backlog management index

→Fix response time and fix responsiveness

→Percent delinquent fixes

→Fix quality
Metrics for Process, Project and Product
Fix backlog and backlog management index
• Fix backlog is related to the rate of defect arrivals and the rate at which fixes for
reported problems become available.
• It is a simple count of reported problems that remain at the end of each month or
each week.
• Backlog management index (BMI) - a sketch of its computation follows.
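A sketch of the BMI computation, assuming the common definition (problems closed in a period as a percentage of problems arriving in the same period; a BMI over 100 means the backlog shrank). The counts are invented.

#include <stdio.h>

int main(void)
{
    int arrived = 40;    /* problems reported this month */
    int closed  = 50;    /* fixes delivered this month   */

    double bmi = 100.0 * closed / arrived;
    printf("BMI = %.0f\n", bmi);   /* 125: the backlog is being worked down */
    return 0;
}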
Metrics for Process, Project and Product
Fix response time and fix responsiveness
• The fix response time metric is usually calculated as the mean time of all problems
from open to close. Short fix response time leads to customer satisfaction.
• The important elements of fix responsiveness are customer expectations, the agreed-
to fix time, and the ability to meet one's commitment to the customer.
Metrics for Process, Project and Product
Percent Delinquent Fixes
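The slide gives no formula, so as an assumption based on common usage: percent delinquent fixes is the percentage of fixes delivered in a period that exceeded the agreed fix response time. A sketch with invented numbers:

#include <stdio.h>

int main(void)
{
    int delivered  = 200;  /* fixes delivered in the period            */
    int delinquent = 14;   /* fixes that exceeded the agreed fix time  */

    printf("Percent delinquent fixes = %.1f%%\n",
           100.0 * delinquent / delivered);   /* 7.0% */
    return 0;
}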
Metrics for Process, Project and Product
Fix Quality
• Fix quality or the number of defective fixes is another important quality metric for the
maintenance phase.
• A fix is defective if it did not fix the reported problem, or if it fixed the original problem but
injected a new defect.
• For mission-critical software, defective fixes are detrimental to customer satisfaction.
• The metric of percent defective fixes is the percentage of all fixes in a time interval that is
defective.
