Software Engineering Unit 5 .......... 13-August-2021
• Organizations have huge investments in their software systems - they are critical
business assets.
• To maintain the value of these assets to the business, they must be changed and
updated.
→ Lehman and Belady proposed that there were a number of ‘laws’ which
applied to all systems as they evolved.
→ These are sensible observations rather than laws. They are applicable to large
systems developed by large organizations, and perhaps less applicable in other
cases.
Software Evolution
Program Evolution Dynamics (Cont.)
→ Continuing Change
→ Increasing Complexity
→ Large Program Evolution
→ Organizational Stability
→ Conservation of Familiarity
→ Continuing Growth
→ Declining Quality
→ Feedback System
The majority of organizations, during their evolutionary process, come to exhibit these laws.
Software Evolution
First Law (Continuing Change)
→ System maintenance is an inevitable process: a program used in a real-world environment must change, or it will become progressively less useful in that environment.
Software Evolution
Third Law (Large Program Evolution)
→ System attributes such as size, time between releases and the number of
reported errors are approximately invariant for each system release.
Software Evolution
Fourth Law (Organizational Stability)
→ Over a program’s lifetime, its rate of development is approximately
constant and independent of the resources devoted to system development.
→This law confirms that large software development teams are often
unproductive because communication overheads dominate the work of the
team.
Software Evolution
Fifth Law (Conservation of Familiarity)
→ Over the lifetime of a system, the incremental change in each release is
approximately constant.
→ Avoid delivering large functionality increments in a single release.
Maintenance cost factors
→ Staff skills: maintenance staff are often relatively inexperienced and unfamiliar with the
application domain.
→ Program age and structure: as programs age, their structure degrades and they become harder
to understand and change (motivating software re-engineering).
Software Evolution
Maintenance Prediction
→ Maintenance prediction assesses which parts of the system may cause problems and have high maintenance costs.
→ Studies have shown that most maintenance effort is spent on a relatively small number of system
components.
→ Complexity depends on the complexity of control structures, the complexity of data structures, and object, method and procedure size.
→ If any or all of these are increasing, this may indicate a decline in maintainability.
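As a rough illustration of the control-structure component of this measure, a complexity score can be approximated by counting decision points in the source text. This is a hypothetical sketch, not a real parser; the keyword list and function names are invented for illustration.

```c
#include <string.h>

/* Count non-overlapping occurrences of `pat` in `src`. */
static int count_occurrences(const char *src, const char *pat) {
    int n = 0;
    const char *p = src;
    while ((p = strstr(p, pat)) != NULL) {
        n++;
        p += strlen(pat);
    }
    return n;
}

/* Crude cyclomatic-complexity estimate: 1 + number of decision points.
   A textual approximation only - a real tool would parse the code. */
int estimate_complexity(const char *src) {
    const char *decisions[] = { "if (", "while (", "for (", "case ", "&&", "||" };
    int total = 1;
    for (size_t i = 0; i < sizeof decisions / sizeof decisions[0]; i++)
        total += count_occurrences(src, decisions[i]);
    return total;
}
```

Tracking such a score release by release gives a concrete signal for the "if any of these are increasing" test above.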
Evolution processes
Evolution processes vary depending on:
→ The type of software being maintained;
→ The development processes used;
→ The skills and experience of the people involved.
→ Proposals for system change are the driver for system evolution.
→ Sometimes business changes require a very rapid response (e.g. the release of a competing
product).
Evolution processes
Re-Engineering
→ Re-structuring or re-writing part or all of a legacy system without changing its
functionality.
→Applicable where some but not all sub-systems of a larger system require frequent
maintenance.
→ Re-engineering involves additional effort to make systems easier to maintain. The system
may be re-structured and re-documented.
Evolution processes
Advantages of reengineering
→ Reduced risk: there is a high risk in developing a new system from scratch.
→ Reduced cost: the cost of re-engineering is often significantly less than the cost of
developing new software.
Re-Engineering Process
Evolution processes
Activities in the reengineering process
● Source code translation: convert code to a new language.
● Reverse engineering: analyse the program to understand it.
● Program structure improvement: the control structure of the program is analysed and modified to make it easier to read and understand.
● Program modularisation: reorganise the program structure; related parts are grouped together and redundancy is removed.
● Data reengineering: clean up and restructure the system data.
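Program structure improvement can be illustrated with a small sketch: the commented-out fragment shows legacy goto-based control flow, and the function below expresses the same logic as a structured, single-entry single-exit loop. The example is hypothetical, not taken from any particular legacy system.

```c
/* Before improvement (sketch of legacy, goto-based control flow):
 *
 *     i = 0;
 *   loop: if (i >= n) goto done;
 *     sum = sum + a[i];
 *     i = i + 1;
 *     goto loop;
 *   done: ...
 *
 * After improvement: the same behaviour with a structured loop. */
int sum_array(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)   /* single entry, single exit */
        sum += a[i];
    return sum;
}
```

The behaviour is unchanged; only the control structure is improved, which is exactly the point of this reengineering activity.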
Re-Engineering Approaches (Cost vs Re-engineering)
Evolution processes
Reengineering cost factors
● The quality of the software to be reengineered.
● The tool support available for reengineering.
● The extent of the data conversion which is required.
● The availability of expert staff for reengineering. This can be a problem with old systems based on technology that is no longer widely used.
Legacy System Evolution
● Organizations that rely on legacy systems must choose a strategy for
evolving these systems:
● Scrap the system completely – when it is not making an effective contribution to the business.
● Leave the system unchanged and continue with regular maintenance – when it is stable and few changes are requested.
● Re-engineer the system to improve its maintainability – when quality has degraded and regular changes are still required.
● Replace all or part of the system with a new system – when, for example, a hardware change makes the old system unsupportable.
System quality and business value
Legacy System Evolution
● Four categories:
● Low quality, low business value – these systems should be scrapped.
● Low quality, high business value – should be re-engineered, or replaced if a suitable system is available.
● High quality, low business value – maintain, and later scrap completely.
● High quality, high business value – continue in operation using normal system maintenance.
Legacy System Evolution
● Assessment should take different viewpoints into account:
● System end-users
● Business customers
● Line managers
● IT managers
● Senior managers
● Interview different stakeholders and collate results.
Legacy System Evolution
● System Quality Assessment
● Business process assessment
● How well does the business process support the current goals of the
business?
● Environment assessment
● How effective is the system’s environment and how expensive is it to
maintain?
● Application assessment
● What is the quality of the application software system?
Verification and Validation
Verification and validation planning
Software inspections
Automated static analysis
Cleanroom software development
Verification and Validation
Verification:
"Are we building the product right”.
The software should conform to its specification.
Validation:
"Are we building the right product”.
The software should do what the user really requires.
Verification and Validation
V & V process
• Is a whole life-cycle process - V & V must be applied at each stage in the
software process.
• Has two principal objectives
→The discovery of defects in a system;
→The assessment of whether or not the system is useful and useable in an operational
situation.
Verification and Validation
V & V Goals
• Verification and validation should establish confidence that the software is
fit for purpose.
• This does NOT mean completely free of defects.
• Rather, it must be good enough for its intended use and the type of use will
determine the degree of confidence that is needed.
Verification and Validation
When and Where ?
→Depends on system’s purpose, user expectations and marketing environment
→Software function
→The level of confidence depends on how critical the software is to an organisation.
→User expectations
→Users may have low expectations of certain kinds of software.
→Marketing environment
→Getting a product to market early may be more important than finding defects in the program.
Verification and Validation
Static vs Dynamic Verification
• Software inspections
Concerned with analysis of the static system representation to discover problems (static
verification)
→ May be supplemented by tool-based document and code analysis
• Software testing
Concerned with exercising and observing product behaviour (dynamic verification)
→The system is executed with test data and its operational behaviour is observed
Verification and Validation
Software inspections analyse the specification, design and program; program testing exercises a prototype or the executable program.
Verification and Validation
Program testing
→Can reveal the presence of errors NOT their absence.
→The only validation technique for non-functional requirements as the software has to
be executed to see how it behaves.
→Should be used in conjunction with static verification to provide full V&V coverage.
Verification and Validation
Types of testing
Defect testing
→Tests designed to discover system defects.
→A successful defect test is one which reveals the presence of defects in a system.
→Covered in Chapter 23
Validation testing
→Intended to show that the software meets its requirements.
→ A successful test is one that shows that a requirement has been properly implemented.
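A minimal sketch of defect testing: `clamp_buggy` below contains a deliberately injected fault, and a boundary-value test reveals it. The function and the fault are invented for illustration.

```c
/* clamp_buggy contains a DELIBERATE copy-paste defect for illustration:
   values below the range are clamped to hi instead of lo. */
int clamp_buggy(int x, int lo, int hi) {
    if (x > hi) return hi;
    if (x < lo) return hi;   /* DEFECT: should return lo */
    return x;
}

/* A successful defect test reveals the presence of a defect.
   A boundary-value input below the range exposes this fault. */
int defect_test_reveals_fault(void) {
    return clamp_buggy(-5, 0, 10) != 0;  /* expected 0; buggy code returns 10 */
}
```

Note that an in-range input such as `clamp_buggy(5, 0, 10)` passes; only the deliberately chosen boundary case makes the defect visible, which is why defect tests target such cases.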
Testing and debugging
→Verification and validation is concerned with establishing the existence of defects in a program.
→ Debugging involves formulating hypotheses about program behaviour and then testing these hypotheses
to find the system error.
Verification and Validation
The debugging process
Debugging uses the test results, the test cases and the specification to locate the error, design and apply a repair, and then re-test the program.
Planning verification and validation
→ The plan should identify the balance between static verification and testing.
→ Test planning is about defining standards for the testing process rather than describing
product tests.
The V-model of development
→Requirements traceability
→Users are most interested in the system meeting its requirements and testing should be planned so that all
requirements are individually tested.
→ Tested items.
→The products of the software process that are to be tested should be specified.
Planning verification and validation
The structure of a software test plan
• Testing schedule
→An overall testing schedule and resource allocation for this schedule. This, obviously, is linked to the more general project
development schedule.
• Test recording procedures
→ It is not enough simply to run tests. The results of the tests must be systematically recorded. It must be possible to audit the testing
process to check that it has been carried out correctly.
• Hardware and software requirements
→This section should set out software tools required and estimated hardware utilisation.
• Constraints
→Constraints affecting the testing process such as staff shortages should be anticipated in this section.
Verification and Validation
Planning verification and validation
Software inspections
Planning → Overview → Individual preparation → Inspection meeting → Rework → Follow-up
Program inspections
→ System overview presented to inspection team.
→ Inspector: Finds errors, omissions and inconsistencies in programs and documents. May also identify broader issues
that are outside the scope of the inspection team.
→ Chairman or moderator: Manages the process and facilitates the inspection. Reports process results to the Chief
moderator.
→ Chief moderator: Responsible for inspection process improvements, checklist updating, standards development etc.
Program inspections
Inspection Checklist
→ A checklist of common programmer errors is often used to focus the discussion.
→ Error checklists are programming language dependent and reflect the characteristic
errors that are likely to arise in the language.
→ Timing: Depends on the experience of the inspection team, the programming
language and the application domain.
Program inspections
Example : Inspection Checklist
Data faults:
  Are all program variables initialised before their values are used?
  Have all constants been named?
  Should the upper bound of arrays be equal to the size of the array, or size - 1?
  If character strings are used, is a delimiter explicitly assigned?
  Is there any possibility of buffer overflow?
Control faults:
  For each conditional statement, is the condition correct?
  Is each loop certain to terminate?
  Are compound statements correctly bracketed?
  In case statements, are all possible cases accounted for?
  If a break is required after each case in case statements, has it been included?
Input/output faults:
  Are all input variables used?
  Are all output variables assigned a value before they are output?
  Can unexpected inputs cause corruption?
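The checklist can be made concrete with a small C fragment written to satisfy it: named constants, variables initialised before use, a bounded loop with an explicitly assigned string delimiter, and a case statement with a break for every case. The functions are hypothetical examples, not part of any inspected system.

```c
#include <string.h>   /* for size_t */

#define BUF_SIZE 16   /* checklist: constants are named */

/* Checklist-compliant case statement: every case accounted for,
   each case terminated with break, variable initialised before use. */
int grade_points(char grade) {
    int points = 0;                 /* initialised before use */
    switch (grade) {
    case 'A': points = 4; break;    /* break after each case */
    case 'B': points = 3; break;
    case 'C': points = 2; break;
    default:  points = 0; break;    /* all other cases handled */
    }
    return points;
}

/* Checklist-compliant copy into a BUF_SIZE-byte destination:
   no buffer overflow, loop certain to terminate, and the string
   delimiter '\0' explicitly assigned. */
int safe_copy(char *dst, const char *src) {
    size_t i = 0;
    while (i < BUF_SIZE - 1 && src[i] != '\0') {  /* bounded loop */
        dst[i] = src[i];
        i++;
    }
    dst[i] = '\0';   /* delimiter explicitly assigned */
    return (int)i;
}
```

An inspector walking this code against the checklist would find each question answerable with "yes", which is the state the rework step aims to reach.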
Program inspections
Inspection Rate
→ About 500 statements/hour can be covered during the overview.
→ Inspecting 500 lines costs about 40 person-hours of effort - about 2,24,000 Rupees.
Verification and Validation
Planning verification and validation
Software inspections
→ Static analysers detect whether statements are well formed, make inferences about the control flow in
the program and, in many cases, compute the set of all possible values for program data.
→ The intention of automated static analysis is to draw an inspector's attention to anomalies
in the program, such as variables used without initialization, anomalous loop ranges, and
unused variables.
Automated Static Analysis
Stages of static analysis
→ Control flow analysis: checks loops with multiple exit or entry points, and finds unreachable code.
→ Interface analysis: checks the consistency of routine and procedure declarations and their use.
→ Path analysis: identifies paths through the program and sets out the statements executed in each path.
Static analysis checks
Data faults:
  Variables used before initialisation
  Variables declared but never used
  Variables assigned twice but never used between assignments
  Possible array bound violations
  Undeclared variables
Control faults:
  Unreachable code
  Unconditional branches into loops
Input/output faults:
  Variables output twice with no intervening assignment
Interface faults:
  Parameter type mismatches
  Parameter number mismatches
  Non-usage of the results of functions
  Uncalled functions and procedures
#include <stdio.h>
printarray (Anarray)
int Anarray;
{ printf("%d",Anarray); }
main ()
{
int Anarray[5]; int i; char c;
printarray (Anarray, i, c);
printarray (Anarray) ;
}
139% cc lint_ex.c
140% lint lint_ex.c
lint_ex.c(10): warning: c may be used before set
lint_ex.c(10): warning: i may be used before set
printarray: variable # of args. lint_ex.c(4) :: lint_ex.c(10)
printarray, arg. 1 used inconsistently lint_ex.c(4) :: lint_ex.c(10)
printarray, arg. 1 used inconsistently lint_ex.c(4) :: lint_ex.c(11)
printf returns value which is always ignored
Automated Static Analysis
Use of Static Analysis
→ Particularly valuable when a language such as C is used, which has weak typing and hence many errors go undetected by the
compiler.
→ Less cost-effective for languages like Java that have strong type checking and can therefore detect many errors during compilation.
Verification and Validation
Planning verification and validation
Software inspections
→ Formal methods involve detailed mathematical analysis of the specification and may develop
formal arguments that a program conforms to its mathematical specification.
Verification and formal methods
Arguments for formal methods
→Producing a mathematical specification requires a detailed analysis of the requirements
and this is likely to uncover errors.
→They can detect implementation errors before testing when the program is analysed
alongside the specification.
Verification and formal methods
Arguments against formal methods
→Require specialised notations that cannot be understood by domain experts.
→Very expensive to develop a specification and even more expensive to show that a
program meets that specification.
→It may be possible to reach the same level of confidence in a program more cheaply
using other V & V techniques.
Verification and formal methods
Cleanroom software development
“The name is derived from the 'Cleanroom' process in semiconductor
fabrication. The philosophy is defect avoidance rather than defect removal.”
This software development process is based on:
→ Incremental development;
→ Formal specification;
→ Static verification using correctness arguments;
→ Statistical testing to determine program reliability.
Develop operational profile → Design statistical tests → Test integrated system
Verification and formal methods
Cleanroom process characteristics
→Formal specification using a state transition model.
→Structured programming - limited control and abstraction constructs are used in the program.
→Development team. Responsible for developing and verifying the software. The software is NOT
executed or even compiled during this process.
→Certification team. Responsible for developing a set of statistical tests to exercise the software after
development. Reliability growth models used to determine when reliability is acceptable.
Verification and formal methods
Cleanroom process evaluation
→The results of using the Cleanroom process have been very impressive with few discovered faults in
delivered systems.
→Independent assessment shows that the process is no more expensive than other approaches.
→However, the process is not widely used. It is not clear how this approach can be transferred to an
environment with less skilled or less motivated software engineers.
Key points
• Verification and validation are not the same thing.
• Verification shows conformance with the specification; validation shows that the program meets the customer’s needs.
• Test plans should be drawn up to guide the testing process.
• Static verification techniques involve examination and analysis of the
program for error detection.
Key points
• Program inspections are very effective in discovering errors.
• Program code in inspections is systematically checked by a small team to locate
software faults.
• Static analysis tools can discover program anomalies which may be an indication of
faults in the code.
• The Cleanroom development process depends on incremental development, static
verification and statistical testing.
Unit 5 (Quality and Maintenance)
QUALITY & MAINTENANCE
- Software evolution (21)
- Verification and Validation (23)
- Critical Systems Validation (24)
- Metrics for Process, Project and Product
- Quality Management (27)
- Process Improvement (28)
- Risk Management, Configuration Management (29)
- Software Cost Estimation (26)
Critical Systems Validation
Objective
→To explain how system reliability can be measured and how reliability
growth models can be used for reliability prediction
→To describe safety arguments and how these are used
→To discuss the problems of safety assurance
→To introduce safety cases and how these are used in safety validation
Topics covered
Reliability validation
Safety assurance
Security assessment
Safety and dependability cases
Validation of critical systems
The verification and validation of critical systems involve additional validation
processes and analyses compared with non-critical systems:
→ The cost of failure is much greater than for non-critical systems.
→ A dependability case may have to be produced, and this may require specific V & V activities to be carried out.
The reliability measurement process
Statistical Testing
Identify operational profiles → Prepare test data set → Apply tests to system → Compute observed reliability
Reliability validation activities
→Establish the operational profile for the system.
→Test the system and observe the number of failures and the times of these
failures.
→Statistical uncertainty
→You need a statistically significant number of failures to compute the reliability but
highly reliable systems will rarely fail.
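A sketch of the final measurement step: given the observed inter-failure times, the mean time to failure (MTTF) is their mean, and the rate of occurrence of failures (ROCOF) is its reciprocal. The function names are assumptions for illustration.

```c
/* Estimate MTTF from observed inter-failure times (same time unit
   throughout, e.g. hours of operation). */
double mttf(const double *inter_failure_times, int n) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += inter_failure_times[i];
    return total / n;   /* mean of the inter-failure times */
}

/* ROCOF: failures per unit time, the reciprocal of MTTF. */
double rocof(const double *inter_failure_times, int n) {
    return 1.0 / mttf(inter_failure_times, n);
}
```

This is why the statistical-uncertainty caveat above matters: with only a handful of observed failures, these point estimates have very wide confidence intervals.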
Operational profiles
→An operational profile is a set of test data whose frequency matches the
actual frequency of these inputs from ‘normal’ usage of the system.
→A close match with actual usage is necessary otherwise the measured
reliability will not be reflected in the actual usage of the system.
→It can be generated from real data collected from an existing system or (more
often) depends on assumptions made about the pattern of usage of a system.
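Generating test inputs that match an operational profile amounts to sampling input classes with probability proportional to their observed usage frequency. A minimal sketch, assuming the caller supplies a uniform random number and a weight table (both hypothetical):

```c
/* Pick an input class according to an operational profile: class i is
   chosen with probability weights[i] (weights sum to 1.0).
   `u` is a uniform random number in [0, 1) supplied by the caller. */
int pick_input_class(const double *weights, int n, double u) {
    double cumulative = 0.0;
    for (int i = 0; i < n; i++) {
        cumulative += weights[i];
        if (u < cumulative)
            return i;
    }
    return n - 1;  /* guard against floating-point rounding at the top end */
}
```

Driving the test-data generator through such a sampler is what makes the measured reliability reflect 'normal' usage rather than the tester's habits.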
Operational profile generation
→ Should be generated automatically whenever possible.
→ Reliability should then be measured and the observed data fitted to reliability growth models.
Security assessment
Tool-based validation
→Various security tools such as password checkers are used to analyse the system in operation.
Tiger teams
→A team is established whose goal is to breach the security of the system by simulating attacks on the
system.
Formal verification
→The system is verified against a formal security specification.
Security checklist
1. Do all files that are created in the application have appropriate access permissions? The
wrong access permissions may lead to these files being accessed by unauthorised users.
2. Does the system automatically terminate user sessions after a period of inactivity?
Sessions that are left active may allow unauthorised access through an unattended
computer.
3. If the system is written in a programming language without array bound checking, are
there situations where buffer overflow may be exploited? Buffer overflow may allow
attackers to send code strings to the system and then execute them.
4. If passwords are set, does the system check that passwords are ‘strong’? Strong
passwords consist of mixed letters, numbers and punctuation, and are not normal
dictionary entries. They are more difficult to break than simple passwords.
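Checklist item 3 can be illustrated in C: unbounded input routines can overflow a fixed-size buffer, whereas fgets bounds the read to the buffer size. A hedged sketch (the function name read_name is invented):

```c
#include <stdio.h>
#include <string.h>

/* Read one line into a fixed-size buffer without overflow risk.
   An unbounded read (e.g. gets, or scanf with a bare "%s") could write
   past the end of `buf`; fgets never reads more than size-1 characters
   and always leaves the buffer '\0'-terminated. */
int read_name(char *buf, size_t size, FILE *in) {
    if (fgets(buf, (int)size, in) == NULL)
        return -1;                      /* end of input or read error */
    buf[strcspn(buf, "\n")] = '\0';     /* strip the trailing newline */
    return 0;
}
```

Input longer than the buffer is silently truncated rather than overflowing, which is the property the checklist question is probing for.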
Metrics for Process, Project and Product
Metrics: metrics are quantitative measures of a process, project or product attribute.
The maintenance metrics covered here are fix backlog and the backlog management index, fix response time and fix responsiveness, percent delinquent fixes, and fix quality.
Metrics for Process, Project and Product
Fix backlog and backlog management index
• Fix backlog is related to the rate of defect arrivals and the rate at which fixes for
reported problems become available.
• It is a simple count of reported problems that remain at the end of each month or
each week.
• Backlog Management Index (BMI): the number of problems closed during the
month, expressed as a percentage of the problem arrivals during the month.
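The backlog management index is conventionally computed as the number of problems closed during the period divided by the number of problem arrivals, times 100 (a standard definition, e.g. Kan's); a value above 100 means the backlog is shrinking. A minimal sketch:

```c
/* BMI = (problems closed during the period / problem arrivals) x 100.
   BMI > 100: the fix backlog is being reduced.
   BMI < 100: the backlog is growing. */
double backlog_management_index(int closed, int arrivals) {
    return 100.0 * (double)closed / (double)arrivals;
}
```

For example, closing 40 problems in a month with 50 arrivals gives a BMI of 80, signalling a growing backlog.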
Metrics for Process, Project and Product
Fix response time and fix responsiveness
• The fix response time metric is usually calculated as the mean time of all problems
from open to close. Short fix response time leads to customer satisfaction.
• The important elements of fix responsiveness are customer expectations, the agreed-
to fix time, and the ability to meet one's commitment to the customer.
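The mean-time-from-open-to-close calculation described above can be sketched directly; the function name and the use of days as the time unit are assumptions:

```c
/* Mean fix response time: the mean of (close time - open time) over all
   closed problems. Times here are in days since some common epoch. */
double mean_fix_response(const double *open_t, const double *close_t, int n) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += close_t[i] - open_t[i];
    return total / n;
}
```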
Metrics for Process, Project and Product
Percent Delinquent Fixes
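The slide names the metric without defining it; a standard definition (e.g. Kan's) is the percentage of fixes whose turnaround time exceeded the agreed-to fix response time criterion. A minimal sketch under that assumption:

```c
/* Percent delinquent fixes: the percentage of fixes whose turnaround
   time exceeded the agreed-to fix response time criterion
   (standard definition; same time unit for both arguments). */
double percent_delinquent(const double *turnaround, int n, double criterion) {
    int late = 0;
    for (int i = 0; i < n; i++)
        if (turnaround[i] > criterion)
            late++;
    return 100.0 * (double)late / (double)n;
}
```

This complements mean fix response time: the mean can look acceptable while a sizeable fraction of individual fixes still miss their committed dates.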
Metrics for Process, Project and Product
Fix Quality
• Fix quality or the number of defective fixes is another important quality metric for the
maintenance phase.
• A fix is defective if it did not fix the reported problem, or if it fixed the original problem but
injected a new defect.
• For mission-critical software, defective fixes are detrimental to customer satisfaction.
• The metric of percent defective fixes is the percentage of all fixes in a time interval that is
defective.
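The percent-defective-fixes metric described above is a straightforward ratio; a minimal sketch (function name invented):

```c
/* Percent defective fixes: percentage of all fixes delivered in a time
   interval that were defective - i.e. did not fix the reported problem,
   or fixed it but injected a new defect. */
double percent_defective_fixes(int defective, int total_fixes) {
    return 100.0 * (double)defective / (double)total_fixes;
}
```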