Software Engineering Unit-V

Unit – V

TESTING & MAINTENANCE

Syllabus
Testing strategies - Testing tactics - Strategic issues for conventional and
object-oriented software - Verification and validation - Validation testing -
System testing - Art of debugging - Software evolution - Critical systems
validation - Metrics for process, project and product - Quality management -
Process improvement - Risk management - Configuration management -
Software cost estimation

TESTING STRATEGIES
• A strategy for software testing integrates the design of software test cases
into a well-planned series of steps that result in successful development
of the software

• The strategy provides a road map that describes the steps to be taken,
when, and how much effort, time, and resources will be required

• The strategy incorporates test planning, test case design, test execution,
and test result collection and evaluation

• The strategy provides guidance for the practitioner and a set of milestones
for the manager

• Because of time pressures, progress must be measurable and problems
must surface as early as possible

General Characteristics of Strategic Testing

• To perform effective testing, a software team should conduct effective formal
technical reviews

• Testing begins at the component level and works outward toward the
integration of the entire computer-based system

• Different testing techniques are appropriate at different points in time

• Testing is conducted by the developer of the software and (for large projects)
by an independent test group

• Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy

TESTING TACTICS & STRATEGIES


Levels of Software Testing

Testing is carried out at a series of levels. These levels correspond to the different
methodologies that can be used while conducting software testing. The following
are the main levels of software testing:

 Functional Testing.

 Non-Functional Testing.

Functional Testing

This is a type of black-box testing that is based on the specifications of the software
to be tested. The application is tested by providing input, and the results are then
examined to check that they conform to the functionality the software was intended
to provide. Functional testing is conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. There are five
steps involved when testing an application for functionality.

Step I: The determination of the functionality that the intended application is
meant to perform.
Step II: The creation of test data based on the specifications of the application.
Step III: The determination of the output based on the test data and the
specifications of the application.
Step IV: The writing of test scenarios and the execution of test cases.
Step V: The comparison of actual and expected results based on the executed test
cases.
a. Unit Testing

This type of testing is performed by the developers before the setup is handed over
to the testing team to formally execute the test cases. Unit testing is performed by
the respective developers on the individual units of source code in their assigned
areas. The developers use test data that is separate from the test data of the quality
assurance
team. The goal of unit testing is to isolate each part of the program and show that
individual parts are correct in terms of requirements and functionality.
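As a brief illustration, the following minimal sketch shows how a developer might
unit test one function in isolation using Python's standard unittest module. The
add_tax function and the expected values are hypothetical, invented for this
example.

import unittest

def add_tax(amount, rate=0.10):
    # Hypothetical unit under test: returns the amount plus tax.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    def test_typical_value(self):
        # Compare actual output against the expected result.
        self.assertEqual(add_tax(100.0), 110.0)

    def test_zero_amount(self):
        self.assertEqual(add_tax(0.0), 0.0)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            add_tax(-5.0)

if __name__ == "__main__":
    unittest.main()

Each test exercises one behavior of the unit with its own test data, which is what
makes failures easy to localize.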

Limitations of Unit Testing

Testing cannot catch each and every bug in an application. It is impossible to
evaluate every execution path in every software application, and the same is the
case with unit testing. There is a limit to the number of scenarios and test data
that the developer can use to verify the source code, so after the developer has
exhausted all options there is no choice but to stop unit testing.

b. Integration Testing

Integration testing is the testing of combined parts of an application to determine
whether they function correctly together. There are two methods of doing
integration testing: bottom-up integration testing and top-down integration testing.
In a comprehensive software development environment, bottom-up testing is
usually done first, followed by top-down testing.
Bottom-up integration
This testing begins with unit testing, followed by tests of progressively higher-
level combinations of units called modules or builds.
Top-down integration
In this testing, the highest-level modules are tested first, and progressively
lower-level modules are tested after that; a small stub-based sketch follows.
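To make the idea concrete, here is a minimal sketch of top-down integration in
Python. The module names are hypothetical: a high-level report generator is tested
first, with the not-yet-implemented database layer replaced by a stub that returns
canned data.

# High-level module under test (hypothetical).
def generate_report(fetch_sales):
    # Formats a sales report; depends on a lower-level data-access layer.
    total = sum(fetch_sales())
    return f"Total sales: {total}"

# Stub standing in for the unimplemented lower-level module.
def fetch_sales_stub():
    return [100, 250, 50]  # canned data instead of a real database query

# Integration check exercising the top-level module against the stub.
assert generate_report(fetch_sales_stub) == "Total sales: 400"
print("top-down integration check passed")

As lower-level modules are completed, each stub is replaced by the real component
and the same tests are re-run.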

c. System Testing

This is the next level of testing; it tests the system as a whole. Once all the
components are integrated, the application as a whole is tested rigorously to verify
that it meets the specified quality standards. This type of testing is performed by a
specialized testing team.

System testing is important for the following reasons:

 System testing is the first level of testing where the application is tested as
a whole.

 The application is tested thoroughly to verify that it meets the functional and
technical specifications.
 The application is tested in an environment which is very close to the
production environment where the application will be deployed.
 System testing enables us to test, verify and validate both the business
requirements as well as the application architecture.

d. Regression Testing

Whenever a change is made to a software application, it is quite possible that other
areas within the application have been affected by this change. Regression testing
verifies that a fixed bug has not resulted in a violation of other functionality or
business rules. The intent of regression testing is to ensure that a change, such as
a bug fix, does not result in another fault being uncovered in the application (a
minimal sketch follows the list below).

Regression testing is important for the following reasons:

 It minimizes gaps in testing when an application that has been changed must
be tested.
 It verifies that the new changes did not affect any other area of the
application.
 It mitigates risks in the changed application.
 It increases test coverage without compromising timelines.
 It increases the speed to market of the product.
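As a sketch, regression testing is often automated by re-running a recorded suite
of input/expected-output pairs after every change. The discount function and the
recorded cases below are hypothetical.

def discount(price, code):
    # Hypothetical function that was just modified to fix a bug.
    if code == "SAVE10":
        return round(price * 0.90, 2)
    return price

# Recorded regression suite: (input, expected output) pairs that passed
# before the change; re-run them all to catch unintended side effects.
REGRESSION_CASES = [
    ((100.0, "SAVE10"), 90.0),
    ((100.0, ""), 100.0),
    ((19.99, "SAVE10"), 17.99),
]

for args, expected in REGRESSION_CASES:
    actual = discount(*args)
    assert actual == expected, f"regression in discount{args}: got {actual}"
print("all recorded cases still pass")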

e. Acceptance Testing

This is arguably the most important type of testing, as it is conducted by the
quality assurance team, which gauges whether the application meets the intended
specifications and satisfies the client's requirements. The QA team will have a set
of pre-written scenarios and test cases that will be used to test the application.

More ideas will be shared about the application, and more tests can be performed
on it to gauge its accuracy against the reasons why the project was initiated.
Acceptance tests are intended not only to point out simple spelling mistakes,
cosmetic errors or interface gaps, but also to point out any bugs in the application
that would result in system crashes or major errors in the application.

By performing acceptance tests on an application the testing team will deduce how
the application will perform in production. There are also legal and contractual
requirements for acceptance of the system.

f. Alpha Testing

This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing when
combined are known as alpha testing. During this phase, the following will be
tested in the application:

 Spelling Mistakes

 Broken Links

 Unclear directions

 The application will be tested on machines with the lowest specification to
test loading times and any latency problems.

g. Beta Testing

This test is performed after Alpha testing has been successfully performed. In beta
testing a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing. Beta test versions of software are ideally distributed
to a wide audience on the Web, partly to give the program a "real-world" test and
partly to provide a preview of the next release. In this phase the audience will be
testing the following:

 Users will install, run the application and send their feedback to the project
team.
 Typographical errors, confusing application flow, and even crashes.
 Using this feedback, the project team can fix the problems before releasing
the software to the actual users.
 The more issues you fix that solve real user problems, the higher the quality
of your application will be.
 Having a higher-quality application when you release to the general public
will increase customer satisfaction.


Non-Functional Testing

This testing examines the application from the standpoint of its non-functional
attributes. Non-functional testing involves testing the software against
requirements that are non-functional in nature but important as well, such as
performance, security and user interface. Some of the important and commonly
used non-functional testing types are as follows:

a. Performance Testing

Performance testing is mostly used to identify bottlenecks or performance issues
rather than to find bugs in the software. Different causes contribute to lowering the
performance of software:

 Network delay.
 Client side processing.
 Database transaction processing.
 Load balancing between servers.
 Data rendering.
Performance testing is considered one of the important and mandatory testing
types in terms of the following aspects:
 Speed (i.e. Response Time, data rendering and accessing)
 Capacity
 Stability
 Scalability
It can be either a qualitative or a quantitative testing activity, and it can be divided
into different sub-types such as load testing and stress testing.

b. Load Testing

Load testing is the process of testing the behavior of the software by applying
maximum load, in terms of the software accessing and manipulating large input
data. It can be done at both normal and peak load conditions. This type of testing
identifies the maximum capacity of the software and its behavior at peak time.


Most of the time, Load testing is performed with the help of automated tools such as
Load Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk
Performer, Visual Studio Load Test etc. Virtual users (VUsers) are defined in the
automated testing tool and the script is executed to verify the Load testing for the
Software. The quantity of users can be increased or decreased concurrently or
incrementally based upon the requirements.
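The following is a minimal sketch of the virtual-user idea, assuming a hypothetical
local function stands in for the system under test; a real load test would drive a
deployed service through one of the tools named above.

import threading
import time

def system_under_test(request_id):
    # Hypothetical stand-in for a real service call.
    time.sleep(0.01)  # simulate processing latency

def virtual_user(user_id, requests, latencies):
    # Each virtual user issues a series of requests and records latencies.
    for i in range(requests):
        start = time.perf_counter()
        system_under_test(f"user{user_id}-req{i}")
        latencies.append(time.perf_counter() - start)

latencies = []
vusers = [threading.Thread(target=virtual_user, args=(u, 20, latencies))
          for u in range(50)]  # 50 concurrent virtual users
start = time.perf_counter()
for t in vusers:
    t.start()
for t in vusers:
    t.join()
elapsed = time.perf_counter() - start
print(f"{len(latencies)} requests in {elapsed:.2f}s, average latency "
      f"{sum(latencies) / len(latencies) * 1000:.1f} ms")

Increasing the number of virtual users until response times degrade reveals the
maximum capacity described above.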

c. Stress Testing

This testing type covers the behavior of the software under abnormal conditions,
for example taking away resources or applying load beyond the actual load limit.
The main intent is to test the software by applying load to the system and taking
over the resources used by the software, in order to identify the breaking point.
This testing can be performed with different scenarios, such as:

 Random shutdown or restart of network ports.
 Turning the database on or off.
 Running different processes that consume resources such as CPU, Memory,
server etc.

d. Usability Testing

Usability testing is a black-box technique used to identify errors and possible
improvements in the software by observing users during their usage and operation
of it. Usability can be defined in terms of five factors: efficiency of use, learnability,
memorability, errors/safety and satisfaction. If a product possesses these factors,
its usability is good and the system is usable.

Usability is a quality requirement that can be measured as the outcome of
interactions with a computer system. The requirement is fulfilled, and the end user
satisfied, if the intended goals are achieved effectively with the use of proper
resources.


e. Security Testing

Security testing involves testing the software in order to identify any flaws and
gaps from a security and vulnerability point of view. The following are the main
aspects that security testing should ensure:

 Confidentiality.
 Integrity.
 Authentication.
 Availability.
 Authorization.
 Non-repudiation.
 SQL injection attacks.
 Injection flaws.
 Session management issues.

f. Portability Testing

Portability testing includes testing the software with the intent that it should be
reusable and can be moved from one environment to another. The following
strategies can be used for portability testing:

 Transferring installed software from one computer to another.
 Building an executable (.exe) to run the software on different platforms.
Portability testing can be considered one of the sub-parts of system testing, as this
testing type covers the overall testing of software with respect to its usage over
different environments. Computer hardware, operating systems and browsers are
the major focus of portability testing. The following are some pre-conditions for
portability testing:

 Software should be designed and coded, keeping in mind the portability
requirements.
 Unit testing has been performed on the associated components.
 Integration testing has been performed.
 The test environment has been established.


STRATEGIC ISSUES FOR CONVENTIONAL SOFTWARE TESTING


Unit testing

 Exercises specific paths in a component's control structure to ensure
complete coverage and maximum error detection
 Components are then assembled and integrated

Integration testing

 Focuses on inputs and outputs, and how well the components fit
together and work together

Validation testing

 Provides final assurance that the software meets all functional,
behavioral and performance requirements

System testing

 Verifies that all system elements (software, hardware, people,
databases) mesh properly and that overall system function and
performance is achieved

STRATEGIC ISSUES FOR OBJECT ORIENTED SOFTWARE TESTING

 Must broaden testing to include detection of errors in analysis and
design models
 Unit testing loses some of its meaning and integration testing changes
significantly
 Use the same philosophy as in conventional software testing, but a
different approach
 Test "in the small" and then work out to testing "in the large"
– Testing in the small involves class attributes and operations; the
main focus is on communication and collaboration within the
class
– Testing in the large involves a series of regression tests to uncover
errors due to communication and collaboration among classes


 Finally, the system as a whole is tested to detect errors in fulfilling
requirements
 With object-oriented software, you can no longer test a single operation
in isolation (conventional thinking)
 Class testing for object-oriented software is the equivalent of unit
testing for conventional software
– Focuses on operations encapsulated by the class and the state
behavior of the class
 Drivers can be used
– To test operations at the lowest level and for testing whole groups
of classes
– To replace the user interface so that tests of system functionality
can be conducted prior to implementation of the actual interface
 Stubs can be used
– In situations in which collaboration between classes is required
but one or more of the collaborating classes has not yet been fully
implemented
 Two different object-oriented testing strategies
– Thread-based testing
• Integrates the set of classes required to respond to one input
or event for the system
• Each thread is integrated and tested individually
• Regression testing is applied to ensure that no side effects
occur
– Use-based testing
• First tests the independent classes that use very few, if any,
server classes
• Then the next layer of classes, called dependent classes, are
integrated
• This sequence of testing layers of dependent classes
continues until the entire system is constructed

VERIFICATION AND VALIDATION

• Software testing is part of a broader group of activities called
verification and validation that are involved in software quality
assurance

• Verification (Are the algorithms coded correctly?)

– The set of activities that ensure that software correctly
implements a specific function or algorithm

• Validation (Does it meet user requirements?)

– The set of activities that ensure that the software that has been
built is traceable to customer requirements

VALIDATION TESTING
• Validation testing follows integration testing

• The distinction between conventional and object-oriented software
disappears

• Focuses on user-visible actions and user-recognizable output from the
system

• Demonstrates conformity with requirements

• Designed to ensure that

– All functional requirements are satisfied

– All behavioral characteristics are achieved

– All performance requirements are attained

– Documentation is correct

– Usability and other requirements are met (e.g., transportability,
compatibility, error recovery, maintainability)

• After each validation test

– The function or performance characteristic conforms to
specification and is accepted

– A deviation from specification is uncovered and a deficiency list is
created

• A configuration review or audit ensures that all elements of the software
configuration have been properly developed, cataloged, and have the
necessary detail for entering the support phase of the software life cycle

Alpha and Beta Testing

• Alpha testing

– Conducted at the developer’s site by end users

– Software is used in a natural setting with developers watching
intently

– Testing is conducted in a controlled environment

• Beta testing

– Conducted at end-user sites

– Developer is generally not present

– It serves as a live application of the software in an environment
that cannot be controlled by the developer

– The end-user records all problems that are encountered and
reports these to the developers at regular intervals

• After beta testing is complete, software engineers make software
modifications and prepare for release of the software product to the
entire customer base


SYSTEM TESTING

• Recovery testing

– Tests for recovery from system faults

– Forces the software to fail in a variety of ways and verifies that
recovery is properly performed

– Tests re-initialization, checkpointing mechanisms, data recovery,
and restart for correctness

• Security testing

– Verifies that protection mechanisms built into a system will, in
fact, protect it from improper access

• Stress testing

– Executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume

• Performance testing

– Tests the run-time performance of software within the context of
an integrated system

– Often coupled with stress testing and usually requires both
hardware and software instrumentation

– Can uncover situations that lead to degradation and possible
system failure


ART OF DEBUGGING
Debugging Process

• Debugging occurs as a consequence of successful testing
• It is still very much an art rather than a science
• Good debugging ability may be an innate human trait
• Large variances in debugging ability exist
• The debugging process begins with the execution of a test case
• Results are assessed and the difference between expected and actual
performance is encountered
• This difference is a symptom of an underlying cause that lies hidden
• The debugging process attempts to match symptom with cause, thereby
leading to error correction
Why is Debugging so Difficult?

• The symptom and the cause may be geographically remote
• The symptom may disappear (temporarily) when another error is
corrected
• The symptom may actually be caused by non-errors (e.g., round-off
inaccuracies)
• It may be difficult to accurately reproduce input conditions, such as
asynchronous real-time information
Debugging Strategies
• Objective of debugging is to find and correct the cause of a software
error
• Bugs are found by a combination of systematic evaluation, intuition,
and luck
• Debugging methods and tools are not a substitute for careful evaluation
based on a complete design model and clear source code
• There are three main debugging strategies


1. Brute Force
• Most commonly used and least efficient method
• Used when all else fails
• Involves the use of memory dumps, run-time traces, and output
statements
• Leads many times to wasted effort and time
2. Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom has been
uncovered
• The source code is then traced backward (manually) until the location
of the cause is found
• In large programs, the number of potential backward paths may
become unmanageably large
3. Cause Elimination
• Involves the use of induction or deduction and introduces the concept
of binary partitioning (a bisection sketch follows this list)
– Induction (specific to general): Prove that a specific starting value
is true; then prove the general case is true
– Deduction (general to specific): Show that a specific conclusion
follows from a set of general premises
• Data related to the error occurrence are organized to isolate potential
causes
• A cause hypothesis is devised, and the aforementioned data are used to
prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests are
conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows
promise, data are refined in an attempt to isolate the bug
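As an illustration of binary partitioning, the sketch below bisects a list of candidate
code changes to locate the first one that introduced a defect, in the spirit of tools
such as git bisect. The change numbers and the is_bad test are hypothetical.

def is_bad(change):
    # Hypothetical check: returns True once the defect is present.
    return change >= 7   # pretend change #7 introduced the bug

def bisect_changes(changes):
    # Binary-partition the search space to find the first bad change.
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(changes[mid]):
            hi = mid          # defect is at mid or earlier
        else:
            lo = mid + 1      # defect is after mid
    return changes[lo]

print(bisect_changes(list(range(1, 16))))  # finds change 7 in four checks

Each test halves the set of remaining suspects, which is why cause elimination
scales far better than brute force.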


SOFTWARE EVOLUTION: CRITICAL SYSTEMS VALIDATION

 To explain how system reliability can be measured and how reliability
growth models can be used for reliability prediction.
 To describe safety arguments and how these are used
 To discuss the problems of safety assurance
 To introduce safety cases and how these are used in safety validation

Validation of critical systems

The verification and validation costs for critical systems involve additional
validation processes and analyses compared with those for noncritical systems:

 The costs and consequences of failure are high, so it is cheaper to find
and remove faults than to pay for system failure.
 You may have to make a formal case to customers or to a regulator that
the system meets its dependability requirements. This dependability
case may require specific V & V activities to be carried out.

Validation costs

 The validation costs for critical systems are usually significantly higher
than for noncritical systems.
 Normally, V & V costs take up more than 50% of the total system
development costs.

Reliability validation

 Reliability validation involves exercising the program to assess whether
or not it has reached the required level of reliability.
 This cannot normally be included as part of a normal defect testing
process, because the data used for defect testing is (usually) atypical of
actual usage data.
 Reliability measurement therefore requires a specially designed data set
that replicates the pattern of inputs to be processed by the system.

Statistical testing

 Testing software for reliability rather than fault detection.
 Measuring the number of errors allows the reliability of the software to
be predicted. Note that, for statistical reasons, more errors than are
allowed for in the reliability specification must be induced.
 An acceptable level of reliability should be specified, and the software
tested and amended until that level of reliability is reached (a small
sketch follows).
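A minimal sketch of the statistical testing idea, with assumed numbers: inputs are
drawn from a hypothetical operational profile, and the observed failure rate is
compared against a specified reliability target.

import random
random.seed(42)

def system_under_test(x):
    # Hypothetical system: fails on a small fraction of inputs.
    return x != 13   # pretend input 13 triggers a failure

# Hypothetical operational profile: input values weighted by usage frequency.
profile_values = [1, 2, 13, 20]
profile_weights = [0.50, 0.30, 0.05, 0.15]

N = 10_000
failures = sum(
    1 for _ in range(N)
    if not system_under_test(random.choices(profile_values, profile_weights)[0])
)

pofod = failures / N                    # probability of failure on demand
print(f"observed POFOD = {pofod:.3f}")  # close to 0.05 for this profile
print("meets target" if pofod <= 0.10 else "amend and retest")

Because the test data mirrors real usage, the measured failure rate estimates the
reliability users will actually experience.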

METRICS FOR PROCESS, PROJECT AND PRODUCT


Software process and project metrics are quantitative measures that enable
software engineers to gain insight into the efficiency of the software process and the
projects conducted using the process framework. In software project management,
we are primarily concerned with productivity and quality metrics. There are four
reasons for measuring software processes, products, and resources (to characterize,
to evaluate, to predict, and to improve).

Process and Project Metrics

 Metrics should be collected so that process and product indicators can be
ascertained
 Process metrics are used to provide indicators that lead to long-term process
improvement
 Project metrics enable a project manager to
o Assess the status of an ongoing project
o Track potential risks
o Uncover problem areas before they go critical
o Adjust work flow or tasks
o Evaluate the project team's ability to control the quality of software
work products

Process Metrics

 Private process metrics (e.g. defect rates by individual or module) are known
only to the individual or team concerned.
 Public process metrics enable organizations to make strategic changes to
improve the software process.
 Metrics should not be used to evaluate the performance of individuals.
 Statistical software process improvement helps an organization to discover
where it is strong and where it is weak.

Project Metrics

 A software team can use software project metrics to adapt project workflow
and technical activities.
 Project metrics are used to avoid development schedule delays, to mitigate
potential risks, and to assess product quality on an on-going basis.
 Every project should measure its inputs (resources), outputs (deliverables),
and results (effectiveness of deliverables).


Size-Oriented Metrics

• Derived by normalizing (dividing) any direct measure (e.g. defects or human
effort) associated with the product or project by LOC.
• Size-oriented metrics are widely used, but their validity and applicability are
widely debated.

Function-Oriented Metrics

• Function points are computed from direct measures of the information
domain of a business software application and an assessment of its
complexity.
• Once computed, function points are used like LOC to normalize measures
for software productivity, quality, and other attributes.
• The relationship of LOC and function points depends on the language used
to implement the software.

Reconciling LOC and FP Metrics

• The relationship between lines of code and function points depends upon the
programming language that is used to implement the software and the quality
of the design
• Function points and LOC-based metrics have been found to be relatively
accurate predictors of software development effort and cost
• To use LOC and FP for estimation, a historical baseline of information must
be established.

Object-Oriented Metrics

• Number of scenario scripts (NSS)
• Number of key classes (NKC)
• Number of support classes (e.g. UI classes, database access classes,
computations classes, etc.)
• Average number of support classes per key class
• Number of subsystems (NSUB)

Use Case-Oriented Metrics

• Describe (indirectly) user-visible functions and features in a
language-independent manner
• The number of use cases is directly proportional to the LOC size of the
application and the number of test cases needed
• However, use cases do not come in standard sizes, and their use as a
normalization measure is suspect
• Use case points have been suggested as a mechanism for estimating effort


WebApp Project Metrics

• Number of static Web pages (Nsp)
• Number of dynamic Web pages (Ndp)
• Customization index: C = Nsp / (Ndp + Nsp) (computed in the sketch below)
• Number of internal page links
• Number of persistent data objects
• Number of external systems interfaced
• Number of static content objects
• Number of dynamic content objects
• Number of executable functions
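For instance, with assumed page counts, the customization index from the list
above can be computed as follows; a value near 0 indicates a highly dynamic,
customized WebApp.

def customization_index(n_static, n_dynamic):
    # C = Nsp / (Ndp + Nsp), from the WebApp metrics above.
    return n_static / (n_dynamic + n_static)

# Assumed counts for illustration: 20 static pages, 60 dynamic pages.
print(f"C = {customization_index(20, 60):.2f}")  # prints C = 0.25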

Software Quality Metrics

• Factors assessing software quality come from three distinct points of view
(product operation, product revision, product modification).
• Software quality factors requiring measures include
o correctness (defects per KLOC)
o maintainability (mean time to change)
o integrity (threat and security)
o usability (easy to learn, easy to use, productivity increase, user
attitude)
• Defect removal efficiency (DRE) is a measure of the filtering ability of the
quality assurance and control activities as they are applied throughout the
process framework:
DRE = E / (E + D)
where E = number of errors found before delivery of the work product and
D = number of defects found after work product delivery
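For example, with assumed counts of 90 errors found before delivery and 10
defects found afterwards, DRE works out as below.

def dre(errors_before, defects_after):
    # Defect removal efficiency: DRE = E / (E + D).
    return errors_before / (errors_before + defects_after)

# Assumed counts: 90 errors caught before delivery, 10 defects found after.
print(f"DRE = {dre(90, 10):.2f}")  # prints DRE = 0.90

A DRE approaching 1.0 means the quality activities are filtering out almost all
defects before delivery.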


QUALITY MANAGEMENT
• Also called software quality assurance (SQA)

• Serves as an umbrella activity that is applied throughout the software
process

• Involves doing the software development correctly versus doing it over
again

• Reduces the amount of rework, which results in lower costs and
improved time to market

• Encompasses

– A software quality assurance process

– Specific quality assurance and quality control tasks (including
formal technical reviews and a multi-tiered testing strategy)

– Effective software engineering practices (methods and tools)

– Control of all software work products and the changes made to
them

– A procedure to ensure compliance with software development
standards

– Measurement and reporting mechanisms

Quality

• Defined as a characteristic or attribute of something

• Refers to measurable characteristics that we can compare to known
standards

• In software it involves such measures as cyclomatic complexity,
cohesion, coupling, function points, and source lines of code

• Includes variation control

– A software development organization should strive to minimize
the variation between the predicted and the actual values for cost,
schedule, and resources

– They should make sure their testing program covers a known
percentage of the software from one release to another


– One goal is to ensure that the variance in the number of bugs is
also minimized from one release to another

• Two kinds of quality are sought out

– Quality of design

• The characteristic that designers specify for an item

• This encompasses requirements, specifications, and the
design of the system

– Quality of conformance (i.e., implementation)

• The degree to which the design specifications are followed
during manufacturing

• This focuses on how well the implementation follows the
design and how well the resulting system meets its
requirements

Quality also can be looked at in terms of user satisfaction

User satisfaction = compliant product + good quality
+ delivery within budget and schedule

Quality Control

• Involves a series of inspections, reviews, and tests used throughout the
software process

• Ensures that each work product meets the requirements placed on it

• Includes a feedback loop to the process that created the work product

– This is essential in minimizing the errors produced

• Combines measurement and feedback in order to adjust the process
when product specifications are not met

• Requires all work products to have defined, measurable specifications
against which practitioners may compare the output of each process

The Cost of Quality

• Includes all costs incurred in the pursuit of quality or in performing
quality-related activities


• Is studied to

– Provide a baseline for the current cost of quality

– Identify opportunities for reducing the cost of quality

– Provide a normalized basis of comparison (which is usually
dollars)

• Involves various kinds of quality costs (see below)

• Increases dramatically as the activities progress from

– Prevention → Detection → Internal failure → External failure

Types of Cost

• Prevention costs

– Quality planning, formal technical reviews, test equipment,
training

• Appraisal costs

– Inspections, equipment calibration and maintenance, testing

• Failure costs – subdivided into internal failure costs and external
failure costs

– Internal failure costs

• Incurred when an error is detected in a product prior to
shipment

• Include rework, repair, and failure mode analysis

– External failure costs

• Involve defects found after the product has been shipped

• Include complaint resolution, product return and
replacement, help line support, and warranty work


PROCESS IMPROVEMENT
• Understanding existing processes and introducing process changes to
improve product quality, reduce costs or accelerate schedules.
• Most process improvement work so far has focused on defect
reduction. This reflects the increasing attention paid by industry to
quality.
• However, other process attributes can also be the focus of
improvement

Process attributes

Understandability: To what extent is the process explicitly defined, and how easy
is it to understand the process definition?
Visibility: Do the process activities culminate in clear results, so that the progress
of the process is externally visible?
Supportability: To what extent can CASE tools be used to support the process
activities?
Acceptability: Is the defined process acceptable to and usable by the engineers
responsible for producing the software product?
Reliability: Is the process designed in such a way that process errors are avoided
or trapped before they result in product errors?
Robustness: Can the process continue in spite of unexpected problems?
Maintainability: Can the process evolve to reflect changing organisational
requirements or identified process improvements?
Rapidity: How fast can the process of delivering a system from a given
specification be completed?

The process improvement cycle

The cycle is a continuous loop: measure, then analyse, then change, and then
measure again.


Process improvement stages

• Process measurement

o Attributes of the current process are measured. These are a
baseline for assessing improvements.

• Process analysis

o The current process is assessed, and bottlenecks and weaknesses
are identified.

• Process change

o Changes to the process that have been identified during the
analysis are introduced.

RISK MANAGEMENT

• Software risks:
– What can go wrong?
– What is the likelihood?
– What will be the damage?
– What can be done about it?
• Risk analysis and management are a set of activities that help a
software team to understand and manage uncertainty about a
project.
Risk: Definition and Attributes

• Risk is the uncertainty associated with the outcome of a future event
and has a number of attributes:
– Uncertainty (probability)
– Time (future event)
– Potential for loss (or gain)
• Multiple perspectives, e.g.,
– Process perspective (development process breaks)
– Project perspective (critical objectives are missed)
– Product perspective (loss of code integrity)
– User perspective (loss of functionality)


A Risk vs a Problem

• A problem is a condition that exists and is undesirable. Thus it is a
value judgment made upon the merits of the current condition.
• A risk suggests a possible, future undesirable condition (or
consequence). Thus it is a value judgment made upon the potential
implications of the current conditions.
• Risks that are not identified (or identified but not prevented) will later
become problems
• Note: risks are not always problems; they can be beneficial

Risk Management

• The process by which a course of action is selected that balances the
potential impact of a risk, weighted by its probability of occurrence,
against the benefits of avoiding (or controlling) the risk (a risk-exposure
sketch follows the list below)

• Risk management life cycle:


– Identify (risk identification)
– Analyze (risk analysis)
– Plan (contingency planning)
– Track (risk monitoring)
– Control (recovery management)
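One common way to make 'impact weighted by probability' concrete is a
risk-exposure calculation. The risks, probabilities and costs below are assumed
purely for illustration.

# Risk exposure = probability of occurrence x cost (impact) if it occurs.
# Hypothetical risk register for illustration.
risks = [
    ("Key developer leaves",       0.30, 40_000),  # (name, probability, cost)
    ("Requirements change late",   0.50, 25_000),
    ("Third-party API is retired", 0.10, 60_000),
]

for name, prob, cost in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:28s} exposure = {prob * cost:>9,.0f}")
# The highest-exposure risks are the first candidates for mitigation plans.

Ranking risks by exposure tells the team where mitigation, monitoring and
contingency effort is best spent.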

RMMM: Risk Mitigation, Monitoring, and Management

• Mitigation: how can we avoid the risk?


• Monitoring: what factors can we track that will enable us to determine
if the risk is becoming more or less likely?
• Management: what contingency plans do we have if the risk becomes a
reality?

Types of Risks

• Project risks: threaten the project plan (e.g., cost overrun)
• Technical risks: threaten the quality and timeliness of the product (e.g.,
specification ambiguity)
specification ambiguity)
• Business Risks: threaten the viability of the product (e.g., product not
aligned with business strategy or losing upper management support, or
losing budget)


CONFIGURATION MANAGEMENT
A set of management disciplines within the software engineering process to
develop a baseline. Software Configuration Management encompasses the
disciplines and techniques of initiating, evaluating and controlling change to
software products during and after the software engineering process.

• SCM is a Project Function (as defined in the SPMP) with the goal to
make technical and managerial activities more effective.
• Software Configuration Management can be administered in several
ways:
• A single software configuration management team for the whole
organization
• A separate configuration management team for each project
• Software Configuration Management distributed among the project
members
• Mixture of all of the above

Configuration Management Activities

• Software Configuration Management Activities:


• Configuration item identification
• Promotion management
• Release management
• Branch management
• Variant management
• Change management

Configuration item identification

is the modeling of the system as a set of evolving components

Promotion management

is the creation of versions for other developers

Release management

is the creation of versions for the clients and users

Change management

is the handling, approval and tracking of change requests

Branch management

is the management of concurrent development

Variant management

is the management of versions intended to coexist

Configuration Management Roles

Configuration Manager

Responsible for identifying configuration items. The configuration
manager can also be responsible for defining the procedures for creating
promotions and releases

Change control board member

Responsible for approving or rejecting change requests

Developer

Creates promotions triggered by change requests or the normal
activities of development. The developer checks in changes and resolves
conflicts

Auditor

Responsible for the selection and evaluation of promotions for release
and for ensuring the consistency and completeness of this release


SOFTWARE COST ESTIMATION


Estimates of cost and schedule in software projects are based on a prediction of the
size of the future system. Reliable early estimates are difficult to obtain because of
the lack of detailed information about the future system at an early stage.
However, early estimates are required when bidding for a contract or determining
whether a project is feasible in terms of a cost-benefit analysis.

Traditional cost models take software size as an input parameter, and then
apply a set of adjustment factors or 'cost drivers' to compute an estimate of
total effort. In object-oriented software production, use cases describe
functional requirements.

Estimation of the resources, cost and schedule of software development is very
important. Making a good estimate requires experience and expertise to convert
qualitative measures into quantitative form. Factors like project size and the
amount of risk involved affect the accuracy and efficacy of estimates.

Estimation Techniques

The following are the different techniques for estimation

 Decomposition Technique
 Empirical Estimation Models
 Automated Estimation Tools

Decomposition Technique

Here we subdivide the problem into small problems. When all the small problems
are solved, the main problem is solved.

 Lines of Code
 Function Point

The LOC (Lines of Code) and FP (Function Point) estimation methods consider size
as the measure. In LOC, the cost is calculated based on the number of lines; in FP,
the cost is calculated based on the number of the various functions in the program.


Empirical Estimation Models

Estimation models use empirically derived formulas to predict the estimates. Here
we conduct a study of some completed projects, and from those observations we
form statistical formulas. We can then use these formulas to estimate the cost of
other projects. The structure of an empirical estimation model is a formula, derived
from data collected from past software projects, that uses software size to estimate
effort. Size itself is an estimate, described as either lines of code (LOC) or function
points (FP).

Constructive Cost Model (COCOMO)

Model 1 : Basic COCOMO

Model 2 : Intermediate COCOMO

Model 3 : Advanced COCOMO

The basic COCOMO model computes software development effort (and cost) as a
function of program size, expressed in estimated lines of code. The intermediate
COCOMO model computes software development effort as a function of program
size and a set of cost drivers that include hardware and personnel attributes. The
advanced COCOMO model incorporates all characteristics of the intermediate
version, with an assessment of the cost drivers' impact on each step (analysis,
design, coding, etc.) of the software engineering process.
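A minimal sketch of the basic COCOMO calculation: effort = a * (KLOC)^b
person-months and duration = c * (effort)^d months. The coefficients below are
the commonly published values for an 'organic' (small, in-house) project, and the
32 KLOC size is an assumed example.

def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    # Basic COCOMO with organic-mode coefficients.
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # months
    return effort, duration

effort, duration = basic_cocomo(32)  # assumed project size: 32 KLOC
print(f"effort   = {effort:.1f} person-months")
print(f"duration = {duration:.1f} months")
print(f"staffing = {effort / duration:.1f} people on average")

Intermediate COCOMO would further multiply the effort by cost-driver ratings for
hardware and personnel attributes.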

Automated Estimation Tools

The decomposition techniques and the empirical estimation models can be
implemented using software. These automated tools allow the planner to estimate
cost and effort, and they also give important information such as the delivery date
and staffing.


Challenges in Software Cost Estimation


The following are the challenges that arise while estimating software cost and
schedule.
1. The project is poorly scoped
It is difficult to estimate time on a project when that project is unknown. In almost
every large project, it is expected that the system should handle anything at any
future point in time, even though no one yet knows what those features might be.
2. Development time is estimated by non-programmers
A project is doomed the moment a manager writes their own fictional estimate. At
best, they’ll be completely incorrect. At worst, the programmers will be tempted to
prove them wrong.
3. Developer estimates are too optimistic
Developers think in terms of coding hours. Time passes quickly when they are in
the zone and it’s difficult to assess the own speed. Appreciating the speed of other
developers is impossible. Many developers are over-optimistic.
4. Estimated time is always used up
Give a programmer 5 days to complete a task and it will take 5 days. Software
development is infinitely variable, and any code can be improved. If a developer
finishes the task in 3 days, they will spend the remaining time tweaking it or doing
other activities.
5. More developers != quicker development
A 100-day project will not be completed in 1 day by 100 developers. More people
result in an exponential increase in communication complexity.
6. The project scope changes
This is perhaps the most irritating problem for a developer. A feature is changed or
added because customer X has requested it or the CEO thinks it’s a cool thing to
do.
7. Estimates are fixed
Estimates should be continually assessed and updated as the system development
progresses.
8. Testing time is forgotten
It is impossible for a developer to adequately test their own code. They know how it
should work, so they consciously or subconsciously test in a specific way.
