Software Engineering Unit-V: Syllabus
Unit – V
Syllabus
Testing strategies - Testing tactics - Strategic issues for conventional and
object-oriented software - Verification and validation - Validation testing -
System testing - Art of debugging. Software evolution - Critical systems
validation - Metrics for process, project and product - Quality management -
Process improvement - Risk management - Configuration management -
Software cost estimation
TESTING STRATEGIES
• A strategy for software testing integrates the design of software test cases
into a well-planned series of steps that result in successful development
of the software
• The strategy provides a road map that describes the steps to be taken,
when, and how much effort, time, and resources will be required
• The strategy incorporates test planning, test case design, test execution,
and test result collection and evaluation
• The strategy provides guidance for the practitioner and a set of milestones
for the manager
• Testing begins at the component level and works outward toward the
integration of the entire computer-based system
• Testing is conducted by the developer of the software and (for large projects)
by an independent test group
Levels of testing include the different methodologies that can be used while
conducting Software Testing. Following are the main levels of Software Testing:
Functional Testing.
Non-Functional Testing.
Functional Testing
This is a type of black-box testing that is based on the specifications of the
software to be tested. The application is tested by providing input, and the
results are examined to confirm that they conform to the intended functionality.
Functional testing is conducted on a complete, integrated system to evaluate the
system's compliance with its specified requirements. There are five steps
involved when testing an application for functionality.
Step I: The determination of the functionality that the intended application is
meant to perform.
Step II: The creation of test data based on the specifications of the application.
Step III: The output based on the test data and the specifications of the
application.
Step IV: The writing of test scenarios and the execution of test cases.
Step V: The comparison of actual and expected results based on the executed test
cases.
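The steps above can be sketched in code. A minimal illustration, assuming a hypothetical discount() function as the application under test (the function, test data, and expected values are all invented for the example):

```python
# Hypothetical function under test: 10% off orders of 100 or more.
def discount(amount):
    return amount * 0.9 if amount >= 100 else amount

# Steps I/II: functionality and test data derived from the specification.
# Each pair is (input, expected output).
test_cases = [
    (50, 50),      # below threshold: no discount
    (100, 90.0),   # at threshold: 10% discount applies
    (200, 180.0),  # above threshold: 10% discount applies
]

# Steps III-V: execute the cases and compare actual with expected results.
for test_input, expected in test_cases:
    actual = discount(test_input)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"input={test_input} expected={expected} actual={actual} -> {verdict}")
```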
a. Unit Testing
This type of testing is performed by the developers before the setup is handed
over to the testing team to formally execute the test cases. Unit testing is
performed by the respective developers on the individual units of source code in
their assigned areas. The developers use test data that is separate from the
test data of the quality assurance team. The goal of unit testing is to isolate
each part of the program and show that the individual parts are correct in terms
of requirements and functionality.
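A minimal sketch of a unit test using Python's built-in unittest module; add() is a hypothetical unit under test, not from the text:

```python
import unittest

def add(a, b):
    """Unit under test: returns the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method isolates one behavior of the unit.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    # exit=False lets the script continue after reporting results.
    unittest.main(exit=False)
```

In practice such tests live in a separate test module and are run with `python -m unittest`.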
b. Integration Testing
c. System Testing
This is the next level of testing; it tests the system as a whole. Once all the
components are integrated, the application as a whole is tested rigorously to
see that it meets the required quality standards. This type of testing is
performed by a specialized testing team.
The application is tested thoroughly to verify that it meets the functional and
technical specifications.
The application is tested in an environment which is very close to the
production environment where the application will be deployed.
System testing enables us to test, verify, and validate both the business
requirements and the application architecture.
d. Regression Testing
e. Acceptance Testing
More ideas will be shared about the application, and more tests can be performed
on it to gauge its accuracy and whether it fulfils the purpose for which the
project was initiated. Acceptance tests are intended not only to point out
simple spelling mistakes, cosmetic errors, or interface gaps, but also any bugs
in the application that will result in system crashes or major errors.
By performing acceptance tests on an application the testing team will deduce how
the application will perform in production. There are also legal and contractual
requirements for acceptance of the system.
f. Alpha Testing
This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing when
combined are known as alpha testing. During this phase, the following will be
tested in the application:
Spelling Mistakes
Broken Links
Unclear directions
g. Beta Testing
This test is performed after Alpha testing has been successfully performed. In beta
testing a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing. Beta test versions of software are ideally distributed
to a wide audience on the Web, partly to give the program a "real-world" test and
partly to provide a preview of the next release. In this phase the audience will be
testing the following:
Users will install, run the application and send their feedback to the project
team.
Typographical errors, confusing application flow, and even crashes.
After getting the feedback, the project team can fix the problems before
releasing the software to the actual users.
The more issues you fix that solve real user problems, the higher the quality
of your application will be.
Having a higher-quality application when you release to the general public
will increase customer satisfaction.
Non-Functional Testing
This type of testing is based on the non-functional attributes of an
application. Non-functional testing involves testing the software against
requirements which are non-functional in nature but equally important, such as
performance, security, and user interface. Some of the important and commonly
used non-functional testing types are as follows:
a. Performance Testing
Different causes can contribute to lowering the performance of software:
Network delay.
Client side processing.
Database transaction processing.
Load balancing between servers.
Data rendering.
Performance testing is considered one of the important and mandatory testing
types in terms of the following aspects:
Speed (i.e. Response Time, data rendering and accessing)
Capacity
Stability
Scalability
It can be either a qualitative or a quantitative testing activity and can be
divided into different sub-types such as load testing and stress testing.
b. Load Testing
Most of the time, Load testing is performed with the help of automated tools such as
Load Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk
Performer, Visual Studio Load Test etc. Virtual users (VUsers) are defined in the
automated testing tool and the script is executed to verify the Load testing for the
Software. The quantity of users can be increased or decreased concurrently or
incrementally based upon the requirements.
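The virtual-user (VUser) idea behind tools such as JMeter or LoadRunner can be sketched with threads; handle_request() is a hypothetical stand-in for a real request to the system under test:

```python
import threading
import time

def handle_request(user_id):
    """Stand-in for one request to the system under test."""
    time.sleep(0.01)  # simulated server processing time
    return f"user-{user_id}: OK"

results = []
lock = threading.Lock()

def virtual_user(user_id):
    """One VUser: issue a request and record its response time."""
    start = time.perf_counter()
    response = handle_request(user_id)
    elapsed = time.perf_counter() - start
    with lock:
        results.append((response, elapsed))

# Launch 20 concurrent virtual users; real tools ramp this up to thousands,
# concurrently or incrementally, per the load profile.
threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} requests completed")
```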
c. Stress Testing
This testing type includes the testing of software behavior under abnormal
conditions. Taking away resources or applying load beyond the actual load limit
constitutes stress testing. The main intent is to identify the breaking point of
the software by applying excessive load and taking over the resources it uses.
This testing can be performed by simulating different abnormal scenarios.
d. Usability Testing
Usability testing is a black-box technique used to identify errors and
improvements in the software by observing users during their usage and operation
of it. Usability can be defined in terms of five factors: efficiency of use,
learnability, memorability, errors/safety, and satisfaction. The usability of a
product is good, and the system is usable, if it possesses these factors.
e. Security Testing
Security testing involves testing the software in order to identify any flaws
and gaps from a security and vulnerability point of view. The main aspects that
security testing should ensure are:
Confidentiality.
Integrity.
Authentication.
Availability.
Authorization.
Non-repudiation.
Security testing should also check for vulnerabilities such as SQL injection
attacks, other injection flaws, and session management issues.
f. Portability Testing
Portability testing includes testing the software with the intent that it should
be reusable and can be moved to another platform as well.
Integration testing
– Focuses on inputs and outputs, and how well the components fit
together and work together
Validation testing
– The set of activities that ensure that the software that has been
built is traceable to customer requirements
System testing
VALIDATION TESTING
• Validation testing follows integration testing
– Ensures that the documentation is correct
• Alpha testing
• Beta testing
SYSTEM TESTING
• Recovery testing
• Security testing
• Stress testing
• Performance testing
ART OF DEBUGGING
Debugging Process
1.Brute Force
• Most commonly used and least efficient method
• Used when all else fails
• Involves the use of memory dumps, run-time traces, and output
statements
• Leads many times to wasted effort and time
2.Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom has been
uncovered
• The source code is then traced backward (manually) until the location
of the cause is found
• In large programs, the number of potential backward paths may
become unmanageably large
3.Cause Elimination
• Involves the use of induction or deduction and introduces the concept
of binary partitioning
– Induction (specific to general): Prove that a specific starting value
is true; then prove the general case is true
– Deduction (general to specific): Show that a specific conclusion
follows from a set of general premises
• Data related to the error occurrence are organized to isolate potential
causes
• A cause hypothesis is devised, and the aforementioned data are used to
prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests are
conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows
promise, data are refined in an attempt to isolate the bug
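The binary-partitioning idea behind cause elimination can be sketched as a search over candidate revisions, the approach popularized by tools like git bisect; the revision list and failure predicate here are hypothetical:

```python
def first_bad_revision(revisions, is_bad):
    """Return the index of the first revision for which is_bad() is True,
    assuming every revision after it is also bad (a monotonic failure)."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid       # the cause lies at mid or earlier
        else:
            lo = mid + 1   # the cause lies after mid
    return lo

# Hypothetical example: 100 revisions, bug introduced in revision 73.
revisions = list(range(1, 101))
print(first_bad_revision(revisions, lambda r: r >= 73))  # -> 72 (revision 73)
```

Each test halves the remaining search space, so roughly log2(n) tests isolate the cause instead of n.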
The verification and validation of critical systems involves additional
validation processes and analyses beyond those used for noncritical systems:
Validation costs
The validation costs for critical systems are usually significantly higher
than for noncritical systems.
Normally, V & V costs take up more than 50% of the total system
development costs.
Reliability validation
Statistical testing
Process Metrics
Private process metrics (e.g. defect rates by individual or module) are known
only to the individual or team concerned.
Public process metrics enable organizations to make strategic changes to
improve the software process.
Metrics should not be used to evaluate the performance of individuals.
Statistical software process improvement helps an organization to discover
where it is strong and where it is weak.
Project Metrics
A software team can use software project metrics to adapt project workflow
and technical activities.
Project metrics are used to avoid development schedule delays, to mitigate
potential risks, and to assess product quality on an on-going basis.
Every project should measure its inputs (resources), outputs (deliverables),
and results (effectiveness of deliverables).
Size-Oriented Metrics
Function-Oriented Metrics
• The relationship between lines of code and function points depends upon the
programming language that is used to implement the software and the quality
of the design
• Function points and LOC-based metrics have been found to be relatively
accurate predictors of software development effort and cost
• To use LOC and FP for estimation, a historical baseline of information must
be established.
Object-Oriented Metrics
• Factors assessing software quality come from three distinct points of view
(product operation, product revision, product modification).
• Software quality factors requiring measures include
o correctness (defects per KLOC)
o maintainability (mean time to change)
o integrity (threat and security)
o usability (easy to learn, easy to use, productivity increase, user
attitude)
• Defect removal efficiency (DRE) is a measure of the filtering ability of the
quality assurance and control activities as they are applied throughout the
process framework
DRE = E / (E + D)
where E = number of errors found before delivery of the work product, and
D = number of defects found after delivery
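As a quick illustration, the DRE formula can be computed directly; the counts below are illustrative, not from the text:

```python
def dre(errors_before_delivery, defects_after_delivery):
    """DRE = E / (E + D): the fraction of problems caught before delivery."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# Example: 90 errors found in review/testing, 10 defects reported by users.
print(dre(90, 10))  # -> 0.9
```

A DRE approaching 1.0 means the quality filter caught nearly everything before the product shipped.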
QUALITY MANAGEMENT
• Also called software quality assurance (SQA)
• Encompasses
Quality
– Quality of design
Quality Control
• Includes a feedback loop to the process that created the work product
• Is studied to
Types of Cost
• Prevention costs
• Appraisal costs
PROCESS IMPROVEMENT
• Understanding existing processes and introducing process changes to
improve product quality, reduce costs or accelerate schedules.
• Most process improvement work so far has focused on defect
reduction. This reflects the increasing attention paid by industry to
quality.
• However, other process attributes can also be the focus of
improvement
Process attributes
Process characteristic    Description
Understandability To what extent is the process explicitly defined and how easy
is it to understand the process definition?
Visibility Do the process activities culminate in clear results so that the
progress of the process is externally visible?
Supportability To what extent can CASE tools be used to support the process
activities?
Acceptability Is the defined process acceptable to and usable by the
engineers responsible for producing the software product?
Reliability Is the process designed in such a way that process errors are
avoided or trapped before they result in product errors?
Robustness Can the process continue in spite of unexpected problems?
Maintainability Can the process evolve to reflect changing organisational
requirements or identified process improvements?
Rapidity How fast can the process of delivering a system from a given
specification be completed?
Process improvement is a cyclic activity: Measure → Analyse → Change.
• Process measurement
• Process analysis
• Process change
RISK MANAGEMENT
• Software risks:
– What can go wrong?
– What is the likelihood?
– What will be the damage?
– What can be done about it?
• Risk analysis and management are a set of activities that help a
software team to understand and manage uncertainty about a
project.
Risk: Definition and Attributes
A Risk vs a Problem
Risk Management
Types of Risks
CONFIGURATION MANAGEMENT
A set of management disciplines within the software engineering process to
develop a baseline. Software Configuration Management encompasses the
disciplines and techniques of initiating, evaluating and controlling change to
software products during and after the software engineering process.
• SCM is a Project Function (as defined in the SPMP) with the goal to
make technical and managerial activities more effective.
• Software Configuration Management can be administered in several
ways:
• A single software configuration management team for the whole
organization
• A separate configuration management team for each project
• Software Configuration Management distributed among the project
members
• Mixture of all of the above
Promotion management
Release management
Change management
Branch management
Variant management
Configuration Manager
Developer
Auditor
Traditional cost models take software size as an input parameter, and then
apply a set of adjustment factors or 'cost drivers' to compute an estimate of
total effort. In object-oriented software production, use cases describe
functional requirements.
Estimation Techniques
Decomposition Technique
Empirical Estimation Models
Automated Estimation Tools
Decomposition Technique
Here we subdivide the problem into smaller problems. When all the smaller
problems are solved, the main problem is solved.
Lines of Code
Function Point
LOC (Lines of Code), FP(Function Point) estimation methods consider the size as the
measure. In LOC the cost is calculated based on the number of lines. In FP the cost is
calculated based on the number of various functions in the program.
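The FP computation can be sketched as follows, under standard assumptions: each function type is weighted (the "average" complexity weights are shown), and the unadjusted total is scaled by 14 complexity factors rated 0-5. The counts in the example are illustrative:

```python
# Standard "average" complexity weights for the five function types.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def function_points(counts, complexity_factors):
    """counts: function-type counts; complexity_factors: 14 ratings, each 0-5."""
    unadjusted = sum(AVERAGE_WEIGHTS[k] * n for k, n in counts.items())
    # Value adjustment factor ranges from 0.65 (all 0s) to 1.35 (all 5s).
    return unadjusted * (0.65 + 0.01 * sum(complexity_factors))

# Illustrative counts for a small application.
counts = {"external_inputs": 10, "external_outputs": 8, "external_inquiries": 5,
          "internal_files": 4, "external_interfaces": 2}
fp = function_points(counts, [3] * 14)  # all 14 factors rated "average" (3)
print(round(fp, 2))
```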
The basic COCOMO model computes software development effort (and cost) as a
function of program size expressed in estimated lines of code. The intermediate
COCOMO model computes software development effort as a function of program size
and a set of cost drivers that include hardware and personnel attributes. The
advanced COCOMO model incorporates all characteristics of the intermediate
version, with an assessment of the cost drivers' impact on each step (analysis,
design, coding, etc.) of the software engineering process.
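The basic model can be sketched directly: Effort = a * (KLOC)^b person-months, where a and b below are the standard basic-COCOMO coefficients for the three project modes; the 32 KLOC example is illustrative:

```python
# Standard basic-COCOMO (a, b) coefficients per project mode.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of `kloc` thousand lines."""
    a, b = COEFFICIENTS[mode]
    return a * (kloc ** b)

# Example: a 32 KLOC organic-mode project (roughly 91 person-months).
print(round(basic_cocomo_effort(32, "organic"), 1))
```

Note that the same size estimated in embedded mode yields a much larger effort, which is why classifying the project mode correctly matters.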