Software Engineering Mod4
MODULE-4
Software Reliability:
Software reliability is the probability that the software operates without failure for a
specified period of time in a specified environment. Hardware reliability is the probability
of the absence of any hardware-related system malfunction for a given mission; software
reliability, on the other hand, is the probability that the software will provide failure-free
operation in a fixed environment for a fixed interval of time.
Hardware Reliability:
Hardware reliability is the probability that the hardware performs its function without
failure for some period of time. It may change during certain periods, such as initial burn-in
or the end of useful life.
It is expressed as Mean Time Between Failures (MTBF).
Hardware faults are mostly physical faults.
Thorough testing of all components cuts down on the number of faults.
Hardware failures are mostly due to wear and tear.
It follows the bathtub-curve pattern of failure rate over time.
Software Reliability:
Software reliability is the probability that the software will operate failure-free for a
specific period of time in a specific environment. It is measured per some unit of time.
Software reliability starts with many faults in the system when it is first created.
After testing and debugging, the software enters a useful-life phase.
The useful-life phase includes upgrades made to the system, which introduce new faults.
The system then needs to be re-tested to reduce these faults.
Software reliability cannot be predicted from any physical basis, since it depends
completely on the human factors in design.
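The quantitative definition above can be illustrated with a small sketch. Assuming a constant failure rate (the exponential model, a common simplification; the rate value below is hypothetical), reliability over an interval t is R(t) = e^(-λt):

```python
import math

def reliability(failure_rate, t):
    """Probability of failure-free operation for a period t, assuming a
    constant failure rate (exponential model): R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t)

# Hypothetical example: 0.001 failures/hour over a 100-hour mission
r = reliability(0.001, 100)
print(round(r, 3))  # probability of surviving the mission failure-free
```

Note that real software rarely has a constant failure rate; growth models later in this module relax that assumption.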
The current methods of software reliability measurement can be divided into four categories:
Product Metrics:
Test coverage metrics estimate fault content and reliability by performing tests on software
products, assuming that software reliability is a function of the portion of the software that is
successfully verified or tested.
Project Management Metrics:
1. Schedule Performance Indicators (SPI): SPI measures the efficiency of project schedule
performance by comparing the actual progress of the project to the planned schedule. It is
calculated as the ratio of earned value (EV) to planned value (PV). An SPI value greater than 1
indicates that the project is ahead of schedule, while a value less than 1 indicates that the
project is behind schedule.
2. Cost Performance Indicators (CPI): CPI measures the efficiency of project cost performance by
comparing the actual costs incurred to the planned costs. It is calculated as the ratio of earned
value (EV) to actual cost (AC). A CPI value greater than 1 indicates that the project is under
budget, while a value less than 1 indicates that the project is over budget.
3. Schedule Variance (SV): SV measures the deviation of actual progress from the planned
schedule. It is calculated as the difference between earned value (EV) and planned value (PV). A
positive SV indicates that the project is ahead of schedule, while a negative SV indicates that the
project is behind schedule.
4. Cost Variance (CV): CV measures the deviation of actual costs from the planned costs. It is
calculated as the difference between earned value (EV) and actual cost (AC). A positive CV
indicates that the project is under budget, while a negative CV indicates that the project is over
budget.
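The four indicators above can be computed directly from the three earned-value inputs. A minimal sketch (the function name and the dollar figures are hypothetical):

```python
def evm_indicators(ev, pv, ac):
    """Earned-value indicators: ev = earned value, pv = planned value,
    ac = actual cost."""
    return {
        "SPI": ev / pv,  # > 1: ahead of schedule, < 1: behind schedule
        "CPI": ev / ac,  # > 1: under budget,      < 1: over budget
        "SV": ev - pv,   # positive: ahead of schedule
        "CV": ev - ac,   # positive: under budget
    }

# Hypothetical project: $40k of work earned, $50k planned, $45k spent
m = evm_indicators(40_000, 50_000, 45_000)
# SPI = 0.8 and SV = -10000: the project is behind schedule;
# CPI < 1 and CV = -5000: it is also over budget.
```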
Fault and Failure Metrics:
A fault is a defect in a program, introduced when the programmer makes an error, that
causes a failure when executed under particular conditions. These metrics are used to assess
the failure-free execution of the software. Here are some common fault and failure metrics.
1. Mean Time between Failures (MTBF): MTBF is the average time elapsed between
consecutive failures of a software system. It represents the expected time interval
between failures and is calculated by dividing the total operational time by the number
of failures observed. A higher MTBF value indicates higher reliability and longer intervals
between failures.
2. Mean Time to Failure (MTTF): MTTF is similar to MTBF but focuses on the time until a
failure occurs, excluding repair time. It represents the expected time until the software
system experiences its next failure and is estimated by averaging the observed failure-free
operating intervals. A higher MTTF value indicates higher reliability and longer failure-free
operation.
3. Fault Removal Efficiency (FRE): FRE measures the effectiveness of fault removal
activities in eliminating faults or defects from the software codebase. It is calculated as
the ratio of the number of faults removed during testing to the total number of faults
identified. A higher FRE value indicates higher effectiveness in identifying and removing
faults during testing.
4. Fault Detection Rate (FDR): FDR measures the rate at which faults or defects are
detected and identified during testing or maintenance activities. It is calculated as the
ratio of the number of faults detected to the total number of faults present in the
software codebase. A higher FDR value indicates higher efficiency in detecting and
addressing faults.
5. Fault Injection Rate (FIR): FIR measures the frequency at which faults or defects are
intentionally injected into the software system for testing or validation purposes. It
helps assess the robustness and resilience of the software system to various types of
faults and failures.
6. Mean Time to Repair (MTTR): MTTR measures the average time required to repair or
recover from a failure in the software system. It includes the time to detect, diagnose,
and fix the failure. A lower MTTR value indicates faster recovery and shorter downtime,
contributing to higher reliability and availability of the software system.
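Several of the metrics above are simple ratios and can be sketched in a few lines (all numbers below are hypothetical):

```python
def mtbf(total_operational_time, failures):
    """Mean Time Between Failures: average interval between consecutive failures."""
    return total_operational_time / failures

def mttr(repair_times):
    """Mean Time To Repair: average time to detect, diagnose, and fix a failure."""
    return sum(repair_times) / len(repair_times)

def fre(faults_removed, faults_identified):
    """Fault Removal Efficiency: fraction of identified faults actually removed."""
    return faults_removed / faults_identified

# Hypothetical data: 1,000 hours of operation with 4 failures,
# repair times of 2, 4, and 3 hours, and 45 of 50 identified faults removed
print(mtbf(1000, 4))    # average hours between failures
print(mttr([2, 4, 3]))  # average repair time in hours
print(fre(45, 50))      # fault removal efficiency
```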
Reliability Growth Modeling:
A reliability growth model is a mathematical model of software reliability that predicts how
software reliability should improve over time as errors are discovered and repaired. These
models help the manager decide how much effort should be devoted to testing. The
objective of the project manager is to test and debug the system until the required level of
reliability is reached.
Characteristics of JM Model:
2. It is the earliest and one of the most well-known black-box models.
4. The JM Model assumes perfect debugging, i.e., each detected fault is removed with
certainty, which keeps the model simple.
The Musa-Okumoto model and the Littlewood-Verrall model are both used in software reliability
engineering to predict and evaluate software reliability, but they approach the problem from
different perspectives.
The Musa-Okumoto model, developed by J. Musa and K. Okumoto, is a widely used software
reliability growth model. It is based on the assumption that software failures occur due to the
presence of latent defects in the software. The model predicts the number of failures that will
occur during testing or operation based on the number of remaining defects and the
effectiveness of the testing process.
The Littlewood-Verrall model is based on the assumption that the rate at which faults are
detected during testing follows a non-homogeneous Poisson process (NHPP). This means that
the rate of fault detection changes over time as the testing process evolves. The model takes
into account factors such as fault introduction rate, fault detection rate, and fault removal rate.
In this model, the number of faults remaining in the software system at any given time can be
estimated using mathematical techniques based on the observed fault detection rate and other
relevant parameters. By analyzing the historical fault detection data, the model can provide
insights into the reliability of the software and help project managers make decisions about
resource allocation and testing strategies.
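The Musa-Okumoto predictions can be sketched via the model's mean-value function, mu(t) = (1/theta) * ln(lambda0 * theta * t + 1), where lambda0 is the initial failure intensity and theta controls how quickly the intensity decays (the parameter values below are hypothetical):

```python
import math

def expected_failures(t, lam0, theta):
    """Musa-Okumoto mean-value function: expected cumulative number of
    failures observed by execution time t."""
    return math.log(lam0 * theta * t + 1) / theta

def failure_intensity(t, lam0, theta):
    """Failure intensity lambda(t) = lam0 / (lam0*theta*t + 1): it decreases
    as testing proceeds and faults are repaired."""
    return lam0 / (lam0 * theta * t + 1)

# Hypothetical parameters: lam0 = 20 failures/hour initially, theta = 0.025
for t in (0, 10, 100):
    print(t, round(expected_failures(t, 20, 0.025), 1))
```

The logarithmic shape captures the intuition stated above: early testing finds failures quickly, and the rate slows as the remaining faults become harder to trigger.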
Software Quality:
Quality of a product is defined in terms of its fitness for purpose. For software products,
fitness for purpose is usually interpreted in terms of satisfaction of the requirements laid down
in the SRS document.
Quality Factors:
2. Usability: A software product has good usability if different categories of users can easily
invoke the functions of the product.
3. Reusability: A software product has good reusability if different modules of the product can
easily be reused to develop new products.
b) Quality System Activities: The quality system activities encompass the following:
a. Auditing of projects
ISO 9000 certification serves as a reference for contract between independent parties and
also specifies the guidelines for maintaining a quality system.
ISO 9001: This standard applies to organizations engaged in the design, development,
production, and servicing of goods, and is applicable to most software organizations.
ISO 9002: This standard is applicable to organizations that do not design products but are only
involved in production.
ISO 9003: This standard applies to organizations involved in the installation and testing of
products.
Quality SEI CMM:
The Software Engineering Institute (SEI) Capability Maturity Model (CMM) specifies an increasing series
of levels of a software development organization. The higher the level, the better the software
development process, hence reaching each level is an expensive and time-consuming process.
Level One: Initial - The software process is characterized as inconsistent, and occasionally even
chaotic. Defined processes and standard practices that exist are abandoned during a crisis.
Success of the organization majorly depends on an individual effort, talent, and heroics. The
heroes eventually move on to other organizations taking their wealth of knowledge or lessons
learnt with them.
Level Two: Repeatable - At this level, a software development organization has basic and
consistent project management processes to track cost, schedule, and functionality. Processes
are in place to repeat earlier successes on projects with similar applications. Program
management is a key characteristic of a level-two organization.
Level Three: Defined - The software process for both management and engineering activities
are documented, standardized, and integrated into a standard software process for the entire
organization and all projects across the organization use an approved, tailored version of the
organization's standard software process for developing, testing and maintaining the
application.
Level Four: Managed - Management can effectively control the software development effort
using precise measurements. At this level, organizations set quantitative quality goals for both
the software process and software maintenance. At this maturity level, the performance of
processes is controlled using statistical and other quantitative techniques, and is quantitatively
predictable.
Level Five: Optimizing - The key characteristic of this level is a focus on continually
improving process performance through both incremental and innovative technological
improvements. At this level, changes are made to improve process performance while
maintaining statistical process stability, in order to achieve the established quantitative
process-improvement objectives.
Software maintenance:
Software maintenance refers to the process of modifying and updating software after it
has been deployed to fix defects, improve performance, enhance functionality, adapt to
changes in the operating environment, and meet new user requirements. It's an
essential phase in the software development lifecycle (SDLC) that ensures the long-term
viability and usefulness of software systems.
1. Corrective Maintenance: This involves fixing errors and bugs in the software system.
Software Re-engineering:
Software Re-engineering is a process of software development that is done to
improve the maintainability of a software system. Re-engineering is the
examination and alteration of a system to reconstitute it in a new form. This
process encompasses a combination of sub-processes such as reverse engineering,
forward engineering, restructuring, etc.
The process of software re-engineering involves the following steps:
1. Planning: The first step is to plan the re-engineering process, which involves
identifying the reasons for re-engineering, defining the scope, and
establishing the goals and objectives of the process.
2. Analysis: The next step is to analyze the existing system, including the code,
documentation, and other artifacts. This involves identifying the system’s
strengths and weaknesses, as well as any issues that need to be addressed.
3. Design: Based on the analysis, the next step is to design the new or updated
software system. This involves identifying the changes that need to be
made and developing a plan to implement them.
4. Implementation: Once the design is complete, the planned changes are made to
the software system.
5. Testing: Once the changes have been implemented, the software system
needs to be tested to ensure that it meets the new requirements and
specifications.
Advantages of Re-engineering:
1. Reduced Risk: As the software already exists, the risk is lower than in new software
development, where development problems, staffing problems, and specification
problems may all arise.
2. Reduced Cost: The cost of re-engineering is less than the costs of developing new
software.
3. Revelation of Business Rules: As a system is re-engineered, business rules that are
embedded in the system are rediscovered.
4. Better use of Existing Staff: Existing staff expertise can be maintained and extended to
accommodate new skills during re-engineering.
5. Improved efficiency: By analyzing and redesigning processes, re-engineering can lead to
significant improvements in productivity, speed, and cost-effectiveness.
6. Increased flexibility: Re-engineering can make systems more adaptable to changing
business needs and market conditions.
7. Better customer service: By redesigning processes to focus on customer needs, re-
engineering can lead to improved customer satisfaction and loyalty.
8. Increased competitiveness: Re-engineering can help organizations become more
competitive by improving efficiency, flexibility, and customer service.
9. Improved quality: Re-engineering can lead to better quality products and services by
identifying and eliminating defects and inefficiencies in processes.
10. Increased innovation: Re-engineering can lead to new and innovative ways of doing
things, helping organizations to stay ahead of their competitors.
11. Improved compliance: Re-engineering can help organizations to comply with industry
standards and regulations by identifying and addressing areas of non-compliance.
Disadvantages of Re-engineering:
1. High costs: Re-engineering can be a costly process, requiring significant investments in time,
resources, and technology.
2. Disruption to business operations: Re-engineering can disrupt normal business operations and
cause inconvenience to customers, employees and other stakeholders.
3. Resistance to change: Re-engineering can encounter resistance from employees who may be
resistant to change and uncomfortable with new processes and technologies.
4. Risk of failure: Re-engineering projects can fail if they are not planned and executed properly,
resulting in wasted resources and lost opportunities.
5. Lack of employee involvement: Re-engineering projects that are not properly communicated
and do not involve employees may lead to a lack of employee engagement and ownership,
resulting in failure of the project.
Software Reuse:
Software reuse in software engineering refers to the practice of utilizing existing software
components, modules, or systems to build new software applications. Instead of developing
software from scratch, developers leverage reusable assets to accelerate development,
improve quality, reduce costs, and enhance productivity. Software reuse encompasses various
forms and levels of reuse, including:
1. Code Reuse: Reusing code involves incorporating existing source code, libraries, or
modules into new software projects. This can range from simple code snippets and
functions to entire libraries or frameworks. Code reuse helps in avoiding redundant
development efforts, reducing errors, and maintaining consistency across projects.
4. Design Patterns: Design patterns are reusable solutions to common software design
problems. They encapsulate best practices and proven solutions for designing software
systems in a reusable and maintainable way. By applying design patterns, developers
can leverage established solutions to recurring design challenges, reducing the need for
reinventing the wheel.
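As an illustration of pattern-based reuse, here is a minimal sketch of the Strategy pattern (the class and variable names are hypothetical): the same context class is reused unchanged with interchangeable algorithms.

```python
import bz2
import zlib

class Compressor:
    """Context that delegates to an interchangeable compression strategy,
    so the calling code is reused unchanged across algorithms."""
    def __init__(self, strategy):
        self.strategy = strategy  # any callable taking bytes, returning bytes

    def compress(self, data: bytes) -> bytes:
        return self.strategy(data)

# The same Compressor class is reused with two different strategies
zlib_compressor = Compressor(zlib.compress)
bz2_compressor = Compressor(bz2.compress)

payload = b"reuse " * 1000
print(len(zlib_compressor.compress(payload)) < len(payload))
```

The design choice here is that the reusable part (Compressor) knows nothing about any specific algorithm, which is exactly what makes it reusable.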
Client-Server Architecture:
1. Client: The client is the part of the application that interacts directly with the end-user.
It typically runs on the user's device, such as a computer, smart phone, or tablet. The
client's primary responsibility is to provide a user interface through which users can
interact with the application and initiate requests for services or resources from the
server.
2. Server: The server is the part of the application that provides services, resources, or data
to clients upon request. It typically runs on a remote computer or server, accessible over
a network such as the internet. The server's primary responsibility is to handle client
requests, process them, and return results or data back to the clients. Servers can
provide various services, including data storage, computation, application logic,
messaging, and authentication.
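The request/response interaction between client and server can be sketched with a minimal TCP service using Python's standard socket module (a hedged illustration; the upper-casing "service" is hypothetical):

```python
import socket
import threading

def serve_one(server_sock):
    """Server role: accept one request and return a response (upper-cased echo)."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())

# Server: bind to an ephemeral localhost port and wait for a client
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one, args=(server,), daemon=True).start()

# Client: connect, send a request, and read the server's response
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello server")
    reply = client.recv(1024)
print(reply)
```

Real servers handle many clients concurrently and add protocol framing; this sketch only shows the roles described above.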
Service
Services are the basic building blocks of SOA. They can be private (available only to internal
users of an organization) or public (accessible over the internet to all). Individually, each
service has three main features.
Service implementation
The service implementation is the code that builds the logic for performing the specific service
function, such as user authentication or bill calculation.
Service contract
The service contract defines the nature of the service and its associated terms and conditions,
such as the prerequisites for using the service, service cost, and quality of service provided.
Service interface
In SOA, other services or systems communicate with a service through its service interface. The
interface defines how you can invoke the service to perform activities or exchange data. It
reduces dependencies between services and the service requester. For example, even users
with little or no understanding of the underlying code logic can use a service through its
interface.
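The separation between service interface and service implementation can be sketched with an abstract base class (the BillingService name, its method, and the rate are hypothetical; the bill-calculation example echoes the one mentioned above):

```python
from abc import ABC, abstractmethod

class BillingService(ABC):
    """Service interface: callers depend only on this contract,
    not on the code behind it."""
    @abstractmethod
    def calculate_bill(self, usage_units: int) -> float:
        ...

class FlatRateBilling(BillingService):
    """One possible service implementation fulfilling the same contract."""
    RATE = 0.10  # hypothetical price per unit

    def calculate_bill(self, usage_units: int) -> float:
        return usage_units * self.RATE

# A service requester uses the service only through its interface
service: BillingService = FlatRateBilling()
total = service.calculate_bill(250)
print(total)
```

Because the requester is typed against BillingService, the implementation can be swapped without changing the calling code, which is the dependency reduction the text describes.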
Advantages of SOA:
Service reusability: In SOA, applications are made from existing services. Thus, services
can be reused to make many applications.
Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
Availability: SOA services are easily available to anyone on request.
Reliability: SOA applications are more reliable because it is easier to debug small services
than huge codebases.
Scalability: Services can run on different servers within an environment; this increases
scalability.
Disadvantages of SOA:
High overhead: Validation of the input parameters of services is performed whenever services
interact, which decreases performance by increasing load and response time.
High investment: A huge initial investment is required for SOA.
Complex service management: When services interact, they exchange messages to perform
tasks. The number of messages may run into millions, and handling such a large number of
messages becomes a cumbersome task.
Software as a Service:
Software as a Service (SaaS) is a delivery model for software where, instead of purchasing and
installing software on individual computers or servers, users access the software via the
internet, usually through a web browser. In the context of software engineering, SaaS shapes
how such applications are developed, deployed, and maintained.
4. Maintenance and Updates: SaaS providers are responsible for maintaining and updating
the software, including security patches, bug fixes, and feature enhancements. This
relieves customers from the burden of managing infrastructure and allows them to
focus on using the software to achieve their business objectives.
5. Integration: SaaS applications often need to integrate with other systems and services,
such as third-party APIs, databases, or internal systems within an organization. Software
engineers design and implement these integrations to ensure seamless communication
and data flow between different components.