Reliability Metrics For Software Quality
Maria Usman¹, Nouman Ali¹, Usman Habib², Saadat Hanif Dar¹, Tehmina Khalil¹
1. Introduction
Software reliability is defined as "the possibility of a given system executing its task effectively for a quantified amount of time under its proposed operating environment" [1], or as the probability that the software will provide error-free operation for a given amount of time under suitable conditions [2]. We calculate reliability as [3]: Reliability = 1 − E/L, where E represents the number of actual errors detected and L represents the total number of executable lines of code.
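As a minimal sketch, this formula can be computed directly (the function and variable names below are ours, not from [3]):

```python
def reliability(errors_found, executable_loc):
    """Reliability = 1 - E / L, where E is the number of actual errors
    found and L is the number of executable lines of code [3]."""
    if executable_loc <= 0:
        raise ValueError("executable LOC must be positive")
    return 1 - errors_found / executable_loc

# Example: 25 errors found in 10,000 executable lines of code.
print(reliability(25, 10_000))  # 0.9975
```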
Initially, software reliability was measured by counting the number of faults in the system. The possibility of failure, meaning a failure of the software, is independent of time. It is also distinct from program correctness. A program is correct, consistent, and reliable with respect to its specified environment when it satisfies its specification and responds to a specific event within a particular time interval. A program is inconsistent or unreliable when it does not satisfy its system specification, does not respond to an event within the given time interval, and cannot recover from errors.
In software development, reliability is measured through correctness: the number of faults detected in the system and then fixed. Through reliability we are better able to develop a robust, high-quality product, and software reliability has become one of the most crucial aspects of software quality in industry.
Quality of software is the most significant aspect of software reliability. Reliability means providing software that meets the customer's specification and satisfies the customer. Software reliability work is divided into three categories. The first is modelling: it provides a foundation for software reliability, gives knowledge about the different reliability models, and tells us which model is most appropriate for a specific condition or environment. The second is measurement: it provides reliability metrics for the purpose of validation. The third and last is improvement: if a fault is detected in the overall process, it should be removed. Some factors are required to measure the improvement of reliability because it cannot be measured directly. Various models are used for software reliability, and none is suitable for all situations; although some functionality may be shared among models, each is built to handle a specific condition. A key characteristic of software reliability work is that a model is first created, then reviewed, and then validated. Software reliability is mandatory and hard to achieve, but it is the most important factor in the success of a product. It can be improved through a better understanding of the reliability characteristics of the software. Testing ultimately improves the reliability of a product, but complete testing of a product is not possible [4]. The question arises: why do we need measurements for software reliability? The answer is to make our product defect free. Although every product has problems and is never completely bug free, a product is considered defect free when its important functionality works correctly.
In this paper we discuss different reliability models, reliability metrics, software reliability metrics, and different views of reliability metrics, and finally we compare reliability models for Component Based Systems with reliability models for soft computing techniques. We conclude with results based on the validation procedures these models follow.
Component Based Systems mainly depend upon the interaction among the different components of the system and upon the dependencies among them. As the dependency level increases, the system becomes more complex. The estimation of these systems is mainly based on reliability, and it is a rather lengthy task. The different types of CBS are discussed in Section 7. Soft computing techniques mainly focus on computing concepts related to real life; their main aim is to overcome real-life problems. Soft computing is an association of different computing approaches such as Neural Networks (NN) and Fuzzy Logic (FL). Our findings reveal that soft computing techniques are more suitable for complex systems, and also for critical systems where human factors are involved and validation is mandatory. Component Based Systems, which work through the interaction of elements, are more suitable for case studies and experimental results.
The paper is structured as follows. We discuss reliability models in Section 2 and reliability metrics in Section 3. Software metrics for reliability are presented in Section 4, and measurement factors for reliability in Section 5. Section 6 focuses on the comparison of CBS and soft computing techniques, Section 7 presents the results and discussion, and Section 8 concludes.
2. RELIABILITY MODELS
Reliability models assume that failures in the software are independent of each other. They characterize failure instances as a random process and study software failures over time. There are three main classes of reliability model: usage models, trend models, and probabilistic failure models. The models discussed here derive from these three classes; we discuss the reliability models that are in use today. The working of a reliability model is shown in Figure 3 [5].
Li Xiao-hua et al. [6] gave a more practical reliability model based on software components and architecture. It is based on a semi-Markov chain and gives a quantitative evaluation of digital protection software. The software modules and the material architecture are related to the reliability model and then converted into a semi-Markov process based on the software architecture modules. It is used on complex systems to calculate paratactic reliability, study the sensitivity of reliability factors, and obtain higher-quality software.
A usage model describes how the system is used. Harb et al. [7] gave a new way to calculate the Mean Time Between Failures (MTBF) for the photovoltaic module-integrated inverter (PV-MII); it is applied to a usage model of reliability to find the statistical distribution of the electrical components. The module-integrated electronics and the volatility of the operating environment are used in the calculation of the MTBF of the MII, giving a realistic assessment of reliability. The results show that the electrolytic capacitor has a low MTBF, and a realistic overall MTBF is obtained.
Ullah et al. [8] provided a systematic approach that extends the validation of reliability models. It evaluates eight reliability models and observes which is the best by determining the number of faults left, providing practical support to managers. The main aim is to support practitioners, characterize the reliability of OSS models in terms of the defects remaining in the system, help achieve larger goals, and help managers make decisions on OSS use. Experiments were done on four models: Generalized Goel, Delayed S-shaped, Gompertz, and Logistic. The results show that the Delayed S-shaped model gives better results than the others.
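The Delayed S-shaped model mentioned above has a standard mean value function, m(t) = a(1 − (1 + bt)e^(−bt)). The sketch below uses hypothetical parameter values; in practice a and b are fitted from failure data, so these numbers are illustrative only:

```python
import math

def delayed_s_shaped(a, b, t):
    """Expected cumulative number of faults detected by time t under the
    Delayed S-shaped SRGM: m(t) = a * (1 - (1 + b*t) * exp(-b*t)).
    a: total expected faults; b: fault detection rate per unit time."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

# Illustrative run: 100 total faults, detection rate 0.05 per day.
for t in (10, 50, 200):
    print(t, round(delayed_s_shaped(100, 0.05, t), 1))
```

As t grows, m(t) approaches a, the total expected fault content, which is how the model estimates the defects remaining in the system.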
Mohit et al. [9] gave an improved version of the reliability tool CASRE (Computer Aided Software Reliability Estimation), named the Improved Software Reliability Estimation Tool (ISRET). The results obtained by ISRET are better than those of CASRE; another advantage of ISRET is that it works with any version of Windows.
Mirjafari et al. [10] presented a usage model for photovoltaic (PV) converters to find their efficiency. The maximum power point of current and voltage is recorded from the actual values of the solar module, and the weighted efficiency is then calculated. It gives better results when comparing performance against the CEC method.
A. Barabadi et al. [11] presented an effective reliability model for multi-release OSS based on the Jørgensen model and the SDLC (Software Development Life Cycle). It builds on the unique characteristics of the OSS model and removes bugs through parallel debugging test phases and pre-commit tests. It depends on the total number of faults removed in the new release and the number of faults reported in the previous release. The parameters of this model were fitted on three releases of the Apache project. A comparison was made among three models to select the most suitable reliability model, and a change-point concept was also proposed that captures faults with a minimum error percentage.
Noorossana et al. [12] introduced a reliability model for risky systems under random usage. Haphazard usage, random shocks, and degradation processes are the main reasons for system failure. They proposed a reliability model that maximizes system availability, called the condition-based maintenance model. Maintenance models are considered and their policies compared with numerical examples. System monitoring sensors and maintenance actions are not always perfect, which is why the maintenance and reliability models reconsider this issue and restart the process.
Yuanjie Si et al. [13] gave an architecture-based, bottom-up reliability estimation framework for component based software systems. This approach takes the utilization frequency and composition mechanism of each component and accommodates different architectural styles.
A trend model describes how bugs are being fixed. Romney B. et al. [14] focus on mathematical models from learning theory and search for evidence of learning trends during the testing phase. Learning models such as Stevens's equation are applied to datasets from the historical literature. Where the historical literature cannot provide evidence for learning, additional faults are included in the system. Results show that learning models can predict the remaining failures in the software better than traditionally proposed techniques, which are based on the nonhomogeneous Poisson process.
Sharma et al. [15] gave a framework that evaluates reliability from probe-sourced traffic speed data for the assessment of general infrastructure performance and congestion detection. To find similarities and dissimilarities in the probe-sourced data, it uses time series analysis and pattern recognition. Reliability is estimated based on the synchrony and similarity between corresponding trends: synchrony over the short and medium term measures the accuracy of congestion detection, while the long term measures performance assessment. It supports state DOTs.
A probabilistic failure model captures the fact that failures may happen. Liu et al. [16] presented a more realistic probabilistic region failure (PRF) model that captures important features correlating the geographical region of failure, and then developed a framework that applies the PRF model to the assessment of wireless mesh network reliability. It selects an appropriate routing and protection strategy for region failure, identifies from the PRF the geographical distribution of the worst-case degradation of network performance, and thereby achieves a failure-resilient network design. It is cost efficient.
Lorenzo et al. [17] extended fault tolerance models and presented a technique that covers non-independent development in the process of producing diverse versions. It opens questions and framing claims about how to pursue diversity, and about the positive and negative effects of commonalities between developments, from the correction of the specification to test case selection. It asks how to improve the reliability of the expected system by intentionally creating negative dependencies in the system and relating the developments of different iterations. This gives better results when extending the preference and removing the commonalities between the different versions of the product, and it helps identify damage and uncertain effects.
Bing Zhu et al. [18] presented a distributed storage system that provides data redundancy, guaranteed reliability, and fault tolerance. The conventional replication scheme (CRS) is not efficient; emerging storage coding techniques such as fractional repetition codes (FRC) are more efficient and provide better redundancy than CRS. FRC is applied to storage systems where nodes have different storage capacities. It is evaluated using a node repair metric, which measures subsets of different nodes and then repairs the failed node.
Fan et al. [19] presented time series models that examine construction, reliability analysis, and failure forecasting. They are mainly concerned with the expected number of failures per interval and the Mean Time Between Failures. Experimental results show that this model is suitable for selecting a system reliability assessment, and that the right maintenance strategy is the most important tool for decision makers. The model is flexible and is also used for FRMCE forecasting of reliability metrics of construction equipment. The significance of this work is validated from the literature review.
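As a toy illustration of forecasting the number of failures per interval, simple exponential smoothing can stand in for the time series models discussed above (this is not the actual model from [19]; the smoothing factor and failure counts are hypothetical):

```python
def exp_smooth_forecast(failures_per_interval, alpha=0.3):
    """One-step-ahead forecast of failures per interval using simple
    exponential smoothing; alpha is a hypothetical smoothing factor."""
    forecast = failures_per_interval[0]
    for observed in failures_per_interval[1:]:
        # Blend the newest observation with the running forecast.
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

history = [9, 7, 8, 6, 5, 5, 4]  # hypothetical monthly failure counts
print(round(exp_smooth_forecast(history), 2))
```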
Faqih et al. [20] investigated the issues and critical factors that may lead to the poor performance of reliability models. Software reliability models are often unable to deliver the expected outcome; the reason behind this performance inefficiency is that software reliability models have 14 major flaws in their attributes, which the study explores. The study suggests that contemporary methodologies are immature in their application of the reliability concept.
Z. Zeng et al. [21] proposed a belief reliability model to evaluate the effectiveness of reliability models with respect to epistemic uncertainty and to assess the reliability of engineering activities. Epistemic uncertainty was not properly addressed in traditional models. The results of the proposed model show that epistemic uncertainty is a major problem with diverse effects in the real world.
Vizarreta et al. [22] proposed software reliability maturity metrics to identify failures earlier in a software reliability model using real time data. Moreover, a new model regularization technique was proposed to adopt the reliability model earlier, before the project is deployed or accessed. Results show that these techniques help the developer decide which reliability model and software controller is more mature, which in turn helps in deploying and releasing the project.
Kishani et al. [23] analyzed soft errors in data storage systems (cache memory). New metrics were proposed to identify soft errors in the server and their effect on entire-system dependability, mainly in the cache system. The experiments show that up to 40% of user data saved in the system's cache memory is lost due to unrecoverable soft errors. This technique helps identify soft errors such as silent data corruption and assess the recoverability of cache memory.
3. RELIABILITY METRICS
Reliability is the probability of the software providing an operation or process free of errors in a specified environment for a specified interval of time [2]. A reliability metric should be able to measure the characteristics and behavior of the system; checking the software is part of execution testing. It indicates whether the software or system is reliable and working under its specified operation (ISO-IEC 2000) (ISO-IEC 2000b). Reliability metrics are further divided into Maturity Metrics (MM), Fault Tolerance Metrics (FTM), Recoverability Metrics (RM), and Reliability Compliance Metrics (RCM). Figure 2 shows the reliability metrics.
[Figure 2. Reliability metrics: Reliability Compliance Metrics (RCM), Maturity Metrics (MM), Recoverability Metrics (RM), and Fault Tolerance Metrics (FTM).]
4. SOFTWARE METRICS
Software metrics provide a standard measurement of the degree to which a software system possesses some property of interest. Metrics are not used merely for measurement; they provide functions over the values obtained from the application.
In the Mean Time Between Failures (MTBF) calculation, "start of down time" means the start date of the last failure and "startup time" means the start date of the first failure.
The Mean Time To Repair (MTTR) is the time required to repair a system:

MTTR = Total Maintenance Time / Number of Repairs
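The MTTR formula is a direct division; a minimal sketch (the function and argument names are ours):

```python
def mttr(total_maintenance_time_hours, number_of_repairs):
    """MTTR = Total Maintenance Time / Number of Repairs."""
    if number_of_repairs == 0:
        raise ValueError("no repairs recorded")
    return total_maintenance_time_hours / number_of_repairs

# Example: 12 hours of maintenance spread over 4 repairs.
print(mttr(12.0, 4))  # 3.0
```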
Reliability (R) is directly proportional to software quality (Q):

R ∝ Q, i.e., y = kx

Reliability is inversely proportional to complexity (C):

R ∝ 1/C, i.e., y = k/x
The major distinction between software and other engineering artifacts is that software is wholly a matter of design and implementation. A software failure occurs when the system is unable to respond to a particular task; it is due to faults in the design, which arise from human errors. Hardware unreliability arises in design and manufacturing and is unintentional, whereas improving software reliability requires intentional changes to the code or program.
Reliability is important for identifying the weaknesses of the system and identifying which component is most critical to system failure [37].
Software reliability optimization is the process through which a program emphasizes fault detection, fault removal, software error prevention, and the use of reliability measures to increase reliability within schedule, performance, and resource constraints. The importance IRᵢ of component i in a system of n components is given as follows:

IRᵢ = ∂Rs / ∂Ri

where Rs denotes the system reliability and Ri the reliability of component i. The reliability of the system [35] is calculated by the formula Reliability = e^(−168 / MTBF), where MTBF is the mean time between failures in hours. The probability associated with repair time is given by [38] ∫₀ᵗ R(t − u) m(u) du [39].
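Assuming, as the formula from [35] suggests, an exponential failure model with a one-week (168-hour) mission time, the reliability can be computed as a minimal sketch (function name and the example MTBF are ours):

```python
import math

def weekly_reliability(mtbf_hours):
    """R = exp(-168 / MTBF): probability of error-free operation over one
    week (168 h), assuming exponentially distributed time between failures."""
    if mtbf_hours <= 0:
        raise ValueError("MTBF must be positive")
    return math.exp(-168.0 / mtbf_hours)

# A system with a 10,000-hour MTBF survives a week with ~98% probability.
print(round(weekly_reliability(10_000), 4))
```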
Type of Models:
Three main classes of reliability model underlie the framework analysis approach. Additive Models evaluate the reliability of the system from component failure data; consideration of the software architecture is not explicit. State Based Models consider the control flow between modules, where modules fail independently; they assume Markov behavior and transfer between modules, and are expressed by two methods, the composite model and the hierarchical model. Failure in this class depends on a constant failure rate, failure probability or reliability, or a time dependent failure rate; the limitation of this class is that the probability is not constant. Path Based Models consider the conceivable execution paths of the application; testing is done based on algorithms and experiments, and their limitation is that the architecture of an application provides an infinite number of paths in the presence of a loop [40].
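The core idea of the path based class can be sketched as follows. The component reliabilities and path usage probabilities below are hypothetical; the approaches in [40] derive them from the architecture and the operational profile:

```python
def path_reliability(component_reliabilities):
    """Path-based model: the reliability of one execution path is the
    product of the reliabilities of the components it visits."""
    r = 1.0
    for c in component_reliabilities:
        r *= c
    return r

def system_reliability(paths):
    """System reliability as the usage-weighted sum over execution paths.
    `paths` maps a path (tuple of component reliabilities) to its usage
    probability; the probabilities are assumed to sum to 1."""
    return sum(prob * path_reliability(comps) for comps, prob in paths.items())

# Hypothetical system with two execution paths.
paths = {
    (0.99, 0.98, 0.97): 0.7,   # common path, taken in 70% of runs
    (0.99, 0.95): 0.3,         # error-handling path, taken in 30% of runs
}
print(round(system_reliability(paths), 4))
```

Loops are what break this scheme: each extra iteration adds a longer path, so the path set becomes infinite, which is exactly the limitation noted above.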
Scope
The scope is the domain of software applications used to check the applicability of a proposed approach. The reviewed approaches are: Reuse Oriented Systems, used for refining programs, producing sequences of prototypes, and defining rules; Service Oriented Architecture, a collection of web services, where communication is done through services that pass data and then perform an activity; Complex Component Based Systems, used for large-scale Component Based Systems which are complex; and Component Based Systems (CBS), based on the interaction among components [40].
Techniques
These models are estimated using algebraic methods, algorithms, mathematical formulas, and Component Dependency Graphs (CDG) [40].
Table 2. Comparison of Component Based System reliability approaches

| Approach | Domain | Model | Technique / Formulation | Validation | Remarks |
| Sherif Yacoub's Scenario Based Reliability Analysis Approach (SBRA) [50] | COTS | PBM | Scenario-based algorithm and extended mathematical formulation | Waiting queue simulator | Good for distributed systems |
| Fan Zhang's novel model for CBS reliability analysis [51] | COTS | PBM based | Sub-domain method, Tij · (Rij × Wij) | Hypothetical validation | Analyzes the reliability of CBS and proposes an algorithm |
| Ning Huang's Algebra-Based Reliability Prediction Approach [52] | CBSS and composite web services | Architecture and algebra, operational profile based approach | Algebraic method | Hypothetical example | |
| WANG Dong's Reliability Analysis of CBS Based on Relationships of Components [53] | COTS, Component Based Software Systems (CBSS) and SOA | PBM | Markov model, mathematical formulas | No validation is given | Markov model is extended |
| Vivek Goswami's method for reliability estimation [54] | COTS and CBSS | Component usage ratio based | Operational profile estimation of software reliability, mathematical formulas | Experimental results | Promising |
| Yuanjie Si's Reliability Estimation Framework through Component Composition Mechanisms [55] | CBSS | PBM | Estimates the reliability of five basic composition mechanisms, mathematical formulas | Case study | Estimates reliability based on the component |
| Chao-Jung Hsu's Adaptive Reliability Analysis Using Path Testing [56] | Complex CBSS | PBM | Adaptive approach, algorithmic approach | Experimental results | Sensitivity analysis has been used |
| Integrated belief reliability framework | | Belief reliability, new metric proposed | Bayesian theory | Case study | Identifies the effect of epistemic uncertainty in the reliability model |
Table 3. Soft computing techniques

| Technology Used | Results | Reference |
| Neural Networks (NN) | Explores a feed-forward NN for reliability growth prediction | [57] |
| Neural Networks | The network can predict cumulative faults and can be used in future testing | [58] |
| Neural Networks | NNs are not much simpler, but they are equal to the recalibration method | [59] |
| Neural Networks | The neural network should be considered an effective modeling tool | [60] |
| Neural Networks | Compares two models; results show the Jordan model is better than the feed-forward model | [61] |
| Neural Networks | Proposes a system that obtains remarkably low prediction errors | [62] |
| Neural Networks | Provides models for smaller normalized root | [63] |
| Neural Networks | Describes Neural Networks as a robust technique | [64] |
| Neural Networks | Introduces a dynamic weighted combinational model | [65] |
| Neural Networks | Introduces two encoding schemes | [66] |
[Figure 3. Bar chart comparing the techniques in terms of datasets, test cases, design, experimental results, examples, process visibility, complexity models, and facts.]
The comparison of the different techniques is given in Table 4; the comparison is useful because the results are placed on one platform.
Table 4.Comparison of Soft Computing Techniques and Component Based System in terms of Modeling
Capabilities
Figure 4
8. CONCLUSION
In this paper, we examined different soft computing techniques and component based techniques and considered criteria, based on our assumptions about the available approaches, that are used in the improvement of reliability. Most of the techniques use mathematical calculations, formulas, and operational profiles. With the rapid improvement in the software industry, the importance of reliability metrics has also grown fast. Reliability metrics have become an essential part of software management for accomplishing software development. It is estimated that by using reliability metrics we are better able to develop software, and that the overall progress of software maintainability, productivity, and quality will thereby improve.
People appreciate the importance of reliability metrics; the field will mature and will help in various large projects where people's lives are in danger.
Looking at the increasing demand for reliability metrics in implementation and in successful case studies for the betterment of software quality, it is fair to say that in the coming years the significance of software reliability metrics will increase and will affect the industry. Developers and industry leaders will be better able to improve the delivered product and will achieve better software quality. A number of reliability metrics are anticipated for software that measure software quality before implementation. In future research, we are looking for enhancements to existing reliability metrics based on the magnitude and nature of the problem being addressed. There is good scope for software tools that can better manage software project development: they can reduce the cost, development time, and effort required, and manage the project in a consistent manner. The main task is to relate and compare the different models and see which models are more accurate and fit a risky environment.
References