
114 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART A: SYSTEMS AND HUMANS, VOL. 33, NO. 1, JANUARY 2003

Considering Fault Removal Efficiency in Software


Reliability Assessment
Xuemei Zhang, Xiaolin Teng, and Hoang Pham, Senior Member, IEEE

Abstract—Software reliability growth models (SRGMs) have been developed to estimate software reliability measures such as the number of remaining faults, software failure rate, and software reliability. Issues such as imperfect debugging and the learning phenomenon of developers have been considered in these models. However, most SRGMs assume that faults detected during tests will eventually be removed. Consideration of fault removal efficiency in the existing models is limited. In practice, fault removal efficiency is usually imperfect. This paper aims to incorporate fault removal efficiency into software reliability assessment. Fault removal efficiency is a useful metric in software development practice, and it helps developers to evaluate the debugging effectiveness and estimate the additional workload. In this paper, imperfect debugging is considered in the sense that new faults can be introduced into the software during debugging and the detected faults may not be removed completely. A model is proposed to integrate fault removal efficiency, failure rate, and fault introduction rate into software reliability assessment. In addition to traditional reliability measures, the proposed model can provide some useful metrics to help the development team make better decisions. Software testing data collected from real applications are utilized to illustrate the proposed model for both its descriptive and predictive power. The expected number of residual faults and the software failure rate are also presented.

Index Terms—Akaike's information criterion (AIC), maximum-likelihood estimate (MLE), nonhomogeneous Poisson process (NHPP), software reliability growth.

Manuscript received January 3, 2000; revised February 28, 2003. This research was supported in part by the FAA William J. Hughes Technical Center under Grant 98-G-006 and by the NSF under Grant INT-0107755. This paper was recommended by Associate Editor L. Fang.
The authors are with the Department of Industrial Engineering, Rutgers University, New Brunswick, NJ 08903 USA (e-mail: [email protected]).
Digital Object Identifier 10.1109/TSMCA.2003.812597

I. INTRODUCTION

AS SOFTWARE systems get larger and more complex, the software development process inevitably becomes more complicated. Powerful metrics play an important role in assisting management decision making for a complicated process like software development. For instance, reliability is a significant factor in quantitatively characterizing quality and determining when to stop testing and release software on the basis of predetermined reliability objectives. Software reliability growth models [2]–[5], [8]–[14], [19], [22], [23] have been proposed to estimate reliability metrics such as the number of residual faults, failure rate, and reliability of software. Perfect [3], [5], [11], [12], [21], [22] and imperfect debugging [14], [15] are considered in the NHPP models. In some of these models, the learning phenomenon of the software developers [11], [12], [17] has also been studied. Other reliability measures, such as the mean time until next failure [18], are also investigated.

Although some software reliability studies addressed the imperfect debugging phenomenon, most of them only considered possibilities of adding new faults while removing the existing ones. However, imperfect debugging also means that detected faults are removed with an imperfect removal efficiency other than 100%. Jones [7] pointed out that the defect-removal efficiency is an important factor for software quality and process management. It can provide software developers with an estimation of testing effectiveness and a prediction of additional effort. Moreover, fault removal efficiency is usually below 100% (e.g., it ranges from 15% to 50% for unit test, 25% to 40% for integration test, and 25% to 55% for system test [7]). Goel and Okumoto [4] also considered a similar conception in their Markov model. They assumed that after a failure the number of residual faults remains the same with some probability and reduces to one less than the current value with the complementary probability. In other words, fault removal is not always 100%. Kremer [8] applied a birth–death process to software reliability modeling, considering both imperfect fault removal probability (death process) and fault introduction (birth process).

In practice, software fault debugging is a very complex process. Usually, when testers detect a deviation from the requirements, they create a modification request. Then the review board members will assign this request to a particular developer. After the developer studies the software fault, he/she will submit a code change to fix it. The changed code has to go through the various tests (unit test, integration test, and system test) again to make sure that it fixes the reported problem. The fix may not pass these tests, and sometimes, even if it passes these tests, the fault may not be completely removed because the test environment may not be the same as the customer environment. It is not unusual for the software development team to find that a software fault has been reported multiple times before it is finally removed. Some faults can only be encountered in customer field trials. Therefore, fault removal efficiency is an important factor for software reliability estimation and software project management.

In this paper, we propose a methodology to integrate fault removal efficiency into software reliability growth models. Section II presents the formulation of the NHPP model addressing fault removal efficiency and fault introduction rate. The explicit solution of the mean value function (MVF) for the proposed NHPP model is derived. This model considers the learning phenomenon using an S-shaped fault detection rate function and introduces a constant fault introduction rate. Section III evaluates the proposed model and compares and contrasts it to the

1083-4427/03$17.00 © 2003 IEEE
other existing NHPP models using two sets of data collected from real software applications. Software reliability metrics, including the expected number of remaining faults and the software failure rate, are estimated using the proposed fault removal efficiency model. The results show that the proposed model has the following technical merits: it improves software reliability assessment and provides additional metrics for development project evaluation and management. Section IV summarizes the conclusions.

Notation

N(t)     Counting process for the total number of failures in [0, t).
m(t)     Expected number of software failures by time t; m(t) = E[N(t)].
a(t)     Total fault content rate function, i.e., the sum of the expected number of initial software faults and introduced faults by time t.
b(t)     Failure detection rate function, which also represents the average failure rate of a fault.
p        Fault removal efficiency, i.e., the percentage of faults eliminated by reviews, inspections, and tests.
β        Fault introduction probability at time t.
λ(t)     Intensity function, or fault detection rate per unit time; λ(t) = dm(t)/dt.
R(x|t)   Software reliability function by time t for a mission time x.

II. SOFTWARE RELIABILITY MODELING

In the family of software reliability models, NHPP software reliability models have been widely used in analyzing and estimating the reliability-related metrics of software products in many applications, such as telecommunications [6], [24]. These models treat the debugging process as a counting process that follows a Poisson distribution with a time-dependent intensity function. Existing NHPP software reliability models can be unified into a general NHPP function proposed by Pham et al. [16]. The primary task in using the NHPP models to estimate software reliability metrics is to determine the Poisson mean, which is known as the MVF.

In this section, an NHPP model with fault removal efficiency is presented. The following are the assumptions for this model:

1) The occurrence of software failures follows an NHPP.
2) The software failure rate at any time is a function of the fault detection rate and the number of remaining faults present at that time.
3) When a software failure occurs, a debugging effort is initiated immediately, and the detected fault is removed with probability p. This debugging is s-independent at each location of the software failures.
4) For each debugging effort, whether the fault is successfully removed or not, some new faults may be introduced into the software system with probability β, where β < p.

Assumption 1 is a widely accepted assumption. Assumption 2 can be interpreted as follows: the software failure rate is the product of the number of residual faults (which incorporates the concept of fault removal efficiency) and the average failure rate of a fault. In practice, once a software failure is reported, the review board members will assign a developer to look into the code. Although the fault that causes the failure may not be removed immediately, the debugging effort is still initiated. When the developer tries to modify the code, new faults could be introduced into the software. This is captured by assumptions 3 and 4.

A. General NHPP Software Reliability Model With Fault Removal Efficiency

In this section, fault removal efficiency and fault introduction rate are integrated into the MVF of an NHPP model. Fault removal efficiency is defined as the percentage of bugs eliminated by reviews, inspections, and tests. This section also presents an explicit solution to the differential equations of the proposed model. The MVF that incorporates both fault removal efficiency and the fault introduction phenomenon can be obtained by solving the following system of differential equations:

    dm(t)/dt = b(t) [a(t) − p·m(t)]    (1)

    da(t)/dt = β · dm(t)/dt    (2)

where p represents the fault removal efficiency; that is, 100p% of the detected faults can be eliminated completely during the development process. Therefore, in (1), m(t) represents the expected number of faults detected by time t, and p·m(t) then represents the expected number of faults that can be successfully removed. Existing models usually assume that p is 100%.

The marginal conditions for the differential equations (1) and (2) are as follows:

    m(0) = 0    (3)

    a(0) = a    (4)

where a is the number of initial faults in the software system before testing starts.

Most existing NHPP models assume that the fault failure rate is proportional to the total number of residual faults; (1) can be deduced directly from assumptions 2 and 3. The software system failure rate is a function of the number of residual faults at any time and the fault detection rate (which can also be interpreted as the average failure rate of a fault). The expected number of residual faults is given by

    a(t) − p·m(t).    (5)

Notice that when p = 1, the proposed model reduces to an existing NHPP model [17].

Equation (2) can also be deduced directly from assumptions 3 and 4. The fault content rate a(t) in the software at time t is proportional to the rate of debugging efforts to the system, which equals dm(t)/dt because of assumption 3.

Equation (5) can be used to derive explicit solutions of (1) and (2). By taking derivatives on both sides of (5), we obtain

    d[a(t) − p·m(t)]/dt = (β − p) · dm(t)/dt

or, using (1),

    d[a(t) − p·m(t)]/dt = −(p − β) · b(t) · [a(t) − p·m(t)].    (6)
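Before proceeding to the closed-form solution, the system (1)–(2) can be checked numerically. The Python sketch below integrates it for a constant stand-in detection rate b(t) = b0, verifies the identity a(t) = a + β·m(t) obtained by integrating (2), and compares the residual count with the exact solution of (6); all parameter values are illustrative and are not estimates from the paper's data sets.

```python
import math

# Numerical sketch of the system (1)-(2) with a constant detection rate
# b(t) = b0.  All parameter values are illustrative, not the paper's estimates.
a0   = 100.0   # a: initial fault content, marginal condition (4)
p    = 0.90    # fault removal efficiency
beta = 0.02    # fault introduction probability
b0   = 0.05    # constant stand-in for the detection rate b(t)

# Forward-Euler integration of m'(t) = b(t)[a(t) - p m(t)], a'(t) = beta m'(t)
dt, t_end = 0.001, 50.0
m, a = 0.0, a0                   # marginal conditions (3)-(4)
for _ in range(int(t_end / dt)):
    dm = b0 * (a - p * m) * dt   # eq. (1)
    m += dm
    a += beta * dm               # eq. (2)

# Integrating (2) gives a(t) = a + beta m(t), and solving (6) with constant
# b(t) = b0 gives the residual count a(t) - p m(t) = a exp(-(p - beta) b0 t).
res_closed = a0 * math.exp(-(p - beta) * b0 * t_end)
print(round(a, 2), round(a0 + beta * m, 2))       # the two should agree
print(round(a - p * m, 2), round(res_closed, 2))  # and so should these
```

The agreement between the integrated residual count and the exponential decay confirms that (6) behaves as a linear first-order equation in the residual faults.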
The marginal condition for (6) is a(0) − p·m(0) = a. Hence, the expected number of residual faults given by (6) is

    a(t) − p·m(t) = a · exp(−(p − β) ∫₀ᵗ b(τ) dτ).    (7)

From (1), the failure rate function can be expressed as follows:

    λ(t) = dm(t)/dt = a · b(t) · exp(−(p − β) ∫₀ᵗ b(τ) dτ).    (8)

Therefore, the explicit expression of the MVF can be obtained as follows:

    m(t) = [a/(p − β)] · [1 − exp(−(p − β) ∫₀ᵗ b(τ) dτ)].    (9)

Using the result in (8), one can also obtain the solution for the fault content rate function by taking the integral of (2). The fault content rate function is given by

    a(t) = a · [1 + (β/(p − β)) · (1 − exp(−(p − β) ∫₀ᵗ b(τ) dτ))].

The reliability function based on the NHPP is, therefore,

    R(x|t) = exp(−[m(t + x) − m(t)])    (10)

where m(t) is given by (9).

Thus, the reliability metrics, i.e., the expected number of residual faults, the software failure rate, and the software reliability, can be estimated from (7), (8), and (10), respectively.

B. NHPP Model

In this section, we derive a new NHPP model from the general class of models presented in the previous section. The fault detection rate function in this model, b(t), is a nondecreasing function with an inflexion S-shaped curve [11], [12], which captures the learning process of the software developers. In the existing models [11], [12], however, the upper bound of the fault detection rate is assumed to be the same as the increasing rate of the learning curve. This is for the purpose of calculation convenience. In this paper, we relax this assumption and use a different parameter for the upper bound of the fault detection rate [see (11)]. The model also addresses imperfect debugging by assuming that faults can be introduced during debugging with a constant fault introduction probability β. That is,

    b(t) = c / (1 + α·e^(−b·t)).    (11)

Substituting (11) into (9), we obtain the MVF for the proposed model as follows:

    m(t) = [a/(p − β)] · {1 − [(1 + α)·e^(−b·t) / (1 + α·e^(−b·t))]^((c/b)(p − β))}.    (12)

Note that when the testing time goes to infinity, m(t) converges to its upper bound a/(p − β). The expected number of residual faults is given by

    a(t) − p·m(t) = a · [(1 + α)·e^(−b·t) / (1 + α·e^(−b·t))]^((c/b)(p − β))    (13)

and the software failure rate is

    λ(t) = [c / (1 + α·e^(−b·t))] · a · [(1 + α)·e^(−b·t) / (1 + α·e^(−b·t))]^((c/b)(p − β)).    (14)

TABLE I
SUMMARY OF THE SOFTWARE RELIABILITY MODELS AND THE MEAN VALUE FUNCTIONS

Table I summarizes the features of the proposed model and of the existing ones.

C. Parameter Estimation and Model Comparison

Parameter Estimation: Once the analytical expression for the MVF is derived, the parameters in the MVF need to be estimated.
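The closed-form expressions above are straightforward to evaluate directly. The Python sketch below codes the MVF (12), the expected residual faults (13), the failure rate (14), and the reliability function (10); the parameter values are made up for illustration only and are not the MLEs reported later in the paper.

```python
import math

# Direct evaluation of the proposed model, eqs. (10)-(14).  Parameter values
# are made up for illustration; they are not MLEs from the paper's data sets.
a0, b, alpha, c, p, beta = 120.0, 0.1, 5.0, 0.12, 0.9, 0.02

def _core(t):
    # ((1 + alpha) e^{-bt}) / (1 + alpha e^{-bt}): common factor in (12)-(14)
    return (1 + alpha) * math.exp(-b * t) / (1 + alpha * math.exp(-b * t))

def mvf(t):               # m(t), eq. (12)
    return a0 / (p - beta) * (1 - _core(t) ** ((c / b) * (p - beta)))

def residual(t):          # a(t) - p m(t), eq. (13)
    return a0 * _core(t) ** ((c / b) * (p - beta))

def failure_rate(t):      # lambda(t) = b(t) [a(t) - p m(t)], eq. (14)
    return c / (1 + alpha * math.exp(-b * t)) * residual(t)

def reliability(x, t):    # R(x | t), eq. (10)
    return math.exp(-(mvf(t + x) - mvf(t)))

for t in (0.0, 50.0, 200.0):
    print(t, round(mvf(t), 2), round(residual(t), 2),
          round(failure_rate(t), 4), round(reliability(10.0, t), 4))
```

As t grows, m(t) approaches its upper bound a/(p − β), the residual count (13) decays toward zero, and R(x|t) approaches one, matching the limiting remark after (12).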
Parameter estimation is usually carried out by the maximum-likelihood estimate (MLE) method.

Model Comparison: Two criteria are used for model comparison. In this section, we evaluate the performance of the models using the sum of squared errors (SSE) and Akaike's information criterion (AIC) [1]. Both the descriptive and the predictive power of the models are considered. The sum of squared errors is usually used as a criterion for comparing goodness-of-fit and predictive power. SSE can be calculated as follows:

    SSE = Σₖ [m̂(tₖ) − yₖ]²    (15)

where yₖ is the observed number of faults by time tₖ, m̂(tₖ) is the expected number of faults by time tₖ estimated by a model, and k is the fault index.

Another criterion used for model comparison is the AIC, which can be calculated as follows:

    AIC = −2 · log(likelihood function at its maximum value) + 2N    (16)

where N represents the number of parameters in the model. The AIC measures the ability of a model to maximize the likelihood function while accounting for the degrees of freedom used in fitting: since increasing the number of parameters will usually result in a better fit, the AIC criterion takes the degrees of freedom into consideration by assigning a model with more parameters a larger penalty. The lower the SSE and AIC values, the better the model performs.

III. MODEL EVALUATION AND COMPARISON

A. Case 1: Data From a Real-Time Control System

TABLE II
REAL-TIME CONTROL SYSTEM DATA

In this section, we examine the goodness-of-fit and predictive power of the proposed model and compare it with the existing models. The first set of data is documented in Lyu [9]. In total, 136 faults were reported, and the times between failures (TBFs) in seconds are listed in Table II.

We need to separate the data set into two subsets for the goodness-of-fit test and the predictive-power evaluation. An extremely long TBF is observed from the 122nd fault to the 123rd fault, and the TBFs after the 123rd fault increase tremendously, implying reliability growth; that is, the system becomes stabilized. In this study, we use the first 122 data points for the goodness-of-fit evaluation and the remaining data points for the predictive-power test. The SSE and AIC values for goodness-of-fit and prediction are listed in Table III.

As seen from Table III, the proposed model provides the best fit and prediction for this data set (both the SSE and the AIC values are the lowest among all models). Furthermore, some instrumental information can be obtained from the parameter estimates provided by the proposed model. For example, the fault removal efficiency p is 90%, which is relatively high according to [7]. The number of initial faults is estimated to be 135; together with the 90% fault removal efficiency, the expected number of total detected faults is then 152. Therefore, at the assumed stopping point of 57 042 s, there are about 30 (152 − 122) faults remaining in the software. The fault introduction probability β is 0.012; that is, on average, one fault will be introduced when 100 faults are removed. Some models underestimate the expected number of total faults.

The software failure rate can be predicted after the parameters are estimated. Fig. 1 shows the trend of the failure rate for the test and post-test periods.
TABLE III
PARAMETER ESTIMATION AND MODEL COMPARISON

Fig. 1. Failure rate for real-time control data.

Fig. 2. Comparison of the post-test failure rates by different models.

Fig. 2 illustrates the difference between the post-test failure rates predicted by several existing models listed in Table III and the proposed model. For instance, the failure rate given by the G-O model is on the optimistic side, for the following two reasons: 1) the G-O model underestimates the expected number of total faults (125 instead of 136), and 2) unlike the proposed model, the G-O model does not consider the fault removal efficiency. Thus, we can see that the new model has promising technical merit in the sense that it provides the development teams with both traditional reliability measures and in-process metrics.

B. Case 2: Data From a Tandem Computers Project

In this section, we test the predictive power of the new model and other existing models using four sets of software failure data. Wood [23] studied eight existing NHPP models based on four data sets that stem from four major releases of software products at Tandem Computers, and found that the G-O model performs the best. Due to page limitations, we only present the model comparison based on the first of those four releases. Similar conclusions are also observed from the other three data sets.

In [23], Wood used a subset of each group of the actual data to fit the models and then predicted the number of future failures. He then compared the predicted number of failures with the actual data. We fitted our model to the same subset of data (week 1 to week 9 for Release 1 data), predicted the number of faults for week 10 to week 20, and compared our results with the best model in Wood's paper, the G-O model. Table IV shows the results predicted using the CPU execution hours as the time frame. As can be seen from Table IV, the new model predicts significantly better than the G-O model: its SSE values show a significant improvement over those of the G-O model, and its AIC value is also lower than that of the G-O model.

The estimates of the parameters and their implications can be summarized as follows. The estimated fault removal efficiency p is below average (in [7], Jones mentioned that the fault removal efficiency ranges from 45% to 99%, with an average of 72%); thus, more resources need to be allocated to improve the fault removal efficiency. The results also show that the estimated initial number of faults a is greater than the actual number of faults detected by the end of the testing phase (100), and the estimated total number of faults by the end of the testing phase is about 117. This implies that there are still a number of remaining faults in the software at the end of the testing phase, which agrees with the fact that about 20 faults were detected during the user operational phase [23]. The MLEs of the other model parameters, b, α, and c, are obtained by the same procedure.

IV. CONCLUSIONS

This paper incorporates fault removal efficiency into software reliability assessment. Imperfect debugging is considered in the
sense that not all faults can be removed completely, and new faults can be introduced while removing existing ones. Both the fault removal efficiency and the fault introduction function can take a time-varying form. Data collected from real applications show that the proposed model provides both the traditional reliability measures and some important in-process metrics, including the fault removal efficiency and the fault introduction rate. These metrics offer very useful information for development project management. With more careful data collection, more sophisticated analyses can be investigated in this area.

TABLE IV
COMPARISON OF G-O AND THE NEW MODEL USING SOFTWARE FAILURE DATA FROM TANDEM COMPUTERS

ACKNOWLEDGMENT

The authors would like to thank the referees and the editor for their helpful comments.

REFERENCES

[1] H. Akaike, "A new look at statistical model identification," IEEE Trans. Automat. Contr., vol. AC-19, pp. 716–723, 1974.
[2] W. Ehrlich, B. Prasanna, J. Stampfel, and J. Wu, "Determining the cost of a stop-testing decision," IEEE Softw., pp. 33–42, 1993.
[3] A. L. Goel and K. Okumoto, "Time-dependent fault-detection rate model for software and other performance measures," IEEE Trans. Rel., vol. R-28, pp. 206–211, 1979.
[4] A. L. Goel and K. Okumoto, "A Markovian model for reliability and other performance measures of software systems," in Proc. AFIPS Conf., Nat. Comput. Conf., June 4–7, 1979, pp. 770–774.
[5] S. A. Hossain and R. C. Dahiya, "Estimating the parameters of a nonhomogeneous Poisson process model for software reliability," IEEE Trans. Rel., vol. 42, pp. 604–612, Dec. 1993.
[6] D. R. Jeske, X. Zhang, and L. Pham, "Accounting for realities when estimating the field failure rate of software," in Proc. 12th Int. Symp. Software Reliability Engineering, Kowloon, Hong Kong, Nov. 2001.
[7] C. Jones, "Software defect-removal efficiency," IEEE Computer, vol. 29, pp. 73–74, Apr. 1996.
[8] W. Kremer, "Birth-death and bug counting," IEEE Trans. Rel., vol. R-32, no. 1, pp. 37–47, 1983.
[9] M. Lyu, Ed., Handbook of Software Reliability Engineering. New York: McGraw-Hill, 1996.
[10] M. Ohba, "Software reliability analysis models," IBM J. Res. Develop., vol. 28, pp. 428–443, 1984.
[11] M. Ohba, "Inflexion S-shaped software reliability growth models," in Stochastic Models in Reliability Theory, S. Osaki and Y. Hatoyama, Eds. Berlin, Germany: Springer-Verlag, 1984, pp. 144–162.
[12] M. Ohba and S. Yamada, "S-shaped software reliability growth models," in Proc. 4th Int. Conf. Reliability Maintainability, 1984, pp. 430–436.
[13] H. Ohtera and S. Yamada, "Optimal allocation and control problems for software-testing resources," IEEE Trans. Rel., pp. 171–176, June 1990.
[14] H. Pham, "Software reliability assessment: Imperfect debugging and multiple failure types in software development," Idaho National Eng. Lab., Rep. EG&G-RAAM-10737, 1993.
[15] H. Pham, "A software cost model with imperfect debugging, random life cycle and penalty cost," Int. J. Syst. Sci., vol. 27, no. 5, pp. 455–463, 1996.
[16] H. Pham, L. Nordmann, and X. Zhang, "A general imperfect software debugging model with S-shaped fault detection rate," IEEE Trans. Rel., vol. 48, pp. 169–175, June 1999.
[17] H. Pham and X. Zhang, "An NHPP software reliability model and its comparison," Int. J. Rel., Quality Safety Eng., vol. 14, no. 3, pp. 269–282, 1997.
[18] L. Pham and H. Pham, "Software reliability models with time-dependent hazard function based on Bayesian approach," IEEE Trans. Syst., Man, Cybern. A, vol. 30, pp. 25–35, Jan. 2000.
[19] H. Pham, Software Reliability. New York: Springer-Verlag, 2000.
[20] S. Yamada, M. Ohba, and S. Osaki, "S-shaped reliability growth modeling for software fault detection," IEEE Trans. Rel., vol. TR-12, pp. 475–484, 1983.
[21] S. Yamada and S. Osaki, "Software reliability growth modeling: Models and applications," IEEE Trans. Software Eng., vol. SE-11, pp. 1431–1437, 1985.
[22] S. Yamada, K. Tokuno, and S. Osaki, "Imperfect debugging models with fault introduction rate for software reliability assessment," Int. J. Syst. Sci., vol. 23, no. 12, 1992.
[23] A. Wood, "Predicting software reliability," IEEE Computer, vol. 11, pp. 69–77, Nov. 1996.
[24] X. Zhang, D. R. Jeske, and H. Pham, "Calibrating software reliability models when the test environment does not match the user environment," Appl. Stochast. Models Business Ind., vol. 18, pp. 87–89, 2002.

Xuemei Zhang received the M.S. degree in statistics and the Ph.D. degree in industrial engineering, both from Rutgers University, New Brunswick, NJ, in 1999.
She is currently a Member of Technical Staff in the Performance Analysis Department of Bell Laboratories, Lucent Technologies, NJ. Her major area of work has been in reliability, with an emphasis on software reliability. Her other areas of work include performance analysis of computer systems and networks.

Xiaolin Teng received the M.S. degree in statistics and the Ph.D. degree in industrial engineering from Rutgers University, New Brunswick, NJ, in 2001.
He is currently a part-time Adjunct Lecturer in the School of Management at Rutgers University. His research interests include software reliability, reliability modeling, and fault-tolerant computing.
Hoang Pham (S'89–M'89–SM'92) received the B.S. degree in mathematics and the B.S. degree in computer science, both with high honors, from Northeastern Illinois University, Chicago, IL, the M.S. degree in statistics from the University of Illinois, Urbana-Champaign, and the M.S. and Ph.D. degrees in industrial engineering from the State University of New York at Buffalo, in 1982, 1984, 1988, and 1989, respectively.
He is an Associate Professor with the Department of Industrial Engineering at Rutgers University, New Brunswick, NJ. Before joining Rutgers, he was a Senior Engineering Specialist at the Boeing Company, Seattle, WA, and the Idaho National Engineering Laboratory, Idaho Falls. His research interests include software reliability, fault-tolerant computing, reliability modeling, maintenance, environmental risk assessment, and optimization. He is the author of Software Reliability (Berlin, Germany: Springer-Verlag, 2000) and editor of the Handbook of Reliability Engineering (Berlin, Germany: Springer-Verlag, Fall 2003). He has published more than 70 journal articles and 15 book chapters, and has edited ten volumes. Among his edited books are Recent Advances in Reliability and Quality Engineering (Singapore: World Scientific, 2001), Software Reliability and Testing (New York: IEEE Computer Society Press, 1995), and Fault-Tolerant Software Systems: Techniques and Applications (New York: IEEE Computer Society Press, 1992). He is Editor-in-Chief of the International Journal of Reliability, Quality and Safety Engineering, and Associate Editor of the Journal of Systems and Software and the International Journal of Modeling and Simulation. He is an editorial board member of the International Journal of Systems Science, the Journal of Computer and Software Engineering, IIE Transactions on Quality and Reliability, and the International Journal of Plant Engineering and Management.
Dr. Pham is Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS (PART A), and was Guest Editor of the IEEE TRANSACTIONS ON RELIABILITY and IEEE Communications. He has been conference chair and program chair of many international conferences and is currently the Conference Chair of the Ninth International Conference on Reliability and Quality in Design, to be held in Honolulu, HI, in August 2003. He is a senior member of the IIE and is listed in Who's Who in the World, Who's Who in America, and Who's Who in Science and Engineering.
