3 Statistics for Reliability Modeling

Paul Kvam and Jye-Chyi Lu
P. Kvam (✉)
Department of Mathematics and Computer Science, University of Richmond, Richmond, VA, USA
e-mail: [email protected]

J.-C. Lu
The School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA
e-mail: [email protected]

In everyday use, words like reliability and quality have meanings that vary depending on the context. In engineering, reliability is defined as the ability of an item to perform its function, usually measured in terms of probability as a function of time. Quality denotes how the item conforms to its specifications, so reliability is a measure of the item's quality over time.
Since the time of Birnbaum and Saunders [1], when system reliability emerged as its own discipline, research has centered on the operation of simple systems with identical parts working independently of each other. Today's systems do not fit this mold; system representation must include multifaceted components with several component states that can vacillate between perfect operation and terminal failure. Not only do components interact within systems, but many systems are dynamic in that the system configuration can be expected to change during its operation, perhaps due to component failures or external stresses. Computer software, for example, changes its failure structure during the course of design, testing, and implementation.

Statistical methods for reliability analysis grew from this concept of system examination, and system reliability is often gauged through component lifetime testing. This chapter reviews the current framework for statistical reliability and considers some modern needs from experimenters in engineering and the physical sciences.

Statistical analysis of reliability data in engineering applications cannot be summarized comprehensively in a single book chapter such as this. The following books (listed fully in the reference section) serve as an excellent basis for a serious treatment of the subject:

1. Statistical Theory of Reliability and Life Testing by Barlow and Proschan [2]
2. Practical Methods for Reliability Data Analysis by Ansell and Phillips [3]
3. Reliability: Probabilistic Models and Statistical Methods by Leemis [4]
4. Applied Reliability by Tobias and Trindade [5]
5. Engineering Reliability by Barlow [6]
6. Reliability for Technology, Engineering, and Management by Kales [7]
7. Statistical Methods for Reliability Data by Meeker and Escobar [8]
8. Reliability Modeling, Prediction, and Optimization by Blischke and Murthy [9]
9. Statistical Methods for the Reliability of Repairable Systems by Rigdon and Basu [10]
10. Bayesian Reliability by Hamada et al. [11]

Some of the books in this list focus on reliability theory, and others focus exclusively on reliability engineering. From the more inclusive books, [8] provides a complete, high-level guide to reliability inference tools for an engineer, and most examples have an engineering basis (usually in manufacturing). For reliability problems closely associated with materials testing, Bogdanoff and Kozin [12] connect the physics of degradation to reliability models. Sobczyk and Spencer [13] also relate fatigue to reliability through probability modeling. For reliability prediction in software performance, Lyu [14] provides a comprehensive guide to engineering procedures for software reliability testing, while a more theoretical alternative by Singpurwalla and Wilson [15] emphasizes probability modeling for software reliability, including hierarchical Bayesian methods. Closely related to reliability modeling in engineering systems, Bedford and Cooke [16] cover methods of probabilistic risk assessment, which is an integral part of reliability modeling for large and complex systems.

Other texts emphasize reliability assessment in a particular engineering field of interest. For statistical reliability in geotechnical engineering, Baecher and Christian [17] is recommended as it details statistical problems with soil variability, autocorrelation (i.e., Kriging), and load/resistance factors. Ohring [18] provides a comprehensive guide to reliability assessment for electrical engineering and electronics manufacturing, including reliability pertaining to degradation of contacts (e.g., crack growth in solder), optical fiber reliability, semiconductor degradation, and mass-transport-induced failure. For civil engineering, Melchers' [19] reliability text has a focus on reliability of structural systems and loads, time-dependent reliability, and resistance modeling.

3.2 Lifetime Distributions in Reliability

While engineering studies have contributed a great deal of the current methods for reliability life testing, an equally great amount exists in the biological sciences, especially relating to epidemiology and biostatistics. Life testing is a crucial component of both fields, but the bio-related sciences tend to focus on mean lifetimes and numerous risk factors. Engineering methods, on the other hand, are more likely to focus on upper (or lower) percentiles of the lifetime distribution as well as the stochastic dependencies between working components. Another crucial difference between the two application areas is that engineering models are more likely to be based on principles of physics that lead to well-known distributions such as the Weibull, log-normal, extreme value, and so on.

The failure time distribution is the most widely used probability tool for modeling product reliability in science and industry. If f(x) represents the probability density function of the product's failure time, then its reliability is R(x) = ∫_x^∞ f(u) du, and R(t) = 1 − F(t), where F is the cumulative distribution function (CDF) corresponding to f. A quantile is the CDF's inverse; the pth quantile of F is the lifetime value t_p such that F(t_p) = p. To understand the quality of a manufactured product through these lifetime probability functions, it is often useful to consider the notion of aging. For example, the (conditional) reliability of a product that has been working t units of time is R(x | t) = R(t + x)/R(t).
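These definitions are straightforward to exercise numerically. As a minimal sketch (using the exponential distribution with an assumed rate θ chosen purely for illustration, not a model from the text), the reliability, quantile, and conditional reliability functions can be computed directly:

```python
import math

THETA = 0.5  # assumed failure rate per unit time, for illustration only

def reliability(x, theta=THETA):
    """R(x) = 1 - F(x) = exp(-theta * x) for the exponential model."""
    return math.exp(-theta * x)

def quantile(p, theta=THETA):
    """t_p such that F(t_p) = p, i.e., the inverse CDF."""
    return -math.log(1.0 - p) / theta

def conditional_reliability(x, t, theta=THETA):
    """R(x | t) = R(t + x) / R(t): probability of surviving x more units
    given the product has already worked t units of time."""
    return reliability(t + x, theta) / reliability(t, theta)

print(reliability(2.0))                   # R(2)
print(quantile(0.5))                      # median lifetime
print(conditional_reliability(2.0, 7.0))  # equals R(2): memoryless
```

For the exponential model the conditional reliability does not depend on the age t (the memoryless property); aging distributions with increasing failure rate would instead give R(x | t) < R(x).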
Pareto:  f(t) = mθ^m / t^(m+1),  h(t) = m/t,  mean mθ/(m−1),  variance mθ²/[(m−1)²(m−2)],  for t > θ, m > 0
Extreme value:  f(t) = exp[−(t−a)/b] exp[−exp(−(t−a)/b)]/b,  h(t) = exp[−(t−a)/b]/(b{exp[exp(−(t−a)/b)] − 1}),  mean a − bΓ′(1),  variance (bπ)²/6,  for −∞ < a < ∞, b > 0

…a new shape parameter to a generalized Weibull distribution that allows bath-tub-shaped failure rates as well as a broader class of monotone failure rates.

For materials exposed to constant stress cycles with a given stress range, lifetime is measured in the number of cycles until failure (N). The Wöhler curve (or S–N curve) relates stress level (S) to N as NS^b = k, where b and k are material parameters (see [13] for examples). By taking logarithms of the S–N equation, we can express cycles to failure as a linear function: Y = log N = log k − b log S. If N is log-normally distributed, then Y is normally distributed and regular regression models can be applied for predicting cycles to failure (at a given stress level). In many settings, the log-normal distribution is applied as the failure time distribution when the corresponding degradation process is based on rates that combine multiplicatively. Despite having a concave-shaped (or upside-down bath-tub-shaped) failure rate, the log-normal is especially useful in modeling fatigue crack growth in metals and composites.

Birnbaum and Saunders [1] modeled the damage to a test item after n cycles as B_n = ζ_1 + … + ζ_n, where ζ_i represents the damage amassed in the ith cycle. If failure is determined by B_n exceeding a fixed damage threshold value B*, and if the ζ_i are identically and independently distributed (with mean μ and standard deviation σ), then

    P(N ≤ n) = P(B_n > B*) ≈ Φ((nμ − B*)/(σ√n)),   (3.3)

where Φ is the standard normal CDF. This happens because B_n will be approximately normal if n is large enough. The reliability function for the test unit is

    R(t) ≈ Φ((B* − tμ)/(σ√t)),   (3.4)

which is called the Birnbaum–Saunders distribution. It follows that

    W = (μ/σ)√N − B*/(σ√N)   (3.5)

has a normal distribution, which leads to accessible implementation in lifetime modeling (see [24] or [12] for more properties).

3.2.4 Lifetime Distributions from Degradation Modeling

These examples show how the product's lifetime distribution can be implied by knowledge of how it degrades in time. In general, degradation measurements have great potential to improve lifetime data analysis, but they also introduce new problems to the statistical inference. Lifetime models have been researched and refined for many manufactured products that are put on test. Degradation models, on the other hand, tend to be empirical (e.g., nonparametric) or based on simple physical properties of the test item and its environment (e.g., the Paris crack law, Arrhenius rule, and power law), which often lead to obscure lifetime models. Meeker and Escobar [8] provide a comprehensive guide to degradation modeling and show that many valid degradation models will not yield lifetime distributions with closed-form solutions. Functions for estimating parameters in reliability models are available in R packages, for example, the "WeibullR" package [25].

In a setting where the lifetime distribution is known but the degradation distribution is unknown, degradation information does not necessarily complement the available lifetime data. For example, the lifetime data may be distributed as Weibull, but conventional degradation models will contradict the Weibull assumption (actually, the rarely used reciprocal Weibull distribution for degradation with a fixed failure threshold leads to Weibull lifetimes).

In selecting a degradation model based on longitudinal measurements of degradation, monotonic models are
Fig. 3.1 Weibull probability plot for alloy T7987 fatigue life [8]
3.2.5 Censoring

For most products tested in regular use conditions (as opposed to especially harsh conditions), the allotted test time is usually too short to allow the experimenter to witness failure times for the entire set that is on test. When the item is necessarily taken off test after a certain amount of test time, its lifetime is right censored. This is also called type I censoring. Type II censoring corresponds to tests that are stopped after a certain number of failures (say k out of n, 1 ≤ k ≤ n) occur.

Atkinson [28] provides a substantial discussion of the subject in the context of regression diagnostics. Advanced plotting techniques even allow for censored observations (see Waller and Turnbull [29], for example).

To illustrate how the plot works, we first linearize the CDF of the distribution in question. For example, if we consider the two-parameter Weibull distribution, the quantile function is

    (− log p)^{1/κ} …
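The linearization above can be sketched numerically. Assuming the parameterization F(t) = 1 − exp[−(λt)^κ] (one common convention; the chapter's exact parameterization is truncated here), we have log[−log(1 − p)] = κ log t + κ log λ, so plotting log[−log(1 − p_i)] against log t_i should be close to a line with slope κ. The shape and scale values below are synthetic, and the noise-free sample makes the recovery exact:

```python
import math

# Assumed parameterization for this sketch: F(t) = 1 - exp(-(lam*t)**kappa).
kappa_true, lam_true = 2.0, 0.1

# Noise-free "sample": times at the median-rank plotting positions
# p_i = (i - 0.5)/n, via the quantile t_p = (-log(1-p))**(1/kappa) / lam.
n = 20
ps = [(i - 0.5) / n for i in range(1, n + 1)]
ts = [(-math.log(1.0 - p)) ** (1.0 / kappa_true) / lam_true for p in ps]

# Linearized coordinates: y = log(-log(1-p)) = kappa*log(t) + kappa*log(lam).
xs = [math.log(t) for t in ts]
ys = [math.log(-math.log(1.0 - p)) for p in ps]
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

kappa_hat = slope                          # slope estimates the shape kappa
lam_hat = math.exp(intercept / slope)      # intercept = kappa * log(lam)
print(kappa_hat, lam_hat)
```

With real data, departures of the plotted points from a straight line signal a poor Weibull fit, which is exactly how probability plots such as Fig. 3.1 are read.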
…statistical ideology. Accelerated life testing, an important tool for designing reliability experiments, is discussed in detail in Chap. 22 and is only mentioned in this chapter. Instead, a …

This allows the analyst to make direct inference for the component reliability [ψ(θ; t) = R_θ(t), for example].

Example (MLE for failure rate with exponential data X_1, …, X_n): The likelihood is based on f(x) = θ exp(−θx), where θ > 0, and is easier to maximize in its natural log form:

    log L(θ) = log ∏_{i=1}^n θ e^{−θx_i} = n log θ − θ ∑_{i=1}^n x_i,

where χ_p²(α) represents the 1 − α quantile of the χ_p² distribution.

Example (Confidence region for Weibull parameters): In this case, the MLEs for θ = (λ, r) must be computed using numerical methods. Many statistical software packages
compute such estimators along with confidence bounds. Dividing by L(λ̂, r̂) standardizes the likelihood ratio so that 0 ≤ Λ(θ) ≤ 1 and Λ peaks at (λ, r) = (λ̂, r̂). Figure 3.2 shows 50%, 90%, and 95% confidence regions for the Weibull parameters based on a simulated sample of n = 100.

Fig. 3.2 1 − α = 0.50, 0.90, and 0.95 confidence regions for Weibull parameters (λ, r) based on simulated data of size n = 100

Empirical likelihood provides a powerful method for providing confidence bounds on parameters of inference without necessarily making strong assumptions about the lifetime distribution of the product (i.e., it is nonparametric). This chapter cannot afford the space needed to provide the reader with an adequate description of its method and theory; Owen [30] provides a comprehensive study of empirical likelihood, including its application to lifetime data. Rather, we will summarize the fundamentals of nonparametric reliability estimation below.

3.3.3 Kaplan–Meier Estimator

For many modern reliability problems, it is not possible to find a lifetime distribution that adequately fits the available failure data. As a counterpart to parametric maximum likelihood theory, the nonparametric likelihood, as defined by Kiefer and Wolfowitz [31], circumvents the need for continuous densities:

    L(F) = ∏_{i=1}^n [F(x_i) − F(x_i−)],

where F is the cumulative distribution of the sample X_1, …, X_n. Kaplan and Meier [32] developed a nonparametric maximum likelihood estimator for F that allows for censored data in the likelihood. The prevalence of right censoring (when a reliability test is stopped at a time t, so the component's failure time is known only to be greater than t) in reliability studies, along with the increasing computing capabilities provided to reliability analysts, has made nonparametric data analysis more mainstream in reliability studies.

Suppose we have a sample of possibly right-censored lifetimes. The sample is denoted {(X_i, δ_i), i = 1, …, n}, where X_i is the time measurement and δ_i indicates whether the time is the observed lifetime (δ_i = 1) or a right-censor time (δ_i = 0). The likelihood

    L(F) = ∏_{i=1}^n (1 − F(x_i))^{1−δ_i} dF(x_i)^{δ_i}

can be shown to be maximized by assigning probability mass only to the observed failure times in the sample according to the rule

    F̂(t) = 1 − ∏_{x_j ≤ t} (1 − d_j/m_j),

where d_j is the number of failures at time x_j and m_j is the number of test items that had survived up to the time x_j− (i.e., just before the time x_j). A pointwise (approximate) confidence interval can be constructed based on the Kaplan–Meier variance estimate

    σ̂²(t_i) = (1 − F̂(t_i))² ∑_{t_j ≤ t_i} d_j / [m_j (m_j − d_j)].

Nair [33] showed that large-sample approximations work well in a variety of settings, but for medium samples, the following serves as an effective (1 − α)-level confidence interval for the survival function 1 − F(t):

    1 − F̂(t) ± √[ (−ln(α/2)) (1 − F̂(t))² (1 + σ̂²(t)) / (2n) ].

We illustrate in Fig. 3.3 the nonparametric estimator with strength measurements (in coded units) for 48 pieces of weathered cord, along with 95% pointwise confidence intervals for the cord strength. The data, found in Crowder et al. [34], include seven measurements that were damaged and yielded strength measurements that are considered right censored.
3.4 System Reliability

A system is an arrangement of components that work together for a common goal. So far, the discussion has fixated on the lifetime analysis of a single component, so this represents an extension of single-component reliability study. At the simplest level, a system contains n components of an identical type that are assumed to function independently. The mapping of component outcomes to system outcomes is through the system's structure function. The reliability function describes the system reliability as a function of component reliability.

A series system is such that the failure of any of the n components in the working group causes the system to fail. If the probability that a single component fails in its mission is p, the probability the system fails is 1 − P(system succeeds) = 1 − P(all n components succeed) = 1 − (1 − p)^n. More generally, in terms of component failure probabilities (p_1, …, p_n), the system reliability function Ψ is

    Ψ(p_1, …, p_n) = ∏_{i=1}^n (1 − p_i).   (3.16)

For a k-out-of-n system, which works as long as at least k of its n components work, the reliability function is

    Ψ(p) = ∑_{i=k}^n C(n, i) (1 − p)^i p^{n−i}.   (3.18)

Of course, most component arrangements are much more complex than a series or parallel system. With just three components, there are five unique ways of arranging the components in a coherent way (that is, so that each component success contributes positively to the system reliability). Figure 3.4 shows the system structure of those five arrangements in terms of a logic diagram, including a series system (1), a 2-out-of-3 system (3), and a parallel system (5). Note that the 2-out-of-3 system cannot be diagrammed with only three components, so each component is represented twice in the logic diagram. Figure 3.5 displays the corresponding reliabilities, as a function of the component reliability 0 ≤ p ≤ 1, of those five systems. Fundamental properties of coherent systems are discussed in [2, 4].

Fig. 3.4 Five unique systems of three components: (1) is series, (3) is 2-out-of-3, and (5) is parallel
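The structure-function view can be made concrete by computing system reliability two ways: from Eq. (3.18) directly, and by brute force over all component states using minimal path sets (a sketch; the component labels and the reliability value p = 0.9 are chosen for illustration). Both agree for the 2-out-of-3 system, whose reliability is 3p² − 2p³:

```python
from math import comb

def k_out_of_n_reliability(k, n, p_fail):
    """Eq. (3.18): system works when at least k of n i.i.d. components work;
    p_fail is the common component failure probability."""
    return sum(comb(n, i) * (1 - p_fail)**i * p_fail**(n - i)
               for i in range(k, n + 1))

def reliability_from_path_sets(path_sets, p_work):
    """Exact reliability by enumerating all 2^n component states.

    path_sets: minimal path sets as sets of component indices;
    p_work[i]: probability that component i works, independently."""
    n = len(p_work)
    total = 0.0
    for mask in range(2**n):
        working = {i for i in range(n) if mask >> i & 1}
        if any(ps <= working for ps in path_sets):  # some path set intact
            pr = 1.0
            for i in range(n):
                pr *= p_work[i] if i in working else 1 - p_work[i]
            total += pr
    return total

# 2-out-of-3 system (system 3 in Fig. 3.4): path sets {A,B}, {A,C}, {B,C}.
paths_2of3 = [{0, 1}, {0, 2}, {1, 2}]
p = 0.9  # component reliability
print(reliability_from_path_sets(paths_2of3, [p, p, p]))  # 3p^2 - 2p^3
print(k_out_of_n_reliability(2, 3, 1 - p))                # same value
```

The series system (single path set {A, B, C}) and the parallel system (path sets {A}, {B}, {C}) reduce to p³ and 1 − (1 − p)³ under the same enumeration.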
3.4.1 Estimating System and Component Reliability

In many complex systems, the reliability of the system can be computed through the reliability of the components along with the system's structure function. If the exact reliability is too difficult to compute explicitly, reliability bounds might be achievable based on minimum cut sets (MCS) and minimum path sets (MPS). An MPS is the collection of the smallest component sets that are required to work in order to keep the system working. An MCS is the collection of the smallest component sets that are required to fail in order for the system to fail. Table 3.2 shows the minimum cut sets and path sets for the three-component systems from Fig. 3.4.

In most industrial systems, components have different roles and varying reliabilities, and often the component reliability depends on the working status of other components. System reliability can be simplified through fault-tree analyses (see Chap. 7 of [16], for example), but uncertainty bounds for system reliability are typically determined through simulation.

In laboratory tests, component reliabilities are determined and the system reliability is computed as a function of the statistical inference of component lifetimes. In field studies, the tables are turned. Component manufacturers seeking reliability data outside laboratory tests look to component lifetime data within a working system. For a k-out-of-n system, for example, the system lifetime represents an order statistic of the underlying distribution function. That is, if the ordered lifetimes of a set of independent and identically distributed components are X_{1:n} ≤ X_{2:n} ≤ … ≤ X_{n:n}, then X_{n−k+1:n} represents the k-out-of-n system lifetime. The density function for X_{r:n} is

    f_{r:n}(t) = r C(n, r) F(t)^{r−1} [1 − F(t)]^{n−r} f(t),  t > 0.   (3.19)

Kvam and Samaniego [36] derived the nonparametric maximum likelihood estimator for F(t) based on a sample of k-out-of-n system data, and showed that the MLE F̂(t) is consistent. If the ith system (i = 1, …, m) observed is a k_i-out-of-n_i system, the likelihood can be represented as

    L(F) = ∏_{i=1}^m f_{k_i:n_i}(t_i),   (3.20)

and numerical methods are employed to find F̂. Huang [37] investigated the asymptotic properties of this MLE, and Chen [38] provides an ad hoc estimator that examines the effects of censoring.

Table 3.2 Minimum cut sets and path sets for the systems in Fig. 3.4

System   Minimum path sets          Minimum cut sets
1        {A,B,C}                    {A}, {B}, {C}
2        {A,B}, {C}                 {A,C}, {B,C}
3        {A,B}, {A,C}, {B,C}        {A,B}, {A,C}, {B,C}
4        {A,B}, {A,C}               {A}, {B,C}
5        {A}, {B}, {C}              {A,B,C}

Compared to individual component tests, observed system lifetimes can be either advantageous or disadvantageous. With an equal number of k-out-of-n systems at each 1 ≤ k ≤ n, Takahasi and Wakimoto [39] showed that the estimate of MTTF is superior to that of an equal number of individual component tests. With an unbalanced set of system lifetimes, no such guarantee can be made. If only series systems are observed, Kvam and Samaniego [40] show how the uncertainty in F̂(t) is relatively small in the lower quantiles of F (where system failures are observed) but explodes in the upper quantiles.

3.4.2 Stochastic Dependence Between System Components

Almost all basic reliability theory is based on systems with independently operating components. For realistic modeling of complex systems, this assumption is often impractical; system components typically operate at a level related to the quality and operational state of the other system components.

External events that cause the simultaneous failure of component groups are a serious consideration in reliability analysis of power systems. This can be a crucial point in systems that rely on built-in component redundancy to achieve high target system reliability. Shock models, such as those introduced by Marshall and Olkin [41], can be employed to demonstrate how multiple component failures can occur. An extra failure process is added to the otherwise independent component failure processes, representing the simultaneous failure of one or more components, thus making the component lifetimes positively dependent. This is the basis for most dependent-failure models in probabilistic risk assessment, including common cause failure models used in the nuclear industry (alpha-factor model, beta-factor model, and binomial failure rate model). See Chap. 8 of Bedford and Cooke [16] for discussion about how these models are used in risk assessment.

In dynamic systems, where system configurations and component reliability can change after an external event or a failure of one or more of the system components, the shock model approach cannot be applied effectively. In some applications, a load-share model applies. Early applications of the load-share system models were investigated by Daniels [42] for studying the reliability of composite materials in the textile industry. Yarns and cables fail after the last fiber (or wire) in the bundle breaks, thus a bundle of fibers can be considered a parallel system subject to a constant tensile load. An individual fiber fails in time with an individual rate that
depends on how the unbroken fibers within the bundle share the load of this stress. Depending on the physical properties of the fiber composite, this load sharing has different meanings in the failure model. Yarn bundles or untwisted cables tend to spread the stress load uniformly after individual failures, which defines an equal load-share rule, implying the existence of a constant system load that is distributed equally among the working components.

As expected, a load-sharing structure within a system can increase reliability (if the load distribution saves the system from failing automatically), but reliability inference is hampered even by the simplest model. Kvam and Peña [43] show how the efficiency of the load-share system, as a function of component dependence, varies between that of a series system (equivalent to sharing an infinite load) and a parallel system (equivalent to sharing zero load).

…DCs similar to airline hubs (e.g., regional DCs and global DCs) in aggregating various types of goods. Vehicle routing procedures typically involve trucks that carry similar products to several stores in an assigned region. Different products are consolidated in shipment for risk-pooling purposes and to more easily control delivery time and store-docking operations.

When one DC cannot meet the demands from its regional stores (due to demand increase or the DC's limited capability), other DCs provide backup support to maintain the overall network's service reliability. Focusing on the operations between DCs and stores, Ni et al. [49] defined the following network reliability as a weighted sum of the individual reliabilities from each DC's operations:

    r*_{system,k} = [ ∑_{i=1, i≠k}^M d_i P(T*_{m,i} < t_0) …
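The equal load-share rule described above can be illustrated by simulation under one simple additional assumption: that a fiber's failure rate is proportional to the load it carries. With k working fibers each carrying L/k, the total hazard is k · θ(L/k) = θL, so the times between successive breaks are i.i.d. exponential with rate θL and the bundle lifetime has mean n/(θL). All numerical values below are assumed purely for illustration:

```python
import random

random.seed(42)

def bundle_lifetime(n, theta, load):
    """Equal load-share bundle with a linear breakage rate: with k fibers
    left, each carries load/k and fails at rate theta*(load/k), so the
    total rate is always theta*load and each inter-failure spacing is
    exponential with rate theta*load."""
    t = 0.0
    for _ in range(n):
        t += random.expovariate(theta * load)
    return t

n, theta, load = 5, 1.0, 2.0
sims = [bundle_lifetime(n, theta, load) for _ in range(20000)]
mean_hat = sum(sims) / len(sims)
print(mean_hat)  # should be close to n / (theta * load) = 2.5
```

Under this linear rate, the bundle lifetime is a Gamma(n, θL) random variable; nonlinear load-share rules break this independence of spacings, which is one reason inference is hard even for simple load-share models.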
28. Atkinson, A.C.: Plots, Transformations, and Regression: An Introduction to Graphical Methods of Diagnostic Regression Analysis, Oxford Statistical Science Series. Oxford University Press, Oxford (1992)
29. Waller, L.A., Turnbull, B.W.: Probability plotting with censored data. Am. Stat. 46, 5–12 (1992)
30. Owen, A.: Empirical Likelihood. Chapman and Hall/CRC, Boca Raton (2001)
31. Kiefer, J., Wolfowitz, J.: Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. Ann. Math. Stat. 27, 887–906 (1956)
32. Kaplan, E.L., Meier, P.: Nonparametric maximum likelihood estimation from incomplete observations. J. Am. Stat. Assoc. 53, 457–481 (1958)
33. Nair, V.N.: Confidence bands for survival functions with censored data: a comparative study. Technometrics 26, 265–275 (1984)
34. Crowder, M.J., Kimber, A.C., Smith, R.L., Sweeting, T.J.: Statistical Analysis of Reliability Data. Chapman and Hall/CRC, London (1994)
35. Efron, B., Tibshirani, R.: An Introduction to the Bootstrap. Chapman and Hall/CRC Press, Boca Raton (1993)
36. Kvam, P.H., Samaniego, F.J.: Nonparametric maximum likelihood estimation based on ranked set samples. J. Am. Stat. Assoc. 89, 526–537 (1994)
37. Huang, J.: Asymptotic properties of the NPMLE of a distribution function based on ranked set samples. Ann. Stat. 25(3), 1026–1049 (1997)
38. Chen, Z.: Component reliability analysis of k-out-of-n systems with censored data. J. Stat. Plan. Inference 116, 305–316 (2003)
39. Takahasi, K., Wakimoto, K.: On unbiased estimates of the population mean based on samples stratified by means of ordering. Ann. Inst. Stat. Math. 20, 1–31 (1968)
40. Kvam, P.H., Samaniego, F.J.: On estimating distribution functions using nomination samples. J. Am. Stat. Assoc. 88, 1317–1322 (1993)
41. Marshall, A.W., Olkin, I.: A multivariate exponential distribution. J. Am. Stat. Assoc. 62, 30–44 (1967)
42. Daniels, H.E.: The statistical theory of the strength of bundles of threads. Proc. R. Soc. Lond. A 183, 405–435 (1945)
43. Kvam, P.H., Peña, E.: Estimating load-sharing properties in a dynamic reliability system. J. Am. Stat. Assoc. 100(469), 262–272 (2005)
44. Ball, M.O.: Computing network reliability. Oper. Res. 27, 823–838 (1979)
45. Sanso, B., Soumis, F.: Communication and transportation network reliability using routing models. IEEE Trans. Reliab. 40, 29–37 (1991)
46. Shi, V.: Evaluating the performability of tactical communications network. IEEE Trans. Veh. Technol. 53, 253–260 (2004)
47. Chen, A., Yang, H., Lo, H., Tang, W.H.: A capacity related reliability for transportation networks. J. Adv. Transp. 33(2), 183–200 (1999)
48. Dandamudi, S., Lu, J.-C.: Competition driving logistics design with continuous approximation methods. Technical Report. http://www.isye.gatech.edu/apps/research-papers/
49. Ni, W., Lu, J.-C., Kvam, P.: Reliability modeling in spatially distributed logistics systems. IEEE Trans. Reliab. 55(3), 525–534 (2006)
50. Wu, C.F.J., Hamada, M.: Experiments: Planning, Analysis, and Parameter Design Optimization, p. 503. Wiley, New York (2009)
51. Fudenberg, D., Tirole, J.: Game Theory. MIT Press, Cambridge (2000)
52. Chen, W., Lewis, K.: Robust design approach for achieving flexibility in multidisciplinary design. AIAA J. 37(8), 982–989 (1999)
53. Charoensiriwath, C., Lu, J.-C.: Competition under manufacturing service and retail price. Econ. Model. 28(3), 1256–1264 (2011)

Paul Kvam is a professor of statistics at the University of Richmond. He joined UR in 2014 after 19 years as professor at Georgia Tech and 4 years as scientific staff at the Los Alamos National Laboratory. Dr. Kvam received his B.S. in mathematics from Iowa State University in 1984, an M.S. in statistics from the University of Florida in 1986, and his Ph.D. in statistics from the University of California, Davis in 1991. His research interests focus on statistical reliability with applications to engineering, nonparametric estimation, and analysis of complex and dependent systems. He is a fellow of the American Statistical Association and an elected member of the International Statistics Institute.

Paul Kvam received his Ph.D. from the University of California, Davis, in 1991. He was a staff scientist at Los Alamos National Laboratory from 1991–1995, and professor at Georgia Tech from 1995–2014 in the School of Industrial and Systems Engineering. Kvam is currently professor of statistics in the Department of Mathematics and Computer Science at the University of Richmond.

Jye-Chyi (JC) Lu is a professor in the School of Industrial and Systems Engineering (ISyE). He received a Ph.D. in statistics from the University of Wisconsin at Madison in 1988, and joined the faculty of North Carolina State University, where he remained until 1999, when he joined Georgia Institute of Technology. Dr. Jye-Chyi Lu's research areas cover industrial statistics, signal processing, semiconductor and electronic manufacturing, data mining, bioinformatics, supply chain management, logistics planning, and nanotechnology. He has about 93 disciplinary and interdisciplinary publications, which have appeared in both engineering and statistics journals. He has served as an associate editor for Technometrics, IEEE Transactions on Reliability, and Journal of Quality Technology. His work now includes reliability, human-in-the-loop variable selections for large p small n problems, decision, and data analytics.