
3 Statistics for Reliability Modeling

Paul Kvam and Jye-Chyi Lu

Contents
3.1 Introduction and Literature Review . . . 53
3.2 Lifetime Distributions in Reliability . . . 54
3.2.1 Alternative Properties to Describe Reliability . . . 55
3.2.2 Conventional Reliability Lifetime Distributions . . . 55
3.2.3 From Physics to Failure Distributions . . . 55
3.2.4 Lifetime Distributions from Degradation Modeling . . . 56
3.2.5 Censoring . . . 57
3.2.6 Probability Plotting . . . 57
3.3 Analysis of Reliability Data . . . 57
3.3.1 Maximum Likelihood . . . 58
3.3.2 Likelihood Ratio . . . 58
3.3.3 Kaplan–Meier Estimator . . . 59
3.3.4 Degradation Data . . . 60
3.4 System Reliability . . . 61
3.4.1 Estimating System and Component Reliability . . . 62
3.4.2 Stochastic Dependence Between System Components . . . 62
3.4.3 Logistics Systems . . . 63
3.4.4 Robust Reliability Design in the Supply Chain . . . 64
References . . . 64

Abstract

This chapter provides a short summary of fundamental ideas in reliability theory and inference. The first part of the chapter accounts for lifetime distributions that are used in engineering reliability analysis, including general properties of reliability distributions that pertain to lifetime for manufactured products. Certain distributions are formulated on the basis of simple physical properties, and others are more or less empirical. The first part of the chapter ends with a description of graphical and analytical methods to find appropriate lifetime distributions for a set of failure data.

The second part of the chapter describes statistical methods for analyzing reliability data, including maximum likelihood estimation (both parametric and nonparametric) and likelihood ratio testing. Degradation data are more prevalent in experiments in which failure is rare and test time is limited. Special regression techniques for degradation data can be used to draw inference on the underlying lifetime distribution, even if failures are rarely observed.

The last part of the chapter discusses reliability for systems. Along with the components that comprise the system, reliability analysis must take account of the system configuration and (stochastic) component dependencies. System reliability is illustrated with an analysis of logistics systems (e.g., moving goods in a system of product sources and retail outlets). Robust reliability design can be used to construct a supply chain that runs with maximum efficiency or minimum cost.

Keywords
Weibull distribution · System reliability · Empirical likelihood · Residual life · Lifetime data

P. Kvam (✉)
Department of Mathematics and Computer Science, University of Richmond, Richmond, VA, USA
e-mail: [email protected]

J.-C. Lu
The School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA
e-mail: [email protected]

© Springer-Verlag London Ltd., part of Springer Nature 2023
H. Pham (ed.), Springer Handbook of Engineering Statistics, Springer Handbooks, https://doi.org/10.1007/978-1-4471-7503-2_3

3.1 Introduction and Literature Review

In everyday use, words like reliability and quality have meanings that vary depending on the context. In engineering, reliability is defined as the ability of an item to perform its function, usually measured in terms of probability as a function of time. Quality denotes how the item conforms to its specifications, so reliability is a measure of the item's quality over time.

Since the time of Birnbaum and Saunders [1], when system reliability emerged as its own discipline, research has centered on the operation of simple systems with identical parts working independently of each other. Today's systems do not fit this mold; system representation must include multifaceted components with several component states that can vacillate between perfect operation and terminal failure. Not only do components interact within systems, but many systems are dynamic in that the system configuration can be expected to change during its operation, perhaps due to component failures or external stresses. Computer software, for example, changes its failure structure during the course of design, testing, and implementation.

Statistical methods for reliability analysis grew from this concept of system examination, and system reliability is often gauged through component lifetime testing. This chapter reviews the current framework for statistical reliability and considers some modern needs from experimenters in engineering and the physical sciences.

Statistical analysis of reliability data in engineering applications cannot be summarized comprehensively in a single book chapter such as this. The following books (listed fully in the reference section) serve as an excellent basis for a serious treatment of the subject:

1. Statistical Theory of Reliability and Life Testing by Barlow and Proschan [2]
2. Practical Methods for Reliability Data Analysis by Ansell and Phillips [3]
3. Reliability: Probabilistic Models and Statistical Methods by Leemis [4]
4. Applied Reliability by Tobias and Trindade [5]
5. Engineering Reliability by Barlow [6]
6. Reliability for Technology, Engineering, and Management by Kales [7]
7. Statistical Methods for Reliability Data by Meeker and Escobar [8]
8. Reliability Modeling, Prediction, and Optimization by Blischke and Murthy [9]
9. Statistical Methods for the Reliability of Repairable Systems by Rigdon and Basu [10]
10. Bayesian Reliability by Hamada et al. [11]

Some of the books in this list focus on reliability theory, and others focus exclusively on reliability engineering. From the more inclusive books, [8] provides a complete, high-level guide to reliability inference tools for an engineer, and most examples have an engineering basis (usually in manufacturing). For reliability problems closely associated with materials testing, Bogdanoff and Kozin [12] connect the physics of degradation to reliability models. Sobczyk and Spencer [13] also relate fatigue to reliability through probability modeling. For reliability prediction in software performance, Lyu [14] provides a comprehensive guide of engineering procedures for software reliability testing, while a more theoretical alternative by Singpurwalla and Wilson [15] emphasizes probability modeling for software reliability, including hierarchical Bayesian methods. Closely related to reliability modeling in engineering systems, Bedford and Cooke [16] covers methods of probabilistic risk assessment, which is an integral part of reliability modeling for large and complex systems.

Other texts emphasize reliability assessment in a particular engineering field of interest. For statistical reliability in geotechnical engineering, Baecher and Christian [17] is recommended as it details statistical problems with soil variability, autocorrelation (i.e., Kriging), and load/resistance factors. Ohring [18] provides a comprehensive guide to reliability assessment for electrical engineering and electronics manufacturing, including reliability pertaining to degradation of contacts (e.g., crack growth in solder), optical fiber reliability, semiconductor degradation, and mass-transport-induced failure. For civil engineering, Melchers' [19] reliability text has a focus on reliability of structural systems and loads, time-dependent reliability, and resistance modeling.

3.2 Lifetime Distributions in Reliability

While engineering studies have contributed a great deal of the current methods for reliability life testing, an equally great amount exists in the biological sciences, especially relating to epidemiology and biostatistics. Life testing is a crucial component to both fields, but the bio-related sciences tend to focus on mean lifetimes and numerous risk factors. Engineering methods, on the other hand, are more likely to focus on upper (or lower) percentiles of the lifetime distribution as well as the stochastic dependencies between working components. Another crucial difference between the two application areas is that engineering models are more likely to be based on principles of physics that lead to well-known distributions such as Weibull, log-normal, extreme value, and so on.

The failure time distribution is the most widely used probability tool for modeling product reliability in science and industry. If f(x) represents the probability density function for the product's failure time, then its reliability is R(x) = ∫ₓ^∞ f(u) du, and R(t) = 1 − F(t), where F is the cumulative distribution function (CDF) corresponding to f. A quantile is the CDF's inverse; the pth quantile of F is the lifetime value t_p such that F(t_p) = p. To understand the quality of a manufactured product through these lifetime probability functions, it is often useful to consider the notion of aging. For example, the (conditional) reliability of a product that has been working t units of time is
R(x|t) = R(t + x)/R(t),  if R(t) > 0.   (3.1)

The rate of change of R(x|t) is an important metric for judging a product's quality, and the conditional failure rate function h(t) is defined as

h(t) = lim_{x→0} (1/x) [R(t) − R(t + x)]/R(t) = f(t)/R(t).   (3.2)

The cumulative failure rate (sometimes called the hazard function) is H(t) = ∫₀ᵗ h(u) du, and has many practical uses in reliability theory because of its monotonicity and the fact that H(t) = −log R(t).

The failure rate clearly communicates how the product ages during different spans of its lifetime. Many manufactured products have an increasing failure rate, but the rate of increase is rarely stable throughout the product's lifetime. If h(t) remains constant, it is easy to show that the lifetime distribution is exponential, f(x) = θ exp(−θx), x > 0, and the product exhibits no aging characteristics. Many electronic components and other manufactured items have a brief initial period when the failure rate is relatively high and decreases toward a steady state, where it stays until aging causes the rate to increase. This is called a bath-tub failure rate. The period in which early failures occur (called infant mortality) is called the burn-in period, and is often used by manufacturers to age products and filter out defectives (early failures) before making them available to the consumer.

3.2.1 Alternative Properties to Describe Reliability

The failure rate function, reliability function, cumulative hazard function, and probability density describe different aspects of a lifetime distribution. The expected lifetime, or mean time to failure (MTTF), is an important measure for repairable systems. Several alternatives for characterizing properties of the lifetime distribution include:

• Mean residual life L(t) = E_X(X − t | X ≥ t), which is the expected residual life of a component that has already lasted t units of time. If L(t) is less than the expected lifetime μ, the product is exhibiting aging by the time t.
• Reversed hazard rate ν(t) = f(t)/F(t), which provides a different aspect of reliability: the conditional failure frequency at the time just before t, given that the product failed in (0, t] (see Chap. 1 of [20], for example).
• Percentile residual life Q_α = F⁻¹[1 − (1 − α)R(t)] − t, which is the α quantile of the residual life (the conditional lifetime distribution given that the product has lasted t units of time). The median residual life, where α = 1/2, compares closely to L(t).
• Mill's ratio R(x)/f(x) = 1/h(x), used in economics, which is not an ordinary way to characterize reliability, but it is worth noting because of its close connection to the failure rate.

3.2.2 Conventional Reliability Lifetime Distributions

So far, only one distribution (exponential) has been mentioned. Rather than presenting a formal review of commonly used reliability distributions, a summary of commonly applied lifetime distributions is presented in Table 3.1, including the exponential, gamma, Weibull, log-normal, logistic, Pareto, and extreme value. In the table, Γ(t) = ∫₀^∞ x^{t−1} e^{−x} dx is the ordinary gamma function, and IG(t, x) represents the corresponding incomplete gamma function.

For manufacturing centers and research laboratories that conduct lifetime tests on products, lifetime data is an essential element of reliability analysis. However, a great deal of reliability analysis is based on field data, or reliability information sampled from day-to-day usage of the product. In many of these instances, lifetime data is a luxury not afforded to the reliability inference. Instead, historical event data and inspection counts are logged for the data analysis. Consequently, several discrete distributions (e.g., Poisson, binomial, and geometric) are important in reliability applications. Chapter 4 has a more detailed discussion of these and other statistical distributions applied in engineering problems.

Table 3.1 Common lifetime distributions used in reliability data analysis

Distribution | f(t), t > 0 | h(t) | μ | σ² | Parameter space
Exponential | θ e^{−θt} | θ | 1/θ | 1/θ² | θ > 0
Weibull | λκ t^{κ−1} e^{−λt^κ} | λκ t^{κ−1} | λ^{−1/κ} Γ(1 + 1/κ) | λ^{−2/κ}[Γ(1 + 2/κ) − Γ²(1 + 1/κ)] | κ > 0, λ > 0
Gamma | λ^r t^{r−1} e^{−λt}/Γ(r) | λ^r t^{r−1} e^{−λt} / {Γ(r)[1 − IG(r, λt)]} | r/λ | r/λ² | r > 0, λ > 0
Log-normal | (σt√(2π))^{−1} e^{−(log t − μ)²/(2σ²)} | f(t)/R(t) | e^{μ+σ²/2} | e^{2μ+2σ²} − e^{2μ+σ²} | −∞ < μ < ∞, σ > 0
Logistic | e^{−(t−λ)/β} / {β[1 + e^{−(t−λ)/β}]²} | [β(1 + e^{−(t−λ)/β})]^{−1} | λ | (βπ)²/3 | −∞ < λ < ∞, β > 0
Pareto | mθ^m / t^{m+1} | m/t | mθ/(m − 1) | mθ²/[(m − 1)²(m − 2)] | t > θ, m > 0
Extreme value | b^{−1} e^{−(t−a)/b} exp[−e^{−(t−a)/b}] | e^{−(t−a)/b} / {b[exp(e^{−(t−a)/b}) − 1]} | a − bΓ′(1) | (bπ)²/6 | −∞ < a < ∞, b > 0

3.2.3 From Physics to Failure Distributions

Many of the distributions in Table 3.1 are derived based on physical principles. For example, Weibull [21] derived the distribution that takes his name to represent the breaking strength of materials based on the idea that some components are comparable to a chain that is no stronger than its weakest link. From this premise, the distribution can be derived from properties of minimums, in contrast to the extreme value distribution, which can be derived through the properties of maximums (see [22], for example). In a short time after its introduction, the Weibull distribution was successfully applied to numerous modeling problems in engineering and has become the hallmark distribution in applied reliability. A primary reason for its suitability to lifetime analysis is its flexible failure rate; unlike other distributions listed in Table 3.1, the Weibull failure rate is simple to model, easy to demonstrate, and it can be either increasing or decreasing. A mixture of two Weibull distributions can be used to portray a bath-tub failure rate (as long as only one of the shape parameters is less than one). Mudholkar et al. [23] introduce a new shape parameter to a generalized Weibull distribution that allows bath-tub-shaped failure rates as well as a broader class of monotone failure rates.
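This flexibility is easy to verify numerically. The short sketch below uses the Table 3.1 parameterization, f(t) = λκt^{κ−1}e^{−λt^κ}, so that h(t) = λκt^{κ−1}, and computes the hazard of a two-component Weibull mixture; the weights and parameter values are illustrative choices, not values from the chapter.

```python
import numpy as np

def weibull_hazard(t, lam, kappa):
    # Table 3.1 parameterization: h(t) = lam * kappa * t**(kappa - 1);
    # decreasing for kappa < 1, constant for kappa = 1, increasing for kappa > 1.
    return lam * kappa * t ** (kappa - 1)

def mixture_hazard(t, w, lam1, k1, lam2, k2):
    # Hazard of a two-component Weibull mixture, h(t) = f(t) / R(t).
    f = (w * lam1 * k1 * t ** (k1 - 1) * np.exp(-lam1 * t ** k1)
         + (1 - w) * lam2 * k2 * t ** (k2 - 1) * np.exp(-lam2 * t ** k2))
    R = w * np.exp(-lam1 * t ** k1) + (1 - w) * np.exp(-lam2 * t ** k2)
    return f / R

t = np.linspace(0.05, 5.0, 200)
h = mixture_hazard(t, w=0.3, lam1=1.0, k1=0.5, lam2=0.05, k2=3.0)
# With one shape parameter below one, the mixture hazard falls early
# (infant mortality) and rises again late (wear-out): a bath-tub shape.
print(h[0], h.min(), h[-1])
```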
For materials exposed to constant stress cycles with a given stress range, lifetime is measured in number of cycles until failure (N). The Wöhler curve (or S–N curve) relates stress level (S) to N as NS^b = k, where b and k are material parameters (see [13] for examples). By taking logarithms of the S–N equation, we can express cycles to failure as a linear function: Y = log N = log k − b log S. If N is log-normally distributed, then Y is normally distributed and regular regression models can be applied for predicting cycles to failure (at a given stress level). In many settings, the log-normal distribution is applied as the failure time distribution when the corresponding degradation process is based on rates that combine multiplicatively. Despite having a concave-shaped (or upside-down bath-tub-shaped) failure rate, the log-normal is especially useful in modeling fatigue crack growth in metals and composites.

Birnbaum and Saunders [1] modeled the damage to a test item after n cycles as B_n = ζ₁ + ... + ζ_n, where ζ_i represents the damage amassed in the ith cycle. If failure is determined by B_n exceeding a fixed damage threshold value B*, and if the ζ_i are identically and independently distributed, then

P(N ≤ n) = P(B_n > B*) ≈ 1 − Φ[(B* − nμ)/(σ√n)],   (3.3)

where Φ is the standard normal CDF. This happens because B_n will be approximately normal if n is large enough. The reliability function for the test unit is

R(n) ≈ Φ[(B* − nμ)/(σ√n)],   (3.4)

which is called the Birnbaum–Saunders distribution. It follows that

W = μ√N/σ − B*/(σ√N)   (3.5)

has a normal distribution, which leads to accessible implementation in lifetime modeling (see [24] or [12] for more properties).
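The reliability function in (3.4) is a single normal-CDF evaluation. A minimal sketch follows, with the per-cycle damage mean μ, standard deviation σ, and threshold B* all chosen hypothetically for illustration:

```python
import numpy as np
from scipy.stats import norm

def bs_reliability(n, mu, sigma, B_star):
    # Eq. (3.4): R(n) ~ Phi((B* - n*mu) / (sigma * sqrt(n))).
    n = np.asarray(n, dtype=float)
    return norm.cdf((B_star - n * mu) / (sigma * np.sqrt(n)))

# Hypothetical damage accumulation: mean 0.01 and sd 0.05 per cycle,
# with failure once cumulative damage exceeds B* = 1.
cycles = np.array([50, 80, 100, 120, 150])
print(bs_reliability(cycles, mu=0.01, sigma=0.05, B_star=1.0))
```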
3.2.4 Lifetime Distributions from Degradation Modeling

These examples show how the product's lifetime distribution can be implied by knowledge of how it degrades in time. In general, degradation measurements have great potential to improve lifetime data analysis, but they also introduce new problems to the statistical inference. Lifetime models have been researched and refined for many manufactured products that are put on test. On the other hand, degradation models tend to be empirical (e.g., nonparametric) or based on simple physical properties of the test item and its environment (e.g., the Paris crack law, Arrhenius rule, and power law), which often lead to obscure lifetime models. Meeker and Escobar [8] provide a comprehensive guide to degradation modeling and show that many valid degradation models will not yield lifetime distributions with closed-form solutions. Functions for estimating parameters in reliability models are available in R packages, for example, the "WeibullR" package [25].

In a setting where the lifetime distribution is known, but the degradation distribution is unknown, degradation information does not necessarily complement the available lifetime data. For example, the lifetime data may be distributed as Weibull, but conventional degradation models will contradict the Weibull assumption (actually, the rarely used reciprocal Weibull distribution for degradation with a fixed failure threshold leads to Weibull lifetimes).

In selecting a degradation model based on longitudinal measurements of degradation, monotonic models are typically chosen under the assumption that degradation is a one-way process.
In some cases, such as the measured luminosity of light displays (vacuum fluorescent displays and plasma display devices), the degradation is not necessarily monotonic because, during the first phase of product life, impurities inside the light display's vacuum are slowly burned off and luminosity increases. After achieving a peak level, usually before 100 h of use, the light slowly degrades in a generally monotonic fashion. See Bae and Kvam [26, 27] for details on the modeling of non-monotonic degradation data. Degradation data analysis is summarized in Sect. 3.3.4.

3.2.5 Censoring

For most products tested in regular use conditions (as opposed to especially harsh conditions), the allotted test time is usually too short to allow the experimenter to witness failure times for the entire set that is on test. When the item is necessarily taken off test after a certain amount of test time, its lifetime is right censored. This is also called type I censoring. Type II censoring corresponds to tests that are stopped after a certain number of failures (say k out of n, 1 ≤ k ≤ n) occur.

Inspection data are lifetimes only observed at fixed times of inspection. If the inspection reveals a failed test item, it must be left censored at that fixed time. Items that are still working at the time of the last inspection are necessarily right censored. This is sometimes called interval censoring.

Censoring is a common hindrance in engineering applications. Lifetime data that are eclipsed by censoring cause serious problems in the data analysis, but it must be kept in mind that each observation, censored or not, contributes information and increases precision in the statistical inference overall.

3.2.6 Probability Plotting

Probability plotting is a practical tool for checking the adequacy of a fitted lifetime distribution to a given set of data. The rationale is to transform the observed data according to a given distribution, so a linear relationship exists if the distribution was specified correctly. In the past, probability plotting paper was employed to construct the transformation, but researchers can find plotting options on many computer packages that feature data analysis (e.g., SAS, S-Plus, Matlab, Minitab, and SPSS), making the special plotting paper nearly obsolete. Despite the applicability of this technique, few engineering texts feature in-depth discussion of probability plotting, and statistics texts tend to focus on theory more than implementation. Rigdon and Basu [10] provide a thorough discussion of basic probability plotting, and Atkinson [28] provides a substantial discussion of the subject in the context of regression diagnostics. Advanced plotting techniques even allow for censored observations (see Waller and Turnbull [29], for example).

To illustrate how the plot works, we first linearize the CDF of the distribution in question. For example, if we consider the two-parameter Weibull distribution, the quantile function is

t_p = [−log(1 − p)/λ]^{1/κ},   (3.6)

which implies that the plot of log t has a linear relationship with the log–log function of p = F(t). Hence, Weibull probability plots are graphed on log–log probability paper. Figure 3.1 shows a Weibull plot (using Minitab) for the fatigue life of 67 alloy specimens that failed before n = 300,000 cycles. This dataset is from Meeker and Escobar [8], and the plot also includes 95% confidence bands that identify the uncertainty associated with the plot. In this case, the curvature (especially noticeable on the left side) suggests that the Weibull distribution does not provide an adequate fit.

Fig. 3.1 Weibull probability plot for alloy T7987 fatigue life [8]
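The linearization in (3.6) is all a Weibull plot needs: plot log t against log[−log(1 − p̂)] and look for a straight line. A minimal sketch for complete (uncensored) data follows; the median-rank plotting positions and the synthetic sample are illustrative assumptions, not the Minitab procedure behind Fig. 3.1.

```python
import numpy as np

def weibull_plot_points(failures):
    # Median-rank plotting positions p_i = (i - 0.3)/(n + 0.4), one common
    # convention among several; assumes complete (uncensored) data.
    t = np.sort(np.asarray(failures, dtype=float))
    n = len(t)
    p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    x = np.log(t)                    # log lifetime
    y = np.log(-np.log(1.0 - p))     # linearized Weibull CDF
    return x, y

rng = np.random.default_rng(1)
sample = 100.0 * rng.weibull(2.0, size=67)    # synthetic stand-in data
x, y = weibull_plot_points(sample)
slope, intercept = np.polyfit(x, y, 1)        # near-linearity supports the model
print(f"estimated shape (slope): {slope:.2f}")  # should be close to 2
```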
3.3 Analysis of Reliability Data

Once the lifetime distribution of a test item is determined, the data can be used to estimate important properties of the distribution, including mean, standard deviation, failure rate, reliability (at a fixed time t), and upper or lower quantiles that pertain to early or late failure times.

There are two fundamental methods for approaching the analysis of lifetime data: Bayesian methods and, for the lack of an optimal term, non-Bayesian methods. Although Bayesian methods are accepted widely across many fields of engineering and physical science, non-Bayesian statistics, mostly frequentist and likelihood methods, are still an industry standard. This chapter will not detail how methods of statistical inference are derived in various frameworks of statistical ideology. Accelerated life testing, an important tool for designing reliability experiments, is discussed in detail in Chap. 22 and is only mentioned in this chapter. Instead, a summary of important procedures is outlined for statistical estimation, confidence intervals, and hypothesis tests.

3.3.1 Maximum Likelihood

Parametric likelihood methods examine a family of probability distributions and choose the parameter combination that best fits the data. A likelihood function is generally defined by the observed probability model; if the lifetime data X₁, ..., Xₙ are independently and identically distributed (IID) with density function f_X(x; θ), the likelihood function is

L(θ) = ∏_{i=1}^{n} f_X(x_i; θ)   (3.7)

and the maximum likelihood estimator (MLE) is the value of θ that maximizes L(θ). Single-parameter distributions such as the exponential generate easily solved MLEs, but distributions with two or more parameters are not often straightforward. Samples that are not IID lead to complicated likelihood functions, and numerical methods are usually employed to solve for MLEs. If an observation x represents a right censoring time, for example, then P(censor) = R(x), and this information contributes the term R(x) to the likelihood instead of f(x). Leemis [4] provides a thorough introduction to the likelihood theory for reliability inference.

For most parametric distributions of interest, the MLE θ̂ has helpful limit properties. As the sample size n → ∞, √n(θ̂ − θ) → N(0, i(θ)⁻¹), where

i(θ) = E[(∂/∂θ) log f]² = −E[(∂²/∂θ²) log f]   (3.8)

is the estimator's Fisher information. For other parameters of interest, say ψ(θ), we can construct approximate confidence intervals based on an estimated variance using the Fisher information:

σ̂²(ψ(θ̂)) ≈ [ψ′(θ̂)]² / i(θ̂).   (3.9)

This allows the analyst to make direct inference for the component reliability [ψ(θ; t) = R_θ(t), for example].

Example MLE for failure rate with exponential data (X₁, ..., Xₙ): The likelihood is based on f(x) = θ exp(−θx), where θ > 0, and is easier to maximize in its natural log form

log L(θ) = log ∏_{i=1}^{n} θe^{−θx_i} = n log θ − θ ∑_{i=1}^{n} x_i.

The maximum occurs at θ̂ = 1/x̄, and the Fisher information is i(θ) = n/θ², so an approximate (1 − α) confidence interval is

1/x̄ ± z_{α/2} [i(θ̂)]^{−1/2} = 1/x̄ ± z_{α/2} θ̂/√n = 1/x̄ ± z_{α/2} (x̄√n)^{−1}.   (3.10)

In this case, the approximation above is surpassed by an exact interval that can be constructed from the statistic 2θ(X₁ + ... + Xₙ), which has a chi-squared distribution with 2n degrees of freedom. The confidence statement

P[χ²_{2n}(1 − α/2) ≤ 2θ(X₁ + ... + Xₙ) ≤ χ²_{2n}(α/2)] = 1 − α,

where χ²_{2n}(α) represents the upper-α critical value (the 1 − α quantile) of the chi-squared distribution with 2n degrees of freedom, leads to a 1 − α confidence interval for θ of

[χ²_{2n}(1 − α/2)/(2nx̄), χ²_{2n}(α/2)/(2nx̄)].   (3.11)
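A minimal numerical version of this example is sketched below. Note that scipy.stats.chi2.ppf returns ordinary lower-tail quantiles, so the upper-critical-value notation χ²_{2n}(α) above maps to ppf(1 − α, 2n); the simulated sample and true rate are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def exp_rate_exact_ci(x, alpha=0.05):
    # MLE theta_hat = 1/xbar; exact CI from 2*theta*sum(X) ~ chi-square(2n),
    # as in Eq. (3.11).
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    theta_hat = 1.0 / xbar
    lo = chi2.ppf(alpha / 2, 2 * n) / (2 * n * xbar)
    hi = chi2.ppf(1 - alpha / 2, 2 * n) / (2 * n * xbar)
    return theta_hat, (lo, hi)

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1 / 0.5, size=30)   # true theta = 0.5
theta_hat, (lo, hi) = exp_rate_exact_ci(sample)
print(f"theta_hat = {theta_hat:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```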
3.3.2 Likelihood Ratio

Uncertainty bounds, especially for multidimensional parameters, are more directly computed using the likelihood ratio (LR) method. Here we consider θ to have p components. Confidence regions are constructed by actual contours (in p dimensions) of the likelihood function. Define the LR as

Λ(θ, θ̂) = L(θ)/L(θ̂),   (3.12)

where θ̂ is the MLE of L. If θ is the true value of the parameter, then

−2 log Λ ∼ χ²_p,

where χ²_p is the chi-squared distribution with p degrees of freedom. A (1 − α) confidence region for θ is

{θ : −2 log Λ(θ, θ̂) ≤ χ²_p(α)},   (3.13)

where χ²_p(α) represents the 1 − α quantile of the χ²_p distribution.

Example Confidence region for Weibull parameters: In this case, the MLEs for θ = (λ, r) must be computed using numerical methods. Many statistical software packages
compute such estimators along with confidence bounds. With (λ̂, r̂), L(λ̂, r̂) standardizes the likelihood ratio so that 0 ≤ Λ(θ, θ̂) ≤ 1 and Λ peaks at (λ, r) = (λ̂, r̂). Figure 3.2 shows 50%, 90%, and 95% confidence regions for the Weibull parameters based on a simulated sample of n = 100.

Fig. 3.2 1 − α = 0.50, 0.90, and 0.95 confidence regions for Weibull parameters (λ, r) based on simulated data of size n = 100

Empirical likelihood provides a powerful method for providing confidence bounds on parameters of inference without necessarily making strong assumptions about the lifetime distribution of the product (i.e., it is nonparametric). This chapter cannot afford the space needed to provide the reader with an adequate description of its method and theory; Owen [30] provides a comprehensive study of empirical likelihood, including its application to lifetime data. Rather, we will summarize the fundamentals of nonparametric reliability estimation below.

3.3.3 Kaplan–Meier Estimator

For many modern reliability problems, it is not possible to find a lifetime distribution that adequately fits the available failure data. As a counterpart to parametric maximum likelihood theory, the nonparametric likelihood, as defined by Kiefer and Wolfowitz [31], circumvents the need for continuous densities:

L(F) = ∏_{i=1}^{n} [F(x_i) − F(x_i⁻)],

where F is the cumulative distribution of the sample X₁, ..., Xₙ. Kaplan and Meier [32] developed a nonparametric maximum likelihood estimator for F that allows for censored data in the likelihood. The prevalence of right censoring (when a reliability test is stopped at a time t so the component's failure time is known only to be greater than t) in reliability studies, along with the increasing computing capabilities provided to reliability analysts, has made nonparametric data analysis more mainstream in reliability studies.

Suppose we have a sample of possibly right-censored lifetimes. The sample is denoted {(X_i, δ_i), i = 1, ..., n}, where X_i is the time measurement and δ_i indicates whether the time is an observed lifetime (δ_i = 1) or a right censor time (δ_i = 0). The likelihood

L(F) = ∏_{i=1}^{n} [1 − F(x_i)]^{1−δ_i} dF(x_i)^{δ_i}

can be shown to be maximized by assigning probability mass only to the observed failure times in the sample, according to the rule

F̂(t) = 1 − ∏_{x_j ≤ t} (1 − d_j/m_j),

where d_j is the number of failures at time x_j and m_j is the number of test items that had survived up to the time x_j⁻ (i.e., just before the time x_j). A pointwise (approximate) confidence interval can be constructed based on the Kaplan–Meier variance estimate

σ̂²(t_i) = [1 − F̂(t_i)]² ∑_{t_j ≤ t_i} d_j / [m_j(m_j − d_j)].

Nair [33] showed that large-sample approximations work well in a variety of settings, but for medium samples, the following serves as an effective (1 − α)-level confidence interval for the survival function 1 − F(t):

1 − F̂(t) ± √{ [−ln(α/2)/(2n)] [1 − F̂(t)]² [1 + σ̂²(t)] }.

We illustrate in Fig. 3.3 the nonparametric estimator with strength measurements (in coded units) for 48 pieces of weathered cord, along with 95% pointwise confidence intervals for the cord strength. The data, found in Crowder et al. [34], include seven measurements that were damaged and yielded strength measurements that are considered right censored.
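A compact sketch of the product-limit rule and the Greenwood-style variance above, on a toy right-censored sample (a production analysis would more likely use a vetted package such as R's survival library or Python's lifelines):

```python
import numpy as np

def kaplan_meier(times, events):
    # events[i] = 1 for an observed failure, 0 for a right-censored time.
    # Returns (x_j, S_hat(x_j), variance estimate) at each failure time.
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    S, var_sum, rows = 1.0, 0.0, []
    for xj in np.unique(times[events == 1]):
        m = np.sum(times >= xj)                     # m_j: at risk just before x_j
        d = np.sum((times == xj) & (events == 1))   # d_j: failures at x_j
        S *= 1.0 - d / m                            # product-limit update
        var_sum += d / (m * (m - d))
        rows.append((xj, S, S**2 * var_sum))
    return rows

t = [3, 5, 5, 7, 9, 11, 12, 14]
e = [1, 1, 0, 1, 1, 0, 1, 0]        # three right-censored observations
for xj, S, v in kaplan_meier(t, e):
    print(f"t = {xj:4.1f}   S_hat = {S:.3f}   var = {v:.4f}")
```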
Fig. 3.3 Estimated Kaplan–Meier survival function (1 − F) for cord strength data (in coded units), along with 95% pointwise confidence intervals

3.3.4 Degradation Data

As an alternative to traditional life testing, degradation tests can be effective in assessing product reliability when measurements of degradation leading to failure are observable and quantifiable. Meeker and Escobar [8] provide the most comprehensive discussion on modeling and analyzing degradation data for manufactured items that have either a soft failure threshold (i.e., an arbitrary fixed point at which the device is considered to have failed) or items that degrade before reaching a failed state. In the electronics industry, product lifetimes are far too long to test in a laboratory; some products in the lab will tend to become obsolete long before they actually fail. In such cases, accelerated degradation testing (ADT) is used to hasten product failure. In the manufacture of electronic components, this is often accomplished by increasing voltage or temperature. See Chap. 22 for a review of recent results in ALT.

Suppose the degradation path is modeled as

y_i(t) = η_i(t) + ε_i(t),   (3.14)

where η_i is the path of the ith tested unit (i = 1, ..., n) and ε_i represents an error term that has a distribution H(ε; Σ) with parameter Σ unknown. Failure would be declared once y_i(t) passes a certain degradation threshold, say y*. The lifetime distribution can be computed as (assuming degradation is an increasing function)

F(t) = P[y(t) > y*] = P[ε_i(t) > y* − η_i(t)].   (3.15)

If η is a deterministic function, the lifetime distribution is driven completely by the error term. This is not altogether realistic. In most cases, item-to-item variability exists, and the function η contains random coefficients; that is, η(t) = η(t, λ, θ), where λ is a vector of unknown parameters (common to all units) and θ is a vector of random coefficients which have a distribution G (with further unknown parameters β), so that realizations of θ change from unit to unit. With an accumulated set of unknown parameters (λ, β, Σ), this makes for a difficult computation of the lifetime distribution. Numerical methods and simulations are typically employed to generate point estimates and confidence statements.

Least squares or maximum likelihood can be used to estimate the unknown parameters in the degradation model. To estimate F(t₀), one can simulate M degradation curves (choosing M to be large) from the estimated regression by generating M random coefficients θ₁, ..., θ_M from the estimated distribution G(θ; β̂). Next, compute the estimated degradation curve for y_i based on the model with θ_i and λ̂: y_i(t) = η_i(t; λ̂, θ_i). Then F̂(t₀) is the proportion of the M generated curves that have reached the failure threshold y* by time t₀.

Meeker and Escobar use bootstrap confidence intervals for measuring the uncertainty in the lifetime distribution estimate. Their method follows the general algorithm for nonparametric bootstrap confidence intervals described in Efron and Tibshirani [35]. There are numerous bootstrap sampling methods for various uncertainty problems posed by complex models. This algorithm uses a nonparametric bootstrap sampling procedure which resamples n of the sample degradation curves with replacement (i.e., some curves may not be represented in the sample while others may be represented multiple times). This resampled set will be termed the bootstrap sample in the following procedure for constructing confidence intervals.

1. Compute estimates of the parameters β, λ, Σ.
2. Use simulation (as above) to construct F̂(t₀).
3. Generate N ≥ 1000 bootstrap samples, and for each one, compute estimates F̂⁽¹⁾(t₀), ..., F̂⁽ᴺ⁾(t₀). This is done as before, except now the M simulated degradation paths are constructed with an error term generated from H(ε; Σ̂) to reflect variability in any single degradation path.
4. With the collection of bootstrap estimates from step 3, compute a 1 − α confidence interval for F(t₀) as [F̂_l(t₀), F̂_u(t₀)], where the indexes 1 ≤ l ≤ u ≤ N are calculated as l/N = Φ[2Φ⁻¹(p₀) + Φ⁻¹(α/2)] and u/N = Φ[2Φ⁻¹(p₀) + Φ⁻¹(1 − α/2)], and p₀ is the proportion of bootstrap estimates of F(t₀) less than F̂(t₀).
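The Monte Carlo estimate of F(t₀) in step 2 is straightforward to sketch. The fragment below assumes a hypothetical one-parameter linear path η(t; θ) = θt with a lognormal random slope standing in for the fitted G(θ; β̂), and it ignores the measurement-error term for brevity:

```python
import numpy as np

def degradation_cdf_mc(t0, y_star, log_slope_mean, log_slope_sd,
                       M=100_000, seed=0):
    # Simulate M random-coefficient paths eta(t; theta) = theta * t and
    # return the proportion that cross the threshold y* by time t0.
    rng = np.random.default_rng(seed)
    theta = rng.lognormal(mean=log_slope_mean, sigma=log_slope_sd, size=M)
    # Monotone increasing path: crossing by t0 happens iff eta(t0) >= y*.
    return np.mean(theta * t0 >= y_star)

# Hypothetical fitted values: median slope 0.05 (log-scale sd 0.5), y* = 10.
for t0 in (50, 100, 200, 400):
    Fhat = degradation_cdf_mc(t0, y_star=10.0,
                              log_slope_mean=np.log(0.05), log_slope_sd=0.5)
    print(f"F_hat({t0}) = {Fhat:.3f}")
```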
Procedures based on realistic degradation models can obviously grow to be computationally cumbersome, but for important applications the increase in statistical efficiency can be dramatic. In the past, these computations have impeded degradation analysis from being a feature of reliability problem-solving. Such analyses are easier to implement now, and the reliability analyst need not be coerced into using an overly simplistic model – for instance, a linear model that does not allow for random coefficients.

3.4 System Reliability

A system is an arrangement of components that work together for a common goal. So far, the discussion has fixated on the lifetime analysis of a single component, so this represents an extension of single-component reliability study. At the simplest level, a system contains n components of an identical type that are assumed to function independently. The mapping of component outcomes to system outcomes is through the system's structure function. The reliability function describes the system reliability as a function of component reliability.

A series system is such that the failure of any of the n components in the working group causes the system to fail. If the probability that a single component fails in its mission is p, the probability the system fails is 1 − P(system succeeds) = 1 − P(all n components succeed) = 1 − (1 − p)ⁿ. More generally, in terms of component failure probabilities (p₁, ..., pₙ), the system reliability function Ψ is

Ψ(p₁, ..., pₙ) = ∏_{i=1}^{n} (1 − p_i).   (3.16)

A parallel system is just the opposite; it fails only after every one of its n working components fails. The system reliability is then

Ψ(p₁, ..., pₙ) = 1 − ∏_{i=1}^{n} p_i.   (3.17)

The parallel system and series system are special cases of a k-out-of-n system, which is a system that works as long as at least k out of its n components work. Assuming p_i = p, i = 1, ..., n, the reliability of a k-out-of-n system is

Ψ(p) = ∑_{i=k}^{n} C(n, i) (1 − p)^i p^{n−i},   (3.18)

where C(n, i) denotes the binomial coefficient. Of course, most component arrangements are much more complex than a series or parallel system. With just three components, there are five unique ways of arranging the components in a coherent way (that is, so that each component success contributes positively to the system reliability). Figure 3.4 shows the system structure of those five arrangements in terms of a logic diagram, including a series system (1), a 2-out-of-3 system (3), and a parallel system (5). Note that the 2-out-of-3 system cannot be diagrammed with only three components, so each component is represented twice in the logic diagram. Figure 3.5 displays the corresponding reliabilities, as a function of the component reliability 0 ≤ p ≤ 1, of those five systems. Fundamental properties of coherent systems are discussed in [2, 4].

Fig. 3.4 Five unique systems of three components: (1) is series, (3) is 2-out-of-3, and (5) is parallel

Fig. 3.5 System reliabilities of the five system configurations in Fig. 3.4, from the series system (1) to the parallel system (5)
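As a sketch, the reliability functions (3.16)–(3.18) take only a few lines; evaluating them at an illustrative component failure probability p = 0.1 reproduces the ordering of the curves in Fig. 3.5:

```python
from math import comb

def series_reliability(p_fail):
    # Eq. (3.16): every component must work.
    r = 1.0
    for p in p_fail:
        r *= 1.0 - p
    return r

def parallel_reliability(p_fail):
    # Eq. (3.17): the system fails only if all components fail.
    q = 1.0
    for p in p_fail:
        q *= p
    return 1.0 - q

def k_out_of_n_reliability(k, n, p):
    # Eq. (3.18): at least k of n identical components work.
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k, n + 1))

p = 0.1                                   # illustrative failure probability
print(series_reliability([p] * 3))        # system (1): 0.729
print(k_out_of_n_reliability(2, 3, p))    # system (3): 0.972
print(parallel_reliability([p] * 3))      # system (5): 0.999
```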
3.4.1 Estimating System and Component Reliability

In many complex systems, the reliability of the system can be computed through the reliability of the components along with the system's structure function. If the exact reliability is too difficult to compute explicitly, reliability bounds might be achievable based on minimum cut sets (MCS) and minimum path sets (MPS). An MPS is the collection of the smallest component sets that are required to work in order to keep the system working. An MCS is the collection of the smallest component sets that are required to fail in order for the system to fail. Table 3.2 shows the minimum cut sets and path sets for the three-component systems from Fig. 3.4.

Table 3.2 Minimum cut sets and path sets for the systems in Fig. 3.4

System | Minimum path sets | Minimum cut sets
1 | {A,B,C} | {A}, {B}, {C}
2 | {A,B}, {C} | {A,C}, {B,C}
3 | {A,B}, {A,C}, {B,C} | {A,B}, {A,C}, {B,C}
4 | {A,B}, {A,C} | {A}, {B,C}
5 | {A}, {B}, {C} | {A,B,C}

In most industrial systems, components have different roles and varying reliabilities, and often the component reliability depends on the working status of other components. System reliability can be simplified through fault-tree analyses (see Chap. 7 of [16], for example), but uncertainty bounds for system reliability are typically determined through simulation.

In laboratory tests, component reliabilities are determined and the system reliability is computed as a function of the statistical inference of component lifetimes. In field studies, the tables are turned. Component manufacturers seeking reliability data outside laboratory tests look to component lifetime data within a working system. For a k-out-of-n system, for example, the system lifetime represents an order statistic of the underlying distribution function. That is, if the ordered lifetimes form a set of independent and identically distributed components (X_{1:n} ≤ X_{2:n} ≤ ... ≤ X_{n:n}), then X_{n−k+1:n} represents the k-out-of-n system lifetime. The density function for X_{r:n} is

f_{r:n}(t) = r C(n, r) F(t)^{r−1} [1 − F(t)]^{n−r} f(t),  t > 0.   (3.19)

Kvam and Samaniego [36] derived the nonparametric maximum likelihood estimator for F(t) based on a sample of k-out-of-n system data, and showed that the MLE F̂(t) is consistent. If the ith system (i = 1, ..., m) observed is a k_i-out-of-n_i system, the likelihood can be represented as

L(F) = ∏_{i=1}^{m} f_{k_i:n_i}(t_i),   (3.20)

and numerical methods are employed to find F̂. Huang [37] investigated the asymptotic properties of this MLE, and Chen [38] provides an ad hoc estimator that examines the effects of censoring.

Compared to individual component tests, observed system lifetimes can be either advantageous or disadvantageous. With an equal number of k-out-of-n systems at each 1 ≤ k ≤ n, Takahasi and Wakimoto [39] showed that the estimate of MTTF is superior to that of an equal number of individual component tests. With an unbalanced set of system lifetimes, no such guarantee can be made. If only series systems are observed, Kvam and Samaniego [40] show how the uncertainty in F̂(t) is relatively small in the lower quantiles of F (where system failures are observed) but explodes in the upper quantiles.
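To make (3.19) concrete, the sketch below compares the order-statistic density with a finite-difference Monte Carlo estimate for 2-out-of-3 systems of exponential components (system lifetime X_{2:3}); the rate, the evaluation point, and the bin width are illustrative:

```python
import numpy as np
from math import comb

def order_stat_density(t, r, n, F, f):
    # Eq. (3.19): density of X_{r:n}; a k-out-of-n system has r = n - k + 1.
    return r * comb(n, r) * F(t) ** (r - 1) * (1.0 - F(t)) ** (n - r) * f(t)

theta, n, k = 1.0, 3, 2
r = n - k + 1                              # 2-out-of-3 -> second order statistic
F = lambda t: 1.0 - np.exp(-theta * t)
f = lambda t: theta * np.exp(-theta * t)

rng = np.random.default_rng(2)
systems = np.sort(rng.exponential(1 / theta, size=(200_000, n)), axis=1)[:, r - 1]
t0, dt = 0.5, 0.02
mc = np.mean(np.abs(systems - t0) < dt / 2) / dt   # crude density estimate at t0
print(mc, order_stat_density(t0, r, n, F, f))      # both near 0.87
```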
3.4.2 Stochastic Dependence Between System Components

Almost all basic reliability theory is based on systems with independently operating components. For realistic modeling of complex systems, this assumption is often impractical; system components typically operate at a level related to the quality and operational state of the other system components.

External events that cause the simultaneous failure of component groups are a serious consideration in reliability analysis of power systems. This can be a crucial point in systems that rely on built-in component redundancy to achieve high target system reliability. Shock models, such as those introduced by Marshall and Olkin [41], can be employed to demonstrate how multiple component failures can occur. An extra failure process is added to the otherwise independent component failure processes, representing the simultaneous failure of one or more components, thus making the component lifetimes positively dependent. This is the basis for most dependent failure models in probabilistic risk assessment, including common cause failure models used in the nuclear industry (alpha-factor model, beta-factor model, and binomial failure rate model). See Chap. 8 of Bedford and Cooke [16] for discussion about how these models are used in risk assessment.

In dynamic systems, where system configurations and component reliability can change after an external event or a failure of one or more of the system components, the shock model approach cannot be applied effectively. In some applications, a load-share model applies. Early applications of the load-share system models were investigated by Daniels [42] for studying the reliability of composite materials in the textile industry. Yarns and cables fail after the last fiber (or wire) in the bundle breaks; thus a bundle of fibers can be considered a parallel system subject to a constant tensile load. An individual fiber fails in time with an individual rate that
depends on how the unbroken fibers within the bundle share the load of this stress. Depending on the physical properties of the fiber composite, this load sharing has different meanings in the failure model. Yarn bundles or untwisted cables tend to spread the stress load uniformly after individual failures, which defines an equal load-share rule, implying the existence of a constant system load that is distributed equally among the working components.

As expected, a load-sharing structure within a system can increase reliability (if the load distribution saves the system from failing automatically), but reliability inference is hampered even by the simplest model. Kvam and Peña [43] show how the efficiency of the load-share system, as a function of component dependence, varies between that of a series system (equivalent to sharing an infinite load) and a parallel system (equivalent to sharing zero load).

3.4.3 Logistics Systems

Numerous studies have examined fundamental problems in network reliability [44], system performance degradation, and workload rerouting for telecommunication, power, and transportation networks [45, 46]. In comparison, the literature on modeling logistics system reliability or performance degradation is scarce. Logistics systems that transport goods, energy (e.g., electricity and gas), water, sewage, money, or information from origins to destinations are critical to every nation's economic prosperity. Unlike the hub in the typical Internet or telecommunication network, where the messages are not mixed together, logistics distribution centers (DCs) tend to mix products from various sources for risk-pooling purposes [47]. Past studies [48] of road network reliability mainly addressed connectivity and travel time reliability. These developments have limited use in providing a first-cut analysis for system-level planning that involves robust logistics network design to meet reliability requirements, or supply chain cost and delivery time evaluation for contract decisions [49].

Consider a logistics network consisting of many suppliers providing goods to several DCs, which support store operations to meet customer demands. The reliability of such a network can be evaluated in terms of the probability of delivering goods to stores within a prespecified time limit t₀. Traveling time in transport routes contains uncertainty, as does the processing time for products shipped through DCs. Random traveling time is a function of routing distances, road and traffic conditions, and possible delays from seaport or security checkpoint inspections. Traveling distance depends on the configuration of logistics networks. Some retail chains use single-layer DCs, but others use multiple-layer DCs similar to airline hubs (e.g., regional DCs and global DCs) in aggregating various types of goods. Vehicle routing procedures typically involve trucks that carry similar products to several stores in an assigned region. Different products are consolidated in shipment for risk-pooling purposes and to more easily control delivery time and store-docking operations.

When one DC cannot meet the demands from its regional stores (due to demand increase or the DC's limited capability), other DCs provide backup support to maintain the overall network's service reliability. Focusing on the operations between DCs and stores, Ni et al. [49] defined the following network reliability as a weighted sum of the individual reliabilities from each DC's operations:

r_{system,k} = [ ∑_{i=1, i≠k}^{M} d_i P(T*_{m,i} < t₀) + ∑_{i=1, i≠k}^{M} p_i d_k P(T*_{m,k,i} < t₀) ] / ∑_{i=1}^{M} d_i,   (3.21)

where d_i is the demand aggregated at the ith DC, T*_{m,i} is the motion time defined as the sum of traveling time from DC_i to its assigned stores (including material processing time at DC_i), p_i is the proportion of products rerouted from DC_k through DC_i due to the limited capability in DC_k, and T*_{m,k,i} is the modified motion time including the rerouted traveling time.

For modeling the aggregated demand d_i and calculating routing distance, Ni et al. [49] proposed a multiscale approximation model to quantify demand patterns at spatially clustered stores. Then, they evaluated product rerouting strategies for maintaining the system service reliability defined in (3.21). Based on the store locations of a major retail chain, several examples show the importance of designing a robust logistics network to limit service reliability degradation when a contingency (e.g., multiple DC failure) occurs in the network. Future work (a computational sketch of (3.21) follows this list) includes:

1. Modeling the low-probability but high-impact contingency in the DCs [49] and routes for calculating their relative importance to network reliability.
2. Examining the trade-off between the cost of adding more DCs and the improvement of service reliability.
3. Resolving the domino effect when the added workload to DCs after a local DC failure causes further DC failures due to faulty predetermined rules of rerouting to maintain system reliability (e.g., the 2003 electricity blackout in the northeastern region of the USA).
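Here is a minimal sketch of the weighted service reliability in (3.21), with the demands, on-time probabilities, and rerouting proportions all invented for illustration (in practice, the probabilities P(T* < t₀) would come from fitted motion-time distributions):

```python
import numpy as np

def network_reliability(k, d, p_ontime, p_reroute, p_ontime_reroute):
    # Eq. (3.21): d[i] = demand at DC i; p_ontime[i] = P(T*_{m,i} < t0);
    # p_reroute[i] = share of DC k's demand sent through DC i when DC k
    # is short; p_ontime_reroute[i] = P(T*_{m,k,i} < t0).
    d = np.asarray(d, dtype=float)
    others = [i for i in range(len(d)) if i != k]
    direct = sum(d[i] * p_ontime[i] for i in others)
    backup = sum(p_reroute[i] * d[k] * p_ontime_reroute[i] for i in others)
    return (direct + backup) / d.sum()

d = [120.0, 80.0, 100.0]                  # demand aggregated at M = 3 DCs
r = network_reliability(k=1, d=d,         # DC 1 is the limited DC
                        p_ontime=[0.95, 0.0, 0.92],
                        p_reroute=[0.6, 0.0, 0.4],
                        p_ontime_reroute=[0.85, 0.0, 0.80])
print(f"r_system,1 = {r:.3f}")            # ~0.908 for these inputs
```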
3.4.4 Robust Reliability Design in the Supply Chain

Past studies of robust parameter design [50] focused on product quality issues and assumed that all the controllable variables are under single ownership. Recent outsourcing trends in automobile and electronic manufacturing processes motivate the presentation in this section. In an automobile manufacturing enterprise system, various parts suppliers have control of variables determining quality and reliability. Most of the automobile supply chain systems assemble these parts into a subsystem and then move these systems to other locations owned by different partners for the final system-level assembly and testing. Every segment of the assembly operation controls a subset of variables leading to different levels of system reliability. Because the warranty policy is addressed to all of the part manufacturing and assembly processes in making the final product, it is important to extend the robust parameter design concept to supply-chain-oriented manufacturing processes.

Supply chain partners have their own operation objectives (e.g., maximize the profit of manufacturing parts to supply several automobile companies). Some of the objectives are aligned to manufacturing a specific type of product, but there are many potential situations with conflicting objectives. When there is no single ownership of all controllable variables in the manufacturing processes, negotiation is needed to resolve potential conflicts. Game theory [51] is commonly used in supply chain contract decisions. Following the framework of Chen and Lewis [52], we can decompose the set of controllable variables into a few subsets owned by distinct partners and formulate the objectives of these partners. The supply chain manager can define the product quality and reliability measures and build models to link them to the controllable and uncontrollable variables that are seen in robust parameter design.

Different negotiation situations (e.g., the final product assembly company has more bargaining power than other partners) will lead to distinct levels selected for the controllable variables (see Charoensiriwath and Lu [53], for example, in negotiations). As a result, the reliability of the final product can vary. Designing a supply chain system that leads to the most reliable products (with minimum cost) presents an acute challenge, and warranty policies can be designed correspondingly. Because parts and subsystems are made by various partners, warranty responsibilities for certain parts are distributed among partners under the negotiated supply chain contracts.

References

1. Birnbaum, Z.W., Saunders, S.C.: A new family of life distributions. J. Appl. Probab. 6, 319–327 (1969)
2. Barlow, R.E., Proschan, F.: Statistical Theory of Reliability and Life Testing. Holt, Rinehart, Austin (1975)
3. Ansell, J.I., Phillips, M.J.: Practical Methods for Reliability Data Analysis. Oxford University Press, Oxford (1994)
4. Leemis, L.: Reliability: Probabilistic Models and Statistical Methods. Prentice Hall, Englewood (2009)
5. Tobias, P.A., Trindade, D.C.: Applied Reliability. CRC, Boca Raton (2011)
6. Barlow, R.E.: Engineering Reliability. Society for Industrial and Applied Mathematics, Alexandria (1998)
7. Kales, P.: Reliability for Technology, Engineering and Management. Prentice-Hall, Englewood (1998)
8. Meeker, W.Q., Escobar, L.A.: Statistical Methods for Reliability Data. Wiley, New York (2014)
9. Blischke, W.R., Murthy, D.N.P.: Reliability Modeling, Prediction, and Optimization. Wiley, New York (2000)
10. Rigdon, S.E., Basu, A.P.: Statistical Methods for the Reliability of Repairable Systems. Wiley-Interscience, New York (2000)
11. Hamada, M., Wilson, A.G., Reese, C.S., Martz, H.F.: Bayesian Reliability. Springer, Berlin/Heidelberg/New York (2008)
12. Bogdanoff, J.L., Kozin, F.: Probabilistic Models of Cumulative Damage. Wiley, New York (1985)
13. Sobczyk, K., Spencer, B.F.: Random Fatigue from Data to Theory. Academic, Boston (1992)
14. Lyu, M.R.: Handbook of Software Reliability Engineering. McGraw Hill, New York (1996)
15. Singpurwalla, N.D., Wilson, S.P.: Statistical Methods in Software Engineering. Springer, Heidelberg/Berlin/New York (1999)
16. Bedford, T., Cooke, R.: Probabilistic Risk Analysis: Foundations and Methods. Cambridge University Press, London (2001)
17. Baecher, G., Christian, J.: Reliability and Statistics in Geotechnical Engineering. Wiley, New York (2003)
18. Ohring, M.: Reliability & Failure of Electronic Materials & Devices. Academic, Boston (1998)
19. Melchers, R.E.: Structural Reliability Analysis and Prediction. Wiley, New York (1999)
20. Shaked, M., Shanthikumar, J.G.: Stochastic Orders and their Applications. Academic, Boston (2007)
21. Weibull, W.: A statistical theory of the strength of material. Ingeniors Vetenskaps Akademiens Handligar, Stockholm. 151, 5–45 (1939)
22. David, H.A., Nagaraja, H.K.: Order Statistics. Wiley, New York (2003)
23. Mudholkar, G.S., Srivastava, D.K., Kollia, G.D.: A generalization of the Weibull distribution with application to the analysis of survival data. J. Am. Stat. Assoc. 91, 1575–1583 (1996)
24. Høyland, A., Rausand, M.: System Reliability Theory: Models and Statistical Methods. Wiley, New York (1994)
25. Silkworth, D., Symynck, J., Ormerod, J.: Package 'WeibullR': Weibull Analysis for Reliability Engineering R Package. http://www.openreliability.org/weibull-r-weibull-analysis-on-r/ (2018)
26. Bae, S.J., Kvam, P.H.: A nonlinear random coefficients model for degradation testing. Technometrics. 46(4), 460–469 (2004)
27. Bae, S.J., Kvam, P.H.: A change-point analysis for modeling incomplete burn-in for light displays. IIE Trans. 38, 489–498 (2006)
28. Atkinson, A.C.: Plots, Transformations, and Regression: An Introduction to Graphical Methods of Diagnostic Regression Analysis, Oxford Statistical Science Series. Oxford University Press, Oxford (1992)
29. Waller, L.A., Turnbull, B.W.: Probability plotting with censored data. Am. Stat. 46, 5–12 (1992)
30. Owen, A.: Empirical Likelihood. Chapman and Hall, CRC, Boca Raton (2001)
31. Kiefer, J., Wolfowitz, J.: Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. Ann. Math. Stat. 27, 887–906 (1956)
32. Kaplan, E.L., Meier, P.: Nonparametric maximum likelihood estimation from incomplete observations. J. Am. Stat. Assoc. 53, 457–481 (1958)
33. Nair, V.N.: Confidence bands for survival functions with censored data: a comparative study. Technometrics. 26, 265–275 (1984)
34. Crowder, M.J., Kimber, A.C., Smit, R.L., Sweeting, T.J.: Statistical Analysis of Reliability Data. Chapman and Hall, CRC, London (1994)
35. Efron, B., Tibshirani, R.: An Introduction to the Bootstrap. Chapman and Hall, CRC Press, Boca Raton (1993)
36. Kvam, P.H., Samaniego, F.J.: Nonparametric maximum likelihood estimation based on ranked set samples. J. Am. Stat. Assoc. 89, 526–537 (1994)
37. Huang, J.: Asymptotic of the NPMLE of a distribution function based on ranked set samples. Ann. Stat. 25(3), 1026–1049 (1997)
38. Chen, Z.: Component reliability analysis of k-out-of-n systems with censored data. J. Stat. Plan. Inference. 116, 305–316 (2003)
39. Takahasi, K., Wakimoto, K.: On unbiased estimates of the population mean based on samples stratified by means of ordering. Ann. Inst. Stat. Math. 20, 1–31 (1968)
40. Kvam, P.H., Samaniego, F.J.: On estimating distribution functions using nomination samples. J. Am. Stat. Assoc. 88, 1317–1322 (1993)
41. Marshall, A.W., Olkin, I.: A multivariate exponential distribution. J. Am. Stat. Assoc. 62, 30–44 (1967)
42. Daniels, H.E.: The statistical theory of the strength of bundles of threads. Proc. R. Soc. Lond. A. 183, 405–435 (1945)
43. Kvam, P.H., Peña, E.: Estimating load-sharing properties in a dynamic reliability system. J. Am. Stat. Assoc. 100(469), 262–272 (2004)
44. Ball, M.O.: Computing network reliability. Oper. Res. 27, 823–838 (1979)
45. Sanso, B., Soumis, F.: Communication and transportation network reliability using routing models. IEEE Trans. Reliab. 40, 29–37 (1991)
46. Shi, V.: Evaluating the performability of tactical communications network. IEEE Trans. Veh. Technol. 53, 253–260 (2004)
47. Chen, A., Yang, H., Lo, H., Tang, W.H.: A capacity related reliability for transportation networks. J. Adv. Transp. 33(2), 183–200 (1999)
48. Dandamudi, S., Lu, J.-C.: Competition driving logistics design with continuous approximation methods. Technical Report. http://www.isye.gatech.edu/apps/research-papers/
49. Ni, W., Lu, J.-C., Kvam, P.: Reliability modeling in spatially distributed logistics systems. IEEE Trans. Reliab. 55(3), 525–534 (2006)
50. Wu, C.F.J., Hamada, M.: Experiments: Planning, Analysis, and Parameter Design Optimization, p. 503. Wiley, New York (2009)
51. Fudenberg, D., Tirole, J.: Game Theory. MIT Press, Cambridge (2000)
52. Chen, W., Lewis, K.: Robust design approach for achieving flexibility in multidisciplinary design. AIAA J. 37(8), 982–989 (1999)
53. Charoensiriwath, C., Lu, J.-C.: Competition under manufacturing service and retail price. Econ. Model. 28(3), 1256–1264 (2011)

Paul Kvam is a professor of statistics at the University of Richmond. He joined UR in 2014 after 19 years as professor at Georgia Tech and 4 years as scientific staff at the Los Alamos National Laboratory. Dr. Kvam received his B.S. in mathematics from Iowa State University in 1984, an M.S. in statistics from the University of Florida in 1986, and his Ph.D. in statistics from the University of California, Davis in 1991. His research interests focus on statistical reliability with applications to engineering, nonparametric estimation, and analysis of complex and dependent systems. He is a fellow of the American Statistical Association and an elected member of the International Statistics Institute.

Paul Kvam received his Ph.D. from the University of California, Davis, in 1991. He was a staff scientist at Los Alamos National Laboratory from 1991–1995, and professor at Georgia Tech from 1995–2014 in the School of Industrial and Systems Engineering. Kvam is currently professor of statistics in the Department of Mathematics and Computer Science at the University of Richmond.

Jye-Chyi (JC) Lu is a professor in the School of Industrial and Systems Engineering (ISyE). He received a Ph.D. in statistics from the University of Wisconsin at Madison in 1988 and joined the faculty of North Carolina State University, where he remained until 1999, when he joined the Georgia Institute of Technology. Dr. Jye-Chyi Lu's research areas cover industrial statistics, signal processing, semiconductor and electronic manufacturing, data mining, bioinformatics, supply chain management, logistics planning, and nanotechnology. He has about 93 disciplinary and interdisciplinary publications, which have appeared in both engineering and statistics journals. He has served as an associate editor for Technometrics, IEEE Transactions on Reliability, and the Journal of Quality Technology. His work now includes reliability, human-in-the-loop variable selections for large-p, small-n problems, decision, and data analytics.
