Reliability Engineering
Reliability engineering deals with the estimation, prevention and management of high
levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic
parameters define and affect reliability, according to some expert authors on
reliability engineering (e.g. P. O'Connor, J. Moubray[2] and A. Barnard[3]), reliability
is not achieved (solely) by mathematics and statistics. A root cause (needed to
effectively prevent failures) cannot really be found by looking at statistics alone. "Nearly all
teaching and literature on the subject emphasize these aspects, and ignore the
reality that the ranges of uncertainty involved largely invalidate quantitative methods
for prediction and measurement."[4]
Objective
3. To determine ways of coping with failures that do occur, if their causes have
not been corrected.
4. To apply methods for estimating the likely reliability of new designs, and for
analysing reliability data.
The reason for the priority emphasis is that it is by far the most effective way of
working, in terms of minimizing costs and generating reliable products. The primary
skills that are required, therefore, are the ability to understand and anticipate the
possible causes of failures, and knowledge of how to prevent them. It is also
necessary to have knowledge of the methods that can be used for analysing designs
and data.
System availability and mission readiness analysis and related reliability and
maintenance requirement allocation
Maintenance-induced failures
Transport-induced failures
Storage-induced failures
Use (load) studies, component stress analysis, and derived requirements
specification
Tribology
Stress (mechanics)
Thermal engineering
Electrical engineering
Material science
Definitions
The idea that an item is fit for a purpose with respect to time
Consistent with the creation of safety cases, for example per ARP4761, the goal of
reliability assessments is to provide a robust set of qualitative and quantitative
evidence that use of a component or system will not be associated with
unacceptable risk. The basic steps to take[12] are to:
Determine the best mitigation and get agreement on final, acceptable risk
levels, possibly based on cost/benefit analysis
Here, risk is the combination of the probability and the severity of a failure incident
(scenario) occurring.
In a de minimis definition, the severity of failures includes the cost of spare parts, man-
hours, logistics, damage (secondary failures), and downtime of machines, which may
cause production loss. A more complete definition of failure also can mean injury,
dismemberment, and death of people within the system (witness mine accidents,
industrial accidents, space shuttle failures) and the same to innocent bystanders
(witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and
other victims of the 2011 Tōhoku earthquake and tsunami); in this case, reliability
engineering becomes system safety. What is acceptable is determined by the
managing authority, the customers, or the affected communities. Residual risk is the
risk that is left over after all reliability activities have finished; it includes the
unidentified risk and is therefore not completely quantifiable.
Risk vs Cost/Complexity.[13]
Implementing a reliability program is not simply a software purchase; it's not just a
checklist of items that must be completed that will ensure you have reliable products
and processes. A reliability program is a complex learning and knowledge-based
system unique to your products and processes. It is supported by leadership, built on
the skills that you develop within your team, integrated into your business processes
and executed by following proven standard work practices.[14]
A reliability program plan is used to document exactly what "best practices" (tasks,
methods, tools, analysis, and tests) are required for a particular (sub)system, as well
as clarify customer requirements for reliability assessment. For large-scale complex
systems, the reliability program plan should be a separate document. Resource
determination for manpower and budgets for testing and other tasks is critical for a
successful program. In general, the amount of work required for an effective program
for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability,
maintainability, and the resulting system availability, and is developed early during
system development and refined over the system's life-cycle. It specifies not only
what the reliability engineer does, but also the tasks performed by other
stakeholders. A reliability program plan is approved by top program management,
which is responsible for allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve availability of a
system by the strategy of focusing on increasing testability and maintainability rather than
on reliability. Improving maintainability is generally easier than improving reliability.
Maintainability estimates (repair rates) are also generally more accurate. However,
because the uncertainties in the reliability estimates are in most cases very large,
they are likely to dominate the availability calculation (prediction uncertainty
problem), even when maintainability levels are very high. When reliability is not
under control, more complicated issues may arise, like manpower (maintainers /
customer service capability) shortages, spare part availability, logistic delays, lack of
repair facilities, extensive retro-fit and complex configuration management costs, and
others. The problem of unreliability may be increased also due to the "domino effect"
of maintenance-induced failures after repairs. Focusing only on maintainability is
therefore not enough. If failures are prevented, none of the other issues are of any
importance, and therefore reliability is generally regarded as the most important part
of availability. Reliability needs to be evaluated and improved in relation to both
availability and the total cost of ownership (TCO) due to the cost of spare parts,
maintenance man-hours, transport costs, storage costs, part obsolescence risks, etc. But,
as GM and Toyota have belatedly discovered, TCO also includes the downstream
liability costs when reliability calculations have not sufficiently or accurately
addressed customers' personal bodily risks. Often a trade-off is needed between the
two. There might be a maximum ratio between availability and cost of ownership.
Testability of a system should also be addressed in the plan, as this is the link
between reliability and maintainability. The maintenance strategy can influence the
reliability of a system (e.g., by preventive and/or predictive maintenance), although it
can never bring it above the inherent reliability.
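As a rough illustration of this point (a minimal Python sketch; the MTBF and MTTR values are purely assumed, not taken from any real program), inherent availability is driven by the ratio of MTBF to MTTR, and a factor-of-10 spread in the MTBF estimate dominates the availability prediction even when repair times are well known:

```python
# Minimal sketch: inherent (steady-state) availability from MTBF and MTTR,
# illustrating how uncertainty in the reliability estimate dominates.
# All numbers below are illustrative assumptions, not data from the text.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Inherent availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

mttr = 4.0  # repair time estimates are usually comparatively accurate

# Even with a well-controlled MTTR, a factor-of-10 spread in the MTBF
# estimate moves availability far more than any maintainability improvement.
for mtbf in (100.0, 1_000.0, 10_000.0):
    print(f"MTBF={mtbf:>8.0f} h, MTTR={mttr} h -> A = {availability(mtbf, mttr):.5f}")
```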
The reliability plan should clearly provide a strategy for availability control. Whether
availability alone or also cost of ownership is more important depends on the use of
the system. For example, a system that is a critical link in a production system (e.g.,
a big oil platform) is normally allowed to have a very high cost of ownership if that
cost translates to even a minor increase in availability, as the unavailability of the
platform results in a massive loss of revenue which can easily exceed the high cost
of ownership. A proper reliability plan should always address RAMT analysis in its
total context. RAMT stands for Reliability, Availability, Maintainability/Maintenance,
and Testability in the context of the customer's needs.
Reliability requirements
For any system, one of the first tasks of reliability engineering is to adequately
specify the reliability and maintainability requirements allocated from the overall
availability needs and, more importantly, derived from proper design failure analysis
or preliminary prototype test results. Clear requirements (able to be designed to) should
constrain the designers from designing particular unreliable items / constructions /
interfaces / systems. Setting only availability, reliability, testability, or maintainability
targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding
about Reliability Requirements Engineering. Reliability requirements address the
system itself, including test and assessment requirements, and associated tasks and
documentation. Reliability requirements are included in the appropriate system or
subsystem requirements specifications, test plans, and contract statements. Creation
of proper lower-level requirements is critical.[15] Provision of only quantitative
minimum targets (e.g., MTBF values or failure rates) is not sufficient for different
reasons. One reason is that a full validation (related to correctness and verifiability in
time) of a quantitative reliability allocation (requirement spec) at lower levels for
complex systems can often not be made as a consequence of (1) the fact that the
requirements are probabilistic, (2) the extremely high level of uncertainty involved
in showing compliance with all these probabilistic requirements, and (3) the fact that
reliability is a function of time, and accurate estimates of a (probabilistic) reliability
number per item are available only very late in the project, sometimes even after
many years of in-service use. Compare this problem with the continuous
(re-)balancing of, for example, lower-level-system mass requirements in the
development of an aircraft, which is already often a big undertaking. Notice that in
this case masses differ by only a few percent, are not a function of time, and the
data are non-probabilistic and already available in CAD models. In the case of
reliability, the levels of unreliability (failure rates) may change by factors of decades
(multiples of 10) as a result of very minor deviations in design, process, or anything
else.[16] The information is often not available without huge uncertainties within the
development phase. This makes this allocation problem almost impossible to do in a
useful, practical, valid manner that does not result in massive over- or under-
specification. A pragmatic approach is therefore needed; for example, the use of
general levels / classes of quantitative requirements depending only on the severity of
failure effects. Also, the validation of results is a far more subjective task than for any
other type of requirement. (Quantitative) reliability parameters, in terms of MTBF,
are by far the most uncertain design parameters in any design.
The maintainability requirements address the costs of repairs as well as repair time.
Testability (not to be confused with test requirements) requirements provide the link
between reliability and maintainability and should address detectability of failure
modes (on a particular system level), isolation levels, and the creation of diagnostics
(procedures). As indicated above, reliability engineers should also address
requirements for various reliability tasks and documentation during system
development, testing, production, and operation. These requirements are generally
specified in the contract statement of work and depend on how much leeway the
customer wishes to provide to the contractor. Reliability tasks include various
analyses, planning, and failure reporting. Task selection depends on the criticality of
the system as well as cost. A safety-critical system may require a formal failure
reporting and review process throughout development, whereas a non-critical
system may rely on final test reports. The most common reliability program tasks are
documented in reliability program standards, such as MIL-STD-785 and IEEE 1332.
Failure reporting, analysis, and corrective action systems (FRACAS) are a common
approach for product/process reliability monitoring.
In practice, most failures can in the end be traced back to root causes involving some
type of human error. For example, human errors in:
Assumptions
Design
Design drawings
Statistical analysis
Manufacturing
Quality control
Maintenance
Maintenance manuals
Training
etc.
However, humans are also very good at detecting (the same) failures, correcting
failures, and improvising when abnormal situations occur. The policy of completely
ruling human actions out of any design and production process in order to improve
reliability may therefore not be effective. Some tasks are better performed
by humans and some are better performed by machines.[18]
For existing systems, it is arguable that a responsible program would directly analyse
and try to correct the root cause of discovered failures, which may render the
initial MTBF estimate fully invalid, as new assumptions (themselves subject to high
error levels) about the effect of the patch/redesign must be made. Another practical
issue is the general lack of detailed failure data, combined with inconsistent filtering
of failure (feedback) data and ignoring statistical errors, which are very high for rare
events (like reliability-related failures). Very clear guidelines must be present to be
able to count and compare failures related to different types of root causes (e.g.,
manufacturing-, maintenance-, transport-, or system-induced failures, or inherent
design failures). Comparing different types of causes may lead to incorrect
estimations and incorrect business decisions about the focus of improvement.
Performing a proper quantitative reliability prediction for systems may be difficult and
very expensive if done by testing. At part level, results can often be obtained
with higher confidence, as many samples can be tested within the available testing
budget; unfortunately, these tests may lack validity at system level due to the
assumptions that had to be made for part-level testing.
These authors argue that it cannot be emphasized enough that testing for reliability
should be done to create failures in the first place, to learn from them, and to improve
the system / part. The general conclusion is drawn that an accurate and absolute
prediction of reliability (by field data comparison or testing) is in most cases not
possible. An exception might be failures due to wear-out problems such as fatigue
failures. The introduction of MIL-STD-785 states that reliability prediction
should be used with great caution, if not used only for comparison in trade-off studies.
Reliability design begins with the development of a (system) model. Reliability and
availability models use block diagrams and Fault Tree Analysis to provide a graphical
means of evaluating the relationships between different parts of the system. These
models may incorporate predictions based on failure rates taken from historical data.
While the (input data) predictions are often not accurate in an absolute sense, they
are valuable to assess relative differences in design alternatives. Maintainability
parameters, for example MTTR, are other inputs for these models.
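As a simple illustration of such a model (a minimal sketch only; the block structure and the constant failure rates below are assumed for the example and are not taken from any particular system), a reliability block diagram with series and active-parallel blocks can be evaluated as follows:

```python
import math

# Minimal reliability block diagram sketch (assumed constant failure rates,
# i.e. exponential lifetimes). All values are illustrative, not from the text.

def r_exp(failure_rate_per_hour: float, t_hours: float) -> float:
    """Reliability of a single block with constant failure rate."""
    return math.exp(-failure_rate_per_hour * t_hours)

def series(*reliabilities: float) -> float:
    """Series blocks: all must survive."""
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def parallel(*reliabilities: float) -> float:
    """Active-parallel blocks: at least one must survive."""
    unrel = 1.0
    for r in reliabilities:
        unrel *= (1.0 - r)
    return 1.0 - unrel

t = 1_000.0                       # mission time in hours (assumed)
pump   = r_exp(2e-4, t)           # assumed failure rates per hour
valve  = r_exp(5e-5, t)
sensor = r_exp(1e-4, t)

# Two redundant sensors in series with the pump and valve.
system = series(pump, valve, parallel(sensor, sensor))
print(f"Predicted mission reliability: {system:.4f}")
```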
The most important fundamental initiating causes and failure mechanisms are to be
identified and analyzed with engineering tools. A diverse set of practical guidance
and practical performance and reliability requirements should be provided to
designers so they can generate low-stressed designs and products that protect or
are protected against damage and excessive wear. Proper validation of input loads
(requirements) may be needed, as may verification of reliability "performance" by
testing.
A Fault Tree Diagram
One of the most important design techniques is redundancy. This means that if one
part of the system fails, there is an alternate success path, such as a backup system.
The reason why this is the ultimate design choice is related to the fact that high
confidence reliability evidence for new parts / items is often not available or
extremely expensive to obtain. By creating redundancy, together with a high level of
failure monitoring and the avoidance of common cause failures, even a system with
relatively poor single-channel (part) reliability can be made highly reliable (mission
reliability) at system level. No reliability testing may be required for this.
Furthermore, by using redundancy together with dissimilar design and
manufacturing processes (different suppliers) for the individual independent channels,
sensitivity to quality issues (early childhood failures) is reduced and very high
levels of reliability can be achieved at all moments of the development cycle (early
life and long term). Redundancy can also be applied in systems engineering by
double-checking requirements, data, designs, calculations, software and tests to
overcome systematic failures.
Accelerated testing
Electromagnetic analysis
Testability analysis
Manual screening
Results are presented during the system design reviews and logistics reviews.
Reliability is just one requirement among many system requirements. Engineering
trade studies are used to determine the optimum balance between reliability and
other requirements and constraints.
Reliability engineers could concentrate more on "why and how" items / systems may
fail or have failed, instead of mostly trying to predict "when" or at what (changing)
rate (failure rate λ(t)). Answers to the first questions will drive improvement in design
and processes.[4] When failure mechanisms are really understood, solutions to
prevent failure are easily found. Required numbers alone (e.g., MTBF) will not drive
good designs. The huge number of (un)reliability hazards that are generally part of
complex systems need first to be classified and ordered (based on qualitative and
quantitative logic if possible) to get to efficient assessment and improvement. This is
partly done in pure language and proposition logic, but also based on experience
with similar items. This can for example be seen in descriptions of events in Fault
Tree Analysis, FMEA analysis and a hazard (tracking) log. In this sense language
and proper grammar (part of qualitative analysis) plays an important role in reliability
engineering, just like it does in safety engineering or in general within systems
engineering. Engineers may ask why this is so: it is needed precisely
because systems engineering is very much about finding the correct words to
describe the problem (and related risks) to be solved by the engineering solutions we
intend to create. In the words of Jack Ring, the systems engineer's job is to
"language the project" (Ring et al. 2000).[20] Language in itself is about putting an
order in a description of the reality of a (failure of a) complex function/item/system in
a complex surrounding. Reliability engineers use both quantitative and qualitative
methods, which extensively use language to pinpoint the risks to be solved.
The importance of language also relates to the risks of human error, which can be
seen as the ultimate root cause of almost all failures, as discussed above. As an
example, proper instructions (often written by technical authors in so-called simplified
English) in maintenance manuals, operation manuals, emergency procedures and
others are needed to prevent systematic human errors in any maintenance or
operational task that may result in system failures.
Reliability modeling
For part level predictions, two separate fields of investigation are common:
Reliability theory
Reliability is defined as the probability that a device will perform its intended function
during a specified period of time under stated conditions. Mathematically, this may
be expressed as

R(t) = \Pr\{T > t\} = \int_t^{\infty} f(x)\,dx,

where f(x) is the failure probability density function and t is the length of the period
of time (which is assumed to start from time zero).
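For illustration, a minimal sketch of the common special case of a constant failure rate (exponential lifetimes), where the integral reduces to R(t) = exp(-λt); the failure rate used is an assumed value:

```python
import math

# Minimal sketch, assuming a constant failure rate (exponential lifetimes),
# where f(x) = lambda * exp(-lambda * x) and therefore R(t) = exp(-lambda * t).
# The failure rate value is an illustrative assumption.

failure_rate = 1e-4   # failures per hour (assumed)

def reliability(t_hours: float) -> float:
    """R(t) = probability of surviving beyond t under a constant failure rate."""
    return math.exp(-failure_rate * t_hours)

for t in (100, 1_000, 10_000):
    print(f"R({t} h) = {reliability(t):.4f}")
```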
A special case of mission success is the single-shot device or system. These are
devices or systems that remain relatively dormant and only operate once. Examples
include automobile airbags, thermal batteries and missiles. Single-shot reliability is
specified as a probability of one-time success, or is subsumed into a related
parameter. Single-shot missile reliability may be specified as a requirement for the
probability of a hit. For such systems, the probability of failure on demand (PFD) is
the reliability measure (which is actually an unavailability number). This PFD is
derived from the failure rate (a frequency of occurrence) and the mission time for non-
repairable systems.
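A minimal sketch of this relationship, treating PFD as the probability that a dormant, non-repairable item has already failed by the time it is demanded; the failure rate and mission time are assumed values for illustration only:

```python
import math

# Sketch: probability of failure on demand (PFD) for a non-repairable,
# dormant item, taken here as the probability the item has already failed
# by the time it is demanded: PFD(t) = 1 - exp(-lambda * t) ~= lambda * t
# for small lambda * t. Numbers are illustrative assumptions.

dormant_failure_rate = 2e-6   # failures per hour while dormant (assumed)
mission_time = 5_000.0        # hours between installation and the demand (assumed)

pfd_exact = 1.0 - math.exp(-dormant_failure_rate * mission_time)
pfd_approx = dormant_failure_rate * mission_time

print(f"PFD (exact)  = {pfd_exact:.4%}")
print(f"PFD (approx) = {pfd_approx:.4%}")
```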
Reliability testing
A reliability sequential test plan
The purpose of reliability testing is to discover potential problems with the design as
early as possible and, ultimately, provide confidence that the system meets its
reliability requirements.
Reliability testing may be performed at several levels and there are different types of
testing. Complex systems may be tested at component, circuit board, unit, assembly,
subsystem and system levels.[21] (The test level nomenclature varies among
applications.) For example, performing environmental stress screening tests at lower
levels, such as piece parts or small assemblies, catches problems before they cause
failures at higher levels. Testing proceeds during each level of integration through
full-up system testing, developmental testing, and operational testing, thereby
reducing program risk. However, testing does not mitigate unreliability risk.
With each test, both a statistical type I and a type II error could be made, depending
on sample size, test time, assumptions and the needed discrimination ratio. There is
a risk of incorrectly rejecting a good design (type I error, the producer's risk) and a
risk of incorrectly accepting a bad design (type II error, the consumer's risk).
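As an illustration of these two risks (a sketch only; the test plan, MTBF values and discrimination ratio below are assumed, not prescribed by any standard), a fixed-duration plan that accepts the design if no more than c failures occur in T test hours can be evaluated with a Poisson model:

```python
import math

# Sketch of producer's and consumer's risk for a fixed-duration test plan:
# accept if no more than c failures occur in T total test hours, with the
# failure count modelled as Poisson(T / MTBF). Plan values are illustrative.

def poisson_cdf(k: int, mean: float) -> float:
    return sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k + 1))

T, c = 3_000.0, 2                        # total test hours, allowed failures (assumed)
mtbf_good, mtbf_bad = 2_000.0, 1_000.0   # design value vs. minimum acceptable
                                         # (discrimination ratio = 2, assumed)

producers_risk = 1.0 - poisson_cdf(c, T / mtbf_good)  # reject a good design
consumers_risk = poisson_cdf(c, T / mtbf_bad)          # accept a bad design

print(f"Producer's risk: {producers_risk:.2%}")
print(f"Consumer's risk: {consumers_risk:.2%}")
```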
It is not always feasible to test all system requirements. Some systems are
prohibitively expensive to test; some failure modes may take years to observe; some
complex interactions result in a huge number of possible test cases; and some tests
require the use of limited test ranges or other resources. In such cases, different
approaches to testing can be used, such as (highly) accelerated life testing, design
of experiments, and simulations.
The desired level of statistical confidence also plays a role in reliability testing.
Statistical confidence is increased by increasing either the test time or the number of
items tested. Reliability test plans are designed to achieve the specified reliability at
the specified confidence level with the minimum number of test units and test time.
Different test plans result in different levels of risk to the producer and consumer. The
desired reliability, statistical confidence, and risk levels for each side influence the
ultimate test plan. The customer and developer should agree in advance on how
reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem
obvious, there are many situations where it is not clear whether a failure is really the
fault of the system. Variations in test conditions, operator differences, weather and
unexpected situations create disagreements between the customer and the system
developer. One strategy to address this issue is to use a scoring conference
process. A scoring conference includes representatives from the customer, the
developer, the test organization, the reliability organization, and sometimes
independent observers. The scoring conference process is defined in the statement
of work. Each test case is considered by the group and "scored" as a success or
failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy
with the customer. The test strategy makes trade-offs between the needs of the
reliability organization, which wants as much data as possible, and constraints such
as cost, schedule and available resources. Test plans and procedures are developed
for each reliability test, and results are documented.
Reliability test requirements can follow from any analysis for which the first estimate
of failure probability, failure mode or effect needs to be justified. Evidence can be
generated with some level of confidence by testing. With software-based systems,
the probability is a mix of software and hardware-based failures. Testing reliability
requirements is problematic for several reasons. A single test is in most cases
insufficient to generate enough statistical data. Multiple tests or long-duration tests
are usually very expensive. Some tests are simply impractical, and environmental
conditions can be hard to predict over a system's life-cycle.
Reliability engineering is used to design a realistic and affordable test program that
provides empirical evidence that the system meets its reliability requirements.
Statistical confidence levels are used to address some of these concerns. A certain
parameter is expressed along with a corresponding confidence level: for example, an
MTBF of 1000 hours at 90% confidence level. From this specification, the reliability
engineer can, for example, design a test with explicit criteria for the number of hours
and number of failures until the requirement is met or failed. Different sorts of tests
are possible.
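For example, here is a minimal sketch of how such a test could be sized under an assumed exponential-lifetime model; the requirement values match the example above, while the allowed-failure counts are illustrative choices:

```python
import math

# Sketch: sizing a time-terminated demonstration test for a requirement such
# as "MTBF of 1000 hours at 90% confidence", assuming exponential lifetimes.
# The test passes if no more than `allowed_failures` occur in T total hours;
# T is chosen so that a design which only just misses the requirement would
# pass with probability at most (1 - confidence). Values are illustrative.

def poisson_cdf(k: int, mean: float) -> float:
    return sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k + 1))

def required_test_hours(mtbf_req: float, confidence: float, allowed_failures: int) -> float:
    lo, hi = 0.0, 100.0 * mtbf_req          # bisection bounds on total test time
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(allowed_failures, mid / mtbf_req) > 1.0 - confidence:
            lo = mid                        # not enough test time yet
        else:
            hi = mid
    return hi

for r in (0, 1, 2):
    T = required_test_hours(1_000.0, 0.90, r)
    print(f"Allow {r} failure(s): about {T:,.0f} test hours needed")
```

For the zero-failure case this reduces to T = 1000 × ln(10) ≈ 2,303 hours; allowing more failures lengthens the test but lowers the producer's risk.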
The combination of required reliability level and required confidence level greatly
affects the development cost and the risk to both the customer and producer. Care is
needed to select the best combination of requirements, e.g., in terms of cost-effectiveness.
Reliability testing may be performed at various levels, such as component,
subsystem and system. Also, many factors must be addressed during testing and
operation, such as extreme temperature and humidity, shock, vibration, or other
environmental factors (like loss of signal, cooling or power; or other catastrophes
such as fire, floods, excessive heat, physical or security violations or other myriad
forms of damage or degradation). For systems that must last many years,
accelerated life tests may be needed.
Accelerated testing
The purpose of accelerated life testing (ALT test) is to induce field failure in the
laboratory at a much faster rate by providing a harsher, but nonetheless
representative, environment. In such a test, the product is expected to fail in the lab
just as it would have failed in the field, but in much less time. The main objective of
an accelerated test is either of the following:
To predict the normal field life from the high stress lab life
An Accelerated testing program can be broken down into the following steps:
Arrhenius model
Eyring model
Temperature-humidity model
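As an illustration of the first model listed above, a minimal sketch of the Arrhenius acceleration factor between use and stress temperatures; the activation energy and temperatures are assumed values, not recommendations:

```python
import math

# Sketch of the Arrhenius acceleration model: the acceleration factor between
# use and stress temperatures for a thermally activated failure mechanism.
# Activation energy and temperatures are illustrative assumptions.

BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV per kelvin

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(ea_ev=0.7, t_use_c=40.0, t_stress_c=110.0)
print(f"Acceleration factor: {af:.1f}")
print(f"1000 h at 110 C corresponds to roughly {1000 * af:,.0f} h at 40 C")
```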
Software reliability
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that
results in the system not performing its intended function. Repairing or replacing the
hardware component restores the system to its original operating state. However,
software does not fail in the same sense that hardware fails. Instead, software
unreliability is the result of unanticipated results of software operations. Even
relatively small software programs can have astronomically large combinations of
inputs and states that are infeasible to exhaustively test. Restoring software to its
original state only works until the same combination of inputs and states results in
the same unintended result. Software reliability engineering must take this into
account.
Despite this difference in the source of failure between software and hardware,
several software reliability models based on statistics have been proposed to
quantify what we experience with software: the longer software is run, the higher the
probability that it will eventually be used in an untested manner and exhibit a latent
defect that results in a failure (Shooman 1987), (Musa 2005), (Denney 2005).
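As a rough illustration of this family of models (a sketch of a Goel-Okumoto-style NHPP growth curve, which is representative of, but not identical to, the specific models cited; all parameter values are assumed):

```python
import math

# Sketch of one widely used family of software reliability growth models
# (a Goel-Okumoto-style NHPP curve). Expected cumulative failures are
# m(t) = a * (1 - exp(-b * t)); parameter values are illustrative assumptions.

a = 120.0    # expected total number of latent faults (assumed)
b = 0.02     # per-hour fault detection rate during test (assumed)

def expected_failures(t_hours: float) -> float:
    return a * (1.0 - math.exp(-b * t_hours))

def failure_intensity(t_hours: float) -> float:
    return a * b * math.exp(-b * t_hours)

for t in (10, 100, 500):
    print(f"t={t:>4} h: m(t)={expected_failures(t):6.1f}, "
          f"lambda(t)={failure_intensity(t):.3f}/h")
```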
Testing is even more important for software than hardware. Even the best software
development process results in some software faults that are nearly undetectable
until tested. As with hardware, software is tested at several levels, starting with
individual units, through integration and full-up system testing. Unlike hardware, it is
inadvisable to skip levels of software testing. During all phases of testing, software
faults are discovered, corrected, and re-tested. Reliability estimates are updated
based on the fault density and other metrics. At a system level, mean-time-between-
failure data can be collected and used to estimate reliability. Unlike hardware,
performing exactly the same test on exactly the same software configuration does
not provide increased statistical confidence. Instead, software reliability uses
different metrics, such as code coverage.
Eventually, the software is integrated with the hardware in the top-level system, and
software reliability is subsumed by system reliability. The Software Engineering
Institute's capability maturity model is a common means of assessing the overall
software development process for reliability and quality purposes.
Vs safety engineering
Reliability engineering differs from safety engineering with respect to the kind of
hazards that are considered. Reliability engineering is in the end only concerned with
cost. It relates to all reliability hazards that could transform into incidents with a
particular level of loss of revenue for the company or the customer. These can be
costs due to loss of production caused by system unavailability, unexpectedly high or low
demand for spares, repair costs, man-hours, (multiple) re-designs, interruptions of
normal production (e.g. due to high repair times or due to unexpected demands for
non-stocked spares), and many other indirect costs.[23]
Safety engineering, on the other hand, is more specific and regulated. It relates
only to very specific system safety hazards that could potentially lead to severe
accidents and is primarily concerned with loss of life, loss of equipment, or
environmental damage. The related system functional reliability requirements are
sometimes extremely high. It deals with unwanted dangerous events (for life,
property, and environment) in the same sense as reliability engineering, but does
normally not directly look at cost and is not concerned with repair actions after failure
/ accidents (on system level). Another difference is the level of impact of failures on
society and the control of governments. Safety engineering is often strictly controlled
by governments (e.g. nuclear, aerospace, defense, rail and oil industries).[23]
The above example of a 2oo3 fault-tolerant system increases both mission reliability
and safety. However, the "basic" reliability of the system will in this case still be
lower than that of a non-redundant (1oo1) or 2oo2 system! Basic reliability refers to all
failures, including those that might not result in system failure, but do result in
maintenance repair actions, logistic cost, use of spares, etc. For example, the
replacement or repair of 1 channel in a 2oo3 voting system that is still operating with
one failed channel (which in this state actually has become a 2oo2 system) is
contributing to basic unreliability but not mission unreliability. Also, for example, the
failure of the taillight of an aircraft is not considered as a mission loss failure, but
does contribute to the basic unreliability.
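A minimal sketch of this distinction, comparing mission and basic reliability for 1oo1, 2oo2 and 2oo3 architectures built from identical channels with an assumed constant failure rate (all values illustrative):

```python
import math

# Sketch comparing mission reliability and "basic" reliability for 1oo1,
# 2oo2 and 2oo3 architectures built from identical channels with an assumed
# constant failure rate. Mission reliability = probability the voted function
# survives; basic reliability = probability no channel fails at all (i.e. no
# repair action is generated). Numbers are illustrative.

r = math.exp(-1e-4 * 1_000.0)     # single-channel reliability over the mission

arch = {
    "1oo1": (r,                 r),
    "2oo2": (r * r,             r * r),     # both channels must work
    "2oo3": (3*r**2 - 2*r**3,   r ** 3),    # any 2 of 3 must work
}
print(f"single channel R = {r:.4f}")
for name, (mission, basic) in arch.items():
    print(f"{name}: mission reliability = {mission:.6f}, "
          f"basic reliability = {basic:.6f}")
```

With these assumed numbers the 2oo3 arrangement has the highest mission reliability but the lowest basic reliability, matching the point made above.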
When using fault-tolerant (redundant) architectures or systems that are
equipped with protection functions, detectability of failures and avoidance of common
cause failures become paramount for safe functioning and/or mission reliability.
Six Sigma has its roots in manufacturing, and reliability engineering is a sub-part of
systems engineering. The systems engineering process is a discovery process that
is quite unlike a manufacturing process. A manufacturing process is focused on
repetitive activities that achieve high quality outputs with minimum cost and time.
The systems engineering process must begin by discovering the real (potential)
problem that needs to be solved; the biggest failure that can be made in systems
engineering is finding an elegant solution to the wrong problem [24] (or in terms of
reliability: "providing elegant solutions to the wrong root causes of system failures").
The everyday usage term "quality of a product" is loosely taken to mean its inherent
degree of excellence. In industry, this is made more precise by defining quality to be
"conformance to requirement specifications at the start of use". Assuming the final
product specifications adequately capture original requirements and customer (or
rest of system) needs, the quality level of these parts can now be precisely
measured by the fraction of units shipped that meet the detailed product
specifications.[25]
Variation of this static output may affect quality and reliability, but this is not the total
picture. More inherent aspects may play a role or variation at microscopic levels may
not be measured or controlled by any means (e.g. one good example is the
unavoidable existence of micro cracks and chemical impurities in standard metal
products, which may progress over time under physical or chemical "loading" into
macro-level defects). Furthermore, at system level, systematic failures may play a
dominant role (e.g. requirement errors, software errors, software compiler errors, or
design flaws).
Quality is a snapshot at the start of life, mainly related to control of lower-level
product specifications, whereas reliability is (as part of systems engineering) more of a
system-level motion picture of day-by-day operation over many years. Time-zero
defects are manufacturing mistakes that escaped final test (quality control). The
additional defects that appear over time are "reliability defects" or reliability fallout.
These reliability issues may just as well occur due to inherent design issues, which
may have nothing to do with non-conformance to product specifications. Items that are
produced perfectly (according to all product specifications) may still fail over time due to
any single or combined failure mechanism (e.g. mechanical, electrical, chemical or
human-error related). All these parameters are also a function of all possible
variances coming from initial production. Theoretically, all items will functionally fail
over infinite time.[26] In theory, the quality level might be described by a single fraction
defective. To describe reliability fallout, a probability model that describes the fraction
fallout over time is needed. This is known as the life distribution model.[25]
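As an illustration, a minimal sketch using the widely applied Weibull form of a life distribution model; the shape and scale parameters are assumed values:

```python
import math

# Sketch of a life distribution model describing fraction fallout over time,
# using the commonly applied Weibull form F(t) = 1 - exp(-(t/eta)^beta).
# Shape and scale values are illustrative assumptions.

eta = 50_000.0   # characteristic life in hours (63.2% fallout point, assumed)
beta = 1.8       # shape > 1: wear-out; < 1: early-life failures (assumed)

def fraction_failed(t_hours: float) -> float:
    return 1.0 - math.exp(-((t_hours / eta) ** beta))

for t in (1_000, 10_000, 50_000):
    print(f"Fraction failed by {t:>6} h: {fraction_failed(t):.2%}")
```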
In addition to this, and in major contrast with reliability engineering, Six Sigma is
much more measurement-based (quantification). The core of Six Sigma thrives on
empirical research and statistics where it is possible to measure parameters (e.g. to
find transfer functions). This cannot be translated practically to most reliability
issues, as reliability is not easily measurable because it is a function of time (long
times may be involved), especially during the requirements specification and design
phase, where reliability engineering is most efficient. Full quantification of reliability is
extremely difficult or costly (testing) in this phase. It may also foster reactive
management (waiting for system failures to be measured). Furthermore, as
explained on this page, reliability problems are likely to come from many different
causes (e.g. inherent failures, human error, systematic failures) besides
manufacturing-induced defects.
Note: What is called a defect in Six Sigma / quality literature is not the same
as a failure (a field failure, e.g. a fractured item) in reliability. A defect in Six Sigma /
quality generally refers to non-conformance with a (basic functional or dimensional)
requirement. Items can, however, fail over time, even if these requirements (e.g. a
dimension) are all fulfilled. Quality is normally not much concerned with the question
of whether the requirements are correct.
It is extremely important to have one common-source FRACAS system for all end
items. Also, test results should be able to be captured here in a practical way. Failure
to adopt one integrated system that is easy to handle (easy data entry for field
engineers and repair shop engineers) and easy to maintain is likely to result in a
FRACAS program failure.
Some of the common outputs from a FRACAS system include: field MTBF, MTTR,
spares consumption, reliability growth, and failure/incident distributions by type,
location, part number, serial number, symptom, etc.
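A minimal sketch of how a few of these outputs could be derived from incident records; the record format, fleet hours and incident data are assumed for illustration and do not represent any particular FRACAS implementation:

```python
from collections import Counter

# Sketch of deriving a few of the FRACAS outputs mentioned above
# (field MTBF, MTTR, failure distribution by type) from incident records.
# The record format and all values are illustrative assumptions.

fleet_operating_hours = 250_000.0
incidents = [
    {"type": "design",        "repair_hours": 6.0},
    {"type": "manufacturing", "repair_hours": 2.5},
    {"type": "maintenance",   "repair_hours": 4.0},
    {"type": "design",        "repair_hours": 8.0},
]

mtbf = fleet_operating_hours / len(incidents)
mttr = sum(i["repair_hours"] for i in incidents) / len(incidents)
by_type = Counter(i["type"] for i in incidents)

print(f"Field MTBF: {mtbf:,.0f} h, MTTR: {mttr:.1f} h")
print("Failures by root-cause type:", dict(by_type))
```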
The use of past data to predict the reliability of new, comparable systems/items can
be misleading, as reliability is a function of the context of use and can be affected by
small changes in design/manufacturing.
Reliability organizations
There are several common types of reliability organizations. The project manager or
chief engineer may employ one or more reliability engineers directly. In larger
organizations, there is usually a product assurance or specialty engineering
organization, which may include reliability, maintainability, quality, safety, human
factors, logistics, etc. In such cases, the reliability engineer reports to the product
assurance manager or specialty engineering manager.
Education
A group of engineers have provided a list of useful tools for reliability engineering.
These include: RelCalc software, Military Handbook 217 (Mil-HDBK-217), and the
NAVMAT P-4855-1A manual. Analyzing failures and successes, coupled with a
quality standards process, also provides systematized information for making informed
engineering designs.[30]
See also
Engineering portal
Factor of safety
Failing badly
Solid mechanics
Human reliability
Industrial engineering
Logistic engineering
Performance engineering
Product qualification
RAMS
Security engineering
Strength of materials
Temperature cycling
References
Further reading
Trevor Kletz (1998) Process Plants: A Handbook for Inherently Safer Design
CRC ISBN 1-56032-619-0
DoD 3235.1-H (3rd Ed) Test and Evaluation of System Reliability, Availability,
and Maintainability (A Primer), U.S. Department of Defense (March 1982).
IEEE 1332-1998, IEEE Standard Reliability Program for the Development and
Production of Electronic Systems and Equipment, Institute of Electrical and
Electronics Engineers (1998).
UK standards
In the UK, there are more up to date standards maintained under the sponsorship of
UK MOD as Defence Standards. The relevant Standards include:
PART 3: