System Health Management: with Aerospace Applications

About this ebook

System Health Management: with Aerospace Applications provides the first complete reference text for System Health Management (SHM), the set of technologies and processes used to improve system dependability. Edited by a team of engineers and consultants with SHM design, development, and research experience from NASA, industry, and academia, each heading up sections in their own areas of expertise and co-coordinating contributions from leading experts, the book collates together in one text the state-of-the-art in SHM research, technology, and applications. It has been written primarily as a reference text for practitioners, for those in related disciplines, and for graduate students in aerospace or systems engineering.

There are many technologies involved in SHM and no single person can be an expert in all aspects of the discipline. System Health Management: with Aerospace Applications provides an introduction to the major technologies, issues, and references in these disparate but related SHM areas. Since SHM has evolved most rapidly in aerospace, the various applications described in this book are taken primarily from the aerospace industry. However, the theories, techniques, and technologies discussed are applicable to many engineering disciplines and application areas.

Readers will find sections on the basic theories and concepts of SHM, how it is applied in the system life cycle (architecture, design, verification and validation, etc.), the most important methods used (reliability, quality assurance, diagnostics, prognostics, etc.), and how SHM is applied in operations (commercial aircraft, launch operations, logistics, etc.), to subsystems (electrical power, structures, flight controls, etc.) and to system applications (robotic spacecraft, tactical missiles, rotorcraft, etc.).

Language: English
Publisher: Wiley
Release date: Jun 1, 2011
ISBN: 9781119998730

    System Health Management - Stephen B. Johnson

    Part One

    The Socio-technical Context of System Health Management

    Charles D. Mott

    Complete Data Management, USA

    Part One provides an overview of system health management (SHM), its basic theory and concepts, and its relationship to individual and social factors that both enable and constrain its development, usage, and effectiveness.

    The goal of SHM is to improve system dependability, which is the characteristic of a system that causes it to operate as intended for a defined period of time. As such, SHM is a branch of engineering, which is the process used to create useful objects or processes within a set of given requirements and constraints. Engineers design, analyze, build, and operate systems using science and mathematics to reach an acceptable (preferably, the optimal) solution. To build anything but the simplest of objects, engineers work in one or more groups, within which they must communicate and cooperate with each other and with non-engineers to create the system. The system in turn is often operated by non-engineers, whose needs must be taken into account by the engineers to design a system that serves the requirements of its users. The skills and knowledge of the people, the structure of the organization, and the larger society that they operate in all have considerable effects on the system's final form. This part discusses and highlights how these non-technical processes affect system dependability.

    This part starts with the assumptions, concepts, and terminology of SHM theory. This theory makes clear how communication and knowledge sharing are embedded in technology, identifying the primary source of faults as cognitive and communication failures. It also shows that SHM extends systems theory and control theory into the realm of faults and failures.

    The importance of communication and its role in introducing faults into systems is the subject of Chapter 2. Without communication between users, designers, builders, and operators the system cannot be built. Communication is essential to elucidating system requirements and constraints.

    Chapter 3 describes high-reliability organizations. Organizations provide resources, training, and education, and an environment in which systems are created. Organizations can enhance or hinder the communication process.

    Within SHM design and dependable system operation, organizations and individuals communicate and develop knowledge, thus making knowledge management a key aspect of dependable system design. Chapter 4 describes the relationship between knowledge management and SHM, most significantly how knowledge management systems are essentially communication management systems.

    Chapter 5 concludes this part by reviewing the business and economic realities that enable or hinder SHM design. Without an understanding of the costs and benefits of health management systems, they may not be fully utilized, and the dependability of the system may suffer as a result.

    Chapter 1

    The Theory of System Health Management

    Stephen B. Johnson

    NASA Marshall Space Flight Center and University of Colorado at Colorado Springs, USA

    Overview

    This chapter provides an overview of system health management (SHM), and a theoretical framework for SHM that is used throughout the book. SHM includes design and manufacturing techniques as well as operational and managerial methods, and it also involves organizational, communicative, and cognitive features of humans as social beings and as individuals. The chapter will discuss why all of these elements, from the technical to the cognitive and social, are necessary to build dependable human–machine systems. The chapter defines key terms and concepts for SHM, outlines a functional framework and architecture for SHM operations, describes the processes needed to implement SHM in the system lifecycle, and provides a theoretical framework to understand the relationship between the different aspects of the discipline. It then derives from these and the social and cognitive bases some design and operational principles for SHM.

    1.1 Introduction

    System health management (SHM) is defined as the capabilities of a system that preserve the system's ability to function as intended.¹ An equivalent, but much wordier, description is the capability of the system to contain, prevent, detect, diagnose, respond to, and recover from conditions that may interfere with nominal system operations. SHM includes the actions to design, analyze, verify, validate, and operate these system capabilities. It brings together a number of previously separate activities and techniques, all of which separately addressed specific, narrower problems associated with assuring successful system operation. These historically have included analytical methods, technologies, design and manufacturing processes, verification and validation issues, and operational methods. However, SHM is not a purely technical endeavor, because failures largely originate in the organizational, communicative, and cognitive features of humans as social beings and as individuals.

    SHM is intimately linked to the concept of dependability, which refers to the ability of a system to function as intended, and thus SHM refers to the capabilities that provide dependability.² Dependability subsumes or overlaps with other "ilities" such as reliability, maintainability, safety, integrity, and other related terms. Dependability includes quantitative and qualitative features, design as well as operations, prevention as well as mitigation of failures. Psychologically, human trust in a system requires the system to perform consistently according to human intentions. Only then is it perceived as dependable. The engineering discipline that provides dependability we shall call dependability engineering. When applied to a particular application, dependability engineering creates SHM system capabilities. This text could easily have been called Dependability Engineering: With Aerospace Applications. The relationship of dependability engineering to SHM is much like that of aerospace engineering to its application domain, in that there is no aerospace subsystem, but rather a set of system capabilities designed by aerospace engineers, such as the aerodynamic capabilities of lift and drag, mission plans and profiles, and then the coordination of many other subsystems to control the aircraft's dynamics, temperatures, electrical power, avionics, etc. SHM is the name for all of the dependability capabilities, which are embedded in a host of other subsystems.

    Within the National Aeronautics and Space Administration (NASA), a recent alternative term to SHM is fault management (FM), which is defined as the operational capability of a system to contain, prevent, detect, diagnose, respond to, and recover from conditions that may interfere with nominal mission operations. FM addresses what to do when a system becomes unhealthy. To use a medical analogy, FM is equivalent to a patient going to the doctor once the patient is sick, whereas SHM also includes methods to prevent sickness, such as exercise and improved diet, which boost the immune system (improve design margins against failure). For the purposes of this book, FM will be considered the operational aspect of SHM. SHM includes non-operational mechanisms to preserve intended function, such as design margins and quality assurance, as well as operational mechanisms such as fault tolerance and prognostics.

    Major events in the evolution of SHM are given in Table 1.1.

    Table 1.1 Major events in the development of SHM

    The recognition that the many different techniques and technologies shown in Table 1.1 are intimately related and should be integrated has been growing over time. Statistical and quality control methods evolved in World War II to handle the logistics of the massive deployment of technological systems. The extreme environmental and operational conditions of aviation and space drove the creation of systems engineering, reliability, failure modes analysis, and testing methods in the 1950s and 1960s. As aerospace system complexity increased, the opportunity for failures to occur through a variety of causal factors also increased: inadequate design, manufacturing faults, operational mistakes, and unplanned events. This led in the 1970s to the creation of new methods to monitor and respond to system failures, such as the on-board mechanisms for deep-space fault protection on the Voyager project and the Space Shuttle's redundancy management capabilities. By the 1970s and 1980s these technologies and growing system complexity led to the development of formal theory for fault-tolerant computing (Byzantine fault theory), software failure modes and fault tree analyses, diagnostic methods, including directed graphs, and eventually to methods to predict future failures (prognostics). Total quality management, which was in vogue in the late 1980s and early 1990s, was a process-based approach to improve reliability, while software engineers created more sophisticated techniques to detect and test for software design flaws. By the early 2000s, and in particular in response to the Columbia accident of 2003, NASA and the DoD recognized that failures often resulted from a variety of cultural problems within the organizations responsible for operating complex systems, and hence that failure was not a purely technical problem.

    The term system health management evolved from the phrase vehicle health monitoring (VHM), which within the NASA research community in the early 1990s referred to proper selection and use of sensors and software to monitor the health of space vehicles. Engineers soon found the VHM concept deficient in two ways. First, merely monitoring was insufficient, as the point of monitoring was to take action. The word management soon substituted for monitoring to refer to this more active practice. Second, given that vehicles are merely one aspect of the complex human–machine systems, the term system soon replaced vehicle, such that by the mid-1990s, system health management became the most common phrase used to deal with the subject. By the mid-1990s, SHM became integrated SHM (ISHM) within some parts of NASA, which highlighted the relatively unexplored system implementation issues, instead of classical subsystem concerns.

    In the 1980s, the DoD created a set of processes dealing with operational maintenance issues under the title Integrated Diagnostics. The DoD's term referred to the operational issues in trying to detect failures, determine the location of the underlying faults, and repair or replace the failed components. Given that failure symptoms frequently manifested themselves in components that were not the source of the original fault, determining the failure source required integrated diagnostics that examined symptoms across the entire vehicle. By the mid-1990s the DoD was promoting a more general concept of condition-based maintenance (as opposed to schedule-based maintenance), leading to the development of a standard by the early 2000s. By the 2000s enterprise health management was becoming a leading term for the field.

    Another recent term for SHM is prognostics and health management (PHM), though from all appearances the subject matter is identical to SHM, since SHM encompasses prognostics as a specific technique for maintaining system health. The PHM term graces a new PHM society, established in 2009, with its own conferences and an online journal, the International Journal of Prognostics and Health Management.

    Within NASA's Science Mission Directorate, the recognition that on-board design to address failures had become a major cost and schedule issue within science spacecraft programs was highlighted in the Fault Management Workshop of 2008. This workshop led to a set of findings about the typical technical, organizational, cost, and schedule problems associated with these on-board functions, to the institution of a Fault Management Group in the Constellation Program, and to the creation of a Fault Management Handbook, which as of October 2010 is in development. There is still some debate as to the scope of FM versus SHM, whether these are synonyms for each other, or whether FM is the operational subset of SHM. In this book we shall interpret FM as the latter.

    As described in the Foreword, this text emerged from the NASA Marshall Space Flight Center and Ames Research Center-sponsored Forum on Integrated System Health Engineering and Management, held in Napa, California, in November 2005. The book editors decided that the term system health management most concisely captured the major goal of the discipline, to manage the health of the system, through organizational and technical means. As systems by their nature are and should be integrated, the editors decided not to use the term integrated in the title. The goal of this book is to provide SHM practitioners, who are typically expert in one set of techniques and one application area, an educational resource to learn the basics of the related disciplines, issues, and applications of SHM in aerospace. Though the system applications in Part Six are aerospace focused, the rest of the sections are general in nature and apply to many different application areas outside of aerospace. Even the aerospace applications provide very different emphases that have similarities to those of other applications outside of aerospace, such as chemical process industries, electrical power generation and distribution, and computer network applications.

    Organizing SHM as a discipline provides a conceptual framework to organize knowledge about dependable system design and operations. It also heightens awareness of the various techniques to create and operate such systems. The resulting specialization of knowledge will allow for the creation of theories and models of system health and failure, of processes to monitor health and mitigate failure, all with greater depth and understanding than exist in the fall of 2010. We feel this step is necessary, since the disciplines and processes that currently exist, such as reliability theory, systems engineering, management theory and others, practiced separately, have not and cannot separately meet the challenge of our increasingly sophisticated and complex systems. As the depth of SHM knowledge increases, the resulting ideas must be fed back into other disciplines and processes in academic, industrial, and government contexts.

    1.2 Functions, Off-Nominal States, and Causation

    SHM's primary goal is to preserve the system's ability to function as intended. To understand the ramifications of this goal, we must introduce and define a number of concepts, including: system, intended functions, states and behaviors, classes of off-nominal states, and causation of off-nominal states.

    According to the International Council on Systems Engineering, a system is a construct or collection of different elements that together produce results not obtainable by the elements alone (INCOSE, 2010). For engineered systems, these results are the purposes (goals, objectives) for which the system was created. The system's designers and operators intend for the system to perform these goals, and it is these intentions that ultimately define whether the system is performing properly. Intent is defined by anyone that uses or interacts with the system, whether as designer, manufacturer, operator, or user.

    In mathematical terms, a system performs a function y = f(x), where x is the input state vector, y is the output state vector, and f is the system process that transforms the input state into the output state. The system can be divided into subsystems and components, each of which can be described in the same mathematical, functional way. Functions are allocated to mechanisms (which can be hardware, software, or humans), such that the function f can be implemented in several possible ways. Functions are implemented by mechanisms, and their operation is characterized by how they affect system states. The temporal evolution of a state is called a behavior. Behaviors can also be characterized as states, as they simply form a new state vector of individual states from a set of time samples. During system operations, it is not possible to definitively know the system's true state. Instead, operators only have information from which an estimated state is determined. In general, when we refer to states in operations, we mean the estimated state. In analysis and testing, the true state is often assumed or known, such as when a known fault is injected into a system test.
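
    In concrete terms, this functional view can be sketched as follows. This is a minimal illustration in Python, with hypothetical names and an assumed linear system process chosen purely for this sketch; it is not drawn from the book:

        import numpy as np

        def system_process(x):
            """Hypothetical system process f: maps the input state vector x to the output state vector y.
            The linear dynamics matrix A is an assumption made only for this illustration."""
            A = np.array([[1.0, 0.1],
                          [0.0, 0.9]])
            return A @ x

        def estimated_state(y_true, sensor_noise_std=0.05):
            """Operators never know the true state; they see only a noisy estimate of it."""
            rng = np.random.default_rng(0)
            return y_true + rng.normal(0.0, sensor_noise_std, size=y_true.shape)

        x = np.array([1.0, 0.5])      # input state vector
        y = system_process(x)         # true output state (assumed known only in analysis or test)
        y_hat = estimated_state(y)    # estimated state, the basis for all operational decisions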

    A system is considered nominal if the output state vector matches the intentions of the designer and/or operator. A system is off-nominal if it does not. Off-nominal states come in three types: failures, anomalies, and degradations. Failure is the unacceptable performance of the intended function. Degradation is the decreased performance of the intended function. Anomaly is the unexpected performance of the intended function.³

    Contrary to common wisdom, off-nominal states are not definitively specified in a system's requirements or specifications. No single specification or model defines all of the details of what a system is intended to do and how people expect it to behave. This is ultimately due to the fact that each person who interacts with the system has his or her own individual model of the system, which could be formal (a computer simulation or mathematical model) or informal (in a person's mind, based on the person's prior interactions with the system). This model determines how a person translates the (designers' or operators') intentions for the system into individual expectations of current and future system behaviors. It is ultimately an individual, and very frequently a social (or community), decision as to what constitutes an off-nominal state, and there can be significant arguments between groups and individuals about how to classify a given state. Over time, as a particular off-nominal state is investigated, it can be reclassified from nominal, degraded, anomalous, or failed into one of the other categories. This typically happens because people change their minds about the proper classification once they understand the cause(s) of the observed states and behaviors. They then adjust their individual (formal and/or informal) models, leading to a different classification. This reflects normal learning about the system over time during operations.

    It is possible to have states that are anomalous and failed, anomalous and not failed, and failed but not anomalous. For example, the loss of Pioneer 10 due to component wearout of its radioisotope thermal generator was a failure, but there was no anomaly, because the failure was predicted years in advance and happened just as predicted and expected. Conversely, anomalies that are not failures are common, such as a power signature that differs from previous signatures of the same function. An example is a transient fluctuation when a switch is closed that differs from previous times when that same switch was closed. In most cases, failures are also anomalous. This same logic holds for the relationship of anomalies to degradations. Degraded and failed states are both on the same axis of functional acceptability, where degraded is not ideal but still acceptable, but failed is not acceptable. Anomalous behavior is a separate, orthogonal classification axis.
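
    The independence of these two classification axes can be made explicit in a small sketch (Python, with hypothetical names), using the Pioneer 10 and switch-transient examples above:

        from dataclasses import dataclass

        @dataclass
        class StateAssessment:
            """The two classification axes are orthogonal: any combination can occur."""
            acceptability: str   # "nominal", "degraded", or "failed" (functional acceptability axis)
            anomalous: bool      # unexpected relative to the observer's model (expectation axis)

        # Pioneer 10 RTG wearout: predicted years in advance, so failed but not anomalous
        pioneer_10_loss = StateAssessment(acceptability="failed", anomalous=False)

        # Transient power signature when a switch closes: unexpected, yet still acceptable
        switch_transient = StateAssessment(acceptability="nominal", anomalous=True)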

    The typical response to an anomaly is to investigate the behavior so as to determine its cause, because it could indicate component degradation or low-level component failure that manifests as a different behavioral signature than when that component was operating nominally. Investigation of the anomaly leads to three possible outcomes: (1) the anomaly remains not understood, and remains an anomaly; (2) the anomaly is judged as acceptable behavior and is reclassified as nominal; (3) the anomaly is judged unacceptable and is deemed a failure. In the second and third cases, the former anomaly now is understood and expected, and system models used to classify system states are adjusted so that future occurrences of the behavior are automatically classified as nominal or as failures. Two prominent examples of the second case are from the Space Shuttle Program. Both the Shuttle foam strike phenomenon and partial charring of O-rings were initially classified as anomalies and failures according to the Shuttle's initial specifications, but were reclassified over time to be normal behavior, in the sense that they were reclassified from safety to maintenance issues (Vaughan, 1996).⁴ While both of these cases led to tragedies, the fact remains that, for complex systems, there are many behaviors that prior to operations are considered anomalies and/or failures, but are reclassified as normal once flight experience has been acquired and the anomalies and failures are investigated. The issue is not whether classification occurs—it does, all the time—but rather if it is done correctly.

    The basis for reclassification is most often the determination of the cause of the off-nominal state. That is, knowledge of the cause, and projection of the future effects from that cause, determine whether an off-nominal state is acceptable or not, and hence what actions should be taken. Causation of off-nominal behavior is therefore a critical SHM topic. Internal causes of failures are called faults, whereas external causes of failure are simply called external causes. A fault is defined as a physical or logical cause internal to the system, which explains a failure. Simply put, it is an explanation for observed failure effects. If an anomaly exists, and an investigation judges the anomalous behavior as failed and the cause is internal to the system, then the cause is a fault. Conversely, if the behavior is judged as nominal, or as failed but the cause is external, then no fault exists. The term fault in common English has two meanings of significance for SHM: causation and responsibility. Investigation of failure, which is a significant aspect of SHM, addresses both of these concerns. Therefore, it is important for dependability engineering as a discipline to have a definition of fault that encompasses both meanings. The definition of a fault as a cause internal to the system enables both interpretations. For example, if a Mars lander lands on a rock that is too large, it will tip over and fail. In this situation, we would not normally say that it is Mars's fault that the lander tipped over, particularly if the risks were known, low, and acceptable. We would say that it was just bad luck in this case. However, if the operators knew that the region in which the landing was to occur had many such rocks, and that the operators took unnecessary risks by landing there, then there is a fault, which is that the operators made a flawed decision to land at this location.

    Fault and failure are interdependent, recursive concepts of cause and effect. Seen from one perspective, a fault explains a given failure, but from another, that same fault is seen as the failure that needs an explanation. For example, in the Columbia tragedy of 2003, the hole in the leading edge of the wing is the failure that needs explaining, and its cause is a chunk of insulation foam hitting the wing and causing a structural breach. However, from the perspective of the designers of the external tank, the foam falling off the external tank is the failure to be explained, and its cause was air bubbles in the foam insulation. In turn, the air bubbles in the foam can be seen as the failure, and flaws in the foam application process seen as the fault that explains it. This process can continue for quite a long time in a failure investigation, but ultimately the investigation stops and no further causes are sought. The first causes in these long chains of explanation are the root causes—failure is often the result of several root causes that interact. The term root cause is also relative, because as far as one group is concerned, the explanation that satisfies them so that they require no deeper explanation is their root cause. However, another group may not be satisfied with this. For them, the original group's root cause is not a cause at all, but a failure to be explained. When they stop their investigation, their first causes are the root causes. The recursive nature of these terms helps to explain the major difficulties that many groups have had in defining them, but also explains their utility.⁵ Figure 1.1 illustrates the relationship of a number of these concepts.

    Figure 1.1 Concept diagram for major SHM terms

    Human causation of the majority of faults is a key axiom of SHM theory. Human faults, whether individual or social via miscommunication or lack of communication, are the root causes for most failures, other than the relatively small percentage of failures caused by expected system wearout or environmental causes. While comprehensive statistical studies have not been compiled, discussions with those who have compiled aerospace failure databases suggest that the vast majority (most likely 80% or more) of failures are ultimately due to one of two fundamental causes: individual performance failures and social communicative failures. This should come as little surprise. Humans create and operate systems for their own purposes and with their own individual and social processes, and it is human failings in these areas that lead to internal faults in design, manufacturing, or operations. Human causation of the majority of faults is the basis of the Columbia Accident Investigation Board's finding that NASA's culture is deeply implicated in disasters in human spaceflight. In essence, if you search deep enough for root causes, you find most of them are human causes, and thus addressing these individual cognitive and social communicative issues is relevant to reducing the probability of system failure, by reducing the number of faults introduced into the system by humans.

    The results of human faults differ, depending on when they occur in the system lifecycle. Human mistakes in the design phase generally lead to design faults or common mode failures, since they are common to all copies of the system. Mistakes in manufacturing generally lead to faults in single copies of the system. These are typically called random part failure, though the label of random is usually a cover for our inability to find the human fault that is almost always the root cause. In manufacturing, faults can also lead to failures in all copies of the system, but when this is true, the fault is in the design of equipment that manufactures multiple copies, in which case the fault is ultimately a design flaw. Mistakes in operations are generally considered human operational faults and are often blamed on the operators. However, most failures are ultimately due to humans and thus share this fundamental similarity.

    The implication of human causation is that SHM must address all failure causes, whether design faults, manufacturing faults, or operator faults, and that the basic rates of occurrence of these faults are roughly the same due to common human causation.

    1.3 Complexity and Knowledge Limitations

    Humans regularly build systems that produce behaviors that the human designers and operators did not expect or understand prior to their occurrence. This basic fact defines the concept of complexity. We define something as complex when it is beyond the complete understanding of any one individual. In passing, we note that many systems such as the Space Shuttle elude the complete understanding of entire organizations devoted to their care and operation. Because of their complexity, aerospace systems must have several or many people working on them, each of whom specializes in a small portion. The system is subdivided into small chunks, each of which must be simple enough for one person to comprehend. A fundamental limitation on any system design is the proper division of the system into cognitively comprehensible pieces. This is a key facet of systems engineering, though it is not generally expressed in this manner (Johnson, 2003).

    The inability of humans to understand their creations has a specific implication for SHM, which is that SHM engineers must assume that the system has behaviors that nobody understands or expects to occur. Since these behaviors are not expected, they are assumed to be improbable, but logic requires that they be treated as more probable than the usual assumption that they are so improbable they can be ignored, and history indicates that it is nearly certain that some of them will manifest themselves during system operation. Some of these behaviors can cause system failure. The SHM design must include capabilities to address this issue.

    1.4 SHM Mitigation Strategies

    The purpose of SHM is to preserve system functionality, or, stated differently, to control state variables within an acceptable range, in the presence of current or predicted future failures. If the system always functioned perfectly and ideally, SHM would not be necessary. However, over time components degrade due to regular wear and tear, or fail due to other internal or external causes, such that the system's nominal control systems are unable to sustain function. For an active control system, this is often because the sensors, processors, or actuators that the control algorithm assumes are operating normally have in fact failed, rendering the algorithm ineffective. For passive control of a state variable, such as is typical for structures, the structures themselves degrade and fail due to dynamic or thermal cycling. A common example is an aircraft's wings, which have a finite lifetime under standard dynamic loads. In either case, the system's active or passive design margins are exceeded and failure ensues. SHM provides passive capabilities to prevent faults, active capabilities that take over when the regular control system fails, and active capabilities to predict future failure and take action to prevent or delay it.

    To design SHM into the system, the first step must be to identify the system's functions. This is typically accomplished by performing a systems engineering functional decomposition, which defines system functions from the top down. Typically this is represented as a tree structure (a function or success tree) and/or in a functional flow block diagram. The former is a semi-time-independent functional decomposition, whereas the latter is an event sequence diagram, addressing the time and/or mission sequencing of system functions.
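
    Such a decomposition is naturally represented as a tree. The sketch below (Python) shows one minimal way to capture a function/success tree; the launch-vehicle function names are invented for illustration and are not taken from any particular program:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class SystemFunction:
            """One node of a top-down functional decomposition (a function/success tree)."""
            name: str
            children: List["SystemFunction"] = field(default_factory=list)

            def leaf_functions(self) -> List["SystemFunction"]:
                """Leaf functions are those allocated to concrete mechanisms (hardware, software, or humans)."""
                if not self.children:
                    return [self]
                return [leaf for child in self.children for leaf in child.leaf_functions()]

        ascent_thrust = SystemFunction("Provide ascent thrust", [
            SystemFunction("Deliver propellant to the engine"),
            SystemFunction("Control the engine thrust vector"),
        ])

        for f in ascent_thrust.leaf_functions():
            print(f.name)   # each function must be assessed for a function preservation strategy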

    Each system function has the possibility of failure, and the goal to preserve functionality in the face of impending or actual failure implies that each function defined from the system decomposition must be assessed from the standpoint of how that function can be preserved or protected. The SHM designer must determine the function preservation strategy. At the highest level, there are only two choices for preserving function: to prevent failure, or to tolerate failure, as shown in Figure 1.2.

    Figure 1.2 SHM function preservation strategies

    Failure prevention can be accomplished by design-time fault avoidance, or by operational failure avoidance. Design-time fault avoidance means that failure causes are prevented, usually through design margins and quality assurance measures. Operational failure avoidance means operationally predicting when a failure will occur, and taking operational measures to prevent its occurrence. These operational measures can include retirement of the system prior to failure if repair is not possible (thus avoiding a hazardous or undesirable consequence should the system fail, such as retiring an aircraft before its wings fail), alteration of the system's operation to delay the inevitable failure (reducing loads on the stressed components), or repair or replacement of the stressed component before failure (such as schedule-based or condition-based maintenance).

    Failure (or fault) tolerance strategies include failure masking, failure recovery, or goal change. Failure masking is the strategy of allowing a failure in the system, but preventing that failure from compromising the critical function of interest. This is usually performed by detection and containment of the failure before it propagates downstream to the function that cannot be compromised. The failure recovery strategy is possible if the function can be temporarily compromised. In this strategy, a failure occurs and the function is compromised, but it is detected and a response taken that reestablishes control such that the function is once again controlled acceptably, before any system goals are permanently compromised. System goals are not changed in the failure recovery strategy. The goal change strategy is applied when the failure effects are such that the current system functions cannot be maintained. In this strategy, the system switches to less demanding goals than the original set, such as a safing mode to preserve power and reestablish communication with Earth for a robotic spacecraft, or an abort mode to save the crew for a crewed spacecraft.
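
    The strategy taxonomy of Figure 1.2, together with the selection considerations just described, can be summarized in a short sketch (Python). The decision criteria below are simplified assumptions for illustration, not a complete design method:

        from enum import Enum

        class Strategy(Enum):
            DESIGN_TIME_FAULT_AVOIDANCE = "design-time fault avoidance"
            OPERATIONAL_FAILURE_AVOIDANCE = "operational failure avoidance"
            FAILURE_MASKING = "failure masking"
            FAILURE_RECOVERY = "failure recovery"
            GOAL_CHANGE = "goal change"

        def select_strategy(prevent_rather_than_tolerate: bool,
                            failure_predictable_in_operation: bool,
                            function_can_be_briefly_compromised: bool,
                            original_goals_maintainable: bool) -> Strategy:
            """Simplified selection of a function preservation strategy for one system function."""
            if prevent_rather_than_tolerate:
                if failure_predictable_in_operation:
                    return Strategy.OPERATIONAL_FAILURE_AVOIDANCE
                return Strategy.DESIGN_TIME_FAULT_AVOIDANCE
            # otherwise the failure must be tolerated
            if not original_goals_maintainable:
                return Strategy.GOAL_CHANGE
            if function_can_be_briefly_compromised:
                return Strategy.FAILURE_RECOVERY
            return Strategy.FAILURE_MASKING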

    Institutionally, these strategies are typically implemented by several groups. When fault avoidance is selected for a function, the implementation of design margins occurs through design groups, manufacturing, and quality assurance organizations. Failure prediction and operational failure avoidance are typically implemented by operations groups, while the three fault tolerance strategies are implemented by SHM and element designers. Analysis of the effectiveness of these strategies is also split, this time between the SHM and element engineers, operations engineers (availability), and safety and mission assurance (reliability etc.) organizations. The SHM engineer has the primary role at the beginning of a project to determine the function preservation strategies, and then for implementation and analysis of these strategies to the extent that these are not already covered by other organizations. The assessment of the total effectiveness of the strategies in mitigating mission risks is typically split between SHM and reliability analysts.

    1.5 Operational Fault Management Functions

    When the design-time fault avoidance strategy is selected, its implementation leads to appropriate design margins, which are then analyzed for their effectiveness in ensuring appropriate component reliability for the various system components (components in this sense can mean hardware, software, or humans that implement functions). However, SHM engineers are typically not involved with the implementation, though they may be involved with analysis and testing to ensure proper implementation. For all of the other strategies, SHM engineers are involved. It is, in fact, the growing sophistication and pervasiveness of active prediction, operational failure avoidance, and fault tolerance designs that is the primary spur to the development of SHM as a discipline. This section will describe the functions of operational implementation, which is the fault management (FM) subset of SHM.

    Under nominal conditions, the system has passive design margins and active control systems that provide system functionality, by maintaining state variables associated with each function within acceptable bounds. FM is the set of operational capabilities that perform their primary functions when the nominal system design is unable to keep state variables within acceptable bounds. To do this, FM, just like the nominal control system, operates as an active control loop, with the system providing information to functions that detect off-nominal conditions, functions to determine the cause(s) of these conditions, decision functions to determine what to do about these conditions, and actions (responses) that implement these decisions to preserve system function. The detection functions are continually active to monitor state variables and determine when they have reached an off-nominal condition. The diagnostic functions to isolate and identify causes of off-nominal conditions, the decision functions, and response functions to take action to remediate off-nominal conditions do not generally execute unless and until an off-nominal condition exists. Together, these FM loops define new system control regimes, which enable the system to function under a variety of off-nominal conditions. For failed conditions, the FM loops provide capability precisely when the nominal control regime can no longer control state variables. For degraded conditions that will eventually lead to failure, FM loops preempt failures so that the regular control system never loses control. As described above, design-time fault avoidance is not an active control loop, as it is a passive function. It is therefore not part of FM.
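
    The control-loop character of FM can be sketched as follows (Python). The detection test, diagnosis, decision, and response functions here are placeholders standing in for whatever mechanisms a particular design would use:

        def fm_loop_step(estimated_value, intended_value, tolerance,
                         diagnose, decide, respond):
            """One pass of an FM control loop: detect, diagnose, decide, respond.
            Only detection runs continuously; the later functions execute only once an
            off-nominal condition has been declared."""
            control_error = abs(estimated_value - intended_value)
            failure_detected = control_error > tolerance        # failure detection
            if not failure_detected:
                return None                                      # nominal control retains authority
            causes = diagnose(estimated_value)                   # fault isolation / identification
            response = decide(causes)                            # failure response determination
            respond(response)                                    # masking, recovery, goal change, or avoidance
            return response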

    Figure 1.3 illustrates the relationship of FM functions, which operate in characteristic control loops. The diagram also shows the boundaries of some of the common labels under which SHM activities have historically occurred. The FM functions are defined below:

    Anomaly detection: Deciding that an anomaly exists.

    Failure detection: Deciding that a failure exists.

    Failure masking: An action to maintain intended function in the presence of failure.

    Operational failure avoidance: An action to prevent a failure from occurring.

    Failure prognosis: Predicting the time at which a component will fail.

    Failure recovery: An action taken to restore functions necessary to achieve existing or redefined system goals after a failure.

    Failure response determination: Selecting actions to mitigate a current or future failure.

    Fault containment: Preventing a fault from causing further faults.

    Fault identification: Determining the possible causes of off-nominal behavior.

    Fault isolation: Determining the possible locations of the cause of off-nominal behavior, to a defined level of granularity.

    Goal change: An action that alters the system's current set of objectives.

    Model adjustment: Modifying the model of the system upon which expectations of future states and behaviors are based.

    Figure 1.3 Operational FM control loops

    Each FM loop consists of a suite of these functions, and together the entire FM loop must operate faster than the failure effects that the FM loop is intended to mitigate. The FM loop latencies are the sum total of the latencies required to sense, detect, isolate, decide, and respond to a predicted or current failure. These latencies must be less than the time-to-criticality (TTC), which is the amount of time it takes for failure effects to propagate from the failure mode along failure effect propagation paths (FEPPs) to the first critical failure effect (CFE). Both the FM loop latencies and the TTC are based on the physics by which the nominal or failure effects propagate, which can change along the FEPPs and FM loop paths. For example, failure effects propagating through electrons in a wire generally propagate on the order of a few milliseconds for wire lengths typical of aerospace systems, whereas failure effects propagating via fluid flows are typically on the order of several hundred milliseconds via atoms and molecules, and thermal effects in seconds or minutes. There can be multiple FEPPs and, less frequently, multiple FM loop paths for a single fault.

    The CFE is not always the effect at the moment the mission actually fails. Rather, it is some intermediate effect, which, if it occurs, has irrevocably compromised system objectives, even if the ultimate mission failure or degradation will occur sometime further in the future. Consider a loss of propellant failure scenario in the cruise phase of a planetary probe. The effects of the propellant loss may not be ultimately manifested for months or years, when the vehicle must perform orbit operations to gather science data. The relevant time to measure for each FM loop is to the CFE, which in this case is the amount of time, given the rate of propellant loss based on the current (and projected) leak size, until there is no longer enough propellant to meet mission objectives. When several CFEs occur for a given fault, then the CFE of relevance is the one to which the failure effects propagate soonest.
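
    The timing requirement described above reduces to a simple comparison: the summed FM loop latencies must be less than the time-to-criticality of the relevant CFE. A minimal sketch (Python, with invented numbers):

        def fm_loop_latency(sense, detect, isolate, decide, respond):
            """Total FM loop latency is the sum of the latencies of its stages (seconds)."""
            return sense + detect + isolate + decide + respond

        # Hypothetical example: a failure scenario with a 2-second time-to-criticality
        time_to_criticality = 2.0     # seconds until the first critical failure effect
        latency = fm_loop_latency(sense=0.05, detect=0.10, isolate=0.30, decide=0.05, respond=0.50)

        loop_wins_race = latency < time_to_criticality   # True here: the FM loop is fast enough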

    Both the failure effect propagation times and FM loop latencies are complicated by the fact that many of the individual times are statistical in nature, when assessed during the design phase before any specific failure has occurred. For example, in a typical launch vehicle liquid rocket engine, small variations in input conditions can create dramatically different effects due to the nonlinearity of fluid flows. Post-failure analysis tracks a particular failure event sequence with a specific, deterministic time, but, seen prior to the fact, that particular sequence of events and times is only one of many possibilities. The FM designer must ultimately understand the effectiveness of the FM design, which in part is based on the temporal race conditions of the FM loops versus the TTCs for a given failure scenario, summed over all FM loops in all relevant failure scenarios. In complex cases, the analysis is often statistical in nature, based on Monte Carlo runs of physics-based models of the particulars of the system and its failure modes. Sometimes full Monte Carlo runs are unnecessary, such as when a worst-case analysis can be performed to bound the effectiveness estimate. The FM design must provide some quantitative improvement in system reliability and/or availability, and in some cases might only have to meet a given requirement, which could be shown through the worst-case analysis.
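
    A Monte Carlo assessment of this race condition might look like the following sketch (Python). The distributions and parameters are invented for illustration, standing in for physics-based models of a particular system and its failure modes:

        import random

        def fm_loop_wins(rng):
            """Sample one failure scenario: does the FM loop complete before the failure
            effects reach the critical failure effect?"""
            ttc = rng.lognormvariate(0.7, 0.4)                      # time-to-criticality, seconds (assumed distribution)
            loop = sum(rng.uniform(0.02, 0.3) for _ in range(5))    # sense/detect/isolate/decide/respond latencies
            return loop < ttc

        rng = random.Random(42)
        trials = 100_000
        effectiveness = sum(fm_loop_wins(rng) for _ in range(trials)) / trials
        print(f"Estimated probability that the FM loop outraces the failure: {effectiveness:.3f}")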

    1.5.1 Detection Functions and Model Adjustment

    Mitigation of off-nominal states requires that the system be able to detect that off-nominal states exist. Detection of failures and degradations requires a calculation of the deviation between an estimated state and the ideal intended state for a given variable, which we define as a control error in the control theory sense.⁷ The ideal state is associated with the function that is implementing a system objective. Detection of anomalies is different, as it is based on the deviation between an estimated state and the ideal expected state, which is a knowledge error in the control theory sense. Failure and degradation detection signify that the system's behavior is no longer ideal, whereas anomaly detection signifies that the knowledge of the system's behavior is inaccurate. The concept of error is most appropriate for continuous variables. For discrete variables (true or false, 1 or 0), the error is a simple inequality or mismatch between the desired value and the estimated value.
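
    The distinction between a control error and a knowledge error can be made concrete in a short sketch (Python; the thresholds and values are illustrative assumptions):

        def detect(estimated, intended, expected, failure_tol, anomaly_tol):
            """Failure/degradation detection uses the control error (estimate versus intent);
            anomaly detection uses the knowledge error (estimate versus model expectation)."""
            control_error = abs(estimated - intended)      # deviation from the ideal intended state
            knowledge_error = abs(estimated - expected)    # deviation from the ideal expected state
            failure = control_error > failure_tol
            anomaly = knowledge_error > anomaly_tol
            return failure, anomaly

        # A state can be anomalous without being failed, and vice versa:
        print(detect(estimated=4.8, intended=5.0, expected=5.0, failure_tol=0.5, anomaly_tol=0.1))
        # (False, True): within acceptable bounds, but not what the observer's model expected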

    Detection functions generally classify the estimated state as nominal, or as one of the three off-nominal categories: failure, anomaly, or degradation. We explicitly identify two FM detection functions: failure detection and anomaly detection. Though under consideration, we do not currently define a degradation detection function, but degradations must also be identified based on the criterion of decreased performance compared to the ideal. This comparison would be either against the system's original ideal performance value for a function, or against a current or recent performance value, depending on the purpose of classifying a state as degraded. For all three detection types, separation of a nominal from an off-nominal state requires some threshold to separate one from the other, or alternatively some mechanism that allows for a quantitative measurement of comparative nominal or off-nominal characteristics. Most typically a precise threshold value is used.

    Anomaly and failure detections are significantly different functions, which frequently have different implementations. Failure detections can be based on knowledge of specific, known failure states and behaviors based on FMEA-identified failure modes, or they can be designed based on knowledge of what level of function compromise is unacceptable for achievement of the function's objective, regardless of the underlying failure mechanisms. The response to failure detection is ultimately to take some mitigating action. Anomaly detections are of very different design, identifying deviations of current behavior from prior or predicted behavior as the criterion of unexpectedness. These are often based on artificial intelligence, training-based algorithms, but also encompass the engineering judgment of operators. The response to anomaly detection is to investigate the cause of the anomaly. If successful, the investigation classifies the estimated state as nominal, degraded, or failed. If not, the anomaly remains an anomaly. The investigation, if successful, acts as a failure or degradation detection function.

    In many cases, the other result of an anomaly detection is a model adjustment, which modifies the observer's model of system behavior to reflect the new information learned from the investigation. After model adjustment, a future recurrence of the state originally detected as an anomaly would quickly be classified as degraded, failed, or nominal.

    The FM detection and model adjustment functions are all aspects of state estimation, whose purpose is, as the term suggests, to make the best estimate of the true state of the system. As state estimation functions, their effectiveness is measured by false positive and false negative metrics for individual detection mechanisms, and by coverage metrics for the entire suite of failure detections. Put another way, the coverage metric determines what percentage of the total number of system faults (failure modes) the failure detection mechanisms can detect, adjusted by the probability of the various failure modes. The effectiveness metric reduces this fraction using the false positive/false negative (FP/FN) metrics.
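
    One minimal way to compute such a probability-weighted coverage metric is sketched below (Python; the failure modes and probabilities are invented):

        def weighted_coverage(failure_modes):
            """Coverage: the fraction of failure-mode probability caught by some detection mechanism."""
            total = sum(probability for probability, _ in failure_modes)
            detected = sum(probability for probability, is_detected in failure_modes if is_detected)
            return detected / total

        # (probability of the failure mode, whether any failure detection covers it)
        modes = [(0.4, True), (0.3, True), (0.2, False), (0.1, True)]
        print(round(weighted_coverage(modes), 3))   # 0.8: 80% of the failure probability is detectable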

    Off-nominal detection algorithms often include filtering algorithms to separate transient events from permanent ones. These same algorithms can often provide information about the cause of the off-nominal detection. A typical simple example is a three-strike algorithm, which is based on the idea that it is physically impossible for the state variable being measured to change so rapidly, so that if drastic value changes occur, it is very likely an artifact of the physical mechanisms that measure the state variable, or the digital system that is transporting the measurement data, or an environmental effect on these mechanisms, and thus that this measurement is not providing any information about the state variable in question. Under the physical assumption that the rapid change is a single event upset (SEU), it is highly improbable that it will occur twice in a row. Requiring three consecutive off-nominal measurements effectively eliminates the possibility of a false positive resulting from a relatively probable SEU event. If the big jump persists, then there is probably a permanent problem in the digital processing system, as opposed to the state variable that is being measured by the relevant observation/sensor.
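
    A persistence filter of the three-strike kind described above might be sketched as follows (Python). The out-of-bounds test is a simplification, and the function is not taken from any specific flight implementation:

        def three_strike_filter(samples, lower, upper, strikes_required=3):
            """Declare a persistent off-nominal condition only after the measurement has been
            out of bounds for `strikes_required` consecutive samples; shorter excursions are
            treated as transients (e.g., single event upsets) and ignored."""
            strikes = 0
            for value in samples:
                if value < lower or value > upper:
                    strikes += 1
                    if strikes >= strikes_required:
                        return True       # persistent problem declared
                else:
                    strikes = 0           # transient: reset the count
            return False

        print(three_strike_filter([5.0, 95.2, 5.1, 5.0], lower=0.0, upper=10.0))    # False: one-sample transient
        print(three_strike_filter([95.2, 96.0, 94.8, 5.0], lower=0.0, upper=10.0))  # True: three consecutive strikes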

    Model adjustment, if done incorrectly, can lead to disastrous consequences. In well-known cases of system failure such as the Challenger and Columbia accidents, anomalies were incorrectly classified as normal behaviors, instead of as unacceptable failures or simply left classified as anomalies. These model adjustments were called normalization of deviance by Diane Vaughan in her book The Challenger Launch Decision, but they were not recognized as a normal engineering activity, nor under the name of model adjustment. Far from being deviant or incorrect or abnormal, model adjustment occurs all of the time; the question is whether it is done properly. There are many system models. In fact, it is likely that there are as many models of the system as there are people who interact with the system, and then there are the many formal models of the system as well. Thus, model adjustment is an unrecognized and usually haphazard process, which can lead to misunderstandings and, as the Challenger and Columbia accidents teach us, to disaster. Far more attention must be paid to this function in the future than has been done in the past.

    1.5.2 Fault Diagnosis

    Fault diagnosis is the term that encompasses the two FM functions of fault isolation and fault identification. It can be considered as a composite function that aims to determine the location and mechanism of the underlying failure cause(s). Both fault isolation and identification are measured via ambiguity groups, which are groupings of components that cannot be distinguished from each other based on the detection signature provided by the failure detection and/or anomaly detection functions. If a specific set of failure detections or anomaly detections occurs, there are several possible components in which the underlying failure cause may exist, and it is not possible to determine which component among the set contains the causal mechanism.
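
    The notion of an ambiguity group can be illustrated with a small sketch (Python; the detection signatures and component names are invented):

        def ambiguity_group(observed_signature, signature_table):
            """Return the set of components whose predicted detection signature matches the
            observed pattern of detections; if more than one matches, they cannot be
            distinguished and together form an ambiguity group."""
            return {component for component, signature in signature_table.items()
                    if signature == observed_signature}

        # Hypothetical mapping from a candidate faulty component to the detections it would trip
        signature_table = {
            "pump A":    frozenset({"low flow", "low pressure"}),
            "valve 3":   frozenset({"low flow", "low pressure"}),
            "sensor P1": frozenset({"low pressure"}),
        }
        print(ambiguity_group(frozenset({"low flow", "low pressure"}), signature_table))
        # {'pump A', 'valve 3'} (in either order): fault isolation cannot distinguish these two components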

    Fault isolation attempts to answer the question of where the causal mechanism of an off-nominal state exists. The FM usage of the phrase fault isolation as a diagnostic function should not be confused with the use of the same phrase to describe a mechanism to prevent failure effects or causal mechanisms from spreading from one location to another. This usage is common in electrical applications and electrical engineering. In FM terminology, this is called fault containment and/or failure containment, and examples include mechanisms such as optical isolators or breaker circuits. The term fault isolation is historically used in fault management and its predecessors, but is somewhat misleading. The fault isolation function determines the location not just of faults (causes of failure inside the system boundary), but also of environmental failure causes. So it would be somewhat better termed failure cause isolation, though for historical reasons we hold to the commonly used term fault isolation.

    Fault identification (sometimes called fault characterization) attempts to answer the question of what the causal mechanism of an off-nominal state is (or alternatively, why the failure occurred, which is usually explained by identifying the causal mechanism). Its implementation sometimes operates similarly to the fault isolation function, in that automated diagnosis tools use the same techniques of forward and backward tracing of failure effects to determine the possible failure modes that cause the off-nominal state as they use to determine the location of faults. However, fault identification is frequently implemented quite differently than fault isolation, with humans performing tailored analyses to determine off-nominal behavior causes. As with fault isolation, fault identification seeks causes that can be inside the system boundary (faults), or outside the boundary in the environment. It is frequently true that fault identification is not necessary for an effective failure response to be implemented. Often it is only necessary to determine the location of a fault, so as to remove it from the control loop.

    The effectiveness of fault diagnosis functions is measured by ambiguity groups that list the sets of possible causes and locations, along with associated false positive and false negative rates assessed against these ambiguity groups.

    1.5.3 Failure Prognosis

    Prognosis is simply defined as prediction of future states or behaviors, and failure prognosis predicts when failure will occur. Failure prognosis typically uses a formal model to evaluate the future consequences of current system behavior. Current system state, behavioral, and environmental data is fed into the formal model, which is usually (though not always) physics based. Knowing the expected future goals and operations of the system, this model is then used to predict the point in time, or range of times, in which the function performed by these components may be compromised. This information is passed to the failure response determination function, which decides whether to take an operational failure avoidance action, safe the system (goal change), retire the system (such as retire an aircraft before its wing fails), or wait until failure occurs and take a failure response.
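
    A very simple prognosis sketch (Python) is shown below. It assumes a linear degradation trend fitted to recent measurements, whereas real implementations typically use physics-based models as described above; the battery capacity numbers are invented:

        import numpy as np

        def predict_failure_time(times, health_values, failure_threshold):
            """Fit a linear trend to a degrading health measure and extrapolate the time at
            which it crosses the failure threshold. Returns None if no degrading trend is found."""
            slope, intercept = np.polyfit(times, health_values, 1)
            if slope >= 0:
                return None                                  # not degrading under this simple model
            return (failure_threshold - intercept) / slope

        # Hypothetical battery capacity readings (amp-hours) over mission days
        days = np.array([0.0, 30.0, 60.0, 90.0])
        capacity = np.array([100.0, 97.0, 94.1, 91.2])
        print(predict_failure_time(days, capacity, failure_threshold=70.0))
        # ~ day 307: this prediction is passed to failure response determination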

    Prognosis is a particularly important FM function for systems that have long operational lives and failure effects that have long times to criticality (days, weeks, years), in which the deterioration of components can be tracked and fed into the relevant physics-based models. Deep-space probes with multi-year missions monitor key components for deterioration, such as their power sources (degradation of solar panels, batteries, or radioisotope thermal generators). Fleets of aircraft (and, historically, the Space Shuttles) also have strong prognostic programs, often focused on deterioration of the highest-stressed structural components.

    Failure prognosis as an operational FM function should not be confused with design-time analysis and prediction of behavior, the results of which are then built into the automated system. FM control loops must detect failure early enough to initiate and execute a successful failure response prior to the failure propagating to a severe consequence. The failure detection and diagnosis functions in effect must predict the future consequences of this failure so as to determine what response(s) to execute, and when. Thus, they have embedded in them a sense of prognostics, as the logic of the entire FM loop (failure detection, fault isolation, failure response) is based on a built-in predictive capability. If failure A is detected, then in the future, critical system function X will be compromised, and this means that failure response R must be executed now. Despite its predictive content, this example is not normally thought of as failure prognosis, mainly because the prediction is done at design time and not during operations.

    1.5.4 Failure Response Determination

    Failure response determination is the FM decision function that determines appropriate mitigation actions for current or predicted failures. It contains several key sub-functions: functional assessment, identifying failure response options, determining the likely outcomes of those options, prioritizing them, selecting which response(s) to initiate, and notifying the system to implement the selected response(s). Functional assessment determines the compromises to system functionality that are occurring now and will occur in the future, given the current failures, how they propagate, and how they affect the system's ability to meet mission goals. Failure response determination can be implemented through automated mechanisms or by human operators (ground or flight crew). The location of the failure response determination mechanism is intimately linked to the issue of locus of control for a system (i.e., who or what has decision authority to take actions, for nominal or off-nominal purposes).
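    One possible way to realize these sub-functions is sketched below as a simple pipeline; the scoring rule, response options, and outcome estimates are invented for illustration and would in practice come from system models and operational constraints.

```python
# Hedged sketch of failure response determination as a pipeline over the
# sub-functions named above. All data values are invented.

def determine_response(failures, options):
    # 1. Functional assessment: which functions are (or will be) compromised.
    compromised = {f["affected_function"] for f in failures}

    # 2-3. Identify applicable response options and their likely outcomes.
    applicable = [o for o in options if o["restores"] in compromised]

    # 4. Prioritize: protect the most critical function first, then prefer
    #    the option most likely to succeed.
    ranked = sorted(applicable,
                    key=lambda o: (o["criticality"], o["success_probability"]),
                    reverse=True)

    # 5-6. Select the best option and notify the system to implement it.
    return ranked[0]["command"] if ranked else "no_action"

failures = [{"affected_function": "cabin_pressure"}]
options = [
    {"restores": "cabin_pressure", "criticality": 5,
     "success_probability": 0.9, "command": "isolate_leaking_segment"},
    {"restores": "cabin_pressure", "criticality": 5,
     "success_probability": 0.6, "command": "boost_o2_makeup_flow"},
]
print(determine_response(failures, options))  # isolate_leaking_segment
```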

    1.5.5 Failure Response

    Failure response is a composite function that covers four FM functions: goal change, failure recovery, failure masking, and operational failure avoidance. It generically describes actions taken to mitigate the effects of failure.

    1.5.5.1 Goal Change

    Goal change is the action that alters the system's current set of objectives. It can be executed by the system for a variety of reasons and is thus not exclusively an FM function, but FM is one of the primary initiators of this function. In the FM context, a goal change is initiated to regain the system's ability to control the system state (achieve some function) in reaction to a failure. Usually the goal changes to a degraded goal or to a subset of the system's original goals. For example, in spacecraft safing, the current science objectives may be abandoned while the spacecraft retains the goals of maintaining system power and communicating with Earth. In the case of a human-rated launch vehicle, an ascent abort abandons the goal of achieving orbit but protects the goal of keeping the crew safe. For an aircraft, a typical example would be rerouting the flight to an alternate airport or destination.
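    The spacecraft safing example can be pictured as simply restricting the active goal set, as in the hypothetical sketch below; the goal names are invented.

```python
# Minimal illustration of a goal change during safing: the science objectives
# are abandoned while vehicle-preservation goals are retained. Invented names.

nominal_goals = {"collect_science", "downlink_science",
                 "maintain_power", "communicate_with_earth"}

# Safing keeps only the goals needed to preserve the vehicle and allow recovery.
SAFE_MODE_GOALS = {"maintain_power", "communicate_with_earth"}

def safe(current_goals):
    return current_goals & SAFE_MODE_GOALS

print(safe(nominal_goals))  # {'maintain_power', 'communicate_with_earth'}
```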

    1.5.5.2 Failure Recovery

    Failure recovery is the FM function of restoring system functions necessary to achieve existing or redefined system goals after a failure. Failure recovery occurs in two contexts: (1) when the system can temporarily sustain a compromise to function and the failure recovery action is activated without any goal change; and (2) after a goal change (typically safing), to return the system from the safe state back to normal operations. In some cases, normal operation is identical to the operations occurring prior to the failure, with no change of objectives or functions. In other cases, normal operation requires a new goal for the system, typically less demanding than the original goal held before the failure. An example is failure recovery after a safing action, in which safing (a goal change) altered the system's objectives to something achievable, often by abandoning part of the original goal, such as performing only some of the science rather than all of the science objectives. In this case, the failure permanently compromises part of the mission. After the ground crew, flight crew, or the system itself (autonomously) evaluates the situation, it determines which of the original system objectives can still be attained and commands the system into a new configuration with new mission goals and plans.

    The label failure recovery has typically been applied to in-flight operational systems, but not always to maintenance or manufacturing/supportability actions. This usage is too narrow: maintenance actions to repair or replace components after failures are also failure recovery actions. An example is the failure of a launch vehicle component prior to launch, leading to a launch scrub and recycle. The failure recovery in this case may include repair and/or replacement of the failed component, reloading propellant tanks, and recycling the launch sequence to a point where it can be restarted.

    1.5.5.3 Failure Masking

    Failure masking differs from failure recovery in that failure masking is implemented when a system function cannot be compromised even temporarily. In failure masking, a low-level failure propagates effects that are halted before they compromise the critical function. A classic example is the voting mechanism in a set of fault-tolerant computers: a triplex or quad set of computers performs identical operations, and the voting mechanism ensures that if any one computer fails, it is outvoted by the others, so that the output of the vote is always the correct information. The location at which the failure effects stop propagating is often called a failure containment zone (or region) boundary.
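    A minimal sketch of such a voter is shown below, assuming a triplex arrangement; the channel outputs are invented.

```python
# Sketch of a triplex majority voter; the channel output values are invented.

from collections import Counter

def vote(channel_outputs):
    """Return the majority value from redundant channels; a single faulty
    channel is outvoted, so its failure never propagates downstream."""
    value, count = Counter(channel_outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one channel disagrees")
    return value

print(vote([42, 42, 42]))  # all channels healthy -> 42
print(vote([42, 17, 42]))  # one faulty channel, masked -> 42
```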

    1.5.5.4 Operational Failure Avoidance

    Operational failure avoidance is an action taken to prevent a predicted future failure from occurring. Thus it is not a response to a current, existing failure, but to a predicted future one. It differs from failure masking in that failure masking prevents a failure from spreading beyond a certain location, whereas operational failure avoidance prevents the failure from happening in the first place. Whereas fault avoidance is a passive, design-time implementation of design margins and quality assurance mechanisms to prevent faults (and hence failures), operational failure avoidance is an operational action to delay the onset of a predicted failure or stop it altogether. An example is a component that has degraded such that the regular mission profile, which would normally be acceptable, now produces temperatures high enough to accelerate the degradation and cause failure in the near future. Because the system can be operated in a way that avoids these temperatures, the mission operations team changes the mission profile to keep the component in the shade, whereas it would normally have flown in attitudes exposed to the Sun. Reliability-centered and condition-based maintenance are other typical examples of operational failure avoidance.
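    The thermal example might be expressed as a simple planning check, as in the hypothetical sketch below; the temperature limit, attitudes, and thermal model are invented.

```python
# Illustrative sketch of operational failure avoidance for the thermal example:
# if the planned attitude is predicted to overheat the degraded component, the
# profile is revised before the failure can occur. All values are invented.

DEGRADED_COMPONENT_TEMP_LIMIT_C = 45.0

def predicted_component_temp(attitude):
    # Stand-in for a thermal model; sun-facing attitudes run hotter.
    return {"sun_facing": 60.0, "shaded": 30.0}[attitude]

def plan_attitude(nominal_attitude):
    if predicted_component_temp(nominal_attitude) > DEGRADED_COMPONENT_TEMP_LIMIT_C:
        return "shaded"  # revise the mission profile to avoid the failure
    return nominal_attitude

print(plan_attitude("sun_facing"))  # 'shaded'
```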

    1.5.6 Fault and Failure Containment

    Failure masking, fault tolerance, and fault and failure containment are closely linked concepts. To prevent loss of life, loss of the system (or vehicle), or loss of mission, both faults and failures must be contained. The concept of failure containment is easy to understand: failure effects, as they spread along failure effect propagation paths, must be stopped or contained to prevent system or mission failure. The location at which a particular set of failure effects is stopped is called a failure containment zone boundary. The set of failure containment zone boundaries creates a failure containment region, within which certain classes of failure effects are contained.

    Fault containment is a related, but more complex, concept. Its name implies the difference from failure containment—fault containment is defined as preventing a fault (a cause of failure) from causing further faults (further causes of failure). An example best describes the nuances of the concept. Assume that an electrical short circuit occurs in Component A, and the system is designed in such a way that this leads to an overvoltage that propagates to a neighboring Component B, in which the overvoltage creates physical damage and another short circuit. Then assume that further overvoltages are contained so that further components do not experience this condition. Next, a fault diagnosis is performed, leading to the isolation of Component A as the component in which the fault originated. Technicians then replace Component A with a new Component A′. When the system is tested, it still does not function, because Component B has a permanent fault. Only when Component B is also replaced will the system function properly. This is an example of fault propagation, as opposed to merely failure propagation, and in this case fault containment did not exist between Components A and B, for that type of fault and resulting failure effects. One can therefore have fault containment zones that are different from failure containment zones.
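    The Component A / Component B example can be walked through in a few lines of code, as in the illustrative sketch below; the components, the protective boundary, and the damage rule are all invented to mirror the narrative above.

```python
# Toy walk-through of the Component A / Component B narrative: the fault in A
# damages B (fault propagation), but a protective device stops further effects
# at B (the failure containment boundary). All details are invented.

components = {
    "A": {"faulty": True,  "downstream": "B"},   # original short circuit
    "B": {"faulty": False, "downstream": "C"},
    "C": {"faulty": False, "downstream": None},
}

FAILURE_CONTAINMENT_BOUNDARY = "B"

def propagate(start):
    damaged = [start]
    current = start
    while components[current]["downstream"] and current != FAILURE_CONTAINMENT_BOUNDARY:
        current = components[current]["downstream"]
        components[current]["faulty"] = True     # permanent damage: a new fault
        damaged.append(current)
    return damaged

# Both A and B now carry permanent faults, so replacing A alone will not
# restore the system; C is protected by the containment boundary.
print(propagate("A"))  # ['A', 'B']
```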

    If the failure recovery function operates properly and successfully, then failure effects are generally contained, and, for this reason, failure containment is not considered a separate, independent FM function. It is encompassed in the overall process and functions of failure detection, fault isolation, and failure recovery. However, fault containment is a separate issue that must be addressed separately from the containment of failure effects. The prevention of the spread of permanent physical (or
