Appendix A: Engineering Standards Manual ISD 341-2
APPENDIX A
INSTRUMENTED SYSTEMS USED IN SAFETY SIGNIFICANT AND HAZARDOUS PROCESSES DESIGN GUIDANCE (PROGRAMMATIC AND FACILITY)
TABLE OF CONTENTS
1.0 PURPOSE
2.0 SCOPE
3.0 ACRONYMS AND DEFINITIONS
4.0 SYSTEM ARCHITECTURE
5.0 SYSTEM BOUNDARIES / CONSTRAINTS
6.0 SSIS LIFE CYCLE
7.0 DESIGN INPUTS
8.0 DESIGN CRITERIA
9.0 DESIGN VERIFICATION
10.0 BACKFIT ANALYSIS
ATTACHMENT 1: SAFETY INTEGRITY LEVEL ASSIGNMENT METHODOLOGY
ATTACHMENT 2: SAFETY SIGNIFICANT INSTRUMENTED SYSTEM CHECKLIST
RECORD OF REVISIONS
Rev 0 (11/17/03): Initial issue. POC: Mel Burnett, FWO-DECS. OIC: Gurinder Grewal, FWO-DO.
Rev 1 (10/27/06): Administrative changes only: organization and contract reference updates from the LANS transition, IMP and ISD number changes based on the new Conduct of Engineering IMP 341, and other administrative changes. POC: Mike Clemmons, FM&E-DES. OIC: Kirk Christensen, CENG.
1.0 PURPOSE

This appendix provides guidance for the development of performance attributes and design criteria for electrical, electronic, and programmable electronic systems classified as safety significant or as protection layers for hazardous processes. The design criteria for these systems are based on the ANSI/ISA 84.01-1996 standard, which provides a performance-based, graded approach to the design of safety instrumented systems.
2.0 SCOPE

The guidance in this appendix applies only to systems that (1) are identified as a nuclear Safety Significant system, a non-nuclear system that would be considered Safety Significant under the definition in Section 3.0 below, or a safety-related ML-2 system, and (2) require instrumented systems to perform the safety function. Operator actions taken in response to process alarms that place a process in a safe state in order to prevent or mitigate a safety significant risk are covered by this appendix. The appendix does not cover the methods or procedures used to conduct a hazard analysis, perform a risk assessment, develop a risk/consequence-based matrix, identify functional classifications, or identify the means used to prevent and/or mitigate identified hazards.
3.0 ACRONYMS AND DEFINITIONS

3.1 ACRONYMS

3.2 DEFINITIONS
Administrative Control: Provision relating to organization and management, procedures, record keeping, assessment, and reporting necessary to ensure the safe operation of the facility.

Analytical Limit: Limit of a measured or calculated process parameter established by the safety or hazards analysis to ensure that a safety limit is not exceeded.

Backfit Analysis: The process by which an existing SSC is evaluated to determine if it is adequate to perform its upgraded safety function in terms of newly established requirements and safety analyses. Backfit consists of a design assessment and, if needed, a cost-benefit assessment.

Basic Process Control System (BPCS): A system that responds to input signals from the process, its associated equipment, other programmable systems, and/or an operator, and generates output signals causing the process and its associated equipment to operate in the desired manner, but that does not perform any safety instrumented functions.

Common Cause Failure: A single event that causes failure in multiple elements of a system. The initiating event may be either internal or external to the system.

Design Agency: The organization performing the detailed design and analysis of a project or modification.

Design Authority: The person or group responsible for the final acceptability of, and changes to, the design of a system or component and its technical baseline.

Fail-Dangerous Fault: A failure in a system or component that will result in the system/component not performing its safety function.

Fail-Safe: Fail-safe means that on loss of motive force (electrical power, air supply, hydraulics, etc.) the system will go to a safe state and remain in that safe state.

Functional Classification: A graded classification system used to determine minimum requirements for SSCs. The Functional Classifications, in order of precedence, are ML-1 or Safety Class, ML-2 or Safety Significant, and ML-3 or General Service.

Independent Protection Layer (IPL): A system, structure, component, or administrative control that acts to prevent or mitigate a safety significant hazardous event. Independent Protection Layers are sufficiently independent so that the failure of one IPL will not cause the failure of another IPL that is credited with preventing or mitigating the same event.

Layer of Protection Analysis (LOPA): A variation of event tree analysis in which only two outcomes are considered: failure (PFD) or successful operation.

Level of Control (LOC): One or more structures, systems, components, administrative controls, or inherent features (e.g., chemical properties, gravity, physical constants, underground location) that can be readily expected to act to prevent or mitigate a hazardous event.
Management Level 2 (ML-2): Selective application of applicable codes, standards, procedural controls, verification activities, documentation requirements, and a formalized maintenance program (i.e., certain elements may require extensive controls, while others may require only limited control measures). Could include facility work that may require independent review, management approval, and verification of design outputs, and surveillance during procurement, fabrication, installation, assembly, and construction.

Probability of Failure on Demand (PFD): A value that indicates the probability of a system failing to respond to an event for which it is designed. The average probability of a system failing to respond to a demand in a specified time interval is referred to as PFDavg.

Risk Reduction Factor (RRF): The inverse of the Probability of Failure on Demand (1/PFD). The risk reduction factor is a numeric value identifying the amount of reduction in the likelihood of an event occurring.

Safety Integrity Level (SIL): One of three possible discrete integrity levels (SIL 1, SIL 2, and SIL 3) of Safety Significant Instrumented Systems. SILs are defined in terms of Probability of Failure on Demand (PFD) (see Table 3.1).¹
Safety-Related: A term encompassing safety class, safety significant, and those ML-1 and ML-2 SSCs that could potentially impact public or worker safety or the environment in the same way as safety class or safety significant systems, respectively.

Safety Significant (SS): Structures, Systems, and Components that are not designated as Safety-Class SSCs but whose preventive or mitigative function is a major contributor to defense in depth and/or worker safety as determined from safety analyses. [10 CFR 830] As a general rule of thumb, Safety-Significant SSC designations based on worker safety are limited to those Structures, Systems, or Components whose failure is estimated to result in a prompt worker fatality, serious injuries, or significant radiological or chemical exposures to workers. The term "serious injuries," as used in this definition, refers to medical treatment for immediately life-threatening or permanently disabling injuries (e.g., loss of eye, loss of limb).

Safety Significant Functions: SS functions are those functions that have been classified as either SS or ML-2 through the hazards analysis and graded approach.

Safety Significant Hazardous Event: An event involving a source of danger (i.e., material, energy source, or operation) with the potential to cause illness, injury, or death to personnel, or damage to a facility or the environment, and that has a functional classification of SS or ML-2.

Safety Significant Instrumented System (SSIS): An SS system, a safety-related ML-2 system, or a 29 CFR 1910.119 hazardous process independent protection layer that requires instrumentation, logic devices, and final control elements to monitor and detect an SS/ML-2 event, and which will result in automatic or operator action that will bring the facility or process system to a safe state.

TUV: A German-based certification organization that provides certification services to manufacturers of safety instrumentation and safety systems.

¹ From ANSI/ISA-S84.01-1996, Application of Safety Instrumented Systems for the Process Industries.
4.0 SYSTEM ARCHITECTURE

A. A Safety Significant Instrumented System (SSIS) generally consists of three parts. The first part is the sensor(s), which monitors one or more process parameters over a specified range to detect the initiation of a safety significant event. The second part is the logic solver(s), which receives input from the sensor(s) and performs logic and/or math functions to generate a safety (SS) output signal to the final control element(s). The third part is the final control element(s), which performs the actual safety significant action. Figure 1 below provides a block diagram of an automatic SSIS. The components shown in the figure for each part of an SSIS are examples and are not meant to be a complete listing.
Figure 1: SSIS Block Diagram, Automatic Actuation. [Block diagram: Support Systems; Sensor (thermocouples, RTDs, transmitters for pressure, flow, level, etc., analyzers, radiation detectors); Logic Solver; Final Control Element.]
B. An operator can be included in an SSIS where operator actions are required to bring the facility or process system to a safe state. Figure 2 below provides a block diagram of an SSIS that includes operator action. The components shown in the figure are examples and are not meant to be a complete listing.
Figure 2: SSIS Block Diagram, Operator Action. [Block diagram: Support Systems; Sensor (thermocouples, RTDs, transmitters for pressure, flow, level, etc., analyzers, radiation detectors); Logic Solver; Operator; Final Control Element.]
5.0 SYSTEM BOUNDARIES / CONSTRAINTS

6.0 SSIS LIFE CYCLE

[Figure: SSIS Life Cycle flow diagram, including Pre-Conceptual Process Design; Develop Safety Requirements Specification; Perform SSIS Conceptual Design and Verification; Establish Operations & Maintenance Procedures; Pre-Start Readiness Review; Backfit Analysis; and Modify or Decommission? (Modify / SSIS Decommissioning) steps.]
7.0 DESIGN INPUTS

A. The design requirement for the SSIS is established through the determination of a target Safety Integrity Level (SIL). The SIL defines the level of performance needed by the SSIS to reduce the likelihood or consequences of a hazardous event to an acceptable level. ISA 84.01 defines three SILs (SIL-1, SIL-2, and SIL-3); the higher the SIL number, the more likely the SSIS will be available to prevent or mitigate an SS event. The SIL performance requirements, in terms of average probability of failure on demand (PFD) and risk reduction factor (RRF), are as follows:
SIL-1: PFD 10⁻¹ to 10⁻², RRF 10 to 100
SIL-2: PFD 10⁻² to 10⁻³, RRF 100 to 1,000
SIL-3: PFD 10⁻³ to 10⁻⁴, RRF 1,000 to 10,000
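These bands can be expressed as a simple lookup. The following is a minimal Python sketch, provided for illustration only; the function name, the example PFD value, and the choice of which band a boundary value falls into are assumptions, not part of ISA 84.01 or this appendix.

```python
def sil_for_pfd(pfd_avg: float) -> str:
    """Map an average probability of failure on demand to a SIL band.

    Illustrative helper only; band edges follow the table above, with
    boundary values assigned (by assumption) to the less demanding SIL.
    """
    if 1e-4 <= pfd_avg < 1e-3:
        return "SIL-3"
    if 1e-3 <= pfd_avg < 1e-2:
        return "SIL-2"
    if 1e-2 <= pfd_avg < 1e-1:
        return "SIL-1"
    raise ValueError("PFDavg is outside the SIL-1 through SIL-3 bands")

pfd = 4e-3                     # example PFDavg from a verification calculation
rrf = 1.0 / pfd                # risk reduction factor = 1/PFD
print(sil_for_pfd(pfd), f"RRF = {rrf:.0f}")   # prints: SIL-2 RRF = 250
```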
A methodology for determining the Safety Integrity Level for an SSIS is provided in Attachment 1.

B. Once the SIL is established, the next step is to develop the Safety Requirements Specification for the SSIS design. Each SS function is generally unique and requires the identification of a specific set of performance requirements. The performance attributes should be identified and documented for each SS function and provided by the Design Authority to the Design Agency. The following design input information should be considered, along with the information outlined in ANSI/ISA 84.01, Section 5, during the design of an SSIS or designated hazardous process protection layer:

1. Identification of the safety function and definition of the safe state of the process. A complete description of the safety function should be provided, including requirements such as the maximum allowed shutoff valve leakage. If the safe state involves sequencing, the required sequencing should be identified.
2. Required modes of operation.
3. Target SIL of the SSIS.
4. The required operating range and analytical safety limit of the system.
5. The response time required of the system, including time for operator action, from detection of a hazardous event to completion of the final control element action.
6. Environmental/seismic design requirements.
7. Desired system functional test interval.
8. Maximum acceptable nuisance/spurious trip rate.
9. Need for bypasses, manual trip, or reset action by the operator.
10. Interfaces to other systems, including status inputs/outputs to/from other systems.
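As an illustration of how these design inputs might be captured and handed from the Design Authority to the Design Agency, the following Python sketch collects items 1 through 10 into a single record. The class and field names are hypothetical and are not defined by this appendix or by ANSI/ISA 84.01.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyRequirementsSpec:
    """Illustrative container for the SSIS design inputs listed above."""
    safety_function: str                   # 1. safety function and safe state description
    modes_of_operation: list[str]          # 2. required modes of operation
    target_sil: int                        # 3. target SIL (1, 2, or 3)
    operating_range: tuple[float, float]   # 4. required operating range
    analytical_limit: float                # 4. analytical safety limit
    response_time_s: float                 # 5. detection-to-final-element response time, seconds
    environment: str                       # 6. environmental / seismic design requirements
    test_interval_months: int              # 7. desired functional test interval
    max_spurious_trips_per_yr: float       # 8. maximum acceptable nuisance/spurious trip rate
    bypass_and_reset: str                  # 9. bypass, manual trip, and reset provisions
    interfaces: list[str] = field(default_factory=list)  # 10. interfaces to other systems
```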
C. The Design Authority should ensure that the Safety Requirements Specification for the SS function is available to the system designers at the start of the SSIS design. If a single technical agency is responsible for the total system implementation of the function, these inputs can be quantified for the overall function. However, if the design of the system is being performed by a number of technical agencies, the design input for the probability of failure on demand (PFD) and the response time must be quantified for the portion of the design that each technical agency is responsible for completing. Examples of apportioning these inputs among different technical agencies are provided as follows:

1. Response Time Example: The response time requirement for an SSIS has been identified as less than 30 seconds. The design of the sensors and logic solver has been assigned to one design agency, and the design of the final control element (valve) has been assigned to a different technical agency. To meet the required response time, the sensor and logic solver portion of the SSIS should be assigned a response time (e.g., less than 10 seconds) and the final control element should be assigned a response time (e.g., less than 15 seconds). This will allow the SSIS to meet its overall specification.

2. PFD Example: The PFD requirement for an SSIS has been identified as less than 10⁻² (SIL-2). The PFD for the entire SSIS is the sum of the individual PFDs of the sensor, logic solver, and final control element. The design of the sensors and logic solver has been assigned to one design agency, and the design of the final control element (valve) has been assigned to a different design agency. To meet the required PFD, the sensor and logic solver portion of the SSIS should be assigned a PFD (e.g., less than 2 × 10⁻³) and the final control element should be assigned a PFD (e.g., less than 5 × 10⁻³). The sum of these two PFDs (7 × 10⁻³) satisfies the system-level requirement of less than 10⁻². This will allow the SSIS to meet its overall specification (see the sketch following item D below).

D. A checklist is provided in Attachment 2 that should be used as guidance in identifying design inputs, performing the design, and assessing the adequacy of the design. Not all items on the checklist are applicable to every SSIS, and the checklist is not intended to cover all design considerations for all possible configurations. Furthermore, the checklist should not be a substitute for engineering judgment and good engineering practices, and strict adherence to the checklist does not necessarily guarantee a satisfactory design. However, judicious use of the list will increase the probability that a good design will be executed.
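The allocation arithmetic in the examples under item C can be checked mechanically. The Python sketch below uses hypothetical names and the budget values from those examples; it sums the subsystem allocations and compares them against the system-level requirements.

```python
# Allocations from the examples above: sensor + logic solver handled by one
# agency, final control element (valve) by another.
response_time_budget_s = {"sensor_and_logic_solver": 10.0, "final_element": 15.0}
pfd_budget = {"sensor_and_logic_solver": 2e-3, "final_element": 5e-3}

system_response_limit_s = 30.0   # overall SSIS response time requirement
system_pfd_limit = 1e-2          # SIL-2 requirement from the PFD example

total_time = sum(response_time_budget_s.values())   # 25 s
total_pfd = sum(pfd_budget.values())                 # 7e-3

assert total_time < system_response_limit_s, "response time allocation exceeds the system limit"
assert total_pfd < system_pfd_limit, "PFD allocation exceeds the system limit"
print(f"allocated response time = {total_time} s, allocated PFD = {total_pfd:.1e}")
```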
8.0 DESIGN CRITERIA

A. Design criteria and guidelines will vary according to the specific system function and the required SIL. ANSI/ISA 84.01, Annex B, "SIS Design Considerations," provides guidance that should be considered in establishing the design criteria necessary to meet the SIL requirement of a particular SSIS.
B. Systems should be designed so that the most probable failure modes of the system will increase the likelihood of a safe condition for the function. Additionally, systems should be designed to be fail-safe. Appendix 2 of the I&C chapter provides guidance for the fail-safe design of process control loops. Note, however, that a system designed as fail-safe does not necessarily mean that any and all possible failures will result in the system going to a predetermined safe state.

C. The safety significant functions of the system should not be interrupted or compromised by any non-safety significant functions performed by the system or by any other system.

D. As indicated by ANSI/ISA 84.01, Section 7.4.1.3, safety signals should be hard wired and not multiplexed between the logic solver and field devices (sensor, final control element). Multiplexed signals (e.g., networks, data highways) can be used from a logic solver (e.g., PLC) to an alarm device if the alarm device is located in a manned area and operator action is required for the SSIS. Documentation of this configuration must demonstrate that the application meets the design criteria (PFD) for the function.

E. Guidance on routing of safety significant wiring can be found in the I&C Chapter, Section 11.2.

F. The human-machine interface should be designed in accordance with the requirements defined by the Design Authority. The applicable criteria found in the I&C Chapter, Section 10.0, and the guidance provided in Appendix 5 should be considered in the design.

G. An indication that the SSIS has performed the safety function should be provided to the operator. Indications that the SSIS has detected a full or partial system failure (trouble alarm) should also be provided to the operator.

H. Systems that provide motive force (e.g., electrical power, instrument air) should be included as part of the SSIS evaluation only where they are required to complete the SS function.

I. A certification should be provided for any safety PLC used in an SSIS. TUV and FM provide certification for components used in safety instrumented systems in the process industry. Note, however, that not all components certified to the same safety level (PFD) are equivalent. The certification reports list restrictions on the operating conditions and the configuration of the components needed to achieve a specific PFD or SIL. A certification report must be reviewed in its entirety to assure that components can be used in the selected design configuration to achieve the target SIL for the SSIS.

J. As identified in Section 7.0, Item D, a checklist is provided in Attachment 2 that should be considered during the design process. An additional checklist for generic I&C systems is contained in Appendix C of the I&C chapter. This checklist should also be considered and used, as appropriate.
9.0 DESIGN VERIFICATION

A. The required probability of failure on demand (PFD) is one of the key attributes that should be specified for safety significant functions through the SIL evaluation process. It is essential that the PFD of the SSIS be verified to assure that the SSIS as designed, installed, operated, and maintained meets the target SIL specified for the system. The verification of the SSIS PFD should be conducted during conceptual design, in order to develop the SSIS design, and again at the end of detailed design.

B. The PFD of an SSIS should be verified by the application of Reliability Block Diagrams, Fault Tree Analysis, or Markov Models. Fault Tree Analysis is the preferred method for determining the PFD of the installed SSIS. Further guidance on the analysis of an SSIS can be obtained from draft ISA technical report TR84.00.02. (A simplified, illustrative PFD estimate is sketched after item E below.)

C. An analysis team knowledgeable of the design being evaluated and of ISA 84.01 should be convened to initiate the Fault Tree Analysis. The team typically consists of Design Agency, Design Authority, and Safety Analysis engineers and a Fault Tree analyst. The team should agree on fault mechanisms, common mode failures, appropriate assumptions, etc., to be used to complete a preliminary Engineering Calculation based on the preliminary design of the SSIS. Other tools may be used to evaluate the PFD of the preliminary design. When the detailed design is complete, the team should reconvene to confirm the Calculation based on the final detailed design.

D. Once an Engineering Calculation is completed for a final SSIS design, the calculation is maintained as a supporting document to the Authorization Basis for the facility.

E. At the end of the detailed design phase, the trip setpoint for the SSIS should be calculated. ANSI/ISA 67.04.01 should be used to establish the required trip setpoint of the safety function. The actual calibrated setpoint should provide sufficient allowance between the analytical limit and the calibrated instrument trip setpoint to account for uncertainties and dynamic responses.
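For orientation only, the sketch below shows a much-simplified PFDavg estimate for a non-redundant SSIS, using the common single-channel approximation PFDavg ≈ λD × TI / 2 and summing the sensor, logic solver, and final element contributions. The component failure rates and test interval are placeholder assumptions; an actual verification should follow the Fault Tree Analysis (or Reliability Block Diagram or Markov) approach described above, per ISA TR84.00.02, and account for redundancy, common cause, diagnostics, and repair.

```python
# Minimal sketch of a simplified PFDavg check for a 1oo1 (non-redundant) SSIS.
# Failure rates and the test interval below are placeholders, not recommended
# values.

HOURS_PER_YEAR = 8760.0

def pfd_avg_1oo1(lambda_d_per_hr: float, test_interval_yr: float) -> float:
    """Approximate average PFD of a single periodically proof-tested channel:
    PFDavg ~ lambda_D * TI / 2."""
    return lambda_d_per_hr * test_interval_yr * HOURS_PER_YEAR / 2.0

# Dangerous (undetected) failure rates per hour -- placeholder values.
components = {
    "pressure transmitter": 2.0e-7,
    "safety PLC (logic solver)": 5.0e-8,
    "shutoff valve and actuator": 5.0e-7,
}
test_interval_yr = 1.0   # annual proof test assumed

system_pfd = sum(pfd_avg_1oo1(lam, test_interval_yr) for lam in components.values())
print(f"estimated SSIS PFDavg = {system_pfd:.2e}")   # ~3.3e-3, within the SIL-2 band
```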
10.0 BACKFIT ANALYSIS

ATTACHMENT 1: SAFETY INTEGRITY LEVEL ASSIGNMENT METHODOLOGY
[Table: Event frequency versus radiological and chemical consequence risk matrix.]

Event frequency bins:
Probable (Likely): < 10⁰/yr to > 10⁻²/yr
Occasional (Unlikely): < 10⁻²/yr to > 10⁻⁴/yr
Improbable (Extremely Unlikely): < 10⁻⁴/yr to > 10⁻⁶/yr
Remote (Beyond Extremely Unlikely): < 10⁻⁶/yr

Consequence levels (radiological / chemical):
High: > 25 Rem TEDE; > ERPG-2
Medium: > 5 Rem to < 25 Rem; > ERPG-1 to < ERPG-2
Low: > 0.1 Rem to < 5 Rem; measurable to < ERPG-1
Negligible: < 0.1 Rem; < measurable
None
[Table: Event frequency versus worker safety consequence risk matrix.]

Event frequency bins:
Probable (Likely): < 10⁰/yr to > 10⁻²/yr
Occasional (Unlikely): < 10⁻²/yr to > 10⁻⁴/yr
Improbable (Extremely Unlikely): < 10⁻⁴/yr to > 10⁻⁶/yr
Remote (Beyond Extremely Unlikely): < 10⁻⁶/yr

Consequence levels (worker safety):
High: Immediate health effects or loss of life
Medium: Long-term health effects, disability, or severe injury (non-life-threatening)
Low: Lost-time injury but no disability (work restriction)
Negligible: Minor injury with no disability and no work restriction
None
[Table: Target SIL assignment matrix. Rows: unmitigated event frequency (Occasional (Unlikely): < 10⁻²/yr to > 10⁻⁴/yr; Improbable (Extremely Unlikely): < 10⁻⁴/yr to > 10⁻⁶/yr; Remote (Beyond Extremely Unlikely): < 10⁻⁶/yr). Columns: consequence (High: > 25 Rem TEDE, > ERPG-2; Medium: > 5 Rem to < 25 Rem, > ERPG-1 to < ERPG-2; Low: > 0.1 Rem to < 5 Rem, measurable to < ERPG-1; Negligible: < 0.1 Rem, < measurable). Arrows in the matrix indicate the risk reduction ranges for SIL-1 through SIL-3.]

Note: The arrows represent the range of risk reduction provided by the different SILs. The solid portion of each arrow represents the minimum risk reduction provided by the designated SIL. The dotted portion of each arrow represents the minimum and maximum range of risk reduction that can be achieved by the SIL.
Determination of a Target SIL for an SSIS using a Layer of Protection Analysis (LOPA)
Each of the protective features identified within the hazard analysis as a primary (1st Level of Control) is considered a layer of protection. The hazard analysis process should quantify the expected effectiveness of the layers of protection that are not SSISs in terms of Probability of Failure on Demand (PFD) or availability. If the hazard analysis does not quantify the required PFD or availability of a credited SS system or control, then this must be determined separately before the SSIS SIL can be assigned. Where a system already exists and has been designated as a preventive or mitigative feature, verification of its safety availability is required to determine its effective PFD.

A Layer of Protection Analysis (LOPA) is used to determine the required SIL of the SSIS. A LOPA is a form of risk assessment, similar to an event tree analysis, in which two outcomes are considered: failure (PFD) or successful operation. The frequency of the unmitigated hazardous event in question is the starting point of the LOPA. If the hazard analysis process identifies a specific event frequency for a hazard, then this value should be used in the LOPA calculation. However, where a qualitative analysis provides the unmitigated event frequency in terms of Probable, Occasional, or Improbable, the midpoint of the frequency range for the respective bin should be used as listed below, provided that the analysis is conservative:

Probable (Likely): 10⁻¹/yr
Occasional (Unlikely): 10⁻³/yr
Improbable (Extremely Unlikely): 10⁻⁵/yr
A Basic Process Control System (BPCS) may be used, in combination with the assigned unmitigated event frequency, to calculate a mitigated event frequency for an SS hazardous event. However, the following conditions must be met to allow the inclusion of the BPCS in the event frequency calculation:

1. The failure of the BPCS is not the initiating or contributing cause of the event.
2. The BPCS must be designed to function during the event, including the environmental conditions for which it is credited for operation.
3. The risk reduction factor claimed for the BPCS must be ten or less (PFD ≥ 10⁻¹).
4. SSCs that monitor initial conditions and are credited in a LOPA analysis for reducing or establishing the initial event frequency cannot be part of a BPCS that is also credited in the LOPA analysis. The BPCS must be independent of the event initiator and of other Layers of Protection.

Once the event frequency has been established, the LOPA process consists of the identification of each Independent Protection Layer (IPL) and an evaluation of the effectiveness that each has in preventing and/or mitigating the SS or designated ML-2 hazardous event. Independent Protection Layers (IPLs) may include but are not limited to: (1) design features such as siting, containment, confinement, and shielding; (2) administrative controls that restrict deviations from safe operations through operating procedures or limiting conditions of operation; (3) mechanical or process systems; and (4) an SSIS. Note: Administrative controls require consideration of the human interface in sensing conditions and performing functions.
1. The IPL must be designed to prevent an SS hazardous event, or to mitigate the consequences of such an event to an acceptable level.
2. A structure, system, or component that is classified as safety class or safety significant, a TSR administrative control, or another SSC that is adequately identified and controlled in the requirements of the Authorization Basis of a facility can be considered an IPL.
3. The IPL is designed to perform its safety function during the normal, abnormal, and design basis accident environmental conditions for which it is required to operate.
4. IPLs must be sufficiently independent so that the failure of one IPL does not adversely affect the probability of failure of another IPL.

If some combination of components or systems is required to function together to protect a worker or the public, they should be considered as one IPL. Thus, if it takes two out of three components to function, or a series of components to operate, to protect a worker, then the combination of SSCs constitutes one IPL. The IPL may not by itself reduce the risk of the hazardous event occurring to an acceptable level, but it will prevent or mitigate the event to an acceptable level when it works.

Given that the IPL design follows the above rules, the SS hazardous event and the IPL failures can be treated as statistically independent occurrences. Thus, both the hazardous event and a failure of all IPLs must occur before there is an unacceptable result. If any of the IPLs functions, then the event will be prevented or mitigated to an acceptable level. LOPA is based on calculating the probability of a series of independent events occurring: the event must occur and all IPLs must fail in order for the hazardous event to affect the workers or public. As an example probability calculation, the probability (likelihood) of having three IPLs (A, B, and C) all fail is shown below (three-input AND gate):

P(ABC) = P(A) × P(B) × P(C)
where P(Z) is the PFD of IPL Z.

The goal of an IPL is to prevent a hazardous event from occurring or to mitigate the hazardous event to an acceptable level. Whether the IPL is designed to prevent or mitigate the event, the PFD of the IPL is used in the LOPA calculation. The probability of the undesired consequence is the product of the unmitigated event frequency and the PFDs of all of the separate independent protection layers. The SSIS SIL calculated from the LOPA analysis should provide the capability of a one-half decade risk reduction beyond the minimum risk reduction required to reduce the likelihood of the event by an acceptable margin. This one-half decade risk reduction, beyond that minimally required to place the event into an acceptable risk bin, provides a degree of assurance against uncertainties in event frequencies, component failure rates, and other terms used in the calculation to verify the target SSIS SIL.
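The LOPA arithmetic described above, including the midpoint frequency bins and the optional one-half decade margin, can be reduced to a short calculation. The Python sketch below is illustrative only; the function and variable names are assumptions, and the demonstration values mirror Example 1, which follows. A credited BPCS PFD can be included in the list of other-layer PFDs, subject to the conditions listed earlier.

```python
import math

# Midpoint frequencies for qualitative bins (per year), as listed above.
BIN_MIDPOINT = {"probable": 1e-1, "occasional": 1e-3, "improbable": 1e-5}

def required_ssis_pfd(unmitigated_freq_per_yr, target_freq_per_yr,
                      other_layer_pfds, half_decade_margin=True):
    """Required SSIS PFD so that
    f_unmitigated * PFD_SSIS * prod(other layer PFDs) < f_target.
    With the optional margin, the result is tightened by one-half decade."""
    pfd = target_freq_per_yr / (unmitigated_freq_per_yr * math.prod(other_layer_pfds))
    return pfd * 10 ** -0.5 if half_decade_margin else pfd

# Values similar to Example 1 below: anticipated event (1e-1/yr), an SSC IPL at
# 1e-2, an administrative control at 1e-1, and a target of 1e-6/yr.
pfd_needed = required_ssis_pfd(BIN_MIDPOINT["probable"], 1e-6, [1e-2, 1e-1],
                               half_decade_margin=False)
print(f"required SSIS PFD < {pfd_needed:.0e}")   # 1e-02, i.e., a SIL-2 SSIS
```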
Example 1: The following is a target SIL determination for a hazardous event that is anticipated with high consequences. The design uses three Independent Protection Layers (IPLs) to reduce the likelihood of the event to less than 10⁻⁶/yr.
[Event tree: unmitigated event (10⁻¹/yr); IPL-1 (SSC), PFD = 10⁻²; IPL-2 (SSIS), PFD = ???; IPL-3 (AC), PFD = 10⁻¹. The event is unacceptable only if all three IPLs fail; otherwise there is no impact.]
Calculation:
Frequency × PFD(IPL-1) × PFD(IPL-2) × PFD(IPL-3) < 10⁻⁶/yr
(10⁻¹/yr) × (10⁻²) × PFD(IPL-2) × (10⁻¹) < 10⁻⁶/yr
(10⁻⁴/yr) × PFD(IPL-2) < 10⁻⁶/yr
PFD(IPL-2) < 10⁻²

Thus, the SSIS (IPL-2) should be designed as SIL-2 (PFD: 10⁻² to 10⁻³). A SIL-2 SSIS, in combination with the other IPLs, has the capability to reduce the likelihood of the unacceptable consequences of this event to 10⁻⁷/yr.
Example 2: The following is a target SIL determination for a hazardous event that is unlikely with medium consequences. The design uses only one Independent Protection Layer (IPL) to reduce the likelihood of the event to less than 10⁻⁴/yr.
[Event tree: unmitigated event (10⁻³/yr); IPL-1 (SSIS), PFD = ???. The event is unacceptable only if the IPL fails; otherwise there is no impact.]
Calculation:
Frequency × PFD(IPL-1) < 10⁻⁴/yr
(10⁻³/yr) × PFD(IPL-1) < 10⁻⁴/yr
PFD(IPL-1) < 10⁻¹

Thus, the SSIS (IPL-1) should be designed as SIL-1 (PFD: 10⁻¹ to 10⁻²). A SIL-1 SSIS has the capability to reduce the likelihood of the unacceptable consequences of this event to 10⁻⁵/yr.
Example 3: The following is a target SIL determination for a hazardous event that is unlikely with high consequences. The design uses three Independent Protection Layers (IPLs) to reduce the likelihood of the event to less than 10⁻⁶/yr.
[Event tree: unmitigated event (10⁻³/yr); IPL-1 (SSC), PFD = 10⁻²; IPL-2 (SSIS), PFD = ???; IPL-3 (AC), PFD = 5 × 10⁻². The event is unacceptable only if all three IPLs fail; otherwise there is no impact.]
Calculation:
Frequency × PFD(IPL-1) × PFD(IPL-2) × PFD(IPL-3) < 10⁻⁶/yr
(10⁻³/yr) × (10⁻²) × PFD(IPL-2) × (5 × 10⁻²) < 10⁻⁶/yr
(5 × 10⁻⁷/yr) × PFD(IPL-2) < 10⁻⁶/yr
5 × 10⁻⁷/yr < 10⁻⁶/yr
The SSIS providing IPL-2 in this example is not required, because IPL-1 and IPL-3, in conjunction with the event frequency, provide a combined risk reduction that achieves the goal of reducing the likelihood of the event. If IPL-1 (SSC) were an instrumented system, it would be designated as an SSIS.
Example 4: The following is a target SIL determination for a hazardous event that is anticipated with high consequences. The design takes credit for the operation of the Basic Process Control System (BPCS), which if operating would prevent the event condition from occurring, in addition to two Independent Protection Layers (IPLs), to reduce the likelihood of the event to less than 10⁻⁶/yr.
[Event tree: unmitigated event (10⁻¹/yr) with BPCS credit (PFD = 10⁻¹); IPL-1 (SSC), PFD = 10⁻²; IPL-2 (SSIS), PFD = ???. The event is unacceptable only if the BPCS and both IPLs fail; otherwise there is no impact.]
Calculation:
Frequency × PFD(BPCS) × PFD(IPL-1) × PFD(IPL-2) < 10⁻⁶/yr
(10⁻¹/yr) × (10⁻¹) × (10⁻²) × PFD(IPL-2) < 10⁻⁶/yr
(10⁻⁴/yr) × PFD(IPL-2) < 10⁻⁶/yr
PFD(IPL-2) < 10⁻²

Thus, the SSIS (IPL-2) should be designed as SIL-2 (PFD: 10⁻² to 10⁻³). A SIL-2 SSIS, in combination with IPL-1, has the capability to reduce the likelihood of the unacceptable consequences of this event to 10⁻⁷/yr.
ATTACHMENT 2: SAFETY SIGNIFICANT INSTRUMENTED SYSTEM CHECKLIST

(Each checklist item is answered Yes, No, or N/A.)
Design Input

Safety Significant (SS) Design Input
1. Has the Hazard Analysis identified the consequence and event frequency for each SS function?
2. Has the Safety Integrity Level been assigned for each SS function?
3. Has a time response been assigned for each SS function?
4. Has a setpoint and range been assigned for each SSIS process parameter?

Conceptual Design
1. Has the Safety Integrity Level been verified for each SS function?
2. Did an independent assessor review the conceptual design?
Note: An independent assessor is considered to be any qualified individual competent enough to have prepared the design but sufficiently independent such that they are not verifying their own design.

Detailed Design

Operator Interface
1. Are controls and displays adequate, effective, and suitable for operator tasks?
2. Is the SSIS operation consistent with existing systems, established conventions, and operator experience?
3. Do separate displays present consistent information?
4. Does the indication at the operator display show information that is consistent with the related control action or process response to a control or safety action?
5. Is displayed information readable, concise, complete, and usable without extrapolation?
6. Is adequate information about normal and upset conditions displayed?
7. Is display failure readily apparent?
8. Are instruments located at recommended height and reach limits?
9. Are critical alarms obvious to an operator?
10. Are related controls, displays, and alarms grouped together?
11. Is manual initiation of the SSIS provided?
Sensors
1. Is sensor redundancy employed?
2. If identical redundancy is employed, has the potential for common cause failure been adequately addressed?
3. Are redundant sensors installed with adequate physical separation?
4. Does each sensor have dedicated wiring to the SSIS I/O modules?
5. Does each sensor have a dedicated process tap?
6. Does the configuration allow each sensor to be independently proof tested?
7. Can redundant sensors be tested or maintained without reducing the integrity of the SSIS?
8. Is diversity used?
   8.1 Are diverse parameters measured?
   8.2 Are diverse means of processing specified?
9. Is there sufficient independence of hardware manufacturer?
10. Is there sufficient independence of hardware test methods?
11. Are sensor/instrument sensing lines adequately purged or heat traced to prevent plugging?
12. Are SSIS sensors clearly identified by some means (tagging, paint, etc.) as components of the SSIS?
13. Has the mean time to dangerous failure rate been determined for each sensor?

Logic Solver
1. Does the logic solver have methods to protect against fail-dangerous faults?
2. Is the logic solver a fault-tolerant device?
3. Is the logic solver separated from the Basic Process Control System?
4. Are all SS functions combined in a single logic solver?
5. Is the logic solver TUV or FM certified for the application?
Application Software
1. Is the final program verified through factory acceptance testing that includes fault simulation?
2. Is the final program verified through complete site acceptance testing that includes verification of startup, operation, and testing algorithms?
3. Has software met design criteria?

Actuators
1. Are backup power sources provided?
2. Are manual actuators safely and easily accessible?

Final Elements
1. Have the final elements been checked to ensure proper sizing and application?
2. Have the final elements been checked to ensure that the devices achieve the fail-safe condition?
3. Has the mean time to dangerous failure rate been determined for each final element?

Process Connections
1. Are process connections properly installed to prevent process fouling?
2. Are process connections installed correctly for the device type and process?
3. Are sensor process isolation valves associated with the SSIS properly marked?
10. Is there an Uninterruptible Power Supply (UPS) for the SSIS?
    10.1 Is it periodically tested?
11. Are primary and backup supplies powered from independent busses?
12. Can redundant supplies be taken out of service for maintenance without interrupting SSIS operation?
13. Is the SSIS properly grounded?
14. Is the SSIS hardware consistent with the area electrical classification?
15. Are the power supplies adequately protected from ground faults or other voltage disturbances?
Pneumatic Supply
1. Is the pneumatic supply source clean and reliable?
2. Have the consequences of loss of pneumatic supply been considered?
3. Have the consequences of over-pressure of the pneumatic supply been considered?

Hydraulic Supply
1. Is the hydraulic supply source clean and reliable?
2. Have the consequences of loss of hydraulic power been considered?
3. Have the consequences of over-pressure of the hydraulic power been considered?

Environmental
1. Have the effects of RFI on the SSIS devices been considered?
2. Are the devices being used within the manufacturer's environmental specifications?
3. Have sources of excessive vibration been eliminated or mitigated?
4. Have sources of excessive temperature been eliminated or mitigated?
5. Have all SSIS seismic requirements been achieved?
6. Have the effects of the total integrated radiation dose on components been considered?
7. Have all SSIS component environmental requirements been achieved?
Installation / Operation

Installation
1. Have external causes of common cause failure been identified (e.g., fire, vehicle impact, lightning, etc.)?
2. Is the SSIS segregated from other systems to minimize the probability of external influences causing a simultaneous failure of the systems?
3. Is there sufficient separation in the installation of diverse equipment?

Operation
1. Are operators provided separate, specific SS procedures?
2. Are operators provided specific training relative to the SS system?
3. Are operators being evaluated for competency in SS operation on a regular basis?

Testing / Maintenance

Testing
1. Does the periodic test interval for the SSIS and components meet the SIL verification assumptions?
2. If a component fails under test, is the failure cause established to identify manufacturing or design defects?
3. If a redundant element fails, do procedures require the inspection of other elements for similar faults?
4. Is there adequate independence of testing methods for diverse systems?

Maintenance
1. Are maintenance bypasses alarmed to the control room?
2. Are operators trained on what to monitor when maintenance bypasses are used?