
NEWCASTLE UNIVERSITY IN SINGAPORE

A DECISION-MAKING TOOL FOR


REAL-TIME PREDICTION OF
DYNAMIC POSITIONING RELIABILITY INDEX

CHARLES FERNANDEZ

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF


PHILOSOPHY

FACULTY OF SCIENCE, AGRICULTURE AND ENGINEERING


NEWCASTLE UNIVERSITY

SINGAPORE
July 2021
ABSTRACT
The Dynamic Positioning (DP) System is a complex system with significant levels of
integration between many sub-systems to perform diverse control functions. The extent of
information managed by each sub-system is enormous. The sophisticated level of integration
between sub-systems creates an array of possible failure scenarios. A systematic analysis of all failure scenarios would be time-consuming, and handling any such catastrophic situation is hugely demanding for an operator. There have been many accidents in which a failure in a DP system has resulted in fatalities and environmental pollution. Therefore, the reliability assessment of a DP system is critical for safe and efficient operation. The existing methods are time-consuming and involve considerable human effort, which imposes built-in uncertainty and risk in the system during complex operations.

This thesis has proposed a framework for a state-of-the-art decision-making tool to assist an
operator and prevent incidents by introducing a new concept of Dynamic Positioning –
Reliability Index (DP-RI). The DP-RI concept covers three phases, built on the progression from Data to Knowledge to Intelligence to Action, and leads to technical suggestions for the operator during complex operations. The proposed framework covers descriptive, diagnostic, predictive and prescriptive analytics. The first phase of the research
involves descriptive and diagnostic analytics by performing big data analytics on the available
databases to identify the sub-systems which play critical roles in DP system functionality. The
second phase of the research involves a novel approach where predictive analytics are used for
the weight assignment of the sub-systems, dynamic reliability modelling and offline and real-
time forecasting of DP-RI. The third phase introduces innovative prescriptive analytics to
provide possible technical solutions to the operator in a short time during failures in the system
to enable them to respond quickly and prevent DP incidents. Thus, the DP-RI acts as an
innovative state-of-the-art decision-making tool which can suggest possible solutions to the
DPO by applying analytics to the knowledge database. The results demonstrated that it would be a useful tool if implemented on an actual vessel with careful integration into the DP control system.

DECLARATION
This is to certify that
• the thesis comprises only my original work towards the PhD,
• due acknowledgement has been made in the text to all other material used,
• references to the author’s own work published in conference proceedings are indicated appropriately,
• the thesis is approximately 64,000 words in length, exclusive of tables, references and
appendices.

Charles Fernandez
Faculty of Science, Agriculture and Engineering
Newcastle University Upon Tyne
July 2021

STATEMENT OF ORIGINALITY
The contents of this PhD thesis are the results of original research unless otherwise stated and have not been submitted for a higher degree at any other university or institution. The results presented in this thesis were obtained during the author's candidature under the supervision of Dr. Arun Dev, Dr. Rosemary Norman and Dr. Wai Lok Woo. The database used for the research was collected from real projects performed by DNV GL Singapore Pte. Ltd and is confidential information. For validation purposes, data from DP 3 vessels (a semi-submersible drilling rig and a drillship), obtained during the construction and operation phases and reflecting actual offshore operating conditions, were used. The PhD provided me with the ability to conceive and implement methodologies and to demonstrate an understanding of how to conduct research at the forefront of digital transformation in the Marine, Offshore, Oil and Gas industry.

The material presented in this thesis covers all results from the following conference
proceedings papers and journal papers which are under review:
[1] Fernandez, C., Shashi Bhushan, K., Woo, W.L., Norman, R. and Dev, A.K. ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering, Madrid, Spain, 17-22 June 2018, pp. 1-10. 10.1115/OMAE2018-77267.
[2] Fernandez, C., Dev, A.K., Norman, R., Woo, W.L. and Shashi Bhushan, K. ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering, Glasgow, Scotland, UK, 9-14 June 2019. 10.1115/OMAE2019-95485.
[3] Fernandez, C., Dev, A.K., Norman, R., Woo, W.L. and Shashi Bhushan, K. ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering, Virtual Conference, 3-7 August 2020. OMAE2020-18844.
[4] Fernandez, C., Dev, A.K., Norman, R., Woo, W.L. and Shashi Bhushan, K. “A novel decision-making tool using Long Short-Term Memory (LSTM) for time series prediction of Dynamic Positioning Reliability Index (DP-RI)”. Ocean Engineering. Manuscript Number: OE-D-21-00456. Submitted in February 2021; under review.
[5] Fernandez, C., Dev, A.K., Norman, R., Woo, W.L. and Shashi Bhushan, K. “Resilience Management in Dynamic Positioning Vessels using a Prescriptive Analytics BERT Model as a Question Answering System”. Knowledge-Based Systems. Manuscript Number: KNOSYS-S-21-01649. Submitted in May 2021; under review.

ACKNOWLEDGEMENTS
During my years as a PhD student I have had the great pleasure of researching, innovating,
meeting, travelling, discussing and working with many interesting people. For this I am very
blessed and grateful to have had such an opportunity. First of all, I would like to thank my
supervisors Dr. Arun Dev, Dr. Rosemary Norman and Dr. Wai Lok Woo for their extensive
support and guidance during my study. They have provided support throughout the doctoral
studies to ensure that I continuously develop the ability to create and interpret new knowledge
through original research and systematic understanding of the existing body of knowledge that
is at the forefront of an academic field.

This research was funded by the Singapore Economic Development Board (EDB) and DNV
GL Singapore Pte Ltd. I express my immense gratitude to DNV GL Singapore Pte. Ltd for their invaluable confidential technical information and their monetary and emotional support during the PhD.

In addition, I would like to thank Dr. Shashi Bhushan Kumar and all my colleagues and friends from my doctoral scholastic career. Although no list could ever be complete, it is my sincere pleasure
to acknowledge many friends and colleagues who provided encouragement, knowledge and
constructive criticism, and with whom I shared many enjoyable discussions. I would also like
to thank my parents, Mr. Shankar and Mrs. Malliga, for giving me the strength and courage to
undertake this journey and strive for continuous development.

Last but not least, I am incredibly grateful to my wife Saranya and my sons (Dave Nick and Aadhvick Ian), who all stood beside me. They have always been there for me, showing interest in what I do and supporting me with their patience and love. Without their support, this
work would not have been possible.

Through this journey to Doctor of Philosophy, I was fascinated to see how a simple idea could
become more significant, more important, more extraordinary and more authoritative if
determination and focus are persistent.

TABLE OF CONTENTS

1. Introduction ....................................................................................................................... 1

Background ................................................................................................................ 2

Description of Dynamic Positioning System ............................................................. 3

The Problem Statements............................................................................................. 4

Dynamic Positioning – Reliability Index (DP-RI) ..................................................... 7

Research Aims and Objectives ................................................................................... 8

Research Motivation .................................................................................................. 9

Research Contribution .............................................................................................. 10

Thesis Outline .......................................................................................................... 11

2. Literature Review............................................................................................................ 13

2.1 Introduction .............................................................................................................. 13

2.2 DP Vessel Complexity and Traditional Reliability calculation ............................... 13

2.3 DP Incidents and Related Impacts ........................................................................... 16

2.4 Dynamic Positioning System Operator Roles .......................................................... 19

2.5 DP Requirement and Assurance Framework ........................................................... 20

2.6 Classification Society Standards Governing the DP System ................................... 23

2.7 Reliability Assessments............................................................................................ 25

2.8 Reliability Estimation............................................................................................... 27

2.8.1 Failure Mode Effects Analysis (FMEA) ........................................................... 28

2.8.2 Failure Mode Effects and Criticality Analysis (FMECA) ................................ 29

2.8.3 Hardware in the Loop (HIL) Testing ............................................................... 29

2.8.4 DP-Capability, Foot-Print and Consequence Analysis ..................................... 31

2.8.5 Site-Specific Risk Analysis (CAMO, TAM, ASOG, and WSOG)................... 33

2.8.6 Reliability Block Diagram (RBD) .................................................................... 34

2.8.7 Fault Tree Analysis (FTA) ................................................................................ 35

2.8.8 Markov Chain ................................................................................................... 36

2.8.9 Monte Carlo Simulation (Network Simulation) ............................................... 36

2.9 Reliability Prediction................................................................................................ 37

2.9.1 Offline Reliability Prediction............................................................................ 39

2.9.2 Real-Time Reliability Prediction ...................................................................... 39

2.10 Recurrent Neural Network Models....................................................................... 40

2.10.1 Multi-Layer Perceptron ..................................................................................... 42

2.10.2 Simple Recurrent Neural Network (SRNN) ..................................................... 43

2.10.3 Long Short Term Memory (LSTM).................................................................. 44

2.10.4 Gated Recurrent Unit (GRU) ............................................................................ 46

2.11 Prescriptive Analytics – NLP and BERT ............................................................. 47

2.12 Human Factors in DP operation ........................................................................... 47

2.13 Summary............................................................................................................... 49

3. DP System Data Sources – Data Collection, Storage and Processing ............................ 51

3.1 Introduction .............................................................................................................. 51

3.2 Big Data Concept for DP System............................................................................. 52

3.3 Stages in Big Data Analytics for the DP system ...................................................... 53

3.4 DP System Data Sources .......................................................................................... 54

3.4.1 Structured Data ................................................................................................. 56

3.4.2 Semi-Structured Data ........................................................................................ 57

3.4.3 Unstructured Data ............................................................................................. 58

3.4.4 Conversion of Semi / Unstructured Data into Structured Data using NLP ...... 60

3.5 DP System Offline Data Lake – Information Management System (IMS) ............. 62

3.5.1 FMEA Database ................................................................................................ 64

3.5.2 HIL and Digital Twin Database ........................................................................ 64

3.5.3 DP-CAP Database............................................................................................. 66

3.5.4 IMCA Database ................................................................................................ 66

3.5.5 WOAD Database .............................................................................................. 66

3.5.6 OREDA Database ............................................................................................. 67

3.5.7 DP Vendor Equipment MTBF Database .......................................................... 67

3.6 DP System Real-Time Data Lake – Information Management System (IMS) ........ 68

3.7 Tools and Experimental Set-Up for the DP System Big Data Analytics ................. 69

3.8 Need for the Implementation of Big Data Analytics................................................ 72

3.9 Summary .................................................................................................................. 72

4. Classification of DP Sub-Systems: Descriptive and Diagnostic Analytics .................... 74

4.1 Introduction .............................................................................................................. 74

4.2 Descriptive and Diagnostic Analytics ...................................................................... 75

4.2.1 Data Exploration and Preparation ..................................................................... 75

4.2.2 Correlation and Interdependencies ................................................................... 79

4.3 Data Dictionary and Critical Attributes at the Sub-System Level ........................... 82

4.4 Reference System (A1) ............................................................................................ 83

4.4.1 System Description ........................................................................................... 83

4.4.2 Data Dictionary for Reference System ............................................................. 85

4.5 DP Control System (A2) .......................................................................................... 86

4.5.1 System Description ........................................................................................... 86

4.5.2 Data Dictionary for DP Control System ........................................................... 88

4.6 Thruster / Propulsion System (A3)........................................................................... 89

4.6.1 System Description ........................................................................................... 89

4.6.2 Data Dictionary for Thruster / Propulsion System ........................................... 91

4.7 Power System (A4) .................................................................................................. 92

4.7.1 System Description ........................................................................................... 92

4.7.2 Data Dictionary for Power System ................................................................... 94

4.8 Electrical System (A5) ............................................................................................. 95

4.8.1 System Description ........................................................................................... 95

4.8.2 Data Dictionary for Electrical System .............................................................. 97

4.9 Environmental System (A6) ..................................................................................... 99

4.9.1 System Description ........................................................................................... 99

4.9.2 Data Dictionary for Environment System....................................................... 101

4.10 Human / Operator Error (A7) ............................................................................. 101

4.10.1 System Description ......................................................................................... 101

4.10.2 Data Dictionary for Human / Operator Error .................................................. 104

4.11 Summary............................................................................................................. 105

5. Weight Assignment of DP Sub-Systems: Analytic Hierarchy Process ........................ 106

5.1 Introduction ............................................................................................................ 106

5.2 Analytic Hierarchy Process as MCDM .................................................................. 106

5.3 Methodology .......................................................................................................... 107

5.4 Application and Advantages of AHP ..................................................................... 109

5.5 AHP Framework for Weighting Distribution......................................................... 110

5.6 DP Sub-System Weighting Distribution using AHP.............................................. 112

5.6.1 Data Collection / Survey Method ................................................................... 113

5.6.2 Define System and Sub-Systems .................................................................... 114

5.6.3 Decision Hierarchy Structure Model .............................................................. 114

5.6.4 Determination of Priorities and Assign Saaty Ratings ................................... 116

5.6.5 Pairwise Comparison Matrix for Sub-System ................................................ 120

5.6.6 Calculate Weighting Distribution among DP sub-systems ............................. 121

5.6.7 Verification of Reliability of Weighting Distribution .................................... 123

5.7 Validation of Weighting through LSTM Algorithm .............................................. 126

5.8 Summary ................................................................................................................ 132

6. Dynamic Positioning – Reliability Index (DP-RI): Predictive Analytics ..................... 133

6.1 Introduction ............................................................................................................ 133

6.2 DP-RI Concept ....................................................................................................... 133

6.2.1 DP-RI Research Framework ........................................................................... 135

6.2.2 Generalised DP-RI Formulation with the Weighting of Sub-Systems ........... 137

6.3 Methodology for Mathematical DP-RI computation ............................................. 137

6.3.1 Sub-System / Component Architecture .......................................................... 138

6.3.2 DP Vessel and Components Modes of Operation........................................... 138

6.3.3 Sub-System / Component Voting Configuration ............................................ 139

6.3.4 Overall System and Sub-System Requirement Definition.............................. 139

6.4 Mathematical Computation of DP-RI – Failure data ............................................. 141

6.4.1 Sub-System Components Voting Configuration ............................................ 142

6.4.2 Sub-System Reliability Computation ............................................................. 144

6.4.3 DP-RI Computation ........................................................................................ 147

6.5 Predictive Analytics – Prediction of DP-RI through RNN – Field Data ............... 148

6.5.1 Recurrent Neural Network Models for Prediction .......................................... 149

6.5.2 Machine Specification and Programming Language ...................................... 149

6.5.3 Datasets - Data Description and Processing ................................................... 150

6.5.4 Training and Testing Dataset Split ................................................................. 151

6.5.5 Learning Curve (Loss and Accuracy) Evaluation ........................................... 152

6.6 Bias (Fairness) in RNN and Bias Types................................................................. 156

6.7 Bias Mitigation Framework ................................................................................... 157

6.7.1 Sample Fairness .............................................................................................. 159

6.7.2 Label Fairness ................................................................................................. 160

6.7.3 Model Fairness ................................................................................................ 160

6.7.4 Observer Fairness............................................................................................ 161

6.7.5 Measurement Fairness .................................................................................... 161

6.8 Performance Results............................................................................................... 161

6.8.1 Model Comparison Using Learning Curves ................................................... 162

6.8.2 Evaluation Metrics .......................................................................................... 167

6.9 Hyperparameters for RNN Model Comparison. .................................................... 170

6.9.1 Model Hyperparameters.................................................................................. 171

6.9.2 Algorithm Hyperparameters ........................................................................... 172

6.9.3 Hyperparameter Optimisation ......................................................................... 173

6.10 Offline DP-RI prediction .................................................................................... 176

6.11 Real-Time DP-RI Forecasting ............................................................................ 177

6.12 Summary............................................................................................................. 178

7. Prescriptive Analytics for Resilience during DP Failure Incidents .............................. 179

7.1 Introduction ............................................................................................................ 179

7.2 Research Framework .............................................................................................. 179

7.3 Experiment Set-up .................................................................................................. 181

7.4 Data-Sets ................................................................................................................ 181

7.5 Prescriptive Analytics – Possible Suggestive Solutions ........................................ 187

7.5.1 Natural Language Processing ......................................................................... 187

7.5.2 Transformers ................................................................................................... 188

7.5.3 Bidirectional Encoder Representations from Transformers (BERT )............. 189

7.5.4 BERT Model as Question and Answer System for DP-RI ............................. 195

7.5.5 Experiment ...................................................................................................... 199

7.5.6 Performance Evaluation – Testing and Results discussion............................. 203

7.6 Summary ................................................................................................................ 209

8. Verification and Validation of Predictive and Prescriptive Analytics Results ............. 210

8.1 Introduction ............................................................................................................ 210

8.2 DP-RI Tool Architecture and Network Topology.................................................. 210

8.2.1 Data Management ........................................................................................... 212

8.2.2 Security and Integrity...................................................................................... 212

8.3 Programming Language ......................................................................................... 213

8.4 Machine Specification ............................................................................................ 214

8.4.1 Predictive Analytics ........................................................................................ 214

8.4.2 Prescriptive Analytics ..................................................................................... 214

8.5 Research Assumptions and Boundaries ................................................................. 215

8.5.1 Assumptions.................................................................................................... 215

8.5.2 Boundaries ...................................................................................................... 217

8.6 Experimental Set-Up: Semi-Submersible Drilling Rig – DP 3 (DP AUTRO) ...... 217

8.7 Verification and Validation - Actual Case Studies from IMCA database ............ 219

8.8 Verification and Validation - Hypothetical Case Studies from FMEA and Test
Procedures ......................................................................................................................... 231

8.9 Summary of Verification and Validation – Test Results ....................................... 242

9. Conclusions and Recommendations ............................................................................. 244

9.1 Summary of Research ............................................................................................ 244

9.2 Achievements of the Research and Innovative Work ............................................ 245

9.3 Future Research Work............................................................................................ 246

APPENDIX I: QUESTIONNAIRE TEMPLATE ............................................................ 247

APPENDIX II: VERIFICATION AND VALIDATION FOR DP-RI TOOL.................. 248

Case Study No. 1........................................................................................................... 249

LIST OF FIGURES

Figure 1-1 Complete overview of Dynamic Positioning System ............................................. 3


Figure 1-2 Traditional Reliability Assessment – Gap Analysis................................................ 5
Figure 1-3 Timeline definition from fault initiation to DP incident .......................... 6
Figure 2-1 Dynamics of the vessel with six DOF ................................................................... 13
Figure 2-2 DP system architecture block diagram .................................................................. 15
Figure 2-3 DP incidents reported from 1993-2019 and distribution across sub-systems ....... 17
Figure 2-4 HIL simulation setup for testing the Vessel’s DP system [56] ............................. 30
Figure 2-5 Typical DP capability plot for DP vessel .............................................................. 32
Figure 2-6 Typical Reliability Block Diagram Architecture .................................................. 35
Figure 2-7 Time Sequence in RNN ........................................................................................ 41
Figure 2-8 MLP Network Structure ........................................................................................ 42
Figure 2-9 SRNN Network Structure ..................................................................................... 43
Figure 2-10 LSTM Network Structure ................................................................................... 44
Figure 2-11 GRU Network Structure...................................................................................... 46
Figure 3-1 Stages of Big data Analytics for the DP system ................................................... 53
Figure 3-2 DP System – Data Types....................................................................................... 55
Figure 3-3 Ontology-based NLP – Conversion of semi/unstructured data............................. 61
Figure 3-4 Database collected during DP lifecycle ................................................................ 62
Figure 3-5 IMS – DP system Database Architecture .............................................................. 63
Figure 3-6 HIL database – Key Findings................................................................................ 65
Figure 3-7 Real-Time data transfer from the vessel to shore for Analytics............................ 68
Figure 3-8 HADOOP Ecosystem architecture with tools and applications ............................ 70
Figure 4-1 Data Exploration to identify the DP sub-systems ................................................. 76
Figure 4-2 Scatter Diagrams – Correlation between variables ............................................... 80
Figure 4-3 Correlation Matrix – DP sub-systems and DP-RI ................................................. 81
Figure 4-4 DP System classification – descriptive and diagnostic analytics .......................... 82
Figure 4-5 Reference System A1 – Input Variables / Signals ................................................ 84
Figure 4-6 DP Control System A2 – Input Variables / Signals .............................................. 87
Figure 4-7 Thruster / Propulsion System A3 – Input Variables / Signals .............................. 90
Figure 4-8 Power System A4 – Input Variables / Signals ...................................................... 93
Figure 4-9 Electrical System A5 – Input Variables / Signals ................................................. 96
Figure 4-10 Environment System A6 – Input Variables / Signal ......................................... 100
Figure 4-11 Human / Operator Error A7 – Input Variables / Signals................................... 103
Figure 5-1 Framework of AHP for weight assignment to DP sub-systems.......................... 111
Figure 5-2 Data Collection and Survey on DP sub-systems ................................................. 113
Figure 5-3 Hierarchy of DP sub-systems .............................................................................. 115
Figure 5-4 Decision Hierarchy Structure Model .................................................................. 115
Figure 5-5 Ranking of DP sub-system from industry experts .............................................. 116
Figure 5-6 Industry experts composition (Organisation group) ........................................... 118
Figure 5-7 Industry experts composition (Discipline) .......................................................... 119
Figure 5-8 Structure of neuron with time-step ...................................................................... 126
Figure 5-9 LSTM network arrangement with Hidden Layer for backpropagation .............. 128
Figure 6-1 Overall Architecture of DP-RI concept............................................................... 134
Figure 6-2 DP-RI research framework for the Prescriptive Analytics ................................. 136
Figure 6-3 Typical RBD arrangement in series & parallel configuration ............................ 137
Figure 6-4 1oo1 voting configuration ................................................................................... 142
Figure 6-5 1oo2 voting configuration ................................................................................... 142
Figure 6-6 1oo3 voting configuration ................................................................................... 143
Figure 6-7 2oo2 voting configuration ................................................................................... 143
Figure 6-8 2oo3 voting configuration ................................................................................... 144
Figure 6-9 Datasets Split ratio .............................................................................................. 152
Figure 6-10 Underfitting Loss Curve.................................................................................... 153
Figure 6-11 Overfitting Loss Curve...................................................................................... 154
Figure 6-12 Good fit Loss Curve .......................................................................................... 155
Figure 6-13 Unrepresentative Training dataset Loss Curve ................................................. 155
Figure 6-14 Unrepresentative Validation dataset Loss Curve .............................................. 156
Figure 6-15 Addressing Bias for DP-RI RNN algorithm ..................................................... 159
Figure 6-16 Training and Validation Learning Curves (Loss) – MLP Model ...................... 163
Figure 6-17 Training and Validation Learning Curves (Accuracy) – MLP Model .............. 163
Figure 6-18 Training and Validation Learning Curves (Loss) – SRNN Model ................... 164
Figure 6-19 Training and Validation Learning Curves (Accuracy) – SRNN Model ........... 164

Figure 6-20 Training and Validation Learning Curves (Loss) – GRU Model ..................... 165
Figure 6-21 Training and Validation Learning Curves (Accuracy) – GRU Model .............. 165
Figure 6-22 Training and Validation Learning Curves (Loss) – LSTM Model .................. 166
Figure 6-23 Training and Validation Learning Curves (Accuracy) –LSTM Model ............ 166
Figure 6-24 RMSE curve of different RNN models with optimised hyperparameters ........ 169
Figure 6-25 MAE curve of different RNN models with optimised hyperparameters .......... 169
Figure 6-26 Table view of HParams in Tensorboard............................................................ 175
Figure 7-1 Research Framework of Prescriptive Analytics integrated with DP-RI ............. 180
Figure 7-2 The Transformer - model architecture [226] ....................................................... 188
Figure 7-3 BERT Model architecture – Encoder Segmentation ........................................... 190
Figure 7-4 BERT_BASE and BERT_LARGE (Encoder stacking) ................................. 191
Figure 7-5 Scaled dot product – Encoder in BERT .............................................................. 192
Figure 7-6 Multi-head attention ............................................................................................ 193
Figure 7-7 Input/Output representation for BERT model..................................................... 194
Figure 7-8 BERT model Input / Output with word and context matrix................................ 194
Figure 7-9 BERT pre-training phases ................................................................................... 196
Figure 7-10 Fine-Tuning BERT for QA system ................................................................... 198
Figure 7-11 Rating of possible suggestive solutions ............................................................ 199
Figure 8-1 DP-RI System Topology ..................................................................................... 211
Figure 8-2 TPU architecture for BERT model fine-tuning [176] ......................................... 215
Figure 8-3 Semi-Submersible Drilling Rig Configuration for DP-RI experiment ............... 218
Figure 8-4 Flowchart – Experiments on case studies from IMCA database ........................ 219
Figure 8-5 Semi-Submersible Drilling Rig Configuration for DP-RI experiment ............... 231

LIST OF TABLES

Table 1-1 DP Sub-Systems failure rate acting as main / primary cause for DP incidents ........ 2
Table 2-1 IMCA DP incidents definition................................................................. 16
Table 2-2 Different ships with possible worst-case scenario effects ...................................... 19
Table 2-3 DP Equipment Class definition by IMO ................................................................ 21
Table 2-4 Recommended minimum DP class for different applications ................................ 22
Table 2-5 Classification Societies DP notation for IMO DP equipment Class ...................... 24
Table 3-1 Big Data Characteristics for DP system data ......................................................... 52
Table 3-2 DP System – Big Data Sources description ........................................................... 59
Table 3-3 Experimental set-ups on local premises to handle DP system Big Data ................ 71
Table 4-1 Data Dictionary for the DP system......................................................................... 78
Table 4-2 Reference System (A1)- Data Dictionary............................................................... 85
Table 4-3 DP Control System (A2)- Data Dictionary ............................................................ 88
Table 4-4 Thruster / Propulsion System (A3)- Data Dictionary ............................................. 91
Table 4-5 Power System (A4)- Data Dictionary..................................................................... 94
Table 4-6 Electrical System (A5)- Data Dictionary .............................................................. 97
Table 4-7 Environment System (A6)- Data Dictionary ....................................................... 101
Table 4-8 Human / Operator Error (A7)- Data Dictionary ................................................... 104
Table 5-1 Pairwise comparison assessment table for “Scale of relative importance.” ......... 117
Table 5-2 Sub-System Priority assignment - Likert Scale Rating ........................................ 118
Table 5-3 Pair-Wise Comparison Matrix Template for 7x7 matrix with seven sub-systems ..... 120
Table 5-4 Pair-Wise Comparison Matrix for DP sub-systems ............................................. 121
Table 5-5 DP Sub-System Weighting Distribution and Measurement of consistency ......... 122
Table 5-6 Random Index (RI) for AHP ................................................................................ 125
Table 5-7 DP sub-system weighting distribution using AHP ............................................... 125
Table 5-8 DP sub-system weighting distribution using LSTM ............................................ 131
Table 6-1 Split-ratio between Training / Validation and Testing datasets ......................... 152
Table 6-2 Learning Curve Table – MLP............................................................................. 163
Table 6-3 Learning Curve Table - SRNN ........................................................................... 164
Table 6-4 Learning Curve Table – GRU ............................................................................ 165
Table 6-5 Learning Curve Table – LSTM .......................................................................... 166
Table 6-6 Optimised Hyperparameter for RNN models ..................................................... 168
Table 6-7 RMSE for different RNN Models ...................................................................... 168
Table 6-8 MAE for different RNN Models ........................................................................ 168
Table 6-9 Keras-Tuner Hyperparameter tuned for RNN models ...................................... 176
Table 6-10 DP sub-system weighting for optimised hyperparameters ................................. 176
Table 7-1 DP-RI datasets and distinct features for prescriptive analytics ............................ 182
Table 7-2 Sample DP-RI datasets ......................................................................................... 184
Table 7-3 BERT training time and DP-RI datasets fine-tuning time.................................... 198
Table 7-4 Hyperparameter optimisation (Hyperparameters used for fine-tuning) ............... 202
Table 7-5 Model Performance Evaluation ............................................................................ 204
Table 7-6 Answer evaluation for DP alarms and Question (F1 score analysis) ................... 205
Table 7-7 Rating of solutions suggested (EM results analysis) ............................................ 207
Table 8-1 GCP GPU specification for predictive analytics .................................................. 214
Table 8-2 DP sub-system weighting for optimised hyperparameters ................................... 214
Table 8-3 Case Study 1 - Actual and Experimental results summary .................................. 221
Table 8-4 Case Study 2 - Actual and Experimental results summary .................................. 224
Table 8-5 Case Study 3 - Actual and Experimental results summary .................................. 226
Table 8-6 Case Study 4 - Actual and Experimental results summary .................................. 229
Table 8-7 Case Study 5 - Hypothetical Experiment results summary .................................. 233
Table 8-8 Case Study 6 - Hypothetical experiment results summary ................................... 236
Table 8-9 Case Study 7- Hypothetical experiment results summary ................................... 238
Table 8-10 Case Study 8 - Hypothetical experiment results summary ................................. 241
Table 8-11 Summary of results – DP-RI tool ....................................................................... 243

LIST OF SYMBOLS
β : Fraction of undetected failures that have a common cause
β_D : Fraction of the failures detected by the diagnostic tests that have a common cause
τ : Vector of control inputs
ŧ : Test interval
σ : Sigmoid activation function
λ : Failure rate of a component/subsystem
λ_max : Maximum eigenvalue
λ_DU : Dangerous Undetected failure rate of a channel in a subsystem
λ_DD : Dangerous Detected failure rate of a channel in a subsystem
⊗ : Pointwise multiplication
⊕ : Pointwise addition
⊚ σ_L(z^L) : Hadamard product
∂C/∂W_jk^L : Gradient of the cost function
B^h : {b_1^h, b_2^h, b_3^h, ..., b_q^h}, the bias vector of the hidden layer
b^o : Bias of the output layer
b_f, b_i, b_o and b_c : Biases
C(v) : Coriolis-centripetal matrix (including added mass)
C̃_t : New candidate values
C_(t-1) : Old candidate state
D(v) : Damping matrix
f_t : Forget gate
g_o : Vector used for pre-trimming (ballast control)
H̃_t : New remember gate
K : Keys matrix
M : System inertia matrix (including added mass)
MRT : Mean Repair Time
MTTR : Mean Time To Restoration
MTTF : Mean Time To Failure
m : Number of components in the sub-system
n : Number of sub-systems in the DP system
o_t : Output gate
PFD_system : Average probability of failure on demand for the DP system
PFD_sub-system : Average probability of failure on demand for a DP sub-system
PFD_comp1 : Average probability of failure on demand for components
Q : Queries matrix
R(t) : System reliability represented by the exponential law
RI(t) : Survivor function, or reliability of the DP system (DP-RI)
S_t : Tanh layer limiting values to between -1 and 1
T_1 : Proof test interval
T_2 : Interval between demands
t_CE : Channel equivalent mean downtime
t_GE : System equivalent downtime
U^f, U^i, U^o and U^c : Input weights
V : Values matrix
W^I : {w_1^I, w_2^I, w_3^I, ..., w_q^I}, the input weights matrix
W^O : {w_1^O, w_2^O, w_3^O, ..., w_q^O}, the output weights matrix
W^f, W^i, W^o and W^c : Recurrent weights
W_n : Weight or priority vector
w : Vector of environmental disturbances (wind, waves and currents)
Y_j : Actual values
Ŷ_j : Predicted values
Z_t : New gate created to replace the remember and forget gates
ABBREVIATIONS
ABS American Bureau of Shipping
AD Adversarial Debiasing
ASOG Activity Specific Operating Guidelines
Adadelta Adaptive Learning Rate Method
Adam Adaptive Moment Estimation
AHP Analytic Hierarchy Process
AI Artificial Intelligence
AMS Asset Management System
ANN Artificial Neural Network
AO Analog Output
API Application-Programming Interface
ASICs Application-Specific Integrated Circuits
AWS Amazon Web Services
BERT Bidirectional Encoder Representations from Transformers
BOP Blow-Out-Preventer
BV Bureau Veritas
CAMO Critical Activity Mode
CAT Customer Acceptance Test
CEOP Calibrated Equalized Odds Postprocessing
CCF Common Cause Failure
CCS China Classification Society
CCSM Crane Control System
CCTV Closed-Circuit Television
CR Consistency Ratio
DCS Drilling Control System
DGPS Differential Global Positioning System
DGNSS Differential Global Navigation Satellite System
DI Digital Input
DIR Disparate Impact Remover

xxii
DNV GL Det Norske Veritas Germanischer Lloyd
DO Digital Output
DOF Degrees of Freedom
DP Dynamic Positioning
DP 1 Dynamic Positioning Class 1
DP 2 Dynamic Positioning Class 2
DP 3 Dynamic Positioning Class 3
DP-CAP Dynamic Positioning – Capability Plot
DP-RI Dynamic Positioning -Reliability Index
DPO Dynamic Positioning Operator
DP vessel Dynamic Positioning Vessel
E/E/PE Electrical / Electronic / Programmable Electronic
EOP Equalized Odds Postprocessing
EPC Engineering, Procurement and Construction
EUC Equipment Under Control
FAT Factory Acceptance Test
FGS Fire and Gas System
FMEA Failure Mode Effects and Analysis
FMECA Failure Mode Effects and Criticality Analysis
FS Field Station
FSVAD Flag State Verification and Acceptance Document
FTA Fault Tree Analysis
FTRL Follow the Regularized Leader
GCE Google Cloud Engine
GCP Google Cloud Platform
GPS Global Positioning System
GPU Graphical Processing Unit
GRU Gated Recurrent Unit
GUI Graphical User Interface
HIL Hardware In-the-Loop
HPR Hydroacoustic Position Reference

xxiii
HRA Human Reliability Analysis
HSE Health and Safety Executive
I/O Input/Output
IaaS Infrastructure as a Service
IACS International Association of Classification Societies
IEEE Institute of Electrical and Electronics Engineers
IMCA International Marine Contractors Association
IMO International Maritime Organization
IMS Information Management System
IoT Internet of Things
IRS Indian Registry of Shipping
IT Information Technology
JSON JavaScript Object Notation
KDD Knowledge Discovery in the Database
KM Knowledge Management
KR Korean Register
LC Learning Curve
LR Lloyd's Register
LSTM Long Short Term Memory
MAE Mean Absolute Error
MASS Marine Autonomous Surface Ship
MCDM Multi-Criteria Decision Making
ML Machine Learning
MLP Multi-Layer Perceptron
MRU Motion Reference Unit
MT Marine Technologies
NDU Network Distribution Unit
NK Nippon Kaiji Kyokai
NLP Natural Language Processing
NMA Norwegian Maritime Authority

xxiv
NOPSEMA National Offshore Petroleum Safety and Environmental Management
Authority
OPP Optimized Pre-processing
OREDA Offshore and Onshore Reliability Data
OS Operator Station
OSV Offshore Support Vessels
OT Operational Technology
PFD Probability of Failure on Demand
PLS Partial Least Squares
PMS Power Management System
PRR Prejudice Remover Regularizer
PSA Petroleum Safety Authority
PWS Potable Water System
QA Question Answering
RBD Reliability Block Diagram
RCA Redundancy and Criticality Analysers
RCU Remote Controller Unit
RDBMS Relational Database Management System
RINA Registro Italiano Navale
RMSE Root Mean Square Error
RNN Recurrent Neural Network
RS Russian Maritime Register of Shipping
RSF Rich Subgroup Fairness
RW Re-Weighing
SAP Systems Applications and Products
SBE Sequential Backward Elimination
SDP State-Dependent Parameters
SFS Sequential Forward Selection
SGD Stochastic Gradient Descent
SIL Safety Integrity Level
SIMOPS Simultaneous Operations

xxv
SME Subject Matter Expert
SoC System on Chip
SQuAD Stanford Question Answering Dataset
SR Sampling Rate
SRNN Simple Recurrent Neural Network
STCW Standards of Training, Certification, and Watchkeeping
TAM Task Appropriate Mode
TF-IDF Term Frequency-Inverse Document Frequency
TPU Tensor Processing Units
TSP Time Series Prediction
UPS Uninterruptible Power Supply
VCS Valve Control System
VDR Voyage Data Recorder
VFD Variable Frequency Drives
WCF Worst-Case Failure
WCFDI Worst Case Failure Design Intent
WCSF Worst-Case Single Failure
WOAD World Offshore Accident Databank
WSOG Well Specific Operating Guidelines
XML Extensible Mark-up Language
YARN Yet Another Resource Negotiator

Dynamic Positioning System: The equipment, and hence the ship, knows where it is at all
times. It knows this because it knows where it isn't. By subtracting where it is from where it
isn't (or where it isn't from where it is, depending on which is the greater), it obtains a
difference or deviation. The DP system uses deviation to generate corrective commands to
steer the ship from a position where it is to a position where it isn't. The ship arrives at the
position where it wasn't; consequently, the position where it was is now the position where it
wasn't. In the event that the position where it is now is not the same as the position where it
originally wasn't, the ship will acquire a variation. The variation is the difference between
where the ship is and where the ship wasn't. If the variation is considered to be a significant
factor, it too may be corrected by the DP. The ship must not know where it was. The "thought process" of the DP is as follows: because the variation has modified some of the
navigation information, it is not sure where it is. However, it is sure where it isn't and knows
where it was. It now subtracts where it should be from where it wasn't (or vice versa), and by
differentiating this from the algebraic difference from where it shouldn't be and where it was,
it is now able to obtain the difference between its variation, the difference is called the
error.........
(Courtesy : Kongsberg)

1. Introduction
In today’s world of challenging environments where offshore and marine vessels operate with
high precision within metres of each other and with no options for mooring, the need for safe
and reliable positioning is particularly important to prevent injury to people and damage to
property. In this high-pressure and intense environment, the potential for risk and the probability of accidents have increased. The consequences of loss of position can be severe, so
the need for reliable and safer systems is more significant than ever [1, 2]. The Dynamic
Positioning (DP) system is the only viable option for safe and reliable vessel position control
[2, 3]. The Dynamic Positioning system has been under development since 1961, and as the oil and gas industry moves into ever deeper waters, operating vessels without Dynamic Positioning is not feasible; an ultra-deepwater development would be impossible without it. Today virtually all offshore operations depend in one way or another on Dynamic Positioning systems. For vessels with DP systems, the most critical
safety incidents are loss of position and/or heading [4, 5, 3, 6]. Therefore, it is necessary to
design DP systems to be fault tolerant and fault resistant, i.e. to be more reliable [1]. This is to ensure that a complex vessel operating in a harsh environment can be safer, more reliable and more efficient.

With technological evolution, the Dynamic Positioning system diversified its application to
various types of vessel including semi-submersible drilling rigs, drill-ships, self-propelling
jack-up drilling rigs, Floating Production Storage and Offloading (FPSO) units, Offshore Support
Vessels (OSV), pipe and cable-laying vessels, rock dumping vessels, shuttle tankers, crane and
heavy lifting vessels, minesweeping vessels, dredging vessels, cruise ships, floating
accommodation vessels/rigs and yachts [4]. DP is no longer just about positioning, but about
following a predefined track or staying within a defined area. Loss of position / heading
indicates that the vessel is not able to stay in the pre-defined stationary position or path. The
loss of position may be due to a drive-off, a drift-off or a large excursion [7]. The reliability of DP systems plays a very important role in decisions on complex offshore marine operations. The reliability directly depends on various aspects such as equipment selection,
design, architecture, functionality, integration, verification, commissioning, operation,
maintenance, codes, standards, rules and regulations [5].

Background

The DP System of a vessel involves complex interactions between a large number of sub-
systems. Each sub-system plays a unique role in the continuous overall DP function for safe
and reliable operation of the vessel [4]. The level of complexity and sophistication of DP
systems has developed significantly over the last few decades. Recent technological
advancement has enabled maritime vessels to operate safely and, at the same time, increased
the level of autonomy and complexity. During regular operation, the main activities for the
Dynamic Positioning Operator (DPO) are to monitor the DP system status and perform
adjustment for the required positioning and heading. However, during complex operations or
failures in one of the sub-systems, the DPO is forced into a critical situation and has to react
within a limited time to fix the problem and prevent an accident. The primary causes of DP incidents, as reported by the International Marine Contractors Association (IMCA) from 1994 to 2019, are presented in Table 1-1, which indicates that Human / Operator Error is one of the most significant contributors to DP-related incidents in the Offshore, Marine, Oil and Gas industry [5].

Table 1-1 DP Sub-Systems failure rate acting as main / primary cause for DP incidents

Sub-Systems              1994-2003   2005-2013   2014-2019
DP Control System            53          91         104
Power System                 37          83          81
Electrical                   18          49          14
Propulsion/Thrusters         65         104         194
Reference System             90         126          84
Environment                  33          34          23
Human / Operator Error       75         125          99

This trend, showing a continuous increase in Human / Operator Error, is a big concern for the marine and offshore industries, as the DPO plays a crucial role in the loop in handling any such situation to prevent an accident. In addition, the IMCA accident database shows that Human / Operator Error is the most significant contributor to the secondary causes of DP incidents. This proves that there is a need for a support system to aid the DPO in preventing DP incidents when there is a failure in the sub-systems.
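
As a simple illustration, the Python sketch below (not part of the thesis; the variable names are illustrative) tabulates the counts from Table 1-1 and prints each sub-system's total and its share of all reported primary causes, which makes the dominance of thruster, reference-system and human-error failures immediately visible.

```python
# Illustrative only: aggregate the IMCA primary-cause counts from Table 1-1.
# The three columns are the reporting periods 1994-2003, 2005-2013, 2014-2019.
incidents = {
    "DP Control System":      [53, 91, 104],
    "Power System":           [37, 83, 81],
    "Electrical":             [18, 49, 14],
    "Propulsion/Thrusters":   [65, 104, 194],
    "Reference System":       [90, 126, 84],
    "Environment":            [33, 34, 23],
    "Human / Operator Error": [75, 125, 99],
}

grand_total = sum(sum(counts) for counts in incidents.values())
# Print sub-systems sorted by total count, with their share of all incidents.
for name, counts in sorted(incidents.items(), key=lambda kv: -sum(kv[1])):
    print(f"{name:<24} total={sum(counts):>4}  share={sum(counts) / grand_total:6.1%}")
```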

Description of Dynamic Positioning System

Dynamic Positioning Vessel (DP vessel) means a unit or vessel which automatically maintains
its position and/or heading (fixed location, relative location or predetermined track) by means
of thruster force [8]. DP vessels can be divided into different classes based on the design of the
vessel to meet the industrial mission and its application. DP systems installed on a vessel are
designed in compliance with classification society rules, and they must be of robust design ensuring capability in:
• Preventing loss of position
• Preventing loss of redundancy
The term DP system means the complete installation necessary for dynamically positioning a
vessel comprising, but not limited to, the following sub-systems [8]:
• Power system
• Thruster system
• DP control system

In addition to the above system, there are other sub-systems which are integrated to perform
the desired function. The complete overview of an advanced DP system is shown in Figure 1-1.

Figure 1-1 Complete overview of Dynamic Positioning System

The design of a DP system is made robust by considering the application of seven attributes
below, in one form or another, to all sub-systems to enhance station keeping integrity.
• Independence
• Segregation
• Autonomy
• Fault tolerance
• Fault resistance
• Fault ride-through capability
• Differentiation

A DP vessel should have a sufficient level of station-keeping reliability. Reliability is a product
of the quality of the equipment in the sub-systems and of the vendors providing those sub-systems.
Its design and operation depend on the competence of the engineers who design and build the
vessel, the engineers involved in testing and commissioning, and the crew and management who
maintain and operate it.

1.3 The Problem Statements

Traditionally, the reliability of a DP system is assessed during the design stage by
methodologies such as Failure Mode Effects Analysis (FMEA), Proving Trials,
Hardware-In-the-Loop (HIL) testing, Site-Specific Risk Analysis and DP Capability Analysis,
and during operation in annual trials to verify functionality. All these methods are time-consuming,
involving a lot of human effort and notably, no analysis of previous accidents is indicated in
the reliability assessment. It imposes in-built uncertainty and risk in DP systems during
operation. It is evident that these risk assessments are insufficient, and factors considered for
the safety of station-keeping reliability are too narrow.

The significant phases of the DP life-cycle include design, construction, commissioning, sea-
trials and operation. In all of these phases, the system evolves, and changes are implemented
for safe, reliable and efficient operation. Historically all the above traditional reliability
assessment methods have been implemented in different phases for improving the design of
the system. However, they have not made a significant contribution in preventing or reducing
the number of accidents based on the DP incidents reports by IMCA over three decades.

Besides, these traditional reliability assessment methods have not demonstrated their ability to
provide clear information on faults and provide appropriate solutions during operation. The
analysis of existing reliability assessment methods revealed key drawbacks which are listed
below [1, 9]:
• Lack of use of knowledge and experience from previous incident/accident databases.
• Lack of visibility of interdependencies between sub-systems during failure scenarios.
• Lack of consideration of the relatively short reaction time available for an operator to
take corrective actions, leading to unsafe or delayed decision-making.
• Inadequate co-ordination between decision-makers.
• Lack of suggestions and preventive measures to avoid catastrophic failure arising from
trivial shortcomings.
• Lack of a simulation facility to evaluate a particular failure and its impact during complex
offshore marine operations.

Figure 1-2 shows the critical pitfalls of the conventional reliability techniques.

Figure 1-2 Traditional Reliability Assessment – Gap Analysis

The most critical problem in DP related accidents is the time required for the operator to
identify the cause of the failure and take remedial action to prevent the failure leading to an
accident. The critical point to consider is that preventing a failure from turning into an incident
or accident can be achieved in one of two ways: by stretching the time available to prevent the
incident, or by shortening the time needed to solve the problem. There are five phases involved
between fault initiation and final solution
implementation: fault detection, fault identification, generation of solution strategy, solution
implementation and system reaction [10]. The DP operator influences the time that is required
to cover three of the phases, that is fault identification, generation of a solution strategy and
solution implementation. The research will focus on developing an advisory support tool using
an advanced deep learning neural network algorithm for processing the data from the sensors
to forecast the reliability of the DP system to aid the operator in preventing DP incidents.
Figure 1-3 shows the time period from the fault initiation to the accident.

Figure 1-3 Timeline definition from fault initiation to DP incident

Therefore, it was necessary to select an intelligent deep-learning algorithm which can perform
real-time time-series prediction accurately and with a fast response time. DP is a complex
system operating in a highly dynamic environment. The reliability of a DP system requires the
components of the system to be selected optimally, and it factors in the reliability of those
components, which introduces new challenges for the existing approaches. To support self-
optimisation, the reliability prediction should be conducted in near real-time, which was not
possible in the past because of processing limitations; now, with technological growth, it
can be implemented to develop a state-of-the-art advisory tool to prevent offshore
accidents and incidents.

1.4 Dynamic Positioning – Reliability Index (DP-RI)

The DP-RI concept is proposed to aid an operator with quantitative and qualitative
representations of reliability of DP systems during complex marine operations. This concept
is not a replacement method or an alternative solution for the current reliability assessment; it
will enhance the existing reliability assessment results by combining them with a newly
developed database, actual field datasets, Artificial Intelligence and industry experts’
knowledge. In addition, the tool provides prescriptive suggestions to the operator in the case
of failure so that the operator can react quickly and implement the most suitable available
solution to prevent accidents. To simplify, the functions of DP-RI are listed below; an
illustrative sketch of the prescriptive question-answering idea follows the list:
• Offline-Forecasting of DP-RI for complex Marine Operation
• Real-Time Dynamic Reliability Analysis of DP-RI
• Prescriptive Analytics for Resilience of DP system during failure incidents
• The suggested solutions are prioritised based on a ranking system and presented to the
DPO to choose the most suitable solution based on the actual condition.
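
As a rough illustration of the prescriptive question-answering idea, the sketch below runs a generic pre-trained extractive QA model from the Hugging Face transformers library over a short status passage. The model name, question and context are illustrative assumptions, not the fine-tuned model or knowledge base developed in this thesis.

```python
# Illustrative only: generic extractive QA with a pre-trained BERT-style
# model; the model name, question and context are placeholder assumptions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

context = (
    "Gyro 2 has failed and its heading input is rejected. Gyro 1 and "
    "Gyro 3 remain healthy. DGNSS 1 shows high noise, while DGNSS 2 and "
    "the hydroacoustic position reference remain online and consistent."
)
result = qa(
    question="Which position references remain available to the DPO?",
    context=context,
)
print(result["answer"], result["score"])  # extracted answer span and confidence
```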

1.5 Research Aims and Objectives

This research aims to develop an intelligent advisory decision-making tool, DP-RI, for offline
and real-time prediction of the reliability of DP systems (Quantitative and Qualitative). This is
to provide the DP operator with complete control and a clear overview of a situation in the case
of failure during complex marine operations along with suggestive solutions to prevent DP
incidents.

The aim will be achieved by attaining the following objectives:


1. Creating a database for the existing traditional reliability assessment including DP
System Level FMEA, DP Vendor FMEA, HIL, Offshore and Onshore Reliability
Data (OREDA), IMCA Station Keeping Accident Analysis Reports, DP capability
plot and Site-Specific Operational Risk Analysis.
2. Systematically evaluating the database using Big Data Analytics for classification of
sub-systems and identifying the inter-dependencies between the DP sub-systems.
3. Developing Systematic Weight Assignment for DP sub-systems using Multi-Criteria
Evaluation Technique, Analytic Hierarchy Process (AHP) and validating with expert
judgement.
4. Developing a framework of DP-RI for calculating the DP system reliability through
mathematical modelling.
5. Offline Forecasting and Real-Time Prediction of DP-RI using Recurring Neural
Network (RNN) Long Short Term Memory (LSTM) for complex marine operation.
6. Implementing Natural Language Processing (NLP) techniques for the prescriptive
analytics using the most advanced models through fine-tuning and pre-training.
7. Evaluating optimised suggestive solutions to the DPO during failure through
Prescriptive Analytics using Bidirectional Encoder Representations from
Transformers (BERT) model as Question and Answering (Q&A) system to prevent
DP incidents.
8. Implementing a Softmax activation function to determine the relative probabilities
between the possible suggestions based on the criticality of the DP sub-systems (a
minimal sketch follows this list).
9. Validating the results of DP-RI tool with actual vessel data and existing traditional
reliability assessments.
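
The Softmax step in objective 8 is a standard normalisation; the sketch below shows it with invented criticality scores for three hypothetical suggestions.

```python
# Minimal sketch of the Softmax step in objective 8; the criticality
# scores are invented for illustration.
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    z = scores - scores.max()   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5])   # criticality scores of three suggestions
print(softmax(scores))               # relative probabilities, summing to 1
```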

1.6 Research Motivation

Over the years the regulatory authorities, classification societies and the industry have
recognised the urgency in improving the safety and reliability of DP complex operations due
to increasing numbers of DP incidents. Incident-free DP operation requires a complete
understanding of the risk associated with the complex operation and the result of the loss of
station keeping capability and the impacts of Design, Operation and process, by the DPO. As
the complexity and automation of DP systems increases, and more and more of the sub-systems
are interconnected and controlled by computers, DPO minds have become hard-pressed to cope
with, and understand, this enormous and dynamic complexity. It seems likely that human
oversight of many of these systems will not be possible in the timescale required to ensure safe
operation.

Artificial Intelligence (AI) and data-driven decisions based on Machine Learning (ML)
algorithms are making an impact on an increasing number of industries. AI has been used for
various applications within the maritime sector, such as condition monitoring and ship
navigation route optimisation. To ensure safe operation of DP vessels that might enter high
consequence scenarios with low probability, it is necessary to have a large amount of data for
proper decision making. However, as DP systems have matured and have been used in the
industry for more than 50 years, there are enormous volumes of data available. The research is
focused on developing a state-of-the-art advisory tool due to the following motivational aspects.

• Availability of an enormous amount of offline data related to DP systems
• Availability of real-time data from sensors of DP sub-systems
• Possibility to integrate securely to DP systems for extraction of data to provide holistic
information during complex marine operations.
• Acceptance of Artificial Intelligence and data-driven decisions for safe operation in the
maritime industry
• Need for a state-of-art advisory tool to provide possible solutions to the DPO in the
case of faults using a combination of knowledge-based and experience-based AI
models.

1.7 Research Contribution

The research study makes the following contribution:


• An intelligent state-of-the-art decision-making advisory (DP-RI) tool has been
developed using Artificial Intelligence and Big Data Analytics for DP system.
• The Information Management System (IMS) was created as a database to act as a
central repository. The IMS consists of FMEA reports DB, IMCA accident analysis
DB, HIL testing DB, DP-CAP DB, OREDA DB, DP vendor FMEA DB and DP
simulator DB along with the sensor data of DP sub-systems.
• Identification and classification of critical DP sub-systems using descriptive and
diagnostics analytics.
• An intuitive and conservative approach to determine the weighting between DP sub-
systems was proposed using AHP.
• A hybrid approach using a mathematical model and predictive analytics with LSTM to
determine system-level reliability for computing DP-RI was proposed.
• A novel approach in generating possible solutions using prescriptive analytics with
NLP BERT model as QA system was proposed to prevent DP incidents.
• The testing and case studies proved that the DP-RI tool could result in 80% faster
decision making and 90% reduction in DPO error.
• The ability to simulate the unknown scenarios reduced the volume of missing data from
90% to 2%.
• The decision making time reduced by 50% when evaluated against the traditional risk
assessment and operation procedures.
• Empirical results and validated test cases on complex DP operations have been
generated, providing the following:
o Safe and reliable operation
o Increased operational efficiency
o Prevention of DP incidents
o Training for DPOs
o Support for the decision-making process

1.8 Thesis Outline

The thesis is organised as follows, and the overview is represented in Figure 1-4:
Part 1: (The Context)
• Chapter 1 : Provides the Introduction, Background, Problem Statement, Objectives,
Motivation and Contribution of this thesis.
• Chapter 2 : Reviews related work in a systematic manner, checking the relevant
reliability assessment methods, applicable Neural Network model, existing methods
for Qualitative and Quantitative estimation of Reliability of DP systems.
• Chapter 3 : Presents the various databases for DP systems and how these data sources
can be used as knowledge base for application of Big Data concept.

Part 2: (Research Framework and System Model Analytics for DP-RI Tool)
• Chapter 4 : Presents the DP System – Classification of Sub-Systems through
Descriptive and Diagnostic Analytics
• Chapter 5 : Presents a Systematic weight assignment of DP Sub-Systems through
Analytics Hierarchy Process (AHP)
• Chapter 6 : Presents the Novel Research Framework for Dynamic Positioning –
Reliability Index (DP-RI) through predictive analytics using LSTM.
• Chapter 7 : Presents the Prescriptive Analytics for Resilience of DP System using
NLP BERT model as QA system to suggest possible solutions to the DPOs during
failure.

Part 3: (Validation and Verification of Effectiveness of DP-RI Tool)


• Chapter 8 : Details the Verification and Validation of Predictive and Prescriptive
analytics results performance of DP-RI tool through case studies involving real-life
incidents and hypothetical cases.
• Chapter 9 : Discusses the Conclusion, Achievements, Innovative works and
Recommendations for future work


Figure 1-4 Research Thesis Overview – Chapter Organisation


2. Literature Review
2.1 Introduction

This chapter begins by presenting the definition of Dynamic Positioning, details of DP
incidents, and their related impact. The next section offers a review of currently available
requirement and assurance frameworks, international standards, and reliability assessment
methods governing the design and operation of DP systems. The gap in the existing
methodologies is discussed briefly, and analysis presented assessing the role of human factors
in DP incidents. Subsequently, the chapter focuses on reliability prediction methods and RNN
model concepts. The various available RNN models and their architecture are discussed before
implementation in the research study. The links to the main objectives of the research and the
methodologies are outlined here and the details are covered in the respective chapters in the
later part of the thesis. Finally, the last section draws together a summary justifying the need
for a new state of the art decision-making tool to aid the DPO.

2.2 DP Vessel Complexity and Traditional Reliability calculation

Ships are among the most complex machines constructed. The demand for increasingly sophisticated
ships with integrated systems has added layers to this complexity. A vessel’s six Degrees of
Freedom (DOF), comprising the translatory motions surge, sway and heave and the angular motions
roll, pitch and yaw, are shown in Figure 2-1 [11].

Figure 2-1 Dynamics of the vessel with six DOF

A DP vessel experiences larger and more rapidly changing dynamics when operating in a
deep-water environment rather than in shallow water. The six DOF
equations of motion representing the kinetics and kinematics of a DP vessel are represented
as Equation (2-1) [6].
$M\dot{v} + C(v)v + D(v)v + g(\eta) = \tau + g_o + w$  (2-1)
Where
$M$ - system inertia matrix (including added mass)
$C(v)$ - Coriolis-centripetal matrix (including added mass)
$D(v)$ - damping matrix
$g(\eta)$ - vector of gravitational/buoyancy forces and moments
$\tau$ - vector of control inputs
$g_o$ - vector used for pre-trimming (ballast control)
$w$ - vector of environmental disturbances (wind, waves, and currents)
The dynamics of the vessel presented in Equation (2-1) also represent the physical properties
of the system, which are further used for control system design. These motions are measured
in terms of heading, pitch, roll and yaw. The signals are passed through a Kalman Filter to
provide filtered motion values to the DP control system for processing.
Besides, the environmental forces (wind, wave and current) play a critical role in the design
and operation of the DP vessel as they affect the overall performance of the ships. The
calculation of wind and current forces are relatively straight forward when compared to forces
produced by waves [12].
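
For intuition, Equation (2-1) can be integrated numerically. The sketch below is a deliberately simplified, linear 3-DOF (surge, sway, yaw) version with constant M and D, with C(v) and g(η) neglected and invented numerical values; it is not the vessel model used in this research.

```python
# Simplified forward-Euler integration of a linear 3-DOF version of
# Equation (2-1): M * v_dot + D * v = tau + w, with C and g neglected.
# All numeric values are invented for illustration.
import numpy as np

M = np.diag([1.2e7, 1.5e7, 8.0e9])   # inertia incl. added mass (illustrative)
D = np.diag([5.0e5, 8.0e5, 1.0e8])   # linear damping (illustrative)

def step(v, tau, w, dt=0.1):
    """One Euler step of M * v_dot = tau + w - D * v."""
    v_dot = np.linalg.solve(M, tau + w - D @ v)
    return v + dt * v_dot

v = np.zeros(3)                       # body-fixed velocities [surge, sway, yaw]
tau = np.array([2.0e5, 0.0, 0.0])     # control forces/moment from thrusters
w = np.array([1.0e4, 5.0e3, 0.0])     # environmental disturbance
for _ in range(600):                  # simulate 60 s
    v = step(v, tau, w)
print(v)                              # velocities approaching steady state
```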

To maintain a stationary position, the thrusters on the DP vessel have to counterbalance the
environmental forces. For safe and efficient operation, the number of thrusters required to be
operational at a given time is calculated through the thruster allocation algorithm which
distributes the required total forces and moments among the available thrusters with minimal
use of power [13]. The Power Management System (PMS) communicates between the DP
control system and thrusters for required power distribution. The advanced DP system, by
design, is itself a complex system with significant levels of integration between many sub-
systems to perform diverse control functions, as shown in Figure 2-2.
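
The allocation idea can be sketched with a pseudo-inverse: for an over-actuated vessel, the Moore-Penrose pseudo-inverse of the thrust configuration matrix yields the minimum-norm (and therefore roughly minimum-effort) thrust distribution satisfying the demanded surge/sway/yaw command. The geometry below is invented for illustration and is not a real thruster layout or the algorithm used by any particular DP vendor.

```python
# Pseudo-inverse thrust allocation sketch: T f = tau_d, with the
# minimum-norm solution f = pinv(T) @ tau_d. Geometry is illustrative.
import numpy as np

# Columns: bow tunnel, two stern thrusters, one azimuth (at ~45 deg).
# Rows: surge force, sway force, yaw moment.
T = np.array([
    [0.0,   1.0,  1.0,   0.7],    # surge contributions
    [1.0,   0.0,  0.0,   0.7],    # sway contributions
    [40.0, -30.0, 30.0, -35.0],   # yaw moment arms (m)
])

tau_d = np.array([2.0e5, 5.0e4, 1.0e6])  # demanded [X, Y, N]
f = np.linalg.pinv(T) @ tau_d            # least-norm thrust per thruster
print(f)                                 # individual thruster demands
print(T @ f)                             # check: reproduces tau_d
```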

Figure 2-2 DP system architecture block diagram

DP has come a long way since its inception, developing for more than 50 years alongside the
marine, oil and gas industry [12]. Modern information technology, software and electronics
have gradually penetrated and integrated into a large proportion of the systems on-board
marine vessels and offshore rigs. The stakeholders of ships and rigs have made considerable
investments in complex and sophisticated designs of DP systems which would have never been
possible without digitalisation. All parties in the industry understand the operational
advantages of DP in terms of station-keeping. However, in terms of optimum design and
efficiency, it remains challenging to understand due to sophisticated design and continuously
emerging technologies applied by the vendors.

DP can be considered as a safety-related system as it incorporates one or more electrical and/or
electronic and/or programmable electronic devices for its control functions to keep the
Equipment Under Control (EUC) in the safe state during any undesirable event [14]. The
industry-wide accepted risk assessments for DP focus mainly on component failures, ignoring
the human/operator errors which were discussed in detail in Section 1.3 of Chapter 1.
Human errors, such as delayed decision-making, inadequate coordination between decision-
makers and unsafe actions, which have contributed to DP accidents, are not taken into account
in traditional risk assessments [2, 3, 9, 15, 16, 5].

For many years, the traditional assessments have used reliability calculations on a complex
vessel, with integrated systems, based on the general reliability methods [4, 17, 15, 18].
Reliability is represented by the following Equations (2-2) to (2-4) [4, 19]:

$\mathrm{MTTF} = \dfrac{1}{\lambda}$  (2-2)

$R(t) = e^{-\lambda t}$  (2-3)

$\mathrm{MTTF} = -\dfrac{t}{\ln(R(t))}$  (2-4)

Where
MTTF - Mean Time To Failure
$\lambda$ - failure rate of a component/subsystem
R(t) - system reliability represented by the exponential law

The values are used interchangeably between MTTF and R(t) based on the application.
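
A quick numerical check of Equations (2-2) to (2-4), with an illustrative failure rate:

```python
# Worked check of Equations (2-2)-(2-4); the failure rate is illustrative.
import math

lam = 1.0e-4                  # failures per hour
mttf = 1.0 / lam              # (2-2): MTTF = 10,000 h
t = 8760.0                    # one year of operation, in hours
R = math.exp(-lam * t)        # (2-3): R(8760) ~= 0.416
mttf_back = -t / math.log(R)  # (2-4): recovers MTTF = 10,000 h
print(mttf, R, mttf_back)
```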

2.3 DP Incidents and Related Impacts

The loss of position (LOP) and/or heading of the vessel, either temporarily or for an extended
period, is considered a DP incident and is the most severe safety breach of station-keeping
functionality. The incidents are categorised into three groups, as in Table 2-1 [3]:
Table 2-1 IMCA DP incidents definition

The DP incidents occur due to primary causes and secondary causes. The human factor has
accounted for 23% of the primary reasons, and 64% of the secondary reasons for DP incidents
reported to date [3, 20]. Figure 2-3 shows the DP incidents reported between 1993-2019, with
distribution over each year, different sub-systems, and type of DP incident [3].


Figure 2-3 DP incidents reported from 1993-2019 and distribution across sub-systems
Similarly, a review of data from the World Offshore Accident Databank (WOAD) from DNV
GL also revealed that most of the DP incidents in the offshore industry are related to Human
Error [20]. The repository of information on accidents maintained from 1970 shows how the
events have escalated from an initial event or fault to an incident where necessary corrective
action was not taken at the right time. It is the responsibility of industry experts to make use of
the lessons from the databases and facilitate hazard identification for risk reduction and to
support the DPO during an emergency. The significant component which differentiates
WOAD from other databases is the operating-hours feature, which provides information on
exposure time to derive accurate failure rates in a given period [20]. However, the currently
applied methods for DP reliability assessment do not make use of such data. In addition,
research has revealed that sub-system failures and human errors are critical factors for DP
incidents [21, 22].

DP incidents may cause a variety of impacts, including risks for incident responders and
onboard crew safety, asset damage, environmental damage (oil spills), downtime, and repair
costs. The consequence category may vary from very little to catastrophic, and the severity is
always unpredictable. Efficient design ensures that the system is fault resistant and, for the
given operational mode, the vessel can maintain its DP capability during the worst-case failure
(WCF) [23].

The standard loss of position is referred to as one of three possible conditions, as below [7]:
Drive-Off: This is a situation where active thruster forces drive the vessel away from its target
position. It is defined as a move under power away from the set position.

Drift-Off: This is a situation where the vessel is incapable of maintaining its target position
due to insufficient thrust forces in relation to the environmental forces. The characteristic of
drift-off is that there is inadequate thruster force in relation to the environmental loads.

Large Excursion: This is a situation where the vessel is still under DP control but experiences
a brief excursion of position and/or heading larger than the usual excursion, due to
environmental loads (massive wind gust/wave), a thruster fault or degraded position
information.

The severity of the incident varies based on the vessel type, operation mode, and application.
Table 2-2 summarises the possible consequence effects for different types of vessels based on
the assumed application of work for various incident scenarios [24].

Table 2-2 Different ships with possible worst-case scenario effects

2.4 Dynamic Positioning System Operator Roles

The DPO plays a critical role in the efficient operation of the vessel as it is their primary
responsibility to prepare the ship for complex operations, in consultation with the Master and
with assistance from the electrician. DP vessels operating offshore have different applications,
systems, equipment classes, operation modes, configurations and requirements, and function
in various environmental and weather conditions. Therefore, the failures are different, and the
impact of each failure in the sub-systems differs between vessels and cannot be assumed to be
the same when a DPO works in different ships. It is up to the vessel owner to ensure that the
DPO familiarises themself with operational constraints/ requirements of these modes and
features. Regardless of the vessel type, a DPO performs the following activities [25, 26, 27]:
• Operate the DP systems of the vessel independently, working in a possibly hostile
environment and in changing weather conditions
• Judge whether DP operations can commence, continue or should be suspended and take
immediate action if required
• Demonstrate that he/she is fully competent, using the systems/modes that apply to the
applicable notation(s)
• Demonstrate a holistic view of the vessel’s management systems and operations and
consider the impact of operating under DP on, e.g., security vulnerability as well as from
a legislative and regulatory point of view.
Recent research has revealed that although training and documentation are in place, the DPO’s
reaction during an emergency and the tendency to react based on experience alone do not
contribute to preventing accidents [3, 9]. The DPO’s ability to perform the required functions
depends on the following factors, alongside their technical capabilities, which are often
overlooked, resulting in human error:
1. Situational Awareness
2. Decision Making Ability

2.5 DP Requirement and Assurance Framework

The requirements for DP vessels are governed by various international bodies, including
International Maritime Organisation (IMO), IMCA, Classification Societies, Flag States and
National Regulators. The National Regulators, including the Norwegian Maritime Authority
(NMA), National Offshore Petroleum Safety and Environmental Management Authority
(NOPSEMA), Health and Safety Executive (HSE), Petroleum Safety Authority (PSA), etc [28,
29, 30, 31], guide the operator and asset owners for specific requirements in addition to meeting
the IMO and Classification Societies’ safety requirements. The IMO requirements act as
guidelines to facilitate safe and reliable international operations, taking into account that
vessels are moved and operated internationally. Therefore, special consideration is given for
the design and operation criteria requirements to avoid any additional documentation for the
new area of operation. In addition, compliance with the guidelines is documented in the Flag
State Verification and Acceptance Document (FSVAD) for the dynamic positioning system to
ensure that the vessel is operated, surveyed, and tested according to vessel-specific procedures
and that the results are adequately recorded [3]. These guidelines provide the basis and
common platform to recommend design criteria, necessary equipment, operating requirements,
and a test and documentation system for dynamic positioning systems to reduce the risk to
personnel, the vessel, other vessels or structures, sub-sea installations, and the environment
while performing operations under dynamic positioning control. DP class determines the
ability of the vessel to achieve position keeping capability under worst-case failure modes. The
various classes of DP systems are defined through equipment classes from the IMO for
different ships required for a particular operation, as shown in Table 2-3 [3].

Table 2-3 DP Equipment Class definition by IMO

At present, there are no standards or industry guidance to specifically address the DP assurance
of vessels performing complex operations in the offshore and marine industry. Currently, DP
assurance criteria vary widely across the industry and set out different requirements based on
the operator, owner, and shipyard practices [32]. In addition, vessel owners/operators
implement a process to ensure the technical suitability of the vessel, determine the
configuration for the Critical Activity Mode (CAMO) and Task Appropriate Mode (TAM),
which will define the minimum DP equipment class for the particular application. Table 2-4
shows the recommended minimum equipment class for different DP applications [33].

Table 2-4 Recommended minimum DP class for different applications

The Assurance framework has also started focusing on the stringent requirements of training
for the DPO that would meet the minimum safety standards. A DPO should have sufficient
knowledge and experience to be able to take command of the vessel and identify types of
failure and predict required reactions to respond to the failure. Despite advances in
technology and guidance on the requirements and assurance framework, the number of DP
incidents has not reduced significantly over time, which indicates that the risk reduction factor
is not sufficient. The main reason is that DPO knowledge is limited due to the complex
interconnection of systems and the large volume of documentation, which has been revealed
in one of the studies, as below [34]:

“The relentless drive within the shipping community to introduce electronic navigation aids
to merchant ships had the principal stated objective of improving safety by enhancing
situational awareness. However, some of the doubts expressed at the inception of these
initiatives regarding their likely success have been realised, in that there is now a commonly
held view that the general standard of bridge watch-keeping has been eroded, leading to
several collisions and groundings.”

2.6 Classification Society Standards Governing the DP System

Classification Societies, as members of the International Association of Classification Societies
(IACS) promoting the safety of life, property, and the environment, publish rules containing
standards governing the design and operation philosophy of DP systems through notation.
These notations correspond to the equipment class defined by the IMO with additional
requirements which are accepted by authorities around the world. Classification Societies such
as Det Norske Veritas Germanischer Lloyd (DNV GL), American Bureau of Shipping (ABS),
Bureau Veritas (BV), Lloyd Register (LR), Registro Italiano Navale (RINA), Indian Registry
of Shipping (IRS), China Classification Society (CCS), Korean Register (KR), Nippon Kaiji
Kyokai (NK), Russian Maritime Register of Shipping (RS), etc. have developed standards to
govern the design of DP systems [1, 35, 36, 37, 38, 39, 40, 41, 42, 43]. Table 2-5 shows the
relationship between the equipment classes of the IMO with DP notation from different
Classification Societies. In addition, the recommended practice is published by these societies
to provide design philosophy guidelines, operation guidance, competence requirements for a
DPO, and competence requirements of critical technical personnel [25, 44].

Table 2-5 Classification Societies DP notation for IMO DP equipment Class
The IMO equipment class focus has proved that concentrating only on the equipment design
will not ensure safety; careful consideration is also required of the competence of the technical
personnel involved. Competence requirements need to be ensured at different levels of
cognition, such as knowledge, understanding, application, and integration to maintain the DP
integrity of vessels during complex operations [25, 44]. Several instances of loss of position
have ended up as DP incidents mainly because of intervention by a DPO lacking knowledge,
skills, and understanding of the system functionality and the effects of various forces acting on
the vessels.

Many Classification Society rules, standards, international codes, regulations, and assurance
frameworks have been developed and implemented to ensure that the design and operation of
DP systems can meet the minimum standards for safe and reliable operation [32, 33]. However,
the current advancement in automation and technology development outpaces the rate at which
the standards and frameworks are being developed to ensure the safety level. In addition, the
energy demand, economic growth, and growing competition will urge operators/vessel owners
to adopt technologies as fast as they develop. This forces industry experts to use more advanced
technology to assure safe operation through digital advisory tools rather than just codes and
standards [2, 17, 15].

2.7 Reliability Assessments

The reliability assessment is a mandatory requirement while designing, constructing, and
operating any safety-critical system, and DP systems are no exception [17, 15, 45]. In the
marine and offshore industry, the reliability assessment is performed by several stakeholders
in different phases and with varying levels of rigour [46, 47]. The vessel owner/operator
performs high-level evaluation during the initial phase, along with the development of the
specification of a new-build. The shipyard, vendor, and independent consultants perform the
assessments during design, construction, commissioning, and sea-trial stages.

Reliability is the capability of the system to perform its intended function during the defined
period under pre-established conditions [19, 17]. Often the industry measures DP reliability
through the comprehensive equipment class system of DP class notation [15]. The reliability
of DP systems depends on the redundancy so that a sudden failure of one item of equipment
or a negligent act will not cause an unexpected loss of vessel position and/or heading.

The reliability assessment of the DP system can be categorised into two parts as below [19,
48]:
• Reliability Estimation
• Reliability Prediction

Only a few of the traditional reliability assessments would cover both the hardware and
software related to the system. The hardware part is well established, while the software
reliability assessment is often called into question due to its difficulty, and the outcome is often
controversial [49, 47]. Software reliability figures are hard to trust because the underlying
probabilistic models are prone to error. The reason for such a lack of accuracy in
software reliability for DP systems is due to factors such as software complexity, difficulty in
identifying suitable metrics, difficulty in conducting exhaustive testing, difficulty in
quantifying the effectiveness of test cases, etc. [49]:
"Most of the existing quantitative software reliability methods were not developed
specifically for supporting the quantification of software failure rates and demand
failure probabilities to be used in reliability models of digital systems."

Software plays a critical role in the safe and reliable functionality of DP vessels, and therefore
it cannot be ignored (or assumed never to fail). The fundamental differences between hardware
and software failures are [19, 49, 47, 50]:
• Causes of hardware failures include both wear-out and improper design, with the former
being most dominant. The predominant cause of software failures is a design failure.
• The software can have dependencies, such as common cause failure mechanisms, very
different from those of hardware.
• Hardware failures generally lead to complete loss of function, but the software may also
continue running, producing erroneous results.
• When hardware fails, it must be replaced. For fault-tolerant systems, redundant hardware
can maintain functionality while the failed hardware unit is replaced, and redundancy is
restored.
• When software fails, it can often be restarted and continue to operate after a failure if the
combinations of inputs that lead to the failure (e.g., a faulty sensor input) are not present
anymore. If the software fault is to be removed, the software cannot be replaced but has
to be updated.

Software reliability is not the main focus of this thesis research. Therefore, it is considered that
software reliability is ensured through other reliability assessments and it is not evaluated in
detail as part of this research.

2.8 Reliability Estimation

Reliability estimation is a process of determining the quality of equipment in sub-systems, the
quality of design, and the competence of the crew to achieve the required station-keeping
integrity. Traditionally, several reliability estimation techniques are used for assessing the reliability
of DP systems. These techniques are generally grouped into two categories [15, 46, 51, 52]:
• Qualitative techniques
o FMEA
o Failure Mode Effects and Criticality Analysis (FMECA)
o HIL
o DP capability plot, Footprint, and Consequence Analysis
o Site-Specific Risk Analysis
• Quantitative techniques
o Reliability Block Diagram (RBD)
o Fault Tree Analysis (FTA)
o Markov Chain
o Monte Carlo Simulation (Network Simulation)
The suitable reliability estimation techniques are chosen for a particular vessel based on
various factors. These factors are typically defined by the end-user, operator, design companies,
engineering, procurement and construction (EPC) contractors and the shipyard. However, these
methods have proven drawbacks, as detailed in Section 1.3. Qualitative and quantitative risk
assessment for the DP system has been used as essential risk reduction methods in proactively
identifying the shortcomings that might have been unidentified during the design stage. These
assessments, if appropriately implemented, prove useful in the optimisation of the design and
have a direct impact on the lifecycle costs. Qualitative techniques support the early
identification of potential single-points of failure, which could result in worst-case system
failure through a system trip or loss of redundancy [15]. Qualitative techniques ensure that the
system design is thoroughly reviewed through a systematic approach. However, these
techniques are time-consuming as they need to focus on every detail at the sub-system level,
and cost/benefit cannot be directly derived. Quantitative techniques use the output from the
qualitative techniques such as FMEA / FMECA. Quantitative techniques overcome the
disadvantages of qualitative techniques by supporting cost/benefit through an accurate
understanding of the cost of a design change and the benefit of a change. These techniques are
more time consuming and require failure frequency databases that are proprietary to either
manufacturers or oil companies, which presents limitations [49, 52]. In addition, these
techniques often do not focus on the correlation of the failure modes.

2.8.1 Failure Mode Effects Analysis (FMEA)


The objective of FMEA is to verify that the consequence of any single failure does not exceed
a pre-defined worst-case single failure design intent [1, 23]. It focuses on equipment
redundancy and redundancy group independence for aspects such as the power supply,
physical interfaces, computer units, lubrication oil, ventilation, cooling water, etc. to show
post-failure station-keeping capability. For DP vessels, the design intent is that the ship should
be able to keep its position after a single failure, as required by IMO 645 [8]. All of the
Classification Societies and regulations have rules and standards covering FMEA requirements
to study the reliability of the DP system [46, 53, 45, 54].

FMEAs are usually conducted by specialised companies and require a cross-competent team
covering all engineering aspects of the DP vessel, except software [55, 47]. In addition to the
FMEA analysis, a proving trial procedure (test program) will be written and executed as part
of the sea trial of a new build vessel. The objective is to verify, on running systems onboard
the real ship, conclusions made in the analysis, and possibly investigate aspects that it was not
feasible to conclude in a desktop analysis.

The FMEA analysis presumes that the software in the control systems is working as intended
concerning functionality and fault handling. Such an assumption relies heavily upon the
vendor's internal verification of the software. This practice is questionable when dealing with
safety-critical systems, because a lack of independence undermines objectivity in the
verification effort. It reveals that testing and verification of the control system software must
be added to achieve a proper independent assessment of the complete DP vessel. However, it
is noted that there is no indication that FMEA uses information from IMCA about systematic
analysis of DP failure incidents. Even the FMEA guide from IMCA does not provide any
guidance on how the experience from DP failure incidents could be used for improving the
efficiency of FMEA reports. It is clearly evident that there is a gap in FMEA studies and a need
to use the DP failure incident data to bridge this gap [5, 56, 47].

2.8.2 Failure Mode Effects and Criticality Analysis (FMECA)

FMECA is a bottom-up (hardware) or top-down (functional) approach for the risk assessment.
It is an extension of FMEA analysis where the failure modes are prioritised, indicating the
criticality (or severity) of the various failure modes. It acts as the first step and provides the
input to the Quantitative Risk assessment. The vessel operator will develop an asset-specific
risk matrix to identify the probability (frequency/likelihood) and consequence severity
(criticality) of particular failure modes on the vessel operation [46, 45]. In this way, the
assessment will support the operator to perform benchmarking across the vessel and identify
the failures having an impact on the operation based on experience.

However, FMECA does not focus on human error and is best suited to a system with
little or no redundancy. It is challenging to apply to a system with complex redundancies as it
leads to additional documentation, which is time-consuming [15]. This method is often
performed without considering the knowledge and experience gained from previous DP
incidents.

2.8.3 Hardware in the Loop (HIL) Testing


HIL acts as a simulation tool to test functions such as correct operation to specification,
performance (capabilities and behaviour) under different environmental conditions, and the
failure modes of integrated DP systems or their various sub-systems
[55, 56]. HIL testing is performed at early stages, typically at the manufacturer location during
the Factory Acceptance Test (FAT) as a stand-alone system and onboard the vessel during sea-
trial Customer Acceptance Test (CAT) as an integrated system.

Studies have revealed that FMEA techniques mainly focus on the physical layout of a system
and ensure only the reliability of the hardware [9, 56, 47]. Therefore, ensuring the reliability
of software in the control system is complicated and the outcome is controversial. Control
system software reliability and its quality depends on the proper, well-defined software
development and testing processes [56]. It is mostly the vendor's responsibility and is
sometimes overlooked by individuals developing software due to various reasons. An FMEA
desktop study will detect and reveal the possible weak points in the physical design and
highlight the need for increased attention in critical software functions. FMEA provides
essential input to the HIL testing, which in turn will provide crucial information on the
functionality and failure handling capabilities of the control system software [47].

Figure 2-4 shows the set-up for HIL testing, where the actual vessel conditions are replaced
with the HIL simulator acting as a digital twin [56]. Usually, this is accomplished by isolating
the control system and its operator stations from its surroundings. The actual field Input/Output
(I/O) is replaced with simulated I/O from a HIL simulator in real-time. The HIL simulator
imitates the actual conditions (i.e., dynamic systems, actuators, and sensors) of the control
system. It provides realistic, continuous, and consistent measurements and responds to the
control signals.

Figure 2-4 HIL simulation setup for testing the Vessel’s DP system [56]

The main advantages of using HIL testing are as follows [56, 47]:
• Facilitates closed-loop realistic testing of the DP system during FAT and CAT.
• Enables performance testing over a wide range of environmental and weather conditions.
• Supports testing of each signal failure, which would be impractical and undesirable to replicate
physically for a complex system such as reference, power, and propulsion/thruster systems.

Through experience, it has become evident that it is not possible to perform 100% test coverage
of the complex system as each signal may fail in several ways, including broken wires, short
circuits, frozen values, slow/fast drift, or noise, with typically interfaced signal counts ranging
from a few hundred to several thousand [47]. So, the focus area is defined as failures involving
high risk considering both probability and the consequences. Furthermore, any hidden,
common causes and multiple failures with some level of human error may result in severe
consequences leading to DP incidents, which points to the need for research to close the gap.

2.8.4 DP-Capability, Foot-Print and Consequence Analysis


A DP capability plot is a graphical illustration of the vessel’s station-keeping (position and
heading) capacity in a specified vessel condition and specified environmental condition [57].
It is used to determine the environmental limits of an operation showing the maximum static
or quasi-static wind, current, and wave loads in which the vessel can maintain its position for
different configurations (e.g., worst-case single failure (WCSF), nominal case). The DP
capability plots are governed by specification from Classification Societies and regulators and
a typical plot is shown in Figure 2-5 [58, 59].

The traditional DP capability analysis is non-conservative compared with time-domain
analysis. In this analysis, the 6 DOF vessel motion, related thrust losses, and all other
dynamic effects in the propulsion system, such as rate limits, are usually neglected [57]. Another
significant shortcoming of DP capability analysis is that the transient conditions during a
failure and recovery after a failure are generally neglected. DP plots are theoretical plots
calculated from detailed information of the vessel’s hull and superstructure form and available
thruster power. The confidence level of the plots is very much reduced when compared to DP
Footprint plots. Therefore it is advised by IMCA and Classification Societies that, wherever
possible, the DPO should validate the DP capability plots by taking DP Footprint Plots and
through time-domain analysis [59, 60].

Figure 2-5 Typical DP capability plot for a DP vessel. The plot notation is DP CAPABILITY – LX (A, B, C, D), where X = 1, 2 or 3 describes the DP capability level; A = intact condition, heading 0 to ±30°; B = intact condition, heading 0–360°; C = worst-case single failure condition, heading 0 to ±30°; D = worst-case single failure condition, heading 0–360°. Example: DP Capability L1(9, 8, 8, 6).

DP time-domain simulations prove the vessel’s ability to keep station accurately and estimate the proper
headings for a selected number of sea conditions. Dynamic Capability is the next level of DP
capability analysis which is based on systematic time-domain simulations with a sophisticated
6 DOF vessel model, including dynamic wind and current loads, first and second-order wave
loads with slowly-varying wave drift, a complete propulsion system including thrust losses,
power system, sensors, and a DP control system model [61]. It overcomes most of the
shortcomings of DP capability analysis, and the results are much closer to reality. The features
enable the user to define the acceptance criteria in the analysis for the site requirements for
each vessel and operation, such as station-keeping footprint, sea-keeping criteria, dynamic
power load, and transient motion after failure [59]. However, the efficiency in preventing DP
incidents during failure scenarios seems to be unsatisfactory due to its lack of ability to analyse
the failure scenarios for each signal in the DP system.

DP Footprint plots represent the actual measurements of the vessel’s station-keeping
performance in the actual operating environmental conditions and with the thruster
configuration at the time of the complex operation, with site-specific information. It provides
a scatter plot of vessel positions at regular intervals around the required set position. It also
enables comparison of points on the limiting wind speed envelope given in the theoretical DP
capability plots. The usual practice is to set the configuration of the thrusters as per the DP
capability plot and obtain the plot for the loss of the most effective thruster(s) and after a worst-
case failure. To better understand the vessel DP station keeping ability and enhance the
knowledge, it is advised to combine the theoretical DP capability plots and DP footprint plots
[60].

Consequence analysis is prescribed by the IMO requirement to ensure an alarm is raised when
the station keeping ability is lost [8]. For DP vessels with equipment classes 2 and 3, it is
mandatory to have online consequence analysis. It is a monitoring function in the DP control
system to check any abnormalities in vessel heading and position in its current operating mode,
in the current weather conditions, in the case of any of the predefined worst-case failures, and
raise the alarm to the operator [62]. The main issue is that the result of re-configuration is
unpredictable as the integrated system is complex and there are various interdependencies.

2.8.5 Site-Specific Risk Analysis (CAMO, TAM, ASOG, and WSOG)


Site-specific risk analysis was adopted by the industry as there was a clear indication of a
decline in experience of DPOs and rapid technology advancement leading to intricate designs
DP redundancy and fault tolerance are often over-ruled by a DPO’s incorrect
understanding of operational design criteria, resulting in DP incidents. Site-specific risk
analysis such as CAMO, TAM, Activity Specific Operating Guidelines (ASOG), and Well
Specific Operating Guidelines (WSOG) support efficient DP operation by establishing
structured operational limit criteria [63]. An advisory status is included to indicate the
deviation from the safest mode of operation. The statuses are Green DP Status for Normal
Operation, Blue DP Status for Advisory Status representing no immediate risk, Yellow DP
Status for high risk should be another failure occur, and Red DP Status for Severely degraded
status or Emergency.

The operational planning associated with different risk assessments are detailed as follows [60,
63]:
• CAMO is applied to all the critical activities. It defines the most fault-tolerant
configuration for the DP system and associated plant and equipment such that single point
failure does not exceed the vessel’s identified worst-case failure.
• TAM is applied to less critical activities and uses risk-based operating modes such that the
set-up of the DP system is operated so that single point failure could result in exceeding
the vessel’s identified worst-case failure.
• ASOG defines the operational, environmental, and equipment performance limits for the
location and the specific activity the vessel is undertaking.
• WSOG was developed for specific well or drilling activity and is the same as ASOG [63].

These risk analyses guide the DPO in a user-friendly tabular format with actions to be taken
by the DPO in response to faults and deteriorating conditions. Often these site-specific
guidelines do not provide the actual status of the DP system when any alarm or failure occurs,
which leaves the operator in a predicament, referring to the operation manual/FMEA
document to understand the system reliability.

2.8.6 Reliability Block Diagram (RBD)


An RBD presents a logical relationship between the system, sub-systems, and components. A
system can be modelled for reliability computation and analysis using block diagrams [19]. A
DP system consists of sub-systems and components connected to perform given functions and
maintain vessel position and heading. Due to the integration between sub-systems, it can
become complicated, making reliability analysis difficult. A mathematical model reduces the
system to a graphical representation of the interconnection of its sub-systems.

RBD analysis is a deductive (top-down) method, and it is easy to apply for a complex system
where it can be broken up into sub-systems [15]. The RBD application is very practical when
DP performs more than one function, and a separate RBD can represent each function. The
significant advantage of using RBD for the reliability analysis is that it provides the
methodology to explore the residual risk due to common-cause failure mechanisms, which
could lead to a hazardous event if the Electrical / Electronic / Programmable
Electronic (E/E/PE) safety-related system fails on demand. The Safety Integrity Level (SIL) concept is
applied to the RBD to support evaluation of the safety loop from sensors, logic solvers, and
final elements [64, 14].

A typical reliability model can be represented, as shown in Figure 2-6 [15, 65]. These RBD
can be used for mathematical calculations and prediction of the reliability of sub-systems. The
blocks A, B, C, D, E and F refer to the equipment arranged either in series or parallel for
calculating the reliability of sub-system using RBD model.


Figure 2-6 Typical Reliability Block Diagram Architecture
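
The series/parallel arithmetic behind an RBD is straightforward. The sketch below assumes, purely for illustration, that A and B form a parallel pair feeding C, and D and E form a parallel pair feeding F; both the topology and the block reliabilities are invented assumptions rather than a reading of Figure 2-6.

```python
# Series/parallel RBD arithmetic; topology and reliabilities are
# illustrative assumptions.
def series(*r):
    p = 1.0
    for x in r:
        p *= x                # all blocks in series must work
    return p

def parallel(*r):
    q = 1.0
    for x in r:
        q *= (1.0 - x)        # all parallel blocks must fail together
    return 1.0 - q

R = dict(zip("ABCDEF", [0.95, 0.95, 0.99, 0.90, 0.90, 0.98]))
r_sys = series(parallel(R["A"], R["B"]), R["C"],
               parallel(R["D"], R["E"]), R["F"])
print(r_sys)  # system reliability under the assumed arrangement
```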

Static RBDs are limited in their ability to express varying system states, dependent events,
and non-series-parallel topologies. Studies have revealed that the accuracy of RBD is prone to
human errors; simulations are inaccurate due to the involvement of pseudo-random number
generators and the sample-based nature of computer arithmetic computations [65, 19, 15, 48].
Dynamic RBD, used in combination with the ML concept, will result in more efficient analysis
methods.

2.8.7 Fault Tree Analysis (FTA)


FTA is a logic diagram which can be used to represent the inter-relationship between potential
critical events and causes of the event in the DP system [19]. It analyses integrated system
failures in terms of combinations of sub-systems and lower level faults, and eventually
component faults. As a top-down approach, the analysis can be started at a very early design
stage and completed as the detailed design is carried out [19, 15]. This method may be
qualitative, quantitative, or both depending on the objective of the study. However, it is widely
used for quantitative analysis for a DP system.
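
A minimal quantitative fault-tree evaluation, assuming independent basic events and invented probabilities (a generic illustration, not the thesis's FTA of a specific DP failure), looks like this:

```python
# Generic fault-tree evaluation sketch with independent basic events.
# Top event: loss of position = (both position references fail) OR
# (thruster group fails). Probabilities are invented for illustration.
def or_gate(*p):   # probability that at least one input event occurs
    q = 1.0
    for x in p:
        q *= (1.0 - x)
    return 1.0 - q

def and_gate(*p):  # probability that all input events occur
    out = 1.0
    for x in p:
        out *= x
    return out

p_top = or_gate(
    and_gate(0.01, 0.01),  # both position references fail
    0.002,                 # thruster group failure
)
print(p_top)
```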

FTA does not show the causes of all of the failures or accidents in the system, but it does focus
on the specified failure/accident. It is also wholly dependent on the individual’s capability and
knowledge of the system, which can vary and result in inconsistency in the outcome. These
methods could also be time-consuming for a complicated system. In addition, there may be
inherent uncertainty and consequence from a safety perspective as human error is excluded
from the analysis.

2.8.8 Markov Chain


Markov analysis is a bottom-up analysis method suitable for the evaluation of functionally
complex structures and complicated repair and maintenance strategies. It is a stochastic model
and can be applied to DP, a randomly changing dynamic system, to predict future events based
on the current state while neglecting historical events [52]. It enables the
calculation of the probabilities of system elements (components, equipment, sub-systems)
being in a particular state at specific points (or intervals) in time. It is efficient and faster in
mathematical computation compared to Monte Carlo simulations (see Section 2.8.9). In
addition, the method supports the DPO by providing deep insights into the changes in the
system over time.
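
A two-state (up/down) Markov availability model makes the idea concrete: with a failure rate λ and a repair rate μ, the state probabilities follow the Chapman-Kolmogorov equations. The rates below are illustrative assumptions, not DP field data.

```python
# Two-state Markov availability sketch: numerically integrate
# dp/dt = p Q for generator matrix Q. Rates are illustrative.
import numpy as np

lam, mu = 1.0e-4, 1.0e-2           # failure / repair rates per hour
Q = np.array([[-lam,  lam],        # state 0 = up
              [  mu,  -mu]])       # state 1 = down

p = np.array([1.0, 0.0])           # start in the "up" state
dt, horizon = 1.0, 10_000.0        # 1 h steps over 10,000 h
for _ in range(int(horizon / dt)):
    p = p + dt * (p @ Q)           # Euler step of dp/dt = p Q
print(p[0])                        # ~ steady-state availability mu/(lam+mu)
```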

The main pitfall is that only technical failures of the sub-systems are included in the Markov
modelling. However, the human-machine interface plays a critical role in DP operations. The
DPO monitors the information from different sub-system sensors and analyses the integrated
system status before issuing control commands during complex operations and takes action
during an emergency [15]. Therefore, human factors and operational procedures, along with
technical aspects, should be taken into consideration for reliability analysis. In addition, for an
integrated system with a large number of components, the massive amounts of data will be
unmanageable and much useful information will be lost when merging the functional state and
failed state [19].

2.8.9 Monte Carlo Simulation (Network Simulation)


The Monte Carlo simulation technique is the most widely used approach for reliability analysis of sophisticated, real-world systems with stochastic elements, which are difficult to evaluate analytically [15]. The simulations are treated as a series of experiments: the features of the system in a simulated environment closely resemble reality and support the measurement of reliability performance metrics. This, in turn, helps to estimate system performance under any set of operating environments and weather conditions and to accommodate any number of failure distributions. For the DP system, the effectiveness of using Monte Carlo analysis in overcoming the complexity of the problem is directly related to the number of events in the fault tree [18, 66].
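A minimal sketch of the approach, assuming exponentially distributed times to failure and an arbitrary mission length (neither calibrated to real DP equipment), estimates reliability as the fraction of simulated missions completed without failure:

```python
import random

def simulate_mission(failure_rate_per_hour=1e-4, mission_hours=720):
    """Draw an exponential time-to-failure and check it exceeds the mission."""
    time_to_failure = random.expovariate(failure_rate_per_hour)
    return time_to_failure > mission_hours

trials = 100_000
survived = sum(simulate_mission() for _ in range(trials))
print(f"Estimated mission reliability: {survived / trials:.4f}")
# Analytical check: exp(-1e-4 * 720) ~ 0.9305
```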

The method has a pitfall that makes it unsuitable for certain applications: for a complex system, it is expensive and time-consuming to develop the simulation model. It is also suitable only for the comparison of different systems rather than for optimisation. In addition, the simulation results lose their value if design changes invalidate the model, as the approach is not adaptive.

2.9 Reliability Prediction

Reliability prediction is one of the more widely used techniques for reliability analysis [48]. It is a process of forecasting the probability of a system performing its function successfully when demanded to operate [67]. It can be done at any phase of the system lifecycle; however, it is most commonly used during the initial system design stage to evaluate the proposed design for reliability concerns [19, 15, 48]. It involves an estimation of the performance of the system over a period of time, and of its reliability, based on the failure rates of the sub-system components. It helps in identifying the weak points in the design at the early stages of a project so that they can be improved. In recent years, due to technological advancements, reliability prediction has become widely used during the operational phase of DP to support maintenance and inspection [48].

In this research, reliability prediction using RNN is evaluated using different algorithms to
understand its suitability and performance. The prediction of the reliability of the DP system
uses the following concepts to form a robust research framework:
• Big Data – Data sources (Chapter 3)
• Diagnostic and Descriptive Analytics (Chapter 4)
• Predictive Analytics (Chapter 6)
• Prescriptive Analytics (Chapter 7)

The experimental set-up for reliability prediction should consider the following parameters and
boundaries to define uncertainties upfront for reliability prediction during operation [48]:
• Reliability Prediction uses (Why - Station Keeping Capability)
o Reliability goal assessment
o Mission reliability estimation
o Prediction of reliability performance
• Reliability Prediction in the system life cycle (When - Operation)
o Operational phase
• Factors in selecting a Reliability Prediction method (What – Consequence of Failure)
o Product technology
o Consequence of failure
o Failure criticality
o External influences
o Available resources (data, knowledge, and technology, etc.)

Reliability prediction is usually performed using test data or field data. Test data refers to actual equipment operational experience in a test environment, where the time required to observe failures is usually accelerated through proper planning to increase the amount of data. Tests are conducted in test environments which should resemble actual environmental conditions, so that the reliability obtained for the system, sub-systems, assemblies and components can be stated at specific confidence levels. The test environment should necessarily include failures from different possible sources such as electromagnetic disturbances, humidity, the thermal environment and human intervention, and at the same time avoid failures which are not relevant to the operating environment [48, 67]. Reliability prediction based on test data can be widely used as it covers a vast range of possibilities, and it can also be used for validation of other systems.

Field data refers to the actual data from the components, assemblies, subsystems, and systems
obtained from an actual operational environment. Therefore, for reliability prediction of the
system during the operational phase, field data should be used for greater accuracy and
increased efficiency in forecasting. Prediction for the integrated DP system on any specific vessel should ideally use field data from a similar vessel with the same operating conditions. In the case of missing information, similar items or
similar environments may be found, and the impact on the accuracy should be evaluated [48].

2.9.1 Offline Reliability Prediction

Offline reliability forecasting is one of the critical activities of the DPO as part of the operational planning of a DP system for complex operations, when allocating the configuration (power and thruster systems, etc.) and making decisions in case of emergencies [60]. The current method of planning and decision making relies heavily on the massive number of documents produced during the initial stages. It is based on knowledge without any proper understanding of the final location of operation (site-specific information), which demands significant effort in terms of time and cost. With the development of technology, the adoption of automation and the use of sensors for components in the sub-systems, a massive amount of data related to the DP system is generated onboard the vessel. Recently, databases have been stored on dedicated machines for condition monitoring and maintenance of the systems. This massive amount of historical data can be used for predictive planning to prepare the vessel and the DPO for any site-specific activity which could become operationally critical [2]. There is a need for a digital tool that can be used offline as a simulator for training the operator. Such a tool would also help with upfront planning to understand how to handle catastrophic events in which a single failure causes the worst-case failure of the design intent.

Big Data and predictive analytics have enormous potential to prevent DP accidents in the
offshore marine industry. However, the stakeholders of DP systems tend to be conservative
with regard to implementing digital technology to unlock the potential. Recent trends show
that the shipping industry is adopting predictive analytics for various applications including
route optimisation before a voyage and ship behaviour and wave height prediction for efficient
offshore operations [68, 69].

2.9.2 Real-Time Reliability Prediction

Real-Time reliability prediction or Time Series Prediction (TSP) involves the prediction of future system reliability based on information about the current and past status of the sub-systems. TSP has been implemented to address several real-world problems in the marine and offshore industry for optimising operational efficiency [68, 69, 70]. Typically, four methods are used for detecting the state of the system and predicting failures: function approximation, classifiers, system models, and time-series analyses [71, 72]. All of these methods have at least one of the following drawbacks [73]:
• They do not consider the effect of a dynamic change in the input
• They support only discrete input variables
• A large amount of historical data is required to achieve reasonable accuracy
• They are unable to handle random errors in irregular fluctuations.

Many Artificial Neural Network (ANN) methods have been adopted for reliability prediction
and have shown high prediction accuracy and adaptability for stable offline data [50, 74, 75,
76]. However, such methods were not capable of TSP of real-time data. Deep Learning RNNs
are more suitable for addressing TSP than traditional Neural Networks as they provide multiple
nonlinear mapping levels, which can abstract and extract the features of an input signal and
discover the potential underlying relationship at a deeper level [73, 77]. However, the methods
are not sufficiently proven in terms of their adaptiveness where TSP is dynamic. At the same
time, the sampling speed of models should match the controller capability [59]. Therefore, in
this research, four of the widely used RNN models were used to evaluate their suitability for
the time series prediction/forecasting of DP reliability.

2.10 Recurrent Neural Network Models

An RNN introduces the concept of timing into the design of a network structure, which makes
it more adaptable in time series prediction. The RNN architecture provides hidden layers to
share the parameters across the time series. This functionality in the RNN supports built-in
“memory” blocks, allowing the system to recognise and predict long sequences [77].

The various algorithms for RNN are used in the research for determining their suitability for
reliability prediction of DP systems. The predictive analytics framework is used for the
evaluation and for estimating the accuracy. The algorithms are tuned across different
hyperparameters for optimising the efficiency. The details of the performance analysis and the
hyperparameters are discussed in Chapter 6.

Figure 2-7 Time Sequence in RNN

The most important aspect of the RNN is its ability to analyse sequential data, representing the dynamic performance of the system through network delay recursion [79]. Figure 2-7 demonstrates how sequential data is processed in the RNN model. For example, consider that $W_{il}$ is the input-layer weight matrix, $W_{hl}$ the hidden-layer weight matrix, and $W_{ol}$ the output-layer weight matrix. The output at time $T$ depends not only on the input at time $T$ but also on the recursive signal from time $T-1$. This illustrates the capability of the RNN's delay recursion to support the processing of short-term sequential data. However, for memorisation in a broader context, i.e. learning from deep sequences in time series forecasting, the RNN suffers from the vanishing gradient problem [80, 81]. Specifically, in the case of long sequence inputs or time series, an RNN is hard to train as the model can only remember the latest information and not the earlier data.

2.10.1 Multi-Layer Perceptron

The Multi-Layer Perceptron (MLP) is a class of feedforward deep learning Neural Network. An MLP consists of three layers of nodes: an input layer, a hidden layer, and an output layer. The neurons are organised by type into the layers, and the flow of information is in one direction [82]. Therefore, the neurons within a layer are not related and do not share any information. MLPs are suitable for prediction tasks and are considered universal approximators of functions, since their outputs depend only on the current inputs [79, 80, 83].
The output $y$ for the MLP is given by Equation (2-5):

$$y = g\left(\sum_{i=1}^{q} w_i^{o}\, f\!\left(x^{(t)} w_i^{I} + b_i^{h}\right) + b^{o}\right) \tag{2-5}$$

where $W^{I} = \{w_1^{I}, w_2^{I}, w_3^{I}, \ldots, w_q^{I}\}$ is the input weights matrix, $W^{o} = \{w_1^{o}, w_2^{o}, w_3^{o}, \ldots, w_q^{o}\}$ is the output weights matrix, $B^{h} = \{b_1^{h}, b_2^{h}, b_3^{h}, \ldots, b_q^{h}\}$ is the bias vector of the hidden layer and $b^{o}$ refers to the bias of the output layer. The $f$ and $g$ terms are the activation functions of the hidden and output layers.

An MLP structure is illustrated in Figure 2-8 [82].

Figure 2-8 MLP Network Structure

Though the MLP proves to be very efficient in prediction and pattern recognition, it has an inherent drawback: variable selection accuracy depends highly on the choice of the shrinkage parameters, and prediction performance deteriorates when these parameters are chosen inappropriately [82]. In addition, when there is a set of candidate variables with very high pairwise correlation, the model tends to select only one variable from the pool and ignore the others.
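As a minimal sketch, not the experimental configuration used later in this thesis, an MLP for one-step-ahead forecasting from a sliding window of past reliability-index values could be set up as follows (window length, layer sizes and optimiser are illustrative assumptions):

```python
import numpy as np
from tensorflow import keras

WINDOW = 10  # assumed number of past samples used as input features

def make_windows(series, window=WINDOW):
    """Turn a 1-D reliability series into (X, y) supervised pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# Hypothetical reliability-index history scaled to [0, 1].
series = 0.9 + 0.05 * np.sin(np.linspace(0, 20, 500))
X, y = make_windows(series)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW,)),
    keras.layers.Dense(32, activation="relu"),   # hidden layer (f)
    keras.layers.Dense(1, activation="linear"),  # output layer (g)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

print(model.predict(X[-1:], verbose=0))  # one-step-ahead forecast
```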

2.10.2 Simple Recurrent Neural Network (SRNN)

In comparison to the MLP, SRNNs support feedback between one or more neurons, which enables historical information from previous inputs to influence each output [68]. The SRNN uses feedback loops to carry context between input nodes: the transition at each time step no longer depends on the current input to the hidden layer alone, but also on the previous hidden state. The SRNN is a sequence-to-sequence model that can process sequence data of any length, which makes it relevant for the DP reliability prediction application [74, 80]. The underlying network structure of an SRNN is depicted in Figure 2-9. In the figure, subscript t represents the time, x is the input data, O is the output data, S is the network state, W is the update weight, V is the weight between the cell and the output, and U is the weight between the input and the cell.

Figure 2-9 SRNN Network Structure

The output $y$ for the SRNN is given by Equation (2-6):

$$y = g\left(\sum_{i=1}^{q} w_i^{o}\, f\!\left(x^{(t)} w_i^{I} + x^{(t-1)} w_i^{I} + b_i^{h}\right) + b^{o}\right) \tag{2-6}$$

where $W^{I} = \{w_1^{I}, w_2^{I}, w_3^{I}, \ldots, w_q^{I}\}$ is the input weights matrix, $W^{o} = \{w_1^{o}, w_2^{o}, w_3^{o}, \ldots, w_q^{o}\}$ is the output weights matrix, $B^{h} = \{b_1^{h}, b_2^{h}, b_3^{h}, \ldots, b_q^{h}\}$ is the bias vector of the hidden layer and $b^{o}$ refers to the bias of the output layer. The $f$ and $g$ terms are the activation functions of the hidden and output layers. The SRNN is suitable for prediction where there is a limited amount of data, and it shares the drawbacks of the standard RNN as the network structure is equivalent.
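A direct numpy transcription of Equation (2-6), with arbitrary dimensions and random weights purely to show the data flow, is sketched below:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 8                         # number of hidden units (assumed)
w_in = rng.normal(size=q)     # input weights w_i^I
w_out = rng.normal(size=q)    # output weights w_i^o
b_h = rng.normal(size=q)      # hidden-layer biases b_i^h
b_o = 0.1                     # output bias b^o

f = np.tanh                   # hidden activation
g = lambda z: z               # linear output activation

def srnn_output(x_t, x_prev):
    """Equation (2-6): hidden units see both x(t) and x(t-1)."""
    hidden = f(x_t * w_in + x_prev * w_in + b_h)
    return g(np.dot(w_out, hidden) + b_o)

print(srnn_output(x_t=0.95, x_prev=0.93))
```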

2.10.3 Long Short Term Memory (LSTM)

LSTM is an RNN variant, which uses a purpose-built memory cell to represent the long-term
dependencies in time series data [70, 84]. The LSTM RNN is suitable for relatively long
interval delays in time series prediction. It addresses the vanishing gradient problem of the
SRNN through incorporating self-connected “gates” acting as memory cells in the hidden units
[68, 83]. The complete architecture of LSTM is represented in Figure 2-10, which shows the
flow of information from input to output [81].

Figure 2-10 LSTM Network Structure

LSTM consists of the following parameter groups:
(i) Input weights: $U^{f}$, $U^{i}$, $U^{o}$ and $U^{c}$
(ii) Recurrent weights: $W^{f}$, $W^{i}$, $W^{o}$ and $W^{c}$
(iii) Biases: $b^{f}$, $b^{i}$, $b^{o}$ and $b^{c}$

The first step in the LSTM is to decide which information is new and which is to be thrown away from the cell state; this is governed by the forget gate $f_t$ with sigmoid activation function $\sigma$, as shown in Equations (2-7) and (2-8) (the latter in an alternative notation):

$$f_t = \sigma\!\left(X_t U^{f} + S_{t-1} W^{f} + b^{f}\right) \tag{2-7}$$

$$f_t = \sigma\!\left(\theta_{xf}\, x_t + \theta_{hf}\, h_{t-1} + b_f\right) \tag{2-8}$$

In the next step, the input gate ($i_t$) layer decides which values are to be updated, and a tanh layer creates a vector of new candidate values $\tilde{C}_t$, as represented in Equations (2-9) and (2-10):

$$i_t = \sigma\!\left(X_t U^{i} + S_{t-1} W^{i} + b^{i}\right) \tag{2-9}$$

$$\tilde{C}_t = \tanh\!\left(X_t U^{c} + S_{t-1} W^{c} + b^{c}\right) \tag{2-10}$$

Then, to update the old cell state $C_{t-1}$, it is combined with the new candidate state $\tilde{C}_t$ using pointwise multiplication $\otimes$ and pointwise addition $\oplus$, as given in Equation (2-11):

$$C_t = C_{t-1} \otimes f_t \oplus i_t \otimes \tilde{C}_t \tag{2-11}$$

In the final step, the output gate ($o_t$) decides which parts of the cell state are produced as outputs. The cell state goes through a tanh layer (to limit the values to between -1 and 1) and is multiplied by the output gate, as shown in Equations (2-12) and (2-13):

$$o_t = \sigma\!\left(X_t U^{o} + S_{t-1} W^{o} + b^{o}\right) \tag{2-12}$$

$$S_t = o_t \otimes \tanh(C_t) \tag{2-13}$$

The LSTM RNN, with its input gate, output gate, and cell state, was introduced to address the vanishing gradient problem of the RNN. The model can be trained for sequence generation by processing real data sequences one step at a time and predicting what comes next, which makes it suitable for the DP reliability prediction application. However, its main drawback is that each memory block needs an input gate and an output gate, which makes training more difficult and increases the training time of the network [79, 80, 81].
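The gate computations can be transcribed almost line by line into numpy. The following is a minimal sketch of one cell step following Equations (2-7) and (2-9) to (2-13); the dimensions and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 4, 6  # assumed input and hidden sizes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameter groups: input weights U, recurrent weights W, biases b.
U = {k: rng.normal(size=(n_in, n_hid)) for k in "fioc"}
W = {k: rng.normal(size=(n_hid, n_hid)) for k in "fioc"}
b = {k: np.zeros(n_hid) for k in "fioc"}

def lstm_step(x_t, s_prev, c_prev):
    f_t = sigmoid(x_t @ U["f"] + s_prev @ W["f"] + b["f"])      # (2-7) forget gate
    i_t = sigmoid(x_t @ U["i"] + s_prev @ W["i"] + b["i"])      # (2-9) input gate
    c_tilde = np.tanh(x_t @ U["c"] + s_prev @ W["c"] + b["c"])  # (2-10) candidate
    c_t = c_prev * f_t + i_t * c_tilde                          # (2-11) cell update
    o_t = sigmoid(x_t @ U["o"] + s_prev @ W["o"] + b["o"])      # (2-12) output gate
    s_t = o_t * np.tanh(c_t)                                    # (2-13) hidden state
    return s_t, c_t

s, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # five hypothetical sensor snapshots
    s, c = lstm_step(x, s, c)
print(s)
```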

2.10.4 Gated Recurrent Unit (GRU)

GRU is a classical variant of LSTM, which uses a new type of hidden unit similar to the
memory unit in LSTM. The hidden unit combines the forget gate and input gate into a single
update gate. The GRU model is a simpler version as the cellular state and hidden state are
combined [81]. Figure 2-11 shows the complete architecture of a GRU network.

Figure 2-11 GRU Network Structure

In order to determine the activation of the hidden unit at time step $t$, the reset gate $R_t$ first needs to be computed by Equation (2-14):

$$R_t = \sigma\!\left(W_R H_{t-1} + U_R X_t\right) \tag{2-14}$$

where $\sigma$ is the logistic sigmoid function and $W_R$ and $U_R$ are the weight matrices. The new remember gate $\tilde{H}_t$ is generated from $R_t$ with a tanh layer, as in Equation (2-15):

$$\tilde{H}_t = \tanh\!\left(W (R_t * H_{t-1}) + U X_t\right) \tag{2-15}$$

After this, the update gate $Z_t$, which blends the previous state and the remember gate, is created by Equation (2-16):

$$Z_t = \sigma\!\left(W_Z H_{t-1} + U_Z X_t\right) \tag{2-16}$$

The hidden state value is then updated as represented by Equation (2-17):

$$H_t = (1 - Z_t) H_{t-1} + Z_t * \tilde{H}_t \tag{2-17}$$

GRU proves its capability in reducing the training time and improving network performance
due to its simplified structure when compared to other RNN models [80, 81]. LSTM and GRU
both tend to remember features for a long time, allowing backpropagation to happen through
multiple bounded nonlinearities, which reduces the likelihood of the vanishing gradient. The
suitability of GRU over LSTM is compared through experimental study in this research.
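As a quick illustration of this simplification, the parameter counts of otherwise equivalent Keras recurrent layers can be compared; the layer size and feature count below are arbitrary assumptions:

```python
from tensorflow import keras

def count_params(layer_cls, units=64, features=8):
    """Build a one-layer model and report its trainable parameter count."""
    model = keras.Sequential([keras.layers.Input(shape=(None, features)),
                              layer_cls(units)])
    return model.count_params()

lstm_params = count_params(keras.layers.LSTM)
gru_params = count_params(keras.layers.GRU)
print(f"LSTM: {lstm_params} parameters, GRU: {gru_params} parameters")
# The GRU carries three gates' worth of weights instead of four,
# giving roughly a quarter fewer parameters and hence faster training.
```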

2.11 Prescriptive Analytics – NLP and BERT

Prescriptive analytics analyses the field data from sensors and uses computational models to
provide instant recommendations to the DPO to suit the multiple predicted outcomes.
Prescriptive analytics is related to and a natural progression from, descriptive and predictive
analytics. It searches for and determines the best solution among various possibilities, given
known parameters under a defined operational envelope. In this research, the application uses
NLP and BERT for prescriptive analytics to suggest possible solutions during failure scenarios.
NLP is a field of AI which gives machines the capability to read, understand, and make decisions based on human language [85]. BERT is an NLP model that is pre-trained by Google and used as a Question and Answer system for the DP-RI application. This makes it possible to analyse the enormous amounts of documentation available relating to DP systems and to provide possible suggestions to the DPO so that they can respond quickly. The research framework, concept, fine-tuning and training of the BERT model are discussed in Chapter 7.
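As a minimal sketch of such a question-answering set-up using the Hugging Face transformers library (the pre-trained model name and the manual excerpt are illustrative assumptions, not the fine-tuned DP-RI model described in Chapter 7):

```python
from transformers import pipeline

# A publicly available BERT model fine-tuned on SQuAD for extractive QA.
qa = pipeline("question-answering",
              model="bert-large-uncased-whole-word-masking-finetuned-squad")

# Hypothetical excerpt from a DP operations manual held in the knowledge base.
context = (
    "If thruster 3 fails during DP operation, the power management system "
    "redistributes load to thrusters 1 and 2. The DPO should verify the "
    "remaining thrust capability against the worst-case environmental load "
    "before continuing the operation."
)

answer = qa(question="What should the DPO do if thruster 3 fails?",
            context=context)
print(answer["answer"], f"(score: {answer['score']:.2f})")
```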

2.12 Human Factors in DP operation

DP vessels are becoming increasingly automated, with interconnected systems, driven by demanding operations at sea, safety, and economic benefits, and facilitated by recent technological advancement [16, 2]. This advancement and adaptation to technology have grown at a rapid pace; however, the training of DPOs and the standards governing DP have not matched the pace of improvement. In a complex, sophisticated, automated DP system, the role of the DPO shifts from active involvement to more of an interpretation of the system. In such a situation, during complex offshore operations, the DPO mainly needs to anticipate the system's needs, yet is trapped in this situation without any tools to support the function, and thus becomes error-prone [86]. The main reason for this is that advanced DP is designed with humans out-of-the-loop, making the DPO distribute their attention to system needs rather than the operational requirements.

The IMCA defined DP incidents as “any loss of position to the surprise of the DPO,” which
clearly shows the critical aspect of the human factor in DP related accidents [3]. The incidents
are often unplanned and non-routine events that could have been avoided with proper
intervention from the DPO. The accidents that occur due to DPO error are related to the
undesired consequence of human-automation interaction [26]. Understanding the need for
training, there were significant revisions to the International Convention on Standards of
Training, Certification, and Watchkeeping (STCW) for Seafarers, and its associated Code was
adopted at a Diplomatic Conference in Manila (The Manila Amendments), in June 2010 (IMO,
2011). The Manila amendments define the necessary training and focus on courses addressing
situational awareness and decision making. The training program enables the personnel to meet
the essential requirements with the ability to handle critical situations in maritime operations
independently or as a team [87]. Similarly, a large number of guidelines have been prepared
by MTS to train the DPO [33, 25, 44, 60].

As mentioned previously, data has shown that 23% of primary causes and 64% of the
secondary causes of DP related accidents are due to human error [3, 20, 21, 22]. Although the
human operator is heavily involved in errors, a full understanding of the errors' origins is found
within the DP complexity, including sub-systems, assemblies, equipment sensors, and
interactions between them. Recent studies proved that human reliability needs to be considered
in the design and operation of the DP system, which should be addressed in terms of training
and operational documentation guidance for a DPO through automated possible solutions [88,
89, 78, 90]. As socio-technical systems become less centralised and more globalised, there is
a necessity to incorporate the ecological concerns in designs that can genuinely support the
operators to deal with functional allocation and visualisation of operational risk.

Most DP related accidents reported in the last few decades could have been prevented if the
DPO had intervened at the right time and had taken corrective actions. To be precise, about
64% of the accidents where human error was the secondary cause could have been prevented
with proper planning in advance [3]. In order to understand the typical root causes of the loss
of position and narrow down on how the DPO could have intervened, incidents were analysed,
as presented below [7, 26].

A scenario of Drive-Off: This is a situation where active thruster forces drive the vessel away from its target position; it is defined as a move under power away from the set or wanted position. It occurs in full-power situations when the vessel is trying to regain its set position, mainly due to false information or wrong commands. The DPO would be the secondary cause, and observing the situation soon enough to react correctly with the necessary steps could prevent the drive-off.

A scenario of Drift-Off: This is a situation where the vessel is incapable of maintaining its target position due to insufficient thrust forces in relation to the environmental forces. It can be due to a failure occurring while operating outside of the “worst-case failure” environmental limits, or to exceeding the theoretical worst case owing to hardware or human error. The DPO would be the primary cause in most cases, and in order to prevent such a situation, thrust capability and wind, wave, and current conditions should be analysed before decisions are made.

Large Excursion: This is a situation where the vessel is still under DP control but experiences a temporary excursion of position and/or heading larger than the usual excursion, caused by environmental loads (a massive wind gust or wave), a thruster fault, or degraded position information. It usually refers to excursions in the range of 10 m or 10 degrees from the desired position and/or heading. These situations are quite common, and if the DPO reacts promptly, a large excursion can easily be avoided.

From the above analysis, it is evident that timely intervention by the DPO, with the correct solution, could prevent all of these loss-of-position situations. As the industry focuses on moving towards the Marine Autonomous Surface Ship (MASS), it is necessary to address the relevant human errors through the following steps to ensure that functional requirements concerning risk and evaluation criteria are adequately addressed [88]:
1. Minimal human intervention
2. Intervention from shore
3. Completely automated

2.13 Summary

In this chapter, the complexity of the DP system due to the external environmental forces,
hydrodynamic effects acting on the vessels, and sophisticated integration between different
sub-systems have been discussed. The categories of DP accidents were presented, and the related impacts of those incidents were analysed in detail to identify the primary and secondary causes. DPO roles and responsibilities were discussed, along with the influence of technology on complex operations and how the DPO role has gradually changed.
requirements and assurance frameworks, which are guiding the safe and reliable design of the
systems, were presented in detail to identify the gaps and limitations. Classification Societies
have been continuously updating the rules and standards related to methods to ensure that the
technology advancement does not cause any uncertainty in the design, construction, and
operation of the vessels. The analysis revealed that the rules mostly cover the essential safety
requirement as a mandatory factor, and the operational aspect was left to stakeholders to
manage the risk. Various reliability assessment methods traditionally used for the DP system
evaluation were discussed in detail. The advantages and limitations of different techniques
were analysed to focus on the critical areas in these research studies.

Although the maritime, oil, and gas industry has started using big data and analytics for a few
applications, it has not been used adequately to address the problems associated with the DP
system. The biggest drawback is the lack of a consolidated or collective database, even though DP, as a system with a large number of sensors, has effectively been an “internet of things” for many decades. It was essential to collect all of the relevant data and form a data lake for the research.
The information part of Information Technology (IT) has not been explored much, which needs
to be addressed whilst, at the same time, ensuring the quality of data. With the massive amount
of data available in the DP system, it can be considered as a Big Data source and analysed
systematically. The ANN models have been used for DP control system design; however, they
have not been used for reliability assessment or become an industry-wide accepted practice. In
the next chapters, all of the limitations of the existing evaluations are addressed through a novel
research framework. The research framework was built on years of available trusted data,
knowledge of industry experts, and intelligence of data analytics.

3. DP System Data Sources – Data Collection, Storage and Processing
3.1 Introduction

Big data has the potential to transform the marine, offshore, oil, and gas industries by creating
opportunities to drive innovation through insights and application. The advancements in
automation technology and sensor development along with the analytics revolution, have
resulted in an exponential rise in data availability for analytics and efficient prediction in
decision making [91]. The chapter begins by presenting the association of DP data to the
characteristics of big data. The data related to DP systems are a reinterpretable representation
of information in a formalised manner suitable for communication, interpretation, or
processing [92]. The data collected in a DP vessel at different phases of the lifecycle are of
structured, semi-structured, or unstructured formats. In the next section, the type of data
collected from various sources such as risk assessment reports, DP vessel manuals, historical
data from event loggers/data loggers, and real-time data collected from sub-systems and stored
through the vessel to cloud solutions, are presented. As big data involves substantial data sets
and complicated problems, it is essential to have access to innovative and powerful processing
and computing technologies. In the subsequent section, various databases and how big data
technology is used for storage of the data sets are presented. These robust technologies ensure
that data accessibility is very fast through processing units and file distribution systems [92,
91]. The final section outlines the critical characteristics in identifying the complexity of the
problem for which the data gathering is conducted. It details the necessary tools used for real-
time processing of data and essential formatting, for performing analytics to reveal the
underlying correlation between the systems.

The ability to manage all of the data and information from different sub-systems of the DP
system, safely and efficiently, enables a new level of analysis and monitoring of situations,
critical operations, and installation conditions for the following applications [93].
• Safety performance and integrated operation
• Managing risk and effective decision making
• Management and monitoring of accident and environmental risks
• Energy efficiency (cost and environment)
• Automation of ship operations (long-term)

3.2 Big Data Concept for DP System

Big data refers to data whose sizes go well beyond the ability to capture, manage, and process
with traditional computing technology or commonly used software tools [94]. In advanced DP
vessels, the most sophisticated operations are dependent on information systems for control
and analysis. In the last 30 years, two factors have changed radically for the DP system, the
availability and usability of data. Now the current situation is that there is a massive explosion
of available data, and the ability to store, combine and analyse these data is now available. DP
Data is increasingly being considered a valuable asset of equal worth to physical assets, and
considerable infrastructure is required for collecting, storing, and acting upon the data [95].

DP big data collected from a vessel includes unstructured (text, images, audio, video, etc.),
semi-structured (Alarm logger, comma-separated variables, etc.) and multi-structured data
(different data formats such as excel, tabular results, historical database). Big Data is
characterised by attributes of volume, variety, velocity, veracity, and value [94, 96]. Table 3-1
defines these characteristics of big data and their relation to DP system data [96].

Table 3-1 Big Data Characteristics for DP system data

3.3 Stages in Big Data Analytics for the DP system

The full potential of big data analytics may be unlocked in different stages. For the DP
application, stages were used to ensure that a standard systematic approach was followed to
mitigate pitfalls and biases that could arise during the experiments. There are five different
stages in big data analytics, which are represented in Figure 3-1. The corresponding steps for
DP system analysis for each stage are presented in detail in Figure 3-1 to show the level of
increased information researched to fill the gaps of the existing assessment methodology [93,
96].

Figure 3-1 Stages of Big data Analytics for the DP system

As explained in the previous section, there is a considerable volume of data. Just having a
massive amount of DP data is not enough; it is vital to store and process the data. Due to
digitalisation of the data, there are three technological trends: Big Data analytics tools and
technology, the concept of digital twins, and the emerging Internet of Things (IoT) platforms,
which were used to create value by producing actionable knowledge from data [95].

The first stage of big data analytics was defining the problem, to identify the intended use and context for the DP system. The critical issues related to the DP system were described in Chapter 1 and Chapter 2, focusing on the specific problem that is addressed by this research.

The next stage was designing the data requirements, which involves identifying the type and
kind of data required for solving the problem. In this stage, data from operators, vessel owners,
flag state, DNV GL, shipyard, vendors, etc. were acquired in various formats and through
different media. The data was stored in big data platform tools in the local Hadoop cluster and
the cloud, depending on the agreed solutions with operators and vendors. The information is
considered to be a data lake where the different databases such as the FMEA database, HIL
database, DPCap database, OREDA database, IMCA database, and real-time sensor data are
stored in an organised manner [3, 20, 21, 47, 57, 97]. The next stage is the pre-processing of
data involving cleaning, profiling, integrating, and aligning the data for analytics. It is the most
critical stage as all the inherent biases are addressed to ensure that the output from the analytics
is close to reality, and performance accuracy is not compromised. Stage 3 of the big data analytics is described in detail later in this chapter. The next stage is the analytics stage, which
explores the data through different algorithms to understand the patterns, reveal correlations,
perform forecasting, and suggest solutions to respond to failure scenarios. For this research,
four analytics concepts are used at different phases of the study to address the problems related
to DP systems. The details of this stage are presented in Chapter 4, Chapter 6, and Chapter 7
respectively. The final stage is the visualisation of the models, along with verification and
validation, to evaluate the model performance which is described in Chapter 8.

3.4 DP System Data Sources

The types of data that are used for reliability analysis have evolved over time. The traditional
methods, for decades, have used lifetime and degradation data to provide reliability
information. With the advancement of technology in the era of big data, there are new data
types that are available and used for reliability analysis [93]. Massive data sets available in the
past were not used for analysis due to limitations in computing capability. Now, with
technology breakthroughs and the low cost of collection of complete data sets, there is a
motivation for research to explore the data for reliability analysis [95, 98].

A massive amount of data is available from DP systems from sources such as documents,
reports, Systems Applications and Products (SAP), automation servers, historical databases,
images, videos, JavaScript Object Notation (JSON), Extensible Mark-up Language (XML),
mail servers, etc. Part of the overall data generated from these sources is structured, whilst the rest is either in a semi-structured or unstructured form. Storage and analysis
of structured data have been happening for a long time; however, the volume of unstructured
data has multiplied in recent times [98, 99]. Offline and real-time data are starting to be used
in reliability prediction in other industries due to improved accuracy, and it is having a direct
impact on operational efficiency. Following the big data concept used for the DP system, the
data are grouped into three categories as follows [95, 98, 99, 100]:
• Structured Data
• Semi-Structured data
• Unstructured Data

Figure 3-2 DP System – Data Types

Structured, semi-structured and unstructured data must be combined to form the big data before the application of analytic tools. The format of each of the available data sets is different, so appropriate transcription steps were taken before the combining process. Usually, little effort is needed for the cleaning and profiling of structured data compared to semi-structured and unstructured data, due to the variety of formats for the latter, which are not suitable for importing into a pre-defined relational database management system [99, 101]. Thus, more sophisticated big data tools are required for processing semi-structured and unstructured data. Figure 3-2 shows the complete architecture of the data types and processing steps [99].

3.4.1 Structured Data
Structured data of DP systems is data that is characterised by a pre-defined data model and
standard relational database management system [102]. As the data are in a standard format, it
is therefore straightforward to analyse. Most of the structured data can be represented in tabular
format with the relationship between the rows and columns. Therefore, a systematic method
for cleaning and profiling the data may be applied as it can be sorted through columns and
rows. The structured data in tabular format were converted into Excel or SQL format for
the research studies.

The DP systems’ structured data were collected from the following sources, which are
categorised into two types:
• Offline data
o FMEA database
o HIL database
o DP capability plot database
• Real-Time data
o DP Control System (Sensor Data)
o Power Management (Sensor data)
o Thruster Control System (Sensor Data)
o Reference System (Sensor Data)
o Environmental Monitoring System (Sensor Data)
o Condition Monitoring System (Sensor and Forecasting)
o Alarm Logger
o Event / Data Logger
o DP Footprint & DP simulation
o DP capability (online analysis)

The data, generated by humans or machines, is in a pre-defined format to fit into the Relational Database Management System (RDBMS), which can be accessed through queries and algorithms. The data are then pre-processed through standard data cleaning and profiling techniques. This step ensured that biases and unwanted data were removed from the raw information. This type of data is easy to enter, save, find and analyse, and is searchable through the structured query language. However, due to its inflexibility, its usage is sometimes restricted for specific applications. Once the data were cleaned, they were stored in the Hadoop cluster, which was specially prepared for the research project in dedicated locations in different offices within the DNV GL organisation network. The structured data were combined with semi-structured and unstructured data once the other streams of data became ready for analytics.

3.4.2 Semi-Structured Data


DP system semi-structured data is a form of structured information that does not fit into the
pre-defined data model or any standard RDBMS to comply with the defined tabular format
[100, 103]. The data is in different tabular formats, which contain tags and other identifiers to
separate the semantic elements. Hierarchies of fields within the data also characterise it, and
records are systematically arranged. The semi-structured data are defined by self-describing
structure differentiating it from unstructured data. The typical format of semi-structured data
is JSON, XML, email, and RDF, etc. with metadata.

The DP systems’ semi-structured data are collected from the following sources, which are
categorised into two types:
• Offline data
o IMCA database
o OREDA database
o DP Operational manual database
o Proving trial / Annual Trial
o DP system FAT Procedure
o DP system CAT Procedure
• Real-Time data
o Operational Technology (OT) data (not in pre-defined format)
o IT data (Interface with OT)
o Time-series trends
o Graph and Forecast data
o GPS and DGPS System (Sensor Data)
o Asset Management System (AMS)

The semi-structured data are easier to analyse than unstructured data due to the availability of big data tools able to read and process either JSON or XML; in this respect they resemble structured data. The metadata feature of semi-structured data provides additional information, which is frequently used by big data solutions for initial analysis.

3.4.3 Unstructured Data


DP system unstructured data is information that either does not have a predefined data model
or is not organised in a pre-defined manner. The data usually consists of textual or non-textual
records that are human or machine-generated and stored in the non-relational database
management system (Text, Pdf, Word, email communication, Images, and Reports, etc.) [99,
102, 103]. Due to the inbuilt attributes of unstructured data, it leads to irregularities and
ambiguities that make it difficult to understand using traditional computing tools.

The DP systems’ unstructured data are collected from the following sources, which are
categorised into two types:
• Offline data
o Maintenance Records
o Inspection records
o Photograph (Images) and Screenshots
o Videos in Voyage Data Recorder (VDR)
o Accidents failure investigation reports
o Manufacturer failure database
• Real-Time data
o DP Screen recording
o Closed-circuit television (CCTV) recordings
o Interviews
In recent years, due to technological developments, new tools are available to store, clean, and
analyse specialised types of unstructured data. Therefore, in this research, the big data tools
available within DNV GL were used in the context of big data to ensure that its potential is
fully utilised to address the problem in hand. Table 3-2 shows the various data types used for
the research and its grouping along with features, storage location, size etc.

Table 3-2 DP System – Big Data Sources description
3.4.4 Conversion of Semi / Unstructured Data into Structured Data using NLP
NN models cannot use the semi-structured and unstructured data of the DP system directly for classification, prediction, and prescriptive analytics applications. A critical step in the research therefore involves the conversion of semi-structured and unstructured data into pre-defined RDBMS structured data [104, 105]. Pre-processing of the semi-structured and unstructured data is required before performing the conversion. The semi-structured data were converted to structured data using data-mining tools and techniques for information extraction and NLP. However, it was not straightforward to convert the unstructured data into semi-structured or fully structured information. During the pre-processing stage, transcription and data crawling are performed on the semi-structured and unstructured data. The critical information is filtered and collected before data cleaning is performed. After this, the combined data are placed in the data lake ready for conversion.

In this research, a common standardised approach was used to convert both the semi-structured
and unstructured data of the DP system to pre-defined structured data and store them in the
HADOOP data cluster before classification data analytics to identify the critical sub-systems.
An ontology-based NLP concept was used by coupling the reasoning concept of ontology with
the lexicon component in NLP [106, 107]. The mix of semi-structured and unstructured data
is passed into the system, where the two features of ontology are used for NLP. Firstly,
ontology is used as a building block when defining the terms for content words through the
lexicon. The second key feature of ontology, acting as a knowledge-base, is where it serves as
the primary system/brain for the complex language processing used along with NLP to
optimise the efficiency in conversion [108, 109].

The ontology-based NLP uses morphological analysis, syntactic analysis, semantic analysis, pragmatic analysis, and discourse analysis, either in a step-by-step approach or in combination. Term Frequency-Inverse Document Frequency (TF-IDF) was used to weight the descriptive terms and group the documents accordingly. In this way, it was ensured that the database of semi-structured and unstructured data was analysed based on how the text is built up and put together, what it means, how it is used in different situations, its goals and intentions, and the inter-relations between texts. These steps are organised into critical activities to perform the conversion of semi-structured and unstructured data into structured data through knowledge discovery in databases (KDD) [105, 107, 108].

The critical activities applied to the database for conversion using NLP are listed below (a minimal sketch follows the list) [101]:
• Tokenization: splitting strings into tokens
• Stemming: normalising words into their base or root form
• Lemmatization: morphological analysis of the words
• POS tags: extraction of semantic information
• Named entity recognition: detecting the names of entities
• Chunking: picking out information and grouping it into sentences
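A minimal sketch of these activities using the NLTK library is shown below; the sample sentence is a made-up log entry, and the exact corpus package names may vary between NLTK versions:

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads of the required models/corpora (names may differ by version).
for pkg in ["punkt", "wordnet", "averaged_perceptron_tagger",
            "maxent_ne_chunker", "words"]:
    nltk.download(pkg, quiet=True)

text = "Thruster 2 on Vessel Aurora tripped during DP operation in the North Sea."

tokens = nltk.word_tokenize(text)                            # Tokenization
stems = [PorterStemmer().stem(t) for t in tokens]            # Stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # Lemmatization
pos_tags = nltk.pos_tag(tokens)                              # POS tagging
entities = nltk.ne_chunk(pos_tags)                           # NER / chunking

print(tokens, stems, lemmas, pos_tags, sep="\n")
print(entities)
```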

The hybrid data lake is created with semi-structured and unstructured data stored in the Hadoop
cluster and cloud infrastructure. The complete architecture of the conversion of semi-structured
and unstructured data of the DP system to pre-defined structured data, in the standardised
approach through ontology-based NLP, is shown in Figure 3-3. Various tools and libraries
such as Google Cloud Platform (GCP), Ghost-script, Google NLP AutoML, Jupyter Notebook,
and NLPTK were used for this application. The converted, pre-defined structured data were
stored in a data warehouse for machine learning models to perform analytics.

Figure 3-3 Ontology-based NLP – Conversion of semi/unstructured data

3.5 DP System Offline Data Lake – Information Management System (IMS)

The significant phases of the DP life-cycle include design, construction, commissioning, sea-
trials, and operation [1, 2]. In all of these phases, the design evolves, and changes are
implemented for safe, reliable, and efficient operation. As discussed in Chapter 2, historically
several traditional reliability assessment methods have been implemented in different phases
for improving the design of the system. However, analysis of the IMCA accident database report shows that they have not been able to prevent all accidents from occurring [3]. Besides, the traditional reliability assessment methods have not demonstrated an ability to provide clear information on faults or appropriate solutions during operation.

The traditional methods have proved to enhance the design and increase the safety and reliability of DP vessels over the years, to a certain extent. A massive amount of data has been produced by these methods by various stakeholders at different stages of the DP life cycle.
Figure 3-4 shows the relation of data collected from the sources and stages of the DP life cycle.
In this research, the databases are organised and stored to form a data lake. Further, it is used
to understand the correlation between the sub-systems and inter-dependencies through the risk
assessment methodologies.

Figure 3-4 Database collected during DP lifecycle

The Information Management System is a combination of offline and real-time data. In this
section, the offline data lake of IMS is discussed in detail. The offline data lake is formed by
grouping the below set of databases which consists of structured, semi-structured, and
unstructured data [2, 16]:
• FMEA database
• HIL database
• IMCA database
• WOAD database
• DPCAP database
• OREDA database
• Manufacturer Failure database (MAN FD)

The databases of the offline data lake were collected from different sources either from the
DNV GL domain, vendor databases or from publicly available sources during the initial two
years of the doctoral study. The data are confidential and necessary consent was agreed with
the respective data owners to use the data for research purposes. Figure 3-5 shows the database
architecture used for the research studies along with its interface with the Hadoop platform and
the layer of security.

Figure 3-5 IMS – DP system Database Architecture

3.5.1 FMEA Database
The FMEA database was created for this research by collecting the reports of more than 500
vessels for which the risk assessment was performed by DNV GL (DNV and Noble Denton).
The data were generated over 30 years and were cleaned before being used in the research
study, with the consent of the data owner. FMEA is an industry-wide accepted tool for studying
the behaviour of the DP system in cases of single-point failure [15, 45]. It involves a Design
Review of the DP system by evaluating the Functional Design Specification, Drawings, IO list,
etc., preparing the FMEA worksheet, and finally, the FMEA report categorising the impact of
the failure into three categories (Highly Critical, Medium and Low). The FMEA database
contributes 120 GB to the total data size and keeps accumulating along with the current
increase in the number of DP vessels.

The following key features are extracted from the database and used in the research study [17,
46, 53]:
• Identify all possible failure modes and their effects.
• Generate a list of potential failures with an assessment of the magnitude of their effects.
• Develop corrective action priorities in decision making for prescriptive analytics.
• Evaluate design requirements related to redundancy, failure detection systems, fail-safe
characteristics, and automatic and manual override.
• Provide historical documentation for future reference to aid in the analysis of field
failures and consideration of design changes.
• Provide an input basis for quantitative reliability and availability analysis.

3.5.2 HIL and Digital Twin Database


The HIL database was created from the test results performed on more than 100 vessels by
DNV GL (Marine Cybernetics and DNV GL). The data was generated over 12 years from the
ships where testing of the DP system, PMS system, and Thruster system was carried out as
integrated and standalone systems. Today most of the complex and sophisticated DP vessels
undergo HIL testing to make sure that glitches in the software are identified at the early stages
of the projects [47, 56]. Classification Societies like DNV GL, ABS and LR realised the
importance of HIL testing and came up with rules to cover Control System Software testing by HIL [55]. The safety and reliability of the DP system have increased to some extent after the different sub-systems of the DP system underwent HIL testing.

Recently the Digital twin concept has been used, as a stand-alone testing solution or in
combination with HIL testing, to evaluate the performance of the software for the DP system.
The digital twin and HIL concepts were applied by major DP system vendors such as
Kongsberg, Rolls Royce, GE, and MT etc. This method has enabled stakeholders to gain
insights into how different DP systems will respond to any particular failure during real
scenarios. The HIL and Digital twin have contributed a total of 60 GB of data to the data lake.
Figure 3-6 shows the critical findings in HIL testing, and the following key features are
extracted for the research study [47]:
• DP system failure contributors
• Different functions of the DP system and consequence of failure on DP sub-system
• The possible solutions for the failure and addressing consequences
• The sub-system in which major software failures could occur
• Identify the DP vendor occupying the significant market share

Figure 3-6 HIL database – Key Findings


3.5.3 DP-CAP Database
DP capability plots are mandatory for all DP vessels, and these data were collected from about
300 ships for the research work. This contributes to the off-line as well as real-time data. Those
DP capability plots which are generated during the design stages are stored in the offline
database in the on-premises Hadoop ecosystem. However, those that are generated on-board
during operation along with DP simulations are collected as part of the real-time data and
stored in the cloud infrastructure. The static and time series DP capability plots are collected
and stored, contributing to a total size of 20 GB in the RDBMS format.

3.5.4 IMCA Database


The IMCA has documented DP incidents internally, through technical advisors, based on data received from its members and non-members [3]. The data are accepted in either structured or semi-structured formats and submitted to the database as long as station-keeping incident analysis can be performed on the information received. The systematically researched content of these reports was collected and stored in the database. The data collected came from reports dated from 1990 to 2020, covering 30 years of DP history, and a total of 1900 accidents were analysed. The total amount of semi-structured data collected is about 4 GB, stored in the on-premises Hadoop ecosystem.

The review and analysis of incident reports are performed by IMCA to encourage and improve
reporting throughout the industry. The study provided valuable lessons learned and meaningful
analysis insights that were used to enhance this research. Definitive conclusions about the
safety of DP operations were not drawn from the statistical analysis of the reports; however,
trends or patterns may be determined over time. In this review, initiating events, causes, and
comments reported by users have been incorporated into a table for easy comparison [3]. This
method enables individual DP vessels to readily complete an onboard comparison of actual
events occurring in the industry with the situation on-board their ship.

3.5.5 WOAD Database


WOAD is the world’s most comprehensive data source of its kind available for offshore risk
assessment and emergency planning. The accident data has been collected since 1970 and
includes more than 50 years of world-wide accident history. The database is continuously
being updated with the latest information available from authorities, official publications and
reports, newspapers, databases, rig owners, and operators globally [20]. The database collected
included data on more than 6000 accidents with a wide range of parameters such as name, type,
and operation mode of the unit involved in the accident, date, geographical location, chain of
events, causes, consequences, and evacuation details. The total amount of semi-structured data
collected is about 76 GB, which are stored in the on-premises HADOOP ecosystem.

The WOAD provides various key and critical information that is used for this research study.
Knowledge of past DP accidents serves as an essential input to risk assessment concerning
hazard identification, consequence evaluations, decision support, and identification of high-
risk areas. Learning lessons from accidents is vital to avoid accidents in the future.

3.5.6 OREDA Database


OREDA is a project organisation sponsored by nine oil companies with worldwide operations. OREDA's primary purpose is to collect and exchange reliability data among the participating companies and to act as the forum for co-ordination and management of reliability data collection within the oil and gas industry. OREDA has established a comprehensive databank of reliability and maintenance data for exploration and production equipment from a wide variety of geographic areas, installations, equipment types, and operating conditions [97]. The data are stored in a database, and specialised software has been developed to collect, retrieve, and analyse the information. The total amount of semi-structured data collected is about 60 GB, stored in the on-premises Hadoop ecosystem.

3.5.7 DP Vendor Equipment MTBF Database


Manufacturers of equipment in the marine, oil, and gas industry maintain failure databases for
all of the equipment supplied by them. OREDA provides more generic failure data from
different manufacturers and collectively addresses the failure data for the equipment. However,
for more accuracy, it is necessary to use the actual failure rate of the equipment from the
manufacturer. Some of the typical suppliers of DP systems from whom the data was collected
are Kongsberg, Rolls Royce, GE Converteam, L3 communication, and Marine Technologies
(MT). The data was collected and stored as a database from more than 2000 projects within
DNV GL. The total amount of semi-structured data collected is about 120 GB, which are stored
in the on-premises Hadoop ecosystem.

3.6 DP System Real-Time Data Lake – Information Management System (IMS)

Real-time data are critical data which represent the actual state of the equipment during operation. With the development of sensor technology, the IoT, System on Chip (SoC) and cloud computing, it is easy to extract the data during operation and transfer them to shore for processing [110, 111]. Figure 3-7 shows the architecture used for harvesting the real-time data from DP vessels operating offshore. The set-up was established on ten ships to collect and transfer data using vessel-to-cloud infrastructure for further processing and modelling. The amount of real-time data collected and simulated by digital twin is about 40 TB, and the data are transferred in real time, near real time, or in batches.

The transfer of data from vessel to shore is done either independently by each sub-system
or through an integrated IMS which communicates with the DP system. The systems which
perform the control functions on the vessel are classified as OT, and the systems which handle
the transfer of data to shore are classified as IT. The information transferred to shore is
secured through firewall protection and used for real-time monitoring, processing, and
forecasting functionality [112, 113].
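To make the vessel-to-shore path concrete, the following minimal Python sketch assembles one sensor reading into a JSON record and posts it to a shore-side ingestion endpoint. The endpoint URL, field names, and the use of HTTPS instead of a message broker are illustrative assumptions, not the actual IMS interface:

import time
import requests  # HTTPS transport assumed; the real set-up may use a message broker

# Hypothetical shore-side ingestion endpoint (illustrative only)
INGEST_URL = "https://shore-gateway.example.com/ingest/dp-telemetry"

def build_record(vessel_id, subsystem, signal, value, unit):
    # Assemble one telemetry record; all field names are assumptions
    return {
        "vessel_id": vessel_id,
        "subsystem": subsystem,       # e.g. "A4" for the Power System
        "signal": signal,
        "value": value,
        "unit": unit,
        "timestamp_utc": time.time(),
    }

record = build_record("DP3-VESSEL-01", "A4", "GEN1_ACTIVE_POWER", 1820.5, "kW")
response = requests.post(INGEST_URL, json=record, timeout=5)
response.raise_for_status()  # the link is assumed to be firewall- and TLS-protected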

Figure 3-7 Real-Time data transfer from the vessel to shore for Analytics

3.7 Tools and Experimental Set-Up for the DP System Big Data Analytics

The experimental set-up for the research used a local big data Hadoop ecosystem and cloud-
based infrastructure for DP system-related data storage, processing, profiling, organising,
analysing, and analytics. The Hadoop ecosystem handled the readily available offline data,
which are semi-structured, unstructured, and structured, so that a data lake could be created
using an efficient and organised approach [114]. Similarly, for the structured real-time data
extracted from operational vessel technology, the vessel-to-cloud approach was used for
storing the data in the cloud. This hybrid approach was used to process the data before analysis.

The key advantages of using the Hadoop ecosystem for this research are listed below [92, 114]:
• Infrastructure is fault-tolerant and highly reliable
• In-built capability to integrate seamlessly with cloud-based services
• Ability to handle different data formats and address big data problems efficiently

The Hadoop ecosystem was created with a group of tools and applications to support the DP
system-related big data management and processing. The Hadoop architecture and ecosystem
tools are shown in Figure 3-8 and Table 3-3. Based on the specific requirements, the tools used
and the functional details of the tools in the Hadoop ecosystem are described below, with a
brief usage sketch following the list [114]:
HDFS: Hadoop Distributed File System to store data.
YARN: Resource manager to leverage and process big data.
PIG: Data processing for grouping, filtering, joining, and storing to HDFS.
HIVE: Processing of RDBMS data through HQL.
MAHOUT: Implementation of machine learning for clustering and classification.
SPARK: Real-time data analytics.
HBASE: Supports grouping and automatic distribution of data across a cluster.
DRILL: Combines databases and executes through a single query.
OOZIE: Sequential and event-based job scheduler.
FLUME: Ingests semi-structured and unstructured data into HDFS from multiple sources.
SQOOP: Ingests structured data, either in real time or in batches, via filtering and enrichment.
ZOOKEEPER: Ensures coordination between the various tools in the Hadoop ecosystem.
APACHE AMBARI: A cluster management tool for managing and monitoring.
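As a brief usage sketch of how these tools combine in practice, the PySpark fragment below reads semi-structured accident records from HDFS and aggregates them per sub-system; the HDFS path and the field names are hypothetical stand-ins for the actual data lake layout:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dp-accident-analytics").getOrCreate()

# Hypothetical path into the on-premises data lake
accidents = spark.read.json("hdfs:///datalake/woad/accidents/*.json")

# Count incidents per involved sub-system, most frequent first
per_subsystem = (accidents
                 .filter(F.col("operation_mode") == "DP")  # assumed field
                 .groupBy("subsystem")                     # assumed field
                 .count()
                 .orderBy(F.desc("count")))
per_subsystem.show()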


Figure 3-8 Hadoop Ecosystem architecture with tools and applications


Table 3-3 Experimental set-ups on local premises to handle DP system Big Data

71
3.8 Need for the Implementation of Big Data Analytics

In recent times, there has been a strong trend in the maritime, oil, and gas industries to move
towards data-driven decision making and the use of data analytics for managing performance
and risk. For this research study, which focuses on effective decision making for the DP system,
Knowledge Management (KM) and ANN models are required. Big data and data analytics
techniques play a significant role in KM by active knowledge generation through correlation
and results in significant, timely decision making. Technological advancement has changed the
modes in which data are collected, stored through high-speed data ingestion, and computed
in complicated data structures. The following reasons are identified as the need for the
implementation of Big Data Analytics to address the research problems:
• Increased connectivity, new capabilities for capturing, storing, processing, presenting,
and visualising data, and, in particular, the transmission of large volumes of varied DP
data at high velocity, have forced the use of Big Data Capability tools.
• The existing lack of consistent knowledge has led to poor performance in the past; therefore,
proper strategies need to be devised to create new knowledge from the existing historical
data, which are either semi-structured or unstructured. The information from these data-
sets can be used for knowledge generation and for operational efficiency.
• Selection and accessing of the characteristics that have great significance to the
operational reliability assessment were not possible previously. With IoT, Digital Twin,
and Big Data tools, the critical features are extracted with useful state features at the
same time, reducing the computational burden, even with a massive number of datasets.
• Decisions are made in real-time according to the data generated, and timely and accurate
decision-making is what makes all the difference for complex marine operation
offshore, and it supports in the prevention of any DP related accidents.

3.9 Summary

Chapter 3 gave a detailed overview illustrating why the data sources of the DP System are
considered a big data concept. The correlation between the attributes of big data is evaluated
against the DP system data sources to justify the adoption of a big data methodology for this
research. Subsequently, various stages in big data analytics are presented and how the chapters
in the thesis are connected with each stage. The different types of data: structured, semi-

structured, and unstructured, in the data sources are defined along with the properties, format,
and usability for the research. Also, each data source was defined and grouped into offline data
and real-time data. The data types which are not automatically usable for the analysis were
converted into a usable format using an ontology-based NLP methodology.

In addition, all the databases collected were described with details of size, type of data, number
of vessels, projects, and source of the database. Next, the Hadoop ecosystem with complete
tools and applications used for the research was described. The experimental set-up with the
hardware configuration and software application were defined. Finally, the need for the
implementation of big data analytics for this research was presented.

In the next chapters, the data collected, stored, and processed will be used for analytics through
knowledge of experts and intelligence of computing capabilities. The data analytics will focus
on the following aspects to address the research problem step by step:
• Descriptive Analytics: What has happened in the DP System?
• Diagnostic Analytics: Why did it happen in the DP System?
• Predictive Analytics: What will happen if there is any failure in the DP sub-system?
• Prescriptive Analytics: What should be done in the case of failure (possible suggestions)?

Chapter 4 describes the descriptive and diagnostics analytics used for classification of the DP
System and identifies the critical DP sub-systems. The correlation and inter-dependencies
between sub-systems are identified. Then each of the sub-systems is presented with system
architecture and signal description at the component level.

4. Classification of DP Sub-Systems: Descriptive and Diagnostic
Analytics
4.1 Introduction

The next stage of the work is stage 4 of big data analytics, which involves data processing and
analytics. This chapter presents the descriptive and diagnostic analytics to ensure that data
exploration and preparation are sufficiently thorough in addressing any problems associated
with the data. It has been argued that more and better data beats a good algorithm every time [115].
Thus, this stage is critical, involving data exploration and decisions as to what data is going to
be used for predictive and prescriptive analytics. In addition, effective data preparation steps
increase the accuracy of the prediction models by utilising the power of the available data most
efficiently. The common issues related to data are addressed by ensuring that the data are
collected from a range of available sources, defining the features, identifying and treating
missing values, deleting the duplicate observations, and making assumptions. As part of the
research study, significant time was spent on meticulously preparing the data. This stage helped
in the later phases during predictive and prescriptive analytics to get better and more credible
results. Descriptive and diagnostic analytics form the first part of the advanced analytics,
identifying the input variables (sub-systems) and their relations with the target variable (DP
system reliability). The next sections of this chapter describe the sub-systems covering
the system description, architecture within the Graphical User Interface (GUI) model, and
input variables (signals) at the sub-system level.

Stage 4 of big data analytics involving data processing and analytics (descriptive and
diagnostic) comprises the following activities:
• Data exploration and preparation
• Performing quality checks
• Creating a data dictionary
• Understanding and identifying the variables at system and sub-system level
• Creating other derived variables
• Identifying the correlation and interdependencies between variables
• Classification of DP sub-system
• Description of DP sub-system and architecture

4.2 Descriptive and Diagnostic Analytics

Descriptive analytics was used as the first step of the advanced analytics, and helped to analyse
and reveal, “What has happened in the DP system?” [96, 116]. It is the critical step in
understanding the datasets and the basic content in the data lake. It provides various insights
and different metrics captured for the datasets. Typical techniques used for descriptive
analytics are statistics and distributions to identify different sub-systems that have effects on
DP system functionality [117, 118]. Similarly, diagnostic analytics is the second step which
helps to answer “Why did this happen in the DP System?” [116]. It plays a key role in
determining the factors that contribute to the outcome along with dependencies between output
and input variables. Therefore, it also identifies the sub-system roles and their inter-
dependencies in the DP system performance. The two analytics steps are part of data mining
and support in-built data processing steps for the next stages to make predictions and provide
prescriptive solutions. Thus, data exploration and data processing are covered in the next
section along with the different techniques used for descriptive and diagnostic analytics.

4.2.1 Data Exploration and Preparation


The output of stage 3 of data analytics (as shown in Figure 3-1) is the pre-processed data, which
are ready for analytics. Data exploration is vital before the analytics step as it helps to
understand the dataset [96]. For the DP system, there is a massive number of datasets that have
been collected and made available and different tools are used to understand it. A top-down
approach was used for this research study where the DP system was split into various sub-
systems, and each sub-system divided into various associated components. The components
may be designed with one or more sensors that contribute to the data points. Therefore,
during the data exploration stage, the dataset is evaluated carefully to identify the variables for
the research [116].

Variables, often referred to as features, denote a specific piece of information about an
observation or record in a dataset and are classified as input, output, and unused
variables [119, 120]. In this research context, the sub-systems
are considered as variables for the DP system reliability, and the signals at each sub-system
level are considered as the variables for sub-system reliability. The variables can be dependent
or independent, and this is determined through correlation analysis.

Figure 4-1 Data Exploration to identify the DP sub-systems
The variable selection is a critical step as part of descriptive analytics. Statistics and
distribution analysis, which is considered to be a tool of descriptive analytics, are used to
determine the system-level variables/feature selection [96, 117]. Statistics put the available

data set in context to identify the variables. Distributions show how the data is distributed over
its entire range. If the data is very irregularly distributed, the resulting model will probably be
of poor quality [116, 121]. For the DP system, the statistics and distributions are used to
identify the sub-systems at a high level as the first step of data exploration. Figure 4-1 shows
the typical sub-systems which contribute to the overall functionality of the DP system. As
indicated, seven input variables have a direct impact on the DP system reliability.

A systematic approach must be used to identify the variables as the predictive and prescriptive
analytics may not operate well if there are numerous inconclusive features [119, 122]. It has
been shown that algorithm run-time grows dramatically as the number of features increases and, at the
same time, the accuracy of the algorithm is negatively affected, leading to overfitting problems
[121]. The primary advantage of feature selection is that it results in the following [120, 123]:
• Improved accuracy of prediction
• Simplification of the visualisation and increase of readability of data
• Accelerated learning and prediction process
• Increased clarity of the generated rules for experts

In this research, the second level of feature selection was applied using the decision tree
technique to validate the system level selection. This method also proves that seven sub-
systems have an impact on the DP system reliability. Other sub-systems such as the Valve
Control System (VCS), Crane Control System (CCS), Fire and Gas System (FGS), Potable
Water System (PWS), Bilge System, Drilling Control System (DCS), Blow-out-Preventer
(BOP) control system, etc. do not contribute to DP system reliability. The decision tree
technique is also applied at the sub-system level to identify the features that contribute to its
functionality and reliability. The two primary approaches of decision trees, Sequential
Backward Elimination (SBE) and Sequential Forward Selection (SFS), are used for this
purpose. At the system level the decision tree approach is used for validation; however, at the
sub-system level, it is used for identification / shortlisting of variables [124]. A decision tree
is a widely used method for feature selection, and at each node in the tree, it minimises the
number of features [123, 122]. This method results in a better understanding of the datasets,
possibly better visualisation of data, reduced transfer of data from vessel to shore, reduced
training time with improved stability, and model accuracy. Instances / Observations, which

describe the actual data of the variables, are considered as records. The definition of records is
described briefly in Chapter 6 and Chapter 7, along with predictive and prescriptive analytics
models.
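A minimal sketch of the SFS/SBE idea is given below using scikit-learn, with a decision-tree regressor as the base estimator; the file name, column names, and the target of seven retained features are assumptions for illustration:

import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.feature_selection import SequentialFeatureSelector

# Hypothetical frame: one column per candidate signal plus the reliability target
df = pd.read_csv("subsystem_signals.csv")  # assumed file
X = df.drop(columns=["reliability"])
y = df["reliability"]

tree = DecisionTreeRegressor(max_depth=5, random_state=0)

# Sequential Forward Selection; direction="backward" gives Sequential Backward Elimination
sfs = SequentialFeatureSelector(tree, n_features_to_select=7, direction="forward", cv=5)
sfs.fit(X, y)
print(list(X.columns[sfs.get_support()]))  # shortlisted variables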

A data dictionary is a collection of descriptions of data objects or items in a data model,
containing the attributes of variables for the benefit of others who may need to refer to them
[125, 126]. A more detailed data dictionary will include the type, length, and format of the
variable. They are created such that they are self-explanatory, and the fields should be easy to
understand such that data sets can be used for the algorithms of similar applications. Table 4-1
shows the data dictionary at the DP system level. For each sub-system, there is a separate data
dictionary table in the relevant section of the chapter, where it describes the variables along
with type, format, and other necessary attributes. All variables used in the prediction are made
visible to give an intuitive understanding through the data dictionary that does not happen with
more complex representations of data sets [126].
Table 4-1 Data Dictionary for the DP system

The output variable, which is the target variable for the research, is the Dynamic Positioning -
Reliability Index (DP-RI). In general, DP system reliability is represented in a qualitative
format for most vessels. However, a more sophisticated and complex DP vessel, for which
quantitative risk assessment was performed, will have a quantifiable reliability value. For this
research, both scenarios are handled considering the ship with actual DP reliability in

numerical form and a vessel with DP reliability in a qualitative format. In the qualitative case,
a derived variable is used to represent the DP system reliability. The output variable at the sub-
system level is a direct measure of reliability in the quantitative format as the values are
calculated based on the critical component/equipment arrangement within the sub-system.
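Where the reliability is only available qualitatively, the derived variable simply maps the categories onto a numerical scale. The sketch below is purely illustrative, as the categories and values shown are assumptions rather than the mapping adopted in this research:

# Illustrative mapping from a qualitative reliability label to a numeric
# derived variable; the categories and values are assumptions
QUALITATIVE_TO_NUMERIC = {
    "very high": 0.99,
    "high": 0.95,
    "medium": 0.90,
    "low": 0.80,
}

def derive_reliability(label):
    # Return the derived numeric reliability for a qualitative label
    return QUALITATIVE_TO_NUMERIC[label.lower()]

print(derive_reliability("High"))  # 0.95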

The key advantages of data-exploration and processing as part of the descriptive analytics are
as follows [96, 116, 122]:
• Determination of additional dimensions for deeper insights
• Definition of derived variables
• Structure information of variables in the data dictionary for another user to understand
the data sets easily
• Prevent incorrect analysis
• Addressing missing variables and performing quality checks

4.2.2 Correlation and Interdependencies


The feature selection step identified the input and output variables, the latter being a target
variable derived through mathematical modelling/calculation. The next step is determining
the correlation between the input variables and between input variables and the target variable.
This step is part of the diagnostic analytics, which performs such activities using defined tools
and techniques. For the correlation determination, the Scatter Plot and Correlation matrix are
used to identify the interdependencies and correlation between variables [96, 116, 117].

Scatter plots help in discovering the dependencies between the target variables and the input
variables. Such charts plot output values versus input values [116, 118]. Figure 4-2 shows the
plots of DP system reliability against the DP sub-systems to identify whether they have a
positive or negative relationship. The correlation is a numerical value between -1 and 1 that
expresses the strength of the relationship between two variables. When it is close to 1 it
indicates a positive relationship (one variable increases when the other increases); a value close
to 0 indicates that there is no relationship; and a value close to -1 indicates a negative
relationship (one variable increases when the other decreases) [127, 128]. Thus, this method
can be used to identify and remove redundant data to avoid over complication in the model
algorithm.
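The correlation checks described above can be reproduced in a few lines of pandas; the file and column names below (one column per sub-system, A1 to A7, plus the DP_RI target) are hypothetical:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dp_reliability.csv")  # assumed file with columns A1..A7 and DP_RI

# Pearson correlation between every pair of variables (values in [-1, 1])
corr = df.corr(method="pearson")
print(corr["DP_RI"].sort_values(ascending=False))

# Scatter plot of one input variable against the target
df.plot.scatter(x="A2", y="DP_RI")
plt.show()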

(a) DP Reliability vs. Reference System (b) DP Reliability vs. DP Control System

(c) DP Reliability vs. Thruster System (d) DP Reliability vs. Power System

(e) DP Reliability vs. Electrical System (f) DP Reliability vs. Environmental System

Figure 4-2 Scatter Diagrams – Correlation between variables

Correlation is a process of establishing a relationship or identifying the interdependencies
between variables [128, 129]. Figure 4-3 depicts the correlation matrix indicating the inter-
dependencies between the DP system and its sub-systems. The correlation technique is a part
of diagnostic analytics that uses correlation coefficients. If the correlation is 0, they are
independent of each other, that is, increasing or decreasing one does not increase or decrease
another. On the other hand, if the correlation coefficient is 1 or -1, they are directly or inversely
dependent, respectively.

Figure 4-3 Correlation Matrix – DP sub-systems and DP-RI


In addition, the latent relationship was captured using the Partial Least Squares (PLS) regression
method [130]. For real-time data, dynamic PLS is implemented in combination with the State-
Dependent Parameters (SDP) approach to explicitly identify the relationship and
interdependencies of time series [128, 130]. This way, the collinearity between the input
variables is removed. At the same time, the accuracy of the model prediction is improved,
which is described in later chapters of the thesis.
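A minimal sketch of a PLS regression that projects the collinear sub-system inputs onto a few latent components is shown below using scikit-learn; the random arrays are stand-ins for the real data, and the combination with the SDP approach is beyond this fragment:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Stand-in data: rows are time samples, columns the seven sub-system inputs
rng = np.random.default_rng(0)
X = rng.random((1000, 7))   # placeholder for sub-system reliabilities
y = rng.random((1000, 1))   # placeholder for DP system reliability

# Project the collinear inputs onto three latent components and regress
pls = PLSRegression(n_components=3)
pls.fit(X, y)
y_hat = pls.predict(X)      # fitted DP reliability from the latent components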

Figure 4-4 shows the complete architectural overview of the classification of the DP system to
identify the critical sub-systems contributing to safe and reliable operation. Through
descriptive and diagnostic analytics, it was identified that there is a total of seven sub-systems
which have direct dependencies on the DP system reliability. In the next sections, sub-systems
are evaluated to determine the critical variables contributing to the functionality and reliability.

Figure 4-4 DP System classification – descriptive and diagnostic analytics

4.3 Data Dictionary and Critical Attributes at the Sub-System Level

At the sub-system level, a comprehensive data dictionary is created as the number of variables
is large. In the evaluation of DP system design, it is often found that the standard connection
between the sub-systems that is intended to provide redundancy increases the probability of a
fault [4]. A fault in one redundant system can affect another independent system. In the design,
the class rules are applied to ensure that the redundancy concept is used to achieve no loss of
position. However, redundancy alone does not guarantee a high level of reliability; therefore,
the fault-tolerant concept needs to complement the class requirements. Large sets
of variables lead to potential configuration errors, which are addressed through Redundancy
and Criticality Analysers (RCA) and the data dictionary [23].

DP sub-systems are further divided into equipment, sensors, and signals. The following
classifications are defined for typical signals for DP sub-systems at a sensor level to achieve
higher reliability and availability, and these need to be added to the data dictionary [1, 33, 131]:
• Critical Redundant: The equipment and component within the sub-system are required
to ensure the vessel is single fault-tolerant. To remove such equipment would either
remove the DP system’s fault tolerance entirely or reduce its post-failure DP capability. It
is sufficient for WCDFI. Sampling rate (SR) is 1 millisecond.
• Critical Non-redundant: This is not applicable within the DP system
• Non-Critical Redundant: The equipment and components within the sub-system that are
required to provide greater availability and higher reliability. SR is 10 milliseconds.
• Non-Critical Non-redundant: The equipment and components within the sub-system
which do not have a direct impact on the DP functionality. SR is 1 second.
Also, attributes such as whether the signal is Input (monitoring) or Output (control) along with
type (Analog or digital) are defined to perform the feature selection and correlation analysis at
the sub-system level. The signals are represented as Analog Input (AI), Digital Input (DI),
Analog Output (AO), and Digital Output (DO). The equipment configuration is either 1oo1
(one-out-of-one), 1oo2 (one-out-of-two), 1oo3 (one-out-of-three), or 2oo3 (two-out-of-three).
For this research study, there is a total of 1513 signals identified, with 1054 redundant and 459
non-redundant signals.
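The sketch below shows one way such a data-dictionary record could be encoded in Python; the tag name and values are illustrative assumptions:

from dataclasses import dataclass

@dataclass
class SignalEntry:
    # One data-dictionary record for a DP sub-system signal
    tag: str                 # signal identifier (illustrative)
    subsystem: str           # A1..A7
    io_type: str             # "AI", "DI", "AO" or "DO"
    criticality: str         # "critical" or "non-critical"
    redundant: bool
    configuration: str       # "1oo1", "1oo2", "1oo3" or "2oo3"
    sampling_rate_ms: float  # 1, 10 or 1000 ms per the classification above

# Hypothetical critical redundant signal sampled every millisecond
gyro_heading = SignalEntry(tag="GYRO1_HDG", subsystem="A1", io_type="AI",
                           criticality="critical", redundant=True,
                           configuration="1oo3", sampling_rate_ms=1.0)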

4.4 Reference System (A1)

4.4.1 System Description


A reference system (A1) is used to measure the position and heading data of the vessel at any
point in time [132, 133, 134]. The position and heading data are critical information for the DP
control system as it compares the measured values with the set value to determine the error.
The DP control system functionality depends on this value to determine the command signal
to the propulsion system. Depending on the type of operation, most DP vessels have more than
one reference system working actively at the same time, providing either absolute or relative
measurement [1, 135]. The position reference systems could be Differential Global Navigation
Satellite System (DGNSS), Differential Global Positioning System (DGPS), Laser radar,
Hydroacoustic Position Reference (HPR) system and Taut wire or Gyrocompass [1].


Figure 4-5 Reference System A1 – Input Variables / Signals


The reference system is part of the guidance and control system, which usually consists of a
latitude control system and a path control system. The systems consist of hardware, software,
and sensors to supply information and corrections necessary to give accurate position and
heading references. Figure 4-5 represents the GUI model of sub-system “Reference System
(A1)” with the information at equipment and sensor level. The DP 3 vessel considered for the
research consists of Gyros (3 units), Motion Reference Unit (MRU) (4 units), Global
Positioning System (GPS) (3 units) and DGPS (2 units).

4.4.2 Data Dictionary for Reference System


Table 4-2 shows the data dictionary for the Reference System (A1) along with the details of
the equipment configuration. There is a total of 12 equipment items and 67 signals. Out of the
67 signals, 31 are redundant signals, and the remaining 36 are non-redundant signals.

Table 4-2 Reference System (A1)- Data Dictionary

4.5 DP Control System (A2)

4.5.1 System Description


The DP control system is considered to be the most critical sub-system as it performs the
control algorithm based on the set-point and measurement. It consists of a set of hardware
(Operator Station (OS), Remote Controller Unit (RCU), Sensors, Field Station (FS), Network
Distribution Unit (NDU), etc.) and software (logic implemented in the controller) which
automatically controls surge, sway and yaw motion. It maintains the desired position and
heading of the vessel based on inputs from the reference systems and sensors, and controls the
propulsion system [136]. The DP control system is also defined as a set of computers that
combines automatic computation with instruction from operators, enabled through interfaces.

The software logic implemented in the controller consists of various functionalities which are
integrated to maintain the desired position and heading, as shown in Figure 2-2 [62, 133]:
Environmental System Input: This logic reads information on environmental data and
converts it into ships load for the position controller logic module through feedforward control.
Kalman Filter: This logic reads the measurement of vessel motions from sensors and filters
out any noise before sending it to the DP position controller logic (a minimal numerical sketch follows this list).
Power System: This logic module provides the information of total power available in the
power plant for a thruster to be brought online and provide the necessary thrust.
DP Position control: This logic calculates the resultant force that thrusters should generate
based on the input received from the environmental system and vessel sensors.
Thruster Allocation Algorithm: This logic finds the available thrusters and allocates the
required thrust based on the power available from the power system.
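The noise-filtering role of the Kalman filter module can be illustrated with a minimal one-dimensional sketch; a real DP observer is a multivariable vessel model, and all tuning values below are assumptions:

def kalman_step(x_est, p_est, z, q=1e-4, r=0.05):
    # One predict/update cycle: x_est/p_est are the prior state estimate and
    # its variance, z the new measurement, q/r the process and measurement noise
    p_pred = p_est + q                 # predict (static vessel model assumed)
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_est + k * (z - x_est)    # filtered position
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [0.12, -0.05, 0.08, 0.02]:    # noisy position readings (metres)
    x, p = kalman_step(x, p, z)
print(x)  # smoothed estimate passed on to the DP position controller logic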


Figure 4-6 DP Control System A2 – Input Variables / Signals


Figure 4-6 represents the GUI model of the sub-system “DP Control System (A2)” with the
information at the equipment and sensor level. The DP 3 vessel considered consists of OS (6
units), FS (3 units), RCU (3 units), NDU (6 units) and the model programmed as a software
module in the RCU. The hydrodynamic model should predict the position and orientation of
the ship in each time step. The hydrodynamic model should include low-frequency and wave-
frequency effects and should contain all six degrees of freedom.

4.5.2 Data Dictionary for DP Control System


Table 4-3 shows the data dictionary for the DP Control System (A2) along with the details of
the equipment configuration. There is a total of 24 equipment items and 127 individual signals,
which are grouped under 33 signal groupings for this research. Out of 33 signal groups, 16 are
redundant, and the remaining 17 are non-redundant.
Table 4-3 DP Control System (A2)- Data Dictionary

4.6 Thruster / Propulsion System (A3)

4.6.1 System Description


Thruster / Propulsion system refers to propellers, thrusters (Azimuth, Tunnel or Bow), Variable
Frequency Drives (VFD), Electrical motors, and rudders that receive the command from the
DP system and produce the necessary forces to maintain the vessel at a given position set-point
and heading. Generally, the propulsion system is defined as a system which makes thrust/force
from power delivered by the power system and a control signal from the DP system [62, 13].
The control signals from the DP system generally consist of rotational speed, azimuth angle,
etc., to produce enough thrust to maintain the desired position and heading. Figure 4-7
illustrates the GUI model of sub-system “Thruster / Propulsion System (A3)” with the
information at equipment and sensor level. The DP 3 vessel considered consists of 8 azimuth
thrusters, VFD, electrical motors, and associated machinery.


Figure 4-7 Thruster / Propulsion System A3 – Input Variables / Signals


4.6.2 Data Dictionary for Thruster / Propulsion System
Table 4-4 shows the data dictionary for the Thruster / Propulsion System (A3) along with the
details of the equipment configuration. There are a total of 8 equipment items, 352 sensors,
and 616 signals. Out of 616 signals, 528 are redundant, and the remaining 88 are non-redundant.

Table 4-4 Thruster / Propulsion System (A3)- Data Dictionary

4.7 Power System (A4)

4.7.1 System Description


The Power System comprises engines, prime movers, generators, and all auxiliary machinery
systems providing electrical power to the vessel. The power generated is supplied to consumers
on-board the ship and the propulsion system, which consumes the majority of the power in
almost all DP vessels. The Power Management System is an automatic system that helps to
minimise the manual power demand calculation, start/stop of the generators, and connecting
of the standby generator in the case of the demand [62, 132, 133]. Figure 4-8 represents the
GUI model of sub-system “Power System (A4)” with the information at equipment and sensor
level. The DP 3 vessel considered consists of 8 engines, 8 alternators, and associated Fuel oil
and lube oil systems.


Figure 4-8 Power System A4 – Input Variables / Signals


4.7.2 Data Dictionary for Power System
Table 4-5 shows the data dictionary for Power System (A4) along with the details of the
equipment configuration. There are a total of 8 engines, 8 generators, 424 sensors, and 616
signals. Out of 616 signals, 368 are redundant, and the remaining 248 are non-redundant.

Table 4-5 Power System (A4)- Data Dictionary

4.8 Electrical System (A5)

4.8.1 System Description


The electrical system consists of Power distribution units, switchboards, Uninterruptible Power
Supply (UPS) and batteries, and provides interconnection between the power system and the
consumers of the DP system [62]. Figure 4-9 represents the GUI model of the sub-system
“Electrical System (A5)” with the information at equipment and sensor level. The DP 3 vessel
considered consists of 4 switchboards, 4 master circuit breakers, 4 slave circuit breakers, 8
incoming feeders from generators, and 8 outgoing feeders to thrusters.


Figure 4-9 Electrical System A5 – Input Variables / Signals


4.8.2 Data Dictionary for Electrical System
Table 4-6 shows the data dictionary for Electrical System (A5) along with the details of the
equipment configuration. There are a total of 4 switchboards, 4 slave circuit breakers and 4
master circuit breakers, 59 sensors, and 106 signals. Out of 106 signals, 94 are redundant, and
the remaining 12 are non-redundant.

Table 4-6 Electrical System (A5)- Data Dictionary

4.9 Environmental System (A6)

4.9.1 System Description


The environment system refers to the wind, current, and wave parameters affecting the position
and heading of the DP vessels. The calculation of all horizontal forces on a ship is summative,
i.e., wave, wind, and current all could be considered collinear, and their forces can be added
together to get the total environmental forces. The environmental loads are wind speed, wave
height, current speed, and the directions of the wind, waves, and current [13, 133, 137].


Figure 4-10 Environment System A6 – Input Variables / Signal


Figure 4-10 represents the GUI model of sub-system “Environment System (A6)” with the
information at equipment and sensor level. The DP 3 vessel considered consists of 3 wind
sensors, 3 wave radars, but as there is no sensor for the current, it is considered to be a
contributor to the non-modelled errors in the system.

4.9.2 Data Dictionary for Environment System


Table 4-7 shows the data dictionary for Environment System (A6) along with the details of the
equipment configuration. There are a total of 3 wind sensors, 3 wave radars, 6 sensors, and 45
signals. Out of 45 signals, 18 are redundant, and the remaining 27 are non-redundant.
Table 4-7 Environment System (A6)- Data Dictionary

4.10 Human / Operator Error (A7)

4.10.1 System Description


Human / Operator Error is considered as one of the variables (sub-system) contributing to the
reliability of the DP system. Though there is no direct mathematical representation of the
human reliability that is widely accepted, for some DP vessels, quantitative risk assessment is

used to factor in the DPO contribution to DP reliability. The error could be from the DP
operator, Engine Room technicians, Deck officers, instrument technicians etc. which directly
or indirectly results in failures of the system leading to DP accidents. The DP operator can
influence the complete DP system, and a wise decision or intervention could prevent DP accidents
from occurring [33].

Human error is one of the most significant contributors to DP accidents, which means reducing
the number of operators related DP accidents could significantly increase the safety and
reliability of DP operation [3, 20]. The DPO is tasked with monitoring a highly automated and
complicated system which leaves the DPO “out-of-the-loop”. However, the operator is asked
to intervene when the DP system is failing in ways unforeseen by designers, often with little
time available for a decision response. For this research, four factors have been considered as
affecting the performance of the DPO, namely Situation Awareness, Decision Making ability,
Dexterity, and Distraction. They are assigned to a particular level based on the capability of a
DPO to respond and take corrective action through a generalised format as below [26, 87]:
• Level 1: Perception refers to the perception of attributes and dynamics of elements in an
environment.
• Level 2: Comprehension refers to the integration and interpretation of that information
to understand what is happening in a situation.
• Level 3: Projection involves the operator's estimation of the system's future states. The
outcome of this continuous assessment of the current situation can be utilised to
determine future courses of action.

In addition to the 4 direct factors, 18 potential factors were identified, which are grouped into
Tangible (internal and external) and Intangible (internal and external) as shown in Figure 4-11.
The system will be provided with the input at every shift change and during the complex
offshore operations mentioned in the DP operation manual, including
Simultaneous Operations (SIMOPS). Figure 4-11 represents the GUI model of sub-system
“Human / Operator Error (A7)” with the information at a different level, having an impact on
the DPO ability to respond during emergency scenarios. The DP 3 vessel considered has DP
crew consisting of 2 DPOs, 1 Captain, 1 Chief Engineer, 1 Chief Mate, 2 Electricians, and
necessary supporting engineers.


Figure 4-11 Human / Operator Error A7 – Input Variables / Signals


4.10.2 Data Dictionary for Human / Operator Error
Table 4-8 shows the data dictionary for Human / Operator Error (A7) along with the details of
the factors affecting the DP reliability. There is a total of 4 factors, each with 3 levels, and 4
sub-components with 18 variables identified. All the signals are considered to be assigned by
the Captain for Human Reliability Analysis (HRA) modelling, and a total of 30 signals was
identified. Out of the 30 signals, none are redundant, so all 30 are considered as non-redundant.

Table 4-8 Human / Operator Error (A7)- Data Dictionary

4.11 Summary

In this chapter, the first stages of analytics involving descriptive and diagnostic analytics are
explained, along with data exploration and processing. The feature selection was performed
through descriptive analytics tools such as statistics and distribution plots. The input and output
variables (sub-systems) contributing to the DP system reliability were identified. This step
performed quality checks on the data, avoided any collinearity and provided clear visualisation
of the variables. The data dictionary at the system level was developed to provide a clear
understanding of the data sets. The correlation and interdependencies between the variables
(sub-systems) were then identified. Finally, each of the sub-systems identified was described
along with the system architecture, GUI model, and data dictionary. The number of equipment
items, sensors, and arrangement configuration was explained so that during the development
of the mathematical model, it can quickly be evaluated against the DL algorithm performance.

Chapter 5 describes a systematic approach for weight assignment to the DP sub-systems
through the Analytic Hierarchy Process (AHP).

5. Weight Assignment of DP Sub-Systems: Analytic Hierarchy Process
5.1 Introduction

In this chapter, the different sub-systems identified as input variables are evaluated and
assigned a relative weighting based on the contribution to the overall functionality and
reliability of the DP system. AHP is a structured technique for dealing with complex
decisions and for assigning weights between alternatives, and it was used for the weighting of
the DP sub-systems [141]. The chapter begins with a description of the AHP process, along with
the methodology used. In the next section, features of AHP and the step-by-step process
involved in the weighting assignment are explained in detail with the overall flowchart.
Subsequently, the application of the AHP process to the DP system was performed. The first
step of AHP involving the data collection and definition of the system and sub-systems is
described. The next step was the decision hierarchy model establishment, which involved
defining the main goals and criteria for the DP problem context. After that, the hierarchy of the
system to the sub-systems and from sub-system to the components is established. Then
technical ranking for the sub-systems was collected from different industry experts, and a
pairwise comparison matrix was generated. The relative weighting between the sub-systems
was established, and verification was performed using the consistency ratio. Finally, the
weighting assignment was validated against results obtained through the LSTM algorithm [16].
Once proven, the weighting assignment for the DP sub-system is fixed for the research study
to be used in predictive and prescriptive analytics.

5.2 Analytic Hierarchy Process as MCDM

DP being a complex system, it is necessary to use more advanced tools to increase the accuracy
of decision making, eliminate inconsistencies generated from vast numbers of comparisons,
reduce inherent limitations, minimise errors, and handle multi-attributes [138]. Each DP sub-
system plays a unique role in the continuous overall DP function for safe and reliable operation
of the vessel. Rating the significance or assigning weightings to the DP sub-systems in different
operating conditions is a complex task that requires input from many stakeholders. The
weighting assignment is a critical step in determining the reliability of the DP system during
complex marine and offshore operations. Thus, an accurate weighting assignment is crucial as

it, in turn, influences the decision-making of the operator concerning the DP system
functionality execution. Often DP operators prefer to rely on intuition in assigning the
weightings. However, this introduces an inherent uncertainty and a level of inconsistency in
decision-making [139, 140, 141]. The systematic assignment of weightings requires a clear
definition of criteria and objectives and data collection with the DP system operating
continuously in different environmental conditions. The sub-systems of the overall DP system
are characterised by multi-attributes resulting in a high number of comparisons, thereby
making weighting distribution complicated. If the weighting distribution was performed by
simplifying the attributes, making the decision by excluding part of them or compromising the
cognitive efforts, then this could lead to inaccurate decision-making [142].

Multi-Criteria Decision Making (MCDM) methods have evolved over several decades and
have been used in various applications within the maritime, oil and gas industries [143, 144].
DP, being a complex system, naturally lends itself to the implementation of MCDM techniques
to assign weight distribution among its sub-systems [138]. An AHP model is useful in obtaining
the domain knowledge from numerous experts and representing knowledge-guided indexing.
The approach involved the examination of several criteria in terms of both quantitative and
qualitative variables. AHP provides a comprehensive and rational framework for structuring a
decision problem, for representing and quantifying its elements, for relating those elements to
the overall goals and for evaluating the alternative solutions. AHP is one of the most well-
known and widely used MCDM techniques. It decomposes the multiple-attributes into
hierarchies or groups as per their characteristics and entities and then compares them for
weighting distribution. In real applications, such as DP sub-systems, the comparisons are
subject to judgmental errors. Therefore, a careful evaluation is required after receiving input
from subject matter experts, and analysis must be performed without bias or influence from
the vendor [60].

5.3 Methodology

AHP is a method of “measurement through pairwise comparisons and relies on the judgments
of experts to derive priority scales” [145]. Decision-makers and researchers use it because it is
a simple and powerful tool. At the same time, this method seeks a systematic practice to define

priorities and support complex decision making. In AHP, the decision problem is decomposed
into different levels of criteria and sub-systems, within which a substantial number of
pairwise comparisons are completed, and finally, the weighting distribution is determined. In
this technique, complete aggregation among the criteria is assumed, and linear additive models
are developed. The weights, priorities, and scores will be achieved by pairwise comparison
between all the options. AHP is a useful tool for dealing with complex decision making and
may aid the decision-maker to set priorities and make the optimum decision. By reducing
complex decisions to a series of pairwise comparisons, and then synthesising the results, AHP
helps to capture both subjective and objective aspects of a decision. AHP also provides a means
of verification and proving the subject matter expert’s opinion mathematically by using the
consistency ratio. The critical aspect of AHP is that it incorporates the method of verification
of reliability by checking the consistency of the decision maker’s evaluations, thus reducing
the bias in the decision-making process [145, 146].

For the weighting distribution of the DP sub-system, the following critical information is
established to make the right choice or decision for a real-world application. The following
high-level information is established before the application of AHP [145]:
• Definition of the problem
• The need, reason, and purpose of the decision/weightings
• The available alternatives/sub-systems
• The alternative actions to take
• The criteria and sub-criteria to evaluate the alternatives/sub-systems
• The hierarchy of arrangement of sub-systems
• Identification of the different stakeholders and groups involved in the problem and how
alternatives will affect them.

The criteria and sub-criteria identified can be either tangible or intangible items. However,
when they are intangible, there is no way to measure them as a guide to the ranking of the
alternatives [147, 148]. The situation involving intangible items is addressed by creating
priorities for the criteria to weight the alternatives and aggregating across all criteria to obtain
the desired overall ranking of the options. This is usually a challenging and complicated

procedure. The execution should follow the steps below when the criteria and sub-criteria are
intangible [144, 149]:
• Structuring a decision problem and selection of criteria
• Priority setting of the criteria by pairwise comparison (weighting)
• Pairwise comparison of options on each criterion (scoring)
• Obtaining an overall relative score for each option
• Evaluating results and verifying the reliability of the output

5.4 Application and Advantages of AHP

AHP is a compensatory method and has been applied in various real-life applications for
MCDM. AHP has been widely used in the following applications, which makes it suitable for
the determination of DP sub-system weighting assignment [145, 150]:
• Finding the optimal decision when multi-decisions are available in any situation.
• Selecting one alternative (system/variables) from a set of other options.
• Determining the quality of a product in a quality management application.
• Weighting distribution / Priority / Evaluation, determining the relative merits from a set
of alternatives.
• Allocating resources, determining the right combination of alternatives due to numerous
constraints.
• Benchmarking the system/process under evaluation with the known system/process.

The weighting distribution obtained through AHP can be used universally for addressing
reliability issues, as AHP has an axiomatic foundation with the following advantages [145,
150, 151]:
• Comparison is reciprocal, meaning that the pairwise comparison matrix that is formed
should be reciprocal, with entries satisfying a(j,i) = 1/a(i,j). For example, if A is k times
more important than B, then B is 1/k times as important as A.
• Homogeneity, meaning that there must be similarity in the comparisons.
• Dependence, meaning that each level of the hierarchy is connected (complete hierarchy),
although the relationship may be imperfect (incomplete hierarchy).

• Expectation, meaning that it includes assessment expectations and perceptions of
decision-makers as a priority.

5.5 AHP Framework for Weighting Distribution


The framework of AHP is defined in the form of a step-by-step execution procedure for
weighting assignment to the DP sub-systems. It has provided clear guidance during the process
of exploring the different scoring techniques and finally agreeing on the values with proper
justification [145, 150]. The elaborate process involved in the AHP methodology was made
simple by splitting it into many steps and executing the steps sequentially with an iterative
process for optimisation. The first step involves data collection from different stakeholders,
and significant efforts are required for analysing the data and grouping it appropriately for the
next step. The next step is defining the system and sub-system identification. After this step,
comes the preparation of the hierarchy, which is a critical step involving defining the goals,
criteria, and the sub-system arrangements. Then the weighting distribution of the sub-systems
is determined by the assessment of criteria and alternatives [147]. The next step involves the
evaluation of the sub-systems by experts with different competence, skills, and knowledge of
the DP system. The data were thoroughly filtered and organised using relevant software to
avoid bias and influence on the input. The weighting distribution values obtained through this
step are used in determining the weights for the DP sub-systems.

The values obtained through AHP are verified through the consistency ratio check method.
Further, the weighting assignment has been validated using the LSTM algorithm through
reliability prediction. Figure 5-1 represents the framework of AHP, indicating the step-by-step
execution used in the DP sub-system weighting distribution. During the verification step, if the
results are not within the acceptable range, then the process is again repeated by rating the sub-
system using the Saaty rating scale. Therefore, the check related to the consistency ratio plays
a critical role in deciding on the final assignment. In case of any deviation, the process should
be re-iterated until satisfactory results are obtained.
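A compact numerical sketch of the weight calculation and the CR ≤ 0.10 check is given below, using the geometric-mean (Nth root of row products) method and Saaty's random index values; the 4x4 example matrix is illustrative, not the survey result:

import numpy as np

# Illustrative 4x4 pairwise comparison matrix (reciprocal by construction)
A = np.array([[1.0, 3.0, 5.0, 7.0],
              [1/3, 1.0, 3.0, 5.0],
              [1/5, 1/3, 1.0, 3.0],
              [1/7, 1/5, 1/3, 1.0]])
n = A.shape[0]

# Geometric-mean weights: Nth root of each row product, normalised to sum to 1
w = A.prod(axis=1) ** (1.0 / n)
w /= w.sum()

# Consistency check: lambda_max from A.w, then CI and CR
lam = (A @ w / w).mean()
CI = (lam - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty random indices
CR = CI / RI
print(w, CR)  # accept the weights only if CR <= 0.10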

[Flowchart: Start → Data Collection / Survey → Define System and Sub-Systems → Establish Decision Hierarchy Model with Goal, Criteria, System and Sub-Systems → Determination of Priorities / Ranking by Experts using the Saaty Rating Scale → Define Pairwise Comparison Matrix → Calculate Weighting Distribution → Perform Consistency Test; if CR ≤ 0.10, Assign Weighting Distribution to Sub-Systems and Stop, otherwise return to the ranking step]

Figure 5-1 Framework of AHP for weight assignment to DP sub-systems

5.6 DP Sub-System Weighting Distribution using AHP

DP sub-systems need to be assigned appropriate weights, so it is necessary to evaluate the
contribution of each to the overall reliability of the DP system [138, 152]. For the AHP
methodology, the basic principle of decision making is that the problem needs to be expressed
in the form of hierarchical decomposition. Once the decomposition is completed, then the
assessment of relative importance between the elements in each hierarchy, a comparative
judgment, will be performed [149, 144]. This step is followed by priority synthesis and
consistency of assessment evaluation.

In this section, the steps involved in the weighting distribution among the sub-systems follow
the framework of AHP. The DP sub-system is structured in a hierarchy model, and the criteria
for the defined problem are identified. In particular, the framework developed for the
application provided clarity, and at the same time, the weighting distribution among the
sub-systems is axiomatic and assertive.

This framework of AHP is flexible and easy to adapt for different applications with slight
variations. For DP sub-system weight assignment, the framework key concepts are used along
with the following variations [143, 150]:
1. To introduce the general underlying principles in obtaining weights, with particular
attention to subjective weights
2. To identify and analyse the approaches for obtaining weights:
a. Statistical approaches (for obtaining “objective” weights), generally applied in the
scope of composite indicator construction,
b. Multi-Attribute approaches,
c. Scaling approaches, allowing subjective data to be managed; among these, the models
should be able
i. to handle subjective evaluations and judgments, expressed in explicit or implicit
ways,
ii. to obtain subjective weights at the group level and the individual level. These will
be identified and described from the perspective of obtaining subjective weights.

5.6.1 Data Collection / Survey Method

The processes of sub-system selection, criteria identification, and sub-criteria identification are
conducted through three steps of data collection. The data is collected from databases,
correlation analysis, and industry experts such as vendor control system specialists, design
engineers, FMEA experts, and vessel operators. Figure 5-2 shows how the big data collected
for the research are used to identify the critical DP sub-systems and criteria for weight
assignments through different processes.

[Diagram: big data and classification inputs produce a grouping of 20 sub-systems, narrowed to 7 sub-systems through group discussion, questionnaires by email, and one-to-one interviews; in parallel, 20 criteria defined from the literature are narrowed to 6 criteria through the same three methods]

Figure 5-2 Data Collection and Survey on DP sub-systems

The data was collected through different methodologies, and it was ensured that the input was
independent, and that one person’s decision did not affect other inputs. The data was also
collected across various disciplines, regions, shipyards, classification societies, FMEA
consultants, design companies, vessel operators, DP vendors, and experts directly or indirectly
involved in DP operations. The data collected in the initial stages were grouped and analysed
in detail by Subject Matter Experts (SMEs) for further shortlisting before another round of
questionnaires, using the template shown in Appendix I, was sent out for decision data
gathering [153, 154]. Finally, seven sub-systems were identified as critical, having a direct impact on DP
system functionality. Similarly, six criteria were identified as essential, which determine the
safe and reliable operation of the DP system.

5.6.2 Define System and Sub-Systems

Once the data has been collected, the next step is to define the system and sub-systems for
which the AHP concept needs to be applied. Chapters 3 and 4 already detailed the process of
identifying the system and sub-systems. The system is the overall DP system, and the sub-
systems were identified as Reference System (A1), DP Control System (A2), Thruster /
Propulsion System (A3), Power System (A4), Electrical System (A5), Environment System
(A6) and Human Error (A7).

5.6.3 Decision Hierarchy Structure Model

Once the data had been collected, the critical step in the AHP technique is developing the
structure of the decision hierarchy. It is generated based on the human ability to judge and construct
a hierarchical perception of a multi-criteria problem. The hierarchy is a systematic
representation of a complex system in a multi-level structure and is created using attributes
and entities [144, 150]. By defining the hierarchy, the complex problem will become more
transparent for decision-making problems where the entities are interconnected with each
other. For this specific application of weighting distribution among the sub-systems, the
hierarchy is divided into two levels containing the seven sub-systems and six criteria entities.
Figure 5-3 shows the classification of the DP sub-system based on big data analytics and
experts’ judgments. Figure 5-4 shows the decision hierarchy model with the main criteria and
sub-criteria aligned in the hierarchy and its interdependencies with the DP sub-systems. The
hierarchy is flexible, and the system is very agile, which enables new criteria to be added once
identified [147]. The hierarchy is prepared based on the inputs collected from SME with many
years of field experience.

Figure 5-3 Hierarchy of DP sub-systems

[Diagram: Goal: a safe and reliable Dynamic Positioning system; Criteria: station keeping capability, prevention of loss of position, prevention of loss of redundancy, DP mode of operation, prevention of sources of accidents, and DP class of operation; Sub-Systems: Reference System, DP Control System, Thruster/Propulsion System, Power System, Electrical System, Environment System, and Human Error]
Figure 5-4 Decision Hierarchy Structure Model

5.6.4 Determination of Priorities and Assign Saaty Ratings

AHP gives the user flexibility in assessing whether a variable is more, less, or equally
important than another variable. In this step, the user may assign a quantitative value based on
qualitative factors. However, experts with similar domain knowledge may give different
ratings for the same sub-system. The weighting of the DP sub-systems was collected from
SMEs through various methodologies to ensure the independence of their input, as shown in Figure
5-5. The methods are widely accepted for collecting information from a group for a problem
that requires feedback based on field experiences [154].

Table 5-1 represents the “scale of relative importance” in qualitative and quantitative factors,
which will be used for pairwise comparisons. The rating for each sub-system for specific
qualitative inputs may vary from individual to individual. To minimise the error and avoid
inconsistency, the Likert scale methodology is adopted [145, 146]. Once the rating is assigned
for a sub-system, the relative rating with the immediately previous sub-system is assigned.
From the data of the Likert scale, as shown in Table 5-2, a quantitative suggestion matrix could
be developed for a straightforward and convenient pairwise comparison. AHP allows the user
to determine the rating for the sub-system intuitively by making a pairwise comparison and
then to convert the pairwise comparisons into a set of numbers that represents the relative
priority of each criterion consistently.

[Figure: weighting assignment by expert judging — grouping into 8 categories, group discussion, brainstorming by email and one-on-one interviews — leading to the definition of priorities for AHP]
Figure 5-5 Ranking of DP sub-system from industry experts

Table 5-1 Pairwise comparison assessment table for "Scale of relative importance"

Table 5-2 Sub-System Priority assignment - Likert Scale Rating

Figure 5-6 shows the composition of the participating industry experts who contributed to the
weighting of the DP sub-systems. Participants with different backgrounds and knowledge of
DP systems contributed to providing the input.

Figure 5-6 Industry experts composition (Organisation group)

Figure 5-7 shows the composition of the participants who are experts in different disciplines
and contributed to the weighting of the DP sub-systems.

Figure 5-7 Industry experts composition (Discipline)

They are SMEs from various companies, academic institutions and countries, with different
skill sets, working with DP systems across the various phases of the DP life cycle. The
questionnaire was prepared based on the criteria and sub-criteria for ranking, through several
rounds of shortlisting and refinement, as shown in Appendix I. The questionnaire was
validated against the IMO regulations, IMCA guidelines and International Association of
Classification Societies (IACS) standards to support a sound assignment of weighting to each
of the DP sub-systems.

5.6.5 Pairwise Comparison Matrix for Sub-System

Table 5-3 shows the rating for each sub-system based on the input data collected through the
survey method. Once the rating for each sub-system is assigned, the "scale of relative
importance" is determined. In a simple relative rating, any given set affects, and is affected
by, only one of the other sets; when ranking the sub-systems, however, the rating is based on
the various criteria and sub-criteria affecting the overall complex problem or goal [147, 150].
Thus, the method proved to be useful, as the ranking was based on all relevant criteria for
each sub-system.

Table 5-3 Pair-Wise Comparison Matrix Template for 7x7 matrix with seven sub-systems

The next step of AHP is to form the pairwise comparison matrix between the set of sub-
systems. The comparisons between the sub-systems are made for each criterion. For the DP
application, there are seven sub-systems classified for weighting distribution. Therefore, the
DP sub-system pairwise comparison matrix is formed as a 7x7 matrix table, as shown in Table
5-4. The pairwise comparison matrix reveals the relation between the sub-systems. It analyses
whether they are significantly different from one another. Each sub-system is matched head-
to-head (one-on-one) with each of the other sub-systems [145, 155].

Table 5-4 Pair-Wise Comparison Matrix for DP sub-systems

5.6.6 Calculate Weighting Distribution among DP sub-systems

The critical steps in the weighting distribution of sub-systems are: multiplying the values in
each row together and calculating the Nth root of the product; normalising the Nth roots to
obtain the appropriate weights; and finally calculating and checking the Consistency Ratio
(CR) [146]. This can be explained in the following steps:

Once the pairwise comparison table has been formed, the Nth root of the product of each row
in the matrix needs to be calculated. Consider that there are $m$ sub-systems, and $n$
represents the unique number for each sub-system.

The Nth-root-of-product is expressed as shown in Equation (5-1):

$$A_n = \left(A_{n1} \times A_{n2} \times A_{n3} \times A_{n4} \times \cdots \times A_{nm}\right)^{1/m} \qquad (5\text{-}1)$$

In this case, the matrix for the DP system is 7x7; therefore the 7th-root-of-product of each row
is calculated for the $m$ sub-systems. For example, the 7th root of the product for the first
sub-system is calculated using Equation (5-1) as below:

$$A_1 = \left(A_{11} \times A_{12} \times A_{13} \times A_{14} \times A_{15} \times A_{16} \times A_{17}\right)^{1/7} = \left(1.00 \times 2.00 \times 3.00 \times 2.00 \times 3.33 \times 1.66 \times 0.50\right)^{1/7} = 1.64$$
Table 5-5 DP Sub-System Weighting Distribution and Measurement of consistency

Similarly, the values of the Nth root of the product for the other sub-systems are calculated
using Equation (5-1). The results are listed in Table 5-5 for easy reference and are used for the
determination of the weight/priority of each sub-system. The 7th-root-of-product values are
then added together to give a total of 8.103. Once this is done, the weight determination or
priority vectors are calculated: the 7th-root-of-product values are normalised by the total to
give the appropriate weight for each of the sub-systems.

The weights or priority vectors for each sub-system are calculated below using the AHP
methodology. The weight or priority vector for $A_n$ is expressed using Equation (5-2):

$$\text{Weight (Priority Vector) of } A_n = \frac{N\text{th root of product of } A_n}{\text{Total of } N\text{th roots of products}} \qquad (5\text{-}2)$$

$$W_1 = \frac{1.64}{8.103} = 0.20 \approx 20\%$$
$$W_2 = 0.14 \approx 14\%$$
$$W_3 = 0.10 \approx 10\%$$
$$W_4 = 0.10 \approx 10\%$$
$$W_5 = 0.06 \approx 5\%$$
$$W_6 = 0.08 \approx 10\%$$
$$W_7 = 0.32 \approx 30\%$$

Once the weight distribution or priority is assigned for the different sub-systems, it is necessary
to verify the consistency of the decisions.

5.6.7 Verification of Reliability of Weighting Distribution

The most crucial characteristic of AHP is consistency. During assessment with the AHP
technique, the judgments between sub-systems are never entirely consistent [143, 145]. As in
other MCDM methods, AHP allows for inconsistency; however, it requires the inconsistency
not to exceed 10%, which is set as the threshold for acceptance. This measure is defined as
the CR.

The CR determines how consistent the decision-maker has been when making the pair-wise
comparisons [151]. The consistency ratio is used for verification of the reliability of the
weighting distribution among sub-systems of the DP system to ensure the consistency of
experts’ judgments arranged in pairwise comparisons from the results of the survey. In this
way, the weight assignment chosen for the DP sub-system is justified and proven to have a
systematic approach for the research application.

CR calculation is performed in five steps as follows:


i. The pairwise comparison values in the 7x7 matrix are added together for each column as
the "Sum" values. The sum values are expressed as per Equation (5-3):

$$\text{Sum of } A_n = \sum_{m=1}^{7} A_{mn} \qquad (5\text{-}3)$$

$$\text{Sum of } A_1 = A_{11} + A_{21} + A_{31} + A_{41} + A_{51} + A_{61} + A_{71} = 1.00 + 0.50 + 0.33 + 0.50 + 0.30 + 0.63 + 2.00 = 5.26$$

ii. The Sum values are then multiplied by the respective weight factor from the "Priority
Vector" column for that sub-system, as shown in Equation (5-4):

$$\text{Sum} \times \text{Priority Vector for } A_1 = \text{Sum for } A_1 \times PV \text{ of } A_1 = 5.26 \times 0.202 = 1.06 \qquad (5\text{-}4)$$

iii. The λmax of the 7x7 matrix is calculated by adding the (Sum × PV) values of each
sub-system from the previous step. The consistency measurement of a matrix is based on the
maximum eigenvalue (λmax): the closer the λmax obtained for the 7x7 matrix is to the matrix
order (m = 7), the more consistent the results. The maximum eigenvalue is calculated using
Equation (5-5):

$$\lambda_{max} = \sum_{n=1}^{7} A_n(\text{Sum} \times PV) = 1.06 + 1.03 + 1.03 + 1.00 + 1.00 + 0.98 + 0.98 = 7.09 \qquad (5\text{-}5)$$

iv. The Consistency Index (CI) for the 7x7 matrix is calculated as per Equation (5-6):

$$CI = \frac{\lambda_{max} - m}{m - 1} = \frac{7.09 - 7}{7 - 1} = 0.02 \qquad (5\text{-}6)$$

v. Calculate and check the Consistency Ratio (CR) to ensure that the decision-maker has been
consistent while making the pairwise comparisons. Before this step, the Random Index (RI)
for a 7x7 matrix must be defined; the RI is chosen from Table 5-6. The value of the CR is
determined using Equation (5-7):

$$CR = \frac{CI}{RI} = \frac{0.02}{1.32} = 0.01 \qquad (5\text{-}7)$$

Table 5-6 Random Index (RI) for AHP

The numerical values of CR define the consistency of the decision-maker. A higher number
means the decision-maker has been less consistent, whereas a lower number means the
decision-maker is consistent. The general rule of thumb for the verification of CR is as follows
[145, 146]:
• If the CR > 0.10, then the decision-maker should seriously consider re-evaluating the
pairwise comparison, which indicates that the sources of inconsistency must be
identified and resolved. The analysis needs to be re-performed.
• If the CR ≤ 0.10, then the decision-maker pair-wise comparisons are relatively
consistent.

The DP application problem analysed in the above section has CR = 0.01, which reveals that
the pairwise comparisons between the sub-systems are relatively consistent. Therefore, no
corrective actions are required. The weighting distribution factors among the sub-systems of
the DP system have been successfully assigned using AHP and proved to be consistent and
systematic. Therefore, the weighting distribution of sub-systems, as indicated in
Table 5-7, will be used for the further research studies.
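
To make the full computation chain of Equations (5-1) to (5-7) concrete, the following Python sketch derives the priority vector and the CR from a pairwise comparison matrix. The matrix used in the example is illustrative: only its first row follows the worked example above, and the remaining entries are consistent reciprocal placeholders rather than the thesis survey data.

```python
import numpy as np

# Saaty Random Index values for matrix orders 1..10 (RI = 1.32 for m = 7)
RANDOM_INDEX = {1: 0.00, 2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(pairwise):
    """Priority vector and Consistency Ratio, Equations (5-1) to (5-7)."""
    m = pairwise.shape[0]
    # Equation (5-1): Nth root of the product of each row
    nth_roots = np.prod(pairwise, axis=1) ** (1.0 / m)
    # Equation (5-2): normalise the Nth roots to obtain the priority vector
    weights = nth_roots / nth_roots.sum()
    # Equations (5-3) to (5-5): weighted column sums give lambda_max
    lambda_max = float(np.sum(pairwise.sum(axis=0) * weights))
    ci = (lambda_max - m) / (m - 1)   # Equation (5-6): Consistency Index
    cr = ci / RANDOM_INDEX[m]         # Equation (5-7): Consistency Ratio
    return weights, cr

# First row as in the worked example; other rows are consistent placeholders
row1 = np.array([1.00, 2.00, 3.00, 2.00, 3.33, 1.66, 0.50])
matrix = np.outer(row1, 1.0 / row1)   # a perfectly consistent reciprocal matrix
weights, cr = ahp_weights(matrix)
print(np.round(weights, 2), round(cr, 3))  # CR <= 0.10 is acceptable
```

For the perfectly consistent placeholder matrix the CR is exactly zero; applying the same function to the surveyed 7x7 matrix of Table 5-4 should yield the CR of 0.01 discussed above.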

Table 5-7 DP sub-system weighting distribution using AHP

5.7 Validation of Weighting through LSTM Algorithm

For validation of the weighting distribution among the DP sub-systems, the LSTM algorithm
is used on part of the data set. The details of the prediction are discussed in the next chapter.
However, a high-level prediction of the reliability of the DP system using the LSTM algorithm
was used to determine the weights of the DP sub-system for validation of the AHP results. The
structure of LSTM with three layers is shown in Figure 5-8 [79, 80].

Figure 5-8 Structure of neuron with time-step

For this validation, a simple LSTM architecture is used to retrieve the weights and biases from
the dataset. The LSTM cell contains four neural network layers, each with four neurons.
Together these layers form the gating mechanism, which is a combination of the forget gate,
input gate and output gate (in that order in the sequence) [156]. The architecture of LSTM was
defined in Chapter 2, Section 2.10.3, and the detailed implementation is described in Chapter 6
along with a comparison with other architectures. The validation used an input shape of
(50000 x 7), which refers to 50,000 unique timesteps of data points and the seven sub-systems
(variables) [157, 156]. For deep networks, a heuristic method may be used to initialise the
weights depending on the activation function; here, the weight values for the sub-systems are
initialised through Xavier initialisation [158].
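
A minimal Keras sketch of such a validation network is shown below. The (window x 7) input arrangement follows the dataset shape described above, while the window length and the single regression output are illustrative simplifications rather than the exact thesis configuration.

```python
import tensorflow as tf

WINDOW = 50       # timesteps per training window (illustrative assumption)
N_FEATURES = 7    # the seven DP sub-systems A1..A7

model = tf.keras.Sequential([
    # Four LSTM units, with Xavier (Glorot) initialisation of the input weights
    tf.keras.layers.LSTM(4, input_shape=(WINDOW, N_FEATURES),
                         kernel_initializer="glorot_uniform"),
    tf.keras.layers.Dense(1),  # predicted overall DP reliability
])
model.compile(optimizer="adam", loss="mse")
model.summary()
# Training sketch: model.fit(x, y) with x shaped (samples, WINDOW, 7)
```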

The reliability of the DP system and its relationship with the weight functions of the
sub-systems are expressed using Equation (5-8) below:

$$DP\ Reliability = A_1 + A_2 + A_3 + A_4 + A_5 + A_6 + A_7 \qquad (5\text{-}8)$$

where

$$A_i = W_i X_i + B_i, \quad i = 1, \ldots, 7$$

$X_1, \ldots, X_7$ are the reliabilities of the DP sub-systems,
$W_1, \ldots, W_7$ are the weights for the DP sub-systems,
$B_1, \ldots, B_7$ are the biases for the DP sub-systems, and
$A_1, \ldots, A_7$ are the overall reliabilities of the DP sub-systems.

The weights of the sub-systems are determined through backpropagation in LSTM [159]. The
overview of the model architecture for sub-system weight distribution through
backpropagation of LSTM is shown in Figure 5-9. It shows the data, features, hidden layer,
and output arrangements. The main goal with backpropagation is to update each weight in the
network so that it tends to converge to the optimal point to ensure that the output is closer to
the target output. In this way, it minimises the error for each output neuron and the network as
a whole. To understand the mathematical calculation behind the back-propagation, it is
essential to implement the forward computation. The forward computation is represented
through Equations (5-9) to (5-13) [159, 160].
$$h_{f,t} = \sigma\left(W_f\,[h_{t-1}, x_t] + b_f\right) \qquad (5\text{-}9)$$
$$h_{i,t} = \sigma\left(W_i\,[h_{t-1}, x_t] + b_i\right) \qquad (5\text{-}10)$$
$$h_{o,t} = \sigma\left(W_o\,[h_{t-1}, x_t] + b_o\right) \qquad (5\text{-}11)$$
$$h_{c,t} = \tanh\left(W_c\,[h_{t-1}, x_t] + b_c\right) \qquad (5\text{-}12)$$
$$c_t = h_{f,t} * c_{t-1} + h_{i,t} * h_{c,t} \qquad (5\text{-}13)$$
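
An illustrative NumPy transcription of this forward step is given below. The dimensions follow the four-neuron description in Section 5.7, and the final hidden-state update h_t = o_t * tanh(c_t), though not written out in Equation (5-13), is the standard completion of the cell.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward_step(x_t, h_prev, c_prev, W, b):
    """One forward step of Equations (5-9) to (5-13); W and b hold the
    weights/biases of the f, i, o and c gates acting on [h_prev, x_t]."""
    z = np.concatenate([h_prev, x_t])        # concatenated [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate,    Eq. (5-9)
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate,     Eq. (5-10)
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate,    Eq. (5-11)
    c_hat = np.tanh(W["c"] @ z + b["c"])     # candidate cell, Eq. (5-12)
    c_t = f_t * c_prev + i_t * c_hat         # cell state,     Eq. (5-13)
    h_t = o_t * np.tanh(c_t)                 # new hidden state
    return h_t, c_t

H, D = 4, 7                                  # 4 hidden neurons, 7 sub-systems
rng = np.random.default_rng(0)
W = {g: rng.normal(size=(H, H + D)) for g in "fioc"}
b = {g: np.zeros(H) for g in "fioc"}
h_t, c_t = lstm_forward_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, b)
```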


Figure 5-9 LSTM network arrangement with Hidden Layer for backpropagation
In the LSTM cell structure, the old hidden state H(t−1) is concatenated with the current input
X(t); therefore the size of the input Z to the LSTM net is the sum of the number of neurons in
the hidden state H and the dimension of the input. As the LSTM output layer has H neurons,
each of the weight matrices is of size Z x H, and each bias vector is of size 1 x H. The weights
in the fully connected layer are fed to the SoftMax layer, and the resulting output is a
probability distribution over all possible items with the size of H x D.

The next process is to determine the network gradient analytically through backpropagation.
This process involves the partial derivative of a function and the use of the chain rule [161].
The backpropagation is based on the forward step results with a reversal in the direction of
step implementation. The backpropagation is implemented in the following four steps to
determine the optimised weights for the sub-system.

Step 1:
The relationships between the sub-systems and their weights are defined along with the
activation function, using the input X to the input layer, as represented by Equations (5-14) to
(5-23). The result of this layer is fed to the next layer, and so on.

$$Z = WX + b \qquad (5\text{-}14)$$
$$A_1 = W_1 X_1 + B_1 \qquad (5\text{-}15)$$
$$A_2 = W_2 X_2 + B_2 \qquad (5\text{-}16)$$
$$A_3 = W_3 X_3 + B_3 \qquad (5\text{-}17)$$
$$A_4 = W_4 X_4 + B_4 \qquad (5\text{-}18)$$
$$A_5 = W_5 X_5 + B_5 \qquad (5\text{-}19)$$
$$A_6 = W_6 X_6 + B_6 \qquad (5\text{-}20)$$
$$A_7 = W_7 X_7 + B_7 \qquad (5\text{-}21)$$
$$a = \sigma(Z) \qquad (5\text{-}22)$$
$$a = \sigma(A_1) \qquad (5\text{-}23)$$
Step 2:
In this step, the relationship of the input with the weights is computed for each layer. For
simplicity, Equations (5-24) and (5-25) are shown only for the Reference sub-system.

$$A_1^{L} = W_1^{L}\, a^{L-1} + B_1^{L} \qquad (5\text{-}24)$$
$$a^{L} = \sigma(z^{L}) \qquad (5\text{-}25)$$

Step 3:
The next step is to compute the error vector in the output layer L, as shown in Equation (5-26):

$$\delta^{L} = \nabla_a C \odot \sigma'(z^{L}) \qquad (5\text{-}26)$$

The rate of change of C with respect to the output activations is expressed as shown in
Equation (5-27):

$$\nabla_a C = (a^{L} - y) \qquad (5\text{-}27)$$

Before proceeding to the next step, the generalised error vector formula is determined. Once
the error vector at the last layer is computed, backpropagation is used to determine the error in
the previous layers, as in Equation (5-28):

$$\delta^{L} = (a^{L} - y) \odot \sigma'(z^{L}) \qquad (5\text{-}28)$$

Step 4:
In the final step, the process moves backwards from the output layer to the input layer through
the hidden layers to find the optimum weights and, at the same time, reduce the error through
the gradient of the cost function [162, 163]. The model consists of 3 hidden layers to determine
the optimal sub-system weights. The layers are defined as below:
L → output layer
L−1 → hidden layer before the output layer
L−2 → 2nd hidden layer
L−3 → 3rd hidden layer
L−4 → input layer

The error vector for a layer l, propagated back from the layer above, is given by Equation
(5-29):

$$\delta^{l} = \left(w^{l+1}\right)^{T} \delta^{l+1} \odot \sigma'(z^{l}) \qquad (5\text{-}29)$$

where $(w^{l+1})^{T}$ is the transpose of the weight matrix of the (l+1)th layer.

The transpose is applied to the weight matrix $(w^{l+1})^{T}$ so that the error can be thought
of intuitively as moving backwards through the network, giving a measure of the error at the
output of the lth layer. Similarly, the values of each weight in the hidden layers are calculated
by expressing the weights as $W_{jk}^{L}$, with j representing the feature and k the actual
neuron in the hidden layer.

Finally, the Hadamard product with $\sigma'(z^{L})$ is computed. This step moves the error
backwards through the activation function in layer L, giving the error $\delta^{L}$ in the
weighted input to layer L, which is used to determine whether the model meets the acceptable
level and outperforms other algorithms.

The gradient of the cost function is then calculated through Equations (5-30) and (5-31):

$$\frac{\partial C}{\partial W_{jk}^{L}} = a_{k}^{L-1}\, \delta_{j}^{L} \qquad (5\text{-}30)$$

$$\frac{\partial C}{\partial b_{j}^{L}} = \delta_{j}^{L} \qquad (5\text{-}31)$$

This then allows adjustment of the weights and biases to minimise the cost function. The
LSTM model is trained using initial values to find the optimum values for the weights and, at
the same time, to converge rapidly. During the training of the model using the actual data from
the DP vessel, it was found that the weight distribution for the sub-systems converged to
optimum results. Table 5-8 shows the weighting distribution of the DP sub-systems obtained
through LSTM.
Table 5-8 DP sub-system weighting distribution using LSTM

The values obtained through LSTM either matched the AHP results or were very similar to
them. Therefore, the AHP results were rounded to the nearest values as per the LSTM results
for the further research study.

5.8 Summary

In this chapter, the AHP framework was used to assign the weighting for the DP sub-systems.
The systematic approach and its application to the research study were described through a
step-by-step process. The process involves the work of experts in determining the sub-systems,
goals, criteria, sub-criteria, and relative weighting of the sub-systems. The AHP methodology
is proven and straightforward, as it addresses the questions of reliability and consistency of
the results so that the approximation can be applied with confidence. At the same time, it
provides a strong basis for comparing the model results and evaluating their performance
where necessary. Finally, the results obtained were verified through the CR and validated
using LSTM so that they can be used in the remaining parts of the research.

In the next chapter, Stage 4 of big data analytics involving predictive analytics modelling and
a new research framework for the reliability of the DP system will be described.

6. Dynamic Positioning – Reliability Index (DP-RI): Predictive
Analytics
6.1 Introduction

In this chapter, stage 4 of the big data analytics shown in Figure 3-1, involving predictive
analytics, is discussed. Predictive analytics is used for the prediction of the reliability of the
DP system, both for offline simulation and for real-time application during complex offshore
marine operations. A new research framework called the Dynamic Positioning – Reliability
Index (DP-RI) is introduced, with its high-level architecture presented in Section 6.2. In the
next section, the mathematical modelling for DP-RI computation through the RBD is
discussed: the RBD for each sub-system is constructed depending on the vessel and system
configuration, and the overall DP reliability is then computed. The following section discusses
the basic predictive analytics in terms of the step-by-step procedure for the prediction of the
reliability of the DP system (DP-RI). The section describes the different RNN models and the
machine specifications, such as the Graphics Processing Unit (GPU) and other hardware used
for this research, along with TensorFlow 2.0, which has a vast ecosystem of related
components, including libraries such as TensorBoard and deployment and production APIs.
In the next section, the datasets used for comparing the performance of the different
algorithms and the split between the training, validation, and test data are presented. The
biases affecting the RNN models and the AIF360 framework used to address them are then
detailed. Finally, the performance results of each algorithm are compared using the evaluation
metrics to select the suitable RNN models for the DP-RI advisory tool.

6.2 DP-RI Concept

The DP-RI concept is proposed to aid an operator with quantitative and qualitative
representations of the reliability of DP systems during complex marine operations. This
concept is not a replacement or an alternative solution for the current reliability assessment; it
will enhance the existing reliability assessment results by combining them with a newly
developed database and industry experts’ knowledge [2]. The DP system is classified into
various sub-systems using big data analytics and a correlation method, as described in Chapter
4. The Reliability Index (RI) of each of the sub-systems has been calculated based on the
weighting factor, DP class type, configuration, and mode of operation, as in Chapter 5.


Figure 6-1 Overall Architecture of DP-RI concept


Figure 6-1 shows the overall architecture of the DP-RI concept, indicating the complete steps
from data sources to prediction and prescriptive solutions. In this chapter, the DP-RI concept
is explained through a detailed research framework covering the methodology, dynamic
reliability modelling, DP-RI computation through RBD for mathematical modelling and,
finally, prediction of DP-RI using LSTM. The DP-RI concept supports both offline prediction
and real-time forecasting based on the current input and historical information.

6.2.1 DP-RI Research Framework

The DP-RI is an advisory tool aiding a DP operator with quantitative and qualitative
representations of the reliability of DP systems during complex marine operations [2]. Due to
the enormous amount of work and cost involved in traditional quantitative assessment, the DP
community can use this research framework as an efficient alternative solution tool. In this
chapter, the reliability of the DP system is determined through a mathematical calculation
using RBD for the sub-systems and at the same time prediction using LSTM based on field
data. Through the process, a simple mathematical formulation for the reliability of the DP
system is introduced in the form of the Reliability Index (RI). The reliability representation in
DP-RI is closer to reality than the current methodology, providing a complete overview to the
operator and aiding them to take the necessary action in the case of any failures within the DP
system.

Different models of deep learning algorithms have been used for TSP forecasting in the DP-
RI tool to determine their suitability based on the accuracy and speed of prediction. The
architecture and framework for the DP-RI are shown in Figure 6-2. It depicts the flow of
information from different databases to individual systems to reliability modelling and, finally,
the real-time prediction of reliability [2]. The DP-RI research framework can be used for
offline forecasting to allow the DPO and vessel to prepare for a specific operation. Similarly,
the tool supports real-time prediction during complex offshore marine operations to assist the
DPO in understanding the reliability of the DP system and, at the same time, the status of each
sub-system, so as to focus on critical areas in the case of failures. The final predicted DP-RI
value can be scaled to different DP classes and ship configurations, as it is consistent with the
mathematical calculations.


Figure 6-2 DP-RI research framework for the Prescriptive Analytics


6.2.2 Generalised DP-RI Formulation with the Weighting of Sub-Systems

The Reliability Index (RI) is calculated based on the reliability of each sub-system and their
weighted contribution to the overall vessel DP performance based on the Analytical Hierarchy
Process [2]. The DP class type, system configuration and mode of operation are taken into
consideration for an accurate representation of the DP-RI as a result of LSTM prediction
defined in Section 6.7 and Section 6.8. With careful analysis of the databases, review of the
availability of redundant systems, experience, knowledge from previous accidents, and input
from industry experts (consultants, designers, engineers, operators, etc.) RI is represented for
normal and fault conditions through Equation (6-1):

$$RI(\text{condition}, \text{type}) = \left\{\left(0.20\,A1 + 0.14\,A2 + 0.10\,A3 + 0.10\,A4 + 0.05\,A5 + 0.10\,A6 + 0.30\,A7\right) \times K\right\} \qquad (6\text{-}1)$$

where

$$K = \begin{cases} 1 & \text{if condition = normal, type = all types} \\ 0.9 & \text{if condition = failure, type = DP3} \\ 0.8 & \text{if condition = failure, type = DP2} \\ 0.3 & \text{if condition = failure, type = DP1} \end{cases}$$

Based on the class of DP vessel, the reliability will vary as the redundancy of the components
in individual systems is designed for fault tolerance in each type of DP class [4].
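
A minimal sketch of Equation (6-1) follows, assuming the sub-system weights and K factors as reconstructed above; the reliability values fed to the function in the example are illustrative, not field data.

```python
# Weighting factors of Equation (6-1) and K factors per condition and DP class
WEIGHTS = {"A1": 0.20, "A2": 0.14, "A3": 0.10, "A4": 0.10,
           "A5": 0.05, "A6": 0.10, "A7": 0.30}
K_FAILURE = {"DP3": 0.9, "DP2": 0.8, "DP1": 0.3}

def dp_ri(sub_system_reliability, condition, dp_class):
    """Weighted DP Reliability Index for one time step (Equation 6-1)."""
    k = 1.0 if condition == "normal" else K_FAILURE[dp_class]
    return k * sum(WEIGHTS[name] * value
                   for name, value in sub_system_reliability.items())

# Illustrative: all sub-systems fully reliable on a DP2 vessel after a failure
print(dp_ri({name: 1.0 for name in WEIGHTS}, "failure", "DP2"))  # 0.8 x 0.99
```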

6.3 Methodology for Mathematical DP-RI computation

For the mathematical computation of the DP-RI, the RBD method is used due to its efficiency
in representing, quantitatively, the actual status of complex systems. A typical reliability
model is represented with series and parallel combinations of sub-systems/components, as
shown in Figure 6-3.

Figure 6-3 Typical RBD arrangement in series & parallel configuration

An RBD presents a logical relationship between the system, sub-systems, and components. A
DP system is modelled for reliability computation and analysis using block diagrams [12].
During the formulation of DP-RI, certain factors are taken into consideration, which affect the
mathematical computation. These are listed below [19]:
• Sub-Systems / Components architecture
• DP vessel and component Modes of Operation
• Voting Configuration
• Overall System and Sub-System functional requirement

6.3.1 Sub-System / Component Architecture

The sub-systems/components are represented through RBD models based on the system design
architecture. The system architecture is one of the below models [19, 64, 14]:
• Static System Models
• Series Model
• Stand-by System
• (K,n) System
• Parallel Model

Based on the system configuration, the sub-system reliability is calculated using the principle
of probability theory. If the system had more than one function, each function was considered
individually, and a separate reliability block diagram was established for each system function.
The system reliability is then modelled using the reliability of the various sub-systems [65,
67]. The mathematical model is used to assist in making changes to the system for reliability
improvement. The model is used to identify weak links in the design and to indicate where
reliability improvement activities should be introduced.

6.3.2 DP Vessel and Components Modes of Operation

A DP system operates in different modes during complex marine operations depending on the
functionalities required by the vessel [1, 59, 164]:
• Station Keeping
• Joystick

• Auto-Pilot mode
• Auto Heading
• Auto track / Follow Target

The DPO and Captain would ensure that the sub-systems are arranged as per the DP operating
manuals and site-specific risk assessment instructions. From a functional safety perspective,
the components within the sub-system will be operating in specific modes to fulfil the
requirements of the safety-related systems. The components within the sub-system determine
the reliability of the sub-system based on the mode of operation which will fall into one of
the following groupings based on the vessel type and DP configuration [64, 14]:
• Low demand mode
• High demand mode
• Continuous Mode

6.3.3 Sub-System / Component Voting Configuration

The sub-system architecture and modes of operation determine the component’s configuration
and voting group to prevent failure of a safety function in the case of accidental events. The
voting configuration provides redundancy depending on the criticality of the signal, as
discussed in Section 4.3 in Chapter 4. The sub-system and the components within the sub-
system are grouped under one of the following voting configurations [19, 64]:
• 1oo1 (one-out-of-one)
• 1oo2 (one-out-of-two)
• 1oo3 (one-out-of-three)
• 2oo2 (two-out-of-two)
• 2oo3 (two-out-of-three)

The voting configuration not only provides the reliability but also the availability and safety
of the system when hidden failures are difficult to detect.

6.3.4 Overall System and Sub-System Requirement Definition

The mathematical computation model and the prediction model for the reliability of a
sub-system can be established only when the system design assumptions and uncertainties
have been adequately defined [165]. In this section, the vessel type, class, system set-up,
sub-system configuration, critical/non-critical and redundant/non-redundant grouping, and
design boundary are defined for the experiment in the next section. The sub-system signals
were identified and grouped across different categorisations, considering the design phase for
evaluation [1, 9, 59, 164, 166, 167]. The grouping of the signals for the different sub-systems
is presented in Sections 4.4 to 4.10 in Chapter 4. The sub-system component arrangements,
architectures, voting groups, and criticality groupings are shown in Table 4-2 to Table 4-8.

The DP system is a composite entity with complex integration between sub-systems,
comprising equipment, software, materials, procedures, and personnel. In this research study,
the analysis of the sub-systems was performed based on two aspects [48]:
• Structural focus refers to the physical architecture defining the hierarchy of system, sub-
systems, and components.
• Functional focus refers to the logical architecture, which depicts the functional
relationship between the sub-systems.

The system may be transformed into a functional block diagram or RBD or fault tree to
represent the functional architecture. The RBD is built as success-oriented networks
illustrating how the sub-systems operate as functional blocks to fulfil the overall DP system
functional requirement [8, 167]. The structure of the RBD is described mathematically by
structure functions. This structure-function will be used to calculate the sub-system reliability.
The experimental set-up for the reliability calculation of the sub-systems was performed with
the following parameters defined as a standard for the two approaches and validation [48]:
• System
• System Boundary
• Sub-Systems
• Assemblies / Components
• Outputs
• Inputs
• Boundary conditions
• Support
• External Threats

The DP-RI mathematical computation was based on the failure data from the data lake defined
in Chapter 3. The failure data were collected for the equipment at sensor level and generalised
for use in different vessel applications. The uncertainties are treated as known uncertainties,
which introduce only minor error into the system; at a broader scale, therefore, the efficiency
remains at a high level [1, 56]. In the next section, the mathematical computation of the DP-RI
is described. Its output acts as the target variable for the reliability prediction and is further
used as the actual output value for comparison and performance evaluation. If the results prove
to be within an acceptable range, then the mathematical calculation of the reliability of the DP
system based on the weights can be used.

6.4 Mathematical Computation of DP-RI – Failure data

The mathematical computation of DP-RI is highly dependent on the structural focus, which is
expressed through the RBDs. The structural architecture defines the basic system hierarchy of
each of the sub-systems and the association of lower levels and components with higher-level
assemblies and systems. Once the structural aspects of the physical sub-system are defined, the
functional focus is taken into consideration for the mathematical computation [4, 48, 165]. As
shown in Figure 6-2, for mathematical analysis, each sub-system is further divided into groups
of different equipment. The three main activities involved in the calculation of reliability of
sub-systems are [168]:
• Reliability Block Diagram based on system configuration
• System Diagnostic based on voting group
• Pattern recognition for functional fault identification

The reliability of the system and sub-systems can be calculated through the Probability of
Failure on Demand (PFD) represented by mathematical Equations (6-2) and (6-3) [19, 169]:
$$PFD_{System} = PFD_{sub\text{-}system\,1} + PFD_{sub\text{-}system\,2} + \cdots + PFD_{sub\text{-}system\,n} \qquad (6\text{-}2)$$
$$PFD_{sub\text{-}system} = PFD_{comp\,1} + PFD_{comp\,2} + \cdots + PFD_{comp\,m} \qquad (6\text{-}3)$$

where
$PFD_{System}$ = average probability of failure on demand for the DP system
$PFD_{sub\text{-}system}$ = average probability of failure on demand for a DP sub-system
$PFD_{comp}$ = average probability of failure on demand for a component
n = number of sub-systems in the DP system
m = number of components in the sub-system

6.4.1 Sub-System Components Voting Configuration

The components and sensors are arranged to increase reliability depending on the complexity
of the operation and the criticality of the specific mode of operation. Section 6.3.3 described
the possible arrangements of the components within the sub-systems and of the sensors within
the components. The reliability formulation, or average PFD, is highly dependent on the
voting configuration. The architecture and generalised formula for each of the voting
configurations used for the reliability calculation of the sub-systems, and in turn for the DP
system, are described below [19, 64, 14].

1oo1:

The voting configuration for 1oo1 (one-out-of-one) is shown in Figure 6-4, and the
corresponding average probability of failure on demand is represented by Equation (6-4).

Figure 6-4 1oo1 voting configuration

$$PFD_{AVG} = (\lambda_{DU} + \lambda_{DD})\, t_{CE} \qquad (6\text{-}4)$$

1oo2:

The voting configuration for 1oo2 (one-out-of-two) is shown in Figure 6-5, and the
corresponding average probability of failure on demand is represented by Equation (6-5).

Figure 6-5 1oo2 voting configuration

$$PFD_{AVG} = 2\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^2 t_{CE}\, t_{GE} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right) \qquad (6\text{-}5)$$

1oo3:

The voting configuration for 1oo3 (one-out-of-three) is shown in Figure 6-6, and the
corresponding average probability of failure on demand is represented by Equation (6-6).

Figure 6-6 1oo3 voting configuration


$$PFD_{AVG} = 6\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^3 t_{CE}\, t_{GE}\, t_{G2E} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right) \qquad (6\text{-}6)$$

2oo2:
The voting configuration for 2oo2 (two-out-of-two) is shown in Figure 6-7, and the
corresponding average probability of failure on demand is represented by Equation (6-7).

Figure 6-7 2oo2 voting configuration

$$PFD_{AVG} = 2\,\lambda_{D}\, t_{CE} \qquad (6\text{-}7)$$


2oo3:
The voting configuration for 2oo3 (two-out-of-three) is shown in Figure 6-8, and the
corresponding average probability of failure on demand is represented by Equation (6-8).

Figure 6-8 2oo3 voting configuration

$$PFD_{AVG} = 6\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^2 t_{CE}\, t_{GE} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right) \qquad (6\text{-}8)$$

where
λ_DU → dangerous undetected failure rate of a channel in a sub-system
λ_DD → dangerous detected failure rate of a channel in a sub-system
t_CE → channel equivalent mean downtime
t_GE → system equivalent downtime
β → fraction of undetected failures that have a common cause
β_D → fraction of the failures detected by the diagnostic tests that have a common cause
MTTR → Mean Time To Restoration
MRT → Mean Repair Time
T_1 → proof test interval
T_2 → interval between demands
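
The following sketch transcribes Equations (6-4), (6-5) and (6-8) directly into Python; the per-hour failure rates and time parameters in the example are illustrative values, not the thesis failure data.

```python
def pfd_1oo1(l_du, l_dd, t_ce):
    """Equation (6-4): 1oo1 average PFD."""
    return (l_du + l_dd) * t_ce

def pfd_1oo2(l_du, l_dd, beta, beta_d, t_ce, t_ge, mttr, mrt, t1):
    """Equation (6-5): 1oo2 average PFD."""
    return (2 * ((1 - beta_d) * l_dd + (1 - beta) * l_du) ** 2 * t_ce * t_ge
            + beta_d * l_dd * mttr + beta * l_du * (t1 / 2 + mrt))

def pfd_2oo3(l_du, l_dd, beta, beta_d, t_ce, t_ge, mttr, mrt, t1):
    """Equation (6-8): the 1oo2 structure with a factor of 6 instead of 2."""
    return (6 * ((1 - beta_d) * l_dd + (1 - beta) * l_du) ** 2 * t_ce * t_ge
            + beta_d * l_dd * mttr + beta * l_du * (t1 / 2 + mrt))

# Illustrative parameters only (rates per hour, times in hours)
print(pfd_1oo2(l_du=2e-6, l_dd=1e-5, beta=0.10, beta_d=0.05,
               t_ce=100.0, t_ge=50.0, mttr=8.0, mrt=8.0, t1=8760.0))
```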

6.4.2 Sub-System Reliability Computation

The system architecture/voting was used to calculate sub-system reliability from the PFD. In
this section, the component voting for each sub-system was used to determine the reliability of
the sub-systems [48, 19]. The seven DP sub-systems’ reliability is calculated individually using
the RBD architecture, and then the overall system-level reliability is calculated. The model
architecture of the DP3 vessel, defined in Sections 4.4 to Section 4.10 in Chapter 4, is used for
the calculation of the reliability of the DP sub-systems.

Reference System (A1):

The average PFD of the Reference System ($PFD_{A1}$) is represented by Equations (6-9)
and (6-10), based on the different component voting configurations defined in Section 6.4.1.

$$PFD_{A1} = PFD_{GYRO} + PFD_{MRU} + PFD_{GPS} + PFD_{DGPS} \qquad (6\text{-}9)$$

$$PFD_{A1} = 2 \times \left\{6\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^3 t_{CE}\, t_{GE}\, t_{G2E} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} + \left\{2\,\lambda_D\, t_{CE}\right\} + \left\{2\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^2 t_{CE}\, t_{GE} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} \qquad (6\text{-}10)$$

that is, two 1oo3 voted groups, one 2oo2 voted group and one 1oo2 voted group.

DP control System (A2):

The average PFD of the DP Control System ($PFD_{A2}$) is represented by Equations (6-11)
and (6-12), based on the different component voting configurations defined in Section 6.4.1.

$$PFD_{A2} = PFD_{OS} + PFD_{NET} + PFD_{FS} + PFD_{RIO} \qquad (6\text{-}11)$$

$$PFD_{A2} = 3 \times \left\{2\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^2 t_{CE}\, t_{GE} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} + \left\{6\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^3 t_{CE}\, t_{GE}\, t_{G2E} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} \qquad (6\text{-}12)$$

that is, three 1oo2 voted groups and one 1oo3 voted group.

Thruster / Propulsion System (A3):

The average PFD of the Thruster / Propulsion System ($PFD_{A3}$) is represented by
Equations (6-13) and (6-14), based on the different component voting configurations defined
in Section 6.4.1.

$$PFD_{A3} = PFD_{T1} + PFD_{T2} + PFD_{T3} + PFD_{T4} + PFD_{T5} + PFD_{T6} + PFD_{T7} + PFD_{T8} \qquad (6\text{-}13)$$

$$PFD_{A3} = 4 \times \left\{2\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^2 t_{CE}\, t_{GE} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} \qquad (6\text{-}14)$$

that is, four 1oo2 voted thruster pairs.

Power System (A4):

The average PFD of the Power System ($PFD_{A4}$) is represented by Equations (6-15) and
(6-16), based on the different component voting configurations defined in Section 6.4.1.

$$PFD_{A4} = (PFD_{DG1} + PFD_{DG2}) + (PFD_{DG3} + PFD_{DG4}) + (PFD_{DG5} + PFD_{DG6}) + (PFD_{DG7} + PFD_{DG8}) \qquad (6\text{-}15)$$

$$PFD_{A4} = 4 \times \left\{2\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^2 t_{CE}\, t_{GE} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} \qquad (6\text{-}16)$$

that is, four 1oo2 voted generator pairs.

Electrical System (A5):

The average PFD of the Electrical System ($PFD_{A5}$) is represented by Equations (6-17)
and (6-18), based on the different component voting configurations defined in Section 6.4.1.

$$PFD_{A5} = PFD_{SWBD1} + PFD_{SWBD2} + PFD_{SWBD3} + PFD_{SWBD4} + PFD_{CB1} + PFD_{CB2} + PFD_{CB3} + PFD_{CB4} \qquad (6\text{-}17)$$

$$PFD_{A5} = \sum_{k=1}^{8} (\lambda_{DU,k} + \lambda_{DD,k})\, t_{CE,k} \qquad (6\text{-}18)$$

that is, eight 1oo1 voted components.

Environmental System (A6):

The average PFD of the Environmental System ($PFD_{A6}$) is represented by Equations
(6-19) and (6-20), based on the different component voting configurations defined in Section
6.4.1.

$$PFD_{A6} = PFD_{WIND} + PFD_{WAVE} + PFD_{CURRENT} \qquad (6\text{-}19)$$

$$PFD_{A6} = 3 \times \left\{6\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^3 t_{CE}\, t_{GE}\, t_{G2E} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} \qquad (6\text{-}20)$$

that is, one 1oo3 voted sensor group per environmental component.

Human / Operator Error (A7):

The average PFD of the Human / Operator Error group ($PFD_{A7}$) is represented by
Equations (6-21) and (6-22), based on the different component voting configurations defined
in Section 6.4.1, where $HE_1$ to $HE_4$ denote the four human-error component groups
defined in Chapter 4.

$$PFD_{A7} = PFD_{HE1} + PFD_{HE2} + PFD_{HE3} + PFD_{HE4} \qquad (6\text{-}21)$$

$$PFD_{A7} = 4 \times \left\{6\left((1-\beta_D)\lambda_{DD} + (1-\beta)\lambda_{DU}\right)^3 t_{CE}\, t_{GE}\, t_{G2E} + \beta_D \lambda_{DD}\, MTTR + \beta \lambda_{DU}\left(\tfrac{T_1}{2} + MRT\right)\right\} + \text{Influencing Factor } (IF) \qquad (6\text{-}22)$$

6.4.3 DP-RI Computation

The overall DP-RI computation is based on the reliability of the sub-systems. As discussed in
the previous sections, the PFD for the sub-systems is calculated based on the component voting
configuration within the sub-systems, mode of operation, class of vessel, and DP vessel system
configuration during a specific offshore complex operation. For the system level reliability
computation, the PFD at the system level is computed using Equation (6-23) [170].
$$PFD_{DP\text{-}RI} = PFD_{A1} + PFD_{A2} + PFD_{A3} + PFD_{A4} + PFD_{A5} + PFD_{A6} + PFD_{A7} \qquad (6\text{-}23)$$

The above equation for calculating the overall PFD has proved useful in various safety system
reliability applications [15, 62, 171, 172]. For a sophisticated safety system, the PFD represents
the reliability measure, which is an unavailability number. The reliability of the DP system is
calculated using Equation (6-24) [170].
$$PFD_{DP\text{-}RI} = 1 - \frac{1}{\text{ŧ}} \int_{0}^{\text{ŧ}} RI(t)\, dt \qquad (6\text{-}24)$$

where
RI(t) → survivor function, or reliability of the DP system (DP-RI)
ŧ → test interval
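
Under the stated definitions, the two system-level steps can be sketched as below, with a trapezoidal approximation standing in for the integral of Equation (6-24); the decaying reliability curve in the example is illustrative only.

```python
import numpy as np

def pfd_dp_system(sub_system_pfds):
    """Equation (6-23): overall PFD as the sum of the seven sub-system PFDs."""
    return float(sum(sub_system_pfds))

def average_pfd_from_reliability(ri_samples, test_interval):
    """Equation (6-24): PFD = 1 - (1/T) * integral of RI(t) over [0, T],
    approximated by the trapezoidal rule on equally spaced RI samples."""
    ri = np.asarray(ri_samples, dtype=float)
    dt = test_interval / (len(ri) - 1)
    integral = np.sum((ri[:-1] + ri[1:]) / 2.0) * dt
    return 1.0 - integral / test_interval

# Illustrative: reliability decaying linearly from 1.0 to 0.98 over 1000 h
print(average_pfd_from_reliability(np.linspace(1.0, 0.98, 101), 1000.0))  # ~0.01
```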

The mathematical computation is used for performance evaluation of the RNN, which is used
for the prediction of DP system reliability in the next section. The model predicted value is
compared with the numerical value to determine the error. The mathematical computation
obtained through RBD gives a quantitative representation of the reliability, which consumes a
considerable amount of time, effort, and enormous cost due to the experts and consultants
involved from different stakeholders. Therefore, this is not a preferred method for all vessels.
However, if the prediction results could be proven to have the same effects as the mathematical
computation, then reliability prediction can be widely used.

6.5 Predictive Analytics – Prediction of DP-RI through RNN – Field Data

Reliability prediction is becoming the most commonly used method in the oil and gas industry
for assessing complex systems [4, 166, 173]. The DP sub-system data from the field (historical
and real-time) and test simulations are used for training the RNN models for predicting the
near-future values.

The test data were simulated to address bias arising from missing system configurations. The
reliability prediction through the RNN models was performed using information such as the
reliability requirements, system architecture, operating environment, operating profile and
failure mechanisms. These factors are fed into the model to ensure the desired degree of
precision in the prediction, as per the Institute of Electrical and Electronics Engineers (IEEE)
1413 standard [48]. The data used for the prediction of reliability are categorised into
two groups as below:
• Field Data: As described in Section 6.5.3
• Test data
o Accelerated data
o Non-accelerated data

6.5.1 Recurrent Neural Network Models for Prediction

RNN models have proved to be more efficient in addressing problems related to complex real-
time applications. Research has shown that RNN models provide more accurate, faster
responses and self-learning capability when implemented in diverse applications with minimal
effort from humans [80, 82, 83, 81, 84]. RNN models are more suitable for relatively long
interval delays in time series prediction and long-term dependencies in time series data. The
following RNN models defined in Section 2.10 are evaluated for suitability in the DP-RI
application.
• MLP
• SRNN
• GRU
• LSTM

6.5.2 Machine Specification and Programming Language

In this section, the details of the experimental set-up are given, covering the machine
specification for cloud computing with the RNN models, the programming language,
application programming interfaces (APIs), platforms, libraries and debugging plugins used
for the implementation of the research. The details of the hardware specification and
programming are in Section 8.3. The following tools, platforms, and libraries were used for
the experiments related to predictive analytics [174, 175, 176]:
• Compute Engine
• GCP and GPU
• Colaboratory or "Colab"
• Pycharm IDE
• AIF360 Source Toolkit
• Tensorflow 2.0
• Tensorboard and Tensor-graph
• Keras (Model, API & Tuner)
• Numpy and Pandas
• Matplotlib
• Seaborn

6.5.3 Datasets - Data Description and Processing

For the specific task of Predictive Analytics, data from a DP vessel was used. The vessel
configuration, DP class, and further sub-systems arrangement were discussed in Chapter 4.
The data collected from the DP-3 ship were raw data values from the historic event database.
Therefore, the data was characterised by noise and redundant information. Machine learning
model output depends on the context of the data, and it is essential to be clear on what the data
represents so that domain experts can apply theories and domain knowledge to the data for
better validation. The final data used for prediction consisted of 450,000 unique datapoints.

For the evaluation of the RNN models, the DP 3 vessel data samples were stored as time series
with a sample interval of 1 millisecond. The sensor data was grouped at the equipment level
before grouping into the sub-system level. The sub-system level corresponds to the seven
different sub-systems of a complex DP system [2, 16]. As in any dataset, the real-time data
collected from the field included redundant information and noise that could have had an
impact on the model training, resulting in bias. To ensure fair comparison and avoid bias, the
datasets were pre-processed with relevant feature engineering. The steps and methods are
described in detail in Chapter 3 and Chapter 4. A brief introduction is provided below before
the research experiments are described.

Data Collection: This is the first essential step for addressing problems through machine
learning models. The data sets were recorded by the DP-3 system vendor and stored in a
separate machine known as the History Station. The data collection was formulated in such a
way that the system automatically determined the relevant attributes of each sub-system and
stored the data in a comma-separated variable (.csv) file format.

Data Exploration and Profiling: After the data collection, the next step was to assess the
condition of the data to find trends, outliers, exceptions, incorrect, inconsistent or missing data,
errors, repeated and skewed information. The time-series data sets were filtered with a
weighted average of past data points, within a time-span of 10-points, to generate a smoother
estimation of the time series [177]. The profiling also meant that data was not removed from
the limited sample, which could have introduced unseen biases.

Formatting data for consistency: The datasets were obtained initially without considering
the mode of operation of the vessel or the number of equipment items connected to each
sub-system at a given point in time. The time-stamps were compared between the datasets,
which were classified with the sub-system data and the overall state of the DP system. In this
way, non-stationary time-series segments exhibiting spurious trends were removed from the
relational data [177]. The aggregation of data from the different data sources was formatted
to remove errors and inconsistency and to best fit the proposed machine learning models.

Improving data quality: In the datasets, erroneous data, missing values, extreme values, and
outliers were carefully dealt with, with a strategic interest in improving the accuracy of
prediction. Outliers in the input data, values above or below the expected range, and spurious
values were evaluated using data preparation tools such as Excel, Python and Alteryx. A
standard way of removing trends, differencing the observation at the previous time step (t−1)
against the current time step (t), was adopted, and repeated values were removed from the
dataset [178].
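
A short pandas sketch of the smoothing and de-trending steps described above is shown below; the file name and column layout are hypothetical, and a simple rolling mean stands in for the weighted average over the same 10-point span.

```python
import pandas as pd

# Hypothetical export of the history-station data, one column per sub-system
df = pd.read_csv("dp_history.csv")

# Smooth each series over a 10-point span
smoothed = df.rolling(window=10, min_periods=1).mean()

# Differencing: compare each observation with the previous time step (t-1)
# to remove trends, then drop rows that repeat the previous values exactly
diffed = smoothed.diff()
deduplicated = smoothed.loc[diffed.abs().sum(axis=1) > 0]
```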

Feature Engineering: This is the most critical step in data processing, as it involves the
transformation of raw data into attributes/features that represent meaningful patterns or
cognitive targets for the machine learning models. The time-series data were divided into
inputs (A1, A2, A3, A4, A5, A6, and A7) and output (DP-RI). The target models, through the
mathematical formulae, define the correlation between the input and output variables to
capture the specific relationships [177].

6.5.4 Training and Testing Dataset Split

The initial step involved splitting the dataset into two sets: one for training the models and the
second one for the evaluation of performance. GCP was used to ensure that the datasets used
were non-overlapping sub-sets between the training and testing data. For the first case, the
datasets of 450,000 samples were used to evaluate the performance of different machine
learning models. Figure 6-9 shows one split-ratio indicating that 67% of the samples were used
for Training (Training + Validation), and 33% of the samples were used for Testing
(Evaluating the performance).

Figure 6-9 Datasets Split ratio

For the optimisation algorithm of each model, the hyperparameters, batch size, and epochs
were varied across the sample distributions to prevent over-fitting while verifying the results
[179]. Therefore, the training samples were further divided into 50% training and 17%
validation to address the problem of model bias, which results in overfitting or underfitting.
Additionally, for evaluating the gradient descent, the sample distribution was varied as per
Table 6-1, and the performance was tested for each split.

Table 6-1 Split-ratio between Training / Validation and Testing datasets

The performance of the models was tested for all the combinations of ratios in the above
table; the ratio with the highest accuracy, measured through the evaluation metrics defined
in Section 6.8, was chosen for the DP-RI tool development.
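
A minimal sketch of the chronological 50/17/33 split is given below; splitting without shuffling is an assumption consistent with the time-series nature of the data.

```python
import numpy as np

def chronological_split(x, y, train=0.50, val=0.17):
    """Split samples into training/validation/testing without shuffling."""
    n = len(x)
    i_train = int(n * train)
    i_val = int(n * (train + val))
    return ((x[:i_train], y[:i_train]),           # 50% training
            (x[i_train:i_val], y[i_train:i_val]), # 17% validation
            (x[i_val:], y[i_val:]))               # 33% testing

# 450,000 samples with the seven sub-system features, as in Section 6.5.3
x, y = np.zeros((450_000, 7)), np.zeros(450_000)
(train, val, test) = chronological_split(x, y)
print(len(train[0]), len(val[0]), len(test[0]))   # 225000 76500 148500
```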

6.5.5 Learning Curve (Loss and Accuracy) Evaluation

The Learning Curve (LC), which provides information on the loss and accuracy of the model
prediction, is an excellent method to evaluate performance. Two learning curves are used to
assess the training dynamics, defined as follows [180]:
Train Learning Curve: a learning curve calculated from the training dataset that gives an idea
of how well the model is learning.
Validation Learning Curve: a learning curve calculated from a hold-out validation dataset that
gives an idea of how well the model is generalising.

The shape and dynamics of the learning curves were used to diagnose the behavior of different
algorithms. There are three common shapes and dynamics that will define the performance of
the RNN algorithm, which are usually represented as below [180, 181, 182]:
• Underfit
• Overfit
• Good Fit

In the next section, the learning curve results are discussed in detail to find the models with a
good fit for the data collected from the DP-3 vessel. The good-fit learning curves will represent
that the loss decreased to the point of stability with a minimal gap between two final loss values
of training and validation learning curves.

Underfitting: Underfitting in an LC refers to an RNN model that cannot learn the training
dataset or is unable to capture the underlying trend in the data. The LC will be a flat line or
show high loss values, indicating that the model is unable to learn from the training dataset.
The data collected from the DP vessels were evaluated to ensure that they do not fall into the
underfit category [180]. An underfit LC may be represented as shown in Figure 6-10: either
the curve is a flat line, or the training loss is decreasing and continues to decline at the end of
the plot, implying that further learning is required. Such an LC indicates that further
improvements are needed because the training process was halted prematurely.

Figure 6-10 Underfitting Loss Curve

Overfitting: Overfitting in an LC refers to a model that has learned the training dataset too
well, including the statistical noise or random fluctuations. In overfitting scenarios, the model
follows the error or noise too closely, resulting in an inability to generalise to new data and
leading to significant generalisation error. The generalisation error is reflected in the
validation loss [180]. An overfitting plot can be represented as shown in Figure 6-11, and it
reveals the dynamics of overfitting through the following points:
• The training loss continues to decrease with experience.
• The validation loss decreases to a point and begins increasing again.
• The inflexion/deviation point in the validation loss is where training should be halted.

Figure 6-11 Overfitting Loss Curve

Good Fit: A good fit in the LC refers to a model that exists between the overfit and underfit
cases. A good fit is represented by an LC in which the training and validation loss decrease to
the point of stability with a minimal gap between the two final loss values [182]. The minimal
gap is referred to as the “generalisation gap.”

Figure 6-12 Good fit Loss Curve

Unrepresentative Data Set: An unrepresentative dataset means a dataset that does not capture
the statistical characteristics relative to another dataset drawn from the same domain. It is used
to diagnose the properties of the dataset. There are usually two types, as shown below [52]:
• Unrepresentative Training dataset: Refers to a training dataset that does not contain
sufficient information to learn the problem relative to the validation dataset used to
evaluate it.
• Unrepresentative Validation dataset: Refers to a validation dataset that does not provide
sufficient information to evaluate the ability of the model to generalise.

Figure 6-13 Unrepresentative Training dataset Loss Curve

Figure 6-14 Unrepresentative Validation dataset Loss Curve

The RNN models used for the research were evaluated within a research framework that
addresses the problems of underfitting, overfitting, and unrepresentative datasets. The models
are analysed using the "Loss" and "Accuracy" metrics for the training and validation datasets.
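
A minimal matplotlib sketch for producing these diagnostic curves from a Keras training history is given below; the variable names are illustrative.

```python
import matplotlib.pyplot as plt

def plot_learning_curves(history):
    """Plot training and validation loss from a Keras History object to
    diagnose underfitting, overfitting or a good fit as described above."""
    plt.plot(history.history["loss"], label="train loss")
    plt.plot(history.history["val_loss"], label="validation loss")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.legend()
    plt.show()

# Usage sketch:
# history = model.fit(x_train, y_train, epochs=50,
#                     validation_data=(x_val, y_val))
# plot_learning_curves(history)
```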

6.6 Bias (Fairness) in RNN and Bias Types

Bias (fairness) is a measure of whether the decision-making developed with an RNN
algorithm is free from discrimination. Bias is "unconscious" and can result in inaccurate
results from algorithms [175, 183]. In recent years, bias has become an increasingly critical
factor in predictive and prescriptive analytics, as RNNs are used in high-risk decision-making
applications [175, 184]. Bias is often treated as a complex and multi-faceted concept that
depends on the context of the application. For the DP-RI research study, there are possibilities
of various biases caused by humans, data, or technology. To address the biases effectively, a
two-step approach is used in this research: in the first step, the types of biases for the DP-RI
application are identified, and in the second step, the biases are addressed by using the
fit/transform/predict paradigm through the AIF360 source toolkit [175].

For the DP-RI application, the types of biases are categorised into two groups [176, 185]:
• Human Related: Stereotyping, prejudice, or favouritism towards certain sensors,
components, or sub-systems over others. These biases can affect the collection and
interpretation of data, the design of a system, and how users interact with a system.
o automation bias
o confirmation bias
o experimenter’s bias
o group attribution bias
o Implicit / in-group bias
o out-group homogeneity bias
• Data Related: Systematic errors introduced by a data sampling or reporting procedure.
o coverage bias
o non-response bias
o participation bias
o reporting bias
o sampling bias
o selection bias

6.7 Bias Mitigation Framework

In this section, methods are presented to show how the different types of bias were addressed
systematically to ensure that the prediction results from the models are accurate and can be
validated against real-life scenarios. The accuracy of predictions/performance of the models
could be affected by bias and uncertainties if not appropriately addressed. All the different
types of biases identified in Section 6.6 are grouped and addressed by the following fairness
terms [186, 185]:

157
• Sample Fairness
• Label Fairness
• Model Fairness
• Observer Fairness
• Measurement Fairness

A systematic framework with bias mitigation algorithms, bias metrics, and bias explainers was implemented at different stages to address problems associated with biases, as shown in
Figure 6-15. Bias mitigation algorithms attempt to improve the fairness metrics by modifying
the training data, the RNN algorithm, or the predictions. The bias mitigation algorithm inspects
discrimination in the overall logic, ensures that the biasing is addressed, and assures model
fairness and fosters trust. The framework steps for bias mitigation are as follows [175, 184]:
• Import Data
• Check bias at Pre-Processing, In-Processing and Post-Processing stages
o Identify Protected attributes
o Determine the Privileged and Unprivileged groups
o Evaluate the bias against unprivileged groups detected in the metrics from
thresholds
• Mitigate Bias
• Compare original and mitigated results

All of the algorithms are implemented by inheriting from the Transformer class. Transformers are an abstraction for any process that acts on an instance of the Dataset class and returns a new, modified Dataset object [175, 187]. The different bias mitigation algorithms in AIF360 are as below, with a minimal usage sketch following Figure 6-15 [175]:
• Pre-Processing
o Disparate Impact Remover
o Learning Fair Representations
o Optimized Pre-processing
o Reweighing
• In-Processing
o Adversarial Debiasing
o Meta-Algorithm for Fair Classification

o Prejudice Remover Regularizer
o Rich Subgroup Fairness
• Post-Processing
o Equalized Odds Postprocessing
o Reject Option Classification
o Calibrated Equalized Odds Postprocessing

Figure 6-15 Addressing Bias for DP-RI RNN algorithm
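
As an illustration of the fit/transform paradigm described above, the sketch below shows how one of the pre-processing mitigation algorithms (Reweighing) can be applied with the AIF360 toolkit; the protected attribute "sensor_group" and the tiny data frame are illustrative placeholders, not the actual DP-RI schema.

# Minimal AIF360 sketch: quantify bias, then mitigate it with Reweighing.
# The protected attribute "sensor_group" and the toy data are placeholders.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({'feature': [0.2, 0.7, 0.5, 0.9],
                   'sensor_group': [0, 0, 1, 1],
                   'label': [0, 1, 1, 1]})
dataset = BinaryLabelDataset(df=df, label_names=['label'],
                             protected_attribute_names=['sensor_group'])
priv, unpriv = [{'sensor_group': 1}], [{'sensor_group': 0}]

# A disparate impact of 1.0 indicates no bias against the unprivileged group.
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print('Disparate impact (original):', metric.disparate_impact())

# Reweighing is a Transformer, so it follows the fit/transform pattern and
# returns a new, modified Dataset object with adjusted instance weights.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_transf = rw.fit_transform(dataset)
metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unpriv,
                                         privileged_groups=priv)
print('Disparate impact (reweighed):', metric_transf.disparate_impact())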

The adoption of a particular bias mitigation algorithm depends on the requirements when the RNN models are found to produce biased results. A few biases were identified at the beginning of the research study, some during model development, and a few after the implementation. Therefore, the mitigation algorithms acted as an effective method to address the bias issues relevant to the DP-RI context.

6.7.1 Sample Fairness

The real-time dataset collected from the DP-3 vessel did not cover all of the possible scenarios
of failure across DP sub-systems. Therefore, during data pre-processing, a distribution of
failures across the sub-systems was ensured for different failure scenarios to cover the full
spectrum. In this way, it was assured that the research would be based on unbiased data, which

is neutral, accurate, well balanced, and evenly distributed [176, 188]. The sample fairness
ensured that the relevant biases in Section 6.6 were appropriately addressed. The pre-
processing algorithms used to manage the sample biases for the DP-RI application are [189,
190, 191]:
• Disparate Impact Remover (DIR)
• Optimized Pre-processing (OPP)
• Reweighing (RW)

6.7.2 Label Fairness

The biases related to the mathematical model developed for the DP-RI were evaluated for
performance against the DP-Capability plot and DP vendor control system output [58, 177,
192]. The semi-qualitative results of the models were matched with the field status during
commissioning and sea trials. The targets for the algorithm were fixed with proper boundaries
considering uncertainty and defining the operational envelope [175]. The label fairness ensured
that relevant biases in Section 6.6 were appropriately addressed. The in-processing algorithms
used to manage the label biases for the DP-RI application are [193, 194, 195]:
• Adversarial Debiasing (AD)
• Prejudice Remover Regularizer (PRR)
• Rich Subgroup Fairness (RSF)

6.7.3 Model Fairness

Model bias was eliminated by splitting the 67% training dataset into two sets (Training +
Validation) and obtaining the learning curve to evaluate the performance. From the learning
curves, it was ensured that over-fitting and under-fitting were eliminated [196, 197, 187]. The
results proved to be a good fit, as the loss decreased to the point of stability with a minimal
gap between the two final loss values of the training and validation. The model fairness ensured
that the biases highlighted in Section 6.6 were appropriately addressed. The post-processing
algorithms used to manage the model biases for the DP-RI application are [198, 199]:
• Equalized Odds Postprocessing (EOP)
• Calibrated Equalized Odds Postprocessing (CEOP)

6.7.4 Observer Fairness

During the programming and result evaluation, there is a tendency to see what one expects to
see or wants to see. Therefore, it was ensured that the experiments were conducted by well-
trained persons and without any potential biases. This resulted in the avoidance of Observer
bias [200, 188]. The observer fairness ensured that the relevant biases in Section 6.6 were
addressed. The in-processing algorithms used to manage the observer biases for the DP-RI application are [193, 194, 195]:
• Adversarial Debiasing
• Prejudice Remover Regularizer
• Rich Subgroup Fairness

6.7.5 Measurement Fairness

Measurement bias occurs when there is an issue with a device resulting in systematic value distortion, which tends to skew the data in a particular direction. When one sensor failed and the other sensor was working, the software was programmed to freeze the value of the failed sensor through constant comparison with the redundant sensors [201]. Therefore, measurement bias was addressed with due diligence during data collection to prevent any bias during the run-time of the algorithm. The measurement fairness ensured that relevant biases highlighted in Section 6.6 were addressed. The pre-processing algorithms used to manage the measurement biases for the DP-RI application are [189, 190, 191]:
• Disparate Impact Remover
• Optimized Pre-processing
• Reweighing

6.8 Performance Results

In this section, the application of the RNN models to the DP-RI prediction from offline and
real-time data are discussed and compared. The data represents the maximum extreme
scenarios that the DP system could go through in its lifetime generated through field data and
test data (un-accelerated and accelerated tests). All of the different test cases were performed
with the various possible combinations to test the performance of the tool. In general, the four
RNN models took similar amounts of time (approximately 16-18 hours) for two sets of sample

predictions (150,000 and 300,000 samples), and the third (450,000 samples) took close to
twice as long, at approximately 30-32 hours for each time prediction. The standard evaluation
metrics, along with the learning curve method, were used to assess the prediction accuracy,
speed of prediction, and ability to scale to different types of vessel data. The methods used for
evaluating the performance of the RNN are divided into:
• Learning Curve Metrics
• Mathematical Evaluation Metrics

6.8.1 Model Comparison Using Learning Curves

Especially with the development of RNN, learning curves have become widely adopted for
models that learn incrementally over time to optimise their internal parameters [202]. Learning
curves help in evaluating model performance using multiple metrics and aid the user in
selecting the model that is suitable for a particular application.

For the DP-RI application, where predicting the reliability is critical, two metrics have been
used to classify the learning curve for selecting the models. One of the metrics is “Accuracy,”
which is used to measure the model performance, and the other is “Cross-entropy Loss,” which
is used for model optimisation [202]. The learning curve can be classified into one of two types
[179, 182].
• Performance Learning Curves: Learning curves calculated on the metric by which
the model will be evaluated and selected, e.g., accuracy.
• Optimization Learning Curves: Learning curves calculated on the metric by which
the parameters of the model are being optimised, e.g., loss.

It is common to create dual learning curves during training. The training dataset is typically
split into training and validation datasets as per Table 6-1. The models are evaluated on the
training datasets to give an idea of how well the model is “Learning” [202]. The validation
dataset provides a method to assess how well the model is “Generalising” [202].
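
As a minimal sketch of how such dual curves can be produced, the Keras history object can be plotted directly; the model topology and the random data below are placeholders for the actual DP-RI models and datasets.

# Sketch of dual (train/validation) learning curves in Keras.
# The model topology and random data are illustrative placeholders.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

X = np.random.rand(1000, 10)              # placeholder features
y = np.random.randint(0, 2, size=1000)    # placeholder labels

model = keras.Sequential([keras.layers.Dense(16, activation='relu'),
                          keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# validation_split carves a validation set out of the training data, giving
# a "learning" curve and a "generalising" curve per metric (loss, accuracy).
history = model.fit(X, y, epochs=200, batch_size=32,
                    validation_split=0.33, verbose=0)

for metric in ('loss', 'accuracy'):
    plt.figure()
    plt.plot(history.history[metric], label='Train')
    plt.plot(history.history['val_' + metric], label='Validation')
    plt.xlabel('Epoch'); plt.ylabel(metric.title()); plt.legend()
plt.show()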

Table 6-2, Figure 6-16, and Figure 6-17 show the performance of the MLP model.
Table 6-2 Learning Curve Table – MLP


Figure 6-16 Training and Validation Learning Curves (Loss) – MLP Model


Figure 6-17 Training and Validation Learning Curves (Accuracy) – MLP Model

Table 6-3, Figure 6-18, and Figure 6-19 show the performance of the SRNN model.
Table 6-3 Learning Curve Table - SRNN


Figure 6-18 Training and Validation Learning Curves (Loss) – SRNN Model


Figure 6-19 Training and Validation Learning Curves (Accuracy) – SRNN Model

Table 6-4, Figure 6-20, and Figure 6-21 show the performance of the GRU model.
Table 6-4 Learning Curve Table – GRU


Figure 6-20 Training and Validation Learning Curves (Loss) – GRU Model


Figure 6-21 Training and Validation Learning Curves (Accuracy) – GRU Model

Table 6-5, Figure 6-22, and Figure 6-23 show the performance of the LSTM model.
Table 6-5 Learning Curve Table – LSTM


Figure 6-22 Training and Validation Learning Curves (Loss) – LSTM Model


Figure 6-23 Training and Validation Learning Curves (Accuracy) – LSTM Model

In most of the cases, there was substantial evidence that LSTM outperformed the other models
and was more suitable for the DP-RI application. The learning ability of LSTM adapts quickly
and optimises the results for the validation datasets. Similarly, the forecast accuracy of LSTM
proved superior to the other models. As the number of epochs was increased, the datasets were
found to be learnable, and the accuracy improved for the models. In contrast to the other
models, LSTM proved to be a good fit, and the accuracy was 3% better than GRU, 13% better
than SRNN, and 32% better than MLP due to the LSTM attribute of constructing highly non-
linear mapping between the input variable and the target variable. Therefore, during the
comparison of RNN models using the learning curve, the results showed that LSTM is more
accurate, fair, trusted, and easily understood, which makes it suitable for the DP-RI application.

6.8.2 Evaluation Metrics

Once the learning curves had been evaluated, the RNN models were compared based on the
evaluation metrics, with the hyperparameters tuned to optimal values for performance analysis.
The hyperparameter optimisation techniques are described in Section 6.9. The Root Mean
Square Error (RMSE) evaluates how closely the predictions match the observations, as in
Equation (6-25) [69, 79].
$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(Y_j - \hat{Y}_j\right)^{2}}$   (6-25)

The Mean Absolute Error (MAE) measures the average absolute difference between observed and modelled
results, as in Equation (6-26) [69, 79].
$\mathrm{MAE} = \frac{1}{N}\sum_{j=1}^{N}\left|Y_j - \hat{Y}_j\right|$   (6-26)

where $Y_j$ and $\hat{Y}_j$ are the actual and predicted values, respectively, and $N$ is the number of
samples. The values usually range from 0 (perfect fit) to ∞ (no fit) based on the relative range
of the data. The test data sets, which were independent of the training datasets (Training +
Validation), were used to evaluate the performance of the models against the evaluation
metrics. RMSE is more sensitive to large deviations between forecasts and actuals. MAE, on
the other hand, is a suitable measure when forecast errors are proportional to energy costs.
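
As a worked illustration, Equations (6-25) and (6-26) can be computed directly in NumPy; the four actual/predicted values below are invented for illustration only.

# Equations (6-25) and (6-26) in plain NumPy with made-up values.
import numpy as np

y_true = np.array([0.92, 0.88, 0.95, 0.81])   # actual values Y_j
y_pred = np.array([0.90, 0.89, 0.93, 0.84])   # predicted values Y_hat_j

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))   # Equation (6-25)
mae = np.mean(np.abs(y_true - y_pred))            # Equation (6-26)
print(f'RMSE = {rmse:.4f}, MAE = {mae:.4f}')      # RMSE penalises large errors more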

Table 6-6 Optimised Hyperparameter for RNN models

Table 6-7 RMSE for different RNN Models

Table 6-8 MAE for different RNN Models


Figure 6-24 RMSE curve of different RNN models with optimised hyperparameters


Figure 6-25 MAE curve of different RNN models with optimised hyperparameters

Table 6-7, Table 6-8, Figure 6-24 and Figure 6-25 clearly show that the LSTM outperformed
the other models. In the majority of cases, the best performance for MAE and RMSE was
shown by the LSTM model. The performance results justify the statement that LSTM is the
best-suited model for the DP-RI tool application.

6.9 Hyperparameters for RNN Model Comparison

Hyperparameters are the variables of RNN models that govern the training process and the
topology of a model. The hyperparameters were kept constant during the training process of
the RNN models and tuned in successive runs of the training of a model. These variables are
different from the RNN model parameters. The variables in the model that are determined
using the training dataset are termed the model parameters. The model parameters in the
research are the weights and biases. Hyperparameters are adjusted to obtain the optimised
weights to maximise model prediction accuracy. The hyperparameters are categorised into two
groups as below [176, 203, 204, 185]:
• Model hyperparameters
o Number of Hidden Layers
o Number of Neurons
o Activation Function
• Algorithm hyperparameters
o Batch Size
o Epochs
o Split between the training and testing datasets
o Optimizer
o Learning Rate
o Regularisation
o Regularisation rate

In the next section, the key hyperparameters are identified and tuned for optimised results and
increased performance for each of the RNN models. The models are then evaluated as
discussed in Section 6.8 to identify their suitability for the DP-RI application. It is vital to
choose appropriate hyperparameters in the model to obtain optimised results and performance.
The optimal hyperparameters are determined by analysing their influence on the accuracy of
prediction and the convergence time. It is an empirical process to select the optimal
hyperparameter, so the grid search method and Keras-tuner were used.

6.9.1 Model Hyperparameters

Model hyperparameters influence model selection. For the RNN models used in this research
study, the model hyperparameters were tuned to get the optimal prediction.

Hidden Layer L:
The hidden layer is one or more layers between input and output layers where the neurons are
arranged with a set of weighted inputs and produce an output through an activation function
depending on the problem and the dataset [176, 205].

Hidden Layer Neurons N:


The neurons in the hidden layers do not have any direct connection with the problem; their primary function is to perform the computation and transfer information from the input nodes to the output nodes [176, 205]. These parameters determine the data transfer between different neurons and are kept constant across each layer for all of the RNN models. The ideal number of hidden layers and neurons depends on the problem and the dataset. The optimisation techniques determine the optimal number of hidden layers and neurons per layer through a systematic approach.

Activation Function:
The activation function introduces non-linearity into the output of a neuron to ensure that
neurons learn non-linear representation [176, 179]. The activation function determines the
output shape of each node in the layer. The following activation functions were used for
determining the optimal function for the DP-RI application:
• Tanh
• Sigmoid
• SoftMax
• ReLU
• Linear
After conducting a series of in-house experiments, it was found that the ReLU activation
function yields the best performance in terms of loss and accuracy. Thus, the ReLU activation
function was used for all the models.
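
A minimal sketch of how the candidate activation functions can be compared is given below; the layer sizes are illustrative and do not represent the tuned DP-RI topology.

# Sketch: swap each candidate activation into an otherwise identical stack
# and train/evaluate every variant in the same way (sizes are illustrative).
from tensorflow import keras

def build_model(activation):
    return keras.Sequential([
        keras.layers.Dense(64, activation=activation),
        keras.layers.Dense(64, activation=activation),
        keras.layers.Dense(1, activation='sigmoid'),
    ])

for act in ('tanh', 'sigmoid', 'softmax', 'relu', 'linear'):
    model = build_model(act)   # compile/fit/evaluate each variant identically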

6.9.2 Algorithm Hyperparameters

Algorithm hyperparameters influence the speed and quality of the learning algorithm. For the
RNN models used in this research study, the model hyperparameters below were tuned to get
the optimal prediction.

Batch Size:
Batch size is the number of patterns shown to the network before the weights are updated. It
defines how many patterns to read at a time and keep in memory [206]. Mini-batch gradient descent describes the case where the batch size lies between 1 and the size of the training set.

Epochs:
Epochs are the number of times that the learning algorithm will work through the entire training
dataset. The ideal combination of batch size and epochs depends on the problem and the dataset
[207, 202]. The two hyperparameters are tuned together.

Split between the training and testing datasets:


The split between the training (training + validation) and the testing dataset was performed to
evaluate the gradient descent and the learning curve performance [180, 202]. The split was
evaluated for different combinations as defined in Section 6.5.4 and Table 6-1.

Optimizer:
An optimizer is a hyperparameter used to minimise the loss function; it applies the computed gradients to the model's variables by iteratively calculating the loss and gradient for each batch
and adjusting the model during training. Gradually, the model will find the best combination
of weights and bias to minimise loss [208], where the lower the loss, the better the model's
predictions. The following optimizers were investigated [176, 207]:
• SGD (Stochastic Gradient Descent)
• RMSprop (Divide the gradient by the running average of its recent magnitude)
• Adam (Adaptive Moment Estimation)
• Adadelta (An Adaptive Learning Rate Method)
• FTRL (Follow the Regularized Leader)

Learning rate ƞ:
The Learning rate is referred to as the step size, which defines the amount that the weights are
updated during the training of the models [180, 206]. The learning rate has a significant
influence on the speed of model convergence and the training effect for RNN based models.

Regularization:
Regularization is a hyperparameter technique which penalises the coefficients to make slight modifications to the learning algorithm. Regularization in the RNN models penalises the
weight matrices to address the overfitting problem so that the model generalizes. The following
regularization methods were applied to determine the most suitable method [209].
• L1
• L2
• Dropout
• Early Stopping
• Regularization rate

The regularization updates the general cost function by adding a regularization term.
Cost function = Loss (e.g. mean square error, binary cross entropy) + Regularization term

Due to the addition of this regularization term, the values of weight matrices decrease because
it assumes that a neural network with smaller weight matrices leads to simpler models.
Therefore, it will also reduce overfitting to an extent.
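
A minimal Keras sketch combining the regularisation methods above is shown below; the rates and layer sizes are illustrative assumptions, not the tuned DP-RI values.

# Sketch of L2 weight penalties, Dropout and Early Stopping in one model.
from tensorflow import keras
from tensorflow.keras import regularizers

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu',
                       kernel_regularizer=regularizers.l2(0.01)),  # L2 term
    keras.layers.Dropout(0.2),           # randomly silences 20% of units
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Early stopping halts training when validation loss stops improving and
# restores the weights from the best epoch.
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                           restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.33, epochs=200,
#           callbacks=[early_stop])   # X_train/y_train are placeholders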

6.9.3 Hyperparameter Optimisation

The technique of selecting the correct set of models and algorithm hyperparameters for the
RNN model is generally referred to as “hyperparameter tuning or Optimisation” [210, 208]. It
is considered as the most critical step, and it determines the success of the DP-RI tool. For the
hyperparameter optimisation, two sophisticated and well-established methods were used as
below [211, 212, 210] :
• Grid Search
• Keras Tuner – Hyperband

Grid Search:
Grid search is widely used for hyperparameter optimisation. It involves constructing and
evaluating one model for each combination of the hyperparameters. Cross-validation is used
to evaluate each model, and the default of 3-fold cross-validation has been implemented. The
steps applied in a grid search are as follows [213, 208]:
• Define a grid on n dimensions
• For each dimension, define the range of possible values
• Search for all the possible configurations and wait for the best results

For a given set of hyperparameters and their potential assignments, the naive practice is to
search through the entire grid of parameter assignments and pick the one that performed the
best. There are numerous hyperparameters for the DP-RI application which makes grid search
infeasible as the number of possible assignments increases exponentially. Grid search and
cross-validation took an enormous amount of time for the optimisation as it involves
searching through a manually specified subset of the hyperparameter space [213, 214].
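
A minimal sketch of such a grid search is shown below; the wrapper import path matches the scikit-learn wrapper shipped with TensorFlow 2.x (newer code would use the scikeras package instead), and the parameter grid is illustrative.

# Sketch of grid search over batch size and epochs with 3-fold CV.
import numpy as np
from sklearn.model_selection import GridSearchCV
from tensorflow import keras
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

def create_model():
    model = keras.Sequential([keras.layers.Dense(16, activation='relu'),
                              keras.layers.Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

X = np.random.rand(200, 10)             # placeholder data
y = np.random.randint(0, 2, size=200)

param_grid = {'batch_size': [16, 32, 64], 'epochs': [50, 100]}
grid = GridSearchCV(KerasClassifier(build_fn=create_model, verbose=0),
                    param_grid=param_grid, cv=3)
grid_result = grid.fit(X, y)
print(grid_result.best_score_, grid_result.best_params_)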

Keras-Tuner:
Later in the research work, it was found that there was a development in Tensorflow 2.0, which
supported Keras-tuner. This was utilised, and the results obtained from both methods were the
same. The Keras-tuner is a library that helps to pick the optimal set of hyperparameters. It is
an easy-to-use, distributable hyperparameter optimisation framework that solves the problems
associated with the traditional method of hyperparameter search [212]. Keras-tuner makes it
easy to define a search space, and leverage included algorithms to find the best hyperparameter
values. There are four methods in Keras-tuner for optimisation [176, 212]:
• Bayesian Optimisation
• Hyperband
• Random Search algorithms
• SKlearn

For this research study, the “Hyperband” method was implemented for hyperparameter
optimisation. For fair comparisons between the different RNN models, the hyperparameters
were tuned independently to achieve the best possible prediction. The Hyperband tuning
algorithm uses adaptive resource allocation and early-stopping to quickly converge on a high-

performing model [211, 215]. The Tensorflow2.0 HParams dashboard provides historical
information to determine the combination of the hyperparameters for better prediction and
accuracy with faster convergence time. It consists of three different views to track and choose
the best combination of hyperparameters, as shown in Figure 6-26:
• Table view lists the runs, the hyperparameters, and the metrics.
• Parallel co-ordinates show each run going through an axis for each hyperparameter.
• Scatter plot view shows plots comparison of each hyperparameter with each metric.

Figure 6-26 Table view of HParams in Tensorboard
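
A minimal sketch of the Hyperband search with Keras-tuner is given below; the hyperparameter ranges are illustrative, not the final DP-RI search space.

# Sketch of Hyperband hyperparameter optimisation with Keras-tuner.
import kerastuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential()
    # Model hyperparameters: number of hidden layers and neurons per layer.
    for i in range(hp.Int('num_layers', 1, 3)):
        model.add(keras.layers.Dense(hp.Int(f'units_{i}', 32, 256, step=32),
                                     activation='relu'))
    model.add(keras.layers.Dense(1, activation='sigmoid'))
    # Algorithm hyperparameter: learning rate.
    lr = hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])
    model.compile(optimizer=keras.optimizers.Adam(lr),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model

tuner = kt.Hyperband(build_model, objective='val_accuracy', max_epochs=200,
                     directory='tuning', project_name='dp_ri')
# tuner.search(X_train, y_train, validation_split=0.33)   # placeholder data
# best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]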

The results showed that all hyperparameters have a significant impact on the prediction
accuracy of the models. The combination of the hyperparameters showed that there might be
possible trade-offs between the prediction accuracy and convergence time, which needed an
additional algorithm to find the right mix. The values shown in Table 6-9 were found to provide
the highest prediction accuracy and, at the same time, have faster convergence time. Similarly,
the optimised weights for the DP sub-systems with the best combination of hyperparameters
are shown in Table 6-10.

Table 6-9 Keras-Tuner Hyperparameter tuned for RNN models

Table 6-10 DP sub-system weighting for optimised hyperparameters

6.10 Offline DP-RI prediction

The DP-RI application enables the DPO to prepare themselves in advance for complex offshore
marine operations through offline simulation. The tool acts as an intelligent advisory-decision
support element that will enhance the current process of check-list and preparation strategies.
The time consuming and paper-based activities can be performed instead through the DP-RI
tool with actual conditions from the DP system integrated into the tool. If any of the sub-systems is under repair, then the overall DP system and vessel performance can also be calculated, and the results feed transparently into decisions on execution. The following are some of the
critical applications for the offline DP-RI application:
• Preparation of the vessel for complex offshore operations
• Evaluation of the impact of failed sensors
• Evaluation of the performance of the DP system with reduced sub-system availability
• Foundation for the test cases and for prescriptive analytics
• Training of a new DPO

The offline prediction may be used as the first step for developing additional features such as
real-time forecasting, prescriptive analytics, and evaluating suitability for different vessels. It
also enables various stakeholders to assess the vessel performance with a different
configuration to improve the system at the design stage and perform benchmarking between
vessels.

6.11 Real-Time DP-RI Forecasting

Real-Time predictive analytics for the DP-RI application involves extracting useful
information from the datasets received in real-time from the vessel through the vessel cloud
infrastructure. The main feature of the real-time DP application is forecasting what might
happen based on particular "if" scenarios rather than precisely predicting what will happen in
the future. It enables and supports the decision-making process in real-time during complex
offshore marine operations. The model is built through streaming offshore operation data at
the sensor level involving rigorous experimentation, historical data, and iterative processes. As
discussed in Chapter 3, the data collection for real-time is performed through an architectural
set-up and ingested into the model after the data profiling to ensure it is structured data.

The real-time forecasting of the DP-RI tool would mainly be used for the following
applications:
• Real-time visualisation of the reliability of the DP system
• Real-time visualisation of reliability of DP sub-systems
• Enhanced DPO experience
• Accurate profiling and status of the DP system
• Proactive action and prevention of the failure

• Demand sensing and data acquisition

The real-time feature, coupled with the suggestion from prescriptive analytics, makes DP-RI a
complete holistic decision support tool for DP vessels. The semi-quantitative value of the DP-
RI is used as the initial gauge to determine the reliability and plan for subsequent actions. For
the research framework, three categories are defined for the DP-RI as follows (a small mapping sketch is given after the list):
• DP-RI > 80: the system is highly reliable, and the DPO can take some time to evaluate the situation for any further actions
• 60 ≤ DP-RI ≤ 80: the system has medium reliability, and the DPO should react to implement the solution before the system reliability reduces further
• DP-RI < 60: the system has low reliability, and the DPO is required to react immediately and implement a solution to address the failure for safe operation.
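
The banding above can be expressed as a small mapping function; the wording of the advisory strings is illustrative.

# Semi-quantitative DP-RI banding used as the initial reliability gauge.
def dp_ri_category(dp_ri):
    if dp_ri > 80:
        return 'HIGH reliability: evaluate the situation before acting'
    if dp_ri >= 60:
        return 'MEDIUM reliability: react before reliability reduces further'
    return 'LOW reliability: react immediately for safe operation'

print(dp_ri_category(72.5))   # MEDIUM reliability: ...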

6.12 Summary

The chapter provided a detailed description of the implementation of predictive analytics for
semi-quantitative and quantitative reliability assessment of the DP system. A new research
framework on the DP-RI concept was presented, which will be used for the decision making
advisory support tool. Following this, a more straightforward method for the semi-quantitative
representation of DP-RI was discussed, which used the optimised weights from the LSTM
RNN models. Then the DP-RI framework internal architecture involved in the mathematical
computation of reliability and prediction of reliability through RNN models was presented. For
the mathematical calculation, the RBD method, system architecture, and component voting
configuration acted as the basis for reliability computation. For the reliability prediction, the
actual field data and test data are used for comparison on different RNN models to find the
most suitable one. The datasets, machine specification, experimental set-up, including the data
split, and learning curves were discussed in detail. The bias mitigation framework, which was
used to make the system fair and trusted, was then presented. Finally, the performance results
of the RNN models (MLP, SRNN, GRU, and LSTM) were evaluated using two methods. The
results for both of the methods proved LSTM to be the most suitable model for the DP-RI
application with the hyperparameters tuned for optimum results. In the last sections of the
chapter, a brief description of the offline and real-time application of the DP-RI tool was
presented.

7. Prescriptive Analytics for Resilience during DP Failure Incidents
7.1 Introduction

This Chapter describes stage 4 of the big data analytics, shown in Figure 3-1, involving
prescriptive analytics for the DP-RI application. The prescriptive analytics supports the DPO by providing possible solutions in the case of a failure in the DP system, preventing failures from leading to significant accidents. Section 7.2 of the chapter provides a high-level research framework
for prescriptive analytics integrated into the DP-RI tool. The next section then presents the
experimental set-up for the prescriptive analytics model implementation, which is used for the
case studies. Following this section, the datasets prepared from the DP sub-system IMCA
database with site-specific risk assessment documentation are presented and tailored for the
model application. In the next section, the prescriptive analytics model execution is described.
This starts with NLP suitability for the research framework and implementation of
transformers in the DP-RI application context. The BERT model was then applied as a training
strategy for prescriptive analytics through the Question and Answer system. Finally, the pre-
trained model is fine-tuned with the datasets specific to the DP-RI concept, and the results are
evaluated for accuracy and performance measurement.

7.2 Research Framework

The research framework for prescriptive analytics is an extension of the DP-RI application tool
concept. The output of the DP-RI tool, which provides the qualitative and quantitative
representation of the reliability of the DP system, is coupled with the DP control system alarm
[2]. In this way, prescriptive analytics could analyse the exact situation and prescribe
suggestions to the DPO, combining expert knowledge with existing risk assessment studies.
The systematic approach of prescriptive analytics as a research framework is shown in Figure
7-1. The site-specific risk assessment, such as CAMO, TAM, ASOG, and WSOG, along with
DP operation manuals, provides information for taking preventive and corrective action during
emergencies [60, 63, 216, 217]. Therefore, in this research, prescriptive analytics uses the most
advanced algorithm from Google, BERT, for handling the NLP task and prescribing possible
solutions during failures in the DP system. The BERT model was applied as a Question and
Answering system to provide the appropriate solutions to the DPO.


Figure 7-1 Research Framework of Prescriptive Analytics integrated with DP-RI


7.3 Experiment Set-up

In this section, the details of the experimental set-up related to the machine specification for
cloud computing using BERT transformer models, programming language, API, platforms,
libraries and debugging plugins for the implementation of the research are described [174,
175, 176]. The K-Train library is an additional library that was used for prescriptive analytics, but
not for predictive analytics.

Tensor Processing Units (TPUs) were used for suggestive solutions as part of the transformer
models for the prescriptive aspect of the DP-RI application. They consist of Google’s custom-
developed application-specific integrated circuits (ASICs) used to accelerate machine learning
workloads. The BERT model was trained with a large dataset, so it acts as a pre-trained model,
and it demonstrated the ability to transfer learning. Thus, only the two additional steps of pre-training and fine-tuning needed to be performed when addressing a specific NLP task [218]. The
tools, platforms, and libraries used for the experiment related to prescriptive analytics are
described in Section 8.2 of Chapter 8.

7.4 Data-Sets

In this research, the dataset for the prescriptive analytics is prepared based on the Stanford
Question Answering Dataset (SQuAD) benchmarking format. The datasets meet the
requirement of Question Answering (QA) capabilities [219, 220]. The data sources defined in
section 3.4 of Chapter 3 consist of a massive amount of unstructured and semi-structured data.
The BERT model, which has proven accuracy on SQuAD benchmarking, was tailor-made
pre-training and fine-tuning with DP-RI datasets. The DP-RI datasets are extracted from
different data sources with distinct features, as shown in Table 7-1.

The individual datasets are carefully evaluated and combined to form the overall dataset. This
dataset is used in fine-tuning and training of the BERT model as a QA system. The answers
are predicted through the BERT model and evaluated by industry experts through a well-
established method of data collection, as explained in Section 5.6.1 with criteria specific to
failure scenarios taken from the data lake [221, 219]. As part of the research study, 250,000 dataset records were created from 2,000 vessel data inputs (Design Document, FAT procedure, FMEA report, Proving Trial Report, DP operation manual, etc.).

Table 7-1 DP-RI datasets and distinct features for prescriptive analytics

During the training and fine-tuning of the BERT model, the datasets were split as below:
1. Training dataset
2. Development (Dev) dataset
3. Test dataset

The DP-RI dataset consists of six critical parameters [221, 219, 222]:
Paragraph / Passage (Input):
A paragraph of text, consisting of a set of 5-10 passages, is used as input data which may
contain an answer to the question. The passages are extracted from DP system related
documents and annotated by experts. Passages without answers were indicated with the selection “Zero.” Fairness in the dataset for the answers is addressed by equal distribution among the different sub-systems and their criticality according to the vessel-specific DP operation manual [217].

Questions (Input):
The expert develops questions based on domain knowledge from the alarms generated in the
system. A few questions were selected and filtered by another layer of experts to ensure
whether they were answerable using the paragraph passed as input. Mapping of the problem
(question) with the actual alarms from the existing DP control system is critical; therefore, the site-specific risk assessment, which considers the operational scenarios, was used as the basis for question formation.

Answers (Output- comparison):


Experts prepared answers for each question, and the dataset may contain zero or more solutions
(answers). The experts synthesise the natural language answer with correct information in the
passage for the prescriptive analytics to provide appropriate suggestions to the DPO. The
output from the BERT model after the model-specific layer is compared against the answers
annotated by experts.

Documents:
The paragraphs used are extracted from the documents. The “Title”, “System”, and the “Body text” of the documents are included in the dataset. A total of 140 documents were indexed. Cross-references are used for analysing the potential to extend the research to different DP vessels.

Question Type:
The question in the dataset is automatically annotated using an internal algorithm to classify
the question based on a segment label into either “Description”, “System”, “Sub-System”, or
“Person”. The segment labels used for classification are “What”, “Why”, “Who”, “How” etc.

Ranking:
Experts do the ranking of the sub-systems based on the weightings determined in Chapter 5.
Based on the order, the possible solution priorities are determined and listed for the DPO
to take action.

Table 7-2 shows the datasets indicating the Questions and Paragraphs used for training the BERT models. In this research, the alarms from the DP control system are transformed into questions manually and fed into the BERT model to predict the answer from the relevant paragraph.

Table 7-2 Sample DP-RI datasets
7.5 Prescriptive Analytics – Possible Suggestive Solutions

7.5.1 Natural Language Processing

NLP is a collection of computational methodologies used for automatic analysis and representation of human languages [223]. The DP system has a lot of unstructured and semi-
structured data contributing about 70% of the data, with only 30% being structured data.
Recent advancements in technology have enabled NLP applications to be developed with
higher accuracy, on a par with human capabilities, by allowing a deeper understanding of the
text through neural network models [224, 225]. The other applications of NLP are syntactic
information (e.g., part-of-speech tagging, chunking, and parsing) or semantic information (e.g.,
semantic role labelling, and named entity extraction) were defined in Chapter 3 [224]. NLP in
this research has been implemented for text classification, sentiment analysis, and question and
answering systems.

A simple neural network cannot easily handle the NLP problems as they are bag-of-words
models leading to missing values. To address the issue, an RNN was used, which is a deep
learning algorithm with multiple layers due to its recursive nature [84, 226]. However, the
RNN has the problems of vanishing and exploding gradients and learning long term
dependencies. LSTM addressed the issue of vanishing and exploding gradients [227]. However,
LSTM models for NLP tasks are challenging to train, and transfer learning will not work
efficiently. Besides, LSTM needs a specifically labelled dataset for every job. LSTM and RNN
limitations for NLP applications can be summarised as follows [226]:
• Sequential computation inhibits parallelization
• No explicit modelling of long and short-range dependencies
• “Distance” between positions is linear
• Difficult to train, and transfer learning is not possible

The drawbacks are addressed through feed-forward network architectures, called Transformers
using only attention mechanisms, dispensing with convolutions and recurrence entirely [226].
This has achieved state-of-the-art performance on several tasks and has been found to
generalise very well to other NLP tasks, even with limited data.

7.5.2 Transformers

Transformers are a type of deep learning neural network architecture built with an attention
mechanism to draw out global dependencies between the input and output. The model was
developed to solve problems associated with sequence transduction or neural machine
translation, which involve transforming an input sequence to an output sequence. The overall
architecture of the transformer was built using an encoder-decoder stack with self-attention
and a point-wise feed-forward network consisting of two fully connected layers, as shown in
Figure 7-2 [226]. The attention mechanism allows the architecture to model dependencies
without regard to their distance in the input or output sequences. The feed-forward nature and
multi-head self-attention are critical aspects of transformers [226, 218]. The Transformer
allows for significantly more parallelization and can reach a new state of the art in translation
quality as the model is pre-trained with standard datasets.

Figure 7-2 The Transformer - model architecture [226]

The transformer model architecture performs various tasks using the encoder and decoder that
are stacked on top of each other multiple times, which is described by Nx in Figure 7-2. It uses
the modules mainly consisting of Multi-Head Attention and Feed Forward layers. The inputs
and outputs (target sentences) are first embedded into an n-dimensional space and changed to
vector representation, as a string cannot be used directly in the model. The critical part of the
model is the positional encoding of different words. These positions are added to the embedded
representation (n-dimensional vector) of each word, which supports the model to interpret the
context of the paragraph. The sequence of operations performed within the transformer model is as below [226]:
• Transformer Encoder
o Scaled Dot-Product Attention
o Multi-Head Attention
o Skip connection & Layer normalization
o Position-wise Feed-Forward Networks
o Positional Encoding
• Transformer Decoder
o Embedding and Softmax in Training
o Inference
o Encoder-decoder attention
o Training
o Soft Label

The main advantage of the Transformer model is that it accepts non-sequential inputs, which supports the research in that the model does not require the input sequence to be processed in order. Transformers can be parallelized and scaled much more quickly than previous NLP models, with much higher accuracy and speed. Thus, the transformer model is used for the prescriptive analytics of this research, with BERT using only the encoder part [226, 218].

7.5.3 Bidirectional Encoder Representations from Transformers (BERT )

BERT is a training strategy that uses the transformer model architecture; it is not a new
architecture design [218]. However, in the transformer model, word embedding cannot fully explore the context of neighbouring words. For this research, the steps included creating a

dense representation for NLP inputs, including those in the QA system, along with developing
a representation model that is multi-purposed [218, 219]. BERT model architecture (Attention,
Scaled Dot Product, Multi-head Attention, etc.) and IO representation were evaluated further
to be implemented for DP-RI application.

BERT Model Architecture:


BERT uses the Transformer encoder to create vector representation. The critical aspect is that
instead of focusing on the problem directionally, it discovers the context concurrently, making
it different from other approaches. BERT’s model architecture is a multi-layer bidirectional
Transformer encoder, as shown in Figure 7-3, along with the internal structure of the encoder
used for this research [226].


Figure 7-3 BERT Model architecture – Encoder Segmentation

There are two model sizes of BERT, which were used for this research study to determine their
suitability for the DP-RI application along with LSTM. The two models used are BERTBASE
and BERTLARGE, which are different from the actual structure of the transformer model, as
shown in Figure 7-4 [220, 228, 229]. The two model sizes are: BERTBASE (L=12, H=768,
A=12, Total Parameters=110M) and BERTLARGE (L=24, H=1024, A=16, Total
Parameters=340M). The parameters are “L” encoder layers (Transformer blocks), “H” as the
hidden size (embedding dimension), and “A” as the number of self-attention heads [218, 219,
220].

Figure 7-4 BERTBASE and BERTLARGE (Encoder stacking)

In the research, only the encoder part of the Transformer is used by the BERT model for DP-
RI prescriptive analytics. Therefore, the section below will focus on the attention part of the
encoder before applying it to the QA system.

Attention: An attention function supports mapping a query and a set of key-value pairs to an
output, where the query, keys, values, and output are all vectors. The output of the model was
computed as a weighted sum of the values, where a compatibility function of the query
computes the weight assigned to each value with the corresponding key [226]. The attention
in the transformer is of two types: self-attention and encoder-decoder attention. For the BERT
model, only self-attention in the encoder is used.

Self-attention is an attention mechanism on the encoder side relating different positions of a single sequence to compute a representation of the sequence. It supports recomposing the sequence and understanding how each of the elements is related to the others, capturing global information/context about a sequence or sentence [226, 223]. Self-attention has been
successfully applied for various NLP tasks, including Reading Comprehension, Question and
Answer, abstractive summarisation, and learning task-independent sentence representations.

Scaled dot product:
The first part of each encoder performs the attention. Each word in the sentence serves as a
single query. In this research, the context is considered as a query. The information is passed
from the datasets prepared in Section 7.4. Q, K & V are matrices representing the sentences /
sequences (after embedding) [226, 219]. All the attention can be computed concurrently
with Q, K, and V packing all of the queries, keys, and values into matrices. The attention is
computed through Equation (7-1) as below:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$   (7-1)

$QK^{T}$ shows how $Q$ is related to $K$, word by word. It also weights $V$ (= $K$) according to the attention mechanism. The attention output has the same shape as $Q$; however, it is made up of elements from $V$ according to their correlation with $Q$, as shown in Figure 7-5.

Figure 7-5 Scaled dot product – Encoder in BERT
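
Equation (7-1) can be sketched directly in NumPy as below; the tiny random matrices stand in for the embedded query, key and value sequences.

# Minimal NumPy sketch of scaled dot-product attention, Equation (7-1).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # QK^T / sqrt(d_k)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                    # output has the same shape as Q

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens, embedding dimension 8
K = rng.normal(size=(6, 8))   # 6 key tokens
V = rng.normal(size=(6, 8))   # values paired with the keys
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)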


Multi-head attention:
Multi-head attention allows the model to jointly read and analyse the information from
different representation vector subspaces at a different position in the paragraph. Multi-head
attention generates “h” attention per query. Conceptually, it packs h scaled dot-product
attention together. For encoders, there are 12 attention heads for BERTBASE and 16 for

BERTLARGE. Therefore, each transformation provides a different projection for Q, K, and V.
The multi-head attention allows the model to view relevancy from 12 or 16 different “perspectives”
based on BERTBASE and BERTLARGE models as in Equation (7-2). Thus, the overall accuracy
increases. In each attention, the model will transform Q, K, V linearly with a different trainable
matrix, respectively. As shown in Figure 7-6, the output vectors are concatenated, followed by
a linear transformation and then processed by a model-specific layer [226, 223, 219].

$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h)W^{O}$   (7-2)

Figure 7-6 Multi-head attention

Input/Output representation:
To pre-train the model, the input assembly and expectation on the output had to be defined
before pre-training BERT for the DP-RI QA system. The input for the encoder is converted to
a token through the BERT tokenizer. Then tokens are converted to vectors through the word
embedding process. The input consists of embedded words, an indication of 1st and 2nd words,
and positional embedding [218, 219]. The input is composed of two sequences, with a [SEP] token between Sequence A and Sequence B, as in Figure 7-7, indicating the input and the context.
The output is represented by two types: special output token to represent the NLP task [CLS]
and the contextual representation token (Tk) as in Figure 7-7. As the NLP task involves QA,
at the output, the token representations are fed into an output layer, which is a model-specific
QA layer similar to SQuAD layer used in SQuAD benchmarking problems [228, 229].

Figure 7-7 Input/Output representation for BERT model

Figure 7-8 BERT model Input / Output with word and context matrix

For the QA NLP task, the outputs corresponding to the paragraph sequence will be used to
derive the start and the end span of the answer. Instead of using every single word as a token,
BERT breaks a word into word pieces to reduce the vocabulary size. The complete architecture
for I/O representation for BERT is shown in Figure 7-8.
7.5.4 BERT Model as Question and Answer System for DP-RI

The BERT model has been widely used as a contextual representation of input question –
passage pairs by using experience from SQuAD benchmarking problems [220, 228, 229]. For
this research study involving the DP system, when the failure occurs, the reliability of the DP
system reduces, leading to potential accidents. Therefore, the BERT model is applied as a QA
system to prescribe solutions from the widely available database. The datasets prepared in
Section 7.4 use the question and context, separated by the [SEP] token as input. In the end, a
single dense layer is applied to each vector/token with two neurons at the end, as shown in
Figure 7-9. The first neuron is used as the score for “being the start of the answer”. Similarly,
the second neuron is used as the score for “being the end of the answer” [220].

In the BERT training strategy, the transformer model is first pre-trained with data that requires
no human labelling. The output from the pre-trained model is a dense representation of the
input. To suit the DP-RI prescriptive analytics, a QA system was implemented for which the
BERT model is modified by simply adding a shallow, dense layer connecting to the output of
the original BERT model. After this, the model is re-trained with a dataset defined in Section
7.4 with labels specific for the DP-RI application. The prescriptive analytics part of DP-RI was
divided into three activities as below [218]:
• Pre-Training: Creating a dense representation of the input
• Fine-Tuning: Fine-tuning the model to the QA system with DP-RI specific datasets
• Rating of suggestive solutions (Answers): Rating the answers based on the weightings
of the sub-systems using softmax activation through the logits score.

Pretraining:
The pre-training involves training the models for generalised prediction applications. The pre-training was performed on text without any human labelling. BERT pre-trains the model for two specific NLP jobs, as follows [218, 220]:
• Masked Language Model: It enables the understanding of the sentence through
approximation and supports vectors for each token. The Transformer encoder generates
a vector representation of the input. Then BERT applies a shallow, deep decoder to
reconstruct the word sequence(s).

• Next Sentence Prediction: It enables higher-level understanding from words to
sentences and a single vector for the classification. The vector determines whether the
model is used for the classification or other NLP tasks such as QA systems. The
fundamental purpose is to create a representation in the output C that will encode the
relations between Sequences A and B.

Figure 7-9 BERT pre-training phases


These two training tasks help BERT to train the vector representation of one or two word-
sequences. Other than the context, pre-training discovers additional linguistic information,
including semantics and coreference. Figure 7-9 shows the architecture for pre-training phases.

Fine-Tuning:
The next step after pre-training is fine-tuning involving fitting the task-related data and
corresponding labels to refine the model parameters end-to-end. A shallow classifier layer is
added for the QA system task to fine-tune the model output [218]. As BERT is more of a
training strategy rather than a model architecture, the fine-tuning is straightforward. The self-attention mechanism allows swapping out the appropriate inputs and outputs and unifies the two stages.

In the fine-tuning phase, as shown in Figure 7-10 for the question answering task, the input
question and paragraph are represented as a single packed sequence. The question uses one
embedding (A), and the paragraph uses another embedding (B). During fine-tuning, the start
vector (S) and the end vector (E) are defined to identify the answer for the specific question.
The probability of the word “i” being the start and the end of the answer span is computed as a dot product between $T_i$ and $S$, and between $T_i$ and $E$, respectively. This is followed by a softmax over all of the words in the paragraph, as presented in Equations (7-3) and (7-4) [218, 219, 220].
$P_i(S) = \dfrac{e^{S \cdot T_i}}{\sum_{j} e^{S \cdot T_j}}$   (7-3)

$P_i(E) = \dfrac{e^{E \cdot T_i}}{\sum_{j} e^{E \cdot T_j}}$   (7-4)

The score of a candidate span from position i to position j is defined as $S \cdot T_i + E \cdot T_j$, and the maximum-scoring span where $j \geq i$ is used as the prediction. The training objective is the sum of
the log-likelihoods of the correct start and end positions. The maximum scoring for the start
word and end word of the answer are used to determine the answer to the specific question.
Finally, the model is fine-tuned with hyperparameters such as epochs, learning rate, and batch
size to increase the performance and accuracy of the model. The details of the fine-tuning with
the hyperparameters are presented in the experiment section 7.5.5.
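
Equations (7-3) and (7-4) and the span selection can be sketched as below; the token representations and the start/end vectors are random placeholders for the trained BERT outputs.

# Sketch of start/end probabilities (7-3), (7-4) and best-span selection.
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(20, 768))   # token representations T_i (placeholder)
S = rng.normal(size=768)         # trainable start vector (placeholder)
E = rng.normal(size=768)         # trainable end vector (placeholder)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p_start = softmax(T @ S)         # Equation (7-3)
p_end = softmax(T @ E)           # Equation (7-4)

# Maximum-scoring candidate span S.T_i + E.T_j subject to j >= i.
scores = (T @ S)[:, None] + (T @ E)[None, :]
scores[np.tril_indices_from(scores, k=-1)] = -np.inf   # forbid j < i
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print('predicted answer span:', i, j)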

Figure 7-10 Fine-Tuning BERT for QA system
The fine-tuning instability of BERT was one of the critical issues in the research study during
its application to DP-RI as a QA system [230]. The analysis showed that the instabilities were caused by the smaller DP-RI datasets and catastrophic forgetting. The training was
done on the TPU (POD and CHIPS). Table 7-3 presents the training time and fine-tuning time
for the TPU and estimates of the GPU.

Table 7-3 BERT training time and DP-RI datasets fine-tuning time

The fine-tuning instability was addressed by using small learning rates combined with bias
correction to avoid vanishing gradients early in training. A further improvement was made by increasing the number of iterations and training to zero training loss while making use of early stopping [230]. The fine-tuning of the DP-RI dataset was performed on 100,000 labelled samples. The research results revealed that datasets above 20,000 samples showed robust performance for different hyperparameters. The optimised values of the hyperparameters were chosen for testing on the testing/validation datasets, as shown in Table 7-4.

Rating of suggested solutions (Answers):

An additional layer is added for the model-specific application in which the answers are concatenated if multiple failures lead to numerous questions as the input. For the DP-RI application, when there are multiple failures in the system, numerous alarms are generated. In such a situation, the mapping of alarms to questions leads to numerous questions and paragraphs being fed into the BERT model [229]. The prediction of answers for the different
questions provides the suggestion to the DPO to respond and take necessary steps. As shown
in Figure 7-11, the softmax layer determines the rating for each answer based on the weighting
of the sub-system and availability of redundancy during a particular mode of operation. The
probability determination is represented by Equation (7-5) [176, 223].
$S(y_i) = \dfrac{e^{y_i}}{\sum_{j} e^{y_j}}$   (7-5)

Figure 7-11 Rating of possible suggestive solutions


The dense fully connected softmax layer is used only in the case of multiple failures that need different actions from the DPO. If there is only one failure, the answer from the BERT model will be fed directly to the output without the softmax layer. The details of the rating in the actual
application are explained in Chapter 8 with case-studies, and details of validation are presented
in Appendix II as screenshots.

7.5.5 Experiment

The experiment was carried out in Colab using TensorFlow with TPU architecture. The details
of the hardware, libraries, and software are presented in section 7.3 and Chapter 8. The datasets
defined in Section 7.4 were used in the experiment for training, development (validation), and
testing (evaluation). The investigation was carried out in the following steps:

• Creating dependencies and Bias Mitigation
• Data Pre-Processing
• Loading Dataset
• Creating the training and test datasets
• Building the model
• Training the model
• Evaluating Results

Creating Dependencies and Bias Mitigation:


The dependencies between the datasets were identified to create a single overall dataset for the DP-RI application. The DP-RI dataset contains dependencies between the sub-systems for various configurations and different modes of operation. Similarly, the dependencies between the software packages necessary for running the experiment in Colab were identified, and the necessary libraries were imported as packages. The next step was mitigating the biases in the dataset through a manual process involving data pre-processing, splitting of the training and test datasets, and addressing fairness through well-defined methodologies.

Data Pre-processing:
This involves creating the “.json” files for execution of the program. The training data file and the vocabulary file are imported into the system. Then the metadata is created for the input training dataset. Finally, the training dataset is created, similar to the SQuAD dataset, using a batch size of 4 and a metadata sequence length of 450, and it is ensured that the training information is passed on to the program [228, 229].
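The sketch below illustrates, under the assumption of SQuAD-style records, how such a training file might be parsed ahead of metadata creation; the file name is a hypothetical placeholder.

    import json

    MAX_SEQ_LENGTH = 450   # metadata sequence length
    BATCH_SIZE = 4         # training batch size

    with open("train-dp-ri.json") as f:   # hypothetical file name
        squad = json.load(f)

    # Flatten the nested SQuAD structure into one record per question.
    examples = []
    for article in squad["data"]:
        for para in article["paragraphs"]:
            for qa in para["qas"]:
                examples.append({"id": qa["id"],
                                 "question": qa["question"],
                                 "context": para["context"],
                                 "answers": qa["answers"]})

    print(f"{len(examples)} QA examples; batch={BATCH_SIZE}, max_len={MAX_SEQ_LENGTH}")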

Loading Data set:


The dataset is loaded into Colab through the Keras library, which is part of Tensorflow 2.0. The dataset is stored in the “.json” format on Google Drive. The three files (training dataset, evaluation dataset, and vocabulary) are loaded into the program. Then the root directory is created, with the association for computational linguistics, to ensure that the dataset is accessible to the BERT model and that the output is created in the same location.
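A minimal sketch of this loading step in Colab is shown below; the Drive paths and file names are hypothetical placeholders for the actual dataset locations.

    from google.colab import drive
    import json

    drive.mount("/content/drive")

    root = "/content/drive/MyDrive/dp_ri"   # hypothetical root directory
    with open(f"{root}/train.json") as f:
        train_data = json.load(f)
    with open(f"{root}/dev.json") as f:
        eval_data = json.load(f)
    with open(f"{root}/vocab.txt") as f:
        vocab = [line.strip() for line in f]   # WordPiece vocabulary, one token per line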

Creating the training and test datasets:


The data are loaded to the model and further split as below:

• Training dataset
• Testing dataset (development/validation and testing / evaluation)

Different classes of data within the datasets are identified with unique data labels, and the class distribution is maintained between the datasets. The ktrain wrapper for Keras is used for the split, thus ensuring fairness in the process.
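The sketch below shows a comparable label-stratified split, written here with scikit-learn's train_test_split rather than the ktrain helper; examples and labels are invented parallel lists for the illustration.

    from sklearn.model_selection import train_test_split

    # Invented data: 100 samples spread over 4 classes.
    examples = [f"sample {i}" for i in range(100)]
    labels = [i % 4 for i in range(100)]

    train_x, test_x, train_y, test_y = train_test_split(
        examples, labels,
        test_size=0.2,
        stratify=labels,      # preserve the class distribution across the splits
        random_state=42)      # fixed seed for a reproducible, fair split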

Building the Model:


The model building process involves two steps. The first step is building the task-specific QA system layer (similar to the SQuAD layer) to train the BERT model. A dense layer representing the BERT QA layer is created using custom Keras layers in Tensorflow 2.0. The number of units is defined as “2,” indicating the position score for the start of the answer and the position score for the end of the answer. “Truncated Normal,” a standard initializer used by Google, is adopted with “standard deviation = 0.02”. The logits take their input from the QA system layer and are unstacked into “logit[0]” and “logit[1],” which represent the scores identifying the “Answer” [220, 228, 229].

The second step is building the whole model structure, in which the additional dense layer is added to the model. In this step, it is ensured that the model is trainable ahead of the next step. The model takes in the sequence of inputs and produces the sequence of outputs [231]. The BERT layer call has three inputs: “Input_Word_Id,” “Input_Mask,” and “Input_Segment_Id.” Finally, the model returns the “Start_logits” and “End_logits.” Thus, the BERT model is ready for the QA system, and the training can be performed on the model.
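A minimal sketch of the QA output head described above is given below; sequence_output stands in for the BERT encoder output of shape (batch, sequence length, hidden size) and is generated randomly here so the sketch runs standalone.

    import tensorflow as tf

    qa_head = tf.keras.layers.Dense(
        units=2,  # position scores for the answer start and the answer end
        kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02),
        name="qa_outputs")

    # Stand-in for the BERT encoder output: (batch=2, seq_len=450, hidden=768).
    sequence_output = tf.random.uniform((2, 450, 768))

    logits = qa_head(sequence_output)                        # (batch, seq_len, 2)
    start_logits, end_logits = tf.unstack(logits, axis=-1)   # "logit[0]" and "logit[1]"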

Training the Model:


Once the dataset is ready and the model has been created, the model is compiled for training. The hyperparameters are defined at the initial stages to determine the model performance, and the optimised values are obtained by iteration. The values of the hyperparameters are shown in Table 7-4. The critical item is the optimizer, which is a modified version of “Adam” preferred by Google [176, 218, 225, 229]. The optimizer uses the initial learning rate, the number of training steps, and a number of warm-up steps. The scores are calculated through the loss function using classification to determine the start position and the end position.

The sparse categorical cross-entropy is applied to both outputs. The losses are computed separately, and the total is then obtained by combining the two losses.
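A minimal sketch of this two-part loss is shown below, assuming the start/end logits and label positions produced by the model described above.

    import tensorflow as tf

    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    def qa_loss(start_positions, end_positions, start_logits, end_logits):
        # Each output is scored separately against its true token position...
        start_loss = loss_fn(start_positions, start_logits)
        end_loss = loss_fn(end_positions, end_logits)
        # ...and the total is obtained from both losses.
        return start_loss + end_loss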

Table 7-4 Hyperparameter optimisation (Hyperparameters used for fine-tuning)

The next step is creating the custom training loop to keep track of losses and measure metrics to evaluate the performance of the QA system BERT models [231]. The custom training loop is used in the research to provide more flexibility and train the model with more control. It is performed over epochs, and the training loss of each epoch is calculated for the complete training dataset. The “GradientTape” is used to record the operations executed within its context manager [231]. It supports evaluating the weights to compute the loss function and gradients. The optimizer is then applied to the model to reduce the loss via the gradients.

The model is trained for each epoch over pre-defined batch sizes, and the results are stored via the checkpoint manager [231]. The time taken for each epoch run is saved; execution took 4-6 hours due to connectivity with the Colab server and TPU accessibility. In this research experiment, the number of epochs is set to “3”, and the results are stored every “50” batches through the training datasets. In the next section, the evaluation of the model performance is discussed.
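The sketch below outlines such a custom training loop with GradientTape and a checkpoint manager; model, optimizer, qa_loss, and train_ds are carried over from the earlier steps and are hypothetical placeholders here.

    import tensorflow as tf

    ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
    manager = tf.train.CheckpointManager(ckpt, "./checkpoints", max_to_keep=3)
    train_loss = tf.keras.metrics.Mean(name="train_loss")

    EPOCHS = 3
    for epoch in range(EPOCHS):
        train_loss.reset_states()
        for step, (inputs, starts, ends) in enumerate(train_ds):
            with tf.GradientTape() as tape:          # records the forward pass
                start_logits, end_logits = model(inputs, training=True)
                loss = qa_loss(starts, ends, start_logits, end_logits)
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
            train_loss(loss)
            if step % 50 == 0:                       # store results every 50 batches
                manager.save()
        print(f"Epoch {epoch + 1}: loss={train_loss.result():.4f}")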

7.5.6 Performance Evaluation – Testing and Results discussion

The BERT model implemented as a QA system for the DP-RI tool was evaluated using the
testing dataset, and results are discussed with the metric evaluation parameters. The evaluation
of the model performance is presented below in three steps.

Evaluation Preparation:
The evaluation preparation process is concept intensive, and thus diligent care was taken to split the datasets into validation and test datasets. The dev dataset is imported, and its features are identified. After this, the tokenizer is used to tokenize the dataset against the vocabulary and convert it to lowercase.

Evaluation Creation:
A dictionary is created for evaluation, which defines the collection of unique fields. A “namedtuple” is used to structure the output with field names so that it can be extracted in a defined format. The fields are “unique_id,” “start_logits,” and “end_logits.”

Evaluation Result:
The model performance and accuracy are evaluated through two metrics, the “F1” score and the “Exact Match (EM)” score [219, 229, 232]. The metrics ignore articles and punctuation for performance measurement. The F1 score conveys the balance between precision and recall, as represented by Equation (7-6); it measures the average overlap between the prediction and the ground-truth answers.
$F_1 = 2 \times \dfrac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$    (7-6)

Exact match measures the percentage of predictions that precisely match any one of the ground-truth answers. Table 7-5 shows the F1 and EM score values for LSTM, BERTBASE, and BERTLARGE. The evaluation results clearly show that the BERTLARGE model outperforms the other models and algorithms. In terms of training, BERTLARGE took more time and capacity, leading to a higher cost due to TPU usage. However, in terms of validation and testing, the accuracy was higher and the response time was faster. Therefore, for the DP-RI application, BERTLARGE is recommended for the prescriptive analytics part of the state-of-the-art advisory tool.
Table 7-5 Model Performance Evaluation

The model performance on the prediction of the answers was measured through the “F1” score, and the results analysis is shown in Table 7-6. The output answers from the DP-RI tool are compared with the available documentation and experts' judgements. Similarly, the rating of the suggested solutions is evaluated using the “EM” score, and the analysis of the results is shown in Table 7-7. The rating/ranking order of the suggested solutions is compared with the actual implementation steps taken onboard by the DPO during real scenarios, based on system availability. The scenarios from Table 7-2 are used for the detailed analysis of the “F1” and “EM” scores.
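For reference, the sketch below shows a standard SQuAD-style computation of the EM and F1 metrics used in these tables, following the convention of lowercasing and stripping punctuation and articles; the functions are a generic re-statement rather than the exact evaluation script.

    import re
    import string
    from collections import Counter

    def normalize(text):
        text = text.lower()
        text = "".join(ch for ch in text if ch not in string.punctuation)
        text = re.sub(r"\b(a|an|the)\b", " ", text)   # drop articles
        return " ".join(text.split())

    def exact_match(prediction, truth):
        return float(normalize(prediction) == normalize(truth))

    def f1_score(prediction, truth):
        pred_tokens = normalize(prediction).split()
        true_tokens = normalize(truth).split()
        common = Counter(pred_tokens) & Counter(true_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(true_tokens)
        return 2 * precision * recall / (precision + recall)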

Table 7-6 Answer evaluation for DP alarms and Question (F1 score analysis)
Table 7-7 Rating of solutions suggested (EM results analysis)
7.6 Summary

The prescriptive analytics to address failures related to the DP system was presented with a state-of-the-art BERT model from Google. The research framework proposed in this chapter for prescribing possible solutions in the case of failure, using the BERT model in the QA system, acts as a basis for other applications. It will reduce the time required for solution generation by the DPO, as it avoids referencing a massive number of documents or seeking expert advice during critical situations. During the training phases, once the alarm is simulated, the question is created and fed to the model to prescribe solutions; this step acted as a “human-in-the-loop” prescriptive analytics methodology. Once the model is trained, the human part is removed and the process is automated to give a fully functional AI system for the prescriptive analytics part of the DP-RI tool. The possible solutions suggested, along with the actual reliability condition indicated in the DP-RI tool, will support the DPO in taking quick action and preventing failures leading to accidents. In this chapter, the experimental set-up and the TPU details used for running the model were described. The dataset collection and pre-processing were presented, followed by the methods of aligning the data to suit the needs of the QA system. The bias mitigation methods were briefly described. Various NLP models and their suitability for the DP-RI application were evaluated, and the reasons for choosing the BERT model were presented along with the evaluation results. Finally, the hyperparameter optimisation and the application of the prescriptive analytics solution to other DP3 vessels were highlighted. In the next chapter, the case studies of predictive and prescriptive analytics are verified and validated using traditional risk assessment methodologies.

8. Verification and Validation of Predictive and Prescriptive Analytics
Results
8.1 Introduction

This chapter describes the experimental set-up and case studies for verification and validation
of results from the DP-RI tool with proven risk assessment methods. Section 8.2 describes the
architecture of the DP-RI tool and network topology of the interfaces. The offshore set-up for
the DPO and the onshore set-up for the SME used to validate DP-RI are described in detail.
The data management, user access, and the security for information sharing with the on-board
DP control system and DP-RI were identified and appropriate steps to mitigate the spurious
control trips were addressed. In the next section, the programming language and libraries for
the DP-RI tool are listed. The machine specification of the GPU used for the predictive
analytics and the machine specification of the TPU used for the prescriptive analytics are
presented. The experimental set-up for the case studies, along with the research boundaries
and limitations, are presented to define the operational envelope for the DP-RI tool. The DP 3 vessel configuration used for the case studies is shown. The set-up for the case studies was extracted from the IMCA accident database, and some of the case studies were based on
the FAT / CAT procedure to test the effectiveness and robustness of the DP-RI tool. The
verification and validation of the case study results were evaluated to confirm, through
objective evidence, that the specified requirements of the DP-RI tool are fulfilled [233]. The
final section of the chapter describes the summary of the results and improvement in the test
cases.

8.2 DP-RI Tool Architecture and Network Topology

The DP-RI tool architecture and network topology are shown in Figure 8-1, depicting its
interface with the DP control system on-board the vessel, whilst offshore, and transmission of
data onshore for real-time monitoring. The interface was independent of the process network
and secured through a router and firewall settings. The prediction and prescription algorithm
runs on the DP-RI computer, and it reads the field data through the DP control system via read-
only access. Based on the field information, the algorithm predicts the near future reliability of
the DP system. In the case of failure in a sub-system, the DP-RI provides possible suggestions
to the DPO for faster response action.


Figure 8-1 DP-RI System Topology


8.2.1 Data Management

One significant feature of the DP-RI tool is to interface and share data and information from
the DP control system (including sub-systems) for safe and efficient operation. The data is
collected from multiple sources, and the ability to manage all of the information seamlessly
enables a new level of possibility to analyse and monitor situations, critical operations, and
installation conditions. The DP-RI tool is provided with dynamic visualisation to handle the
massive amount of data and assist the DPO to be efficient in critical operations.

Data Server
Data will be collected from the DP control system through a historical database protected via the router. The time-series sensor data are transferred through the OPC server configured in a master-slave arrangement. The algorithm uses only the alarms and the events of the sensor tags, and the delay is less than 3 seconds.
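A minimal sketch of such a read-only tag poll, assuming an OPC UA endpoint and the python-opcua client library, is shown below; the endpoint URL and node identifier are hypothetical placeholders.

    from opcua import Client

    client = Client("opc.tcp://dp-control-gateway:4840")   # hypothetical endpoint
    client.connect()
    try:
        # Hypothetical alarm tag exposed by the DP control system historian.
        node = client.get_node("ns=2;s=DP.ThrusterSystem.T1.Alarm")
        value = node.get_value()   # read-only access, no writes to the process network
        print("alarm state:", value)
    finally:
        client.disconnect()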

Data Collection and Storage


The time-series data will be recorded and stored as they change. The capacity of the system is such that the time-series data can be stored for 30 days. The data compression features are 0.2% of range as dead-band compression and 0.1% of range as vector-based compression. The data are stored locally on the DP-RI computer, and there is provision to store them in the cloud for simulation of different configurations.
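The sketch below illustrates dead-band compression of this kind: a sample is stored only when it deviates from the last stored value by more than 0.2% of the sensor range. Vector-based (swinging-door style) compression is analogous but tracks slope and is omitted for brevity; the sample data are invented for the example.

    def deadband_filter(samples, sensor_range, band=0.002):
        """Keep only (time, value) samples that move more than band * range."""
        stored = []
        last = None
        for t, value in samples:
            if last is None or abs(value - last) > band * sensor_range:
                stored.append((t, value))
                last = value
        return stored

    # e.g. a 0-100% thrust signal: only changes larger than 0.2 units are kept
    print(deadband_filter([(0, 50.0), (1, 50.05), (2, 50.3)], sensor_range=100.0))
    # -> [(0, 50.0), (2, 50.3)]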

Data transfer time


The data transfer time for information from the sensor, to the controller, to the data-logger, and finally to the DP-RI application was less than 60 seconds. The time-series data are aggregated and stored in the datalogger. The time delay from the sensor to the DP-RI tool was due to the transfer of data from the controller to the datalogger.

8.2.2 Security and Integrity

The DP-RI tool requires a username with a log-on password to perform simulations. Access rights were provided to different users, with specific roles and sets of rights to set up the applications. The DP-RI tool application does not affect the DP control system performance and integrity.

8.3 Programming Language

Google Colaboratory (Colab), a hosted notebook environment based on the open-source Jupyter project, was used to create and share documents that contain live code, equations, visualisations, and narrative text. The experiments were run using code developed in Colab as executable documents written and run within Google Drive using Python programming [174]. Python was used due to its efficiency in providing high-level data structures and because it offers a practical approach to object-oriented programming, which was required for handling the massive amounts of data in this application. In addition, Python was used for data cleaning and transformation, statistical modelling, data visualisation, and machine learning [234].

Tensorflow 2.0 is an end-to-end open-source platform for machine learning, including RNNs. Its features are used to build and deploy RNN-powered applications quickly through a comprehensive, flexible ecosystem of tools, libraries, and community resources. Tensorboard provides the visualisation and tooling needed for machine learning experimentation, to track and visualise metrics such as loss and accuracy. It also aids in visualising the model graph (operations and layers) and viewing histograms of weights, biases, or other tensors as they change over time. The Keras Functional API enabled the implementation of a wide range of different deep learning models so that suitable models for the DP-RI application could be identified [176] (a minimal sketch of the Functional API style follows the list below). Other libraries, such as Tensor graph, seaborn, pandas, NumPy, random, and Matplotlib, were used for various functions during the comparison of the models and the evaluation of the metrics [234]. The main advantages of using the Python programming language, Keras API, and libraries were that they helped in providing the following features [174, 176, 186]:
(i) Interface with GPUs in the Google Cloud Platform for increased performance
(ii) Data cleaning and comparison of the models during the validation stage, making it
possible to check the loss and accuracy metrics
(iii) Debugging was faster due to the availability of Tensorboard, where the
scalar/hyperparameters can be varied to find the optimal performance of the models
(iv) Running different complex deep learning algorithms for prediction and validating with
the test data set for time series data was made possible due to the capability of the GPUs
available in GCP through Recurrent Neural Networks.
(v) Tuning hyperparameters and comparing the performance over time.
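As noted above, a minimal sketch of the Keras Functional API style is given below; the toy layer sizes and architecture are illustrative only and are not the DP-RI models.

    import tensorflow as tf

    # Functional API: layers are called on tensors, and the model is defined
    # by its input and output tensors.
    inputs = tf.keras.Input(shape=(450,), dtype=tf.int32, name="input_word_ids")
    x = tf.keras.layers.Embedding(input_dim=30000, output_dim=64)(inputs)
    x = tf.keras.layers.LSTM(64)(x)
    outputs = tf.keras.layers.Dense(2)(x)

    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.summary()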

8.4 Machine Specification

The sections below describe the machine specification of GPU and TPU used for predictive
and prescriptive analytics.

8.4.1 Predictive Analytics

The GCP uses the Google Compute Engine (GCE), which provides GPUs that can be attached to virtual machines for running deep learning models. The GCE with GPUs is used to give efficient ML models faster execution for the real-time data interface. The GCP Infrastructure as a Service (IaaS) was used for the experimental evaluation of the research [235]. For the DP-RI application, as shown in Table 8-1, a high number of GPU and vCPU devices were selected from the various models available on the GCP, as the DP-RI tool has a high computation requirement [234]. A GPU accelerates specific workloads, such as machine learning and data processing, for optimised execution of the program [234]. This particular model provided the capability to run a vast number of datasets, and it can readily be applied to data from different vessels for further research evaluation.
Table 8-1 GCP GPU specification for predictive analytics

8.4.2 Prescriptive Analytics

TPUs were designed from the ground up with the benefit of Google's deep experience and leadership in machine learning [176]. Figure 8-2 shows the TPU architecture, and Table 8-2 gives the machine specification on the GCE.
Table 8-2 TPU machine specification for prescriptive analytics

Figure 8-2 TPU architecture for BERT model fine-tuning [176]
TPUs are used for the NLP to speed up the prescriptive analytics, as the time available to prescribe solutions is limited and action needs to be near-instantaneous.

8.5 Research Assumptions and Boundaries

8.5.1 Assumptions

The following are the assumptions for this research study:


• The two DP vessel models (DP3 Semi-submersible Drilling Rig and DP3 Drillship)
used for the research had the same configuration as below:
o 8 Diesel Engine (Rolls Royce)
o 4 Switchboards (Siemens) Open and Close Bus-Tie Configurations
o 8 Azimuth thrusters (Wartsila)
o VFD drives (Siemens)
o DP3 Control System (Kongsberg)
o Reference System Sensors (Kongsberg)
o Environmental Sensors (Kongsberg)
o DPO (Seadrill, Transocean, Heerema, Floatel, etc.)
• The DPOs working on the vessels are assumed to meet the competency requirements
of the nautical institute, regulators, classification societies, and the Company.
• The weighting of the DP sub-systems is based on the AHP methodology presented in
Chapter 4 and the opinions of industry experts.

• The probability of two or more failures occurring in a sub-system simultaneously is
low.
• The DP-RI tool is used to make safety-critical decisions in real-time. Therefore, the artificial intelligence system and its deep learning algorithms needed to be designed to a high standard of quality.
• The DP-RI tool was used to test vessels with different applications (Drilling rig,
floating accommodation, PSV, etc.), class (DP3 / DP2), configuration (8/ 6 thruster and
4/3 switchboards), and sub-system vendors for specific cases to ensure the scalability.
• The inherent biases of the RNN model are appropriately addressed through necessary
precautions and uncertainty in the forecasting was kept to a minimum.
• The data extracted from the two vessels are used to train the model with different
scenarios as much as possible, and the model is assumed to deal with new situations
and failures for which it was not explicitly trained.
• The data used for the thesis are knowledge-based data collected from industry
databases and real-time data collected directly from sensors through the DP control
system.
• The research is assumed to focus on the following criteria to ensure that application to real vessels is possible in the near future.
o High-risk scenarios with low probability were evaluated to ensure that the
decision should not have catastrophic consequences due to inaccurate
prediction.
o Critical consequences are often related to tail events – for which data are
naturally scarce. As the data is insufficient, the uncertainty associated with
prediction is high. This scenario is assumed to be addressed through training
scenarios by testing with available case studies with high-risk ranking.
• The research was executed in the GCP and assumed to have the same efficiency when
migrated to other IaaS platforms such as Azure or Amazon Web Services (AWS).

8.5.2 Boundaries

The following are the limitations of this research:


• The databases below, which are part of the DNV GL data warehouse or are publicly available sources, were used for the big data analytics:
o IMCA Accident Database (Publicly available)
o FMEA database (DNV GL)
o HIL Database (DNV GL)
o DP-Capability (DNV GL)
o OREDA (Publicly available)
• Possible suggestive solutions to the operator during failure are generated based on the
limited available databases and SME judgment. Therefore, the tool needs to be tested
with as many scenarios as possible with available confidential databases before
application into a real DP vessel.
• The DP-RI tool was integrated with the onboard DP control system through a secured connection for monitoring a pre-defined set of signals. Isolated sub-systems were not directly interfaced with the DP-RI tool.
• Numerical simulations were performed, but physical model tests were not performed.
• Case studies from IMCA and hypothetical cases were used for evaluating the
effectiveness of the DP-RI tool, and specific operation envelopes were considered to
define the uncertainties.

8.6 Experimental Set-Up: Semi-Submersible Drilling Rig – DP 3 (DP AUTRO)

The semi-submersible drilling rig of DP-3 class was used for numerical simulations in the
experimental set-up for the verification and validation of the DP-RI tool. Similarly, the results
are validated on a DP-3 class drillship during proving trials for some of the actual and
hypothetical case studies. Figure 8-3 shows the configuration of the semi-submersible drilling
rig used for the numerical simulations. In Section 8.7 and Section 8.8, different case studies
are presented, and the effectiveness of the DP-RI tool is discussed along with verification and
validation with existing methodologies. The simulation and run-time screenshots of the overall
DP system, seven sub-systems and the limit set-up are shown in the Appendix II section of the
thesis.


Figure 8-3 Semi-Submersible Drilling Rig Configuration for DP-RI experiment


8.7 Verification and Validation - Actual Case Studies from IMCA database

Case studies (1-4) are extracted from the IMCA database, and comparison is made between
the actual events and simulation in the DP-RI. The flow of the case studies explained in the
next section is defined in the flowchart as shown in Figure 8-4.

[Flowchart: experiment start; vessel set-up with configuration and operation; simulation of the sub-system failure cause; Events 1-3, with DPO actions guided by reference documents (actual vessel) alongside DPO actions supported by the DP-RI tool's predictions and prescriptions (experiment); ultimate event or safe state (LOP 1 / LOP 2 / observation); conclusion through verification and validation.]

Figure 8-4 Flowchart – Experiments on case studies from IMCA database

Case Study 1:
IMCA Dynamic Positioning Station Keeping Review DPSI 29 -2019 (Event 3).

Description of the Case study and detailed analysis of simulation:


Case study 1 was extracted from the DP review event reported in 2019. The reported Event 3
resulted in a DP incident (LOP 1) on the actual vessel. Table 8-3 shows the comparison of
the vessel set-up, failure in the sub-system, and successive events between the real scenario
and the numerical simulations on the DP-RI tool. The experimental simulation of the existing
scenarios was replicated in a controlled environment with a defined operational envelope. The
DP-RI tool was not trained with this particular scenario so it could be used to evaluate the
performance and accuracy. The vessel is a DP 3 drilling rig with System A, System B, System
C, and System D connected in an open bus with one generator in each system running and
connected online. There was a failure of the Automatic Voltage Regulator (AVR) in system A, which interrupted the drilling operation. The DPO investigated the failure and used reference documents to rectify it. As there was no guidance on the precautionary steps, this led to network failures (Net A and B), resulting in a complete loss of DP control; the vessel drifted 100 metres, resulting in the DP incident.

The experimental simulation of the same scenario on the DP-RI tool showed different results
as the tool provides information instantaneously to the DPO. When AVR failure was simulated
in the electrical system, reliability was predicted as 75%, and the overall reliability of the DP
system was predicted as 88%, indicating that the vessel can maintain position. The DP-RI tool
also prescribed possible solutions and precautionary steps which could help the DPO to isolate
the system during the rectification of failure. Therefore, it does not result in a network failure,
and the system was restored for the drilling operation. From the results in Table 8-3, it is
evident that the DP-RI tool could prevent the DP incident through its predictive and
prescriptive functionalities. During the incident analysis of the actual case by IMCA experts,
it was revealed that the vessel drifted after the network loss of DP control. The LOP 1 could
have been prevented if the DPO has taken preventive steps to act on this. The response time
was limited by the DPO’s time to react and take corrective action to prevent a DP incident.
Also, the same scenario was tested on a drillship during sea-trials with integration between the
control system and DP-RI. The failure was simulated, and the DP-RI tool was able to predict
the reliability value instantaneously and prescribe solutions to the DPO for further actions.

Table 8-3 Case Study 1 - Actual and Experimental results summary
The suggestion was based on the inputs provided to the tool during the prescriptive analytics
training. Although the training was performed for each scenario separately, the DP-RI proved
its performance when a sequence of events happened. It demonstrates the capability to scale
for other failures.

Failure in sub-systems:
Electrical System (A5) → Primary cause
Human / Operator Error (A7) → Secondary cause

Actual Vessel Final Event:


DP Incident (LOP 1)

Experimental Set-Up (DP-RI) Final Event:


The vessel in a safe state

Risk Studies and documents used for verification and validation:


• FMEA results
• DP simulators
• DP Capability Plots
• Site-Specific Risk Analysis
• DP operation Manual

Conclusion - Verification, and Validation:


The DP-RI tool was able to bring together information from the different studies and present it instantaneously to the DPO, with possible suggestions to select and implement. This feature significantly reduces the reaction time, addressing the root cause of the problem before a single failure can lead to catastrophic losses. As shown in Table 8-3, the reliability of the overall system was 88%, and prescribing possible suggestions is the most novel feature integrated with the DP system to ensure safe operation. Thus, for this specific case, the DPO was alerted to potential consequences and was able to take preventive action. Had the failure led to successive events, the DP-RI would have suggested further corrective actions to the DPO.

Case Study 2:
IMCA Dynamic Positioning Station Keeping Review DPSI 23 -2012 (ID 1242).

Description of the Case study and detailed analysis of simulation:


Case study 2 was extracted from the DP review event reported in 2012. The reported accident,
ID 1242, resulted in a DP undesired event (LOP 2) on the actual vessel. Table 8-4 shows the
comparison of the vessel set-up, failure in the sub-system, and successive events between the
real scenario and the numerical simulations on the DP-RI tool. The case study involves a failure in the DP control system (A2) sub-system as the primary cause and human/operator error (A7) as the secondary cause. The DP-RI tool was never trained with this particular scenario, so it can
be used to evaluate the performance and accuracy. The vessel is DP 3 class with System A,
System B, and System C connected in an open bus with four generators online and two
generators on standby. All the thrusters in the Thruster System (A3) are online and connected.
The failure of the DP control system led the thrusters to react erratically, and the vessel started
drifting away from the set position. The DPO did not have sufficient time to prevent the
subsequent event as there was no readily available solution from the current system set-up. The
final event was a minor loss of position resulting in a DP undesired event (LOP 2).

The experimental simulation of the same scenario in the DP-RI tool showed different results,
and the vessel was in a safe state throughout the fault condition. When a failure in the DP
control system was simulated, the DP-RI tool predicted the reliability of the DP control System
(A2) as 50% and the overall reliability of the DP vessel as 83%. The DP-RI tool prescribed
possible suggestions to take control of the DP back-up control station to avoid excursion from
the set-position. The DPO was able to react in time and take control and keep the vessel in a
safe state. If the DPO did not react due to human error, then the DP-RI predicted the reliability
of Human / Operator error (A7) as 50% and the overall DP vessel reliability as 54%. At this
stage, the DP-RI tool would have provided possible suggestions to take control through
Joystick or Manual operation. Then the DPO would have reacted to prevent the DP incident
and keep the vessel in a safe state. Then the failures could be fixed, bringing the overall
reliability of the DP vessel back to 100%.

Failure in sub-systems:
DP control System (A2) → Primary cause
Human / Operator Error (A7) → Secondary cause
Table 8-4 Case Study 2 - Actual and Experimental results summary
Actual vessel’s final event:
DP undesired event (LOP 2)

Experimental Set-up (DP-RI) final event:


The vessel in a safe state

Risk Studies and documents used for verification and validation:


• Site-Specific Risk Analysis
• DP operation Manual
• Investigation reports
• DP Capability Plots

Conclusion - Verification, and Validation:


The DP-RI tool showed that timely intervention by the DPO could prevent a failure in a sub-system from leading to an excursion from the set position. The analysis showed that the initial failure in the DP control system (A2) had only reduced the reliability of the overall DP system to 83%, and the secondary cause was the main reason for the minor loss of position. Thus, for this specific case, the DPO was alerted to the possible consequences and could take preventive action without Human/operator error (A7) acting as a secondary cause. Had it led to successive events, the DP-RI would have suggested additional corrective actions to the DPO based on the circumstances.

Case Study 3:
IMCA Dynamic Positioning Station Keeping Review DPSI 29 -2018 (ID 18121).

Description of the Case study and detailed analysis of simulation:


Case study 3 was extracted from the DP review document reported in 2018. The reported
accident, ID 18121, resulted in a DP incident (LOP 1) on the actual vessel. Table 8-5 shows
the comparison of the vessel set-up, failure in the sub-system, and successive events between
the real scenario and the numerical simulations on the DP-RI tool. The case study involves a failure in the Power System (A4) as the primary cause and the Thruster System (A3) as the secondary cause. The DP-RI tool was not trained for this particular scenario, so it can be used to evaluate the model's ability to respond to new scenarios. The vessel is a DP 3 class drilling
vessel with System A, System B, System C, and System D connected in a closed bus with four
generators online and four generators on standby. Four of the eight thrusters in the Thruster System (A3) are online and connected. The failure of one generator, DG8, tripped four thrusters (T1, T2, T3, and T4), and the vessel drifted from the set position, leading to a significant loss of position resulting in the DP incident (LOP 1). The DPO was not expecting multiple failures, and the time available to prevent them was minimal, which led to stopping the drilling operation.

Table 8-5 Case Study 3 - Actual and Experimental results summary

The experimental simulation of the scenario in the DP-RI tool was similar for the initial events, as the multiple failures were instantaneous. When a failure was simulated in DG8, which tripped thrusters T1, T2, T3, and T4, the DP-RI tool predicted the reliability of the Power System (A4) as 75% and that of the Thruster System (A3) as 0%. This resulted in an overall reliability prediction for the DP vessel of 68%. The vessel started drifting, and the drilling operation stopped. The DP-RI tool prescribed possible suggestions: start the remaining thrusters and isolate DG8 by reconfiguring the system to the “OPEN” bus. However, in the meantime, the vessel drifted from the set position/heading, which led to a minor loss of position resulting in the DP undesired event (LOP 2). The DPO was able to take the necessary action in time but could not prevent LOP 2, as this failure was the result of multiple failures (two failures in two different sub-systems occurring instantaneously). By proper action from the DPO based on the DP-RI tool suggestion, LOP 1 was prevented, and the DPO was able to bring the vessel back to operation with the overall reliability of the DP vessel at 100% and minimum downtime.

Failure in sub-systems:
Power System (A4) → Primary cause
Thruster System (A3) → Secondary cause

Actual Vessel Final Event:


DP Incident (LOP 1)

Experimental Set-Up (DP-RI) Final Event:


DP undesired event (LOP2) and Vessel to a safe state.

Risk Studies and documents used for verification and validation:


• DP Capability Plots
• DP simulator and FMEA report
• Site-Specific Risk Analysis
• DP operation Manual
• Investigation reports

Conclusion – Verification and Validation:
The DP-RI tool was able to perform well even when multiple failures happened in the system instantaneously. The analysis revealed that the initial failures in the Power System (A4) and the Thruster System (A3) had reduced the reliability of the overall DP system to 68%, resulting in the minor loss of position (LOP 2). On the actual vessel, however, the result was a significant loss of position (LOP 1). Thus, for this specific case, the DPO was provided with possible suggestions promptly to take preventive action without Human/operator error (A7) acting as another layer of failure. Therefore, the DP-RI prevented the DP incident and reduced downtime by a considerable amount.

Case Study 4:
IMCA Dynamic Positioning Station Keeping Review DPSI 27 -2016 (ID 1665).

Description of the Case study and detailed analysis of simulation:


Case study 4 was extracted from the DP review document reported in 2016. The reported
accident, ID 1665, resulted in a DP incident (LOP 1) on the actual vessel. Table 8-6 shows the
comparison of the vessel set-up, failure in the sub-system, and successive events between an
actual scenario and the numerical simulations on the DP-RI tool. In this case study, failure
happens in the DGNSS due to loss/interruption of the signal from the satellite, leading to the loss of the DGNSS signal to the DP control system. The failure in the Reference System (A1) was considered the primary cause and Human / Operator error (A7) the secondary cause. The DPO was tired from working long hours, considered a reduced capability to perform the duty, resulting in an error in the sub-system. The vessel is DP 2 class with System A, System B, and System C connected in an open bus with three generators online and three on standby. Three thrusters in the Thruster System (A3) are online, and three are on standby. The loss of the reference signal to the DP control system caused the thrusters to react erratically, and the vessel started drifting away from the set position. The DPO did not have sufficient time to prevent the subsequent event, as there was no readily available solution from the current system set-up. The final event was a significant loss of position resulting in a DP incident (LOP 1), with the vessel drifting 400 m from the set position.

The experimental simulation of the ID 1665 scenario in the DP-RI tool showed different results
for the final event, although the vessel experienced a minor loss of position.

Table 8-6 Case Study 4 - Actual and Experimental results summary

ACTUAL VESSEL
Vessel configuration: 3 generators online, 3 generators standby; 3 thrusters online, 3 thrusters standby; 3 gyros, 2 MRUs and 2 wind sensors online; 2 DGNSS, 1 laser system and 1 radar system online. Wind speed = 10 knots (250°); current speed = 0.8 knots (285°); wave height = 1 metre.
Failure: DGNSS signal lost; interruptions in the service.
Event 1: DGNSS signal loss; vessel stopped operation. Reference documents: site-specific documents, FMEA, and operation manual. DPO action: refer to the system and procedure to stop the operation.
Event 2: DP control system lost the signal and the vessel started drifting. Reference documents: alarm in the DP control system; site-specific documents. DPO action: refer to the alarms and check for appropriate steps in the DP operation manuals.
Event 3: vessel drifted 400 m on DP, leading to LOP 1. Reference: knowledge of the DPO. DPO action: visual identification and DPO intervention after the actual incident.

SET-UP FOR DP-RI
Vessel configuration: as per the actual vessel.
Failure: DGNSS signal lost; interruptions in the service.
Event 1: DGNSS signal loss; vessel stopped operation. DP-RI predict: reliability of the Reference System predicted as 75%; overall DP reliability 72%. DP-RI prescribe: (1) change to another satellite signal; (2) set a manual value in the DP control model to operate. DPO action: follows the suggestion, changing the satellite signal and the manual value in DP control for operation.
Event 2: manual operation to bring the vessel to the set position. DP-RI predict: due to the shift change, operator error predicted as 25%; overall DP reliability 54%. DP-RI prescribe: (1) DPO situation awareness to be increased by changing the shift earlier; (2) rectify the failures. DPO action: DPO shift change and fix of the DGNSS issue.
Event 3: vessel in a safe state; operation resumed. DP-RI predict: reliability of the overall DP system 100%. DP-RI prescribe: no suggestion when there are no failures in the system. DPO action: DPO performs normal monitoring duty during the operation.
When the failure in the Reference System (A1) was simulated, the DP-RI tool predicted the reliability of A1 as 75% and the overall reliability of the DP vessel as 72%. The DP-RI tool prescribed the possible solutions of changing the satellite signal, due to the interruption in the DGNSS, or setting a manual value in the DP control system so that the model performs the allocation algorithm to keep the vessel in the set position. As it was a long shift for the DPO, fatigue was taken into account: the DP-RI tool predicted the reliability of human/operator error (A7) as 25% and the overall reliability as 54%. The tool provided a suggestion to change the shift of the DPO on duty. The new DPO rectified the failure of the DGNSS to ensure that the vessel was safe, and the overall reliability of the DP vessel returned to 100%.

Failure in sub-systems:
Reference System (A1) → Primary cause
Human / Operator Error (A7) → Secondary cause

Actual Vessel Final Event:


DP Incident (LOP 1)

Experimental Set-Up (DP-RI) Final Event:


DP undesired event (LOP2) and Vessel to safe state.

Risk Studies and documents used for verification and validation:


• DP operation Manual
• DP Capability Plots
• Site-Specific Risk Analysis

Conclusion - Verification, and Validation:


The DP-RI tool was able to predict effectively the reliability impact of failures in the sub-system and of human error, and to advise on the allocation of the right resource to execute the operation efficiently. The analysis revealed that the initial failures in the Reference System (A1) and Human / Operator error (A7) had reduced the reliability of the overall DP system to 54%, with the vessel still in position, in contrast to the actual vessel, which suffered a significant loss of position (LOP 1). Thus, for this specific case, the DPO was provided with possible suggestions promptly to take preventive action without Human/operator error (A7) acting as another layer of failure. Based on the swift action from the DPO, the DP incident was prevented, and the vessel was able to perform its operation without any interruption.

8.8 Verification and Validation - Hypothetical Case Studies from FMEA and Test
Procedures

Case studies (5-8) are extracted from FMEA reports, with common cause failures (CCF) identified as low probability and high consequence. The proving trial procedures were used as a reference to build the case study descriptions for simulation in the DP-RI tool. The flowchart in Figure 8-5 provides an overview of the case studies.

[Flowchart: experiment start; vessel set-up and configuration; failure cause; Events 1-3, with DPO actions supported by the DP-RI tool's predictions and prescriptions; conclusion through verification and validation.]

Figure 8-5 Flowchart – Experiments on hypothetical case studies from FMEA and test procedures

Case Study 5:
Multiple failures due to CCF– From FMEA and SAT Procedure

Description of the Case study and detailed analysis of simulation:


The experimental simulation of the hypothetical scenarios was executed on the DP-RI to
evaluate its performance when such failures occur during operational scenarios. As shown in
Table 8-7, when there are multiple failures due to CCF, it affects two systems out of three. The
first CCF affects the information related to Power Generation and the Power Management
System (PMS), and complete control is lost on System A. At this point, the DP-RI tool predicts
the reliability of the power system (A4) as 66.66% and the reliability of the overall DP-system
as 82% indicating that the vessel can maintain its position with systems B and C online. DP-
RI prescribes possible solutions to the DPO to take appropriate action to fix the system A
failure. The prescribed suggestions are as follows:
1. Fix one network (A or B) to the field station/cabinets.
2. CCF leading to power supply failure, so isolate power at the power source or field stations.
3. Check the power supply module interfaces, which could lead to CCF.
4. Restore system A through by-pass.

The DPO could select one of the possible suggestions and implement it to see whether the
failure is fixed, and system A is restored. If one suggestion does not solve it, then the next
could be tried to resolve the issue. As all the possible failures are shown, the DPO could react
quickly and bring the vessel to regular operation by addressing the CCF in the power system.

In the meantime, another failure happens in the thruster system (A3) in system B. The DP-RI
tool predicts the reliability of the thruster system (A3) as 66% and the overall reliability of the
DP system as 64%. Now the DP vessel's overall reliability to maintain its position is “Medium,”
and the vessel may start losing its position. DP-RI prescribes the following possible solutions
to the DPO to take appropriate action to fix the system B failure.
1. Check for fire/flood in the pump room where thruster cabinets are located.
2. Fix one network (A or B) to the field station/cabinets.
3. CCF leading to power supply failure, so isolate power at the power source or field stations.
4. Check the power supply module interfaces, which could lead to CCF.
5. Restore system A through by-pass.

Table 8-7 Case Study 5 - Hypothetical Experiment results summary
As the error in the power system (A4) of system A was fixed during this time, the overall reliability of the DP system was predicted as 82%. The DPO could select one of the possible suggestions and implement it to see whether the failure is fixed and system B restored. The error in the thruster system (A3) of system B could be fixed, as the DPO could react quickly and bring the vessel back to regular operation by addressing the CCF in the thruster system of system B. The DP-RI tool then predicts the reliability of the thruster system (A3) as 100% and the overall reliability of the DP system as 100%.

Such hypothetical scenarios have a low probability of occurrence and high consequences. In experience-sharing sessions, operators, oil companies, independent consultants, and DPOs reported that similar situations offshore resulted in a DP incident (LOP 1) and the operation being stopped, because there was limited time for the DPO to react and fix the issue. However, with the advancement in technology, implementation of the DP-RI tool interfaced with the DP control system would be a much more efficient way of operating DP vessels offshore for complex operations.

Failure in sub-systems:
Power System (A4) → Primary cause
Thruster System (A3) → Secondary cause

Risk Studies and documents used for verification and validation:


• FMEA results
• DP Capability Plots
• Site-Specific Risk Analysis

Conclusion – Verification, and Validation:


The DP-RI tool was able to bring together the best of different studies and present it
instantaneously to the DPO with a possible suggestion to select and implement. The DPO
would have found the solutions to address the problem without the DP-RI tool. However, it
would have taken a long time. With multiple failure scenarios, it would most probably have
led to a DP incident and resulted in the loss of position/property damage / environmental
spill/downtime. The DP-RI tool predicted an accurate reliability value and prescribed
suggestions for the DPO to react quickly, preventing a DP incident.

Case Study 6:
Multiple failures with Reference System and DP Control System– From DP capability Plot
and DP proving trial test

Description of the Case study and detailed analysis of simulation:


The experimental simulation of the hypothetical scenario was executed, as shown in Table 8-8. The failure was simulated in the DGNSS signal through a power failure, and the signal to
the DP control system was lost. At this point, the DP-RI tool predicts the reliability of the
Reference System (A1) as 75% and the reliability of the overall DP-system as 83%, indicating
that the vessel can maintain its position. DP-RI prescribes possible solutions to the DPO to
take appropriate action to fix the failure. The prescribed suggestions are as follows:
1. Satellite signal needs to be corrected
2. Another reference signal can be used as a precision signal
3. Manual value can be set in the system

The DPO could select one of the possible suggestions and implement it to see whether the
failure is fixed. If one suggestion does not solve it, then the next could be tried to fix the issue.
As all the possible losses are shown, the DPO could react quickly and bring the vessel to regular
operation by addressing the right failure.

In the meantime, another failure is simulated in the DP control system through loss of model
reference. The DP-RI tool predicts the reliability of the DP control system (A2) as 25% and
the overall reliability of the DP system as 40%. Now the DP vessel's overall reliability to
maintain its position is “Low,” and the vessel may start losing its position. DP-RI prescribes
the following possible solutions to the DPO to take appropriate action:
1. Use Back-Up DP control system
2. Set manual value if possible or change to other modes of operation

The DPO could select one of the possible suggestions and implement it to see whether the
failure is fixed. The errors in the Reference system (A1) and the DP control system (A2) are
fixed as the DPO could react quickly and bring the vessel to regular operation. The DP-RI tool
predicts the reliability of the overall DP system as 100%.

Table 8-8 Case Study 6 - Hypothetical experiment results summary
Failure in sub-systems:
Reference System (A1) → Primary cause
DP control system (A2) → Secondary cause

Risk Studies and documents used for verification and validation:


• DP Capability Plots
• Proving trial results
• Site-Specific Risk Analysis

Conclusion – Verification and Validation:


The DP-RI tool provided the appropriate solutions instantaneously to the DPO to take action and resolve the issue in the system. The failure, which would otherwise have resulted in the vessel experiencing a significant loss of position, was prevented with the help of the DP-RI. The results were compared with industry-proven risk studies such as CAMO and ASOG.

Case Study 7:
Multiple failures in different sub-systems – From FMEA and DP proving trial test

Description of the case study and detailed analysis of simulation:


The experimental simulation of the hypothetical scenario was executed, as shown in Table 8-9. The failure was simulated in the wind sensor signal through a power failure, which caused a communication failure from the IAS to the DP control system. At this point, the DP-RI tool
predicts the reliability of the Environmental System (A6) as 66.66% and the reliability of the
overall DP-system as 86.66%, indicating that the vessel can maintain its position. DP-RI
prescribes possible solutions to the DPO to take appropriate action to fix the failure.
The prescribed suggestions are as follows:
1. Change to wind sensor signals from back-up DP control system
2. Isolate wind sensor 1 as it caused CCF of IO module
3. Fix CCF and restore the Main DP control system.

The DPO could select one of the possible suggestions for the initial failure and implement it to see whether the failure could be resolved. As all the possible suggestions are provided, the DPO could react quickly and bring the vessel to regular operation by addressing the leading cause of Event 1.

Table 8-9 Case Study 7- Hypothetical experiment results summary
In the meantime, the failure causes the loss of the IO module in the main DP control system, leading to the unavailability of critical signals. The DPO's capability for decision-making and situation awareness reduces towards the end of the shift. The DP-RI tool predicts the reliability of the DP control system (A2) as 50% and Human / Operator error as 50%. The overall reliability of the DP system is now 45.66%; the DP vessel's overall reliability to maintain its position is “Low”, and the vessel may start to lose its position. DP-RI prescribes the following possible solutions to the DPO to take appropriate action:
1. Set to back-up DP control system
2. Perform shift change for the DPO
3. Take the DP to manual control if possible as per DP alert status

The DPO could select one of the possible suggestions and implement it to see whether the
failure is resolved. The errors in the Environmental system (A6), the DP control system (A2),
and the Human / Operator error (A7) are fixed as the DPO could react quickly and bring the
vessel to a safe state. The DP-RI tool predicts the reliability of the overall DP system as 100%.

Failure in sub-systems:
Environmental System (A6) → Primary cause
DP control system (A2) → Secondary cause
Human / Operator error (A7) → Indirect secondary cause

Risk Studies and documents used for verification and validation:


• FMEA results
• Proving trial test procedure results
• DP simulator
• Site-Specific Risk Analysis

Conclusion – Verification and Validation:


The DP-RI tool was able to process the data continuously and instantaneously predict and prescribe solutions to the DPO. This enables the operators to focus on the issues correctly and promptly, rather than checking documents, which would be time-consuming. More importantly, the differentiating factor of the DP-RI tool is that, for multiple failures, it provides solutions for the particular failures right away and prevents them from leading to a DP incident.

Case Study 8:
Multiple failures due to CCF– From SAT, FMEA, and DP proving trial test procedure

Description of the case study and detailed analysis of simulation:


The experimental simulation of the hypothetical scenarios was executed, as shown in Table
8-10, with a sequence of failures simulated in different sub-systems. The first failure was
simulated in SWBD B of the electrical system (A5) with loss of system B. At this point, the
DP-RI tool predicts the reliability of the electrical system (A5) as 66.66%, and the reliability
of the overall DP-system as 88% indicating that the vessel can maintain its position with
systems A and C. DP-RI prescribes possible solutions to the DPO to take appropriate action to
fix the system B failure. The prescribed suggestions are as follows:
1. Check whether System A and System C can maintain the position. If the position can be maintained within their capability, start the necessary sub-systems with Systems A and C.
2. Rectify the SWBD failure in System B
3. Ensure there is no CCF affecting other systems.

The DPO could select one of the possible suggestions and implement it to see whether the vessel is able to maintain its position. If option 1 does not address the failure, then System B must be restored. If one suggestion does not solve it, then the next could be tried to fix the issue. As all the possible failures are shown, the DPO could react quickly and bring the vessel to regular operation by addressing the loss in system B.

In the meantime, another failure was simulated in one of the direct factor parameters, as shown in Figure 4-11, for the human/operator error (A7) sub-system, affecting the performance of the overall DP vessel. This failure was simulated without clearing the loss in system B. The DP-RI tool predicts the reliability of human / operator error (A7) as 25% and the overall reliability of the DP system as 68%. Now the DP vessel's overall reliability to maintain its position is “Medium,” and the vessel may start to lose its position. DP-RI prescribes the following possible solutions to the DPO to take appropriate action:
1. Change the shift of the DPO
2. Operate the vessel with maximum redundancy to ensure failure consequences are
addressed through mitigation action
3. Ensure that system B is restored

Table 8-10 Case Study 8 - Hypothetical experiment results summary
As the error in the electrical system (A5) of system B was fixed during this time, the overall reliability of the DP system rose to 82%. The DPO could select one of the possible suggestions and implement it to see whether the failure is fixed and system B restored. The human/operator error (A7) could be addressed, as the DPO could react quickly and return to regular operation. The DP-RI tool then predicts the overall reliability of the DP system as 100%.

Failure in sub-systems:
Electrical System (A5) → Primary cause
Human / Operator Error (A7) → Secondary cause

Risk Studies and documents used for verification and validation:


• SAT procedure
• FMEA results
• Proving trial test procedure results
• Site-Specific Risk Analysis

Conclusion – Verification and Validation:


This specific case scenario was tested with the intention of seeing how the human factor plays a critical role during a failure in a sub-system. In conclusion, the procedure without the DP-RI could have resulted in a DP incident (LOP 1), because the failure on SWBD B was dormant (unnoticed) and the DPO was alerted only when the consequence materialised. There were multiple failures for the DPO to react to and fix within a limited time. In this scenario, the reliability of the sub-system Human / Operator error was 25%, and the failure could not be fixed in time, leading to a DP incident. The DP-RI tool results were verified and validated against the SAT procedure and proving trial results. Additionally, more information was readily available, qualitatively and quantitatively, for the DPO to decide on the next steps with ease.

8.9 Summary of Verification and Validation – Test Results

In addition to the case studies presented above, numerous tests were performed with the DP-
RI tool. The actual case studies were extracted from the IMCA and WOAD databases, and the
results were verified and validated against the detailed report prepared for each incident by
experts. Similarly, the hypothetical case studies were extracted from the FAT, CAT, and DP
operation manuals and proving trial procedures, and the results were verified and validated
against the expected results of those methods. The DP-RI tool was tested with different cases
to ensure that it could perform its function as intended. Different DP classes, configurations,
vessel types, system providers, and DPO experience levels were considered, and analysis was
performed to identify the gaps. The necessary functionalities were added to implement the tool
on an operational vessel. A summary of the test results is presented in Table 8-11, which shows
the number of actual and hypothetical cases, along with the accuracy of prediction.

Table 8-11 Summary of results – DP-RI tool

From the experimental results, it was evident that the DP-RI tool was able to predict and
prescribe possible suggestions even for new scenarios on which it had not been trained. When
it was tested on a case similar to the training data, the accuracy was 100%, matching the exact
results of the reference documentation and of industry experts. For all the case studies, the
prescribed suggestions were evaluated against the existing risk studies and verified to match
the real solutions to be implemented in the case of failures. The time taken by the DPO to
implement the solution and prevent an accident was halved (a 50% reduction) with the DP-RI,
without any reduction in accuracy. In fact, the accuracy increased, as the inputs of numerous
experts were taken into consideration from the database when suggesting possible solutions to
the DPO. Thus, the DP-RI tool is a novel advisory tool for DP vessels, predicting the reliability
of the DP system and assisting the DPO during complex operations.
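
As a rough illustration of how such suggestions can be retrieved from a knowledge database,
the sketch below runs an off-the-shelf SQuAD-fine-tuned BERT model from the Hugging Face
transformers library as a Question and Answering system over a short FMEA-style excerpt.
The model name and the excerpt are stand-ins for illustration and do not reproduce the tool's
actual fine-tuned model or knowledge database.

# Rough illustration of BERT as a Question-and-Answering system over the
# knowledge database. Model name and FMEA excerpt are illustrative only.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "On loss of switchboard B the vessel shall be operated on systems A and C. "
    "The DPO shall verify remaining thrust capability, rectify the SWBD failure "
    "in system B and confirm that no common cause failure affects other systems."
)

answer = qa(question="What should the DPO do after losing switchboard B?",
            context=context)
print(answer["answer"], f"(score {answer['score']:.2f})")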

9. Conclusions and Recommendations
9.1 Summary of Research

The overall aim of this research was to develop an intelligent, state-of-the-art decision-making
tool using artificial intelligence and data analytics. The decision-making tool, DP-RI, is used
for the prediction of the offline and real-time reliability of DP systems (quantitative and
qualitative) through predictive analytics. The DP-RI tool also suggests possible solutions to
the DPO during any failure in the DP system through prescriptive analytics. This ensures that
the DPO is in complete control and has a clear overview during complex marine operations to
prevent DP-related accidents in the case of failures. A detailed analysis of the limitations and
entanglements of existing risk assessments, safety studies, numerical methods, and simulations
for the reliability assessment of DP systems is presented.

This research work meets the main aim and objectives through a systematic approach, which
is presented in the different chapters of the thesis. The research resulted in a universal database
for DP systems, created using DP system-level FMEA, DP vendor FMEA, HIL, and OREDA.
It also consists of IMCA station-keeping accident analysis reports, DP capability plots, and
site-specific operational risk analyses. The critical information was transformed into structured
data from semi-structured and unstructured data. This was the first significant breakthrough
achieved in the research, and it facilitated carrying out the various analytics in the DP context.
Descriptive and diagnostic analytics were implemented on the database to identify the
correlations between the different sub-systems and the DP system. A new classification of the
sub-systems that play a vital role in DP system functionality was determined, based on big data
concepts. The sub-system inter-dependencies were presented; these were used in developing
the suggestive solutions and in providing a clear overview for the DPO, the lack of which was
one of the limitations of the existing studies. AHP was used to determine the weighting of each
sub-system, and this was validated through LSTM; the core AHP step is sketched below. A
holistic research framework using predictive analytics was proposed for the offline and real-
time prediction of the reliability of the DP system through the LSTM model. In addition,
prescriptive analytics were applied through Natural Language Processing, using BERT as a
Question and Answering model to provide the suggestive solutions to the DPO.
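
For reference, the principal-eigenvector step at the heart of the AHP weighting can be sketched
as follows; the 3 × 3 pairwise-comparison matrix is a made-up example on the Saaty scale, not
the expert judgements collected for this research.

# Minimal sketch of the AHP weighting step: pairwise-comparison matrix ->
# principal eigenvector -> normalised weights, plus the consistency ratio.
# The 3x3 matrix below is a made-up example, not the collected expert data.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # Saaty-scale pairwise comparisons
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # random index RI = 0.58 for n = 3
print(f"weights = {np.round(weights, 3)}, CR = {cr:.3f}")  # CR < 0.1 => consistent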

9.2 Achievements of the Research and Innovative Work

To summarise, the key achievements and innovations of the research are as follows:


• A new method of classification of the DP system was introduced, which identified all
the critical sub-systems and differs from the traditional approach. It was based on the
IMS, consisting of several databases, through the descriptive and diagnostic analytics
of big data concepts.
• An intuitive and systematic approach was proposed for the weighting of the sub-
systems using AHP, with input from various industry experts involved in the different
phases of the DP system life-cycle. This was in line with the IMCA guidance on
involving other discipline leads while, at the same time, expanding the study from
FMEA to much more diverse applications.
• For the first time, a novel research framework was developed based on predictive
analytics using LSTM. This was implemented in predicting the offline and real-time
reliability of DP systems (quantitative and qualitative), and the results were compared
with proven risk assessment methodologies; a minimal sketch of such a network is
given after this list.
• State-of-the-art prescriptive analytics using the BERT model were introduced, with the
model used as a Question and Answering system to generate possible solutions for the
DPO during failures and prevent DP incidents. The prescriptive analytics are combined
with the predictive analytics to function as a unified approach, providing possible
solutions along with a ranking, from which the DPO can choose the best one for any
particular situation.
• Numerical experiments with an actual DP system architecture were created for the
verification and validation of the DP-RI tool. The interfaces between the different DP
systems, through the relevant protocols, were established to ensure system integration
of the DP-RI tool with the actual system on board the vessel. The case studies were
developed from the existing records in the IMCA accident database, together with
complicated situations designed to evaluate the robustness of the DP-RI tool.
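
A minimal sketch of an LSTM network of the kind used for the offline and real-time
forecasting is given below; the layer sizes, window length, and stand-in training data are
illustrative assumptions, not the configuration tuned in this research.

# Minimal sketch of an LSTM time-series model for DP-RI forecasting.
# Layer sizes, window length and the random data are illustrative only.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 30, 7  # 30 time steps of the 7 sub-system signals (A1-A7)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, FEATURES)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # reliability in [0, 1]
])
model.compile(optimizer="adam", loss="mse")

# Stand-in data: sliding windows of sub-system reliabilities -> next DP-RI.
x = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)

print(model.predict(x[:1], verbose=0))  # forecast DP-RI for the latest window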

9.3 Future Research Work

This thesis provides a novel research framework for the prediction of the reliability of the DP
system through state-of-the-art RNN models using universal databases. However, the
prediction is based on the dataset collected for the research through DNV GL and the research
partners, so there is a need for fine-tuning of the models based on actual real-time data from
numerous vessels.

The recent digital transformation in the marine and oil and gas industries has enabled
companies to create a digital ecosystem of technology and infrastructure to share data and
integrate with more partners (vendors, operators, classification societies, etc.). Thus, within a
few years, real-time data from vessels will become accessible for research, allowing the model
to be fine-tuned and the robustness of the DP-RI tool to be increased.

The DP-RI tool results were verified and validated against the existing risk assessment
methods and safety studies. The mathematical model for the reliability of the sub-systems was
developed using RBDs for the different configurations; the overall reliability of the DP system
was then calculated through traditional assurance methods (the standard RBD formulas are
sketched below). However, in the future, the digital twin representation of the actual physical
model will become the norm for sub-system assurance. Currently, due to the lack of
standardisation and the unavailability of guidance for the digital twin, the research on DP-RI
could not include this aspect. In the future, researchers will be able to focus on integrating the
digital twin with the DP-RI from the design stage, to ensure that the sub-system physical
models are kept updated throughout the DP lifecycle. The integration will support estimating
the accuracy of the prediction and, at the same time, help the model to learn new scenarios
more quickly.
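
The traditional RBD calculations referred to above reduce to the standard series, parallel, and
k-out-of-n formulas; the sketch below uses arbitrary component reliabilities for illustration.

# Minimal sketch of the standard RBD formulas used for sub-system reliability:
# series, parallel and k-out-of-n blocks. Component values are arbitrary.
from itertools import combinations
from math import prod

def series(rs):                 # all blocks must work
    return prod(rs)

def parallel(rs):               # at least one block must work
    return 1 - prod(1 - r for r in rs)

def k_out_of_n(rs, k):          # at least k of the n blocks must work
    n = len(rs)
    total = 0.0
    for m in range(k, n + 1):
        for up in combinations(range(n), m):
            total += prod(rs[i] if i in up else 1 - rs[i] for i in range(n))
    return total

swbd = [0.95, 0.95, 0.95]              # three redundant switchboards
print(parallel(swbd))                  # any one switchboard keeps the bus alive
print(k_out_of_n(swbd, 2))             # at least two needed for full capability
print(series([0.99, parallel(swbd)]))  # generator in series with the bus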

With all these future research improvements, it would be possible to develop a more advanced,
state-of-the-art real-time DP-RI tool application. The predicted information from the DP-RI
could be integrated into the DP control system to rapidly indicate the reliability value and
provide possible solutions to the DPO when a failure occurs. Such integration would support
improving the robustness and closed-loop performance of the DP system, providing practical
solutions over time and, in turn, leading towards autonomy.

APPENDIX I: QUESTIONNAIRE TEMPLATE

APPENDIX II: VERIFICATION AND VALIDATION FOR DP-RI TOOL

Case Study No. 1 – DP-RI tool screens:
• Login Page: credentials to log in to the DP-RI tool
• Overview Page
• DP System
• Reference System (A1)
• DP Control System (A2)
• Thruster System (A3)
• Power System (A4)
• Electrical System (A5)
• Environment System (A6)
• Human / Operator Error (A7)
• Risk Profile
• Overall Reliability and Prescriptive Analytics
REFERENCES
[1]. DNV GL Classification Society. 2015. Dynamic positioning vessel design philosophy
guidelines. Section 2: Dynamic positioning vessel design philosophy [Online], DNVGL-RP-
E306. Available: https://rules.dnvgl.com/docs/pdf/DNVGL/RP/2015-07/DNVGL-RP-
E306.pdf [Accessed 19 Feb 2018].

[2]. Fernandez, C, Shashi Bhushan, K, Woo, Wl, Norman, R and Dev, A.K. " Dynamic
Positioning Reliability Index (DP-RI) and Offline Forecasting of DP-RI During Complex
Marine Operations". ASME 2018 37th International Conference on Ocean, Offshore and
Arctic Engineering, Madrid, Spain. Pages: 1-10. 17–22 June 2018 2018, 10.1115/OMAE2018-
77267.

[3]. International Marine Contractors Association (IMCA). 2019. Dynamic Positioning -


Analysis of Station Keeping Incidents Data 1993 - 2019. Dynamic Positioning Station Keeping
Review [Online], DPSI 1-29. Available: https://www.imca-int.com/divisions/marine/dynamic-
positioning/dp-events-incidents/ [Accessed 22 Feb 2020].

[4]. Vedachalam, N and Ramadass, G. A "Reliability assessment of multi-megawatt capacity


offshore dynamic positioning systems". Applied Ocean Research, 63, Pages: 251-261. 2017.

[5]. Consulting, Dnv Gl. 2004. Research report - Review of methods for demonstrating
redundancy in dynamic positioning system for the offshore industry. HSE Science and
Research Publication [Online]. Available: https://www.hse.gov.uk/research/rrpdf/rr195.pdf
[Accessed 09 Nov 2018].

[6]. Fossen, T.I 2002. Marine control systems: guidance, navigation and control of ships, rigs
and underwater vehicles, Trondheim, Norway, Marine Cybernetics. Number of Pages: 570.
2002.

[7]. Ralph, H.B. " FPSO NORNE Close Proximity DP Operations with MSV REGALIA".
Dynamic Positioning Conference, Houston, USA. Pages: 1-20. 17-18 October 2000.

[8]. Marine Safety Committee. 2017. Guidelines For Vessels And Units With Dynamic
Positioning (DP) Systems. Section 3: Functional Requirements [Online], MSc.1 / Circ 1580.
Available: https://www.register-iri.com/wp-content/uploads/MSC.1-Circ.1580.pdf [Accessed
23 July 2019].

[9]. Rokseth, B, Ingrid Bouwer, U and Jan Erik, V "A systems approach to risk analysis of
maritime operations". Proceedings of the Institution of Mechanical Engineers, Part O: Journal
of Risk and Reliability, 231, Pages: 53-68. 2016.

[10]. Arjen, T, Clemens, V.D.N, Hugo, G and Douwe, S. " The Road to Eliminating Operator
Related Dynamic Positioning Incidents". Dynamic Positioning Conference, Houston, USA.
Pages: 1-18. 9-10 October 2007.

[11]. Fossen, T.I 1994. Guidance and control of ocean vehicles, New York, NY, USA, John
Wiley & SOns. Number of Pages: 494. 1994.

262
[12]. Dev, A.K. " Environmental Forces for Dynamic Positioning: Ships vs. Semi-
Submersibles". 4th International Conference on Technology and Operation of Offshore
Support Vessels, Singapore. Pages: 64-78. 16-17 August 2011 2011, 10.3850/978-981-08-
9731-4_OSV2011-08.

[13]. Arditti, F, Cozijn, H, Van Daalen, E and Tannuri, E. A "Robust thrust allocation algorithm
considering hydrodynamic interactions and actuator physical limitations". Journal of Marine
Science and Technology, 24, Pages: 1057-1070. 2019.

[14]. International Electrotechnical Commission (IEC). 2018. Functional safety - Safety


instrumented systems for the process industry sector. Part 1 - 3 [Online], IEC 61511 Available:
https://webstore.iec.ch/publication/5527 [Accessed 12 Oct 2020].

[15]. E, Remi, Jacob, H and Robert, M. "Assessing the Reliability Of Dynamic Positioning
Systems For Deepwater Drilling Vessels". Dynamic Positioning Conference, Houston, USA.
Pages: 1-13. 12-13 October 1999.

[16]. Fernandez, C, Dev, A.K, Norman, R, Woo, Wl and Shashi Bhushan, K. "Dynamic
Positioning System: Systematic Weight Assignment for DP Sub-Systems Using Multi-Criteria
Evaluation Technique Analytic Hierarchy Process and Validation Using DP-RI Tool With
Deep Learning Algorithm". ASME 2019 38th International Conference on Ocean, Offshore
and Arctic Engineering, Glasgow, Scotland, UK. 9-14 June 2019 2019, 10.1115/OMAE2019-
95485.

[17]. Clavijo, V, Miralles Schleder, A and Ramos Martins, M. "Reliability analysis of Dynamic
Positioning Systems". Progress in Maritime Technology and Engineering: Proceedings of the
4th International Conference on Maritime Technology and Engineering (MARTECH 2018),
Lisbon, Portugal. Pages: 265-272. 7-9 May 2018.

[18]. Cheliyan, A. S. and Bhattacharyya, S. K. "Dynamic fault tree analysis of dynamic


positioning system using Monte Carlo approach". Safety in Extreme Environments, 1, Pages:
1-9. 2019.

[19]. Marvin, R and Arnljot, H 2003. System Reliability Theory: Models, Statistical Methods,
and Applications, 2nd Edition, Hoboken, New Jersey, USA, John Wiley & Sons. Number of
Pages: 672. 2003.

[20]. DNV GL Digital Solutions. 1970-2020. WOAD - World Offshore Accident Database.
Oil and Gas offshore accident data for diverse facility types [Online]. Available:
https://woad.dnvgl.com/Login.aspx?ReturnUrl=%2fdefault.aspx [Accessed 22 Feb 2020].

[21]. Chen, H and Moan, T "DP incidents on mobile offshore drilling units on the Norwegian
continental shelf". Advances in safety and reliability, 1, Pages: 37-44. 2005.

[22]. Petroleum Safety Authority (PSA). 2011. Risikonivå i petroleumsvirksomheten. Norsk


sokkel:Prosjektrapport–Akutte-utslipp [Online]. Available:
https://translate.google.com/translate?hl=en&sl=no&u=https://www.ptil.no/fagstoff/rnnp/&p
rev=search&pto=aue.Pages: 1-176. [Accessed 02 March 2020].

263
[23]. Marine Technology Society (MTS). 2019. DP Vessel Design Philosophy Guidelines. DP
Control Systems [Online], 10201925. Available: https://dynamic-positioning.com/wp-
content/uploads/2019/11/MTS-DP-Vessel-Design-Philosophy-Document-2019-10201925-
DRAFT.pdf [Accessed 29 Dec 2019].

[24]. Jenman, Chris. " DP Past, Present and Future". Dynamic Positioning Conference,
Houston, USA. Pages: 1-13. 11-12 October 2011.

[25]. DNV GL Classification Society. 2018. Competence of dynamic positioning operators.


Competence Requirements [Online], DNVGL-ST-0023 Available:
https://rules.dnvgl.com/docs/pdf/dnvgl/st/2014-04/dnvgl-st-0023.pdf [Accessed 07 Sep 2019].

[26]. Pazouki, K, Forbes, N, Norman, R and Woodward, M D "Investigation on the impact of


human-automation interaction in maritime operations". Ocean Engineering, 153, Pages: 297-
304. 2018.

[27]. International Marine Contractors Association (IMCA). 2016. Guidelines for The Training
and Experience of Key DP Personnel. Section 7: Qualification and Knowledge Requirement
of Key DP Personnel [Online], IMCA M117. Available: https://www.imca-
int.com/publications/97/the-training-and-experience-of-key-dp-personnel/ [Accessed 30 Nov
2019].

[28]. Health and Safety Executive (HSE), Uk. 2013. Inspection of Maritime Integrity (Loss of
Stability & Position). HID Inspection Guide Offshore [Online], ED OIG - Loss of Stability &
Position. Available: https://www.hse.gov.uk/offshore/ed-loss-of-stability-and-positioning.pdf
[Accessed 04 Dec 2019].

[29]. National Offshore Petroleum Safety and Environmental Management Authority


(NOPSEMA) . 2017. DP systems' tolerance to human error: an international collaboration. DP
system [Online], Regulator Article. Available: https://www.nopsema.gov.au/safety/safety-
resources/dp-systems-tolerance-to-human-error-an-international-collaboration/ [Accessed 04
April 2020].

[30]. Norwegian Maritime Authority (NMA). 2016. Regulations of 10 July 2009 No. 998 on
positioning and anchoring systems on mobile offshore units. Section 10 Dynamic Positioning
[Online], Anchoring Regulations 09. Available:
https://www.sdir.no/contentassets/309195da396848bf8f2930e093c47157/10-july-2009-no.-
998-anchoring-regulations-09.pdf?t=1604129361848 [Accessed 18 Nov 2018].

[31]. Petroleum Safety Authority (PSA). 2018. Regulation 90 Positioning. Chapter XVI
Maritime Operations [Online], Regulations. Available: https://www.ptil.no/en/regulations/all-
acts/the-activities-regulations3/XVI/90/ [Accessed 04 Jan 2020].

[32]. Oil Companies International Marine Forum (OCIMF). 2016. Dynamic Positioning
Assurance Framework - Risk based Guidance. Dynamic Positioning Assurance Framework
[Online], DPAF. Available: https://www.ocimf.org/media/159746/dp-failure-mode-effects-
analysis-assurance-framework-risk-based-guidance.pdf [Accessed 30 July 2020].

264
[33]. Marine Technology Society (MTS). 2012. DP Operations Guidance - Part 1. DP related
documentation [Online], MTS_DP_TECH_COMM_DP GUIDANCE_PART_1_Ver2-
09201217.Available:https://dynamicpositioning.com/files_mailing/dp_tech_committee_dpgu
idance_part1.pdf [Accessed 02 March 2019].

[34]. Hadnett, E "A Bridge Too Far?". Journal of Navigation, 61, Pages: 283-289. 2008.

[35]. American Bureau f Shipping (ABS). 2020. Guide for Dynamic Posiioning Systems April
2020. Dynamic Positioning System Design [Online], ABS Guide for Dynamic Positioning
Systems.Available:https://ww2.eagle.org/content/dam/eagle/rules-and-
guides/current/other/191_dpsguide/dps-guide-apr20.pdf [Accessed 15 June 2020].

[36]. Bureau Veritas (BV). 2020. Rules for the Classification of Steel Ships. Part F - Additional
Class Notation [Online], NR 467.F1 DT R12 E. Available:
http://erules.veristar.com/dy/data/bv/pdf/467-NR_PartF_2020-01.pdf [Accessed 19 Jan 2020].

[37]. China Classification Society (CCS). 2018. Rules for Classification of sea-going steel
ships. Additional Requirements [Online], CCS Volume 5. Available:
http://www.ccs.org.cn/ccswzen/font/fontAction!downloadArticleFile.do?attachId=4028e3d6
63380af801634e251b420015 [Accessed 06 Oct 2019].

[38]. Indian Register of Shiping (IRS). 2015. Rules and Regulations for the Construction and
Classification of Indian Coast Guard Ships. Dynamic Positioning Systems [Online], IR Class.
Available: https://www.irclass.org/media/4214/icg_rules_2015.pdf [Accessed 08 March
2020].

[39]. Korean Register (KR). 2007. Introduction to the Classification Technical Rules. Part 9
Chapter 4 Dynamic Positioning System [Online], GR-00-E. Available:
http://eclass.krs.co.kr/krrules/KRRules2007/data/data_part/english/pe000000.pdf [Accessed
09 Dec 2018].

[40]. Registro Italiano Navale (RINA). 2020. Rules for the Classification of Ships.
Classification of Ships. Available: https://www.rina.org/en/rules [Accessed 08 May 2020].

[41]. Russian Maritime Register Of Shipping (RS). 2018. Rules for Technical Supervision
during construction of ships and manufacture of materials and products for ships. General
Regulations for Technical Supervision [Online], ND No. 2-020101-040-E Volume 1.
Available: https://rs-class.org/upload/iblock/67d/67d9a34333c291532a9e038565cf7d3e.pdf
[Accessed 04 May 2019].

[42]. Nippon Kaiji Kyokai (NK). 2019. Rules for the survey and construction of Steel Ships.
Part H - Electrical Installations [Online], Machinery Rules: ERUL20-420. Available:
https://www.classnk.or.jp/hp/en/publications/pub_rule.aspx [Accessed 03 Sep 2020].

[43]. Lloyd's Register (LR). 2019. Rules and Regulations for the Classification of Ships. Part
7 Chapter 4 [Online], Classification of Ships. Available:
http://docshare04.docshare.tips/files/30667/306670934.pdf [Accessed 14 Nov 2019].

265
[44]. DNV GL Classification Society. 2017. Competence in dynamic positioning for key
technical personnel. Introduction [Online], DNVGL-ST-0123. Available:
https://rules.dnvgl.com/docs/pdf/DNVGL/ST/2017-03/DNVGL-ST-0123.pdf [Accessed 20
Sep 2019].

[45]. International Marine Contractors Association (IMCA). 2019. Guidance on Failure Modes
and Effects Analysis (FMEA). Section 3: DP FMEA Methodology [Online], IMCA M166.
Available: https://www.imca-int.com/publications/179/guidance-on-failure-modes-and-
effects-analysis-fmea/ [Accessed 05 Dec 2019].

[46]. DNV GL Classification Society. 2012. Failure Mode and Effect Analysis (FMEA) of
Redundant Systems. Redundancy Design Intention [Online], DNV-RP-D102 Available:
https://rules.dnvgl.com/docs/pdf/DNV/codes/docs/2012-01/RP-D102.pdf [Accessed 22 Oct
2019].

[47]. Johansen, T, Fossen, T and Bjørnar, V. "Hardware-in-the-loop Testing of DP systems".


Dynamic Positioning Conference, Houston, USA. Pages: 1-16. 15-16 November 2005.

[48]. The Institute of Electrical and Electronics Engineers (IEEE). 2003. IEEE Std 1413.1.
IEEE Guide for Selecting and Using Reliability Predictions Based on IEEE 1413 [Online],
1413.1. [Accessed 27 Aug 2019].

[49]. Chu, T, Yue, M, Martinez-Guridi, M and Lehner, J "Review of Quantitative Software


Reliability Methods". International Topical Meeting on Probabilistic Safety Assessment and
Analysis 2011, PSA 2011, 3, Pages: 1-99. 2011.

[50]. Kiran, N.R and Ravi, V. "Software Reliability Prediction Using Wavelet Neural
Networks". International Conference on Computational Intelligence and Multimedia
Applications (ICCIMA 2007), Sivakasi, Tamil Nadu, India. Pages: 195-199. 13-15 December
2007. 10.1109/ICCIMA.2007.104.

[51]. Wang, F, Bai, Y and Xu, F. " Reliability Analysis of Redundant Dynamic Positioning
Control System With Human Factor Involved". ASME 2016 35th International Conference on
Ocean, Offshore and Arctic Engineering, Busan, South Korea. Pages: 1-5. 19-24 June 2016.
10.1115/OMAE2016-54101.

[52]. Fang, W, Ming, L, Lei, L and Yong, B "On Markov modelling for reliability analysis of
class 3 dynamic positioning (DP) control system". Ships and Offshore Structures, 13, Pages:
191-201. 2018.

[53]. International Marine Contractors Association (IMCA). 2017. Example Specification for
a DP FMEA for a New DP Vessel. Section 1-7 [Online], IMCA M219. Available:
https://www.imca-int.com/publications/330/example-specification-for-a-dp-fmea-for-a-new-
dp-vessel/ [Accessed 05 Feb 2020].

[54]. International Electrotechnical Commission (IEC). 2018. Failure modes and effects
analysis (FMEA and FMECA). Section 5: Methodology for FMEA [Online], IEC 60812.
Available: https://webstore.iec.ch/publication/26359 [Accessed 09 March 2019].

266
[55]. DNV GL Classification Society. 2016. Hardware in the loop testing (HIL). General
[Online], DNVGL-ST-0373 Available: http://rules.dnvgl.com/docs/pdf/dnvgl/ST/2016-
05/DNVGL-ST-0373.pdf [Accessed 20 Sep 2019].

[56]. Øyvind, S "Managing DP System Software - A Life-cycle Perspective". IFAC-


PapersOnLine, 48, Pages: 324-334. 2015.

[57]. Jinlong, M and Dev, A.K. "DP Capability Analysis by Time Domain Simulation
Approach". 6th International Conference on Operation and Technology of Offshore Support
Vessels, Singapore. Pages: 47-55. 01 September 2016.

[58]. International Marine Contractors Association (IMCA). 2017. Specification for DP


Capability Plots. Section 4: Failure Condition [Online], IMCA M140. Available:
https://www.imca-int.com/publications/111/specification-for-dp-capability-plots/ [Accessed
22 Dec 2019].

[59]. DNV GL Classification Society. 2018. Assessment of station keeping capability of


dynamic positioning vessels. General concept description [Online], DNVGL-ST-0111.
Available: https://rules.dnvgl.com/docs/pdf/DNVGL/ST/2016-07/DNVGL-ST-0111.pdf
[Accessed 20 Jan 2020].

[60]. Marine Technology Society (MTS). 2012. DP Operations Guidance to aid in the safe and
effective management of DP Operations - Part 2. Section 4: Themes [Online], DP Guidance
Part2.Available:https://dynamicpositioning.com/files_mailing/dp_tech_committee_dpguidan
ce_part1.pdf [Accessed 13 May 2020].

[61]. Pivano, L, Smogeli, O and Vik, B. "DynCap - The Next Level Dynamic DP Capability
Analysis.". Proceedings of the 2nd Marine Operations Specialty Symposium, Singapore.
Pages: 17-26. 6 – 8 August 2012. 10.3850/978-981-07-1896-1_MOSS-18.

[62]. Torstein, B, Johansen, T.A, Sørensen, A.J and Mathiesen, E "Dynamic consequence
analysis of marine electric power plant in dynamic positioning". Applied Ocean Research, 57,
Pages: 30-39. 2016.

[63]. (Imca), International Marine Contractors Association. 2012. Guidance on Operational


Activity Planning. Section 5: Descriptions [Online], IMCA M220. Available:
https://www.imca-int.com/publications/331/guidance-on-operational-activity-planning/
[Accessed 16 Dec 2019].

[64]. International Electrotechnical Commission (IEC). 2005. Functional safety of


Electrical/Electronic/Programmable Electronic Safety Related Systems Part 1- 7 [Online], IEC
61508. Available: https://www.iec.ch/functionalsafety/ [Accessed 22 June 2020].

[65]. Hasan, O, Ahmed, W, Tahar, S and Hamdi, M. "Reliability block diagrams based
analysis: A survey". American Institue of Physics (AIP) Conference Proceedings, Online.
Pages: 1-4. 01 April 2015. 10.1063/1.4913184.

267
[66]. Souza-Franco, V, Clavijo, V, Miralles Schleder, A and Ramos Martins, M. " Monte Carlo
Simulation to Consider Uncertainty in the Reliability Analysis of Dynamic Positioning
Systems". Proceedings of the 29th European Safety and Reliability Conference. Pages: 2446-
2453. 22-26 Sep 2019.10.3850/978-981-11-2724-3_0482-cd.

[67]. Kiran, D.R. "Reliability Engineering". Total Quality Management. England, United
Kingdom: Butterworth-Heinemann. Pages: 391-404. 2017.

[68]. Gao, M, Shi, G and Li, S "Online Prediction of Ship Behavior with Automatic
Identification System Sensor Data Using Bidirectional Long Short-Term Memory Recurrent
Neural Network". Sensors, 18, Pages: 1-16. 2018.

[69]. Kumar, N.K, Savitha, R and Mamun, A.Al "Regional ocean wave height prediction using
sequential learning neural networks". Ocean Engineering, 129, Pages: 605-612. 2017.

[70]. Sagheer, A and Kotb, M "Time series forecasting of petroleum production using deep
LSTM recurrent networks". Neurocomputing, 323, Pages: 203-213. 2019.

[71]. Vaidyanathan, K and Trivedi, K.S. "A measurement-based model for estimation of
resource exhaustion in operational software systems". Proceedings 10th International
Symposium on Software Reliability Engineering, Boca Raton, FL, USA. Pages: 84-93. 1-4
Nov 1999. 10.1109/ISSRE.1999.809313.

[72]. Andrzejak, A and Silva, L. " Deterministic Models of Software Aging and Optimal
Rejuvenation Schedules". 10th IFIP/IEEE Symposium on Integrated Management (IM 2007),
Munich, Germany. Pages: 159-168. 21-25 May 2007. 10.1109/INM.2007.374780.

[73]. Jan, B, Farman, H, Khan, M, Imran, M, Islam, I.U, Ahmad, A, Ali, S and Jeon, G "Deep
learning in big data Analytics: A comparative study". Computers & Electrical Engineering,
75, Pages: 275-287. 2019.

[74]. Roy, Pratik, Mahapatra, G. S., Pooja, R, Pandey, S. K. and Dey, K. N. "Robust
feedforward and recurrent neural network based dynamic weighted combination models for
software reliability prediction". Applied Soft Computing, 22, Pages: 629-637. 2014.

[75]. Lakshmanan, I and Ramasamy, S "An Artificial Neural-Network Approach to Software


Reliability Growth Modeling". Procedia Computer Science, 57, Pages: 695-702. 2015.

[76]. Jaiswal, A and Malhotra, R "Software reliability prediction using machine learning
techniques". International Journal of System Assurance Engineering and Management, 9,
Pages: 230-244. 2018.

[77]. Ho, S. L., Xie, M. and Goh, T. N. "A comparative study of neural network and Box-
Jenkins ARIMA modeling in time series prediction". Computers & Industrial Engineering, 42,
Pages: 371-375. 2002.

268
[79]. Hongbing, W, Zhengping, Y, Qi, Y, Tianjing, H and Xin, L "Online reliability time series
prediction via convolutional neural network and long short term memory for service-oriented
systems". Knowledge-Based Systems, 159, Pages: 132-147. 2018.

[80]. Caihong, H, Qiang, W, Hui, L, Shengqi, J, Nan, L and Zhengzheng, L "Deep Learning
with a Long Short-Term Memory Networks Approach for Rainfall-Runoff Simulation". Water,
10, Pages: 1-16. 2018.

[81]. Chen, J, Jing, H, Chang, Y and Liu, Q "Gated recurrent unit based recurrent neural
network for remaining useful life prediction of nonlinear deterioration process". Reliability
Engineering & System Safety, 185, Pages: 372-382. 2019.

[82]. Zhang, F, Sun, K and Wu, X "A novel variable selection algorithm for multi-layer
perceptron with elastic net". Neurocomputing, 361, Pages: 110-118. 2019.

[83]. Eslami, E, Choi, Y, Lops, Y and Sayeed, A "A real-time hourly ozone prediction system
using deep convolutional neural network". Neural Computing and Applications 32, Pages : 1-
15. 2020.

[84]. Deng, Y, Bucchianico, A.D and Pechenizkiy, M "Controlling the accuracy and
uncertainty trade-off in RUL prediction with a surrogate Wiener propagation model".
Reliability Engineering & System Safety, 196, Pages: 1-10. 2020.

[85]. Poornima, S and Pushpalatha, M "A journey from big data towards prescriptive
analytics". ARPN Journal of Engineering and Applied Sciences, 11, Pages: 11465-11474.
2016.

[86]. Hogenboom, Sandra, Rokseth, Børge, Vinnem, Jan Erik and Utne, Ingrid Bouwer
"Human reliability and the impact of control function allocation in the design of dynamic
positioning systems". Reliability Engineering & System Safety, 194, Pages: 1-11. 2020.

[87]. Internaional Maritime Organization (IMO). 2010. The 2010 Manila Amendments to the
International Convention on Standards of Training, Certification and Watchkeeping for
Seafarers, 1978. Adoption Of The Final Act And Any Instruments, Resolutions And
Recommendations Resulting From The Work Of The Conference [Online], MSC-
MEPC.2/Circ.15/Rev.1. Available: https://www.samgongustofa.is/media/english/STCW-
CONF.2-33-Attachment-1-to-the-Final-Act-of-the-ConferenceResolution-1The-Manila-
Amendments-to-the-an.-Secret-1-.pdf [Accessed 15 March 2017].

[88]. Christoph Alexander, T, Ingrid Bouwer, U and Stein, H "Assessing ship risk model
applicability to Marine Autonomous Surface Ships". Ocean Engineering, 165, Pages: 140-154.
2018.

[89]. Man, Y, Weber, R, Cimbritz, J, Lundh, M and Mackinnon, Sc "Human factor issues
during remote ship monitoring tasks: An ecological lesson for system design in a distributed
context". International Journal of Industrial Ergonomics, 68, Pages: 231-244. 2018.

269
[78]. Musharraf, M, Smith, J, Khan, F and Veitch, B "Identifying route selection strategies in
offshore emergency situations using decision trees". Reliability Engineering & System Safety,
194, Pages: 1-10. 2020.

[90]. Chen, H, Moan, T and Verhoeven, H "Safety of dynamic positioning operations on


mobile offshore drilling units". Reliability Engineering & System Safety, 93, Pages: 1072-
1090. 2008.

[91]. Hongfang, L, Lijun, G, Mohammadamin, A and Huang, K "Oil and Gas 4.0 era: A
systematic review and outlook". Computers in Industry, 111, Pages: 68-90. 2019.

[92]. Mohammadpoor, M and Torabi, F "Big Data analytics in oil and gas industry: An
emerging trend". Petroleum, In Press, Pages: 1-8. 2018.

[93]. Saleem, S, Eric, T and Eric, S "Interrelationship between big data and knowledge
management: an exploratory study in the oil and gas sector". Journal of Knowledge
Management, 21, Pages: 180-196. 2017.

[94]. Perrons, R.K and Jensen, J.W "Data as an asset: What the oil and gas sector can learn
from other industries about “Big Data”". Energy Policy, 81, Pages: 117-121. 2015.

[95]. Hong, Y, Zhang, M and Meeker, W.Q "Big data and reliability applications: The
complexity dimension". Journal of Quality Technology, 50, Pages: 135-149. 2018.

[96]. DAMA International. 2017. Data Management Body of Knowledge - Second Edition.
DMBOK [Online], DAMA-DMBOK2. Available: https://technicspub.com/dmbok/ [Accessed
22 Jan 2020].

[97]. DNV GL Digital Solutions (Various Oil. 2016. Offshore and Onshore Reliability Data.
Consortium has collected Oil & Gas reliability data for almost 40 years [Online]. Available:
https://store.veracity.com/oreda-cloud-offshore-onshore-reliability-
data?utm_source=oreda.com&utm_medium=referral&utm_campaign=ds_ver_q4_oreda&ut
m_content=front-page [Accessed 16 Aug 2020].

[98]. Karkouch, A, Mousannif, H, Al Moatassime, H and Noel, T "Data quality in internet of


things: A state-of-the-art survey". Journal of Network and Computer Applications, 73, Pages:
57-81. 2016.

[99]. Ryu, G A, Nasridinov, A, Hyungchul, R and Kwan-Hee, Y "Forecasts of the Amount


Purchase Pork Meat by Using Structured and Unstructured Big Data". Agriculture, 10, Pages:
1-14. 2020.

[100]. Ahmad, G, Tilmann, R, Minqing, H, Francois, R, Meikel, P, Alain, C and Hans-Arno,


J. "BigBench: towards an industry standard benchmark for big data analytics". Proceedings of
the 2013 ACM SIGMOD International Conference on Management of Data, New York, New
York, USA. Pages: 1197–1208. 14 June 2013. 10.1145/2463676.2463712.

270
[101]. Kim, Y, Lee, J, Lee, E and Lee, J.H. "Application of Natural Language Processing
(NLP) and Text-Mining of Big-Data to Engineering-Procurement-Construction (EPC) Bid and
Contract Documents". 6th Conference on Data Science and Machine Learning Applications
(CDMA), Prince Sultan University, Riyadh, Saudi Arabia. Pages: 123-128. 4-5 March 2020.
10.1109/CDMA47397.2020.00027.

[102]. Lee, I "Big data: Dimensions, evolution, impacts, and challenges". Business Horizons,
60, Pages: 293-303. 2017.

[103]. Chandarana, P and Vijayalakshmi, M. "Big Data analytics frameworks". International


Conference on Circuits, Systems, Communication and Information Technology Applications
(CSCITA), Mumbai, India. Pages: 430-434. 4-5 April 2014. 10.1109/CSCITA.2014.6839299.

[104]. Acharjya, Debi and P, Kauser "A Survey on Big Data Analytics: Challenges, Open
Research Issues and Tools". International Journal of Advanced Computer Science and
Applications, 7, Pages: 511-518. 2016.

[105]. Rusu, O, Halcu, I, Grigoriu, O, Neculoiu, G, Sandulescu, V, Marinescu, M and


Marinescu, V. "Converting unstructured and semi-structured data into knowledge". 2013 11th
RoEduNet International Conference, Sinaia, Romania. Pages: 1-4. 17-19 Jan 2013.
10.1109/RoEduNet.2013.6511736.

[106]. Janzen, S and Maass, W. " Ontology-Based Natural Language Processing for In-store
Shopping Situations". 2009 IEEE International Conference on Semantic Computing, Berkeley,
CA, USA. Pages: 361-366. 14-16 September 2009. 10.1109/ICSC.2009.44.

[107]. Gharehchopogh, F.S and Khalifelu, Z.A. "Analysis and evaluation of unstructured data:
text mining versus natural language processing". 5th International Conference on Application
of Information and Communication Technologies (AICT), Baku, Azerbaijan. Pages: 1-4. 12-
14 October 2011. 10.1109/ICAICT.2011.6111017.

[108]. Sivaramakrishnan, N, Vandana, V, Vishali, M, Dharshana, S. G., Subramaniyaswamy,


V and Umamakeswari, A "Conversion of unstructured data to structured data with a profile
handling application". International Journal of Mechanical Engineering and Technology, 8,
Pages: 623-630. 2017.

[109]. Padmapriya, G and Hemalatha, M "A Recent Survey on Unstructured Data to Structured
Data in Distributed Data Mining ". International Journal of Computer Technology and
Applications, 5, Pages: 338-344. 2014.

[110]. Qinglin, Q, Fei, T, Tianliang, H, Nabil, A, Ang, L, Yongli, W, Lihui, W and Andrew,
N "Enabling technologies and tools for digital twin". Journal of Manufacturing Systems, 1,
Pages: 1-19. 2019.

[111]. Aiello, Giuseppe, Giallanza, Antonio and Mascarella, Giuseppe "Towards Shipping 4.0.
A preliminary gap analysis". Procedia Manufacturing, 42, Pages: 24-29. 2020.

271
[112]. Chauhan, V and Virk, M "Big Data and Shipping-managing vessel performance".
International Journal on Informatics Visualization, 2, Pages: 73-75. 2017.

[113]. Kimberly, T, Kevin, J and Maria, P "Threats and Impacts in Maritime Cyber Security".
Engineering & Technology Reference, 1, Pages: 1-12. 2012.

[114]. The Apache Software Foundation. 2020. Apache Hadoop Ecosystem: Tools and
Applications [Online]. Available: https://projects.apache.org/projects.html?category
[Accessed 02 Feb 2020].

[115]. Wuthrich, B, Cho, V, Leung, S, Permunetilleke, D, Sankaran, K and Zhang, J. "Daily


stock market forecast from textual web data". Conference Proceedings of 1998 IEEE
International Conference on Systems, Man, and Cybernetics, San Diego, CA, USA, USA.
Pages: 2720-2725. 14 Oct 1998. 10.1109/ICSMC.1998.725072.

[116]. Husamaldin, Laden and Saeed, Nagham "Big data analytics correlation taxonomy".
Information, 11, Pages: 1-12. 2020.

[117]. Fatima, T and Jyothi, S. "Big Data Analytics in Health Care". Emerging Research in
Data Engineering Systems and Computer Communications, Singapore. Pages: 377-387. 20
January 2020.

[118]. Venkata Krishna, P and Mohammad, S. O "Emerging Research in Data Engineering


Systems and Computer Communications". Proceedings of CCODE 2019 - Part of Advances
in Intelligent Systems and Computing book series, 1054, Pages 1- 675. 2019.

[119]. Parsa, M, Zare, H and Ghatee, M "Unsupervised Feature Selection based on Adaptive
Similarity Learning and Subspace Clustering". Engineering Applications of Artificial
Intelligence, 95, Pages: 1-15. 2020.

[120]. Sierra, A and Corbacho, F. "Input and Output Feature Selection". Artificial Neural
Networks - ICANN 2002, International Conference, Madrid, Spain, August 28-30, 2002,
Proceedings. Pages: 625-630. 28-30 August 2002. 10.1007/3-540-46084-5_102.

[121]. Michalski, A and Tsurko, V. "Feature Selection by Distributions Contrasting". AIMSA:


16th International Conference on Artificial Intelligence: Methodology, Systems, and
Applications, Varna, Bulgaria. Pages: 139-149. 11-13 September 2014. 10.1007/978-3-319-
10554-3.

[122]. Tahir, N.M, Hussain, A, Samad, S.A, Ishak, K.A and Halim, R.A. "Feature Selection
for Classification Using Decision Tree". 2006 4th Student Conference on Research and
Development, Selangor, Malaysia. Pages: 99-102. 27-28 June 2006.
10.1109/SCORED.2006.4339317.

[123]. Yashkov, I "Feature selection using decision trees in the problem of JSM classification".
Automatic Documentation and Mathematical Linguistics, 48, Pages: 6-11. 2014.

272
[124]. Brodley, C and Utgoff, P "Multivariate Decision Trees". Machine Learning, 19, Pages:
45-77. 1995.

[125]. Ezzine, I and Benhlima, L. "A Study of Handling Missing Data Methods for Big Data".
5th International Congress on Information Science and Technology (CiSt), Marrakech,
Morocco. Pages: 498-501. 21-27 October 2018. 10.1109/CIST.2018.8596389.

[126]. Ted, C. 2003. The Data Dictionary: A Tool for Management of Variables and Access to
Data. SAS Conference Proceedings: Western Users of SAS Software 2003 November 5-7, 2003,
San Francisco, California, [Online]. Available: https://www.lexjansen.com/cgi-
bin/xsl_transform.php?x=wuss2003.

[127]. Cook, N.D "Correlations between input and output units in neural networks". Cognitive
Science, 19, Pages: 563-574. 1995.

[128]. Luyi, L, Zhenzhou, L and Changcong, Z "Importance analysis for models with
correlated input variables by the state dependent parameters method". Computers &
Mathematics with Applications, 62, Pages: 4547-4556. 2011.

[129]. Fujiwara, K and Kano, M "Nearest Correlation-Based Input Variable Weighting for
Soft-Sensor Design". Frontiers in chemistry, 6, Pages: 1-8. 2018.

[130]. Dong, Y and Qin, S.J "Regression on dynamic PLS structures for supervised learning
of dynamic data". Journal of Process Control, 68, Pages: 64-72. 2018.

[131]. International Marine Contractors Association (IMCA). 2011. Guidance for Developing
and Conducting Annual DP Trials Programmes for DP Vessels. Section 4: Development of the
Trial Programme [Online], IMC M190. Available: https://www.imca-
int.com/publications/309/guidance-developing-conducting-dp-annual-trials-programmes/
[Accessed 05 March 2020].

[132]. Tannuri, E and Morishita, H "Experimental and numerical evaluation of a typical


dynamic positioning system". Applied Ocean Research, 28, Pages: 133-146. 2006.

[133]. Jon, H. "Basics of Dynamic Positioning". Dynamic Positioning Conference, Houston,


USA. Pages: 1-10. 13-14 October 1998.

[134]. Sørensen, A, Sagatun, S and Fossen, T "Design of a Dynamic Positioning System Using
Model Based Control". Control Engineering Practice, 17, Pages: 359-368. 1996.

[135]. Balchen, J.G, Jenssen, N.A, Mathisen, E and Saelid, S. "Dynamic positioning of floating
vessles based on Kalman filtering and optimal control". 19th IEEE Conference on Decision
and Control including the Symposium on Adaptive Processes, Albuquerque, NM, USA, USA.
Pages: 852-864. 10-12 December 1980. 10.1109/CDC.1980.271924.

[136]. Du, J, Hu, X, Liu, H and Chen, C.L.P "Adaptive Robust Output Feedback Control for a
Marine Dynamic Positioning System Based on a High-Gain Observer". IEEE Transactions on
Neural Networks and Learning Systems, 26, Pages: 2775-2786. 2015.

273
[137]. Nils, A.J "What is the DP Current?". DYNAMIC POSITIONING CONFERENCE
October 17-18, 2006, 1-11. 2006.

[138]. Marghany, K.M "An Integration Risk Assessment Approach and application to DP
System". PORT –SAID ENGINEERING RESEARCH JOURNAL, 19, Pages: 82-89. 2018.

[139]. Sorensen, L, Øvergård, K and Martinsen, T.J.S "Understanding human decision making
during critical incidents in dynamic positioning". Contemporary Ergonomics and Human
Factors 2014. Southampton, UK: Taylor and Francis Group - Institue of Ergonomics and
Human Factors. Pages: 359-366. 2014.

[140]. Øvergård, Kjell, Sorensen, Linda, Nazir, Salman and Martinsen, Tone "Critical
incidents during dynamic positioning: operators’ situation awareness and decision-making in
maritime operations". Theoretical Issues in Ergonomics Science, 16, Pages: 366 - 387. 2015.

[141]. Øvergårda, K I, Sorensena, L J, Martinsena, T J and Nazirb, S. "Characteristics of


Dynamic Positioning Operators' Situation Awareness and Decision Making during Critical
Incidents in Maritime Operations". Proceedings of the 5th International Conference on Applied
Human Factors and Ergonomics AHFE, Kraków, Poland. Pages: 240-251. 19-23 July 2014.

[142]. Payne, J. W "Task complexity and contingent processing in decision making: An


information search and protocol analysis". Organizational Behavior and Human Performance,
16, Pages: 366-387. 1976.

[143]. Velasquez, M and Hester, P "An analysis of multi-criteria decision making methods".
International Journal of Operations Research, 10, Pages: 56-66. 2013.

[144]. Emovon, I, Norman, R and Murphy, A.J "Hybrid MCDM based methodology for
selecting the optimum maintenance strategy for ship machinery systems". Journal of
Intelligent Manufacturing, 29, Pages: 519-531. 2018.

[145]. Saaty Thomas, L "How to make a decision: The analytic hierarchy process". European
Journal of Operational Research, 48, Pages: 9-26. 1990.

[146]. Saaty, R. W. "The analytic hierarchy process—what it is and how it is used".


Mathematical Modelling, 9, Pages: 161-176. 1987.

[147]. Pembuain, A, Priyanto, Si and Suparma, L "The Weighting of Risk Factors for Road
Infrastructure Accidents Using Analytic Hierarchy Process Method". Advanced Science
Engineering Information Technology, 9, Pages: 1275-1281. 2019.

[148]. Moslem, S, Farooq, D, Ghorbanzadeh, O and Blaschke, T "Application of the AHP-


BWM model for evaluating driver behavior factors related to road safety: A case study for
Budapest". Symmetry, 12, Pages: 1-11. 2020.

[149]. Arief Dhany, S, Nitin, M, Abdullah Gokhan, Y and Perera, B. J. C. "Using the Analytic
Hierarchy Process to identify parameter weights for developing a water quality index".
Ecological Indicators, 75, Pages: 220-233. 2017.

274
[150]. Bangweon, S and Seokjoong, K "A Method of Assigning Weights Using a Ranking and
Nonhierarchy Comparison". Advances in Decision Sciences, 2016, Pages: 1-9. 2016.

[151]. Kwiesielewicz, M and Van Uden, E "Inconsistent and contradictory judgements in


pairwise comparison method in the AHP". Computers & Operations Research, 31, Pages: 713-
719. 2004.

[152]. Balmat, J.F, Lafont, F, Maifret, R and Pessel, N "A decision-making system to maritime
risk assessment". Ocean Engineering, 38, Pages: 171-176. 2011.

[153]. Vinnem, J.E and Røed, W "Marine Systems Risk Modelling". Offshore Risk Assessment
Vol. 1: Principles, Modelling and Applications of QRA Studies. London: Springer London.
Pages: 395-443. 2020.

[154]. Rauch, E, Vickery, A.R, Brown, C.A and Matt, D.T "SME Requirements and Guidelines
for the Design of Smart and Highly Adaptable Manufacturing Systems". Industry 4.0 for
SMEs: Challenges, Opportunities and Requirements. Cham, Switzerland: Springer
International Publishing. Pages: 39-72. 2020.

[155]. Chou, S.Y, Chang, Y.H and Shen, C.Y "A fuzzy simple additive weighting system under
group decision-making for facility location selection with objective/subjective attributes".
European Journal of Operational Research, 189, Pages: 132-145. 2008.

[156]. Gers, F.A and Schmidhuber, E "LSTM recurrent networks learn simple context-free and
context-sensitive languages". IEEE Transactions on Neural Networks, 12, Pages: 1333-1340.
2001.

[157]. Adam, K, Smagulova, K and James, A.P "Memristive LSTM Architectures". Deep
Learning Classifiers with Memristive Networks: Theory and Applications. Cham, Switzerland:
Springer International Publishing. Pages: 155-167. 2020.

[158]. Ditthapron, A, Banluesombatkul, N, Ketrat, S, Chuangsuwanich, E and Wilaiprasitporn,


T "Universal Joint Feature Extraction for P300 EEG Classification Using Multi-Task
Autoencoder". IEEE Access, 7, Pages: 68415-68428. 2019.

[159]. Wang, J, Du, Y and Wang, J "LSTM based long-term energy consumption prediction
with periodicity". Energy, 197, Pages: 1-12. 2020.

[160]. Yu, W. "A new concept using LSTM Neural Networks for dynamic system
identification". American Control Conference (ACC), Seattle, WA, USA. Pages: 5324-5329.
24-26 May 2017. 10.23919/ACC.2017.7963782.

[161]. Jing, D, Pinjia, Z, Mazumdar, J, Harley, R.G and Venayagamoorthy, G.K. "A
comparison of MLP, RNN and ESN in determining harmonic contributions from nonlinear
loads". 34th Annual Conference of IEEE Industrial Electronics, Orlando, FL, USA. Pages:
3025-3032. 10-13 November 2008. 10.1109/IECON.2008.4758443.

275
[162]. De, V, Teo, T.T, Woo, W.L and Logenthiran, T. "Photovoltaic Power Forecasting using
LSTM on Limited Dataset". IEEE Innovative Smart Grid Technologies - Asia (ISGT Asia),
Singapore, Singapore. Pages: 710-715. 22-25 May 2018. 10.1109/ISGT-Asia.2018.8467934.

[163]. Zuchang, G, C, C. S., Wai Lok, W, Junbo, J and Wei Da, T. "Genetic Algorithm based
Back-Propagation Neural Network approach for fault diagnosis in lithium-ion battery system".
6th International Conference on Power Electronics Systems and Applications (PESA), Hong
Kong, China. Pages: 1-6. 15-17 Dec 2015. 10.1109/PESA.2015.7398911.

[164]. Society, Dnv Gl Classification. 2015. Dynamic positioning systems - operation


guidance. Operation Guidance Modus [Online], DNVGL-RP-E307. Available:
https://rules.dnvgl.com/docs/pdf/DNVGL/RP/2015-07/DNVGL-RP-E307.pdf [Accessed 20
June 2019].

[165]. The Institute of Electrical and Electronics Engineers (IEEE). 2010. IEEE Std 1413.
IEEE Standard Framework for Reliability Prediction of Hardware [Online], 1413. Available:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5446443 [Accessed 24 Nov
2019].

[166]. Anantharaman, M , Islam, R, Khan, F , Garaniya, V and Lewarn, B "Data Analysis to


Evaluate Reliability of a Main Engine". The International Journal on Marine Navigation and
Safety of Sea Transportation, 13, 5. 2019.

[167]. Desai, N "Dynamic Positioning: Method for Disaster Prevention and Risk
Management". Procedia Earth and Planetary Science, 11, Pages: 216-223. 2015.

[168]. Zheng, W and Liyang, X "Dynamic Reliability Model of Components Under Random
Load". IEEE Transactions on Reliability, 57, Pages: 474-479. 2008.

[169]. Lina, S, Huang, N and Yanan, B. "A new fractal based reliability model". 2017 Second
International Conference on Reliability Systems Engineering (ICRSE), Beijing, China. Pages:
1-6. 10-12 July 2017. 10.1109/ICRSE.2017.8030781.

[170]. Murthy, D.N.P, Rausand, M and Osteras, T "An Introduction to Reliability Theory".
Product Reliability - Specification and Performance. England, United Kingdom: Springer
London Ltd. Pages: 55-88. 2010.

[171]. Soleimani, M, Pourgol Mohamad, M, Rostami, A and Ghanbari, A "Design for


Reliability of Complex System: Case Study of Horizontal Drilling Equipment with Limited
Failure Data". Journal of Quality and Reliability Engineering, 2014, Pages: 1-13. 2014.

[172]. Braband, J, Vom Hövel, R and Schäbe, H."Probability of Failure on Demand – The Why
and the How". SAFECOMP '09: Proceedings of the 28th International Conference on
Computer Safety, Reliability, and Security, Berlin, Heidelberg. Pages: 46-54. 16 September
2009. https://doi.org/10.1007/978-3-642-04468-7_5.

[173]. Riedel, G.J, Huesgen, T and Schmidt, R. "Reliability prediction sensitivity analysis -
How to perform reliability prediction time efficiently". 6th IET International Conference on

276
Power Electronics, Machines and Drives (PEMD 2012), Bristol, UK. Pages: 1-6. 27-29 March
2012. 10.1049/cp.2012.0326.

[174]. Bisong, E "Google Colaboratory". Building Machine Learning and Deep Learning
Models on Google Cloud Platform: A Comprehensive Guide for Beginners. Berkeley, CA:
Apress. Pages: 59-64. 2019.

[175]. Bellamy, R. K. E, Dey, K, Hind, M, Hoffman, S.C, Houde, S, Kannan, K, Lohia, P,


Martino, J, Mehta, S, Mojsilović, A, Nagar, S, K.N, Ramamurthy., Richards, J, Saha, D,
Sattigeri, P, Singh, M, Varshney, K.R and Zhang, Y "AI Fairness 360: An extensible toolkit
for detecting and mitigating algorithmic bias". IBM Journal of Research and Development, 63,
Pages: 1-15. 2019.

[176]. Géron, A 2019. Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow:
Concepts, tools, and techniques to build intelligent systems, Sebastopol, California, USA,
O'Reilly Media. Number of Pages: 718. 2019.

[177]. Ziawasch, A, Lukasz, G and Felix, N. "Data Profiling: A Tutorial". Proceedings of the
2017 ACM International Conference on Management of Data, Chicago, Illinois, USA. Pages:
1747-1751. 10 May 2017. 10.1145/3035918.3054772.

[178]. Alvaro, E.M "Matching Algorithms within a Duplicate Detection System". IEEE Data
Engineering Bulletin. Special Issue on Data Cleaning., 23, Pages: 14-20. 2000.

[179]. Jacquemont, M, Vuillaume, T, Benoit, A, Maurin, G, Lambert, P, Lamanna, G and Brill,


A. "GammaLearn: a Deep Learning framework for IACT data". 36th International Cosmic Ray
Conference, Madison, United States. Pages: 1-9. 17 January 2020. 10.22323/1.358.0705.

[180]. Smith, L.N. 2018. Part 1 -- learning rate, batch size, momentum, and weight decay. A
disciplined approach to neural network hyper-parameters [Online], arXiv:1803.09820.
Available: https://arxiv.org/abs/1803.09820. Pages: 1-21. [Accessed 01 March 2018].

[181]. Ghasemian, A, Hosseinmardi, H and Clauset, A "Evaluating Overfit and Underfit in


Models of Network Community Structure". IEEE Transactions on Knowledge and Data
Engineering, 32, Pages: 1722-1735. 2019.

[182]. Haider Khalaf, J and Rafiqul Zaman, K "Methods to Avoid Over-Fitting and Under-
Fitting in Supervised Machine Learning (Comparative Study)". Computer Science,
Communication & Instrumentation Devices. Singapore: Research Publishing Services,
Singapore. Pages: 163-172. 2014.

[183]. Barbosa, N.M and Monchu, C. "A Labeling Framework Addressing Bias and Ethics in
Machine Learning". Proceedings of the 2019 CHI Conference on Human Factors in Computing
Systems, Glasgow, Scotland Uk. Pages: 1-12. 4-9 May 2019. 10.1145/3290605.3300773.

[184]. Dwork, C, Hardt, M, Pitassi, T, Reingold, O and Zemel, R. "Fairness through


awareness". Proceedings of the 3rd innovations in theoretical computer science conference,

277
Cambridge, Massachusetts, USA. Pages: 214-226. 11 January 2012 2012,
https://doi.org/10.1145/2090236.2090255.

[185]. Google - Machine Learning Tensorflow. 2019. Machine Learning Glossary [Online].
Available:https://developers.google.com/machine-learning/glossary#automation_bias
[Accessed 20 June 2020].

[186]. Abadi, M, Agarwal, A, Barham, P, Brevdo, E, Chen, Z, Citro, C, Corrado, G.S, Davis,
A, Dean, J and Devin, M. 2016. Tensorflow: Large-scale machine learning on heterogeneous
distributed systems. Google Research Publication [Online]. Available:
https://research.google/pubs/pub45166/. Pages: 1-19. [Accessed 01 Jan 2020].

[187]. Kleinberg, J, Mullainathan, S and Raghavan, M. "Inherent trade-offs in the fair


determination of risk scores". 8th Innovations in Theoretical Computer Science Conference
(ITCS 2017), Berkeley, California, USA. Pages: 1- 23. 08 January 2017.

[188]. Narayanan, A. "Translation tutorial: 21 fairness definitions and their politics". ACM
Conference on Fairness, Accountability, and Transparency (ACM FAccT) New York, USA.
Pages: 1-7. 23-24 Feb 2018.

[189]. Feldman, M, Friedler, S.A, Moeller, J, Scheidegger, C and Venkatasubramanian, S.


"Certifying and removing disparate impact". Proceedings of the 21th ACM SIGKDD
international conference on knowledge discovery and data mining, Sydney, Australia. Pages:
259-268. 10-13 August 2015. https://doi.org/10.1145/2783258.2783311.

[190]. Calmon, F, Wei, D, Vinzamuri, B, Ramamurthy, K.N and Varshney, K.R." Optimized
pre-processing for discrimination prevention". Proceedings of the 31st International
Conference on Neural Information Processing Systems, Long Beach, CA, USA. Pages: 3995-
4004. 20 Dec 2017.

[191]. K, Emmanouil., Eleftherios, S.X, Symeon, P and Yiannis, K. "Adaptive Sensitive


Reweighting to Mitigate Bias in Fairness-aware Classification". Proceedings of the 2018
World Wide Web Conference, Lyon, France. Pages: 853–862. 20 April 2018 2018,
10.1145/3178876.3186133.

[192]. Friedler, S.A, Scheidegger, C and Venkatasubramanian, S. On the (im) possibility of


fairness. Computers and Society [Online], arXiv:1609.07236. Available:
https://arxiv.org/abs/1609.07236. Pages: 1-16. [Accessed 28 October 2018].

[193]. Zhang, B.H, Lemoine, B and Mitchell, M. "Mitigating unwanted biases with adversarial
learning". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New
Orleans LA USA Pages: 335-340. 24 Dec 2018. https://doi.org/10.1145/3278721.3278779.

[194]. Kamishima, T, Akaho, S, Asoh, H and Sakuma, J. "Fairness-aware classifier with prejudice remover regularizer". Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Würzburg, Germany. Pages: 35-50. 16-20 September 2012.

[195]. Kearns, M, Neel, S, Roth, A and Wu, Z.S. "An empirical study of rich subgroup fairness for machine learning". FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA. Pages: 100-109. 14 Jan 2019. https://doi.org/10.1145/3287560.3287592.

[196]. Morgan, L.E, Nelson, B.L, Titman, A.C and Worthington, D.J "Detecting bias due to
input modelling in computer simulation". European Journal of Operational Research, 279,
Pages: 869-881. 2019.

[197]. Menon, A.K and Williamson, R.C. "The cost of fairness in binary classification". ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), New York, NY, USA. Pages: 107-118. 23-24 Feb 2018.

[198]. Hardt, M, Price, E and Srebro, N. "Equality of opportunity in supervised learning". Advances in Neural Information Processing Systems (NIPS), Barcelona, Spain. Pages: 3315-3323. 20 December 2016.

[199]. Pleiss, G, Raghavan, M, Wu, F, Kleinberg, J and Weinberger, K.Q. "On fairness and
calibration". 31st Conference on Neural Information Processing Systems (NIPS 2017), Long
Beach, CA, USA. Pages: 5680-5689. 24 Jan 2018.

[200]. Bernard, A.T.F, Götz, A, Kerwath, S.E and Wilke, C.G "Observer bias and detection
probability in underwater visual census of fish assemblages measured with independent
double-observers". Journal of Experimental Marine Biology and Ecology, 443, Pages: 75-84.
2013.

[201]. Hajna, S, Dasgupta, K, Joseph, L and Ross, N.A "A call for caution and transparency in
the calculation of land use mix: Measurement bias in the estimation of associations between
land use mix and physical activity". Health & Place, 29, Pages: 79-83. 2014.

[202]. Perlich, C. 2009. Learning Curves in Machine Learning. IBM Research Report [Online], RC24756 (W0903-020). Available: https://dominoweb.draco.res.ibm.com/reports/rc24756.pdf [Accessed 23 Feb 2018].

[203]. Claesen, M and Moor, B.D. "Hyperparameter Search in Machine Learning". Metaheuristics International Conference (MIC 2015), Agadir, Morocco. Pages: 1-5. 07-10 June 2015.

[204]. Tan, M and Le, Q.V. "EfficientNet: Rethinking model scaling for convolutional neural networks". Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, USA. Pages: 6105-6114. 12 June 2019.

[205]. Al-Rfou, R, Zelle, D and Perozzi, B. "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels". The World Wide Web Conference (WWW 2019), San Francisco, CA, USA. Pages: 1-12. 13-17 May 2019. 10.1145/3308558.3313668.

[206]. Smith, S.L, Kindermans, P.J, Ying, C and Le, Q.V. "Don't decay the learning rate, increase the batch size". 6th International Conference on Learning Representations (ICLR 2018), Vancouver, BC, Canada. Pages: 1-11. 30 April - 3 May 2018.

[207]. Yu, M and Alfa, A.S "Algorithm for computing the queue length distribution at various time epochs in DMAP/G(1, a, b)/1/N queue with batch-size-dependent service time". European Journal of Operational Research, 244, Pages: 227-239. 2015.

[208]. Kaneko, H and Funatsu, K "Fast optimization of hyperparameters for support vector
regression models with highly predictive ability". Chemometrics and Intelligent Laboratory
Systems, 142, Pages: 64-69. 2015.

[209]. Srivastava, N, Hinton, G, Krizhevsky, A, Sutskever, I and Salakhutdinov, R "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". Journal of Machine Learning Research, 15, Pages: 1929-1958. 2014.

[210]. Hazan, E, Klivans, A and Yuan, Y. "Hyperparameter optimization: A spectral approach". 6th International Conference on Learning Representations, Vancouver, BC, Canada. Pages: 1-18. 30 April - 3 May 2018.

[211]. Li, L, Jamieson, K, Desalvo, G, Rostamizadeh, A and Talwalkar, A "Hyperband: A novel bandit-based approach to hyperparameter optimization". The Journal of Machine Learning Research, 18, Pages: 6765-6816. 2017.

[212]. Hutter, F, Hoos, H and Leyton-Brown, K. "An Efficient Approach for Assessing Hyperparameter Importance". Proceedings of the 31st International Conference on Machine Learning, Beijing, China. Pages: 754-762. 22-24 June 2014.

[213]. Rasley, J, He, Y, Yan, F, Ruwase, O and Fonseca, R. "HyperDrive: exploring hyperparameters with POP scheduling". Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference, Las Vegas, Nevada. Pages: 1-13. 01 December 2017. 10.1145/3135974.3135994.

[214]. Loshchilov, I and Hutter, F. "CMA-ES for Hyperparameter Optimization of Deep Neural Networks". International Conference on Learning Representations, Caribe Hilton, San Juan, Puerto Rico. Pages: 1-4. 2-4 May 2016.

[215]. Probst, P, Boulesteix, A.L and Bischl, B "Tunability: Importance of Hyperparameters of Machine Learning Algorithms". Journal of Machine Learning Research, 20, Pages: 1-32. 2019.

[216]. International Marine Contractors Association (IMCA). 2019. Guidelines for The Design
and Operation of Dynamically Positioned Vessels. Section 2: Design Guidance [Online],
IMCA M103. Available: https://www.imca-int.com/publications/57/guidelines-for-the-
design-and-operation-of-dynamically-positioned-vessels/.

[217]. Marine Technology Society (MTS). 2014. Technical and Operational Guidance - DP Operation Manual. Philosophy [Online], TECHOP_ODP_05_(O)_(DP OPERATIONS MANUAL)_Ver2-02201413. Available: https://dynamic-positioning.com/files_mailing/TECHOP%20DP%20OPERATIONS%20MANUAL.pdf [Accessed 06 June 2019].

[218]. Devlin, J, Chang, M.W, Lee, K and Toutanova, K. "BERT: Pre-training of deep bidirectional transformers for language understanding". Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, USA. Pages: 4171-4186. 02-07 June 2019. 10.18653/v1/N19-1423.

[219]. Alberti, C, Lee, K and Collins, M. 2019. A BERT Baseline for the Natural Questions. Computer Science [Online], Corpus ID: 59291988. Available: https://arxiv.org/pdf/1901.08634.pdf. Pages: 1-4. [Accessed 02 Jan 2020].

[220]. Hu, Z. 2019. Question Answering on SQuAD with BERT. Natural Language Processing (NLP) [Online], Corpus ID: 204792137. Available: http://web.stanford.edu/class/cs224n/reports/default/15792151.pdf. Pages: 1-9. [Accessed 20 September 2020].

[221]. Kwiatkowski, T, Palomaki, J, Redfield, O, Collins, M, Parikh, A, Alberti, C, Epstein, D, Polosukhin, I, Devlin, J and Lee, K "Natural questions: a benchmark for question answering research". Transactions of the Association for Computational Linguistics, 7, Pages: 453-466. 2019.

[222]. Harabagiu, S.M, Maiorano, S.J and Pasca, M.A "Open-domain textual question answering techniques". Natural Language Engineering, 9, Pages: 231-267. 2003.

[223]. Chowdhary, K.R "Natural Language Processing". Fundamentals of Artificial Intelligence. New Delhi, India: Springer India. Pages: 603-649. 2020.

[224]. Manning, C.D, Surdeanu, M, Bauer, J, Finkel, J, Bethard, S and McClosky, D. "The Stanford CoreNLP natural language processing toolkit". Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Baltimore, Maryland. Pages: 55-60. 04 June 2014. 10.3115/v1/P14-5010.

[225]. Collobert, R, Weston, J, Bottou, L, Karlen, M, Kavukcuoglu, K and Kuksa, P "Natural language processing (almost) from scratch". Journal of Machine Learning Research, 12, Pages: 2493-2537. 2011.

[226]. Vaswani, A, Shazeer, N, Parmar, N, Uszkoreit, J, Jones, L, Gomez, A, Kaiser, L and Polosukhin, I. "Attention Is All You Need". Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Pages: 6000-6010. 04-09 Dec 2017. 10.5555/3295222.3295349.

[227]. Hochreiter, S and Schmidhuber, J "Long Short-term Memory". Neural Computation, 9, Pages: 1735-1780. 1997.

[228]. Rajpurkar, P, Zhang, J, Lopyrev, K and Liang, P. "SQuAD: 100,000+ Questions for
Machine Comprehension of Text". Proceedings of the 2016 Conference on Empirical Methods
in Natural Language Processing, Austin, Texas. Pages: 2383-2392. 21 April 2019.

[229]. Rajpurkar, P, Jia, R and Liang, P. "Know What You Don't Know: Unanswerable Questions for SQuAD". Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia. Pages: 784-789. 01 July 2018. 10.18653/v1/P18-2124.

[230]. Mosbach, M, Andriushchenko, M and Klakow, D. 2020. On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines. Machine Learning [Online], arXiv:2006.04884. Available: https://arxiv.org/abs/2006.04884. Pages: 1-20. [Accessed 06 October 2020].

[231]. Google - Machine Learning. 2020. Tensorflow Resources [Online]. Available: https://www.tensorflow.org/resources/models-datasets [Accessed 20 June 2020].

[232]. Yang, Z, Qi, P, Zhang, S, Bengio, Y, Cohen, W, Salakhutdinov, R and Manning, C.D.
"HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering". Proceedings
of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels,
Belgium. Pages: 2369-2380. 31 Oct - 4 Nov 2018. 10.18653/v1/D18-1259.

[233]. The Institute of Electrical and Electronics Engineers (IEEE). 1991. IEEE Std 610. IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries [Online], 610.12. Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=159342.

[234]. Thuan, L.N "AI Deep Learning with Convolutional Neural Networks on Google Cloud
Platform". Journal of Strategic Innovation and Sustainability, 14, Pages: 101-111. 2019.

[235]. Arif, H, Hajjdiab, H, Harbi, F.A and Ghazal, M. "A Comparison between Google Cloud
Service and iCloud". IEEE 4th International Conference on Computer and Communication
Systems (ICCCS), Singapore, Singapore. Pages: 337-340. 23-25 February 2019.
10.1109/CCOMS.2019.8821744.
