TECHOP Software Testing
SOFTWARE TESTING
JANUARY 2021
DISCLAIMER
The information presented in this publication of the Dynamic Positioning Committee of the Marine Technology
Society (‘DP Committee’) is made available for general information purposes without charge. The DP Committee
does not warrant the accuracy, completeness, or usefulness of this information. Any reliance you place on this
publication is strictly at your own risk. We disclaim all liability and responsibility arising from any reliance placed
on this publication by you or anyone who may be informed of its contents.
CONTENTS
SECTION
1 INTRODUCTION
1.1 PREAMBLE
1.2 TECHOP NAMING CONVENTION
1.3 MTS DP GUIDANCE REVISION METHODOLOGY
2 SCOPE AND IMPACT OF THIS TECHOP
2.1 SCOPE
2.2 IMPACT ON PUBLISHED GUIDANCE
3 CASE FOR ACTION
3.1 DP INCIDENTS
3.2 TYPICAL SOFTWARE DEVELOPMENT AND REVISION CONTROL ISSUES
3.3 NEED FOR AN INTEGRATED APPROACH TO TESTING
3.4 ADVANTAGES OF SOFTWARE BASED CONTROL
4 CURRENT PRACTICE
4.1 LIMITATIONS OF FMEA
4.2 SOFTWARE TESTING
5 THE GAP BETWEEN PRACTICE AND EXPECTATIONS
5.1 EXPECTATIONS
5.2 THE GAP
5.3 CLOSING THE GAP
6 SOFTWARE TEST METHODS
6.1 ACKNOWLEDGEMENT
6.2 SOFTWARE TESTING BACKGROUND
7 REVIEW OF EXISTING SOFTWARE TEST METHODOLOGIES
7.1 DISCLAIMER
7.2 CURRENT PRACTICE
7.3 RULES AND GUIDELINES
7.4 HARDWARE-IN-THE-LOOP, SOFTWARE-IN-THE-LOOP & ENDURANCE TESTING
7.5 CLOSED LOOP VERIFICATION
7.6 INTEGRATION TESTING
7.7 NOTATIONS FOR SYSTEM VERIFICATION BY TESTING
8 SUGGESTED METHODOLOGY FOR IMPROVED SOFTWARE ASSURANCE
8.1 TESTING METHODOLOGY
8.2 TOOLS
8.3 ROLES WITHIN THE VERIFICATION EFFORT
8.4 VERIFICATION & VALIDATION STRATEGY
8.5 OPPORTUNITIES FOR IMPROVEMENT
9 FURTHER READING
10 MISC
FIGURES
Figure 3-1 From M 218 Station Keeping Incident Database for 2010 (Courtesy IMCA)
Figure 7-1 HIL Testing
Figure 7-2 Typical Software in the Loop
Figure 7-3 Closed Loop Verification
Figure 8-1 Software Life Cycle and Verification Standards
TABLES
Table 7-1 ESV Target Systems
Table 7-2 HIL Notations
Table 8-1 Form and Level of Independence According to IEEE 1012
1 INTRODUCTION
1.1 PREAMBLE
1.1.1 The guidance documents on DP (Design, Operations and People) were published by
the MTS DP Technical Committee in 2011, 2010 and 2012, respectively. Subsequent
engagement has occurred with:
• Classification Societies (DNV, ABS)
• United States Coast Guard (USCG)
• Marine Safety Forum (MSF)
• Oil Companies International Marine Forum (OCIMF)
1.1.2 Feedback has also been received through the comments section provided on the MTS DP
Technical Committee website.
1.1.3 It became apparent that a mechanism needed to be developed and implemented to
address the following in a pragmatic manner:
• Feedback provided by the various stakeholders.
• Additional information and guidance that the MTS DP Technical Committee wished
to provide, together with a means to facilitate revisions to the documents and
communication of those revisions to the various stakeholders.
1.1.4 The use of Technical and Operations Guidance Notes (TECHOP) was deemed to be a
suitable vehicle to address the above. These TECHOP Notes will be in the following
categories:
• General TECHOP (G)
• Design TECHOP (D)
• Operations TECHOP (O)
• People TECHOP (P)
1.2 TECHOP NAMING CONVENTION
1.2.1 The naming convention, TECHOP (CATEGORY (G / D / O / P) – Seq. No. – Rev.No. –
MonthYear) TITLE will be used to identify TECHOPs as shown in the examples below:
Examples:
• TECHOP (D-01 - Rev1 - Jan21) Addressing C³EI² to Eliminate Single Point Failures
• TECHOP (G-02 - Rev1 - Jan21) Power Plant Common Cause Failures
• TECHOP (O-01 - Rev1 - Jan21) DP Operations Manual
Note: Each Category will have its own sequential number series.
1.3 MTS DP GUIDANCE REVISION METHODOLOGY
1.3.1 TECHOPs as described above will be published as relevant and appropriate. These
TECHOPs will be written in a manner that allows them to be used as standalone
documents.
1.3.2 Subsequent revisions of the MTS Guidance documents will review the published
TECHOPs and incorporate them as appropriate.
1.3.3 Communications with stakeholders will be established as appropriate to ensure that they
are notified of intended revisions. Stakeholders will be provided with the opportunity to
participate in the review process and invited to be part of the review team as appropriate.
Figure 3-1 From M 218 Station Keeping Incident Database for 2010 – (Courtesy IMCA)
3.1.3 The largest number of incidents recorded was related to position reference issues. The
next most significant causes were electrical, computer and power. Human error,
environment, propulsion and procedures were the least significant causes. However, as
software forms a part of so many systems, it is likely to be a factor in categories other than
computer systems, and review of previous incident records by software professionals
suggests that the actual number of incidents in which software was a factor is closer to 19%.
3.1.4 This database does not record those software issues that affected the execution of the
industrial mission without affecting DP, but there is no reason to believe the situation is any
better in this category. Note: Although this TECHOP addresses DP critical equipment, MTS
guidance attempts to retain focus on the industrial mission as the reason for using DP as a
station keeping method.
3.2.6 In general, software control is vulnerable to changes performed on the vessel which do not
make it back to the Software Configuration Management (SCM) repository in a timely
fashion, leaving the code repository out of date.
4 CURRENT PRACTICE
4.1 LIMITATIONS OF FMEA
4.1.1 MTS TECHOP (D-02 - Rev1 - Jan21) FMEA TESTING provides general details on the
philosophy of testing DP systems by FMEA.
4.1.2 Software testing in the DP community is largely performed by equipment vendors on their
own equipment during the development and commissioning phases. A limited amount of
software functionality is tested at the DP FMEA proving trials. Testing of fully integrated
systems is limited to demonstrating functionality and failure response during DP FMEA
proving trials.
4.1.3 From the perspective of software, the DP FMEA will exercise certain discrete capabilities
of the software, hence validating only a small portion of the functionality. FMEA testing as it
has been developed in the DP community focuses on design flaws and construction
defects that could defeat the redundancy concept. Probability is still a factor in so far as
failure effects in redundant systems may not be entirely predictable, as hidden failures may
go undetected. Protective functions may fail to operate effectively on demand. In contrast,
a failure in software is inherently systematic, without a stochastic component. This is why
the FMEA should not be relied on to validate the software.
Note: A stochastic system is one whose state is non-deterministic (i.e. random) so that the
subsequent state of the system is determined probabilistically.
6.2.8 International standards for software development and verification have been produced;
classification societies such as ABS and DNV recognised the need to provide a process
for systematically addressing elements of software development and integration that are
required to ensure adequate traceability. These standards require the assignment of roles
and responsibilities for the overall integration process including the role of systems
integrator and independent verifier. On completion, a class certificate confirming the
successful conclusion of the integration process is issued for the vessel.
6.2.9 The DP system FMEA of the various software components, on which the overall vessel
FMEA relies, is usually undertaken by the control system vendors themselves without
third-party testing and verification. In the FMEA proving trials, which constitute an
important part of the sea trials for a new build, focus is placed on the hardware
components and partly on the Input/Output (I/O) layer of the computer systems. The software
functionality of the computer systems is only superficially tested, mainly due to a lack of
appropriate testing tools. The problem is further compounded for aging systems. In order
to properly support a testing effort over the vessel's life cycle, the software developers need
to keep the legacy tools and systems maintained so that software changes can be
validated.
7.4.5 HIL testing is accomplished by isolating the control system and its operator stations from
their surroundings and replacing all actual I/O with simulated I/O from a HIL simulator in real
time. The control system cannot sense any difference between the real world and the
virtual world in the HIL simulator. This facilitates systematic testing of control system
design philosophy, functionality, performance, and failure handling capability, in both
normal and off-design operating conditions. HIL testing only superficially tests hardware
and redundancy, since the main test target is software. The main common objective of
both FMEA and DP HIL testing is to verify the vessel's ability to maintain position after a
single failure. Another important objective of HIL testing is to verify that the control system
software functionality is as specified by the supplier and possibly the end user of the DP
system.
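The minimal Python sketch below illustrates the principle described in 7.4.5: the control logic under test sees only simulated I/O, and a single failure (a frozen gyro) is injected in the simulator to check the failure handling response. All class and signal names are illustrative assumptions, not any vendor's API; in real HIL testing the device under test is the actual control system hardware, and the simulated signals are delivered over its normal I/O channels in real time.

```python
# Minimal sketch of a HIL-style failure-injection test (illustrative only).


class SimulatedGyro:
    """Simulated heading sensor fed to the controller instead of real I/O."""

    def __init__(self):
        self.frozen = False          # fault mode: output stops updating
        self.last_reading = 90.0

    def inject_freeze_fault(self):
        self.frozen = True

    def read(self, true_heading_deg):
        if not self.frozen:
            self.last_reading = true_heading_deg
        return self.last_reading


class ControllerUnderTest:
    """Stand-in for the DP control logic being exercised."""

    def __init__(self, gyros):
        self.gyros = gyros
        self.alarms = []

    def step(self, true_heading_deg):
        readings = [g.read(true_heading_deg) for g in self.gyros]
        # crude deviation check: a sensor that stops tracking the others
        if max(readings) - min(readings) > 0.5:
            self.alarms.append("gyro deviation detected")


def test_frozen_gyro_is_detected():
    gyros = [SimulatedGyro() for _ in range(3)]
    controller = ControllerUnderTest(gyros)
    gyros[0].inject_freeze_fault()            # single failure, per the test aim
    for t in range(1000):                     # closed-loop run; vessel slowly yaws
        controller.step(true_heading_deg=90.0 + 0.01 * t)
    assert controller.alarms, "no alarm raised for the frozen gyro"


if __name__ == "__main__":
    test_frozen_gyro_is_detected()
    print("frozen-gyro scenario detected as expected")
```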
7.4.6 From an industrial mission point of view, the same tendency is driven by the fact that
control system safety and performance testing by hardware-in-the-loop (HIL) simulation
contributes to reduced time for tuning during commissioning and sea trials and, not least,
to reduced risk of incidents during operation caused by software bugs and erroneous control
system configurations.
7.4.7 Software-in-the-loop (SIL) testing does not verify the networked connections, input or output
hardware modules or the processor. The configuration consists of an exact copy of the
control system software running on a machine that closely emulates the hardware controller,
and another machine that simulates the plant. PCs are often used, providing
flexible and relatively inexpensive simulator environments compared to the system being
simulated. The SIL simulation can be achieved in various configurations using several
PCs. For example, the simulated plant and control model can be loaded on one PC,
which is connected to the other PCs and hosts the Man Machine Interface (MMI) program.
7.4.8 Configuring a software-in-the-loop simulator requires programming the controller model as
well as the plant model. Using a suitable simulation tool for this purpose saves considerable
time and modelling effort. A desirable combination of simulation tools
uses a continuous process simulation tool for the plant model and a computer aided
control system design (CACSD) tool for the controller model.
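As an illustration of the closed loop described in 7.4.7 and 7.4.8, the minimal single-process Python sketch below couples a crude one-dimensional plant model to a simple PI controller. The models, gains and time step are invented for the example; in a real SIL configuration the actual control system software runs on an emulated controller and the plant model runs on a separate simulation PC, exchanging signals over a network.

```python
# Minimal single-process sketch of a software-in-the-loop closed loop
# (illustrative plant and controller, not a vendor's implementation).


def plant_step(position, velocity, thrust, dt=0.1, mass=1.0e6, damping=2.0e5):
    """One integration step of a crude 1-D surge model of the vessel."""
    accel = (thrust - damping * velocity) / mass
    velocity += accel * dt
    position += velocity * dt
    return position, velocity


class PiController:
    """Very small PI position controller standing in for the DP software."""

    def __init__(self, kp=5.0e4, ki=1.0e3, dt=0.1):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral   # commanded thrust


if __name__ == "__main__":
    position, velocity = 0.0, 0.0
    controller = PiController()
    for step in range(6000):                # roughly 10 minutes of simulated time
        thrust = controller.update(setpoint=10.0, measurement=position)
        position, velocity = plant_step(position, velocity, thrust)
    print(f"final position: {position:.2f} m (setpoint 10.0 m)")
```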
7.4.9 The advantages of a SIL system include low equipment cost, flexibility, and portability due
to the use of readily available hardware and software, without the need for a high-cost
replica of the MMI and controllers such as a distributed control system (DCS).
7.4.10 This TECHOP does not address SIL beyond this reference.
7.4.11 Endurance testing: The term ‘endurance testing’ or ‘soak testing’ is commonly associated
with hardware testing but is also performed on software.
7.4.12 Endurance testing may be used to provide an estimate of how long software can operate
without crashing due to memory leaks, resource capacity issues, bandwidth limits etc.
During such tests, memory utilization is monitored to detect potential memory leaks, and
performance degradation is monitored to ensure that response times and throughput
do not vary significantly with time. The goal is to discover how the system behaves
under sustained use.
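A minimal Python sketch of such an endurance (soak) test monitor is shown below. The exercise_system() call, the sampling interval and the use of the third-party psutil package to read the test process's own memory footprint are assumptions made for illustration; a real endurance test would exercise and monitor the target control system itself over a much longer period.

```python
# Minimal sketch of an endurance (soak) test monitor: exercise the system for
# a sustained period while trending memory use and response time.

import time

import psutil


def exercise_system():
    """Placeholder for one representative transaction against the system under test."""
    time.sleep(0.01)


def soak_test(duration_s=3600, sample_every=100):
    process = psutil.Process()
    baseline_rss = process.memory_info().rss
    samples = []
    start = time.monotonic()
    iteration = 0
    while time.monotonic() - start < duration_s:
        t0 = time.monotonic()
        exercise_system()
        response_time = time.monotonic() - t0
        iteration += 1
        if iteration % sample_every == 0:
            rss = process.memory_info().rss
            samples.append((time.monotonic() - start, rss, response_time))
    # crude trend checks: memory growth and response-time drift over the run
    growth = samples[-1][1] - baseline_rss
    print(f"memory growth over run: {growth / 1024:.1f} KiB")
    print(f"first/last sampled response time: "
          f"{samples[0][2] * 1000:.2f} ms / {samples[-1][2] * 1000:.2f} ms")


if __name__ == "__main__":
    soak_test(duration_s=10)   # short demonstration run
```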
7.4.13 Although software faults are normally associated with design issues, and software itself
does not deteriorate with the passing of time, it is possible to collect reliability
metrics similar to those used in hardware testing.
7.4.14 In certain DP applications the controllers running the DP software are expected to perform
reliably for many months and therefore knowledge of long term performance is essential.
7.7.2 Hardware-in-the-loop testing is one means of enhanced system verification. There are
now two different HIL notations (see Table 7-2): one for independent system verification,
where the entire test package is provided by an independent HIL supplier, and a second
for dependent HIL, where the test simulator is provided by the organisation that produces
the control system being tested. There are advantages and disadvantages to both
approaches, and some of the arguments made by their proponents are given below.
7.7.3 Dependent HIL may offer cost savings because there is no requirement to develop a
separate in-house simulator. For those vessel owners that are significantly cost constrained, this
could make the difference between carrying out some form of testing or none at all. It
seems likely that there would always be some benefit from software testing, particularly in
the case of those systems with a strong vessel specific element and safety critical nature.
Note: There may be drivers other than cost savings that dictate the choice of dependent
HIL. The benefits of HIL, regardless of whether it is dependent or independent, are being
recognised and realised by industry.
7.7.4 For independent HIL, there is a cost associated with creating a new simulator, but this can
be offset when the simulator can be reused for subsequent projects with minor adjustments or
configuration changes. For maritime systems like DP and PMS, the work associated with
configuring the simulator is not excessive, since it overlaps with the system analysis that
has to be performed in order to produce an effective test program.
7.7.5 When considering the degree of independence required, IEEE 1012 Annex C states that
rigorous technical independence, which includes independent test tools, should be applied
for systems requiring a high integrity level. Therefore, the independence of the simulator
developer and configurator should be driven by the criticality of the system.
7.7.6 The rigour required in the simulator should be driven by the criticality of the system, rather
than the independence and knowledge of the simulator verifier or the independence of the
simulator owner. The independence of the simulator owner is less important than the
independence of the developer and configurator of the simulator.
7.7.7 The quality of any simulator is of course dependent on the knowledge of the developers,
whether they are associated with the simulator owner or an independent organisation. The
same argument holds for a simulator verifier: if the verifier is not competent, the rigour of
the simulator may not be satisfactory.
7.7.8 When a dependent simulator is used, the independent verifier must verify two black boxes:
i.e. the control system and the simulator. Without detailed knowledge of the design,
implementation, and configuration of the simulator, the independent verifier may have an
additional burden related to identifying configuration errors or design weaknesses common
to the control system and the simulator.
7.7.9 HIL simulators are to a large degree constantly validated by the control systems they test
and vice versa. This is comparable to the relationship between production code and test
code (unit tests) and double-entry book-keeping in a financial accounting system.
Independent HIL simulators are further validated since they are tested against control
systems from different vendors. If the control system and simulator are developed and
configured by the same vendor, the opportunity for such validation may be lost. A vendor
can save time by automatically configuring the simulator based on the control system
configuration, but this may cause masking of erroneous configurations. The error might not
be discovered since it is the same in both the controller and simulator.
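The short hypothetical Python sketch below makes the masking mechanism concrete: a configuration error copied from the controller into an auto-derived simulator configuration survives the comparison, while an independently sourced simulator configuration exposes it. The parameter name and values are invented for the example.

```python
# Hypothetical illustration of configuration masking. The correct full-scale
# pitch is assumed to be 100 %; the controller configuration contains an
# entry error (10 instead of 100).

CONTROL_SYSTEM_CONFIG = {"thruster_1_full_pitch_percent": 10}   # entry error

# Dependent set-up: simulator config auto-derived from the controller config,
# so the same wrong value appears on both sides.
derived_simulator_config = dict(CONTROL_SYSTEM_CONFIG)

# Independent set-up: simulator configured from the vessel documentation.
independent_simulator_config = {"thruster_1_full_pitch_percent": 100}

key = "thruster_1_full_pitch_percent"
print("dependent check passes:  ",
      CONTROL_SYSTEM_CONFIG[key] == derived_simulator_config[key])        # True: error masked
print("independent check passes:",
      CONTROL_SYSTEM_CONFIG[key] == independent_simulator_config[key])    # False: error exposed
```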
7.7.10 In general terms, the choice of whether a system will be tested to HIL-IS or HIL-DS
notation should be objective and based on sound technical reasons. For example, the
organization that holds the most relevant mathematical model (and the HIL test harness to
activate it) could form the basis for choosing the organization that is to provide the HIL
service. If an independent HIL supplier has a suitable mathematical model, then they
would be the best technical solution. If a drilling system supplier has a well-developed
internal 3D environment with collision detection to test their software, then they
should supply the HIL service in that case.
Table 7-2 HIL Notations
Qualifier   Description
HIL-IS      HIL test package provided by independent HIL supplier.
HIL-DS      HIL test program package and HIL test package report provided by independent HIL
            supplier. HIL test simulator package provided by the organization delivering the HIL
            target system.
7.7.11 Reference should be made to IEEE 1012 Annex C for further guidance on the degree of
independence but the following may be useful when reaching a conclusion.
• Carefully consider the guidance in industry standards on software testing particularly
in relation to safety critical activities.
• Carefully evaluate the value of any verification process not just the cost.
• Consider competence as well as independence.
• Given the safety critical nature of some industrial missions, some level of software
verification is essential. The inability to reach alignment on dependence or
independence should not preclude such verification.
8.2 TOOLS
8.2.1 Testing different types of software and test environment
8.2.1.1 The difference between control system software and a desktop application, such as Microsoft
Word®, is that while the only interaction MS Word has with its surroundings is with the user of
the program, a control system also interacts with its associated plant.
8.2.1.2 A realistic user / test environment for MS Word is a PC with MS Word installed. In
addition, you would most probably connect a printer to test that a document can be
printed on paper.
Note: 8.2.1.1 and 8.2.1.2 describe an example.
8.2.1.3 A realistic test environment for a control system will not only be a computer with the control
system software application installed; it must also contain a representation of the plant
which is controlled by the control system. This type of software testing is called simulator
based testing. In the marine and offshore sector this kind of testing is known as Hardware-
in-the-loop (HIL) testing.
8.2.2 Test Location
8.2.2.1 The real plant often imposes limitations on the test scope, since testing with the
plant in the loop may be impractical. Testing beyond the plant's limitations may also lead to sub-
optimal test conditions, both for the plant itself and for the people performing and witnessing
the testing.
8.2.2.2 In order to overcome limitations of using a real plant, most of the software functionality and
failure handling capabilities can be tested against a plant simulator. This allows for
enhancing the quality of testing by increasing the test scope. However, some parts of the
test scope must be covered by on-board testing.
8.2.2.3 A plant simulator also enables testing without access to the vessel, which usually reduces
the cost of such testing.
8.2.3 Software life cycle considerations
8.2.3.1 As indicated in Figure 8-1, the other main category of software testing is static testing.
Typically, this includes reviewing requirements, design specifications and code. This is also
an important test method as it may reveal different types of defects compared to dynamic
testing.
8.2.3.2 Good quality software requires a systematic and mature development process. To be able
to maintain the quality during the life cycle of the software (and the vessel), good
maintenance processes are also a necessity. Within all life cycle models, verification is
essential and should be applied during the entire software life cycle.
8.2.3.3 Verification will be performed by stakeholders with different viewpoints, usually influenced
by their roles. Some verification methods are better suited to specific life cycle phases than
others, revealing different kinds of defects.
8.2.4 Standards for software development and verification of integrated systems
8.2.4.1 Software quality depends on good and well-defined development processes. Within the
maritime industry there have emerged standards and guidelines that place requirements
on the software development and verification processes.
8.2.4.2 These new process oriented standards and guidelines by DNV and ABS are
complementary to the equipment specific class rules. The standards may be divided into
two main groups - those focusing on the whole software life cycle, and those focusing on
the verification of the software.
• SW verification standards: DNV-ESV/SfC, and ABS-SV.
• SW life cycle standards: DNV-ISDS, and ABS-ISQM.
8.2.4.3 The software verification standards, DNV-ESV/SfC and ABS-SV, describe HIL testing
(simulator based dynamic testing - black box testing) of the control system software.
8.2.4.4 The other two software standards issued by DNV and ABS, DNV-ISDS and ABS-ISQM,
may be categorized as software life cycle standards, see Figure 8-1. They have a
broader focus than the previous two, as they place requirements on all software
development activities in all phases of the software development process, including the
verification process.
8.2.4.5 Both DNV-ESV/SfC and ABS-SV may be used within DNV-ISDS and ABS-ISQM as a
framework for HIL testing.
8.2.4.6 DNV-ESV/SfC and ABS-SV focus on safety. DNV-ISDS and ABS-ISQM focus on other
software quality aspects.
8.2.4.7 The system owners, as end users, are an integral part of the System Development Life Cycle
(SDLC) and Verification & Validation (V&V) portions of software engineering. The
system owner must have a rigorous and documented software management of change
(MOC) process in place.
Figure 8-1 Software Life Cycle and Verification Standards
(The figure spans the life cycle phases concept, design, construction, acceptance and ship in
operation (maintenance). DNV-ISDS/ABS-ISQM cover the whole software life cycle, while
DNV-ESV&SfC/ABS-SV cover software verification. Static software testing (reviews and
inspections) typically detects system design errors, system specification errors and
code-standard errors; its artifacts are the Software Requirements Specification (SRS), the
Software Design Specification (SDS) and the code. Dynamic software testing, e.g. HIL testing,
typically detects implementation/configuration errors ("SW bugs") and integration errors; its
artifact is the running code.)
8.3.3 This can be interpreted to mean that technical independence requires independent test
tools. This requirement should be discussed with all stakeholders and agreement reached,
as stipulating independent test tools has a potential cost impact.
8.3.4 The verification organization is responsible for running the independent test process (a
sketch illustrating steps 4 to 6 follows the list):
1. Planning.
2. System analysis.
3. Test design.
4. Test case and test procedure implementation.
5. Test execution.
6. Test result analysis and follow up.
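A minimal, hypothetical Python sketch of how steps 4 to 6 might be recorded is given below. The field names, the example test case and the pass/fail logic are illustrative assumptions, not a prescribed format.

```python
# Hypothetical structure for test case implementation, execution and result
# follow-up (steps 4 to 6 of the independent test process).

from dataclasses import dataclass, field


@dataclass
class TestCase:
    case_id: str
    objective: str
    steps: list            # ordered operator / simulator actions
    expected_result: str
    actual_result: str = ""
    verdict: str = "not run"
    findings: list = field(default_factory=list)


def run_test_case(case: TestCase, observed_result: str) -> TestCase:
    """Record the observed result and derive a verdict for follow-up."""
    case.actual_result = observed_result
    case.verdict = "pass" if observed_result == case.expected_result else "fail"
    if case.verdict == "fail":
        case.findings.append(f"{case.case_id}: expected '{case.expected_result}', "
                             f"got '{observed_result}'")
    return case


if __name__ == "__main__":
    tc = TestCase(
        case_id="DP-GYRO-01",
        objective="Verify alarm and rejection of a frozen gyro input",
        steps=["Freeze gyro 1 output in the simulator", "Observe DP operator station"],
        expected_result="Gyro 1 rejected and alarm raised",
    )
    tc = run_test_case(tc, observed_result="Gyro 1 rejected and alarm raised")
    print(tc.case_id, tc.verdict)
```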
8.3.5 Extensive knowledge of software test techniques and system analysis, in addition to
general system knowledge, is required for this role. It is the level of independence of the
validation team that determines the independence of the verification effort, not the
independence of the test witness or the system integrator.
8.3.6 The Test Witness will monitor the fidelity of test execution to the specified test procedures
and witness the recording of test results. It is important to note that the performance of the
Test Witness task largely relies on the transparency of the test processes set in place by
the Verification Organization. The transparency of the vendor's verification process may
become degraded due to a conflict of interest, while the independent Verification
Organization will not have such conflicts of interest.
8.3.7 It is also important to note that witnessing a test execution will not qualify the witness to
take the role of the validation team. This role requires more hands-on effort in the testing
process, including actual usage of the test tools such as the HIL simulator.
8.3.8 The system integrator will be responsible for the integration of the total system and
may therefore be regarded as part of the development process. This role is intended to be
taken by the shipyard for new builds and by the owner for vessels in operation.
9 FURTHER READING
1. ‘Control System Maker Involvement in HIL Testing’, Oyvind Smogeli, Marine
Cybernetics, 2011-10-31.
2. ‘DP System HIL vs. DP System FMEA’, Odd Ivar Haugen, Marine Cybernetics,
2012-02-06.
3. ‘HIL Test Interfacing’, Oyvind Smogeli, Marine Cybernetics, 2011-10-31.
4. ‘HIL Test Process Class Rules’, Odd Ivar Haugen, Marine Cybernetics, 2012-12-12.
5. ‘Introduction to Third Party HIL Testing’, Oyvind Smogeli, Marine Cybernetics, 2010-
10-26.
6. ‘Why Choose an Independent Verification Organisation’, Odd Ivar Haugen, Marine
Cybernetics, 2012-12-13.
7. ‘Control Systems Acceptance: Why Prevention is Cheaper than Test’, Athens Group.
8. ‘Cross Checking the Checklist’, Bill O'Grady, Athens Group, September 2011.
9. ‘How to Build a Stronger Contract to Improve Vendor Accountability and Reduce
Control Systems Software Costs and Risks’, Athens Group, 2010.
10. ‘Fit for Purpose Control Systems Emulation - The Case for State Based Emulation
and a Lifecycle Approach to Reducing Risks and Costs on Offshore Assets’, Athens
Group, 2011.
11. ‘Getting Software Risk Mitigation Right from the Start’, Nestor Fesas and Don
Shafer, SPE 143797, 2011.
10 MISC